I thought I’d attempt another go at explaining Bayes’ theorem and falsifiability.

In a previous post, I went over a hypothetical scenario where there are only two possible ways of getting a headache: One was by brain tumors and the other was by head colds. In this hypothetical scenario, the number of people in the world with brain tumors was equal to the number of people in the world with head colds; head colds are responsible for headaches in 50 out of 100 people and brain tumors are responsible for headaches in 100 out of 100 people.

Given all of that information, if you wake up with a headache, what is the probability that you have a brain tumor, and what is the probability that you have a head cold?

Let’s assume that the prior probability for both (H1 and H2) is 10%. Our Bayes’ theorem would be:

P(H1 | E) = P(E | H1) * P(H1) / ([P(E | H1) * P(H1)] + [P(E | H2) * P(H2)] + [P(E | ~H) * P(~H)])

= 1.00 * 0.1 / ([1.00 * 0.1] + [0.5 * 0.1] + [0 * 0.8])

= 0.1 / ([0.1] + [0.05] + [0])

= 0.1 / 0.15

= .6667

So the probability of having a brain tumor, upon getting a headache, went up from 0.1 to 0.6667. Concluding instead that, since 100 out of 100 people with brain tumors have headaches, your headache means a 100% chance of having a brain tumor is the Prosecutor’s Fallacy: it confuses P(E | H) with P(H | E). Likewise, the probability of having a head cold went up from 0.1 to 0.3333.

The thing is, in this scenario you could *not* have a headache and still have a head cold. Since only 50 out of 100 people with head colds get headaches, it could go either way. Both a headache and a non-headache could be evidence of a head cold; that is the essence of being unfalsifiable: neither observation is any more or less probable than the other, mutually exclusive, observation. On the other hand, *not* having a headache is pretty strong evidence that you don’t have a brain tumor. Not having a headache *falsifies* the brain tumor hypothesis; absence of evidence is evidence of absence. But, again, you could have a headache and *not* have a brain tumor even though brain tumors cause headaches 100% of the time; there’s still a 33.33% chance that you have a head cold. So one can see the dangers of confirmation bias. Falsifying, disconfirming evidence is a lot stronger than confirming evidence.
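The arithmetic above can be checked with a short Python sketch. The `posterior` helper is just Bayes’ theorem over a list of mutually exclusive hypotheses; the priors and likelihoods are the ones from the headache scenario:

```python
def posterior(priors, likelihoods):
    """P(H_i | E) = P(E | H_i) * P(H_i) / sum_j P(E | H_j) * P(H_j),
    for mutually exclusive, exhaustive hypotheses."""
    joint = [l * p for l, p in zip(likelihoods, priors)]
    total = sum(joint)
    return [j / total for j in joint]

# Hypotheses: brain tumor, head cold, neither (priors 0.1, 0.1, 0.8).
# Likelihood of a headache under each: 1.0, 0.5, 0.0.
tumor, cold, neither = posterior([0.1, 0.1, 0.8], [1.0, 0.5, 0.0])
print(round(tumor, 4), round(cold, 4))  # 0.6667 0.3333

# No headache flips the likelihoods to 0.0, 0.5, 1.0.
tumor2, cold2, neither2 = posterior([0.1, 0.1, 0.8], [0.0, 0.5, 1.0])
print(tumor2)  # 0.0 -- no headache falsifies the brain tumor hypothesis
```

Note how the tumor posterior drops all the way to zero on a non-headache, while the head cold hypothesis survives both observations.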

Let’s up the ante.

Say your friend has two dice. One is a normal six-sided die numbered 1 through 6, and the other is a trick die with a 1 on every face. She rolls one of the dice at random and it comes up 1. What is the probability that she rolled the normal six-sided die, and what is the probability that she rolled the trick die?

For the normal six-sided die, our probability distribution is P(One | Normal) + P(Two | Normal) + P(Three | Normal) + P(Four | Normal) + P(Five | Normal) + P(Six | Normal) = 1.00. If it is a fair die, then P(One | Normal) = 1/6, or .1667.

For the trick die, our probability distribution is P(One | Trick) = 1.00.

We can then go through Bayes’ to see what the probability is for her rolling each:

P(Normal | One) = P(One | Normal) * P(Normal) / ([P(One | Normal) * P(Normal)] + [P(One | Trick) * P(Trick)])

= .1667 * .5 / ([.1667 * .5] + [1.00 * .5])

= .0833 / ([.0833] + [.5])

= .0833 / .5833

= .1429

P(Trick | One) = P(One | Trick) * P(Trick) / ([P(One | Trick) * P(Trick)] + [P(One | Normal) * P(Normal)])

= 1.00 * .5 / ([1.00 * .5] + [.1667 * .5])

= .5 / ([.5] + [.0833])

= .5 / .5833

= .8571

So upon rolling a 1, the probability that she rolled the normal die is .1429 and the probability that she rolled the trick die is .8571. There is still some ambiguity here, but if you were a betting person you should bet on her having rolled the trick die. But thanks to falsifiability, if she had rolled any other number then we would have 100% confidence that she rolled the normal die. Again, disconfirmation is stronger than confirmation.
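The two-dice computation can be sketched exactly with Python’s `fractions` module, which avoids the rounding in the decimal derivations above:

```python
from fractions import Fraction

def posterior(priors, likelihoods):
    """Bayes' theorem over mutually exclusive hypotheses."""
    joint = [l * p for l, p in zip(likelihoods, priors)]
    total = sum(joint)
    return [j / total for j in joint]

# Equal priors for the two dice; likelihood of rolling a 1 on each:
# 1/6 for the normal die, 1 (certain) for the trick die.
normal, trick = posterior([Fraction(1, 2)] * 2, [Fraction(1, 6), Fraction(1, 1)])
print(normal, trick)  # 1/7 6/7
```

The exact answers are 1/7 ≈ .1429 and 6/7 ≈ .8571, matching the derivations above.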

Let’s try another example, this time approximating people’s confidence in their unfalsifiable hypotheses by increasing the prior probability in favor of the unfalsifiable hypothesis. Let’s introduce a 50 sided die and a prior of 90% in favor of picking it, with the six sided die and the trick die each getting a 5% prior. Given those three dice to choose from, the probability that she rolled the 50 sided die, given that the roll came up 1, is:

P(Fifty | One) = P(One | Fifty) * P(Fifty) / ([P(One | Fifty) * P(Fifty)] + [P(One | Trick) * P(Trick)] + [P(One | Six) * P(Six)])

= .02 * .9 / ([.02 * .9] + [1.00 * .05] + [.1667 * .05])

= .018 / ([.018] + [.05] + [.0083])

= .018 / .0763

P(Fifty | One) = .2358

P(Six | One) = P(One | Six) * P(Six) / ([P(One | Six) * P(Six)] + [P(One | Trick) * P(Trick)] + [P(One | Fifty) * P(Fifty)])

= .1667 * .05 / ([.1667 * .05] + [1.00 * .05] + [.02 * .9])

= .0083 / ([.0083] + [.05] + [.018])

= .0083 / .0763

P(Six | One) = .1092

P(Trick | One) = P(One | Trick) * P(Trick) / ([P(One | Trick) * P(Trick)] + [P(One | Six) * P(Six)] + [P(One | Fifty) * P(Fifty)])

= 1.00 * .05 / ([1.00 * .05] + [.1667 * .05] + [.02 * .9])

= .05 / ([.05] + [.0083] + [.018])

= .05 / .0763

P(Trick | One) = .6550

Upon rolling a 1, the 50 sided die has a .2358 probability of having been rolled, the 6 sided die has a .1092 probability, and the trick die has a .6550 probability, even given a prior probability of 90% that your friend would pick the 50 sided die. This is the problem with positing hypotheses that can equally explain multiple exclusive outcomes, even when the initial probability of the hypothesis being true is high. If we had a 100 sided die, and a 90% chance of picking that die, then upon rolling a 1 there would only be a .1337 probability that the 100 sided die was picked, in contrast to a .7426 probability that the trick die was picked. A 200 sided die would do worse; 300, even worse; and so on.
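To see the trend for 50, 100, 200, and 300 sides in one place, here is a sketch that sweeps the number of sides while keeping the 90% / 5% / 5% priors from the example (exact fractions, printed to four decimal places):

```python
from fractions import Fraction

def posterior(priors, likelihoods):
    """Bayes' theorem over mutually exclusive hypotheses."""
    joint = [l * p for l, p in zip(likelihoods, priors)]
    total = sum(joint)
    return [j / total for j in joint]

# Hypotheses: N-sided die (prior .9), trick die (.05), six-sided die (.05).
results = {}
for n in (50, 100, 200, 300):
    many, trick, six = posterior(
        [Fraction(9, 10), Fraction(1, 20), Fraction(1, 20)],
        [Fraction(1, n), Fraction(1, 1), Fraction(1, 6)],
    )
    results[n] = float(many)
    print(n, round(float(many), 4), round(float(trick), 4))
```

The 50- and 100-sided rows reproduce the .2358 and .1337 figures above, and each doubling of the sides pushes the many-sided die’s posterior lower still, despite its 90% prior.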

I should emphasize that this argument only applies when the possible observations are mutually exclusive.

How much mutually exclusive data can an all powerful god, philosophical zombies, solipsism, being a brain in a vat, the world being created last Thursday, etc. explain? How many sides would God Dice have? In an effort to prevent their god from being proven wrong, believers have given their god dice every side imaginable.

Bayesian Judo (falsifiability) will always win over goalpost moving (unfalsifiability). A god that can be proven wrong is more probable than a god that *can’t* be proven wrong.