
## Unfalsifiable = Statistical Independence?

Statistical independence is what happens when the probability that a hypothesis is true is equal to the probability that the hypothesis is true given some evidence. So for example, the probability that Mars is the fourth planet from the sun is equal to the probability that Mars is the fourth planet from the sun given that I’m a male. My being a male has no bearing on whether Mars is the fourth planet from the sun or not.

That is, P(Mars is 4th) = P(Mars is 4th | I’m Male).
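The equality above can be checked numerically from a joint distribution. This is a minimal sketch with illustrative numbers (the priors here are assumptions for the demo, not real data): under independence, every joint cell is the product of its marginals, so conditioning on sex leaves the probability about Mars unchanged.

```python
# Illustrative priors (assumptions, not real astronomical/demographic data).
p_mars = 0.999   # assumed prior that Mars is the 4th planet
p_male = 0.5     # assumed prior that I'm male

# Under independence, each joint probability is the product of the marginals.
joint = {
    ("4th", "male"): p_mars * p_male,
    ("4th", "female"): p_mars * (1 - p_male),
    ("not4th", "male"): (1 - p_mars) * p_male,
    ("not4th", "female"): (1 - p_mars) * (1 - p_male),
}

# P(Mars is 4th | I'm male) = P(4th and male) / P(male)
p_male_total = joint[("4th", "male")] + joint[("not4th", "male")]
p_mars_given_male = joint[("4th", "male")] / p_male_total

print(p_mars_given_male)  # matches p_mars: conditioning on sex changed nothing
```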

Statistical independence is basically what is happening in the Monty Hall problem. Simply put, the probability that you picked the correct door is equal to the probability that you picked the correct door given that the announcer has opened one wrong door; the announcer is going to open one wrong door whether you picked the winning door or not. It just so happens that the unpicked door is not independent of the announcer’s choice, so that is the probability that changes (simulating the Monty Hall problem shows that switching converges to being right 2/3 of the time).
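That 2/3 figure is easy to confirm with a quick simulation. This is a sketch, not a rigorous proof: the function name and setup are mine, and which wrong door the host opens doesn't affect the win rates, so the host's tie-break is left deterministic.

```python
import random

def monty_hall(trials: int, switch: bool, seed: int = 0) -> float:
    """Simulate the Monty Hall game and return the observed win frequency."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)   # door hiding the car
        pick = rng.randrange(3)  # contestant's initial pick
        # Host opens a wrong, unpicked door (the choice of which one
        # doesn't change the win probabilities).
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

stay = monty_hall(100_000, switch=False)
swap = monty_hall(100_000, switch=True)
print(stay, swap)  # staying hovers near 1/3, switching near 2/3
```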

On the other hand, when a hypothesis is unfalsifiable, there is no observation or evidence that can decrease its probability. And if no observation or evidence can decrease its probability, then no observation or evidence can increase its probability either (this is why absence of evidence is evidence of absence). This, in and of itself, means that it is a low-probability hypothesis.

But this got me thinking… the two sound a lot alike. It seems to me that independence is just a single instance, as it were, of unfalsifiability. So, Mars being the fourth planet from the sun isn’t unfalsifiable in and of itself, but my being a male in relation to Mars’ position from the sun behaves in the same way that an unfalsifiable hypothesis does.

What makes a hypothesis unfalsifiable is that the conditional probability of the evidence given the hypothesis is equal to the conditional probability of not having the evidence given the hypothesis (or the conditional probability of some other evidence [of the same reference class] given the hypothesis). So the conditional probability of my being a male given that Mars is the fourth planet from the sun is equal to the conditional probability of my being a female given that Mars is the fourth planet from the sun. That is, P(I’m Male | Mars is 4th) = P(I’m Female | Mars is 4th). This is exactly what happens with an unfalsifiable hypothesis.

The example of unfalsifiability that I give is the probability that god created humans given evolution (basically theistic evolution). This probability is equal to the probability that god created humans given some other means of creation, since there’s no restriction on how an all-powerful god would create us; that’s what makes it unfalsifiable (a hypothesis that asserts that humans could only have come about by evolution and no other means, like naturalism [or a non-all-powerful god], is a falsifiable hypothesis).

This, then, means that the conditional probability of us coming about by evolution given that god created humans is equal to the conditional probability of us coming about by some other means (e.g. Creationism) given that god created humans.
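Putting numbers on this makes the asymmetry visible: the unfalsifiable hypothesis predicts evolution and non-evolution equally well, while the falsifiable alternative (naturalism) stakes everything on evolution. The priors and likelihoods below are illustrative assumptions, not an argument about the actual values.

```python
# Sketch of the unfalsifiable "god created humans" hypothesis.
# All numbers are illustrative assumptions.
p_god = 0.5                # assumed prior for "god created humans"
p_evo_given_god = 0.5      # an all-powerful god predicts either means equally
p_evo_given_not_god = 1.0  # naturalism permits ONLY evolution (falsifiable)

# Total probability of the evidence (that we evolved):
p_evo = p_evo_given_god * p_god + p_evo_given_not_god * (1 - p_god)  # 0.75

# Posterior for the unfalsifiable hypothesis after seeing evolution:
p_god_given_evo = p_evo_given_god * p_god / p_evo
print(p_god_given_evo)  # 1/3: probability flowed to the falsifiable alternative
```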

So are they the same? Thinking more about it, it doesn’t seem so. They are related, definitely, but not the same. What actually drives independence is having the Bayes factor, or likelihood ratio, equal to 1. That is, the conditional probability of the evidence given the hypothesis is equal to the conditional probability of the evidence given some alternative hypothesis. In unfalsifiability, the driving force is the equivalence between the conditional probability of the evidence given the hypothesis and the conditional probability of some other evidence (of the same reference class) given the same hypothesis.

In unfalsifiability, the alternative hypothesis (the hypothesis that is falsifiable) will always gain probability because it is falsifiable. In independence, the alternative hypothesis doesn’t gain or lose probability because the original hypothesis doesn’t gain or lose probability; e.g. the probability that Mars is not the fourth planet from the sun given that I’m male also doesn’t change. Though there can definitely be overlap; if there is some unfalsifiable hypothesis with only two theoretical possibilities (like evolution vs. creationism) and the evidence (the total probability) was 50%, this would function just like independence, since in this case P(E | H) = P(E). That would make the Bayes factor equal to 1.
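That overlap case can be checked directly: fix P(E | H) at 50% (the unfalsifiable hypothesis splits its prediction over the two possibilities) and set the total probability P(E) to 50%, then solve for the alternative's likelihood. The prior below is an illustrative assumption.

```python
# Overlap between unfalsifiability and independence:
# H is unfalsifiable over two outcomes, so P(E|H) = 0.5, and the
# total probability P(E) also happens to be 0.5.
p_h = 0.4          # assumed prior on H (illustrative)
p_e_given_h = 0.5  # unfalsifiable: both outcomes predicted equally
p_e = 0.5          # total probability of the evidence

# Solve P(E) = P(E|H)P(H) + P(E|~H)P(~H) for the alternative's likelihood:
p_e_given_not_h = (p_e - p_e_given_h * p_h) / (1 - p_h)

bayes_factor = p_e_given_h / p_e_given_not_h  # likelihood ratio
posterior = p_e_given_h * p_h / p_e           # Bayes' theorem

print(bayes_factor, posterior)  # factor of 1; posterior equals the prior
```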

So the lesson here is to always take into account alternative hypotheses.


Posted by on September 10, 2012 in Bayes
