
Monthly Archives: May 2012

Logical Fallacies As Weak Bayesian Evidence: Post Hoc Ergo Propter Hoc

So I already have two posts that go over the notion that logical fallacies aren’t necessarily fallacies of probability. The issue with logical fallacies is that in deduction, the conclusion has to follow necessarily from the premises. But we don’t live in a world of deductive certainty; we live in a world of uncertainty: the world of probability.

The thing about post hoc ergo propter hoc is that it is an inductive inference. That being the case, post hoc fallacies should be easily explained using probability theory, and thus Bayes' theorem. Thinking about this fallacy intuitively (that is, in quick Bayesian terms), it seems that it is an instance of the Base Rate fallacy. Of course, given that some cause is the reason for some effect, the cause has to come before the effect (unless you live in the world of quantum physics, which none of us do).

This means that the conditional probability, or success rate, of a post hoc argument would necessarily be 1.00, or P(B Happened After X | B Caused By X) = 1.00. But the argument itself is trying to prove P(B Caused By X | B Happened After X); the cause is the hypothesis and what happens afterward is the evidence. Sure, given that god answers prayers there's a 100% chance you would get a job after praying for it. But that's a Base Rate fallacy; we are not trying to establish P(Get A Job After Praying To God | God Answers Prayers) but P(God Answers Prayers | Get A Job After Praying To God).

One hundred percent of all effects (in the macro world) are preceded by their causes. Concluding that because this conditional probability is 100%, the purported cause must actually be the reason is, like I said, a Base Rate fallacy, because we aren't taking the prior probability into account.

But there's a second factor that has to be taken into account: the alternative hypothesis. What about an effect that just happens after the "cause" by chance, or because of some other cause? In other words, the false positive rate. This, surely, must also be a high number, though unlike the success rate that underwrites post hoc logic it doesn't have to be 100%. Given this, the Likelihood Ratio, that is, the success rate divided by the false positive rate, comes out barely above 1. If the success rate is 100% and the false positive rate is 98%, that's a Bayes factor of only about 1.02. This means that if we had a 50/50 spread between the hypothesis and the alternative, the post hoc ergo propter hoc logic in this example would only increase our probability to 50.5%.
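Here is that arithmetic as a minimal Python sketch; the 100% success rate and 98% false positive rate are the made-up numbers from the paragraph above, not measured values:

```python
def posterior(prior_odds, likelihood_ratio):
    """Turn prior odds and a likelihood ratio into a posterior probability."""
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

success_rate = 1.00         # P(B happened after X | B caused by X)
false_positive_rate = 0.98  # P(B happened after X | B not caused by X)

bayes_factor = success_rate / false_positive_rate  # ~1.02
print(posterior(1.0, bayes_factor))  # ~0.505: barely moved from the 50/50 prior
```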

If we go back to my original example, P(Get A Job After Praying To God | God Answers Prayers), we would have to include the alternative hypothesis. There are various alternatives, but let's just go with P(Get A Job After Praying | Economy Improves). Of course, there's not a 100% chance that you would get a job when the economy improves, but an improving economy in and of itself has a much higher prior probability than the existence of god. Therefore, in this case, P(God Answers Prayers) doesn't get boosted much above its prior, due to the small difference between P(Get A Job After Praying To God | God Answers Prayers) and P(Get A Job After Praying | Economy Improves).
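To make that concrete, here's a sketch with invented numbers; the priors and likelihoods below are purely illustrative assumptions, not estimates I'm defending:

```python
# Hypothetical numbers, for illustration only.
hypotheses = {
    # name: (prior, P(got a job after praying | hypothesis))
    "God answers prayers": (0.01, 1.00),
    "Economy improved":    (0.99, 0.80),
}

evidence = sum(prior * like for prior, like in hypotheses.values())
for name, (prior, like) in hypotheses.items():
    print(name, round(prior * like / evidence, 4))
# "God answers prayers" only moves from a prior of 0.01 to ~0.0125;
# the two likelihoods are too close together to do much work.
```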

So post hoc ergo propter hoc is weak (possibly very weak) probabilistic evidence. It’s not strong enough evidence to rest an entire argument on; you would need much more evidence. Or you would need an argument or situation where there is a huge disparity between the success rate and false positive rate, which most post hoc ergo propter hoc arguments never attempt to ascertain.

The god hypothesis, of course, also suffers due to its lack of falsifiability.

 

Wrath of the Titans!

2 Peter 2.4

For if God did not spare angels when they sinned, but sent them to hell, putting them in chains of darkness to be held for judgment

You're probably wondering what this short verse has to do with Greek mythology and/or some recent movies. It might become a bit clearer if I write it in the original language it was penned in:

εἰ γὰρ ὁ θεὸς ἀγγέλων ἁμαρτησάντων οὐκ ἐφείσατο, ἀλλὰ σειροῖς ζόφου ταρταρώσας παρέδωκεν εἰς κρίσιν τηρουμένους,

ei gar o theos aggelon amartesanton ouk efeisato, alla seirois zofou tartarosas paredoken eis krisin teroumenous

There we go, the offending word: the verb form of the word Tartarus, which the author of 2 Peter is using to mean "cast into hell". How's that for syncretism? Both Hades (Matt 11.23) and Tartarus are mentioned in the NT. Of course, someone might counter that many English words also derive from Greek — like hysteria (from the Greek for womb) or energy (en ergos: "in work") — and that this doesn't mean we have some syncretism with Greek mythology (our Western religions have syncretism with Greek mythology for other reasons).

But the difference is that Greek mythology was still believed by a great many people when pseudo-Peter wrote this epistle. Walking around, he might have heard Greeks explicitly talking about Tartarus as though they really believed it existed as it does in Greek mythology; he could hardly have been unaware of what the word meant, unlike modern speakers of English, who generally don't know what "psycho" or "pneumonia" or "sycophant" originally meant. I imagine if more people knew what sycophant originally meant and implied, feminists would have a field day.

 

Posted on May 23, 2012 in greek

 

Self-Contradictions In Hoffmann’s Latest Essays

So R. Joseph Hoffmann has published three essays arguing against mythicism. I don't have a bone to pick with mythicism, but I do have a bone to pick with bad arguments. Especially self-contradictions.

Here is the bit of self-contradiction that demonstrates that they see Bayes' Theorem as some manner of sorcery instead of as a model of correct thinking when dealing with uncertainty:

But I think the basic factuality of Jesus is undeniable unless we (a) do not understand the complexity of the literature and its context, or impose false assumptions and poor methods on it; (b) are heavily influenced by conspiracy theories that–to use a Humean principle—are even more incredible than the story they are trying to debunk; or (c) are trying merely to be outrageous.  To  repeat Morton Smith’s verdict on Wells, the idea that Jesus never existed requires the concoction of a myth more incredible than anything to be found in the Bible.

The use of any single “theorem” to deal with the values discussed here beggars the credible.

(speaking of poor methods…)

Did anyone notice it? No? It was his reference to a Humean principle. The same Humean principle that is a Bayesian principle, which he then denigrates at the beginning of the very next paragraph. Ironically, if Bayes’ Theorem doesn’t apply, then neither does Hume’s argument that he appeals to since they are the same thing.

David Hume and Thomas Bayes were contemporaries. Hume used logic to arrive at his conclusion, while Bayes used math to arrive at his formula (which necessitates Hume's conclusion). If math doesn't apply, then neither does logic; Bayes is no less applicable to historical questions than a logical syllogism.

If someone thinks that, when doing historical analysis, extraordinary claims require extraordinary evidence, they are a Bayesian. If someone thinks that falsifiable historical hypotheses are better than unfalsifiable historical hypotheses, they are a Bayesian. Bayes' theorem models all correct probabilistic thinking. If historians are dealing with uncertainty and using probabilistic language, they should know the rules of probability.
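A minimal sketch of why "extraordinary claims require extraordinary evidence" falls straight out of Bayes' theorem; the prior and likelihood ratio here are arbitrary stand-ins, not estimates of anything:

```python
# An "extraordinary claim" is one with a tiny prior. Even evidence that is
# 100x more likely if the claim is true than if it is false leaves it unlikely.
prior = 0.0001          # arbitrary stand-in for an extraordinary claim
likelihood_ratio = 100  # strong, but not extraordinary, evidence

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds / (1 + posterior_odds))  # ~0.0099 -- still below 1%
```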

Stephanie Fisher writes:

[Bayes’ Theorem] is completely inappropriate for, and unrelated to historical occurrence and therefore irrelevant for application to historical texts

Of course, she is wrong. Unless we have 100% confidence in every single argument and piece of evidence in history, probability theory will necessarily apply. It doesn't matter whether you work percentages out to the nth decimal place, since it's not about mathematical accuracy but about making sure your conclusions (which are necessarily probabilistic statements in history) follow from your premises (which are also probabilistic statements). Even if you use educated (or even uneducated) guesses, you still have to follow the rules of probability so that your conclusion follows from your premises.

I reiterate: If historians are using probabilistic statements and educated guesses, they still have to know the rules of probability. To say that probability theory doesn’t apply is to say things like Occam’s Razor and falsifiability don’t apply. And if falsifiability doesn’t apply, then that’s not even pseudoscience. That’s religion.

So the self-contradiction, the irony, is that Hoffmann (and Fisher) contradict themselves when they claim that Bayes' theorem doesn't apply. It does apply; they just don't understand it. Sure, you can use Bayes' theorem incorrectly, just like you can use formal logic incorrectly, but to make the sweeping statement that it doesn't apply at all is to shut yourself out of correct thinking. The contradiction I've hopefully pointed out is that they already use Bayesianism intuitively when they think correctly, even for mundane everyday things. They just need to use it more explicitly when doing scholarship, which is Carrier's point.

Does Hoffmann really think that he needs mathematical precision to the nth decimal place to conclude that if a student of his misses a week of class that the student was probably goofing off instead of having been abducted by aliens? I would hope not: Welcome to Bayesianism.

 

Posted on May 22, 2012 in Bayes

 

Flesh Eating Bacteria and Probability

It’s well known that doctors are bad at probability:

Here’s a story problem about a situation that doctors often encounter:
1% of women at age forty who participate in routine screening have breast cancer. 80% of women with breast cancer will get positive mammographies. 9.6% of women without breast cancer will also get positive mammographies. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer?

[…]

Next, suppose I told you that most doctors get the same wrong answer on this problem – usually, only around 15% of doctors get it right. (“Really? 15%? Is that a real number, or an urban legend based on an Internet poll?” It’s a real number. See Casscells, Schoenberger, and Grayboys 1978; Eddy 1982; Gigerenzer and Hoffrage 1995; and many other studies. It’s a surprising result which is easy to replicate, so it’s been extensively replicated.)

On the story problem above, most doctors estimate the probability to be between 70% and 80%, which is wildly incorrect.
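The correct answer, for the record, is a bit under 8%. Here's the arithmetic spelled out; this is just Bayes' theorem applied to the numbers given in the quote:

```python
p_cancer = 0.01              # 1% base rate among women in this age group
p_pos_given_cancer = 0.80    # mammography sensitivity
p_pos_given_healthy = 0.096  # false positive rate

p_positive = (p_cancer * p_pos_given_cancer
              + (1 - p_cancer) * p_pos_given_healthy)
print(p_cancer * p_pos_given_cancer / p_positive)  # ~0.0776, i.e. about 7.8%
```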

That's why it is in your best interest to learn some probability theory so that you don't die! Not knowing probability might increase your probability of death; look at the anecdote at the tail end of this story:

She said that many doctors have a mantra: “If you hear hooves outside your window, chances are it’s a horse and not a zebra,” meaning that you should first consider the obvious explanation. “Our point is, physicians need to be trained to look at necrotizing fasciitis as a horse and not a zebra.”

If you suspect the disease, ask doctors to rule it out. Batdorff cited the case of "a gentleman whose wife said to the emergency room staff 'could this be the flesh-eating bacteria?' They said no. And it was. And he died."

There's a lot right with this quote, but one thing wrong. No, flesh-eating bacteria shouldn't be thought of as a horse. It's still a zebra, meaning that it's still less common than other infections.

But the sound advice — the best advice — is to ask doctors to rule out the more serious (though less probable) possibility. That's not probability in and of itself, but decision theory. More to the point: ask doctors to run a high success rate/low false positive rate test to rule the possibility out. Like I wrote in the post right before this one, disconfirming evidence is better than confirming evidence. If flesh-eating bacteria produce a certain symptom 100 out of 100 times, having that symptom doesn't mean you actually have the disease. That, again, is the Prosecutor's or Base Rate fallacy.
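Here's a sketch of why a sensitive test works as a rule-out tool; the prior, sensitivity, and false positive rate are all invented for illustration:

```python
prior = 0.001       # a rare disease -- a zebra; illustrative number only
sensitivity = 0.99  # P(test positive | disease)
false_pos = 0.05    # P(test positive | no disease)

# Probability of the disease after a NEGATIVE result:
p_negative = prior * (1 - sensitivity) + (1 - prior) * (1 - false_pos)
print(prior * (1 - sensitivity) / p_negative)  # ~0.00001: nearly ruled out
```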

Ask for disconfirming evidence. And don’t just accept “no” from a doctor like in the last sentence of the quote, because like I said, doctors suck at probability just like the rest of us. And that ignorance of probability might cost you a lot of money in repeated doctor’s visits… or cost you your life.

 

Posted on May 17, 2012 in Bayes

 

Bayes’ Theorem and Falsifiability (2)

I thought I’d attempt another go at explaining Bayes’ theorem and falsifiability.

In a previous post, I went over a hypothetical scenario where there are only two possible ways of getting a headache: One was by brain tumors and the other was by head colds. In this hypothetical scenario, the number of people in the world with brain tumors was equal to the number of people in the world with head colds; head colds are responsible for headaches in 50 out of 100 people and brain tumors are responsible for headaches in 100 out of 100 people.

Given all of that information, if you wake up with a headache, what is the probability that you have a brain tumor, and what is the probability that you have a head cold?

Let's assume that the prior probability for both H1 (brain tumor) and H2 (head cold) is 10%, leaving 80% for neither (~H). Our Bayes' theorem would be:

P(H1 | E) = P(E | H1) * P(H1) / ([P(E | H1) * P(H1)] + [P(E | H2) * P(H2)] + [P(E | ~H) * P(~H)])

= 1.00 * 0.1 / ([1.00 * 0.1] + [0.5 * 0.1] + [0 * 0.8])
= 0.1 / ([0.1] + [0.05] + [0])
= 0.1 / 0.15
= .6666

So the probability of having a brain tumor, upon getting a headache, went up from 0.1 to 0.6666. Thinking otherwise, that since 100 out of 100 people with brain tumors have headaches your headache must mean a brain tumor, is the Prosecutor's Fallacy. The probability of having a head cold likewise went up from 0.1 to 0.3333.

The thing is, in this scenario you could lack a headache and still have a head cold. Since 50 out of 100 people get headaches due to head colds, it could go either way. Both a headache and a non-headache could be evidence of a head cold; that is the essence of being unfalsifiable: one observation is no more or less probable than the other, mutually exclusive, observation. On the other hand, not having a headache is pretty strong evidence that you don't have a brain tumor. Not having a headache falsifies the brain tumor hypothesis; absence of evidence is evidence of absence. But, again, you could have a headache and not have a brain tumor, even if brain tumors cause headaches 100% of the time; there's still a 33.33% chance that you have a head cold. So one can see the dangers behind confirmation bias. Falsifying, disconfirming evidence is a lot better than confirming evidence.
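Here's the whole scenario as a small sketch, using the numbers stipulated above; it also computes the case the text only describes, namely waking up without a headache:

```python
# name: (prior, P(headache | hypothesis)), as stipulated above
hypos = {"brain tumor": (0.1, 1.0), "head cold": (0.1, 0.5), "neither": (0.8, 0.0)}

def posteriors(headache):
    like = lambda p: p if headache else 1 - p
    total = sum(prior * like(p) for prior, p in hypos.values())
    return {name: prior * like(p) / total for name, (prior, p) in hypos.items()}

print(posteriors(True))   # tumor ~0.667, cold ~0.333, neither 0.0
print(posteriors(False))  # tumor 0.0 (falsified!), cold ~0.059, neither ~0.941
```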

Let’s up the ante.

Say your friend has two dice. One is a normal die with six sides numbered 1–6, and the other is a trick die that has a 1 on all faces. She rolls one of the dice at random and it comes up 1. What is the probability that she rolled the normal six-sided die, and what is the probability that she rolled the trick die?

For the normal 6 sided die, our probability distribution is P(One | Normal) + P(Two | Normal) + P(Three | Normal) + P(Four | Normal) + P(Five | Normal) + P(Six | Normal) = 1.00. If it is a fair die, then the probability for P(One | Normal) = 1/6 or .1667.

For the trick die, our probability distribution is P(One | Trick) = 1.00.

We can then go through Bayes’ to see what the probability is for her rolling each:

P(Normal | One) = P(One | Normal) * P(Normal) / ([P(One | Normal) * P(Normal)] + [P(One | Trick) * P(Trick)])
= .1667 * .5 / ([.1667 * .5] + [1.00 * .5])
= .0834 / ([.0834] + [.5])
= .0834 / .5834
= .1429

P(Trick | One) = P(One | Trick) * P(Trick) / ([P(One | Trick) * P(Trick)] + [P(One | Normal) * P(Normal)])
= 1.00 * .5 / ([1.00 * .5] + [.1667 * .5])
= .5 / ([.5] + [.0834])
= .5 / .5834
= .8571

So upon rolling a 1, the probability that she rolled the normal six-sided die is .1429 and the probability that she rolled the trick die is .8571. There is still some ambiguity here, but if you were a betting person you should bet on her having rolled the trick die. But due to falsifiability, if she had rolled any other number we would have 100% confidence that she rolled the normal die. Again, disconfirmation is stronger than confirmation.

Let's try another example, this time approximating people's confidence in their unfalsifiable hypotheses by increasing the prior probability in favor of the unfalsifiable hypothesis. Let's introduce a 50-sided die and a prior of 90% in favor of picking it, leaving 5% each for the six-sided die and the trick die. With those three dice to choose from, the probability for each die, given that she rolled a 1, is:

P(Fifty | One) = P(One | Fifty) * P(Fifty) / ([P(One | Fifty) * P(Fifty)] + [P(One | Trick) * P(Trick)] + [P(One | Six) * P(Six)])
= .02 * .9 / ([.02 * .9] + [1.00 * .05] + [.1667 * .05])
= .018 / ([.018] + [.05] + [.0083])
= .018 / .0763
P(Fifty | One) = .2358

P(Six | One) = P(One | Six) * P(Six) / ([P(One | Six) * P(Six)] + [P(One | Trick) * P(Trick)] + [P(One | Fifty) * P(Fifty)])
= .1667 * .05 / ([.1667 * .05] + [1.00 * .05] + [.02 * .9])
= .0083 / ([.0083] + [.05] + [.018])
= .0083 / .0763
P(Six | One) = .1092

P(Trick | One) = P(One | Trick) * P(Trick) / ([P(One | Trick) * P(Trick)] + [P(One | Six) * P(Six)] + [P(One | Fifty) * P(Fifty)])
= 1.00 * .05 / ([1.00 * .05] + [.1667 * .05] + [.02 * .9])
= .05 / ([.05] + [.0083] + [.018])
= .05 / .0763
P(Trick | One) = .6550

Upon rolling a 1, the 50-sided die has a .2358 probability of having been rolled, the six-sided die a .1092 probability, and the trick die a .6550 probability. Even given a prior probability of 90% that your friend would pick the 50-sided die. This is the problem with positing hypotheses that can equally explain multiple exclusive outcomes, even when there is a high initial probability of the hypothesis being true. If we had a 100-sided die and a 90% chance of picking that die, then upon rolling a 1 there would only be a .1337 probability that the 100-sided die was picked, in contrast to a .7426 probability that the trick die was picked. A 200-sided die would do worse. 300, even worse. Etc.
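Here's a small sketch that reproduces these numbers and makes the trend easy to check for the 100-sided (or 200-sided) case. Since every face is equally likely, P(One | die) is just 1 divided by the number of sides, with the trick die counting as one-sided:

```python
def die_posteriors(dice):
    """Posterior for each die given a roll of 1.
    dice maps a name to (prior, sides); the trick die counts as 1-sided."""
    total = sum(prior / sides for prior, sides in dice.values())
    return {name: (prior / sides) / total for name, (prior, sides) in dice.items()}

print(die_posteriors({"fifty": (0.90, 50), "six": (0.05, 6), "trick": (0.05, 1)}))
# {'fifty': ~0.2358, 'six': ~0.1092, 'trick': ~0.6550}
print(die_posteriors({"hundred": (0.90, 100), "six": (0.05, 6), "trick": (0.05, 1)}))
# {'hundred': ~0.1337, 'six': ~0.1238, 'trick': ~0.7426}
```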

I should emphasize that this penalty only applies when the outcomes being explained are mutually exclusive.

How much mutually exclusive data can an all-powerful god, philosophical zombies, solipsism, being a brain in a vat, the world being created last Thursday, etc. explain? How many sides would God Dice have? In an effort to prevent their god from being proven wrong, believers have given their god dice with every side imaginable.

Bayesian Judo (falsifiability) will always win over goalpost moving (unfalsifiability). A god that can be proven wrong is more probable than a god that can’t be proven wrong.

 

Posted on May 15, 2012 in Bayes

 

Jerry Coyne: The correlation between religiosity and well-being among U.S. states

Dr. Jerry Coyne, biologist and author of Why Evolution Is True, wrote a fantastic blog post that shows the correlation between income inequality and religiosity in countries around the world. Coyne gave a talk about evolution, religion, science, and societal dysfunction in which he argues that lack of acceptance of evolution is linked to high religiosity, which itself is linked to poor societal health. A commenter crunched some numbers specifically for the United States:

[Dr.] Harry [Roy, professor of biology at Rensselaer Polytechnic Institute in New York] found some relevant data in the United States, crunched the numbers, and did a statistical analysis. He left comments and a link to the analysis, after my post. And he’s kindly done a bit more analysis and allowed me to reproduce it here. What he found is precisely the same relationship among states (using the HDI) as I found among countries: American states with lower HDIs are more religious.

First, a portrait of American religiosity taken from a 2009 Gallup poll:

As we know, the south is really religious (just go there if you doubt that!), and the northeast and west coast states much less so.

And below is a national map of the Human Development Index (HDI) from Wikipedia. This index is a measure of societal well being that differs from the “Successful Societies Scale” (SSS) that I used in my talk at Harvard. The HDI uses a set of traits that differ from those used in the SSS: the former amalgamates three traits (life expectancy, education, and income), while the latter combines 25 traits, including corruption, income disparity, child mortality, access to medical care, suicide rates, and so on. Unlike the SSS, under which the U.S. ranks very low among first-world nations, the HDI places the U.S. at the top when the index is not adjusted for inequality among residents, but falls much lower when adjusted for inequality (see the Wikipedia article on the HDI at link above). The disparity may be due to the inclusion of income inequality in the adjusted HDI; income inequality is highly positively correlated with religiosity across 71 nations.

The south is not so great here, the northeast (and two states on the west coast) are better. That suggests a relationship between religiosity and well being as measured by the HDI.

After crunching the data, Dr. Roy produced this correlation between the religiosity of the 50 states and their ranking on the HDI:

As you see, we have the same negative relationship between well-being and religiosity that we saw for different countries of the West. The correlation here is r= – 0.66897, and the probability (“p”) that this correlation would arise by chance is p = 0.00000012. (A value of p less than 0.05 is conventionally used to show a significant relationship.) This relationship, then, is not only striking but very highly significant in a statistical sense. Harry put a least-squares regression line through the data; its slope is also highly significant.
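As a sanity check, you can recover that p-value from just the correlation coefficient and the sample size (n = 50 states) using the standard t-test for a Pearson correlation. A quick sketch, assuming SciPy is available:

```python
from math import sqrt
from scipy import stats

r, n = -0.66897, 50
t = r * sqrt((n - 2) / (1 - r**2))    # t-statistic for a Pearson correlation
p = 2 * stats.t.sf(abs(t), df=n - 2)  # two-tailed p-value
print(t, p)  # t ~ -6.24, p ~ 1.2e-07, matching the quoted 0.00000012
```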

The only thing left to determine is whether religion is the cause or the effect of income inequality. There could also be some other variable(s) driving both indicators. But whatever the cause, it stands to reason that your best bet for a good, healthy society to live in is one that accepts evolution! Now why would god do that?

 

Posted on May 13, 2012 in economics/sociology

 

Neil DeGrasse Tyson on Atheism

I’m pretty sure most people have seen NDT’s video on why he doesn’t call himself an atheist. (If you haven’t, here it is).

I just want to put a spotlight on a recent Facebook post of his where he wrote:

Thanks for all your candid comments on this wall regarding my short atheism-agnosticim clip on “Big Think”. I found them illuminating for their breadth as well as their depth. I note a few other possibly unexpected things about me: Not only do I not embrace labels, you will never see me debating people on the subjects of UFOs, Religion, Alternative Health practices, Astrology, or Pseudoscience in general. My speeches at TAM 6 & 9 were given reluctantly (I don’t normally attend). I don’t sign petitions. I don’t write to, or lobby congress (although I am happy to testify when asked). I don’t lead or participate in rallies. I don’t picket. And I don’t publicly align with organized causes. Meanwhile, labels and causes have, now and then, aligned themselves with me. In any case, I’m rather specific about how I invest my energies. As an educator, I have found that people are more receptive to learning when they know you don’t have an agenda, and when they determine that your goal is to teach them how to think rather than what to think. Such is the universe I have created for myself

I have to agree 100% with his reasoning, both in the video and in his quote here. At the current juncture in history, "atheism" is a cause; an identity. And it needs to be, because the adjective "atheist" has been one of the longest-lived insults in the history of the human race, and that has to change. The fact of the matter is that NDT is an atheist; he just chooses not to apply that label to himself because the people who do usually have some agenda. And being associated with that agenda, he argues, would hinder his primary goal as an educator.

Agnosticism is in another class altogether, so creating a dichotomy between the two is nonsensical. Agnosticism can be a reason for atheism, but it could also be a reason for theism. The way I see it, the way you live your life determines your brand of theism or atheism (or deism or polytheism or misotheism, etc.). If you go about your life as though a god exists, then you're a theist. If you go about your life as though no god exists, then you're an atheist. You can be agnostic about either proposition, but what you do reflects your "real" beliefs more accurately than what you say.

What you do will always be more powerful than what you believe. Which is why I think the biggest crime against the human spirit is to reject someone, not because they treat you badly, but because they believe the “wrong” thing.

As a counterpoint, to bring up an issue where "agnostic" makes sense: I'm agnostic about whether Jesus existed or not. The existence or non-existence of Jesus has absolutely no bearing on how I go about my life, so how I act wouldn't be a good gauge of what I think regarding that guy's historicity.

So yeah, even though he might not like it, I consider NDT to be on “my team”: Team atheism (the same is true of Bart Ehrman, sorry lol). But it’s not only because NDT is an atheist, but because we went to the same high school (of course about 20 years apart) 😉

 

Posted on May 11, 2012 in atheism

 
 