# Category Archives: logical fallacies as weak bayesian evidence

## “There’s No Evidence For The Existence of God”

I used to think that the title-quote of this blog post was a good rejoinder when people asked me why I didn’t believe in any sort of god. Nowadays, I grimace a little when I hear atheists use that phrase, because now I consider myself a Bayesian. And for Bayesians, “no evidence” means something quite different from what most people mean by it.

As a Bayesian, if I say there is evidence for some hypothesis, then this means that P(H | E) > P(H). If I say there is evidence against some hypothesis, then this means that P(H | E) < P(H). Most importantly, as a Bayesian, I don't just update once; I update on multiple pieces of evidence to arrive at a provisional posterior probability about some claim. And it’s provisional because there’s always new evidence to discover. In this sense, and in my opinion, agnosticism is probably the closest mainstream or Traditional Rationality analog to being a Bayesian.

But what could it mean if I say there is no evidence for some claim? And does this apply to the concept of god?

Let’s compare two conditional probabilities: The probability of having some datum given that god exists and the probability of having some datum given the nonexistence of god. P(D | G) and P(D | ~G). So, assuming god exists, what would the most basic evidence be, and would this be more or less likely given the nonexistence of god?

A quick reminder of one axiom of probability: P(E | H) + P(~E | H) = 100%. That is, the probability of seeing the evidence given that the hypothesis is true, plus the probability of not seeing that evidence (or seeing some other evidence) given that the hypothesis is true, must exhaust all possibilities. Meaning they add up to 100%. This is how you know that you have a 1/6 chance of rolling a 4 given a fair die: P(Roll 4 | Fair Die) + P(Roll Other Number | Fair Die) = 100%.
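As a sanity check, the fair-die numbers can be written out directly (a minimal sketch of the axiom above; the names are mine):

```python
from fractions import Fraction

# P(Roll 4 | Fair Die): one face out of six
p_four = Fraction(1, 6)
# P(Roll Other Number | Fair Die): the remaining five faces
p_other = Fraction(5, 6)

# The two conditionals exhaust all possibilities given the hypothesis,
# so they must sum to exactly 1
assert p_four + p_other == 1
print(p_four + p_other)  # 1
```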

Given that, most simplistically, stories about the existence of god are more likely than no stories about god, given that god exists. Meaning that P(D | G) > P(~D | G). And the reverse for the alternative: stories about the existence of god are less likely, given that no god exists, than no stories about god given that god doesn’t exist. Meaning, also, that P(D | ~G) < P(~D | ~G). To say this another way: if god did exist, we would have more stories about him than if god didn’t exist, so P(D | G) > P(D | ~G). Think about it: there are far more nonexistent things we have no stories about than nonexistent things we do have stories about. Sure, there are stories of unicorns, and unicorns don’t exist. But what about the trillions of things that don’t exist that we also don’t have stories about? They are legion.

Basically, anecdotes about the existence of god are evidence that god exists. I go over this in the post Logical Fallacies as Weak Bayesian Evidence: Argument from Anecdote. This all might seem a bit counterintuitive, but relying on intuition to make decisions is just another way of saying that the decision conforms to your biases. Which is usually not a good thing.

So what does no evidence look like? To me, it would be a conditional probability that is equal under all alternatives: one where the Bayes factor is 1. In other words, the evidence occurs independently of whether the hypothesis is true.
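To make that concrete, here is a small sketch of the odds-form Bayesian update (the 0.3 prior and the likelihoods are arbitrary numbers chosen for illustration):

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) via the odds form of Bayes' theorem."""
    prior_odds = prior / (1 - prior)
    bayes_factor = p_e_given_h / p_e_given_not_h
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# Evidence independent of the hypothesis: P(E|H) == P(E|~H), Bayes factor 1.
# The posterior equals the prior -- this is what "no evidence" means.
print(update(0.3, 0.5, 0.5))  # ~0.3

# Evidence more likely under H than under ~H: the probability goes up.
print(update(0.3, 0.8, 0.4))  # > 0.3
```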

This all being said, I think there is evidence for the existence of god. I actually concede a little bit of relatively strong evidence for the existence of god. But there is so much more evidence against the existence of god, because god, as defined by laypeople and sophisticated theologians alike, is unfalsifiable. For most data besides morality, invoking god is like rolling a die with trillion^trillion sides and expecting a 3, compared to rolling a 3 on a normal die. This is what happens when one conceives of an all-powerful god: there’s nothing an all-powerful god can’t explain.

So yes, there is evidence for the existence of god. But it is underwhelming in comparison to the orders upon orders of magnitude of the evidence — Bayesian evidence — against the existence of god.

## Logical Fallacies As Weak Bayesian Evidence: The Fallacy Fallacy

So the Fallacy Fallacy is when you declare the conclusion of an argument false because the argument’s logical structure contains a logical fallacy. As in the attached comic, you can’t just declare a conclusion wrong based on the number of logical fallacies the argument contains; at most, you can say that the conclusion doesn’t follow from the premises, or that a premise is false. Take the following argument:

P1: Bill is the CEO of my company and says that dolphins are fish
P2: Bill also says that all fish live in the ocean
C: Since Bill is the CEO of the company, and says that dolphins are fish, then dolphins live in the ocean

This argument not only appeals to false authority for the strength of its premises (itself weak Bayesian evidence) but also has factually incorrect premises. However, saying “appeal to false authority!” doesn’t actually mean that dolphins don’t live in the ocean.

So, looking at this from a Bayesian point of view, how likely is it that the fallacy fallacy actually flags a false conclusion as opposed to a true one (and is thus itself the mistake)? Our evidence E is “you used a logical fallacy in constructing this argument” and our hypothesis H is “your conclusion is wrong”; we want to compare how likely E is under H versus under ~H.

Making an intuitive judgement, there seems to be no relationship between the logical fallacies used and whether an argument has a true conclusion; the fallacy fallacy itself seems to hinge on whether other logical fallacies can legitimately be used as weak Bayesian evidence. So if a fallacy fallacy can point out false conclusions just as easily as it can mistakenly call true conclusions false, it seems as though P(E | H) is equal to P(E | ~H). I don’t think I have any valid way to differentiate between the two conditionals. If that is true, then I’m in a case where the Bayes factor is 1, and I should just go by the prior probability that a person would arrive at a false conclusion. Over the course of human history, more people have been wrong than right, so the prior probability that some random person knows the correct answer to something is low.

So the fallacy fallacy is not technically strong or weak Bayesian evidence. But paying attention to the base rate of someone constructing a valid and sound argument means that an argument called out with the fallacy fallacy probably has a false conclusion anyway: the prior is doing the work, not the fallacy.


Posted on January 7, 2014 in Bayes, logical fallacies as weak bayesian evidence

## Logical Fallacies as Weak Bayesian Evidence: Argument from Anecdote

Another juicy logical fallacy that gets repeated over and over on teh Internetz, due to how thoughts are cached in your brain like webpages on your computer. Again, the problem is that people treat anecdotes as strong Bayesian evidence, or even as a Prosecutor’s Fallacy-like conclusion, when in all likelihood they’re just very weak Bayesian evidence. But evidence is evidence nonetheless. Like pressing the gas pedal in your car: you can accelerate by 50 mph or by 5 mph, but acceleration is acceleration, whether strong or weak.

So, the argument from anecdote. This is taking an event that happened to you personally and using it in an argument for a general explanation. Let’s take a claim that I obviously think is false, like ghost stories. Someone tries to convince me that ghosts are real because they once heard a creaky floor in an old abandoned house and got a feeling of dread. Obviously, this isn’t conclusive evidence of the existence of ghosts. But assuming that ghosts are real, this would “fit” into that worldview. And that’s the rub.

I personally think there’s a 1 in a trillion chance that ghosts are real, so I’ll use that number to demonstrate why an anecdote can still be used as evidence but not as a conclusion. Just like in my example of falsifiability using Bayes’ Theorem, let’s assume that I have a jar with two types of dice: normal dice labeled 1–6, and a trick die that has a 5 on all sides. But in this instance, the jar is filled with 999,999,999,999 normal dice and only 1 (one!) trick die.

If I grab a die at random from the jar and roll a 5, Bayes Factor says I should divide the probability of rolling a 5 given that I’ve rolled the trick die by the probability of rolling a 5 given that I’ve rolled the normal die. Given that I’ve rolled the trick die, the probability of rolling a 5 is 100%. Given that I’ve rolled the normal die, the probability of rolling a 5 is 16.7%. This quotient is greater than 1 so that means that rolling a 5 is evidence for having rolled the trick die. But the prior probability of rolling the trick die in this case is basically a trillion to one, so in the end it is much more likely that I had grabbed a normal die.
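The whole calculation can be written out with Bayes’ theorem directly, using the numbers above (a minimal sketch; exact fractions avoid rounding noise):

```python
from fractions import Fraction

# Jar: 999,999,999,999 normal dice and 1 trick die (a 5 on every side)
prior_trick = Fraction(1, 10**12)
p5_given_trick = Fraction(1)      # trick die always rolls a 5
p5_given_normal = Fraction(1, 6)  # normal die rolls a 5 one time in six

# Bayes factor: rolling a 5 is evidence FOR having grabbed the trick die
bayes_factor = p5_given_trick / p5_given_normal
print(bayes_factor)  # 6

# ...but the posterior is still dominated by the trillion-to-one prior
posterior = (p5_given_trick * prior_trick) / (
    p5_given_trick * prior_trick + p5_given_normal * (1 - prior_trick)
)
print(float(posterior))  # ~6e-12: almost certainly still a normal die
```

So the roll multiplies the odds by 6, yet the trick-die hypothesis remains astronomically unlikely: evidence, but weak in effect because the prior is so small.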

Regardless, rolling a 5 is weak Bayesian evidence for having grabbed the trick die. Just like a ghost story anecdote is weak Bayesian evidence for the existence of ghosts.

Let’s try a more controversial anecdote, like “black people are stupid”. Someone grows up with this worldview of black people having low IQs while never having met a black person. The first time they meet one, he happens to have been the worst student in one of their high school classes. Assuming the hypothesis is true, this anecdote fits that worldview; maybe not at 100% like the trick die, but with high probability. On the flip side, assuming this worldview is false, there’s a much lower probability that this would happen. As a matter of fact, under the alternative hypothesis I would assume that about half of black people are below average intelligence, so the alternative hypothesis would say there’s a 50% chance that this would happen.

As it stands, the racist hypothesis puts more probability capital in seeing something like this than the non-racist hypothesis, so the racist hypothesis gets the evidence cash-out due to this anecdote. Just like with the trick die compared to the normal die. So again, in this case an anecdote can be legitimately used as Bayesian evidence. It might be strong or weak evidence, but it’s evidence.


Posted on June 6, 2013 in Bayes, logical fallacies as weak bayesian evidence

## Logical Fallacies as Weak Bayesian Evidence: Correlation Doesn’t Equal Causation

See my post about the related logical fallacy Post Hoc Ergo Propter Hoc.

## Logical Fallacies as Weak Bayesian Evidence: Argumentum Ad Hominem

Ad hominem. The most well known logical fallacy (and probably most used) in the history of everything. Like most logical fallacies, in some instances it might not actually be fallacious; but again, the thing about straightforward logic is that the conclusion has to follow necessarily from the premises. And again, we don’t live in a world of deductive certainty but the world of inductive probability.

First I have to distinguish between an ad hominem (ad hom) fallacy and a personal attack. The following is an ad hom:

Bob: All instruments that have strings are made out of wood. My guitar has strings. Therefore it is a wood instrument
Sue: Your guitar isn’t a wood instrument because you are a well known liar

This is a personal attack:

Bob: All instruments that have strings are made out of wood. My guitar has strings. Therefore it is a wood instrument
Sue: My electric violin has strings but isn’t made out of wood. You are a known liar.

The tricky part is that there can be non-fallacious ad hom arguments. This might be seen as a valid ad hom:

Bob: All instruments that have strings are made out of wood. My guitar has strings. Therefore it is a wood instrument
Sue: You are a known liar so I have no reason for accepting your say-so that all instruments that have strings are made out of wood. Therefore I can’t follow you to your conclusion that your guitar is a wood instrument.

Pedantically, an ad hom is fallacious because one is rejecting an argument or the conclusions of an argument based on the qualities of the person presenting it. However, a premise can be accepted or rejected for any reason; it’s up to the person presenting the argument to support their premises. The personal attack, however, is simply attacking the person gratuitously.

It’s a pretty good rule of thumb that an untrustworthy source isn’t to be trusted. If I read a story in the National Enquirer, there’s a high probability that it won’t be true. But people confuse themselves when, instead of “National Enquirer” we switch to “Bob from accounting”. If Bob was also a well known liar, then it might be perfectly reasonable to not accept a story from Bob as true.

But this is about arguments, not stories. So what if Bob was well known for concocting arguments that sound true but are actually fallacious? What if Bob was well known for his sophistry? Let’s say you hear argument E from Bob. The hypothesis H is that Bob’s argument is a good one rather than a shitty, sophistic one. The probability of E given H would be pretty low, since it is basically the probability of hearing Bob’s argument given that he made an actually good argument. The converse is the probability of hearing Bob’s argument given that he is intentionally being tricky.

Let’s solidify this example a bit. Say you overhear that Bob has recently argued that glutamine makes a good dietary supplement for bodybuilding. What is the probability that this is a good argument, given that Bob argues for it? This depends on the probability that Bob would posit an argument given that it’s good versus the probability that Bob would posit an argument given that it’s sophistry. Or:

P(Good Arg) is our prior probability
P(Bob Said | Good Arg) is our success rate
P(Bob Said | Bad Arg) is our false positive rate
P(Good Arg | Bob Said) is what we are trying to find out

So let’s say that we are perfectly agnostic about whether glutamine is a good dietary supplement for bodybuilding. If Bob has almost never presented a good argument for something, then P(Bob Said | Good Arg) would be extremely low. However, knowing that Bob is constantly presenting sophisticated yet false arguments, P(Bob Said | Bad Arg) would be extremely high. So our Bayes factor in this instance would be P(Bob Said | Good Arg) / P(Bob Said | Bad Arg), which is low / high, which is less than 1.
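A quick sketch with made-up numbers (the 10% success rate and 90% false positive rate below are assumptions for illustration, not measurements of any real Bob):

```python
def posterior_good(prior, success_rate, false_positive_rate):
    """P(Good Arg | Bob Said) via Bayes' theorem."""
    return (success_rate * prior) / (
        success_rate * prior + false_positive_rate * (1 - prior)
    )

prior = 0.5        # perfectly agnostic about the glutamine claim
p_said_good = 0.1  # assumed: Bob almost never presents good arguments
p_said_bad = 0.9   # assumed: Bob constantly presents sophistic ones

print(p_said_good / p_said_bad)  # Bayes factor ~0.11, less than 1
print(posterior_good(prior, p_said_good, p_said_bad))  # ~0.1
```

Starting from 50/50, hearing that Bob made the argument drops the probability that it’s a good one to about 10% under these assumed rates.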

This means that, given that Bob has presented an argument whose conclusion we are unsure about, it is more likely that glutamine is not a good dietary supplement for bodybuilding, based only on the fact that Bob has argued in the positive. So depending on how disparate Bob’s success and false positive rates are, an ad hominem argument could be either strong or weak Bayesian evidence against Bob’s argument. Maybe Bob has a false premise tucked in his argument? Usually that’s how sophistry works. And remember, premises can be accepted or rejected for any reason, and it’s up to the person presenting the argument to support their premises.

So here is the apparent contradiction: it is definitely deductively fallacious to reject an argument based on who says it. But it is not inductively fallacious to reject an argument based on who says it, because, given their history, there could be a high probability of them tucking false premises into the argument (like a complex question). This makes a bit of sense, since induction and deduction are a bit at odds. What’s true by induction isn’t necessarily true by deduction, and what’s true by deduction isn’t necessarily true by induction.

But again, this is weak Bayesian evidence. Just like absence of evidence being evidence of absence, it’s not enough to rest an entire conclusion on. You would have to continually collect evidence besides the fact that Bob presented an argument to arrive at a definitive conclusion. The most straightforward way to do that would be to deductively check Bob’s premises and see if they follow logically to his conclusion.


Posted on October 23, 2012 in Bayes, logical fallacies as weak bayesian evidence

## Logical Fallacies As Weak Bayesian Evidence: Argumentum Ad Populum (Appeal to Popularity)

This is another post in my, I guess, series where I explore typical logical fallacies in a Bayesian context. This time I’ll be looking at the appeal to popularity.

Appeal to popularity, of course, is a logical fallacy because in bare bones logic the conclusion must follow from the premises necessarily. As in, 100% of the time. But again, we don’t live in a world of deductive certainty where we can have 100% certainty. We live in a world of induction; the world of uncertainty; the world of probability (I like abusing semicolons).

So, an example of the logical fallacy appeal to popularity: most of the world believes in god, therefore god exists. That is an obvious example of the fallacy, but I’ll pick a less controversial one; at least, one where the conclusion is more than likely true, even though the argument itself is fallacious:

Most of the world believes in evolution, therefore evolution is true.

So to analyze this in a Bayesian fashion we need our three variables. The prior probability, the success rate, and the false positive rate. The prior probability is the probability that the hypothesis is true before looking at any specific evidence.

Next is the success rate. This is the probability that the majority of the world believes in something given that it is true; our evidence E, then, is “the majority of the world believes it”. Our third variable is the false positive rate: the probability that the majority of the world believes in something given that it is false.

Now, we don’t have any hard numbers for the success rate or the false positive rate. But in general, we can make a good guess at how often the majority of people have believed in something given that it is true, and contrast that with how often the majority of people have believed in something given that it is false. Taking the entirety of human history into account, it seems that the false positive rate vastly exceeds the success rate. The majority has been wrong a lot more than it’s been right. This means that dividing the success rate by the false positive rate returns a quotient less than 1. And that means that, generally, an appeal to popularity is usually Bayesian evidence against some hypothesis.

So using evolution as our example, the fact that the majority of people on the planet accept the theory of evolution is Bayesian evidence against evolution; appeals to popularity in and of themselves are Bayesian evidence against some proposition. The good thing is that the entire point of Bayes’ Theorem is to continually update the probability of your hypothesis when you encounter new evidence, and there is plenty of evidence in support of evolution to update on. Unfortunately, the same doesn’t happen with god belief. The fact that the majority of the world believes in god is also Bayesian evidence against the existence of god. You would, again, need some other evidence to update on, but since the existence of god is unfalsifiable, that is highly unlikely.

But there are also different populations that we can appeal to. Instead of appealing to humanity as a whole, we could appeal to more specific groups: doctors, lawyers, family members, or what have you. If the majority of surgeons say that the best instrument for cutting a sternum is so-and-so knife, what is the probability that they would say so given that they are correct, and what is the probability that they would say so given that they are incorrect? In this case, the success rate vastly outperforms the false positive rate. Thus, appeals to popularity are pretty good Bayesian evidence when you appeal to specialists in some field or, in general, to a population that is usually correct. Humanity as a whole is usually incorrect, so appealing to a general consensus is probably not a good idea.
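The contrast can be sketched with invented rates (all four numbers below are assumptions for illustration, not data):

```python
def bayes_factor(success_rate, false_positive_rate):
    """Likelihood ratio: P(majority says so | true) / P(majority says so | false)."""
    return success_rate / false_positive_rate

# Humanity as a whole on a hard empirical question (assumed rates):
# the majority endorses truths less often than it endorses falsehoods.
print(bayes_factor(0.2, 0.6))   # ~0.33, less than 1: evidence against

# Surgeons on a question about surgery (assumed rates):
# specialist consensus tracks the truth far more reliably.
print(bayes_factor(0.95, 0.05))  # ~19: solid evidence for
```

Which population you appeal to determines whether the “popularity” factor pushes the probability up or down.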

This seems counterintuitive. Why would we have a natural tendency to follow the herd if it weren’t beneficial in an evolutionary context? That’s exactly it: an evolutionary context is only good at helping you not end up dead and at having lots of kids, not at uncovering complex information about the true nature of reality. If the majority of people in your tribe 100,000 years ago believed something and you went against the grain, chances are you would be ostracized or kicked out. Getting kicked out = death. Getting ostracized = less access to mates. Something like belief in god might not be true, but it certainly seems to help with having lots of kids.


Posted on September 3, 2012 in Bayes, logical fallacies as weak bayesian evidence

## Logical Fallacies As Weak Bayesian Evidence: Post Hoc Ergo Propter Hoc

So I already have two posts that go over the notion that logical fallacies aren’t necessarily fallacies of probability. The issue with logical fallacies is that in deduction, the conclusion has to follow necessarily from the premises. But we don’t live in a world of deductive certainty; we live in a world of uncertainty: the world of probability.

The thing about post hoc ergo propter hoc is that it is an inductive inference. That being the case, post hoc fallacies should be easily explained using probability theory, and thus Bayes’ Theorem. Thinking about this fallacy intuitively (that is, with a quick Bayesian estimate), it seems to be an instance of the Base Rate fallacy. Of course, given that some cause is the reason for some effect, the cause has to come before the effect (unless you live in the world of quantum physics, which none of us do).

This means that the conditional probability, or success rate, of a post hoc argument is necessarily 1.00: P(B Happened After X | B Caused By X) = 1.00. But the argument itself is trying to prove P(B Caused By X | B Happened After X); the cause is the hypothesis and what happens is the evidence. Sure, given that god answers prayer, there’s a 100% chance you would get a job after praying for it. But that’s a Base Rate fallacy; we are not trying to establish P(Get A Job After Praying To God | God Answers Prayers) but P(God Answers Prayers | Get A Job After Praying To God).

One hundred percent of all effects (in the macro world) are preceded by their causes. Concluding that because the conditional probability is 100% that it means that it is actually the reason is, like I said, a Base Rate fallacy, because we aren’t taking into account the prior probability.

But there’s a second factor that has to be taken into account: the alternative hypothesis. What about an effect that just happens after the “cause” by chance, or due to some other cause? In other words, the false positive rate. This must also be a high number, though it isn’t necessarily 100% like the success rate that post hoc logic relies on. Given this, the Likelihood Ratio, dividing the success rate by the false positive rate, returns a quotient barely above 1. If the success rate is 100% and the false positive rate is 98%, then the Bayes factor is only about 1.02 (roughly 0.09 decibels of evidence). This means that if we had a 50/50 spread between the hypothesis and the alternative, the post hoc ergo propter hoc logic in this example would only increase our probability to about 50.5%.
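Running the 100%/98% numbers from above (the 98% false positive rate is, as in the text, just an illustrative guess):

```python
import math

success_rate = 1.00         # causes always precede their effects
false_positive_rate = 0.98  # assumed: the effect very often follows anyway

bf = success_rate / false_positive_rate
print(round(bf, 3))                   # 1.02
print(round(10 * math.log10(bf), 3))  # 0.088 decibels of evidence

# Starting from 50/50, prior odds are 1:1, so posterior odds equal bf
posterior = bf / (1 + bf)
print(round(posterior, 4))            # 0.5051
```

A near-certain success rate buys almost nothing when the false positive rate is nearly as high; the two rates have to diverge for post hoc reasoning to move the needle.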

If we go back to my original example P(Get A Job After Praying To God | God Answers Prayers), we would have to include the alternative hypothesis. There are various alternatives, but let’s just go with P(Get A Job After Praying | Economy Improves). Of course, there’s not a 100% chance that you would get a job when the economy improves, but an improving economy in and of itself has a much higher prior probability than the existence of god. Therefore, in this case, the prior probability of P(God Answers Prayers) doesn’t get much of a boost due to the small difference between P(Get A Job After Praying To God | God Answers Prayers) and P(Get A Job After Praying | Economy Improves).

So post hoc ergo propter hoc is weak (possibly very weak) probabilistic evidence. It’s not strong enough evidence to rest an entire argument on; you would need much more evidence. Or you would need an argument or situation where there is a huge disparity between the success rate and false positive rate, which most post hoc ergo propter hoc arguments never attempt to ascertain.

The god hypothesis, of course, also suffers due to its lack of falsifiability.
