There is a small battle brewing about the application of Bayes Theorem over at R. Joseph Hoffmann’s blog. The main confusion seems to be about the difference between objective and subjective probability. Hoffmann, an anti-mythicist (note: I think there’s a difference between being a Jesus historicist and an anti-mythicist, much like the difference between atheist and anti-theist), thinks that Richard Carrier is attempting to bluff mathematical precision onto the conclusion that Jesus didn’t exist by way of Bayes.
This is a fundamental misunderstanding of two different types of probability. Objective probability is what one thinks of when one thinks of mathematical precision. Subjective probability confers no such precision. There can be 100% objective mathematical probability, but 100% subjective probability is, at the least, unreasonable. I’ll quote a few words of Grand Bayesian Eliezer Yudkowsky:
Yet the map is not the territory: if I say that I am 99% confident that 2 + 2 = 4, it doesn’t mean that I think “2 + 2 = 4” is true to within 99% precision, or that “2 + 2 = 4” is true 99 times out of 100. The proposition in which I repose my confidence is the proposition that “2 + 2 = 4 is always and exactly true”, not the proposition “2 + 2 = 4 is mostly and usually true”.
I hope that impresses the distinction upon you. The “map”, that is, our model of the world, cannot have 100% certainty. However, the “territory” can. There is objectively 100% probability that 2 + 2 = 4; that is, from a Frequentist perspective, every single time we’ve put 2 and 2 together we’ve gotten 4. But from a subjective point of view, asserting 100% probability that 2 + 2 = 4 would amount to infinite certainty, and it would take infinitely strong evidence to talk someone out of it (Yudkowsky has a post on what it would take to convince him that 2 + 2 = 3). This is explained in a later post of Yudkowsky’s:
In the usual way of writing probabilities, probabilities are between 0 and 1. A coin might have a probability of 0.5 of coming up tails, or the weatherman might assign probability 0.9 to rain tomorrow.
This isn’t the only way of writing probabilities, though. For example, you can transform probabilities into odds via the transformation O = (P / (1 – P)). So a probability of 50% would go to odds of 0.5/0.5 or 1, usually written 1:1, while a probability of 0.9 would go to odds of 0.9/0.1 or 9, usually written 9:1. To take odds back to probabilities you use P = (O / (1 + O)), and this is perfectly reversible, so the transformation is an isomorphism—a two-way reversible mapping. Thus, probabilities and odds are isomorphic, and you can use one or the other according to convenience.
Why am I saying all this? To show that “odds ratios” are just as legitimate a way of mapping uncertainties onto real numbers as “probabilities”. Odds ratios are more convenient for some operations, probabilities are more convenient for others. A famous proof called Cox’s Theorem (plus various extensions and refinements thereof) shows that all ways of representing uncertainties that obey some reasonable-sounding constraints, end up isomorphic to each other.
Why does it matter that odds ratios are just as legitimate as probabilities? Probabilities as ordinarily written are between 0 and 1, and both 0 and 1 look like they ought to be readily reachable quantities—it’s easy to see 1 zebra or 0 unicorns. But when you transform probabilities onto odds ratios, 0 goes to 0, but 1 goes to positive infinity. Now absolute truth doesn’t look like it should be so easy to reach.
A representation that makes it even simpler to do Bayesian updates is the log odds—this is how E. T. Jaynes recommended thinking about probabilities. For example, let’s say that the prior probability of a proposition is 0.0001—this corresponds to a log odds of around -40 decibels. Then you see evidence that seems 100 times more likely if the proposition is true than if it is false. This is 20 decibels of evidence. So the posterior odds are around -40 db + 20 db = -20 db, that is, the posterior probability is ~0.01.
When you transform probabilities to log odds, 0 goes onto negative infinity and 1 goes onto positive infinity. Now both infinite certainty and infinite improbability seem a bit more out-of-reach.
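The decibel arithmetic in the quote is easy to check directly. Here is a minimal sketch in Python of the probability/odds/log-odds transformations; the function names are mine, not Yudkowsky’s or Jaynes’s:

```python
import math

def prob_to_odds(p):
    """O = P / (1 - P); probability 0.9 becomes odds of 9 (i.e. 9:1)."""
    return p / (1 - p)

def odds_to_prob(o):
    """The inverse transform: P = O / (1 + O)."""
    return o / (1 + o)

def prob_to_db(p):
    """Log odds in decibels, as Jaynes recommended: 10 * log10(odds)."""
    return 10 * math.log10(prob_to_odds(p))

def db_to_prob(db):
    return odds_to_prob(10 ** (db / 10))

# The worked example from the quote: a prior of 0.0001 is about -40 dB,
# and a likelihood ratio of 100 is worth 20 dB of evidence.
prior_db = prob_to_db(0.0001)       # ≈ -40 dB
evidence_db = 10 * math.log10(100)  # = 20 dB
posterior = db_to_prob(prior_db + evidence_db)
print(round(posterior, 4))          # ≈ 0.0099, i.e. the ~0.01 in the quote
```

Note that `prob_to_db(1.0)` would divide by zero inside `prob_to_odds`, and `prob_to_db(0.0)` would take the log of zero: in the log-odds representation, certainty really does sit at positive and negative infinity.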
This makes sense. If I have a prior probability of 0 for some hypothesis, what sort of evidence could move it beyond 0? Anything multiplied by 0 is still 0.
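A quick sketch makes that concrete. With a prior of exactly 0, no likelihood ratio, however lopsided, can budge the posterior; the function name and numbers below are purely illustrative:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H|E) via Bayes Theorem, with P(E) expanded over H and not-H."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

# Evidence a million times likelier under the hypothesis than under its
# negation, applied to a prior of zero:
print(bayes_update(0.0, 1.0, 0.000001))  # 0.0 -- the prior is immovable

# The same evidence applied to even a tiny nonzero prior moves it substantially:
print(bayes_update(0.0001, 1.0, 0.000001))
```

This is why assigning probability 0 (or, symmetrically, 1) to a hypothesis amounts to declaring in advance that no possible evidence will ever change your mind.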
But the distinction between objective and subjective probability doesn’t seem to be sticking. Just like an unflinching Frequentist, Hoffmann claims that subjective probability has no utility in formulating arguments and that only objective probability is the “true” probability. In his mind, 100% “certainty” is possible, because he is thinking of probability only in Frequentist terms. And why not? 100% is a valid sort of mathematical precision; 2 + 2 has equaled 4 a full 100% of the time.
Richard Carrier’s point in introducing Bayes Theorem to the study of the historical Jesus (and history in general) isn’t mathematical precision or the illusion of it. Carrier’s point is that historians should follow the rules of logic when constructing arguments. The rules of probability follow from the rules of logic, thus historians should also follow the rules of probability when constructing arguments. The easiest way to do that is Bayes Theorem. Both objective and subjective probability have to follow the rules of probability, just like real premises and hypothetical premises have to follow the rules of logic when constructing arguments.
Now I’m no psychologist, but I think something a bit more nefarious is going on. There is a lot of bad blood between Carrier and Hoffmann. What I think is happening is that Hoffmann thinks that any argument that is being used by Carrier or for mythicism must be wrong, because Jesus existed. That, of course, is a logical fallacy. And I’m not even sure it’s the type of logical fallacy that is weak Bayesian evidence.