

One-Box vs. Two-Box: Further Elaboration


In a previous post about Omega and One-Boxing vs. Two-Boxing, I went over a hypothetical situation where a superintelligent being named Omega comes to Earth and is said to be able to predict your every move. He presents you with two boxes: one box is transparent and contains $1,000, and the second box is closed and may or may not contain $1 million. If Omega predicts that you would pick both boxes, he doesn't put the $1 million in the second box; but if Omega predicts that you would go for only the closed box, he puts the $1 million in it. This is obviously a hypothetical scenario; we will (more than likely) never encounter a being that can predict our every move, nor will we be presented with such a choice.

Or is it?

Everyone knows that smoking causes cancer. But what if it weren't the actual carcinogens in cigarettes that caused cancer, but some sort of hidden brain tumor that both causes cancer and makes people smoke cigarettes? If, after learning this, you decide to start smoking, does that mean you have the brain tumor? Or, if after learning this you decide to stop smoking, does that mean you don't have the brain tumor?

Just as with Omega, the type of choice you would have made before learning this information dictates whether you have the brain tumor or not. And that choice is directly related to how tightly coupled the brain tumor is with making people smoke. But unlike with Omega, the correlation isn't at an (alleged) 100% accuracy. In that previous post I wrote:

But let me continue to shut up and multiply, this time using actual numbers and Bayes' Theorem. What I want to find out is P(H | E), the probability that Omega is a perfect predictor given the evidence. To update, I need three variables: the prior probability, the success rate, and the false positive rate. Let's say that my prior for Omega being a supersmart being is 50%; I'm perfectly agnostic about its abilities (even though I think this prior should be a lot less…). To update my prior based on the evidence at hand (five people have two-boxed and gotten $1,000), I need the success rate for the current hypothesis and the success rate of an alternative hypothesis or hypotheses (if it were binary evidence, like the results of a cancer test, I would call it the false positive rate).

The success rate asks: what is the probability of the previous five people two-boxing and getting $1,000, given that Omega is a perfect predictor? Assuming Omega's prediction powers are real, P(E | H) is obviously 100%. The success rate of my alternative asks: what is the probability of the previous five people two-boxing and getting $1,000, given that Omega never even puts $1 million in the second box? Again, assuming it's true that Omega never puts $1 million in the second box, P(E | ~H) would also be 100%. In this case, the Bayes factor is 1, since the success rate for the hypothesis and the success rate for the alternative hypothesis are equal, meaning that my prior probability does not move given the evidence at hand.
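To make the quoted calculation concrete, here's a minimal sketch in Python. Nothing here is new data; the 50% prior and the two 100% likelihoods are exactly the numbers from the passage above:

```python
# Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
prior = 0.5            # P(H): agnostic prior that Omega is a perfect predictor
p_e_given_h = 1.0      # P(E|H): five two-boxers got $1,000, if Omega predicts perfectly
p_e_given_not_h = 1.0  # P(E|~H): the same evidence, if Omega never fills the second box

bayes_factor = p_e_given_h / p_e_given_not_h
posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(bayes_factor, posterior)  # 1.0 0.5 -- the evidence doesn't move the prior
```

With a Bayes factor of 1, the posterior comes out identical to the prior, which is exactly the "my prior probability does not move" conclusion in the quote.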

In the cigarettes/tumor scenario, I still want to find P(H | E), the probability that I have the brain tumor given that I smoke. I need the same three variables (the base rate, the success rate, and the false positive rate) to figure this out. P(H) is the probability of having the brain tumor, period (as in, the rate of the tumor in the entire population or some other general reference class), which is pretty low. P(E | H) is the probability of smoking given that I have the brain tumor, and P(E | ~H) is the false positive rate: some other hypothesis's explanation for why I smoke.
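Since I don't have real figures for any of these variables, here is the same calculation sketched with made-up numbers; the base rate, success rate, and false positive rate below are purely illustrative assumptions, not medical data:

```python
# P(H|E) for the tumor/smoking scenario via Bayes' theorem.
# All three numbers are illustrative assumptions, not real figures.
prior = 0.001                  # P(H): base rate of the tumor in the population
p_smoke_given_tumor = 0.90     # P(E|H): the "success rate"
p_smoke_given_no_tumor = 0.20  # P(E|~H): the "false positive rate"

posterior = (p_smoke_given_tumor * prior) / (
    p_smoke_given_tumor * prior + p_smoke_given_no_tumor * (1 - prior)
)
print(round(posterior, 4))  # 0.0045 -- higher than the base rate, but still small
```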

This is a legitimate kind of medical question, and scenarios like it are entirely plausible, just not with brain tumors, smoking, and cancer. But I'm completely clueless about these sorts of confounded diagnoses in medicine, so I have no real examples readily available. The big takeaway, which the Omega situation tries to shine a light on, is that your prior behavior determines what's in the box / whether you have the brain tumor. If you were the type of person who is more prone to smoking than the general population, stopping smoking after finding out that the tumor causes the desire to smoke will not magically make the tumor go away. Similarly, if you were the type of person who would two-box, deciding to one-box will not make the $1 million appear in the box.

If, just as with the Omega problem, P(E | H), the probability of smoking given that you have the brain tumor, were near or at 100%, it still wouldn't mean that you have the brain tumor; it all depends on the Bayes factor (the success rate compared with the alternative hypotheses' rates) and the prior probability of having the brain tumor in the first place. But again, stopping yourself from smoking would not make the tumor disappear.
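Rerunning the earlier sketch with the success rate pushed all the way to 100% (the other made-up numbers unchanged) shows the point:

```python
# Same illustrative numbers as before, but with a perfect success rate.
prior = 0.001            # P(H): base rate of the tumor (hypothetical)
p_e_given_h = 1.0        # P(E|H): everyone with the tumor smokes
p_e_given_not_h = 0.20   # P(E|~H): smoking rate without the tumor (hypothetical)

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(round(posterior, 4))  # 0.005 -- a perfect success rate still can't
                            # overcome a low prior and a high false positive rate
```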

 

Posted on October 9, 2013 in Bayes, rationality

 
 