
One-Box Or Two-Box?


So there's a classic decision theory paradox called Newcomb's Paradox. You're presented with an agent that can supposedly predict your actions (on Less Wrong it's called "Omega"). It descends from its spaceship and presents you with two boxes. One box is transparent and contains $1,000; the other box is closed and may or may not contain $1,000,000. You can take only the closed box or take both boxes. The rub is that Omega has predicted your choice in advance: if it predicted that you'd take both boxes, it didn't put the $1 million in the closed box, but if it predicted you'd take only the closed box, it did.

Which would you choose?

There is more information in this scenario. Omega has played this game with five other people, and all five of them two-boxed and got only $1,000. Since Omega is gone by this point, the decision you haven't made yet has, in a way, already determined the contents of the closed box. And if you decide to one-box, the transparent box explodes and the $1,000 is burnt up.
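To keep the numbers straight later on, here is one way to tabulate the payoffs as described, as a quick Python sketch (treating the burnt-up $1,000 as a $1,000 loss when the closed box turns out to be empty, which is how I'll count it below):

```python
# Payoffs in dollars for each combination of my choice and the closed box's
# contents, as described above. Counting the burnt-up $1,000 as a loss when
# I one-box and the closed box is empty is my own bookkeeping choice.
payoffs = {
    ("two-box", "million"): 1_001_000,
    ("two-box", "empty"):       1_000,
    ("one-box", "million"): 1_000_000,  # the transparent $1,000 burns up
    ("one-box", "empty"):      -1_000,  # the $1,000 burns up and I get nothing
}
```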

This hypothetical unpacks a bunch of assumptions people have about free will, omniscience, and decision theory, which is what creates the paradox. If you're the type of person to one-box, then doesn't that mean that switching to two-boxing at the last moment should net you $1,001,000? Or maybe Omega already knew you would think like that, so you'll really only get $1,000? But if Omega really did know you'd think like that, then simply deciding to one-box automatically puts the $1 million in the closed box? (Seems like I'm arguing with a Sicilian when death is on the line…)

The majority of Less Wrongers choose to one-box, while garden-variety atheists tend to two-box and theists tend to one-box. What's going on here? Granted, like that link says, this is a pretty informal survey, so it might not reflect the wider populations of those subgroups. But it seems to me that the deciding factor is whether you allow for the possibility of an all-knowing being who is able to predict your move.

Theists, obviously, allow for the possibility of a being that knows everything and can predict their actions. Garden-variety atheists reject such a possibility. These two facts probably account for the tendency of theists to one-box and of atheists to two-box. Even though Less Wrongers are much more likely to be atheists, they are atheists who study rationality a lot more than the average garden-variety atheist (a garden-variety atheist probably knows a lot of logical fallacies, but logical fallacies are not the be-all, end-all of rationality). It might be Less Wrong's commitment to rationality and its focus on AI that leads them more towards one-boxing, since it is theoretically possible for a superintelligent AI to predict one's actions. As they say at Less Wrong, just shut up and multiply. Since I am technically a Less Wronger, that's exactly what I'll do 🙂

First of all, since I don't think the concept of free will exists or is even coherent, I don't see any reason why there can't be some supersmart being out there that could predict my actions. That said, it doesn't follow that this Omega character actually is that supersmart being. The only evidence I have is its say-so and the fact that five previous people two-boxed and only got $1,000.

Did those other five people two-box and get $1,000 because Omega accurately predicted their actions? Or is there some other explanation… like Omega not being a supersmart being at all and simply never putting $1 million in the second box? Which is more likely, if I were actually presented with this scenario in real life? It seems like the simplest explanation, the one with the fewest metaphysical coin flips, is that Omega is just a being with a spaceship and doesn't have $1 million to give. If I had some evidence that people had actually one-boxed and gotten the $1 million, then I would put more weight on the idea that it actually has $1 million to spare, and more weight on the possibility that Omega is a good or even perfect predictor.

I guess I’m just a garden-variety atheist 😦

But let me continue to shut up and multiply, this time using actual numbers and Bayes' Theorem. What I want to find out is P(H | E), the probability that Omega is a perfect predictor given the evidence. To update, I need three variables: the prior probability, the success rate, and the false positive rate. Let's say that my prior for Omega being a supersmart being is 50%; I'm perfectly agnostic about its abilities (even though I think this prior should be a lot lower…). To update that prior based on the evidence at hand (five people have two-boxed and gotten $1,000), I need the success rate for the current hypothesis and the success rate of an alternative hypothesis or hypotheses (if it were binary evidence, like the results of a cancer test, I would call it the false positive rate).

The success rate asks: what is the probability that the previous five people two-boxed and got $1,000, given that Omega is a perfect predictor? Assuming Omega's prediction powers are real, P(E | H) is obviously 100%. The success rate of my alternative asks: what is the probability that the previous five people two-boxed and got $1,000, given that Omega never even puts $1 million in the second box? Assuming that hypothesis is true, P(E | ~H) would also be 100%. In this case the Bayes factor is 1, since the success rate for the hypothesis and the success rate for the alternative hypothesis are equal, meaning that my prior probability does not move given the evidence at hand.
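Here is that update spelled out as a minimal Python sketch, just the textbook form of Bayes' Theorem plugged with the numbers above (nothing about it is specific to Omega):

```python
# Posterior for "Omega is a perfect predictor" given that five people
# two-boxed and got only $1,000, via Bayes' Theorem.
prior = 0.5                    # my starting credence in Omega's powers
p_evidence_given_h = 1.0       # perfect predictor: two-boxers always get only $1,000
p_evidence_given_not_h = 1.0   # never loads the box: two-boxers also always get only $1,000

posterior = (p_evidence_given_h * prior) / (
    p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
)
print(posterior)  # 0.5, i.e. the Bayes factor is 1, so the prior doesn't budge
```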

Now that I have my posterior, which is still agnosticism about Omega's powers of prediction, I can run my rudimentary decision theory algorithm. If I two-box, there are two possible outcomes: I either get only $1,000 or I get $1,001,000. Both outcomes have a 50% chance of happening due to my subjective probability, so my expected payout is 50% * $1,000 + 50% * $1,001,000. This sums to a total utility/cash of $501,000.

If I one-box, there are also two possible outcomes: I either get $1,000,000 or I lose $1,000. Both outcomes, again, have a 50% chance of happening due to my subjective probability about Omega's powers of prediction, so my expected payout is 50% * $1,000,000 + 50% * (-$1,000). This sums to $499,500 in total utility.
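Those two sums, spelled out in the same back-of-the-envelope model (a sketch, using my 50/50 credence and counting the empty-box outcome for one-boxing as a $1,000 loss):

```python
p = 0.5  # my posterior credence that Omega's prediction and box-loading work as advertised

ev_two_box = p * 1_000 + (1 - p) * 1_001_000      # expected payout for two-boxing: $501,000
ev_one_box = p * 1_000_000 + (1 - p) * (-1_000)   # expected payout for one-boxing: $499,500
print(ev_two_box, ev_one_box)
```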

So even using some rudimentary decision theory, it's still in my best interest to two-box given the evidence at hand and my subjective estimation of Omega's abilities. But like I said, if there were evidence that Omega ever puts $1 million in the second box, that would increase my subjective probability that I could win $1 million. According to my rudimentary decision theory algorithm, one-boxing and two-boxing have equal expected utility when my probability estimate for Omega's powers of prediction is around 50.1%. Meaning that at any probability I estimate above 50.1%, it makes more sense to one-box. I assume that Less Wrongers and theists put their subjective probability for Omega's powers of prediction close to 100%, and in that case it overwhelmingly makes more sense to one-box. Things would get more complicated, though, if Omega ever did put $1 million in the box and someone walked away with $1,001,000! Omega's success rate would then be less than 100%, but I would know that it does sometimes put $1 million in the closed box.
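That roughly 50.1% break-even point just comes from setting the two expected values above equal and solving for p (again, this is only the toy model from this post, not some deep decision-theoretic result):

```python
# Solve p*1_000 + (1-p)*1_001_000 == p*1_000_000 + (1-p)*(-1_000) for p.
p_indifference = 1_002_000 / 2_001_000
print(p_indifference)  # about 0.5007; above this credence, one-boxing has the higher expected value
```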

But again, if your subjective estimate of Omega's powers of prediction is below that roughly 50.1% threshold, it makes more sense to two-box. And that's probably why Vizzini is dead.


Posted on July 10, 2013 in Bayes, decision theory

 


 