Category Archives: decision theory

Reverse Game Theory


So on Less Wrong I proposed what I thought was an example of the Prisoner’s Dilemma:

I am trying to formalize what I think should be solvable by some game theory, but I don’t know enough about decision theory to come up with a solution.

Let’s say there are twins who live together. For some reason they can only eat when they both are hungry. This would work as long as they are both actually hungry at the same time, but let’s say that one twin wants to gain weight since that twin wants to be a body builder, or one twin wants to lose weight since that twin wants to look better in a tuxedo.

At this point it seems like they have conflicting goals, so this seems like an iterated prisoner’s dilemma. And it seems like if this is an iterated prisoner’s dilemma, then the best strategy over the long run would be to cooperate. Is that correct, or am I wrong about something in this hypothetical?

It was pointed out that this isn’t a PD since the twins have competing goals. So instead of going with a confusing hypothetical, I decided to use an example from my own life:

My brother likes to turn the air conditioner as cold as possible during the summer so that he can bundle up in a lot of blankets when he goes to sleep. I on the other hand prefer to sleep with the a/c at room temperature so that I don’t have to bundle up with blankets. Sleeping without bundling up makes my brother uncomfortable, and having to sleep under a lot of blankets so I don’t freeze makes me uncomfortable. We both have to use the a/c, but we have contradictory goals even though we’re using the same resource at the same time. And the situation is repeated every night during the summer (thankfully I don’t live with my brother, but my current new roommate seems to have the same tendency with the a/c).

User badger was able to use this real life example to explain that this indeed isn’t a PD since this isn’t a pre-made game, so I would have to design a game that could be solved. This is mechanism design, or reverse game theory:

That example helps clarify. In the A/C situation, you and your brother aren’t really starting with a game. There isn’t a natural set of strategies you are each independently choosing from; instead you are selecting one temperature together. You could construct a game to help you two along in that joint decision, though. To solve the overall problem, there are two questions to be answered:

  1. Given a set of outcomes and everyone’s preferences over the outcomes, which outcome should be chosen? This is studied in social choice theory, cake-cutting/fair division, and bargaining solutions.
  2. Given an answer to the first question, how do you construct a game that implements the outcome that should be chosen? This is studied in mechanism design.

One possible solution: If everything is symmetric, the result should split the resource equally, either by setting the temperature halfway between your ideal and his ideal or alternating nights where you choose your ideals. With this as a starting point, flip a coin. The winner can either accept the equal split or make a new proposal of a temperature and a payment to the other person. The second person can accept the new proposal or make a new one. Alternate proposals until one is accepted. This is roughly the Rubinstein bargaining game implementing the Nash bargaining solution with transfers.

Another possible solution: Both submit bids between 0 and 1. Suppose the high bid is p. The person with the high bid proposes a temperature. The second person can either accept that outcome or make a new proposal. If the first player doesn’t accept the new proposal, the final outcome is the second player’s proposal with probability p and the status quo (say alternating nights) with probability 1-p. This is Moulin's implementation of the Kalai-Smorodinsky bargaining solution.
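As a rough illustration of how the second mechanism resolves a rejection, here's a sketch; the temperatures, the 0.8 bid, and the function name are my own made-up illustration, not part of badger's comment:

```python
# Toy numerical sketch of the bidding mechanism (roughly Moulin's
# implementation of the Kalai-Smorodinsky solution). Temperatures in
# degrees F are hypothetical.

COLD, WARM = 65, 75              # brother's ideal vs mine
STATUS_QUO = (COLD + WARM) / 2   # fallback: alternating nights averages out

def expected_temp_if_rejected(p, counter_proposal):
    """If the high bidder rejects the counter-proposal, the outcome is
    the counter-proposal with probability p (the high bid) and the
    status quo with probability 1 - p."""
    return p * counter_proposal + (1 - p) * STATUS_QUO

# If my brother bids 0.8 and later rejects my counter-proposal of 75F,
# the resulting lottery over outcomes has this expected temperature:
print(expected_temp_if_rejected(0.8, WARM))  # ~74.0
```

The higher the winning bid, the more weight the loser's counter-proposal carries after a rejection, which is what gives both players an incentive to bid honestly.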

According to my own moral intuitions (which admittedly aren’t universal moral intuitions), alternating weeks of control over the A/C seems the most fair. But someone else might prefer to place bids. However, introducing money also introduces a power differential: whoever has or earns more money gets their preferences satisfied asymmetrically.

Of course, there could also be a power differential due to knowledge, such as a game designed to exploit the other player’s misunderstanding of probability in favor of the game designer. But that’s why it’s always in your favor to learn rationality!


Posted by on January 30, 2014 in decision theory


Pascal’s Wager And Decision Theory


Pascal’s Wager is a pretty infamous case of religious logic. The Wager goes, given the choice between believing in god or not believing in god, there are four possible outcomes. If you believe in god, then if god exists then you go to heaven. If you believe in god and god doesn’t exist, then nothing happens. On the other hand, if you don’t believe in god, and god exists, then you go to hell. If you don’t believe in god and god doesn’t exist, then nothing happens.

Pascal argued that eternity in heaven is a much better option than an eternity in hell, so it would be rational to believe in god, just in case. What’s the harm, even if god doesn’t exist?

From a decision theory perspective, Pascal is succumbing to unbounded utility; that is, in this case, a utility function that allows for infinities. Since 0 and 1 are not probabilities, you have to assign some nonzero probability to god existing, and any nonzero probability multiplied by infinity is itself infinity. Meaning, even if you assign a 0.000000000000000000000000000000000000000001% chance of god existing, adding an infinite utility to going to heaven creates a situation where you gain infinite expected utility by believing in god and infinite negative expected utility by not believing in god. This might seem like a win for a theist who subscribes to Pascal’s Wager and/or infinite utility, but it also introduces some other wacky “rational” behavior.

Let’s say you are walking down a dark alley. A dark figure approaches you from the shadows, demanding money. There’s no weapon visible on the person, and he doesn’t actually seem very threatening. He asks you about Pascal’s Wager, and hypothetically you think it’s a rational decision to believe in god, given the infinite utility it rewards you with. The shady person then claims that he’s not actually from this world, but the world above; he claims that your reality is really a simulation and he’s one of the programmers. He then says that if you don’t give him 10 dollars, he will use his programming powers to generate 3^^^^3* people and torture them for eternity, in your simulated reality.

Is $10 really worth the lives of 3^^^^3 people being tortured for eternity? If you accept the unbounded utility function of Pascal’s Wager, then you must also accept a similarly unbounded utility function here; given even the slightest chance that this mugger is telling the truth, $10 certainly isn’t worth 3^^^^3 lives. Meaning, 99.99999999999999999999999% * 10 is never going to overcome 0.0000000000000000000000001% * 3^^^^3. This hypothetical decision theory scenario is called Pascal’s Mugging, a sort of reductio ad absurdum critique of Pascal’s Wager.
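The asymmetry can be made concrete. In the sketch below, the probability and the stand-in for 3^^^^3 are my own placeholders (and Python’s `fractions` avoids float overflow with numbers this large):

```python
from fractions import Fraction

# Why an unbounded utility function gets mugged: a vanishingly small
# probability times an astronomically large (dis)utility swamps any
# bounded payoff. The specific numbers are placeholders.

p_truthful = Fraction(1, 10**25)   # tiny chance the mugger is honest
lives_at_stake = 10**1000          # vastly smaller than 3^^^^3, still enough
utility_of_keeping_ten_dollars = 10

expected_lives_lost = p_truthful * lives_at_stake   # 10^975 -- still huge
print(expected_lives_lost > utility_of_keeping_ten_dollars)  # True
```

No matter how small you make `p_truthful`, the mugger can always name a number big enough to dominate the calculation, which is the whole problem with leaving utility unbounded.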

So the solution, then, is to not have unbounded utility functions. There should be some sort of cutoff of utility that won’t lead to absurd, yet “rational”, choices in your decision theory algorithm. Of course, I’m not going to begin to attempt to figure out what sort of decision theory framework satisfies that consistently… that’s for economists and AI researchers to debate and decide 😉

* 3^3 = 3*3*3 = 27
3^^3 = 3^(3^3) = 3^27 = 3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3*3 = 7625597484987
3^^^3 = 3^^(3^^3) = 3^^7625597484987 = 3^(3^(3^(… 7625597484987 times …)))
In other words, a really frickin’ huge number (much larger than 10^80, the estimated number of atoms in the observable universe)
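The up-arrow notation can be computed directly, though only for the very smallest inputs (a quick sketch; the function name is mine):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow notation: a with n arrows applied to b.
    One arrow is exponentiation; each extra arrow iterates the
    previous operator, so the values explode almost immediately."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 27
print(up_arrow(3, 2, 3))  # 7625597484987
```

Trying `up_arrow(3, 3, 3)` would already require a power tower of 7625597484987 threes, which no computer can evaluate; that is the scale of number the mugger is invoking.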


Posted by on September 16, 2013 in decision theory, rationality


Selfish Traits Not Favored By Evolution


According to a report over at the BBC:

Crucially, in an evolutionary environment, knowing your opponent’s decision would not be advantageous for long because your opponent would evolve the same recognition mechanism to also know you, Dr Adami explained.

This is exactly what his team found, that any advantage from defecting was short-lived. They used a powerful computer model to run hundreds of thousands of games, simulating a simple exchange of actions that took previous communication into account.

“What we modelled in the computer were very general things, namely decisions between two different behaviours. We call them co-operation and defection. But in the animal world there are all kinds of behaviours that are binary, for example to flee or to fight,” Dr Adami told BBC News.

“It’s almost like what we had in the cold war, an arms race – but these arms races occur all the time in evolutionary biology.”


“Darwin himself was puzzled about the co-operation you observe in nature. He was particularly struck by social insects,” he explained.

So we really shouldn’t be wondering where morality comes from. Reading comments on various news websites, you invariably get someone — usually religious — who rhetorically asks why we don’t murder or steal if god doesn’t exist; or what basis we have for morality if no god exists and our morals are entirely secular. This BBC article gives some further evidence that being selfish isn’t a logical or winning strategy. Also, Less Wrong ran a Prisoner’s Dilemma game between programs that people wrote and submitted. The winning strategy there, that is, the strategy hard-coded by the programmers, was to lean towards cooperation. Note that this is a game between computer programs; the person just presses play and waits for the outcome.

It seems like it’s a rule of the universe that defection is good for a single instance of the PD, but cooperation is more rational over the long run. So if you ever find yourself in an iterated PD with someone, let them know that the winning strategy is cooperation… no god required.
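The long-run logic can be sketched with a toy iterated PD. The payoff numbers below (T=5, R=3, P=1, S=0) are the standard ones, and the two strategies are my own picks for illustration, not the actual contest entries:

```python
# Minimal iterated Prisoner's Dilemma: each strategy sees the
# opponent's move history and returns "C" (cooperate) or "D" (defect).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300)
print(play(tit_for_tat, always_defect))  # (99, 104)
```

Mutual cooperation racks up 300 points each over 100 rounds, while the defector’s one-round head start buys it almost nothing; run a round-robin over many strategies and the cooperative ones come out ahead, which is the pattern both the BBC study and the Less Wrong tournament found.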


Posted by on August 8, 2013 in decision theory


One-Box Or Two-Box?


So there’s a classic decision theory paradox called Newcomb’s Paradox. This is where you’re presented with an agent that supposedly can predict your actions (on Less Wrong he’s called “Omega”). It descends from its spaceship and presents you with two boxes. One box is transparent and contains $1,000, and the other box is closed and may or may not contain $1,000,000. You have a choice of taking only the closed box or taking both boxes. The rub is that Omega has predicted your choice in advance: if it predicted that you’ll take both boxes, it didn’t put $1 million in the second box, but if it predicted that you’ll take only the closed box, it put the $1 million in.

Which would you choose?

There is more information in this scenario. Omega has done this game with five other people and all five of those people two-boxed and only got $1,000. Since Omega is gone by this point, your decision that you haven’t made yet, in a way, already affected the contents of the unknown box. And if you decide to one-box, then the transparent box explodes and the $1,000 is burnt up.

This hypothetical unpacks a bunch of assumptions that people have about free will, omniscience, and decision theory, which leads to the paradox. If you’re the type of person to one-box, then this means that deciding to two-box should net you $1,001,000 right? Or maybe Omega already knew you would think like that so you’ll really only get $1,000? But if Omega really did know you’d think like that, then simply deciding to one-box automatically puts $1 million in the second box? (Seems like I’m arguing with a Sicilian when death is on the line…)

The majority of Less Wrongers choose to one-box, while garden-variety atheists choose to two-box, and theists choose to one-box. What’s going on here? Granted, like that link says, this is a pretty informal survey so it might not reflect the wider populations of those subgroups. But it seems to me that the deciding factor is whether you allow for the possibility of an all-knowing being who is able to predict your move.

Theists, obviously, allow for the possibility of a being that knows everything and can predict their actions. Garden-variety atheists reject such a possibility. These two facts probably account for the tendency for theists to one-box and for atheists to two-box. Even though Less Wrongers are much more likely to be atheists, they are atheists who study rationality a lot more than the average garden-variety atheist (garden-variety atheists probably know a lot of logical fallacies, but logical fallacies are not the be-all, end-all of rationality). It might be Less Wrong’s commitment to rationality and its focus on AI that leads more towards one-boxing, as it is theoretically possible for a superintelligent AI to predict one’s actions. As they say at Less Wrong, just shut up and multiply. Since I am technically a Less Wronger, that’s exactly what I’ll do 🙂

First of all, since I don’t think the concept of free will exists or is even coherent, I don’t see any reason why there can’t be some supersmart being out there that could predict my actions. Given that, it doesn’t mean that this Omega person actually is that supersmart being. The only evidence I have is its say so and the fact that five previous people two-boxed and only got $1,000.

Is the reason that these other five people two-boxed and got $1,000 due to Omega accurately predicting their actions? Or is there some other explanation… like Omega not being a supersmart being and he never puts $1 million in the second box? Which is more likely, if I were actually presented with this scenario in real life? It seems like the simplest explanation, the one with the least metaphysical coin flips, is that Omega is just a being with a spaceship and doesn’t have $1 million to give. If I had some evidence that people actually have one-boxed and gotten the $1 million then I would put more weight on the idea that he actually has $1 million to spare, and more weight on the possibility that Omega is a good/perfect predictor.

I guess I’m just a garden-variety atheist 😦

But let me continue to shut up and multiply, this time using actual numbers and Bayes’ theorem. What I want to find out is P(H | E), the probability that Omega is a perfect predictor given the evidence. To update, I need three variables: the prior probability, the success rate, and the false positive rate. Let’s say that my prior for Omega being a supersmart being is 50%; I’m perfectly agnostic about its abilities (even though I think this prior should be a lot less…). To update my prior based on the evidence at hand (five people have two-boxed and gotten $1,000), I need the success rate for the current hypothesis and the success rate of an alternative hypothesis or hypotheses (if it were binary evidence, like the results of a cancer test, I would call the latter the false positive rate).

The success rate asks: what is the probability of the previous five people two-boxing and getting $1,000, given that Omega is a perfect predictor? Assuming Omega’s prediction powers are real, P(E | H) is obviously 100%. The success rate of my alternative asks: what is the probability of the previous five people two-boxing and getting $1,000, given that Omega never even puts $1 million in the second box? Again, assuming that Omega never puts $1 million in the second box, P(E | ~H) would also be 100%. In this case, the Bayes factor is 1, since the success rate for the hypothesis and the success rate for the alternative hypothesis are equal, meaning that my prior probability does not move given the evidence at hand.
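That update is easy to check numerically; a quick sketch (the function name is my own):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes' theorem, for a hypothesis H vs its alternative."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Five two-boxers who got $1,000: both hypotheses predict this
# perfectly, so the Bayes factor is 1 and the 50% prior doesn't budge.
print(posterior(0.5, 1.0, 1.0))  # 0.5

# If the same evidence were only half as likely under the alternative,
# the identical prior would move up:
print(posterior(0.5, 1.0, 0.5))  # ~0.667
```

Only evidence that discriminates between the two hypotheses can move the prior, which is exactly why the five previous two-boxers tell us nothing about Omega.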

Now that I have my posterior, which is still agnosticism about Omega’s powers of prediction, I can work out my expected utilities. If I two-box, there are two possible outcomes: I either get only $1,000 or I get $1,001,000. Both outcomes have a 50% chance of happening due to my subjective prior, so my expected utility is 50% * $1,000 + 50% * $1,001,000. This sums to a total utility/cash of $501,000.

If I one-box, there are also two possible outcomes: I either get $1,000,000 or I lose the $1,000 (remember, the transparent box explodes). Both outcomes, again, have a 50% chance of happening due to my subjective probability about Omega’s powers of prediction, so my expected utility is 50% * $1,000,000 − 50% * $1,000. This sums to $499,500 in total utility.

So even using some rudimentary decision theory, it’s still in my best interests to two-box given the evidence at hand and my subjective estimation of Omega’s abilities. But like I said, if there were evidence that Omega ever puts $1 million in the second box, this would increase my subjective probability that I could win $1 million. According to my rudimentary decision theory algorithm, one-boxing and two-boxing have equal expected utility when my probability for Omega’s powers of prediction is around 50.1%. Meaning, at any probability I estimate above 50.1% for Omega’s powers of prediction, it makes more sense to one-box. I assume that Less Wrongers and theists have a subjective probability about Omega’s powers of prediction close to 100%, and in that case it overwhelmingly makes more sense to one-box. Things would get more complicated, though, if Omega both puts $1 million in a box and someone got $1,001,000! Omega’s success rate would then be less than 100%, but I would know that he does put $1 million in the closed box.
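For what it’s worth, the whole calculation can be worked through in a few lines, using the post’s 50/50 payoff model (a sketch; the function names are mine):

```python
def ev_two_box(p):
    """Expected cash from two-boxing, where p is my credence that
    Omega is a perfect predictor (and so left the closed box empty)."""
    return p * 1_000 + (1 - p) * 1_001_000

def ev_one_box(p):
    """Expected cash from one-boxing; the $1,000 burns up if Omega
    is no predictor and the closed box is empty."""
    return p * 1_000_000 - (1 - p) * 1_000

print(ev_two_box(0.5), ev_one_box(0.5))  # 501000.0 499500.0

# Crossover where the two choices tie: set the EVs equal and solve.
p_star = 1_002_000 / 2_001_000
print(round(p_star, 3))  # 0.501 -- above this, one-boxing wins
```

Below roughly 50.1% credence in Omega’s predictive powers the two-box line is higher, above it the one-box line is, matching the numbers in the text.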

But again, if your subjective estimate of Omega’s powers of prediction are less than 50% it makes more sense to two-box. And that’s probably why Vizzini is dead.


Posted by on July 10, 2013 in Bayes, decision theory


Bayes Theorem And Decision Theory In The Media

This is a clip from the show Family Guy. Here’s my transcription:

Salesman: Hold on! You have a choice… you can have the boat, or the mystery box!

Lois: What, are you crazy? We’ll take the boat

Peter: Whoa, not so fast Lois. A boat’s a boat, but a mystery box could be anything! It could even be a boat! You know how much we’ve wanted one of those!

Lois: Then let’s just–

Peter: We’ll take the box!

Everyone understands why what Peter Griffin did here was dumb (or maybe you don’t, and only laugh because other people laugh?). Peter’s failure was a failure of probability, deciding on the mystery box when he wanted a boat instead of just going for the boat.

We can put Peter’s dilemma into the format of a Bayes factor and see that the evidence was in favor of him choosing the boat option (if he wanted a boat) instead of the mystery box. This would be the probability of getting a boat given that he picked the boat option, divided by the probability of getting the boat given that he picked the mystery box. Or, let B represent getting a boat, O represent picking the original option, and ~O represent picking the mystery box.

P(B | O), the probability of getting the boat given that he picked the original option, is 100%. P(B | ~O), the probability of getting the boat given that he picked the mystery box, is some other number. But this number has to include both the probability of getting the boat plus the probability of getting something else. And, since it’s a mystery box, it could have any number of other things to win. Remember that P(B | ~O) + P(~B | ~O) = 100%, so P(B | ~O) = 100% – P(~B | ~O), or 100% minus the probability of not getting the boat given that he picked the mystery box. The big problem is that it’s a mystery box, so it can account for any data.
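To make the comparison concrete, here’s a sketch; the mystery box’s actual boat probability is unknowable, so the 1-in-100 figure below is purely my placeholder:

```python
# Bayes factor for "boat option" vs "mystery box" as ways of ending
# up with a boat. P(B|O) is 1 by stipulation; any P(B|~O) below 1
# makes the ratio favor the straight boat option.

p_boat_given_boat_option = 1.0
p_boat_given_mystery_box = 0.01   # hypothetical: 1 prize in 100 is a boat

bayes_factor = p_boat_given_boat_option / p_boat_given_mystery_box
print(bayes_factor)  # 100.0
```

Whatever value you plug in for the mystery box, the factor stays above 1, which is the formal version of “a boat’s a boat”.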

But this isn’t the whole story behind Peter’s fail of logic.

Remember my post on decision theory? That applies here as well. A utility function in decision theory is basically where you multiply the probability of the event happening with the amount of utility (an arbitrary number, equivalent to “happiness points”; or “cash” if you’re an economist) you would get from it coming to pass. Combining Peter’s low-probability fail with the mystery box and decision theory, we get one of the classic examples of cognitive biases: the Framing Effect. It goes like this:

Participants were told that 600,000 people were at risk from a deadly disease. They were then presented with the same decision framed differently. In one condition, they chose between a medicine (A) that would definitely save 200,000 lives versus another (B) that had a 33.3 per cent chance of saving 600,000 people and a 66.6 per cent chance of saving no one. In another condition, the participants chose between a medicine (A) that meant 400,000 people will die versus another (B) that had a 33.3 per cent chance that no one will die and 66.6 per cent that 600,000 will die.

For some reason, people’s choices flip depending on the way the question is framed, even though the two conditions are equivalent: they prefer the sure thing when it’s framed as lives saved, and the 33.3% gamble when it’s framed as deaths. The brain might be just comparing the sizes and ignoring the probabilities.

Analogously, Peter’s boat option is equivalent to the 100% chance of saving 200,000 people, and his mystery box option is equivalent to the 33.3% chance of saving 600,000 people (though, like I said, a mystery box has some unknown — but necessarily lower — probability of being a boat). If you simply replace “people” with “utility” in the Framing Effect example, you realize that the two options are roughly equivalent (33.3% * 600,000 ≈ 200,000) on the positive utility side. We also have to account for the negative utility of having a 66.6% chance of saving no one. That would be 66.6% * 600,000 compared to 100% * 400,000, which are also roughly equal from a utility perspective.
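A quick check of that arithmetic:

```python
# Both framings of the disease problem describe (nearly) the same
# gamble in expectation.

lives_sure = 1.00 * 200_000      # medicine A: 200,000 saved for sure
lives_gamble = 0.333 * 600_000   # medicine B: 33.3% chance of 600,000
print(lives_sure, round(lives_gamble))   # 200000.0 199800

deaths_sure = 1.00 * 400_000     # medicine A reframed: 400,000 die
deaths_gamble = 0.666 * 600_000  # medicine B: 66.6% chance 600,000 die
print(deaths_sure, round(deaths_gamble)) # 400000.0 399600
```

The expected values match to within rounding, so any systematic preference between the two framings is the bias, not the math.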

And this is why Peter shouldn’t have picked the mystery box. We don’t actually know the probability of getting the boat given that he chose the mystery box but, like I said, it’s necessarily lower than 100%. Similarly, if Peter chose the boat outright, that’s a 0% chance of him getting anything else. We also don’t know the amount of utility for him getting anything else, but we do know that his utility for getting the boat seems to be pretty high. This is the crucial difference between Peter’s situation and the Framing Effect example: Peter’s probability of getting the boat isn’t the same under either option, so neither is his expected utility.

Of course, the thing about logical fallacies is that, due to their non-sequitur nature, they are oftentimes used as jokes. That’s why Peter’s choice is also hilarious.

What’s black and rhymes with Snoop? Dr Dre.

Funny, but also a fallacy of equivocation.

It’s probably just a coincidence, but the creator of Family Guy is an atheist. Peter basically chose the “god” option for his explanation instead of the more precise boat option in the above scenario. God represents the mystery box, not only because theists think that god being mysterious is a good thing, but because god, just like the mystery box, can account for any possible data imaginable (even a boat).


Posted by on June 26, 2013 in Bayes, decision theory, Funny


Game Theory

(Russell Crowe as John Nash in A Beautiful Mind describing some Game Theory)

Again, a post not directly related to religion. This is a post about bare bones rationality. But first, a quote from Robin Hanson:

“Students are often quite capable of applying economic analysis to emotionally neutral products such as apples or video games, but then fail to apply the same reasoning to emotionally charged goods to which similar analyses would seem to apply. I make a special effort to introduce concepts with the neutral examples, but then to challenge students to wonder why emotionally charged goods should be treated differently.”

If you don’t know about Game Theory (GT) you should, since it describes situations that you’ve probably been in, in some form or another, in life. My first exposure to GT was the Prisoner’s Dilemma (PD).

Let’s say two people who have robbed a bank are under arrest and sitting in jail. The cops don’t actually have enough evidence to get a high probability of conviction, so they try to get them to admit to the robbery. They separate the two criminals and offer each a plea deal if they admit that the other one was involved.

If neither of them accuses the other, then they go to trial with a 50% chance of getting convicted. If they both accuse each other, they both have a 99% chance of getting convicted, though with reduced sentences thanks to the plea deal. If only one accuses the other, the accused has a 75% chance of getting convicted of the full charge while the accuser is granted immunity.

Which would you choose, if you were one of the criminals? From Wikipedia:

Because betrayal always rewards more than cooperation, all purely rational self-interested prisoners would betray the other, and so the only possible outcome for two purely rational prisoners is for them both to betray each other. The interesting part of this result is that pursuing individual reward logically leads the prisoners to both betray, but they would get a better reward if they both cooperated. In reality, humans display a systematic bias towards cooperative behavior in this and similar games, much more so than predicted by simple models of “rational” self-interested action

There is also an extended “iterative” version of the game, where the classic game is played over and over between the same prisoners, and consequently, both prisoners continuously have an opportunity to penalize the other for previous decisions. If the number of times the game will be played is known to the players, then (by backward induction) two purely rational prisoners will betray each other repeatedly, for the same reasons as the classic version.


Doping in sport has been cited as an example of a prisoner’s dilemma. If two competing athletes have the option to use an illegal and dangerous drug to boost their performance, then they must also consider the likely behaviour of their competitor. If neither athlete takes the drug, then neither gains an advantage. If only one does, then that athlete gains a significant advantage over their competitor (reduced only by the legal or medical dangers of having taken the drug). If both athletes take the drug, however, the benefits cancel out and only the drawbacks remain, putting them both in a worse position than if neither had used doping.

Which would you choose?

As is evidenced by the reference material in Wikipedia, the rational decision in a one-shot game is to betray your accomplice/gangmate, since that has the highest personal payoff. Of course, most people are not rational and have a bias towards cooperation. Which is even worse for them, because people who actually are rational and study rationality would win almost every time; at least in single-play PD games.
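The dominance argument is mechanical enough to check in code. Here’s a sketch using the canonical payoff matrix in years of prison; these standard numbers are my own choice for illustration, not the conviction probabilities above:

```python
# Years of prison *you* serve for (your move, their move); lower is
# better. Classic payoffs: mutual silence beats mutual betrayal, but
# betraying a silent partner is best of all.
years = {
    ("silent", "silent"): 1,
    ("silent", "accuse"): 3,   # you stayed silent and got accused
    ("accuse", "silent"): 0,   # you accused and walked free
    ("accuse", "accuse"): 2,   # mutual betrayal
}

for their_move in ("silent", "accuse"):
    best = min(("silent", "accuse"),
               key=lambda mine: years[(mine, their_move)])
    print(their_move, "->", best)  # accusing wins against either move
```

Whatever the other prisoner does, accusing yields fewer years, which is what makes defection the dominant strategy even though mutual silence would leave both better off.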

Here is another good example of the PD when it comes to other types of collaboration:

Say X is a writer and Y is an illustrator, and they have very different preferences for how a certain scene should come across, so they’ve worked out a compromise. Now, both of them could cooperate and get a scene that both are OK with, or X could secretly change the dialogue in hopes of getting his idea to come across, or Y could draw the scene differently in order to get her idea of the scene across. But if they both “defect” from the compromise, then the scene gets confusing to readers. If both X and Y prefer their own idea to the compromise, prefer the compromise to the muddle, and prefer the muddle to their partner’s idea, then this is a genuine Prisoner’s Dilemma.

Here is yet another example of the PD, actually an iterated version, written by a member of Less Wrong (that blog devoted to refining the art of human rationality):

Ganaj: Hey man! I will tell you your fortune for fifty rupees!

Philosophical Me: Ganaj, are you authorized to speak for all Indian street fortune-tellers?

Ganaj: By a surprising coincidence, I am!

Philosophical Me: Good. By another coincidence, I am authorized to speak for all rich tourists. I propose an accord. From now on, no fortune-tellers will bother any rich tourists. That way, we can travel through India in peace, and you’ll never have to hear us tell you to “check your [poor] privilege” again.

Ganaj: Unfortunately, I can’t agree to that. See, we fortune-tellers’ entire business depends on tourists. There are cultural norms that only crappy fraudulent fortune-tellers advertise in the newspapers or with signs, so we can’t do that. And there are also cultural norms that tourists almost never approach fortune-tellers. So if we were to never accost tourists asking for business, no fortunes would ever get told and we would go out of business and starve to death.

Philosophical Me: At the risk of sounding kind of callous, your desire for business doesn’t create an obligation on my part or justify you harassing me.

Ganaj: Well, think about it this way. Many tourists really do want their fortunes told. And a certain number of fortune-tellers are going to defect from any agreement we make. If all the cooperative fortune-tellers agree not to accost tourists the defecting fortune-tellers will have the tourists all to themselves. So if we do things your way, either you’ll never be able to get your fortune told at all, or you’ll only be able to get your fortune told by a defecting fortune-teller who is more likely to be a fraud or a con man. You end up fortuneless or conned, we end up starving to death, and the only people who are rich and happy are the jerks who broke our hypothetical agreement.

This is a situation that you’ve most definitely been involved in. Not because it’s about being confronted by Indian beggars, but because it’s an allegory for dating. So again, which one would you choose: cooperate or defect? Or which one have you chosen?

To see how this allegory is one about dating, I’ll have to make it more explicit. Say in some alternate universe, women who wanted to be approached by men on the street/at a coffee shop/dance event/etc. wore some sort of special wristband. The agreement was that men should only approach women with the wristband, and women would expect to be approached if they were wearing the wristband and would be friendly and welcoming when approached.

Because society is what it is with humans running around with their bunny brains, some men decide to approach even women without the wristband, and sometimes they get positive responses/dates and what have you; they’ve defected from the agreement and got more dating opportunities than by just approaching women wearing the wristband. Again, society being what it is, some women wore the wristband because they liked getting attention from men; even if they weren’t open to dating because they were married or in a relationship. Over time, men notice these other men getting more dates and eventually all men defect. Over time, women notice that other women are getting more attention just by wearing the wristband and eventually the wristband loses its initial function (some women even wear the wristband just because they like the color, and then get upset when they get approached); all the women have defected.

Dating. One massive game theory defection scheme.

As the Game Theorists have noted, if everyone defects in the iterated PD, then eventually this becomes a losing strategy. The issue with dating is that the iteration takes place over numerous generations. Meaning that the initial defectors are dead by the time the damage has been done and everyone has started losing out because everyone is now defecting. But why would a rational person not defect after multiple generations? Agreeing to the now generations-old cooperation scheme only results in the other prisoner reaping the rewards.

What I think is the worst part of GT and other decision theories is that we’re not consciously aware of our values. Playing a game of “let’s pretend” doesn’t solve that intractable problem. Furthermore, as I’ve been writing about a lot recently, our values are socially constructed. Our “conscious mind” is more like a press secretary for the government, meant to put a feel-good spin on whatever it is that our unconscious mind (the actual government) values and the actions we make due to those unconscious values. This is the main reason why people are religious, why some are suicide bombers, why some are feminists, and the main reason for many other political groupings.

The link to religion will be made in a subsequent post 🙂


Posted by on June 19, 2013 in decision theory


Should You Get Married?


This is a question I’ve been grappling with for a while.

Even though I've been wavering about this for a number of years, I miraculously (lol) read two posts today that finally made me decide to write my own. Even more oddly, about a week ago my grandmother made the startling argument that marriage/kids isn't for anyone. She went further, saying that having a kid these days is basically madness; if she were a young adult in today's age, she wouldn't have kids.

The first post I read today was a post over at Less Wrong about Unknown Knowns: why we choose monogamy and why we don't choose non-monogamy (i.e. polyamory or some other relationship format). As a society, we "know" that monogamy is "correct", but we don't know why it's "correct" (scare quotes for a reason):

By definition, we are each completely ignorant of our own unknown knowns. So even when our culture gives us a fairly accurate map of the territory, we’ll never notice the Mercator projection’s effect. Unless it’s pointed out to us or we find contradictory evidence, that is. A single observation can be all it takes, if you’re paying attention and asking questions. The answers might not change your mind, but you’ll still come out of the process with more knowledge than you went in with.

When I was eighteen I went on a date with a girl I’ll call Emma, who conscientiously informed me that she already had two boyfriends: she was, she said, polyamorous. I had previously had some vague awareness that there had been a free love movement in the sixties that encouraged “alternative lifestyles”, but that awareness was not a sufficient motivation for me to challenge my default belief that romantic relationships could only be conducted one at a time. Acknowledging default settings is not easy.

The chance to date a pretty girl, though, can be sufficient motivation for a great many things (as is also the case with pretty boys). It was certainly a good enough reason to ask myself, “Self, what’s so great about this monogamy thing?”

I couldn’t come up with any particularly compelling answers, so I called Emma up and we planned a second date.

Since that fateful day, I’ve been involved in both polyamorous and monogamous relationships, and I’ve become quite confident that I am happier, more fulfilled, and a better romantic partner when I am polyamorous. This holds even when I’m dating only one person; polyamorous relationships have a kind of freedom to them that is impossible to obtain any other way, as well as a set of similarly unique responsibilities.

In this discussion I am targeting monogamy because its discovery has had an effect on my life that is orders of magnitude greater than that of any other previously-unknown known. Others I’ve spoken with have had similar experiences. If you haven’t had it before, you now have the same opportunity that I lucked into several years ago, if you choose to exploit it.

This, then, is your exercise: spend five minutes thinking about why your choice of monogamy is preferable to all of the other inhabitants of relationship-style-space…

…If you have a particularly compelling argument for or against a particular relationship style, please share it. But if romantic jealousy is your deciding factor in favor of monogamy, you may want to hold off on forming a belief that will be hard to change.

Someone in the comments made a pretty good observation:

More specifically, the concept of love seems to have the concepts of fidelity and jealousy inextricably woven into it, at least in mainstream Western culture. On a philosophical level, this doesn’t exactly make sense. If we care about the overall happiness and flourishing of mankind, it seems likely we would be far better off if we took the effort we put into suppressing, say, premarital sex, and moved it into suppressing jealousy.

Obviously, this is the view of a rather small minority, but it is nonetheless fascinating that most people are incapable of conceiving of love without fidelity: consider the seriousness of the implications of a romantic partner saying, “I love you,” for most people.


The second post I read, about 30 minutes later, was over at the blog Debunking Christianity. The Christian quoted in that post argued that if atheists defer to rationality (i.e. probabilities and the scientific method) for their life choice about god, then they should be using the same skills for their personal relationships:

That brings me to the first part of the title of this brief essay, “Why Atheists Shouldn’t Marry.” Entering into marriage requires a leap of faith beyond the scientific probabilities. For atheists who claim that we should only believe what science can support, the claim that another human loves us so much as to lead us to pledge our lifelong love and commitment to remain married to them is absolutely hypocritical. How can science prove beyond doubt the love of another for me? We can observe signs of what we call love, we can scan the brain for activity in the appropriate sectors, we can even use our own feelings as a guide. These can all be manipulated or faked. And if you use science as a guide, atheists have to admit that the statistical probability of a lifelong happy marriage is well below 50%.

So why do atheists, even those who embrace some form of scientism, get married? …Loftus admits in [Why I Became An Atheist] that his own experience might have tainted his objectivity as he began to doubt his faith. Highly regarded experts in the Philosophy of Science such as Susan Haack, Paul Feyerabend, and Keith Ward admit that the scientific method is incomplete and that scientism has to be rejected because science cannot explain everything. Human opinion and experience has to be factored in. We all marry because we believe that, in the face of overwhelming odds against it, the love that we have with another is real, true, and lasting. That requires a leap of faith beyond the probabilities.

Non-sequiturs aside, I think he makes a good overall point. We should be entering into relationships/marriages (especially marriage, since it's a business and/or financial contract) with as much rationality as we can muster, and try our best not to be influenced by the inherently irrational pulling of our heartstrings (irrational as far as modern life goes, at least; it wasn't irrational for our ancestors millions of years ago).

Of course, just from some basic overall numbers, the divorce rate in the USA is around 50%. The more important numbers come from breaking divorce down by gender and who initiates it. Women overwhelmingly initiate divorce, on the order of around 2:1. So if there are 200 marriages and 100 of those end in divorce, approximately 65 – 70 of that 100 will be initiated by the woman, meaning that my own personal "get-divorced-on" rate is around 33 – 35%. That is, if the divorce initiation rate were actually gender-symmetric (i.e., if women initiated no more often than men do), the overall divorce rate would not be 50% but closer to a third. This is one of the reasons I consider marriage to be inherently misogynist. Think about it: Who invented marriage? Who laments dropping marriage rates the most?
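This back-of-envelope arithmetic can be checked with a quick sketch. The 50% divorce rate and the 2:1 initiation ratio are the rough figures cited above, not exact data:

```python
# Rough figures from the post (assumed, not exact data):
# ~50% of marriages end in divorce; women initiate ~2 of every 3 divorces.
marriages = 200
divorces = marriages * 0.50                 # 100 divorces
woman_initiated = divorces * (2 / 3)        # ~67 wife-initiated
man_initiated = divorces - woman_initiated  # ~33 husband-initiated

# A husband's chance of being divorced *on* (i.e., wife-initiated divorce):
get_divorced_on_rate = woman_initiated / marriages

# If women initiated no more often than men, the overall divorce rate:
symmetric_rate = (2 * man_initiated) / marriages

print(round(get_divorced_on_rate, 2), round(symmetric_rate, 2))  # 0.33 0.33
```

Both numbers come out to about a third under these assumptions.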

The above is overly simplistic though. There are a lot of other factors you could use in order to see how successful a marriage might be, like being over 30, education level, religiosity, number of sexual partners relative to age, and probably a whole bunch of other things. But I feel that would be overly complicated for a blog post.

Anyway, now that I’ve got some probabilities, we have to get into decision theory:

Decision theory is about choosing among possible actions based on how much you desire the possible outcomes of those actions.

How does this work? We can describe what you want with something called a utility function, which assigns a number that expresses how much you desire each possible outcome (or “description of an entire possible future”). Perhaps a single scoop of ice cream has 40 “utils” for you, the death of your daughter has -274,000 utils for you, and so on. This numerical representation of everything you care about is your utility function.

We can combine your probabilistic beliefs and your utility function to calculate the expected utility for any action under consideration. The expected utility of an action is the average utility of the action’s possible outcomes, weighted by the probability that each outcome occurs.

Suppose you’re walking along a freeway with your young daughter. You see an ice cream stand across the freeway, but you recently injured your leg and wouldn’t be able to move quickly across the freeway. Given what you know, if you send your daughter across the freeway to get you some ice cream, there’s a 60% chance you’ll get some ice cream, a 5% chance your child will be killed by speeding cars, and other probabilities for other outcomes.

To calculate the expected utility of sending your daughter across the freeway for ice cream, we multiply the utility of the first outcome by its probability: 0.6 × 40 = 24. Then, we add to this the product of the next outcome’s utility and its probability: 24 + (0.05 × -274,000) = -13,676. And suppose the sum of the products of the utilities and probabilities for other possible outcomes was 0. The expected utility of sending your daughter across the freeway for ice cream is thus very low (as we would expect from common sense). You should probably take one of the other actions available to you, for example the action of not sending your daughter across the freeway for ice cream — or, some action with even higher expected utility.

A rational agent aims to maximize its expected utility, because an agent that does so will on average get the most possible of what it wants, given its beliefs and desires.
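The quoted calculation is straightforward to sketch in code, using the numbers from the example (ice cream = +40 utils, the daughter's death = -274,000 utils):

```python
# Sketch of the expected-utility calculation from the quoted example.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

send_daughter = [
    (0.60, 40),        # 60% chance: you get ice cream
    (0.05, -274_000),  # 5% chance: she is killed by speeding cars
    # remaining outcomes assumed to net to 0 utils, as in the quote
]
print(expected_utility(send_daughter))  # -13676.0
```

A rational agent would compute this number for every available action and pick the action with the highest result, which here is clearly not this one.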

So how much utility do monogamy and/or marriage have for you? How much utility does some other relationship paradigm have? How much negative utility do you place on infidelity/divorce? Personally, I assign divorce a negative utility much larger in magnitude than the positive utility of a lifelong marriage; even if marriage and divorce were equally probable outcomes, marriage would not be worth it. Like I mentioned above, I think that marriage is inherently misogynist so, ironically, I think truly equal marriages can only be between homosexuals.
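The same machinery applies to the marriage question. The utilities below are hypothetical values of my own choosing (the post assigns no specific numbers); the point is only that when divorce is weighted much more negatively than lifelong marriage is weighted positively, even 50/50 odds make marriage a losing bet:

```python
# Hypothetical utilities, for illustration only (not from the post):
U_LIFELONG_MARRIAGE = 100   # utils for a happy lifelong marriage
U_DIVORCE = -300            # divorce weighted far more negatively

def expected_marriage_utility(p_divorce):
    """Expected utility of marrying, given a probability of divorce."""
    return p_divorce * U_DIVORCE + (1 - p_divorce) * U_LIFELONG_MARRIAGE

# Even at 50/50 odds, marriage has negative expected utility here:
print(expected_marriage_utility(0.50))  # -100.0
# With these utilities it only breaks even at a 25% divorce probability:
print(expected_marriage_utility(0.25))  # 0.0
```

The not-marrying baseline is 0 utils here, so under these assumed numbers you'd need to believe your divorce risk is well below the population average before marrying maximizes expected utility.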


Posted by on March 5, 2013 in decision theory, economics/sociology
