Monthly Archives: January 2014

Reverse Game Theory


So on Less Wrong I proposed what I thought was an example of the Prisoner’s Dilemma:

I am trying to formalize what I think should be solvable by some game theory, but I don’t know enough about decision theory to come up with a solution.

Let’s say there are twins who live together. For some reason they can only eat when they both are hungry. This would work as long as they actually get hungry at the same time, but suppose one twin wants to gain weight to become a bodybuilder, or one twin wants to lose weight to look better in a tuxedo.

At this point it seems like they have conflicting goals, so this seems like an iterated prisoner’s dilemma. And it seems like if this is an iterated prisoner’s dilemma, then the best strategy over the long run would be to cooperate. Is that correct, or am I wrong about something in this hypothetical?

It was pointed out that this isn’t a PD since the twins have competing goals. So instead of going with a confusing hypothetical, I decided to use an example from my own life:

My brother likes to turn the air conditioner as cold as possible during the summer so that he can bundle up in a lot of blankets when he goes to sleep. I on the other hand prefer to sleep with the a/c at room temperature so that I don’t have to bundle up with blankets. Sleeping without bundling up makes my brother uncomfortable, and having to sleep under a lot of blankets so I don’t freeze makes me uncomfortable. We both have to use the a/c, but we have contradictory goals even though we’re using the same resource at the same time. And the situation is repeated every night during the summer (thankfully I don’t live with my brother, but my current new roommate seems to have the same tendency with the a/c).

User badger was able to use this real life example to explain that this indeed isn’t a PD since this isn’t a pre-made game, so I would have to design a game that could be solved. This is mechanism design, or reverse game theory:

That example helps clarify. In the A/C situation, you and your brother aren’t really starting with a game. There isn’t a natural set of strategies you are each independently choosing from; instead you are selecting one temperature together. You could construct a game to help you two along in that joint decision, though. To solve the overall problem, there are two questions to be answered:

  1. Given a set of outcomes and everyone’s preferences over the outcomes, which outcome should be chosen? This is studied in social choice theory, cake-cutting/fair division, and bargaining solutions.
  2. Given an answer to the first question, how do you construct a game that implements the outcome that should be chosen? This is studied in mechanism design.

One possible solution: If everything is symmetric, the result should split the resource equally, either by setting the temperature halfway between your ideal and his ideal or alternating nights where you choose your ideals. With this as a starting point, flip a coin. The winner can either accept the equal split or make a new proposal of a temperature and a payment to the other person. The second person can accept the new proposal or make a new one. Alternate proposals until one is accepted. This is roughly the Rubinstein bargaining game implementing the Nash bargaining solution with transfers.

Another possible solution: Both submit bids between 0 and 1. Suppose the high bid is p. The person with the high bid proposes a temperature. The second person can either accept that outcome or make a new proposal. If the first player doesn’t accept the new proposal, the final outcome is the second player’s proposal with probability p and the status quo (say alternating nights) with probability 1-p. This is Moulin's implementation of the Kalai-Smorodinsky bargaining solution.

According to my own moral intuitions (which admittedly aren’t universal moral intuitions), alternating weeks of control over the A/C seems the most fair. But someone else might prefer to place bids. However, introducing money also introduces a power system where whoever has or earns more money gets their preferences satisfied disproportionately.
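To make the second proposed mechanism concrete, here is a minimal Python sketch. The bids, ideal temperatures, and the always-counter/always-reject strategies are my own illustrative assumptions, not part of badger’s comment:

```python
import random

def moulin_ac_game(bid_a, bid_b, temp_a, temp_b, status_quo, rng=random):
    """Crude sketch of the bid mechanism described above.

    Both sides bid in [0, 1] and the high bid p wins the right to
    propose a temperature. For illustration, assume the responder
    always counter-proposes their own ideal and the proposer always
    rejects it: the outcome is then the responder's ideal with
    probability p, and the status quo (alternating nights) with
    probability 1 - p.
    """
    p = max(bid_a, bid_b)
    responder_ideal = temp_b if bid_a >= bid_b else temp_a
    if rng.random() < p:
        return responder_ideal  # counter-proposal is enforced
    return status_quo           # fall back to alternating nights
```

The point of the randomization is that a high bid buys proposal power but also makes the other side’s counter-proposal more likely to stick, which is what disciplines the bids.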

Of course there could also be a power differential due to knowledge, such as a game that is designed to violate the laws of probability to favor the game designer. But that’s why it’s always in your favor to learn rationality!


Posted by on January 30, 2014 in decision theory


Falsifiability and Characters In A Story


Another interesting study intersecting psychology with falsifiability:

You have four cards. Flip whichever cards you must to verify the rule “If a card has a vowel on one side, then it has an even number on the other”:

E C 4 5

People do notoriously badly at this game. It’s called the “Wason selection task”, and it was mentioned in the Sequences a few times. But it turns out that people are much better at this version:

There are four people drinking at a bar. A police officer busts in and needs to verify the rule “If a person is drinking alcohol, they must be at least 21.” Which of the following must they investigate further?

Beer-drinker, Coke-drinker, 25-year-old, 16-year-old

These problems are logically identical. However, on the abstract version most people suggest flipping the 4, while on the bar version few people suggest checking what the 25-year-old is drinking.
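The falsification logic shared by both versions can be written out directly; this little Python sketch flags exactly the cards whose visible face could break the rule:

```python
def must_check(card):
    """A card must be flipped iff its visible face could falsify the
    rule "if a vowel is on one side, an even number is on the other"."""
    vowels = set("AEIOU")
    if card.isalpha():
        return card.upper() in vowels  # a vowel might hide an odd number
    return int(card) % 2 == 1          # an odd number might hide a vowel

print([c for c in ["E", "C", "4", "5"] if must_check(c)])  # prints ['E', '5']
```

The 4 is the trap: the rule says nothing about what must be behind an even number, just as it says nothing about what a 25-year-old may drink.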

More generally, it seems that people can do very well on the Wason selection task if it’s framed in such a way that people are looking for cheaters. (Eliminating the police officer from the above story is sufficient to reduce performance.)

So it seems we have an intuitive understanding of falsifiability if we move from abstract concepts to characters in a story, actively looking for cheaters.

I’m trying to think of examples other than this one where I can use characters and cheaters to explain how falsifiability (otherwise called precision) works, but it makes me think that people would choose based on representativeness.

So instead of using marbles or dice for an example of falsifiability, I think I can use police and a lineup of usual suspects as an example.

Let’s say that someone was murdered. The police line up the usual suspects of mob hitmen. In this case, we have four suspects.

Nate has a tendency to keep it simple: he always uses a 9mm gun to shoot victims. Jerry likes to either strangle or poison his victims, preferring to keep things from getting messy. Bob is known to use any means he can to kill his victims: guns, knives, poisoning, arson, strangling, bombs, throwing people out of airplanes, etc. Dan either strangles or shoots victims and doesn’t bother with any other methods.

The person murdered was found strangled. So, based on this information, and keeping things simple for this thought experiment, which person likely did it? We can rule out Nate since he never strangles victims. Bob, Dan, and Jerry all use strangling, but since Bob is all over the place with his methods, he is the least likely of the three to have strangled someone in this case. Dan and Jerry are equally likely to have done it.

Obviously, you would have to also take into account prior probabilities, like how often each person actually kills someone for the mob if this were a real situation. But with all else equal, Nate is the least likely to have done this hit, followed by Bob, and then Dan and Jerry tie for most likely.
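Under these simplifying assumptions (uniform priors, and each hitman picking uniformly among his known methods; the method lists below are my own reading of the descriptions), the posterior works out as follows:

```python
from fractions import Fraction

# Each hitman's known methods, per the descriptions above.
methods = {
    "Nate":  ["shoot"],
    "Jerry": ["strangle", "poison"],
    "Bob":   ["shoot", "stab", "poison", "arson", "strangle",
              "bomb", "airplane"],
    "Dan":   ["strangle", "shoot"],
}

def posterior(evidence, methods):
    """Posterior over suspects given the murder method, assuming
    uniform priors and a uniform choice among each suspect's methods."""
    likelihood = {name: Fraction(evidence in m, len(m))
                  for name, m in methods.items()}
    total = sum(likelihood.values())
    return {name: lk / total for name, lk in likelihood.items()}

post = posterior("strangle", methods)
# Nate is ruled out, Bob is an also-ran, Dan and Jerry tie for most likely.
```

The exact numbers depend on how many methods you credit Bob with, but the ordering (Nate out, Bob behind, Dan and Jerry tied) is robust.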

Of course, it remains to be seen whether people actually do better at eliminating Nate, placing Bob second to last, and focusing on Dan and Jerry. I’ll have to ask some friends or something. But one thing that I think might hinder it is representativeness: they might see Bob’s penchant for variety in his choice of murder methods and implicitly use that large number as a substitute for prior probability. In other words, they might see the number of methods and substitute it for the number of murders. In that view, Nate has only killed one person, Dan and Jerry have killed two, and Bob has killed a multitude… even though the number of people each has murdered is never given in this thought experiment.


Posted by on January 27, 2014 in Bayes, cognitive science, rationality


Why Final Fantasy Is Anti-Religion

Also, see my post on Jesus as an Emanation of God.


Posted by on January 11, 2014 in video games


Logical Fallacies As Weak Bayesian Evidence: The Fallacy Fallacy

So the fallacy fallacy is when you declare the conclusion of an argument false because the argument’s logical structure commits a logical fallacy. As in the attached comic, you can’t just declare an argument wrong based on the number of logical fallacies it commits; at most, you can say that the conclusion doesn’t follow from the premises, or that a premise is false. Take the following argument:

P1: Bill is the CEO of my company and says that dolphins are fish
P2: Bill also says that all fish live in the ocean
C: Since Bill is the CEO of the company, and says that dolphins are fish, then dolphins live in the ocean

This argument not only appeals to false authority for the strength of its premises (itself weak Bayesian evidence) but also has factually incorrect premises. However, saying “appeal to false authority!” doesn’t actually mean that dolphins don’t live in the ocean.

So looking at this from a Bayesian point of view, how likely is it that the fallacy fallacy actually points out a false conclusion as opposed to a true one (in which case it would be a genuine mistake)? That is, our evidence E is “you used a logical fallacy in constructing this argument” and our hypothesis H is “your conclusion is wrong,” and we want to compare P(E | H) with P(E | ~H).

Making an intuitive judgment, there seems to be no relationship between the logical fallacies used and whether an argument has a true conclusion; the fallacy fallacy itself seems to hinge on whether other logical fallacies can legitimately be used as weak Bayesian evidence. If a fallacy fallacy can point out false conclusions just as easily as it can mistakenly call true conclusions false, then P(E | H) equals P(E | ~H); I don’t have any valid way to differentiate between the two conditionals. If that is true, then the Bayes factor is 1 and I should just go by the prior probability that a person would arrive at a false conclusion. Over the course of human history, more people have been wrong than right, so the prior probability that some random person knows the correct answer to something is low.
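Bayes’ rule in odds form makes this mechanical: a Bayes factor of 1 leaves any prior exactly where it started. (This is a generic sketch, not anything from the post above; the 30% prior is an arbitrary example.)

```python
def bayes_update(prior, bayes_factor):
    """Posterior probability via odds form:
    posterior odds = Bayes factor * prior odds."""
    prior_odds = prior / (1 - prior)
    post_odds = bayes_factor * prior_odds
    return post_odds / (1 + post_odds)

# P(E|H) = P(E|~H) gives a Bayes factor of 1, so "you committed a
# fallacy" moves a 30% prior... nowhere.
print(bayes_update(0.3, 1.0))  # ~0.3
```

Any movement in the posterior has to come from the prior or from evidence with a Bayes factor different from 1, which is exactly the argument being made here.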

So the fallacy fallacy is not technically strong or weak Bayesian evidence on its own, but paying attention to the base rate of someone constructing a valid and sound argument means that the bare act of pointing out a logical fallacy probably does point to an argument with a false conclusion.


Posted by on January 7, 2014 in Bayes, logical fallacies as weak bayesian evidence


Genetic Religion


In my previous post on the gender differences in morality (and thus religiosity), I prematurely hypothesized that religiosity was mainly sociological and only minimally genetic, at a 9:1 ratio of sociological to biological. As fate would have it, I stumbled across some twin studies concluding that religiosity, like other prosocial phenomena, is closer to 50% heritable.

The more general twin study focused on the heritability of social behaviors overall:


Findings from twin studies yield heritability estimates of 0.50 for prosocial behaviours like empathy, cooperativeness and altruism. First molecular genetic studies underline the influence of polymorphisms located on genes coding for the receptors of the neuropeptides, oxytocin and vasopressin. However, the proportion of variance explained by these gene loci is rather low indicating that additional genetic variants must be involved. Pharmacological studies show that the dopaminergic system interacts with oxytocin and vasopressin… The present experimental study tests a dopaminergic candidate polymorphism for altruistic behaviour, […] Altruism was assessed by the amount of money donated to a poor child in a developing country, after having earned money by participating in two straining computer experiments. Construct validity of the experimental data was given: the highest correlation between the amount of donations and personality was observed for cooperativeness… Carriers of at least one Val allele donated about twice as much money as compared with those participants without a Val allele… Cooperativeness and the Val allele of COMT additively explained 14.6% of the variance in donation behaviour. Results indicate that the Val allele representing strong catabolism of dopamine is related to altruism.

As I wrote in that previous post, empathy is highly correlated with religiosity. I also seem to have guessed correctly that empathy is genetic; I was just wrong about how genetic it is.

Here is the more specific twin study, itself referring to previous twin studies, focused on religiosity:

For decades, religiosity (defined as beliefs or behaviors towards superempirical agents) has been explored like other traits such as musicality, intelligence or skin color by twin studies – which conclusively found it to be partially inherited through genes and partially dependent on environmental (cultural) cues. In fact, religion turns out to be fully comparable to other biocultural traits such as speech or music.

Now, Kenneth S. Kendler, Hermine H. Maes and Todd Vance from the Virginia Commonwealth University in Richmond, VA, presented another twin study with a rather large sample of 1106 monozygotic twins and 1501 dizygotic twins on "Genetic and Environmental Influences on Multiple Dimensions of Religiosity" (J Nerv Ment Dis 2010; 198: 755-761), DOI: 10.1097/NMD.0b013e3181f4ao7c.

Building on lots of earlier Twin Studies, they selected 78 religion-related items for their questionnaire, which were organized (by way of a statistical VARIMAX rotation) into 7 factors: General Religiosity, Social Religiosity, Involved God, Forgiveness, God as Judge, Unvengefulness and Thankfulness.

As in those (many) earlier studies (e.g. Bouchard and Koenigs), they found the correlations among monozygotic twins to be far stronger than among dizygotic twins, strongly supporting the notion that the trait is genetically heritable.
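The standard way twin studies turn that MZ/DZ correlation gap into a heritability estimate is Falconer’s formula, h² ≈ 2(r_MZ − r_DZ); the correlations below are hypothetical, chosen only to reproduce the ~50% figure:

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's formula: heritability is roughly twice the gap
    between monozygotic and dizygotic twin correlations."""
    return 2 * (r_mz - r_dz)

print(falconer_h2(0.5, 0.25))  # prints 0.5
```

The intuition is that MZ twins share essentially all their genes while DZ twins share about half, so doubling the correlation gap isolates the genetic contribution under the formula’s equal-environments assumption.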

So it might actually be that religiosity is closer to 50% biological and 50% social. That would mean women being more religious than men might itself be more genetic and less a product of social conditioning, with the sociological factors (like groupthink) themselves being a genetic predisposition that affects women more than men. Indeed, almost everyone knows that men show off in the presence of women; romantic or sexual priming increases men’s proclivity for risk taking. But one study I read showed that when women are given a romantic prime, they volunteer more. There goes that groupthink again…


Posted by on January 6, 2014 in cognitive science, morality
