Author Archives: J. Quinton
I’ve ranted about the conjunction fallacy before. Help me out, Wikipedia!
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.
Which is more probable?
1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist movement.
The majority of those asked chose option 2. However, the probability of two events occurring together (in “conjunction”) is always less than or equal to the probability of either one occurring alone; formally, for two events A and B, Pr(A & B) ≤ Pr(A) and Pr(A & B) ≤ Pr(B).
For example, even choosing a very low probability of Linda being a bank teller, say Pr(Linda is a bank teller) = 0.05 and a high probability that she would be a feminist, say Pr(Linda is a feminist) = 0.95, then, assuming independence, Pr(Linda is a bank teller and Linda is a feminist) = 0.05 × 0.95 or 0.0475, lower than Pr(Linda is a bank teller).
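The arithmetic above can be sketched in a few lines (the variable names are mine; the probabilities are the post's assumed values):

```python
# A sketch of the post's toy numbers: under independence, the probability of
# the conjunction is the product of the marginals, so it can never exceed
# either marginal on its own.
p_teller = 0.05    # Pr(Linda is a bank teller), the post's assumed value
p_feminist = 0.95  # Pr(Linda is a feminist), the post's assumed value

p_both = p_teller * p_feminist  # assuming independence

print(round(p_both, 4))      # 0.0475
print(p_both <= p_teller)    # True
print(p_both <= p_feminist)  # True
```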
Tversky and Kahneman argue that most people get this problem wrong because they use a heuristic (an easily calculated procedure) called representativeness to make this kind of judgment: option 2 seems more “representative” of Linda given the description of her, even though it is clearly mathematically less likely.
In other words, the representativeness heuristic (System 1) supplies the answer when people should be using math (System 2).
There are a bunch of other instances of this:
Which is more probable?
1. God exists
2. God exists and cares about you
Which is more probable?
1. Jesus was crucified
2. Jesus was crucified and had twelve disciples
Which is more probable?
1. Organ A is responsible for the disease
2. Protein G in Organ A is responsible for the disease
Which is more probable?
1. Men assault/kill most people because men are violent
2. Men assault/kill women specifically due to misogyny
Remember what causes bias: We are biased because we use our moral intuitions to decide on something before using our more analytical brain, and we only use our analytical brain to defend our moral intuitions.
Chances are high that if the answer to what should be a basic math problem upsets you, you are defending your biases.
Take the last conjunction fallacy: that men assault women specifically due to misogyny. Already we are asserting a moral issue (misogyny) as the causal agent, so bias is already being prompted. But if men assault more people, period, because men are more violent than women, there’s no need to introduce an additional factor when men assault women. Of course, if men assault more people than women do, the set “people” includes women as a subset. Hence the conjunction.
Now, if it turned out that men assaulted/killed women more often than they assaulted/killed men, an additional explanatory factor might be needed. This is not the case, though.
Anytime you see an issue where the subset is being presented as more probable than its superset, you’re probably dealing with a conjunction fallacy. Indeed, if you want to nip this bias in the bud, always think about the possible supersets and their likelihoods.
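The subset/superset point can be made concrete with a small simulation (my own sketch; the 0.3 and 0.8 parameters are arbitrary). Even when B depends heavily on A, every outcome where both A and B occur is also an outcome where A occurs, so the conjunction count can never exceed either marginal count:

```python
import random

random.seed(0)

n = 100_000
count_a = count_b = count_both = 0
for _ in range(n):
    a = random.random() < 0.3  # event A, with an arbitrary base rate
    # Make B strongly dependent on A: independence is NOT required
    # for the conjunction inequality to hold.
    b = a if random.random() < 0.8 else (random.random() < 0.5)
    count_a += a
    count_b += b
    count_both += (a and b)

print(count_both <= count_a)  # True: the subset can't outnumber its superset
print(count_both <= count_b)  # True
```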
From Mind Hacks:
A curious term from anthropology describing the tendency for someone to come up with a counter-example from some usually obscure and remote tribe when anyone makes a general claim about human culture.
Bongo-bongoism: the venerable but ultimately sterile anthropological practice of countering every generalization with an exception located somewhere at some time.
Apparently, it was first used by anthropologist Mary Douglas in her book Natural Symbols.
Link to the culture evolves! blog (where I found the definition).
A few years ago I made a post titled “Truth vs. Morality; Rationality vs Intuition”. In that post I put forward the idea that there are certain things — empirical claims — that people dismiss because of their unfavorable moral implications.
I encountered this so many times in debates with religious people that I assumed that it was a particular failing of the religious. All of us former religious people have encountered the following logic:
If god doesn’t exist, what’s stopping an atheist from murdering bystanders and raping children?
Religious people don’t realize that this is quite the self-own: they are so depraved and morally bankrupt that the only thing stopping them from raping children is belief in god. Eliezer Yudkowsky refutes this pretty soundly, in my opinion, by substituting “murder” with something more mundane like “going to the bathroom after midnight”: If god doesn’t exist, what’s stopping an atheist from going to the bathroom after midnight? Checkmate, atheists!
The substitution demonstrates that an extra, hidden premise is smuggled in to give the original formulation its weight.
Unfortunately, religion isn’t some aberration of human behavior. The physical-to-moral sleights of hand that religious people perform aren’t limited to them; many non-religious people perform them as well. Religion is just a subset of moral intuitions. As such, there are many other empirical claims that are dismissed on secular morality grounds, and they lead to the same sorts of self-owns.
Can you think of any? I brought some up in that previous post.
The larger point in both this post and the previous is that human worth should be orthogonal to most — if not all — empirical claims. If god doesn’t exist, this should have no bearing on the value of human life. But to even get to this step, people have to understand that the existence of god is an empirical claim and not a moral one. That is a hard ask.
And to my non-religious readers, you don’t get away either! You suffer from the same inability to divorce the physical from the moral that the religious do. And as such, you will inadvertently self-own in the same way religious people do. How depraved and morally destitute are you by your own admission?
Or to put it in a phrasing you might be familiar with (and leads to the moral self-own), does not believing in [XYZ] empirical or physical claim make you racist/sexist/homophobic/transphobic? Are you saying that the only thing that holds you back from being a putrid mire of racism/sexism/homophobia/transphobia is believing in [XYZ] claim?
Now, I actually don’t think religious people suffer from an abject poverty of moral purchase due to the implications of this particular anti-atheist argument. When a religious person hears “I don’t believe in god,” their subconscious translates it as “I don’t think morality exists.” The primacy of social or moral rules over the physical is a bias we all have. This means the same translation happens for other physical claims besides the existence of god, in non-religious domains: when presented with a question/claim that can have a moral/social answer/interpretation XOR a physical answer/interpretation, we tend to answer with the moral/social one.
But when you interpret a physical claim as a moral/social claim, you logically paint yourself into a moral corner. You imply that you would kill innocent people/rape children/be racist/sexist/homophobic/transphobic, and the only thing holding you back is the existence of god/[XYZ] claim.
To drive this point home, I’ll end this post with the most egregious example of the human tendency to supplant the social/moral over the physical — besides the existence of god — in the current zeitgeist:
“It is difficult to get a man to understand something, when his salary depends upon his not understanding it!” (Upton Sinclair)
Bullock et al., 2013:
In both experiments, all subjects were asked factual questions, but some were given financial incentives to answer correctly. In both experiments, we find that the incentives reduce partisan divergence substantially – on average, by about 55% and 60% across all of the questions for which partisan gaps appear when subjects are not incentivized. But offering an incentive for accurate responses will not deter cheerleading among those who are unsure of the correct factual response, because such people stand to gain little by forgoing it. In our second experiment, we therefore implement a treatment in which subjects were offered incentives both for correct responses and for admitting that they did not know the correct response. We find that partisan gaps are even smaller in this condition – about 80% smaller than for unincentivized responses. This finding suggests that partisan divergence is driven by both expressive behavior and by respondents’ knowledge that they do not actually know the correct answers.
(h/t Bryan Caplan)
Do people generally view others as good or evil? Although people generally cooperate with others and view others’ “true selves” as intrinsically good, we suggest that they are likely to assume that the actions of others are evil, at least when they are ambiguous. Nine experiments provide support for promiscuous condemnation: the general tendency to assume that ambiguous actions are immoral. Both cognitive and functional arguments support the idea of promiscuous condemnation. Cognitively, dyadic completion suggests that when the mind perceives some elements of immorality (or harm), it cannot help but perceive other elements of immorality. Functionally, assuming that ambiguous actions are immoral helps people quickly identify potential harm and provide aid to others. In the first seven experiments, participants often judged neutral nonsense actions (e.g., “John pelled”) as immoral, especially when the context surrounding these nonsense actions included elements of immorality (e.g., intentionality and suffering). In the last two experiments, participants showed greater promiscuous condemnation under time pressure, suggesting an automatic tendency to assume immorality that people must effortfully control.
I dub this the Edgelord Effect.