A popular view in philosophy of science contends that scientific reasoning is objective to the extent that the appraisal of scientific hypotheses is not influenced by moral, political, economic, or social values, but only by the available evidence. A large body of results in the psychology of motivated reasoning has put pressure on the empirical adequacy of this view. The present study extends this body of results by providing direct evidence that the moral offensiveness of a scientific hypothesis biases explanatory judgment along several dimensions, even when prior credence in the hypothesis is controlled for. Furthermore, it is shown that this bias is insensitive to an economic incentive to be accurate in the evaluation of the evidence. These results further call into question the attainability of the ideal of a value-free science.
Political debate concerning moralized issues is increasingly common in online social networks. However, moral psychology has yet to incorporate the study of social networks to investigate processes by which some moral ideas spread more rapidly or broadly than others. Here, we show that the expression of moral emotion is key for the spread of moral and political ideas in online social networks, a process we call “moral contagion.” Using a large sample of social media communications about three polarizing moral/political issues (n = 563,312), we observed that the presence of moral-emotional words in messages increased their diffusion by a factor of 20% for each additional word. Furthermore, we found that moral contagion was bounded by group membership; moral-emotional language increased diffusion more strongly within liberal and conservative networks, and less between them. Our results highlight the importance of emotion in the social transmission of moral ideas and also demonstrate the utility of social network methods for studying morality. These findings offer insights into how people are exposed to moral and political ideas through social networks, thus expanding models of social influence and group polarization as people become increasingly immersed in social media networks.
Bias in the general sense is weighing the proverbial (or even literal) scales in a particular direction when the scale should be neutral.
Cognitive biases are the epistemic equivalent. They happen when people use their intuitions — especially their moral intuitions — when they shouldn't.
Our thought processes can be crudely divided into two general types: intuitive thinking and analytic thinking. Intuitive thinking is virtually instantaneous; analytic thinking is slow and deliberate. Intuition is good at recognizing faces or voices and terrible at math. The slow one is good at math and terrible at recognizing faces.
If you want to engage the intuitive engine, just listen to a story. Especially an engaging story with a simple narrative that has clear good guys and clear bad guys. If you want to engage the slow one, start doing some statistical analysis.
A bat and a ball together cost $1.10. The ball costs $1.00 more than the bat. How much does the bat cost?
The bat costs $0.10 right?
If you think so, you used your intuition instead of your analytic brain to answer the question.
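The analytic answer takes only a line of algebra. Here is a quick sketch (the variable names are mine) that solves the two equations — bat + ball = 1.10 and ball = bat + 1.00 — instead of trusting the snap judgment:

```python
# Bat-and-ball problem: bat + ball = total, ball = bat + difference.
# Substituting gives bat + (bat + difference) = total,
# so bat = (total - difference) / 2.
total = 1.10
difference = 1.00  # the ball costs this much more than the bat

bat = (total - difference) / 2
ball = bat + difference

print(f"bat = ${bat:.2f}, ball = ${ball:.2f}")  # bat = $0.05, ball = $1.05
```

The intuitive answer of $0.10 fails the check: a $0.10 bat plus a $1.10 ball comes to $1.20, not $1.10.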
Imagine the following scenario:
A beautiful woman sits alone at a bar. An attractive man walks up to her. He leans in and whispers in her ear. She laughs.
What did the woman look like? Was she white? Black? Indian? What did the bar look like? Was it upscale? A dive bar? What did the man look like?
Did you have to slowly deliberate over each of these questions while reading the quote, or were the answers simply provided to you? That was your intuition providing them.
The intuitive type is our default: it is in control all the time, and it chooses when to engage the analytic type. Hence bias: some issues have already been decided by our intuitive engine, and when they are challenged, the intuitive engine calls on the slow engine to defend the intuitive engine's conclusion.
Because our intuitive engine is on all the time, we are easily biased by things that appeal to and make sense to it. As a thought process, intuition sees almost everything as having a social cause or human agency. Indeed, if there is a choice between something having a mindless physical cause or being caused by an intentional agent or some social/moral role, we are biased to go with the intentional agent/social role.
The intuitive engine is especially susceptible to teams and coalitions. If you want the intuitive engine to turn into an absolute totalitarian dictator, just join a coalition (e.g., Republican, feminist, Christian, Muslim, Brexiter, Remainer, Yankees fan, etc.).
So how do you overcome bias? Since our intuition is in charge almost all the time, this is a Herculean effort. But here's how you can start:
- Don’t join or identify with any coalitions. If you must, pick a highly exclusive one, not one that just anyone can join.
- Don’t trust stories, especially ones with clear good guys and bad guys. Any time you hear a well-crafted narrative, you should be on the cognitive equivalent of FPCON DELTA: you are being biased in real time.
- Be careful about snap judgments. Try not to moralize an issue. Morality is orthogonal to what’s true; in other words, moral truth is not the same as physical truth.
- Learn what makes something a good explanation. This means: learn statistics. Learn about things like Simpson’s paradox, the base rate fallacy, the conjunction fallacy, the prosecutor’s fallacy (assuming P(A | B) = P(B | A)), the univariate fallacy, and so on.
- Finally, and most importantly, don’t try to pick apart the biases in other people. Since our intuition is in charge, it will use knowledge of bias to defend its own biases. Concentrate on overcoming your own biases. If you hear a thought that says something like “this person I’m talking to is biased,” then you’ve just biased yourself.
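To make the prosecutor's-fallacy point above concrete: P(A | B) and P(B | A) can differ enormously when base rates are skewed. A minimal sketch with made-up numbers (the prevalence and test accuracies here are hypothetical, chosen only to illustrate the gap):

```python
# Base rate / prosecutor's fallacy sketch with hypothetical numbers:
# a test that catches 99% of true cases and falsely flags 5% of
# non-cases, for a condition only 1 in 1,000 people have.
prevalence = 0.001          # P(condition)
sensitivity = 0.99          # P(positive | condition)
false_positive_rate = 0.05  # P(positive | no condition)

# Total probability of a positive result, then Bayes' theorem:
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"P(positive | condition) = {sensitivity:.0%}")
print(f"P(condition | positive) = {p_condition_given_positive:.1%}")
```

Intuition hears "the test is 99% accurate" and concludes a positive result means you almost certainly have the condition; Bayes' theorem says the probability is actually only about 2%, because true cases are so rare that false positives swamp them.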