
Monthly Archives: August 2013

“Evil Will Always Triumph… Because Good Is Dumb”

Because good is dumb

Not much to this post; just, once again, harping on groupthink and mixing it in with a quote from Less Wrong:

The respected leader speaks, and there comes a chorus of pure agreement: if there are any who harbor inward doubts, they keep them to themselves. So all the individual members of the audience see this atmosphere of pure agreement, and they feel more confident in the ideas presented—even if they, personally, harbored inward doubts, why, everyone else seems to agree with it.

(“Pluralistic ignorance” is the standard label for this.)

If anyone is still unpersuaded after that, they leave the group (or in some places, are executed)—and the remainder are more in agreement, and reinforce each other with less interference.

(I call that “evaporative cooling of groups”.)

The ideas themselves, not just the leader, generate unbounded enthusiasm and praise. The halo effect is that perceptions of all positive qualities correlate—e.g. telling subjects about the benefits of a food preservative made them judge it as lower-risk, even though the quantities were logically uncorrelated. This can create a positive feedback effect that makes an idea seem better and better and better, especially if criticism is perceived as traitorous or sinful.

(Which I term the “affective death spiral”.)

It’s a bit distressing that it wasn’t religion that first came to mind when I read this, but (atheist) P. Z. Myers’ blog; I was one of the ones who “evaporated”.

So this isn’t just a problem with religion. It’s a problem with nationalism, football teams, favorite bands, race, sex/gender, alma mater, or any other grouping that humans put themselves into and identify with. If criticism is seen as sinful, then you know you’re dealing with an affective death spiral, whatever the group happens to be.

A group composed of rationalists would (should?) never devolve into an affective death spiral, and would thus be less “efficient” at generating that chorus of pure agreement than a group of non-rationalists.

 
2 Comments

Posted by on August 26, 2013 in cognitive science

 

The Insidious Manipulation Of Your Beliefs


Ever wonder where those annoying pop-up ads that litter the margins and sidebars of your favorite websites come from? And why they’re so ubiquitous? And who would ever fall for clicking on those things anyway? Well, an article over at Slate gives us a little insight:

“Research on persuasion shows the more arguments you list in favor of something, regardless of the quality of those arguments, the more that people tend to believe it,” Norton says. “Mainstream ads sometimes use long lists of bullet points—people don’t read them, but it’s persuasive to know there are so many reasons to buy.” OK, but if more is better, then why only one trick? “People want a simple solution that has a ton of support.”

Reminds me of the Gish Gallop. It’s a lot easier to spout off a flurry of bad arguments against evolution than it is to correct each and every one of them. Moreover, just by the nature of things, there are a lot more ways to be wrong about something than there are to be right about it. How many wrong answers are there for 2 + 2? Literally an infinite number. So it seems like one aspect of the science of persuasion is to offer a multitude of arguments for something, regardless of their quality. This quite obviously hijacks the brain’s mental algorithms to fabricate a feeling of certainty without much effort.

It makes a bit of sense, though. Most people operate under confirmation bias, so they might automatically assume an argument is true before assuming it’s false (especially if they have an emotional investment in it being true). Moreover, we seem to have some intuitive grasp of Bayesian updating, so combining the two, it would make sense that unleashing a torrent of arguments in favor of something (even if those arguments are bad when examined rationally) would nudge someone into “updating” their prior on the belief in question towards being more probable.
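A toy sketch of that combination (my own illustration, with made-up numbers, not anything from the Slate piece): treat each weak argument as evidence with a likelihood ratio only barely above 1, and watch how far a pile of them drags the posterior anyway.

```python
# Toy illustration (made-up numbers): many individually weak arguments,
# each with a likelihood ratio barely above 1, still push a posterior up
# a lot if the listener treats them all as independent evidence.

def update(prior, likelihood_ratio):
    """Bayesian update in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

belief = 0.30    # starting credence in the claim
weak_lr = 1.2    # each argument is only weak evidence (LR just above 1)

for n in range(1, 21):
    belief = update(belief, weak_lr)
    if n % 5 == 0:
        print(f"after {n:2d} weak arguments: {belief:.2f}")
# after  5 weak arguments: 0.52
# after 10 weak arguments: 0.73
# after 15 weak arguments: 0.87
# after 20 weak arguments: 0.94
```

The trap, of course, is treating twenty restatements of the same bad point as twenty independent pieces of evidence.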

Also, more on the intuitive probability front: people prefer simple solutions to complex ones. That’s actually rational, but the brain has no quick (i.e. intuitive) way of distinguishing between simple and simplistic. A simple explanation is just following Occam’s Razor. A simplistic explanation functions more like a cognitive stop sign. So for many people, “god” is a simple explanation, but if you ask how many things would need to be true for “god” to be the correct explanation, it turns out to function more like a simplistic answer. You can tell an answer is simplistic when that crucial question (how many things need to be true for the explanation to be correct) is purposefully avoided.

What about all the weirdness? “A word like ‘weird’ is not so negative, and kind of intriguing,” says Oleg Urminsky of the University of Chicago Booth School of Business. “There’s this foot-in-the-door model. If you lead with a strong, unbelievable claim it may turn people off. But if you start with ‘isn’t this kind of weird?’ it lowers the stakes.” The model also explains why some ads ask you to click on your age first. “Giving your age is low-stakes but it begins the dialogue. The hard sell comes later.”

There was a post over at Epiphenom that explains the allure of religious belief in this respect. Religious beliefs are only slightly counterintuitive, not complete nonsense. Blurb: “There’s a particular brain wave that gets triggered when you hear stuff that doesn’t make sense. It’s called the N400, and it’s triggered by sentences like “I like my coffee with cream and socks”. Although each individual word makes sense, and although the grammar is fine, the semantics is screwy – the meaning of those words is pretty unexpected… the size of the N400 wave was largest for the pure nonsense, and smallest for the sensible sentences. The religious statements were in-between.”

I would actually further hypothesize that the particular N400 wave that represented religion would be somewhat pleasing depending on the person’s thinking style (I’m not aware of any evidence for that conjecture, though).

Also, referring to the “foot-in-the-door” model, this actually seems like The Benjamin Franklin Effect, or what Less Wrong calls Cached Selves. I’ve actually seen this happen at bars/clubs: a guy will ask a girl for an innocuous favor (e.g. “watch my drink”) and then she’s more likely to do other favors for him (e.g. “come to this other bar with me” / “come home with me”). Insidiousness level: 99; doing a favor for someone makes you more likely to like them!

“People tend to think something is important if it’s secret,” says Michael Norton, a marketing professor at Harvard Business School. “Studies find that we give greater credence to information if we’ve been told it was once ‘classified.’ Ads like this often purport to be the work of one man, telling you something ‘they’ don’t want you to know.” The knocks on Big Pharma not only offered a tempting needle-free fantasy; they also had a whiff of secret knowledge, bolstering the ad’s credibility.

Mystery and inside secrets seem to be among the reasons why Christianity became so popular almost 2,000 years ago. So the trick is, if you want to increase the draw of some idea you’re selling, make it seem like secret knowledge. That way, when you impart it to someone, it makes them feel special and individualized.

Though “one weird trick” ads may not be aimed at the average consumer, they show how deftly marketers have learned to manipulate our beliefs. There may be little daylight between the temptation of learning a weird trick at the behest of a sketchy mail-order outfit and the provocative headlines of mainstream news outlets like BuzzFeed, the Huffington Post, or—indeed!—Slate. The science of grabbing and directing your attention advances each time you click a link on your Facebook feed.

Of course, a reader, “Adrian”, posted in the comments: “It sounds like all the elements needed to start a cult”. Exactly.

 
Comments Off on The Insidious Manipulation Of Your Beliefs

Posted by on August 20, 2013 in cognitive science

 

Religion Is Cheesecake For The Soul


(Christianity, in edible form)

So I was reading some older posts over at Epiphenom and Dr. Rees had this interesting analogy for why religion exists:

First off, we are not naturally religious.

At least, we are no more naturally religious than we are naturally football fans, or concert goers.

Of course, football and pop concerts are popular because they appeal to a number of deep-rooted instincts, but no-one would claim they are natural. They are things we invented.

And here’s the critical bit: we invented them specifically to satisfy our instincts.

So that’s the way that it works. We have mental biases, that make us want to do certain things. We make culture, and we make culture that appeals to and works with our mental biases.

Religion, like music, is cheesecake for the mind – an “exquisite confection crafted to tickle the sensitive spots of our mental faculties”. So the fact that religion, like football and pop concerts, taps into our instincts is not a coincidence and it’s not a surprise.

In fact, it’s bleeding obvious.

That leads to another critical concept. There is more than one way of tickling these sensitive mental spots. All of culture does it. It just so happens that one group of cultural practices in the West that seem to appeal to similar mental biases have been given the label ‘religion’.

But try to apply these categories to other cultures, and you fall flat. Other cultures have invented kirschtorte, not cheesecake, while some choose not to have dessert at all. These people have the same cognitive biases, but different ways of tickling them.

Once you get that point, the next is obvious: Religion can be beneficial without being optimal.

Of course, Rees (and Pinker) call religion cheesecake for the mind, but I like “soul” better. It’s more ironic 🙂

So yeah, religion isn’t “natural” any more than cheesecake is “natural”. We are adaptation executors, not fitness maximizers. And just like our biology doesn’t make sense except in the light of the theory of evolution, our cognitive biases — especially those that produce religious beliefs — don’t make sense except in the light of evolutionary psychology.

There was once a time when high-calorie foods were scarce, so we evolved an adaptation that makes us prone to eat high-calorie (i.e. sweet) foods. Similarly, there was once a time when life was harsh and short, and identifying strongly with a group was one of the best ways to survive. We no longer live in that world, yet we still eat sugary foods, which is why we have an obesity epidemic in the West. Furthermore, people seem to gravitate towards sugary foods when they are depressed, busy, or exhausted. The analogy with religion continues to be apt in this regard. Though sugar still has its uses: judges give harsher sentences for the same crimes late in the day, when they’re tired, unless they drink a sweet drink during breaks (Kahneman 2011).

Ironically, I don’t really like cheesecake. But I do like Fuze, a much healthier invention than cheesecake. I don’t see why we can’t invent an analogously healthy alternative to religion that takes advantage of our cognitive biases to do good.

 
Comments Off on Religion Is Cheesecake For The Soul

Posted by on August 19, 2013 in cognitive science

 

The Charge Of Hyperskepticism/Hyposkepticism


So there was a post critiquing the epistemological framework of Bayesianism (i.e. using probability theory and its laws as a basis for rationality), saying that it’s “just common sense”. That may seem so in rationalist communities, but in the wider world, a lot of people don’t follow such common sense. As the witty catchphrase goes, common sense isn’t all that common. And I’m beginning to doubt that it’s even all that common in rationalist communities.

This is one of the reasons why we have religious scientists. Many scientists think that the scientific method is just something you do to publish in academic journals, and not something more like a reliable framework for attaining and updating your beliefs about the world. So they might adhere to a concept like Popperian falsifiability when designing experiments but then go home and pray to Nocturnal for good luck.

Anyway, I’ve been noticing a lot of articles and such over the past few days about charges of sexual harassment. Like Philosophy Has A Sexual Harassment Problem. Or this Mr. Deity video that a friend posted on my Facebook and this response. And then there’s the sexual harassment problem at my employer, which has scheduled extra trainings for us as a result.

So the first problem I see, putting my little rationality cogsci hat on, is that human beings think in groups. And the largest subgroups that humans can be divided into are male and female. This will of course lead to a bunch of motivated cognition, charges of hyperskepticism, and confirmation bias flying off the shelves like they’re on sale at Wal-Mart.

That’s why y’all motherfuckers need Bayes:

  1. Banish talk like “There is absolutely no evidence for that belief”. P(E | H) > P(E) if and only if P(H | E) > P(H). The fact that there are myths about Zeus is evidence that Zeus exists. Zeus’s existing would make it more likely for myths about him to arise, so the arising of myths about him must make it more likely that he exists. A related mistake I made was to be impressed by the cleverness of the aphorism “The plural of ‘anecdote’ is not ‘data’.” There may be a helpful distinction between scientific evidence and Bayesian evidence. But anecdotal evidence is evidence, and it ought to sway my beliefs.
  2. Banish talk like “I don’t know anything about that”. See the post “I don’t know.”
  3. Banish talk of “thresholds of belief”. Probabilities go up or down, but there is no magic threshold beyond which they change qualitatively into “knowledge”. I used to make the mistake of saying things like, “I’m not absolutely certain that atheism is true, but it is my working hypothesis. I’m confident enough to act as though it’s true.” I assign a certain probability to atheism, which is less than 1.0. I ought to act as though I am just that confident, and no more. I should never just assume that I am in the possible world that I think is most likely, even if I think that that possible world is overwhelmingly likely. (However, perhaps I could be so confident that my behavior would not be practically discernible from absolute confidence.)
  4. Absence of evidence is evidence of absence. P(H | E) > P(H) if and only if P(H | ~E) < P(H). Absence of evidence may be very weak evidence of absence, but it is evidence nonetheless. (However, you may not be entitled to a particular kind of evidence.)
  5. Many bits of “common sense” rationality can be precisely stated and easily proved within the austere framework of Bayesian probability. As noted by Jaynes in Probability Theory: The Logic of Science, “[P]robability theory as extended logic reproduces many aspects of human mental activity, sometimes in surprising and even disturbing detail.” While these things might be “common knowledge”, the fact that they are readily deducible from a few simple premises is significant. Here are some examples:
    • It is possible for the opinions of different people to diverge after they rationally update on the same evidence. Jaynes discusses this phenomenon in Section 5.3 of PT:TLoS.
    • Popper’s falsification criterion, and other Popperian principles of “good explanation”, such as that good explanations should be “hard to vary”, follow from Bayes’s formula. Eliezer discusses this in An Intuitive Explanation of Bayes’ Theorem and A Technical Explanation of Technical Explanation.
    • Occam’s razor. This can be formalized using Solomonoff induction. (However, perhaps this shouldn’t be on my list, because Solomonoff induction goes beyond just Bayes’s formula. It also has several problems.)
  6. You cannot expect that future evidence will sway you in a particular direction. “For every expectation of evidence, there is an equal and opposite expectation of counterevidence.”
  7. Abandon all the meta-epistemological intuitions about the concept of knowledge on which Gettier-style paradoxes rely. Keep track of how confident your beliefs are when you update on the evidence. Keep track of the extent to which other people’s beliefs are good evidence for what they believe. Don’t worry about whether, in addition, these beliefs qualify as “knowledge”.

If everyone is using the same framework, then charges of hyperskepticism or hypo-skepticism (not enough skepticism) should be more easily handled — just like my math teachers would say — by showing my work.
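To “show my work” on points 1 and 4 above, here’s a minimal numerical check, with toy numbers of my own choosing: if E is more likely under H than overall, then observing E raises P(H), and observing ~E has to lower it.

```python
# Minimal check of points 1 and 4 above, with arbitrary toy numbers:
# evidence that is more likely under H raises P(H), and its absence lowers P(H)
# ("absence of evidence is evidence of absence").

p_h = 0.2              # prior P(H)
p_e_given_h = 0.6      # P(E | H)
p_e_given_not_h = 0.3  # P(E | ~H)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)   # law of total probability
p_h_given_e = p_e_given_h * p_h / p_e                   # Bayes' theorem
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

print(f"P(E)      = {p_e:.2f}")              # 0.36
print(f"P(H | E)  = {p_h_given_e:.3f}")      # 0.333 -- higher than the prior of 0.2
print(f"P(H | ~E) = {p_h_given_not_e:.3f}")  # 0.125 -- lower than the prior of 0.2
```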

So if I were to apply Bayesianism to this sexual harassment boondoggle, I would first establish my prior by analyzing my background knowledge. What do all of these problem areas — the atheist/skeptic community, the military, philosophy departments, the tech community — have in common? They are all heavily male-dominated. This creates a scarcity mentality, and the men behave the same way that any other human behaves in a scarcity context: aggression, objectification (i.e. the thing that’s “scarce” being seen as “valuable”), selfishness/lack of empathy, and other deviant and competitive behavior.

More background knowledge: where are we more likely to find psychopaths, in jail or in leadership positions? You guessed it… definitely leadership positions. Jail selects for criminals, not psychopathy! Couple this with the odd relationship between psychopathy, testosterone, and social dominance, and we have a pretty dangerous combo. Mix a high likelihood of psychopathy with a scarcity mentality, and I would have to put a somewhat higher prior on sexual harassment in these male-dominated areas than in the general population.

One of the critiques of Bayesianism is that prior probabilities are subjective. But probability is in the mind and is (mostly) subjective.

So what would I consider a prior for someone with some level of status in a male-dominated community (i.e. the background knowledge) engaging in some form of sexual harassment? I’d say around 5%. This means that if I were to survey a population of, say, 100 people with some level of status in a male-dominated community, I would predict that around 5 of them would be unquestionably guilty of sexual harassment. Considering that the actual population is much larger than 100, that seems about right. Especially since the majority of victims are usually victimized by a minority, and of that minority the majority are repeat offenders.

Here’s where we get to the divergent assertions of hyperskepticism/hypo-skepticism.

So let’s say that C is “claim of harassment” and H is “actually sexually harassed someone”. This means that P(C | H) is the probability of there being a claim of sexual harassment given that one has actually sexually harassed someone. What we want to find out is P(H | C), the probability that someone has sexually harassed someone given a claim of sexual harassment, which is equal to P(C | H) * P(H) / ( [P(C | H) * P(H)] + [P(C | ~H) * P(~H)] ).

Now, a claim of sexual harassment is not by itself definitive proof of sexual harassment, just like testing positive for breast cancer is not by itself definitive proof of breast cancer. Even if P(C | H) were 100% (that is, every actual instance of harassment produced a claim), that still would not mean that someone is definitely guilty of sexual harassment just because they’ve been accused, as counterintuitive as that sounds. P(C | H) is a conditional probability; what we want is to use it to update the prior probability of sexual harassment, i.e. to find P(H | C).

On the other hand, collecting multiple independent claims of harassment counts as evidence, and you should update your prior accordingly. The more claims that are made, the more times you update, even if the rate of increase starts to plateau. This might not be scientific evidence, or the type of evidence that would bring felony charges, but it’s Bayesian evidence nonetheless.

We then have to look at alternative hypotheses, which are represented by ~H. What is P(C | ~H), the probability that someone would file a claim given that they weren’t sexually harassed? Maybe it was an actual misunderstanding, or the woman is being vindictive, or any number of other ways a claim could arise without harassment. But I would certainly say that P(C | H) > P(C | ~H); by how much is the most important factor.

Another point of view I like to look at is the “absence of evidence” view. If one claims that P(C | H) is 100%, that necessarily means that P(~C | H), the probability of there being no claim of sexual harassment given that the person actually did sexually harass someone, is 0%. And I’m reasonably certain that people have been sexually harassed and not filed a claim for whatever reason (fear, rape culture, etc.), so P(~C | H) is definitely greater than 0%.

So if my prior is 5%, and I think that P(C | H) > P(C | ~H), then P(H | C) > P(H). And how much greater P(H | C) is than P(H) is determined by how much greater P(C | H) is than P(C | ~H). Let’s assume that P(C | H) is 90% [forcing P(~C | H) to be 10%] and P(C | ~H) is 10%. This makes P(H | C) roughly 32%. That’s just for one claim. If there’s another claim, then (depending on how independent the two claims are) this moves my new prior of 32% up toward 80%. Of course this is assuming a 90% conditional probability, and looking at it from the view of P(~C | H), I think it should be lower. Even so, with a conditional probability of 50%, a claim still moves my prior from 5% to about 21%; add another claim and it climbs past 50%.
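Here’s that arithmetic as a small script (a sketch of my own, using the numbers above and treating a second claim as roughly independent), so anyone can check the work or plug in their own estimates:

```python
# The calculation above, spelled out. The numbers are the ones from this post;
# swap in your own estimates for the prior and the two conditional probabilities.

def posterior(prior, p_c_given_h, p_c_given_not_h):
    """P(H | C) via Bayes' theorem."""
    numerator = p_c_given_h * prior
    return numerator / (numerator + p_c_given_not_h * (1 - prior))

prior = 0.05            # background rate assumed above
p_c_given_h = 0.90      # P(claim | actually harassed)
p_c_given_not_h = 0.10  # P(claim | didn't harass)

one_claim = posterior(prior, p_c_given_h, p_c_given_not_h)
two_claims = posterior(one_claim, p_c_given_h, p_c_given_not_h)  # treating the claims as independent

print(f"after one claim:  {one_claim:.0%}")    # ~32%
print(f"after two claims: {two_claims:.0%}")   # ~81% if fully independent; less if the claims are correlated

# Weaker evidence, P(C | H) = 50%:
print(f"{posterior(prior, 0.50, 0.10):.0%}")   # ~21%
```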

With all of the furor brewing over false accusations, and not enough/too much skepticism about claims of sexual harassment, it seems pretty obvious that “[rationalist] common sense” is not prevailing where it should. Sure, you can assert that “extraordinary claims require extraordinary evidence” but you can only prove that by using Bayes.

 
Comments Off on The Charge Of Hyperskepticism/Hyposkepticism

Posted by on August 16, 2013 in Bayes, economics/sociology, rape, rationality

 

Are Atheists Smarter Than Religious People?


So says a new study popularized by Yahoo! News. A minor disagreement with the conclusions of the study was published over at Salon (Christopher Hitchens’ former employer).

Yahoo! News reports that a review of 63 scientific studies over the years has found that religious people are less intelligent than non-believers. The study, by Miron Zuckerman of the University of Rochester, found that there is “a reliable negative relation between intelligence and religiosity” in 53 out of the 63 studies reviewed. This is the case even when intelligent people who are non-believers grow old.

[…]

“Intelligent people typically spend more time in school—a form of self-regulation that may yield long-term benefits,” the researchers write. “People possessing the functions that religion provides are likely to adopt atheism, people lacking these very functions (e.g., the poor, the helpless) are likely to adopt theism.”

[…]

One flaw in the study, though, is that it does not appear to take into account socioeconomic factors. Growing up in a comfortable household impacts a person’s educational levels and professional success, and therefore may influence the religious beliefs of the person.

What do I think? I don’t think it’s likely that atheists are “smarter” than theists. As you’re no doubt aware by now from reading my blog, most of our beliefs are a function of our social standing. Human brains are wired for groupthink, and as such, we will tend to believe and defend those beliefs that confer on us the greatest social/sociological benefits. Meaning that most of our arguments for our beliefs are rationalizations of feelings related to ingroup/outgroup dynamics (unless you’re on the autism side of things). Simply put, given the reasons why people are religious — or hold most any belief, for that matter — religiosity would at most be weakly correlated with intelligence. I don’t want to sound like a conservative blowhard, but in higher-education environments, if you want to fit in socially, dampening your religiosity is probably a good investment. It’s not deterministic, but it probably has a pretty good unconscious effect.

The Salon article does bring up an important objection: Did this study control for socio-economic status? At the country level, poverty/economic inequality is one of the leading indicators of religiosity. It’s like having a cold; religion is the sneezing and economic inequality is the actual virus.

Of course, being rich doesn’t prevent you from believing in irrational claims.

Furthermore, “intelligence” is a fuzzy concept. People can be highly intelligent in their System 1 reasoning (morality/intuition) or they can be highly intelligent in their System 2 reasoning (logic/abstraction). According to MIRI (the AI institute that Eliezer Yudkowsky works for), if we taboo the word “intelligence” we get something like efficient cross-domain optimization. Or, as a simple formula: Intelligence = Optimization Power / Resources Used. So, if someone is able to maximize their optimization power while minimizing the resources they use, then they are — from the vantage point of AI — “intelligent”. What does that mean?

Our “logical” thinking system takes up vast mental resources, while our intuitive thinking system takes up very little. As in my thief and wizard analogy, if you’re able to beat the game using only your thief, this could be seen as a much better use of resources than relying on your wizard, especially if beating the game takes an equal amount of time either way. For a real-world analogy: is someone who is socially successful (i.e. intuitive intelligence) optimizing their goals in life better than someone who is maxed out in logical intelligence?
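As a throwaway illustration of that formula (entirely made-up numbers of my own, just to make the thief/wizard comparison concrete): the cheaper thinker can come out ahead on the ratio even with less raw optimization power.

```python
# Toy illustration of "Intelligence = Optimization Power / Resources Used",
# with entirely made-up numbers for two hypothetical agents.

def efficiency(optimization_power, resources_used):
    return optimization_power / resources_used

thief = efficiency(optimization_power=70, resources_used=20)   # intuitive: decent results, cheap
wizard = efficiency(optimization_power=90, resources_used=60)  # logical: better results, expensive

print(f"thief : {thief:.2f}")   # 3.50
print(f"wizard: {wizard:.2f}")  # 1.50 -- the thief wins on this metric
```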

I personally think that a truly “intelligent” person would be someone who is both an efficient intuitive thinker and an efficient logical thinker; someone who excels at both social and abstract intelligence. And again, by “efficient” I mean maximizing their goals while minimizing their brainpower. If I have kids, I would definitely have them concentrate on being socially successful a bit more than on being logically intelligent; they should learn how to persuade intuitively, and could then have people more logically intelligent than themselves do the heavy cognitive lifting. But it would probably be good to maximize both. Most of the atheists that I first met were/are good at logical reasoning but fail at social reasoning. Moreover, a lot of the more socially intelligent atheists I’ve come to know became atheists for (obviously) non-rational reasons, mired in the many cognitive biases that we have, even though they still had higher-education degrees.

 
1 Comment

Posted by on August 14, 2013 in religiosity

 

The DSM-5 On Delusions


(Are Birthers literally delusional?)

From You needn’t be wrong to be called delusional:

But, contrary to popular belief, the relationship between madness and truth is a complex one. They are made out to be strangers but often they are more like distant cousins.

This relationship has recently been acknowledged with the publication of the new version of the psychiatrists’ diagnostic manual (the DSM-5) where one of the most interesting but less noticed changes has been the redefinition of the delusion, a symptom that has long been considered the “basic characteristic of madness”.

Delusions, in the medical sense, are not simply a case of being mistaken, as the everyday use of the term suggests. They are profound and intensely held beliefs that seem barely swayed by evidence to the contrary – even to the point of believing in the bizarre. My heart has been replaced by steam. My thoughts are being stolen by satellites. The government communicates with me through birdsong.

But many delusions are not outlandishly eccentric, they are simply implausible. Consider the scenario where people believe that their neighbours are conspiring against them or that they are the subject of a film star’s secret affections. Occasionally, these beliefs turn out to be true, but this is not a reliable guide to whether someone is delusional or not. This was memorably illustrated by the psychiatrist Andrew Sims, who warned in his psychopathology textbook Symptoms in the Mind that spouses of people with delusions of infidelity may occasionally be driven to infidelity. This romantic betrayal does not suddenly cure their partner of their mental illness.

The general idea is that delusions represent a problem with how you believe – that is, a problem with forming and changing beliefs – not a problem with what you believe. In other words, simply believing something strange or unusual should not be considered a problem but having “stuck” beliefs that are completely impervious to reality suggests something is mentally awry.

[…]

Instead, the new definition of delusions describes them as fixed beliefs that are unswayed by clear or reasonable contradictory evidence, which are held with great conviction and are likely to share the common themes of psychosis: paranoia, grandiosity, bodily changes and so on. The belief being false is no longer central and this step forward makes it less likely that uncomfortable claims can be dismissed as signs of madness.

This is relevant to my blogging because I blog about Bayes’ Theorem and cognitive science within the context of religious history and beliefs. If someone is completely impervious to updating their beliefs based on new evidence, then this new DSM model suggests that they are leaning towards being delusional. This is kind of tricky territory, because it implies that any belief that is unfalsifiable should be considered a type of delusion.

Of course, in the common lexicon, holding an unfalsifiable belief is simply a faux pas against the social norms of the scientific method. But as I’ve written pretty recently, falsifiability is not just a norm of the social process of science; it follows necessarily from being able to update your beliefs rationally.
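Concretely, here is a two-line check with toy numbers of my own: if a belief assigns the same probability to an observation as its negation does, Bayes’ theorem hands you back your prior unchanged, so no possible observation could ever move you.

```python
# If H is unfalsifiable in the sense that P(E | H) = P(E | ~H) for every
# observation E, then updating on E leaves P(H) exactly where it started.
prior = 0.4
p_e_given_h = p_e_given_not_h = 0.7   # H makes no observation more or less likely than ~H does
posterior = (p_e_given_h * prior) / (p_e_given_h * prior + p_e_given_not_h * (1 - prior))
print(posterior)  # 0.4 -- identical to the prior; the "evidence" can never move it
```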

 
Comments Off on The DSM-5 On Delusions

Posted by on August 13, 2013 in cognitive science

 

Selfish Traits Not Favored By Evolution


According to a report over at the BBC:

Crucially, in an evolutionary environment, knowing your opponent’s decision would not be advantageous for long because your opponent would evolve the same recognition mechanism to also know you, Dr Adami explained.

This is exactly what his team found, that any advantage from defecting was short-lived. They used a powerful computer model to run hundreds of thousands of games, simulating a simple exchange of actions that took previous communication into account.

“What we modelled in the computer were very general things, namely decisions between two different behaviours. We call them co-operation and defection. But in the animal world there are all kinds of behaviours that are binary, for example to flee or to fight,” Dr Adami told BBC News.

“It’s almost like what we had in the cold war, an arms race – but these arms races occur all the time in evolutionary biology.”

[…]

“Darwin himself was puzzled about the co-operation you observe in nature. He was particularly struck by social insects,” he explained.

So we really shouldn’t be wondering where morality comes from. Reading comments on various news websites, you invariably get someone — usually religious — who rhetorically asks why we don’t murder or steal if god doesn’t exist, or what basis we have for morality if no god exists and our morals are entirely secular. This BBC article gives some further evidence that being selfish isn’t a logical or winning strategy. Also, Less Wrong had a Prisoner’s Dilemma game run between programs that people wrote and submitted. The winning strategy there (that is, the strategy hard-coded by the programmers) leaned towards cooperation. Note that this is a game between computer programs; the person just presses play and waits for the outcome.

It seems to be a rule of the universe that defection wins a single instance of the PD, but cooperation is more rational over the long run of an iterated PD. So if you ever find yourself in an iterated PD with someone, let them know that the winning strategy is cooperation… no god required.
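A quick sketch of that dynamic (not the Less Wrong tournament code, just a minimal round-robin of my own using the standard payoff matrix): a lone defector wins individual encounters early on, but reciprocating cooperators end up with the higher totals.

```python
# Minimal iterated Prisoner's Dilemma round-robin (standard payoffs: T=5, R=3, P=1, S=0).
# Not the Less Wrong tournament code -- just a toy demo that reciprocating
# cooperators outscore a lone always-defector over many rounds.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):      # cooperate first, then copy the opponent's last move
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def always_cooperate(opponent_history):
    return 'C'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

players = {'tft_1': tit_for_tat, 'tft_2': tit_for_tat, 'tft_3': tit_for_tat,
           'defector': always_defect, 'cooperator': always_cooperate}
totals = {name: 0 for name in players}

names = list(players)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        sa, sb = play(players[a], players[b])
        totals[a] += sa
        totals[b] += sb

for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, score)
# The tit-for-tat players finish on top; the lone defector wins its first
# moves against each opponent but ends up with the lowest total.
```

Note that the cooperators’ advantage in this toy model depends on there being enough of them to trade cooperation with each other; a lone tit-for-tat agent surrounded by defectors does no better than they do.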

 
Comments Off on Selfish Traits Not Favored By Evolution

Posted by on August 8, 2013 in decision theory

 
 