
Monthly Archives: June 2013

Bayes Theorem And Decision Theory In The Media

This is a clip from the show Family Guy. Here’s my transcription:

Salesman: Hold on! You have a choice… you can have the boat, or the mystery box!

Lois: What, are you crazy? We'll take the boat!

Peter: Whoa, not so fast, Lois. A boat's a boat, but a mystery box could be anything! It could even be a boat! You know how much we've wanted one of those!

Lois: Then let’s just–

Peter: We’ll take the box!

Everyone understands why what Peter Griffin did here was dumb (or maybe you don't, and only laugh because other people laugh?). Peter's failure was a failure of probability: he chose the mystery box when he wanted a boat, instead of just taking the boat outright.

We can put Peter's dilemma into the format of a Bayes factor and see that the evidence favored choosing the boat option (if he wanted a boat) over the mystery box. The Bayes factor here is the probability of getting a boat given that he picked the boat option divided by the probability of getting a boat given that he picked the mystery box. Let B represent getting a boat, O represent picking the original boat option, and ~O represent picking the mystery box.

P(B | O), the probability of getting the boat given that he picked the original option, is 100%. P(B | ~O), the probability of getting the boat given that he picked the mystery box, is some other number. Since it's a mystery box, it could hold any number of other prizes, so its probability mass has to be split between getting a boat and getting something else. Remember that P(B | ~O) + P(~B | ~O) = 100%, so P(B | ~O) = 100% − P(~B | ~O), or 100% minus the probability of not getting the boat given that he picked the mystery box. The big problem is that it's a mystery box, so it can account for any data, and that is exactly why P(B | ~O) has to be much less than 100%.
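
To see how lopsided this is in numbers, here is a minimal sketch; the mystery box's boat probability is a made-up figure purely for illustration, since the whole point is that we don't know it (only that it's well below 100%):

```python
# Bayes factor for getting a boat: boat option vs. mystery box.
p_boat_given_boat_option = 1.0   # P(B | O): picking the boat guarantees the boat
p_boat_given_mystery_box = 0.05  # P(B | ~O): hypothetical; the box could hold anything

bayes_factor = p_boat_given_boat_option / p_boat_given_mystery_box
print(f"The evidence favors the boat option by {bayes_factor:.0f} to 1")  # 20 to 1
```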

But this isn’t the whole story behind Peter’s fail of logic.

Remember my post on decision theory? That applies here as well. In decision theory, you get the expected utility of an option by multiplying the probability of the event happening by the amount of utility (an arbitrary number, equivalent to "happiness points", or "cash" if you're an economist) you would get from it coming to pass. Combining Peter's low-probability mystery box with decision theory, we get one of the classic examples of cognitive biases, the Framing Effect. It goes like this:

Participants were told that 600,000 people were at risk from a deadly disease. They were then presented with the same decision framed differently. In one condition, they chose between a medicine (A) that would definitely save 200,000 lives versus another (B) that had a 33.3 per cent chance of saving 600,000 people and a 66.6 per cent chance of saving no one. In another condition, the participants chose between a medicine (A) that meant 400,000 people will die versus another (B) that had a 33.3 per cent chance that no one will die and 66.6 per cent that 600,000 will die.

Depending on how the question is framed, people flip between the sure option and the gamble, even though the two scenarios are equivalent: framed as lives saved, most pick the certain medicine, but framed as deaths, most pick the 33.3% chance of saving all 600,000. The brain might just be comparing the sizes and ignoring the probabilities.

Analogously, Peter's boat option is equivalent to the 100% chance of saving 200,000 people, and his mystery box option is equivalent to the 33.3% chance of saving 600,000 people (though, like I said, a mystery box has some unknown, but necessarily lower, probability of being a boat). If you simply replace "people" with "utility" in the Framing Effect example, you realize that the two options are roughly equivalent on the positive utility side (33.3% of 600,000 ≈ 200,000). We also have to account for the negative utility of the 66.6% chance of saving no one: that works out to 66.6% of 600,000 lost compared to 100% of 400,000 lost, which are also roughly equal from a utility perspective.
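
To spell out that arithmetic, here is a quick expected-value check of the two medicines (treating each life saved as one point of utility):

```python
# Expected lives saved under each medicine in the Framing Effect example.
p_success = 1 / 3                 # the quoted 33.3 per cent chance
medicine_a = 1.0 * 200_000        # certain: 200,000 saved
medicine_b = p_success * 600_000  # gamble: 600,000 saved a third of the time, else 0

print(medicine_a, medicine_b)     # ~200,000 either way: the same expected utility
# Framed as deaths: 400,000 certain vs. (2/3) * 600_000 = 400,000 expected, also a wash.
```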

And this is why Peter shouldn't have picked the mystery box. We don't actually know the probability of getting the boat given that he chose the mystery box but, like I said, it's necessarily lower than 100%. Similarly, if Peter chose the boat outright, there's a 0% chance of him getting anything else. We also don't know how much utility he'd get from anything else, but we do know that his utility for getting the boat seems to be pretty high. This is the crucial difference between Peter's situation and the Framing Effect example: there the two options carried the same expected utility, whereas Peter's expected utility for the boat is the boat's full utility if he picks it outright but only some unknown fraction of that if he gambles on the mystery box.

Of course, the thing about logical fallacies is that, due to their non-sequitur nature, they are oftentimes used as jokes. That’s why Peter’s choice is also hilarious.

What’s black and rhymes with Snoop? Dr Dre.

Funny, but also a fallacy of equivocation.

It’s probably just a coincidence, but the creator of Family Guy is an atheist. Peter basically chose the “god” option for his explanation instead of the more precise boat option in the above scenario. God represents the mystery box, not only because theists think that god being mysterious is a good thing, but because god, just like the mystery box, can account for any possible data imaginable (even a boat).

 
Comments Off on Bayes Theorem And Decision Theory In The Media

Posted by on June 26, 2013 in Bayes, decision theory, Funny

 

Free Will


So there’s been some discussion in the blogosphere about “free will”. I’ve posted this on a few blogs, so this post is really just me copy/pasting my responses here for posterity. Of course, this is really a rehash of a lot of the links that are already available in my tags cognitive science and sociology/economics. Before I get into it, though, it’s impossible for me to stress this quote enough:

The tool we use to philosophize is the brain. And if you don't know how your tool works, you'll use it poorly.

The first thing we should do when we come upon a complicated question is to try to taboo the conflicting word or phrase. In this case, what happens when we taboo "free will"? If it helps, try making your beliefs pay rent: what would you expect to feel if you do have "free will"? What would it feel like to not have free will? For that negative question, try not to generalize from fictional evidence. When you taboo "free will", you find yourself trying to say that your brain is controlled not by physics but by "you". This implies that "you" has to be something other than physics, which is really just the supernatural.

My reading about why people are religious has had the side effect of turning me wholly against the idea of any sort of free will.

The main reason that people are religious is because our brains are much more optimized for social activity than intellectual activity. We believe things because the groups that we [want to] belong to believe them. As a matter of fact, our human-level intelligence only came about as a way of navigating tribal politics.

People chanting, singing, dancing, or even walking together in synchrony increases group bonding, which makes you much more likely to adopt the beliefs of the group you are involved with. Born again Christians and the military implicitly know this. Being told that you belong to a group that does XYZ makes you try harder to do XYZ (this probably explains why women are less represented in STEM fields than men).

Then there is the art of persuasion, which takes advantage (if you're that kind of person) of all this cognitive architecture; a really good salesman will act like a "detective of influence" without you ever realizing that you've been swayed… e.g. a menu item at a restaurant was marked as being "the most popular" and its sales increased by 13-20%. Related are the neurological differences between the "modules" in the brain dedicated to liking and wanting. They are correlated, but separate. You can like something but not want it and vice versa, because you don't actually have control over those two modules. There's also the concept of the apologist and the revolutionary working in your brain that, again, you have no conscious control over when encountering and making sense of new information. Linked in that article is the pretty weird fact that squirting cold water in your left ear makes you more likely to accept new information you originally didn't believe. It seems exactly like the sort of bug that would happen to a robot.

None of these things happen on a conscious level; they happen at the unconscious level of moral intuition and the feeling of certainty. One study showed that we temporarily adopt the moral intuitions of characters we read about in fiction. We don't have conscious access to the cognitive algorithm that produces feelings like trusting people in a group or feeling certain about something; we just get the end product, that feeling. Our brain is like a government, and the conscious "you" that feels like it experiences the world is more like a press secretary than the president: it tries to explain to the public what the government did rather than actually making the decisions. Overall, we have no control over any of the emotions we feel, yet it's those emotions that drive all of our decisions. Worse yet, we all have the ego to think that we are the rational actors in the drama of life instead of the emotional ones. Moreover (since I arrived at all this from reading about the causes of religion), high income inequality, loneliness, and feeling out of control all subconsciously increase religiosity.

With all of that in mind, I'm having trouble seeing where any sort of free will comes into play. We're a product of our environment at a level that naive introspection simply cannot detect. Naive introspection would just produce the same confusion as this anecdote:

“Forget about minds,” he told her. “Say you’ve got a device designed to monitor—oh, cosmic rays, say. What happens when you turn its sensor around so it’s not pointing at the sky anymore, but at its own guts?”

He answered himself before she could: “It does what it’s built to. It measures cosmic rays, even though it’s not looking at them any more. It parses its own circuitry in terms of cosmic-ray metaphors, because those feel right, because they feel natural, because it can’t look at things any other way. But it’s the wrong metaphor. So the system misunderstands everything about itself. Maybe that’s not a grand and glorious evolutionary leap after all. Maybe it’s just a design flaw.”

Now, if I try to make "free will" pay rent in anticipated experiences, I would expect none of the above evidence about why people are religious to exist. I would expect people to believe in religions simply because they didn't know any better; to be religious because they were "dumb". I would assume that there was a little rational homunculus inside our brains that is just getting corrupted by logical fallacies and cognitive biases.

In reality, there is no separate “you” that exists outside of this list of cognitive biases. You are that list. Full stop.

Take, for example, the rhyme-as-reason effect. This is where you remember an aphorism because a phrase that rhymes is easier to remember than one that doesn't; a rhyming phrase puts less cognitive load on your System 2 than a non-rhyming one. Thoughts subsequently get cached in your brain due to the slow processor speed of your brain-CPU; your neurons fire at a rate of about 100 Hz (the computer I'm using to type this has a processor of about 2.3 GHz, that is, approx. 2,300,000,000 Hz). To speed things up, your brain uses caching, just like your computer. And because your brain doesn't differentiate between the feeling of certainty and the feeling of familiarity, you will unconsciously remember a cached, rhyming argument and then conclude that it's "true".

At what point in this cognitive algorithm do “you” decide on the truth of the aphorism? Like I said, you don’t have inbuilt access to any of this, especially if you don’t study cognitive science. The only thing “you” have access to is the end product; the feeling of certainty. And “you” can’t just decide to feel certain about something.

Going back to making beliefs pay rent, I really can't imagine any subjective difference between having free will and not having free will. For example, most conceptions of "free will" seem to be the ones we get from video games, movies, etc. (e.g. in Dragon's Dogma, your pawns can get possessed by a dragon and then run towards you trying to kill you while telling you that they're not in control of themselves), but I don't think we should generalize from fictional evidence. Fictional evidence is meant to be entertaining, not true.

So if there’s no observational difference between two hypotheses, we should pick the one that has fewer metaphysical coin flips. Positing a brain plus some undetectable homunculus controlling everything has more metaphysical coin flips than just a brain.

Lastly, just because I don’t think free will exists (or is even meaningful) doesn’t mean I don’t want there to be free will.

 
1 Comment

Posted by on June 25, 2013 in cognitive science

 

Game Theory

(Russell Crowe as John Nash in A Beautiful Mind describing some Game Theory)

Again, a post not directly related to religion. This is a post about bare bones rationality. But first, a quote from Robin Hanson:

“Students are often quite capable of applying economic analysis to emotionally neutral products such as apples or video games, but then fail to apply the same reasoning to emotionally charged goods to which similar analyses would seem to apply. I make a special effort to introduce concepts with the neutral examples, but then to challenge students to wonder why emotionally charged goods should be treated differently.”

If you don’t know about Game Theory (GT) you should, since it is a situation that you’ve probably been in in some form or another in life. My first exposure to GT was the Prisoner’s Dilemma (PD).

Let’s say two people who have robbed a bank are under arrest and sitting in jail. The cops don’t actually have enough evidence to get a high probability of conviction, so they try to get them to admit to the robbery. They separate the two criminals and offer each a plea deal if they admit that the other one was involved.

If neither of them accuses the other, they go to trial with a 50% chance of getting convicted. If they both accuse each other, each has a 66% chance of getting convicted. If only one accuses the other, the accused has a 75% chance of getting convicted while the accuser is granted immunity. (The mutual-accusation chance has to sit between 50% and 75%; otherwise betraying wouldn't always beat staying silent, and this wouldn't be a true Prisoner's Dilemma.)

Which would you choose, if you were one of the criminals? From Wikipedia:

Because betrayal always rewards more than cooperation, all purely rational self-interested prisoners would betray the other, and so the only possible outcome for two purely rational prisoners is for them both to betray each other. The interesting part of this result is that pursuing individual reward logically leads the prisoners to both betray, but they would get a better reward if they both cooperated. In reality, humans display a systematic bias towards cooperative behavior in this and similar games, much more so than predicted by simple models of “rational” self-interested action

There is also an extended “iterative” version of the game, where the classic game is played over and over between the same prisoners, and consequently, both prisoners continuously have an opportunity to penalize the other for previous decisions. If the number of times the game will be played is known to the players, then (by backward induction) two purely rational prisoners will betray each other repeatedly, for the same reasons as the classic version.

[…]

Doping in sport has been cited as an example of a prisoner’s dilemma. If two competing athletes have the option to use an illegal and dangerous drug to boost their performance, then they must also consider the likely behaviour of their competitor. If neither athlete takes the drug, then neither gains an advantage. If only one does, then that athlete gains a significant advantage over their competitor (reduced only by the legal or medical dangers of having taken the drug). If both athletes take the drug, however, the benefits cancel out and only the drawbacks remain, putting them both in a worse position than if neither had used doping.

Which would you choose?

As is evidenced by the reference material in Wikipedia, the rational decision is to betray your accomplice, since that has the highest personal payoff. Of course, most people are not rational and have a bias towards cooperation. Which is even worse for them, because people who actually are rational and study rationality would win against them almost every time, at least in single-play PD games.
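
A minimal sketch checking that dominance against the conviction chances given above (lower is better for each prisoner):

```python
# P(conviction) for me, indexed by (my_move, their_move).
# "silent" = cooperate with my accomplice, "accuse" = defect.
p_convicted = {
    ("silent", "silent"): 0.50,
    ("silent", "accuse"): 0.75,  # I stay silent while they accuse me
    ("accuse", "silent"): 0.00,  # I accuse and get immunity
    ("accuse", "accuse"): 0.66,
}

for their_move in ("silent", "accuse"):
    best = min(("silent", "accuse"), key=lambda my_move: p_convicted[(my_move, their_move)])
    print(f"If they go {their_move}, my best response is to {best}")
# Accusing wins either way, even though mutual silence (50%) beats mutual accusation (66%).
```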

Here is another good example of the PD when it comes to other types of collaboration:

Say X is a writer and Y is an illustrator, and they have very different preferences for how a certain scene should come across, so they’ve worked out a compromise. Now, both of them could cooperate and get a scene that both are OK with, or X could secretly change the dialogue in hopes of getting his idea to come across, or Y could draw the scene differently in order to get her idea of the scene across. But if they both “defect” from the compromise, then the scene gets confusing to readers. If both X and Y prefer their own idea to the compromise, prefer the compromise to the muddle, and prefer the muddle to their partner’s idea, then this is a genuine Prisoner’s Dilemma.

Here is yet another example of the PD, this time an iterated version, written by a member of Less Wrong (that blog devoted to refining the art of human rationality):

Ganaj: Hey man! I will tell you your fortune for fifty rupees!

Philosophical Me: Ganaj, are you authorized to speak for all Indian street fortune-tellers?

Ganaj: By a surprising coincidence, I am!

Philosophical Me: Good. By another coincidence, I am authorized to speak for all rich tourists. I propose an accord. From now on, no fortune-tellers will bother any rich tourists. That way, we can travel through India in peace, and you’ll never have to hear us tell you to “check your [poor] privilege” again.

Ganaj: Unfortunately, I can’t agree to that. See, we fortune-tellers’ entire business depends on tourists. There are cultural norms that only crappy fraudulent fortune-tellers advertise in the newspapers or with signs, so we can’t do that. And there are also cultural norms that tourists almost never approach fortune-tellers. So if we were to never accost tourists asking for business, no fortunes would ever get told and we would go out of business and starve to death.

Philosophical Me: At the risk of sounding kind of callous, your desire for business doesn’t create an obligation on my part or justify you harassing me.

Ganaj: Well, think about it this way. Many tourists really do want their fortunes told. And a certain number of fortune-tellers are going to defect from any agreement we make. If all the cooperative fortune-tellers agree not to accost tourists the defecting fortune-tellers will have the tourists all to themselves. So if we do things your way, either you’ll never be able to get your fortune told at all, or you’ll only be able to get your fortune told by a defecting fortune-teller who is more likely to be a fraud or a con man. You end up fortuneless or conned, we end up starving to death, and the only people who are rich and happy are the jerks who broke our hypothetical agreement.

This is a situation that you’ve most definitely been involved in. Not because it’s about being confronted by Indian beggars, but because it’s an allegory for dating. So again, which one would you choose: cooperate or defect? Or which one have you chosen?

To see how this allegory is one about dating, I’ll have to make it more explicit. Say in some alternate universe, women who wanted to be approached by men on the street/at a coffee shop/dance event/etc. wore some sort of special wristband. The agreement was that men should only approach women with the wristband, and women would expect to be approached if they were wearing the wristband and would be friendly and welcoming when approached.

Because society is what it is with humans running around with their bunny brains, some men decide to approach even women without the wristband, and sometimes they get positive responses/dates and what have you; they’ve defected from the agreement and got more dating opportunities than by just approaching women wearing the wristband. Again, society being what it is, some women wore the wristband because they liked getting attention from men; even if they weren’t open to dating because they were married or in a relationship. Over time, men notice these other men getting more dates and eventually all men defect. Over time, women notice that other women are getting more attention just by wearing the wristband and eventually the wristband loses its initial function (some women even wear the wristband just because they like the color, and then get upset when they get approached); all the women have defected.

Dating. One massive game theory defection scheme.

As the Game Theorists have noted, if everyone defects in the iterated PD, then eventually this becomes a losing strategy. The issue with dating is that the iteration takes place over numerous generations. Meaning that the initial defectors are dead by the time the damage has been done and everyone has started losing out because everyone is now defecting. But why would a rational person not defect after multiple generations? Agreeing to the now generations-old cooperation scheme only results in the other prisoner reaping the rewards.
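
As a toy illustration of why all-defect becomes a losing rut over repeated rounds, here is a minimal sketch; the payoff numbers (3 each for mutual cooperation, 1 each for mutual defection, 5 and 0 for a lone defector and their victim) are the commonly used textbook values, not anything from the dating story:

```python
def payoff(me, them):
    """My points for one round: 'C' = cooperate, 'D' = defect."""
    return {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}[(me, them)]

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(history_b), strategy_b(history_a)  # each sees the other's past moves
        score_a += payoff(a, b)
        score_b += payoff(b, a)
        history_a.append(a)
        history_b.append(b)
    return score_a, score_b

always_defect = lambda opponent_history: "D"
tit_for_tat = lambda opponent_history: opponent_history[-1] if opponent_history else "C"

print(play(always_defect, always_defect))  # (100, 100): everyone defecting loses out
print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation does far better
```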

What I think is the worst part of GT and other decision theories is that we’re not consciously aware of our values. Playing a game of “let’s pretend” doesn’t solve that intractable problem. Furthermore, as I’ve been writing about a lot recently, our values are socially constructed. Our “conscious mind” is more like a press secretary for the government, meant to put a feel-good spin on whatever it is that our unconscious mind (the actual government) values and the actions we make due to those unconscious values. This is the main reason why people are religious, why some are suicide bombers, why some are feminists, and the main reason for many other political groupings.

The link to religion will be made in a subsequent post 🙂

 
2 Comments

Posted by on June 19, 2013 in decision theory

 

Do People Not Really Believe In Paradise?

20130610-191001.jpg

(Like herding rabbits)

Over at his blog, Sam Harris says that Scott Atran claims that people don't actually believe in the tenets of their religion, specifically the concept of heaven. To quote:

According to Atran, people who decapitate journalists, filmmakers, and aid workers to cries of “Allahu akbar!” or blow themselves up in crowds of innocents are led to misbehave this way not because of their deeply held beliefs about jihad and martyrdom but because of their experience of male bonding in soccer clubs and barbershops. (Really.) So I asked Atran directly:

“Are you saying that no Muslim suicide bomber has ever blown himself up with the expectation of getting into Paradise?”

“Yes,” he said, “that’s what I’m saying. No one believes in Paradise.”

This can’t possibly be correct. Maybe Harris has misunderstood Atran’s response to his question, but there are certainly people who really believe in the dogmas of their religion. Whatever the cause of this belief is, the end product of the belief is all that matters. And as I’ve posted about recently, one of the ways to influence peoples’ decisions is to appeal to their desire for [ideological] consistency.

Surely, some people believe in heaven as a propositional statement that signals inclusion in the tribe. But the same goes for many other propositional statements; people believe in the theory of relativity as a signal of inclusion in the tribe, too. And there are certainly people who don't actually believe in heaven but believe in the belief in heaven, because they consider belief in heaven to be virtuous. Just as I'm sure there are people who believe in the belief in the theory of relativity, mainly people who simply don't understand it and don't want to be seen as backwards bumpkins in an increasingly scientific world. I mean, not many people have the free time to learn and understand the mathematical equations behind relativity for themselves, so most people just "believe" in relativity since that's just what you're supposed to believe. If Atran is correct, no one "really" believes in anything.

Anyway, Harris points out some other things in the videos on his page that also support a lot of what I've mentioned recently about the causes of religion.

Again, I’m going to re-post this post about how our brains are wired:

  • Human brains are effectively populated by rabbits. Your conscious mind is like a very small person attempting to ride a large herd of rabbits, which aren’t all going the same direction. Your job is to pretend to be in control, and make shit up to explain where the rabbits went, and what you did.
  • Humans’ bunny brains are optimized for social activity, not intellectual activity. If your brain thinks principles first, instead of groups first, it’s broken, and not just a little bit.
  • Of course, this means that anyone thinking group first is almost completely full of crap regarding their reasoning process. They’re (99.86% certainty) making shit up that makes the group look good, and the actual rational value of the statement is near zero. The nominal process “A->B->C” is actually C, now let’s backfill with B and A.
  • Therefore I’m almost only interested in listening to folks who are group-free. If your brain is broken in the kind of way that prohibits group-attachment…then you’re far far more likely to be thinking independently, and shifting perspectives.
  • Aside: FWIW, this is the core (unsolvable?) problem that inhabits rationalist groups. There is a deep and abiding conflict between groupism and thinking. The Randians have encountered this most loudly, but it’s also there in the libertarians, the extropians, the David Deutsch-led Popperian rationalists, and the LessWrongers.

    New discovery, shouldn’t have been as surprising as it was. When looking for folks who are group-avoidant, I seem to have phenomenally good luck finding great people when talking with Gays from non-leftist areas (rural Texas, Tennessee, downstate Illinois). Because they don’t/can’t fit in with their local culture, and often can’t conveniently exit, they become interesting people. It’s a surprisingly good metric.

    […]

    Most people have a group of 5 bunnies that are rather muscular bunnies that focus on group dynamics, group belonging, etc. Their preferences are aligned enough that they usually pull in the same direction. In practice, this means that in conflicts, this particular group of bunnies gets their way most of the time. There is also another bunny who is usually weak and sickly (or a frog) who checks for ideational consistency. That frog usually moves backwards.

    In some rare folks, the frog is unusually muscular. Not a normal frog or even a bullfrog, but a big-ass pixie frog who eats rats. He gets what he wants a little bit. Or he has a buddy: 2 giant pixie frogs. These people would land in what Simon Baron Cohen (autism researcher) talks about as high on the systematizing scale. Now, some other rare folks would have group bunnies that were sick…they had polio as baby bunnies. One of the 5 died. The other 4 are crippled and can’t walk effectively.

    If you run into a person who (a) has crippled group bunnies, and (b) has giant pixie-frogs…then you get a different approach to cognition than you see in most.

    That doesn’t say it’s better.

    FWIW, the book that most informed my thinking on Rabbit-Brains is “Everyone (Else) is a Hypocrite” by Robert Kurzban. Fabulous book. Rabbits are my wording.

  • Rabbits!

The videos that Harris has on his blog point out a number of things that help explain why Islam is such a successful religion:

1. People chanting together increases bonding (basically getting all of the rabbits in line), which then makes people more likely to strive for ideological consistency. Since religions, Islam included, are inherently contradictory, it's child's play to make someone do despicable things in the name of "religion" through select readings or "cherrypicking".
2. More on consistency: if you are told that you are a member of a group that does XYZ, then you will try harder than normal to do XYZ.
3. People will subconsciously adopt the morality of characters they read about in fiction. So whether Jew, Christian, Muslim, Hindu, etc., if you are told to read select passages from those holy books depicting the main character doing violent acts, you will subconsciously adopt that morality. At least temporarily.
4. Moral judgements are made intuitively, meaning that we don't have access to the underlying cognitive algorithm that produces our feeling of certainty about a moral action. So it's probably true that people subconsciously ascribe a moral judgement to what their tribe/society would want, while their conscious mind says "This is for Allah!".
5. The larger sociological issue: economics. High levels of economic inequality are correlated with high levels of religiosity. It doesn't get any more economically unequal than a few fat-cat sultans living it up while the vast majority of the rest of the people live in poverty.

So if I attempt to steelman Atran's response as I've come to understand it, he's basically saying that people arrive at their deeply held beliefs about jihad and martyrdom via the road of their experience of male bonding in soccer clubs and barbershops. Just as I've arrived at the destination of being proud of my military service via the road of bootcamp, because bootcamp is intentionally designed that way. It would be a strawman to say that I'm not "really" proud of my military service, but that's the impression I get from Harris' presentation of Atran's argument. Which makes no sense.

But there's another possible explanation.

It could be that Atran really did say what Harris said, and confused himself by mistaking explaining something for explaining it away:

John Keats’s Lamia (1819) surely deserves some kind of award for Most Famously Annoying Poetry:

…Do not all charms fly
At the mere touch of cold philosophy?
There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.
Philosophy will clip an Angel’s wings,
Conquer all mysteries by rule and line,
Empty the haunted air, and gnomed mine—
Unweave a rainbow.

[…]

Apparently “the mere touch of cold philosophy”, i.e., the truth, has destroyed:

Haunts in the air
Gnomes in the mine
Rainbows

[…]

The rainbow was explained. The haunts in the air, and gnomes in the mine, were explained away.

I think this is the key distinction that anti-reductionists don’t get about reductionism.

You can see this failure to get the distinction in the classic objection to reductionism:

If reductionism is correct, then even your belief in reductionism is just the mere result of the motion of molecules—why should I listen to anything you say?

The key word, in the above, is mere; a word which implies that accepting reductionism would explain away all the reasoning processes leading up to my acceptance of reductionism, the way that an optical illusion is explained away.

So the fact that someone was convinced of heaven by way of tribal politics does not mean they don't "really" believe in heaven. That would be explaining away their belief in paradise, which is fallacious. There really should be a formal name for this fallacy, so I'll hereby christen it the explaining away fallacy.

‘Hasn’t it ever occurred to you that in your promiscuous pursuit of women you are merely trying to assuage your subconscious fears of sexual impotence?’

‘Yes, sir, it has.’

‘Then why do you do it?’

‘To assuage my fears of sexual impotence.’

     

The Biggest Challenges to Staying Christian


Courtesy of Adam Lee of Daylight Atheism:

On Patheos’ evangelical Christian channel, Peter Enns has been soliciting comments from his readers about what the greatest challenges are to remaining Christian. He got hundreds of responses, and he’s compiled a list of five common themes in the answers:

1. The Bible, namely inerrancy. This was the most commonly cited challenge, whether implicitly or explicitly, and it lay behind most of the others mentioned. The pressure many of you expressed was the expectation of holding specifically to an inerrant Bible in the face of such things as biblical criticism, contradictions, implausibilities in the biblical story, irrelevance for life (its ancient context), and the fact that the Bible is just plain confusing.

2. The conflict between the biblical view of the world and scientific models. In addition to biological evolution, mentioned were psychology, social psychology, evolutionary psychology, and anthropology. What seems to fuel this concern is not simply the notion that Scripture and science offer incompatible models for cosmic, geological, and human origins, but that scientific models are verifiable, widely accepted, and likely correct, thus consigning the Bible to something other than a reliable description of reality.

3. Where is God? A number of you, largely in emails, wrote of personal experiences that would tax to the breaking point anyone’s faith in a living God who is just, attentive, and loving. Mentioned were many forms of random/senseless suffering and God’s absence or “random” presence (can’t count on God being there).

4. How Christians behave. Tribalism, insider-outsider thinking; hypocrisy, power; feeling misled, sheltered, lied to by leaders; a history of immoral and unChristian behavior towards others (e.g., Crusades, Jewish pogroms). In short, practically speaking, commenters experienced that Christians too often exhibit the same behaviors as everyone else, which is more than simply an unfortunate situation but is interpreted as evidence that Christianity is not true; more a crutch or a lingering relic of antiquity than a present spiritual reality.

5. The exclusivism of Christianity. Given 1-4 above, and in our ever shrinking world, can Christians claim that their way is the only way?

Adam Lee has his own thoughts on the significance of this, which are good, but I want to write my own.

Why be concerned about what Christians are struggling with? I don't have a problem with Christianity per se, but I have a problem with groupthink (which is a much larger problem, and one that atheists aren't immune to). A few years ago I might have said that Christianity itself is problematic, but this assumes that there is one true version of Christianity. Even though Christian "orthodoxy" tries to paint that picture in history, and even though modern Christians might try to promote that idea, there never was one true version of Christianity, nor will there ever be. Again, the problem is tribal politics, which will cause Christians to act like jerks for the tribe, even if their rationale uses religious wording:

Subject: You and [girlfriend],

Hi [boyfriend],

I see that you and [girlfriend] are ratcheting up your relationship. As I said before, this puts your family in a very difficult situation.

Although it seems you have made up your mind about this, I want to make sure that you are aware of the scriptures on this.

The most helpful passage about marrying an unbeliever can be found at 2 Cor 6: 14 Do not be yoked together with unbelievers. For what do righteousness and wickedness have in common? Or what fellowship can light have with darkness? 15 What harmony is there between Christ and Belial[a]? What does a believer have in common with an unbeliever?

Besides this there are numerous Old Testament passages in which Israelite men married non-believing women from other nations, always to the displeasure of the Lord. For example, in Ezra 10, Israel is rebuked for their marriage to foreign wives: 10 Then Ezra the priest stood up and said to them, “You have been unfaithful, you have married foreign women, adding to Israel’s guilt. 11 Now make confession to the LORD, the God of your fathers, and do his will. Separate yourselves from the peoples around you and from your foreign wives.” 12 The whole assembly responded with a loud voice: “You are right! We must do as you say.”

When you think about it, it only makes sense. What is more fundamental to a person, their values, their world view, their preferences and convictions, than their true religious beliefs?

More to the point for the Christian, how can we justify joining ourselves as one with someone who is opposed to what we believe and hold dear, our relationship to Jesus?

I say all this [boyfriend] because while I love you dearly, I am quickly coming to a point where lines must be drawn. As your relationship picks up, so does my unease with the two of you.

I am sorry it has come to this [boyfriend]. I sincerely hope that I am wrong. But nothing I see in your relationship, nothing in the way [girlfriend] presents herself, gives me any hope. And it grieves me that you do not seem moved by this at all. Quite frankly, this has struck me as one of those times when you set yourself to do what you want, regardless of the truth of the situation.

I suggest that you, [girlfriend], and I meet. Unless and until we hear her beliefs about Christ, this uneasy relationship will continue. In fact, it will become worse.

Love,

Dad

If Christianity changed to become a religion that prevented stuff like that, then I wouldn't have much of a problem with it. Unfortunately, because there isn't a tradition of overcoming bias in Christianity, I don't know if that's even possible. We'll continue to get stories like the above, and life-threatening instances of misogyny, due to the built-in focus on valuing the dogma of the religion instead of the well-being of the very real human beings practicing it. In order for Christianity to become more socially acceptable in the modern world, it has to become more like science. And it seems like that's impossible while Christianity remains a religion with a mysterious god.

     
Comments Off on The Biggest Challenges to Staying Christian

Posted by on June 7, 2013 in religiosity

     

Logical Fallacies as Weak Bayesian Evidence: Argument from Anecdote

Another juicy logical fallacy that gets repeated over and over on teh Internetz due to how thoughts are cached in your brain like webpages on your computer. Again, the problem is that people treat anecdotes as strong Bayesian evidence, or even as a Prosecutor's-fallacy-like conclusion, when in all likelihood they're just very weak Bayesian evidence. But evidence is evidence nonetheless. It's like pressing the gas pedal in your car: you can press it so that you speed up by 50 mph or by 5 mph, but acceleration is acceleration, whether strong or weak.

So, the argument from anecdote. This is taking an event that happened to you personally and using it in an argument for a general explanation. Let's take a claim that I obviously think is false, like ghost stories. Someone tries to convince me that ghosts are real because they once heard a creaky floor in an old abandoned house and got a feeling of dread. Obviously, this isn't conclusive evidence of the existence of ghosts. But assuming that ghosts are real, this would "fit" into that worldview. And that's the rub.

I personally think there's a 1 in a trillion chance that ghosts are real, so I'll use that number to demonstrate why an anecdote can still be used as evidence but not as a conclusion. Just like in my example of falsifiability using Bayes' Theorem, let's assume that I have a jar with two types of dice: normal dice with faces labeled 1 through 6, and a trick die with a 5 on every side. But in this instance, the jar is filled with 999,999,999,999 normal dice and only 1 (one!) trick die.

If I grab a die at random from the jar and roll a 5, the Bayes factor says I should divide the probability of rolling a 5 given that I've grabbed the trick die by the probability of rolling a 5 given that I've grabbed a normal die. Given the trick die, the probability of rolling a 5 is 100%. Given a normal die, the probability of rolling a 5 is about 16.7%. This quotient is greater than 1, which means that rolling a 5 is evidence for having grabbed the trick die. But the prior probability of grabbing the trick die is basically a trillion to one against, so in the end it is still much more likely that I grabbed a normal die.
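
Here is a minimal sketch of that calculation, combining the Bayes factor with the trillion-to-one prior:

```python
# Jar: 999,999,999,999 normal dice and 1 trick die whose every face shows 5.
prior_trick = 1 / 1_000_000_000_000
prior_normal = 1 - prior_trick

p_five_given_trick = 1.0      # every face is a 5
p_five_given_normal = 1 / 6   # about 16.7%

bayes_factor = p_five_given_trick / p_five_given_normal
posterior_trick = (p_five_given_trick * prior_trick) / (
    p_five_given_trick * prior_trick + p_five_given_normal * prior_normal
)

print(bayes_factor)     # 6.0: rolling a 5 is (weak) evidence for the trick die
print(posterior_trick)  # ~6e-12: almost certainly still a normal die
```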

Regardless, rolling a 5 is weak Bayesian evidence for having grabbed the trick die. Just like a ghost story anecdote is weak Bayesian evidence for the existence of ghosts.

Let's try a more controversial anecdote, like "black people are stupid". Say someone grows up with this worldview of black people having a low IQ while never having met a black person. The first time he meets a real live black person, that person happens to be in one of his high school classes and is the worst student. Assuming the hypothesis is true, this anecdote fits that worldview; maybe not at 100% like the trick die, but with a high probability. On the flip side, assuming this worldview is false, there's a much lower probability that this would happen. Under the alternative hypothesis, black people are no different from anyone else, so roughly half would be below average intelligence, meaning there's about a 50% chance of seeing something like this.

As it stands, the racist hypothesis puts more probability capital on seeing something like this than the non-racist hypothesis does, so the racist hypothesis gets the evidence cash-out from this anecdote, just like the trick die compared to the normal die. So again, in this case an anecdote can be legitimately used as Bayesian evidence. It might be strong or weak evidence, but it's evidence.

     
Comments Off on Logical Fallacies as Weak Bayesian Evidence: Argument from Anecdote

Posted by on June 6, 2013 in Bayes, logical fallacies as weak bayesian evidence

     

The Six Ways To Influence People


Over at the blog Bakadesuyo, Eric Barker interviews Dr. Robert Cialdini about the six ways to influence people. What does this have to do with religion? Well, here Dr. Cialdini describes the difference between "true" and "false" influence, what he refers to as a detective of influence versus a smuggler of influence:

My sense of the proper way to determine what is ethical is to make a distinction between a smuggler of influence and a detective of influence. The smuggler knows these six principles and then counterfeits them, brings them into situations where they don’t naturally reside.

The opposite is the sleuth’s approach, the detective’s approach to influence. The detective also knows what the principles are, and goes into every situation aware of them looking for the natural presence of one or another of these principles. If we truly do have authority in the topic, if we locate it as inherently present, we can simply bring it to the surface and make people aware of it. If we truly do have social proof, we can bring that to the surface. If we truly do recognize that people have made a commitment, or have prioritized a particular value that is consistent with what we can provide, we can show them that congruency and let the rule for commitment and consistency do the work for us.

That’s the difference, the difference between manufacturing, fabricating, counterfeiting the presence of one or another of these principles in a situation, versus identifying and then uncovering it for our audience members so that it simply becomes more visible to them as something that’s truly present in the situation.

I think it's obvious which version religion implements: the version that seems like a microcosm of the dark arts.

Of course, there's more to the article. Again, this feeds back to my recent bender on how groupthink is the main causative agent for rampant religiosity:

Social Proof

People will be likely to say yes to your request if you give them evidence that people just like them have been saying yes to it, too. For example, I saw a recent study that came from Beijing. If a manager put on the menu of the restaurant, “These are our most popular dishes,” each one immediately became 13 to 20 percent more popular. What I like about that is, not only did a very small change produce a big effect, it was entirely costless and entirely ethical. It was only the case that these popular items were identified as popular items. That was enough to cause people to want to go along with what they saw as the wisdom of the crowd.

Then there is the scarcity aspect of religion, especially the great monotheisms of the West:

People will try to seize those opportunities that you offer them that are rare or scarce, dwindling in availability. That’s an important reminder that we need to differentiate what we have to offer that is different from our rivals or competitors. That way we can tell people honestly, “You can only get this aspect, or this feature, or this combination of advantages by moving in the direction that I’m recommending.”

Christianity, Judaism, and Islam all operate from a scarcity mentality. Salvation is scarce, sacred land is scarce, even god himself is scarce since there is only one god available to worship. According to Biblical scholar Hector Avalos, this scarcity mentality is one of the main causes for religious violence. The worst part of it is that this scarcity is all invented; it’s a smuggling of influence.

     
1 Comment

Posted by on June 4, 2013 in cognitive science

     
     