
Big gods came after the rise of civilisations, not before, finds study using huge historical database

When you think of religion, you probably think of a god who rewards the good and punishes the wicked. But the idea of morally concerned gods is by no means universal. Social scientists have long known that small-scale traditional societies – the kind missionaries used to dismiss as “pagan” – envisaged a spirit world that cared little about the morality of human behaviour. Their concern was less about whether humans behaved nicely towards one another and more about whether they carried out their obligations to the spirits and displayed suitable deference to them.

Nevertheless, the world religions we know today, and their myriad variants, either demand belief in all-seeing punitive deities or at least postulate some kind of broader mechanism – such as karma – for rewarding the virtuous and punishing the wicked. In recent years, researchers have debated how and why these moralising religions came into being.

Now, thanks to our massive new database of world history, known as Seshat (named after the Egyptian goddess of record keeping), we’re starting to get some answers.

Read more at The Conversation


Posted on March 22, 2019 in religion

 

What Are The Strongest Arguments For Atheism?

The strongest arguments for atheism aren't really about atheism at all; they're about how the human mind works and what makes something a good explanation.

We know why we have particular emotions. Anger protects us from harm, and thus from dying. Feelings of friendship and bonding with others give us access to resources and mates; it's very hard to survive alone without others to teach you how or to build the infrastructure that makes it possible. Many other animals have these same emotions, and for the same reasons: to avoid dying and/or to perpetuate their genes. The ones that don't have these emotions usually die pretty quickly.

Why would a god have these emotions? There’s no underlying reason for a god to love or to get angry. A god that loves makes about as much sense as a god with a penis.

And then there are the reasons we believe things in the first place. How are our beliefs formed? Many of the things we believe, we believe because of our feeling of certainty:

A newspaper is better than a magazine. A seashore is a better place than the street. At first it is better to run than to walk. You may have to try several times. It takes some skill, but it is easy to learn. Even young children can enjoy it. Once successful, complications are minimal. Birds seldom get too close. Rain, however, soaks in very fast. Too many people doing the same thing can also cause problems. One needs lots of room. If there are no complications, it can be very peaceful. A rock will serve as an anchor. If things break loose from it, however, you will not get a second chance.

Is this paragraph comprehensible or meaningless? Feel your mind sort through potential explanations. Now watch what happens with the presentation of a single word: kite. As you reread the paragraph, feel the prior discomfort of something amiss shifting to a pleasing sense of rightness. Everything fits; every sentence works and has meaning. Reread the paragraph again; it is impossible to regain the sense of not understanding. In an instant, without due conscious deliberation, the paragraph has been irreversibly infused with a feeling of knowing.

Try to imagine other interpretations for the paragraph. Suppose I tell you that this is a collaborative poem written by a third-grade class, or a collage of strung-together fortune cookie quotes. Your mind balks. The presence of this feeling of knowing makes contemplating alternatives physically difficult.

Did you find yourself unable to explain the paragraph using some other concept? Take note of that: you really don't have any control over how certain you feel about things. Just like other emotions, the feeling of certainty is generated unconsciously. The next obvious question is: what sort of brain algorithm generates your feeling of certainty? More on that below.

Experience teaches us which stimuli make us angry, jealous, happy, sad, and so on. Sometimes the feeling is unwarranted, and through self-reflection we can determine that feeling angry about a particular situation isn't justified. What's dangerous is this: our feeling of certainty feels good. At least, it's much more pleasant than the feeling of uncertainty. And so, unlike with, say, jealousy, we generally never stop to reflect on whether our feeling of certainty might be wrong.

The rabbit hole of why we believe what we do goes a lot deeper than this. Books like Thinking, Fast and Slow cover many of our cognitive biases. The major premise of that book is that we have two types of thought engines: a "fast" engine (System 1) and a "slow" engine (System 2). These two engines are good at different tasks: the fast one is good at recognizing faces or voices, the slow one is good at math; the fast one is good at social interaction, the slow one is good for abstract, impersonal concepts.

Generally, the fast engine is the one in charge, and it is responsible for telling the slow engine to start up (it's also the one responsible for the feeling of certainty). The problem is that the fast engine has to be trained on when to handle a task itself and when to hand the problem over to the slow engine, and it's not very good at doing this intuitively. For many of us, a question has already been answered by the fast engine, and only when we're challenged does the fast engine call in the slow engine: to defend the fast engine's conclusion. And much of the time, that conclusion serves some social goal: status, friendship, not ending up dead, and so on.

Our brains are actually more complicated, or more modular, than the System 1 and System 2 framing suggests. There seem to be multiple modules in our brains, and the modules that use information don't explain their "reasoning" to the modules that talk to the outside world. Our brains are more like Congress, with some congresspeople acting on behalf of the overall "fast" engine or "slow" engine. The you that you feel is "you", speaking to the outside world, is more like the press secretary for that Congress.

There are a few experiments showing that when communication is physically severed between the two halves of the brain, each hemisphere receives different information. Yet the part of the brain that does the speaking might not be the part that has the information. So you end up with rationalizations: a split-brain patient grabs a shovel with their left hand (because their left visual field, which feeds the right hemisphere, was shown snow) while their speaking hemisphere only sees a chicken. When asked to explain why they grabbed the shovel, they, or rather the side of the brain that only sees the chicken, make up an explanation: the shovel is for scooping up chicken poop! That press secretary is pretty quick on his feet.

But this doesn’t just happen with split brain patients. It seems to happen a lot more than we think, in our normal, everyday brains.

So, for example, there was one experiment where people were asked to pick their favorite pair of jeans out of four (unbeknownst to them) identical pairs. A good portion of the people picked the pair on the right, since they looked at the jeans from left to right. But they were unaware that this was their decision algorithm, and they rationalized their choice by citing the fabric or the length or some other non-discriminating feature. Liking the fabric of one pair more than the others was demonstrably false, since the jeans were identical, yet that was the reason they gave. There's no persistent across-the-aisle partisanship in your fully functioning brain either, so the press secretary still has to come up with a good, socially acceptable story about Congress's decision for the general public's consumption. The part of our brain that 'reasons' and explains our actions neither makes decisions nor is even privy to the real causes of our actions.

The tl;dr version is this: our brains are good at social goals, and unless we've been trained otherwise, they're not so good at forming true beliefs about the non-social world. If we had a machine that was designed to analyze electromagnetic radiation as seen in space, and we pointed that machine at its own circuitry, it would interpret everything about itself as cosmic rays. Similarly, we have a machine (our brain) that interprets everything through the lens of social interaction; pointed at the universe, it interprets everything in the universe as some manifestation of social rules.

And this is what happens. Our default is to treat a lot of non-social things as social. It's why things like animism and magical thinking are prevalent. It's why we call planets "planets" (Greek for "wanderer") and the Milky Way a galaxy (gala is Greek for milk; in our case, Hera's milk). If someone "thinks really hard" about a problem, they're more than likely using the tools meant for social problems, not the tools meant for solving non-social questions.

So if we don’t have control over our feeling of certainty, what’s a System 2 way of making sure that we have correct beliefs about non-social things? How can we be sure that we aren’t just defending a belief that we initially arrived at unconsciously? Or, more generally, what are some unbiased traits that good explanations share? What makes something a bad explanation? Since we’re operating under uncertainty (since we can’t trust our feeling of certainty), we have to use methods for explaining our uncertainty logically and consistently.

Let’s look at the Linda problem:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

  1. Linda is a bank teller.
  2. Linda is a bank teller and is active in the feminist movement.

What does this have to do with good explanations? Most people will say that it's more likely that Linda is a bank teller and is active in the feminist movement. While that might seem true socially (being a feminist and a bank teller tells a better story about Linda), it's mathematically impossible: the population of people who are bank tellers is larger than the population of people who are bank tellers and active in the feminist movement. That's why this is called the conjunction fallacy. The conjunction of A and B can never be more probable than A alone (or B alone).
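To make the conjunction rule concrete, here's a minimal sketch in Python. The specific probabilities are invented for illustration (the Linda problem supplies none); the only point is that tacking on a conjunct can never raise the probability.

```python
# Hypothetical numbers for the Linda problem, chosen only to show
# that P(A and B) can never exceed P(A), no matter how fitting B feels.

p_teller = 0.05                  # assumed: P(Linda is a bank teller)
p_feminist_given_teller = 0.90   # assumed: P(feminist | bank teller), generously high

p_teller_and_feminist = p_teller * p_feminist_given_teller

print(f"P(bank teller)              = {p_teller:.3f}")               # 0.050
print(f"P(bank teller AND feminist) = {p_teller_and_feminist:.3f}")  # 0.045
assert p_teller_and_feminist <= p_teller  # the conjunction is never more probable
```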

All else being equal, a good explanation has fewer unnecessary conjunctions than a bad one. Taken further, an explanation whose conjuncts are two known facts is more likely than an explanation resting on one known fact and one unknown conjecture. The more unknowns you use to support your explanation, the less likely it is. This is, roughly, Occam's Razor.

So, for example, take a noise at night. A tree hitting your window requires fewer assumptions to be true than an alien invasion of your house. Trees hitting windows only require things we already know to be true. Alien invasions require a lot of things we don't know to be true (the existence of intelligent alien life, the feasibility of interstellar or intergalactic travel) rather than just trees and wind.
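To put rough numbers on that comparison, here's a Python sketch. The assumption lists and their plausibility values are made up for illustration; what matters is that every extra uncertain assumption multiplies the explanation's probability down.

```python
from math import prod

# Made-up plausibilities for the assumptions each explanation requires.
tree_hits_window = {
    "there are trees near the house": 0.9,
    "the wind is strong tonight":     0.5,
}
alien_invasion = {
    "intelligent alien life exists":    0.1,
    "interstellar travel is feasible":  0.05,
    "the aliens chose Earth":           0.01,
    "the aliens chose your house":      0.001,
}

# Each extra uncertain conjunct drags the product further down.
print(f"tree:   {prod(tree_hits_window.values()):.2e}")   # 4.50e-01
print(f"aliens: {prod(alien_invasion.values()):.2e}")     # 5.00e-08
```

That multiplication leads straight into the next quality of good explanations.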

Good explanations are more commonplace (more mundane) than bad ones. If you're walking down the street and hear hooves clopping on the pavement, it's probably a horse or a deer, not a zebra or a cow, or a hooved alien from Jupiter. The corollary of commonplaceness is that extraordinary claims require extraordinary evidence. Hearing hooves isn't unlikely enough to suggest that what you hear is a hooved alien from Jupiter; you'd need evidence far less likely to occur than that.

Another facet of good explanations is that they explain only what they intend to explain and very little else. It's the difference between using bug spray to kill a spider and setting fire to your house to kill it: good explanations are precise in what they explain.

As an example, let's say you're a student at uni. You know one of your TAs, Anna, only uses red ink when grading papers, while the other TA, Jill, uses a variety of ink colors (red, blue, green, black, orange, purple) to grade papers. One day you get back a quiz with a grade you disagree with, and the ink on it is red. Based on only this information (assume they've graded equal numbers of papers at this point and have similar handwriting), which TA more likely graded your paper? It's certainly possible that Jill did (after all, she has been known to use red), but it's more likely that Anna graded it, since she only uses red. The lesson here is that the more possible things your explanation can explain, the less likely it is to explain any particular instance.

Now notice the words I'm using: likely, probably, possible. I'm not reinventing the wheel by saying we need a logical framework for dealing with uncertainty; one has already been created: probability theory. It handles the conjunction fallacy directly: the conjunction of 90% and 50%, that is, 90% * 50%, is less than both 90% and 50% (it's 45%). "Commonplace" is another way of saying prior probability. And when we talk about prior probability, we're usually talking about Bayes' theorem.

Pr(Claim | Evidence) reads "the probability of the claim given the evidence". The short form of Bayes' theorem (BT) is Pr(Claim | Evidence) = Pr(Evidence | Claim) * Pr(Claim) / Pr(Evidence). An extraordinary claim, that is, a claim with a low prior probability, needs correspondingly low-probability evidence: if the evidence term is of ordinary size, the posterior stays on the order of the prior, just as 100 * 4 / 5 lands a lot closer to 100 than to 4 or 5.
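Here's that short form as a small Python function, run on hypothetical numbers, to show why a low-prior claim needs correspondingly rare evidence before its posterior amounts to much.

```python
def posterior(p_claim, p_evidence_given_claim, p_evidence):
    """Short form of Bayes' theorem: Pr(Claim | Evidence)."""
    return p_evidence_given_claim * p_claim / p_evidence

# Hypothetical numbers throughout.
print(posterior(0.5,   0.8, 0.6))    # 0.667: mundane claim, everyday evidence
print(posterior(0.001, 0.8, 0.6))    # 0.0013: tiny prior, everyday evidence barely moves it
print(posterior(0.001, 0.8, 0.001))  # 0.8: tiny prior, but evidence this rare is 'extraordinary'
```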

BT also explains why Anna was more likely than Jill to have graded the paper. Represent Anna as a die with a 1 on every face, and Jill as a normal six-sided die (that's why I picked six ink colors for Jill above). Now say you have a jar filled with equal numbers of each kind of die, 50/50. You're blindfolded, told to pull a die from the jar, and you roll it. You're told that you rolled a 1. What's the probability that you grabbed the Anna die (1s on all faces) rather than the Jill die (a normal 1–6 die)? The probability of rolling a 1 given the Anna die is 100%. The probability of rolling a 1 given the 1–6 die is 1/6, or about 17%.

For this we use the long form of BT: Pr(Anna | One) = Pr(One | Anna) * Pr(Anna) / [ Pr(One | Anna) * Pr(Anna) + Pr(One | Jill) * Pr(Jill) ]. What we end up with is about an 86% chance that you grabbed the Anna die. If you follow this, you can see that the more possible numbers the Jill die has, the less likely it is to account for rolling a 1. Another way of phrasing "precision" is that there's a penalty for spreading yourself too thin, for trying to hedge all bets, when explaining something.
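The same computation in Python (the function name is mine, nothing standard) reproduces the roughly 86% figure and shows the precision penalty directly: give Jill's die more faces and its credit for a rolled 1 shrinks.

```python
def p_anna_given_one(n_sides_jill=6, p_anna=0.5):
    """Long-form Bayes' theorem for the jar-of-dice example.
    Anna's die shows 1 on every face; Jill's die has n_sides_jill faces."""
    p_jill = 1 - p_anna
    p_one_given_anna = 1.0                # Anna always rolls a 1
    p_one_given_jill = 1 / n_sides_jill   # Jill spreads her bets across all faces
    numerator = p_one_given_anna * p_anna
    return numerator / (numerator + p_one_given_jill * p_jill)

print(f"{p_anna_given_one(6):.0%}")  # 86%, as above

# The more outcomes Jill's die can 'explain', the less credit it gets for this one:
for n in (2, 6, 20, 100):
    print(n, f"{p_anna_given_one(n):.0%}")  # 67%, 86%, 95%, 99%
```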

So, tl;dr: good explanations are on the likelier side of Occam's Razor, are mundane, and are precise. There are others, but this is probably (heh) getting too long.


Notice that I hardly ever mentioned god or atheism in these sections, especially the second part. That's because I think the strongest arguments for atheism aren't about atheism per se; they're strong arguments for good thinking in general. They take into account our imperfections as human beings, especially in how we think and act, and attempt to account for those failings. It seems to me that god(s) are what happens when social brains try to explain a fundamentally impersonal universe. And when that happens, those personal explanations for impersonal events tend to fail the logic of dealing with uncertainty.

 

Study: Intimacy with God is driving the gender difference in biblical literalism

A new study sheds light on why women are more likely than men to believe the Bible is literally true. The research, which appears in the Journal for the Scientific Study of Religion, found evidence that intimacy with God explained the gender gap in biblical literalism.

The researchers analyzed data from 1,394 respondents in the national Baylor Religion Survey’s third wave. They found that attachment to God and seeking to establish a stronger connection with God were both associated with more literal views of the Bible.

[…]

In other words, both men and women who took the Bible more literally were more likely to say they had “a warm relationship with God” and reported spending more time alone praying and reading the Bible. But women tended to report both stronger attachments to God and spending more time attempting to connect with God, which explained their higher rates of biblical literalism.

“We found that while it’s true women take the Bible more literally than men, once attachment to God is accounted for that relationship disappears. So it’s really intimacy with God driving this difference, not gender per se,” Kent told PsyPost.

Read more at PsyPost

 

Posted on February 27, 2019 in religion

 

What’s the most insanely misguided belief you’ve heard from someone who claims it’s 100% fact?

All of the misguided beliefs I've heard share one thing in common: these people are using moral, ethical, or political frameworks to try to model and predict the world. It just doesn't work that way.

Flat earthers think the world governments are trying to pull a fast one over everybody for nefarious reasons. This means that believing in a flat earth is a moral position; taken in opposition to evil hegemonic powers.

Anti-vaxxers think that Big Pharma is evil. This means that being an anti-vaxxer is a moral position; taken in opposition to evil hegemonic powers.

Chemtrail believers… again, believing in chemtrails is a moral position; taken in opposition to evil hegemonic powers.

9/11 Truthers… again, believing in 9/11 truth is a moral position; taken in opposition to evil hegemonic powers.

Moon landing hoaxers… again, believing in the moon landing hoax is a moral position; taken in opposition to evil hegemonic powers.

That’s why I consider them “misguided”. Moral theories are prescriptions for how people should behave, not descriptions of the world. Chances are, if you’re using a moral theory to try to predict how the world works, you will not only be wrong, but you will refuse any evidence that doesn’t fit in your moral framework because allowing this evidence to change your mind is “immoral”.

Indeed, all of these are oppressor-versus-oppressed narratives. When used to model the world, they lead to delusion, since any evidence or model that doesn't fit the oppressor/oppressed narrative necessarily undermines it… which gives power to the oppressor, and therefore that evidence or model is immoral.

Evolutionary biology? Dinosaur bones were put in the Earth by Satan (an evil hegemon) to turn you into an atheist (being an atheist is immoral). Psychology and psychiatry? A ploy by body thetans to keep you in bondage to Xenu's hegemony. Barack Obama birtherism? A ploy by evil Democrats (an evil hegemony) to put a Muslim (an immoral religion) in the White House. Evolutionary psychology? A ruse created by the white heteropatriarchy (an evil hegemony) to keep women, non-straight people, trans people, and people of color down.

Using your morals to inform empirical reality is the root of almost all human cognitive biases. Moral intuitions come first, strategic reasoning comes after [1][2][3]. The biggest clashes between morality and empirical reality produce the most tenaciously held yet misguided beliefs, and they will almost always arise where the scientific method is used to study and uncover the nature of humanity.

I don’t have to tell you that quite a few people — secular or religious — who use their personal moral intuitions (quite literally just another way to say “their cognitive biases”) to model the world think that the scientific method is immoral.

Footnotes

[1] Robert Kurzban, Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind. Princeton University Press, 2010.

[2] Jonathan Haidt, The Righteous Mind: Why Good People Are Divided by Politics and Religion. Pantheon, 2012.

[3] Kevin Simler and Robin Hanson, The Elephant in the Brain: Hidden Motives in Everyday Life. Oxford University Press, 2018.

 

Psychology’s favourite moral thought experiment doesn’t predict real-world behaviour

Would you wilfully hurt or kill one person so as to save multiple others? That’s the dilemma at the heart of moral psychology’s favourite thought experiment and its derivatives. In the classic case, you must decide whether or not to pull a lever to divert a runaway mining trolley so that it avoids killing five people and instead kills a single individual on another line. A popular theory in the field states that, to many of us, so abhorrent is the notion of deliberately harming someone that our “deontological” instincts deter us from pulling the lever; on the other hand, the more we intellectualise the problem with cool detachment, the more likely we will make a utilitarian or consequentialist judgment and divert the trolley.

Just under 200 of the participants were invited to the psych lab, one at a time, to take part in a real-life moral dilemma involving live mice. The participants saw two cages – one housing one mouse, the other housing five – each wired to an electroshock machine. They were told that in 20 seconds, if they did nothing, the machine would deliver a very painful but nonlethal shock to the cage containing five mice. However, if the participants pressed a button in front of them, they could divert the electric shock to the cage containing one mouse, thus saving the other five from pain.

The participants who performed the real-life mouse task behaved differently than those who made a purely hypothetical decision – they were less than half as likely to let the five mice get shocked (16 per cent of them left the button unpressed compared with 34 per cent of the hypothetical group). In other words, faced with a real-life dilemma, the volunteers were more consequentialist / utilitarian; that is, more willing to inflict harm for the greater good.

Read more at BPS

 

Posted on February 4, 2019 in religion

 

The Righteousness and the Woke – Why Evangelicals and Social Justice Warriors Trigger Me in the Same Way

— Read on valerietarico.com/2019/01/24/the-righteousness-and-the-woke-why-evangelicals-and-social-justice-warriors-trigger-me-in-the-same-way/

 

Posted on January 25, 2019 in religion

 

(Ideo)Logical Reasoning: Ideology Impairs Sound Reasoning

Abstract:

Beliefs shape how people interpret information and may impair how people engage in logical reasoning. In 3 studies, we show how ideological beliefs impair people’s ability to: (1) recognize logical validity in arguments that oppose their political beliefs, and, (2) recognize the lack of logical validity in arguments that support their political beliefs. We observed belief bias effects among liberals and conservatives who evaluated the logical soundness of classically structured logical syllogisms supporting liberal or conservative beliefs. Both liberals and conservatives frequently evaluated the logical structure of entire arguments based on the believability of arguments’ conclusions, leading to predictable patterns of logical errors. As a result, liberals were better at identifying flawed arguments supporting conservative beliefs and conservatives were better at identifying flawed arguments supporting liberal beliefs. These findings illuminate one key mechanism for how political beliefs distort people’s abilities to reason about political topics soundly.

Gampa, A., Wojcik, S., Motyl, M., Nosek, B. A., & Ditto, P. (2019, January 13). (Ideo)Logical Reasoning: Ideology Impairs Sound Reasoning. https://doi.org/10.31234/osf.io/hspjz

Related: Politics Is The Mind Killer

 

Posted on January 14, 2019 in cognitive science, morality

 
 