
Category Archives: cognitive science

Sexual disgust sensitivity mediates the sex difference in support of censoring hate speech

Abstract

Prior research showed that women are generally more supportive than men of censoring hate speech and this sex difference remained significant after such variables as authoritarianism and political conservatism were controlled for. However, an explanation of that sex difference is lacking. A recent theory distinguishes between pathogen-, sexual, and moral disgust, and we hypothesize that pathogen- and sexual disgust sensitivity will mediate the sex difference in support of censoring hate speech. This is because 1) women typically show stronger pathogen- and sexual disgust sensitivity and 2) people higher in pathogen- and sexual disgust sensitivity are more repulsed by stimuli related to infection (e.g., blood) and sexual assaults. Hate speech can produce both types of stimuli by instigating violence. Indeed, two studies (N=250 and 289) show a robust indirect effect through sexual disgust sensitivity that explains over 50% of the total effect of sex on censorship support and renders the direct effect of sex non-significant. The indirect effect through pathogen disgust sensitivity is also significant but the direct effect of sex remains significant. These findings extend censorship-attitude research, inform the explanation of a similar sex difference in political intolerance, and further suggest that sexual disgust sensitivity shapes political psychology. (my emphasis)

https://www.sciencedirect.com/science/article/pii/S0191886919301965

I’ve posted about the bolded part before:

Morality is both cultural and genetic

Intuition and morality changes by gender

 


Extreme male brain theory of autism confirmed in large new study – and no, it doesn’t mean autistic people lack empathy or are more ‘male’

https://theconversation.com/extreme-male-brain-theory-of-autism-confirmed-in-large-new-study-and-no-it-doesnt-mean-autistic-people-lack-empathy-or-are-more-male-106800

Two long-standing psychological theories – the empathising-systemising theory of sex differences and the extreme male brain theory of autism – have been confirmed by our new study, the largest of its kind to date. The study, published in the Proceedings of the National Academy of Sciences, used data on almost 700,000 people in the UK to test the theories.

The first theory, known as the empathising-systemising theory of typical sex differences, posits that, on average, females will score higher on tests of empathy than males, and that, on average, males will score higher on tests of systemising than females.

Empathy is the drive to recognise another person’s state of mind and to respond to another person’s state of mind with an appropriate emotion. Systemising is the drive to analyse or build a system where a system is defined as anything that follows rules or patterns.

The second theory, known as the extreme male brain theory of autism, extends the empathising-systemising theory. It posits that autistic people will, on average, show a shift towards “masculinised” scores on measures of empathy and systemising. In other words, they will score below average on empathy tests, but score at least average, or even above average, on systemising tests.

The data on the almost 700,000 people in our study (including over 36,000 autistic people) came from an online survey carried out for the Channel 4 documentary, Are you autistic? Our analysis of this data robustly confirmed the predictions of these two theories.

[…]

Beware of misinterpretations

The first misinterpretation is that the results mean that autistic people lack empathy, but this isn’t the case. Empathy has two major parts: cognitive empathy (being able to recognise what someone else is thinking or feeling) and affective empathy (having an appropriate emotional response to what someone else is thinking or feeling).

The evidence suggests that it is only the first aspect of empathy – also known as “theory of mind” – that autistic people on average struggle with. As a result, autistic people are not uncaring or cruel but are simply confused by other people. They don’t tend to hurt others, rather they avoid others.

They may miss the cues in someone’s facial expression or vocal intonation about how that person is feeling. Or they may have trouble putting themselves in someone else’s shoes, to imagine their thoughts. But when they are told that someone else is suffering, it upsets them and they are moved to want to help that person.

So autistic people do not lack empathy.

The second misinterpretation is that autistic people are hyper-male. Again, this is not the case. While our latest study shows that autistic people, on average, have a shift towards a masculinised profile of scores on empathy and systemising tests, they are not extreme males in terms of other typical sex differences. For example, they are not extremely aggressive, but tend to be gentle individuals.

So autistic people are not hyper-male in general.

 

Posted by on May 20, 2019 in cognitive science

 

Epistemic spillovers: Learning others’ political views reduces the ability to assess and use their expertise in nonpolitical domains

Abstract

On political questions, many people prefer to consult and learn from those whose political views are similar to their own, thus creating a risk of echo chambers or information cocoons. We test whether the tendency to prefer knowledge from the politically like-minded generalizes to domains that have nothing to do with politics, even when evidence indicates that politically like-minded people are less skilled in those domains than people with dissimilar political views. Participants had multiple opportunities to learn about others’ (1) political opinions and (2) ability to categorize geometric shapes. They then decided to whom to turn for advice when solving an incentivized shape categorization task. We find that participants falsely concluded that politically like-minded others were better at categorizing shapes and thus chose to hear from them. Participants were also more influenced by politically like-minded others, even when they had good reason not to be. These results replicate in two independent samples. The findings demonstrate that knowing about others’ political views interferes with the ability to learn about their competency in unrelated tasks, leading to suboptimal information-seeking decisions and errors in judgement. Our findings have implications for political polarization and social learning in the midst of political divisions.

 

Posted by on May 19, 2019 in cognitive science

 

What Scientific Term Or Concept Ought To Be More Widely Known?

Coalitional Instincts

Every human—not excepting scientists—bears the whole stamp of the human condition. This includes evolved neural programs specialized for navigating the world of coalitions—teams, not groups. (Although the concept of coalitional instincts has emerged over recent decades, there is no mutually-agreed-upon term for this concept yet.) These programs enable us and induce us to form, maintain, join, support, recognize, defend, defect from, factionalize, exploit, resist, subordinate, distrust, dislike, oppose, and attack coalitions. Coalitions are sets of individuals interpreted by their members and/or by others as sharing a common abstract identity (including propensities to act as a unit, to defend joint interests, and to have shared mental states and other properties of a single human agent, such as status and prerogatives).

[…]

Coalition-mindedness makes everyone, including scientists, far stupider in coalitional collectivities than as individuals.

[…]

Moreover, to earn membership in a group you must send signals that clearly indicate that you differentially support it, compared to rival groups. Hence, optimal weighting of beliefs and communications in the individual mind will make it feel good to think and express content conforming to and flattering to one’s group’s shared beliefs and to attack and misrepresent rival groups. The more biased away from neutral truth, the better the communication functions to affirm coalitional identity, generating polarization in excess of actual policy disagreements. Communications of practical and functional truths are generally useless as differential signals, because any honest person might say them regardless of coalitional loyalty. In contrast, unusual, exaggerated beliefs—such as supernatural beliefs (e.g., god is three persons but also one person), alarmism, conspiracies, or hyperbolic comparisons—are unlikely to be said except as expressive of identity, because there is no external reality to motivate nonmembers to speak absurdities. (my emphasis)

Read more at Edge.org

 

Posted by on May 9, 2019 in cognitive science

 

Falsifiability is Bayesian Evidence

I’ve explained before how Bayes’ Theorem demonstrates nicely why an unfalsifiable explanation is a bad explanation. An explanation that can be used to explain some data and also the complete opposite of those data is a bad explanation. But I should note that “bad” is doing a lot of work here; on topics besides the existence of god (a hypothesis of maximum entropy, and one of the most common and most unfalsifiable explanations out there), “bad” should be read comparatively. And “bad” doesn’t necessitate “wrong”.

If you don’t feel like clicking on my many previous posts on the topic, I’ll explain here again using a similar example.

Say I have two pants pockets. One pocket has only $USD coins and the other pocket has coins from all across the world, including $USD coins. If I pull a coin from a random pocket and it’s a $USD dime, is it more likely that I pulled it from the pocket with only $USD coins or from the pocket with coins from all over the world?

All else being equal (e.g., the same number of coins or the same number of $USD coins in each pocket), it’s more likely that I pulled it from the pocket that only has $USD coins. This is because there are more possible coins available in the pocket with coins from all over the globe. The citizen-of-the-world pocket is less falsifiable than the $USD pocket, but that doesn’t mean it’s wrong. That the citizen-of-the-world pocket can be used to explain both pulling out a $USD coin and not pulling out a $USD coin makes it a worse explanation than the $USD-only pocket. Less likely doesn’t mean wrong, though.
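If you want to see the arithmetic, here’s a minimal Python sketch of the two-pocket comparison. The numbers are made up for illustration: I’m assuming equal odds of reaching into either pocket and that the world pocket yields a $USD coin only 1 time in 20.

```python
# Two-pocket comparison via Bayes' theorem.
# All numbers are illustrative assumptions, not measurements.
prior_usd_pocket = 0.5    # Pr(USD-only pocket): pocket chosen at random
prior_world_pocket = 0.5  # Pr(world pocket)

lik_usd = 1.0             # Pr(USD coin | USD-only pocket): certain
lik_world = 1 / 20        # Pr(USD coin | world pocket): assumed 1-in-20

# Total probability of drawing a USD coin at all.
evidence = lik_usd * prior_usd_pocket + lik_world * prior_world_pocket

posterior_usd = lik_usd * prior_usd_pocket / evidence
posterior_world = lik_world * prior_world_pocket / evidence

print(f"Pr(USD-only pocket | USD coin) = {posterior_usd:.3f}")    # ~0.952
print(f"Pr(world pocket | USD coin)    = {posterior_world:.3f}")  # ~0.048
```

The pocket that could only ever produce $USD coins takes almost all of the posterior probability, precisely because it was the easier hypothesis to falsify.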

What does an unfalsifiable explanation look like? Again, I’ll invoke the god hypothesis. One pocket has only $USD coins and the other pocket is a pocket of miracles. Any coin from any civilization on any planet in the entirety of existence is possible from the other pocket. The reason this is unfalsifiable is that there are practically endless possibilities for coins. Someone trying to use the miracle pocket to explain pulling out a $USD quarter would reply “but you can’t prove it wrong!”

The fact that you can’t prove it wrong is what makes it less likely to be true. Less likely than something that can be proven wrong.

Again, you can look at the math behind this logic here.

The operating principle here is that the more potential (and mutually exclusive) data a hypothesis, explanation, or model can explain, the less likely it is to explain any particular datum. Somewhat counter-intuitive, but this is what probability theory tells us. Do not trust your intuitions when it comes to probability. They are wrong.
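The principle fits in a few lines of Python. Under the idealization that a hypothesis spreads its probability evenly over every outcome it could “explain”, the likelihood it assigns to any one outcome shrinks as its repertoire grows:

```python
# A hypothesis compatible with N mutually exclusive outcomes can give
# each outcome at most 1/N probability (idealizing an even spread).
for n_outcomes in (1, 2, 6, 20, 1000):
    print(f"explains {n_outcomes:>4} outcomes -> "
          f"likelihood per outcome = {1 / n_outcomes:.4f}")
```

A hypothesis that can explain everything assigns almost nothing to anything in particular.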

As I wrote in one of the other posts on Bayes Theorem and falsifiability, god could have had us live on any planet in the solar system. With god, all things are possible. But we just so happen to live on the only planet in our solar system where it’s possible for intelligent life to exist without supernatural intervention. Yet if we invoke god, that hypothesis could be used to explain us living on Jupiter or Mercury or a comet orbiting the sun every 150 years; the naturalism hypothesis cannot explain those possibilities, if one of them is what had actually happened. God has more possible locations to create humanity (i.e., can’t be proven wrong) than a godless hypothesis.

For a more real-world example so I can stop beating up on god, the same situation happens between evolutionary psychology and blank-slatism. Blank-slatism (the social constructionist hypotheses / social role theories) claims that any configuration of human behavior is possible due to socialization, but we just so happen to live in a world where, e.g., men have a high murder rate among their fellow men, just like the males of our primate cousins.

The social constructionist hypothesis isn’t as egregious as the god hypothesis, but it’s still less falsifiable than any evolutionary explanation. In this case, the evolutionary explanation is intrasexual competition.

Let’s continue with the examples.

What if I told you that the number of men against abortion is higher than the number of women who are anti-abortion? Of course, Patriarchy makes men misogynists who treat women as baby machines, so of course they don’t want women to have a choice.

What if I told you that the number of women against abortion is higher than the number of men who are anti-abortion?

Of course, Patriarchy makes women behave like men, which makes women internalize their misogyny, which means they’ll view themselves as baby machines.

Do you see what I did there? A hypothesis (Patriarchy) that can explain data and be used to explain the complete opposite of those data? That means it can’t be proven wrong. That means it’s behaving like the god hypothesis.

That means it’s unfalsifiable. That means it’s a bad explanation.

What would be a good hypothesis? One that explains only one of those situations and completely fails at explaining the other. Something like intrasexual competition, which would say that abortion is a woman-on-woman problem and that men care less either way. Intrasexual competition can only apply to one of those hypotheticals, and ***spoiler alert*** it’s the one that we see:

Intrasexual competition makes no sense for men being more anti-abortion than women (and if that were indeed the case, we could rule intrasexual competition out, like how pulling a non-$USD coin rules out the $USD-only pocket), but it makes sense as an explanation of why women are more anti-abortion than men (and more pro-abortion than men): they are competing with other women. Indeed, the poll above shows that the abortion debate is mainly a battle between Republican women and Democratic women. Now it shouldn’t surprise you that the same thing happens in other domains related to sex and reproduction among humans, like going topless, sex work, fat shaming, promiscuity/slut shaming, wearing makeup, etc. All of those are topics men are less polarized about than women, just like abortion.

Now I’m not saying that all current models of evolutionary psychology are gold. But with no other information, evolutionary hypotheses have a higher prior of being correct than social role hypotheses when it comes to large-scale human behavior. Indeed, human beings are biased toward applying social rules to phenomena where they don’t actually apply; the opposite error (failing to apply social rules where they do apply) rarely happens. Remember, our intuitions about invoking social rules as an explanation exist because we want to curry favor, support allies, or denigrate enemies. It makes System 1 sense that society tells men to be more violent than women, or that society forces women to be less aggressive than men. Unfortunately, your intuitions are highly unlikely to be true. And as I demonstrated above, the social role explanation isn’t a very robust explanation.

As a matter of fact, a lot of the problems people point out with evolutionary psychology today are the same ones that were leveled at evolutionary biology in the 19th century. Darwin’s original formulation had no mechanism (DNA hadn’t been discovered), no predictions (it explained things that people had already seen, i.e., a “just-so” story), and no mathematical models.

But it was still a good explanation; a better explanation than the alternatives. It was a much better explanatory framework than Creationism, just as evolutionary psychology is a better framework than social constructionism. And that’s all that really matters. The more falsifiable explanation wins the race.

The moral of the story is that there are very few things we encounter in either science or everyday life that are wholly unfalsifiable (besides the existence of god). Unfalsifiability shouldn’t be viewed as a binary; some explanations are simply more or less falsifiable than others. Falsifiability is Bayesian, and like all things Bayesian, it should be used in a comparative, greater-than/less-than manner and not in a binary is/is-not formulation. I’m just a lone voice crying out in the wilderness, but I would like “unfalsifiable” to be replaced with “more” or “less” falsifiable in almost all common usage; save “unfalsifiable” for truly egregious cases like omnipotent gods or Last Thursdayism.

 

Posted by on April 30, 2019 in Bayes, cognitive science, religion

 

How Internet Fighting Works

 

Posted by on April 26, 2019 in cognitive science, Funny

 

What Are The Strongest Arguments For Atheism?

The strongest arguments for atheism aren’t really about atheism itself, but about how the human mind works and what makes something a good explanation.

We know why we have particular emotions. Anger protects us from harm, and thus from dying. Feelings of friendship and bonding with others give us access to resources and mates; it’s very hard to live alone without others having taught you how to do so or having built the infrastructure that allows it. Many other animals have these same emotions, and for the same reasons: to not die and/or to perpetuate their genes. The ones that don’t have these emotions usually die pretty quickly.

Why would a god have these emotions? There’s no underlying reason for a god to love or to get angry. A god that loves makes about as much sense as a god with a penis.

And then there are the reasons why we believe things in the first place. How are our beliefs formed? For many of the things we believe, we do so because of our feeling of certainty:

A newspaper is better than a magazine. A seashore is a better place than the street. At first it is better to run than to walk. You may have to try several times. It takes some skill, but it is easy to learn. Even young children can enjoy it. Once successful, complications are minimal. Birds seldom get too close. Rain, however, soaks in very fast. Too many people doing the same thing can also cause problems. One needs lots of room. If there are no complications, it can be very peaceful. A rock will serve as an anchor. If things break loose from it, however, you will not get a second chance.

Is this paragraph comprehensible or meaningless? Feel your mind sort through potential explanations. Now watch what happens with the presentation of a single word: kite. As you reread the paragraph, feel the prior discomfort of something amiss shifting to a pleasing sense of rightness. Everything fits; every sentence works and has meaning. Reread the paragraph again; it is impossible to regain the sense of not understanding. In an instant, without due conscious deliberation, the paragraph has been irreversibly infused with a feeling of knowing.

Try to imagine other interpretations for the paragraph. Suppose I tell you that this is a collaborative poem written by a third-grade class, or a collage of strung-together fortune cookie quotes. Your mind balks. The presence of this feeling of knowing makes contemplating alternatives physically difficult.

Did you get the same inability to explain the paragraph using some other concept? Take note of that: You really don’t have any control over how certain you feel about things. Just like other emotions, the feeling of certainty is generated unconsciously. The next obvious question would be “What sort of brain algorithm generates your feeling of certainty?” More on that below.

Experience teaches us what stimuli make us angry, or jealous, or happy, sad, etc. Sometimes the feeling is unwarranted, and through self-reflection we can determine that feeling angry about a particular situation isn’t justified. What’s dangerous is this: our feeling of certainty feels good. At least, it’s much more pleasant than the feeling of uncertainty. And in that sense, we generally never stop to reflect on why our feeling of certainty might not be correct. Unlike with, say, jealousy.

The rabbit hole of why we believe what we do goes a lot further than this. Books like Thinking, Fast and Slow about our cognitive biases go into a lot of this. The major premise of that book is that we have two types of thought engines. A “fast” engine (System 1) and a “slow” engine (System 2). These two engines are good at different tasks: the fast one is good at recognizing faces or voices, the slow one is good at math. The fast one is good at social interaction, the slow one is good for abstract/impersonal concepts.

Generally, the fast engine is the one in charge, and it is responsible for telling the slow engine to start up (the fast one is also responsible for the feeling of certainty). The problem is that the fast engine has to be trained on which tasks it should handle itself and which it should hand over to the slow engine, and it’s not very good at doing this intuitively. For many of us, a problem might already have been answered by the fast engine, and the only time the fast engine calls on the slow engine is when that answer is challenged: to defend the fast one’s conclusion. And a lot of the time, the fast one’s conclusion will serve some social goal: status, friendship, not ending up dead, and so on.

Our brains are actually more complicated, or modular, than the System 1 and System 2 way of explaining it suggests. There seem to be multiple modules in our brains, and the ones that use information don’t explain their “reasoning” to the ones that talk to the outside world. Our brains are more like Congress, with some congresspeople acting on behalf of the overall “fast” engine or “slow” engine. The you that you feel is “you”, speaking to the outside world, is more like the press secretary for Congress.

There are a few experiments showing that when communication is physically severed between the two halves of the brain, each side gets different information. Yet the part of the brain that does the speaking might not be the part that has the information. So you end up with rationalizations, like a split-brain patient grabbing a shovel with their left hand (since their left visual field, which feeds the non-speaking hemisphere, was shown snow) while their right visual field sees a chicken. When asked to explain why they grabbed the shovel, they (well, the side of their brain that only sees the chicken) make up an explanation, like the shovel being used to scoop up chicken poop! That press secretary is pretty quick on his feet.

But this doesn’t just happen with split brain patients. It seems to happen a lot more than we think, in our normal, everyday brains.

So, for example, there was one experiment where people were asked to pick their favorite pair of jeans out of four (unbeknownst to them) identical pairs. A good portion of the people picked the jeans on the right, since they looked at the jeans from left to right. But they were unaware that that was their decision algorithm, and they rationalized their decision by saying they liked the fabric or the length or some other non-discriminating fact about the jeans. Liking the fabric of one pair more than the others was demonstrably false, since the jeans were identical, yet that was the reason they gave. Even in your fully functioning brain, Congress doesn’t share its across-the-aisle deliberations, so the press secretary still has to come up with a good, socially acceptable story about Congress’ decision for the general public’s consumption. The part of our brain that ‘reasons’ and explains our actions neither makes decisions nor is even privy to the real causes of our actions.

The tl;dr version is this: our brains are good at social goals, and unless we’ve been trained otherwise, they’re not so good at forming true beliefs about the non-social world. If we had some machine that was designed to analyze electromagnetic radiation as seen in space and we pointed that machine at its own circuitry, it would interpret everything about itself in terms of cosmic rays. Similarly, if we have a machine (our brain) that interprets everything through the lens of social interaction and we point it at the universe, it will interpret everything in the universe as some manifestation of social rules.

And this is what happens. Our default is to treat a lot of non-social things as social. It’s why things like animism and magical thinking are prevalent. It’s why we call planets “planets” (Greek for “wanderer”) and the Milky Way a galaxy (from gala, Greek for “milk”; in our case, Hera’s milk). If someone “thinks really hard” about a problem, they’re more than likely using the tools meant for social problems, not the tools meant for solving non-social questions.

So if we don’t have control over our feeling of certainty, what’s a System 2 way of making sure that we have correct beliefs about non-social things? How can we be sure that we aren’t just defending a belief that we initially arrived at unconsciously? Or, more generally, what are some unbiased traits that good explanations share? What makes something a bad explanation? Since we’re operating under uncertainty (since we can’t trust our feeling of certainty), we have to use methods for explaining our uncertainty logically and consistently.

Let’s look at the Linda problem:

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

  1. Linda is a bank teller.
  2. Linda is a bank teller and is active in the feminist movement.

What does this have to do with good explanations? Most people will say that it’s more likely that Linda is a bank teller and is active in the feminist movement. While that might seem true socially (i.e., being a feminist and a bank teller seems to tell a better story about Linda), it’s mathematically impossible: the population of people who are bank tellers is necessarily at least as large as the population of people who are bank tellers and in the feminist movement. That’s why this is called the conjunction fallacy. The probability of the conjunction of A and B can never be greater than the probability of A alone (or B alone).
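You can check the conjunction rule with any numbers you like; the ones below are pure invention, since nobody knows the real base rates for Linda:

```python
# Conjunction rule: Pr(A and B) = Pr(A) * Pr(B | A) <= Pr(A).
# Both inputs are invented for illustration.
pr_teller = 0.05                 # Pr(Linda is a bank teller): assumed
pr_feminist_given_teller = 0.20  # Pr(feminist | bank teller): assumed

pr_teller_and_feminist = pr_teller * pr_feminist_given_teller

print(f"Pr(teller)              = {pr_teller:.3f}")               # 0.050
print(f"Pr(teller and feminist) = {pr_teller_and_feminist:.3f}")  # 0.010

# This inequality holds no matter what probabilities you plug in.
assert pr_teller_and_feminist <= pr_teller
```

However generous you are to the “feminist bank teller” story, multiplying in a second probability can only keep the number the same or shrink it.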

All else being equal, a good explanation has fewer unnecessary conjunctions than a bad one. Taken further, an explanation resting on two known facts is more likely than an explanation resting on one known fact and one unknown fact. The more unknown facts you need to support your explanation, the less likely it is. This is generally called Occam’s Razor.

So, for example, take a noise at night. A tree hitting your window requires fewer assumptions to be true than an alien invasion in your house. Trees hitting windows only require things that we already know to be true. Alien invasions require a lot more things to be true of the world that we don’t know to be true (e.g., the existence of intelligent alien life and of interstellar/intergalactic travel) than just trees and wind. That goes into the next thing that good explanations have.
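Here’s the same point as a back-of-the-envelope product of assumptions; every probability below is invented purely to show the shape of the arithmetic:

```python
# Each assumption an explanation needs multiplies in a factor <= 1,
# so chains of unknowns can only drag the joint probability down.
# All probabilities are invented for illustration.
from math import prod

tree_hypothesis = [
    0.9,   # it was windy tonight
    0.8,   # a branch hangs near the window
]
alien_hypothesis = [
    0.9,    # something made a noise
    0.01,   # intelligent alien life exists
    0.01,   # it has interstellar travel
    0.001,  # it chose your house tonight
]

print(f"tree:  {prod(tree_hypothesis):.3f}")   # 0.720
print(f"alien: {prod(alien_hypothesis):.9f}")  # 0.000000090
```

The tree only asks you to believe things you already believe; the alien asks for a stack of long shots multiplied together.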

Good explanations are more commonplace (more mundane) than bad ones. If you’re walking down the street and hear hooves clicking on the street, it’s probably a deer or a horse, not a zebra or a cow, or a hooved alien from Jupiter. The corollary of commonplaceness is that extraordinary claims require extraordinary evidence. Hearing hooves isn’t unlikely enough to suggest that what you hear is a hooved alien from Jupiter; you would need evidence that is itself far less likely to occur.

Another facet of good explanations is that they explain only what they intend to explain and very little else. It’s the difference between using bug spray to kill a spider and setting fire to your house to kill it; good explanations are precise in what they explain.

As an example, let’s say you’re a student at uni. You know one of your TAs, Anna, only uses red ink when grading papers, while the other TA, Jill, uses a variety of ink colors (red, blue, green, black, orange, purple, etc.). You get your grade on a quiz back one day, and it’s a grade you disagree with. The ink on it is red. Based on only this information (e.g., assume they’ve graded equal numbers of papers at this point and have similar handwriting), which TA more likely graded your paper? It’s certainly possible that Jill did (after all, she has been known to use red), but it’s more likely that Anna graded it, since she only uses red. The lesson here is that the more possible things your explanation can explain, the less likely it is to explain any particular instance.

Now notice the words I’m using: likely, probably, possible. I’m not reinventing the wheel by saying that we need a logical framework for dealing with uncertainty; one has already been created: probability theory. For the conjunction fallacy, the arithmetic works out directly: the conjunction of 90% and 50%, i.e., 90% * 50%, is less than both 90% and 50% (it’s 45%). “Commonplace” is another way of saying “high prior probability”. And when we talk about prior probability, we’re usually talking about Bayes’ theorem.

Now, Pr(Claim | Evidence) reads “the probability of the claim given the evidence”. The short formulation of Bayes’ Theorem (BT) is Pr(Claim | Evidence) = Pr(Evidence | Claim) * Pr(Claim) / Pr(Evidence). An extraordinary claim, that is, a claim with a low prior probability, needs correspondingly low-probability evidence: if you have an equation like 100 * 4 / 5, the result sits much closer in scale to 100 than to 4 or 5, because the term with the extreme magnitude dominates.

BT also explains why Anna more likely graded the paper than Jill. Let’s represent Anna as a die with 1s on all six sides, and Jill as a normal six-sided die (it’s the reason I picked six ink colors for Jill above). Let’s further say you have a jar filled with equal numbers of the normal dice and the all-1s dice; the jar is 50/50 of each. You’re blindfolded, told to pull a die from the jar, and you roll it. You’re told that you rolled a 1. What’s the probability that you grabbed an Anna die (1s on all sides) versus a Jill die (a normal 1–6 die)? The probability of rolling a 1 given an Anna die is 100%. The probability of rolling a 1 given a Jill die is 1/6, or around 17%.

For this we use the long form of BT: Pr(Anna | One) = Pr(One | Anna) * Pr(Anna) / [ Pr(One | Anna) * Pr(Anna) + Pr(One | Jill) * Pr(Jill) ]. What we end up with is around an 86% chance that you grabbed an Anna die. If you follow this, you can see that the more possible numbers the Jill die has, the less likely it is to account for rolling a 1. Another way of phrasing “precision” is that there’s a penalty for spreading yourself too thin, for trying to hedge all bets, when trying to explain something.
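Here’s that computation in Python, plus what happens to Anna’s posterior if we (hypothetically) give Jill dice with more and more sides:

```python
# Long-form Bayes for the Anna/Jill dice example above.
pr_anna = 0.5              # Pr(Anna die): jar is 50/50
pr_jill = 0.5              # Pr(Jill die)
pr_one_given_anna = 1.0    # all six faces are 1s
pr_one_given_jill = 1 / 6  # one face in six is a 1

pr_anna_given_one = (pr_one_given_anna * pr_anna) / (
    pr_one_given_anna * pr_anna + pr_one_given_jill * pr_jill
)
print(f"Pr(Anna | rolled a 1) = {pr_anna_given_one:.3f}")  # ~0.857

# Hypothetical variant: the more faces Jill's die has, the thinner her
# likelihood is spread, and the more a rolled 1 favors Anna.
for sides in (6, 20, 100):
    post = (1.0 * pr_anna) / (1.0 * pr_anna + (1 / sides) * pr_jill)
    print(f"{sides:>3}-sided Jill die -> Pr(Anna | 1) = {post:.3f}")
```

That last loop is the precision penalty in action: hedging over more outcomes costs Jill posterior probability every time one specific outcome actually shows up.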

So, tl;dr: the qualities of good explanations are that they sit on the likelier side of Occam’s Razor, are mundane, and are precise. There are others, but this is probably (heh) getting too long.


Notice that I hardly ever mentioned god or atheism in these sections, especially the second part. That’s because I think the strongest arguments for atheism aren’t about atheism per se, but are in general strong arguments for good thinking. They take into account our imperfections as human beings, especially in regard to how people think and act, and attempt to account for those failings. It seems to me that god(s) are what happens when social brains try to explain a fundamentally impersonal universe. And when that happens, those personal explanations for impersonal events tend to fail the logic of dealing with uncertainty.

 
 