
Category Archives: cognitive science

Children with Williams Syndrome don’t form racial stereotypes

WILLIAMS Syndrome (WS) is a rare neurodevelopmental disorder caused by the deletion of about 28 genes from the long arm of chromosome 7. It is characterized by mild to moderate mental retardation and “elfin” facial features. Most strikingly, individuals with WS exhibit highly gregarious social behaviour: they approach strangers readily and indiscriminately, behaving as if…

Source: Children with Williams Syndrome don’t form racial stereotypes

 
Comments Off on Children with Williams Syndrome don’t form racial stereotypes

Posted by on October 11, 2016 in cognitive science

 

Why Conservatives Are Against Science And Social Justice

 
Comments Off on Why Conservatives Are Against Science And Social Justice

Posted by on September 26, 2016 in cognitive science, economics/sociology, morality, religion

 

The Fundamental Premise

[Image: “Everyone I don’t like is Hitler”]
It’s been a long time since I’ve posted! I think I don’t really have much more to say about religion and belief that I haven’t already said. So this post is more like my conclusion about why people believe what they do, the grand conclusion of all of my reading of scholarly literature related to religion and cognitive science over the past eight years.

What brought this up is another blog post I’ve read. The following is from In Due Course (h/t Marginal Revolution):

The whole “normative sociology” concept has its origins in a joke that Robert Nozick made, in Anarchy, State and Utopia, where he claimed, in an offhand way, that “Normative sociology, the study of what the causes of problems ought to be, greatly fascinates us all”(247). Despite the casual manner in which he made the remark, the observation is an astute one. Often when we study social problems, there is an almost irresistible temptation to study what we would like the cause of those problems to be (for whatever reason), to the neglect of the actual causes. When this goes uncorrected, you can get the phenomenon of “politically correct” explanations for various social problems – where there’s no hard evidence that A actually causes B, but where people, for one reason or another, think that A ought to be the explanation for B. This can lead to a situation in which denying that A is the cause of B becomes morally stigmatized, and so people affirm the connection primarily because they feel obliged to, not because they’ve been persuaded by any evidence.

Let me give just one example, to get the juices flowing. I routinely hear extraordinary causal powers being ascribed to “racism” — claims that far outstrip available evidence. Some of these claims may well be true, but there is a clear moral stigma associated with questioning the causal connection being posited – which is perverse, since the question of what causes what should be a purely empirical one. Questioning the connection, however, is likely to attract charges of seeking to “minimize racism.” (Indeed, many people, just reading the previous two sentences, will already be thinking to themselves “Oh my God, this guy is seeking to minimize racism.”) There also seems to be a sense that, because racism is an incredibly bad thing, it must also cause a lot of other bad things. But what is at work here is basically an intuition about how the moral order is organized, not one about the causal order. It’s always possible for something to be extremely bad (intrinsically, as it were), or extremely common, and yet causally not all that significant.

I actually think this sort of confusion between the moral and the causal order happens a lot. Furthermore, despite having a lot of sympathy for “qualitative” social science, I think the problem is much worse in these areas. Indeed, one of the major advantages of quantitative approaches to social science is that it makes it pretty much impossible to get away with doing normative sociology.

Incidentally, “normative sociology” doesn’t necessarily have a left-wing bias. There are lots of examples of conservatives doing it as well (e.g. rising divorce rates must be due to tolerance of homosexuality, out-of-wedlock births must be caused by the welfare system etc.) The difference is that people on the left are often more keen on solving various social problems, and so they have a set of pragmatic interests at play that can strongly bias judgement. The latter case is particularly frustrating, because if the plan is to solve some social problem by attacking its causal antecedents, then it is really important to get the causal connections right – otherwise your intervention is going to prove useless, and quite possibly counterproductive.

This quote points towards the heart of what I think leads people to believe what they do.

People sort of waffle between thinking of the universe as operating in a mechanistic or empirical way and operating in a “social” or “moral” way. We restrict, at least ideally, mechanistic thinking to relationships between inanimate objects. A rock on some remote coastline crumbles into the sea due to water/air erosion. A simple (ish) math formula can probably be used to predict when that particular piece of rock will erode and crumble.
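
Purely to illustrate what that mechanistic mode looks like (the linear-erosion assumption, the function name, and the numbers below are mine, not from any real geological model), the "simple(ish) formula" could be nothing more than remaining thickness divided by erosion rate:

```python
# Toy, purely illustrative "mechanistic" prediction: assume the rock ledge
# erodes at a roughly constant rate, so its remaining lifetime is just
# thickness / rate. Real coastal erosion is messier; this only shows the
# shape of mechanistic thinking, not actual geology.

def years_until_collapse(thickness_m: float, erosion_rate_m_per_year: float) -> float:
    """Linear-erosion estimate of when the rock crumbles into the sea."""
    return thickness_m / erosion_rate_m_per_year

# Hypothetical numbers: a 2.5 m ledge losing 3 cm per year.
print(years_until_collapse(2.5, 0.03))  # ~83 years
```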

Interactions between agents, on the other hand, we think about in a “moral” way: in moral “shoulds” and “oughts” rather than mechanical ones. In the quote above, racism “should” be responsible for a host of other social ills because, well, racism is bad. It’s a horns effect applied to a concept.

Most importantly, people intuit that this “moral” way supersedes the mechanistic way in both value and precedence: it is more important, and it is the ultimate cause. The moral way is the fundamental rule of the universe. Instead of the universe running on the laws of physics, “moral” thinking intuits that it runs on the laws of proper social protocol. If something of great import either has happened or has to happen, then the rules behind social interactions carry the day. Because of this tendency, we humans tend to ascribe moral causality to things and events that are in actuality mechanistic. Your car didn’t start this morning? What did you do to your car, or what moral failing did you commit, to deserve this? And so on.

The more relevant case of this, since this is an early Christianity blog as well, is the sacrifice of Jesus. This makes absolutely no sense in a mechanistic way (i.e., biology; laws of physics). But it makes sense in a social way: Concepts of sin, blood sacrifice, redemption, and so on are social concepts. You feel bad or guilty, or are overwhelmed with empathy and a sense of indebtedness. And our brains give precedence to these social and moral aspects of “causality” since those are the fundamental building blocks of the universe… intuitively.

This follows our cognitive architecture of System 1 and System 2 thinking. In my parlance, the Intuitionists and the Rationalists.

Moreover, people seem to balk when one mode of thinking gets applied to the other’s domain. At least, in one direction anyway. Imagine someone saying that the rock eroded and crumbled into the sea because we didn’t sit and talk to the coastline enough. This is the basic idea behind concepts like animism. When we hear people talk like this, we sort of shrug our shoulders and go on with our lives. Animism makes a sort of intuitive sense, especially if we didn’t know any better.

But in the other direction, when someone applies mechanical thinking to human relationships, that’s where the real fireworks happen. It’s not allowed! You can’t do that! Notice that people have the same reaction if you try to apply mechanical thinking to religious concepts. You can’t do that! Non-Overlapping Magisteria! Because religion is premised on the idea that the fundamental reality of the universe is social. The supernatural? Psi? Deepak Chopra-like universal consciousness? Life after death? Even free will? All based on the idea that the fundamental rule of the universe is social.

Why do we think like this? I think it’s because our brains evolved intelligence in a social environment, where socialization was the main determinant of who lived and who died. Turning the gain up too high on social rules, in that environment, probably didn’t hurt. But when all you know is your tribe, and your main task is modeling other minds, applying mechanical thinking is probably detrimental.

The problem is that a lot of our experience of the world involves intentional agents interacting with unintentional, inanimate objects and vice versa. We yell at inanimate objects when they do us wrong, and we assume we must have committed some social faux pas when bad things happen to us. This undergirds intuitive concepts like the just-world fallacy.

It just so happens that the mechanical and social modes of thinking not only inhibit each other (i.e., thinking in a mechanical way makes you less empathetic, and thinking in a social way makes you less “technical”, so to speak) but also seem to be represented in the genders: there’s the “extreme” male brain and the “extreme” female brain:

It turns out that, when it comes to brains, being a super-male may not be such a good deal. According to Baron-Cohen, Autism-Spectrum-Disorders (ASD), which are far more common in males than in females, may reflect the expression of an extreme male brain, one that has extremely high systemizing skills and extremely low empathizing ones. Individuals with ASD often have excellent abilities for analyzing, organizing, and remembering technical information but poor abilities for communication, expressing emotions, and understanding the emotional and communicative expressions of others. Baron-Cohen has suggested that this extreme male brain may be the result of exposure to too much testosterone in the first trimester of pregnancy.

Until recently, it was unclear what an extreme female brain may look like, but a recent study conducted at the State University of New York in Albany and published in the online journal Evolutionary Psychology has offered some hints about it. The authors of this study, Jennifer Bremser and Gordon Gallup Jr., have shown that too much concern about what other people think and feel is associated with fear of negative evaluations, which may be expressed through apprehension and distress over negative evaluations by others, the avoidance of evaluative social situations, and the expectation that others would evaluate one negatively.

With this information, can you guess which gender is more religious?

But let me get this out of the way: No, the universe doesn’t ultimately run on social rules. No, the universe is not at its base ontologically mental; as a matter of fact, I can say with a high degree of Bayesian confidence that no ontologically mental entities exist, since that breaks all sorts of laws of thermodynamics.

Maybe I’m not the first to point this out, and maybe this is specious thinking, but it seems to me that, just as how ontogeny recapitulates phylogeny, our developmental psychology (probably) recapitulates our evolutionary psychology.

 
Comments Off on The Fundamental Premise

Posted by on September 19, 2016 in cognitive science, religion

 

Charts Are Persuasive

  

 
2 Comments

Posted by on September 17, 2015 in cognitive science, Funny

 

Objective Morality: Not Even Wrong

[Image: a disgusted woman]

If you’re not familiar with the term not even wrong, go read that link and come back.

Back? Good. I think the concept of objective morality, in the Sam Harris sense that we can use science to determine moral values, is not even wrong. It fails at a fundamental level: it assumes that the moral reasoning people actually do is the same as mathematical reasoning. Hell, it assumes moral reasoning follows a neatly logical “if-then-else” sort of structure.

It doesn’t. Rather, people don’t reason morally this way.

I wrote a post about this a while ago. Just look at the title of that blog post to see my point: intuition / morality changes by gender. Or take a look at this recent post from Epiphenom:

Using an online questionnaire, they showed that people think that justice stems from at least 6 different sources: from ‘nature’ and from God, and also from other people and from yourself, as well as just plain chance.

[…]

They were also interested in inaction. It probably won’t surprise you to know that the most common human response to minor criminal behaviour is inaction. So what the researchers wanted to know is whether the reasons given for inaction varied according to people’s beliefs about why the world was just.

Sure enough, people who believed in God gave God-related reasons for inaction (e.g. “There’s little we can do to help these people, as what happens to them is God’s will”). Similarly, people who believe in nature-related justice felt that we can’t help criminals because it’s in their nature, people who believe in self-related justice felt that it was up to the individuals concerned to help themselves, while those who believed in chance-related justice felt that it was just their dumb luck.

Once again, however, people who believe justice is down to other people were different. When offered an ‘other people’ related reason for inaction (“With society and the justice system the way it is, there’s nothing we can do”), they rejected it. And it’s not because only an idiot would agree with that statement – it was quite attractive to those who believed in nature- and self-related justice.

This study from Epiphenom is what made me think of Harris’ thesis of scientific morality. The people in this survey are rationalizing their morality by appealing to what they see as the source of justice in the world. In effect, Harris is forcing a certain type of moral thinking on people that doesn’t come naturally. Slate Star Codex put this more eloquently:

Democrats don’t really care about helping the poor, they only care about increasing government’s ability to take your money. We can prove this, because Republicans consistently give more to charity than Democrats – and because if Democrats really cared about the poor they would stop supporting a welfare system that discourages lifting yourself out of poverty. The only explanation is that the hundred-million odd Democrats in this country are all moral mutants who hold increased labyrinthine bureaucracy as a terminal moral value.

No, wait, sorry! That wasn’t it at all. They were saying that civil rights activists don’t really want to prevent hate crimes against Muslims, they only care about supporting terrorism. We can prove this because they seem pretty okay with the tens of thousands of Muslims who are being killed and maimed in wars abroad that they don’t promote any intervention in – and because they refuse to ban Muslim immigration to America, a policy which would decrease hate crimes against Muslims but also decrease the chance of terrorism. The only explanation is that the hundred-million odd civil rights activists in this country are all moral mutants who hold increased terrorism as a terminal moral value.

No, wait, sorry again! That wasn’t it either! They were saying that pro-lifers don’t really care about fetuses, they just support government coercion of women. We can prove this because they refuse to support contraception, which would decrease the need for fetus-murdering abortions – and because they seem pretty okay with abortion in cases of rape or incest. The only explanation is that the hundred-million odd pro-lifers in this country are all moral mutants who hold increased oppression of women as a terminal moral value.

No, wait, still wrong! I’m totally breaking apart here! They were saying that atheists don’t really doubt the existence of God, but they are too proud to worship anything except themselves. We can prove this because atheists sometimes pray for help during extreme emergencies – and…

No, wait! It turns out it was actually third one after all! The one with the pro-lifers and abortion. Oops. In my defense, I have trouble keeping essentially identical arguments separate from one another.

[…]

In saying pro-lifers should support contraception, Alas is making exactly the error that The Last Superstition warned against. Ze’s noticing that Christians do things that don’t agree with modern moral philosophy, and so assuming Christians are either stupid or evil, instead of that they have a weird moral philosophy ze’s never heard of.

So instead of excusing pro-lifers, start by tarring them further. They don’t hate women. They don’t love oppression. It’s much worse than that. Pro-lifers are not consequentialists.

Consequentialism is a moral philosophy that says it’s okay to do a lesser evil if it leads to a greater good. I have argued for it at length elsewhere, but one of the reasons I argue for it is that most people don’t believe it. Only about a quarter of philosophers are consequentialists, and all the evidence shows that even fewer ordinary people do. Studies of the famous fat man problem show only 10% of people are willing to kill one person in order to save five others, something a true consequentialist would do in a heartbeat.

One group particularly heinous in their rejection of consequentialism is Christians. In his Epistle to the Romans, St. Paul argues that “One may not do evil that good may come”.

The Christians agree with me, against Alas, that their rejection of consequentialism is fundamental to their rejection of abortion.

Whenever we talk about moral reasoning, we have to take into account that not everyone reasons morally the same way. Moreover, people aren’t even aware of how they reason; they just get a feeling of certainty (or disgust, or fear, or…). Most fatally, people think that their way of reasoning morally is how other people do (or should) reason morally. That’s just not gonna fly. You’re not going to convince a deontologist how they should act via consequentialist logic, and vice versa. It might help to try to convince said person of the consequentialist worldview via deontological arguments (or vice versa), but you have to actually think of that meta step first.

What’s really jacked up is that even though most people aren’t consequentialists, many people use consequentialist reasoning to back up their moral judgments after the fact. Think about something like gay marriage. The argument is that gay marriage will destroy traditional marriage. This is obviously nonsense, but it happens because the initial moral judgment was something else (deontological, or maybe just disgust) that then gets rationalized with consequentialist reasoning.

Don’t think for a second that only conservatives do this. You, yes you, probably do this too. What’s the easiest way to prevent rape? Sex segregation; having women drink less around strange men; or any number of other (consequentialist!) solutions that conservatives concoct. But those don’t fly, because the initial moral impetus for equality wasn’t consequentialist, so we wind up with inefficient signaling that merely postures as consequentialist instead.

Where have we seen this rationalization behavior before? Oh yeah, the intuitionists and the rationalists. I love quoting myself:

There are a few experiments showing that when communication is physically severed between the two halves of the brain, each side of the brain gets different information. Yet the part of the brain that does the speaking might not be the part of the brain that has the information. So you end up with rationalizations, like split-brain patients grabbing a shovel with their left hand (since their left visual field was shown snow) while their right visual field sees a chicken. When asked to explain why they grabbed the shovel, they — well, the side of their brain that only sees the chicken — make up an explanation, like the shovel being used to scoop up chicken poop! That press secretary, pretty quick on his feet.

But this doesn’t just happen with split brain patients. It seems to happen a lot more than we think, in our normal, everyday brains.

So for example, there was one experiment where people were asked to pick their favorite pair of jeans out of four (unbeknownst to them) identical pairs of jeans. A good portion of the people picked the jeans on the right, since they looked at the jeans from left to right. But they were unaware that that was their decision algorithm, and they rationalized their decision by saying they liked the fabric or the length or some other non-discriminating fact about the jeans. Liking the fabric of one pair of jeans more than the others was demonstrably false, since the jeans were identical, yet that was the reason they gave. There’s still no persistent across-the-aisle partisanship in your fully functioning brain, so the press secretary still has to come up with a good, socially acceptable story about Congress’ decision for the general public’s consumption.

This is one reason why it is inefficient to flat-out ask someone something controversial. People make decisions based on information they don’t even know they’re using; hence the entire existence of bias (the flip side being that if you get someone to publicly admit to some group identity or position, they’ll be biased to act more in line with that group identity or position in the future without even realizing it). They’re not going to give you their “real” answer; they’re going to give you the socially acceptable answer, since that is the entire job of the press secretary, and any psychological study that simply asks people questions has a fatal flaw.

Or this:

We have little idea why we do things, but make up bogus reasons for our behavior…

Adrian North and colleagues from the University of Leicester playe[d] traditional French (accordion music) or traditional German (a Bierkeller brass band – oompah music) music at customers and watched the sales of wine from their experimental wine shelves, which contained French and German wine matched for price and flavour. On French music days 77% of the wine sold was French, on German music days 73% was German – in other words, if you took some wine off their shelves you were 3 or 4 times more likely to choose a wine that matched the music than wine that didn’t match the music.

Did people notice the music? Probably in a vague sort of way. But only 1 out of 44 customers who agreed to answer some questions at the checkout spontaneously mentioned it as the reason they bought the wine. When asked specifically if they thought that the music affected their choice 86% said that it didn’t. The behavioural influence of the music was massive, but the customers didn’t notice or believe that it was affecting them.

In other words the part of our brain that ‘reasons’ and explains our actions, neither makes decisions, nor is even privy to the real cause of our actions…

Let me emphasize this last sentence: In other words the part of our brain that ‘reasons’ and explains our actions, neither makes decisions, nor is even privy to the real cause of our actions. Moral reasoning is no different.
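
Incidentally, the “3 or 4 times” figure in the wine study checks out with simple arithmetic, using only the percentages given in the excerpt above. A quick back-of-the-envelope sketch:

```python
# Sanity-check the wine/music figures quoted above: on French-music days
# 77% of the wine sold was French; on German-music days 73% was German.
# "Matched vs. mismatched" is then just the ratio of those shares.

french_days_match = 0.77   # share of French wine sold on French-music days
german_days_match = 0.73   # share of German wine sold on German-music days

ratio_french_days = french_days_match / (1 - french_days_match)  # ~3.3
ratio_german_days = german_days_match / (1 - german_days_match)  # ~2.7

print(round(ratio_french_days, 1), round(ratio_german_days, 1))
# Roughly 3x either way, consistent with the quoted "3 or 4 times".
```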

In principle, we might be able to discern objective moral values if we could get everyone to be a consequentialist. But… yeah, good luck with that. I’ll just say that I hate talking about how people should behave morally, mainly because of this wall of separation between how people actually reason morally and their rationalizations for it. It’s such a headache. I’d rather stick with the bird-watching view towards morality.

 
Comments Off on Objective Morality: Not Even Wrong

Posted by on July 31, 2015 in cognitive science, morality

 

Study: Even Atheists Distrust Atheists

From Epiphenom:

Leah Giddings and Thomas Dunn, of Nottingham Trent University in the UK, set out to replicate some of the earlier work on atheism and trust, but with a twist.

They gave a group of 100 people a short story to read about Richard. It’s the same story that’s been used in previous research, and it goes as follows:

Richard is 31 years old. On his way to work one day, he accidentally backed his car into a parked van. Because pedestrians were watching, he got out of his car. He pretended to write down his insurance information. He then tucked the blank note into the van’s window before getting back into his car and driving away.

Later the same day, Richard found a wallet on the sidewalk. Nobody was looking, so he took all of the money out of the wallet. He then threw the wallet in a trash can.

Half the participants were asked whether they thought Richard was a teacher, or a teacher and a Christian. The other half were asked whether he was a teacher, or a teacher and an atheist.

Now of course there’s nothing in the story to indicate Richard’s spiritual beliefs, so if they claim it does that’s evidence of prejudice.

As expected, Christians were likely to be prejudiced against atheists. But once again, so were the atheists (albeit to a lesser degree – nearly 50% of atheists and over 75% of Christians associated atheism with untrustworthy behaviour).

So here we are in one of the most secular countries on earth, and even atheists think that other atheists aren’t to be trusted.

To follow on from this, the researchers gave the participants some statistics on the number of atheists in the country. Some of them got accurate statistics, and some got statistics that inflated the number of atheists.

It didn’t make much difference. Pro-christian prejudice went down, but anti-atheist prejudice did not.
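
For what it’s worth, the reason the “teacher, or a teacher and an atheist?” question reveals prejudice is the conjunction rule: whatever you believe about Richard, “teacher and atheist” can never be more probable than “teacher” alone, so picking the conjunction only makes sense if the atheist label intuitively fits the untrustworthy behaviour. A minimal sketch of that rule, with made-up numbers:

```python
# The conjunction rule: P(teacher AND atheist) <= P(teacher), no matter
# what probabilities you assign. Choosing "teacher and atheist" as MORE
# likely is therefore a conjunction error, and people commit it mostly
# when the added label fits their stereotype of the person described.
# The numbers below are invented purely to illustrate the inequality.

p_teacher = 0.30                 # hypothetical: chance Richard is a teacher
p_atheist_given_teacher = 0.40   # hypothetical: chance he's an atheist, if a teacher

p_teacher_and_atheist = p_teacher * p_atheist_given_teacher  # 0.12

assert p_teacher_and_atheist <= p_teacher
print(p_teacher, p_teacher_and_atheist)  # 0.3 0.12
```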

As usual, this is only one study, so don’t take it as the definitive say on the matter. It’s more likely that this study describes the particular environment it was conducted in rather than some fundamental feature of human nature.

I only put that caveat there because I kinda get tired of people pointing to one study and claiming it is the be-all and end-all of an argument. But this study is still interesting nonetheless!

 
Comments Off on Study: Even Atheists Distrust Atheists

Posted by on June 26, 2015 in cognitive science

 

Διαριθμέω / Diarithmeo

to list

A roundup of some stuff I found interesting pertaining to religious belief!

Meditation has slightly different effects on the brains of men and women:

To our knowledge, this is the first study examining potential modulating effects of biological sex on hippocampal anatomy in the framework of meditation. Our analyses were applied in a well-matched sample of 30 meditators (15 men/15 women) and 30 controls (15 men/15 women), where meditators had, on average, more than 20 years of experience (with a minimum of 5 years), thus constituting true long-term practitioners. In accordance with the outcomes of our previous study of meditation effects on hippocampal anatomy by pooling male and female brains together (Luders et al., 2013b), we observed that hippocampal dimensions were enlarged both in male and in female meditators when compared to sex- and age-matched controls. In addition, our current analyses revealed that meditation effects, albeit present in both sexes, differ between men and women in terms of the magnitude of the effects, the laterality of the effects, and the exact location of the effects detectable on the hippocampal surface.

[…]

Although existing mindfulness research seems to lack sex-specific analyses—at least with respect to addressing brain anatomy—the observed group-by-sex interactions seem to be in accordance with a recent study reporting sex-divergent outcomes when assessing the impact of a mindfulness intervention on behavioral measures/psychological constructs (de Vibe et al., 2013). More specifically, administering a 7-week mindfulness-based stress reduction (MBSR) program, that study detected significant changes in mental distress, study stress and well-being in female students but not in male students.

The hippocampus is a small brain structure integral to the limbic (emotion-motivation) system. It plays important roles in learning, mood, and the formation of memories.

Meditation and prayer have some of the same effects on the brain, so we might see the same results with people who pray regularly. This might be another reason why men are less religious than women: Do women benefit more from religious practices?

Next, type of belief in free will linked to performance in self-control based tasks:

The first task used to measure self-control is known as the “Stroop task,” which requires participants to resist the urge to name a word on a colored background rather than simply saying the name of the color, which requires a degree of self-regulation to stifle the incorrect response. The second, an anagram test, gave participants seven letters and unlimited time to make as many English words as they could with the letters, which measures persistence despite boredom or fatigue.

Both tests are considered “seminal indices of self-control,” according to Clarkson, although the skills required to perform each are different.

“So it is not simply a matter of conservatives being more efficient or liberals being overly analytical,” he said.

In their performance on both tasks, however, conservatives outpaced their liberal counterparts. At the same time, both groups were shown to have similar levels of motivation and effort.

[…]

[Next], a group of study participants was told that the belief in free will has been shown to be detrimental to self-control by causing feelings of frustration, anger or anxiety that inhibit concentration. Under these circumstances, the effects were reversed. Liberals outperformed conservatives, suggesting that a belief in free will can undermine self-control under certain conditions.

“If you can get people to believe that free will is bad for self-control, conservatives no longer show an advantage in self-control performance,” Clarkson said.

So, if one believes in free will, then one will perform better on tasks that test self-control. But if you poison the concept of free will, and you believe you have this now-poisoned trait, then you’ll do worse on those same self-control tasks! Pretty wild stuff. Reminds me of stereotype threat and growth mindset.

Next on the rationality front, expert philosophers are just as irrational as the rest of us [pdf]:

Abstract:

We examined the effects of order of presentation on the moral judgments of professional philosophers and two comparison groups. All groups showed similar-sized order effects on their judgments about hypothetical moral scenarios targeting the doctrine of the double effect, the action-omission distinction, and the principle of moral luck. Philosophers’ endorsements of related general moral principles were also substantially influenced by the order in which the hypothetical scenarios had previously been presented. Thus, philosophical expertise does not appear to enhance the stability of moral judgments against this presumably unwanted source of bias, even given familiar types of cases and principles.

Seems as though expert philosophers are subject to framing effects, just as experts in other fields are. This is why one needs to learn to just shut up and multiply. But not too much.

And then, being told about naive realism and experiencing an optical illusion makes people doubt their certainty:

Nearly 200 students took part and were split into four groups. One group read about naive realism (e.g. “visual illusions provide a glimpse of how our brain twists reality without our intent or awareness”) and then they experienced several well-known, powerful visual illusions (e.g. the Spinning Wheels, shown above, the Checker Shadow, and the Spinning Dancer), with the effects explained to them. The other groups either: just had the explanation but no experience of the illusions; or completed a difficult verbal intelligence test; or read about chimpanzees.

Afterwards, whatever their group, all the participants read four vignettes about four different people. These were written to be deliberately ambiguous about the protagonist’s personality, which could be interpreted, depending on the vignette, as either assertive or hostile; risky or adventurous; agreeable or a push over; introverted or snobbish. There was also a quiz on the concept of naive realism.

The key finding is that after reading about naive realism and experiencing visual illusions, the participants were less certain of their personality judgments and more open to the alternative interpretation, as compared with the participants in the other groups. The participants who only read about naive realism, but didn’t experience the illusions, showed just as much knowledge about naive realism, but their certainty in their understanding of the vignettes wasn’t dented, and they remained as closed to alternative interpretations as the participants in the other comparison conditions.

“In sum,” the researchers said, “exposing naive realism in an experiential way seems necessary to fuel greater doubt and openness.”

I imagine doing something like this, and then teaching some other rationality concepts (like my feeling of certainty) might be a good overall teaching tool. Might.

 
2 Comments

Posted by on June 25, 2015 in cognitive science, rationality

 
 