
Category Archives: rationality

Διαριθμέω / Diarithmeo

to list

A roundup of some stuff I found interesting pertaining to religious belief!

Meditation has slightly different effects on the brains of men and women:

To our knowledge, this is the first study examining potential modulating effects of biological sex on hippocampal anatomy in the framework of meditation. Our analyses were applied in a well-matched sample of 30 meditators (15 men/15 women) and 30 controls (15 men/15 women), where meditators had, on average, more than 20 years of experience (with a minimum of 5 years), thus constituting true long-term practitioners. In accordance with the outcomes of our previous study of meditation effects on hippocampal anatomy by pooling male and female brains together (Luders et al., 2013b), we observed that hippocampal dimensions were enlarged both in male and in female meditators when compared to sex- and age-matched controls. In addition, our current analyses revealed that meditation effects, albeit present in both sexes, differ between men and women in terms of the magnitude of the effects, the laterality of the effects, and the exact location of the effects detectable on the hippocampal surface.

[…]

Although existing mindfulness research seems to lack sex-specific analyses—at least with respect to addressing brain anatomy—the observed group-by-sex interactions seem to be in accordance with a recent study reporting sex-divergent outcomes when assessing the impact of a mindfulness intervention on behavioral measures/psychological constructs (de Vibe et al., 2013). More specifically, administering a 7-week mindfulness-based stress reduction (MBSR) program, that study detected significant changes in mental distress, study stress and well-being in female students but not in male students.

The hippocampus is a small brain structure integral to the limbic (emotion-motivation) system. It plays important roles in learning, mood, and the formation of memories.

Meditation and prayer have some of the same effects on the brain, so we might see the same results with people who pray regularly. This might be another reason why men are less religious than women: Do women benefit more from religious practices?

Next, the type of belief one holds about free will is linked to performance on self-control tasks:

The first task used to measure self-control is known as the “Stroop task,” which requires participants to resist the urge to name a word on a colored background rather than simply saying the name of the color, which requires a degree of self-regulation to stifle the incorrect response. The second, an anagram test, gave participants seven letters and unlimited time to make as many English words as they could with the letters, which measures persistence despite boredom or fatigue.

Both tests are considered “seminal indices of self-control,” according to Clarkson, although the skills required to perform each are different.

“So it is not simply a matter of conservatives being more efficient or liberals being overly analytical,” he said.

In their performance on both tasks, however, conservatives outpaced their liberal counterparts. At the same time, both groups were shown to have similar levels of motivation and effort.

[…]

[Next], a group of study participants was told that the belief in free will has been shown to be detrimental to self-control by causing feelings of frustration, anger or anxiety that inhibit concentration. Under these circumstances, the effects were reversed. Liberals outperformed conservatives, suggesting that a belief in free will can undermine self-control under certain conditions.

“If you can get people to believe that free will is bad for self-control, conservatives no longer show an advantage in self-control performance,” Clarkson said.

So, if one believes in free will, one performs better on tasks that test self-control. But if you poison the concept of free will, and you believe you have this poisoned trait, then you’ll do worse on those same tests! Pretty wild stuff. Reminds me of stereotype threat and growth mindset.

Next on the rationality front, expert philosophers are just as irrational as the rest of us [pdf]:

Abstract:

We examined the effects of order of presentation on the moral judgments of professional philosophers and two comparison groups. All groups showed similar-sized order effects on their judgments about hypothetical moral scenarios targeting the doctrine of the double effect, the action-omission distinction, and the principle of moral luck. Philosophers’ endorsements of related general moral principles were also substantially influenced by the order in which the hypothetical scenarios had previously been presented. Thus, philosophical expertise does not appear to enhance the stability of moral judgments against this presumably unwanted source of bias, even given familiar types of cases and principles.

Seems as though expert philosophers are subject to order effects just like experts in other fields. This is why one needs to learn to just shut up and multiply. But not too much.

And then, being told about naive realism and experiencing an optical illusion makes people doubt their certainty:

Nearly 200 students took part and were split into four groups. One group read about naive realism (e.g. “visual illusions provide a glimpse of how our brain twists reality without our intent or awareness”) and then they experienced several well-known, powerful visual illusions (e.g. the Spinning Wheels, shown above, the Checker Shadow, and the Spinning Dancer), with the effects explained to them. The other groups either: just had the explanation but no experience of the illusions; or completed a difficult verbal intelligence test; or read about chimpanzees.

Afterwards, whatever their group, all the participants read four vignettes about four different people. These were written to be deliberately ambiguous about the protagonist’s personality, which could be interpreted, depending on the vignette, as either assertive or hostile; risky or adventurous; agreeable or a pushover; introverted or snobbish. There was also a quiz on the concept of naive realism.

The key finding is that after reading about naive realism and experiencing visual illusions, the participants were less certain of their personality judgments and more open to the alternative interpretation, as compared with the participants in the other groups. The participants who only read about naive realism, but didn’t experience the illusions, showed just as much knowledge about naive realism, but their certainty in their understanding of the vignettes wasn’t dented, and they remained as closed to alternative interpretations as the participants in the other comparison conditions.

“In sum,” the researchers said, “exposing naive realism in an experiential way seems necessary to fuel greater doubt and openness.”

I imagine doing something like this, and then teaching some other rationality concepts (like my feeling of certainty), might be a good overall teaching tool. Might.

 

Posted on June 25, 2015 in cognitive science, rationality

 

What are the odds that Jesus rose or Moses parted the waves? Even with the best witnesses, vanishingly small

I claim no great originality for my argument. I’m borrowing from the great Scottish philosopher David Hume, particularly Section 10 of his magnificent Enquiry Concerning Human Understanding (1748). If there is any novelty in my presentation, it owes to the marriage of Hume’s ideas with a famous theorem in probability theory proposed by the Reverend Thomas Bayes in ‘An Essay towards solving a Problem in the Doctrine of Chances’ (1763). The technical details, fortunately, can be put to the side for our purposes.

Read more at Aeon.
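For the curious, here’s a minimal sketch of the Bayes computation the article gestures at. All the numbers are invented for illustration (the article doesn’t commit to specific values); the point is the structure of the calculation.

```python
# Bayes' theorem applied to miracle testimony. All numbers are invented.
# M = "the miracle happened"; T = "a witness testifies that it happened".
p_m = 1e-9               # prior: a miracle is, by nature, wildly improbable
p_t_given_m = 0.999      # an extremely reliable witness
p_t_given_not_m = 0.001  # chance the witness errs, lies, or was fooled

p_t = p_t_given_m * p_m + p_t_given_not_m * (1 - p_m)
p_m_given_t = p_t_given_m * p_m / p_t
print(p_m_given_t)  # ~1e-6: better than the prior, but still vanishingly small
```

That’s Hume’s point in Bayesian dress: testimony only makes a miracle credible when the probability of the testimony being false is smaller than the prior probability of the miracle itself.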

 

Posted on December 5, 2014 in Bayes, rationality

 

What If?


A while ago I wrote a post called Truth vs. Morality where I pointed out a question I sometimes ask Christians: If god didn’t exist, and this was known, should people still believe in god? Reactions to that question (the few times I’ve asked it) have been somewhat predictable: some say yes, most say no.

I’m thinking that the “yes” answerers are maybe not answering the question I’m asking, but subconsciously substituting an easier question and answering that instead. Who knows.

I thought of a way to take it further. Instead of asking a truth vs. morality question, I might start asking a morality vs. morality question; that is, a consequentialist vs. deontological question. This would be something like: What if being a Christian leads to net unhappiness in the world? Should one still be a Christian? I’m not sure what the answers might be, but I predict that most would say “yes”. Probably because, in this instance, they would substitute the implicit consequentialist point of the question not only with the deontological question (i.e., what one’s duty is) but also with the “is Christianity true?” question. I.e., Christianity could only be a net negative in the world if Christianity were false; Christianity is true; therefore it is not a net negative in the world.

Of course, maybe if Christianity is true we should believe it. Even if belief in Christianity ultimately makes humanity unhappy.

But then again, this question could equally be applied to beliefs I hold dear, just as I applied the same truth vs. morality question to beliefs I hold dear in the original post. What if secularism or atheism ultimately makes the world unhappy? What if sexism is a net benefit to the world, and feminism makes people unhappy? What if slavery is good for the world overall, at the expense of black people?

In these cases, I’m pretty sure I would answer exactly how a Christian might answer, and my thought process might mirror theirs (hopefully that isn’t too much of a typical mind fallacy). My first response is selfishness: I like my personal freedom/secularism/feminism/etc., thank you very much, and the rest of the world can fuck off. Why should I be a slave even if that benefits the world? It seems pretty jacked up to think about. Or, just like the hypothetical theist, I wouldn’t even countenance the question being asked, meaning I would rebuke it with “well, that can’t be, because racism/sexism/theocracy are obviously false and demonstrably make people unhappy, so the question is a non-starter”.

This is one of the huge drawbacks for any sort of upcoming technological singularity: Whose morals do we program into the AI before it goes FOOM? People are all too eager to defer to a supernatural god whose whims are just as arbitrary as a future AI’s, if not more so. What if this AI reaches the same conclusions about sex/gender roles or slavery that patriarchal religions have? That divisions of labor between the sexes and/or slavery make people happier because they have fewer choices? There are probably an uncountable number of personal creeds, beliefs, and morals that make you as an individual happy but that anyone (or anything) with enough processing power could show to be harmful if practiced on a wide scale. And any budding rationalist should always be aware of alternatives to their pet hypothesis.

So it seems I wouldn’t be able to answer the very question I would pose to a hypothetical Christian. I would think their answer “wrong” while hypocritically accepting my own answer about my sacred values as “right”.

 

Posted on September 19, 2014 in apologetics, morality, rationality

 

The Motte and Bailey Doctrine


(More than meets the eye)

As most people who read this blog are aware, I’ve read, and have been subject to, a lot of religious apologetics, either online or in meatspace. One of the things I started to become aware of was a particularly nebulous debating tactic. There really wasn’t a name for it, but it would be pretty obvious when pointed out.

It goes a bit like this: When theists use the argument “God is just another word for the Ground of All Being” or “God is love”, that’s a pretty inoffensive premise. Of course things like love exist and, well, existence exists. But then in another breath they’re praying to god to find their keys, or to get them a new job, or, in a more sinister context, to send hurricanes because he’s angry at homosexuals. This more interactive god is not just “love” or the ground of all being; it’s, quite obviously, a personal god. A god with agency. You point this out, but then the theist retreats; he rejoins “But no, God is just another word for love/Ground of Being, surely you can’t object to that?”

Frustrating. I recently discovered that there is a name for this tactic: the Motte and Bailey Doctrine. The paper that coined the name compares it to a form of medieval castle, where there would be a field of desirable and economically productive land called a bailey, and a big ugly tower in the middle called the motte. If you were a medieval lord, you would do most of your economic activity in the bailey and get rich. If an enemy approached, you would retreat to the motte and rain down arrows on the enemy until they gave up and went away. Then you would go back to the bailey, which is the place you wanted to be all along. It’s fitting that the name was coined in an article attacking post-modernism.

It would be kinda like having a flower in your house. No one objects to flowers, right? But then whenever you get in an argument with someone, you transform the flower into an assault rifle. The person being attacked says “Hey! Hey! What are you doing with an M4???” and then you say “I don’t know what you’re talking about, it’s just a flower!” because you’ve transformed the M4 back into a flower. But make no mistake: that flower is more than meets the eye.

Reading more about why people believe what they do, on the other hand, has convinced me that apologists probably don’t even realize that they’re doing this. Hypocrisy is a very fruitful strategy if you can get away with it. Your subconscious brain knows this. As Robin Hanson says:

Overcoming bias is also a Red Queen game. Your mind was built to be hypocritical, with more conscious parts of your mind sincerely believing that they are unbiased, and other less conscious parts systematically distorting those beliefs, in order to achieve the many functional benefits of hypocrisy. This capacity for hypocrisy evolved in the context of conscious minds being aware of bias in others, suspecting it in themselves, and often sincerely trying to overcome such bias. Unconscious minds evolved many effective strategies to thwart such attempts, and they usually handily win such conflicts.

Our big brains were not designed by the blind idiot god, evolution, to get impartial, objectively true answers. They were designed to be more like a defense lawyer defending a client who is probably guilty.

Now, I didn’t discover the name for this debate technique. This is thanks to Slate Star Codex pointing out something that I’ve compared to religion before:

I feel like every single term in social justice terminology has a totally unobjectionable and obviously important meaning – and then is actually used a completely different way.

The closest analogy I can think of is those religious people who say “God is just another word for the order and beauty in the Universe” – and then later pray to God to smite their enemies. And if you criticize them for doing the latter, they say “But God just means there is order and beauty in the universe, surely you’re not objecting to that?”

The result is that people can accuse people of “privilege” or “mansplaining” no matter what they do, and then when people criticize the concept of “privilege” they retreat back to “but ‘privilege’ just means you’re interrupting women in a women-only safe space. Surely no one can object to criticizing people who do that?”

I wouldn’t read this as a condemnation of feminism. I would read this as a condemnation of the architecture of the human brain. After all, any cause that deals with morality is bound to sacrifice truth to morality, because of our inherently hypocritical, biased brains. We may want to do good, but we really have no control over our decisions; free will doesn’t exist. We just rely on vague feelings of certainty that we’re doing good. But, crucially, we are kept in the dark about our subconscious algorithm for generating that feeling of certainty… the how of what we decide in the first place.

Case in point:

We have little idea why we do things, but make up bogus reasons for our behavior…

Adrian North and colleagues from the University of Leicester playe[d] traditional French (accordion music) or traditional German (a Bierkeller brass band – oompah music) music at customers and watched the sales of wine from their experimental wine shelves, which contained French and German wine matched for price and flavour. On French music days 77% of the wine sold was French, on German music days 73% was German – in other words, if you took some wine off their shelves you were 3 or 4 times more likely to choose a wine that matched the music than wine that didn’t match the music.

Did people notice the music? Probably in a vague sort of way. But only 1 out of 44 customers who agreed to answer some questions at the checkout spontaneously mentioned it as the reason they bought the wine. When asked specifically if they thought that the music affected their choice 86% said that it didn’t. The behavioural influence of the music was massive, but the customers didn’t notice or believe that it was affecting them.

In other words the part of our brain that ‘reasons’ and explains our actions, neither makes decisions, nor is even privy to the real cause of our actions…
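(A quick sanity check of the “3 or 4 times” figure in that quote; it’s just the ratio of matched to mismatched sales on each kind of day:)

```python
# "3 or 4 times more likely": the ratio of matched to mismatched wine sales.
print(0.77 / (1 - 0.77))  # French music days: ~3.3x more likely to buy French
print(0.73 / (1 - 0.73))  # German music days: ~2.7x more likely to buy German
```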

I’ve pointed out this phenomenon before.

Realizing that the same affliction that makes religions vectors for irrationality also inhabits causes that are (to me) more socially acceptable has made me more tolerant of religion.

I’m certain there are a lot of people who don’t consider themselves bigots. But unless you are actively using some sort of mitigation strategy against your biases (some actual humility), you’ll probably act in a bigoted way without realizing it. And this cuts across everything; even people actively fighting for equality might not realize that they’re subconsciously favoring their in-group to the detriment of the out-group. Yet their fuzzy feeling of certainty makes it feel like equality. Racism, sexism, nationalism, etc. aren’t foreign diseases that attack your cognition and that you have to build up antibodies against… they are your cognition.

So when it comes to hypocritical behavior, we can’t assume that we are being objective, especially when it comes to moral behavior or any sort of normative, social justice goal. Overcoming our biases should be required education before we start making arguments and pronouncements about morality, or we’ll just be Motte-and-Baileying at every chance to escape criticism. Just like a run-of-the-mill Christian apologist.

 

Posted on August 20, 2014 in apologetics, cognitive science, rationality

 

Guardians Of The Truth


Reading about how and why religion comes about, you inevitably stumble onto the conclusion that religion isn’t just some aberration of humanity. The only thing that separates tried-and-true “religion” from the other types of groups that people identify with (or, to use the more honest word, tribes) is belief in the supernatural. Even if you take away belief in the supernatural, there’s nothing stopping a secular grouping (say, feminism or Objectivism) from tapping into the same family of negative behavior that religious people engage in.

The problem isn’t the supernatural. The problem is in-group vs. out-group. And this in-group/out-group animosity becomes more pronounced when a group has either an implicit or an explicit charge of guarding the truth:

“The Inquisition thought they had the truth! Clearly this ‘truth’ business is dangerous.” […]

Questions like that don’t have neat single-factor answers. But I would argue that one of the factors has to do with assuming a defensive posture toward the truth, versus a productive posture toward the truth.

When you are the Guardian of the Truth, you’ve got nothing useful to contribute to the Truth but your guardianship of it. When you’re trying to win the Nobel Prize in chemistry by discovering the next benzene or buckyball, someone who challenges the atomic theory isn’t so much a threat to your worldview as a waste of your time.

When you are a Guardian of the Truth, all you can do is try to stave off the inevitable slide into entropy by zapping anything that departs from the Truth. If there’s some way to pump against entropy, generate new true beliefs along with a little waste heat, that same pump can keep the truth alive without secret police. In chemistry you can replicate experiments and see for yourself—and that keeps the precious truth alive without need of violence.

Remember the little maxim I made up: the more a group promotes prosociality, the less it cares about accurately modeling reality (this is because truth and morality, in practice, occupy two different magisteria). A group that forms on the premise of some social or moral cause (like religion), and that is also defending “the truth”, will inevitably end up with horrible behavior, just like all of those horror stories that atheists like to blame on religion.

Yudkowsky’s use of “entropy” might muddle the concept a bit, so, granting myself the liberty of rewording it, the gravity of the situation becomes clear: “When you are a Guardian of the Truth, all you can do is try to stave off the inevitable slide into [the immoral past] by zapping anything that departs from the Truth.”

It also might be helpful to replace “truth” with information.

I am reminded of this very phenomenon of “guardians of the truth” by one of Jerry Coyne’s recent posts:

First Ayaan Hirsi Ali’s invitation to speak at the Brandeis commencement was withdrawn, and now there are more, for Political Correctness season is upon us. Certainly universities have the right to choose their speakers, but it’s bad form to choose someone and then rescind their invitation, or to cave in to student pressures that make speakers withdraw.

[…]

The WSJ is, of course, a conservative organ, and goes on to decry the “loopiness” of the left wing and the ostracism of conservative professors, as well as the tendency of universities to allow “the nuttiest professors to dumb down courses and even whole disciplines into tendentious gibberish.” That’s an exaggeration, but still, it’s disturbing that we see the left attacking, in effect, freedom of speech. If you don’t like Condoleezza Rice (and I sure don’t), that doesn’t mean you should mount such a protest against her that she has to withdraw. Are all speakers to be vetted for signs of cryptic conservatism? Are students that loath to hear views that might disagree with them?

I’m no conservative, but these Commencement Police frighten me, and paint students as self-entitled, fragile beings who can’t countenance dissent—unless it’s their own. At my own commencement at William and Mary in 1971, we had an undistinguished state legislator as speaker—and this after many of us wanted a more leftist person. But we didn’t shout him down, or pressure the university to withdraw his invitation. Instead, we organized a “counter commencement,” held at a different time and place, and our class invited and paid for Charles Evers, the older brother of slain civil rights worker Medgar Evers.

People! Stop the tribal thinking! Argument gets counter-argument, not reputation assassination and/or banishment from the tribe.

 

Posted on May 23, 2014 in rationality, religiosity

 

Solomonoff Induction


Solomonoff Induction is a more accurate mathematical formulation of the epistemic heuristic Occam’s Razor. While my earlier post on the subject was also mathematical (drawing on the conjunction fallacy), Occam’s Razor itself is just a vague rule of thumb; Solomonoff Induction is an attempt to make it rigorous.

The way I understand it: imagine writing a computer program that knew every single hypothesis imaginable that could be used to explain some data. The computer would select the hypothesis with the least amount of code — represented as a binary string — as its main prior. Or at least, as the prior with the highest probability.

The problem would be actually having a computer that knew every hypothesis imaginable and could compare each one via complexity of code (more complexity = more bits needed to represent it).
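To make that slightly more formal (my addition, following the standard textbook statement rather than anything in this post): the Solomonoff prior of a data string $x$ sums over every program $p$ that outputs $x$ on a universal Turing machine $U$, weighting each program by its length $|p|$ in bits:

$$m(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}$$

Every extra bit of code halves a hypothesis’s prior weight, so a program 20 bits longer than a rival starts out about a million times ($2^{20}$) less probable.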

As an example, let’s say we have two pieces of code: one that represents Newton’s laws of motion and another that represents Einstein’s theory of relativity. At first glance, the program that computes Newton’s laws might seem simpler than Einstein’s (more people know Newton’s formulas than know Einstein’s), but Newton’s code doesn’t account for weird things like Mercury’s orbit. So in actuality, Newton’s code would get bloated with all of the ad hoc code meant to account for things like Mercury’s orbit that can’t be computed using the baseline Newtonian formulas. And in the end, Newton’s program would be larger than Einstein’s due to that extra code; maybe Newton.dll would be 100 MB and Einstein.dll would be 80 MB.

Thus, by my understanding of Solomonoff Induction, a sufficiently advanced artificial intelligence would use Einstein.dll as its main prior when attempting to explain some gravitational phenomenon. At least until a smaller program is written that accounts for all of the things Einstein.dll accounts for plus things that it doesn’t (e.g., quantum gravity).
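Here’s a toy sketch of that selection rule (purely illustrative: real Kolmogorov complexity is uncomputable, and the megabyte figures are the made-up numbers from above standing in for description length):

```python
# Toy Solomonoff-style model selection: each hypothesis gets prior weight
# 2^(-description_length_in_bits). Lengths this large underflow ordinary
# floats, so compare in log space, where log2(weight) = -length.
MB = 8 * 1024 * 1024  # bits per megabyte

hypotheses = {
    "Newton.dll (with ad hoc patches for Mercury's orbit)": 100 * MB,
    "Einstein.dll": 80 * MB,
}

best = min(hypotheses, key=hypotheses.get)  # shortest program wins the prior
print(best)  # Einstein.dll

# The 20 MB gap penalizes Newton.dll's prior by a factor of 2^(20 * MB),
# which is why the hypothetical AI reaches for Einstein.dll first.
```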

Now imagine comparing disparate hypotheses, like a computer program that models the atmosphere to predict when and where hurricanes will strike versus a computer program that attempts to model an angry god to predict the same thing. I’m willing to bet that the code needed to model non-supernatural weather would be smaller than the code needed to model a supernatural being’s motivations. (I’m relatively certain that hurricane formation is less complex than the biochemical and social processes that produce anger in living beings, never mind angry beings that have no physical body; though this feels intuitively backwards, and it feels backwards precisely because we think primarily in social ways.) Or, more pointedly, compare the code needed to model a supernatural Jesus coming back from the dead with the code needed to model the story being invented by people who are the modern equivalent of people from a small, backwards village in Africa colonized by the British.

Well… this is all fine and dandy, but most people aren’t going to grasp it intuitively, since there’s no reference to things they already know. But Solomonoff Induction makes sense to me (well, at least as I’ve laid it out above), because I’ve actually written code that uses the math behind special/general relativity, and I can see how at first glance it would look more complicated than Newton’s laws of motion. But adding extra code to account for things that can’t be explained via Newton’s laws would be bad programming practice. I would certainly prefer code with one algorithm that computes everything over an algorithm plus some hand-jammed code bolted on because the original algorithm wasn’t robust enough.

So back to Occam’s Razor, with a more intuitive explanation of it. I think Occam’s Razor can be summed up using the English metaphor “a chain is only as strong as its weakest link”. Imagine choosing from a variety of chains to hook a disabled car to the back of a pickup truck. Given that a chain is only as strong as its weakest link, you would want to pick the chain with the lowest chance of having a weak link, and thus the lowest chance of breaking and sending the car careening off somewhere on the highway.

A short chain might have one extremely weak link in it, while a longer chain might have a bunch of slightly weak links. Which chain do you use then? Whatever methodology you use to ensure you pick the right chain is your Occam’s Razor; you could even remove the weak link altogether and go with the strongest part of the chain.
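The arithmetic behind the metaphor, with invented failure probabilities (a chain fails if any link fails, just as a conjunctive hypothesis fails if any assumption is false):

```python
# A chain breaks if ANY link breaks. Assuming independent links, the
# failure probability is 1 minus the product of each link's survival
# probability. The link values below are invented purely for illustration.
def chain_failure_prob(link_failure_probs):
    survives = 1.0
    for p in link_failure_probs:
        survives *= 1.0 - p
    return 1.0 - survives

short_chain = [0.30]       # one extremely weak link
long_chain = [0.05] * 10   # ten slightly weak links

print(chain_failure_prob(short_chain))  # 0.30
print(chain_failure_prob(long_chain))   # ~0.40: worse, with no single bad link
```

Every extra link multiplies in another chance of failure, which is the conjunction-fallacy point from the top of the post.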

 

Posted on May 15, 2014 in rationality

 

“If I Think Really, Really Hard, I Can Get The Right Answer”


Human beings are social animals.

“Duh” you say. Of course we are. Why am I pointing this out? Well, why are human beings social animals? How strong is the desire for socialization; for having friends and family and allies? Think about how something like that would come about, and how strong that pull is in our cognition.

If evolution had to choose between two options — being correct or having allies — which one would it choose for brain design? I’m pretty sure that if only one could be chosen, the strategy that confers the most reproductive benefit would be having allies. Of course, the two aren’t mutually exclusive; you can be correct and have allies.

But then, think of the modular mind. We are strangers to ourselves. There are probably different modules for correctly modeling the world, and others for making friends. And, just like on my computer, one module takes precedence over the other in certain situations. Thanks to evolution, the making-allies modules probably override the having-correct-beliefs modules 9 times out of 10 (I just made that ratio up). And these modules probably don’t communicate all that much. The only thing you’re aware of is the end product: your feeling of certainty.

So no, the title-quote of this blog post is wrong. Yet I see the equivalent of it numerous times, on myriad contentious issues on blogs and online newspapers, almost daily. The title is especially wrong if whatever you’re attempting to figure out has some sort of moral component: since morality is all about moderating social behavior, your social brain will rationalize (think religion, politics, social justice, non-economists doing economics, etc.) to make you signal impartiality while in reality you’re subconsciously defending your in-group.

Imagine it like this: if you woke up one morning and said to yourself “If I work really hard, I can build a computer”, the first thing a normal person would do is go out and get the tools and materials needed to build a computer. Almost no one would do the complete opposite: stay in their room and attempt to build a computer with just the tools and materials they happened to already have there. That would be, well, downright irrational.

Do you have the tools and materials, right now, in your bedroom to build a computer? Probably not, unless it’s already your job or hobby to build computers.

And yet — and yet! — people take the equivalent of the irrational approach to building computers when it comes to “getting the right answer”. They think that they can do the epistemic equivalent of building a computer with just the tools and materials lying around in their apartment instead of going out and getting the proper tools; they think that the fact that they have a brain is evidence enough that they have the proper tools and materials. It would be equally odd (and arrogant) to think that just because you have hands (after all, people who actually build computers also use their hands!) you too can build a computer… with nothing but the tools and materials currently in your bedroom. 9 times out of 10, the tools you’ll be reaching for will be the ones for making friends.

So no! By all that is holy in the milk of Hera, no!

If you intend to get the right answers on some issue, you need to first adorn your brain with the right tools and materials. That means learning the methods of rationality. That means getting familiar with Bayes’ Theorem (the foundation for the logic of science); that means learning how to figure out patterns; that means learning the laws of thought and what actually makes a good explanation; that means knowing that you are the easiest person for you to fool, and that education more than likely just makes you better at defending conclusions you originally arrived at for irrational (or social!) reasons. Education in and of itself doesn’t seem to do much to get rid of said irrational conclusions; it just gives you better ammunition to defend them.

Maybe even start taking some creatine!

So it’s not enough to know that you’re a flawed human being. Yes, yes, we all have biases. But someone who engages in fake humility is just professing their flaws, the way one shows off a new pair of pants they never wear or a flashy car they never drive: it’s a status symbol; it’s signaling; it’s your social modules; it’s you making friends. The true purpose of humility is to plan to correct for our flaws. Indeed, chances are that the more something promotes prosociality, the less accurately it models reality.

 

Posted on April 25, 2014 in cognitive science, rationality

 
 