
Monthly Archives: April 2014

Grad School


So I’m starting grad school for computer science in about a month, on top of having a normal 9 – 5 (well, 8:30 – 6) job. That means that in a little while I’ll probably have less time for blogging; at least, for blogging anything with more than some passing thoughts and/or cool articles I find about religion.

Since I’m continuing my compsci schooling towards an M.S., I thought I’d brush up on my programming beyond the meager tasks I do for work (right now I’m more of a “software engineer”, meaning I mainly concentrate on the process side of software development, with some coding if required). So I’m writing a Java app that — you guessed it — computes Bayes’ Theorem! I’m going to add it as an executable to my static website, where I’m also doing some web dev for a page dedicated to how probability theory is the logic of science. The page isn’t up yet, but it’ll get there eventually.
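Since the app itself isn’t up yet, here’s a minimal sketch of what the core of such a backend might look like. (This is a hypothetical sketch with made-up class and method names, not the actual app’s code.)

```java
// A minimal Bayes' Theorem calculator (hypothetical sketch, not the actual app).
// P(H|E) = P(E|H) * P(H) / [ P(E|H) * P(H) + P(E|~H) * P(~H) ]
public final class BayesTheorem {

    /**
     * @param prior         P(H), the prior probability of the hypothesis
     * @param likelihood    P(E|H), the probability of the evidence given H
     * @param altLikelihood P(E|~H), the probability of the evidence given not-H
     * @return P(H|E), the posterior probability of H given the evidence
     */
    public static double posterior(double prior, double likelihood, double altLikelihood) {
        double numerator = likelihood * prior;
        return numerator / (numerator + altLikelihood * (1.0 - prior));
    }

    public static void main(String[] args) {
        // Numbers from the coin example quoted below: a 20% prior that a coin
        // is heads-weighted, and the evidence is four heads and one tail.
        double pEgivenH    = Math.pow(0.75, 4) * 0.25; // ≈ 0.079
        double pEgivenNotH = Math.pow(0.5, 5);         // ≈ 0.031
        System.out.println(posterior(0.2, pEgivenH, pEgivenNotH)); // ≈ 0.39
    }
}
```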

The backend code for Bayes’ Theorem was actually really simple to write, but one neat little thing I discovered while ironing out all of its nooks and crannies was combining likelihood ratios/Bayes factors. Here it is, better described over at Overcoming Bias:

You think A is 80% likely; my initial impression is that it’s 60% likely. After you and I talk, maybe we both should think 70%. “Average your starting beliefs”, or perhaps “do a weighted average, weighted by expertise” is a common heuristic.

But sometimes, not only is the best combination not the average, it’s more extreme than either original belief.

Let’s say Jane and James are trying to determine whether a particular coin is fair. They both think there’s an 80% chance the coin is fair. They also know that if the coin is unfair, it is the sort that comes up heads 75% of the time.

Jane flips the coin five times, performs a perfect Bayesian update, and concludes there’s a 65% chance the coin is unfair. James flips the coin five times, performs a perfect Bayesian update, and concludes there’s a 39% chance the coin is unfair. The averaging heuristic would suggest that the correct answer is between 65% and 39%. But a perfect Bayesian, hearing both Jane’s and James’s estimates – knowing their priors, and deducing what evidence they must have seen – would infer that the coin was 83% likely to be unfair.

That is because a perfect Bayesian would be combining their data, not simply taking an average of their posteriors. Which makes more sense if you think about it. If one group of people concluded that the world was round and another group of people thought the world was flat, it wouldn’t make sense to take an average of the two conclusions and say that the world must be shaped like a calzone. You would want the data that they used to arrive at their conclusions and update on that. Taking an average of the two is a social solution — meant to save people’s egos — not one that’s actually attempting to get at a more accurate model of the world.

It seems like combining likelihood ratios is actually pretty straightforward. Think about the conjunction fallacy: the probability of two independent events, one at X% and one at Y%, isn’t X% + Y%, or the average of X% and Y%, but X% * Y%. Combining likelihood ratios follows the same logic: independent pieces of evidence multiply.

Again, from OB:

James, to end up with a 39% posterior on the coin being heads-weighted, must have seen four heads and one tail:

P(four heads and one tail | heads-weighted) = 0.75^4 * 0.25^1 = 0.079. P(four heads and one tail | fair) = 0.5^5 = 0.031. P(heads-weighted | four heads and one tail) = (0.2 * 0.079) / (0.2 * 0.079 + 0.8 * 0.031) = 0.39, which is the posterior belief James reports.

Jane must similarly have seen five heads and zero tails.

Plugging the total nine heads and one tail into Bayes’ theorem:

P(heads-weighted | nine heads and a tail) = ( 0.2 * (0.75^9 * 0.25^1) ) / ( 0.2 * (0.75^9 * 0.25^1) + 0.8 * (0.5^9 * 0.5^1) ) = 0.83, giving us a posterior belief of 83% that the coin is heads-weighted.

So what I call the success rate — P(E | H) — is represented here as P(four heads and one tail | heads-weighted), and the likelihood under the alternative hypothesis, P(E | ~H), is P(four heads and one tail | fair). James’ likelihood ratio is P(E | H) / P(E | ~H) = 0.079 / 0.031 = 2.531; Jane’s numbers give P(E | H) / P(E | ~H) = 0.237 / 0.031 = 7.594. The combined likelihood ratio is 19.22, which is exactly how much evidence is needed to move the prior from 20% to 83%; and that combined ratio is just the two individual likelihood ratios multiplied together: 2.531 * 7.594.
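In code, the odds form of Bayes’ Theorem makes this almost a one-liner: posterior odds = prior odds * likelihood ratio, and independent pieces of evidence combine by multiplying their Bayes factors. Here’s a sketch in the same hypothetical style as above, reproducing the numbers from the quote:

```java
// Odds form of Bayes' Theorem: posterior odds = prior odds * likelihood ratio.
// Independent bodies of evidence combine by multiplying their Bayes factors.
public final class CombineLikelihoodRatios {

    static double probToOdds(double p)    { return p / (1.0 - p); }
    static double oddsToProb(double odds) { return odds / (1.0 + odds); }

    public static void main(String[] args) {
        double prior = 0.2; // shared prior that the coin is heads-weighted

        // James saw four heads and one tail; Jane saw five heads.
        double jamesLR = (Math.pow(0.75, 4) * 0.25) / Math.pow(0.5, 5); // ≈ 2.531
        double janeLR  = Math.pow(0.75, 5) / Math.pow(0.5, 5);          // ≈ 7.594

        double combinedLR = jamesLR * janeLR; // ≈ 19.22

        double posterior = oddsToProb(probToOdds(prior) * combinedLR);
        System.out.println(posterior); // ≈ 0.83, matching the quote above
    }
}
```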

Something like this is very handy if you have two people with disparate priors. Two people can start with different priors, but as long as both are updating on the same evidence, their posteriors will eventually converge. Combining likelihood ratios ensures that both parties are updating on the same evidence, since the likelihood ratio is what determines how far your prior moves.
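To see the convergence, imagine a skeptic and a believer watching the same stream of coin flips: each shared flip multiplies both of their odds by the same Bayes factor, so their posteriors get dragged toward each other. A toy sketch (assuming, for simplicity, that every flip comes up heads):

```java
// Two agents with different priors updating on the same evidence stream.
public final class PriorConvergence {

    public static void main(String[] args) {
        double skepticOdds  = 0.05 / 0.95;    // P(heads-weighted) = 5%
        double believerOdds = 0.80 / 0.20;    // P(heads-weighted) = 80%
        double headBayesFactor = 0.75 / 0.5;  // one head favors "weighted" 1.5 : 1

        for (int flip = 1; flip <= 15; flip++) {
            skepticOdds  *= headBayesFactor;  // same evidence,
            believerOdds *= headBayesFactor;  // same update for both
            System.out.printf("flip %2d: skeptic %.3f, believer %.3f%n",
                    flip,
                    skepticOdds / (1 + skepticOdds),
                    believerOdds / (1 + believerOdds));
        }
        // Both posteriors head toward 1, and the gap between them shrinks.
    }
}
```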

 

Posted by on April 30, 2014 in Bayes

 

“If I Think Really, Really Hard, I Can Get The Right Answer”


Human beings are social animals.

“Duh” you say. Of course we are. Why am I pointing this out? Well, why are human beings social animals? How strong is the desire for socialization; for having friends and family and allies? Think about how something like that would come about, and how strong that pull is in our cognition.

If evolution had to choose between two options — being correct or having allies — which one would it pick when designing brains? I’m pretty sure that if only one could be chosen, the strategy that confers the most reproductive benefit would be having allies. Of course, the two aren’t mutually exclusive; you can be correct and have allies.

But then, think of the modular mind. We are strangers to ourselves. There are probably different modules for correctly modeling the world, and others for making friends. And, just like on my computer, one module takes precedence over another in certain situations. Thanks to evolution, the making-allies module(s) probably override the having-correct-beliefs modules 9 times out of 10 (I just made that ratio up). And these modules probably don’t communicate all that much; the only thing you’re aware of is the end product — your feeling of certainty.

So no, the title-quote of this blog post is wrong. Yet I see the equivalent of it almost daily, in myriad contentious issues on blogs and online newspapers; the title of this post is especially wrong if whatever you’re attempting to figure out has some sort of moral component. Since morality is all about moderating social behavior, your social brain will rationalize things (think religion, politics, social justice, non-economists doing economics, etc.) so that you signal impartiality while in reality you’re subconsciously defending your in-group.

Imagine it like this. If you woke up one morning and said to yourself “If I work really hard, I can build a computer”, the first thing a normal person would do is go out and get the tools and materials needed to build a computer. Almost no one would do the complete opposite: stay in their room and attempt to build a computer with just the tools and materials they happened to already have there. That would be, well, downright irrational.

Do you have the tools and materials, right now, in your bedroom to build a computer? Probably not; unless it was already your job or hobby to build computers.

And yet — and yet! — people take the analogous irrational approach to building computers when it comes to “getting the right answer”. They think they can do the epistemic equivalent of building a computer with just the tools and materials lying around in their apartment instead of going out and getting the proper ones; they think the fact that they have a brain is evidence enough that they have the proper tools and materials. It would be equally odd (and arrogant) to think that just because you have hands (after all, people who actually build computers also use their hands!) you too can build a computer with nothing but the tools & materials currently in your bedroom. And 9 times out of 10, the tools you’ll actually be using will be the ones for making friends.

So no! By all that is holy in the milk of Hera, no!

If you intend to get the right answers on some issue, you first need to adorn your brain with the right tools and materials: That means learning the methods of rationality. That means getting familiar with Bayes’ Theorem (the foundation for the logic of science); it means learning how to figure out patterns; it means learning the laws of thought and what actually makes a good explanation; it means knowing that you are the easiest person for yourself to fool, and that education more than likely makes you better at defending conclusions you originally arrived at for irrational (or social!) reasons. Education in and of itself doesn’t seem to do much to get rid of said irrational conclusions; it just gives you better ammunition to defend them.

Maybe even start taking some creatine!

So it’s not enough to know that you’re a flawed human being. Yes, yes, we all have biases. But someone who engages in fake humility is just professing their flaws, as one would show off a new pair of pants they never wear or a flashy car they never drive; it’s a status symbol; it’s signaling; it’s your social modules; it’s you making friends. The true purpose of humility is to plan to correct for our flaws. Indeed, chances are that the more something promotes prosociality, the less accurately it models reality.

 

Posted by on April 25, 2014 in cognitive science, rationality

 

With Reverence And Fear

(Fearsome sauce?)

A few studies about religious belief that I’ve read over the past couple of days.

At PsyPost: Our relationship with God changes when faced with potential romantic rejection:

New research explores a little-understood role of God in people’s lives: helping them cope with the threat of romantic rejection. In this way, God stands in for other relationships in our lives when times are tough.

Most psychological research to date has looked at people’s relationship with God as similar to a parent-child bond, says Kristin Laurin of the Stanford Graduate School of Business. “We wanted to push further the idea that people have a relationship with God in the same sense as they have relationships with other humans,” she says. “The idea is certainly not new in terms of cultural discourse, but it’s not something that psychologists have done a lot of empirical work to study.”

Specifically, Laurin and colleagues wanted to see how our relationship with God changes as our other relationships change. So the researchers designed a series of studies, published today in Social Psychological and Personality Science, that experimentally induced people to believe their romantic relationship was under threat and then tested their feelings of closeness to God. They also wanted to examine the opposite idea – how people’s romantic relationships take on different meaning when their relationship with God is threatened – and tested how this dynamic changed based on the individual’s self-esteem.

[…]

Laurin’s team found that participants sought to enhance their relationship with God when under threat of romantic rejection – but only if they had high self-esteem. This fits with past work showing that people high in self-esteem seek social connection when their relationships are threatened.

[…]

Interestingly, in one of the studies, researchers looked at how people respond to a threat to their relationship with God, and they found similar trends… “We might have thought that people expect God to already know everything about them, and therefore that the concept of a ‘secret self’ that you try to hide from God wouldn’t really make sense,” Laurin says. “But we found that using that threat on people’s relationship with God worked in much the same way as it did with people’s romantic relationships.”

[…]

While the research did not specifically aim to analyze differences in this effect between religions, it did hint at some trends. In the study that included Hindus from India and Christians from the United States, the researchers found no differences when comparing the two groups; they both reacted similarly.

At Epiphenom: Turning to God for reassurance in the face of wonder:

‘Agency detection’ – seeing purposeful minds at work behind seemingly random events – is a powerful human instinct that is thought to play an important role in the generation of religious beliefs.

There’s quite a body of research showing that a person’s ‘agency detection’ can be turned up in circumstances where they are made to feel uncertain or confused. Piercarlo Valdesolo (Claremont McKenna College, USA) and Jesse Graham (University of Southern California) reckoned that giving people a sense of awe might just unsettle them enough to start detecting agents at work in the world around them.

[…]

What they found, repeatedly, was that watching an awe-inspiring video increased the tendency to see agents at work. So, for example, they were more likely to believe that the strings of random numbers had been put together by humans…

They also measured their subjects’ intolerance of uncertainty (“I feel uncomfortable when I don’t understand the reason why an event occurred in my life”). What they found was that watching the awe-inspiring videos did indeed increase their subjects’ intolerance of uncertainty.

What do these two studies have in common? Fear. Fear of the unknown, or fear for your relationship status. It seems as though we turn to our social relationships (including God) to manage how we cope with uncertainty and/or loss. What was interesting about the Epiphenom study is that awe-inspiring things seem to reduce our tolerance for uncertainty, and uncertainty in and of itself makes people more religious. This study might also explain why people get religious experiences when seeing awe-inspiring things in nature, like a frozen waterfall.

Interestingly, the Greek word phobos means both fear and awe. Its Greek synonym deos (fear, awe; used at Hebrews 12.28, “with reverence and fear/awe”) sounds pretty close to theos (god). The connection between fear/awe and god-belief was probably so well known in antiquity that it affected the language.

 

Posted by on April 21, 2014 in cognitive science, greek

 

A Little Music For Good Friday

 

Posted by on April 18, 2014 in early Christianity, music

 

This Is Your Brain On Catholicism


Well, this is pretty interesting. Roman Catholic beliefs produce characteristic neural responses to moral dilemmas. I’m posting it without further comment:

Abstract

This study provides exploratory evidence about how behavioral and neural responses to standard moral dilemmas are influenced by religious belief.

Eleven Catholics and thirteen Atheists (all female) [my emphasis] judged 48 moral dilemmas. Differential neural activity between the two groups was found in precuneus and in prefrontal, frontal and temporal regions. Furthermore, a double dissociation showed that Catholics recruited different areas for deontological (precuneus; temporoparietal junction [TPJ]) and utilitarian moral judgments (dorsolateral prefrontal cortex [DLPFC]; temporal poles [TP]), whereas Atheists did not (superior parietal gyrus [SPG] for both types of judgment). Finally, we tested how both groups responded to personal and impersonal moral dilemmas: Catholics showed enhanced activity in DLPFC and posterior cingulate cortex [PCC] during utilitarian moral judgments to impersonal moral dilemmas, and enhanced responses in anterior cingulate cortex [ACC] and superior temporal sulcus [STS] during deontological moral judgments to personal moral dilemmas.

Our results indicate that moral judgment can be influenced by an acquired set of norms and conventions transmitted through religious indoctrination and practice. Catholic individuals may hold enhanced awareness of the incommensurability between two unequivocal doctrines of the Catholic belief set, triggered explicitly in a moral dilemma: help and care in all circumstances – but thou shalt not kill.

Actually I do have a comment: There’s probably a reason why they went with an all female sample group.

(h/t Scott)

 

Posted by on April 15, 2014 in cognitive science

 

Buddhism and Modern Psychology

So I’m taking a course on Coursera called Buddhism and Modern Psychology. As the title might suggest, it’s a course about the intersection of Buddhist thought and modern findings in psychology. It’s a pretty interesting course; it has piqued my interest in Buddhism (again), and I’m also learning some neat new stuff about psychology and meditation.

My first homework assignment is due pretty soon, so I thought I’d reblog (so to speak) the work I’m going to submit here on my blog. The assignment:

The Buddha makes the claim, which may draw some support from modern psychology, that the self does not exist. Describe the self that the Buddha says does not exist and explain the Buddha’s principal argument against it. Do you agree or disagree with the Buddha’s argument that this kind of self doesn’t exist? Or are you unable to take a position? Give two specific reasons for your view, and explain why your reasons support either the existence of the self or the non-existence of the self, or why you are unable to take a position on the question.

The Buddha’s main argument is premised on his conception of the makeup of a person: a person is composed of the five aggregates. The five aggregates are form (the physical body), feeling, mental formations (emotions, desires), perception, and consciousness (subjective awareness). The Buddha goes through the qualities thought to be associated with the aggregates and says that the “self” cannot be made up of them.

Impermanence is his main argument against the self, so he must have thought of the self as having a sort of persistence; something that does not change through time and space. Additionally, the Buddha associated the self with being under control, yet he argued that the self cannot control feeling or form (e.g. you can’t will yourself to be happy, or decide to grow an extra arm), or any of the other aggregates; therefore the self as he conceived of it does not exist.

On the other hand, the self or something like it has to exist if anyone is being “liberated”. So it is argued that the Buddha wasn’t speaking literally about the self not existing, but was arguing from a more instrumental perspective: in order to get someone to actually accept the impermanence of things, one has to understand the impermanence of the individual “parts” of a person, like their form or mental formations. Indeed, when making ethical pronouncements the Buddha teaches that the self exists for purposes of karma.

The Buddha’s formulation of the self as being composed of five aggregates matches modern psychology’s view of the mind and brain, which may likewise be composed, not of five aggregates, but of modules, each module having a specific function. Beyond this, there doesn’t seem to be much further overlap; the modular-mind view in psychology is much more specific than the Buddha’s general five aggregates, though both systems make it hard to pinpoint where exactly a “self” would reside. In both modern psychology and Buddhism, there seems to be a rejection of the Cartesian-theater model of the self that most everyday people have of themselves.

I would have to say that I am convinced by the Buddha’s argument that the self doesn’t reside in any of the five aggregates, but not because the aggregates lack persistence over time. Even without his rationale that the self is supposed to have a sort of permanence or the quality of being “under control”, it would be hard to locate a CEO, king, or even Cartesian-theater version of the self in any of the five aggregates. This is not the Buddha’s argument (or if it is, I’ve not heard it yet), but it could well be that your form affects your feeling and mental state, or your mental state affects your consciousness/awareness and perception. Each of the five aggregates can influence any of the others, so it would be hard to cordon off one aggregate and claim that that one in particular is where the self resides.

(Note: I didn’t put the hyperlinks in the one I actually submitted)

 

Posted by on April 12, 2014 in buddhism, cognitive science

 

Ara Norenzayan: Religion and Prosociality

This is a lecture given by Ara Norenzayan describing some of his findings about the sociology behind religious beliefs.

Some of the things he touches on briefly in his lecture (and which he says he goes into in more depth in his book Big Gods):

Whether religions are the result of a cognitive byproduct model or an evolutionary/sociological adaptation model. I first read about the cognitive byproduct model in Richard Dawkins’ book The God Delusion, though psychologists and sociologists are coming more in line with the evolutionary adaptation model of religion (the two aren’t necessarily in opposition). On the adaptation model, religion isn’t a fluke of human cognition but was specifically selected for by evolution. Or, more simply, religious people had more reproductive success than the “non-religious” (whatever that would mean in the Pleistocene). And it wasn’t just any old religion; it had to be a religion that promoted prosociality.

As an example of a successful religion vs. an unsuccessful one, Norenzayan compares the Mormon church with the Oneida Perfectionists, both of which started around the same time in the same part of the US. Mormonism had a growth rate of about 40% per decade (about the same rate as early Christianity), while the Oneida Perfectionists only lasted about 30 years before disbanding; the remnants went on to become a silverware company.

Norenzayan then lists some reasons why religions become successful:

  • a moralizing god spread over increasing individualism, to combat the observation that proximity + diversity = war
  • extravagant displays or synchronous behaviors/rituals
  • inculcating self-control
  • moral realism: “our morality is the true morality”
  • high fertility rates

Norenzayan also mentions that the more abstract one’s conception of god is, the less one thinks said god cares about morality and/or punishes bad behavior. So someone who believes in a completely abstract “ground of being” god more than likely also believes that this god doesn’t care too much about morality, whereas someone who believes in a god that cares a great deal about morality simultaneously believes that said god is much more anthropomorphic. At one end of the spectrum is the god of the philosophers/Sophisticated Theologians™; at the other, the god of fundamentalists.

Other points:

* Small foraging societies typically don’t have moralizing gods. Big societies generally have moralizing gods. Causal or correlational?

* Economic games and small/big religions: adherents of big religions (that is, the world religions) show more cooperative behavior in economic games, while adherents of small religions behave more selfishly. Again, causal or correlational?

* Belief in god in and of itself doesn’t correlate with monetary generosity (belief in god per se doesn’t lead to moral behavior; you need to go to church to reap the benefits! And you get those same benefits being an atheist in church). Norenzayan mentioned this fact in the context of religious priming: just declaring theism didn’t make someone more cooperative, but religious priming does. On the other hand, being non-religious makes you somewhat impervious to religious priming, though secular priming has the same cooperative effect on the non-religious.

* Prosocial behavior correlates with belief in a punishing god, while belief in a forgiving god correlates with cheating. The same goes for belief in hell and heaven, respectively (though belief in hell seems to make people less happy).

* Religions also correlate with extreme rituals, possibly for belief-in-belief (i.e. costly signaling) reasons.

* Religious communes last longer than secular communes; religious ones are more strict. Again… causal or correlational?

* The more the state/secular institutions provide the things that religion usually provides, the less religious that society is. I’ve also read about similar findings elsewhere.

 

Posted by on April 11, 2014 in cognitive science, economics/sociology, religiosity

 
 