SBL Pacific Coast Region Conference 2015: Dennis MacDonald

Originally posted on Κέλσος:

Earlier this month I attended the SBL Pacific Coast Region conference at Azusa Pacific University. For those who have been following the Bible blogosphere, this conference was particularly prominent, since Richard Carrier defended his new book On the Historicity of Jesus, the first academically published book defending the Christ Myth Theory, during the meeting. I do not agree with the mythicist position, as I have discussed in a previous article, but I do think that Carrier’s new book is the best defense of the theory published yet. Unfortunately, I actually had to miss Carrier’s defense due to a scheduling conflict, but Simon Joseph has posted a (fairly critical) review of Carrier’s presentation, and Carrier himself has also written a post responding to Kenneth Waters Sr., who critiqued Carrier’s thesis during the conference. Each post provides a good summary of the arguments on either side.

In this post, however, I…

View original 1,624 more words


Posted on March 26, 2015 in religion


Unfalsifiable Beliefs Are More Attractive When We’re Threatened

Phalanx from 300


When people hear the word “unfalsifiable” it’s usually in a scientific context. Falsifiability was Karl Popper’s criterion for demarcating science from non-science, which he crafted in opposition to Freud’s method of psychoanalysis. In this case, though, religious people aren’t really concerned that their beliefs are unfalsifiable; religion is not science.

But the appeal to the unfalsifiable isn’t restricted to religious belief. It seems to apply, and appeal, to people across the moral domain generally, and the largest sample of unfalsifiable beliefs outside of religion is found in the realm of politics.

From Friesen et al. (2014):


We propose that people may gain certain “offensive” and “defensive” advantages for their cherished belief systems (e.g., religious and political views) by including aspects of unfalsifiability in those belief systems, such that some aspects of the beliefs cannot be tested empirically and conclusively refuted. This may seem peculiar, irrational, or at least undesirable to many people because it is assumed that the primary purpose of a belief is to know objective truth. However, past research suggests that accuracy is only one psychological motivation among many, and falsifiability or testability may be less important when the purpose of a belief serves other psychological motives (e.g., to maintain one’s worldviews, serve an identity). In Experiments 1 and 2 we demonstrate the “offensive” function of unfalsifiability: that it allows religious adherents to hold their beliefs with more conviction and political partisans to polarize and criticize their opponents more extremely. Next we demonstrate unfalsifiability’s “defensive” function: When facts threaten their worldviews, religious participants frame specific reasons for their beliefs in more unfalsifiable terms (Experiment 3) and political partisans construe political issues as more unfalsifiable (“moral opinion”) instead of falsifiable (“a matter of facts”; Experiment 4). We conclude by discussing how in a world where beliefs and ideas are becoming more easily testable by data, unfalsifiability might be an attractive aspect to include in one’s belief systems, and how unfalsifiability may contribute to polarization, intractability, and the marginalization of science in public discourse.

As a sort of aside, just because a belief is unfalsifiable doesn’t mean that it’s false. A belief can be unfalsifiable yet still be true. Falsifiability is a problem for epistemology, not ontology. For example, there’s no possible observation I can make where I’m not alive. So from my point of view, being alive is unfalsifiable.

Anyway, retreating to unfalsifiable beliefs once you feel you’re under attack seems like a pretty good example of a motte-and-bailey tactic. If you recall, motte-and-bailey behavior, as Scott describes it, is when:

I feel like every single term in social justice terminology has a totally unobjectionable and obviously important meaning – and then is actually used a completely different way.

The closest analogy I can think of is those religious people who say “God is just another word for the order and beauty in the Universe” – and then later pray to God to smite their enemies. And if you criticize them for doing the latter, they say “But God just means there is order and beauty in the universe, surely you’re not objecting to that?”

The result is that people can accuse people of “privilege” or “mansplaining” no matter what they do, and then when people criticize the concept of “privilege” they retreat back to “but ‘privilege’ just means you’re interrupting women in a women-only safe space. Surely no one can object to criticizing people who do that?”

So someone presents evidence that the type of god that the average religious person believes in doesn’t exist and a sophisticate rejoins with a completely unfalsifiable version of that god, saying that “of course” no one believes in the first type of god. But you can bet that once the sophisticate feels like they are no longer under attack, they will go back to believing in the “falsifiable” version of that god again.

So what we have here seems to be basic human psychology. When we feel threatened, we retreat to our motte: the unfalsifiable version of our cherished belief(s). Maybe in the future we’ll read psychology articles about the Motte & Bailey Effect, describing people’s tendency to retreat to the unfalsifiable version of their beliefs when they feel like they’re under attack.

Relatedly, if you want to persuade someone, try to make sure they don’t feel like you’re attacking them.

(h/t Epiphenom)


Posted on February 8, 2015 in cognitive science, religion


What Gambling Monkeys Teach Us About Human Rationality

From the website Mind Hacks:

When we gamble, something odd and seemingly irrational happens.

It’s called the ‘hot hand’ fallacy – a belief that your luck comes in streaks – and it can lose you a lot of money. Win on roulette and your chances of winning again aren’t more or less – they stay exactly the same. But something in human psychology resists this fact, and people often place money on the premise that streaks of luck will continue – the so called ‘hot hand’.

The opposite superstition is to bet that a streak has to end, in the false belief that independent events of chance must somehow even out. This is known as the gambler’s fallacy, and achieved notoriety at the Casino de Monte-Carlo on 18 August 1913. The ball fell on black 26 times in a row, and as the streak lengthened gamblers lost millions betting on red, believing that the chances changed with the length of the run of blacks.


An experiment reported by Tommy Blanchard of the University of Rochester in New York State, and colleagues, shows that monkeys playing a gambling game are swayed by the same hot hand bias as humans. Their experiments involved three monkeys controlling a computer display with their eye-movements – indicating their choices by shifting their gaze left or right. In the experiment they were given two options, only one of which delivered a reward. When the correct option was random – the same 50:50 chance as a coin flip – the monkeys still had a tendency to select the previously winning option, as if luck should continue, clumping together in streaks.

The reason the result is so interesting is that monkeys aren’t taught probability theory at school. They never learn theories of randomness, or pick up complex ideas about chance events. The monkeys’ choices must be based on some more primitive instincts about how the world works – they can’t be displaying irrational beliefs about probability, because they cannot have false beliefs, in the way humans can, about how luck works. Yet they show the same bias.
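The independence claim behind both fallacies is easy to verify with a quick simulation (my sketch, not from the quoted post): on a fair 50:50 game, the chance of winning the round right after a long winning streak is still just 50%.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def next_after_streak(streak_len, trials=200_000):
    """Estimate P(win) on the round immediately following a winning streak."""
    wins_after = 0
    follows = 0
    streak = 0
    for _ in range(trials):
        win = random.random() < 0.5
        if streak >= streak_len:
            follows += 1
            wins_after += win
        streak = streak + 1 if win else 0
    return wins_after / follows

# After a 5-win streak, the next round is still (about) 50:50 -- no hot hand,
# and no gambler's-fallacy "correction" either.
print(next_after_streak(5))
```

The independent coin has no memory of the streak, which is exactly what both the hot-hand bettor and the Monte Carlo gamblers were betting against.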

As the writer says, people being bad at probability might have some sort of primitive cause: a module or something that evolved in our brain before Homo sapiens were sapient. If seeing consistency where there is none came about due to our evolutionary heritage, then things like believing in conspiracy theories or the supernatural were sort of bred into us by evolutionary processes. Combine this premise with our highly social brain and we might have another reason why belief in god is so prevalent; belief in god is usually the go-to example of failing to use probability correctly.

Another commenter on the site provides some additional common irrationalities between us and other animals:

Humans also succumb to another fallacy that is strikingly irrational from an economic standpoint: They often give greater value to objects of good quality than to the same objects together with objects of lesser quality. This so-called “less is more effect” can be demonstrated when humans are asked to estimate the value of two alternatives, one of which is objectively of greater value than the other. For example, in one study subjects bid at an auction on 10 baseball cards in mint condition and at a different time on the same 10 cards with an additional 3 cards that were judged to be in poorer condition. Although the 3 cards in poorer condition were not worth as much as the cards in mint condition, they were each worth something. Nevertheless, the bid for the 10-card set was on average 59% higher than it was for the 13-card set.

Interestingly, animals, too, appear to experience this kind of sub-optimal judgment. For instance, monkeys willingly ate a piece of sliced vegetable or a grape but when offered a choice between them, showed a clear preference for the grape over the vegetable slice. However, surprisingly, when they were offered a choice between a single grape and a grape plus a slice of vegetable, they reliably preferred the single grape. This is hard to understand, as one would think that the struggle for existence teaches animals “every calorie counts”.

To test the generality of this effect, Zentall conducted a similar experiment with dogs. The dogs showed a preference for a small piece of cheese over a small piece of carrot but would willingly eat the piece of carrot when offered by itself . When, on critical test trials, the researcher offered the dogs a piece of cheese together with a piece of carrot versus a piece of cheese alone, all of the dogs except one preferred the piece of cheese alone.

We should keep stuff like this in mind when we get upset that people are behaving in irrational ways… like not vaccinating their children, which is definitely another example of failing to apply probability correctly (probability is the logic of science). We’re still animals. Social animals, but animals nonetheless.


Posted on February 4, 2015 in cognitive science


Mechanical Thinking Inhibits Empathic Thinking, And Vice Versa


More strangeness from the realm of cognitive science:


Two lines of evidence indicate that there exists a reciprocal inhibitory relationship between opposed brain networks. First, most attention-demanding cognitive tasks activate a stereotypical set of brain areas, known as the task-positive network and simultaneously deactivate a different set of brain regions, commonly referred to as the task negative or default mode network. Second, functional connectivity analyses show that these same opposed networks are anti-correlated in the resting state… tasks requiring social cognition, i.e., reasoning about the mental states of other persons, and tasks requiring physical cognition, i.e., reasoning about the causal/mechanical properties of inanimate objects. Social and mechanical reasoning tasks were presented to neurologically normal participants during fMRI. Each task type was presented using both text and video clips. Regardless of presentation modality, we observed clear evidence of reciprocal suppression: social tasks deactivated regions associated with mechanical reasoning and mechanical tasks deactivated regions associated with social reasoning.

So it seems that when we’re thinking of things in terms of objects, we (or rather, our brains) shut off the empathy circuitry. And when we’re thinking in terms of people, our brains turn off the sort of “machinery” circuitry.

This is an odd coincidence with the evidence that testosterone injections dampen oxytocin. Oxytocin, of course, is the social/trust/empathy hormone. It would be especially weird if figuring out mechanical properties of inanimate objects subtly increased testosterone, or if thinking about social reasoning increased oxytocin.

This is also oddly a pretty strong reification of the two different thinking styles of System 1 and System 2; what I analogize as the thief and the wizard and further as the intuitionists and the rationalists.

(H/t PsyBlog)


Posted on January 6, 2015 in cognitive science


Reading Fiction, As Opposed To Non-Fiction, Temporarily Changes Personality


Here’s a third article I’ve found that demonstrates this effect. My other two posts describe people adopting the morality and temperament of the people they read about in fiction. This new one has an added twist: the control is a version of the story with the same facts but not written as a narrative:

In one experiment, published in 2009 in the Creativity Research Journal, we and the psychologists Sara Zoeterman and Jordan B. Peterson randomly assigned participants to one of two groups: one whose members read “The Lady With the Dog,” an Anton Chekhov short story centered on marital infidelity, and another whose members read a “nonfictionalized” version of the story, written in the form of a report from a divorce court.

The nonfiction text was the same length and offered the same ease of reading as Chekhov’s story. It contained the same information, including some of the same dialogue. (Notably, though readers of this text deemed it less artistic than readers of “The Lady With the Dog” deemed their text, they found it just as interesting.)

Before they started reading, each participant took a standard test of the so-called big five personality traits: extroversion, neuroticism, openness, agreeableness and conscientiousness. The participants also rated how they were feeling, on a scale of 0 to 10, for 10 different emotions. Then, after reading the text they were assigned, the participants were again given the personality test and asked to rate their emotions.

The personality scores of those who read the nonfiction text remained much the same. But the personality scores of those who read the Chekhov story fluctuated. The changes were not large but they were statistically significant, and they were correlated with the intensity of emotions people experienced as they read the story. Chekhov’s story seemed to get people to start thinking about their personalities — about themselves — in new ways.

Dark Arts and/or persuasion alert: if you want to convince someone, to actually (if temporarily) change their personality or moral stance on some issue, don’t just give them the facts. Give them a story they can read, follow, and empathize with; a story where they can place themselves in the character’s (your character’s) shoes.

(h/t Robin Hanson)


Posted on January 1, 2015 in cognitive science


Against Asshole Atheists

J. Quinton:

“The world is a million-question test. The problem with Asshole Atheists is that they look at the first question, bubble in “No” on “is there a God?”, lie back in their chairs, and are like “I got an A!” That’s very nice for you, getting the first question right. Now it’s time to deal with the rest of them.”

Originally posted on Thing of Things:

Religious people: This post mentions the nonexistence of certain things the majority of religious people believe exist, such as God, an afterlife, the supernatural, and any nonhuman force that rewards good and punishes evil in the world. If your form of religion doesn’t believe in those things, that’s very nice for you and I’m not talking about you. If you are upset at the suggestion that these things don’t exist or that the majority of religious people do believe they exist, I suggest you look at Cute Roulette instead, because this post will not make you happy.

Today I would like to complain about the phenomenon of Asshole Atheists. Let me be clear here: when I talk about Asshole Atheists, I’m not talking about people who are loudly atheist. While some people have a tendency to consider you an asshole if you say, loudly and without caveats, that God doesn’t…

View original 646 more words


Posted on December 31, 2014 in religion


Nature or Nature’s God


Cthulhu. Apparently, the spirit animal for the Enlightenment

Lately I’ve been implicitly writing about how religion isn’t some quirk of human cognition but the result of humans unwittingly designing something that appeals to our brain architecture, much like how blockbuster movies, apple pie, roller-coaster rides, or even crack cocaine are human-designed. Furthermore, I’ve been writing about how religion isn’t some unique evil set loose upon the world that we must work to destroy, but rather something that we should try to harness and use as a well-thought-out instrument towards our betterment.

Indeed, there’s nothing we can do to change the laws of physics, but we manipulate those laws to give us heavier-than-air flight, the Internet, GPS satellites that account for Einsteinian relativity, and GMO food that feeds many more people than natural food could. Human cognition should be “exploited” in the same manner to make life better. People already exploit human cognition for their own personal gain. We should use it instead to improve the world.

But why is it so easy to see religion as a unique evil? Memes.

Memes, just like genes, can reproduce. And in that paradigm, successful reproduction depends not only on adapting to the environment the meme/gene finds itself in; a gene/meme that fully exploits its environment to reproduce will outcompete other memes/genes. Moreover, genes/memes can manipulate their hosts to change the environment to better serve that selfish gene/meme’s ability to reproduce. This happens in nature with parasites, which change the behavior of their host to make the parasite more likely to reproduce successfully, sometimes to the detriment of the host.

Memes do this too.

Imagine you have an idea, much like the thesis of this post. That we should use the way we know how human cognition works in order to make the world better. A more fleshed out version of this would be filled with complexities and nuance; one that at least attempts to make sure that things don’t go awry. I mean, let’s face it: “exploiting human cognition” is ominous enough. But a successful meme is going to be successful due to the environment it finds itself in. The free market of ideas doesn’t select for truth, but for reproductive fitness.

I’ll say this again: in the free market of ideas, memes couldn’t care less about accurately modeling the world. Memes get fixed in the population by how virulent they are. Think viral videos. Just because a video goes viral doesn’t mean it’s true. A viral video has been “naturally selected” to propagate through memespace due to its success in a particular time period and environment. The same principle is in effect for any and all other memes or ideas that you are presented with and that eventually become part of your identity. Human beings are especially susceptible to this due to our natural tendency for groupthink. Do you think you can find out what’s true just by sitting around and thinking really hard? Think again. The tools you’ll unwittingly be using are the ones built for making friends, tools working in the service of whatever large-scale memes are part of your identity. This is generally called “bias”. You are biased, and so, just like aircraft engineers account for the laws of physics and aerodynamics to build planes, you should account for human bias when attempting to navigate memespace.
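The selection dynamic here can be made concrete with a minimal sketch (the growth rates are made up for illustration; nothing here is from any real model of memetics): selection only "sees" transmission rate, so a false-but-catchy meme overtakes a true-but-dull one regardless of which models the world better.

```python
# Two memes compete for hosts. The "true" meme models the world accurately
# but spreads slowly; the "viral" meme is false but highly transmissible.
# Selection only acts on the transmission rate. Rates are illustrative.
def viral_share_after(generations, r_true=1.1, r_viral=1.5, start_true=0.9):
    """Return the viral meme's share of hosts after some generations."""
    true_pop, viral_pop = start_true, 1.0 - start_true
    for _ in range(generations):
        true_pop *= r_true     # slow spreader
        viral_pop *= r_viral   # fast spreader
        total = true_pop + viral_pop
        true_pop, viral_pop = true_pop / total, viral_pop / total
    return viral_pop

# Even starting with only 10% of hosts, the more virulent meme dominates.
print(viral_share_after(10))
```

The truth value of the meme never enters the recursion; only the per-generation growth factor does, which is the whole point.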

So what, specifically, is the logical outcome of meme fitness? Memes that are optimized for virulence — memes, again, are not intentionally designed by humans per se — are most likely the memes that you identify with. And in the rat race of ideaspace, the optimization will take priority over any and all other goals. Indeed, it might even come to pass that you sacrifice a terminal goal for more optimization. Scott at Slate Star Codex calls this sacrificial behavior Moloch:


A basic principle unites all of the multipolar traps above. In some competition optimizing for X, the opportunity arises to throw some other value under the bus for improved X. Those who take it prosper. Those who don’t take it die out. Eventually, everyone’s relative status is about the same as before, but everyone’s absolute status is worse than before. The process continues until all other values that can be traded off have been – in other words, until human ingenuity cannot possibly figure out a way to make things any worse… Any human with above room temperature IQ can design a utopia. The reason our current system isn’t a utopia is that it wasn’t designed by humans.


But these institutions not only incentivize others, but are incentivized themselves. These are large organizations made of lots of people who are competing for jobs, status, prestige, et cetera – there’s no reason they should be immune to the same multipolar traps as everyone else, and indeed they aren’t. Governments can in theory keep corporations, citizens, et cetera out of certain traps, but as we saw above there are many traps that governments themselves can fall into.

The United States tries to solve the problem by having multiple levels of government, unbreakable constitutional laws, checks and balances between different branches, and a couple of other hacks.

Saudi Arabia uses a different tactic. They just put one guy in charge of everything.

This is the much-maligned – I think unfairly – argument in favor of monarchy. A monarch is an unincentivized incentivizer. He actually has the god’s-eye-view and is outside of and above every system. He has permanently won all competitions and is not competing for anything, and therefore he is perfectly free of Moloch and of the incentives that would otherwise channel his incentives into predetermined paths. Aside from a few very theoretical proposals like my Shining Garden, monarchy is the only system that does this.

But then instead of following a random incentive structure, we’re following the whim of one guy. Caesar’s Palace Hotel and Casino is a crazy waste of resources, but the actual Gaius Julius Caesar Augustus Germanicus wasn’t exactly the perfect benevolent rational central planner either.

The libertarian-authoritarian axis on the Political Compass is a tradeoff between discoordination and tyranny. You can have everything perfectly coordinated by someone with a god’s-eye-view – but then you risk Stalin. And you can be totally free of all central authority – but then you’re stuck in every stupid multipolar trap Moloch can devise.

The libertarians make a convincing argument for the one side, and the neoreactionaries for the other, but I expect that like most tradeoffs we just have to hold our noses and admit it’s a really hard problem.


Democracy is less obviously vulnerable, but it might be worth going back to Bostrom’s paragraph about the Quiverfull movement. These are some really religious Christians who think that God wants them to have as many kids as possible, and who can end up with families of ten or more. Their articles explicitly calculate that if they start at two percent of the population, but have on average eight children per generation when everyone else on average only has two, within three generations they’ll make up half the population.

It’s a clever strategy, but I can think of one thing that will save us: judging by how many ex-Quiverfull blogs I found when searching for those statistics, their retention rates even within a single generation are pretty grim. Their article admits that 80% of very religious children leave the church as adults (although of course they expect their own movement to do better). And this is not a symmetrical process – 80% of children who grow up in atheist families aren’t becoming Quiverfull.

It looks a lot like even though they are outbreeding us, we are outmeme-ing them, and that gives us a decisive advantage.

But we should also be kind of scared of this process. Memes optimize for making people want to accept them and pass them on – so like capitalism and democracy, they’re optimizing for a proxy of making us happy, but that proxy can easily get uncoupled from the original goal.

Chain letters, urban legends, propaganda, and viral marketing are all examples of memes that don’t satisfy our explicit values (true and useful) but are sufficiently memetically virulent that they spread anyway.

I hope it’s not too controversial here to say the same thing is true of religion. Religions, at their heart, are the most basic form of memetic replicator – “Believe this statement and repeat it to everyone you hear or else you will be eternally tortured”. A slight variation of this was recently banned as a basilisk, and people make fun of the “overreaction”, but maybe if Jesus’ system administrator had been equally watchful things would have turned out a little different… The point is – imagine a country full of bioweapon labs, where people toil day and night to invent new infectious agents. The existence of these labs, and their right to throw whatever they develop in the water supply is protected by law. And the country is also linked by the world’s most perfect mass transit system that every single person uses every day, so that any new pathogen can spread to the entire country instantaneously. You’d expect things to start going bad for that city pretty quickly.

Well, we have about a zillion think tanks researching new and better forms of propaganda. And we have constitutionally protected freedom of speech. And we have the Internet. So we’re pretty much screwed.
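As an aside, the Quiverfull arithmetic in Scott's quote checks out, and the quoted 80% attrition figure really does reverse it. A quick sketch (growth factors taken straight from the quoted numbers; the assumption that defectors simply join the general population each generation is mine):

```python
def quiverfull_share(generations, kids_q=8, kids_rest=2,
                     start_q=0.02, retention=1.0):
    """Track the Quiverfull share of the population over generations.

    Each couple's children multiply their group by kids/2 per generation;
    children who leave (1 - retention) are counted with everyone else.
    """
    q, rest = start_q, 1.0 - start_q
    for _ in range(generations):
        born_q = q * kids_q / 2
        rest = rest * kids_rest / 2 + born_q * (1 - retention)
        q = born_q * retention
    return q / (q + rest)

# With perfect retention, 2% becomes a majority within three generations...
print(quiverfull_share(3))
# ...but with 80% attrition, the share shrinks below its starting point.
print(quiverfull_share(3, retention=0.2))
```

So "outbreeding" at 4x per generation loses to "outmeme-ing" at 80% defection, which is exactly the asymmetry the quote points at.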

A topical example explains Moloch more readily: The airline JetBlue recently sacrificed customer comfort for profits:

This fall, JetBlue airline finally threw in the towel. For years, the company was among the last holdouts in the face of an industry trend toward smaller seats, higher fees, and other forms of unpleasantness. JetBlue distinguished itself by providing decent, fee-free service for everyone, an approach that seemed to be working: passengers liked the airline, and it made a consistent profit. Wall Street analysts, however, accused JetBlue of being “overly brand-conscious and customer-focussed.” In November, the airline, under new management, announced that it would follow United, Delta, and the other major carriers by cramming more seats into economy, shrinking leg room, and charging a range of new fees for things like bags and WiFi.

When I read Scott’s opus on Moloch, my amorphous cynicism about humanity finally solidified. And I thought “That’s why I think humanity is fucked!”

Another blog — one of the, uhh… perushim of Less Wrong — has a concept with a lot of overlap with Scott’s Moloch and breaks it down into four sort of… sephirot, or aspects, or emanations, or… something… of what they call “Nature or Nature’s God” (“Gnon”, since you have to spell the acronym backwards to make it more ominous, right?). The four sephirot of Gnon are:

Azathoth. Death. Evolution. The blind idiot alien god that shapes our biological nature and guides our genetic destiny according to who lives and who dies. Contrary to popular belief, the telos of evolution is not progress to more “advanced” forms; it will ruthlessly twist organisms for a few points of inclusive genetic fitness, and abandon “important” features of an organism (eg. our intelligence) as soon as they stop being critical to fertility.

Cthulhu. Pestilence. Hosted Evolution. Memetics. Epidemics. The tendency for popular forms to be those most able to propagate themselves by capturing transmission institutions and getting repeated. Contrary to popular opinion, the “marketplace of ideas” does not select for truth and good, but virulence. Truth/good selection only happens if the mass idea-propagation systems structurally favor truth and good, which they often do not. The current result being that “Cthulhu may swim slowly, but he only swims left.”

Mammon. Famine. Capitalism. Techno-Economical Optimization. Production. When a form succeeds by exploiting a technological resource-use opportunity, that is Mammon at work. Thus we have an efficient and recycling biological ecosystem, and human capitalism has driven the creation of great works of technology. But Mammon will ruthlessly recycle forms not contributing to the cutting edge of production, including us, if it comes to that.

Ares. War. Conquest. Empire. Agricultural Civilization won not because it was “better” in our sense, but because 100 malnourished toothless peasants with sticks beats one of even the healthiest and best trained tribal warriors. War is computation with weapons, and the truth thus revealed is simply which sociomilitary group is stronger.

Another LW user, jaime2000, sums up Gnon:

Gnon is reality, with an emphasis towards the aspects of reality which have important social consequences. When you build an airplane and fuck up the wing design, Gnon is the guy who swats it down. When you adopt a pacifist philosophy and abolish your military, Gnon is the guy who invades your country. When you are a crustacean struggling to survive in the ocean floor, Gnon is the guy who turns you into a crab.

Basically, reality has mathematical, physical, biological, economical, sociological, and game-theoretical [my link] laws. We anthropomorphize those laws as Gnon.

So back to my original point. What would a more “rational” religion look like? I can’t really tell you (in general it’ll probably have some group dancing or group singing, maybe some extreme rituals, a good narrative/mythos/story; maybe all of that at once), but I can tell you what would probably happen to this more rational religion. It won’t be alone in the world: if it is to survive in the minds of us humans, it’s going to be subject to an optimization process. You can probably see where this is going.

This rational religion will be designed with a bunch of nuance and subtlety, probably about using Bayes’ theorem and decision theory appropriately. And on paper it’ll be good. But human minds aren’t designed for nuance and complexity. Our minds are designed for simplicity; they are run by our intuition. And our intuition doesn’t like dealing with complexity. It likes feels. There will then grow out of this nuanced rational religion a simpler one for the masses, because that’s what sticks for the lowest common denominator. The two religions carry the same name, but one spreads more rapidly due to being optimized for spreading, not for nuance. It spreads more rapidly by winning in the marketplace of ideas, not through its subtlety. And so the “winning” optimized version of this new rational religion overtakes the marketplace instead of the rational-optimized version. But the original version doesn’t do much to correct this, because they both carry the same banner; cooperation wins over defection in an iterated prisoner’s dilemma, as any rationalist would know. And as such, a motte-and-bailey-like situation arises between the two neighboring-ring-species religions. One version is the actual nuanced version and the other only pays lip service to being the nuanced version. But they both have the same name.

The fact that people motte-and-bailey is probably evidence enough that this has happened throughout history. There are mottes/baileys for Christianity, for Communism, for feminism, for America, for The Ravens; the list is endless. There’s always the academic, nuanced version and then the version optimized for spreading; the version that beautifully haunts the halls of academe and the version that gorges in the troughs on Tumblr; both falling under the same name.

I mean, I think that Marcionism is — was — the most rational version of Christianity. But it lost out in the marketplace of ideas in ante-Nicene Christianity because it wasn’t optimized for its time period. Imagine a Christianity that completely ignored Jews, that didn’t have deplorable lines like Τὸ αἷμα αὐτοῦ ἐφ᾽ ἡμᾶς καὶ ἐπὶ τὰ τέκνα ἡμῶν (“His blood be on us and on our children”); a Christianity without centuries of Jewish pogroms, expulsions, and holocausts. But that more humane Christianity lost to the one better optimized for “winning”. The same thing will probably happen to any rational religion that we design, since it will ultimately be subject to Gnon and its sephirot, sacrificing its terminal goals to Moloch so that it can better optimize its winning power. Though I hope I’m wrong.


Posted on December 29, 2014 in economics/sociology, religion
