
Monthly Archives: February 2012

Adam Lee and the Apologist’s Turnstile

Adam Lee has defined what I think is an important aspect of apologetics that needs to be pointed out and denounced at every turn:

[T]he idea that no particular level of knowledge is needed to assent to a religion, but an impossibly, unattainably high level of knowledge and expertise is needed to deny it. In the minds of many believers, the entrance to their religion is like a subway turnstile: a barrier that only allows people to pass through in one direction.

This is similar to the tactic called the Courtier’s Reply, the silencing argument often used against atheists which holds that no one is qualified to criticize a religion in any particular unless they’ve completed a total study of its most esoteric doctrines. The difference is that the Apologist’s Turnstile adds the assumption, implicitly or explicitly, that none of this knowledge is necessary to join or to be a member of that same religion.

This is a very good point, and it’s slightly related to my previous post. Most people join religions for unsophisticated reasons, yet a person is only lauded for leaving a religion for sophisticated reasons. It should be the other way around, at least for the joining part. No one I know of was a disinterested bystander to Christianity who read a ton of apologetics, weighed them against a ton of skeptical books, and only then converted.

 

Posted by on February 10, 2012 in apologetics

 

Mass Hysteria

Apparently there is a case of mass hysteria, or “psychogenic illness”, in upstate NY. Of course, there’s a stigma against mass hysteria because it makes it seem as though the hysteria isn’t real. It is very much real; it’s just that the brain, that mass of flesh in our heads that controls 99% of the events in our body, has a less than ideal method for determining what’s real and what’s not.

The condition may sound unlikely, but it is real, and it has in the past caused significant problems for emergency services. For example, after terrorists released toxic gas in the Tokyo subway system in 1995, commuters fell ill with mass dizziness and nausea. But doctors found that more than 70% of the 5,500 people who sought help at hospitals for gas-related symptoms turned out not to have been significantly exposed [PDF].

Similarly, strange smells in schools, businesses and factories have set off numerous outbreaks of fainting, nausea and cramps in the absence of actual chemical dangers, typically affecting only those who have seen other affected people or who believe the smell is dangerous. Recent decades have seen cases in Jordan, France and Colorado.

A 2011 study led by Joan Broderick of Stony Brook University in New York found that psychogenic symptoms can even be deliberately induced in normal, healthy adults. In the research, participants were given a pill and told that it was an experimental drug that had mild side effects and was being tested to increase effectiveness of flu treatment during a pandemic. Sixty-seven people participated in the study, which took place in a hospital.

Researchers divided the participants into three groups: one group received the pill (actually a placebo) in the presence of actors who also took it and displayed symptoms like nausea, headache and dizziness. A second group took the pill in the presence of actors faking symptoms and also watched a documentary about pandemic flu. A third group simply sat in the waiting room after taking the pill.

The participants who took the pill with the actors were 11 times more likely to show signs of illness than the control group — regardless of whether they watched the documentary. Some people developed symptoms that the actors had not even displayed. (They were all debriefed about the research afterward, and none objected to the earlier deception.)

Of course, labeling a condition as “psychogenic” or, worse, “hysteria” seems belittling and demeaning. Many people mistakenly believe that this means affected people are faking their symptoms and can control them. Despite the strides made by modern neuroscience, the stigma attached to conditions that are not physical in origin, that are “all in your head”, still runs deep.

Not surprisingly, some parents and affected students in Le Roy, N.Y. — some of whom have had to drop out of school because of their condition — have objected to their diagnosis. One father told the Today show earlier this month, “Obviously we are all not just accepting that this is a stress thing … It’s heart wrenching. You fear your daughter’s not going to have a normal life.”

But stress is not just a mental phenomenon. Broderick explains that stress can actually change the body’s physiology. “Stress responses are not just psychological,” she says. “They also involve physiological responses [like] increased heart rate.”

In mass psychogenic illness, she says, “we believe it is the physiological response that individuals misinterpret as evidence of infection [or] contamination. This leads to fear and even more anxiety, creating a powerful experience of illness.”

This all reminds me of an episode of Derren Brown in which he gave a group of atheists a “religious experience”. It also has implications for church, and for how religious frenzies and even born-again experiences are contagious. This is probably why church camps hold those “born again” events en masse: the unsuspecting pre-Christian is more or less “forced” into a born-again experience because the people around him are having them.

It’s amazing how the brain can affect the body and vice versa. One thing is certain, though: there’s no strict dichotomy between “only in your head” symptoms and “actual” symptoms. Whether symptoms are “real” or psychogenic, both are controlled by the brain. It just depends on which part of the brain is responsible.

‘Tell me one last thing,’ said Harry. ‘Is this real? Or has this been happening inside my head?’

Dumbledore beamed at him, and his voice sounded loud and strong in Harry’s ears even though the bright mist was descending again, obscuring his figure.

‘Of course it is happening inside your head, Harry, but why on earth should that mean that it is not real?’

 

Posted by on February 9, 2012 in cognitive science

 

The True Value of the Actual Arguments For/Against Religion

On my Facebook page a small argument brewed over an image I posted from Reddit of a former Christian describing his deconversion process. One person said that the Redditor’s deconversion basically amounted to a superficial, unsophisticated view of theology (really theodicy, but I’m being pedantic). I agreed that his views were simplistic, but I commented that most Christians become Christians for similarly superficial, unsophisticated reasons.

Of course, this much is true. But someone countered that many Christians looked at the more sophisticated arguments for/against religion and stayed Christians. However, that’s not the point. The true value of arguments will never be how well they retain current members, but how well they sway the undecideds and the opposition. And if you start to study cognitive science you’ll know why this is correct. From that post on the cognitive analogy for the “thief” and “wizard”, I wrote:

Unfortunately, the wizard does everything that the thief asks him to do, especially attack positions that she doesn’t like and defend positions that she does like. This applies to everyone. The wizard would not know who to cast a spell on without the thief’s instruction or deference… Christianity is large and complicated; it is a final boss at the end of a dungeon. It would be unwise to use only the thief on a final boss, or only use the wizard after attempting to drain the majority of the final boss’ HP with only the thief …That would be a horrible strategy in any RPG. The final boss would soundly pummel the thief and she would run out of HP and the game would be over very quickly.

But it’s simple confirmation bias and motivated skepticism that keeps believers being believers (and unbelievers being unbelievers). An argument that overcomes those two deeply, deeply entrenched cognitive biases would be a truly strong argument. An argument that doesn’t have to deal with those biases, one that actually has them working in its favor, is a relatively weak one. An argument that merely keeps a person in a belief they already hold isn’t, comparatively, a very good argument.

This is why most arguments for religion are pretty worthless. Very few atheists are convinced by the sophisticated arguments for why the Christian god allows evil, to give one example. The fact that those arguments keep Christians believing doesn’t really say anything about how good they are. Their worth can only be gauged on the undecideds and non-believers. On the other hand, the logical/evidential problem of evil itself is probably the best argument against Christianity, since it is what draws the most Christians away from Christianity, compared to its sophisticated equivalent aimed at atheists (the problem of good?).

More specific to Christianity, there are very few people who were undecided or skeptical of Christianity and then read the sophisticated arguments of Christians (or the NT itself) and became believers. What usually happens is that the person has an experience they can’t explain and then uses their surrounding society’s cultural language to explain it; for most modern Christians that cultural language is Christianity. You can’t call this situation “brainwashing” as John Loftus does, however, since if it were, then most people reading this blog would have been “brainwashed” into having English as their first language.

But anyway, this sort of conversion is wholly unsophisticated. Yet it’s only after this unsophisticated conversion that the born-again Christian looks into the sophisticated arguments for Christianity and is “convinced” by them. In short, the sophisticated arguments for Christianity are normally only a means of validating a belief arrived at through unsophisticated means. Thus the sophisticated arguments have no true value outside of that context.

Really though. An experience you can’t explain is an experience you can’t explain. This is a statement about you, and has nothing to do with the truth value of Christianity. Which is why religious experiences are wholly unsophisticated. Yet once Christianity has subdued the thief, the thief then asks the wizard, now employing sophisticated arguments, to defend Christianity at all costs.

Most Christians and atheists are unaware of the sophisticated arguments for their positions. If an unsophisticated Christian is convinced by an unsophisticated argument for atheism, this to me seems fair enough. If an unsophisticated atheist is convinced by an unsophisticated argument for Christianity, this also seems fair enough.

The worst test for an argument is how many unsophisticated Christians are convinced by sophisticated arguments for Christianity, or unsophisticated atheists being convinced by sophisticated arguments for atheism.

The real test would be to see how many unsophisticated atheists are convinced by sophisticated arguments for Christianity, and how many unsophisticated Christians are convinced by sophisticated arguments for atheism. The best test would be sophisticated Christians being convinced by sophisticated atheism, and vice versa. From what I can tell, the case I see most often is unsophisticated Christians being convinced by sophisticated arguments for atheism, with the opposite almost never happening.

Since atheism itself is a rising trend, this speaks volumes about how good arguments for Christianity (or generic theism) are. In other words, not very good.

 

Posted by on February 9, 2012 in apologetics

 

Why Neil deGrasse Tyson is the New Carl Sagan

 

Posted by on February 8, 2012 in Funny

 

It’s Not A Tumor!

I made a post a while back about why something that explains everything explains nothing. It was a sort of long post trying to explain in the simplest way possible, without skipping over anything, why something that can be used to explain every possible scenario in reality explains nothing.

Really, though, I just wanted to use that “divide by zero” image 🙂

There’s a much simpler way of demonstrating that something that can be used to equally explain everything in reality explains nothing. Namely: “if you are equally good at explaining any outcome, you have zero knowledge”. This is summed up in the very, very simple equation P(E | H) + P(~E | H) = 100%.

When I read that, I read it as saying that the probability of the evidence given that your hypothesis is true, plus the probability of not having the evidence (or, if E is not binary, all other types of that evidence) given that your hypothesis is true, accounts for 100% of all possible evidence of a certain type.

So your hypothesis has to exhaust all iterations of the evidence and give weight to each iteration. By “iterations” I mean instances of a certain type of evidence. For example, all instances of a coin flip are either heads or tails, so heads + tails = 100%. Or all instances of the evidence are, say, the planet we find ourselves on in the solar system, so Mercury + Venus + Earth + Mars + Jupiter + Saturn + Uranus + Neptune (+ Pluto? lol) = 100%.

Instead of using percentages, I think it would be easier to use money. Using the money analogy, your hypothesis only has 100 dollars to bet on each instance of the evidence, like you would place a certain amount of money on each planet in the solar system or something like that. Where does your hypothesis place most of its cash? Do you go all in on only one type of evidence or spread it evenly across all of it?
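
The betting analogy can be sketched in a few lines. The names and amounts below are mine, chosen only for illustration; the constraint being checked is just the sum rule P(E | H) + P(~E | H) = 100%:

```python
# Each hypothesis gets exactly $100 to spread over the possible outcomes.
def is_coherent(bets, total=100.0, tol=1e-9):
    """Bets over all mutually exclusive, exhaustive outcomes must sum to the total."""
    return abs(sum(bets.values()) - total) < tol

bold = {"heads": 95.0, "tails": 5.0}     # goes nearly all-in on heads
vague = {"heads": 50.0, "tails": 50.0}   # equally good at explaining either outcome
broken = {"heads": 99.0, "tails": 99.0}  # 99% on both: sums to 198, not a probability

print(is_coherent(bold), is_coherent(vague), is_coherent(broken))  # True True False
```

The “bold” and “vague” hypotheses are both coherent; they differ in how much knowledge they claim. The “broken” one, which bets 99 dollars on each outcome, isn’t a probability assignment at all.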

Let’s say you have a headache. This is a nice binary event (you either have a headache or you don’t). The “it’s a brain tumor” hypothesis places almost all of its money on causing headaches. An alternative hypothesis, a head cold, places relatively little money on causing headaches (there are other hypotheses, such as you bumped your head, your wife kept nagging you, etc., which would all place various different bets out of their 100 dollars on causing a headache). At least, the head cold hypothesis places a lot less of its money on causing headaches than the “it’s a tumor” hypothesis does. If the prior probabilities of each were equal, which hypothesis gets the big cash-in on the event of having a headache? The tumor one. By the “it’s a tumor” hypothesis placing all, or almost all, of its money on causing headaches, when a headache actually happens, then it wins big.

But for this explanation, I posited that the prior probabilities are equal. In reality, though, the prior probability of having a tumor is much smaller than other prior probabilities (like a head cold); the number of people in the world right now with head colds — who have a headache because of it — is much higher than the number of people in the world with brain tumors. And this is why “IT’S NOT A TUMOR”.
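
The headache example can be run with toy numbers. A minimal sketch, with priors and likelihoods invented purely for illustration (these are not medical statistics): the tumor hypothesis bets big on headaches, but its tiny prior sinks it anyway.

```python
# Invented numbers for illustration only.
p_tumor = 0.001                # prior: brain tumors are rare
p_cold = 0.20                  # prior: head colds are common
p_headache_given_tumor = 0.95  # the tumor hypothesis bets almost everything on headaches
p_headache_given_cold = 0.30   # the head cold hypothesis bets far less

# Unnormalized posterior for each hypothesis: prior * likelihood
post_tumor = p_tumor * p_headache_given_tumor
post_cold = p_cold * p_headache_given_cold

# Given that a headache actually occurred, how do the hypotheses compare?
odds = post_cold / post_tumor
print(round(odds))  # ~63 to 1 in favor of the head cold: "IT'S NOT A TUMOR"
```

With equal priors the tumor hypothesis would win big on its concentrated bet; with realistic priors the head cold wins by a wide margin.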

Anyway, in this case, P(E | H) is the probability of having a headache given that you have a tumor. Since the vast, vast majority of tumors cause headaches, the “it’s a tumor” hypothesis is going all-in with regard to P(E | H), effectively saying that tumors almost never fail to cause a headache. A tumor that doesn’t cause a headache would be a huge surprise, and depending on how much of our 100 bucks we place on P(E | H), we would place almost none of it on a tumor not causing a headache, P(~E | H).

A hypothesis under which both having a headache and not having one are no surprise would, analogously, place less money on each, and would not go to extremes like the “it’s a tumor” hypothesis does. It would distribute its 100 dollars more evenly between P(E | H) and P(~E | H). And this is why something that equally explains everything explains nothing.

If E were not some binary event, but an event with multiple possible outcomes, then something that is attempting to “not be surprised” by any event would start evenly spreading their 100 dollars across all possible outcomes. If there are 10,000 possible events/outcomes, then something that is attempting to explain everything equally would only be betting one cent out of 100 dollars on each particular outcome ( P(E1 | H), P(E2 | H), … P(E10,000 | H) ).

This brings us to the ultimate “attempting to explain everything” hypothesis: the Abrahamic god. Is there any type of evidence that this god can’t explain? Given that the Abrahamic god exists (this is a very important assumption), is there any sort of evidence or event that would surprise us? In the sea of all possible instances of a type of evidence, which one can’t the existence of god explain? On which evidence would the Abrahamic god hypothesis place the least of its 100 dollar allocation?

There isn’t any. With the Abrahamic god, there should be no surprises. As they say, “the Lord works in mysterious ways”. Because of the celebrated mysterious ways, we have no warrant for placing more money on one instance of the evidence to the exclusion of the other instances.

This would mean that the god hypothesis is spreading itself thin with its budget of 100 dollars. So for the solar system example, given that the Abrahamic god exists, and taking into account the limitations of the Abrahamic god (i.e. none), we could easily be living on Earth or Mercury or Neptune. The Abrahamic god could use perpetual miracles to keep us alive on any planet. So P(E | H) would be one planet, and P(~E | H) would be the other seven planets. The Abrahamic god would spread its money evenly across all planets, unless we can come up with a reason for the Abrahamic god to put most of its money on one planet to the exclusion of the others. But we have none; like I said, “mysterious ways…”. Whereas a limited god, or some other hypothesis that doesn’t allow for just anything, would put most of its money on a planet in the Goldilocks Zone.

And indeed, if life was found on a planet within the Goldilocks Zone, then the hypothesis that put all of its money on that one planet would win the good payout, whereas another hypothesis that spreads itself across all planets would gain a negligible payout. Given equal prior probabilities, a Goldilocks Zone is evidence against the Abrahamic god, or any hypothesis that allows for anything, via the likelihood ratio.
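
The planet example can be made concrete. In this sketch (the numbers are mine, chosen only for illustration), an “anything goes” hypothesis spreads its likelihood evenly over the eight planets, while a constrained “Goldilocks” hypothesis bets most of its budget on the habitable-zone planet:

```python
planets = ["Mercury", "Venus", "Earth", "Mars",
           "Jupiter", "Saturn", "Uranus", "Neptune"]

# "Anything goes": a god with no limitations has no warrant to favor any planet.
anything_goes = {p: 1.0 / len(planets) for p in planets}  # 0.125 each

# "Goldilocks": a hypothesis that only allows life in the habitable zone
# bets a token 0.01 on each other planet and the rest on Earth.
goldilocks = {p: 0.01 for p in planets if p != "Earth"}
goldilocks["Earth"] = 1.0 - sum(goldilocks.values())      # ~0.93 on Earth

# Evidence: life is observed on Earth. The likelihood ratio of the two hypotheses:
ratio = goldilocks["Earth"] / anything_goes["Earth"]
print(round(ratio, 2))  # ~7.4: the evidence favors the constrained hypothesis
```

Given equal priors, observing life on a Goldilocks-zone planet shifts the odds toward the hypothesis that concentrated its bet there, exactly the likelihood-ratio point above.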

What if there are a million possibilities that the Abrahamic god could explain, but some other hypothesis goes all-in on only one explanation? The hypothesis that goes all-in will get the huge payout, and the god hypothesis gets an even more negligible payout. Of course, this, again, assumes equal prior probabilities for both hypotheses. Yet in reality, the prior probability of god is extremely low to begin with. That is actually two strikes against the Abrahamic god.

The problem with religious thinking is that religious people never correctly apply the counterfactual. I’m probably using that word loosely, but yeah. P(E | H) is usually the focus when religious people attempt to use probabilistic thinking, but they never take into account P(~E | H), which might be described as the counterfactual. If they do think about the counterfactual, they usually don’t think that the “factual” and the counterfactual are related probability-wise. In other words, they don’t think that P(E | H) has any relationship with P(~E | H), and because of that they unwittingly ascribe 99% to both. And since the two terms have to add up to 100%, their P(E | H) + P(~E | H) ends up much greater than 100%, which is the probability version of dividing by zero.

So really, the probability equation P(E | H) + P(~E | H) = 100% succinctly explains why something that can be used to equally explain everything in reality explains nothing. To reiterate, if you are equally good at explaining any outcome (i.e. P(E | H) = P(~E | H) in the case of a binary event/evidence) then you have zero knowledge; if there are no surprises then you have zero knowledge. This doesn’t actually prove that the Abrahamic god doesn’t exist. It only says that the Abrahamic god isn’t a good explanation for something; that there are better explanations out there.

When you are attempting to explain something, always go all in. Probability favors the bold.

 

Posted by on February 6, 2012 in Bayes

 

The Ought-Is Fallacy

David Hume defined what is now known as the “Is-Ought” fallacy. Here, I’ll let Mr. Hume speak for himself:

In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it should be observed and explained; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it

Basically, you can’t derive an “ought” from an “is”. You can’t say “Bob is homeless, so I ought to give him a roof over his head” without some sort of legwork in between the “is” and the “ought”.

There is a corollary to this, or a reciprocal version of this oddity that Hume points out, that I see a lot in debates about morality, ethics, and the existence of god. I don’t know if I’m the first person to point it out, but lots of people seem to be swayed by it. It is the ought-is fallacy: a person makes an argument about how some system of morality ought to be (because if not… uh oh!) and then concludes that this system of morality is.

Recently this has struck me in Adam Lee’s interactions with Peter Hitchens, where Hitchens argues for universal morality, therefore god. Sure, there ought to be a universal, unalterable morality, but just because there ought to be doesn’t mean that there is. The almost universal objection to there not being universal morality is that if there weren’t universal morality, then people could do whatever they wanted.

Yeah… so?

What if that’s actually how the universe is? Will the rules of the universe automatically change just because we arrive at some observation or conclusion that doesn’t privilege human society? I would think not, but the Ought-Is fallacy assumes otherwise.

This is what those type of arguments look like to me:

P1: There ought to be universal morality

P2: ??????

C: Therefore there is universal morality (therefore god)

Another, related instance is when people contemplate the metaethics of some religion and find them laudable. Again, maybe that religion has the correct and sensible way of hammering out ethical actions. Maybe it doesn’t. Neither conclusion, however, bears any weight on the truth value of that religion’s other metaphysical claims. Take the following argument:

P1: All cats live in the ocean

P2: Dolphins are cats

C: Dolphins live in the ocean

In this syllogism, the conclusion is true but the argument is horrible. The same sort of error in metaethical reasoning can happen with religions. Maybe some religion has the best and most efficient ethical theory ever encountered. But this fact has no bearing whatsoever on the truth value of any related metaphysics that led to the “true” ethical theory, just like one can’t claim that because the conclusion of an argument is true, it must follow that the premises are true.

While not necessarily a perfect ought-is fallacy, it follows the same sort of logic. It’s more along the lines of “we ought to do something, therefore whatever led us to do said ‘ought’ is true”. Just as in Hume’s original is-ought problem, there has to be more legwork that builds a solid bridge between the ought and the is.

 

Posted by on February 2, 2012 in apologetics

 
 