At the age of 10, ‘Amr’s failure to memorize the Qur’an brought him beatings, the force of which he resented even then. The skepticism he voiced throughout his youth earned him further harsh treatment from family members, whose religious discipline, he recalled, grew progressively stricter as they subscribed ever more closely to the channels of Gulf-based imams. Upon coming to terms with his own atheism, ‘Amr – like the vast majority of nonbelievers in Egypt – took pains to keep it to himself.
His girlfriend barely spoke a word, but ‘Amr wasn’t nearly finished. With much more to say than the time in which to say it, he suggested we carry on talking in a downtown café. Here, he said, he’d recently spent a good amount of time with a growing group of Egyptian atheists, all of whom he’d met online, sharing similar experiences and venting frustrations about life as a nonbeliever in one of the world’s most religiously restrictive countries. These gatherings were like manna for ‘Amr. He heard dozens of accounts comparable to his own – stories of being evicted, forcibly medicated, losing jobs, being blacklisted from entire industries, losing friends, families – wives, husbands, children – and, for an unlucky few, jail.
Research published in 2004 found that strongly handed individuals were more likely to believe in biblical creationism than in biological evolution. The original study proposed that strongly handed individuals were less likely to update their beliefs in light of evidence. But Chan wondered if other factors could explain the association.
The new study of 743 U.S. adults confirmed that handedness was correlated with religiosity. The strongly handed participants were more likely to agree with statements such as “There is a personal God” while disagreeing with statements such as “Religion makes people do stupid things.”
Chan also found evidence that authoritarianism mediated the relationship between handedness and religiosity. In other words, strongly handed individuals tended to score higher on a measure of right-wing authoritarianism, which in turn was associated with stronger religious belief.
Read more at PsyPost
Well, the title of this post is a bit inflammatory. So I won’t be arguing that the Monty Hall problem “refutes” your religion, but rather that it’s weak Bayesian evidence against it.
So. The Monty Hall problem is an illustration of how our intuitions about probability don’t always match up with reality. In its original formulation, you’re given a choice among three doors. One door has a prize; the other two do not. Once you’ve chosen a door, the host opens one of the remaining doors to show that it has no prize. You then have the option of staying with the door you chose or switching to the other unopened door.
Most people think either that it doesn’t matter whether you switch or that switching lowers your probability of winning. Neither of those is true!
Your initial probability of winning the prize is 1 out of 3. Once one of the doors is opened, the probability that you picked the correct door stays at 1 out of 3, whereas the other non-picked door now carries the remaining probability of 2 out of 3. This is because you have to do a Bayesian update once new information — in this case, the door revealed to have no prize — is introduced.
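A quick Monte Carlo check makes the 1/3-vs-2/3 split concrete. This is a sketch of my own (the function name and trial count are arbitrary), exploiting the fact that when all but one other door is opened, switching wins exactly when your first pick was wrong:

```python
import random

def play(switch, n_doors=3, trials=100_000):
    """Empirical win rate for the Monty Hall game."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(n_doors)
        pick = random.randrange(n_doors)
        # The host opens every other non-prize door, so a switcher
        # lands on the prize exactly when the first pick was wrong.
        wins += (pick != prize) if switch else (pick == prize)
    return wins / trials

print(play(switch=False))  # hovers around 1/3
print(play(switch=True))   # hovers around 2/3
```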
I’ve gone over this before. But I want to add a wrinkle to the problem that makes intuition fall more in line with Bayesian reasoning.
What if, instead of picking one door out of three, you had to pick one door out of 100? Once you’ve made your selection, 98 other doors are opened to show that they have no prize, leaving only your choice and one other unopened door. In this case it seems more obvious that something is suspicious about the one other door that wasn’t opened. And this intuition lines up with a Bayesian update using the same scenario:
P(H): probability you picked the correct door: 1 out of 100, or .01
P(~H): probability you picked an incorrect door: 99 out of 100, or .99
P(E | H): probability that all doors besides yours and one other are opened to reveal no prize, given that you picked the correct door: 100%
P(E | ~H): probability that all doors besides yours and one other are opened to reveal no prize, given that you picked an incorrect door: 100%
This is an easy Bayesian update to do. The conditional probabilities P(E | H) and P(E | ~H) are both 100%, meaning the likelihood ratio is 1 and your posterior probability is the same as your prior probability. But now your selection still sits at 1 out of 100, and the only other remaining door has a probability of 99 out of 100 of having the prize! So in this case, Bayesian reasoning and intuition line up: There is something suspicious about the only other door that wasn’t opened.
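The same update, sketched in code (the helper function is mine; the numbers are the ones above):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / P(E)."""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# 100-door case: the reveal is certain under both hypotheses,
# so the likelihood ratio is 1 and the prior is unchanged.
p_mine = posterior(0.01, 1.0, 1.0)
print(p_mine)      # 0.01 -- your door
print(1 - p_mine)  # 0.99 -- the one other unopened door
```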
How does this relate to religion? Specifically, the religion that you grew up with?
Using Willy Wonka’s logic in the meme above, the chance that you just happened to grow up with the correct religion is pretty low. Instead of picking the correct door out of 3, or out of 100, you’ve picked one door out of the thousands of religions that have existed, many of which no longer do. They are the “opened doors” revealing no prize in the analogy.
So a Bayesian update works the same way it did with picking one door out of 100. Meaning, your religion is probably wrong, and you should probably switch religions. The only reason I call this weak Bayesian evidence is that there are still a few religions left to choose from. But their joint probability of being correct is still higher than the chance that your family’s religion happens to be the correct one.
Analogously, it would be like choosing one door out of 10,000, after which every door except yours and 10 others is opened to reveal no prize. Your initial chance of having chosen the correct door is still 1 out of 10,000, but the 10 doors left unopened have a joint probability of 9,999 out of 10,000 of hiding the prize: each of those 10 doors individually has (approximately) a 10% chance of being the correct door, as opposed to your original selection’s probability of 1 out of 10,000.
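The arithmetic for this 10,000-door variant, as a quick sketch (variable names are mine):

```python
n_doors = 10_000
others_left = 10  # unopened doors besides your original pick

p_mine = 1 / n_doors      # 0.0001, unchanged by the reveals
p_others = 1 - p_mine     # 0.9999, shared among the other 10 doors
p_each = p_others / others_left
print(p_mine, p_each)     # 0.0001 vs roughly 0.1 per remaining door
```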
So the Monty Hall problem is weak Bayesian evidence against your religion.
“Cultural attitudes are mostly acquired during childhood and adolescence in family and school environments and we may not realize how these attitudes ‘dictate’ the mode of our thoughts and the pattern of our brain’s activity even in a state of rest,” explained study author Gennady G. Knyazev of the Institute of Physiology and Basic Medicine in Novosibirsk.
“Our data show that collectivist attitude prompts the engagement of brain regions involved in semantic processes and reasoning on moral issues, which, in its turn, prompt the appearance of others-related thoughts.”
“Collectivism-individualism is one of the major dimensions of culture and each culture has its position on this dimension,” Knyazev told PsyPost. “For instance, the United States is considered the most individualistic culture, whereas China and other East-Asian cultures are mostly collectivists.
“A typical individualist sees him/herself as fundamentally separate from others, whereas a typical collectivist considers him/herself as a representative of a group (e.g., family, social class, ethnic group and so on).”
“It could be expected that in a quiet resting condition, a collectivist would spontaneously think more about his/her close friends or relatives, whereas an individualist would think more about him/herself.”
“This association between cultural attitude and the content of thoughts has to have some reflection in the activity of the brain and we were interested to find out how brain’s activity mediates this association. The default mode network (DMN) is the brain functional network that is most active in the resting condition and is involved in self-referential and social cognition.”
Read more at PsyPost
To follow up on my previous review of Christian scholar Craig Keener’s “Otho: A Targeted Comparison” in Biographies and Jesus, I’d like to briefly discuss the relevance of numismatic evidence in evaluating Suetonius’ Life of Otho in comparison to the NT Gospels.
Numismatics is the study of ancient currency, and is particularly relevant to the study of Roman emperors, since the rulers of the Roman Empire would stamp their faces on the currency in circulation throughout the Mediterranean. A number of years ago I took a seminar on Roman numismatics with professor Edward Watts at UC Riverside, in which I did a research project on the emperor Otho and the currency he circulated with his image during his short reign. It is also relevant to another seminar that I took with professor Michele Salzman in which I did a research project related to the depiction of Roman taxation in…
This volume pulls together and republishes, with some editing, updating, and additions, articles written during 1978–86 for internal use within the CIA Directorate of Intelligence. The information is relatively timeless and still relevant to the never-ending quest for better analysis. The articles are based on a review of the cognitive psychology literature concerning how people process incomplete and ambiguous information to make judgments. Richards Heuer has selected the experiments and findings that seem most relevant to intelligence analysis and most in need of communication to intelligence analysts. He then translates the technical reports into language that intelligence analysts can understand and interprets the relevance of these findings to the problems intelligence analysts face.
Money quote, chapter 12, pages 152–156:
Expression of Uncertainty
Probabilities may be expressed in two ways. Statistical probabilities are based on empirical evidence concerning relative frequencies. Most intelligence judgments deal with one-of-a-kind situations for which it is impossible to assign a statistical probability. Another approach commonly used in intelligence analysis is to make a “subjective probability” or “personal probability” judgment. Such a judgment is an expression of the analyst’s personal belief that a certain explanation or estimate is correct. It is comparable to a judgment that a horse has a three-to-one chance of winning a race.
Verbal expressions of uncertainty—such as “possible,” “probable,” “unlikely,” “may,” and “could”—are a form of subjective probability judgment, but they have long been recognized as sources of ambiguity and misunderstanding. To say that something could happen or is possible may refer to anything from a 1-percent to a 99-percent probability. To express themselves clearly, analysts must learn to routinely communicate uncertainty using the language of numerical probability or odds ratios. As explained in Chapter 2 on “Perception,” people tend to see what they expect to see, and new information is typically assimilated to existing beliefs. This is especially true when dealing with verbal expressions of uncertainty.
By themselves, these expressions have no clear meaning. They are empty shells. The reader or listener fills them with meaning through the context in which they are used and what is already in the reader’s or listener’s mind about that context. When intelligence conclusions are couched in ambiguous terms, a reader’s interpretation of the conclusions will be biased in favor of consistency with what the reader already believes. This may be one reason why many intelligence consumers say they do not learn much from intelligence reports.
It is easy to demonstrate this phenomenon in training courses for analysts. Give students a short intelligence report, have them underline all expressions of uncertainty, then have them express their understanding of the report by writing above each expression of uncertainty the numerical probability they believe was intended by the writer of the report. This is an excellent learning experience, as the differences among students in how they understand the report are typically so great as to be quite memorable.
In one experiment, an intelligence analyst was asked to substitute numerical probability estimates for the verbal qualifiers in one of his own earlier articles. The first statement was: “The cease-fire is holding but could be broken within a week.” The analyst said he meant there was about a 30-percent chance the cease-fire would be broken within a week. Another analyst who had helped this analyst prepare the article said she thought there was about an 80-percent chance that the cease-fire would be broken. Yet, when working together on the report, both analysts had believed they were in agreement about what could happen. Obviously, the analysts had not even communicated effectively with each other, let alone with the readers of their report.
Sherman Kent, the first director of CIA’s Office of National Estimates, was one of the first to recognize problems of communication caused by imprecise statements of uncertainty. Unfortunately, several decades after Kent was first jolted by how policymakers interpreted the term “serious possibility” in a national estimate, this miscommunication between analysts and policymakers, and between analysts, is still a common occurrence.
I personally recall an ongoing debate with a colleague over the bona fides of a very important source. I argued he was probably bona fide. My colleague contended that the source was probably under hostile control. After several months of periodic disagreement, I finally asked my colleague to put a number on it. He said there was at least a 51-percent chance of the source being under hostile control. I said there was at least a 51-percent chance of his being bona fide. Obviously, we agreed that there was a great deal of uncertainty. That stopped our disagreement. The problem was not a major difference of opinion, but the ambiguity of the term probable.
The table in Figure 18 shows the results of an experiment with 23 NATO military officers accustomed to reading intelligence reports. They were given a number of sentences such as: “It is highly unlikely that. . . .” All the sentences were the same except that the verbal expressions of probability changed. The officers were asked what percentage probability they would attribute to each statement if they read it in an intelligence report. Each dot in the table represents one officer’s probability assignment.
While there was broad consensus about the meaning of “better than even,” there was a wide disparity in interpretation of other probability expressions. The shaded areas in the table show the ranges proposed by Kent.
The main point is that an intelligence report may have no impact on the reader if it is couched in such ambiguous language that the reader can easily interpret it as consistent with his or her own preconceptions. This ambiguity can be especially troubling when dealing with low-probability, high-impact dangers against which policymakers may wish to make contingency plans.
Consider, for example, a report that there is little chance of a terrorist attack against the American Embassy in Cairo at this time. If the Ambassador’s preconception is that there is no more than a one-in-a-hundred chance, he may elect to not do very much. If the Ambassador’s preconception is that there may be as much as a one-in-four chance of an attack, he may decide to do quite a bit.
The term “little chance” is consistent with either of those interpretations, and there is no way to know what the report writer meant. Another potential ambiguity is the phrase “at this time.” Shortening the time frame for prediction lowers the probability, but may not decrease the need for preventive measures or contingency planning.
An event for which the timing is unpredictable may “at this time” have only a 5-percent probability of occurring during the coming month, but a 60-percent probability if the time frame is extended to one year (5 percent per month for 12 months). How can analysts express uncertainty without being unclear about how certain they are? Putting a numerical qualifier in parentheses after the phrase expressing degree of uncertainty is an appropriate means of avoiding misinterpretation. This may be an odds ratio (less than a one-in-four chance) or a percentage range (5 to 20 percent) or (less than 20 percent). Odds ratios are often preferable, as most people have a better intuitive understanding of odds than of percentages.
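An aside on the parenthetical above: “5 percent per month for 12 months” adds the monthly probabilities. If the months are instead treated as independent trials, the compounded figure comes out somewhat lower. A quick check (my arithmetic, not the book’s):

```python
p_month = 0.05
p_naive = p_month * 12                # 0.60, the additive figure quoted above
p_compound = 1 - (1 - p_month) ** 12  # P(at least one occurrence in a year)
print(round(p_naive, 2), round(p_compound, 2))  # 0.6 0.46
```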
I’ll probably (heh) use the ranges in the figure to do Bayesian updates in the app I’m coding.
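A minimal sketch of what that lookup might look like. The ranges below are illustrative placeholders loosely based on Kent’s proposed estimative ranges, not values read off the figure:

```python
# Placeholder probability ranges for verbal qualifiers (loosely
# modeled on Kent's proposals; not the figure's exact values).
QUALIFIER_RANGES = {
    "almost certain":       (0.87, 0.99),
    "probable":             (0.63, 0.87),
    "chances about even":   (0.40, 0.60),
    "probably not":         (0.20, 0.40),
    "almost certainly not": (0.02, 0.12),
}

def midpoint(term):
    """Collapse a verbal qualifier to a single number usable in an update."""
    lo, hi = QUALIFIER_RANGES[term]
    return (lo + hi) / 2

print(midpoint("probable"))  # midpoint of the "probable" range
```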
Let’s take a trip. Let’s say around 2,000 years ago.
The Roman Empire rules the known (Western) world. Its military superiority in the West is without equal. As the Borg would say, “Resistance is futile”.
Part of the Western world under the boot of Roman rule was Judea, the area promised to the Jews by their god, Yahweh. Many Jews were sickened and disgusted by Roman rule over sacred Jewish lands. Many Jews felt betrayed that their priests and scribes would kowtow to Roman hegemony.
What happened to the glory days of the Maccabees or even Joshua, kicking the asses of foreign powers and ensuring the sacred land promised to the Jews was theirs?
Some Jews even used sacred scripture, like the book of Daniel, to predict that a new Joshua would arrive in the 1st century and kick ass and return Judea to its rightful heirs. As the Jewish historian Josephus wrote in the late 1st century:
But now, what did the most elevate them in undertaking this war, was an ambiguous oracle that was also found in their sacred writings, how, “about that time, one from their country should become governor of the habitable earth.” The Jews took this prediction to belong to themselves in particular, and many of the wise men were thereby deceived in their determination.
And so began a 100-year period of many Jews attempting to be the new Joshua, of kicking the Romans out of the area and restoring rightful rule to the Jews. The Jews went to war with the Romans three times in this 100-year span.
The first time, beginning under the reign of Nero, led to the destruction of the Jewish temple in 70 CE and changed Judaism forever, eliminating the temple-cult portion of Jewish religious practice to this day. The Romans raided the destroyed temple and used the plundered funds to build the Roman Colosseum.
The second time, around a generation after the first war, the Jews went to war with the Romans again. And again, they were sent packing.
The third and final time, in the early 130s CE, the Jews actually won, albeit briefly. This was the short-lived reign of the last Jewish kingdom under the rule of Simon Bar-Kokhba. But after about three years of Jewish rule in Judea, the Romans used the 2nd-century version of the nuclear option, kicked the Jews out of the area for good, and renamed what was once called Judea “Palestine,” the name it carried until after WWII, when Palestine was partitioned and a portion set aside for what’s now Israel.
Interspersed between these wars were what one might call “terrorist attacks.” Though nothing remotely like the suicide bombings we get today, they were still thorns in the side of the Romans. The Romans, for their part, had no qualms about swift and brutal reprisals.
Where are all of these Jewish terrorist attacks today? Nowhere. Because the religion that could probably be seen as inherently violent at one point in history had a reformation. One branch became what’s now called Rabbinic Judaism. The other branch began worshiping a spiritual Joshua who did his conquering in the spiritual realm, and returned the spiritual Jewish kingdom to the Jews and thus had no need for violence against material Romans.