A digression, from an alternative universe.
“Murderism” is the ideology that murdering people is good and letting them live is bad. It’s practically omnipresent: 14,000 people are murdered in the US each year. That’s a lot of murderists, and a testament to the degree to which our schools teach murderist values.
But not all murderism is that obvious. For years, people have been pushing “soft-on-crime” policies that will defund the police and reduce the length of jail sentences – inevitably increasing the murder rate. Advocates of these policies might think that just because they’re not gangsters with knives, they must not be murderists. But anybody who supports murder, whether knife-wielding gangster or policy analyst – is murderist and responsible for the effects of their murderism.
Our two major parties have many differences – but both are united in their support for murderism. Republicans push murderist policies like the invasion of Iraq, which caused the murder of thousands of Iraqis. Democrats claim to be better, but they support openly murderist ideas like euthanasia, promoting the killing of our oldest and most vulnerable citizens. There’s no party in Washington that’s willing to take a good look at itself and challenge the murderist ideals that our political system is built on.
Murderism won’t stop until people understand that it’s not okay to be murderist. So next time you hear people opposing police militarization, or speaking out in favor of euthanasia – tell them that that’s murderism and it’s not okay.
…okay, done. Back in our own universe, we recognize that “murderism” is silly: it confuses cause and effect.
Murder is usually an effect of a strategy pursued for other reasons. The drug dealer who wants to keep rivals off his turf, the soldier who wants to win a war, the gangster who wants to get rid of inconvenient witnesses. If you want to stretch it, add the neocon who wants to “liberate” foreign countries, the cancer patient who wants to “die with dignity”, or the activist who wants to keep people out of jail.
But except in maybe the most deranged serial killers, it’s never pursued because of an inherent preference for murder. Most murderers would probably prefer not to have to kill. If the drug dealer could protect his business equally well by politely requesting people stay off his territory, that would be much easier. If the soldier could win his war without bloodshed, so much the better for everybody. Murder is an effect of other goals – sometimes base, sometimes noble – and the invocation of “murderism” only serves to hide these goals and conflate different actions into a single meaningless category.
Talking about murderism isn’t just uninformative, it’s actively confusing. If you believed that gangsters killed their rivals because of murderism, then there’s no point in examining how poverty interacts with gang membership, or whether the breakdown of law forces people to form gangs to defend themselves. The problem is just that gangsters have murderist values. It should be solved by censoring the works of philosopher David Benatar, who writes about how being alive is bad and it’s morally better not to exist at all. Or by banning high school Goths, whose pro-death aesthetic makes murderism seem cool to teens and causes them to harbor murderist thoughts as adults.
Talk about murderism is obviously confused. But it’s the same confusion between the Definition By Consequences versus the Definition By Motive that we saw was a hallmark of racism.
Absence of evidence IS evidence of absence. There is literally a formula used to determine how much the absence of evidence is indeed evidence of absence:
P(H | E) = P(H) x P(E | H) / P(E).
This shows how much the evidence supports the explanation, and it has a perfectly valid formulation for absence of evidence:
P(H | ~E) = P(H) x P(~E | H) / P(~E).
Example using some made-up stats: if dangerous fires are rare (1%) but smoke from barbecues is fairly common (10%), and 90% of dangerous fires make smoke, then:
P(Dangerous Fire | Smoke) = P(Dangerous Fire) * P(Smoke | Dangerous Fire) / P(Smoke)
= 1% x 90% / 10% = 9%
The absence-of-evidence version is derived from each term’s complement:
P(Smoke | Dangerous Fire) + P(No Smoke | Dangerous Fire) = 100%
P(Dangerous Fire) + P(No Dangerous Fire) = 100%
P(Smoke) + P(No Smoke) = 100%
P(Dangerous Fire | No Smoke) = P(Dangerous Fire) x P(No Smoke | Dangerous Fire) / P(No Smoke)
= 1% x 10% / 90% ≈ 0.11%
The absence of evidence lowers the posterior probability below the 1% prior; therefore absence of evidence (no smoke) is evidence of absence (no dangerous fire).
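The two calculations above can be checked directly. This is just a sketch of the arithmetic using the made-up numbers from the example (the variable names are my own):

```python
# Bayes' rule: P(H | E) = P(H) * P(E | H) / P(E)
p_fire = 0.01              # prior: dangerous fires are rare
p_smoke = 0.10             # smoke (mostly from barbecues) is common
p_smoke_given_fire = 0.90  # most dangerous fires make smoke

# Evidence present: seeing smoke raises the probability of fire
p_fire_given_smoke = p_fire * p_smoke_given_fire / p_smoke

# Complements of each term
p_no_smoke = 1 - p_smoke                        # 0.90
p_no_smoke_given_fire = 1 - p_smoke_given_fire  # 0.10

# Evidence absent: seeing no smoke lowers the probability of fire
p_fire_given_no_smoke = p_fire * p_no_smoke_given_fire / p_no_smoke

print(f"P(fire | smoke)    = {p_fire_given_smoke:.2%}")   # 9.00%
print(f"P(fire | no smoke) = {p_fire_given_no_smoke:.2%}")  # 0.11%
```

Seeing smoke multiplies the prior by nine; seeing no smoke divides it by nine. Both are updates, in opposite directions, from the same formula.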
Looks like someone else has come up with the two very different “systems” of cognition that we wade through every day: Contextualizing and Decoupling.
The differing debating norms between scientific vs. political contexts are not just a cultural difference but a psychological and cognitive one. Beneath the culture clash there are even deeper disagreements about the nature of facts, ideas and claims and what it means to entertain and believe them.
Consider this quote from an article by Sarah Constantin (via Drossbucket):
Stanovich talks about “cognitive decoupling”, the ability to block out context and experiential knowledge and just follow formal rules, as a main component of both performance on intelligence tests and performance on the cognitive bias tests that correlate with intelligence. Cognitive decoupling is the opposite of holistic thinking. It’s the ability to separate, to view things in the abstract, to play devil’s advocate.
Speculatively, we might imagine that there is a “cognitive decoupling elite” of smart people who are good at probabilistic reasoning and score high on the cognitive reflection test and the IQ-correlated cognitive bias tests. These people would be more likely to be male, more likely to have at least undergrad-level math education, and more likely to have utilitarian views. Speculating a bit more, I’d expect this group to be likelier to think in rule-based, devil’s-advocate ways, influenced by economics and analytic philosophy. I’d expect them to be more likely to identify as rational.
This is a conflict between high-decoupling and low-decoupling thought.
It’s a member of a class of disagreements that depend on psychological differences so fundamental that we’re barely even aware they exist.
High-decouplers isolate ideas from each other and the surrounding context. This is a necessary practice in science which works by isolating variables, teasing out causality and formalizing and operationalizing claims into carefully delineated hypotheses. Cognitive decoupling is what scientists do.
To a high-decoupler, all you need to do to isolate an idea from its context or implications is to say so: “by X I don’t mean Y”. When that magical ritual has been performed you have the right to have your claims evaluated in isolation. This is Rational Style debate.
But “decoupling as default” can’t be assumed in Public Discourse like it is in science. Studies suggest that decoupling is not natural behavior (non-WEIRD populations often don’t think this way at all, because they have no use for it). We need to be trained to do it, and even then it’s hard; many otherwise intelligent people have traumatic memories of being taught mathematics in school.
While science and engineering disciplines (and analytic philosophy) are populated by people with a knack for decoupling who learn to take this norm for granted, other intellectual disciplines are not. Instead they’re largely composed of what’s opposite the scientist in the gallery of brainy archetypes: the literary or artistic intellectual.
This crowd doesn’t live in a world where decoupling is standard practice. On the contrary, coupling is what makes what they do work. Novelists, poets, artists and other storytellers like journalists, politicians and PR people rely on thick, rich and ambiguous meanings, associations, implications and allusions to evoke feelings, impressions and ideas in their audience. The words “artistic” and “literary” refer to using idea couplings well to subtly and indirectly push the audience’s meaning-buttons.
This looks to be an expansion of System 1 and System 2 thinking (the Intuitionists and the Rationalists), or what I’ve classified as Moral Thinking vs Rational Thinking (the two are always in conflict). Other descriptions are Emotional vs Mechanical Thinking or Empathizing and Systemizing. It seems as though a bunch of cognitive science is converging on these two modes of thought.
As the author notes, everyone defaults to the Contextualizing mode of thinking (i.e., System 1), where people like to talk about social relationships between agents. Not to beat around the bush, but over-Contextualizing is why we believe in god. And is why, if you think that believing in god is just-so-obviously-irrational-and-wrong, even the people who don’t believe in god (which probably includes EVEN YOU) will succumb to the same family of delusions:
Removing The Unclean Spirit of Religion: Communities built around pseudoscience and woo will probably fill the void left by religion
Nature or Nature’s God: Any new “religion” will have both its nuanced version and its lowest common denominator version floating concurrently in the wider memespace; in the battle of ideas, the most popular ideas are optimized for virulence… not for truth
“If I Think Really, Really Hard, I Can Get The Right Answer”: The average person’s brain is optimized for making friends and influencing people. Not figuring out what’s true. Thinking you can figure out what’s true without first getting the proper tools for figuring out what’s true is folly; thinking that you already have those tools is worse. You have to not only constantly use the tools, but be wary of using the tools improperly.
Truth vs. Morality; Rationality vs. Intuition: There will always be scientific truths that are made as a burnt offering on the altar of an ethical theory. Most moral or ethical theories have some facet of anti-epistemology by dint of tribalistic human nature. This tribalism usually manifests by branding its anti-epistemology as Other Ways Of Knowing™
I’m always wary of methodologies that hijack our intuition. Stories and narratives — contextualizations — are trojan horses that can trick us into believing things by manipulating our feeling of certainty (because the feeling of certainty feels good, it’s easy to succumb to “epiphany porn” without even realizing it). Stories should be viewed with extreme prejudice and massive amounts of skepticism. They’re the easiest way to smuggle influence, the closest thing we have to mind control.