So there’s been some discussion in the blogosphere about “free will”. I’ve posted this on a few blogs, so this post is really just me copy/pasting my responses here for posterity. Of course, this is really a rehash of a lot of the links that are already available in my tags cognitive science and sociology/economics. Before I get into it, though, it’s impossible for me to stress this quote enough:
The tool we use to philosophize is the brain. And if you don’t know how your tool works, you’ll use it poorly.
The first thing we should do when we come upon a complicated question is to try to taboo the contested word or phrase. In this case, what happens when we taboo “free will”? If it helps, try making your beliefs pay rent: what would you expect to experience if you do have “free will”? What would it feel like not to have it? For that negative question, try not to generalize from fictional evidence. When you taboo “free will,” you find yourself trying to say that your brain is controlled not by physics but by “you”. This implies that “you” must be something other than physics, which is really just the supernatural.
My reading about why people are religious has had the side effect of turning me wholly against the idea of any sort of free will.
The main reason people are religious is that our brains are far more optimized for social activity than for intellectual activity. We believe things because the groups we [want to] belong to believe them. In fact, our human-level intelligence may only have come about as a way of navigating tribal politics.
Chanting, singing, dancing, or even walking together in synchrony increases group bonding, which makes you much more likely to adopt the beliefs of the group involved. Born-again Christians and the military implicitly know this. Being told that you belong to a group that does XYZ makes you try harder to do XYZ (which may partly explain why women are underrepresented in STEM fields).
Then there is the art of persuasion, which takes advantage (if you’re that kind of person) of all this cognitive architecture; a really good salesman acts like a “detective of influence” without you ever realizing that you’ve been swayed. E.g., when a restaurant marked a menu item as “the most popular,” its sales increased by 13–20%. Relatedly, there are neurological differences between the brain “modules” dedicated to liking and wanting. They are correlated but separate: you can like something without wanting it and vice versa, because you don’t actually have control over either module. There’s also the apologist and the revolutionary at work in your brain, which, again, you have no conscious control over when encountering and making sense of new information. Linked in that article is the pretty weird fact that squirting cold water into your left ear makes you more likely to accept new information you originally rejected. That seems exactly like the sort of bug that would happen to a robot.
None of this happens at the conscious level; it happens at the unconscious level of moral intuition and the feeling of certainty. One study showed that we temporarily adopt the moral intuitions of characters we read about in fiction. We don’t have conscious access to the cognitive algorithms that produce feelings like trusting the people in a group or feeling certain about something; we only get the end product: the feeling itself. Our brain is like a government, and the conscious “you” that seems to experience the world is less like the president actually making the decisions and more like a press secretary explaining to the public what the government has already done. Overall, we have no control over the emotions we feel, yet those emotions drive all of our decisions. Worse yet, we all have the ego to think we are the rational actors in the drama of life rather than the emotional ones. Moreover (since I arrived at all this by reading about the causes of religion), high income inequality, loneliness, and feeling out of control all subconsciously increase religiosity.
With all of that in mind, I’m having trouble seeing where any sort of free will comes into play. We’re products of our environment at a level that naive introspection cannot objectively detect. Naive introspection would just produce the same confusion as this anecdote:
“Forget about minds,” he told her. “Say you’ve got a device designed to monitor—oh, cosmic rays, say. What happens when you turn its sensor around so it’s not pointing at the sky anymore, but at its own guts?”
He answered himself before she could: “It does what it’s built to. It measures cosmic rays, even though it’s not looking at them any more. It parses its own circuitry in terms of cosmic-ray metaphors, because those feel right, because they feel natural, because it can’t look at things any other way. But it’s the wrong metaphor. So the system misunderstands everything about itself. Maybe that’s not a grand and glorious evolutionary leap after all. Maybe it’s just a design flaw.”
Now, if I try to make “free will” pay rent in anticipated experiences, I would expect none of the above evidence about why people are religious to exist. I would expect people to believe in religions simply because they didn’t know any better; because they were “dumb”. I would assume there was a little rational homunculus inside our brains that was merely being corrupted by logical fallacies and cognitive biases.
In reality, there is no separate “you” that exists outside of this list of cognitive biases. You are that list. Full stop.
Take, for example, the rhyme-as-reason effect. This is where you remember an aphorism because a phrase that rhymes is easier to recall than one that doesn’t; a rhyming phrase imposes less cognitive load on your System 2 than a non-rhyming one. Thoughts subsequently get cached in your brain because of the slow processor speed of your brain-CPU: your neurons fire at roughly 100 Hz (the computer I’m typing this on has a processor running at about 2.3 GHz, i.e., approximately 2,300,000,000 Hz). To speed things up, your brain uses caching, just like your computer. And because your brain doesn’t differentiate between the feeling of certainty and the feeling of familiarity, you will unconsciously retrieve a cached, rhyming argument and then conclude that it’s “true”.
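The caching analogy can be sketched in a few lines of Python. This is only an illustration of memoization (the computer-science trick the paragraph alludes to), not a model of the brain; the function name and the “verdict” logic are made up for the example:

```python
import functools

call_count = 0  # how many times the "expensive" evaluation actually ran

@functools.lru_cache(maxsize=None)
def evaluate_aphorism(phrase):
    """Stand-in for an expensive 'does this ring true?' judgment."""
    global call_count
    call_count += 1
    # Arbitrary placeholder verdict; the point is the caching, not the logic.
    return phrase.endswith("waste")

evaluate_aphorism("haste makes waste")  # computed fresh: call_count becomes 1
evaluate_aphorism("haste makes waste")  # served from cache: call_count stays 1
print(call_count)  # 1
```

The second call never re-runs the evaluation; it just returns the stored answer. That is exactly the failure mode described above: a cheap “I’ve seen this before” lookup being mistaken for a fresh judgment of truth.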
At what point in this cognitive algorithm do “you” decide on the truth of the aphorism? Like I said, you don’t have built-in access to any of this, especially if you don’t study cognitive science. The only thing “you” have access to is the end product: the feeling of certainty. And “you” can’t just decide to feel certain about something.
Going back to making beliefs pay rent, I really can’t imagine any subjective difference between having free will and not having it. Most conceptions of “free will” seem to be the ones we get from video games, movies, etc. (e.g., in Dragon’s Dogma, your pawns can be possessed by a dragon and run toward you trying to kill you while telling you they’re not in control of themselves), but I don’t think we should generalize from fictional evidence. Fictional evidence is meant to be entertaining, not true.
So if there’s no observational difference between two hypotheses, we should pick the one requiring fewer metaphysical coin flips. Positing a brain plus some undetectable homunculus controlling everything takes more coin flips than positing just a brain.
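The “fewer metaphysical coin flips” heuristic is just the conjunction rule of probability; the labels below are my own gloss, not the post’s:

```latex
% Adding an extra posit can only lower (never raise) the prior probability:
P(\text{brain} \wedge \text{homunculus})
  = P(\text{brain}) \cdot P(\text{homunculus} \mid \text{brain})
  \le P(\text{brain})
```

Each additional independent posit multiplies the prior by a factor of at most one, so the simpler hypothesis can never start out less probable than the conjunction.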
Lastly, just because I don’t think free will exists (or is even meaningful) doesn’t mean I don’t want there to be free will.