When people hear the word “unfalsifiable,” it’s usually in a scientific context. Falsifiability was Karl Popper’s criterion for demarcating science from non-science, which he crafted partly in opposition to Freud’s method of psychoanalysis. In this case, religious people aren’t really concerned that their beliefs are unfalsifiable; religion is not science.
But the appeal to the unfalsifiable isn’t restricted to religious belief. It seems to appeal to people across the moral domain generally, and the largest sample of unfalsifiable beliefs outside of religion is found in the realm of politics.
From Friesen, et al. (2014):
We propose that people may gain certain “offensive” and “defensive” advantages for their cherished belief systems (e.g., religious and political views) by including aspects of unfalsifiability in those belief systems, such that some aspects of the beliefs cannot be tested empirically and conclusively refuted. This may seem peculiar, irrational, or at least undesirable to many people because it is assumed that the primary purpose of a belief is to know objective truth. However, past research suggests that accuracy is only one psychological motivation among many, and falsifiability or testability may be less important when the purpose of a belief serves other psychological motives (e.g., to maintain one’s worldviews, serve an identity). In Experiments 1 and 2 we demonstrate the “offensive” function of unfalsifiability: that it allows religious adherents to hold their beliefs with more conviction and political partisans to polarize and criticize their opponents more extremely. Next we demonstrate unfalsifiability’s “defensive” function: When facts threaten their worldviews, religious participants frame specific reasons for their beliefs in more unfalsifiable terms (Experiment 3) and political partisans construe political issues as more unfalsifiable (“moral opinion”) instead of falsifiable (“a matter of facts”; Experiment 4). We conclude by discussing how in a world where beliefs and ideas are becoming more easily testable by data, unfalsifiability might be an attractive aspect to include in one’s belief systems, and how unfalsifiability may contribute to polarization, intractability, and the marginalization of science in public discourse.
As a sort of aside, just because a belief is unfalsifiable doesn’t mean that it’s false. A belief can be unfalsifiable and yet still be true. Falsifiability is a problem for epistemology, not ontology. For example, there’s no possible observation I can make where I’m not alive. So from my point of view, being alive is unfalsifiable.
Anyway, retreating to unfalsifiable beliefs once you feel you’re under attack seems like a pretty good example of a motte-and-bailey tactic. If you recall, motte-and-bailey behavior, as Scott describes it, is when:
I feel like every single term in social justice terminology has a totally unobjectionable and obviously important meaning – and then is actually used a completely different way.
The closest analogy I can think of is those religious people who say “God is just another word for the order and beauty in the Universe” – and then later pray to God to smite their enemies. And if you criticize them for doing the latter, they say “But God just means there is order and beauty in the universe, surely you’re not objecting to that?”
The result is that people can accuse people of “privilege” or “mansplaining” no matter what they do, and then when people criticize the concept of “privilege” they retreat back to “but ‘privilege’ just means you’re interrupting women in a women-only safe space. Surely no one can object to criticizing people who do that?”
So someone presents evidence that the type of god that the average religious person believes in doesn’t exist and a sophisticate rejoins with a completely unfalsifiable version of that god, saying that “of course” no one believes in the first type of god. But you can bet that once the sophisticate feels like they are no longer under attack, they will go back to believing in the “falsifiable” version of that god again.
So what we have here seems to be basic human psychology. When we feel threatened, we retreat to our motte: the unfalsifiable version of our cherished belief(s). Maybe in the future we’ll read psychology articles about the Motte-and-Bailey Effect, describing people’s tendency to retreat to the unfalsifiable version of their beliefs when they feel like they’re under attack.
Relatedly, if you want to persuade someone, try to make sure they don’t feel like you’re attacking them.