# Category Archives: Bayes

## DeMorgan’s Law And Probability Theory

I touched on this a bit a few years ago but didn’t really post what I wanted to, due to time constraints. I blog from my phone, so posts usually take a few weeks to write 🙂.

Anyway, this post is my demonstration of how laws of logic also apply to probability theory.

In Boolean logic, we can use a truth table to demonstrate that two logic statements are equivalent.

So if you want to prove that the statement P v ~Q (“P or not Q”) is equivalent to Q v ~P, you draw up a truth table and check whether the columns for the two statements match:

As you can see, the P v ~Q column doesn’t match the Q v ~P column. This logically proves that P v ~Q is not equal to Q v ~P.

DeMorgan’s Law is the proof that shows that the statement ~ (P ^ Q) is equal to the statement ~P v ~Q, proving that you can logically substitute one statement for the other:

This is a visual representation of DeMorgan’s Law from Wikipedia:

This exact methodology can be used to show that probability theory is just Boolean logic extended to incorporate uncertainty; the columns will match. Boolean “and” corresponds to mathematical multiplication (assuming independent events), and Boolean “or” corresponds to mathematical addition (assuming mutually exclusive events).

So, for example: flipping a single coin for heads or tails is mutually exclusive; you get either heads or tails, not both. But if you’re flipping more than one coin, heads or tails is no longer mutually exclusive. The change is easy: for “or” you go from multiplying your probabilities to adding them (though if you’re flipping more than one coin and want heads *and* tails, you stick with multiplying). If you flip two coins, addition says the probability of getting heads or tails is 100%.

But what if you flip three (or more) coins? It can’t be a 150% chance of getting heads or tails! Notice what’s going on in that addition: you’re effectively counting the “both heads and tails” outcomes twice. With three coins flipped at the same time, you’ll necessarily get either two heads (one instance of heads and tails) or two tails (a second instance of heads and tails).

So you have to subtract the second heads and tails when doing probabilistic “or”. In other words, P v Q becomes P + Q – P * Q. Indeed, if you go back to flipping only one coin for heads or tails, because this one coin cannot be both heads and tails, P * Q is zero.
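Here’s a quick sketch of that P + Q – P * Q rule in Python, using the coin numbers from above:

```python
def p_or(p_a, p_b, p_both):
    # Inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B)
    return p_a + p_b - p_both

# One coin: heads and tails are mutually exclusive, so P(both) = 0
print(p_or(0.5, 0.5, 0.0))        # 1.0, i.e., 100%

# Two independent coins, "first is heads or second is heads":
# P(both heads) = 0.5 * 0.5, so we subtract the double-counted overlap
print(p_or(0.5, 0.5, 0.5 * 0.5)) # 0.75
```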

Here we see DeMorgan’s Law replicating when we substitute 100% and 0% for “true” and “false”:

Sure, this works with 0%/100%, but what about other probabilities?

As you can see, DeMorgan’s Law holds up even when using probabilities that aren’t equal to 0%/100%.
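For anyone who’d rather check this with code than with a table, here’s a minimal sketch (assuming independent events, so “and” is plain multiplication) verifying that P(~(P ∧ Q)) equals P(~P ∨ ~Q) across a range of probabilities:

```python
def p_not(p):
    return 1 - p

def p_and(p, q):   # independence assumed
    return p * q

def p_or(p, q):    # inclusion-exclusion, independence assumed
    return p + q - p * q

# DeMorgan's Law: P(not (A and B)) should equal P(not A or not B)
for p in (0.0, 0.3, 0.5, 0.75, 1.0):
    for q in (0.0, 0.25, 0.6, 1.0):
        lhs = p_not(p_and(p, q))        # P(~(P ^ Q))
        rhs = p_or(p_not(p), p_not(q))  # P(~P v ~Q)
        assert abs(lhs - rhs) < 1e-12
```

Both sides reduce algebraically to 1 – pq, which is why the “columns” match for every probability, not just 0% and 100%.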

Thus my dilettante proof that probability theory is Boolean logic expanded to account for uncertainty 🙂


Posted by on March 4, 2020 in Bayes, logical fallacies as weak bayesian evidence

## Bongo-Bongoism

From Mind Hacks:

A curious term from anthropology describing the tendency for someone to come up with a counter-example from some usually obscure and remote tribe when anyone makes a general claim about human culture.

Bongo-bongoism: the venerable but ultimately sterile anthropological practice of countering every generalization with an exception located somewhere at some time.

Apparently, it was first used by anthropologist Mary Douglas in her book Natural Symbols.

Link to the culture evolves! blog (where I found the definition).

Posted by on December 31, 2019 in Bayes

## Probability As Proportions


Posted by on December 23, 2019 in Bayes

## The Univariate Fallacy And Everest Regressions

The Fallacy of Univariate Solutions to Complex Systems Problems

Abstract

Complex biological systems, by definition, are composed of multiple components that interact non-linearly. The human brain constitutes, arguably, the most complex biological system known. Yet most investigation of the brain and its function is carried out using assumptions appropriate for simple systems—univariate design and linear statistical approaches. This heuristic must change before we can hope to discover and test interventions to improve the lives of individuals with complex disorders of brain development and function. Indeed, a movement away from simplistic models of biological systems will benefit essentially all domains of biology and medicine. The present brief essay lays the foundation for this argument.

The Univariate Fallacy is when someone argues that, because there is no single quality that separates two categories, the two categories do not exist and are actually just one category.

So for example, there’s no one single quality that separates Windows from macOS; therefore, Windows and macOS are the same.

Ridiculous, right?

There are multiple differences between Windows and macOS. There are also many commonalities. Yet there’s no single indicator that all Macs have and Windows machines do not, or vice versa. Because there isn’t one, concluding that Windows and macOS are the same would be the Univariate Fallacy.

Another example: There’s no single brain structure that separates left-handedness from right-handedness, therefore left or right handedness does not exist.

Here’s an example from Tw****r:

Another example I like is accent recognition: it’s a lot easier to say “She has a British accent” rather than individually describing all the phoneme-level features that your brain is using to make that judgement

The Univariate Fallacy can probably be thought of as a type of statistical fallacy, since this sort of thing seems to always happen in discussions with laypeople about differing statistical populations.
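A tiny toy dataset (a contrived example of my own) makes the fallacy concrete: each individual variable below is identical across the two categories, yet the categories are perfectly separable once you look at both variables together.

```python
cat_a = [(0, 0), (1, 1)]  # points where x == y
cat_b = [(0, 1), (1, 0)]  # points where x != y

# Univariately, the categories are indistinguishable:
# each variable takes exactly the same values in both.
assert sorted(x for x, _ in cat_a) == sorted(x for x, _ in cat_b)
assert sorted(y for _, y in cat_a) == sorted(y for _, y in cat_b)

# Multivariately, they're perfectly separable.
def classify(point):
    x, y = point
    return "A" if x == y else "B"

assert all(classify(p) == "A" for p in cat_a)
assert all(classify(p) == "B" for p in cat_b)
```

No single column tells the categories apart, but the *relationship* between columns does; insisting on a single separating variable is exactly the fallacy.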

While I’m on the subject of statistics, there’s another statistics fail I see happen pretty regularly. Someone has named it the “Everest Regression”.

The Everest Regression is what happens when you “control” for a fundamental variable when comparing two populations. You might think of it as the flip side of the Univariate Fallacy: a sort of multivariate fallacy. I defer to the creator of the Everest Regression.

Basically, “controlling for height, Mount Everest is room temperature”.

Another: Controlling for number of electrons, helium and carbon have the same freezing point.

Controlling for distance from the equator, Alaska and Italy are the same climate.

Controlling for AU, Mars and Earth can support complex life.

You get the point. It’s assuming multivariate differences between phenomena that differ along a single variable. That’s understandable when you’re dealing with new phenomena, but it’s pointless, and frankly sophistry, when applied to concepts and categories that we already know differ along one or a few axes, in order to prove an ideological point.
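Here’s a rough numerical sketch of the Everest example. The 6.5 °C-per-kilometre figure is the standard-atmosphere lapse rate; everything else is made up for illustration:

```python
SEA_LEVEL_C = 15.0      # assumed sea-level temperature
LAPSE_C_PER_KM = 6.5    # standard atmospheric lapse rate
EVEREST_KM = 8.85       # approximate summit altitude

def temp_at(km):
    # Temperature drops linearly with altitude in this toy model
    return SEA_LEVEL_C - LAPSE_C_PER_KM * km

actual = temp_at(EVEREST_KM)  # roughly -42.5 degrees C: not room temperature

# "Controlling for height" adds the altitude effect right back in,
# leaving only the sea-level baseline, because height was the whole story.
controlled = actual + LAPSE_C_PER_KM * EVEREST_KM  # back to 15.0
```

The “controlled” comparison is vacuous: altitude is the one variable that made Everest cold in the first place, so removing it removes the phenomenon.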


Posted by on September 19, 2019 in Bayes, economics/sociology, religion

## Is Logic Alone Enough To Become Rational?

Binary true/false Aristotelian logic is not sufficient to guarantee rationality.

Let’s say you go to the doctor due to an annoying mole on your nose. The doctor takes one look at it and says “That’s a cancerous mole. You should get surgery”. What do you do? Is this true or false? What binary major premise-minor premise-conclusion style argument would you formulate from this in order to support your decision?

Let’s say a friend of yours spent \$5,000 on a 2 week cruise. Two days before your friend is set to go, he reads two separate stories of cruise liners sinking, and decides to cancel his entire trip. What major-premise-minor-premise-conclusion argument could you use to persuade your friend to keep his cruise? Or would you formulate a syllogism to support his decision?

Let’s say you meet Ned at a party. Ned is 25, majored in Computer Science, and lives in California. Which statement about Ned is more likely? A) Ned is a software engineer. B) Ned is a software engineer who works in Silicon Valley.

What all of these examples have in common is that they’re dealing with incomplete information. That’s the world we live in; every one of our decisions deals with varying levels of uncertainty. We don’t live in a world of Aristotelian logic. Any system that claims rationality has to deal — rationally — with uncertainty.

In the doctor example, it’s somewhat common knowledge that the doctor might be wrong. We have a handy meme for dealing with this uncertainty: getting a second opinion. Formally, though, trusting the doctor in this situation commits a base rate fallacy. Cancer is uncommon enough, and a glance is so little information, that the probability the mole is cancerous can be lower than the probability of a false positive, i.e., that the mole is just a mole. A much better rule of thumb is to compare the likelihood of a false positive against the likelihood of a true positive while keeping in mind how (un)common cancer (or whatever the claim) is. For multiple competing claims, compare the likelihood of true positives for each claim, again weighted by how (un)common each claim is.
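With some hypothetical numbers (a 1% base rate of cancerous moles, a doctor who flags 90% of real cancers at a glance but also flags 10% of benign moles), Bayes’ Theorem shows how weak a glance-diagnosis actually is:

```python
base_rate = 0.01            # assumed P(cancer) before seeing anything
p_flag_given_cancer = 0.90  # assumed sensitivity of the glance
p_flag_given_benign = 0.10  # assumed false-positive rate of the glance

# Total probability of the doctor flagging a mole
p_flag = (p_flag_given_cancer * base_rate
          + p_flag_given_benign * (1 - base_rate))

# Bayes' Theorem: P(cancer | flagged)
p_cancer_given_flag = p_flag_given_cancer * base_rate / p_flag
print(p_cancer_given_flag)  # about 0.083: a ~92% chance the mole is just a mole
```

The exact numbers are assumptions, but the shape of the result is robust: when the base rate is low, even a fairly accurate snap judgement is mostly false positives.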

What about Ned? It seems pretty intuitive that Ned is a software engineer who works in Silicon Valley. But this is wrong, no matter how intuitive it seems: the population of software engineers throughout the entire state of California is larger than (and includes) the population of software engineers in Silicon Valley, so (A) is more likely. This brings up another, related point. Our *feeling* of something being correct is itself subject to uncertainty, though it doesn’t *feel* that way: there are other illusions besides optical illusions.
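The conjunction rule behind the Ned example is one line of arithmetic. The numbers here are hypothetical; the inequality holds for any values:

```python
# P(A and B) = P(A) * P(B | A) can never exceed P(A),
# because P(B | A) is at most 1.
p_engineer = 0.3      # hypothetical P(Ned is a software engineer)
p_sv_given_eng = 0.4  # hypothetical P(works in Silicon Valley | engineer)

p_both = p_engineer * p_sv_given_eng
assert p_both <= p_engineer  # statement (A) is always at least as likely
```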


Posted by on September 11, 2019 in Bayes

## Rhesus macaques use probabilities to predict future events

Abstract

Humans can use an intuitive sense of statistics to make predictions about uncertain future events, a cognitive skill that underpins logical and mathematical reasoning. Recent research shows that some of these abilities for statistical inferences can emerge in preverbal infants and non-human primates such as apes and capuchins. An important question is therefore whether animals share the full complement of intuitive reasoning abilities demonstrated by humans, as well as what evolutionary contexts promote the emergence of such skills. Here, we examined whether free-ranging rhesus macaques (Macaca mulatta) can use probability information to infer the most likely outcome of a random lottery, in the first test of whether primates can make such inferences in the absence of direct prior experience. We developed a novel expectancy-violation looking time task, adapted from prior studies of infants, in order to assess the monkeys’ expectations. In Study 1, we confirmed that monkeys (n = 20) looked similarly at different sampled items if they had no prior knowledge about the population they were drawn from. In Study 2, monkeys (n = 80) saw a dynamic ‘lottery’ machine containing a mix of two types of fruit outcomes, and then saw either the more common fruit (expected trial) or the relatively rare fruit (unexpected trial) fall from the machine. We found that monkeys looked longer when they witnessed the unexpected outcome. In Study 3, we confirmed that this effect depended on the causal relationship between the sample and the population, not visual mismatch: monkeys looked equally at both outcomes if the experimenter pulled the sampled item from her pocket. These results reveal that rhesus monkeys spontaneously use information about probability to reason about likely outcomes, and show how comparative studies of nonhumans can disentangle the evolutionary history of logical reasoning capacities.


Posted by on August 21, 2019 in Bayes, cognitive science

## Falsifiability is Bayesian Evidence

I’ve explained how Bayes’ Theorem demonstrates nicely why an unfalsifiable explanation is a bad explanation: an explanation that can be used to explain some data and also the complete opposite of those data is a bad explanation. But I should note that “bad” is doing a lot of work here; on topics besides the existence of god (a hypothesis of maximum entropy, and one of the most common and most unfalsifiable explanations out there), it should be read comparatively. And “bad” doesn’t necessitate “wrong”.

If you don’t feel like clicking on my many previous posts on the topic, I’ll explain here again using a similar example.

Say I have two pants pockets. One pocket has only \$USD coins and the other pocket has coins from all across the world, including \$USD coins. If I pull a coin from a random pocket and it’s a \$USD dime, does this mean that I pulled it from the pocket with only \$USD coins or from the pocket that has coins from all over the world?

All else being equal (e.g., the same number of coins or same number of \$USD coins in each pocket) it’s more likely that I pulled it from the pocket that only has \$USD coins. This is because there are more possible coins available in the pocket with coins from all over the globe. The citizen of the world pocket is less falsifiable than the \$USD pocket. But that doesn’t mean it’s wrong. That the citizen of the world pocket can be used to explain both pulling out a \$USD coin and also not pulling out a \$USD coin makes it a worse explanation than the \$USD only pocket. Less likely doesn’t mean wrong though.
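Putting made-up numbers on the pockets (20 coins each, with 4 \$USD coins in the world pocket) shows the updating explicitly:

```python
# Hypothetical counts: 20 coins per pocket; the world pocket holds 4 USD coins
p_usd_given_usd_pocket = 20 / 20   # the USD-only pocket can produce nothing else
p_usd_given_world_pocket = 4 / 20  # the world pocket spreads its mass around
prior = 0.5                        # the pocket was chosen at random

# Bayes' Theorem: which pocket did a USD dime likely come from?
posterior_usd_pocket = (p_usd_given_usd_pocket * prior) / (
    p_usd_given_usd_pocket * prior + p_usd_given_world_pocket * prior
)
print(posterior_usd_pocket)  # about 0.83 in favor of the USD-only pocket
```

The counts are assumptions, but the direction of the result is the point: the hypothesis that could not have produced anything else gets the bigger update.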

What does an unfalsifiable explanation look like? Again, I’ll invoke the god hypothesis. One pocket has only \$USD coins and the other pocket is a pocket of miracles: any coin from any civilization on any planet in the entirety of existence is possible from it. The reason this is unfalsifiable is that there are even more possibilities for coins. Someone trying to use the miracle pocket to explain pulling out a \$USD quarter would reply, “but you can’t prove it wrong!”

The fact that you can’t prove it wrong is what makes it less likely to be true. Less likely than something that can be proven wrong.

Again, you can look at the math behind this logic here.

The operating principle here is that, the more potential (and mutually exclusive) data a hypothesis, explanation, or model can explain, the less likely it is to explain any particular datum. Somewhat counter-intuitive, but this is what probability theory tells us. Do not trust your intuitions when it comes to probability. They are wrong.
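A sketch of that principle, with arbitrary outcome counts: a hypothesis compatible with N mutually exclusive outcomes can assign each outcome at most 1/N of its probability mass.

```python
def max_likelihood_per_outcome(n_outcomes):
    # Probability mass over mutually exclusive outcomes sums to 1,
    # so each outcome gets at most 1/N on average.
    return 1.0 / n_outcomes

narrow = max_likelihood_per_outcome(8)     # predicts few outcomes
vague = max_likelihood_per_outcome(1024)   # "explains" almost anything
print(narrow / vague)  # 128.0: the narrow hypothesis fits any one datum far better
```

The 8 and 1024 are arbitrary; all that matters is that the more outcomes a hypothesis can “explain,” the thinner its probability is spread over each one.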

As I wrote in one of the other posts on Bayes Theorem and falsifiability, god could have had us live on any planet in the solar system. With god, all things are possible. But we just so happen to live on the only planet in our solar system where it’s possible for intelligent life to exist without supernatural intervention. Yet if we invoke god, that hypothesis could be used to explain us living on Jupiter or Mercury or a comet orbiting the sun every 150 years; the naturalism hypothesis cannot explain those possibilities, if one of them is what had actually happened. God has more possible locations to create humanity (i.e., can’t be proven wrong) than a godless hypothesis.

For a more real-world example so I can stop beating up on god, the same situation happens between evolutionary psychology and blank-slatism. Blank-slatism and the social constructionist hypotheses / social role theories claim that any configuration of human behavior is possible due to socialization, but we just so happen to live in a world where e.g., men have a high murder rate among their fellow men, just like the males of our primate cousins.

The social constructionist hypothesis isn’t as egregious as the god hypothesis, but it’s still less falsifiable than any evolutionary explanation. In this case, intrasexual competition.

Let’s continue with the examples.

What if I told you that the number of men against abortion is higher than the number of women who are anti-abortion? Of course, Patriarchy makes men misogynists who treat women as baby machines, so of course they don’t want women to have a choice.

What if I told you that the number of women against abortion is higher than the number of men who are anti-abortion?

Of course, Patriarchy makes women behave like men, which makes women internalize their misogyny, which means they view themselves as baby machines.

Do you see what I did there? A hypothesis (Patriarchy) that can explain data and be used to explain the complete opposite of those data? That means it can’t be proven wrong. That means it’s behaving like the god hypothesis.

That means it’s unfalsifiable. That means it’s a bad explanation.

What would be a good hypothesis? One that explains only one of those situations and completely fails at explaining the other. Something like intrasexual competition, which would say that abortion is a woman-on-woman issue and that men care less either way. Intrasexual competition can only apply to one of those hypotheticals, and ***spoiler alert*** it’s the one that we see:

Intrasexual competition makes no sense if men were more anti-abortion than women (and if that were indeed the case, we could rule intrasexual competition out, like how pulling a non-\$USD coin rules out the \$USD-only pocket), but it makes sense of why women are both more anti-abortion and more pro-abortion than men: they are competing with other women. Indeed, the poll above shows that the abortion debate is mainly a battle between Republican women and Democratic women. It shouldn’t surprise you that the same thing happens in other domains related to sex and reproduction among humans, like going topless, sex work, fat shaming, promiscuity/slut shaming, wearing makeup, etc. All of these are topics where men are less polarized than women, just like abortion.

Now, I’m not saying that all current models of evolutionary psychology are gold. But with no other information, evolutionary hypotheses have a higher prior of being correct than social role hypotheses when it comes to large-scale human behavior. Indeed, human beings are biased toward applying social rules to phenomena where they don’t actually apply; the opposite error, failing to apply social rules where they should, rarely happens. Remember, our intuitions favor invoking social rules as explanations because we want to curry favor, support allies, or denigrate enemies. It makes System 1 sense that society tells men to be more violent than women, or that society forces women to be less aggressive than men. Unfortunately, those intuitions are highly unlikely to be true. And as I demonstrated above, the social role explanation isn’t a very robust explanation.

As a matter of fact, a lot of the problems people point out with evolutionary psychology today are the same ones that were encountered with evolutionary biology in the 19th century. Darwin’s original formulation had no mechanism (DNA hadn’t been discovered), no predictions (it was explaining things that people had already seen, i.e., a “just-so” story), and no mathematical models.

But it was still a good explanation; a better explanation than the alternatives. A much better explanatory framework than ~~social constructionism~~ Creationism. And that’s all that really matters: the more falsifiable explanation wins the race.

The moral of the story is that there are very few things we encounter in either science or everyday life that are wholly unfalsifiable (besides the existence of god). Unfalsifiability shouldn’t be viewed as binary; some explanations are simply more or less falsifiable than others. Falsifiability is Bayesian, and like all things Bayesian it should be used in a comparative, greater-than/less-than manner, not in a binary is/is-not formulation. I’m just a lone voice crying out in the wilderness, but I would like “unfalsifiable” to be replaced with “more” or “less” falsifiable in almost all common usage, saving “unfalsifiable” for truly egregious cases like omnipotent gods or Last Thursdayism.


Posted by on April 30, 2019 in Bayes, cognitive science, religion
