Extraordinary claims

https://arbital.com/p/extraordinary_claims

by Eliezer Yudkowsky Mar 4 2016 updated Mar 4 2016

What makes something an 'extraordinary claim' that requires extraordinary evidence?


[summary: To determine whether something is an 'extraordinary claim', for purposes of deciding whether it requires extraordinary evidence, we consider / don't consider these factors:

- We do consider whether the claim is consonant with generalizations we've already observed to hold, giving priority to deeper generalizations.
- We don't consider whether the claim's consequences would be extreme or important, whether the policies required to address it would be costly, whether somebody might have a motive to lie about it, or whether prestigious scientists believe it.

We don't automatically believe claims that pass such tests; we just consider them as ordinary claims which we'd believe on obtaining ordinary evidence.]

Summary

What makes something count as an 'extraordinary claim' for the purposes of determining whether it requires extraordinary evidence?

Broadly speaking:

- A claim is 'extraordinary' to the extent that it contradicts generalizations we have already observed to hold, especially deep generalizations - those that are lower-level or closer to the start of causal chains.
- A claim is 'extraordinary' to the extent that its being true (or false) would require us to learn new facts about the universe that we don't already know.

Some properties that do not make a claim inherently 'extraordinary' in the above sense:

- The claim's consequences are extreme or important on a human scale.
- The policies required to address the claim would be very costly.
- Somebody might have a motive to lie about the claim, or to believe it for bad reasons.
- Prestigious scientists do or don't currently believe the claim.

The ultimate grounding for the notion of 'extraordinary claim' would come from Solomonoff induction, or some generalization of Solomonoff induction to handle more naturalistic reasoning about the world. Since this page is a Work In Progress, it currently only lists the derived heuristics, rather than trying to explain in any great detail how those heuristics might follow from Solomonoff-style reasoning.
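(Roughly, and glossing over many technicalities: the Solomonoff prior assigns a hypothesis H a prior probability on the order of 2^(-K(H)), where K(H) is the length in bits of the shortest program that reproduces H's predictions. Simpler hypotheses therefore start out more probable, which is one way of grounding the heuristics above about complicated claims and new postulated facts.)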

Example 1: Anthropogenic global warming

Consider the idea of anthropogenic global warming as it might have been analyzed in advance of observing the actual temperature record. Would the claim, "Adding lots of carbon dioxide to the atmosphere will result in increased global temperatures and climate change", be an 'extraordinary' claim requiring extraordinary evidence to verify, or an ordinary claim not requiring particularly strong evidence? We assume in this case you can do all the physics reasoning or ecological reasoning you want, but you can't actually look at the temperature record yet.

The core argument (in advance of looking at the temperature record) will be: "Carbon dioxide is a greenhouse gas, so adding sufficient amounts of it to the atmosphere ought to trap more heat, which ought to raise the equilibrium temperature of the Earth."
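The direction of this argument can be checked against a standard textbook toy energy-balance model (a sketch of the general mechanism, not a calculation taken from this page): making the atmosphere absorb more outgoing infrared raises the temperature at which the system re-equilibrates.

```python
# Standard toy energy-balance numbers (textbook values, not from this page):
S     = 1361.0   # solar constant, W/m^2
a     = 0.30     # Earth's albedo (fraction of sunlight reflected)
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

# Equilibrium temperature with no greenhouse layer: absorbed flux = radiated flux.
T_bare = ((S * (1 - a)) / (4 * sigma)) ** 0.25
print(round(T_bare))       # ~255 K, well below freezing

# A single fully absorbing greenhouse layer re-radiates half its heat downward,
# raising the surface equilibrium temperature by a factor of 2**0.25.
T_one_layer = T_bare * 2 ** 0.25
print(round(T_one_layer))  # ~303 K: trapping outgoing heat raises the equilibrium
```

The one-layer model overshoots Earth's actual ~288 K surface temperature, but it shows the qualitative point: more trapped heat means a higher equilibrium temperature, not the same one.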

To evaluate the ordinariness or extraordinariness of this claim:

We don't ask whether the future consequences of the claim are extreme or important. Suppose that adding carbon dioxide actually did trap more heat; would the Standard Model of physics think to itself, "Uh oh, that has some extreme consequences" and decide to let the heat radiate away anyway? Obviously not; the laws of physics have no privileged tendency to avoid consequences that are, on a human scale, very important in a positive or negative direction.

We don't ask whether the policies required to address the claim are very costly - this isn't something that would prevent the causal mechanisms behind the claim from operating, and more generally, reality doesn't try to avoid inconveniencing us, so it doesn't affect the prior probability we assign to a claim in advance of seeing any evidence.

We don't ask whether someone has a motive to lie to us about the claim, or whether they might be inclined to believe it for crazy reasons. If someone has a motive to lie to us about the evidence, this affects the strength of the evidence, rather than lowering the prior probability.

Suppose somebody said, "Hey, I own an apartment in New York, and I'll rent it to you for $2000/month." They might be lying and trying to trick you out of the money, but this doesn't mean "I own an apartment in New York" is an extraordinary claim. Lots of people own apartments in New York. It happens all the time, even. The monetary stake means that the person might have a motive to lie to you, but this affects the likelihood ratio, not the prior odds. If we're just considering their unsupported word, the probability that they'll say "I own an apartment in New York" given that they don't own one might be unusually high, because they could be trying to run a rent scam. But this doesn't mean we have to call in physicists to check whether the apartment is really there - we just need stronger, but ordinary, evidence.

Similarly, even if there were people tempted to lie about global warming, we'd consider this a potential weakness of the evidence they offer us, but not a weakness in the prior probability of the proposition "Adding carbon dioxide to the atmosphere heats it up."
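To make the odds-form arithmetic concrete, here is a minimal sketch in Python, with all numbers invented for illustration: a possible scam motive weakens the likelihood ratio of the testimony, while the prior odds of the underlying claim stay where they were.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# "I own an apartment in New York" is an ordinary claim; suppose prior odds
# of 1:10 that a random person making this kind of offer really owns one. (Invented.)
prior = 1 / 10

# Testimony with no motive to lie: P(says it | true) / P(says it | false) = 100.
print(posterior_odds(prior, 100))  # 10.0, i.e. about a 91% posterior probability

# Same words from someone who might be running a rent scam: the likelihood
# ratio drops, say to 5, but the prior odds above haven't moved at all.
print(posterior_odds(prior, 5))    # 0.5, i.e. about a 33% posterior probability
```

Stronger-but-ordinary evidence, like seeing the apartment or checking city records, just means multiplying in a larger likelihood ratio.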

(Similarly, wanting strong evidence about a subject doesn't always coincide with the underlying claim being improbable. Maybe you're considering buying a house in San Francisco, and millions of dollars are at stake. This implies a high value of information, and you might want to invest in extra-strong evidence, like having a third party check the title to the house. But this isn't because it's a Lovecraftian anomaly for anyone to own a house in San Francisco. The money at stake just means that we're willing to pay more to eliminate small residues of improbability from this very ordinary claim.)
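As a toy version of the house example, here is a hypothetical value-of-information calculation, with every figure invented: high stakes can justify paying for extra-strong evidence even when the claim's prior probability is already very high.

```python
p_title_bad = 0.002      # residual probability the seller doesn't really own it (invented)
loss_if_bad = 2_000_000  # dollars lost if this ordinary claim turns out false (invented)
check_cost  = 1_000      # price of a third-party title check (invented)

# Treating the check as perfectly reliable for simplicity, its expected value
# is the expected loss it lets you avoid:
expected_benefit = p_title_bad * loss_if_bad  # $4,000
print(expected_benefit > check_cost)          # True: worth buying, despite the 99.8% prior
```

The check is worth buying because of the stakes, not because "someone owns this house" is improbable.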

We do ask whether "adding carbon dioxide warms the atmosphere" or "carbon dioxide doesn't warm the atmosphere" seems more consonant with the previously observed behavior of carbon dioxide.

After we finish figuring out how carbon dioxide molecules and infrared photons usually behave, we don't give priority to generalizations like, "For as long as we've observed it, the average summer temperature in Freedonia has never gone over 30C." It's true that the predicted consequences of carbon dioxide behaving as it usually does violate another generalization, about how Freedonia usually behaves. But we generally give priority to deeper generalizations continuing, i.e., generalizations that are lower-level or closer to the start of causal chains.

We don't consider whether lots of prestigious scientists believe in global warming. If you expect that lots of prestigious scientists usually won't believe in a proposition like global warming in worlds where global warming is false, then observing an apparent scientific consensus might be moderately strong evidence favoring the claim. But that isn't part of the prior probability before seeing any evidence. For that, we want to ask about how complicated the claim is, and whether it violates or obeys generalizations we already know about.
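In the same odds form as the earlier sketch, with illustrative numbers, a consensus enters as a likelihood ratio multiplying the prior, not as part of the prior itself:

```python
prior_odds = 3.0  # odds from complexity and prior generalizations alone (invented)

# Suppose prestigious scientists reach consensus in 90% of worlds where such a
# claim is true and in 5% of worlds where it is false. (Invented for illustration.)
lr_consensus = 0.90 / 0.05  # an 18:1 likelihood ratio

posterior_odds = prior_odds * lr_consensus    # 54:1
print(posterior_odds / (1 + posterior_odds))  # ~0.98 posterior probability
```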

Another way of looking at a test of extraordinariness is to ask whether the claim's truth or falsity would imply learning something about the universe that we didn't already know. If you'd never observed the temperature record, and had only guessed a priori that adding carbon dioxide would warm the atmosphere, you wouldn't be too surprised to look at the temperature record and find that nothing seemed to be happening. In that case, rather than concluding that you were wrong about the behavior of infrared light, you might suspect, for example, that plants were growing more and absorbing the carbon dioxide, keeping the total atmospheric level in equilibrium. But you would then have learned a new fact, not already known to you (or science), which explained why global temperatures were not rising. So to expect that outcome in advance would be a more extraordinary claim than not to expect it.

If we can imagine some not-too-implausible ways that a claim could be wrong, but they'd all require us to postulate new facts we don't solidly know, then this doesn't make the original claim 'extraordinary'. It's still a very ordinary claim that we'd start believing after seeing an ordinary amount of evidence.
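To put illustrative numbers on this: each way the claim could fail is itself a new fact with its own prior cost, so the probability of failure is bounded by the sum of those costs.

```python
# Invented prior probabilities for the 'escape hatches' that would have to be
# true for added CO2 not to warm the planet:
p_extra_plant_uptake = 0.10  # plants absorb the excess CO2 (invented)
p_other_unknown      = 0.05  # some other unknown compensating mechanism (invented)

# By the union bound, "no warming" is at most the sum of the escape hatches:
p_no_warming = p_extra_plant_uptake + p_other_unknown
print(p_no_warming)  # 0.15 -- expecting no warming is the more extraordinary bet
```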


Comments

Alexei Andreev

But we generally give priority to deeper generalizations continuing, i.e., generalizations that are lower-level or closer to the start of causal chains.

This sounds like a really useful antidote to many inside view / outside view failure modes. Would love to read more about that.