We don't ask whether the future consequences of this claim seem extreme or important. We don't ask whether the policies that would be required to address the claim are very costly. We ask whether "carbon dioxide warms the atmosphere" or "carbon dioxide fails to warm the atmosphere" seems to conform better to the deep, causal generalizations we already have about carbon dioxide and heat. If we've already considered the deep causal generalizations like those, we don't ask about generalizations causally downstream of the deep causal ones we've already considered. (E.g., we don't say, "But on every observed day for the last 200 years, the global temperature has stayed inside the following range; it would be 'extraordinary' to leave that range.")

This (the ignoring of cost) seems like a flaw in Bayesian analysis, and makes me think there's probably some extension to it, omitted here for simplicity, which takes into account something like cost, value, or utility.

For example, the "cost" of a Bayesian filter deciding to show a salesman a spam email is far lower than the "cost" of the same filter preventing him from seeing an email from a million-dollar sales lead.

So, while the *calculation* of probabilities *should not* take cost into account, it feels like *making decisions based on* those probabilities *should*.
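A minimal sketch of this separation, using the spam-filter example with invented costs (all numbers here are hypothetical, chosen only to make the asymmetry vivid): the probability estimate is computed first, with no reference to cost; cost enters only at the decision step, where we pick the action with the lower expected cost.

```python
def expected_cost(p_spam: float, action: str) -> float:
    # Assumed, illustrative costs: hiding a legitimate million-dollar
    # sales lead is vastly more expensive than showing one spam email.
    COST_SHOW_SPAM = 0.10          # minor annoyance, in dollars
    COST_HIDE_LEAD = 1_000_000.0   # lost sale, in dollars
    if action == "show":
        # We only pay if the message really was spam.
        return p_spam * COST_SHOW_SPAM
    else:  # "hide"
        # We only pay if the message really was legitimate.
        return (1.0 - p_spam) * COST_HIDE_LEAD

def decide(p_spam: float) -> str:
    # The probability p_spam is taken as given (computed cost-free);
    # cost is consulted only to choose the action.
    return min(("show", "hide"), key=lambda a: expected_cost(p_spam, a))

# Even a message judged 99.99% likely to be spam still gets shown,
# because the cost asymmetry dominates the probability.
print(decide(0.9999))  # -> show
```

Note that nothing in `decide` alters the probability itself; a filter that instead lowered its spam probability because the costs were scary would be making exactly the mistake the quoted passage warns against.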

For example: the chance of our getting wiped out in the near future by a natural disaster is small. Yet the potential consequences are dire, and the net cost per person of detection is low, or even negative. Therefore, we have a global near-earth-object detection network, tsunami and earthquake detection networks, fire watchtowers, weather and climate monitors, disease tracking centers, and so on.
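The same expected-cost arithmetic, again with entirely made-up numbers for illustration, shows why cheap monitoring of rare catastrophes can pay off: a tiny probability multiplied by an enormous loss can still exceed a modest detection budget.

```python
# All figures below are invented for illustration, not real estimates.
p_catastrophe_per_year = 1e-6     # assumed annual chance of the disaster
loss_if_unwarned = 1e15           # assumed cost in dollars with no warning
fraction_averted = 0.5            # assumed share of loss early warning saves
detection_budget = 1e8            # assumed annual cost of the network

expected_loss_averted = p_catastrophe_per_year * loss_if_unwarned * fraction_averted
worthwhile = expected_loss_averted > detection_budget

print(f"expected loss averted per year: ${expected_loss_averted:,.0f}")
print(f"detection budget per year:      ${detection_budget:,.0f}")
print(f"worth funding: {worthwhile}")  # -> True under these assumptions
```

Here again the probability stays fixed; only the decision (fund the network or not) is weighted by cost.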

If this extension to Bayesian analysis exists, this seems like a sensible place to link to it.