# No-Free-Lunch theorems are often irrelevant

https://arbital.com/p/nofreelunch_irrelevant

by Eliezer Yudkowsky Jun 19 2016 updated Jun 20 2016

There's often a theorem proving that some problem has no optimal answer across every possible world. But this may not matter, since the real world is a special case. (E.g., a low-entropy universe.)

[summary: "No Free Lunch" theorems prove that across some problem class, it's impossible to do better in one case without doing worse in some other case. They say some equivalent of, "There's no universal good strategy: For every universe where an action causes you to gain \$5, there's a different universe where that same action causes you to lose \$5. You can't get a free lunch across every universe; if you gain \$5 in one possible world you must lose \$5 in another possible world."

To this the realistic reply is very often, "Okay, that's true across maximum-entropy universes in general. But the real world is an extremely special case. We happen to live in a low-entropy universe, where things are far more ordered and predictable than in a universe where all the atoms have been placed at random; and the world where we gain \$5 is far more probable than the world where we lose \$5."

Similarly: In theory, the ideal form of the protein folding problem is NP-hard. In practice, biology is happy to select proteins that reliably fold up a particular way even if they don't reach the exact theoretical minimum. "NP-hard" problems in real life with nonrandom data often have regularities that make them much more easily solvable than the worst-case problems; or they have pretty good approximate solutions.]

There's a wide variety of "No Free Lunch" theorems proving that, in general, some problem is unsolvable. Very often, these theorems are not relevant because the real universe is a special case.

In a very general and metaphorical sense, most of these theorems say some equivalent of, "There's no universal good strategy: For every universe where an action causes you to gain \$5, there's a different universe where that same action causes you to lose \$5. You can't get a free lunch across every universe; if you gain \$5 in one universe you must lose \$5 in another universe." To which the reply is very often, "Sure, that's true across maximum-entropy universes in general, but we happen to live in a low-entropy universe where things are far more ordered and predictable than in a universe where all the atoms have been placed at random."
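The optimization version of this claim can actually be checked exhaustively on a toy search space. The sketch below (Python, with hypothetical helper names) enumerates every possible objective function from a three-point domain to {0, 1}, and confirms that every fixed query order achieves exactly the same average performance once you average over all of those functions — the "maximum-entropy" case where no strategy can win:

```python
from itertools import product, permutations

domain = [0, 1, 2]
# All 2^3 possible objective functions f: domain -> {0, 1}
functions = [dict(zip(domain, values)) for values in product([0, 1], repeat=3)]

def best_after_k(order, f, k=2):
    """Best objective value found after querying the first k points in `order`."""
    return max(f[x] for x in order[:k])

# Average performance of each fixed query order, across ALL functions
averages = {
    order: sum(best_after_k(order, f) for f in functions) / len(functions)
    for order in permutations(domain)
}
# No free lunch: every strategy ties exactly when averaged over all functions
assert len(set(averages.values())) == 1
```

The moment the set of functions is *not* closed under permutation — i.e., the moment the world has structure — the tie breaks, which is exactly the "low-entropy universe" loophole.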

Similarly: In theory, the protein folding problem (predicting the lowest-energy configuration for a string of amino acids) is NP-hard (sorta). But since quantum mechanics is not known to solve NP-hard problems, this just means that in the real world, some proteins fold up in a way that doesn't reach the ideal lowest energy. Biology is happy with just picking out proteins that reliably fold up a particular way. "NP-hard" problems in real life with nonrandom data often have regularities that make them much more easily solvable than the worst cases; or they have pretty good approximate solutions; or we can work with just the solvable cases.
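As one concrete illustration of "pretty good approximate solutions" (a sketch, not from the original text): minimum vertex cover is NP-hard in general, yet a simple greedy rule is provably within a factor of two of the optimum on every instance, and often does much better on structured real-world graphs:

```python
def vertex_cover_2approx(edges):
    """Take both endpoints of any not-yet-covered edge. The chosen edges form a
    matching, and any cover must include at least one endpoint of each, so the
    result is at most twice the size of the (NP-hard-to-find) optimal cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]  # a simple path graph
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)  # a valid cover
```

The worst-case hardness result says nothing against this: it only forbids *exact* optimality on *adversarial* instances.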

A related but distinct idea: It's impossible to decide, in general, whether a Turing machine halts, or whether it has any other nontrivial behavioral property, since that property could be made to hinge on whether some subprocess halts. But that often doesn't stop us from picking particular machines that do halt, or limiting our consideration to computations that run in less than a quadrillion timesteps, etcetera.
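The "limit our consideration" move can be made literal. The `halts_within` helper below is an illustrative sketch, not a standard API: no program can decide halting for *every* machine, but checking whether a particular machine halts within a fixed step budget is trivially decidable — just run it:

```python
def halts_within(step_fn, state, budget):
    """Halting is undecidable in general, but halting *within `budget` steps*
    is decidable by simulation. Convention: step_fn returns None on halt."""
    for _ in range(budget):
        state = step_fn(state)
        if state is None:
            return True
    return False

# A machine that counts down from n and then halts
countdown = lambda n: None if n == 0 else n - 1
assert halts_within(countdown, 5, budget=100)

# A machine that loops forever simply exhausts the budget
loop = lambda n: n
assert not halts_within(loop, 0, budget=100)
```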

This doesn't mean all No-Free-Lunch theorems are irrelevant in the real-universe special case. E.g., the Second Law of Thermodynamics can also be seen as a No-Free-Lunch theorem, and does actually prohibit perpetual motion in our own real universe (on the standard model of physics).

It should finally be observed that human intelligence does work in the real world, meaning that there's no No-Free-Lunch theorem which prohibits an intelligence from working at least that well. Any claim that a No-Free-Lunch theorem prohibits machine intelligence in general, must definitely Prove Too Much, because the same reasoning could be applied to a human brain considered as a physical system.

## Comments

Alexei Andreev

> Similarly: In theory, the protein folding problem (predicting the lowest-energy configuration for a string of amino acids) is NP-hard (sorta). But since quantum mechanics is not known to solve NP-hard problems, this just means that in the real world, biology is happy with just picking out proteins that reliably fold up a particular way (even if it's not the ideal lowest energy). "NP-hard" problems in real life with nonrandom data often have regularities that make them much more easily solvable than the worst cases; or they have pretty good approximate solutions; or we can work with just the solvable cases.

This is interesting! Is there a name for this concept / area of thought?