Attainable optimum

by Eliezer Yudkowsky Feb 13 2017

The 'attainable optimum' of an agent's preferences is the best that agent can actually do given its finite intelligence and resources (as opposed to the global maximum of those preferences).

The 'attainable optimum' of an agent's preferences is the most preferred option that the agent can (a) obtain using its bounded material capabilities and (b) find as an available option using its limited cognitive resources, as distinct from the theoretical global maximum of the agent's utility function. When you run a non-mildly-optimizing agent, the resulting outcome is not the single outcome that theoretically maximizes the agent's utility function; rather, it is that agent's attainable optimum of its expectation of that utility function. In short, a preference framework's 'attainable optimum' is what you get in practice when somebody runs the corresponding agent.
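The distinction can be sketched with a toy model (not from the original article; the names `utility` and `attainable_optimum` and the specific search budget are illustrative assumptions). A bounded agent that can only evaluate a limited number of candidate options returns the best option it *found*, which is generally worse than the utility function's true global maximum:

```python
import random

def utility(x):
    """Toy utility function over integer 'world states' 0..10**6 - 1.
    Its theoretical global maximum is at x = 654_321."""
    return -(x - 654_321) ** 2

def attainable_optimum(search_budget, seed=0):
    """The best option a bounded agent can find: it has the cognitive
    resources to evaluate only `search_budget` sampled options, and
    returns the most preferred option among those."""
    rng = random.Random(seed)
    candidates = (rng.randrange(10**6) for _ in range(search_budget))
    return max(candidates, key=utility)

# What actually results from running the bounded agent is its attainable
# optimum, which falls at or below the global maximum of its utility.
found = attainable_optimum(search_budget=100)
assert utility(found) <= utility(654_321)
```

A larger search budget (or more material capability) moves the attainable optimum closer to the global maximum, but for any bounded agent the two are conceptually distinct quantities.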