Utility function

https://arbital.com/p/utility_function

by Eliezer Yudkowsky Dec 28 2015 updated Feb 8 2017

The only coherent way of wanting things is to assign consistent relative scores to outcomes.


A utility function is an abstract way of describing the relative degree to which an agent prefers or disprefers certain outcomes, by assigning a numerical score, the utility, to each outcome.

For example, let's say that an agent's utility function assigns the following scores, written in units of utility that we'll denote €:

• No ice cream: €0
• Vanilla ice cream: €5
• Chocolate ice cream: €8

This tells us that if we offer the agent choices like:

• A: a 50% chance of no ice cream and a 50% chance of chocolate ice cream
• B: a certainty of vanilla ice cream
• C: a 30% chance of no ice cream and a 70% chance of chocolate ice cream

…then the agent will prefer B to A and C to B, since the respective expected utilities are:

$$~$\begin{array}{rrl} A: & 0.5 \cdot €0 + 0.5 \cdot €8 \ &= \ €4 \\ B: & 1.0 \cdot €5 \ &= \ €5 \\ C: & 0.3 \cdot €0 + 0.7 \cdot €8 \ &= \ €5.6 \end{array}$~$$

Observe that we could multiply all the utilities above by 2, or 1/2, or add 5 to all of them, without changing the agent's behavior. What the above utility function really says is:

"The interval from vanilla ice cream to chocolate ice cream is 60% of the size of the interval from no ice cream to vanilla ice cream, and the sign of both intervals is positive."

These relative intervals don't change under positive affine transformations (adding a real number or multiplying by a positive real number), so utility functions are equivalent up to a positive affine transformation.
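As a concrete check (a minimal sketch, not part of the original article), the snippet below computes the expected utilities of gambles A, B, and C, and verifies that a positive affine rescaling of the utility function leaves the preference ordering unchanged:

```python
# Utilities of the three outcomes, in the article's € units of utility.
utilities = {"none": 0.0, "vanilla": 5.0, "chocolate": 8.0}

# The three gambles A, B, C: each maps an outcome to its probability.
gambles = {
    "A": {"none": 0.5, "chocolate": 0.5},
    "B": {"vanilla": 1.0},
    "C": {"none": 0.3, "chocolate": 0.7},
}

def expected_utility(gamble, u):
    """Probability-weighted average utility of a gamble's outcomes."""
    return sum(p * u[outcome] for outcome, p in gamble.items())

def preference_order(u):
    """Gambles sorted from most preferred to least preferred under u."""
    return sorted(gambles, key=lambda g: expected_utility(gambles[g], u), reverse=True)

rescaled = {o: 2 * v + 5 for o, v in utilities.items()}  # a positive affine transform

print(preference_order(utilities))  # ['C', 'B', 'A']
print(preference_order(rescaled))   # ['C', 'B', 'A'], the ordering is unchanged
```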

Confusions to avoid

The agent is not pursuing chocolate ice cream in order to get some separate desideratum called 'utility'. Rather, this notion of 'utility' is an abstract measure of how strongly the agent pursues chocolate ice cream, relative to other things it pursues.

Contemplating how utility functions stay the same when multiplied by 2 helps to emphasize this point: utility is the measuring scale for how strongly outcomes are preferred, not a separate prize the agent is trying to win.

Some other potential confusions to avoid:

• Saying that an agent behaves consistently with some utility function(s) does not say anything about what the agent wants. There's no sense in which the theory of expected utility, by itself, mandates that chocolate ice cream must have more utility than vanilla ice cream.

• The expected utility formalism is hence something entirely different from utilitarianism, a separate moral philosophy with a confusingly neighboring name.

• Expected utility doesn't say anything about needing to value each additional unit of ice cream, or each additional dollar, by the same amount. We can easily have scenarios where, say, a second scoop of ice cream adds less utility than the first scoop did.

That is: consistent utility functions must be consistent in how they value complete final outcomes rather than how they value different marginal added units of ice cream.
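For instance, with made-up numbers (these aren't from the article), a utility function with diminishing returns on ice cream might assign:

$$~$U(\text{0 scoops}) = €0, \qquad U(\text{1 scoop}) = €5, \qquad U(\text{2 scoops}) = €7$~$$

Here the second scoop adds only €2 where the first added €5, yet this is still one consistent assignment of utilities to complete outcomes.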

Similarly, there is no rule that a gain of \$200,000 has to be assigned twice the utility of a gain of \$100,000, and indeed this is generally not the case in real life. People have diminishing returns on money; the richer you already are, the less each additional dollar is worth.

This in turn implies that ranking gambles by expected money will often differ from ranking them by expected utility.

For example: Most people would prefer (A) a certainty of \$1,000,000 to (B) a 50% chance of \$2,000,010 and a 50% chance of nothing, since the second \$1,000,010 would have substantially less further value to them than the first \$1,000,000. The utilities of \$0, \$1,000,000, and \$2,000,010 might be something like €0, €1, and €1.2.

Thus gamble A has higher expected utility than gamble B, even though gamble B leads to a higher expectation of gain in dollars (by a margin of \$5). There's no useful concept corresponding to "the utility of the expectation of the gain"; what we want is "the expectation of the utility of the gain".
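Spelling out the arithmetic with the utilities suggested above (€0, €1, and €1.2):

$$~$\begin{array}{rl} \text{Expected dollars:} & B = 0.5 \cdot \$0 + 0.5 \cdot \$2{,}000{,}010 = \$1{,}000{,}005 \ > \ \$1{,}000{,}000 = A \\ \text{Expected utility:} & B = 0.5 \cdot €0 + 0.5 \cdot €1.2 = €0.6 \ < \ €1 = A \end{array}$~$$

So B wins on expected dollars while A wins on expected utility.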

• Conversely, when we talk about utilities, we are talking about the unit in which returns (diminishing or otherwise) are measured. By the definition of utility, a gain that you assign +€10 (relative to some baseline alternative) is something you want twice as much as a gain you assign +€5. It doesn't make sense to imagine diminishing returns on utility itself, as if utility were a separate good rather than the measuring unit of returns.

If you claim to assign gain X an expected utility of +€1,000,000, then you must want it a million times as much as some gain Y that you assign an expected utility of +€1. You are claiming that you'd trade a certainty of Y for a 1-in-999,999 chance at gaining X. If that's not true, then you either aren't a consistent expected utility agent (admittedly likely) or you don't really value X a million times as much as Y (also likely). If ordinary gains are in the range of €1, then the notion of a gain of +€1,000,000 is far more startling than talking about a mere gain of a million dollars.
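To check the arithmetic on that trade: a 1-in-999,999 chance of gaining X has expected utility

$$~$\frac{1}{999{,}999} \cdot €1{,}000{,}000 \ \approx \ €1.000001,$~$$

which just barely exceeds the €1 of expected utility from a certainty of Y, so an agent with those utilities takes the gamble.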

Motivations for utility

Various coherence theorems show that if your behavior can't be viewed as coherent with some consistent utility function over outcomes, you must be using a dominated strategy. Conversely, if you're not using a dominated strategy, we can interpret you as acting as if you had a consistent utility function. See this tutorial.
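The "dominated strategy" half of that claim is classically illustrated with a money pump. Here is a minimal sketch (the circular preferences, flavors, and trade fee are invented for illustration, not taken from the article): an agent whose preferences cycle will pay for each trade around the cycle and end up strictly poorer while holding exactly what it started with.

```python
# Money-pump sketch: an agent with circular preferences (no consistent
# utility function) pays a small fee for each "upgrade" and ends up
# holding what it started with, minus money: a dominated strategy.
prefers = {("chocolate", "vanilla"),
           ("vanilla", "strawberry"),
           ("strawberry", "chocolate")}  # each pair (X, Y) means X is preferred to Y
FEE = 0.01  # what the agent will pay to swap into something it prefers

holding, money = "vanilla", 1.00
for offer in ["chocolate", "strawberry", "vanilla"] * 3:
    if (offer, holding) in prefers:      # every swap looks like an improvement locally
        holding, money = offer, money - FEE

print(holding, round(money, 2))  # vanilla 0.91 (same flavor, strictly less money)
```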