{ localUrl: '../page/utility_function.html', arbitalUrl: 'https://arbital.com/p/utility_function', rawJsonUrl: '../raw/1fw.json', likeableId: '405', likeableType: 'page', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], pageId: 'utility_function', edit: '7', editSummary: '', prevEdit: '6', currentEdit: '7', wasPublished: 'true', type: 'wiki', title: 'Utility function', clickbait: 'The only coherent way of wanting things is to assign consistent relative scores to outcomes.', textLength: '6576', alias: 'utility_function', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2017-02-08 03:55:46', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2015-12-28 20:22:28', seeDomainId: '0', editDomainId: '15', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '0', isEditorComment: 'false', isApprovedComment: 'true', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '122', text: 'A *utility function* is an abstract way of describing the relative degree to which an [agent agent] prefers or disprefers certain outcomes, by assigning an abstract score, the *utility*, to each outcome.\n\nFor example, let's say that an agent's utility function:\n\n- Assigns utility 5 to eating vanilla ice cream.\n- Assigns utility 8 to eating chocolate ice cream.\n- Assigns utility 0 to eating no ice cream at all.\n\nThis tells us that if we offer the agent choices like:\n\n- Choice A: 50% probability of no ice cream, 50% probability of chocolate ice cream\n- Choice B: 100% probability of vanilla ice cream.\n- Choice C: 30% probability of no ice cream, 70% probability of chocolate ice cream\n\n...then the agent will prefer B to A and C to B, since the respective [18t expected 
utilities] are:\n\n$$\\begin{array}{rl}\n0.5 \\cdot €0 + 0.5 \\cdot €8 \\ &= \\ €4 \\\\\n1.0 \\cdot €5 \\ &= \\ €5 \\\\\n0.3 \\cdot €0 + 0.7 \\cdot €8 \\ &= \\ €5.6\n\\end{array}$$\n\nObserve that we could multiply all the utilities above by 2, or 1/2, or add 5 to all of them, without changing the agent's behavior. What the above utility function really says is:\n\n"The interval from vanilla ice cream to chocolate ice cream is 60% of the size of the interval from no ice cream to vanilla ice cream, and the sign of both intervals is positive." \n\nThese *relative intervals* don't change under positive affine transformations (adding a real number or multiplying by a positive real number), so utility functions are equivalent up to a positive affine transformation.\n\n# Confusions to avoid\n\nThe agent is not pursuing chocolate ice cream in order to get some separate desideratum called 'utility'. Rather, this notion of 'utility' is an abstract measure of how strongly the agent pursues chocolate ice cream, relative to other things it pursues.\n\nContemplating how an agent's behavior stays the same when its utility function is multiplied by 2 helps to emphasize:\n\n- Utility isn't a solid entity; there's no invariant way of saying "how much utility" an agent scored over the course of its life. (We could just as easily say it scored twice as much utility.)\n- Utility measures an agent's relative preferences; it's not something an agent wants *instead of* other things. We could as easily describe everything's relative value by describing each thing's value relative to eating a scoop of chocolate ice cream--so without introducing any separate unit of 'utility'.\n- An agent doesn't need to mentally represent a 'utility function' in order for the agent's *behavior* to be *consistent* with that utility function. In the case above, the agent could actually value chocolate ice cream at €8.1 and it would express the same visible preferences of A < B < C. 
That is, its behavior could be *viewed as consistent* with either of those two utility functions, and maybe the agent doesn't explicitly represent any utility function at all.\n\nSome other potential confusions to avoid:\n\n• Saying that an agent behaves consistently with some utility function(s) does not say anything about *what* the agent wants. There's no sense in which the theory of expected utility, by itself, mandates that chocolate ice cream must have more utility than vanilla ice cream.\n\n• The [18v expected utility formalism] is hence something entirely different from [utilitarianism](https://plato.stanford.edu/entries/utilitarianism-history/), a separate moral philosophy with a confusingly neighboring name. \n\n• Expected utility doesn't say anything about needing to value each additional unit of ice cream, or each additional dollar, by the same amount. We can easily have scenarios like:\n\n- Eat 1 unit of vanilla ice cream: €5.\n- Eat 2 units of vanilla ice cream: €7.\n- Eat 3 units of vanilla ice cream: €7.5.\n- Eat 4 units of vanilla ice cream: €3 (because stomachache).\n\nThat is: consistent utility functions must be consistent in how they value *complete final outcomes* rather than how they value *different marginal added units of ice cream.*\n\nSimilarly, there is no rule that a gain of \\$200,000 has to be assigned twice the utility of a gain of \\$100,000, and indeed this is generally not the case in real life. People have diminishing returns on money; the richer you already are, the less each additional dollar is worth.\n\nThis in turn implies that the expected money of a gamble will usually be different from its expected utility.\n\nFor example: Most people would prefer (A) a certainty of \\$1,000,000 to (B) a 50% chance of \\$2,000,010 and a 50% chance of nothing, since the second \\$1,000,010 will have substantially less further value to them than the first \\$1,000,000. 
The utilities of \\$0, \\$1,000,000, and \\$2,000,010 might be something like €0, €1, and €1.2.\n\nThus gamble A has higher expected utility than gamble B, even though gamble B leads to a higher expectation of gain in dollars (by a margin of \\$5). There's no useful concept corresponding to "the utility of the expectation of the gain"; what we want is "the expectation of the utility of the gain".\n\n• Conversely, when we talk about utilities, we are talking about the unit we use to *measure* diminishing returns. By the definition of utility, a gain that you assign +€10 (relative to some baseline alternative) is something you want twice as much as a gain you assign +€5. It doesn't make any sense to imagine diminishing returns on utility as if utility were a separate good rather than being the measuring unit of returns.\n\nIf you claim to assign gain X an expected utility of +€1,000,000, then you must want it a million times as much as some gain Y that you assign an expected utility of +€1. You are claiming that you'd trade a certainty of Y for a 1 in 999,999 chance at gaining X. If that's *not* true, then you either aren't a consistent expected utility agent (admittedly likely) or you don't really value X a million times as much as Y (also likely). If ordinary gains are in the range of €1 then the notion of a gain of +€1,000,000 is far more startling than talking about a mere gain of a million dollars.\n\n# Motivations for utility\n\nVarious [7ry coherence theorems] show that if your behavior can't be viewed as coherent with some consistent utility function over outcomes, you must be using a dominated strategy. Conversely, if you're not using a dominated strategy, we can interpret you as acting as if you had a consistent utility function. 
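As a minimal sketch (not part of the original page), the arithmetic in this article can be checked in a few lines of Python. The outcome names, dictionaries, and function names below are made up for illustration; the € scores and probabilities are the ones used in the examples above.

```python
# Illustrative sketch of the expected-utility arithmetic in this article.

def expected_utility(gamble, utility):
    """Probability-weighted sum of utilities over a gamble's outcomes."""
    return sum(p * utility[outcome] for outcome, p in gamble.items())

# Ice-cream example: none = €0, vanilla = €5, chocolate = €8.
utility = {"none": 0, "vanilla": 5, "chocolate": 8}
gambles = {
    "A": {"none": 0.5, "chocolate": 0.5},   # EU = 4
    "B": {"vanilla": 1.0},                  # EU = 5
    "C": {"none": 0.3, "chocolate": 0.7},   # EU = 5.6
}
eu = {name: expected_utility(g, utility) for name, g in gambles.items()}
print(eu)  # {'A': 4.0, 'B': 5.0, 'C': 5.6}

# A positive affine transformation (here 2u + 5) leaves the ordering
# intact, so both scales describe the same preferences.
rescaled = {k: 2 * v + 5 for k, v in utility.items()}
def order(u):
    return sorted(gambles, key=lambda name: expected_utility(gambles[name], u))
print(order(utility), order(rescaled))  # ['A', 'B', 'C'] ['A', 'B', 'C']

# Money example: expected *dollars* favor gamble B (by $5), but expected
# *utility* favors gamble A under the diminishing-returns scores €0/€1/€1.2.
money_u = {0: 0.0, 1_000_000: 1.0, 2_000_010: 1.2}
a = {1_000_000: 1.0}
b = {0: 0.5, 2_000_010: 0.5}
def expected_money(gamble):
    return sum(p * dollars for dollars, p in gamble.items())
print(expected_money(b) - expected_money(a))  # 5.0
print(expected_utility(a, money_u), expected_utility(b, money_u))  # 1.0 0.6
```

Note that the two `order` calls returning the same ranking is exactly the equivalence-up-to-positive-affine-transformation point: the rescaled scores are a different function but the same preferences.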
See [7hh this tutorial].', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '2016-02-13 00:11:25', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'EliezerYudkowsky' ], childIds: [], parentIds: [ 'expected_utility_formalism' ], commentIds: [], questionIds: [], tagIds: [ 'b_class_meta_tag' ], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21960', pageId: 'utility_function', userId: 'EliezerYudkowsky', edit: '7', type: 'newEdit', createdAt: '2017-02-08 03:55:46', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21959', pageId: 'utility_function', userId: 
'EliezerYudkowsky', edit: '0', type: 'newTag', createdAt: '2017-02-08 03:26:20', auxPageId: 'b_class_meta_tag', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21958', pageId: 'utility_function', userId: 'EliezerYudkowsky', edit: '0', type: 'deleteTag', createdAt: '2017-02-08 03:26:09', auxPageId: 'stub_meta_tag', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21956', pageId: 'utility_function', userId: 'EliezerYudkowsky', edit: '6', type: 'newEdit', createdAt: '2017-02-08 03:25:54', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21801', pageId: 'utility_function', userId: 'EliezerYudkowsky', edit: '0', type: 'newEditGroup', createdAt: '2017-01-20 05:36:02', auxPageId: '15', oldSettingsValue: '123', newSettingsValue: '15' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '12225', pageId: 'utility_function', userId: 'EliezerYudkowsky', edit: '4', type: 'newEdit', createdAt: '2016-06-09 22:32:44', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4525', pageId: 'utility_function', userId: 'EliezerYudkowsky', edit: '2', type: 'newEdit', createdAt: '2015-12-28 20:45:19', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4526', pageId: 'utility_function', 
userId: 'EliezerYudkowsky', edit: '3', type: 'newEdit', createdAt: '2015-12-28 20:45:19', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4520', pageId: 'utility_function', userId: 'EliezerYudkowsky', edit: '1', type: 'newEdit', createdAt: '2015-12-28 20:22:28', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4519', pageId: 'utility_function', userId: 'EliezerYudkowsky', edit: '0', type: 'newTag', createdAt: '2015-12-28 20:12:24', auxPageId: 'stub_meta_tag', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4517', pageId: 'utility_function', userId: 'EliezerYudkowsky', edit: '0', type: 'newParent', createdAt: '2015-12-28 20:12:19', auxPageId: 'expected_utility_formalism', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'false', hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }