Indirect decision theory
========================

In which I argue that understanding decision theory can be delegated to AI.

### Indirect normativity

My preferences can probably be described by a utility function $U : [\text{Possible worlds}] \to \mathbb{R}$. But $U$ is likely to lack a simple specification, and even if it has one, I certainly don’t know it. So if I wanted to describe my preferences, I might define the utility of a world $w$ as:

> $U(w)$ = “How good I would judge the world $w$ to be, after an idealized process of reflection.”

Intuitively, if there were a powerful AI around, I’d like it to perform an action $a$ such that $\mathbb{E}[U(w) \mid \operatorname{do}(a)]$ is as large as possible.

### Indirect decision theory

But what does $\mathbb{E}[U(w) \mid \operatorname{do}(a)]$ mean anyway? We haven’t given a prescription for interpreting “$\operatorname{do}(a)$,” and we haven’t specified a distribution over possible worlds.

Really, I’d like to leave these questions up to an AI. That is, whatever work _I_ would do in order to answer these questions, an AI should be able to do just as well or better. And it should behave sensibly in the interim, just like I would.

To this end, consider the definition of a map $U' : [\text{Possible actions}] \to \mathbb{R}$:

> $U'(a)$ = “How good I would judge the action $a$ to be, after an idealized process of reflection.”

Now we’d just like to build an “agent” that takes the action $a$ maximizing $\mathbb{E}[U'(a)]$. Rather than defining our decision theory or our beliefs right now, we will instead come up with some answer during the “idealized process of reflection.” And as long as an AI is uncertain about what we’d come up with, it will behave sensibly in light of its uncertainty.
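To make this concrete, here is a minimal sketch of such an agent, under assumptions that are entirely mine rather than anything specified above: the action set is small and finite, and the AI’s uncertainty about what the idealized process of reflection would conclude is represented as a handful of candidate “verdict functions” with credences.

```python
from typing import Callable, Dict, Iterable

# A "verdict function" is one hypothesis about what idealized reflection
# would conclude: it maps each action to how good that action is judged to be.
VerdictFn = Callable[[str], float]

def expected_indirect_utility(action: str, hypotheses: Dict[VerdictFn, float]) -> float:
    """Estimate E[U'(a)] by averaging the hypothesized verdicts, weighted by credence."""
    return sum(credence * verdict(action) for verdict, credence in hypotheses.items())

def choose_action(actions: Iterable[str], hypotheses: Dict[VerdictFn, float]) -> str:
    """Take the action with the highest estimated E[U'(a)]."""
    return max(actions, key=lambda a: expected_indirect_utility(a, hypotheses))

# Toy example: an agent unsure whether reflection would endorse caution or boldness
# weighs both hypotheses, and here ends up waiting.
cautious = lambda a: {"wait": 1.0, "act_now": 0.2}[a]
bold = lambda a: {"wait": 0.3, "act_now": 1.0}[a]
print(choose_action(["wait", "act_now"], {cautious: 0.6, bold: 0.4}))  # -> wait
```

The point is purely structural: no decision theory or prior over worlds appears explicitly; both are folded into the agent’s uncertainty about what reflection would say.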
This feels like a bit of a cheat. But I think the feeling is an illusion. More precisely:

**A successful AI will need to be able to reason about quantities like $\mathbb{E}[U(w) \mid \operatorname{do}(a)]$, and we can’t dodge this algorithmic problem with a sleight of hand. But a sleight of hand might dodge the philosophical hazard of committing ourselves to a particular definition of $\mathbb{E}[U(w) \mid \operatorname{do}(a)]$.**

$U'$ doesn’t seem any harder to define than $U$. Indeed, it may be easier: possible worlds are complex and massive objects, and to evaluate them we might have to think long and hard and become very different people than we are today. But actions are close to home.

And $U'$ seems every bit as actionable as $U$: if a program can calculate $\mathbb{E}[U(w) \mid \operatorname{do}(a)]$ (whatever we mean by that), it can probably just as well calculate $\mathbb{E}[U'(a)]$.
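A similarly hedged sketch of why the two quantities are on the same footing: suppose (my assumption, not a claim about how real systems work) that the AI has some generic routine `expect` for averaging an arbitrary query over its beliefs, caricatured here as weighted samples of possible states. Then estimating $\mathbb{E}[U(w) \mid \operatorname{do}(a)]$ and estimating $\mathbb{E}[U'(a)]$ differ only in which query is handed to that routine.

```python
from typing import Callable, Iterable, Tuple

# Caricature of a belief state: weighted samples of "how things might be".
Belief = Iterable[Tuple[object, float]]  # (possible state, probability)

def expect(beliefs: Belief, query: Callable[[object], float]) -> float:
    """Average an arbitrary query over the belief state."""
    return sum(p * query(s) for s, p in beliefs)

def estimated_world_utility(beliefs: Belief, action: str,
                            world_after: Callable[[object, str], object],  # stand-in for applying do(a)
                            U: Callable[[object], float]) -> float:
    # E[U(w) | do(a)]: push the action through a model of the world, then score the resulting world.
    return expect(beliefs, lambda s: U(world_after(s, action)))

def estimated_action_utility(beliefs: Belief, action: str,
                             U_prime: Callable[[object, str], float]) -> float:
    # E[U'(a)]: in each possible state, predict how reflection would score the action itself.
    return expect(beliefs, lambda s: U_prime(s, action))
```

Both routines lean on the same expectation machinery; the second just asks a different question of it.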
It may be that this approach isn’t tenable. But I think that is necessarily a question about the internal structure of an AI.

Possible problems
=================

### Is “idealized reflection” up to it?

In order to evaluate how good an action is, we will often want to understand its consequences. This places an additional requirement on the “idealized process of reflection”: it needs to be powerful enough to understand the consequences of each possible action.

I don’t think this is a big deal:

1. In order for $U'$ to guide an AI’s decisions, $U'$ just needs to be as wise as the AI itself. It doesn’t matter if we would like an action because of some hard-to-anticipate consequences, unless the AI can anticipate that we’ll like it.
2. The bar for an “idealized process of reflection” into whose hands we would entrust the entire future seems much _higher_ than the bar for a process of reflection that can determine the consequences of actions today.

### A final wrinkle

Computing $\mathbb{E}[U'(a)]$ doesn’t seem any more or less complicated than computing $\mathbb{E}[U(w) \mid \operatorname{do}(a)]$.

**But** it seems unlikely that superintelligent AI systems will simply compute $\mathbb{E}[U(w) \mid \operatorname{do}(a)]$ for each possible action $a$ and then do the best one. For example, they may have to think about how to think; more broadly, it seems hard to predict what a successful AI system of the future will look like.

It may well be that the internal structure of AI systems favors rational agents over other designs. For example, “maximize $\mathbb{E}[U(w)]$” might be a really useful invariant to organize a system around, and if so it’s not clear whether maximizing $\mathbb{E}[U'(a)]$ is a satisfactory alternative. (I discuss this issue inconclusively [here](https://arbital.com/p/1t7).) It’s plausible that an understanding of decision theory will help us see how the global goal-directed behavior of a system emerges from a combination of heuristics and goal-directed components; but for now we don’t have a very clear picture.

Conclusion
==========

I think this final wrinkle gives us our best reason to study decision theory today. But I think the case is weaker and more subtle than is often assumed, and I am certainly not yet convinced that we can’t delegate decision theory to an AI.