I've [recently proposed](https://arbital.com/p/1t7/approval_directed_agents) training agents to make the decision we would most approve of, rather than to rationally pursue the outcomes we most desire.

Many researchers object to this idea, based on the practical observation that learning goals seems easier and more powerful than directly learning good policies.

I don't think this objection actually undermines approval seeking; I think that goal inference and approval seeking are complements rather than substitutes.

I'll justify this claim in the context of approval-seeking and [inverse reinforcement learning](http://ai.stanford.edu/~ang/papers/icml04-apprentice.pdf). But the examples are chosen for concreteness, and the basic point is more general.

Some objections
===============

I've encountered three objections of this flavor:

1. The simplest representation of a policy is often in terms of a goal and a world model. So learning goals can be statistically and computationally easier than learning policies directly.
2. There is already plenty of data to form good models of the world and human goals, whereas training policies directly requires expensive feedback about individual decisions, or domain-specific expert demonstrations.
3. Approval-directed decisions are never better than the judgment of the human overseer, but we would ultimately like to make better decisions than any human.

My responses to these objections:

### 1. Maximizing approval is not (much) harder than inverse reinforcement learning

For concreteness, imagine that **you** are in the position of the approval-directed agent, trying to find actions that will earn the approval of an alien overseer.

Here's one simple strategy: observe the behavior of the aliens, use an inverse reinforcement learning algorithm to try to figure out what they value, and then take actions that help them get what they seem to want.

This is especially plausible if the aliens provided a good inverse reinforcement learning algorithm as a hint to help you get started. Then you could perform well even if you couldn't discover such an algorithm on your own.

You might not start off using this strategy. And you might eventually find a better strategy. But over the long run, you'll probably earn approval at least as well as if you pursued this simple strategy.
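To make that strategy concrete, here is a minimal sketch of the control flow. Everything in it (the crude counting step standing in for IRL, the alternative strategy, and the approval model) is a hypothetical stand-in chosen for brevity, not a proposal from this post:

```python
def infer_reward_from_demonstrations(demonstrations):
    """Stand-in for an inverse RL step: guess what the overseer values by
    counting which outcomes they chose for themselves."""
    counts = {}
    for _, chosen_outcome in demonstrations:
        counts[chosen_outcome] = counts.get(chosen_outcome, 0) + 1
    return lambda outcome: counts.get(outcome, 0)

def irl_strategy(demonstrations):
    """Strategy A: infer a reward function, then help the overseer get what
    they seem to want."""
    reward = infer_reward_from_demonstrations(demonstrations)
    return lambda options: max(options, key=reward)

def imitation_strategy(demonstrations):
    """Strategy B: a cruder fallback that copies the overseer's most common
    past choice when it is available."""
    outcomes = [chosen for _, chosen in demonstrations]
    most_common = max(set(outcomes), key=outcomes.count)
    return lambda options: most_common if most_common in options else options[0]

def approval_directed_choice(options, strategies, predicted_approval):
    """Pick the recommendation of whichever strategy is predicted to earn the
    most approval; goal inference is a means to that end, not the objective."""
    candidates = [strategy(options) for strategy in strategies]
    return max(candidates, key=predicted_approval)

# Example: the overseer has mostly chosen "tea" in the past.
demos = [("context", "tea"), ("context", "tea"), ("context", "coffee")]
strategies = [irl_strategy(demos), imitation_strategy(demos)]
predicted_approval = {"tea": 0.9, "coffee": 0.5, "juice": 0.2}.get  # hypothetical model
print(approval_directed_choice(["tea", "coffee", "juice"], strategies, predicted_approval))
```

The point is only the shape of the computation: goal inference shows up as a subroutine, while predicted approval decides which subroutine's recommendation actually gets used.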
### 2. Maximizing approval doesn't require (much) extra training data

It's easier to get training data about what actions lead to what outcomes, or about human goals, than to get feedback on a large number of an AI's decisions.

But the argument from the last section implies this isn't a big deal: a semi-supervised learner can use observations about cause-and-effect or about the overseer's values to help it make predictions about what actions will be approved of.

The approval-directed agent may still need some training data about approval, but not much: just enough to confirm a few hypotheses like "the overseer approves of actions that lead to desired outcomes" and "the overseer takes actions that lead to desired outcomes." Then the definition of "desired outcomes" can be inferred as latent structure, mostly using observations of humans' behavior across a wide range of contexts.

(The first of these hypotheses is actually a special case of the second. A human can infer both of them without seeing any training data about approval at all, and a good predictor could probably do the same.)
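As a cartoon of that division of labor (a toy sketch: frequency counting stands in for genuine latent-structure inference, and all the names, data, and thresholds are made up):

```python
from collections import Counter

# Plentiful, cheap data: outcomes the overseer chose for themselves.
observed_choices = ["save", "save", "spend", "save", "save", "save"]

# "Desired outcomes" inferred as latent structure: here, just choice frequencies.
preference = Counter(observed_choices)
def inferred_desirability(outcome):
    return preference[outcome] / len(observed_choices)

# Scarce, expensive data: a few direct approval judgments on the agent's actions.
approval_labels = [("save", True), ("spend", False), ("save", True)]

# Check the linking hypothesis ("the overseer approves of actions that lead to
# desired outcomes") on the small labeled set...
hypothesis_holds = all(
    (inferred_desirability(outcome) > 0.5) == approved
    for outcome, approved in approval_labels
)

# ...and if it holds, predict approval everywhere from behavior data alone.
def predict_approval(outcome):
    assert hypothesis_holds, "fall back to asking for more approval labels"
    return inferred_desirability(outcome) > 0.5

print(predict_approval("save"), predict_approval("spend"))  # True False
```

Almost all of the data consumed here is cheap behavioral data; the expensive approval labels are only used to confirm the linking hypothesis.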
### 3. Maximizing approval leads to good outcomes, even with smart agents

Using approval-direction introduces a new problem: how can the human overseer determine what actions are good? This is especially problematic if the approval-directed agent is much smarter than the human overseer.

Explicit [bootstrapping](https://arbital.com/p/1t8) seems promising: rather than having a human overseer judge an action on their own, we allow them to interact with another approval-directed agent. As long as the (Human+AI) team is smart enough to evaluate the AI's actions, we have grounds for optimism.

(This kind of bootstrapping is central to my optimism about approval-directed agents.)

Inverse reinforcement learning offers an alternative approach to crossing the "human-level" boundary. It aims to model human decision-making as consisting of a rational part and a noisy/irrational/bounded part. We can then ignore the irrational part and use better algorithms and a better world model for the rational part. This is intuitively appealing, but [I think that it remains to be seen](https://arbital.com/p/1vb?title=the-easy-goal-inference-problem-is-still-hard) whether it can work.

So why bother with the indirection?
===================================

I've argued that an approval-directed agent will do at least as well as an IRL agent, because it can always use IRL as a technique to earn approval. But then why not just start with IRL?

An approval-directed agent regards IRL as a means to an end, which can be improved upon as the agent learns more about humans. For example, an approval-directed agent recognizes that the learned utility function is only an approximation, and so it can deviate from that approximation if it predicts that the users would object to the consequences of following it.
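In toy pseudocode, the resulting decision rule looks something like this (`learned_utility`, `predicted_objection`, and `predicted_approval` are all hypothetical models, assumed only for the illustration):

```python
def choose_action(options, learned_utility, predicted_objection, predicted_approval):
    """Follow the learned utility function by default, but treat it as an
    approximation that can be overridden by predicted approval."""
    default = max(options, key=learned_utility)
    if not predicted_objection(default):
        return default
    # The approximation looks untrustworthy here, so fall back on the
    # underlying objective: do whatever the overseer would most approve of.
    return max(options, key=predicted_approval)

# Example: the learned utility favors "aggressive_trade", but the agent
# predicts the user would object to its consequences, so it deviates.
options = ["aggressive_trade", "conservative_trade", "ask_user"]
learned_utility = {"aggressive_trade": 10, "conservative_trade": 4, "ask_user": 1}.get
predicted_objection = lambda action: action == "aggressive_trade"
predicted_approval = {"aggressive_trade": 0.2, "conservative_trade": 0.6, "ask_user": 0.8}.get
print(choose_action(options, learned_utility, predicted_objection, predicted_approval))
# -> "ask_user"
```

The learned utility function is consulted, but it is never the final arbiter.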
At the meta level, when we design algorithms for IRL, we have some standard in mind that lets us say that one design is good and another is bad. By using approval-direction we codify that standard (an algorithm is good if it lets the AI find actions with high approval), so that the AI can do the same kind of work that we are doing when we design an IRL agent.

Of course it may also turn out that other techniques are needed to find good actions. If an approval-directed agent can find these techniques, then it can use them instead of or in addition to approaches based on goal inference.

I care because I think that [a lot of work is still needed](https://arbital.com/p/1vb?title=the-easy-goal-inference-problem-is-still-hard) in order to scale IRL to very intelligent agents. I'm also not optimistic about finding the "right" answer any time soon (I don't know if there is a "right" answer). Being able to say what we are trying to do with an IRL algorithm — and being able to leave parts of the problem to a clever AI rather than needing to solve the whole thing in advance — may improve our prospects significantly.