Mimicry and meeting halfway
===========================

I’ve talked recently about two different model-free decision procedures:

- At each step, pick the action which the user would regard as best. (See [Approval-Directed Agents](https://arbital.com/p/1t7) and [In Defense of Maximization](https://arbital.com/p/1vm).)
- At each step, pick the action the user would have taken in your place. (See [Automated Assistants](https://arbital.com/p/1tk?title=automated-assistants-) and [Against Mimicry](https://arbital.com/p/1vn).)

(Notes: 1. In contrast with goal-directed procedures, both of these proposals need to be combined with bootstrapping in order to yield very sophisticated behavior. 2. See [here](https://arbital.com/p/1vj?title=learn-policies-or-goals) for a discussion of learning policies vs. goals.)

The first approach seems especially appropriate when dealing with weak reasoners who are unlikely to be able to “fool” the user, while the second seems appropriate when dealing with very powerful reasoners who can successfully imitate human behavior.

It would be great if there were some way to get the best of both worlds. This is especially important given that practical AI systems are likely to be very powerful in some respects and subhuman in others — they may be able to manipulate the user while being unable to imitate them.

Meeting halfway
===============

This post describes a simple procedure that partially realizes that goal. The basic idea is that a human operator can carry out a task in a way that is designed to be easy to imitate. This approach can simultaneously (1) robustly avoid doing anything that is unacceptable to the user, and (2) search for a way to achieve the task within the abilities available to an AI system.

Fix some task X, such as producing an explanatory document of a given level of quality and clarity. Note that we need not have any formal way of deciding whether the task has been accomplished. I’ll describe a procedure for learning to accomplish X in a human-like way.

Our proposal is essentially an elaboration of the generative adversarial models framework recently proposed by [Goodfellow, Bengio, and others](http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf), or of the apprenticeship learning scheme proposed by [Abbeel and Ng](http://ai.stanford.edu/~ang/papers/icml04-apprentice.pdf) (the relationship is fleshed out [here](https://arbital.com/p/1vr)).
In our elaboration, the process-to-be-modeled collaborates with the modeler (while still retaining its important characteristics, namely achieving the task). Our addition is relevant when the generative task is intrinsically much more difficult than the discriminative task, such that accommodation is necessary to make the task feasible.

We will assume we have a learning algorithm which is capable of distinguishing successful implementations of X from failures, and another algorithm which is capable of learning to achieve X.

### The actors

We’ll consider three actors:

- Hugh, the human overseer. (Hugh plays the role of the data distribution in the adversarial generative models framework.)
- Arthur, the AI system. (Arthur plays the role of the generative model G.)
- Eva, another AI system acting as an evaluator. (Eva plays the role of the discriminative model D.)

We’ll be able to teach Arthur to achieve the task X if it can be achieved by the “intersection” of Arthur and Hugh — we’ll define this more precisely later, but note that it may be significantly weaker than _either_ Arthur or Hugh.

### The game

In each round of the game, Hugh and Arthur will attempt to perform the task.

**Eva** will look at their outputs and attempt to classify them as coming from Hugh or Arthur. We’ll assume that Eva is a powerful enough learner that she can learn to distinguish between a player who is successfully achieving the task and one who is not.

**Hugh** will always perform the task, and subject to that constraint he will try to fool Eva into thinking that he is Arthur.

**Arthur** does not “know” anything about the task; he is simply trying to fool Eva into thinking that he is Hugh.

We’ll assume the game reaches an equilibrium, where one of the players is stuck: either Eva can’t distinguish Arthur and Hugh, or Hugh can’t succeed at the task while looking like Arthur, or Arthur can’t look like Hugh.

Critically, at no point is anyone ever optimizing for performance or other real-world consequences — they are only ever optimizing for the outputs of this simple prediction game. All of the optimization power is in Hugh’s decisions about how to achieve X.

We can imagine playing this game many times, giving each participant plenty of time to adjust to the behavior of the others. If the AI systems are able to generalize across domains, then each time we play the game it may involve a different domain. (And if the systems aren’t able to generalize across domains, then it is inevitable that we will have to provide separate training data for each new domain.)
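As a toy illustration of these dynamics, here is a minimal numeric sketch. Nothing in this instantiation comes from the proposal itself: the one-dimensional output space, the threshold task, the Gaussian "players," and the logistic Eva are all assumptions made purely to have something runnable. The point is only the shape of the loop: Eva learns to tell the two apart, Arthur is rewarded solely for fooling her, and Hugh drifts toward Arthur's style without ever violating the task constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task X: produce a real number y with y >= 1.0 (purely illustrative).
TASK_THRESHOLD = 1.0

# Eva: a logistic discriminator, P(output came from Hugh) = sigmoid(w*y + b).
w, b = 0.0, 0.0
def eva_p_hugh(y):
    return 1.0 / (1.0 + np.exp(-(w * y + b)))

# Arthur: a Gaussian "policy" over outputs, trained only to fool Eva.
arthur_mean = 0.0

# Hugh: always achieves the task; within that constraint, he shifts his
# style toward whatever currently looks like Arthur.
hugh_mean = 3.0

for step in range(2000):
    y_hugh = max(TASK_THRESHOLD, hugh_mean + rng.normal())   # always achieves X
    y_arthur = arthur_mean + rng.normal()                     # knows nothing about X

    # Eva: one step of logistic-regression gradient ascent on the labeled pair
    # (label 1 means "produced by Hugh").
    for y, label in [(y_hugh, 1.0), (y_arthur, 0.0)]:
        p = eva_p_hugh(y)
        w += 0.05 * (label - p) * y
        b += 0.05 * (label - p)

    # Arthur: REINFORCE-style update; his only reward is Eva's belief that
    # his output came from Hugh.
    reward = eva_p_hugh(y_arthur)
    arthur_mean += 0.05 * reward * (y_arthur - arthur_mean)

    # Hugh: drift toward Arthur's style, but never below the task constraint.
    hugh_mean = max(TASK_THRESHOLD, hugh_mean + 0.01 * (arthur_mean - hugh_mean))

print(f"Arthur's mean output: {arthur_mean:.2f} (achieves X iff >= {TASK_THRESHOLD})")
print(f"Hugh's mean output:   {hugh_mean:.2f}")
```

In a realistic version Arthur and Eva would be expressive models trained on rich data, and Hugh would be an actual human studying Eva's feedback rather than a mean drifting toward a target.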
### Performance

I mentioned before that this system matches the performance of the “intersection” of Hugh and Arthur. Intuitively, by “intersection” I mean the system whose capabilities are the intersection of Arthur’s capabilities and Hugh’s, i.e. which can do something if both Hugh and Arthur can do it.

This definition is subtle, because it operates at every level simultaneously: for example, if Hugh can move only by walking, and Arthur can move only by crawling, then the intersection cannot move at all, even though both can move. More formally, we’ll say that the intersection of Hugh and Arthur can accomplish a task, relative to Eva, if each of them can accomplish that task in such a way that Eva can’t distinguish them.

To see why this is relevant (and to make it more precise), suppose that Hugh would like to train Arthur to achieve X, but cannot. What can we infer from this failure?

By hypothesis, Arthur is failing to achieve X. But by construction, Hugh always achieves X. And by assumption, Eva can distinguish success at X from failure at X. So Eva can distinguish between Arthur and Hugh.

Eva’s distinguisher induces a partition on the set of ways to achieve X, into those classified as “Hugh” and those classified as “Arthur.” Arthur is not able to find any action in the space marked Hugh, and Hugh is not able to find any action in the space marked Arthur.

This precisely describes which tasks Arthur can be trained to accomplish by this scheme.

We say: **the intersection of Hugh and Arthur (with respect to some learning algorithm Eva) can achieve X if Eva** _cannot_ **learn a partition of ways of achieving X into two sets H and A such that:**

- **Hugh cannot learn to achieve X by a means in A.**
- **Arthur cannot learn to achieve X by a means in H.**
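To restate the bolded definition symbolically (this notation is mine, not the post's): write $S_X$ for the set of ways of achieving X, and "$P \not\to B$" for "party $P$ cannot learn to achieve X by any means in $B$." Then

$$
\operatorname{Int}(\mathrm{Hugh}, \mathrm{Arthur} \mid \mathrm{Eva}) \text{ can achieve } X
\;\iff\;
\neg\,\exists\, (H, A) \text{ learnable by Eva with } S_X = H \sqcup A,\;
\mathrm{Hugh} \not\to A,\;
\mathrm{Arthur} \not\to H.
$$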
Note that in particular, if one of Hugh and Arthur can’t achieve X, then the intersection of the two can’t achieve X.

Conversely, if the intersection of Hugh and Arthur _can’t_ learn to do a task, then it seems intuitive to say that Arthur cannot accomplish that task in a human-like way. So in one sense this scheme is the best we should hope for.

As with approval-direction in general, we will then need to use a bootstrapping scheme to scale up from the intersection of Hugh and Arthur to Arthur’s capabilities. This bootstrapping process is conceptually similar to, but somewhat more subtle than, the process for approval-directed agents more broadly. (And I am correspondingly more uncertain about how often it would work.)

### Safety

At no point in this scheme is any process explicitly or implicitly optimizing for any external consequence. All of the optimization occurs in Hugh’s decision-making. This makes it seem much less likely that there will be any carefully engineered catastrophe.

We can get some additional reassurance from Eva’s judgments. It is very unlikely that Hugh would take any action with catastrophic consequences, so if Eva is intelligent enough to recognize catastrophic consequences then she can use them as a signal that a decision was made by Arthur. As a result, Arthur’s only incentive with respect to catastrophes is to avoid them (and we can also reject any action that Eva thinks was probably produced by Arthur).

### Transparency

In this proposal the evaluator Eva plays two roles:

1. It defines the mimicry task and provides feedback to Arthur. This is the same role played by the discriminative model in the adversarial generative models framework.
2. It defines the “be easy to imitate” task and provides feedback to Hugh.

Role \[2] is necessary for the proposal to work. But in order for it to be effective, the models learned by Eva must be sufficiently comprehensible that Hugh can understand the constraints they impose on his behavior. This might be achieved either by providing fine-grained feedback so that Hugh can learn by trial and error, or by training a transparent model, or by training Eva to produce explanations of her judgments.

In general, Hugh may need to make extensive use of AI assistants in order to successfully model an appropriate behavior. There is no bright line between better understanding Eva’s model and the other aspects of the bootstrapping process.

With counterfactuals
====================

We’ve described a procedure for training Arthur to perform a task. Actually performing this procedure is quite labor-intensive. As usual, the hope would be to train a model once, and then use it with only minimal human involvement. [Counterfactual oversight](https://arbital.com/p/1tj?title=human-in-counterfactual-loop) provides a clean understanding of what exactly the model is being trained to do. But thinking about the resulting system is a little bit counterintuitive, so it’s worth thinking about what the final product looks like.

In production, Arthur is given enough data to understand his current situation, including the desires of the users (which are reflected in the counterfactual behavior of the overseer). He then outputs an action _x_.

Most of the time action _x_ is simply taken, and we move on to the next decision. But sometimes (either randomly or when an [adversarial system believes that Arthur may have made an error](https://arbital.com/p/1vh)) a human overseer is contacted and involved with the decision.

At this point the execution of the entire system pauses, with the action _x_ not yet taken. The overseer Hugh is given time to understand the situation (but is not shown the action _x_).

The evaluator Eva is woken up and also given enough data to understand the situation (but is not shown the action _x_). Hugh is given the opportunity to interact with Eva, to understand her model of the situation and how she will behave in the next step. He can also work with other AI assistants to help model an appropriate behavior.

Hugh ultimately produces an action _x’_ which both (1) reflects Hugh’s desires about what Arthur should do, and (2) is likely to fool Eva into thinking that _x’_ was output by Arthur.

Eva is then given _x’_ and _x_, and her goal is to guess which is which. If Eva succeeds, then she receives a payoff of 1 and Arthur receives a payoff of -1. If Eva fails, then she receives a payoff of -1 and Arthur receives a payoff of 1.

Arthur’s output _x_ is chosen in order to maximize the expected payoff in this round, assuming that the round actually happened.
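The protocol for a single production-time decision can be summarized in code. The sketch below is mine: the callables `arthur_act`, `hugh_respond`, and `eva_guess_hugh` are hypothetical stand-ins for the three parties, and only the protocol itself (the occasional pause, the blinded comparison, and the ±1 payoffs) comes from the description above. Because Arthur cannot tell whether a given round will be checked, optimizing for the checked case pins down his behavior on every round.

```python
import random
from typing import Callable, Optional, Tuple

def production_step(
    situation: object,
    arthur_act: Callable[[object], object],
    hugh_respond: Callable[[object], object],
    eva_guess_hugh: Callable[[object, object, object], int],
    oversight_prob: float = 0.01,
    flagged: bool = False,
) -> Tuple[object, Optional[int]]:
    """One decision under counterfactual oversight.

    Returns (action actually taken, Arthur's payoff), where the payoff is
    None on the (common) unchecked rounds.
    """
    # Arthur acts without knowing whether this round will be checked.
    x = arthur_act(situation)

    if not (flagged or random.random() < oversight_prob):
        return x, None  # most of the time the action is simply taken

    # Oversight: execution pauses. Hugh and Eva study the situation but are
    # not shown x. Hugh produces an action that reflects his wishes while
    # trying to look like Arthur (possibly after consulting Eva and other
    # AI assistants).
    x_prime = hugh_respond(situation)

    # Eva sees both candidate actions in a random order and guesses which
    # one Hugh produced.
    candidates = [x, x_prime]          # index 1 is Hugh's action
    order = [0, 1]
    random.shuffle(order)
    guess = eva_guess_hugh(situation, candidates[order[0]], candidates[order[1]])
    eva_correct = (order[guess] == 1)

    arthur_payoff = -1 if eva_correct else +1   # Eva receives the opposite payoff
    # (One could also refuse to execute x when Eva is confident it came from
    # Arthur, as suggested in the Safety section.)
    return x, arthur_payoff
```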
Conclusion
==========

This post suggests a simple scheme that may capture some of the benefits of directly imitating behaviors, without running into the same fundamental obstruction. There are many clear challenges for implementing anything along these lines; some can already be explored in the context of the [generative adversarial models framework](http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf) (for example, empirical results highlight the necessity of maintaining rough parity between Eva and Arthur), while others are distinctive to the role of the human overseer.

I expect there is room for many more sophisticated schemes that capture the best aspects of mimicry and maximization, but I think that this proposal should make us more optimistic about being able to find a working solution, and less concerned that there is a fundamental tradeoff between flexibility and safety.

(_Funding for this research was provided by a grant from the Future of Life Institute._)