# Against mimicry

One simple and apparently safe AI system is a “copycat”: an agent that predicts what its user would do in its situation and simply does that. I think that this approach to AI safety (and to AI) is more sensible than it first appears, but that it is ultimately seriously flawed.

### Initial objections

There are a few obvious problems with this proposal, which turn out to be less serious than they first seem.

- What does it mean for me to be “in the AI’s situation”?

We can consider a user operating the AI by remote control. That is, the user continues to do things that are in their interest, but their outputs have exactly the same format as the AI’s outputs.

The human has additional inputs that the AI lacks, but these will be naturally absorbed into the prediction problem being solved by the AI: the predictor can simply try to predict the human’s behavior given the sensor readings available to it (a toy sketch of this setup appears at the end of this section).

- The human may not be that good at the task in question.

Even today, robotic control systems routinely perform tasks that would be very difficult for a human controlling a robot manually. In the future we can expect the problem to get worse as AI systems acquire additional capabilities that humans lack.

I suspect that this problem can be addressed by having the AI imitate a (human + AI) system. That is, the human user can get help from one AI (e.g. one that performs basic motions) in order to produce the training data that will be used to train another AI (e.g. one that strings together basic motions in order to perform a complex task). See [here](https://arbital.com/p/1t8) and [here](https://arbital.com/p/1th?title=implementing-our-considered-judgment) for a more extensive discussion of these ideas.

- Predicting what a human would do is a very roundabout way to solve the problems that humans are solving.

It’s worth noting that more typical AI techniques can be used as tools to help predict human behavior; I discuss this [here](https://arbital.com/p/1vj?title=learn-policies-or-goals). For example, if you want to predict what plan a human will follow, a planning algorithm might help; we need not restrict our attention to what we normally think of as “prediction” algorithms.
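As promised above, here is a toy rendering of the copycat setup. The interfaces, feature vectors, and actions are illustrative assumptions, not anything specified in the original proposal: log what the remote-controlling operator does given the AI’s own sensor readings, and at run time execute whatever a simple predictor says the operator would have done.

```python
import math

# Toy "copycat" loop: demonstrations pair the AI's sensor readings with the
# action the human operator took via remote control in that situation.
# All names and values here are made up for illustration.
demonstrations = [
    ((0.0, 0.1), "open_gripper"),
    ((0.9, 0.2), "close_gripper"),
    ((0.5, 0.8), "lift_arm"),
]

def predict_human_action(observation):
    """Predict what the operator would do, here via a crude nearest-neighbor
    lookup over the logged demonstrations."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, action = min(demonstrations, key=lambda d: dist(d[0], observation))
    return action

def act(observation):
    # The copycat simply does whatever it predicts the human would do.
    return predict_human_action(observation)

print(act((0.85, 0.25)))  # -> close_gripper
```

Any supervised learner could stand in for the nearest-neighbor lookup; the point is only that the agent’s policy is “output the predicted human action.”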
### Advantages

The main advantage of the copycat, from a safety perspective, is that it “never does anything you wouldn’t do.” That cleanly rules out most of the possible catastrophic outcomes. For example, you can be quite confident that the AI won’t hatch an elaborate scheme to take over the world, unless that’s what you would do in its place. Moreover, these failure modes are ruled out in a way that is exceptionally simple, robust, and easy to understand.

### A big problem

Humans and machines have very different capabilities. Even a machine which is superhuman in many respects may be completely unable to match human performance in others. In particular, most realistic systems will be unable to exactly mimic human performance in any rich domain.

In light of this, it’s not clear what mimicry even means, and a naive definition won’t do what we want.

For example, suppose that a human is trying to train a robot to stack some blocks, and that the human does one of two things on any given trial:

1. Successfully stacks the blocks. This happens 95% of the time and takes about 3 seconds.
2. Fails to stack the blocks: the pile collapses after adding the second block.

Consider a robot which is able to stack the blocks only by proceeding more slowly, taking 10 seconds. However, the robot is fine at failing, which it can do in a way nearly indistinguishable from the human.

For many natural definitions of “best,” the “best” way to mimic the human is for the robot to fail 100% of the time. For example, always failing maximizes the probability that the robot and the human do the same thing; it minimizes the KL divergence between the distributions of the robot’s and the human’s behaviors; and it maximizes the difficulty of distinguishing the robot’s behavior from the human’s.
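To see numerically why these criteria all push the robot toward always failing, here is a small sketch that discretizes the example into three distinguishable outcomes (fast success, slow success, failure) and scores a few robot policies. The discretization and the choice of KL direction, KL(robot ‖ human), are my assumptions.

```python
import math

# The human stacks quickly 95% of the time and fails 5% of the time;
# the robot can only stack slowly or fail. (Three-outcome discretization
# of the example, chosen here for illustration.)
OUTCOMES = ["fast_success", "slow_success", "fail"]
human = {"fast_success": 0.95, "slow_success": 0.0, "fail": 0.05}

def robot_dist(p_fail):
    return {"fast_success": 0.0, "slow_success": 1.0 - p_fail, "fail": p_fail}

def match_probability(q, p):
    """Chance that independent samples from q and p are the same outcome."""
    return sum(q[o] * p[o] for o in OUTCOMES)

def kl(q, p):
    """KL(q || p); infinite whenever q puts mass on an outcome p never produces."""
    total = 0.0
    for o in OUTCOMES:
        if q[o] == 0.0:
            continue
        if p[o] == 0.0:
            return math.inf
        total += q[o] * math.log(q[o] / p[o])
    return total

for p_fail in (0.05, 0.5, 1.0):
    q = robot_dist(p_fail)
    print(f"fail rate {p_fail:.2f}: match {match_probability(q, human):.4f}, "
          f"KL {kl(q, human):.3f}")
# Both scores are best at fail rate 1.0: the "optimal" mimic always fails,
# because slow success is something the human never does.
```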
Indeed, what we really want (the “right” emulation) is a thing that the human never does: to stack the blocks successfully but slowly. This seems to be a fundamental problem with the copycat strategy, and one that can’t be resolved in an easy and principled way.

### Conclusion

Of course we can still train an algorithm to imitate a person, but we don’t actually have any guarantee at all like “the agent won’t do anything I wouldn’t do,” and moreover we don’t even want such a guarantee. The outcome will depend on what our system is actually doing: on how it approximates an objective that it cannot exactly meet (or even meet any quantitative approximation of).

The most obvious response is to use maximization, even if our goal is to imitate human behavior. We could apply this maximization either at the level of outcomes (e.g. inverse reinforcement learning) or at the level of actions (e.g. maximizing a rater’s score for how well those actions reproduce the intended behavior).
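As a sketch of the action-level variant (the rater, candidate generator, and behavior encoding below are placeholders I have made up, not a proposal from the text): score candidate behaviors by how well a rater thinks they reproduce the intended behavior, and pick the highest-scoring one.

```python
# Placeholder rater: stands in for a learned model or human judgment of how
# well a candidate behavior reproduces the intended behavior.
def rater_score(behavior):
    return 1.0 if behavior["blocks_stacked"] else 0.0

# Placeholder generator of candidate behaviors for the block-stacking example.
def propose_candidates():
    return [
        {"blocks_stacked": False, "seconds": 3},   # fast but the pile collapses
        {"blocks_stacked": True, "seconds": 10},   # slow but successful
    ]

best = max(propose_candidates(), key=rater_score)
print(best)
# The maximizer picks the slow-but-successful stack, which is what we wanted,
# even though it is something the human demonstrator never did.
```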
I think falling back to maximization is a sensible response, and the only one which we really understand at the moment. But I’m also interested in avoiding some of the possible problems associated with maximization, and so I think it’s worth spending some time trying to “fix” direct imitation.