{
  localUrl: '../page/apprenticeship_learning_mimicry.html',
  arbitalUrl: 'https://arbital.com/p/apprenticeship_learning_mimicry',
  rawJsonUrl: '../raw/1vr.json',
  likeableId: 'DavidTarrant',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'apprenticeship_learning_mimicry',
  edit: '5',
  editSummary: '',
  prevEdit: '4',
  currentEdit: '5',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Apprenticeship learning and mimicry',
  clickbait: '',
  textLength: '9795',
  alias: 'apprenticeship_learning_mimicry',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-03-04 02:30:36',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-02-03 23:09:45',
  seeDomainId: '0',
  editDomainId: '705',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '40',
  text: '\nThis post compares my [recent proposal](https://arbital.com/p/1vp/mimicry_meeting_halfway) with [Abbeel and Ng 2004, Apprenticeship Learning via Inverse Reinforcement Learning](http://machinelearning.wustl.edu/mlpapers/paper_files/icml2004_PieterN04.pdf). My hope is to point out how similar the two schemes are, and also to illustrate some practical issues that I’ve been discussing in the abstract.\n\n### The correspondence\n\n- Hugh: the expert demonstrator\n- Arthur: the RL algorithm used in step 4 of the max-margin algorithm\n- Eva: the SVM solving the inverse RL problem in step 2\n\nIn each step, Eva produces a classifier that can distinguish Hugh from every strategy identified by Arthur so far. Arthur then searches for a new policy that is classified as “Hugh” by this classifier.\n\n\[Abbeel and Ng] call Eva’s classifier a “reward function,” but they stress that it need not correspond to the actual reward function that we care about. I think that both names are reasonable, since this classifier is intended to converge towards a reward function for “successful human-like performance” or at least some sufficient criterion for such performance. I’ll probably stick with the less-loaded word “classifier” anyway.\n\nThe correspondence is essentially perfect. The primary differences are simplifying assumptions in \[Abbeel and Ng]:\n\n- They assume that the unknown reward function is a linear function of a given set of features, so that they get the desired result even if Eva only considers linear classifiers.\n- They assume their RL agent can optimize any linear reward function as well as the human, guaranteeing that Arthur can find a policy that Eva can’t distinguish from the expert demonstration.\n\nI’m happy to stick with linear classifiers, but I am especially interested in the case where the reinforcement learning algorithm is not able to exactly reproduce human behavior.\n\nIn these domains, I recommend that once Eva has learned a candidate human-recognizing classifier, and once Arthur has failed to fool it, Hugh attempt to adjust his behavior so that he continues to achieve his goal without being flagged by the human-recognizer.\n\n### Example\n\nConsider a video game where we want to teach an RL agent to play as well as possible without doing anything a human wouldn’t. Assume that “score” is one of the features available to Eva.\n\nThe scheme in \[Abbeel and Ng] will sometimes fail to perform reasonably. For example, suppose that the human sometimes collects a star, which the RL agent cannot learn to do. In this case, Eva might learn the rule “if it collects a star, it’s a human.” An RL agent trained against this classifier will try to collect stars and may end up with very few points, even if it is capable of getting scores just as high as the human.\n\nOur proposal is for the human expert to inspect the model learned by Eva — perhaps by just trying to play the game and observing what causes Eva to guess “human,” or by using tools for understanding what the learned model is really doing, and so on. Then the human expert attempts to maximize their score while looking like an AI to Eva. In the case of linear models it is plausible that the human can just look directly at the coefficients.\n\nFor example, if Eva learns the model “if it collects a star, it’s human” then the human player simply needs to stop trying to collect stars. They then train Eva again to learn a different classifier and repeat the process.
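\n\nTo make this concrete, here is a minimal sketch of the coefficient-inspection step, with Eva modeled as an ordinary logistic regression over per-trajectory features. The feature names are hypothetical placeholders, and this illustrates the idea rather than the actual algorithm of \[Abbeel and Ng].\n\n```python\n# Minimal sketch: Eva as a linear classifier over per-trajectory features.\n# The feature names here are hypothetical placeholders.\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\nFEATURES = ["score", "stars_collected", "deaths"]\n\ndef fit_eva(human_features, ai_features):\n    """Fit a linear classifier separating trajectories from Hugh (label 1)\n    and trajectories from Arthur (label 0)."""\n    X = np.vstack([human_features, ai_features])\n    y = np.concatenate([np.ones(len(human_features)), np.zeros(len(ai_features))])\n    return LogisticRegression().fit(X, y)\n\ndef inspect_eva(eva):\n    """Print each feature weight; large positive weights mark human-looking features."""\n    for name, w in zip(FEATURES, eva.coef_[0]):\n        print(f"{name}: {w:+.3f}")\n\n# If "stars_collected" gets a large positive weight, Hugh knows that Eva is\n# keying on stars, and can stop collecting them before the next round.\n```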
\n\nIn general the problem will be much harder. For example, it may be that the human and the AI take different kinds of trajectories through the game world, and it may be difficult for a human to take an AI-like trajectory.\n\nSome of these difficulties can be addressed by using AI assistants to help the human impersonate the AI player. For example, a human might control an AI avatar which can carry out a sequence of instructions of the form “move to point X.” The resulting trajectories might be able to reproduce much of human performance without looking distinctively human. Or we might use gradient descent to find a trajectory near the human’s that nevertheless fools Eva.\n\n### Playing games\n\nIn my post I describe Eva as learning to distinguish Arthur from Hugh. In \[Abbeel and Ng], Eva learns to distinguish Hugh from _every_ strategy learned by Arthur. Why the apparent difference?\n\nIn fact this isn’t a difference at all. My proposal calls for Eva to learn a classifier which Arthur can’t learn to fool. This cannot be done by simply training on the current strategy that Arthur is using, or else the two could continue to go around in a loop indefinitely, with Eva losing at every turn. The strategy given by \[Abbeel and Ng] is the most natural way for Eva to solve this learning problem in their setting.\n\nIn general, we should think of Eva and Arthur as learning to play a zero-sum game against each other — Eva needs to use a learning strategy that is appropriate for playing games. Responding to your opponent’s last move is not a good idea in general. However, there are almost-as-simple iterative techniques that do work well — for example, you can obtain strong guarantees using multiplicative weights, which amounts to (1) training against all of the strategies that have appeared so far, giving them equal weight, and (2) using a regularization that rewards you for playing more unpredictably (a short sketch appears at the end of this section).\n\nMoreover, Eva and Arthur don’t inherently have to use iterative strategies, if they are smart enough to perform sensibly the first time they play the game. This is a key observation for the scalability of the proposal to very powerful AIs, for whom supervision is more costly and training is expected to generalize further, but who are also more likely to be able to learn to play the game without iteration. (It is also important if we want to train Arthur without ever actually carrying out this dynamic — Arthur still picks a good strategy to play in the game, but the game is only actually played very rarely to provide training data.)\n\nThis game framing continues to apply just as well when we introduce Hugh — the game is now a two-team game, with Arthur and Hugh cooperating against Eva.\n\nAnother nice fact about two-player (or two-team) games is that we can choose which player moves first without changing the essential character of the game. (This is clear in \[Abbeel and Ng] — when Eva succeeds she has a particular linear classifier that Arthur can’t fool, and when Arthur succeeds we can take a convex combination of his strategies that Eva can’t distinguish from Hugh’s demonstration.) Thus Eva can pick her model before Hugh picks his demonstration, if Eva is a clever enough learner to anticipate how Hugh will approach the task in question. This means that Hugh only needs to reason about the particular model that Eva has learned, which he is free to inspect while devising his strategy, rather than thinking about what classifiers Eva is capable of learning.
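\n\nAs an illustration of the kind of game-playing strategy discussed above, here is a minimal sketch of the multiplicative-weights (Hedge) update for a repeated zero-sum game over a known payoff matrix. The payoff matrix, learning rate, and round count are placeholder assumptions, and this is not the specific procedure of \[Abbeel and Ng].\n\n```python\n# Minimal sketch: multiplicative weights for a repeated zero-sum game.\n# Rows index candidate classifiers for Eva; columns index policies for Arthur.\n# payoff[i, j] is the payoff to Eva when classifier i faces policy j.\nimport numpy as np\n\ndef multiplicative_weights(payoff, rounds=1000, eta=0.1):\n    n_rows = payoff.shape[0]\n    weights = np.ones(n_rows)\n    avg_play = np.zeros(n_rows)\n    for _ in range(rounds):\n        p = weights / weights.sum()            # Eva plays a mixture of classifiers\n        j = int(np.argmin(payoff.T @ p))       # Arthur best-responds to that mixture\n        weights *= np.exp(eta * payoff[:, j])  # upweight classifiers that did well\n        avg_play += p\n    return avg_play / rounds                   # approximates an optimal mixed strategy for Eva\n\n# Example: two candidate classifiers against two policies.\npayoff = np.array([[1.0, 0.2],\n                   [0.3, 0.9]])\nprint(multiplicative_weights(payoff))\n```\n\nThe exponential update here is an entropy-regularized best response to the equal-weighted history of play, which is the two-part description given above.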
\n\n### Meeting halfway implicitly or explicitly\n\nEven if we don’t use an explicit process of “meeting halfway,” training a system by imitation learning is likely to involve a similar implicit dynamic — researchers focus on performing the kinds of tasks that available systems can actually accomplish, by means that available systems can actually imitate.\n\nThere may be practical benefits to making this procedure explicit and having an interactive process between the human and the distinguisher; or there may not be. But in either case, making this process explicit is useful from the AI control perspective: it allows us to think in advance about how this process will scale as AI becomes more sophisticated and human involvement becomes more costly, and whether there are any fundamental obstacles that don’t appear in existing models of the problem.\n\nOne important observation about the formalized process is that it does not inherently require much human interaction. The use of multiple rounds of interaction is necessary only for learners who are too weak to “see where this is going” in advance — for learners strong enough to find human-like ways to accomplish novel tasks, no additional effort is needed to actually provide appropriate demonstrations. This makes the technique a plausible ingredient in scalable AI control.\n\n### Conclusion\n\nAlthough \[Abbeel and Ng] use reinforcement learning as a technical tool, their scheme directly imitates the expert’s behavior. For example, it’s easy to see that their framework will never achieve higher performance than the expert, and will reproduce every linearly measurable quirk of the expert’s demonstration. Their work nicely illustrates how the _goal_ of imitation is compatible with algorithmic techniques and representations based on reward functions.\n\nMany researchers concerned with AI safety in particular have historically been uninterested in this kind of proposal because mimicry seems inherently unscalable beyond the human level. While this is intuitively plausible, I think it isn’t quite right; because expert demonstrators can themselves make use of AI systems, the resulting performance can radically exceed human capabilities. I suspect that this approach can make full use of whatever AI abilities are available, though for now that is a big open question.\n\nInstead, I think that the key problem with mimicry is that directly learning to imitate is often more challenging than directly accomplishing a goal by other means. Fortunately, this is a challenge that is already being addressed today. Moreover, unlike some other difficulties in AI control, I don’t think this problem will change fundamentally or become radically more difficult as AI systems become more capable.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don’t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don’t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can’t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'paul_ai_control'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8285',
      pageId: 'apprenticeship_learning_mimicry',
      userId: 'JessicaChuan',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-03-04 02:30:36',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7770',
      pageId: 'apprenticeship_learning_mimicry',
      userId: 'JessicaChuan',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-02-24 23:59:29',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6756',
      pageId: 'apprenticeship_learning_mimicry',
      userId: 'JessicaChuan',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-02-11 02:24:24',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6754',
      pageId: 'apprenticeship_learning_mimicry',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newAlias',
      createdAt: '2016-02-11 02:23:40',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6755',
      pageId: 'apprenticeship_learning_mimicry',
      userId: 'JessicaChuan',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-02-11 02:23:40',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6361',
      pageId: 'apprenticeship_learning_mimicry',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'deleteChild',
      createdAt: '2016-02-03 23:43:56',
      auxPageId: 'counterfactual_oversight_vs_training_data',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6330',
      pageId: 'apprenticeship_learning_mimicry',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newChild',
      createdAt: '2016-02-03 23:10:26',
      auxPageId: 'counterfactual_oversight_vs_training_data',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6325',
      pageId: 'apprenticeship_learning_mimicry',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-02-03 23:09:45',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6320',
      pageId: 'apprenticeship_learning_mimicry',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-02-03 23:07:56',
      auxPageId: 'paul_ai_control',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6318',
      pageId: 'apprenticeship_learning_mimicry',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'deleteParent',
      createdAt: '2016-02-03 23:07:47',
      auxPageId: 'IRL_VOI',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6316',
      pageId: 'apprenticeship_learning_mimicry',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-02-03 23:06:49',
      auxPageId: 'IRL_VOI',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}