{
  localUrl: '../page/imitation_justification.html',
  arbitalUrl: 'https://arbital.com/p/imitation_justification',
  rawJsonUrl: '../raw/1vv.json',
  likeableId: '805',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'imitation_justification',
  edit: '5',
  editSummary: '',
  prevEdit: '4',
  currentEdit: '5',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Imitation and justification',
  clickbait: '',
  textLength: '6207',
  alias: 'imitation_justification',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-03-04 04:32:32',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-02-03 23:18:58',
  seeDomainId: '0',
  editDomainId: '705',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '20',
  text: '\n\nSuppose that I am training an AI system to play Go. One approach is to have the AI observe human moves and learn to predict those moves. The AI can then pick moves by sampling from its predicted distribution over “what a human would do.”\n\nBut an AI may be able to learn more quickly by reproducing justifications along with the moves themselves. That is, we can use the following modified training procedure:\n\n- Each time a human makes a move, they provide a justification for that move. For example, the human may point out which groups are dead, or that a particular stone is a ladder breaker.\n- The AI is trained to reproduce moves+justifications. (The justifications might have to be [adjusted](https://arbital.com/p/1vp) in order to be learnable.)\n- In order to make a move, the AI produces a move+justification and then throws away the justification.\n\nNaturally, this approach can be generalized to arbitrary tasks, not just playing games.\n\n### Benefits\n\n\nThe justification helps teach the AI system how to think about the problem, and so can accelerate a learning process which would otherwise have to learn a lot of hidden structure on its own. For example, it may be quite difficult to learn about ladder breakers by watching a sequence of games in which no ladders are ever played, while it is much easier if the justification points them out. This is much more similar to how individual humans learn new tasks — commentary from teachers is an essential part of the process.\n\nProviding justifications can also make the AI system significantly easier to understand and correct — e.g. if the AI plays poorly because it mistakenly believes a live group is dead, it may reveal this mistaken belief as part of its justification.\n\nDepending on how we define “imitation,” this process may have an additional advantage. Suppose that we define imitation by [introducing an evaluator](https://arbital.com/p/1vp) who attempts to distinguish human behavior from the AI’s behavior, and training the AI to fool the evaluator. Then hearing justifications is helpful for the evaluator as well as the imitator, and so increases the rate of learning through this additional channel. Beyond hastening learning, explanations may reduce the probability of blatantly inhuman behavior, by eliminating cases where the evaluator can’t understand why the human is doing what they are doing. This allows the evaluator to reliably give a low score to complex actions that don’t serve the kinds of goals the human would claim to pursue.\n\n### Gathering training data, and counterfactual oversight\n\n\nThis proposal is consistent with [gathering training data continuously](https://arbital.com/p/1tj?title=human-in-counterfactual-loop) rather than during a separate training phase.\n\nEach time the AI would act, with small probability it instead outputs its move + justification (but doesn’t take any action). The human then provides an example move + justification, and a distinguisher tries [to tell the difference](https://arbital.com/p/1vp). Our AI is trained to make the distinguishing task as hard as possible.\n\nBootstrapping and argument\n==========================\n\nI [previously proposed](https://arbital.com/p/1vh) that AI systems gather training data through an argument between an action-proposer and a reviewer. The reviewer can search for the parts of the proposed action (or its justification) which are most likely to be problematic, and then the human can provide additional training data by arbitrating the dispute — that is, by judging whether the most-likely-to-be-problematic aspects really are problematic. This kind of active search for possible mistakes seems very helpful for coping with realistic limitations of our learning systems.\n\nThe same dynamic can be applied to imitation. Consider the following game, played between two learning systems, Alice and Bob:\n\n- Alice proposes an action.\n- Alice justifies her proposal.\n- Bob criticizes this justification. Alice and Bob interact, carrying out an informal argument.\n\nThe argument is never evaluated formally, and no one “wins.” Instead, the goal of the two players is to imitate human performance. That is, a human may decide to play the role of either Alice or Bob, while the other is still played by an AI. As before, the goal of Alice is to behave in the same way that a human would in her shoes, and the goal of Bob is to behave in the same way that a human would in his shoes.\n\nAn evaluator can observe the entire transcript before trying to predict which players (if any) were human or machine. As with justifications, this can make the task of the evaluator much easier.\n\nFooling the evaluator can subsume the goal of “winning the argument”: if the human always wins the argument (as a human would judge “winning”), then the evaluator can learn to use “winning the argument (according to a typical human observer)” as a signal about which player is human. So if the human tries to win the argument, then so will the AI systems.\n\nTo obtain a move, we just stop the interaction as soon as Alice proposes a move — during normal operation, Bob never does anything. His role is only to help train the system.\n\nComparison to approval-maximization\n===================================\n\nThis proposal seems to have most of the advantages of approval-directed behavior, while having a minimal risk of perverse instantiation.\n\nThe key challenge is that imitating human behavior may be more difficult than actually solving the problem at hand. The human who is modeling the behavior can [try to help](https://arbital.com/p/1vp), but it’s not clear whether/when that will be enough. Hopefully other techniques can further bridge the gap, or we can develop a better understanding of how the human model can reliably make themselves imitable.\n\nI suspect that practical approval-directed systems will _not_ have a serious difficulty with perverse instantiation (for the reasons given [here](https://arbital.com/p/1vm)). But it’s still a problem to keep in mind, and I think that trying to address the key challenge with imitation is the most straightforward way to attack the problem of perverse instantiation.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'ambitious_vs_narrow_value_learning'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8301',
      pageId: 'imitation_justification',
      userId: 'JessicaChuan',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-03-04 04:32:32',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7779',
      pageId: 'imitation_justification',
      userId: 'JessicaChuan',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-02-25 01:30:33',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7778',
      pageId: 'imitation_justification',
      userId: 'JessicaChuan',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-02-25 01:29:24',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6761',
      pageId: 'imitation_justification',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newAlias',
      createdAt: '2016-02-11 02:50:23',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6762',
      pageId: 'imitation_justification',
      userId: 'JessicaChuan',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-02-11 02:50:23',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6357',
      pageId: 'imitation_justification',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'deleteChild',
      createdAt: '2016-02-03 23:42:49',
      auxPageId: 'concrete_approval_directed_agents',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6340',
      pageId: 'imitation_justification',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newChild',
      createdAt: '2016-02-03 23:20:25',
      auxPageId: 'concrete_approval_directed_agents',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6339',
      pageId: 'imitation_justification',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-02-03 23:18:58',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6338',
      pageId: 'imitation_justification',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-02-03 23:17:57',
      auxPageId: 'ambitious_vs_narrow_value_learning',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}