{
  localUrl: '../page/human_counterfactual_loop.html',
  arbitalUrl: 'https://arbital.com/p/human_counterfactual_loop',
  rawJsonUrl: '../raw/1tj.json',
  likeableId: '763',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'human_counterfactual_loop',
  edit: '5',
  editSummary: '',
  prevEdit: '4',
  currentEdit: '5',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Human in counterfactual loop',
  clickbait: '',
  textLength: '11018',
  alias: 'human_counterfactual_loop',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-03-04 00:16:13',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-02-01 23:48:12',
  seeDomainId: '0',
  editDomainId: '705',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '27',
  text: '\n\nConsider an autonomous system which is buying or selling assets, operating heavy machinery, or making war. People often discuss three kinds of supervision:\n\n- **Fully autonomous.** The system operates without human supervision.\n- **Human-in-the-loop.** The system does what a human tells it to.\n- **Human-on-the-loop.** A human oversees the system and may override its behavior if necessary.\n\nIn the context of human-level AI, I think a fourth model of supervision is interesting:\n\n- **Human-in-the-counterfactual-loop**. Each time the system wants to act, it consults a human with a very small probability. The system does what it thinks a human would have told it to do if the human had been consulted.\n\nI think this model isolates an interesting characteristic of autonomy, and is likely to be a useful form of supervision for powerful systems.\n\n### Why bother with prediction?\n\n\n\nWhy would you have a human in the counterfactual loop rather than the actual loop?\n\nMachines can make lots of decisions quickly. One drawback of human involvement is that human time is expensive and human responses are high-latency.\n\nIf you are involved in each decision only with small probability, then:\n\n- You can oversee many more decisions, much more cheaply. One person can oversee many machines acting in parallel.\n- The system can act much faster than you could come to a conclusion. You might take minutes or hours to reach a conclusion that the machine reached in seconds. (This is particularly important when one person is overseeing many machines, since they may need to spend some time understanding the context in which a decision was made.)\n- We can reduce human error by allowing human overseers to think for longer about each decision.\n\n### Why bother being in the loop?\n\n\n\nHuman oversight seems particularly important when it is difficult to explicitly state all of the goals of a system. For example, we might be concerned that any simple rule for asset purchases might sometimes misfire. If you buy any asset whose expected price is higher tomorrow than today, the asset might be frozen, or the purchase might have ethical repercussions, or the transaction might be politically or legally problematic, and so on. Unanticipated possibilities become more extreme and more worrying as we broaden the space of possible actions or make higher-stakes decisions.\n\nWith a human in the counterfactual loop, our system applies its predictive abilities to notice conditions that we would have objected to if we had foreseen them. This partially addresses many of these concerns.\n\nAlthough I’ll describe such a system as having a “human” in the loop, the role could instead be played by any group of humans. As an extreme example, we could have a democracy-in-the-loop (though there would be a few practical issues). This is possible because the turnaround time in a counterfactual loop can be so much longer than in a normal loop.\n\n### More details\n\n\n\nEvery time an autonomous system with a human in the counterfactual loop needs human feedback, it first flips a biased coin which comes up heads with small probability (say 0.001%). If the coin comes up heads, then the system consults a human, supplying them with relevant context and getting their feedback (though the actual feedback can be provided long after the decision). If the coin comes up tails, the system does not consult a human. In either case, it predicts what feedback the human would eventually provide if the coin had come up heads, and uses the predicted feedback as an input into its decision-making process (as if the human were actually in the loop and had provided that feedback).
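\n\nTo make this concrete, here is a minimal sketch of the procedure in code. It is only an illustration of the idea above: the `predictor`, `overseer`, and `system` objects and their methods (`predict_feedback`, `request_review`, `decide`) are hypothetical interfaces assumed for this example, and the query probability is simply the 0.001% figure used above.\n\n
```python
import random

# Illustrative sketch of counterfactual oversight. None of these interfaces
# is prescribed by the post; they only stand in for consulting the overseer,
# predicting the overseer, and making a decision given feedback.

QUERY_PROBABILITY = 1e-5  # the biased coin: heads with probability 0.001%


def counterfactual_feedback(context, predictor, overseer, training_log):
    """Return the feedback used for one decision.

    With small probability the overseer is actually consulted (the answer may
    arrive long after the decision, so only the request is recorded); in
    either case the decision is driven by the predicted feedback, i.e. an
    estimate of what the overseer would say given that they were consulted.
    """
    if random.random() < QUERY_PROBABILITY:
        # Heads: really consult the human. The eventual answer becomes
        # training data for the predictor.
        training_log.append(overseer.request_review(context))
    # Heads or tails, act on the predicted answer of the overseer.
    return predictor.predict_feedback(context)


def act(system, context, predictor, overseer, training_log):
    feedback = counterfactual_feedback(context, predictor, overseer, training_log)
    return system.decide(context, feedback)
```
\n\nThe point of this arrangement is that the coin flip affects only whether the human is actually consulted; the decision itself always runs off the prediction.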
\n\nThis requires an algorithm that can effectively predict human judgments. The more complex the domain, the harder this prediction problem is. There is no guarantee that we have predictors powerful enough to help oversee the autonomous systems that we build. But the predictors never need to exercise superhuman ability, and so we will certainly have good enough predictors by the time we reach broadly human-level AI.\n\nThe predictor needs to be able to make counterfactual predictions. But assuming that the predictor can’t predict which way the coin will turn up, this amounts to a conditional prediction: _given_ that the coin turned up heads, what do I predict the human will tell me? Any useful predicting system must be able to make predictions of this form.
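\n\nIn symbols (my paraphrase; the post itself introduces no notation): write $x$ for the context of a decision, $C$ for the coin, and $F$ for the feedback the overseer would eventually give if consulted. The predictor’s job is to estimate $\mathbb{E}[F \mid x, C = \text{heads}]$. Because the system cannot predict the coin, conditioning on heads changes nothing about the situation except whether the consultation actually happens, which is why this conditional prediction can stand in for the counterfactual one.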
\n\n### Why actually consult the overseer?\n\n\nWe actually consult the overseer in order to provide a concrete experiment for the predictor to predict. If the predictor is able to consider more extreme counterfactuals, then we may not even need to have a human around. We could instead say “Predict what Bob _would_ say, if we had trained him to oversee you.” This doesn’t actually change the details much. But for this post, I’ll conservatively assume that we need to actually set up the experiment in order to get good predictions about it.\n\nThere are two forces that keep us from reducing the probability of human involvement to 0:\n\n**Keeping us sharp**. If the human only steps in once every century, the human may have forgotten what they are supposed to be doing. But this is a _very_ weak requirement, and it applies at the level of the human operator rather than of the machine. So one human could potentially oversee a warehouse of thousands of machines, carefully reviewing only a few decisions each day (or month).\n\n**Training data.** A sophisticated AI could predict the human overseer’s behavior using very few direct observations, by leveraging other related information. For example, if I hear someone say “Please be careful with those boxes” I will expect them to object to any plan that involves dropping the boxes. I don’t need a library of training data which includes boxes being dropped. That said, this still requires some contact or common understanding between the overseer and the machines being overseen, so that they can understand how their decisions will be evaluated.\n\nFor a weak system, actually consulting the human overseer regularly may be the only way to make good predictions about the overseer. This data can still be shared across many similar systems, potentially driving the per-system cost very low.\n\nMany systems will be somewhere in between these extremes, able to leverage some general information (whether from observations of the overseer, or general heuristics installed by the designers of the system) but will need to get some direct observations of the overseer in order to make good predictions.\n\nDiscussion\n==========\n\n\nA human in the counterfactual loop eliminates some but not all of the concerns with automation (and so serves as a thought experiment to identify which aspects of human control are most important). In particular, such a system will not do anything that _it thinks_ a human would object to—but it might err by failing to correctly predict what a human would approve of.\n\n### Strong predictors\n\n\nFor an AI with a very good understanding of humans, prediction errors would be rare, and so a human in the counterfactual loop may be better than the real thing (since they can think longer).\n\nThere are also advantages over approaches without counterfactual oversight:\n\n- The predicted response of the human can change as the environment changes (assuming that the predictor knows about the changes, and can predict how the human would respond to them).\n- The human overseer can handle all of the factors that we care about, which may otherwise be difficult to formalize.\n- Because the human only needs to be able to learn about a decision when prompted, they need not have extensive training in advance. So putting a predicted human in the loop may be a lot cheaper than spending engineer time making sure you have correctly defined a domain-specific decision criterion.\n\nWith a strong predictor, any failure of the system can be traced to either (1) a failure of the predictor, or (2) a failure that also would have occurred with human oversight.\n\n### Weak predictors\n\n\nFor an AI with a weak understanding of humans, this proposal separates _human judgment_ from _human preferences_. A system with a human in the counterfactual loop may make errors that would be caught by a human overseer, but it won’t fail by virtue of having different preferences than a human overseer.\n\nThat said, in some contexts a weak predictor might do much better than a human—especially when decisions are made quickly or under emotional pressure (in the counterfactual loop, the human can take a long time and consider the issue carefully). In some cases this could be tested directly: we can record the context surrounding a decision, and empirically evaluate whether an AI or human is better able to predict the results of an extended deliberative process.\n\nIf this test suggests that the AI is a better predictor of deliberative human judgment, then the objection to automated decision-making becomes substantially weaker.\n\n**A digression on weapons**. Autonomous weapons are not my primary concern in this post, but they are an example that would be hard to avoid. I think that a human in the counterfactual loop would be a legitimate response to the most common concerns with human rights violations stemming from autonomous weapons—assuming the system passed the test described in the last paragraph (which may already be possible in simple contexts). However, I’m afraid that these human rights issues obscure a more serious concern with autonomous weapons, namely that they may radically reduce the expense and risk of ending human lives.\n\n### Robustness\n\n\n\nAll approaches to oversight require the overseen system to “go along with it”: if a flexible autonomous system wanted to cut the human out of the loop, it would have many opportunities to do so. Any successful approach to oversight must either make this impossible or (more likely) ensure that the autonomous system has no motivation to cut the human out of the loop.\n\nHaving a human in the counterfactual loop doesn’t fundamentally change these dynamics. It introduces a new point of failure for the human’s involvement (the predictor which is responsible for anticipating the human’s response).
But given how many possible points of failure there already are (the communication channel, the human’s understanding of the situation, the part of the code that responds to the human’s judgment, etc.), this is a minor addition.\n\nUsing a counterfactual loop makes it more realistic for the human to provide fine-grained oversight, improving the probability that they can oversee the detailed plans made by the machine or the full range of actions that might cut them out of the loop.\n\nConclusion\n==========\n\n\n\nFor powerful AI systems, effective human oversight may be cheaper than it appears; we should keep this in mind when thinking about superintelligent AI. The idea of counterfactual human oversight may also be a useful building block more generally.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'paul_ai_control'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8255',
      pageId: 'human_counterfactual_loop',
      userId: 'JessicaChuan',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-03-04 00:16:13',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8254',
      pageId: 'human_counterfactual_loop',
      userId: 'JessicaChuan',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-03-04 00:15:36',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7942',
      pageId: 'human_counterfactual_loop',
      userId: 'JessicaChuan',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-02-26 23:37:13',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7939',
      pageId: 'human_counterfactual_loop',
      userId: 'JessicaChuan',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-02-26 23:30:04',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6045',
      pageId: 'human_counterfactual_loop',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-02-01 23:48:12',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6041',
      pageId: 'human_counterfactual_loop',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-02-01 23:13:14',
      auxPageId: 'paul_ai_control',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}