{
  localUrl: '../page/handingling_adversarial_errors.html',
  arbitalUrl: 'https://arbital.com/p/handingling_adversarial_errors',
  rawJsonUrl: '../raw/1vg.json',
  likeableId: '793',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'handingling_adversarial_errors',
  edit: '5',
  editSummary: '',
  prevEdit: '4',
  currentEdit: '5',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Handling adversarial errors',
  clickbait: '',
  textLength: '15293',
  alias: 'handingling_adversarial_errors',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-03-04 02:18:18',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-02-03 08:47:26',
  seeDomainId: '0',
  editDomainId: '705',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '28',
  text: '\nEven a very powerful learning system can’t do everything perfectly at first — it requires time to learn. Any proposal for AI control needs to cope with mistakes while learning. In this post I’ll describe why this problem is non-trivial, and in the next post I’ll offer some more constructive answers.\n\nRegret\n======\n\nIt’s convenient to imagine a “training” period during which a machine learning system learns, and a “test” period during which it is guaranteed to perform well. But unless (and even if) you make a heroic effort, there will be features of the domain that just don’t appear in your training data. And so no matter how good your learner is, and no matter how thorough your training is, you can never realistically say that “now the model is trained — it’s all clear sailing from here.”\n\nThat said, in many contexts we _can_ prove [guarantees](https://arbital.com/p/1vc?title=online-guarantees-and-ai-control) like “the learner won’t make too many mistakes in total,” for a sufficiently careful definition of “mistake.” That is: the learner may take a long time to learn the whole domain, if some aspects of it just don’t appear in the data for a long time. But in that case, the learner’s ignorance can’t do any damage.\n\nThe possibility of errors at test time presents a challenge for AI control. It’s hard to say much about when these errors occur, and so if our systems make use of supervised learning, they may fail at inopportune times. If we can’t cope with adversarial errors, then:\n\n- In order to prove that our systems will work, we need to make some additional assumptions about their behavior. But we don’t have any good candidates, and I think there are fundamental obstacles to finding any.\n- There may actually be serious problems that we haven’t anticipated. For example, failure to cope with “adversarial” errors can be an indication that your training data may not be rich enough. What’s more, at the end of this post, I’ll describe two reasons that these errors could actually occur at the worst possible time.\n\nI see three possible responses:\n\n1. Find an alternative to the supervised learning paradigm.\n2. Develop a much deeper understanding of the supervised learning paradigm.\n3. Make systems that are robust to a small number of adversarial failures.\n\nI think that \[1] and \[2] are problematic. The supervised learning paradigm seems to be extremely effective in practice (note that this problem also afflicts conventional approaches to semi-supervised learning), so giving up on it seems to be a massive handicap. Moreover, we don’t seem to be anywhere close to the kind of understanding that would let us control the errors that learners make while learning, and I’m not sure if it’s even possible to do so (that is, I’m not sure if the statements we’d want to prove are even true).\n\nIf possible, I think that \[3] would be by far the least demanding and most useful option.\n\nWhat is “one” error?\n====================\n\nIf a system makes only “one” error, that _doesn’t_ mean that its output differs from the “right” output on only one occasion.\n\nFrom the learner’s perspective, an output is only an “error” if it leads to suboptimal payoff. So all we can say is that the learner won’t receive a suboptimal payoff too often before it gets the picture. This could involve any number of “bad” decisions, if those bad decisions don’t lead to bad payoffs.
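\n\nAs a toy illustration of the kind of mistake bound in question (a minimal sketch with assumed details, not part of any proposal discussed here), consider the classic “halving” learner over a small class of threshold rules: it predicts by majority vote over every hypothesis still consistent with the feedback it has received, so each payoff-relevant mistake eliminates at least half of the remaining hypotheses, and the total number of such mistakes is at most log2 of the class size. A copy of the same learner that is frozen after a narrow training phase has no such bound once the inputs shift.\n\n```python\nimport random\n\n# A toy setup: threshold rules h_t(x) = 1 iff x >= t for t in 0..100, and a\n# true rule at t = 70 that the narrow training inputs (all below 50) cannot\n# pin down.\nrandom.seed(0)\nTRUE_T = 70\nlabel = lambda x: int(x >= TRUE_T)\n\ndef majority(version_space, x):\n    votes = sum(int(x >= t) for t in version_space)\n    return int(2 * votes >= len(version_space))\n\nstream = [random.randrange(0, 50) for _ in range(500)]     # narrow training phase\nstream += [random.randrange(50, 100) for _ in range(500)]  # shifted later phase\n\n# Online halving learner: always receives feedback, so mistakes <= log2(101) < 7.\nvs, mistakes = set(range(101)), 0\nfor x in stream:\n    y = label(x)\n    mistakes += int(majority(vs, x) != y)\n    vs = {t for t in vs if int(x >= t) == y}   # keep only consistent hypotheses\nprint("online learner, total mistakes:", mistakes)\n\n# Frozen learner: trained on the first phase, then released without feedback.\nvs = set(range(101))\nfor x in stream[:500]:\n    vs = {t for t in vs if int(x >= t) == label(x)}\nfrozen_t = min(vs)   # consistent with everything seen in training, wrong later\nerrors = sum(int(x >= frozen_t) != label(x) for x in stream[500:])\nprint("frozen learner, errors after release:", errors)\n```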
\n\nFor example, suppose that we train a system for a month, and then release a static version of it into the world. We can make no guarantees at all about the behavior of the resulting system in the real world (though we might be able to make such guarantees if we can justify strong assumptions about the training data). If there is some difference between the training data and the test data, this learner may just keep messing things up, over and over again. We may not have trained the function we expect to have trained; the training data may fail to pin down some key ambiguity that we didn’t consider.\n\nIn order to prove a rigorous bound on the number of mistakes, we need to actually provide the learner with feedback telling it when it made a mistake, and the learner needs to continue to learn from that feedback.\n\n### Counterfactual oversight\n\nIf we use [counterfactual oversight](https://arbital.com/p/1tj?title=human-in-counterfactual-loop), and the learner receives feedback on 1% of their decisions chosen at random, then “one” mistake corresponds to 99 bad decisions (in expectation) followed by one last bad decision that finally receives negative feedback. An upper bound of “ten” mistakes corresponds to 1000 bad decisions, and so on.\n\nThis shouldn’t be too surprising — if we give the learner training data very rarely, it takes it a long time to learn.\n\nThis consideration seems to be the biggest reason to adopt a significant probability of oversight. But thinking through this issue suggests more effective approaches than purely stochastic oversight. I’ll discuss this in a future post.\n\nAn illustration\n===============\n\nConsider my [recent proposal](https://arbital.com/p/1v7?title=steps-towards-safe-ai-from-online-learning) for safely applying supervised learning. This proposal is completely compromised by adversarial failures, and hence unsatisfactory.\n\nFor a simple case, suppose that I ask such an AI “Is the number of primes between 1 and 8000 even or odd?”\n\nTo evaluate its answer, I ask about the number of primes between 1 and 4000, and between 4001 and 8000. I check the answers for consistency, and give the original answer a good reward iff the answers are consistent.\n\nThis process then continues recursively: at each step we choose one of the two questions from the preceding step, and then we compute its rating in the same way. Finally, in the last step the learner makes a claim of the form “X is prime,” and we check this claim by hand, assigning it a high score iff it is true.\n\nIf our learner never makes any errors, then it’s easy to verify by induction that every question must be answered correctly.\n\nBut suppose that our learner is only guaranteed to make at most 1 error. Then our final result can be completely wrong. In fact, the situation is even more dire.\n\nFor example, suppose that the learner is wrong about whether 1 is prime. Then the learner is guaranteed to make at most one mistake, but our answer is guaranteed to be wrong. Moreover, with probability 7999/8000, the learner won’t realize the error of their ways. And if we do the same thing again, the learner will be wrong again. Even if the learner is split 50–50 about whether 1 is prime, they will make on average 8000 errors before they figure it out. But it seems like they should only have to make 1 error. That’s a big gap (and it would be easy for it to be even bigger).
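\n\nHere is a minimal sketch of this illustration (toy code with assumed details, not the actual protocol): a learner whose single systematic error is treating 1 as prime gives internally consistent answers, so the consistency checks never fire, and the random walk down the tree only reaches the one leaf that would expose the error with probability roughly 1 in 8000.\n\n```python\nimport random\n\nN = 8000\nsieve = [False, False] + [True] * (N - 1)   # ground-truth primality table\nfor p in range(2, int(N ** 0.5) + 1):\n    if sieve[p]:\n        sieve[p * p :: p] = [False] * len(range(p * p, N + 1, p))\n\n# Prefix counts under the learner, which also (wrongly) counts 1 as prime.\nmiscount = [0] * (N + 1)\nfor n in range(1, N + 1):\n    miscount[n] = miscount[n - 1] + (1 if (sieve[n] or n == 1) else 0)\n\ndef learner_parity(lo, hi):\n    return (miscount[hi] - miscount[lo - 1]) % 2\n\ndef spot_check(lo, hi):\n    # Walk one random branch of the recursion; return True iff nothing looks wrong.\n    while lo < hi:\n        mid = (lo + hi) // 2\n        left, right = learner_parity(lo, mid), learner_parity(mid + 1, hi)\n        if (left + right) % 2 != learner_parity(lo, hi):\n            return False   # never fires: the learner is internally consistent\n        lo, hi = (lo, mid) if random.random() < 0.5 else (mid + 1, hi)\n    return learner_parity(lo, hi) == int(sieve[lo])   # hand-check the final leaf\n\nprint("learner says parity", learner_parity(1, N), "- true parity is", sum(sieve) % 2)\ncaught = sum(1 for _ in range(100000) if not spot_check(1, N))\nprint("spot checks that caught the error:", caught, "out of 100000")\n```\n\n(With the binary splitting used in this sketch, the bad leaf is reached with probability about 1/8192 rather than exactly 1/8000, but the qualitative picture is the same.)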
\n\nInterpretation\n==============\n\nI think this isn’t just a formal artifact — the fact that our system breaks down under adversarial errors is a real sign that it can’t work. There may be other algorithms that work in practice but fail under adversarial errors. But I think that “failure under adversarial errors” is generally a sign that we should either try to find some assumptions that rule out the particular failure that causes trouble, or else be skeptical of the system.\n\nWhat’s going on in the context of my proposal?\n\nWe are generating training data by walking down a random tree. But all of the “action” may happen in a very small part of that tree, and so the learner may never get feedback about the really important issues.\n\nTo overcome this problem, we need some system for directing the learner’s attention to the data points that are most likely to be surprising.\n\nProving that a system works, even under “adversarial” failures, guarantees that we’ve addressed this issue. It also guarantees that we’ve addressed any similar issues that we might not have thought of yet.\n\nMalignant failure modes\n=======================\n\n_(Warning: the rest of this post is very weird.)_\n\nVulnerability to adversarial errors is a troubling sign that _suggests_ a proposal may have flaws that we just haven’t identified.\n\nBut in some contexts, errors may actually _be_ adversarial, in which case vulnerability to adversarial errors is itself a serious problem. I’ll illustrate two cases. Both of these seem somewhat outlandish. I don’t know if this concern is fundamentally limited to outlandish scenarios, and I don’t know if anything this outlandish could actually happen.\n\n### Evolution\n\nSuppose that human reproduction worked differently: humans are surrounded by challenges (like big game to hunt), and every time someone solves one, they get to have one offspring. In this environment, it seems like humans may well have evolved the same kinds of complex psychology and culture that they have in the real world, with a love of solving challenges that is similar-in-kind to humans’ love of the various behaviors that lead to reproductive success.\n\nImagine that this environment was engineered by a race of aliens, who are breeding humans to be useful for solving similar challenges that appear in the real world. But it turns out that humans care about all sorts of things other than solving these challenges. 
So if these humans learn that they are being used, and learn that there is some real-world significance to the challenges they are solving, it wouldn’t be too surprising if they decided to choose a suboptimal answer on a challenge in order to have a desired effect on the outside world (for example, they might answer a challenge in a way that caused them to escape from their artificial environment and gain access to the outside world).\n\nThis may seem somewhat outlandish, but we can imagine similar results from very sophisticated versions of contemporary supervised learning (or even unsophisticated versions, scaled up to a massive enough amount of hardware that they could produce human-level intelligence).\n\nMany supervised learning systems work by considering a large class of possible models, and then using the models from that class that have worked well in the past.\n\nUsually this is not done using an explicit evolutionary dynamic, but in any case it shares many of the essential features of evolution: we explore a number of models, and then “keep” the one or ones that seem to perform well in some training environment.\n\nOne way a model can “win” in this game is by being the kind of model that has the desired properties — that will continue to do well in any environment similar to the training environment. But another way that it can win is by having a “survival instinct,” and recognizing that performance in the training environment is the best way to survive.\n\nSuch a model will continue to perform well as long as its survival is contingent on doing well in the training environment. But when selection pressure is removed, or when there are other paths to survival, it may suddenly start behaving differently. If so, its behavior may be strategically optimized for its own long-term survival, rather than for continuing to perform well in the testing environment.\n\n### Simulations\n\nBayesian reasoning can also be used to solve many supervised learning problems. The hope is that the “real” hypothesis is assigned some reasonable probability by the Bayesian’s prior. Every time the Bayesian makes a prediction worse than the “real” hypothesis would have made, the relative probability of the real hypothesis grows. And so the total number of “mistakes” is bounded.\n\nBut there can be “skeptical” hypotheses, which are very different from the real one yet make identical predictions in almost all cases. Because almost no evidence bears on these hypotheses, if they start out almost as likely as the real hypothesis, then they will remain almost as likely indefinitely.\n\nOne particularly plausible skeptical hypothesis is the simulation hypothesis: that what we perceive as reality is a computer program being run on a computer. There is a reasonable [philosophical argument](http://www.simulation-argument.com/) in favor of the simulation hypothesis, and a small but significant fraction of people think that it is plausible or probable.\n\nUnder the simulation hypothesis, the simulated universe looks just like a simple lawful universe — until the simulation stops, or the simulators decide to change its character. This could lead to [very strange behavior](https://ordinaryideas.wordpress.com/2012/05/24/solomonoff-induction-and-simulations/) by rational agents who expect such a change to happen soon.
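\n\nTo see the updating dynamic that limits how long this kind of strange behavior can persist (the next section relies on it), here is a toy calculation with made-up numbers: even a 90% prior on an imminent shutdown gets driven down after a handful of surprising survivals, because each survival is evidence against the shutdown hypothesis.\n\n```python\n# Toy numbers, assumed for illustration: prior 0.9 that the simulation is about\n# to be shut down, and (under that hypothesis) a 50% chance of shutdown each\n# round. Surviving a round has likelihood 0.5 under the shutdown hypothesis and\n# 1.0 under the ordinary-world hypothesis, so each survival is a factor-2 update.\np_shutdown, p_ordinary = 0.9, 0.1\nfor rnd in range(1, 11):\n    p_shutdown *= 0.5       # likelihood of having survived this round\n    p_ordinary *= 1.0\n    total = p_shutdown + p_ordinary\n    p_shutdown, p_ordinary = p_shutdown / total, p_ordinary / total\n    print("after round", rnd, "P(imminent shutdown) =", round(p_shutdown, 3))\n```\n\nThe number of rounds of strange behavior is roughly logarithmic in the prior odds against the ordinary world, which is why a non-dogmatic prior is enough for the argument below.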
\n\n### Fixing the problem\n\nIf we apply either of the above AIs to solve a problem we care about, we may get surprising results at an inopportune time. But if our application is robust to adversarial errors, even these pessimistic scenarios can’t cause trouble.\n\nSuppose that an AI decides to behave strangely, because it expects that the simulation is going to be shut down in a few minutes anyway. When the simulation isn’t shut down, the AI will learn a lesson, and update against the simulation hypothesis. If the AI started out by assigning a very high probability to the simulation hypothesis, then this could happen a few times. But as long as “I’m not in a simulation” is accorded some reasonable prior probability (as it will be by any non-dogmatic prior), this can only happen so many times before the AI will get the picture.\n\nSimilarly, suppose that we’ve run an exhaustive search to find an AI that seems to perform well, but is actually going to behave very strangely when the stakes get high. The AI can do this, but by doing so it decreases its own fitness. As long as our selection process was also _capable_ of identifying an AI that actually just plays the game it was trained to play, and as long as we keep applying selection power “in the wild,” then eventually the intended behavior will win out. Every time that another contender decides to pull a punch, the honest AI will pull a bit further ahead. (Of course, this can fail in any number of ways — for example, by getting stuck in a local optimum.)\n\nIn any case, a wacky situation can lead to a bit of wacky behavior, but there is a bound on how much of it there can be. So if our schemes are resistant to a bit of wacky behavior, then we can ride it out and wait for things to settle down to normality.\n\nWe have bigger problems if our AI is very confident that it is in a simulation, or if our evolution procedure was unable to find any agent that will reliably perform well (rather than behaving strangely after a context change). But these extreme scenarios correspond to actual **failures of the learning algorithms**, reflected in violations of their purported guarantees. So we can hope to rule this out by designing better learning algorithms. Designing good learning algorithms may end up being the hardest part of the AI control problem, but for now I’m happy to set it aside — I would be satisfied if we could reduce AI control to the problem of building learning systems that satisfy intuitive formal guarantees.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-19 02:21:48',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [
    'handling_error_with_arguments'
  ],
  parentIds: [
    'paul_ai_control'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8280',
      pageId: 'handingling_adversarial_errors',
      userId: 'JessicaChuan',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-03-04 02:18:18',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7935',
      pageId: 'handingling_adversarial_errors',
      userId: 'JessicaChuan',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-02-26 22:56:16',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7702',
      pageId: 'handingling_adversarial_errors',
      userId: 'JessicaChuan',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-02-23 02:09:01',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6744',
      pageId: 'handingling_adversarial_errors',
      userId: 'JessicaChuan',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-02-11 01:22:46',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6268',
      pageId: 'handingling_adversarial_errors',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newChild',
      createdAt: '2016-02-03 08:50:51',
      auxPageId: 'handling_error_with_arguments',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6267',
      pageId: 'handingling_adversarial_errors',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newParent',
      createdAt: '2016-02-03 08:49:32',
      auxPageId: 'paul_ai_control',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6265',
      pageId: 'handingling_adversarial_errors',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'deleteParent',
      createdAt: '2016-02-03 08:49:26',
      auxPageId: 'technical_socail_approach_ai_safety',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6263',
      pageId: 'handingling_adversarial_errors',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-02-03 08:47:26',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6262',
      pageId: 'handingling_adversarial_errors',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-02-03 08:43:53',
      auxPageId: 'technical_socail_approach_ai_safety',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'true',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}