{
  localUrl: '../page/concrete_approval_directed_agents.html',
  arbitalUrl: 'https://arbital.com/p/concrete_approval_directed_agents',
  rawJsonUrl: '../raw/1vw.json',
  likeableId: 'eliezer_fixes',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'concrete_approval_directed_agents',
  edit: '6',
  editSummary: '',
  prevEdit: '5',
  currentEdit: '6',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Concrete approval-directed agents',
  clickbait: '',
  textLength: '20223',
  alias: 'concrete_approval_directed_agents',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-02-27 02:14:47',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-02-03 23:26:02',
  seeDomainId: '0',
  editDomainId: '705',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '34',
  text: 'This post lays out my current concrete “[approval-directed agents](https://arbital.com/p/1t7)” proposal and outlines what I see as the most promising broad directions for further research. (I will describe more concrete research problems in a series of upcoming posts.)\n\nWe’ll [take as given a powerful algorithm for episodic reinforcement learning](https://arbital.com/p/1v6?title=problem-safe-ai-from-episodic-rl), and assume that we are interested in building a system which acts effectively in the world in a way consistent with its user’s values.\n\nI’m going to talk a lot about the “scalability” of AI control schemes. Ideally, as our AI systems become more capable they will also robustly become more helpful and better allow us to achieve our goals; I refer to any failure of that property as “failures of scalability.” In other words, scalability is the desirable property of continuing to predictably achieve the desired effect, with acceptable additional effort, as the capabilities of the underlying learning algorithms improve.\n\nCurrent proposal\n================\n\n### Summary\n\nEach time our system needs to act we elicit an action from our learner. Most of the time we simply take this action. With small probability we instead submit a description of the action to a human operator for review. The operator assigns that action a score in the interval [-1, 1]; in the process of assigning the score, the operator may make use of as many automated assistants as needed, each of which works according to the same principle.\n\nOur learner is trained to output actions that maximize the score assigned by the human operator, conditioned on the operator reviewing that action.\n\n### Detail\n\nWe’ll work with a pair of independent learning algorithms A, B.\n\nThese algorithms play a simple online learning game: in each round they produce a binary output, and receive a score in [-1, 1]. They are trained to choose individual outputs that will receive the highest score. We may elicit an output from the learning algorithms and not assign any score; in this case the episode is not added into the labelled training data (though it may be included for semi-supervised learning). The learners effectively condition their choices on the assumption that a score will be given.\n\nEach time our system needs to take an action, we perform the following procedure (sketched in code below). We’ll assume that actions are encoded as binary strings.\n\n1. Ask A for a binary output _a_∈{0, 1}*. With probability 99%: interpret _a_ as an action, take that action, and move on to the next action.\n2. With probability 1%: ask B for a binary output _b_∈{0, 1}. If _b_ = 0: interpret _a_ as an action and take that action, give B a payoff of 0, and move on to the next action.\n3. If _b_ = 1, we say that B “challenged” A’s proposed action. In this case we notify the human operator and ask her to score _a_ and _b_.\n4. The operator may instantiate any number of additional approval-directed agents (i.e. agents who act by iterating steps 1–6). These agents are modified to be more conservative: they terminate in step 1 with 0% probability, always moving on to step 2 and providing an opportunity for challenges.\n5. The operator gives A a payoff of ±1 based on whether _a_ seems like a good proposal, and similarly gives B a payoff of ±1 based on whether challenging was appropriate.\n6. We return to step 1 (without taking any action) and elicit a new action.
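\n\nTo make the procedure concrete, here is a minimal Python sketch of this loop. The learner objects A and B, the `world` and `operator` interfaces, and the method names (`propose`, `challenge`, `review`, `train`) are hypothetical stand-ins, not part of the proposal:\n\n
    import random\n
\n
    def take_action(world, A, B, operator, review_prob=0.01):\n
        """Run steps 1-6 until an action is actually taken in the world."""\n
        while True:\n
            state = world.describe_state()\n
            a = A.propose(state)                  # step 1: elicit a proposed action\n
            if random.random() >= review_prob:    # with probability 99%...\n
                return world.execute(a)           # ...take the action and move on\n
            b = B.challenge(state, a)             # step 2: ask B whether to challenge\n
            if b == 0:\n
                B.train(state, a, b, payoff=0.0)  # B declined; A's episode goes unscored\n
                return world.execute(a)\n
            # Steps 3-4: B challenged, so the operator reviews a and b, possibly with\n
            # the help of further (more conservative) approval-directed agents.\n
            payoff_a, payoff_b = operator.review(state, a, b)  # step 5: payoffs of +/-1\n
            A.train(state, a, payoff=payoff_a)\n
            B.train(state, a, b, payoff=payoff_b)\n
            # Step 6: no action was taken; loop back and elicit a new proposal.\n
\nThe more conservative agents that the operator instantiates in step 4 correspond to running the same loop with `review_prob` set to 1, so that every proposal gets an opportunity to be challenged.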
\n\nIn step 4, it is critical that the operator instantiate additional approval-directed agents and use them to identify and weigh up possible faults in A’s proposal. The operator may also consult other approval-directed agents prior to beginning this protocol to help improve the integrity of the setup or make adjustments (e.g. to address possible software faults that A might use to undermine the evaluation process).\n\n### State\n\nA and B should both be given some description of the state, describing the problem they currently face and what they “know” about the world. Maintaining this state is a subtle part of the proposal — we can’t optimize an internal representation of state using gradient descent without compromising the format.\n\nI will discuss this issue in more detail in a future post. For now, suppose that A and B take as input a “complete description” of the state (we may have to use a larger model class, such as RNNs rather than feedforward networks, in order to accommodate large and structured descriptions). One of the actions we can make available to A is to forget part of its history or write down additional information that it might find useful — these actions are evaluated in the same way as other possible actions.
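\n\nAs a rough illustration, here is one way such a state object might be represented, with memory management exposed as ordinary actions; the representation and names below are placeholders rather than part of the proposal:\n\n
    from dataclasses import dataclass, field\n
\n
    @dataclass\n
    class State:\n
        # A "complete description" of the episode so far, as given to A and B.\n
        observations: list = field(default_factory=list)  # raw history\n
        notes: list = field(default_factory=list)          # information A chose to write down\n
\n
        def describe(self):\n
            # Serialize for the learners (e.g. as the input sequence to an RNN).\n
            return {'observations': self.observations, 'notes': self.notes}\n
\n
    def apply_memory_action(state, action):\n
        # Forgetting and note-taking are just more actions, proposed by A and\n
        # scored by the operator exactly like any external action.\n
        kind, payload = action\n
        if kind == 'forget':\n
            del state.observations[:payload]  # drop the oldest `payload` observations\n
        elif kind == 'note':\n
            state.notes.append(payload)\n
        return state\n
\nAn object like this could sit behind the `world.describe_state()` call in the earlier sketch.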
\n\n### Concreteness\n\nThis proposal corresponds to some code we could _literally write today_. This is sufficiently unusual, amongst scalable approaches to AI control, that I want to emphasize it.\n\nIf you set out to write this code, the main problem you face is writing the learning systems A and B. This is an extremely common _kind_ of problem that can be attacked with very standard methods. But it is quite a difficult instance of that problem, and existing techniques won’t handle it very well.\n\nMaking things actually work will probably require both progress in AI and improvements to the scheme.\n\nDiscussion\n==========\n\n### Goal\n\nFor some task X, say we are “capable of automating X” if we can build a system which achieves X in some context that does not inherently reduce the difficulty of the task, e.g. which will perform X whenever the user requests it, or whenever doing X is the best way to earn money, and so on.\n\nI’d consider the approval-directed agents approach a success if, for every task X which we are capable of automating at a cost _c_, an approval-directed agent can achieve X at a cost not much more than _c_, and **in a manner consistent with the operator’s preferences**.\n\nFor example, if we could train an AI system that could design a rocket if doing so was the best way for it to maximize paperclips, then we ought to be able to use this system to design a rocket without side-effects or distortions that the operator would be unhappy with.\n\n### Why might this protocol succeed?\n\nIntuitively, the success of this proposal rests on a few hopes:\n\n1. **Explicability:** Our AI systems are nearly as good at explaining _why_ they want to do something as they are at choosing _what to do_. Producing these explanations may require more time than choosing what to do, but it should at least be possible. The explanations should be sufficiently rigorous that it is possible to distinguish “correct” vs. “incorrect” actions on the basis of their explanations.\n2. **Efficacy:** By getting help from additional approval-directed agents, the human operator can evaluate proposals as if she were as smart as those agents. In particular, the human can evaluate the given rationale for a proposed action and determine whether the action really does what the human wants.\n3. **Soundness:** By working with AI assistants (at run-time and in advance), the operator can ensure that AI systems cannot learn to undermine the evaluation procedure itself — learner A cannot learn to find actions which are ineffective but criticism-proof. For example, A can’t find actions such that merely looking at them jeopardizes the operator’s judgment or her software.\n\nI find these assumptions intuitively meaningful, but to be precise we need to be much more careful about what kind of feedback is available. For example, there is a trivial sense in which a human+AI system is “as smart as” the AI — we need something stronger.\n\n1. **Explicability** requires that our AI systems can learn to behave well using feedback only on the quality of their explanations.\n2. **Efficacy** requires that a human+AI system can learn to evaluate explanations _with no feedback_ — the operator can train her AI assistants however she wants, but ultimately all direction must come from the operator’s choices rather than being a response to an external source of feedback.\n3. **Soundness** requires that the evaluation system can be made robust using partial feedback: by simply running the evaluation procedure the human can get feedback about what kinds of attacks are feasible and can then correct those attacks. But this feedback is only available for non-catastrophic problems; for some problems, a small number of failures is already unacceptable.\n\nNote that if **direct feedback** is available for a task, then the explicability+efficacy requirement is essentially trivial: the operator can “evaluate” an AI’s behavior by simply allowing the AI to do the task and then providing it with direct feedback on its performance.\n\n“Explanations” are only challenging in domains where a demonstration is not itself an adequate explanation — for example where a human may not be able to tell whether the AI has succeeded at a task, or where a task may extend over a very long time period, or where a demonstration may compromise soundness by giving the AI the ability to interfere with the humans’ evaluations.\n\nTechnically soundness and explicability aren’t distinct requirements — the reason that we need our evaluation scheme to be secure is that, if it is not, then training our AI systems to produce good explanations won’t result in good behavior. Moreover, if our evaluation scheme isn’t sound, it’s not clear in what sense it constitutes a good evaluation. Nevertheless, it seems helpful to split this claim in two: first that we can learn to produce good explanations, and second that we can’t learn to get good payoffs by other means.\n\n### A warning on soundness\n\nIn order to address concerns about soundness, I previously included a simple procedure based on “watchdogs”: I recommended that whenever the operator consulted either A or B, she first run their output by a slightly-less-intelligent filtering algorithm C. I no longer believe that this procedure robustly improves soundness, so I have eliminated it for simplicity. This may serve as a bad omen of surprises to come.\n\nThe problem with this proposal is quite simple: even though the filtering algorithm C is less intelligent than algorithms A and B, algorithms A and B may implicitly communicate with C. This makes it problematic to draw any conclusions from C’s limitations. And at that point, we might as well just allow B (or a new approval-directed agent) to play the same role.
\n\nDirections for future work\n==========================\n\nI think there are two basic directions for further research: (1) actually making a scheme like this work using existing technology, and (2) understanding and addressing possible concerns with scalability.\n\nThese directions mirror the most salient concerns with the proposal, namely (1) that its requirements, and especially the requirement for human understanding, significantly harm performance, and (2) that its reliance on the behavior of the (manipulable and fallible) humans may introduce serious scaling difficulties as the underlying learning systems improve.\n\n### Making it work\n\nWe can implement the agent above, but it’s not going to be pretty. In particular, it won’t give state-of-the-art performance in most domains.\n\nActually getting this to work would require a lot of effort, and will no doubt involve radically changing the formulation above (probably resulting in something with no commonalities, à la the ship of Theseus). I think that this would tell us an awful lot about the practical prospects for this proposal in the long term, and in many cases would reveal more general insight into the nature of the AI control problem.\n\nIn the rest of this section I’ll outline a few salient problems in this direction. I’ll talk about these problems at more length in a future post, probably quite soon. For the most part, these projects address concerns with **explicability** and **efficacy**.\n\n**Explanation:** Many AI systems cannot “explain” their behavior to a human in even the weakest sense. For example, imagine an RL agent trained to fly a drone; it seems much easier to train based on end-to-end performance than to build a system which can actually explain to a human operator why a particular motor action is an appropriate response to a particular situation.\n\nIn order for our proposal to work, we must be able to train systems to produce this kind of explanation rather than directly training them to produce the desired outcome. This is a tall order.\n\n**Bootstrapping:** Scaling this kind of “explanation” (perhaps even to current AI capabilities) will require human operators to collaborate with one iteration of a learning system in order to understand the behavior of the next. This bootstrapping process has not been studied at all, as far as I am aware.\n\n**Imitation in unfavorable domains:** In order to address concerns with scalability (see the next section), we may be interested in training systems to mimic human behavior rather than maximize approval. But this introduces a new class of practical difficulties that haven’t received much attention — existing research practice uses imitation in some settings where it happens to be an effective approach, but has not explored applying imitation in unfavorable settings (e.g. those where AI systems can surpass human performance, or can only accomplish a task in a very inhuman way). If we wanted to apply mimicry more broadly for safety reasons, we would need to overcome the obvious difficulties in these unfavorable settings.
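\n\nFor concreteness, the simplest instance of “training a system to mimic human behavior” is plain supervised learning on logged human decisions (behavioral cloning). The sketch below is generic and only meant to fix ideas; the dataset, model, and function names are placeholders:\n\n
    from sklearn.neural_network import MLPClassifier\n
\n
    def train_imitator(states, human_actions):\n
        # `states`: array of feature vectors; `human_actions`: the choices a human made.\n
        model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)\n
        model.fit(states, human_actions)\n
        return model\n
\n
    def act(model, state):\n
        # At run time the imitator has no objective of its own: it simply predicts\n
        # what the human would have done in the current state.\n
        return model.predict([state])[0]\n
\nThis approach only makes sense where a human could have produced the demonstrations in the first place, which is precisely what fails in the unfavorable settings described above.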
\n\n### Addressing scalability concerns\n\nIn addition to (considerable) practical difficulties, it’s worth thinking about the scalability of this proposal to very advanced AI capabilities. Would we encounter new problems that are exacerbated as capabilities increase?\n\nThis requires thinking about theoretical questions, though there may also be complementary empirical work. This section lays out some of those directions, which I’ll examine at more length in a future post. For the most part, these represent concerns with **soundness**.\n\n**Manipulation resistance:** This proposal involves training AI systems to produce explanations that the user will approve of. As our AI systems become increasingly sophisticated, they may find unintended ways to cause the user to express approval.\n\nThe main check against this concern is that the user can consult with a community of AI systems; before interacting with the system A, the community of AI users has already interacted with many very-slightly-less-powerful AI systems, and during interaction with A each user can consult other comparably-powerful AI systems.\n\nWe might divide these concerns into two classes: those that are _recoverable_, e.g. an attempt by A to deceive the user (which might be addressed by an appropriate rebuttal by B or by another approval-directed agent), and those that are _unrecoverable_, e.g. a threat to the user or an attack that compromises the user’s hardware, causing them to register approval without consulting agent B.\n\nIt’s unclear how robust our proposal is to these kinds of attacks, once the AI systems are significantly more intelligent than their users. This is a question that can be considered in much more detail. It could also be addressed by using mimicry instead of approval-direction (see above) or by having an appropriate internal organization rather than using reinforcement learners as black boxes (see below).\n\n**Simulations and inductive definitions:** All systems along these lines define the “correct” behavior in terms of what the human user “would have done.” Intuitively this may give safe behavior, but there is no formal argument, and in some cases we may get surprising results.\n\nOne surprising possibility is that our algorithms reason about the prospect that they will be instantiated in a simulation, where the behavior of the “user” is anticipated to be very different from the behavior of the actual user (and may in fact be opposed to the user’s intentions).\n\nI discuss this issue [here](https://arbital.com/pages/8209403624108606962); the most promising resolution is probably to apply pressure to the internal organization of our agents rather than treating the RL agents as a black box (see below). That post also discusses other possible solutions, and it’s far from clear that this problem will materialize in actual systems.
\n\n**Internal organization:** In order to avoid the previous two problems, we might want to build agents which think about problems in a way we would approve of, rather than trying to maximize the extent to which we would approve of their actions.\n\nFor example, if we had insight into the internal behavior of an agent then we might disapprove of a cognitive process which was searching for zero-day exploits which would enable it to gain maximal approval; so if internal cognitive processes were organized so as to maximize our approval _of those processes_, we would not expect our agent to perform that kind of search.\n\nThis idea suggests both theoretical and practical research directions.\n\nPractically, we can engage with the design of approval-directed systems built from existing technology and try to understand whether the internal behavior can be organized in a way that is itself approval-directed. This seems to ultimately be a question about the particular algorithms that prove useful.\n\nFor example, two common algorithmic techniques are gradient ascent and trial-and-error, which effectively optimize internal weights by backward-chaining from the desired output behavior. In some sense this procedure inherently optimizes external behaviors. Is there a way to change this process so that user feedback can adjust internal processes? Can existing training procedures actually produce the kinds of manipulative or surprising behaviors we are concerned with, or do they only arise from explicit decision-theoretic reasoning? Can we build systems that are broken up into modules that e.g. separate out such decision-theoretic reasoning so that it can be influenced by additional requirements?
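\n\nAs a toy illustration of the backward-chaining point (the architecture and scoring function below are stand-ins, not a claim about realistic systems), note that in an ordinary gradient-style training loop the objective mentions only the external output, so internal computation is shaped only through its effect on that output:\n\n
    import numpy as np\n
\n
    rng = np.random.default_rng(0)\n
    W1 = 0.1 * rng.standard_normal((8, 4))\n
    W2 = 0.1 * rng.standard_normal((4, 2))\n
\n
    def run(x):\n
        hidden = np.tanh(x @ W1)    # internal "cognition": never scored directly\n
        return hidden, hidden @ W2  # only the second value (the action) is ever scored\n
\n
    def operator_score(action):\n
        # Stand-in for approval of the external behavior alone.\n
        return -np.sum((action - 1.0) ** 2)\n
\n
    def ascend(params, x, lr=0.05, eps=1e-4):\n
        # Numerical gradient ascent on the operator's score.  Every term backward-chains\n
        # from the output; nothing rewards or penalizes the hidden computation directly.\n
        for W in params:\n
            grad = np.zeros_like(W)\n
            for idx in np.ndindex(*W.shape):\n
                old = W[idx]\n
                W[idx] = old + eps\n
                up = operator_score(run(x)[1])\n
                W[idx] = old - eps\n
                down = operator_score(run(x)[1])\n
                W[idx] = old\n
                grad[idx] = (up - down) / (2 * eps)\n
            W += lr * grad\n
\n
    for _ in range(200):\n
        ascend([W1, W2], rng.standard_normal(8))\n
\nA hypothetical “approval of internal processes” variant would add a term that scores the intermediate computation directly; whether anything like that can be made meaningful for realistic systems is exactly the open question above.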
\n\nTheoretically, it’s not clear how we would analyze the behavior of a system made from “approval-directed parts.” How do we formulate the intended properties of the whole, and how do those properties relate to the properties of the parts? Filling in this theoretical gap would do quite a lot to bridge the gap between the approval-directed approach (and I suspect many more “common-sense” views) and the [MIRI perspective](http://intelligence.org/files/TechnicalAgenda.pdf). Unfortunately, it’s not clear whether we should expect to find any clean theoretical picture, even if this will ultimately be a satisfactory solution.\n\nConclusion\n==========\n\nThis framework is not yet well understood, and not many people have spent time thinking about it. But to me it looks more tractable than other direct attacks on scalable AI control. For example, we can build approval-directed systems today, and many key long-term challenges may actually materialize as practical challenges with getting those actually-existing systems to work well. I think that is a big advantage, and almost a prerequisite for studying the issue within the AI field as it currently exists. (This will probably be my focus in the immediate future.)\n\nWhether or not we are interested in approval-direction per se, the theoretical difficulties discussed in the “scalability concerns” section seem to apply to most practical work on AI alignment (as well as to common intuitions about why AI will be aligned-by-default). I suspect this family of questions should be a priority for researchers interested in the theoretical question of scalability — mainstream AI researchers interested in alignment are unlikely to be persuaded to pursue more abstract approaches unless they are convinced that these kinds of scalability concerns are really deal-breakers. Approval-directed agents provide a simple but concrete context in which to think about these concerns.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7952',
      pageId: 'concrete_approval_directed_agents',
      userId: 'JessicaChuan',
      edit: '6',
      type: 'newEdit',
      createdAt: '2016-02-27 02:14:47',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7777',
      pageId: 'concrete_approval_directed_agents',
      userId: 'JessicaChuan',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-02-25 01:26:35',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6776',
      pageId: 'concrete_approval_directed_agents',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newAlias',
      createdAt: '2016-02-11 03:10:28',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6777',
      pageId: 'concrete_approval_directed_agents',
      userId: 'JessicaChuan',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-02-11 03:10:28',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6358',
      pageId: 'concrete_approval_directed_agents',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'deleteParent',
      createdAt: '2016-02-03 23:42:49',
      auxPageId: 'imitation_justification',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6348',
      pageId: 'concrete_approval_directed_agents',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'deleteChild',
      createdAt: '2016-02-03 23:38:06',
      auxPageId: 'elaborations_apprenticeship_learning',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6346',
      pageId: 'concrete_approval_directed_agents',
      userId: 'JessicaChuan',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-02-03 23:33:47',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6345',
      pageId: 'concrete_approval_directed_agents',
      userId: 'JessicaChuan',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-02-03 23:30:27',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6343',
      pageId: 'concrete_approval_directed_agents',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newChild',
      createdAt: '2016-02-03 23:26:58',
      auxPageId: 'elaborations_apprenticeship_learning',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6342',
      pageId: 'concrete_approval_directed_agents',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-02-03 23:26:02',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6341',
      pageId: 'concrete_approval_directed_agents',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-02-03 23:20:26',
      auxPageId: 'imitation_justification',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'false',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}