{
  localUrl: '../page/state_of_steering_problem.html',
  arbitalUrl: 'https://arbital.com/p/state_of_steering_problem',
  rawJsonUrl: '../raw/1v9.json',
  likeableId: '788',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'state_of_steering_problem',
  edit: '6',
  editSummary: '',
  prevEdit: '5',
  currentEdit: '6',
  wasPublished: 'true',
  type: 'wiki',
  title: 'The state of the steering problem',
  clickbait: '',
  textLength: '7658',
  alias: 'state_of_steering_problem',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-03-04 01:58:48',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-02-03 08:27:39',
  seeDomainId: '0',
  editDomainId: '705',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '19',
  text: '\n\nThe [steering problem](https://arbital.com/p/1tv?title=the-steering-problem) asks: given some powerful AI capabilities, how can we engineer a system that is both efficient and aligned with human interests?\n\nHere’s what I think we know about the steering problem right now:\n\n- **Given a question-answering system** that can answer any precisely posed question as well as a human, I think we have a [reasonable candidate solution](https://arbital.com/p/1v4?title=safe-ai-from-question-answering). But the proposed solution is hard to test, relies on the sensible behavior of humans in very exotic situations, and is hard to adapt to more realistic capabilities.\n- **Given a very powerful predictor**, I think we have a [promising but unsatisfying solution](https://arbital.com/p/1th?title=implementing-our-considered-judgment). The big problem is that it requires a _very_ powerful predictor. The big advantage is that the solution can be tested thoroughly, and the predictor can be tested and trained directly on “important” predictions (rather than relying on transfer learning).\n- **Given a reinforcement learner or supervised learner**, I think we have a [superficially plausible solution](https://arbital.com/p/1v7?title=steps-towards-safe-ai-from-online-learning). I have [little idea whether it would really work](https://arbital.com/p/1v7?title=steps-towards-safe-ai-from-online-learning). A solution to the steering problem under these assumptions would imply a solution under essentially any other reasonable assumptions.\n\nOverall, this problem seems much easier than I anticipated. I feel more like “It’s easy to see how things could go wrong” than “It’s hard to see how things could go right.”\n\nI think these approaches are mostly common-sensical, and are in some sense an evasion of the more exotic issues that will need to be resolved eventually. (“What do we really want?”, “What standards of reasoning should we ultimately accept?”, and so on.) But in fact I think we have a good chance of evading these exotic issues for now, postponing a resolution until they no longer seem so exotic.\n\nOpen problems\n=============\n\n\n**Better solutions for reinforcement learning.** My solution for reinforcement learning is definitely dubious. It would be great to find new approaches, find new problems with the existing approach, better patch the known problems, or find a more robust/reliable way to reason about possible solutions.\n\nI’m confident that there is room for significant improvement, though I’m very unsure how good a solution we can ultimately find.\n\n**More conservative assumptions**. All of these solutions make a number of “nice” assumptions. For example, I assume that the training error of our algorithms is either very small, or mostly incurred very early, or else that there is no small set of “make or break” decisions. But can we design algorithms that are robust to a modest number of adversarial failures at any point during their execution? (Note that adversarial failures can be correlated across different AIs, and that there are a number of reasons such correlations might arise.) Or can we articulate plausible guarantees for our algorithms that rule out problematic behaviors?\n\nAnother mild assumption is that a modest amount of human labor is available to oversee AIs (we aren’t trying to make an AI that can reconstruct civilization after an apocalypse). 
Removing this assumption is also an interesting problem — not so much because the scenario itself is particularly plausible, but because it could lead to much more robust solutions.\n\n**Move on**. I think it may be helpful to specifically ask “Supposing that we can solve the steering problem, how can things still go wrong?” For example, we still need to avoid undesirable internal behavior by pieces of a system optimized for instrumental purposes. And we wouldn’t be happy if our RL agent decided at a key moment that it really cared about self-actualization rather than getting a high reward. (It wouldn’t be completely unheard of: humans were selected for reproduction, but we often decide that we have better things to do.)\n\nAre these serious problems? Are there other lurking dangers? I don’t really know. These questions are more closely tied up with empirical issues and the particular techniques used to produce AI.\n\n**White box solutions.** _(See next section.)_ All of these solutions are “black box” approaches. It would be good to find a white box solution in any model, under any assumptions. That is, to implement a white box solution using _any_ well-defined capability, or even infinite computing power.\n\nTo formalize the “white-box” requirement, we can try to [implement the preferences of uncooperative agents](https://ordinaryideas.wordpress.com/2014/08/27/challenges-for-extrapolation/), or work under other pessimistic assumptions that make black box approaches clearly unworkable.\n\nAlong similar lines, could we design a system that could efficiently create a good world even if its operators were unaging simulations of young children? Or a dog? Are these questions meaningful? If we know that a solution doesn’t work or isn’t meaningful for sufficiently immature or underdeveloped humans, can we really suppose that we are on the right side of a magical threshold?\n\nBlack box vs. white box methods\n===============================\n\n*(This section’s dichotomy is closely related to, but different from, Wei Dai’s [here](http://lesswrong.com/lw/hzs/three_approaches_to_friendliness/).)*\n\nAll of these solutions use human judgment as a “black box”: we define what the right behavior is by making reference only to what humans would do or say under appropriate conditions. For example, we think of someone’s judgment as a “mistake” if they would change it after thinking about it enough and having it explained to them.\n\nA different approach is to treat human behavior as a “white box”: to reason about _why_ a human made a certain decision, and then to try to figure out what the human really wanted based on that understanding. For example, we might say that someone’s judgment is a “mistake” by looking at the causal process that produced the judgment, identifying some correspondence between that process and actual facts about the world, and noticing possible inconsistencies.\n\nWhite box approaches seem more intuitively promising. Inverse reinforcement learning aims to model human behavior as a goal-directed process perturbed by noise and error, and to use the extracted goals to guide an AI’s decisions. Eliezer describes an analogous proposal in _Creating Friendly AI_; I doubt he stands by the proposal, but I believe he does stand by his optimism about white box approaches.
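\n\nTo make the inverse reinforcement learning idea concrete, here is a minimal sketch under one deliberately simple set of assumptions: a human choosing among a handful of options, modeled as “Boltzmann-rational” (each option is chosen with probability proportional to exp(beta · reward)), with the reward vector recovered by maximum likelihood. The setup, the known rationality parameter beta, and names like `infer_rewards` are my own illustrative choices rather than anything from the proposals discussed here, and they assume away exactly the hard part: deciding how to model human error in the first place.\n\n
```python
# Toy inverse reinforcement learning sketch (illustrative only, not the text's proposal).
# Model: a "human" repeatedly picks one of K options, Boltzmann-rationally in an
# unknown reward vector r, i.e. P(pick option a) is proportional to exp(beta * r[a]).
# We recover r (up to an additive constant) by maximum-likelihood gradient ascent.
import numpy as np

def infer_rewards(choices, num_options, beta=1.0, steps=2000, lr=0.1):
    """Estimate a reward vector from observed choices, assuming beta is known."""
    r = np.zeros(num_options)
    counts = np.bincount(choices, minlength=num_options)
    n = len(choices)
    for _ in range(steps):
        probs = np.exp(beta * r)
        probs /= probs.sum()
        grad = beta * (counts - n * probs)  # gradient of the choice log-likelihood
        r += lr * grad / n
        r -= r.mean()                       # rewards only identified up to a constant
    return r

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_r = np.array([0.0, 1.0, 2.0])              # the human's "actual" values
    p = np.exp(true_r) / np.exp(true_r).sum()
    observed = rng.choice(3, size=500, p=p)         # noisy but goal-directed behavior
    print(infer_rewards(observed, num_options=3))   # roughly recovers true_r minus its mean
```
\n\nThe point of the sketch is only that a white box approach ends up with an explicit, relatively simple objective (here, the recovered reward vector) for the AI to pursue, rather than a reference back to what humans would do or say.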
\n\nBlack box approaches suffer from some clear disadvantages. For example, an AI using one might “know” that we are making a mistake, yet still not care. We can try to minimize the risk of error (as we usually do in life), but it would be nice to do so in a more principled way. White box approaches also have some practical advantages: they extract motives which are _simpler_ than the human they motivate, while black box approaches extract “motives” which may be much more complex.\n\nThat said, I don’t yet see how to make a white box solution work, even in principle. Even given a perfectly accurate model of a person, and an unlimited amount of time to think, I don’t know what kind of algorithm would be able to classify a particular utterance as an error. So for now I mostly consider this a big open question.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '2',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'paul_ai_control'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8275',
      pageId: 'state_of_steering_problem',
      userId: 'JessicaChuan',
      edit: '6',
      type: 'newEdit',
      createdAt: '2016-03-04 01:58:48',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7686',
      pageId: 'state_of_steering_problem',
      userId: 'JessicaChuan',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-02-23 01:13:13',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6740',
      pageId: 'state_of_steering_problem',
      userId: 'JessicaChuan',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-02-11 01:10:38',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6739',
      pageId: 'state_of_steering_problem',
      userId: 'JessicaChuan',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-02-11 01:10:23',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6243',
      pageId: 'state_of_steering_problem',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'deleteChild',
      createdAt: '2016-02-03 08:31:33',
      auxPageId: 'Easy_goal_inference_problem_still_hard',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6241',
      pageId: 'state_of_steering_problem',
      userId: 'JessicaChuan',
      edit: '2',
      type: 'newChild',
      createdAt: '2016-02-03 08:30:03',
      auxPageId: 'Easy_goal_inference_problem_still_hard',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6240',
      pageId: 'state_of_steering_problem',
      userId: 'JessicaChuan',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-02-03 08:29:34',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6239',
      pageId: 'state_of_steering_problem',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-02-03 08:27:39',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6238',
      pageId: 'state_of_steering_problem',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-02-03 08:21:41',
      auxPageId: 'paul_ai_control',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}