{
  localUrl: '../page/safe_ai_from_question_answering.html',
  arbitalUrl: 'https://arbital.com/p/safe_ai_from_question_answering',
  rawJsonUrl: '../raw/1v4.json',
  likeableId: '783',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'safe_ai_from_question_answering',
  edit: '5',
  editSummary: '',
  prevEdit: '4',
  currentEdit: '5',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Safe AI from question-answering',
  clickbait: '',
  textLength: '7377',
  alias: 'safe_ai_from_question_answering',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-03-04 01:34:38',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-02-03 07:59:57',
  seeDomainId: '0',
  editDomainId: '705',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '22',
  text: '\n_(Warning: minimal new content. Just a clearer framing.)_\n\nSuppose that I have a question-answering box. You give the box a question, and it gives you an answer. For simplicity, let’s focus on yes/no questions, and assume it just outputs yes or no. Can we use this box to implement an efficient autonomous agent that is aligned with human values?\n\nLet’s make the following assumptions:\n\n1. The box never gives an answer predictably worse than the answer that some reference human would give. That is, there is no distribution over questions Q where we expect the human to guess the right answer more often than the box.\n2. Whatever language the box uses, it’s _at least_ as expressive as mathematical notation.\n\nUnder assumption \[2], the worst possible box is a “math intuition module,” a box which takes as input a mathematical assertion and outputs a probability. We’ll assume that we are working with such a box, though the whole thing would just be easier if we had a box which could answer questions in e.g. natural language.\n\nA proposal\n==========\n\nWe’ll describe an AI which outputs binary actions. To make a decision, it asks the following question:\n\nIf the user considered my situation at **great** length, which decision would they think was best?\n\nIt then takes the most probable action.\n\nFormally (with links):\n\n- [Step 1](https://ordinaryideas.wordpress.com/2014/08/24/specifying-a-human-precisely-reprise/): We provide a precise definition of “the user” as an input-output mapping, by giving concrete examples and specifying how to learn the model from these examples. This specification can’t be used to efficiently find a model of the user, but it does give a formal definition.\n- [Step 2](https://ordinaryideas.wordpress.com/2014/08/27/specifying-enlightened-judgment-precisely-reprise/): We provide a precise definition of “considered my situation at **great** length” by defining a simple interactive environment in which the user can deliberate. The deliberation may be very, very long — it may involve growing smarter, simulating an entire civilization, developing new branches of science, examining each possible decision’s consequences in detail… The interactive environment is designed to facilitate such a prolonged deliberation.\n- Step 3: The user eventually outputs either 0 or 1. The choice may be made based on a deep understanding of what the AI is thinking and why, and what the effect of a particular strategy would be on the AI’s decision. This seems helpful for getting a good outcome, but it also complicates the analysis. For simplicity, we’ll assume that after deliberation the user represents the quality of each possible outcome by a utility in [0,1], and then answers randomly, with the difference between the two answers’ probabilities equal to the difference between the utilities that would result if the AI took one action or the other. (Note that this depends not only on the state of the world, but also on the behavior of the AI in the future.) A sketch of this rule appears just after this list.
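\n\nHere is a minimal sketch of this decision rule under the simplified step 3, written in Python for concreteness. The callable `box` (which maps a yes/no question to a probability), the utilities `u0` and `u1`, and the exact wording of the question are placeholders standing in for the math intuition module and the user’s judgment-after-deliberation; they are illustrations, not definitions from the linked posts.\n\n```\ndef user_answer_probabilities(u0, u1):\n    # Simplified step 3: the user assigns utilities u0, u1 in [0, 1] to the\n    # outcomes of the AI taking action 0 or 1, then answers randomly so that\n    # P(answer 1) - P(answer 0) = u1 - u0.\n    p1 = 0.5 * (1 + u1 - u0)  # from p0 + p1 = 1 and p1 - p0 = u1 - u0\n    return 1 - p1, p1\n\n\ndef decide(situation, box):\n    # The proposed agent: ask the box how likely the deliberating user is to\n    # answer 1, then take the more probable answer.\n    question = "If the user considered situation {} at great length, would they think action 1 was best?".format(situation)\n    p1 = box(question)  # the box returns a probability for a yes/no question\n    return 1 if p1 >= 0.5 else 0\n```\n\nAn agent built this way just calls `decide` once per binary choice; all of the difficulty lies in how well the box approximates the probabilities that the simplified step 3 assigns to the user’s answers.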
\n\nGiven the simple interpretation of step 3, our AI is maximizing the utility function defined by the user’s judgment-after-deliberation. But the user has a great deal of latitude here, and they could e.g. direct the AI to make arbitrary changes to its own code. So this procedure doesn’t commit the user to bounded utilities, or to essentially any non-trivial philosophical or normative positions.\n\nNote that any interesting behavior for this agent will be composed of a long sequence of binary decisions. But we can express any message in binary, so this is a weak assumption.\n\nThis procedure can easily be adapted to an AI that maintains state, by including the AI’s current state in the question (“What would the user want the AI to do when its state is given by X?”). We can use similar questions to find appropriate updates to the AI’s state based on its recent observations. The deliberation then needs to consider a weighted average over states of the world that correspond to a particular state for the AI. We’ll set aside this complication for now.\n\nA proof\n=======\n\n**Theorem**. Let A be the AI described above, and let B be another AI. Assume that the user follows the simplified strategy described in step 3 above. Suppose that running B results in a higher expected utility, according to the user’s judgment-after-deliberation, than running A. Then, by using B we can sample from a distribution over questions for which we expect B’s answer to be right with higher probability than A’s answer.\n\nSo in some sense, this design makes optimal use of the question-answering system we have available. We can’t get a better outcome without also building a better question-answerer — at least not if the deliberative process successfully captures the user’s values. So the quality of the proposal comes down to the quality of the deliberative process and the model of the user.\n\n**Proof.** To produce the distribution, we simply run B and choose a random question it is posed. It’s easy to evaluate the sum of the differences between P(B is right) and P(A is right) over this whole sequence: it’s a telescoping sum that adds up to U(taking B’s decisions) - U(taking A’s decisions). So if B receives a higher utility, B also has a higher probability of being right for a random question from this distribution.
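\n\nTo make the telescoping sum explicit, here is one way to write it out; the notation $q_t$, $a_t$, $b_t$, $V_t$ is introduced for illustration and is not part of the original argument. Let $q_t$ be the question posed at step $t$ of B’s run, let $a_t$ and $b_t$ be A’s and B’s answers to $q_t$, and let $V_t$ be the user’s expected utility if the AI copies B’s decisions for the first $t$ steps and behaves like A thereafter, so that $V_0$ is the utility of taking A’s decisions and $V_T$ is the utility of taking B’s decisions. Under the simplified step 3, the user answers $q_t$ as if the AI will behave like A after the current step (it is A’s question), and the difference in answer probabilities equals the difference in the corresponding utilities. Since taking $a_t$ at step $t$ and then following A is just following A from step $t$ onward,\n\n$$P(\text{B is right at } q_t) - P(\text{A is right at } q_t) = V_t - V_{t-1}.$$\n\nSumming over $t$ gives $V_T - V_0$, which is exactly U(taking B’s decisions) - U(taking A’s decisions); if this is positive, then B’s answer is more likely than A’s to be right on a question drawn uniformly from the run.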
\n\nA problem\n=========\n\nI discuss some possible problems in the posts linked under step 1 and step 2 above, and in [this post](https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/) describing an older version of the proposal. There seems to be one major problem (and many minor problems), with several facets:\n\n- The quality of the scheme rests on the quality of the user’s outputs upon deliberation. But we can never actually run this deliberative process, and so we can’t test whether our definition is reasonable.\n- During the deliberative process, the user’s experiences are completely unlike their everyday experiences. This may introduce some divergence between the actual user and the hypothetical user. For example, the hypothetical user may become convinced that they are in a simulation and then start behaving erratically.\n- Because we can’t ever run this deliberative process, we can’t ever give our AI training data for the kinds of problems that we want it to actually solve. So we are relying on a kind of transfer learning that may or may not occur in practice.\n- The last point raises a general concern with the question-answering system as a model of AI capabilities. It may be that this just isn’t the kind of AI that we are likely to get. For example, many approaches to AI are based heavily on finding and exploiting strategies that work well empirically.\n\nI don’t think these problems are deal-breakers, though I agree they are troubling. I think that if the system failed, it would probably fail innocuously (by simply not working). Moreover, I think it is more likely to succeed than to fail malignantly.\n\nBut taken together, these objections provide a motivation to look for solutions where we can train the AI on the same kind of problem that we ultimately want it to perform well on. We can capture this goal by trying to build a safe AI using capabilities like supervised learning or reinforcement learning.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'paul_ai_control'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8271',
      pageId: 'safe_ai_from_question_answering',
      userId: 'JessicaChuan',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-03-04 01:34:38',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7945',
      pageId: 'safe_ai_from_question_answering',
      userId: 'JessicaChuan',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-02-27 00:38:14',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7668',
      pageId: 'safe_ai_from_question_answering',
      userId: 'JessicaChuan',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-02-23 00:24:46',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6734',
      pageId: 'safe_ai_from_question_answering',
      userId: 'JessicaChuan',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-02-11 00:12:16',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6221',
      pageId: 'safe_ai_from_question_answering',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-02-03 07:59:57',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6220',
      pageId: 'safe_ai_from_question_answering',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-02-03 06:22:18',
      auxPageId: 'paul_ai_control',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}