{
  localUrl: '../page/advanced_nonagent.html',
  arbitalUrl: 'https://arbital.com/p/advanced_nonagent',
  rawJsonUrl: '../raw/42h.json',
  likeableId: '0',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'advanced_nonagent',
  edit: '5',
  editSummary: '',
  prevEdit: '4',
  currentEdit: '5',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Advanced nonagent',
  clickbait: 'Hypothetically, cognitively powerful programs that don't follow the loop of "observe, learn, model the consequences, act, observe results" that a standard "agent" would.',
  textLength: '7739',
  alias: 'advanced_nonagent',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-06-08 01:36:11',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2016-06-08 01:31:43',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '1',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '62',
  text: 'A standard agent:\n\n 1. Observes reality\n 2. Uses its observations to build a model of reality\n 3. Uses its model to forecast the effects of possible actions or policies\n 4. Chooses among policies on the basis of its [109 utility function] over the consequences of those policies\n 5. Carries out the chosen policy\n\n(...and then observes the actual results of its actions, and updates its model, and considers new policies, etcetera.)\n\nIt's conceivable that a cognitively powerful program could carry out some, but not all, of these activities.  We could call such a program an "advanced pseudoagent" or "advanced nonagent".\n\n# Example: Planning Oracle\n\nImagine that we have an [6x Oracle] agent which outputs a plan $\pi_0$ meant to maximize lives saved or [ eudaimonia] etc., *assuming* that the human operators decide to carry out the plan.  By hypothesis, the agent does not assess the probability that the plan will be carried out, or try to maximize that probability.\n\nWe could look at this as modifying step 4 of the loop: rather than selecting the output whose expected consequences optimize its utility function, this pseudoagent selects the output that optimizes utility *assuming* some other event occurs (the humans deciding to carry out the plan).\n\nWe could also look at the whole Oracle schema as interrupting step 5 of the loop.  If the Oracle works as [6h intended], its purpose is not to immediately output optimized actions into the world; rather, it is meant to output plans for humans to carry out.  This, though, is more of a metaphorical or big-picture property.  If not for the modification of step 4 where the Oracle calculates $\mathbb E [U | \operatorname{do}(\pi_0), HumansObeyPlan]$ instead of $\mathbb E [U | \operatorname{do}(\pi_0)],$ the Oracle's outputted plans would just *be* its actions within the agent schema above.  (And it would optimize the general effects of its plan-outputting actions, including the problem of getting the humans to carry out the plans.)
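\n\nAs a toy illustration of the difference between the two selection rules (the plan names and numbers below are invented for the example, not taken from any concrete proposal), the two versions of step 4 might be sketched in Python as:\n\n    # Toy sketch: step 4 of a standard agent versus the Planning Oracle's\n    # modified step 4.  For each candidate plan, the (assumed) world-model\n    # supplies two forecasts: E[U | do(plan)], the expected utility of\n    # outputting the plan, including its effects on whether the humans adopt\n    # it; and E[U | do(plan), HumansObeyPlan], the expected utility assuming\n    # the humans decide to carry the plan out.\n    FORECAST = {\n        # plan:                 (unconditional EU, EU given HumansObeyPlan)\n        "persuasive_plan":      (0.9, 0.5),\n        "straightforward_plan": (0.4, 0.8),\n    }\n\n    def standard_step_4(forecast):\n        # A standard agent's outputted plan just *is* its action, so it picks\n        # whatever output maximizes unconditional expected utility, which\n        # rewards making the plan persuasive enough to get itself executed.\n        return max(forecast, key=lambda plan: forecast[plan][0])\n\n    def planning_oracle_step_4(forecast):\n        # The Planning Oracle evaluates each plan *assuming* the humans carry\n        # it out, without optimizing the probability that they do.\n        return max(forecast, key=lambda plan: forecast[plan][1])\n\n    print(standard_step_4(FORECAST))         # persuasive_plan\n    print(planning_oracle_step_4(FORECAST))  # straightforward_plan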
\n\n# Example: Imitation-based agents\n\n[2sj Imitation-based agents] would modify steps 3 and 4 of the loop by "trying to output an action indistinguishable from the output of the human imitated" rather than forecasting consequences or optimizing over them, except perhaps insofar as forecasting consequences is important for guessing what the human would do, or insofar as they internally imitate a human mode of thought that involves mentally imagining the consequences and choosing between them.  "Imitation-based agents" might justly be called pseudoagents, in this schema.\n\n(But the "pseudoagent" terminology is relatively new, and a bit awkward, and it won't be surprising if we all go on saying "imitation-based agents" or "act-based agents".  The point of having terms like 'pseudoagent' or 'advanced nonagent' is to have a name for the general concept, not to [10l reserve and guard] the word 'agent' for only 100% real pure agents.)
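\n\nAs another purely illustrative sketch (invented action names and probabilities), the imitation-based selection rule replaces the expected-utility comparison with a "what would the human most plausibly do?" comparison:\n\n    # Toy sketch: the imitation-based pseudoagent's replacement for steps 3-4.\n    # Instead of forecasting consequences and maximizing expected utility, it\n    # scores each candidate output by how probable it is that the imitated\n    # human would have produced that output.\n    HUMAN_MODEL = {\n        "cautious_familiar_action": 0.7,\n        "do_nothing":               0.2,\n        "clever_novel_action":      0.1,\n    }\n\n    def imitation_step(human_model):\n        # Pick the output the human is predicted most likely to produce,\n        # regardless of its forecasted consequences.\n        return max(human_model, key=human_model.get)\n\n    print(imitation_step(HUMAN_MODEL))  # cautious_familiar_action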
\n\n# Safety benefits and difficulties\n\nAdvanced pseudoagents and nonagents are usually proposed in the hope of averting some [2l advanced safety issue] that seems to arise from the *agenty* part of "[2c advanced agency]", while preserving other [2c advanced cognitive powers] that seem [6y useful].\n\nA proposal like this can fail to the extent that it's not pragmatically possible to disentangle one aspect of agency from another, or to the extent that removing that much agency would make the AI [42k safe but useless].\n\nSome hypothetical examples that would, if they happened, constitute cases of failed safety or [42k unworkable tradeoffs] in pseudoagent compromises:\n\n- Somebody proposes to obtain an Oracle merely in virtue of only giving the AI a text output channel, and only taking what it says as suggestions, thereby interrupting the loop between the agent's policies and its actions in the world.  If this is all that changes, then from the Oracle's perspective it's still an agent, its text output is its motor channel, and it still immediately outputs whatever act it expects to maximize subjective expected utility, treating the humans as part of the environment to be optimized.  It's an agent that somebody is trying to *use* as part of a larger process with an interrupted agent loop, but the AI design itself is a pure agent.\n\n- Somebody advocates for designing an AI that [1v4 only computes and outputs probability estimates], and never searches for any EU-maximizing policies, let alone outputs them.  It turns out that this AI cannot manage its internal and reflective operations well, because it can't use consequentialism to select the best thought to think next.  As a result, the AI design fails to bootstrap, or fails to work sufficiently well before competing AI designs that do use internal consequentialism come along.  (Safe but useless, much like a rock.)\n\n- Somebody advocates that an [2sj imitative agent design] will avoid invoking the [2c advanced safety issues that seem like they should be associated with consequentialist reasoning], because the imitation-based pseudoagent never does any consequentialist reasoning or planning; it only tries to produce an output extremely similar to its training set of observed human outputs.  But it turns out (arguendo) that the pseudoagent, to imitate the human, has to imitate consequentialist reasoning, and so the implied dangers end up pretty much the same.\n\n- An agent is supposed to just be an extremely powerful policy-reinforcement learner instead of an expected utility optimizer.  After a huge amount of optimization and mutation on a very general representation for policies, it turns out that the best policies, the ones most reinforced by the highest rewards, are computing consequentialist models internally.  The actual result ends up being that the AI is doing consequentialist reasoning that is obscured and hidden, since it takes place outside the designed and easily visible high-level loop of the AI.\n\nComing up with a proposal for an advanced pseudoagent that still did something pivotal and was actually safer would reasonably require: (a) understanding how to slice up agent properties along their natural joints; (b) understanding which advanced-agency properties lead to which expected safety problems, and how; and (c) understanding which internal cognitive functions would be needed to carry out some particular [6y pivotal] task; adding up to (d) seeing an exploitable prying-apart of the advanced-AI joints.\n\nWhat's often proposed in practice is more along the lines of:\n\n- "We just need to build AIs without emotions so they won't have drives that make them compete with us."  (Can you translate that into the language of utility functions and consequentialist planning, please?)\n- "Let's just build an AI that answers human questions."  (It's doing a lot more than that internally, so how are the internal operations organized?  Also, [6y what do you do] with a question-answering AI that averts the consequences of somebody else building a more agenty AI?)\n\nComing up with a sensible proposal for a pseudoagent is hard.  The reason for talking about "agents" when talking about future AIs isn't that the speaker wants to give AIs lots of power and have them wandering the world doing whatever they like under their own drives (for this entirely separate concept see [1g3 autonomous AGI]).  The reason we talk about observe-model-predict-act expected-utility consequentialists is that this seems to carve a lot of important concepts at their joints.  Some alternative proposals exist, but they often have a feel of "carving against the joints" or trying to push through an unnatural arrangement, and aren't as natural or as simple to describe.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    'advanced_agent'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [
    'needs_summary_meta_tag'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12021',
      pageId: 'advanced_nonagent',
      userId: 'AlexeiAndreev',
      edit: '5',
      type: 'newTag',
      createdAt: '2016-06-08 17:17:58',
      auxPageId: 'needs_summary_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12005',
      pageId: 'advanced_nonagent',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-06-08 01:36:11',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12004',
      pageId: 'advanced_nonagent',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-06-08 01:34:17',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12003',
      pageId: 'advanced_nonagent',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-06-08 01:33:29',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12002',
      pageId: 'advanced_nonagent',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-06-08 01:32:32',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12001',
      pageId: 'advanced_nonagent',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-06-08 01:31:43',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11970',
      pageId: 'advanced_nonagent',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newParent',
      createdAt: '2016-06-07 23:34:56',
      auxPageId: 'advanced_agent',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}