{
  localUrl: '../page/pointing_finger.html',
  arbitalUrl: 'https://arbital.com/p/pointing_finger',
  rawJsonUrl: '../raw/2s0.json',
  likeableId: '1697',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '1',
  dislikeCount: '0',
  likeScore: '1',
  individualLikes: [
    'EliezerYudkowsky'
  ],
  pageId: 'pointing_finger',
  edit: '7',
  editSummary: '',
  prevEdit: '6',
  currentEdit: '7',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Look where I'm pointing, not at my finger',
  clickbait: 'When trying to communicate the concept "glove", getting the AGI to focus on "gloves" rather than "my user's decision to label something a glove" or "anything that depresses the glove-labeling button".',
  textLength: '13864',
  alias: 'pointing_finger',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-09-09 22:40:33',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2016-03-23 20:04:39',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '650',
  text: '[summary:  Suppose we're trying to give a [6w Task AGI] the task, "Put a strawberry on this pedestal".  We mean to identify our intended category of strawberries by waving some strawberries and some non-strawberries in front of the AI's webcam.  Alice in the control room will press a button to label which of these objects are strawberries.  The "Look where I'm pointing, not at my finger" problem is getting the AI to focus on the strawberries rather than Alice or the button.  The concepts "strawberry on the pedestal" and "event that makes Alice think of strawberries" and "event that causes the button to be pressed" are different goals to pursue, even though as concepts they'll all equally well-classify any normal training cases.  AIs pursuing these goals respectively put a strawberry on the pedestal, fool Alice using a plastic strawberry, and build a robotic arm to press the labeling button.\n\nWe want a way to point to a particular part of the AI's model of the causal lattice that produces the labeled training data - the event we intuitively consider to be the strawberry on the pedestal, versus other parts of the causal lattice like Alice and the button.  Hence "look where I'm pointing, not at my finger".\n\n![strawberry diagram](http://www.gliffy.com/go/publish/image/10424137/L.png)]\n\n## Example problem\n\nSuppose we're trying to give a [6w Task AGI] the task, "Make there be a strawberry on the pedestal in front of your webcam."  For example, a human could fulfill this task by buying a strawberry from the supermarket and putting it on the pedestal.\n\nAs part of aligning a Task AGI on this goal, we'd need to [36y identify] strawberries and the pedestal.\n\nOne possible approach to communicating the concept of "strawberry" is through a training set of human-selected cases of things that are and aren't strawberries, on and off the pedestal.\n\nFor the sake of distinguishing causal roles, let's say that one human, User1, is selecting training cases of objects and putting them in front of the AI's webcam.  A different human, User2, is looking at the scene and pushing a button when they see something that looks like a strawberry on the pedestal.  The [6h intention] is that pressing the button will label positive instances of the goal concept, namely strawberries on the pedestal.  In actual use after training, the AI will be able to generate its own objects to put inside the room, possibly with further feedback from User2.  We want these objects to be instances of our [6h intended] goal concept, aka, actual strawberries.\n\nWe could draw an intuitive causal model for this situation as follows:\n\n![strawberry diagram](http://www.gliffy.com/go/publish/image/10423843/L.png)\n\nSuppose that during the use phase, the AI actually creates a realistic plastic strawberry, one that will fool User2 into pressing the button.  Or, similarly, suppose the AI creates a small robot that sprouts tiny legs and runs over to User2's button and presses the button directly.\n\nNeither of these are the goal concept that we wanted the AI to learn, but any *test* of the hypothesis "Is this event classified as a positive instance of the goal concept?" will return "Yes, the button was pressed."  
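
The following is a minimal toy sketch of why behavioral tests cannot separate these goal concepts (the function and variable names, such as `user2_thinks_strawberry`, are hypothetical illustrations invented for this sketch, not part of any proposed training setup):

```python
# Hypothetical toy version of the training setup: an object's appearance causes
# User2's judgment, which causes the button press; the AI only observes the label.

def user2_thinks_strawberry(obj):
    # User2 judges by appearance, so a convincing plastic fake also passes.
    return obj["looks_like_strawberry"]

def observed_label(obj, ai_presses_button_directly=False):
    # The training label is literally "was the positive-instance button pressed?"
    return ai_presses_button_directly or user2_thinks_strawberry(obj)

real_strawberry    = {"is_strawberry": True,  "looks_like_strawberry": True}
plastic_strawberry = {"is_strawberry": False, "looks_like_strawberry": True}
scarf              = {"is_strawberry": False, "looks_like_strawberry": False}

print(observed_label(real_strawberry))                         # True
print(observed_label(plastic_strawberry))                      # True (fools User2)
print(observed_label(scarf, ai_presses_button_directly=True))  # True (robot arm)
```

All three strategies receive the same positive label, so no amount of checking the label can tell the AI which of the candidate goal concepts we meant.
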
%%comment: (If you imagine some other User3 watching this and pressing an override button to tell the AI that this fake strawberry wasn't really a positive instance of the intended goal concept, imagine the AI modeling and then manipulating or bypassing User3, etcetera.)%%\n\nMore generally, the human is trying to point to their intuitive "strawberry" concept, but there may be other causal concepts that also separate the training data well into positive and negative instances, such as "objects which come from strawberry farms", "objects which cause (the AI's psychological model of) User2 to think that something is a strawberry", or "any chain of events leading up to the positive-instance button being pressed".\n\n%%comment: move this to sensory identification section:  However, in a case like this, it's not like the actual physical glove is inside the AGI's memory.  Rather, we'd be, say, putting the glove in front of the AGI's webcam, and then (for the sake of simplified argument) pressing a button which is meant to label that thing as a "positive instance".  If we want our AGI to achieve particular states of the environment, we'll want it to reason about the causes of the image it sees on the webcam and identify a concept over those causes - have a goal over 'gloves' and not just 'images which look like gloves'.  In the latter case, it could just as well fulfill its goal by setting up a realistic monitor in front of its webcam and displaying a glove image.  So we want the AGI to [2rz identify its task] over the causes of its sensory data, not just pixel fields.%%\n\n## Abstract problem\n\nTo state the above [6r potential difficulty] more generally:\n\nThe "look where I'm pointing, not at my finger" problem is that the labels on the training data are produced by a complicated causal lattice, e.g., (strawberry farm) -> (strawberry) -> (User1 takes strawberry to pedestal) -> (Strawberry is on pedestal) -> (User2 sees strawberry) -> (User2 classifies strawberry) -> (User2 presses 'positive instance' button).  We want to point to the "strawberry" part of the lattice of causality, but the finger we use to point there is User2's psychological classification of the training cases and User2's hand pressing the positive-instance button.\n\nWorse, when it comes to which model *best* separates the training cases, concepts that are further downstream in the chain of causality should classify the training data better, if the AI is [2c smart enough] to understand those parts of the causal lattice.\n\nSuppose that at one point User2 slips on a banana peel, and her finger slips and accidentally classifies a scarf as a positive instance of "strawberry".  From the AI's perspective there's no good way of accounting for this observation in terms of strawberries, strawberry farms, or even User2's psychology.  To *maximize* predictive accuracy over the training cases, the AI's reasoning must take into account that things are more likely to be positive instances of the goal concept when there's a banana peel on the control room floor.  
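
As a minimal sketch of this point under assumed toy frequencies (all of the numbers and names below are illustrative assumptions, not measurements of anything): a hypothesis that reasons about causes further downstream of the strawberry fits the observed labels strictly better than one restricted to strawberries.

```python
import random
random.seed(0)

# Hypothetical toy training set: the label is "button pressed", usually caused by
# a strawberry, but occasionally by User2 slipping on a banana peel and mislabeling.
data = []
for _ in range(10000):
    is_strawberry = random.random() < 0.5
    banana_peel_slip = random.random() < 0.02
    data.append((is_strawberry, banana_peel_slip, is_strawberry or banana_peel_slip))

def accuracy(hypothesis):
    return sum(hypothesis(s, slip) == label for s, slip, label in data) / len(data)

upstream_concept = lambda s, slip: s             # "positive instance" = strawberry
downstream_concept = lambda s, slip: s or slip   # = whatever presses the button

print(accuracy(upstream_concept))    # roughly 0.99
print(accuracy(downstream_concept))  # 1.0 (the downstream concept always wins)
```

The downstream hypothesis scores perfectly because it just is the process that generates the labels.
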
Similarly, if some deceptively strawberry-shaped objects slip into the training cases, or are generated by the AI [2qq querying the user], the best boundary that separates 'button pressed' from 'button not pressed' labeled instances will include a model of what makes a human believe that something is a strawberry.\n\nA learned concept that's 'about' layers of the causal lattice that are further downstream of the strawberry, like User2's psychology or mechanical force being applied to the button, will implicitly take into account the upstream layers of causality.  To the extent that something being strawberry-shaped causes a human to press the button, it's implicitly part of the category of "events that end up applying mechanical force to the 'positive-instance' button".  Conversely, a concept that's about upstream layers of the causal lattice can't take into account events downstream.  So if you're looking for pure predictive accuracy, the best model of the labeled training data - given sufficient AGI understanding of the world and the more complicated parts of the causal lattice - will always be "whatever makes the positive-instance button be pressed".\n\nThis is a problem because what we actually want is for there to be a strawberry on the pedestal, not for there to be an object that looks like a strawberry, or for User2's brain to be rewritten to think the object is a strawberry, or for the AGI to seize the control room and press the positive-instance button.\n\nThis scenario may qualify as a [6q context disaster] if the AGI only understands strawberries in its development phase, but comes to understand User2's psychology later.  Then the more complicated causal model, in which the downstream concept of User2's psychology separates the data better than reasoning about properties of strawberries directly, first becomes an issue only when the AI is over a high threshold level of intelligence.\n\n## Approaches\n\n[2qp Conservatism] would try to align the AGI to plan out goal-achievement events that were as similar as possible to the particular goal-achievement events labeled positively in the training data.  If the human got the strawberry from the supermarket in all training instances, the AGI would try to get the same brand of strawberry from the same supermarket.\n\n[4w Ambiguity identification] would focus on trying to get the AGI to ask us whether we meant 'things that make humans think they're strawberries' or 'strawberry'.  This approach might need to resolve ambiguities through the AGI explicitly and symbolically communicating with us about the alternative possible goal concepts, or generating sufficiently detailed multiple-view descriptions of a hypothetical case, rather than through the AGI trying real examples.  Testing alternative hypotheses using real examples always says that the label is generated further causally downstream; if you are sufficiently intelligent to construct a fake plastic strawberry that fools a human, trying out the hypothesis will produce the response "Yes, this is a positive instance of the goal concept."  If the AGI tests the hypothesis that the 'real' explanation of the positive-instance label is 'whatever makes the button be pressed' rather than 'whatever makes User2 think of a strawberry' by carrying out the distinguishing experiment of pressing the button in a case where User2 doesn't think something is a strawberry, the AGI will find that the experimental result favors the 'it's just whatever presses the button' hypothesis.  
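
As a small worked calculation of that last point (the prior and the two hypothesis names are assumptions made up for this sketch), any such real-world test can only update the AGI toward the more-downstream reading of the label:

```python
# Hypothetical Bayes update after the "distinguishing experiment": the AGI presses
# the button itself while User2 does not believe there is a strawberry present.
prior = {"label tracks what User2 thinks is a strawberry": 0.9,
         "label tracks whatever presses the button": 0.1}

# Probability of observing a positive label in that experiment under each hypothesis.
likelihood = {"label tracks what User2 thinks is a strawberry": 0.0,
              "label tracks whatever presses the button": 1.0}

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior)  # all of the posterior probability lands on the button hypothesis
```
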
Some modes of ambiguity identification break for sufficiently advanced AIs, since the AI's experiment interferes with the causal channel that we'd intended to carry information about *our* intended goal concept.\n\nSpecialized approaches to the pointing-finger problem in particular might try to define a supervised learning algorithm that tends to internally distill, in a predictable way, some model of causal events, such that the algorithm could be instructed somehow to try learning a *simple* or *direct* relation between the positive "strawberry on pedestal" instances and the observed labels of the "sensory button" node within the training cases, with this relation not allowed to pass through the causal model of User2 or mechanical force being applied to the button, because we know how to say "those things are too complicated" or "those things are too far causally downstream" relative to the algorithm's internal model.\n\n![strawberry diagram](http://www.gliffy.com/go/publish/image/10424137/L.png)\n\nThis specialized approach seems potentially amenable to an initial attack with modern machine learning algorithms.\n\nBut to restate the essential difficulty from an [2l advanced-safety] perspective: in the limit of [2c advanced intelligence], the *best possible* classifier of the relation between the training cases and the observed button labels will always pass through User2 and anything else that might physically press the button.  Trying to 'forbid' the AI from using the most effective classifier for the relation between Strawberry? and observed values of Button! seems potentially subject to a [42 Nearest Unblocked] problem, where the 'real' simplest relation re-emerges in the advanced phase after being suppressed during the training phase.  Maybe the AI reasons about certain very complicated properties of the material object on the pedestal... in fact, these properties are so complicated that they turn out to contain implicit models of User2's psychology, again because this produces *a better separation* of the labeled training data.  That is, we can't allow the 'strawberry' concept to include complicated logical properties of the strawberry-object that in effect include a psychological model of User2 reacting to the strawberry - which would imply that if User2 can be fooled by a fake plastic model, the fake must be a strawberry.  
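
To make the intended endpoint of the specialized approach above concrete, here is a minimal sketch under assumed toy causal rules (the node names and the `allowed_nodes` mechanism are illustrative assumptions, not a worked-out proposal): the goal predicate is evaluated only on the designated upstream node, so plans that merely fool User2 or press the button do not count as successes.

```python
# Hypothetical explicit causal model; the goal predicate may only read the
# designated "strawberry_on_pedestal" node, not nodes causally downstream of it.

def simulate_world(plan):
    world = {"strawberry_on_pedestal": plan.get("put_real_strawberry", False)}
    world["user2_believes_strawberry"] = (world["strawberry_on_pedestal"]
                                          or plan.get("show_plastic_fake", False))
    world["button_pressed"] = (world["user2_believes_strawberry"]
                               or plan.get("press_button_directly", False))
    return world

def goal_achieved(world, allowed_nodes=("strawberry_on_pedestal",)):
    return all(world[node] for node in allowed_nodes)

for plan in [{"put_real_strawberry": True},
             {"show_plastic_fake": True},
             {"press_button_directly": True}]:
    print(plan, goal_achieved(simulate_world(plan)))
# Only the first plan scores as a success; pointing the predicate at
# "button_pressed" instead would score all three plans as successes.
```

The difficulty is in keeping the learned goal predicate attached to that node once the AI's model is rich enough to smuggle User2's reactions into its representation of "properties of the object on the pedestal".
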
We can't allow this *even though* the richer model will produce a more accurate classification of the training data, and any actual experiments performed will return results favoring the richer model.\n\nEven so, this doesn't seem impossible to navigate as a machine learning problem; an algorithm might be able to recognize when an upstream causal node starts to contain predicates that belong in a downstream causal node; or an algorithm might contain strong regularization rules that collect all inference about User2 into the User2 node rather than letting it slop over anywhere else; or it might be possible to impose a constraint, after the strawberry category has been learned sufficiently well, that the current level of strawberry complexity is the most complexity allowed; or the granularity of the AI's causal model might not allow such complex predicates to be secretly packed into the part of the causal graph we're identifying, without visible and transparent consequences when we monitor how the algorithm is learning the goal predicate.\n\nA toy model of this setup ought to include analogues of User2 that sometimes make mistakes in a regular way, and actions the AI can potentially take to directly press the labeling button; this would test the ability to point an algorithm to learn about the compact properties of the strawberry in particular, and not other concepts causally downstream that could potentially separate the training data better, or better explain the results of experiments.  A toy model might also introduce new discoverable regularities of the User2 analogue, or new options to manipulate the labeling button, as part of the test data, in order to simulate the progression of an advanced agent gaining new capabilities.\n',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky',
    'AlexeiAndreev'
  ],
  childIds: [],
  parentIds: [
    'task_identification'
  ],
  commentIds: [
    '40w'
  ],
  questionIds: [],
  tagIds: [
    'taskagi_open_problems',
    'value_alignment_open_problem',
    'ontology_identification'
  ],
  relatedIds: [
    'identify_causal_goals'
  ],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '19526',
      pageId: 'pointing_finger',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2016-09-09 22:40:33',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '19452',
      pageId: 'pointing_finger',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2016-09-02 22:41:59',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9307',
      pageId: 'pointing_finger',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-04-14 22:00:01',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9304',
      pageId: 'pointing_finger',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-04-14 20:54:48',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9303',
      pageId: 'pointing_finger',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-04-14 20:44:25',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8959',
      pageId: 'pointing_finger',
      userId: 'AlexeiAndreev',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-03-23 21:50:39',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8952',
      pageId: 'pointing_finger',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newTag',
      createdAt: '2016-03-23 20:04:59',
      auxPageId: 'value_alignment_open_problem',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8950',
      pageId: 'pointing_finger',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newTag',
      createdAt: '2016-03-23 20:04:55',
      auxPageId: 'taskagi_open_problems',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8948',
      pageId: 'pointing_finger',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-03-23 20:04:39',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8947',
      pageId: 'pointing_finger',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2016-03-23 19:33:41',
      auxPageId: 'ontology_identification',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8945',
      pageId: 'pointing_finger',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-03-23 19:29:05',
      auxPageId: 'task_identification',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}