{
  localUrl: '../page/2hr.html',
  arbitalUrl: 'https://arbital.com/p/2hr',
  rawJsonUrl: '../raw/2hr.json',
  likeableId: '1433',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '1',
  dislikeCount: '0',
  likeScore: '1',
  individualLikes: [
    'EricRogstad'
  ],
  pageId: '2hr',
  edit: '1',
  editSummary: '',
  prevEdit: '0',
  currentEdit: '1',
  wasPublished: 'true',
  type: 'comment',
  title: '"Eliezer [objects](https://a..."',
  clickbait: '',
  textLength: '4168',
  alias: '2hr',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-03-11 21:05:26',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-03-11 21:05:26',
  seeDomainId: '0',
  editDomainId: '705',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: 'Act\\-based agents seem to be robust to certain kinds of errors\\. You need only the vaguest understanding of humans to guess that killing the user is: \\(1\\) not something they would approve of, \\(2\\) not something they would do, \\(3\\) not in line with their instrumental preferences\\.',
  anchorText: 'Act\\-based agents seem to be robust to certain kinds of errors\\. You need only the vaguest understanding of humans to guess that killing the user is: \\(1\\) not something they would approve of, \\(2\\) not something they would do, \\(3\\) not in line with their instrumental preferences\\.',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '183',
  text: 'Eliezer [objects](https://arbital.com/p/2fr/?l=2fr#subpage-2h4) to this post's optimism about robustness.\n\nConcretely, the complaint seems to be that a human-predictor would form generalizations like "the human takes and approves actions that maximize expected utility" for some notion of "utility," some notion of counterfactuals, etc. It might then end up killing the users (or making some other irreversibly bad decision) because the bad action is the utility-maximizing thing to do according to the learned values/decision theory/priors/etc. (which aren't identical to humans' values/decision theory/priors/etc.).\n\nI'm not impressed by this objection.\n\nClearly this would be an objectively bad prediction of the human. So the question is entirely about how hard it is to notice that it's a bad, or at least uncertain, prediction. To a human it appears to be a comically bad prediction; to what extent is that just because we are humans predicting humans?\n\n* This class of errors has literally been *talked about by humans* in advance, as has the general observation that humans won't endorse irreversible and potentially catastrophic actions without checking in with humans first. It will probably be talked about in much more detail at the time. So noticing that this is an error only requires something like an understanding of how humans' actions relate to their words, which is significantly easier than building a model of a human as an approximately rational goal-directed agent (since that approximate model would *also* need to explain human utterances). That is, you just need to be able to infer, from a human saying "I think X would be a catastrophic mistake," that the human won't do X.\n* It seems like this error is only possible for an agent that is unable to predict anything like "how a human would talk about their decision," or "how other people would respond to a decision," or so on. Are you imagining a system that can't predict any of these properties, but can still make OK predictions about actions? Or are you imagining a system that fills in the details of the "kill all humans" action with the human patiently explaining how the action is good because we are probably living in a simulation controlled by an adversarial superintelligence who will torture us if we don't take it, yet isn't able to distinguish this explanation from the explanations that are actually given in the real world for real actions?\n* You seem to be describing the situation as though expected utility maximization with an aggregate utility function is an OK description of human behavior, except for some issues like Pascal's mugging that only appear in future edge cases. This view seems surprising for a few reasons. First, how does it account for human philosophical deliberation, and the actual discussions that humans engage in when faced with cases superficially resembling these pathological edge cases? I don't see how any plausible human model is going to throw out the human deliberative model in favor of some simple general theory. Second, expected utility maximization basically can't reproduce even a single human decision. Taken literally, these philosophical frameworks are mostly predictively useless; it's not like this is a basically right framework that has a few weird edge cases. A muggable value system doesn't behave badly in weird corner cases; it behaves badly literally all of the time (except perhaps when implementing convergent instrumental values).\n* It doesn't seem necessary for a learner to generalize correctly to some far-out case on the first shot; it only seems necessary for it to know that this is a case where it is uncertain (e.g. because it entertains several conflicting hypotheses, or because there are several general regularities that come into conflict in this case).\n\nI don't think these points totally capture my position, but hopefully they help explain where I am coming from. I still feel pretty good about the argument in the "robustness" section of this post. It really does seem like it is pretty easy to predict that the human won't generally endorse actions that leave them dead.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'act_based_agents'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8525',
      pageId: '2hr',
      userId: 'PaulChristiano',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-03-11 21:05:26',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8524',
      pageId: '2hr',
      userId: 'PaulChristiano',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-03-11 21:03:47',
      auxPageId: 'act_based_agents',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}