{
  localUrl: '../page/1hn.html',
  arbitalUrl: 'https://arbital.com/p/1hn',
  rawJsonUrl: '../raw/1hn.json',
  likeableId: '462',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: '1hn',
  edit: '1',
  editSummary: '',
  prevEdit: '0',
  currentEdit: '1',
  wasPublished: 'true',
  type: 'comment',
  title: '"I don't think you've correc..."',
  clickbait: '',
  textLength: '4520',
  alias: '1hn',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2015-12-31 01:31:33',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2015-12-31 01:31:33',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '828',
  text: 'I don't think you've correctly diagnosed the disagreement yet (your strawman position is obviously crazy, given that some forms of "predicting humans" are already tractable while others won't be until humans are obsolete).\n\nWhen I imply that "making prediction errors about humans isn't a big deal," it's not because I think that algorithms won't make such errors. It's because the resulting failures don't look malignant.\n\nWe are concerned about a particular class of failures, namely those that lead to intelligent optimization of alien goals. So to claim that mis-predicting humans is catastrophic, you need to explain why it leads to this special kind of failure. This seems to be where we disagree. Misunderstanding human values doesn't seem to necessarily lead to this kind of catastrophe, as long as you get the part right where human values are the things that humans want. Other failures cause you to transiently do things that aren't quite what the humans want, which is maybe regrettable but basically fits into the same category as other errors about object-level tasks.\n\nA simple example:\n\nSuppose that I am giving instructions to the pseudo-genie (like a genie, but one that follows predicted rather than actual instructions), and the pseudo-genie is predicting what instructions I would give it. I fully expect the pseudo-genie *not* to predict any instructions that predictably lead to it killing me, except in exceptional circumstances, or in cases where I get to sign off on dying first, etc.\n\nThis is not a subtle question requiring full coverage of human value, or nuances of the definition of "dying." I also don't think there is any edge instantiation here in the problematic sense. There is edge instantiation in that the *human* may pick instructions that are as good as possible subject to the constraint of not killing anyone, but I claim that this kind of edge instantiation does not put significant extra burden on our predictor.\n\nDo we disagree about this point? That is, do you think that such a pseudo-genie would predict me issuing instructions that lead to me dying? If not, do you think that outcomes like "losing effective control of the machines that I've built" or "spawning a brooding elder god" are much subtler than dying and therefore more likely to be violated?\n\n(I actually think that killing the user directly is probably a much more likely failure mode than spawning an alien superintelligence.)\n\n\nI also do think that a classifier trained to identify "instructions that the human would object vigorously to with 1% probability" could identify most instructions that a human would in fact object vigorously to. (At least in the context of the pseudo-genie, where this classifier is being applied to predicted human actions. If the plans are optimized for not being *classified* as objectionable, which seems like it should never ever happen, then indeed something may go wrong.)\n\n\n> If you consider the low-impact paradigm, then the idea is that you can get a lot of the same intended benefit of "do no harm" via "try not to needlessly affect things and tell me about the large effects you do expect so I can check, even if this involves a number of needlessly avoided effects and needless checks"\n\nI think I understand the motivation. I'm expressing skepticism that this is really an easier problem. Sorry if "do no harm" was an unfairly ambitious paraphrase.\n\n\nOne motivating observation is that human predictions of other humans seem to be complete overkill for running my argument---that is, the kinds of errors you must be concerned about are totally unlike the errors that a sophisticated person might make when reasoning about another person. If you disagree about this, then that seems like a great opportunity to flesh out our disagreement, since I think it is a slam dunk and it seems way easier to reason about.\n\nAssuming that we agree on that point, we can perhaps agree on a simpler claim: for a strictly superhuman AI, there would be no reason to have actual human involvement. Human involvement is needed only in domains where humans actually have capabilities that our early AI lacks, especially reasoning about other humans.\n\nThat is, in some sense the issue (in your scenario) seems to be that AI systems are good at some tasks and humans are good at other tasks, and we want to build a composite system that has both abilities. This is quite different from the usual relationship in AI control, where the human is contributing goals rather than abilities.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '0',
  maintainerCount: '0',
  userSubscriberCount: '0',
  lastVisit: '2016-02-25 04:36:01',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    '1gj',
    'task_agi'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4836',
      pageId: '1hn',
      userId: 'PaulChristiano',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-12-31 01:31:33',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4830',
      pageId: '1hn',
      userId: 'PaulChristiano',
      edit: '0',
      type: 'newParent',
      createdAt: '2015-12-31 00:22:48',
      auxPageId: 'task_agi',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4832',
      pageId: '1hn',
      userId: 'PaulChristiano',
      edit: '0',
      type: 'newParent',
      createdAt: '2015-12-31 00:22:48',
      auxPageId: '1gj',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}