{
  localUrl: '../page/1hv.html',
  arbitalUrl: 'https://arbital.com/p/1hv',
  rawJsonUrl: '../raw/1hv.json',
  likeableId: '468',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: '1hv',
  edit: '1',
  editSummary: '',
  prevEdit: '0',
  currentEdit: '1',
  wasPublished: 'true',
  type: 'comment',
  title: '"Sorry, I tried to be concre..."',
  clickbait: '',
  textLength: '2610',
  alias: '1hv',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2015-12-31 08:35:19',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2015-12-31 08:35:19',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '828',
  text: 'Sorry, I tried to be concrete about what we were discussing, but I will try harder:\n\nConsider some putative design for a genie, which behaves safely with human involvement.\n\nNow form a pseudo-genie that works as follows. Every time the original genie would consult a human (or provide an opportunity for human intervention), the pseudo-genie consults the human with small probability. Otherwise, it predicts how the human would respond and behaves as if it had actually received the predicted feedback.\n\nMy weak claim is that the pseudo-genie will not have catastrophic failures unless either (1) it makes an inaccurate prediction or (2) the real genie has a catastrophic failure. This seems obvious on its face. But your most recent comment seems to be rejecting this claim, so it might be a good place to focus in order to clear up the discussion.\n\n(I agree that even the best possible predictor cannot always make accurate predictions, so the relevance of the weak claim is not obvious. But you might hope that in situations that actually arise, very powerful systems will make accurate predictions.)\n\nMy strong claim is that if the human behaves sensibly, the pseudo-genie will not have catastrophic failures unless either (1) it makes a prediction which seems obviously and badly wrong, or (2) the real genie has a catastrophic failure.\n\nEven the strong claim is far from perfect reassurance, because the AI might expect to be in a simulation in which the human is about to be replaced by an adversarial superintelligence, and so make predictions that seem obviously and badly wrong. For the moment I am setting that difficulty aside---if you are willing to concede the point modulo that difficulty, then I'll declare us on the same page.\n\n> it really sounds like you're assuming the problem of Friendly AI - reducing "does useful pivotal things and does not kill you" to "have a sufficiently good answer to some well-specified question whose interpretation doesn't depend on any further reflectively consistent degrees of freedom" - has been fully solved as just one step in your argument\n\nNo, I'm just arguing that *if* you had an AI that works well with human involvement, *then* you can make one that works well with minimal human involvement, modulo certain well-specified problems in AI (namely, making good enough predictions about humans). Those problems almost but not quite avoid reflectively consistent degrees of freedom (the predictions still depend on the prior).\n\nThis is like one step of ten in the act-based approach, and so to the extent that we disagree, it seems important to clear that up.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '0',
  maintainerCount: '0',
  userSubscriberCount: '0',
  lastVisit: '2016-02-25 04:36:01',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    '1gj',
    'task_agi'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4862',
      pageId: '1hv',
      userId: 'PaulChristiano',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-12-31 08:35:19',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4859',
      pageId: '1hv',
      userId: 'PaulChristiano',
      edit: '0',
      type: 'newParent',
      createdAt: '2015-12-31 08:12:15',
      auxPageId: 'task_agi',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4861',
      pageId: '1hv',
      userId: 'PaulChristiano',
      edit: '0',
      type: 'newParent',
      createdAt: '2015-12-31 08:12:15',
      auxPageId: '1gj',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}