{
  localUrl: '../page/1hl.html',
  arbitalUrl: 'https://arbital.com/p/1hl',
  rawJsonUrl: '../raw/1hl.json',
  likeableId: '460',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: '1hl',
  edit: '2',
  editSummary: '',
  prevEdit: '1',
  currentEdit: '2',
  wasPublished: 'true',
  type: 'comment',
  title: '"> It seems like the only ad..."',
  clickbait: '',
  textLength: '2827',
  alias: '1hl',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2015-12-30 19:54:12',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-12-30 19:50:51',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '828',
  text: '> It seems like the only advantage of the genie is that it doesn\'t make prediction errors about humans.\n\nWell, YES.  This seems to reflect a core disagreement about how hard it probably is to get full, correct predictive coverage of humans using a supervised optimization paradigm.  Versus how hard it is to, say, ask a conservative low-impact genie to make a burrito and have it make a burrito even though the genie doesn\'t and couldn\'t predict what humans would think about the long-term impact of AI burrito-making on human society and whether making a burrito was truly the right thing to do.  I think the latter is plausibly a LOT easier, though still not easy.\n\nMy instinctive diagnosis of this core disagreement is something like "Paul is overly inspired by this decade\'s algorithms and thinks everything labeled \'predicting humans\' is equally difficult because it\'s all just \'generalized supervised learning\'" but that is probably a strawman.  Even if we\'re operating primarily on a supervision paradigm rather than a modeling paradigm, I expect differences in how easy it is to get complete coverage of some parts of the problem versus others.  I expect that some parts of what humans want are a LOT easier to supervised-learn than others.  The whole reason for being interested in e.g. \'low impact\' genies is because of the suspicion that \'try not to have unnecessary impacts in general and plan to do things in a way that minimizes side effects while getting the job done, then check the larger impacts you expect to have\', while by no means trivial, will still be a LOT easier to learn or specify to a usable and safe degree than the whole of human value.\n\n> You seem to be imagining a direct way to formulate an imperative like "do no harm" that doesn\'t involve predicting what the user would describe as a harm or what harm-avoidance strategy the user would advocate; I don\'t see much hope for that.\n\nIf you consider the low-impact paradigm, then the idea is that you can get a lot of the same intended benefit of "do no harm" via "try not to needlessly affect things and tell me about the large effects you do expect so I can check, even if this involves a number of needlessly avoided effects and needless checks" rather than "make a prediction of what I would consider \'harm\' and avoid only that, which prediction I know to be good enough that there\'s no point in my checking your prediction any more".  The former isn\'t trivial and probably is a LOT harder than someone not steeped in edge instantiation problems and unforeseen maxima would expect - if you do it in a naive way, you just end up with the whole universe maximized to minimize \'impact\'.  But it\'s plausible to me (>50% probability) that the latter case, what Bostrom would call a Sovereign, is a LOT harder to build (and know that you\'ve built).',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '0',
  maintainerCount: '0',
  userSubscriberCount: '0',
  lastVisit: '2016-02-25 04:36:01',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    '1gj',
    'task_agi'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4828',
      pageId: '1hl',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-12-30 19:54:12',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4827',
      pageId: '1hl',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-12-30 19:50:51',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4824',
      pageId: '1hl',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2015-12-30 19:40:47',
      auxPageId: 'task_agi',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4826',
      pageId: '1hl',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2015-12-30 19:40:47',
      auxPageId: '1gj',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}