{
  localUrl: '../page/1fp.html',
  arbitalUrl: 'https://arbital.com/p/1fp',
  rawJsonUrl: '../raw/1fp.json',
  likeableId: '399',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: '1fp',
  edit: '1',
  editSummary: '',
  prevEdit: '0',
  currentEdit: '1',
  wasPublished: 'true',
  type: 'comment',
  title: '"There seems to be some equi..."',
  clickbait: '',
  textLength: '1552',
  alias: '1fp',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2015-12-28 06:21:07',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2015-12-28 06:21:07',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '1',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '372',
  text: 'There seems to be some equivocation here between two motivations for studying corrigibility.\n\nAs far as I can tell, there are two obvious routes to solving the "switch problem":\n\n* Have a principled treatment of normative uncertainty + indirect normativity that yields the desired behavior with respect to reflective consistency (and VOI)\n* Adopt the instrumental preferences of users over possible shutdown / self-modification / etc.\n\nIt looks like *both* of these will probably work if we are able to solve the rest of the AI control problem.\n\nWith this in mind, I thought the motivation for studying corrigibility was the intuition that it should follow from some kind of intellectual humility, which we don\'t yet understand or have any model of. This seems pretty sensible to me. It\'s also explicit in the Arbital page on [ corrigibility ].\n\nBut utility indifference doesn\'t seem to address this motivation at all, no matter how well it works out. Instead it is aimed at resolving some of the symptoms of the underlying issue. So talking about it as an approach to corrigibility (and indeed one of the only concrete approaches) seems to undermine the offered motivation for corrigibility, and to presuppose that the more natural approaches to the "switch problem" don\'t work. This at least requires some kind of explanation.\n\nI think this may be practically relevant because many mainstream AI researchers might be very sympathetic to work on corrigibility if they understood the problem (and would be much more open to the intellectual humility angle).',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-13 12:38:17',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'utility_indifference'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4482',
      pageId: '1fp',
      userId: 'PaulChristiano',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-12-28 06:21:07',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4481',
      pageId: '1fp',
      userId: 'PaulChristiano',
      edit: '0',
      type: 'newParent',
      createdAt: '2015-12-28 06:02:36',
      auxPageId: 'utility_indifference',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}