{
  localUrl: '../page/1ff.html',
  arbitalUrl: 'https://arbital.com/p/1ff',
  rawJsonUrl: '../raw/1ff.json',
  likeableId: '391',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: '1ff',
  edit: '2',
  editSummary: '',
  prevEdit: '1',
  currentEdit: '2',
  wasPublished: 'true',
  type: 'comment',
  title: '"In practice, Eliezer often ..."',
  clickbait: '',
  textLength: '1303',
  alias: '1ff',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2015-12-28 04:44:12',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2015-12-28 04:38:10',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '1148',
  text: 'In practice, Eliezer often invokes this concept in settings where there *isn\'t* yet an intelligent adversary (especially in order to argue that a particular design might lead to the appearance of an intelligent adversary). For example, he has repeatedly argued that a sophisticated search for an x maximizing f(x) would tend to produce outputs designed to influence the broader world rather than to influence the computation of f(x).\n\nI think that this extension is itself an interesting and potentially important idea, but it is probably worth separating. The methodology of "conservatively assume that your maximizers might maximize really well" is intuitive and pretty defensible. The methodology of "conservatively assume that whenever you use gradient descent to do something actually impressive, it may produce a malicious superintelligence" is considerably more speculative, and arguing about the extension shouldn\'t distract from its inoffensive little brother.\n\nI guess the post comes out and says this here:\n\n> The \'strain\' on our design placed by it needing to run a\n> smarter-than-human AI in a way that doesn\'t make it adversarial, is\n> similar in many respects to the \'strain\' from cryptography facing an\n> intelligent adversary.\n\nBut none of the post seems to defend the stronger version.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '3',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-26 07:56:02',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'AI_safety_mindset'
  ],
  commentIds: [
    '2xb',
    '394',
    '39s'
  ],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4463',
      pageId: '1ff',
      userId: 'PaulChristiano',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-12-28 04:44:12',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4460',
      pageId: '1ff',
      userId: 'PaulChristiano',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-12-28 04:38:10',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4459',
      pageId: '1ff',
      userId: 'PaulChristiano',
      edit: '0',
      type: 'newParent',
      createdAt: '2015-12-28 04:28:02',
      auxPageId: 'AI_safety_mindset',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}