{
  localUrl: '../page/2nh.html',
  arbitalUrl: 'https://arbital.com/p/2nh',
  rawJsonUrl: '../raw/2nh.json',
  likeableId: '1584',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: '2nh',
  edit: '1',
  editSummary: '',
  prevEdit: '0',
  currentEdit: '1',
  wasPublished: 'true',
  type: 'comment',
  title: '"To me, the most natural way..."',
  clickbait: '',
  textLength: '358',
  alias: '2nh',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-03-16 16:43:41',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-03-16 16:43:41',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: 'In the context of a Task AGI, one application of what we call \'conservatism\' is the Burrito Problem\\.  Suppose I show the AI five burritos and five non\\-burritos\\.  Rather than learning the simplest concept that distinguishes burritos from non\\-burritos and then creating something that is maximally a burrito under this concept, we would like the AI to learn a simple and narrow concept that classifies these five things as burritos according to some simple rule \\(not just the rule, "only these exact five objects are burritos"\\) but which also classifies as few other objects as burritos as possible\\.  This concept however must still be broad enough to permit the construction of a sixth burrito that is not molecularly identical to any of the first five\\.  But not so broad that the burrito includes botulinum toxin \\(because, hey, anything made out of mostly carbon\\-hydrogen\\-oxygen\\-nitrogen that looks like a burrito ought to be fine\\)\\.',
  anchorText: 'Rather than learning the simplest concept that distinguishes burritos from non\\-burritos and then creating something that is maximally a burrito under this concept, we would like the AI to learn a simple and narrow concept that classifies these five things as burritos according to some simple rule',
  anchorOffset: '166',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '360',
  text: 'To me, the most natural way to approach this is to take a probability distribution over "what it means to be a burrito," and to produce a thing that is maximally likely to be a burrito rather than a thing which is maximally burrito-like. Of course this still depends on having a good distribution over "what it means to be a burrito" (as does your approach).',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '2',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'taskagi_open_problems'
  ],
  commentIds: [
    '2nl'
  ],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8637',
      pageId: '2nh',
      userId: 'PaulChristiano',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-03-16 16:43:41',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8636',
      pageId: '2nh',
      userId: 'PaulChristiano',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-03-16 16:42:09',
      auxPageId: 'taskagi_open_problems',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}
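
A minimal sketch (not part of the original record) of the distinction drawn in the comment's text field: choose the candidate that is maximally likely to be a burrito under a distribution over burrito-concepts, rather than the candidate that is maximally burrito-like under a single learned concept. All names, types, and helpers below are illustrative assumptions, not anything taken from Arbital or the parent page.

```typescript
// Illustrative sketch only: every identifier here is an assumption,
// not part of the Arbital page or Christiano's comment.

// A "concept" scores how probable it is that x is a burrito, given that concept.
type Concept = (x: string) => number;

// A posterior over concepts: each candidate concept with its posterior weight.
// Weights are assumed to sum to 1.
interface WeightedConcept {
  concept: Concept;
  weight: number;
}

// Expected probability that x is a burrito, averaged over the concept posterior.
function probBurrito(x: string, posterior: WeightedConcept[]): number {
  return posterior.reduce((sum, { concept, weight }) => sum + weight * concept(x), 0);
}

// "Maximally likely to be a burrito": pick the candidate with the highest
// expected probability under the whole posterior. Assumes candidates is nonempty.
function mostLikelyBurrito(candidates: string[], posterior: WeightedConcept[]): string {
  return candidates.reduce((best, x) =>
    probBurrito(x, posterior) > probBurrito(best, posterior) ? x : best
  );
}

// Contrast: "maximally burrito-like" under one learned concept, the approach
// the anchor text cautions against. Assumes candidates is nonempty.
function mostBurritoLike(candidates: string[], single: Concept): string {
  return candidates.reduce((best, x) => (single(x) > single(best) ? x : best));
}
```

Under this sketch, averaging over the concept posterior penalizes a candidate that scores highly under only one idiosyncratic concept (say, "mostly carbon-hydrogen-oxygen-nitrogen and burrito-shaped"), which is the failure mode the anchor text warns about; as the comment notes, the result is still only as good as the distribution over "what it means to be a burrito."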