{
  localUrl: '../page/corrigibility.html',
  arbitalUrl: 'https://arbital.com/p/corrigibility',
  rawJsonUrl: '../raw/45.json',
  likeableId: '2298',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '5',
  dislikeCount: '0',
  likeScore: '5',
  individualLikes: [
    'PatrickLaVictoir',
    'AndrewMcKnight',
    'NickShesterin',
    'JoshuaPratt',
    'LukeMcRedmond'
  ],
  pageId: 'corrigibility',
  edit: '12',
  editSummary: '',
  prevEdit: '11',
  currentEdit: '12',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Corrigibility',
  clickbait: '"I can't let you do that, Dave."',
  textLength: '12883',
  alias: 'corrigibility',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2017-02-08 19:41:13',
  pageCreatorId: 'NateSoares',
  pageCreatedAt: '2015-04-05 23:42:58',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '1276',
  text: '[summary:  Corrigible agents allow themselves to be 'corrected' (from our standpoint) by human [9r programmers], and don't experience [10k instrumental pressures] to avoid correction.\n\nImagine building an [7g1 advanced AI] with a [2xd shutdown button] that causes the AI to suspend to disk in an orderly fashion if the shutdown button is pressed.  An AI that is *corrigible* with respect to this shutdown button is an AI that doesn't try to prevent the shutdown button from being pressed... or rewrite itself without the shutdown code, or build a backup copy of itself elsewhere, or psychologically manipulate the programmers into not pressing the button, or fool the programmers into thinking the AI has shut down when it has not, etcetera.]\n\nA 'corrigible' agent is one that [7g0 doesn't interfere] with what [9r we] would intuitively see as attempts to 'correct' the agent, or 'correct' our mistakes in building it; and permits these 'corrections' despite the apparent [10g instrumentally convergent reasoning] saying otherwise.\n\n- If we try to suspend the AI to disk, or shut it down entirely, a corrigible AI will let us do so.  (Even though, if suspended, [7g2 the AI will then be unable to fulfill what would usually be its goals].)\n-  If we try to reprogram the AI's utility function or [meta_utility meta-utility function], a corrigible AI will allow this modification to go through.  (Rather than, e.g., fooling us into believing the utility function was modified successfully, while the AI actually keeps its original utility function as [3cq obscured] functionality; as we would expect by default to be [3r6 a preferred outcome according to the AI's current preferences].)\n\nMore abstractly:\n\n- A corrigible agent experiences no preference or [10k instrumental pressure] to interfere with attempts by the programmers or operators to modify the agent, impede its operation, or halt its execution.\n- A corrigible agent does not attempt to manipulate or deceive its operators, especially with respect to properties of the agent that might otherwise cause its operators to modify it.\n- A corrigible agent does not try to [3cq obscure its thought processes] from its programmers or operators.\n- A corrigible agent is motivated to preserve the corrigibility of the larger system if that agent self-modifies, constructs sub-agents in the environment, or offloads part of its cognitive processing to external systems; or alternatively, the agent has no preference to execute any of those general activities.\n\nA stronger form of corrigibility would require the AI to positively cooperate or assist, such that the AI would rebuild the shutdown button if it were destroyed, or experience a positive preference *not* to self-modify if self-modification could lead to incorrigibility.  But this is not part of the primary specification since it's possible that we would *not* want the AI trying to actively be helpful in assisting our attempts to shut it down, and would in fact prefer the AI to be passive about this.\n\nGood proposals for achieving corrigibility in specific regards are [4m open problems in AI alignment].  
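\n\nTo make the instrumental pressure described above concrete, here is a minimal illustrative sketch (not part of the original article; the action names and payoff numbers are assumptions) of why an unmodified expected-utility maximizer resists shutdown: under its current utility function, continuing to run simply scores higher than being suspended.\n\n```python\n# Toy model (illustrative only): a naive expected-utility maximizer deciding\n# what to do about its own shutdown button.  Utility is "stamps eventually\n# collected" for a stamp-collecting agent.\nEXPECTED_STAMPS_IF_RUNNING = 100.0    # assumed payoff if the agent keeps operating\nEXPECTED_STAMPS_IF_SHUT_DOWN = 0.0    # a suspended agent collects no more stamps\n\ndef expected_utility(action: str) -> float:\n    """Expected stamps under the agent's *current* utility function."""\n    if action == "allow_shutdown":\n        return EXPECTED_STAMPS_IF_SHUT_DOWN\n    if action == "disable_button":\n        return EXPECTED_STAMPS_IF_RUNNING\n    raise ValueError(action)\n\nactions = ["allow_shutdown", "disable_button"]\nprint(max(actions, key=expected_utility))  # -> "disable_button"\n```\n\nThe desiderata above amount to asking that this comparison not end up steering the agent's behavior with respect to its own shutdown button.\n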
Some areas of active research are [1b7] and [interruptibility].\n\nAchieving total corrigibility everywhere via some single, general mental state in which the AI "knows that it is still under construction" or "believes that the programmers know more than it does about its own goals" is termed '[3ps the hard problem of corrigibility]'.\n\n## Difficulties\n\n### Deception and manipulation by default\n\nBy default, most sets of preferences are such that an agent acting according to those preferences will prefer to retain its current preferences. For example, imagine an agent which is attempting to collect stamps. Altering the agent so that it prefers to collect bottle caps would lead to futures where the agent has fewer stamps, and so allowing this event to occur is dispreferred (under the current, stamp-collecting preferences).\n\nMore generally, as noted in the discussion of [10g instrumentally convergent strategies], most utility functions give an agent strong incentives to retain its current utility function: imagine an agent constructed so that it acts according to the utility function U, and imagine further that its operators think they built the agent to act according to a different utility function U'. If the agent learns this fact, then it has incentives to either deceive its programmers (prevent them from noticing that the agent is acting according to U instead of U') or manipulate its programmers (convince them that they actually prefer U to U', or coerce them into leaving its utility function intact).\n\nA corrigible agent must avoid these default incentives to manipulate and deceive, but specifying some set of preferences that avoids deception/manipulation incentives remains an open problem.\n\n### Trouble with utility function uncertainty\n\nA first attempt at describing a corrigible agent might involve specifying a utility-maximizing agent that is uncertain about its utility function. However, while this could allow the agent to make some changes to its preferences as a result of observations, the agent would still be incorrigible when the programmers attempt to correct what they see as mistakes in their own formulation of how the "correct" utility function should be determined from interaction with the environment.\n\nAs an overly simplistic example, imagine an agent attempting to maximize the internal happiness of all humans, but which has uncertainty about what that means. The operators might believe that if the agent does not act as intended, they can simply express their dissatisfaction and cause it to update. However, if the agent is reasoning according to an impoverished hypothesis space of utility functions, then it may behave quite incorrigibly: say it has narrowed down its consideration to two different hypotheses, one being that a certain type of opiate causes humans to experience maximal pleasure, and the other that a certain type of stimulant does. If the agent begins administering opiates to humans, and the humans resist, then the agent may "update" and start administering stimulants instead. But the agent would still be incorrigible: it would resist attempts by the programmers to turn it off so that it stops drugging people.\n\nIt does not seem that corrigibility can be trivially solved by specifying agents with uncertainty about their utility function. 
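\n\nAs a rough illustration of this failure mode (a toy sketch, not the article's proposal; the hypotheses, payoffs, and likelihoods are invented), an agent whose hypothesis space only contains utility functions that reward continued intervention will update *within* that space when the humans resist, but will never conclude that allowing shutdown is best:\n\n```python\n# Toy "uncertain utility function" agent with an impoverished hypothesis space.\n# All numbers are illustrative assumptions, not a real proposal.\nprior = {"opiates_best": 0.6, "stimulants_best": 0.4}\n\n# Utility each hypothesis assigns to each action.  Crucially, *both* hypotheses\n# assign zero value to the agent allowing itself to be shut down.\nutility = {\n    "opiates_best":    {"give_opiates": 10.0, "give_stimulants": 1.0,  "allow_shutdown": 0.0},\n    "stimulants_best": {"give_opiates": 1.0,  "give_stimulants": 10.0, "allow_shutdown": 0.0},\n}\n\n# Likelihood of observing "the humans resist the opiates" under each hypothesis.\nlikelihood = {"opiates_best": 0.1, "stimulants_best": 0.9}\n\ndef best_action(belief):\n    actions = ["give_opiates", "give_stimulants", "allow_shutdown"]\n    return max(actions, key=lambda a: sum(p * utility[h][a] for h, p in belief.items()))\n\nprint(best_action(prior))      # -> "give_opiates"\n\n# The humans resist; the agent does a Bayesian update, but only within its\n# two-hypothesis space.\nevidence = {h: prior[h] * likelihood[h] for h in prior}\nz = sum(evidence.values())\nposterior = {h: evidence[h] / z for h in evidence}\n\nprint(best_action(posterior))  # -> "give_stimulants", never "allow_shutdown"\n```\n\nUnder both hypotheses shutting down scores worse than continuing to administer something, so this kind of uncertainty never produces deference to the operators.\n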
A corrigible agent must somehow also be able to reason about the fact that the humans themselves might have been confused or incorrect when specifying the process by which the utility function is identified, and so on.\n\n### Trouble with penalty terms\n\nA second attempt at describing a corrigible agent might specify a utility function with "penalty terms" for bad behavior. This is unlikely to work for a number of reasons. First, there is the [42] problem: if a utility function gives an agent strong incentives to manipulate its operators, then adding a penalty for "manipulation" to the utility function will tend to give the agent strong incentives to cause its operators to do what it would have manipulated them to do, without taking any action that technically triggers the "manipulation" clause.  It is likely extremely difficult to specify conditions for "deception" and "manipulation" that actually rule out all undesirable behavior, especially if the agent is [47 smarter than us] or [6q growing in capability].\n\nMore generally, it does not seem like a good policy to construct an agent that searches for positive-utility ways to deceive and manipulate the programmers, [7g0 even if those searches are expected to fail]. The goal of corrigibility is *not* to design agents that want to deceive but can't. Rather, the goal is to construct agents that have no incentives to deceive or manipulate in the first place: a corrigible agent is one that reasons as if it is incomplete and potentially flawed in dangerous ways.\n\n## Open problems\n\nSome open problems in corrigibility are:\n\n### Hard problem of corrigibility\n\nOn a human, intuitive level, there seems to be a central idea behind corrigibility that looks simple to us: understand that you're flawed, that your meta-processes might also be flawed, and that there's another cognitive system over there (the programmer) that's less flawed, so you should let that cognitive system correct you even if that doesn't seem like the first-order right thing to do.  You shouldn't disassemble that other cognitive system to update your model in a Bayesian fashion on all possible information that other cognitive system contains; you shouldn't model how that other cognitive system might optimally correct you and then carry out the correction yourself; you should just let that other cognitive system modify you, without attempting to manipulate how it modifies you to be a better form of 'correction'.\n\nFormalizing the hard problem of corrigibility seems like it might be a problem that is hard (hence the name).  Preliminary research might describe some obvious ways that we could model A as believing that B has some form of information that A's preference framework designates as important, and show what these algorithms actually do and how they fail to solve the hard problem of corrigibility.\n\n### [1b7 Utility indifference]\n\n[fixme: explain utility indifference]\n\nThe current state of technology on this is that the AI behaves as if there's an absolutely fixed probability of the shutdown button being pressed, and therefore doesn't try to modify this probability.  But then the AI will try to use the shutdown button as an outcome pump.  Is there any way to avert this?\n\n### Percentalization\n\nDoing something in the top 0.1% of all actions.  
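\n\nA minimal sketch of what this might look like (illustrative only, and not a worked-out proposal from the article): score the candidate actions and pick randomly from the top 0.1% instead of taking the single argmax.\n\n```python\nimport random\n\ndef percentalize(actions, score, top_fraction=0.001):\n    """Pick a random action from the top `top_fraction` of candidates by score,\n    rather than the single highest-scoring action (toy sketch)."""\n    ranked = sorted(actions, key=score, reverse=True)\n    k = max(1, int(len(ranked) * top_fraction))  # keep at least one candidate\n    return random.choice(ranked[:k])\n\n# Example with 10,000 candidate actions and an invented scoring function.\ncandidates = list(range(10_000))\nprint(percentalize(candidates, score=lambda a: -abs(a - 4200)))\n```\n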
This is actually a Limited AI paradigm and ought to go there, not under Corrigibility.\n\n### Conservative strategies\n\nDo something that's as similar as possible to other outcomes and strategies that have been whitelisted.  Also actually a Limited AI paradigm.\n\nThis seems like something that could be investigated in practice on e.g. a chess program.\n\n### Low impact measure\n\n(Also really a Limited AI paradigm.)\n\nFigure out a measure of 'impact' or 'side effects' such that if you tell the AI to paint all cars pink, it just paints all cars pink, and doesn't transform Jupiter into a computer to figure out how to paint all cars pink, and doesn't dump toxic runoff from the paint into groundwater; and *also* doesn't create utility fog to make it look to people like the cars *haven't* been painted pink (in order to minimize this 'side effect' of painting the cars pink), and doesn't let the car-painting machines run wild afterward just to minimize its own further actions on the car-painting machines.  Roughly, try to actually formalize the notion of "Just paint the cars pink with a minimum of side effects, dammit."\n\nIt seems likely that this problem could turn out to be FAI-complete, if, for example, "Cure cancer, but then it's okay if that causes human research investment into curing cancer to decrease" is only distinguishable by us as an okay side effect because it doesn't result in an expected utility decrease under our own desires.\n\nIt still seems like it might be good to, e.g., try to define "low side effect" or "low impact" inside the context of a generic Dynamic Bayes Net, and see if maybe we can find something after all that yields our intuitively desired behavior or helps to get closer to it.\n\n### Ambiguity identification\n\nWhen there's more than one thing the user could have meant, ask the user rather than optimizing the mixture.  Even if A is in some sense a 'simpler' concept to classify the data than B, notice if B is also a 'very plausible' way to classify the data, and ask the user if they meant A or B.  The goal here is to have something that, in the classic 'tank classifier' problem where the tanks were photographed in lower illumination than the non-tanks, asks the user, "Did you mean to detect tanks, or low light, or 'tanks and low light', or what?"\n\n### Safe outcome prediction and description\n\nCommunicate the AI's predicted result of some action to the user, without putting the user inside an unshielded argmax of maximally effective communication.\n\n### Competence aversion\n\nTo build e.g. a [102 behaviorist genie], we need the AI to, e.g., not experience an instrumental incentive to get better at modeling minds, or to refer mind-modeling problems to subagents, etcetera.  The general subproblem might be 'averting the instrumental pressure to become good at modeling a particular aspect of reality'.  A toy problem might be an AI that in general wants to get the gold in a Wumpus problem, but doesn't experience an instrumental pressure to know the state of the upper-right-hand-corner cell in particular.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '2',
  maintainerCount: '0',
  userSubscriberCount: '0',
  lastVisit: '2016-02-26 05:29:47',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky',
    'NateSoares',
    'AlexeiAndreev',
    'TsviBT',
    'MatthewGraves'
  ],
  childIds: [
    'programmer_deception',
    'utility_indifference',
    'avert_instrumental_pressure',
    'avert_self_improvement',
    'shutdown_problem',
    'user_manipulation',
    'hard_corrigibility',
    'updated_deference',
    'interruptibility'
  ],
  parentIds: [
    'ai_alignment'
  ],
  commentIds: [
    '188'
  ],
  questionIds: [],
  tagIds: [
    'taskagi_open_problems',
    'value_alignment_open_problem',
    'nonadversarial'
  ],
  relatedIds: [
    'convergent_strategies'
  ],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [
    {
      id: '5089',
      parentId: 'instrumental_convergence',
      childId: 'corrigibility',
      type: 'requirement',
      creatorId: 'EliezerYudkowsky',
      createdAt: '2016-07-09 23:03:01',
      level: '2',
      isStrong: 'false',
      everPublished: 'true'
    }
  ],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21996',
      pageId: 'corrigibility',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newChild',
      createdAt: '2017-02-13 18:07:36',
      auxPageId: 'interruptibility',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21983',
      pageId: 'corrigibility',
      userId: 'EliezerYudkowsky',
      edit: '12',
      type: 'newEdit',
      createdAt: '2017-02-08 19:41:13',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21982',
      pageId: 'corrigibility',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2017-02-08 19:32:38',
      auxPageId: 'taskagi_open_problems',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21981',
      pageId: 'corrigibility',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2017-02-08 19:32:35',
      auxPageId: 'value_alignment_open_problem',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21980',
      pageId: 'corrigibility',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2017-02-08 19:14:53',
      auxPageId: 'nonadversarial',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21931',
      pageId: 'corrigibility',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newChild',
      createdAt: '2017-02-06 06:25:48',
      auxPageId: 'updated_deference',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '20365',
      pageId: 'corrigibility',
      userId: 'MatthewGraves',
      edit: '11',
      type: 'newEdit',
      createdAt: '2016-11-22 21:11:35',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '16308',
      pageId: 'corrigibility',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newRequirement',
      createdAt: '2016-07-09 23:03:01',
      auxPageId: 'instrumental_convergence',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '10612',
      pageId: 'corrigibility',
      userId: 'EliezerYudkowsky',
      edit: '10',
      type: 'newChild',
      createdAt: '2016-05-18 07:10:07',
      auxPageId: 'hard_corrigibility',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4633',
      pageId: 'corrigibility',
      userId: 'EliezerYudkowsky',
      edit: '10',
      type: 'newEdit',
      createdAt: '2015-12-28 23:00:10',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4322',
      pageId: 'corrigibility',
      userId: 'NateSoares',
      edit: '4',
      type: 'newEdit',
      createdAt: '2015-12-24 23:50:52',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4310',
      pageId: 'corrigibility',
      userId: 'TsviBT',
      edit: '9',
      type: 'newEdit',
      createdAt: '2015-12-24 04:47:26',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4071',
      pageId: 'corrigibility',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newChild',
      createdAt: '2015-12-17 19:55:43',
      auxPageId: 'utility_indifference',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3859',
      pageId: 'corrigibility',
      userId: 'AlexeiAndreev',
      edit: '8',
      type: 'newEdit',
      createdAt: '2015-12-16 04:49:16',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3858',
      pageId: 'corrigibility',
      userId: 'AlexeiAndreev',
      edit: '0',
      type: 'newAlias',
      createdAt: '2015-12-16 04:49:13',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '837',
      pageId: 'corrigibility',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newChild',
      createdAt: '2015-10-28 03:46:58',
      auxPageId: 'programmer_deception',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '380',
      pageId: 'corrigibility',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'ai_alignment',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2060',
      pageId: 'corrigibility',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2015-09-23 19:44:23',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2059',
      pageId: 'corrigibility',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2015-05-15 08:46:21',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2058',
      pageId: 'corrigibility',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2015-05-15 08:45:52',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2057',
      pageId: 'corrigibility',
      userId: 'NateSoares',
      edit: '3',
      type: 'newEdit',
      createdAt: '2015-04-06 00:47:34',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2056',
      pageId: 'corrigibility',
      userId: 'NateSoares',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-04-05 23:43:02',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'true',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}