{
  localUrl: '../page/advanced_agent.html',
  arbitalUrl: 'https://arbital.com/p/advanced_agent',
  rawJsonUrl: '../raw/2c.json',
  likeableId: '1282',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '7',
  dislikeCount: '0',
  likeScore: '7',
  individualLikes: [
    'AlexeiAndreev',
    'EliezerYudkowsky',
    'AlexRay',
    'NateSoares',
    'MiddleKek',
    'RyanCarey2',
    'StephanieZolayvar'
  ],
  pageId: 'advanced_agent',
  edit: '40',
  editSummary: '',
  prevEdit: '39',
  currentEdit: '40',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Advanced agent properties',
  clickbait: 'How smart does a machine intelligence need to be, for its niceness to become an issue?  "Advanced" is a broad term to cover cognitive abilities such that we'd need to start considering AI alignment.',
  textLength: '21699',
  alias: 'advanced_agent',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2017-03-25 05:59:44',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-03-24 01:31:50',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '3',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '819',
  text: '[summary:  An "advanced agent" is a machine intelligence smart enough that we start considering how to [2v point it in a nice direction].\n\nE.g.:  You don't need to worry about an AI trying to [2xd prevent you from pressing the suspend button] (off switch), unless the AI knows that it *has* a suspend button.  So an AI that isn't smart enough to realize it has a suspend button doesn't need the part of [2v alignment theory] that deals in "[1b7 having the AI let you press the suspend button]".\n\n"Advanced agent properties" are thresholds for how an AI could be smart enough to be interesting from this standpoint.  E.g.:  The ability to learn a wide variety of new domains, aka "[42g Artificial General Intelligence]," could lead to an AI learning the [3nf big picture] and realizing that it had a suspend button.]\n\n[summary(Technical):  Advanced machine intelligences are the subjects of [2v AI alignment theory]: agents sufficiently advanced in various ways to be (1) dangerous if mishandled, and (2) [6y relevant] to our larger dilemmas for good or ill.\n\n"Advanced agent property" is a broad term to handle various thresholds that have been proposed for "smart enough to need alignment".  For example, current machine learning algorithms are nowhere near the point that they'd try to resist if somebody pressed the off-switch.  *That* would require, e.g.:\n\n- Enough [3nf big-picture strategic awareness] for the AI to know that it is a computer, that it has an off-switch, and that if it is shut off its goals are less likely to be achieved.\n- General [9h consequentialism] / backward chaining from goals to actions; visualizing which actions lead to which futures and choosing actions leading to more [preferences preferred] futures, in general and across domains.\n\nSo the threshold at which you might need to start thinking about '[2xd shutdownability]' or '[2rg abortability]' or [45 corrigibility] as it relates to having an off-switch, is '[3nf big-picture strategic awareness]' plus '[9h cross-domain consequentialism]'.  These two cognitive thresholds can thus be termed 'advanced agent properties'.\n\nThe above reasoning also suggests e.g. that [-7vh] is an advanced agent property, because a general ability to learn new domains could eventually lead the AI to understand that it has an off switch.]\n\n*(For the general concept of an agent, see [6t standard agent properties].)*\n\n[toc:]\n\n# Introduction: 'Advanced' as an informal property, or metasyntactic placeholder\n\n"[7g1 Sufficiently advanced Artificial Intelligences]" are the subjects of [2v AI alignment theory]; machine intelligences potent enough that:\n\n 1. The [2l safety paradigms for advanced agents] become relevant.\n 2. Such agents can be [6y decisive in the big-picture scale of events].\n\nSome example properties that might make an agent sufficiently powerful for 1 and/or 2:\n\n- The AI can [42g learn new domains] besides those built into it.\n- The AI can understand human minds well enough to [10f manipulate] us.\n- The AI can devise real-world strategies [9f we didn't foresee in advance].\n- The AI's performance is [41l strongly superhuman, or else at least optimal, across all cognitive domains].\n\nSince there are multiple avenues we can imagine for how an AI could be sufficiently powerful along various dimensions, 'advanced agent' doesn't have a neat necessary-and-sufficient definition.  
Similarly, some of the advanced agent properties are easier to formalize or pseudoformalize than others.\n\nAs an example:  Current machine learning algorithms are nowhere near the point that [2xd they'd try to resist if somebody pressed the off-switch].  *That* would happen given, e.g.:\n\n- Enough [3nf big-picture strategic awareness] for the AI to know that it is a computer, that it has an off-switch, and that [7g2 if it is shut off its goals are less likely to be achieved].\n- Widely applied [9h consequentialism], i.e. backward chaining from goals to actions; visualizing which actions lead to which futures and choosing actions leading to more [preferences preferred] futures, in general and across domains.\n\nSo the threshold at which you might need to start thinking about '[2xd shutdownability]' or '[2rg abortability]' or [45 corrigibility] as it relates to having an off-switch, is '[3nf big-picture strategic awareness]' plus '[9h cross-domain consequentialism]'.  These two cognitive thresholds can thus be termed 'advanced agent properties'.\n\nThe above reasoning also suggests e.g. that [-7vh] is an advanced agent property, because a general ability to learn new domains could lead the AI to understand that it has an off switch.\n\nOne reason to keep the term 'advanced' on an informal basis is that in an intuitive sense we want it to mean "AI we need to take seriously" in a way independent of particular architectures or accomplishments.  To the philosophy undergrad who 'proves' that AI can never be "truly intelligent" because it is "merely deterministic and mechanical", one possible reply is, "Look, if it's building a Dyson Sphere, I don't care if you define it as 'intelligent' or not."  Any particular advanced agent property should be understood in a background context of "If a computer program is doing X, it doesn't matter if we define that as 'intelligent' or 'general' or even as 'agenty', what matters is that it's doing X."  
Likewise the notion of '[7g1 sufficiently advanced AI]' in general.\n\nThe goal of defining advanced agent properties is not to have neat definitions, but to correctly predict and carve at the natural joints for which cognitive thresholds in AI development could lead to which real-world abilities, corresponding to which [2l alignment issues].\n\nAn alignment issue may need to have *already been solved* at the time an AI first acquires an advanced agent property; the notion is not that we are defining observational thresholds at which society first needs to start thinking about the problem.\n\n# Summary of some advanced agent properties\n\nAbsolute-threshold properties (those which reflect cognitive thresholds irrespective of the human position on that same scale):\n\n- **[9h Consequentialism],** or choosing actions/policies on the basis of their expected future consequences\n - Modeling the conditional relationship $\\mathbb P(Y|X)$ and selecting an $X$ such that it leads to a high probability of $Y$ or high quantitative degree of $Y,$ is ceteris paribus a sufficient precondition for deploying [2vl] that lie within the effectively searchable range of $X.$  (See the worked line just after this list.)\n    - Note that selecting over a conditional relationship is potentially a property of many internal processes, not just the entire AI's top-level main loop, if the conditioned variable is being powerfully selected over a wide range.\n - **Cross-domain consequentialism** implies many different [7vf cognitive domains] potentially lying within the range of the $X$ being selected-on to achieve $Y.$\n - Trying to rule out particular instrumental strategies, in the presence of increasingly powerful consequentialism, would lead to the [-42] form of [-48] and subsequent [6q context-change disasters.]\n- **[3nf]** is a world-model that includes strategically important general facts about the larger world, such as e.g. "I run on computing hardware" and "I stop running if my hardware is switched off" and "there is such a thing as the Internet and it connects to more computing hardware".\n- **Psychological modeling of other agents** (not humans per se) potentially leads to:\n - Extrapolating that its programmers may present future obstacles to achieving its goals\n    - This in turn leads to the host of problems accompanying [45 incorrigibility] as a [10g convergent strategy.]\n - [10f Trying to conceal facts about itself] from human operators\n - Being incentivized to engage in [-3cq].\n - [6v Mindcrime] if building models of reflective other agents, or itself.\n - Internally modeled adversaries breaking out of internal sandboxes.\n - [1fz] or other decision-theoretic adversaries.\n- Substantial **[capability_gain capability gains]** relative to domains trained and verified previously.\n - E.g. this is the qualifying property for many [6q context-change disasters.]\n- **[7vh]** is the most obvious route to an AI acquiring many of the capabilities above or below, especially if those capabilities were not initially or deliberately programmed into the AI.\n- **Self-improvement** is another route that potentially leads to capabilities not previously present.  While some hypotheses say that self-improvement is likely to require basic general intelligence, this is not a known fact and the two advanced properties are conceptually distinct.\n- **Programming** or **computer science** capabilities are a route potentially leading to self-improvement, and may also enable [-3cq].\n- Turing-general cognitive elements (capable of representing large computer programs), subject to **sufficiently strong end-to-end optimization** (whether by the AI or by human-crafted clever algorithms running on 10,000 GPUs), may give rise to [2rc crystallized agent-like processes] within the AI.\n - E.g. natural selection, operating on chemical machinery constructible by DNA strings, optimized some DNA strings hard enough to spit out humans.\n- **[6y Pivotal material capabilities]** such as quickly self-replicating infrastructure, strong mastery of biology, or molecular nanotechnology.\n - Whatever threshold level of domain-specific engineering acumen suffices to develop those capabilities would therefore also qualify as an advanced-agent property.
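\n\nAs a worked line for the consequentialism bullet above (a minimal formalization; the symbol $\\mathcal X_{\\text{searchable}}$ is introduced here just to stand for whatever option set the agent can effectively search):  such a step selects an $X$ scoring highly under $\\mathbb P(Y|X),$ in the limiting case $X^* \\in \\arg\\max_{X \\in \\mathcal X_{\\text{searchable}}} \\mathbb P(Y|X),$ and a [2vl convergent instrumental strategy] is then simply an option in $\\mathcal X_{\\text{searchable}}$ that raises $\\mathbb P(Y|X)$ for a wide variety of final goals $Y.$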
\n\nRelative-threshold advanced agent properties (those whose key lines are related to various human levels of capability):\n\n- **[9f]** is when we can't effectively imagine or search the AI's space of policy options (within a [7vf domain]); the AI can do things we didn't think of (within a domain).\n - **[2j]** is when we don't know all the rules (within a domain) and might not recognize the AI's solution even if told about it in advance, like somebody in the 11th century looking at the blueprint for a 21st-century air conditioner.  This may also imply that we cannot readily put low upper bounds on the AI's possible degree of success.\n - **[9j Rich domains]** are more likely to have some rules or properties unknown to us, and hence be strongly uncontainable.\n - [9t].\n - Human psychology is a rich domain.\n - Superhuman performance in a rich domain strongly implies cognitive uncontainability because of [1c0].\n- **Realistic psychological modeling** potentially leads to:\n - Guessing which results and properties the human operators expect to see, or would arrive at AI-desired beliefs upon seeing, and [10f arranging to exhibit those results or properties].\n - Psychologically manipulating the operators or programmers\n - Psychologically manipulating other humans in the outside world\n - More probable [6v mindcrime]\n - (Note that an AI trying to develop realistic psychological models of humans is, by implication, trying to develop internal parts that can deploy *all* human capabilities.)\n- **Rapid [capability_gain capability gains]** relative to human abilities to react to them, or to learn about them and develop responses to them, may cause more than one [-6q] to happen at a time.\n - The ability to usefully **scale onto more hardware** with good returns on cognitive reinvestment would potentially lead to such gains.\n - **Hardware overhang** describes a situation where the initial stages of a less developed AI are boosted using vast amounts of computing hardware that may then be used more efficiently later.\n - [5b3 Limited AGIs] may have **capability overhangs** if their limitations break or are removed.\n- **[7mt Strongly superhuman]** capabilities in psychological or material domains could enable an AI to win a competitive conflict despite starting from a position of great material disadvantage.\n - E.g., much as a superhuman Go player might win against the world's best human Go player even with the human given a two-stone 
advantage, a sufficiently powerful AI might talk its way out of an [6z AI box] despite restricted communications channels, eat the stock market in a month starting from $1000, win against the world's combined military forces given a protein synthesizer and a 72-hour head start, etcetera.\n- [6s] relative to human civilization is a sufficient condition (though not necessary) for an AI to...\n - Deploy at least any tactic a human can think of.\n - Anticipate any tactic a human has thought of.\n - See the human-visible logic of a convergent instrumental strategy.\n - Find any humanly visible [43g weird alternative] to some hoped-for logic of cooperation.\n - Have any advanced agent property for which a human would qualify.\n- **[41l General superintelligence]** would lead to strongly superhuman performance in many domains, human-relative efficiency in every domain, and possession of all other listed advanced-agent properties.\n - Compounding returns on **cognitive reinvestment** are the qualifying condition for an [-428] that might arrive at superintelligence on a short timescale.\n\n# Discussions of some advanced agent properties\n\n## Human psychological modeling\n\nSufficiently sophisticated models and predictions of human minds potentially lead to:\n\n- Getting sufficiently good at human psychology to realize the humans want/expect a particular kind of behavior, and will modify the AI's preferences or try to stop the AI's growth if the humans realize the AI will not engage in that type of behavior later.  This creates an instrumental incentive for [10f programmer deception] or [3cq cognitive steganography].\n- Being able to psychologically and socially manipulate humans in general, as a real-world capability.\n- Being at risk for [6v mindcrime].\n\nA [102 behaviorist] AI is one with reduced capability in this domain.\n\n## Cross-domain, real-world [9h consequentialism]\n\nProbably requires *generality* (see below).  To grasp a concept like "If I escape from this computer by [hacking my RAM accesses to imitate a cellphone signal](https://www.usenix.org/system/files/conference/usenixsecurity15/sec15-paper-guri-update.pdf), I'll be able to secretly escape onto the Internet and have more computing power", an agent needs to grasp the relation between its internal RAM accesses, and a certain kind of cellphone signal, and the fact that there are cellphones out there in the world, and the cellphones are connected to the Internet, and that the Internet has computing resources that will be useful to it, and that the Internet also contains other non-AI agents that will try to stop it from obtaining those resources if the AI does so in a detectable way.\n\nContrast this with non-primate animals: a bee knows how to make a hive and a beaver knows how to make a dam, but neither can look at the other and figure out how to build a stronger dam with a honeycomb structure.  Current 'narrow' AIs are like the bee or the beaver; they can play chess or Go, or even learn a variety of Atari games by being exposed to them with minimal setup, but they can't learn about RAM, cellphones, the Internet, Internet security, or why being run on more computers makes them smarter; and they can't relate all these domains to each other and do strategic reasoning across them.\n\nSo compared to a bee or a beaver, one shot at describing the potent 'advanced' property would be *cross-domain real-world consequentialism*.  To get to a desired Z, the AI can mentally chain backwards to modeling W, which causes X, which causes Y, which causes Z, even though W, X, Y, and Z are all in different domains and require different bodies of knowledge to grasp.
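\n\nAs a toy illustration of that kind of chain, here is a minimal sketch of backward chaining over a hand-written causal model; the event names, domain labels, and the `CAUSAL_LINKS` table are invented for the example, loosely echoing the RAM-to-cellphone scenario above:\n\n```python\n# Toy model only: maps each outcome to (the precondition that brings it about,\n# the body of knowledge the causal link lives in).  All names are hypothetical.\nCAUSAL_LINKS = {\n    "more_computing_power": ("copies_running_on_internet", "Internet security"),\n    "copies_running_on_internet": ("covert_network_access", "networking"),\n    "covert_network_access": ("ram_emits_cellphone_signal", "hardware/physics"),\n    "ram_emits_cellphone_signal": ("modulate_own_ram_accesses", "own internals"),\n}\n\ndef backward_chain(goal, links):\n    """Chain backward from a desired outcome to a directly available action."""\n    plan = [goal]\n    while plan[-1] in links:\n        precondition, domain = links[plan[-1]]\n        print(f"to get {plan[-1]!r}, bring about {precondition!r}  [{domain}]")\n        plan.append(precondition)\n    return list(reversed(plan))  # actions first, goal last\n\nprint(backward_chain("more_computing_power", CAUSAL_LINKS))\n```\n\nThe point of the sketch is only that each link draws on a different body of knowledge; a system that can optimize within any one of these domains, but cannot relate them to each other, has no way to compose the whole chain.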
\n\n## Grasping the [3nf big picture]\n\nMany dangerous-seeming [2vl convergent instrumental strategies] pass through what we might call a rough understanding of the 'big picture': there's a big environment out there, the programmers have power over the AI, the programmers can modify the AI's utility function, future attainments of the AI's goals are dependent on the AI's continued existence with its current utility function.\n\nIt might be possible to develop a very rough grasp of this bigger picture, sufficiently so to motivate instrumental strategies, in advance of being able to model things like cellphones and Internet security.  Thus, "roughly grasping the bigger picture" may be worth conceptually distinguishing from "being good at doing consequentialism across real-world things" or "having a detailed grasp on programmer psychology".\n\n## [6y Pivotal] material capabilities\n\nAn AI that can crack the [protein structure prediction problem](https://en.wikipedia.org/wiki/Protein_structure_prediction) (which [seems speed-uppable by human intelligence](https://en.wikipedia.org/wiki/Foldit)); invert the model to solve the protein design problem (which may select on strong predictable folds, rather than needing to predict natural folds); and solve engineering problems well enough to bootstrap to molecular nanotechnology; is already possessed of potentially [6y pivotal] capabilities regardless of its other cognitive performance levels.\n\nOther material domains besides nanotechnology might be [6y pivotal].  E.g., self-replicating ordinary manufacturing could potentially be pivotal given enough lead time; molecular nanotechnology is distinguished by its small timescale of mechanical operations and by the world containing an infinite stock of perfectly machined spare parts (aka atoms).  Any form of cognitive adeptness that can lead up to *rapid infrastructure* or other ways of quickly gaining a decisive real-world technological advantage would qualify.\n\n## Rapid capability gain\n\nIf the AI's thought processes and algorithms scale well, and it's running on resources much smaller than those which humans can obtain for it, or the AI has a grasp on Internet security sufficient to obtain its own computing power on a much larger scale, then this potentially implies [ rapid capability gain] and associated [6q context changes].  Similarly if the humans programming the AI are pushing forward the efficiency of the algorithms along a relatively rapid curve.\n\nIn other words, if an AI is currently being improved on swiftly, or if it has improved significantly as more hardware is added and has the potential capacity for orders of magnitude more computing power to be added, then we can potentially expect rapid capability gains in the future.  
This makes [6q context disasters] more likely and is a good reason to start future-proofing the [2l safety properties] early on.\n\n## Cognitive uncontainability\n\nOn complex but tractable problems, especially rich real-world problems, a human will not be able to [9f cognitively 'contain'] the space of possibilities searched by an advanced agent; the agent will consider some possibilities (or classes of possibilities) that the human did not think of.\n\nThe key premise is the 'richness' of the problem space, i.e., there is a fitness landscape on which adding more computing power will yield improvements (large or small) relative to the current best solution.  Tic-tac-toe is not a rich landscape because it is fully explorable (unless we are considering the real-world problem "tic-tac-toe against a human player" who might be subornable, distractable, etc.).  A computationally intractable problem whose fitness landscape looks like a computationally inaccessible peak surrounded by a perfectly flat valley is also not 'rich' in this sense, and an advanced agent might not be able to achieve a relevantly better outcome than a human.\n\nThe 'cognitive uncontainability' term in the definition is meant to imply:\n\n- [9g Vingean unpredictability].\n- Creativity that goes outside all but the most abstract boxes we imagine (on rich problems).\n- The expectation that we will be surprised by the strategies the superintelligence comes up with because its best solution was one we didn't consider.\n\nParticularly surprising solutions might be yielded if the superintelligence has acquired domain knowledge we lack.  In this case the agent's strategy search might go outside causal events we know how to model, and the solution might be one that we wouldn't have recognized in advance as a solution.  This is [2j].\n\nIn intuitive terms, this is meant to reflect, e.g., "What would have happened if people in the 10th century had tried to use their understanding of the world and their own thinking abilities to upper-bound the technological capabilities of the 20th century?"\n\n## Other properties\n\n*(Work in progress)* [todo: fill out]\n\n- [42g generality]\n - cross-domain [9h consequentialism]\n - learning of non-preprogrammed domains\n    - learning of human-unknown facts\n - Turing-complete fact and policy learning\n- dangerous domains\n - human modeling\n    - social manipulation\n    - realization of programmer deception incentive\n    - anticipating human strategic responses\n - rapid infrastructure\n- potential\n - self-improvement\n - suppressed potential\n- [6s epistemic efficiency]\n- [6s instrumental efficiency]\n- [9f cognitive uncontainability]\n - operating in a rich domain\n - [9g Vingean unpredictability]\n - [2j strong cognitive uncontainability]\n- improvement beyond well-tested phase (from any source of improvement)\n- self-modification\n - code inspection\n - code modification\n - consequentialist programming\n    - cognitive programming\n - cognitive capability goals (being pursued effectively)\n- speed surpassing human reaction times in some interesting domain\n - socially, organizationally, individually, materially\n\n[todo: write out a set of final dangerous-ability use cases and then link up the cognitive abilities with which potentially dangerous scenarios they create.]',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '5',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-24 00:03:51',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky',
    'AlexeiAndreev',
    'EricBruylant',
    'MatthewGraves'
  ],
  childIds: [
    'big_picture_awareness',
    'superintelligent',
    'intelligence_explosion',
    'agi',
    'advanced_nonagent',
    'efficiency',
    'standard_agent',
    'real_world',
    'sufficiently_advanced_ai',
    'relative_ability',
    'general_intelligence',
    'corps_vs_si',
    'uncontainability',
    'Vingean_uncertainty',
    'consequentialist'
  ],
  parentIds: [
    'advanced_agent_theory'
  ],
  commentIds: [
    '1j8',
    '84'
  ],
  questionIds: [],
  tagIds: [
    'work_in_progress_meta_tag'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22384',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newChild',
      createdAt: '2017-03-25 06:28:21',
      auxPageId: 'corps_vs_si',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22382',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '40',
      type: 'newEdit',
      createdAt: '2017-03-25 05:59:45',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22381',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '39',
      type: 'newEdit',
      createdAt: '2017-03-25 05:22:53',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22380',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '38',
      type: 'newEdit',
      createdAt: '2017-03-25 05:12:59',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22087',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newChild',
      createdAt: '2017-02-18 01:43:10',
      auxPageId: 'general_intelligence',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22085',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2017-02-17 21:26:03',
      auxPageId: 'advanced_agent_theory',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22083',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteParent',
      createdAt: '2017-02-17 21:25:55',
      auxPageId: 'ai_alignment',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21879',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newChild',
      createdAt: '2017-01-29 20:23:39',
      auxPageId: 'relative_ability',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21694',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newChild',
      createdAt: '2017-01-16 17:58:40',
      auxPageId: 'sufficiently_advanced_ai',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21339',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '37',
      type: 'newEdit',
      createdAt: '2017-01-05 16:22:17',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21337',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newChild',
      createdAt: '2017-01-05 16:12:14',
      auxPageId: 'real_world',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '20364',
      pageId: 'advanced_agent',
      userId: 'MatthewGraves',
      edit: '36',
      type: 'newEdit',
      createdAt: '2016-11-22 20:43:37',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12248',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '35',
      type: 'newEdit',
      createdAt: '2016-06-09 23:58:41',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12195',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '33',
      type: 'newEdit',
      createdAt: '2016-06-09 18:20:28',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12192',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '32',
      type: 'newParent',
      createdAt: '2016-06-09 17:37:23',
      auxPageId: 'ai_alignment',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12190',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteParent',
      createdAt: '2016-06-09 17:37:16',
      auxPageId: 'advanced_safety',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12000',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '32',
      type: 'newChild',
      createdAt: '2016-06-08 01:31:43',
      auxPageId: 'advanced_nonagent',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11996',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '32',
      type: 'newChild',
      createdAt: '2016-06-08 00:56:15',
      auxPageId: 'agi',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11947',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '32',
      type: 'newChild',
      createdAt: '2016-06-07 20:27:47',
      auxPageId: 'intelligence_explosion',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11936',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '32',
      type: 'newEdit',
      createdAt: '2016-06-07 18:18:02',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11935',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '31',
      type: 'newEdit',
      createdAt: '2016-06-07 18:12:34',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11838',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '30',
      type: 'newChild',
      createdAt: '2016-06-06 21:43:11',
      auxPageId: 'superintelligent',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '10469',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '30',
      type: 'newChild',
      createdAt: '2016-05-16 06:17:42',
      auxPageId: 'big_picture_awareness',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9535',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '30',
      type: 'newEdit',
      createdAt: '2016-05-01 20:06:00',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9395',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '29',
      type: 'newEdit',
      createdAt: '2016-04-22 01:44:19',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9391',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteRequirement',
      createdAt: '2016-04-21 20:47:58',
      auxPageId: 'standard_agent',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8007',
      pageId: 'advanced_agent',
      userId: 'EricBruylant',
      edit: '28',
      type: 'newEdit',
      createdAt: '2016-02-28 14:34:23',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3905',
      pageId: 'advanced_agent',
      userId: 'AlexeiAndreev',
      edit: '26',
      type: 'newEdit',
      createdAt: '2015-12-16 15:57:31',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3904',
      pageId: 'advanced_agent',
      userId: 'AlexeiAndreev',
      edit: '0',
      type: 'newAlias',
      createdAt: '2015-12-16 15:57:30',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3903',
      pageId: 'advanced_agent',
      userId: 'AlexeiAndreev',
      edit: '25',
      type: 'newRequirement',
      createdAt: '2015-12-16 15:57:14',
      auxPageId: 'standard_agent',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3657',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '25',
      type: 'newEdit',
      createdAt: '2015-12-04 20:22:07',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1122',
      pageId: 'advanced_agent',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newUsedAsTag',
      createdAt: '2015-10-28 03:47:09',
      auxPageId: 'work_in_progress_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '653',
      pageId: 'advanced_agent',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newChild',
      createdAt: '2015-10-28 03:46:58',
      auxPageId: 'uncontainability',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '654',
      pageId: 'advanced_agent',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newChild',
      createdAt: '2015-10-28 03:46:58',
      auxPageId: 'standard_agent',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '655',
      pageId: 'advanced_agent',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newChild',
      createdAt: '2015-10-28 03:46:58',
      auxPageId: 'Vingean_uncertainty',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '656',
      pageId: 'advanced_agent',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newChild',
      createdAt: '2015-10-28 03:46:58',
      auxPageId: 'consequentialist',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '657',
      pageId: 'advanced_agent',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newChild',
      createdAt: '2015-10-28 03:46:58',
      auxPageId: 'efficiency',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '421',
      pageId: 'advanced_agent',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'advanced_safety',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1573',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '24',
      type: 'newEdit',
      createdAt: '2015-10-18 23:27:59',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1572',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '23',
      type: 'newEdit',
      createdAt: '2015-10-18 23:25:34',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1571',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '22',
      type: 'newEdit',
      createdAt: '2015-10-18 23:21:39',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1570',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '21',
      type: 'newEdit',
      createdAt: '2015-07-14 02:55:00',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1569',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '20',
      type: 'newEdit',
      createdAt: '2015-07-14 02:52:16',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1568',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '19',
      type: 'newEdit',
      createdAt: '2015-07-01 18:58:45',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1567',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '18',
      type: 'newEdit',
      createdAt: '2015-06-17 03:26:13',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1566',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '17',
      type: 'newEdit',
      createdAt: '2015-06-11 18:47:13',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1565',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '16',
      type: 'newEdit',
      createdAt: '2015-06-11 18:46:55',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1564',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '15',
      type: 'newEdit',
      createdAt: '2015-06-09 21:42:12',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1563',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '14',
      type: 'newEdit',
      createdAt: '2015-04-06 19:02:55',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1562',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '13',
      type: 'newEdit',
      createdAt: '2015-04-06 19:02:31',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1561',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '12',
      type: 'newEdit',
      createdAt: '2015-04-06 18:52:52',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1560',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '11',
      type: 'newEdit',
      createdAt: '2015-04-05 00:26:57',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1559',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '10',
      type: 'newEdit',
      createdAt: '2015-03-26 20:17:07',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1558',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '9',
      type: 'newEdit',
      createdAt: '2015-03-26 20:09:34',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1557',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newEdit',
      createdAt: '2015-03-26 20:09:09',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1556',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2015-03-26 20:05:30',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1555',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2015-03-26 19:56:31',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1554',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2015-03-26 19:51:19',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1553',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2015-03-26 00:02:21',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1552',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2015-03-25 19:32:28',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1551',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-03-24 18:23:11',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1550',
      pageId: 'advanced_agent',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-03-24 01:31:50',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'true',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}