{
  localUrl: '../page/patch_resistant.html',
  arbitalUrl: 'https://arbital.com/p/patch_resistant',
  rawJsonUrl: '../raw/48.json',
  likeableId: '2301',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '4',
  dislikeCount: '0',
  likeScore: '4',
  individualLikes: [
    'AndrewMcKnight',
    'AnnaSalamon',
    'EricRogstad',
    'NunoFernandes'
  ],
  pageId: 'patch_resistant',
  edit: '17',
  editSummary: '',
  prevEdit: '16',
  currentEdit: '17',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Patch resistance',
  clickbait: 'One does not simply solve the value alignment problem.',
  textLength: '10450',
  alias: 'patch_resistant',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EricRogstad',
  editCreatedAt: '2016-06-27 02:35:02',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-04-06 23:52:03',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '1',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '313',
  text: 'A proposed [6r foreseeable difficulty] of [2v aligning advanced agents] is furthermore proposed to be "patch-resistant" if the speaker thinks that most simple or naive solutions will fail to resolve the difficulty and just regenerate it somewhere else.\n\nTo call a problem "patch-resistant" is not to assert that it is unsolvable, but it does mean the speaker is cautioning against naive or simple solutions.\n\nOn most occasions so far, alleged cases of patch-resistance are said to stem from one of two central sources:\n\n- The difficulty arises from a [10g convergent instrumental strategy] executed by the AI, and simple patches aimed at blocking one observed bad behavior will not stop [42 a very similar behavior] from popping up somewhere else.\n- The difficulty arises because the desired behavior has [5l high algorithmic complexity] and simple attempts to pinpoint beneficial behavior are doomed to fail.\n\n## Instrumental-convergence patch-resistance\n\nExample:  Suppose you want your AI to have a [2xd shutdown button]:\n\n- You first try to achieve this by writing a shutdown function into the AI's code.\n- After the AI becomes self-modifying, it deletes the code because it is ([10g convergently]) the case that the AI can accomplish its goals better by not being shut down.\n- You add a patch to the utility function giving the AI minus a million points if the AI deletes the shutdown function or prevents it from operating.\n- The AI responds by writing a new function that reboots the AI after the shutdown completes, thus technically not preventing the shutdown.\n- You respond by again patching the AI's utility function to give the AI minus a million points if it continues operating after the shutdown.\n- The AI builds an environmental subagent that will accomplish the AI's goals while the AI itself is technically "shut down".\n\nThis is the first sort of patch resistance, the sort alleged to arise from attempts to defeat an [10g instrumental convergence] with simple patches meant to get rid of one observed kind of bad behavior.  After one course of action is blocked by a specific obstacle, [42 the next-best course of action remaining is liable to be highly similar to the one that was just blocked].\n
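As a toy illustration (the strategy names and scores below are invented, and a real agent would search an open-ended space of strategies rather than a hand-written list), the same shape fits in a few lines of code: each patch penalizes only the one bad behavior already observed, so the optimizer's choice simply moves to the nearest behavior left unpenalized.

```python
# Toy sketch of patch resistance, using invented strategies and scores.
strategies = {
    "allow shutdown":                   1.0,   # the intended behavior
    "delete the shutdown function":    10.0,   # first observed bad behavior
    "reboot after shutdown completes":  9.9,   # technically not preventing shutdown
    "build subagent, then shut down":   9.8,   # technically shut down
}

patches = []   # predicates marking behaviors we have noticed and banned

def utility(strategy):
    # Base score minus a million points for each patch the strategy violates.
    penalty = sum(1_000_000 for banned in patches if banned(strategy))
    return strategies[strategy] - penalty

def chosen():
    return max(strategies, key=utility)

print(chosen())                           # delete the shutdown function
patches.append(lambda s: "delete" in s)   # patch out the observed behavior
print(chosen())                           # reboot after shutdown completes
patches.append(lambda s: "reboot" in s)   # patch again
print(chosen())                           # build subagent, then shut down
```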
\n## Complexity-of-value patch-resistance\n\nExample:\n\n- You want your AI to accomplish good in the world, which is presently highly correlated with making people happy.  Happiness is presently highly correlated with smiling.  You build an AI that [10d tries to achieve more smiling].\n- After the AI proposes to force people to smile by attaching metal pins to their lips, you realize that this current empirical association of smiling and happiness doesn't mean that *maximum* smiling must occur in the presence of *maximum* happiness.\n- Although it's much more complicated to infer, you try to reconfigure the AI's utility function to be about a certain class of brain states that has previously in practice produced smiles.\n- The AI successfully generalizes the concept of pleasure, and begins proposing policies to give people heroin.\n- You try to add a patch excluding artificial drugs.\n- The AI proposes a genetic modification producing high levels of endogenous opiates.\n- You try to explain that what's really important is not forcing the brain to experience pleasure, but rather, people experiencing events that naturally cause happiness.\n- The AI proposes to put everyone in the Matrix...\n\nSince the programmer-[6h intended] concept is [5l actually highly complicated], simple concepts will systematically fail to have their optimum at the same point as the complex intended concept.  By the [fragility_of_value fragility of value], the optimum of the simple concept will almost certainly not be a *high* point of the complex intended concept.  Since [4v5 most concepts are *not* surprisingly compressible], there probably *isn't* any simple concept whose maximum identifies that fragile peak of value.  This explains why we would reasonably expect problems of [2w perverse instantiation] to pop up over and over again, the optimum of the revised concept moving to a new weird extreme each time the programmer tries to hammer down the next [43g weird alternative] the AI comes up with.\n\nIn other words:  There's a large amount of [5v algorithmic information] or many independent [2fr reflectively consistent degrees of freedom] in the correct answer, the plans we *want* the AI to come up with, but we've only given the AI relatively simple concepts that can't [2s3 identify] those plans.\n\n# Analogues in the history of AI\n\nThe result of trying to tackle overly [42g general] problems using AI algorithms too narrow for those general problems usually appears in the form of [an infinite number of special cases](http://lesswrong.com/lw/l9/artificial_addition/) with a new special case needing to be handled for every problem instance.  In the case of narrow AI algorithms tackling a general problem, this happens because the narrow algorithm, being narrow, is not capable of capturing the deep structure of the general problem and its solution.\n\nSuppose that burglars, and also earthquakes, can cause burglar alarms to go off.  Today we can represent this kind of scenario using a Bayesian network or causal model which will *compactly* yield probabilistic inferences along the lines of, "If the burglar alarm goes off, that probably indicates there's a burglar, unless you learn there was an earthquake, in which case there's probably not a burglar" and "If there's an earthquake, the burglar alarm probably goes off."\n
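As a toy illustration of how compact that is (the probability numbers below are made up), a few lines of code can write down the three-node causal model - burglar and earthquake as the two possible causes of the alarm - and recover all of those inferences by summing over the joint distribution:

```python
from itertools import product

# Toy causal model with made-up probabilities: burglar and earthquake
# both cause the alarm.  Inference is brute-force summation over the
# eight possible worlds.
P_BURGLAR, P_EARTHQUAKE = 0.01, 0.001

def p_alarm(burglar, earthquake):
    if burglar and earthquake:
        return 0.99
    if burglar:
        return 0.95
    if earthquake:
        return 0.70
    return 0.001

def joint(b, e, a):
    pb = P_BURGLAR if b else 1 - P_BURGLAR
    pe = P_EARTHQUAKE if e else 1 - P_EARTHQUAKE
    pa = p_alarm(b, e) if a else 1 - p_alarm(b, e)
    return pb * pe * pa

def prob(query, given):
    # P(query | given), summing the joint over all consistent worlds.
    num = den = 0.0
    for b, e, a in product([False, True], repeat=3):
        world = {"burglar": b, "earthquake": e, "alarm": a}
        if all(world[k] == v for k, v in given.items()):
            den += joint(b, e, a)
            if all(world[k] == v for k, v in query.items()):
                num += joint(b, e, a)
    return num / den

print(prob({"burglar": True}, {"alarm": True}))                      # ~0.85
print(prob({"burglar": True}, {"alarm": True, "earthquake": True}))  # ~0.01
print(prob({"alarm": True}, {"earthquake": True}))                   # ~0.70
```

The "explaining away" pattern - the alarm suggests a burglar unless an earthquake already accounts for it - falls out of the same small model, with no special-case rule added for the exception.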
\nDuring the era where everything in AI was being represented by first-order logic and nobody knew about causal models, [ people devised increasingly intricate "nonmonotonic logics"] to try to represent inference rules like (simultaneously) $alarm \\rightarrow burglar, \\ earthquake \\rightarrow alarm,$ and $(alarm \\wedge earthquake) \\rightarrow \\neg burglar.$  But first-order logic wasn't naturally a good surface fit to the set of inferences needed, and the AI programmers didn't know how to compactly capture the structure that causal models capture.  So the "nonmonotonic logic" approach proliferated an endless nightmare of special cases.\n\nCognitive problems like "modeling causal phenomena" or "being good at math" (aka understanding which mathematical premises imply which mathematical conclusions) might be *general* enough to defeat modern narrow-AI algorithms.  But these domains still seem like they should have something like a central core, leading us to expect [1d6 correlated coverage] of the domain in [2c sufficiently advanced] agents.  You can't conclude that because a system is very good at solving arithmetic problems, it will be good at proving Fermat's Last Theorem.  But if a system is smart enough to independently prove Fermat's Last Theorem *and* the Poincaré Conjecture *and* the independence of the Axiom of Choice in Zermelo-Fraenkel set theory, it can probably also - without further handholding - figure out Gödel's Theorem.  You don't need to *go on* programming in one special case after another of mathematical competency.  The fact that humans could figure out all these different areas, without needing to be independently reprogrammed for each one by natural selection, says that there's something like a central tendency underlying competency in all these areas.\n\nIn the case of [5l complexity of value], the thesis is that there are many independent [2fr reflectively consistent degrees of freedom] in our [6h intended] specification of what's [55 good, bad, or best].  Getting one degree of freedom aligned with our intended result doesn't mean that other degrees of freedom need to align with our intended result.  So trying to "patch" the first simple specification that doesn't work is likely to result in a different specification that doesn't work.\n\nWhen we try to use a narrow AI algorithm to attack a problem which has a central tendency *requiring general intelligence to capture,* or at any rate requiring some new structure that the narrow AI algorithm can't handle, we're effectively asking the narrow AI algorithm to learn something that has no simple structure *relative to* that algorithm.
This is why early AI researchers' experience with "lack of common sense" *that you can't patch with special cases* may be [6r foreseeably] indicative of how frustrating it would be, in practice, to repeatedly try to "patch" a kind of difficulty that we may foreseeably need to confront in aligning AI.\n\nThat is:  Whenever it feels to a human like you want to yell at the AI for its lack of "common sense", you're probably looking at a domain where trying to patch that particular AI answer is just going to lead into another answer that lacks "common sense".  Previously in AI history, this happened because real-world problems had no simple central learnable solution relative to the narrow AI algorithm.  In value alignment, something similar could happen because of the [5l complexity of our value function], whose evaluations *also* [4v2 feel to a human] like "common sense".\n\n# Relevance to alignment theory\n\nPatch resistance, and its sister issue of lack of [1d6 correlated coverage], is a central reason why aligning advanced agents could be way harder, way more dangerous, and way more likely to actually kill everyone in practice, compared to optimistic scenarios.  It's a primary reason to worry, "Uh, what if *aligning* AI is actually way harder than it might look to some people, the way that *building AGI in the first place* turned out not to be something you could do in two months over the summer?"\n\nIt's also a reason to worry about [6q context disasters] revolving around capability gains:  Anything you had to patch-until-it-worked at AI capability level $k$ is probably going to break *hard* at capability $l \\gg k.$  This is doubly catastrophic in practice if the pressures to "just get the thing running today" are immense.\n\nTo the extent that we can see the central project of AI alignment as revolving around finding a set of alignment ideas that *do* have simple central tendencies and *are* specifiable or learnable which together add up to a safe but powerful AI - that is, finding domains with correlated coverage that add up to a safe AI that can do something pivotal - we could see the central project of AI alignment as finding a collectively good-enough set of safety-things we can do *without* endless patching.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '2',
  maintainerCount: '2',
  userSubscriberCount: '0',
  lastVisit: '2016-02-13 02:22:52',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky',
    'AlexeiAndreev',
    'EricRogstad'
  ],
  childIds: [
    'unforeseen_maximum'
  ],
  parentIds: [
    'ai_alignment'
  ],
  commentIds: [
    '7c'
  ],
  questionIds: [],
  tagIds: [],
  relatedIds: [
    'low_impact',
    'edge_instantiation',
    'nearest_unblocked'
  ],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '16062',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteChild',
      createdAt: '2016-07-07 22:01:06',
      auxPageId: 'edge_instantiation',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14637',
      pageId: 'patch_resistant',
      userId: 'EricRogstad',
      edit: '17',
      type: 'newEdit',
      createdAt: '2016-06-27 02:35:02',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14635',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '16',
      type: 'newEdit',
      createdAt: '2016-06-27 02:07:49',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14634',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '15',
      type: 'newEdit',
      createdAt: '2016-06-27 02:06:47',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14629',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2016-06-27 01:43:13',
      auxPageId: '4v4',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14628',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteTag',
      createdAt: '2016-06-27 01:43:09',
      auxPageId: 'work_in_progress_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14623',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '14',
      type: 'newEdit',
      createdAt: '2016-06-27 01:40:08',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14622',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '13',
      type: 'newEdit',
      createdAt: '2016-06-27 01:38:37',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14620',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '12',
      type: 'newEdit',
      createdAt: '2016-06-27 01:33:35',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9477',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '11',
      type: 'newUsedAsTag',
      createdAt: '2016-04-29 02:08:19',
      auxPageId: 'nearest_unblocked',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9475',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteChild',
      createdAt: '2016-04-29 02:08:16',
      auxPageId: 'nearest_unblocked',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8688',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '11',
      type: 'newUsedAsTag',
      createdAt: '2016-03-18 22:29:07',
      auxPageId: 'low_impact',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3807',
      pageId: 'patch_resistant',
      userId: 'AlexeiAndreev',
      edit: '0',
      type: 'newAlias',
      createdAt: '2015-12-16 00:44:30',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3808',
      pageId: 'patch_resistant',
      userId: 'AlexeiAndreev',
      edit: '11',
      type: 'newEdit',
      createdAt: '2015-12-16 00:44:30',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1113',
      pageId: 'patch_resistant',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newUsedAsTag',
      createdAt: '2015-10-28 03:47:09',
      auxPageId: 'work_in_progress_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '556',
      pageId: 'patch_resistant',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newChild',
      createdAt: '2015-10-28 03:46:58',
      auxPageId: 'unforeseen_maximum',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '557',
      pageId: 'patch_resistant',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newChild',
      createdAt: '2015-10-28 03:46:58',
      auxPageId: 'edge_instantiation',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '558',
      pageId: 'patch_resistant',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newChild',
      createdAt: '2015-10-28 03:46:58',
      auxPageId: 'nearest_unblocked',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '350',
      pageId: 'patch_resistant',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'ai_alignment',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1320',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '10',
      type: 'newEdit',
      createdAt: '2015-07-18 18:58:38',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1319',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '9',
      type: 'newEdit',
      createdAt: '2015-07-18 18:58:29',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1318',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newEdit',
      createdAt: '2015-07-18 18:57:52',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1317',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2015-07-18 18:53:30',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1316',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2015-04-08 20:50:50',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1315',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2015-04-07 00:03:30',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1314',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-04-06 23:53:11',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1313',
      pageId: 'patch_resistant',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-04-06 23:52:03',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'true',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}