{
  localUrl: '../page/optimized_agent_appears_coherent.html',
  arbitalUrl: 'https://arbital.com/p/optimized_agent_appears_coherent',
  rawJsonUrl: '../raw/21.json',
  likeableId: 'chaminpradeepherath',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '2',
  dislikeCount: '0',
  likeScore: '2',
  individualLikes: [
    'AlexeiAndreev',
    'EliezerYudkowsky'
  ],
  pageId: 'optimized_agent_appears_coherent',
  edit: '18',
  editSummary: '',
  prevEdit: '17',
  currentEdit: '18',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Sufficiently optimized agents appear coherent',
  clickbait: 'If you could think as well as a superintelligence, you'd be at least that smart yourself.',
  textLength: '15756',
  alias: 'optimized_agent_appears_coherent',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'true',
  voteType: 'probability',
  votesAnonymous: 'false',
  editCreatorId: 'AlexeiAndreev',
  editCreatedAt: '2015-12-16 02:16:04',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-03-20 00:12:00',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '4',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '313',
  text: '[summary: Agents which have been subject to sufficiently strong optimization pressures will tend to appear, from a human perspective, as if they obey some bounded form of the Bayesian coherence axioms for probabilistic beliefs and decision theory.]\n\n## Arguments\n\nSummary:  Violations of coherence constraints in probability theory and decision theory correspond to qualitatively destructive or dominated behaviors.  Coherence violations so easily computed as to be humanly predictable should be eliminated by optimization strong enough and general enough to reliably eliminate behaviors that are qualitatively dominated by cheaply computable alternatives.   From our perspective this should produce agents such that, *ceteris paribus*, we do not think we can predict, in advance, any coherence violation in their behavior.\n\n### Coherence violations correspond to qualitatively destructive behaviors\n\nThere is a correspondence between, on the one hand, thought processes that seem to violate intuitively appealing coherence constraints from the Bayesian family, and on the other hand, sequences of overt behaviors that leave the agent qualitatively worse off than before or that seem intuitively dominated by other behaviors.\n\nFor example, suppose you claim that you prefer A to B, B to C, and C to A.  This 'circular preference' (A > B > C > A) seems intuitively unappealing; we can also see how to visualize it as an agent with a qualitatively self-destructive behavior as follows:\n\n- You prefer to be in San Francisco rather than Berkeley, and if you are in Berkeley you will pay \$50 for a taxi ride to San Francisco.\n- You prefer San Jose to San Francisco and if in San Francisco will pay \$50 to go to San Jose.  (Still no problem so far.)\n- You like Berkeley more than San Jose and if in San Jose will pay \$50 to go to Berkeley.\n\nThe corresponding agent will spend \$150 on taxi rides and then end up in the same position, perhaps ready to spend even more money on taxi rides.  The agent is strictly, qualitatively worse off than before.  We can see this, in some sense, even though the agent's preferences are partially incoherent.  Assuming the agent has a coherent preference for money or something that can be bought with money, alongside its incoherent preference for location, then the circular trip has left it strictly worse off (since in the end the location was unchanged).  The circular trip is still dominated by the option of staying in the same place.\n\n(The above is a variant of an argument first presented by Steve Omohundro.)\n\n(Phenomena like this, known as 'preference reversals', are a common empirical finding in behavioral psychology.  Since a human mind is an ever-changing balance of drives and desires that can be heightened or weakened by changes of environmental context, eliciting inconsistent sets of preferences from humans isn't hard and can consistently be done in the laboratory in economics experiments, especially if the circularity is buried among other questions or distractors.)\n\n
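To make the dominated behavior concrete, here is a minimal sketch (not part of the original argument; the preference table and the flat 50-dollar fare are illustrative assumptions) of the circular-preference agent being run around the loop:

```python
# Toy money pump: an agent whose location preferences form the cycle
# Berkeley -> San Francisco -> San Jose -> Berkeley, and who pays a flat
# 50-dollar fare whenever some city is preferred to the one it is currently in.

PREFERRED_MOVE = {                     # current city -> city it would pay to move to
    "Berkeley": "San Francisco",
    "San Francisco": "San Jose",
    "San Jose": "Berkeley",
}
FARE = 50

def ride_until_satisfied(city, max_rides=3):
    spent = 0
    for _ in range(max_rides):         # cap the loop; a cyclic agent would never stop
        if city not in PREFERRED_MOVE:
            break                      # nothing is preferred to the current city: stay put
        city = PREFERRED_MOVE[city]
        spent += FARE
    return city, spent

print(ride_until_satisfied("Berkeley"))   # ('Berkeley', 150): back where it started, 150 dollars poorer
```

Deleting any one entry of PREFERRED_MOVE (making the preference relation acyclic) ends the ride at a most-preferred city after at most two fares; only the cycle lets fares be extracted indefinitely.
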
As another illustration, consider the Allais paradox.  As a simplified example, consider offering subjects a choice between hypothetical Gamble 1A, a certainty of receiving \$1 million if a die comes up anywhere from 00-99, and Gamble 1B, a 10% chance of receiving nothing (if the die comes up 00-09) and a 90% chance of receiving \$5 million (if the die comes up 10-99). Most subjects choose Gamble 1A. So far, we have a scenario that could be consistent with a coherent utility function in which the interval of desirability from receiving \$0 to receiving \$1 million is more than nine times the interval from receiving \$1 million to receiving \$5 million.\n\nHowever, suppose only half the subjects are randomly assigned to this condition, and the other half are asked to choose between Gamble 2A, a 90% chance of receiving nothing (00-89) and a 10% chance of receiving \$1 million (90-99), versus Gamble 2B, a 91% chance of receiving nothing (00-90) and a 9% chance of receiving \$5 million (91-99). Most subjects in this case will pick Gamble 2B. This combination of results guarantees that at least some subjects must behave in a way that doesn't correspond to any consistent utility function over outcomes.  (Normalizing U(\$0) = 0: choosing Gamble 1A requires U(\$1 million) > 0.9 U(\$5 million), while choosing Gamble 2B requires 0.09 U(\$5 million) > 0.10 U(\$1 million), i.e. U(\$1 million) < 0.9 U(\$5 million); no single utility function satisfies both.)\n\nThe Allais Paradox (in a slightly different formulation) was initially celebrated as showing that humans don't obey the expected utility axioms, and it was thought that maybe the expected utility axioms were 'wrong' in some sense.  However, in accordance with the standard families of coherence theorems, we can crank the coherence violation to exhibit a qualitatively dominated behavior:\n\nSuppose you show me a switch, set to "A", that determines whether I will get Gamble 2A or Gamble 2B. You offer me a chance to pay you one penny to throw the switch from A to B, so I do so (I now have a 91% chance of nothing, and a 9% chance of \$5 million). Then you roll one of two ten-sided dice to determine the percentile result, and the first die, the tens digit, comes up "9". Before rolling the second die, you offer to throw the switch back from B to A in exchange for another penny. Since the result of the first die transforms the experiment into Gamble 1A vs. 1B, I take your offer. You now have my two cents on the subject. (If the result of the first die is anything but 9, I am indifferent to the setting of the switch since I receive \$0 either way.)\n\nAgain, we see a manifestation of a powerful family of theorems showing that agents which cannot be seen as corresponding to any coherent probabilities and consistent utility function will exhibit qualitatively destructive behavior, like paying someone a cent to throw a switch and then paying them another cent to throw it back.\n\n
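To spell out the two-cent dynamic as a toy simulation (a sketch, not from the original text; the agent's stated preferences are simply hard-coded to match the majority survey answers above, and fees are in cents):

```python
import random

def play_once():
    fees = 0
    switch = "A"                     # the experimenter's switch starts on Gamble 2A
    # Before any dice are rolled the agent prefers Gamble 2B, so it pays a cent to flip.
    switch, fees = "B", fees + 1
    tens = random.randint(0, 9)      # tens digit of the percentile roll
    if tens == 9:
        # Given a tens digit of 9, the remaining choice is Gamble 1A vs. 1B, and the
        # agent prefers the sure million: it pays another cent to flip the switch back.
        switch, fees = "A", fees + 1
    return fees                      # payout resolution omitted; only the fees matter here

average_fee = sum(play_once() for _ in range(100000)) / 100000
print(average_fee)                   # about 1.1 cents thrown away per play, on average
```

An agent with any single consistent utility function sets the switch at most once and never pays to undo its own earlier payment.
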
There is a large literature on different sets of coherence constraints that all yield expected utility, starting with the Von Neumann-Morgenstern Theorem.  No other decision formalism has comparable support from so many families of differently phrased coherence constraints.\n\nThere is similarly a large literature on many classes of coherence arguments that yield classical probability theory, such as the Dutch Book theorems.  There is no substantively different rival to probability theory and decision theory which is competitive when it comes to (a) plausibly having some bounded analogue which could appear to describe the uncertainty of a powerful cognitive agent, and (b) seeming highly motivated by coherence constraints, that is, being forced by the absence of qualitatively harmful behaviors that correspond to coherence violations.\n\n### Generic optimization pressures, if sufficiently strong and general, should be expected to eliminate behaviors that are dominated by clearly visible alternatives.\n\nEven an incoherent collection of shifting drives and desires may well recognize, after having paid their two cents or \$150, that they are wasting money, and try to do things differently (self-modify).  An AI's programmers may recognize that, from their own perspective, they would rather not have their AI spending money on circular taxi rides.  This implies a path from incoherent non-advanced agents to coherent advanced agents as more and more optimization power is applied to them.\n\nA sufficiently advanced agent would presumably catch on to the existence of coherence theorems and see the abstract pattern of the problems (as humans already have).  But it is not necessary to suppose that these qualitatively destructive behaviors are being targeted because they are 'irrational'.  It suffices for the incoherencies to be targeted as 'problems' because particular cases of them are recognized as having produced clear, qualitative losses.\n\nWithout knowing in advance the exact specifics of the optimization pressures being applied, it seems that, in advance and ceteris paribus, we should expect that paying a cent to throw a switch and then paying again to switch it back, or throwing away \$150 on circular taxi rides, are qualitatively destructive behaviors that optimization would tend to eliminate.  E.g. one expects that a consequentialist goal-seeking agent would prefer, or a policy reinforcement learner would be reinforced, or a fitness criterion would evaluate greater fitness, etcetera, for eliminating the behavior that corresponds to incoherence, ceteris paribus and given the option of eliminating it at a reasonable computational cost.\n\nIf there is a particular kind of optimization pressure that seems sufficient to produce a cognitively highly advanced agent, but which also seems sure to overlook some particular form of incoherence, then this would present a loophole in the overall argument and yield a route by which an advanced agent with that particular incoherence might be produced (although the agent's internal optimization must also be predicted to tolerate the same incoherence, as otherwise the agent will self-modify away from it).\n\n### Eliminating behaviors that are dominated by cheaply computable alternative behaviors will produce cognition that looks Bayesian-coherent from our perspective.\n\nPerfect epistemic and instrumental coherence is too computationally expensive for bounded agents to achieve.  Consider e.g. the conjunction rule of probability that P(A&B) <= P(A).  If A is a theorem, and B is a lemma very helpful in proving A, then asking the agent for the probability of A alone may elicit a lower answer than asking the agent about the joint probability of A&B (since thinking of B as a lemma increases the subjective probability of A).  This is not a full-blown form of conjunction fallacy since there is no particular time at which the agent explicitly assigns lower probability to P(A&B ∨ A&~B) than to P(A&B).  But even for an advanced agent, if a human were watching the series of probability assignments, the human might be able to say some equivalent of, "Aha, even though the agent was exposed to no new outside evidence, it assigned probability X to P(A) at time t, and then assigned probability Y>X to P(A&B) at time t+2."\n\n
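A toy model of how such an intertemporal pattern can arise in a resource-bounded reasoner (an invented illustration; the numbers and the working-memory mechanism are assumptions made for the example, not anything from the original text):

```python
# A resource-bounded reasoner whose probability estimate for a claim depends on which
# concepts happen to be loaded into working memory when the question is asked.  Asking
# about A&B loads the lemma B, so the later estimate for the conjunction exceeds the
# earlier estimate for A alone, even though no single snapshot violates P(A&B) <= P(A).

class BoundedReasoner:
    def __init__(self):
        self.working_memory = set()

    def estimate(self, claim):
        self.working_memory.update(claim.split("&"))   # thinking about a query loads its parts
        p_b = 0.6                                      # illustrative: the lemma B looks plausible...
        p_a_given_b, p_a_given_not_b = 0.95, 0.05      # ...and A is a near-certain consequence of B
        if "B" in self.working_memory:                 # with B in mind, P(A) is computed properly
            p_a = p_b * p_a_given_b + (1 - p_b) * p_a_given_not_b   # 0.59
        else:
            p_a = 0.2                                  # rough guess with the lemma not in mind
        return p_a if claim == "A" else p_b * p_a_given_b           # P(A&B) = 0.57

reasoner = BoundedReasoner()
print(reasoner.estimate("A"))     # time t:   0.20
print(reasoner.estimate("A&B"))   # time t+2: 0.57 > 0.20, though never both held at once
print(reasoner.estimate("A"))     # asked again now: 0.59 >= 0.57, so each snapshot is coherent
```

A human watching the printed sequence sees 0.20 followed by 0.57 and can call out the intertemporal incoherence, even though the reasoner never held both numbers at the same time.
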
Two notions of "sufficiently optimized agents will appear coherent (to humans)" that might be salvaged from the above objection are as follows:\n\n* There will be some *bounded* notion of Bayesian rationality that incorporates e.g. a theory of LogicalUncertainty which agents will appear from a human perspective to strictly obey.  All departures from this bounded coherence that humans can understand using their own computing power will have been eliminated.\n* [OptimizedAppearCoherent]:   It will not be possible for humans to *specifically predict in advance* any large coherence violation, e.g. the above intertemporal conjunction fallacy.  Anything simple enough and computable cheaply enough for humans to predict in advance will also be computationally possible for the agent to eliminate in advance.  Any predictable coherence violation which is significant enough to be humanly worth noticing will also be damaging enough to be worth eliminating.\n\nAlthough the first notion of salvageable coherence above seems to us quite plausible, it leaves a large gap with respect to what this bounded analogue of rationality might be.  Insofar as [OptimizedAppearCoherent optimized agents appearing coherent] has practical implications, these implications should probably rest upon the second line of argument.\n\nOne possible loophole in the second line of argument might be some predictable class of incoherences which are not at all damaging to the agent and hence not worth spending even relatively tiny amounts of computing power to eliminate.  If so, this would imply some possible humanly predictable incoherences of advanced agents, but these incoherences would not be *exploitable* to cause any final outcome that is less than maximally preferred by the agent, including scenarios where the agent spends resources it would not otherwise spend, etc.\n\nA final implicit step is the assumption that when all humanly-visible agent-damaging coherence violations have been eliminated, the agent should look to us coherent; or that if we cannot predict specific coherence violations in advance, then we should reason about the agent as if it is coherent.  We don't yet see a relevant case where this would fail, but any failure of this step could also produce a loophole in the overall argument.\n\n## Caveats\n\n### Some possible mind designs may evade the default expectation\n\nSince [ mind design space is large], we should expect with high probability that there are at least some architectures that evade the above arguments and describe highly optimized cognitive systems, or reflectively stable systems, that appear to humans to systematically depart from bounded Bayesianism.\n\n### There could be some superior alternative to probability theory and decision theory that is Bayesian-incoherent\n\nWhen it comes to the actual outcome for advanced agents, the relevant fact is not whether there are currently some even more appealing alternatives to probability theory or decision theory, but whether these exist in principle.  The human species has not been around long enough for us to be sure that this is not the case.\n\nRemark one:  To advance-predict specific incoherence in an advanced agent, (a) we'd need to know what the superior alternative was and (b) it would need to lead to the equivalent of going around in loops from San Francisco to San Jose to Berkeley.\n\nRemark two: If on some development methodology it might prove catastrophic for there to exist some *generic* unknown superior to probability theory or decision theory, then we should perhaps be worried on this score.  
Especially since we can be reasonably sure that an advanced agent cannot actually use probability theory and decision theory, and must use some bounded analogue if it uses any analogue at all.\n\n### A cognitively powerful agent might not be sufficiently optimized\n\nScenarios that negate [29], such as [ brute forcing non-recursive intelligence], can potentially evade the 'sufficiently optimized' condition required to yield predicted coherence.  E.g., it might be possible to create a cognitively powerful system by overdriving some fixed set of algorithms, and then to prevent this system from optimizing itself or creating offspring agents in the environment.  This could allow the creation of a cognitively powerful system that does not appear to us as a bounded Bayesian.  (If, for some reason, that was a good idea.)\n\n## Implications\n\nIf probability high:  The predictions we make today about behaviors of generic advanced agents should not depict them as being visibly, specifically incoherent from a probability-theoretic or decision-theoretic perspective.\n\nIf probability not extremely high:  If it were somehow necessary or helpful for safety to create an incoherent agent architecture, this might be possible, though difficult.  The development methodology would need to contend with both the optimization pressures producing the agent, and the optimization pressures that the agent itself might apply to itself or to environmental subagents.  Successful [ intelligence brute forcing] scenarios in which a cognitively powerful agent is produced by using a great deal of computing power on known algorithms, and then the agent is somehow forbidden from self-modifying or creating other environmental agents, might be able to yield predictably incoherent agents.\n\nIf probability not extremely high:  The assumption that an advanced agent will become Bayesian-coherent should not be a [ load bearing premise] of a safe development methodology unless there are further safeguards or fallbacks.  A safe development methodology should not fail catastrophically if there exists a generic, unknown superior to probability theory or decision theory.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '2',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-13 16:46:30',
  hasDraft: 'false',
  votes: [
    {
      value: '85',
      userId: 'EliezerYudkowsky',
      createdAt: '2015-03-20 00:43:57'
    }
  ],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '1',
  currentUserVote: '-2',
  voteCount: '1',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky',
    'AlexeiAndreev'
  ],
  childIds: [],
  parentIds: [
    'ai_alignment'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3825',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'AlexeiAndreev',
      edit: '0',
      type: 'newAlias',
      createdAt: '2015-12-16 02:16:04',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3826',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'AlexeiAndreev',
      edit: '18',
      type: 'newEdit',
      createdAt: '2015-12-16 02:16:04',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '354',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'ai_alignment',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1428',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'EliezerYudkowsky',
      edit: '17',
      type: 'newEdit',
      createdAt: '2015-04-05 00:26:46',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1427',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'EliezerYudkowsky',
      edit: '16',
      type: 'newEdit',
      createdAt: '2015-03-26 21:27:54',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1426',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'EliezerYudkowsky',
      edit: '15',
      type: 'newEdit',
      createdAt: '2015-03-24 18:21:27',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1425',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'EliezerYudkowsky',
      edit: '14',
      type: 'newEdit',
      createdAt: '2015-03-23 21:17:44',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1424',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'EliezerYudkowsky',
      edit: '13',
      type: 'newEdit',
      createdAt: '2015-03-23 21:13:23',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1423',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'EliezerYudkowsky',
      edit: '12',
      type: 'newEdit',
      createdAt: '2015-03-23 21:08:44',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1422',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'EliezerYudkowsky',
      edit: '11',
      type: 'newEdit',
      createdAt: '2015-03-23 21:07:09',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1421',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'EliezerYudkowsky',
      edit: '10',
      type: 'newEdit',
      createdAt: '2015-03-23 20:09:55',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1420',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'EliezerYudkowsky',
      edit: '9',
      type: 'newEdit',
      createdAt: '2015-03-23 20:07:51',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1419',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newEdit',
      createdAt: '2015-03-23 19:49:52',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1418',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2015-03-23 19:48:32',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1417',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2015-03-23 19:47:05',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1416',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2015-03-23 19:37:39',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1415',
      pageId: 'optimized_agent_appears_coherent',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2015-03-20 00:43:45',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}