{
  localUrl: '../page/Scalable_ai_control.html',
  arbitalUrl: 'https://arbital.com/p/Scalable_ai_control',
  rawJsonUrl: '../raw/1v1.json',
  likeableId: 'ea',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '1',
  dislikeCount: '0',
  likeScore: '1',
  individualLikes: [
    'AndrewMcKnight'
  ],
  pageId: 'Scalable_ai_control',
  edit: '7',
  editSummary: '',
  prevEdit: '6',
  currentEdit: '7',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Scalable AI control',
  clickbait: '',
  textLength: '20871',
  alias: 'Scalable_ai_control',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-03-04 00:52:09',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-02-03 00:30:53',
  seeDomainId: '0',
  editDomainId: '705',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '22',
  text: '\n\nBy AI control, I mean the problem of getting AI systems to do what we want them to do, to the best of their abilities.\n\nMore precisely, my goal is minimizing the gap between how well AI systems can contribute to _our_ values, and how well they can pursue _other_ values.\n\nWhy might such a gap exist?\n\nDepending on how AI develops, we may be especially good at building AI systems that pursue objectives defined [directly by their experiences](https://arbital.com/p/1v2?title=reinforcement-learning-and-linguistic-convention) — as in reinforcement learning — or which have a simple explicit representation. If human values don’t fit into these frameworks, the best AI systems may optimize simple proxies for “what we really want.”\n\nIf those proxies aren’t good enough? Then we have a gap.\n\nScalability\n===========\n\nWhether or not such a gap _could_ come to exist, today it doesn’t seem to. So: is it possible to do empirical work on AI control today?\n\nI think so. My [preferred approach](https://arbital.com/p/1w2) is to focus on **scalable** approaches: those that will continue to work, and which will preferably work _better_, as our AI systems become more capable. We should not be satisfied with control techniques that will eventually break down, or which will require continuous innovation in order to keep pace with AI capabilities.\n\nThis gives us something to do today: try to devise practical and scalable solutions to the AI control problem.\n\nEven if we don’t think that this is a good objective for current AI control research, I think that the concept of scalable AI control is very useful (and is probably more broadly acceptable than substitutes like “the [superintelligence control problem](http://globalprioritiesproject.org/2015/10/three-areas-of-research-on-the-superintelligence-control-problem/)” or “[omni-safety](https://arbital.com/p/2x)”).\n\nThis definition is not completely precise, and I provide some important clarification in Section 2. I think that in practice there is often a pretty clear line between scalable and unscalable proposals, which can serve both as a useful concept and a useful research direction.\n\nIn Section 3 I’ll provide some examples of approaches which aren’t scalable, and in Section 4 I’ll discuss some alternative goals for AI control research.\n\n2. Clarification\n================\n\n## Scaling to handle what?\n\n\nIt’s only really meaningful to talk about whether a technique can be scaled to handle _some particular kind of progress_. There is no unified spectrum of “capability” that AI is progressing along steadily.\n\nIdeally we would make techniques that can scale to handle any kind of progress that might occur. In practice I am interested in techniques that can handle various simple, foreseeable kinds of progress. Once that bar can be met, we can consider a wider and wider range of possible trajectories.\n\nWhat are simple, foreseeable forms of progress? Easy examples are faster computers, better optimization strategies, or richer model classes that are easier to optimize over. More generally, researchers are working on and making continuous progress on a wide range of concrete problems, and in most cases we can imagine progress continuing for a long time, without fundamentally changing the nature of the AI control problem.\n\nWe can broaden the space of possible trajectories by considering progress on a broader range of existing techniques, including techniques that are currently impractical. 
We can also consider concrete future capabilities that might be attained. Or we can try to design control techniques that will extend to completely unanticipated developments, based on increasingly minimal assumptions about the nature of those developments.\n\nBut for now, I think that scaling to handle concrete, foreseeable developments is a hard enough problem.\n\n### Example: reinforcement and supervised learning\n\nThere is an especially nice way to think about scalability with respect to progress in [reinforcement and supervised learning](https://arbital.com/p/1v2?title=reinforcement-learning-and-linguistic-convention).\n\nThese techniques produce systems that optimize an objective defined by explicit feedback. We can easily imagine better systems that more effectively optimize the given objective. And we can ask: do our control techniques work as our systems get better and better at optimizing these objectives, or are they predicated on implicit assumptions about the limitations of our systems? In the limit, we can consider systems that literally choose the output maximizing the given reward signal.\n\nI think that this view of scalability is distinctive to MIRI, and I think it is a great aspect of their methodology. They would use a slightly different version of the principle, in which a system might be optimized for any precisely-defined objective, but the underlying principle is quite similar.\n\nMy version essentially amounts to assuming that (1) reinforcement learning, broadly construed, [will remain a dominant methodology in AI](https://arbital.com/p/1w3), and (2) there will be no progress in [reward engineering](http://www.danieldewey.net/reward-engineering-principle.pdf).\n\nI think (1) is plausible though unlikely. I think that (2) is implausible. Ignoring future progress in reward engineering is a methodological approach, intended to help us understand the problem of reward engineering rather than to make a prediction. This brings us to:\n\n### The intended path of progress\n\nMy guess is that the null AI control policy — do nothing — would in fact scale to the AI progress that actually occurs.\n\nThis is just a restatement of my optimistic view of the world: I expect that we will be good at building AI systems that do the things we want them to do, by the time that we are good at building AI systems that do anything. If that’s how things go, then we wouldn’t need any additional insight to handle AI control, because by assumption there is no problem.\n\nBut my goal is to do work now that decreases the probability and extent of trouble. And from that perspective, it is quite natural to consider alternative (more problematic) trajectories for progress, and to focus on work we could do today that would make those trajectories non-problematic.\n\nThis is useful as a hedge against possible bad outcomes — the reason to work on AI control now is the possibility that it will eventually be a serious problem. But it’s not merely a hedge.\n\nAs an analogy, suppose that I want to devise techniques to make cars safer. I work at a car company, which is currently designing our 2018 model, and I’m thinking of safety features for that car. I wouldn’t say: “it seems like no further work is needed; obviously the 2018 model will be built to incorporate all reasonable safety precautions.” The whole point of the exercise is to think of technologies that might make the car safer. 
We are imagining future cars in the interests of better understanding the safety problem.\n\nI want to stress that thinking about these unfavorable trajectories _isn’t a prediction about what AI progress will look like_. It’s a methodological strategy for finding research problems that _improve the probability that AI progress will be robustly beneficial for humanity_.\n\n### Hard values, easy values\n\nMy definition of AI control may be difficult to achieve if our values are fundamentally harder to realize than some other, simpler values. This is a problem for my statement of the AI control problem: “minimizing the gap between \\[how well AI systems can contribute to _our_ values] and \\[how well AI systems can contribute to _other_ values].”\n\nMy statement of the control problem is only really meaningful because there are instrumental subgoals that are shared (or are extremely similar) between many different values, which let us compare the efficacy with which agents pursue those different values. Performance on these very similar subgoals should be used as the performance metric when interpreting my definition of the AI control problem.\n\nIn fact even if we only resolved the problem for the similar-subgoals case, it would be pretty good news for AI safety. Catastrophic scenarios are mostly caused by our AI systems failing to effectively pursue [convergent instrumental subgoals](http://www.nickbostrom.com/superintelligentwill.pdf) on our behalf, and these subgoals are by definition shared by a broad range of values.\n\n### Scalable with how much work?\n\nVery few algorithms can be _literally_ applied without modification to a radically different future setting. Obviously our goal is to minimize the work that would be needed to adapt a given control approach to improved future techniques. For example, a technique that increased the difficulty of deploying future AI systems by 1%, or which required a constant amount of work to apply existing AI techniques to new problems, would seem great.\n\nThe real problem is when scaling an AI control technique relies on future people discovering as-yet-unknown insights, doing an unknown and potentially large amount of additional work, or doing an amount of additional work that is large relative to the total quantity of AI research.\n\n3. Examples of non-scalable control techniques\n=============================================\n\nExample: reinforcement learning\n-------------------------------\n\n\nConsider a particularly simple technique for AI control. Start with a reinforcement learner. Give the user a button that controls the reinforcement learner’s reward. The user can ask the reinforcement learner to perform tasks, and can provide a reward whenever it succeeds. The learner will hopefully learn how to interpret and satisfy these requests.\n\n
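As a concrete illustration, here is a minimal Python sketch of this setup. The learner, the action names, and the simulated button press are all invented for the sketch rather than prescribed by the proposal above; the only point is that the user’s button is the learner’s entire objective.\n\n```python\nimport random\nfrom collections import defaultdict\n\nclass RewardButtonLearner:\n    # Toy learner whose only objective is the reward the user provides.\n    def __init__(self, actions, epsilon=0.1):\n        self.actions = actions\n        self.epsilon = epsilon           # exploration rate\n        self.value = defaultdict(float)  # estimated reward per (request, action)\n        self.count = defaultdict(int)\n\n    def act(self, request):\n        # Explore occasionally; otherwise pick the action with the highest\n        # estimated reward for this request.\n        if random.random() < self.epsilon:\n            return random.choice(self.actions)\n        return max(self.actions, key=lambda a: self.value[(request, a)])\n\n    def update(self, request, action, reward):\n        # Incremental average of the rewards the user has provided so far.\n        key = (request, action)\n        self.count[key] += 1\n        self.value[key] += (reward - self.value[key]) / self.count[key]\n\ndef button(request, action):\n    # Stand-in for the user pressing the reward button when the task succeeds.\n    return 1.0 if action == "do_" + request else 0.0\n\nlearner = RewardButtonLearner(actions=["do_fetch", "do_clean", "wait"])\nfor _ in range(1000):\n    request = random.choice(["fetch", "clean"])\n    action = learner.act(request)\n    learner.update(request, action, button(request, action))\n```\n\nNothing in this loop refers to what the user actually wanted, only to the button presses; that is the property the bullet points below turn on.\n\n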
This would be a fine approach to AI control at the moment (though in fact we can’t build systems clever enough that it’s worth doing). However, this approach will work less well as AI systems improve:\n\n- If the reinforcement learner is more competent than the user in a particular domain, then the user may not be able to provide good evaluations of the learner’s behavior. For example, if the learner is a world-class party planner, it will be incentivized to produce a party plan that _looks good to the user_ rather than one that would actually _be good_.\n- All the reinforcement learner really cares about is the reward signal, not the attitude of the user. So the learner is liable to manipulate, deceive, or threaten the user into receiving additional reward.\n\nBy contrast, an increasingly powerful reinforcement learner would be much more effective at optimizing for its own physical security (in order to ensure that it continues to receive a high reward). For example, if its income depended on party-planning, it would apply its full party-planning abilities towards throwing a profit-maximizing party.\n\nSo I would say this control technique is not scalable. If we want to use this technique, we will either need to accept degraded performance on the AI control problem, or (more likely) continue to do additional work as AI capabilities improve in order to ensure that control “keeps up.”\n\n## Example: imitation\n\n\nSuppose that I want to train an AI to drive a car. A very simple procedure would be to copy human driving: have an expert drive a car, record the sensor readings and the expert’s actions, and train a model which maps a sequence of sensor readings to predictions of the human’s actions. We can then use this model to control a car by doing what the human is predicted to do.\n\n
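To make the procedure concrete, here is a minimal behavioral-cloning sketch in Python. The data shapes, the randomly generated stand-in logs, and the deliberately crude linear model are all invented for illustration rather than taken from the discussion above.\n\n```python\nimport numpy as np\n\n# 1. Record demonstrations: one row of sensor readings per time step, paired\n#    with the expert action at that step (say, [steering, throttle]).\nsensors = np.random.randn(10000, 32)        # stand-in for logged sensor data\nexpert_actions = np.random.randn(10000, 2)  # stand-in for logged expert actions\n\n# 2. Fit a model that predicts the expert action from the sensor readings.\n#    Here: ridge-regularized linear least squares, the simplest possible choice.\nX = np.hstack([sensors, np.ones((len(sensors), 1))])  # add a bias column\nreg = 1e-3 * np.eye(X.shape[1])\nweights = np.linalg.solve(X.T @ X + reg, X.T @ expert_actions)\n\n# 3. Control the car by doing whatever the expert is predicted to do.\ndef imitation_policy(sensor_reading):\n    features = np.append(sensor_reading, 1.0)\n    return features @ weights  # predicted [steering, throttle]\n\nprint(imitation_policy(np.random.randn(32)))\n```\n\nEven a perfect fit of this model just reproduces the demonstrated behavior, which is the limitation the rest of this example turns on.\n\n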
Suppose for the sake of argument that this actually resulted in OK driving. (In fact it has a number of rather serious problems.)\n\nNo matter how good our learning system is, this procedure will never generate substantially superhuman driving. For example, even if a human is expecting the car in front of them to brake, it will still take them hundreds of milliseconds to actually respond to the brake lights. So a system trained to imitate human behavior will add a gratuitous delay before it responds to observed brake lights.\n\nUsing sophisticated learning systems, we could likely achieve better performance by specifying the _goal_ of driving (don’t crash, have a smooth ride, get to the destination quickly) and allowing the system to devise its own policies in order to achieve that goal.\n\nSo this approach to teaching a car to drive is also not scalable. As AI improves, systems trained using this technique will fall behind.\n\n4. Alternatives\n===============\n\nI think that building practical and scalable control systems is an especially good goal for organizing work on AI control. But there are many other possibilities (in addition to just playing it by ear). Here are a few alternatives that seem salient to me:\n\n### Pursue a long-term vision\n\n\nThe [MIRI research agenda](https://intelligence.org/files/TechnicalAgenda.pdf) is built around a particular vision for how sophisticated AI might be aligned with human interests.\n\nOnce we have a vision in mind, we can search for concrete problems that would need to be solved to realize this vision. This is another perfectly good source of research projects that might help with control.\n\nThe main reason I’m less keen on this approach is that it puts a lot of weight on your long-term vision. Most researchers I know who object to MIRI’s research agenda do so because they don’t think that the long-term vision is especially plausible. If you depart at that stage, we don’t really have any good “rules of the game” that can arbitrate the debate. Moreover, even if MIRI succeeds spectacularly at their research agenda, it won’t really alleviate these concerns.\n\nSo it seems like if we want to take this route, a lot of the work is being done by the first step of the problem where we identify the long-term picture and the necessary ingredients. Given that that’s where a lot of the actual work is getting done, I suspect it’s also where most of the effort should go. But this cuts against “pursuing a particular long-term vision” as an organizing goal for research.\n\nThis isn’t entirely fair, because pursuing a vision also contributes to testing the feasibility of that vision. I am more sympathetic to the kind of “pursuit” that also constitutes “testing,” for this reason.\n\n### The steering problem\n\nI previously suggested that researchers in AI control try to answer the question: “Using black-box access to human-level cognitive abilities, can we write a program that is as useful as a well-motivated human with those abilities?”\n\nI’ve found this perspective to be very useful in my own thinking, and I continue to recommend it as a question to think about.\n\nBut it suffers from (1) a lack of precision, and (2) a reliance on well-defined black-box capabilities that may not match the actual capabilities we develop.\n\nBy working with the capabilities available today, we can largely sidestep these issues. Working with existing capabilities gives us an extremely precise and internally detailed model system.\n\nWhen we think about scalability, and in particular about the kinds of progress that we should be able to cope with, we start to run into some of the same difficulties that afflict the steering problem. But these difficulties look much less precise, and much more closely wedded to actual AI progress in a way that makes it easier to agree about what kinds of extrapolation are reasonable and which are not.\n\n### Model problems\n\n\nAnother approach would be to construct toy domains in which the control problem, or some problem which we would judge to be analogous, is already difficult.\n\n**Hard cases:** For example, the reinforcement learning and imitation solutions discussed in Section 3 don’t really _perfectly_ solve even the existing control problem. So we could focus on having very good solutions to the control problem in challenging domains, where human performance is worse than AI performance and where humans cannot easily evaluate the quality of a given performance.\n\nI think that this is a basically reasonable direction for research, and I doubt it would look too different from thinking about scalable AI control. It forces us to consider a very narrow range of regimes and to confront only a small range of possible problems, which I think is something of a disadvantage. It also forces us to really play for “small stakes,” since the failures of control in existing environments don’t seem like a huge deal (and can largely be resolved by ad hoc measures). I think this is another disadvantage, but it might be partly alleviated by emphasizing the fact that these problems are expected to get worse and are worth attacking in advance.\n\nThis approach has the advantage that it’s not totally dependent on any argument about “scalability,” or really any complex argument at all. It is able to focus on concrete problems that exist today, which are basically guaranteed to be real problems. That said, the argument for these problems being important or especially interesting probably _does_ rest on some kind of argument about the future.\n\nOverall, I think that focusing on hard cases is reasonable, and is a useful complement even if we want to focus on implementing scalable solutions, as long as we can trace failures of scalability back to failures of existing systems in _some_ domain.\n\n**Subproblems:** 
A different approach would be to identify problems that we think are likely to be solved as part of a successful AI control approach. Those problems might not be resolved today, even if the AI control problem is. For example, we might think that value learning will necessarily play a much larger role in future systems than it plays today. But even today we aren’t very good at value learning, and so this gives us a concrete problem to work on.\n\nThis seems basically equivalent to the “pursuing a long-term vision” point above, and has mostly the same advantages and disadvantages.\n\n**Analogies:** A final approach is to consider problems which look like they are usefully analogous to the control problem, but which are currently significantly harder. This might give us a concrete model that exposes more of the difficulties of the control problem, and it might involve fewer assumptions than picking a long-term vision.\n\nA simple example would be a game played by two AI systems. One is a reinforcement learner which has some hard-to-communicate information about what it wants done. The other is an AI assistant whose design is completely flexible, and which has significantly more computational power than the RL agent. (Alternatively or in addition, it may have some other resources that the RL agent lacks, like extra actuators.) Our goal is to specify the assistant, and provide a strategy for the RL agent, such that the RL agent can achieve its goals nearly as effectively as if it had all of the resources of the assistant.\n\nBecause we can make the AI assistant much more powerful than the RL agent, this can allow us to capture some anticipated difficulties of AI control before we can actually build AI systems that are much more powerful than their human users.\n\nI think that building this kind of analogy might be very useful for AI control, and it seems quite worthwhile. I think it is a substantially different approach than trying to work on scalable AI control, and it might turn out to be more promising.\n\nHowever, there are many additional difficulties with setting up this kind of analogy, and I think there is a good chance that those difficulties will prove fatal. I think the biggest problems are that:\n\n- There will be solutions that work in the analogy that won’t work for the real control problem. In the example above, an RL agent might pass its reward signals on to the assistant, which could use standard RL algorithms to pursue them. Or there may be difficulties that depend on the absolute capability of the assistant rather than on the difference between its capabilities and those of the human.\n- There will be many difficulties in the analogy that aren’t difficulties in real life. In the example above, it might be quite hard to build the RL system that is supposed to be a model of humans, but this isn’t really part of the AI control problem. Or an accurate model of the problem may require building systems that are actually embedded in the environment, and building such an environment may be a massive engineering challenge orthogonal to control.\n\n### Conclusion\n\n\nI think scalability is a useful concept when reasoning about AI control. I think that designing practical but scalable AI control techniques is also a promising goal for research on AI control.\n\nThis post clarified the term, provided some examples, and discussed some alternative goals.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '2',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-17 20:18:17',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'paul_ai_control'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8268',
      pageId: 'Scalable_ai_control',
      userId: 'JessicaChuan',
      edit: '7',
      type: 'newEdit',
      createdAt: '2016-03-04 00:52:09',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7236',
      pageId: 'Scalable_ai_control',
      userId: 'JessicaChuan',
      edit: '6',
      type: 'newEdit',
      createdAt: '2016-02-16 22:20:36',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7234',
      pageId: 'Scalable_ai_control',
      userId: 'JessicaChuan',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-02-16 21:59:12',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7233',
      pageId: 'Scalable_ai_control',
      userId: 'JessicaChuan',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-02-16 21:45:06',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6887',
      pageId: 'Scalable_ai_control',
      userId: 'JessicaChuan',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-02-11 22:24:47',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6201',
      pageId: 'Scalable_ai_control',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'deleteChild',
      createdAt: '2016-02-03 00:41:21',
      auxPageId: 'reinforcement_learning_linguistic_convention',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6199',
      pageId: 'Scalable_ai_control',
      userId: 'JessicaChuan',
      edit: '2',
      type: 'newChild',
      createdAt: '2016-02-03 00:41:10',
      auxPageId: 'reinforcement_learning_linguistic_convention',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6198',
      pageId: 'Scalable_ai_control',
      userId: 'JessicaChuan',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-02-03 00:32:33',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6197',
      pageId: 'Scalable_ai_control',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newAlias',
      createdAt: '2016-02-03 00:31:25',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6196',
      pageId: 'Scalable_ai_control',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newParent',
      createdAt: '2016-02-03 00:31:21',
      auxPageId: 'paul_ai_control',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6194',
      pageId: 'Scalable_ai_control',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'deleteParent',
      createdAt: '2016-02-03 00:31:15',
      auxPageId: 'implicit_consequentialism',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6192',
      pageId: 'Scalable_ai_control',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-02-03 00:30:53',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6191',
      pageId: 'Scalable_ai_control',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-02-03 00:20:01',
      auxPageId: 'implicit_consequentialism',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}