{
  localUrl: '../page/taskagi_open_problems.html',
  arbitalUrl: 'https://arbital.com/p/taskagi_open_problems',
  rawJsonUrl: '../raw/2mx.json',
  likeableId: '1565',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '4',
  dislikeCount: '0',
  likeScore: '4',
  individualLikes: [
    'AlexeiAndreev',
    'AndrewMcKnight',
    'AdeleLopez',
    'EliezerYudkowsky'
  ],
  pageId: 'taskagi_open_problems',
  edit: '19',
  editSummary: '',
  prevEdit: '18',
  currentEdit: '19',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Open subproblems in aligning a Task-based AGI',
  clickbait: 'Open research problems, especially ones we can model today, in building an AGI that can "paint all cars pink" without turning its future light cone into pink-painted cars.',
  textLength: '13745',
  alias: 'taskagi_open_problems',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-04-14 22:15:06',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2016-03-15 20:50:53',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '361',
  text: 'MIRI and related organizations have recently become more interested in trying to sponsor (technical) work on *Task AGI* subproblems.  A [6w task-based agent], aka Genie in Bostrom's lexicon, is an AGI that's meant to implement short-term goals identified to it by the users, rather than the AGI being a Bostromian "Sovereign" that engages in long-term strategic planning and self-directed, open-ended operations.\n\nA Task AGI might be safer than a Sovereign because:\n\n- It is possible to [2qq query the user] before and during task performance, if an ambiguous situation arises and is successfully identified as ambiguous.\n- The tasks are meant to be limited in scope - to be accomplishable, once and for all, within a limited space and time, using some limited amount of effort.\n- The AGI itself can potentially be limited in various ways, since it doesn't need to be as powerful as possible in order to accomplish its limited-scope goals.\n- If the users can select a valuable and *[6y pivotal]* task, identifying an adequately [2l safe] way of accomplishing this task might be simpler than [6c identifying all of human value].\n\nThis page is about open problems in Task AGI safety that we think might be ready for further technical research.\n\n# Introduction: The safe Task AGI problem\n\nA safe Task AGI or safe Genie is an agent that you can safely ask to paint all the cars on Earth pink.\n\n*Just* paint all cars pink.\n\nNot tile the whole future light cone with tiny pink-painted cars.  Not paint everything pink so as to be sure of getting everything that might possibly be a car.  Not paint cars white because white looks pink under the right color of pink light and white paint is cheaper.  Not paint cars pink by building nanotechnology that goes on self-replicating after all the cars have been painted.\n\nThe Task AGI superproblem is to formulate a design and training program for a real-world AGI that we can trust to just paint the damn cars pink.\n\nTo go into this at some greater depth, to build a safe Task AGI:\n\n• You need to be able to identify the goal itself, to the AGI, such that the AGI is then oriented on achieving that goal.   If you put a picture of a pink-painted car in front of a webcam and say "do this", all the AI has is the sensory pixel-field from the webcam.  Should it be trying to achieve more pink pixels in future webcam sensory data?  Should it be trying to make the programmer show it more pictures?  Should it be trying to make people take pictures of cars?  Assuming you can in fact identify the goal that singles out the futures to achieve, is the rest of the AI hooked up in such a way as to optimize that concept?\n\n• You need to somehow handle the *just* part of the *just paint the cars pink.*  This includes not tiling the whole future light cone with tiny pink-painted cars.  It includes not building another AI which paints the cars pink and then tiles the light cone with pink cars.  It includes not painting everything in the world pink so as to be sure of getting everything that might count as a car.  
If you're trying to make the AI have "low impact" (intuitively, prefer plans that result in fewer changes to other quantities), then "low impact" must *not* include freezing everything within reach to minimize how much it changes, or making subtle changes to people's brains so that nobody notices their cars have been painted pink.\n\n• The AI needs to not shoot people who are standing between the painter and the car, and not accidentally run them over, and not use poisonous paint even if the poisonous paint is cheaper.\n\n• The AI should have an '[2rg abort]' button which gets it to safely stop doing what it's currently doing.  This means that if the AI was in the middle of building nanomachines, the nanomachines need to also switch off when the abort button is pressed, rather than the AI itself just shutting off and leaving the nanomachines to do whatever.  Assuming we have a safe measure of "low impact", we could define an "abortable" plan as one which can, at any time, be converted relatively quickly to one that has low impact.\n\n• The AI [2vk should not want] to self-improve or control further resources beyond what is necessary to paint the cars pink, and should [2qq query the user] before trying to develop any [2qp new] technology or assimilate any new resources it does need to paint cars pink.\n\nThis is only a preliminary list of some of the requirements and use-cases for a Task AGI, but it gives some of the flavor of the problem.\n\nFurther work on some facet of the open subproblems below might proceed by:\n\n1.  Trying to explore examples of the subproblem and potential solutions within some contemporary machine learning paradigm.\n2.  Building a toy model of some facet of the subproblem, and hopefully observing some non-obvious fact that was not predicted in advance by existing researchers skilled in the art.\n3.  Doing [107 mathematical analysis] of an [107 unbounded agent] encountering or solving some facet of a subproblem, where the setup is sufficiently precise that claims about the consequences of the premise can be [1cv checked and criticized].\n\n# [2qp Conservatism]\n\n A conservative concept boundary is a boundary which is (a) relatively simple and (b) classifies as few things as possible as positive instances of the category.\n\nIf we see that 3, 5, 13, and 19 are positive instances of a category and 4, 14, and 28 are negative instances, then a *simple* boundary which separates these instances is "All odd numbers."  A *simple and conservative* boundary is "All odd numbers between 3 and 19" or "All primes between 3 and 19".  (A non-simple boundary is "Only 3, 5, 13, and 19 are members of the category.")\n\nE.g., if we imagine presenting an AI with smiling faces as instances of a goal concept to be learned, then a conservative concept boundary might lead the future AI to pursue only smiles attached to human heads, rather than tiny molecular smileyfaces (not that this necessarily solves everything).\n\nIf we imagine presenting the AI with 20 positive instances of a burrito, then a conservative boundary might lead the AI to produce a 21st burrito very similar to those.  
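\nAs a toy illustration of how this could look in code (a hypothetical sketch, assuming a small and explicitly enumerated hypothesis class; none of the code below comes from this article): among candidate boundaries that fit the labeled examples, a conservative learner prefers a simple one that also classifies as few things as possible as positive.\n
```python
# Toy sketch (hypothetical): conservative concept learning over a tiny,
# explicitly enumerated hypothesis class.  Among hypotheses consistent with
# the labels, prefer one that is simple AND classifies as few things as
# possible as positive instances.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

UNIVERSE = range(1, 30)    # toy domain over which extension sizes are scored
POSITIVES = {3, 5, 13, 19}
NEGATIVES = {4, 14, 28}

# (name, predicate, rough complexity); "complexity" is a crude stand-in for
# description length -- a real system would need a principled measure.
HYPOTHESES = [
    ("all odd numbers",               lambda n: n % 2 == 1,                   1),
    ("odd numbers between 3 and 19",  lambda n: n % 2 == 1 and 3 <= n <= 19,  2),
    ("primes between 3 and 19",       lambda n: is_prime(n) and 3 <= n <= 19, 2),
    ("exactly the positive examples", lambda n: n in POSITIVES,               9),
]

def consistent(pred):
    return all(pred(x) for x in POSITIVES) and not any(pred(x) for x in NEGATIVES)

MAX_COMPLEXITY = 5    # rules out the non-simple "memorize the examples" boundary

candidates = [h for h in HYPOTHESES if consistent(h[1]) and h[2] <= MAX_COMPLEXITY]
# Conservatism: among the simple, consistent boundaries, pick the one with the
# smallest extension (fewest members of the domain classified as positive).
name, predicate, _ = min(candidates, key=lambda h: sum(map(h[1], UNIVERSE)))
print("conservative boundary:", name)    # -> primes between 3 and 19
```\n
The burrito case is analogous: out of the simple boundaries that fit the 20 positive examples, the conservative choice is the one that generalizes least far beyond them.\n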
The hope is that this happens without, e.g., needing to explicitly present the AGI with a poisonous burrito that's labeled negative somewhere in the training data, in order to force the simplest boundary around the goal concept to be one that excludes poisonous burritos.\n\nConservative *planning* is a related problem in which the AI tries to create plans that are similar to previously whitelisted plans or to previous causal events that occur in the environment.  A conservatively planning AI, shown burritos, would try to create burritos via cooking rather than via nanotechnology, if the nanotechnology part wasn't especially necessary to accomplish the goal.\n\nDetecting and flagging non-conservative goal instances or non-conservative steps of a plan for [2qq user querying] is a related approach.\n\n([2qp Main article.])\n\n# [2pf Safe impact measure]\n\nA low-impact agent is one that's intended to avoid large bad impacts at least in part by trying to avoid all large impacts as such.\n\nSuppose we ask an agent to fill up a cauldron, and it fills the cauldron using a self-replicating robot that goes on to flood many other inhabited areas.  We could try to get the agent not to do this by letting it know that flooding inhabited areas is bad.  An alternative approach is trying to have an agent that avoids needlessly large impacts in general - there's a way to fill the cauldron that has a smaller impact, a smaller footprint, so hopefully the agent does that instead.\n\nThe hopeful notion is that while "bad impact" is a highly value-laden category with a lot of complexity and detail, the notion of "big impact" will prove to be simpler and more easily identifiable.  Then by having the agent avoid all big impacts, or check all big impacts with the user, we can avoid bad big impacts in passing.\n\nPossible gotchas and complications with this idea include, e.g., that you wouldn't want the agent to freeze the universe into stasis to minimize impact, or try to edit people's brains to avoid them noticing the effects of its actions, or carry out offsetting actions that cancel out the good effects of whatever the users were trying to do.\n\nTwo refinements of the low-impact problem are a [2rf shutdown utility function] and [2rg abortable plans].\n\n([2pf Main article.])\n\n# [4w]\n\nAn 'inductive ambiguity' is when there's more than one simple concept that fits the data, even if some of those concepts are much simpler than others, and you want to figure out *which* simple concept was intended.\n\nSuppose you're given images that show camouflaged enemy tanks and empty forests, but it so happens that the tank-containing pictures were taken on sunny days and the forest pictures were taken on cloudy days.\n
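To picture the confound concretely, here is a minimal sketch with made-up data (the numbers and feature names below are purely illustrative):\n
```python
# Toy sketch with made-up data: two very different rules both fit the
# training labels perfectly, because the tank photos happen to be brighter.
# Each image is summarized as (has_tank_shape, mean_brightness, label).
TRAIN = [
    (True,  0.81, "tank"),   (True,  0.74, "tank"),   (True,  0.90, "tank"),
    (False, 0.35, "forest"), (False, 0.28, "forest"), (False, 0.41, "forest"),
]

def rule_tank_shape(shape, brightness):
    return "tank" if shape else "forest"

def rule_brightness(shape, brightness):
    return "tank" if brightness > 0.5 else "forest"

for name, rule in [("tank-shape rule", rule_tank_shape), ("brightness rule", rule_brightness)]:
    accuracy = sum(rule(s, b) == label for s, b, label in TRAIN) / len(TRAIN)
    print(name, accuracy)    # both rules score 1.0 on this training set
```\n
Both rules separate the training set perfectly; they only come apart on new cases, such as a tank photographed on a cloudy day.\n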
Given the training data, the key concept the user intended might be "camouflaged tanks", or "sunny days", or "pixel fields with brighter illumination levels".\n\nThe last concept is by far the simplest, but rather than just assume the simplest explanation is correct (has most of the probability mass), we want the algorithm (or AGI) to detect that there's more than one simple-ish boundary that might separate the data, and [2qq check with the user] about *which* boundary was intended to be learned.\n\n([4w Main article.])\n\n# [2r8 Mild optimization]\n\n"Mild optimization" or "soft optimization" is when, if you ask the [6w Task AGI] to paint one car pink, it just paints one car pink and then stops, rather than tiling the galaxies with pink-painted cars, because it's *not optimizing that hard.*\n\nThis is related to, but distinct from, notions like "[2pf low impact]".  E.g., a low-impact AGI might try to paint one car pink while minimizing its other footprint or how many other things changed, but it would be trying *as hard as possible* to minimize that impact and drive it down *as close to zero* as possible, which might come with its own set of pathologies.  What we want instead is for the AGI to try to paint one car pink while minimizing its footprint, and then, when that's being done pretty well, say "Okay done" and stop.\n\nThis is distinct from [eu_satisficer satisficing expected utility] because, e.g., rewriting yourself as an expected utility maximizer might also satisfice expected utility - there's no upper limit on how hard a satisficer approves of optimizing, so a satisficer is not [1fx reflectively stable].\n\nThe open problem with mild optimization is to describe a form of mild optimization that (a) captures what we mean by "not trying *so hard* as to seek out every single loophole in a definition of low impact" and (b) is [1fx reflectively stable] and doesn't approve, e.g., the construction of environmental subagents that optimize harder.\n\n# [2s0 Look where I'm pointing, not at my finger]\n\nSuppose we're trying to give a [6w Task AGI] the task, "Give me a strawberry".  User1 wants to identify their intended category of strawberries by waving some strawberries and some non-strawberries in front of the AI's webcam, and User2 in the control room will press a button to indicate which of these objects are strawberries.\n
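Concretely, the only training signal the AI receives in this setup is a list of pairs - the observed features of each presented object, and whether User2 pressed the button.  A hypothetical sketch of that record (all feature names made up):\n
```python
# Toy sketch (hypothetical): the record the AI accumulates during training.
# Each entry pairs the presented object's observed features with whether
# User2 pressed the button.  Note that "strawberry-ness" never appears in
# the log directly -- only observed features and button presses.
training_log = [
    ({"red": True,  "heart_shaped": True,  "leafy_top": True},  True),   # a strawberry
    ({"red": True,  "heart_shaped": False, "leafy_top": False}, False),  # an apple
    ({"red": False, "heart_shaped": False, "leafy_top": False}, False),  # a tennis ball
    ({"red": True,  "heart_shaped": True,  "leafy_top": True},  True),   # another strawberry
]
```\n
Everything the AI can learn about "strawberry" has to be inferred from entries like these.\n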
Later, after the training phase, the AI itself will be responsible for selecting objects that might be potential strawberries, and User2 will go on pressing the button to give feedback on these.\n\n![strawberry diagram](http://www.gliffy.com/go/publish/image/10423843/L.png)\n\nThe "look where I'm pointing, not at my finger" problem is getting the AI to focus on the strawberries rather than on User2 - the concepts "strawberries" and "events that make User2 press the button" are very different goals even though they both classify the training cases equally well; an AI might pursue the latter goal by psychologically analyzing User2 and figuring out how to get them to press the button using non-strawberry methods.\n\nOne way of pursuing this might be to try to zero in on particular nodes inside the huge causal lattice that ultimately produces the AI's sensory data, and try to force the goal concept to be about a simple or direct relation between the "potential strawberry" node (the objects waved in front of the webcam) and the observed button values, without this relation being allowed to go through the User2 node.\n\n![strawberry diagram](http://www.gliffy.com/go/publish/image/10424137/L.png)\n\nSee also the related problem of "[36w]".\n\n# More open problems\n\nThis page is a work in progress.  A longer list of Task AGI open subproblems:\n\n- [2pf Low impact]\n  - [2rf Shutdown utility function]\n    - [2rg Abortable plans]\n- [2qp Conservatism]\n- [2r8 Mild optimization]\n  - [2x3 Aversion of instrumental self-improvement goal]\n- [4w Ambiguity identification]\n- [1b7 Utility indifference]\n  - [2xd Shutdown button]\n- Task identification\n  - [5c Ontology identification]\n  - [36w]\n    - [2s0]\n- Hooking up a directable optimization to an identified task\n- Training protocols\n  - Which things do you think can be well-identified by what kind of labeled datasets plus queried ambiguities plus conservatism, and what pivotal acts can you do with combinations of them plus assumed other abilities?\n- [36k Faithful simulation]\n- Safe imitation for [1w4 act-based agents]\n  - Generative imitation with a probability of the human doing that act, guaranteed not to exhibit hindsight bias\n  - Typicality (related to conservatism)\n- Plan transparency\n  - Epistemic-only hypotheticals (when you ask how, in principle, the AI might paint cars pink, it doesn't run a planning subprocess that plans to persuade the actual programmers to paint things pink).\n- [1g4 Epistemic exclusion]\n  - [102 Behaviorism]\n\n(...more, this is a page in progress) ',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '3',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    'task_agi'
  ],
  commentIds: [
    '2nb',
    '2nc',
    '2nd',
    '2nh',
    '2ql',
    '5wr',
    '89b'
  ],
  questionIds: [],
  tagIds: [
    'value_alignment_open_problem',
    'work_in_progress_meta_tag'
  ],
  relatedIds: [
    'low_impact',
    'conservative_concept',
    'soft_optimizer',
    'pointing_finger',
    'informed_oversight',
    'safe_training_for_imitators',
    'avert_instrumental_pressure',
    'shutdown_problem',
    'faithful_simulation',
    'identify_causal_goals',
    'corrigibility',
    'inductive_ambiguity',
    'nonadversarial'
  ],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9309',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '19',
      type: 'newEdit',
      createdAt: '2016-04-14 22:15:06',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9308',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '18',
      type: 'newEdit',
      createdAt: '2016-04-14 22:00:41',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9295',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '17',
      type: 'newEdit',
      createdAt: '2016-04-14 02:40:12',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9284',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '16',
      type: 'newEdit',
      createdAt: '2016-04-14 00:22:58',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9191',
      pageId: 'taskagi_open_problems',
      userId: 'JessicaTaylor',
      edit: '0',
      type: 'deleteUsedAsTag',
      createdAt: '2016-04-01 05:58:26',
      auxPageId: 'selective_similarity_metric',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9189',
      pageId: 'taskagi_open_problems',
      userId: 'JessicaTaylor',
      edit: '0',
      type: 'deleteUsedAsTag',
      createdAt: '2016-04-01 05:57:44',
      auxPageId: 'reliable_prediction',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9115',
      pageId: 'taskagi_open_problems',
      userId: 'JessicaTaylor',
      edit: '0',
      type: 'deleteChild',
      createdAt: '2016-03-27 06:00:29',
      auxPageId: 'informed_oversight',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9113',
      pageId: 'taskagi_open_problems',
      userId: 'JessicaTaylor',
      edit: '0',
      type: 'deleteChild',
      createdAt: '2016-03-27 05:59:46',
      auxPageId: 'safe_training_for_imitators',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9111',
      pageId: 'taskagi_open_problems',
      userId: 'JessicaTaylor',
      edit: '0',
      type: 'deleteChild',
      createdAt: '2016-03-27 05:59:32',
      auxPageId: 'reliable_prediction',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9109',
      pageId: 'taskagi_open_problems',
      userId: 'JessicaTaylor',
      edit: '0',
      type: 'deleteChild',
      createdAt: '2016-03-27 05:59:19',
      auxPageId: 'selective_similarity_metric',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9080',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '15',
      type: 'newEdit',
      createdAt: '2016-03-26 19:41:21',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9024',
      pageId: 'taskagi_open_problems',
      userId: 'JessicaTaylor',
      edit: '14',
      type: 'newChild',
      createdAt: '2016-03-24 04:12:53',
      auxPageId: 'selective_similarity_metric',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9011',
      pageId: 'taskagi_open_problems',
      userId: 'JessicaTaylor',
      edit: '14',
      type: 'newChild',
      createdAt: '2016-03-24 01:24:32',
      auxPageId: 'reliable_prediction',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9004',
      pageId: 'taskagi_open_problems',
      userId: 'JessicaTaylor',
      edit: '14',
      type: 'newChild',
      createdAt: '2016-03-24 00:36:03',
      auxPageId: 'informed_oversight',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9002',
      pageId: 'taskagi_open_problems',
      userId: 'JessicaTaylor',
      edit: '14',
      type: 'newChild',
      createdAt: '2016-03-24 00:34:14',
      auxPageId: 'safe_training_for_imitators',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8953',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '14',
      type: 'newEdit',
      createdAt: '2016-03-23 20:05:25',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8949',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '13',
      type: 'newUsedAsTag',
      createdAt: '2016-03-23 20:04:55',
      auxPageId: 'pointing_finger',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8922',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '13',
      type: 'newEdit',
      createdAt: '2016-03-23 01:13:38',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8914',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '12',
      type: 'newEdit',
      createdAt: '2016-03-22 19:43:24',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8913',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '11',
      type: 'newEdit',
      createdAt: '2016-03-22 19:37:27',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8911',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '10',
      type: 'newEdit',
      createdAt: '2016-03-22 19:28:15',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8909',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '9',
      type: 'newEdit',
      createdAt: '2016-03-22 19:25:43',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8861',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newUsedAsTag',
      createdAt: '2016-03-21 21:21:40',
      auxPageId: 'soft_optimizer',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8853',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newEdit',
      createdAt: '2016-03-20 03:42:31',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8845',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newUsedAsTag',
      createdAt: '2016-03-20 02:37:13',
      auxPageId: 'inductive_ambiguity',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8828',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2016-03-20 01:42:12',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8812',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newUsedAsTag',
      createdAt: '2016-03-19 23:53:29',
      auxPageId: 'conservative_concept',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8736',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2016-03-19 00:33:47',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8675',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newUsedAsTag',
      createdAt: '2016-03-18 20:54:11',
      auxPageId: 'low_impact',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8622',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-03-16 01:54:43',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8621',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-03-16 01:51:42',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8620',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-03-16 01:51:23',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8619',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteTag',
      createdAt: '2016-03-16 01:48:51',
      auxPageId: 'stub_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8617',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-03-16 01:48:42',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8605',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newTag',
      createdAt: '2016-03-15 20:59:06',
      auxPageId: 'value_alignment_open_problem',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8602',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteUsedAsTag',
      createdAt: '2016-03-15 20:59:03',
      auxPageId: 'taskagi_open_problems',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8603',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteTag',
      createdAt: '2016-03-15 20:59:03',
      auxPageId: 'taskagi_open_problems',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8601',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-03-15 20:50:53',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8600',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2016-03-15 20:50:48',
      auxPageId: 'work_in_progress_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8598',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2016-03-15 20:49:45',
      auxPageId: 'stub_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8596',
      pageId: 'taskagi_open_problems',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-03-15 20:26:49',
      auxPageId: 'task_agi',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}