{ localUrl: '../page/mindcrime.html', arbitalUrl: 'https://arbital.com/p/mindcrime', rawJsonUrl: '../raw/6v.json', likeableId: '2381', likeableType: 'page', myLikeValue: '0', likeCount: '3', dislikeCount: '0', likeScore: '3', individualLikes: [ 'AlexeiAndreev', 'EliezerYudkowsky', 'MarianAndrecki' ], pageId: 'mindcrime', edit: '17', editSummary: '', prevEdit: '16', currentEdit: '17', wasPublished: 'true', type: 'wiki', title: 'Mindcrime', clickbait: 'Might a machine intelligence contain vast numbers of unhappy conscious subprocesses?', textLength: '16541', alias: 'mindcrime', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2016-12-29 06:36:44', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2015-06-09 22:30:46', seeDomainId: '0', editDomainId: 'EliezerYudkowsky', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '23', isEditorComment: 'false', isApprovedComment: 'true', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '1632', text: '[summary(Gloss): A huge amount of harm could occur if a [2c machine intelligence] turns out to contain lots of [18j conscious subprograms] enduring poor living conditions. One worry is that this might happen if an AI models humans in too much detail.]\n\n[summary: 'Mindcrime' is [18k]'s suggested term for the moral catastrophe that occurs if a [2c machine intelligence] contains enormous numbers of [18j conscious beings trapped inside its code].\n\nThis could happen as a result of self-awareness being a natural property of computationally efficient subprocesses. Perhaps more worryingly, the best model of a person may be a person itself, even if they're not the same person. 
This means that AIs trying to model humans might be unusually likely to create hypotheses and simulations that are themselves conscious.]\n\n[summary(Technical): 'Mindcrime' is [18k]'s term for mind designs producing moral harm by their internal operation, particularly through embedding sentient subprocesses.\n\nOne worry is that mindcrime might arise in the course of an agent trying to predict or manipulate the humans in its environment, since this implies a pressure to model the humans in faithful detail. This is especially concerning since several value alignment proposals would explicitly call for modeling humans in detail, e.g. [3c5 extrapolated volition] and [44z imitation-based agents].\n\nAnother problem scenario arises if the natural design for an efficient subprocess involves independent consciousness (though it is a separate question whether this optimal design involves pain or suffering).\n\nComputationally powerful agents might contain vast numbers of trapped conscious subprocesses, qualifying this as a [ global catastrophic risk].]\n\n"Mindcrime" is [18k]'s suggested term for scenarios in which an AI's cognitive processes are intrinsically doing moral harm, for example because the AI contains trillions of suffering conscious beings inside it.\n\nWays in which this might happen:\n\n- Problem of sapient models (of humans): Occurs naturally if the best predictive model for humans in the environment involves models that are detailed enough to be people themselves.\n- Problem of sapient models (of civilizations): Occurs naturally if the agent tries to simulate, e.g., alien civilizations that might be simulating it, in enough detail to include conscious simulations of the aliens.\n- Problem of sapient subsystems: Occurs naturally if the most efficient design for some cognitive subsystems involves creating subagents that are self-reflective, or have some other property leading to consciousness or personhood.\n- Problem of sapient self-models: If the AI is conscious 
or possible future versions of the AI are conscious, it might run and terminate a large number of conscious-self models in the course of considering possible self-modifications.\n\n# Problem of sapient models (of humans):\n\nAn [10k instrumental pressure] to produce high-fidelity predictions of human beings (or to predict [ decision counterfactuals] about them, or to [ search] for events that lead to particular consequences, etcetera) may lead the AI to run computations that are unusually likely to possess personhood.\n\nAn [107 unrealistic] example of this would be [11w], where predictions are made by means that include running many possible simulations of the environment and seeing which ones best correspond to reality. Among current machine learning algorithms, particle filters and Monte Carlo algorithms similarly involve running many possible simulated versions of a system.\n\nIt's possible that an AI advanced enough to have successfully arrived at detailed models of human intelligence would usually also be advanced enough that it never needed a predictive/searchable model that engaged in brute-force simulation of those humans. (Consider, e.g., that there will usually be many possible settings of a variable inside a model, and an efficient model might manipulate data representing a probability distribution over those settings, rather than ever considering one exact, specific human in toto.)\n\nThis, however, doesn't make it certain that no mindcrime will occur. It may not take exact, faithful simulation of specific humans to create a conscious model. An efficient model of a (spread of possibilities for a) human may still contain *enough* computations that resemble a person *enough* to create consciousness, or whatever other properties may be deserving of personhood. 
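Particle filters make the "many simulated versions" pattern concrete. As a minimal sketch (illustrative code, not from this page; the function and variable names are my own), a bootstrap particle filter predicts a noisy one-dimensional random walk by stepping a population of simulated copies of the system forward and resampling the copies that best correspond to reality:

```python
# Illustrative sketch: a minimal bootstrap particle filter.
# The structural point is that prediction works by running many
# simulated copies ("particles") of the system and keeping the
# copies that best match observation.
import math
import random

def particle_filter(observations, n_particles=1000, noise=1.0):
    # Each particle is one complete simulated version of the hidden state.
    particles = [0.0] * n_particles
    for obs in observations:
        # Step every simulated copy forward under the process model.
        particles = [p + random.gauss(0.0, noise) for p in particles]
        # Weight each copy by how well it matches the real observation.
        weights = [math.exp(-0.5 * ((obs - p) / noise) ** 2) for p in particles]
        # Resample: copies that correspond best to reality are duplicated,
        # the rest are discarded.
        particles = random.choices(particles, weights=weights, k=n_particles)
    # Point estimate: average over the surviving simulated copies.
    return sum(particles) / n_particles

random.seed(0)
estimate = particle_filter([0.5, 1.0, 1.5, 2.0])
```

Note that even in this toy, every particle is a full (if crude) simulated history of the system; scaled up to models of humans, that is exactly the worrying feature.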
\n\nJust as it almost certainly isn't necessary to go all the way down to the neural level to create a sapient being, it may be that even with some parts of a mind considered abstractly, the remainder would be computed in enough detail to imply consciousness, sapience, personhood, etcetera.\n\nThe problem of sapient models is not to be confused with [ Simulation Hypothesis] issues. An efficient model of a human need not have subjective experience indistinguishable from that of the human (although it will be a model *of* a person who doesn't believe themselves to be a model). The problem occurs if the model *is a person*, not if the model is *the same person* as its subject, and the latter possibility plays no role in the implication of moral harm.\n\nBesides problems that are directly or obviously about modeling people, many other practical problems and questions can benefit from modeling other minds - e.g., reading the directions on a toaster oven in order to discern the intent of the mind that was trying to communicate how to use a toaster. Thus, mindcrime might result from a sufficiently powerful AI trying to solve very mundane problems.\n\n# Problem of sapient models (of civilizations)\n\nA separate route to mindcrime comes from an advanced agent considering, in sufficient detail, the possible origins and futures of intelligent life on other worlds. (Imagine that you were suddenly told that this version of you was actually embedded in a superintelligence that was imagining how life might evolve on a place like Earth, and that your subprocess was not producing sufficiently valuable information and was about to be shut down. You would probably be annoyed! 
We should try not to annoy other people in this way.)\n\nThree possible origins of a [10g convergent instrumental pressure] to consider intelligent civilizations in great detail:\n\n- Assigning sufficient probability to the existence of non-obvious extraterrestrial intelligences in Earth's vicinity, perhaps due to considering the [ Fermi Paradox].\n- [ Naturalistic induction], combined with the AI considering the hypothesis that it is in a simulated environment.\n- [ Logical decision theories] and utility functions that care about the consequences of the AI's decisions via instances of the AI's reference class that could be embedded inside alien simulations.\n\nWith respect to the latter two possibilities, note that the AI does not need to be considering possibilities in which the whole Earth as we know it is a simulation. The AI only needs to consider that, among the possible explanations of the AI's current sense data and internal data, there are scenarios in which the AI is embedded in some world other than the most 'obvious' one implied by the sense data. See also [5j] for a related hazard of the AI considering possibilities in which it is being simulated.\n\n([2] has advocated that we shouldn't let any AI short of *extreme* levels of safety and robustness assurance consider distant civilizations in lots of detail in any case, since this means our AI might embed (a model of) a hostile superintelligence.)\n\n# Problem of sapient subsystems:\n\nIt's possible that the most efficient system for, say, allocating memory on a local cluster, constitutes a complete reflective agent with a self-model. 
Or that some of the most efficient designs for subprocesses of an AI, in general, happen to have whatever properties lead up to consciousness or whatever other properties are important to personhood.\n\nThis might possibly constitute a relatively less severe moral catastrophe, if the subsystems are sentient but [ lack a reinforcement-based pleasure/pain architecture] (since the latter is not obviously a property of the most efficient subagents). In this case, there might be large numbers of conscious beings embedded inside the AI and occasionally dying as they are replaced, but they would not be suffering. It is nonetheless the sort of scenario that many of us would prefer to avoid.\n\n# Problem of sapient self-models:\n\nThe AI's models of *itself*, or of other AIs it could possibly build, might happen to be conscious or have other properties deserving of personhood. This is worth considering as a separate possibility from building a conscious or personhood-deserving AI ourselves, when [ we didn't mean to do so], because of these two additional properties:\n\n- Even if the AI's current design is not conscious or personhood-deserving, the current AI might consider possible future versions or subagent designs that would be conscious, and those considerations might themselves be conscious.\n - This means that even if the AI's current version doesn't seem like it has key personhood properties on its own - that we've successfully created the AI itself as a nonperson - we still need to worry about other conscious AIs being embedded into it.\n- The AI might create, run, and terminate very large numbers of potential self-models.\n - Even if we consider tolerable the potential moral harm of creating *one* conscious AI (e.g. 
the AI lacks all of the conditions that a responsible parent would want to ensure when creating a new intelligent species, but it's just one sapient being so it's okay to do that in order to save the world), we might not want to take on the moral harm of creating *trillions* of evanescent, swiftly erased conscious beings.\n\n# Difficulties\n\nTrying to consider these issues is complicated by:\n\n- [ Philosophical uncertainty] about what properties are constitutive of consciousness and which computer programs have them;\n- [ Moral uncertainty] about what ([ idealized] versions of) (any particular person's) morality would consider to be the key properties of personhood;\n- Our present-day uncertainty about what efficient models in advanced agents would look like.\n\nIt'd help if we knew the answers to these questions, but the fact that we don't know doesn't mean we can thereby conclude that any particular model is not a person. (This would be some mix of [ argumentum ad ignorantiam] and [ availability bias] making us think that a scenario is unlikely when it is hard to visualize.) In the limit of infinite computing power, the epistemically best models of humans would almost certainly involve simulating many possible versions of them; superintelligences would have [ very large amounts of computing power] and we don't know at what point we come close enough to this [ limiting property] to cross the threshold.\n\n## Scope of potential disaster\n\nThe prospect of mindcrime is an especially alarming possibility because sufficiently advanced agents, *especially* if they are using computationally efficient models, might consider *very large numbers* of hypothetical possibilities that would count as people. There's no limit that says that if there are seven billion people, an agent will run at most seven billion models; the agent might be considering many possibilities per individual human. 
This would not be an [ astronomical disaster] since it would not (by hypothesis) wipe out our posterity and our intergalactic future, but it could be a disaster orders of magnitude larger than the Holocaust, the Mongol Conquest, the Middle Ages, or all human tragedy to date.\n\n## Development-order issue\n\nIf we ask an AI to predict what we would say if we had a thousand years to think about the problem of defining personhood or think about which causal processes are 'conscious', this seems unusually likely to cause the AI to commit mindcrime in the course of answering the question. Even asking the AI to think abstractly about the problem of consciousness, or predict by abstract reasoning what humans might say about it, seems unusually likely to result in mindcrime. There thus exists a [ development order issue] preventing us from asking a Friendly AI to solve the problem for us, since to file this request safely and without committing mindcrime, we would need the request to already have been completed.\n\nThe prospect of enormous-scale disaster militates against 'temporarily' tolerating mindcrime inside a system, while, e.g., an [ extrapolated-volition] or [ approval-based] agent tries to compute the code or design of a non-mindcriminal agent. Depending on the agent's efficiency, and secondarily on its computational limits, a tremendous amount of moral harm might be done during the 'temporary' process of computing an answer.\n\n## Weirdness\n\nLiterally nobody outside of MIRI or FHI ever talks about this problem.\n\n# Nonperson predicates\n\nA [1fv nonperson predicate] is an [ effective] test that we, or an AI, can use to determine that some computer program is definitely *not* a person. In principle, a nonperson predicate needs only two possible outputs, "Don't know" and "Definitely not a person". 
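This asymmetric, two-output interface can be sketched as follows. (A toy illustration: the predicates and names here are hypothetical, and the size-bound rule is an assumption made purely for the example, not a claim about what actually suffices for nonpersonhood.)

```python
# Illustrative sketch (all names hypothetical): a nonperson predicate
# may return only "definitely not a person" or "don't know" -- it must
# never clear a program that might be a person.
from enum import Enum

class Verdict(Enum):
    DEFINITELY_NOT_A_PERSON = "definitely not a person"
    DONT_KNOW = "don't know"

def trivial_predicate(program):
    # The degenerate-but-safe predicate: label everything "don't know".
    return Verdict.DONT_KNOW

def tiny_program_predicate(program, max_states=10**6):
    # Hypothetical example predicate: assume, purely for illustration,
    # that a program with very few internal states cannot be a person.
    if program.get("state_count", float("inf")) < max_states:
        return Verdict.DEFINITELY_NOT_A_PERSON
    return Verdict.DONT_KNOW

def any_clears(program, predicates):
    # Conservative composition: a program is cleared only if at least
    # one predicate clears it; otherwise the answer stays "don't know".
    for predicate in predicates:
        if predicate(program) is Verdict.DEFINITELY_NOT_A_PERSON:
            return Verdict.DEFINITELY_NOT_A_PERSON
    return Verdict.DONT_KNOW

preds = [trivial_predicate, tiny_program_predicate]
small = any_clears({"state_count": 10}, preds)       # cleared by size bound
big = any_clears({"state_count": 10**9}, preds)      # stays "don't know"
```

The one-sided interface is what makes such predicates composable: adding more predicates can only clear more programs, never mistakenly clear fewer, so the burden falls entirely on each individual predicate being genuinely sound.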
It's acceptable for many actually-nonperson programs to be labeled "don't know", so long as no people are labeled "definitely not a person".\n\nIf the above was the only requirement, one simple nonperson predicate would be to label everything "don't know". The implicit difficulty is that the nonperson predicate must also pass some programs of high complexity that do things like "acceptably model humans" or "acceptably model future versions of the AI".\n\nBesides addressing mindcrime scenarios, Yudkowsky's [original proposal](http://lesswrong.com/lw/x4/nonperson_predicates/) was also aimed at knowing that the AI design itself was not conscious, or not a person.\n\nIt seems likely to be very hard to find a good nonperson predicate:\n\n- Not all philosophical confusions and computational difficulties are averted by asking for a partial list of unconscious programs instead of a total list of conscious programs. Even if we don't know which properties are sufficient, we'd need to know something solid about properties that are necessary for consciousness or sufficient for nonpersonhood.\n- We can't pass once-and-for-all any class of programs that's Turing-complete. We can't say once and for all that it's safe to model gravitational interactions in a solar system, if enormous gravitational systems could encode computers that encode people.\n- The [42] problem seems particularly worrisome here. If we block off some options for modeling humans directly, the *next best* option is unusually likely to be conscious. Even if we rely on a whitelist rather than a blacklist, this may lead to a whitelisted "gravitational model" that secretly encodes a human, and so on.\n\n# Research avenues\n\n+ [102 Behaviorism]: Try to create a [5b3 limited AI] that does not model other minds or possibly even itself, except using some narrow class of agent models that we are pretty sure will not be sentient. 
This avenue is potentially motivated for other reasons as well, such as avoiding [5j probable environment hacking] and averting [ programmer manipulation].\n\n+ Try to define a nonperson predicate that whitelists enough programs to carry out some [6y pivotal achievement].\n\n+ Try for an AI that can bootstrap our understanding of consciousness and tell us about what we would define as a person, while committing a relatively small amount of mindcrime, with all computed possible-people being stored rather than discarded, and the modeled agents being entirely happy, mostly happy, or non-suffering. E.g., put a happy person at the center of the approval-directed agent, and try to oversee the AI's algorithms and ask it not to use Monte Carlo simulations if possible.\n\n+ Ignore the problem in all pre-interstellar stages because it's still relatively small compared to astronomical stakes and therefore not worth significant losses in success probability. (This may [ backfire] under some versions of the Simulation Hypothesis.)\n\n+ Try to [112 finish] the philosophical problem of understanding which causal processes experience sapience (or are otherwise objects of ethical value), in the next couple of decades, to sufficient detail that it can be crisply stated to an AI, with sufficiently complete coverage that it's not subject to the [42] problem.', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '6', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '2016-02-26 20:11:53', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this 
page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'EliezerYudkowsky', 'AlexeiAndreev', 'NateSoares', 'JeremyPerret' ], childIds: [ 'mindcrime_introduction', 'nonperson_predicate' ], parentIds: [ 'ai_alignment' ], commentIds: [ '70n', '78', '897', '8j1', '8xr', '918' ], questionIds: [], tagIds: [ 'nearest_unblocked' ], relatedIds: [ 'behaviorist' ], markIds: [], explanations: [], learnMore: [], requirements: [ { id: '1379', parentId: 'ai_alignment', childId: 'mindcrime', type: 'requirement', creatorId: 'AlexeiAndreev', createdAt: '2016-06-17 21:58:56', level: '1', isStrong: 'false', everPublished: 'true' } ], subjects: [], lenses: [ { id: '3', pageId: 'mindcrime', lensId: 'mindcrime_introduction', lensIndex: '0', lensName: 'Introduction', lensSubtitle: '', createdBy: '1', createdAt: '2016-06-17 21:58:56', updatedBy: '1', updatedAt: '2016-06-17 21:58:56' } ], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21151', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '17', type: 'newEdit', createdAt: '2016-12-29 06:36:44', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '16530', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '16', type: 'newEdit', createdAt: '2016-07-10 22:41:18', auxPageId: '', 
oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '7127', pageId: 'mindcrime', userId: 'JeremyPerret', edit: '15', type: 'newEdit', createdAt: '2016-02-15 21:08:00', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4505', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '14', type: 'newUsedAsTag', createdAt: '2015-12-28 19:42:59', auxPageId: 'behaviorist', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4504', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '14', type: 'newEdit', createdAt: '2015-12-28 19:35:31', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4503', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '13', type: 'newTag', createdAt: '2015-12-28 19:35:18', auxPageId: 'nearest_unblocked', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4501', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '0', type: 'deleteTag', createdAt: '2015-12-28 19:34:59', auxPageId: 'work_in_progress_meta_tag', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4498', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '13', type: 'newChild', createdAt: '2015-12-28 19:34:43', auxPageId: 'nonperson_predicate', 
oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4329', pageId: 'mindcrime', userId: 'NateSoares', edit: '13', type: 'newEdit', createdAt: '2015-12-24 23:54:12', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3788', pageId: 'mindcrime', userId: 'AlexeiAndreev', edit: '0', type: 'newAlias', createdAt: '2015-12-15 23:44:04', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3789', pageId: 'mindcrime', userId: 'AlexeiAndreev', edit: '12', type: 'newEdit', createdAt: '2015-12-15 23:44:04', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3700', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '11', type: 'newEdit', createdAt: '2015-12-14 20:36:01', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3699', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '10', type: 'newEdit', createdAt: '2015-12-14 20:35:29', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3563', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '9', type: 'newEdit', createdAt: '2015-12-01 05:48:02', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 
'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3562', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '8', type: 'newEdit', createdAt: '2015-12-01 05:44:55', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3561', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '7', type: 'newEdit', createdAt: '2015-12-01 05:44:15', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3560', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '6', type: 'newEdit', createdAt: '2015-12-01 05:40:08', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3549', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '5', type: 'newEdit', createdAt: '2015-11-30 01:26:41', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3548', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '4', type: 'newRequirement', createdAt: '2015-11-30 01:23:55', auxPageId: 'ai_alignment', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3544', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '4', type: 'newChild', createdAt: '2015-11-30 01:20:34', auxPageId: 'mindcrime_introduction', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', 
likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3543', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '4', type: 'newEdit', createdAt: '2015-11-30 01:19:22', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1131', pageId: 'mindcrime', userId: 'AlexeiAndreev', edit: '1', type: 'newUsedAsTag', createdAt: '2015-10-28 03:47:09', auxPageId: 'work_in_progress_meta_tag', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '372', pageId: 'mindcrime', userId: 'AlexeiAndreev', edit: '1', type: 'newParent', createdAt: '2015-10-28 03:46:51', auxPageId: 'ai_alignment', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1876', pageId: 'mindcrime', userId: 'AlexeiAndreev', edit: '3', type: 'newEdit', createdAt: '2015-06-17 20:11:55', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1875', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '2', type: 'newEdit', createdAt: '2015-06-09 22:34:19', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1874', pageId: 'mindcrime', userId: 'EliezerYudkowsky', edit: '1', type: 'newEdit', createdAt: '2015-06-09 22:30:46', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'true', hasParents: 'true', redAliases: {}, 
improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }