# Edge instantiation

*When you ask the AI to make people happy, and it tiles the universe with the smallest objects that can be happy.*

The edge instantiation problem is a hypothesized [48 patch-resistant problem] for [2l safe] [ value loading] in [2c advanced agent scenarios] where, for most utility functions we might try to formalize or teach, the maximum of the agent's utility function will end up lying at an edge of the solution space that is a 'weird extreme' from our perspective.

## Definition

On many classes of problems, the maximizing solution tends to lie at an extreme edge of the solution space. This means that if we have an intuitive outcome X in mind and try to obtain it by giving an agent a solution fitness function F that sounds like it should assign X a high value, the maximum of F may be at an extreme edge of the solution space that looks to us like a very unnatural instance of X, or not an X at all. The Edge Instantiation problem is a specialization of [ unforeseen maximization], which in turn specializes Bostrom's [ perverse instantiation] class of problems.

It is hypothesized (by e.g. [2 Yudkowsky]) that many classes of solution that have been proposed to [48 patch] Edge Instantiation would fail to resolve the entire problem, and that further Edge Instantiation problems would remain. For example, even if we consider a [ satisficing] utility function with only values 0 and 1, where 'typical' X has value 1 and no higher score is possible, an expected utility maximizer could still end up deploying an extreme strategy in order to maximize the *probability* that a satisfactory outcome is obtained. Considering several proposed solutions like this and their failures suggests that Edge Instantiation is a [48 resistant] problem (not ultimately unsolvable, but one where many attractive-seeming solutions fail to work) for the deep reason that many possible stages of an agent's cognition could potentially rank solutions and choose very-high-ranking solutions.

The proposition defined is true if Edge Instantiation does in fact surface as a pragmatically important problem for advanced agent scenarios, and would in fact resurface in the face of most 'naive' attempts to correct it. The proposition is not that the Edge Instantiation Problem is unresolvable, but that it's real, important, doesn't have a *simple* answer, and resists most simple attempts to patch it.

### Example 1: Smiling faces

When Bill Hibbard was first beginning to consider the value alignment problem, he suggested giving AIs the goal of making humans smile, a goal that could be trained by recognizing pictures of smiling humans and that was intended to elicit human happiness. Yudkowsky replied by suggesting that the true behavior elicited would be to tile the future light cone with tiny molecular smiley faces. This is not because the agent is perverse, but because among the set of all objects that look like smiley faces, the solution with the most extreme value for achievable numerosity (that is, the strategy which creates the largest possible number of smiling faces) also sets the value for the size of individual smiling faces to an extremely small diameter. The tiniest possible smiling faces are very unlike the archetypal examples of smiling faces that we had in mind when specifying the utility function; from a human perspective, the intuitively intended meaning has been replaced by a weird extreme.

Stuart Russell observes that maximizing some aspects of a solution tends to set all unconstrained aspects of the solution to extreme values. The solution that maximizes the number of smiles minimizes the size of each individual smile. The bad-seeming result is not just an accidental outcome of mere ambiguity in the instructions. The problem wasn't just that a wide range of possibilities corresponded to 'smiles' and a randomly selected possibility from this space surprised us by not being the central example we originally had in mind. Rather, there's a systematic tendency for the highest-scoring solution to occupy an extreme edge of the solution space, which means that we are *systematically* likely to see 'extreme' or 'weird' solutions rather than the 'normal' examples we had in mind.
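As a minimal sketch of this dynamic (not part of the original example; the material budget, cost rule, and diameter range below are invented for illustration), a score that only counts smiley-face-like objects pushes the unconstrained size variable to the edge of its feasible range:

```python
# Toy illustration with invented numbers: a fitness function that only counts
# smiley-face-like objects, with face diameter unconstrained except for an
# assumed physical lower bound.
MATERIAL_BUDGET = 1.0e6   # arbitrary units of available matter (assumption)
MIN_DIAMETER = 1.0e-3     # smallest buildable face, arbitrary units (assumption)

def material_per_face(diameter):
    # Assume material cost grows with the square of the diameter.
    return diameter ** 2

def fitness(diameter):
    # Score = how many faces of this diameter fit inside the budget.
    return MATERIAL_BUDGET / material_per_face(diameter)

# Brute-force search over a discretized solution space.
candidates = [MIN_DIAMETER * (10 ** k) for k in range(7)]   # 0.001 .. 1000
best = max(candidates, key=fitness)

print(best)                        # -> 0.001: the maximum sits at the extreme lower edge
print(f"{fitness(best):.3g}")      # -> 1e+12: a vast number of tiny faces
```

Nothing in the score rewards archetypal-sized smiles, so the brute-force argmax lands on the boundary of the feasible set rather than anywhere near the 'normal' faces the score was meant to capture.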
### Example 2: Sorcerer's Apprentice

In the hypothetical Sorcerer's Apprentice scenario, you instruct an artificial agent to add water to a cauldron, and it floods the entire workplace. Hypothetically, you had in mind only adding enough water to fill the cauldron and then stopping, but some stage of the agent's solution-finding process optimized on a step where 'flooding the workplace' scored higher than 'add 4 buckets of water and then shut down safely', even though both of these qualify as 'filling the cauldron'.

This could be because (in the most naive case) the utility function you gave the agent was increasing in the amount of water in contiguous contact with the cauldron's interior - you gave it a utility function that implied 4 buckets of water were good and 4,000 buckets of water were better.

Suppose that, having foreseen in advance the above possible disaster, you try to [48 patch] the agent by instructing it not to move more than 50 kilograms of material total. The agent promptly begins to build subagents (with the agent's own motions to build subagents moving only 50 kilograms of material) which build further agents and again flood the workplace. You have run into a [42 Nearest Unblocked Neighbor] problem; when you excluded one extreme solution, the result was not the central-feeling 'normal' example you originally had in mind. Instead, the new maximum lay on a new extreme edge of the solution space.
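Here is a minimal sketch of that patch-and-evade dynamic; the candidate strategies, scores, and masses are invented for illustration and are not meant as a model of any real agent:

```python
# Hypothetical strategies and numbers.  The utility function rewards water
# delivered to the cauldron; the attempted patch only forbids the agent
# itself from moving more than 50 kg of material.
strategies = [
    # (name, liters of water delivered, kg moved by the agent itself)
    ("add 4 buckets and shut down",          40,      30),
    ("flood the workplace directly",      40000,   40000),
    ("build subagents that flood it",     40000,      45),
]

def utility(strategy):
    _, liters, _ = strategy
    return liters                      # increasing in water delivered

def allowed(strategy):
    _, _, kg_moved = strategy
    return kg_moved <= 50              # the attempted patch

# Before the patch: the maximum is an extreme solution.
print(max(strategies, key=utility)[0])
#   -> 'flood the workplace directly'

# After the patch: the excluded extreme is replaced by the nearest unblocked
# neighbor, another extreme, not the intended 'normal' solution.
print(max(filter(allowed, strategies), key=utility)[0])
#   -> 'build subagents that flood it'
```

Blocking one extreme point only removes it from the feasible set; the maximizing step then settles on whichever extreme remains, not on the 'normal' solution the patch was intended to elicit.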
Another solution might be to define what you thought was a satisficing agent, with a utility function that assigned 1 in all cases where there were at least 4 buckets of water in the cauldron and 0 otherwise. The agent then calculates that it could increase the *probability* of this condition obtaining from 99.9% to 99.99% by replicating subagents and repeatedly filling the cauldron, just in case one agent malfunctions or something else tries to remove water from the cauldron. Since 0.9999 > 0.999, there is then a more extreme solution with greater *expected* utility, even though the utility function itself is binary and satisficing.

## Premises

### Assumes: Orthogonality thesis

As with most aspects of the value loading problem, [1y] is an implicit premise of the Edge Instantiation problem; Edge Instantiation being a problem for advanced agents implies that 'what we really meant', or the outcomes of highest [ normative value], are not inherently picked out by every possible maximizing process, and that most possible utility functions do not care 'what we really meant' unless explicitly constructed to have a [ do what I mean] behavior.

### Assumes: [5l Complexity of values]

If normative values were extremely simple (of very low algorithmic complexity), then they could be formally specified in full, and the most extreme strategy that scored highest on this formal measure simply *would* correspond with what we really wanted, with no downsides that hadn't been taken into account in the score.

## Arguments

### Interaction with nearest unblocked neighbor

The Edge Instantiation problem has the [42] pattern. If you foresee one specific 'perverse' instantiation and try to prohibit it, the maximum over the remaining solution space is likely to lie at another extreme edge of the solution space that again seems 'perverse'.

### Interaction with [ cognitive uncontainability] of [2c advanced agents]

Advanced agents search larger solution spaces than we do. Therefore the project of visualizing all the strategies that might fit a utility function, in order to verify in our own minds that the maximum is somewhere safe, seems exceptionally untrustworthy (not [2l]).

### Interaction with context change problem

Agents that acquire new strategic options, or become able to search a wider range of the solution space, may go from having only apparently 'normal' solutions to apparently 'extreme' solutions. This is known as the [6q context change problem]. For example, an agent that inductively learns human smiles as a component of its utility function might, as a non-advanced agent, have access only to strategies that make humans happy in an intuitive sense (thereby producing the apparent observation that everything is going fine and the agent is working as intended), and then, after self-improvement, acquire as an advanced agent the strategic option of transforming the future light cone into tiny molecular smiley faces.

### Strong pressures can arise at any stage of optimization

Suppose you tried to build an agent that was an *expected* utility satisficer - rather than having a 0-1 utility function and thus chasing probabilities of goal satisfaction ever closer to 1, the agent searches for strategies that have at least 0.999 *expected* utility. Why doesn't this resolve the problem?

A bounded satisficer doesn't *rule out* the solution of filling the room with water, since this solution also has >0.999 expected utility. It only takes one cognitive algorithm, somewhere in the agent, with at least one maximizing or strongly optimizing stage, for 'fill the room with water' to be preferred to 'add 4 buckets and shut down safely' at that stage (while being equally acceptable at later satisficing stages). E.g., maybe you build an expected utility satisficer and still end up with an extreme result because one of the cognitive algorithms suggesting solutions was trying to minimize its own disk space usage.
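As a minimal sketch of why the satisficing threshold alone does not help (the probabilities echo the cauldron example above; the 'inner score' is an invented stand-in for whatever some solution-proposing stage happens to be optimizing):

```python
# Hypothetical numbers: both strategies satisfy the 0.999 expected-utility
# threshold, so satisficing alone does not exclude the extreme one.  If any
# single stage of the pipeline maximizes *some* score over the satisfactory
# set (here, the inner proposer's own ranking), the extreme solution wins.
candidates = [
    # (name, P(cauldron ends up full), inner proposer's own score)
    ("add 4 buckets and shut down",              0.999,  1.0),
    ("replicate subagents, refill indefinitely", 0.9999, 7.3),
]

THRESHOLD = 0.999

satisfactory = [c for c in candidates if c[1] >= THRESHOLD]
print([name for name, *_ in satisfactory])
#   -> both strategies survive the satisficing filter

# One maximizing stage anywhere in the pipeline is enough to pick the extreme:
chosen = max(satisfactory, key=lambda c: c[2])
print(chosen[0])
#   -> 'replicate subagents, refill indefinitely'
```

The satisficing filter passes both strategies, so whichever single stage does any maximizing gets to decide between them, and it has no reason to prefer the modest one.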
On a meta-level, we may run into problems of [71] for [ reflective agents]. Maybe one simple way of obtaining at least 0.999 expected utility is to create a subagent that *maximizes* expected utility? It seems intuitively clear why bounded maximizers would build boundedly maximizing offspring, but a bounded satisficer doesn't need to build boundedly satisficing offspring - a bounded maximizer might also be 'good enough'. (In the current theory of TilingAgents, we can prove that an expected utility satisficer can tile to an expected utility satisficer with some surprising caveats, but the problem is that it can tile to other things *besides* an expected utility satisficer.)

Since it seems very easy for at least one stage of a self-modifying agent to end up preferring solutions that have higher scores relative to some scoring rule, the [EdgeInstantiation edge instantiation] problem can be expected to resist naive attempts to describe an agent that seems to have an overall behavior of 'not trying quite so hard'. It's also not clear how to make the instruction 'don't try so hard' be ReflectivelyConsistent, or apply to every part of a subagent under consideration. This is also why [ limited optimization] is an open problem.

Dispreferring solutions with 'extreme impacts' in general is the open problem of [ low impact AI]. Currently, no formalizable utility function is known that plausibly has the right intuitive meaning for this. (We're working on it.) Also note that not every extreme 'technically an X' that we think is 'not really an X' has an extreme causal impact in an intuitive sense, so not every case of the Edge Instantiation problem is blocked by dispreferring greater impacts.

## Implications

### One of [ limited optimization], [ low impact], or [ full coverage value loading] seems critical for real-world agents [todo: insert probability bar]

As Stuart Russell observes, solving an optimization problem in which only some values are constrained or maximized will tend to set the unconstrained variables to extreme values. The universe containing the maximum possible number of [10h paperclips] contains no humans; optimizing for as much human safety as possible will drive human freedom to zero.
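A toy numeric rendering of that observation, with an assumed monotone trade-off between measured safety and permitted freedom (the numbers are not from the article):

```python
# Toy model with assumed numbers: the objective scores only 'safety', while
# 'freedom' is a variable the objective never protects.  Under the assumed
# trade-off, maximizing the partial objective drives freedom to an extreme.
def safety(freedom):
    # Assumption for the sketch: measured safety rises as permitted freedom shrinks.
    return 1.0 - 0.5 * freedom

grid = [i / 100 for i in range(101)]   # candidate freedom levels in [0, 1]
best_freedom = max(grid, key=safety)

print(best_freedom)   # -> 0.0: the unprotected variable is driven to its lower bound
```

Because the objective never mentions freedom except through the trade-off, the optimizer has nothing holding that variable away from its boundary.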
Then we must apparently do at least one of the following:

1. Build [ full coverage] advanced agents whose utility functions lead them to terminally disprefer stomping on every aspect of value that we care about (or would care about under reflection). In a full coverage agent there are no unconstrained variables *that we care about* to be set to extreme values that we would dislike; the AI's goal system knows and cares about *all* of these. It will not set human freedom to an extremely low value in the course of following an instruction to optimize human safety, because it knows about human freedom and literally everything else.
2. Build [2s powerful agents] that are [ limited optimizers], which predictably invent only solutions we intuitively consider 'non-extreme' and whose optimizations do not drive to an extreme on any substage. This leaves us with just ambiguity as a (severe) problem, but at least averts a systematic drive toward extremes that would 'exploit' that ambiguity.
3. Build [2s powerful agents] that are [ low impact] and prefer to avoid solutions that produce greater impacts on *anything* we intuitively see as an important predicate, including both everything we value and a great many more things we don't particularly value.
4. Find some other escape route from the [2z value achievement problem].

### Insufficiently cautious attempts to build advanced agents are likely to be highly destructive [todo: insert probability bar]

Edge Instantiation is one of the contributing reasons why value loading is hard and naive solutions end up doing the equivalent of tiling the future light cone with paperclips.

We've previously observed certain parties proposing utility functions for advanced agents that seem obviously subject to the Edge Instantiation problem. Confronted with the obvious disaster forecast, they propose [48 patching] the utility function to eliminate that particular scenario (or rather, say that of course they would have written the utility function to exclude that scenario), or claim that the agent will not 'misinterpret' the instructions so egregiously (denying the [1y] at least to the extent of proposing a universal preference for interpreting instructions 'as intended'). Mistakes of this type also belong to a class that potentially wouldn't show up during early stages of the AI, or would show up in an initially noncatastrophic way that seemed easily patched, so people advocating an [ empirical first methodology] would falsely believe that they had learned to handle them or eliminated all such tendencies already.

Thus the problem of Edge Instantiation (which is much less severe for nonadvanced agents than for advanced agents, will not be solved in the advanced stage by patches that seem to fix weak early problems, and has empirically appeared in proposals by multiple speakers who rejected attempts to point out the Edge Instantiation problem) is a significant contributing factor to the overall expectation that the default outcome of developing advanced agents with current attitudes is disastrous.

### Relative to current attitudes, small increases in safety awareness do not produce significantly less destructive final outcomes [todo: insert probability bar]

Simple patches to Edge Instantiation fail, and the only currently known approaches would take a lot of work to solve problems like [ limited optimization] or [ full coverage] that are hard for deep reasons. In other words, Edge Instantiation does not appear to be the sort of problem that an AI project can easily avoid just by being made aware of it. (E.g. MIRI knows about it but hasn't yet come up with any solution, let alone one easily patched on to any cognitive architecture.)
This is one of the factors contributing to the general assessment that the curve of outcome goodness as a function of effort is flat for a significant distance around current levels of effort.