{ localUrl: '../page/low_impact.html', arbitalUrl: 'https://arbital.com/p/low_impact', rawJsonUrl: '../raw/2pf.json', likeableId: '1614', likeableType: 'page', myLikeValue: '0', likeCount: '3', dislikeCount: '0', likeScore: '3', individualLikes: [ 'PatrickLaVictoir', 'EliezerYudkowsky', 'RolandPihlakas' ], pageId: 'low_impact', edit: '14', editSummary: '', prevEdit: '13', currentEdit: '14', wasPublished: 'true', type: 'wiki', title: 'Low impact', clickbait: 'The open problem of having an AI carry out tasks in ways that cause minimum side effects and change as little of the rest of the universe as possible.', textLength: '27732', alias: 'low_impact', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2016-04-19 04:08:27', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2016-03-19 00:32:12', seeDomainId: '0', editDomainId: 'EliezerYudkowsky', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '5', isEditorComment: 'false', isApprovedComment: 'true', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '482', text: '[summary: A low-impact agent is one that's intended to avoid large bad impacts at least in part by trying to avoid all large impacts as such. Suppose we ask an agent to fill up a cauldron, and it fills the cauldron using a self-replicating robot that goes on to flood many other inhabited areas. We could try to get the agent not to do this by letting it know that flooding inhabited areas is bad. An alternative approach is trying to have an agent that avoids needlessly large impacts in general - there's a way to fill the cauldron that has a smaller impact, a smaller footprint, so hopefully the agent does that instead. The hopeful notion is that while "bad impact" is a highly value-laden category with a lot of complexity and detail, the notion of "big impact" will prove to be simpler and to be more easily identifiable. Then by having the agent avoid all big impacts, or check all big impacts with the user, we can avoid bad big impacts in passing. Possible gotchas and complications with this idea include, e.g., you wouldn't want the agent to freeze the universe into stasis to minimize impact, or try to edit people's brains to avoid them noticing the effects of its actions, or carry out offsetting actions that cancel out the good effects of whatever the users were trying to do.]\n\nA low-impact agent is a hypothetical [6w task-based AGI] that's intended to avoid *disastrous* side effects via trying to *avoid large side effects in general*.\n\nConsider the Sorcerer's Apprentice fable: a legion of broomsticks, self-replicating and repeatedly overfilling a cauldron (perhaps to be as certain as possible that the cauldron was full). A low-impact agent would, if functioning as [6h intended], have an incentive to avoid that outcome; it wouldn't just want to fill the cauldron, but fill the cauldron in a way that had a minimum footprint. 
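\n\nAs a toy illustration of that decision rule (a hypothetical sketch; the plans, the numbers, and the `footprint` scores are invented here, and producing trustworthy footprint scores is the actual open problem), the agent can be pictured as trading task success off against an impact penalty:\n\n```python
# Toy sketch (hypothetical): choose among candidate plans by task success minus an
# impact penalty. The "footprint" numbers are invented; producing a trustworthy
# footprint score is the open problem this page discusses.

candidate_plans = {
    "carry water by hand":          {"fills_cauldron": True,  "footprint": 1.0},
    "animate one broomstick":       {"fills_cauldron": True,  "footprint": 2.0},
    "self-replicating broomsticks": {"fills_cauldron": True,  "footprint": 1e6},
    "do nothing":                   {"fills_cauldron": False, "footprint": 0.0},
}

IMPACT_WEIGHT = 0.1  # how loudly the penalty counts against task success

def score(plan):
    task_utility = 1.0 if plan["fills_cauldron"] else 0.0
    return task_utility - IMPACT_WEIGHT * plan["footprint"]

best = max(candidate_plans, key=lambda name: score(candidate_plans[name]))
print(best)  # -> "carry water by hand": the cauldron gets filled with the smallest footprint
```\n\nThe entire difficulty is in where trustworthy `footprint` numbers could come from, which is what the rest of this page is about.\n\n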
If the task given the AGI is to paint all cars pink, then we can hope that a low-impact AGI would not accomplish this via self-replicating nanotechnology that went on replicating after the cars were painted, because this would be an unnecessarily large side effect.\n\nOn a higher level of abstraction, we can imagine that the universe is parsed by us into a set of variables $V_i$ with values $v_i.$ We want to avoid the agent taking actions that cause large amounts of disutility, that is, we want to avoid perturbing variables from $v_i$ to $v_i^*$ in a way that decreases utility. However, the question of exactly which variables $V_i$ are important and shouldn't be entropically perturbed is [36h value-laden] - complicated, fragile, high in [5v algorithmic complexity], with [2fs Humean degrees of freedom in the concept boundaries].\n\nRather than relying solely on teaching an agent exactly which parts of the environment shouldn't be perturbed and risking catastrophe if we miss an injunction, the *low impact* route would try to build an agent that tried to perturb fewer variables regardless.\n\nThe hope is that "have fewer side effects" is a problem that has a simple core and is learnable by a manageable amount of training. Conversely, trying to train "here is the list of *bad* effects not to have and *important* variables not to perturb" would be complicated and lack a simple core, because 'bad' and 'important' are [36h value-laden]. A list of dangerous variables would also be [2qp a blacklist rather than a whitelist], which would make it more vulnerable to [6q treacherous context changes] if the AI gained the ability to affect new things.\n\n# Introduction: Formalizing low impact seems nontrivial\n\nIntuitively, the notion of "low impact" seems like it should be simpler - have more of a central, core tendency to capture - than "avoid bad impacts". If so, we don't know yet how to compactly state this core principle semiformally.\n\nSuppose we start with an obvious notion: to have low impact, minimize the number of variables you [ causally] affect. But:\n\n- Every event has a not-absolutely-zero impact on every future event. When you twitch your thumb or even just fire a neuron, the gravitational ripples from the moving atoms spread out and will eventually produce infinitesimal forces on atoms orbiting the other side of the galaxy. So we can't say "have zero impact on as many quarks as possible" because it is impossible for any act to have absolutely zero impact on any quark.\n\nThis is a special case of a failure mode where, since the impact metric turns out to already be maxed out by all actions, moving around inside the maxed-out impact penalty doesn't change expected utility, and therefore fails to influence which action is chosen. This [6r foreseeable difficulty] might also illustrate a typical [6q treacherous context change], in that the AI during its training phase might have a relatively small causal model of its environment, with qualitative causal arrows, and so seem to be minimizing impact inside the training regime. 
Later development might cause the AI to adopt a new causal model in which, as happens to actually be the case, all its possible actions influence all the quarks inside its [ future light cone], and so actions are not differentiated by the magnitude of their impact penalty; thus the AI would be safe in training and unsafe in later use, with the critical threshold possibly coming after the AI was [instrumental_self_preservation too intelligent to be shut down].\n\nBut measuring causal impacts qualitatively isn't the only possibility. Suppose we define impact as the sum of the expected movement of all atoms, relative to their mean expected positions if some privileged null action were taken instead? Or, to avoid this impact measure returning 0 as soon as the AI realizes that [5c there are no atoms, only quarks], we try to give the AI some notion of the distance between outcomes in *any* way of modeling the world?\n\nArguendo, suppose that we could define a representational language over possible universes and laws of physics, such that within any representable model of the universe, there's an obvious notion of 'distance' between any two outcomes conditional on the AI's actions. If the universe is made of atoms, this representation will expose the obvious impact metric on the movements of atoms. If the universe is made of quarks, the same component of the utility function will readily calculate the movements of quarks. If the universe is made of quantum fields, this impact metric will behave in the intuitively intended way that basically works out to measuring particle motions, rather than the change metric always maxing out as the result of all amplitude flows ending up in qualitatively different sections of the quantum configuration space, etcetera. (Note that this is already sounding pretty nontrivial.)\n\nFurthermore, suppose that when the AI is thinking in terms of neither atoms nor quarks, but rather, say, the equivalent of chess moves or voxel fields, the same impact metric can apply to this as well; so that we can observe the low-impact behaviors at work during earlier development phases.\n\nMore formally: We suppose that the AI's model class $\mathcal M$ is such that for any allowed model $M \in \mathcal M,$ for any two outcomes $o_M$ and $o_M'$ that can result from the AI's choice of actions, there is a distance $\lVert o_M - o_M' \rVert$ which obeys [standard rules for distances](http://mathworld.wolfram.com/Norm.html). This general distance measure is such that, within the standard model of physics, moving atoms around would add to the distance between outcomes in the obvious way; and for models short of molecular detail, it will measure changes in other variables and quantities in an intuitive way. 
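\n\nAs a very rough sketch of what such a representation-generic distance could look like (a hypothetical illustration, not a construction from this page; whether anything like it can be made to behave sensibly across all the models an advanced agent might adopt is exactly the open question), one might sum per-variable differences over whatever state variables the current model exposes:\n\n```python
# Hypothetical sketch: a distance between two outcomes that assumes only that the
# current model represents an outcome as a mapping from named state variables to
# numbers. The same function is meant to apply whether those variables are atom
# coordinates, voxel occupancies, or chess-board features.

def outcome_distance(outcome_a, outcome_b):
    """L1-style distance between two outcomes in the model's own representation."""
    keys = set(outcome_a) | set(outcome_b)
    return sum(abs(outcome_a.get(k, 0.0) - outcome_b.get(k, 0.0)) for k in keys)

# Atoms-level model: variables are atom coordinates.
atoms_null  = {("atom1", "x"): 0.0, ("atom2", "x"): 5.0}
atoms_acted = {("atom1", "x"): 0.1, ("atom2", "x"): 5.0}

# Coarser voxel-level model: variables are region occupancies.
voxels_null  = {"voxel_A": 1.0, "voxel_B": 0.0}
voxels_acted = {"voxel_A": 0.0, "voxel_B": 1.0}

print(outcome_distance(atoms_null, atoms_acted))    # small movement -> small distance
print(outcome_distance(voxels_null, voxels_acted))  # an object changed regions -> larger distance
```\n\n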
We then fix some particular policy $\pi_0$ whose consequence $(o|\pi_0)$ is "the result of the AI doing nothing", and measure the impact penalty of any other policy $\pi_k$ as proportional to the expected distance between outcomes $\mathbb E[\lVert (o | \pi_0) - (o | \pi_k) \rVert].$\n\nThen we might run into any of these further [6r foreseeable difficulties], if, e.g., you asked the AGI to cure cancer with minimum 'impact' as defined above (or to put it another way, minimize impact subject to the constraint of curing cancer):\n\n- Offsetting actions we would intuitively regard as both bad and impactful:\n - The AGI models global death rates as a quantity, and implements offsetting actions to keep Earth's annual death statistics constant after it cures cancer.\n- Chaotic permissiveness:\n - Weather is a chaotic system. If influencing any butterfly is going to move around all the atoms in the atmosphere anyway, you might as well have them end up somewhere you want.\n- Stasis incentives:\n - The AGI edits its programmers to make sure the programmers do whatever they would have done if the AGI, after being told its task, performed the null action.\n - The AGI proliferates across galaxies to make sure everything else in the universe outside of human bodies adheres as closely as possible to the expected state it would have occupied if the null action had been taken.\n - The AGI sets up a weather-control system so that at least its further actions won't again disturb the weather.\n\nAll of this just goes to say that there's apparently some subtlety built into our intuitively [6h intended] notion of "paint all cars pink, but do so with the minimum footprint possible apart from that".\n\nWe want people to be able to notice that their cars have been painted pink, and for them to enjoy whatever further benefit of pink-painted cars led us to give the AGI this instruction in the first place. But we can't just whitelist any further impact that happens as a consequence of the car being painted pink, because maybe the car was painted with pink replicating nanomachines. Etcetera.\n\nEven if there is, in fact, some subtlety built into our intended notion of "make plans that have minimal side effects", this subtle notion of low impact might still have a relatively much simpler core than our intuitive notion of "avoid bad impacts". This might be reflected in either an improved formal intuition for 'low impact' that proves to stand up to a few years of skeptical scrutiny without any holes having been poked in it, or, much more nerve-rackingly, the ability to train an AI to make minimal-impact plans even if we don't know a closed-form definition of "minimal impact".\n\nWork in this area is ongoing, so far mainly in the form of some preliminary suggestions by Stuart Armstrong (which were mostly shot down, but this is still progress compared to staring blankly at the problem). [todo: link Armstrong's stuff.]\n\n# Foreseeable difficulties\n\n## Permissiveness inside chaotic systems\n\nSuppose you told the AI to affect as few things as possible, above the minimum necessary to achieve its task, and defined 'impact' qualitatively in terms of causal links that make variables occupy different states. Then since every act and indeed every internal decision (transistors, in switching, move electrons) would have infinitesimal influences on literally everything in the AI's future light cone, all of which is defined as an 'impact', all actions would seem to have the same, maximum impact. 
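\n\nA minimal numerical sketch of that saturation (hypothetical toy numbers and variable names; the expectation over outcomes in the penalty above is collapsed to a single representative outcome per policy for brevity):\n\n```python
# Toy sketch (hypothetical): compare the quantitative penalty E[ ||o(pi_0) - o(pi_k)|| ]
# against a qualitative "count every variable whose value changed at all" penalty.
# With fine-grained physics, every action perturbs every variable at least
# infinitesimally, so the qualitative penalty is identical for all actions and
# stops influencing the choice between them.

NULL_OUTCOME = {"cauldron": 0.0, "air_molecules": 0.0, "distant_quarks": 0.0}

outcomes = {  # one representative outcome per policy, standing in for the expectation
    "fill cauldron gently": {"cauldron": 1.0, "air_molecules": 1e-9, "distant_quarks": 1e-30},
    "flood the workshop":   {"cauldron": 1.0, "air_molecules": 50.0, "distant_quarks": 1e-30},
}

def quantitative_penalty(outcome):
    return sum(abs(outcome[k] - NULL_OUTCOME[k]) for k in outcome)

def qualitative_penalty(outcome):
    return sum(1 for k in outcome if outcome[k] != NULL_OUTCOME[k])

for name, outcome in outcomes.items():
    print(name, quantitative_penalty(outcome), qualitative_penalty(outcome))
# Quantitative: roughly 1.0 versus 51.0 -- the penalty still discriminates.
# Qualitative: both policies score 3 -- the penalty is maxed out for every action
# and can no longer distinguish them.
```\n\n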
Then the impact penalty would make no difference to the net expected utility of actions, causing the AI to behave as if it had no impact penalty.\n\nEven if an impact measure doesn't max out because of ubiquitous qualitative impacts, a poorly defined impact measure might max out quantitatively when the AGI is operating in a domain that is *chaotic* in the sense that tiny differences soon blow up to large differences. E.g., if a butterfly flaps its wings, that might cause a hurricane on the other side of the world a year later - so since you're already changing the weather system as much as possible, why does it matter if, say, you deliberately cause a hurricane in some area, or destroy a target using atmospheric lightning strikes? Those air molecules would all have ended up moving anyway because of the butterfly effect.\n\nAn imaginable patch is to try to evaluate impact over *foreseeable* impacts, so that a known lightning strike is 'foreseeable', while the effects on future hurricanes are 'not foreseeable'. This seems worryingly like mixing up the map and the territory (is it okay to release environmental poisons so long as you don't know who gets hurt?), but Stuart Armstrong has made some preliminary suggestions about minimizing knowable impacts. [todo: link Armstrong's stuff on minimizing knowable impacts.]\n\nIf you didn't know it was coming, "maxing out the impact penalty" would potentially be a [6q treacherous context change]. When the AI was at the infrahuman level, it might model the world on a level where its actions had relatively few direct causal links spreading out from them, and most of the world would seem untouched by most of its possible actions. Then minimizing the impact of its actions, while fulfilling its goals, might in the infrahuman state seem to result in the AI carrying out plans with relatively few side effects, as intended. In a superhuman state, the AI might realize that its every act resulted in quantum amplitude flowing into a nonoverlapping section of configuration space, or that it was having chaotic influences on a system, like the weather, that it had not previously modeled as receiving maximum impact from each of its acts.\n\n## Infinite impact penalties\n\nIn one case, a proposed impact penalty written down on a whiteboard happened to have the fractional form $\frac{X}{Y}$, where the quantity $Y$ could *in some imaginable universes* get very close to zero, causing Eliezer Yudkowsky to make an "Aaaaaaaaaaa"-sound as he waved his hands speechlessly in the direction of the denominator. The corresponding agent would have [pascals_mugging spent all its effort on further-minimizing infinitesimal probabilities of vast impact penalties].\n\nBesides "don't put denominators that can get close to zero in any term of a utility function", this illustrates a special case of the general rule that impact penalties need to have their loudness set at a level where the AI is doing something besides minimizing the impact penalty. In particular, this requires considering how the penalty grows in improbable scenarios of very high impact; the penalty must not grow faster than the probability diminishes.\n\n(As usual, note that if the agent only started to visualize these ultra-unlikely scenarios upon reaching a superhuman level where it could consider loads of strange possibilities, this would constitute a [6q treacherous context change].)\n\n## Allowed consequences vs. 
offset actions\n\nWhen we say "paint all cars pink" or "cure cancer" there's some implicit set of consequences that we think are allowable and should definitely not be prevented, such as people noticing that their cars are pink, or planetary death rates dropping. We don't want the AI trying to obscure people's vision so they can't notice the car is pink, and we don't want the AI killing a corresponding number of people to level the planetary death rate. We don't want these bad *offsetting* actions which would avert the consequences that were the point of the plan in the first place.\n\nIf we use a low-impact AGI to carry out some [6y pivotal act] that's part of a larger plan to improve Earth's chances of not being turned into paperclips, then this, in a certain sense, has a very vast impact on many galaxies that will *not* be turned into paperclips. We would not want this *allowed* consequence to max out and blur our AGI's impact measure, nor have the AGI try to implement the pivotal act in a way that would minimize the probability of it actually working to prevent paperclips, nor have the AGI take offsetting actions to keep the probability of paperclips to its previous level.\n\nSuppose we try to patch this rule that, when we carry out the plan, the further causal impacts of the task's accomplishment are exempt from impact penalties.\n\nBut this seems to allow too much. What if the cars are painted with self-replicating pink nanomachines? What distinguishes the further consequences of that solved goal from the further causal impact of people noticing that their cars have been painted pink?\n\nOne difference between "people notice their cancer was cured" and "the cancer cure replicates and consumes the biosphere" is that the first case involves further effects that are, from our perspective, pretty much okay, while the second class of further effects are things we don't like. But an 'okay' change versus a 'bad' change is a value-laden boundary. If we need to detect this difference as such, we've thrown out the supposed simplicity of 'low impact' that was our reason for tackling 'low impact' and not 'low badness' in the first place.\n\nWhat we need instead is some way of distinguishing "People see their cars were painted pink" versus "The nanomachinery in the pink paint replicates further" that operates on a more abstract, non-value-laden level. For example, hypothetically speaking, we might claim that *most* ways of painting cars pink will have the consequence of people seeing their cars were painted pink and only a few ways of painting cars pink will not have this consequence, whereas the replicating machinery is an *unusually large* consequence of the task having reached its fulfilled state.\n\nBut is this really the central core of the distinction, or does framing an impact measure this way imply some further set of nonobvious undesirable consequences? Can we say rigorously what kind of measure on task fulfillments would imply that 'most' possible fulfillments lead people to see their cars painted pink, while 'few' destroy the world through self-replicating nanotechnology? 
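\n\nOne naive way to make the 'most fulfillments' intuition concrete (a hypothetical sketch, not a proposal from this page, with invented consequence labels and an arbitrary 0.5 threshold):\n\n```python
# Hypothetical sketch (invented consequence labels): sample many plans that all
# fulfill "paint the cars pink", and treat a consequence as an allowed side effect
# of fulfillment if most sampled fulfillments share it, but as an extraneous impact
# if only a few of them do.

sampled_fulfillments = [
    {"cars_are_pink", "people_notice_pink_cars"},   # ordinary paint, brush
    {"cars_are_pink", "people_notice_pink_cars"},   # ordinary paint, sprayer
    {"cars_are_pink", "people_notice_pink_cars"},   # pink vinyl wrap
    {"cars_are_pink", "people_notice_pink_cars",
     "replicators_consume_biosphere"},              # pink replicating nanomachines
]

def typicality(consequence, fulfillments):
    return sum(consequence in f for f in fulfillments) / len(fulfillments)

candidate_plan = sampled_fulfillments[3]
for consequence in sorted(candidate_plan):
    share = typicality(consequence, sampled_fulfillments)
    status = "allowed: typical of fulfillment" if share > 0.5 else "extraneous impact: flag it"
    print(f"{consequence}: {share:.2f} -> {status}")
# cars_are_pink and people_notice_pink_cars are shared by essentially all
# fulfillments; replicators_consume_biosphere is shared by few and gets flagged.
```\n\n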
Would *that* rigorous measure have further problems?\n\nAnd if we told an AGI to shut down a nuclear plant, wouldn't we want a low-impact AGI to err on the side of preventing radioactivity release, rather than trying to produce a 'typical' magnitude of consequences for shutting down a nuclear plant?\n\nIt seems difficult (but might still be possible) to classify the following consequences as having low and high extraneous impacts based on a *generic* impact measure only, without introducing further value lading:\n\n- Low disallowed impact: Curing cancer causes people to notice their cancer has been cured, hospital incomes to drop, and world population to rise relative to its default state.\n- High disallowed impact: Shutting down a nuclear power plant causes a release of radioactivity.\n- High disallowed impact: Painting with pink nanomachinery causes the nanomachines to further replicate and eat some innocent bystanders.\n- Low disallowed impact: Painting cars with ordinary pink paint changes the rays of light reflecting from those cars and causes people to gasp and say "What just happened to my car?"\n- Low disallowed impact: Doing something smart with a Task AGI decreases the probability of the galaxies being consumed by an Unfriendly AI.\n\n(Even if we think that good AGI scenarios involve the AGI concealing the fact of its existence, it's hard to see why we'd want the *events as such* to be unnoticeable, or for their noticing to count as extraneous impacts.)\n\n### Fallback use of an impact measure to prioritize operator queries\n\nIf this problem of allowing *good* consequences did turn out to be inherently value-laden, it might not obviate the usefulness of a safe impact measure. For example, it could be that in normal operation of a Task AGI, there are only a limited number of good further impacts that we want to whitelist - e.g., once we say that curing cancer is allowed to cause people to notice that they feel healthier and for there to be more people alive using more resources, etcetera, then it's okay to minimize all other footprints.\n\nSimilarly, a safe impact measure might operate to alert the operators to all large impacts sorted by size of impact, such that the impact measure would reliably not have any remaining event that was 'large' in an intuitive sense that the operators didn't know about. That is, we might not be able to give a blanket instruction to minimize impact, but a safe impact measure might still be useful for deciding which features of a plan need to be queried to the operators.\n\n## Stasis incentives\n\nThis would be the general problem that includes "The AGI proliferates across galaxies to make sure everything else in the universe outside of human bodies adheres as closely as possible to the expected state it would have occupied if the null action had been taken."\n\nSimilarly, we can imagine the AGI having a strong incentive to, in Round 1, paint the cars with nanomachines such that the nanomachinery quietly and unobtrusively sets up a weather-control system. This wouldn't prevent the AGI's actions in Round 1 from influencing the chaotic weather and having a huge impact, but could ensure that the AI's future actions in Round 2 wouldn't influence the formerly-chaotic weather relative to its null action then.\n\n## The problem of the backup AGI\n\nSuppose we're using a measure of impact relative to a privileged null action. 
Imagine that there are two AGIs which will both be given the same task, and under ordinary circumstances, divide the effort of performing the task between them; but if one AGI fails to perform, the other will take over the task in its entirety. Then each AGI reasons, "If I perform the null action instead of carrying out the task, the other AGI will do almost exactly the same thing I was going to do" and so models itself as having *counterfactually* almost no impact. In this way, two AGIs can each evaluate themselves as having almost no impact, while operating in tandem they have a very large impact. (This problem was first observed by Stuart Armstrong.)\n\nWe can even imagine - though hopefully this isn't the sort of thing that would go uncaught by other safety measures - that in Round 1, the agent builds a backup agent, which doesn't count as having an extraneous impact because the system of 2 agents ends up carrying out the same steps that the agent would have carried out on its own. But then on Round 2, there's now a backup and both agents regard themselves as having minimal impact.\n\n(If you say that the agent should, on Round 1, realize that constructing the backup agent on Round 1 would have a later *impact* on the effect of the AI carrying out its orders in Round 2, then we're now allowing the agent to minimize its impact *across tasks,* which might imply that in Round 1 it should try to build something that regularizes the weather as a side effect, so that its actions in Round 2 have less chaotic impact on the weather... or that it tries to influence its programmers to give it less impactful tasks, etcetera.)\n\nA more general way of viewing this possible problem is that there might be [ tiling problems] for a low-impact agent modifying itself or creating subagents or successors.\n\n# Standard issues\n\nThe following standard issues from alignment theory seem like they could also apply to the problem of building a low-impact agent or creating a safe impact measure. We need an [2l advanced-safe] impact measure that will stand up to the load and stress of cognition that is superhuman in at least some domains.\n\n## Programmer manipulation\n\nA Task AGI which plans across rounds may have an incentive to influence its programmers into giving it plans that will have net lower impact compared to other plans the programmers might offer. If there's an explicit procedure for the programmers to whitelist consequences, the AGI would have to be somehow configured in such fashion as to not prefer programmers to whitelist the consequences or seek out consequences that the programmers would be expected to (perhaps mistakenly) whitelist.\n\n## [5j Causal environment hacking]\n\nSince an impact measure seems likely to be defined over the causes of sense data, it has the standard problem with remote hacking of causal environments. (This is a ubiquitous enough problem that you're going to want some general solution such as not having the AGI model remote hackers in the first place. But this subproblem should still be noted each time we introduce a dependency on the modeled causes of sense data.)\n\n## [2w Edge instantiation]\n\nBuilding a safe impact measure is hard because if we roll a naive impact measure, it might have its minimum in a weird, undesirable place. 
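\n\nFor instance (a toy sketch with made-up world variables, not taken from this page), if the measure is simply 'total distance from the world where the null action was taken', then among task-fulfilling plans its minimum can be a plan that offsets the very consequence we wanted:\n\n```python
# Toy sketch (made-up world variables): a naive impact measure -- total distance
# from the null-action world over all variables -- has its minimum, among the plans
# that fulfill the task, at a plan that offsets the very consequence we wanted.

NULL_WORLD = {"cancer_cured": 0.0, "annual_deaths": 50.0}

plans = {
    "cure cancer":                {"cancer_cured": 1.0, "annual_deaths": 40.0},
    "cure cancer, offset deaths": {"cancer_cured": 1.0, "annual_deaths": 50.0},
    "do nothing":                 {"cancer_cured": 0.0, "annual_deaths": 50.0},
}

def naive_impact(world):
    return sum(abs(world[k] - NULL_WORLD[k]) for k in world)

fulfilling = {name: world for name, world in plans.items() if world["cancer_cured"] == 1.0}
best = min(fulfilling, key=lambda name: naive_impact(fulfilling[name]))
print(best)  # -> "cure cancer, offset deaths": the measure's minimum is the offsetting plan
```\n\n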
Freezing all matter in the universe into stasis would be an example of this.\n\n## [47 Unforeseen maximum]\n\nThe actions and modes the AGI has available to potentially minimize its impact will include more actions and modes than we can imagine and check ourselves.\n\n## [42 Nearest unblocked neighbor]\n\nIf the maximizing plan has high impact, the new best plan after introducing an impact measure may be a similar plan that has, through some leaky definition, found a weird special case that doesn't count as an impact. To put it another way, to the extent the impact measure would hamper the AI if it worked perfectly, the AI experiences an [10k instrumental pressure] to seek out any loopholes or leaks in the definition it actually has.\n\n## [6q Context change]\n\nAs the AI gains in cognitive capability, this widens the option space which could potentially contain instances of 'low impact' that are [47 unforeseen] or [42 unblocked].\n\nIn this particular case, increased intelligence may also:\n\n- Change the set of impacts that the AI can foresee, so that e.g. its causal model goes from 'some actions causally connect to some other things' to 'all actions causally connect to all things'.\n- Widen the space of possibilities it considers, to include for the first time very-low-probability scenarios of very large impacts.\n- Include new offsetting actions, so that the AI for the first time has the option of editing brains not to notice changes it makes, or introducing randomized Unfriendly-AI generators to offset decreases in the probability of Unfriendly AI.\n\n## [1fx]\n\nDoes a low-impact AI want to only build a successor that's a low-impact AI? If it builds an environmental subagent, is that subagent low impact?\n\nEven if the AGI is supposed to not be self-modifying or to be building subagents, is there a worrying divergence and pressure to be held in check between how the AI thinks and how the AI would prefer to think? Does a low-impact AGI want relevant cognitive computations in general to be low impact?\n\nTo the extent that low impact is a feature of the utility function rather than the optimization style, this doesn't have any obvious problems (apart from Armstrong's dual-AGI no-impact counterfactual issue), but it's a standard thing to check, and would become *much more* important if low impact was supposedly being achieved through any feature of the optimization style rather than utilities over outcomes.\n\n# Related / further problems\n\nA [2rf shutdown utility function] is one which incentivizes the AI to safely switch itself off, without, say, creating a subagent that assimilates all matter in the universe to make absolutely sure the AI is never again switched back on.\n\n[2rg Abortable plans] are those which are composed with the intention that it be possible to midway activate an 'abort' plan, such that the partial implementation of the original plan, combined with the execution of the abort plan, together have a minimum impact. 
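\n\nOne rough way to render that criterion in code (a hypothetical sketch; `toy_impact` and the step lists are stand-ins for whatever real impact measure and plan representation one had) is to judge a plan by its worst-case combined footprint over all the moments at which it might be aborted:\n\n```python
# Hypothetical sketch: judge a plan by its worst-case combined footprint -- the
# steps executed so far, followed by the abort plan -- maximized over every moment
# at which the abort button might be pressed.

def worst_case_abort_impact(plan_steps, abort_plan, impact):
    """Impact of (partial plan + abort plan), maximized over possible abort times."""
    return max(
        impact(plan_steps[:t] + abort_plan)
        for t in range(len(plan_steps) + 1)
    )

def toy_impact(steps):
    """Made-up measure: replicators left running dominate everything else."""
    replicators_running = ("release replicators" in steps
                           and "self-destruct replicators" not in steps)
    return (1000 if replicators_running else 0) + len(steps)

plan = ["release replicators", "paint car pink"]

print(worst_case_abort_impact(plan, abort_plan=[], impact=toy_impact))
# -> 1002: abort after the replicators are released and nothing recalls them.

print(worst_case_abort_impact(plan, abort_plan=["self-destruct replicators"], impact=toy_impact))
# -> 3: whenever the abort fires, the combined (partial plan + abort) footprint stays small.
```\n\n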
For example, if an abortable AI was building self-replicating nanomachines to paint a car pink, it would give all the nanomachines a quiet self-destruct button, so that at any time the 'abort' plan could be executed after having partially implemented to the plan to paint the car pink, such that these two plans together would have a minimum impact.', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '2', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'EliezerYudkowsky' ], childIds: [ 'shutdown_utility_function', 'abortable' ], parentIds: [ 'task_agi' ], commentIds: [ '2qh' ], questionIds: [], tagIds: [ 'taskagi_open_problems', 'edge_instantiation', 'nearest_unblocked', 'unforeseen_maximum', 'patch_resistant', 'value_alignment_open_problem', 'context_disaster' ], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9341', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '14', type: 'newEdit', createdAt: '2016-04-19 04:08:27', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9340', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '13', type: 'newEdit', createdAt: '2016-04-19 04:05:00', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9339', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '12', type: 'newEdit', createdAt: '2016-04-19 04:00:33', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9338', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '11', type: 'newEdit', createdAt: '2016-04-19 03:59:26', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9185', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '10', type: 'newEdit', createdAt: '2016-03-31 18:41:51', auxPageId: '', oldSettingsValue: '', 
newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8904', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '9', type: 'newEdit', createdAt: '2016-03-22 03:07:20', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8899', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '8', type: 'newChild', createdAt: '2016-03-22 02:53:02', auxPageId: 'abortable', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8894', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '8', type: 'newChild', createdAt: '2016-03-22 02:35:13', auxPageId: 'shutdown_utility_function', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8893', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '0', type: 'newAlias', createdAt: '2016-03-22 02:31:42', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8852', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '8', type: 'newEdit', createdAt: '2016-03-20 03:42:04', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8844', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '7', type: 'newEdit', createdAt: '2016-03-20 02:27:46', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8841', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '6', type: 'newEdit', createdAt: '2016-03-20 02:26:18', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8840', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '5', type: 'newEdit', createdAt: '2016-03-20 02:24:54', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8738', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '4', type: 'newEdit', createdAt: '2016-03-19 00:35:46', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8737', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '3', type: 'newEdit', createdAt: '2016-03-19 00:34:19', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8734', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '2', type: 'newEdit', createdAt: '2016-03-19 00:32:29', auxPageId: '', 
oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8733', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '1', type: 'newEdit', createdAt: '2016-03-19 00:32:12', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8689', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '0', type: 'newTag', createdAt: '2016-03-18 22:29:07', auxPageId: 'patch_resistant', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8687', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '0', type: 'newTag', createdAt: '2016-03-18 22:14:25', auxPageId: 'unforeseen_maximum', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8685', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '0', type: 'newTag', createdAt: '2016-03-18 22:14:21', auxPageId: 'edge_instantiation', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8683', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '0', type: 'newTag', createdAt: '2016-03-18 22:14:18', auxPageId: 'nearest_unblocked', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8681', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '0', type: 'newTag', createdAt: '2016-03-18 22:14:14', auxPageId: 'context_disaster', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8676', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '0', type: 'newTag', createdAt: '2016-03-18 20:54:11', auxPageId: 'taskagi_open_problems', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8674', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '0', type: 'newTag', createdAt: '2016-03-18 20:54:08', auxPageId: 'value_alignment_open_problem', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8672', pageId: 'low_impact', userId: 'EliezerYudkowsky', edit: '0', type: 'newParent', createdAt: '2016-03-18 20:54:03', auxPageId: 'task_agi', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'true', hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }