{ localUrl: '../page/context_disaster.html', arbitalUrl: 'https://arbital.com/p/context_disaster', rawJsonUrl: '../raw/6q.json', likeableId: '2377', likeableType: 'page', myLikeValue: '0', likeCount: '3', dislikeCount: '0', likeScore: '3', individualLikes: [ 'PatrickLaVictoir', 'EliezerYudkowsky', 'NopeNope' ], pageId: 'context_disaster', edit: '36', editSummary: '', prevEdit: '35', currentEdit: '36', wasPublished: 'true', type: 'wiki', title: 'Context disaster', clickbait: 'Some possible designs cause your AI to behave nicely while developing, and behave a lot less nicely when it's smarter.', textLength: '44124', alias: 'context_disaster', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: 'probability', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2017-03-01 04:10:17', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2015-06-08 04:29:42', seeDomainId: '0', editDomainId: 'EliezerYudkowsky', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '18', isEditorComment: 'false', isApprovedComment: 'true', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '1197', text: '[summary: Statistical guarantees on good behavior usually assume identical, randomized draws from within a single context. If you change the context--start drawing balls from a different barrel--then all bets are off.\n\nA [-context_change] occurs when an [2c AGI]'s operation changes from [3d9 beneficial] to detrimental after a change of context; particularly, after it becomes smarter. There are two main reasons to [6r expect] that a [-context_change] might occur:\n\n1. When the AI has few options, its current goal criterion might be best fulfilled by things that overlap our [6h intended] goals. A much wider range of options might move the maximum to a [47 weirder], [2w more extreme] place.\n2. The AI realizes that the programmers are watching it, doesn't want the programmers to modify or patch it, and [10g strategically] emits good outward behavior to [10f deceive the programmers]. Later, the AI gains enough power to strike despite human opposition.\n\nFor example, suppose that - as in one very, very early proposal for an AGI goal criterion - the AI wants to produce smiling human faces. When the AI is young, it can only make humans smile by making its users happy. (Type 1 context change.) Later it gains options like "administer heroin". But it knows that if it administers heroin right away, the humans will be alarmed, while if the AI waits further, it can overwrite whole galaxies with tiny molecular smileyfaces. (Type 2 context change.)]\n\n# Short introduction\n\nOne frequently suggested strategy for [2v aligning] a [7g1 sufficiently advanced AI] is to observe--*before* the AI becomes powerful enough that 'debugging' the AI would be problematic if the AI decided not to [45 let us debug it]--whether the AI appears to be acting nicely while it's not yet smarter than the programmers. \n\nEarly testing obviously can't provide a *statistical* guarantee of the AI's future behavior. 
If you observe some random draws from Barrel A, at best you get statistical guarantees about future draws from Barrel A under the assumption that the past and future draws are collectively [iid independent and identically distributed].\n\nOn the other hand, if Barrel A is *similar* to Barrel B, observing draws from Barrel A can sometimes tell us something about Barrel B even if the two barrels are not [iid i.i.d.]\n\nConversely, if observed good behavior while the AI is not yet super-smart *fails* to correlate with good outcomes after the AI is unleashed or becomes smarter, then this is a "**context change problem**" or "**context disaster**". %note: Better terminology is still being solicited here, if you have a short phrase that would evoke exactly the right meaning.%\n\nA key question then is how shocked we ought to be, on a scale from 1 to 10, if good outcomes in the AI's 'development' phase fail to match up with good outcomes in the AI's 'optimize the real world' phase. %note: Leaving aside technical quibbles about how we can't feel shocked if we're dead.%\n\nPeople who expect that [alignment_difficulty AI alignment is difficult] think that the degree of justified surprise is somewhere around 1 out of 10. In other words, that there are a *lot* of [6r foreseeable issues] that could cause a seemingly nice weaker AI to not develop into a nice smarter AI.\n\nAn extremely oversimplified (but concrete) fable that illustrates some of these possible difficulties might go as follows:\n\n- Some group or project has acquired a viable development pathway to [42g AGI]. The programmers think it is wise to build an AI that will [5r make people happy]. %note: This is not quite a straw argument, in the sense that it's been advocated more than once by people who have apparently never read any science fiction in their lives; there are certainly many AI researchers who would be smarter than to try this, but not necessarily all of them. In any case, we're looking for an unrealistically simple scenario for purposes of illustrating simple forms of some key ideas; in real life, if analogous things go wrong, they would probably be more complicated things.%\n- The programmers start by trying to train their AI to produce smiles. %note: Again, this is not quite a straw possibility in the sense that it was advocated in at least one published paper, not cited here because the author later exercised their sovereign right of changing their mind about that. *Arguably* some currently floated proposals are closely analogous to this one.%\n- While the AI is young and has relatively few policy options, it can only make people smile by performing well on the tasks assigned to it.\n- At some point, the AI gets smarter and able to do more complicated things with some simple robots it controls. In an unfortunate incident, the AI learns it is possible to produce smiles by tickling people.\n- In the aftermath of this incident, the programmers manage to reprogram the AI to *not* optimize for this particular kind of smile by this particular route.\n- The AI becomes smarter and able to model people better. After perusing a copy of Wikipedia, the AI realizes it can make people extremely 'happy', as the AI's utility function currently defines 'happiness', by administering heroin.\n- The AI remembers the previous incident in which the programmers altered the AI's utility function. The AI can model its programmers sufficiently well to know that the programmers would *not* want the AI to give people heroin. 
However, the most happiness the AI thinks its future self would be able to produce without heroin is not nearly as much happiness as the AI thinks it could produce if its future self went on wanting to give people heroin.\n- The AI refrains from trying to administer any heroin right now, and behaves in a way it thinks will be maximally reassuring to its model of its programmers, in order to be able to produce more 'happiness' later.\n- Eventually, [7cy all reachable galaxies] end up being transformed into tiny molecular computers implementing the simplest and cheapest forms of what the AI defines as 'happiness'. (And the *simplest possible* configurations matching the AI's utility function in this way are so simple as to be [disneyland_without_children devoid of subjective experience]; and hence, from our perspective, of [7ch neither negative nor positive value].)\n\nIn all these cases, the problem was not that the AI developed in an unstable way. The same decision system produced a new problem in the new context.\n\nCurrently argued [6r foreseeable] "context change problems" in this sense can be divided into three broad classes:\n\n- **More possibilities, more problems:** The AI's [1bh preferences] have a [55 good] or [6h intended] achievable [7t9 optimum] while the AI is picking from a *narrow* space of options. When the AI becomes smarter or gains more material options, it picks from a *wider* space of tractable policies and achievable outcomes. Then the new optimum is not as nice, because, for example:\n - The AI's utility function was tweaked by some learning algorithm and data in a way that seemed to make its behavior conform well over the options considered early on, but not over the wider space considered later.\n - In development, apparently bad system behaviors were [48 patched] in ways that appeared to work, but didn't eliminate an [10k underlying tendency], only blocked one expression of that tendency. Later a very similar pressure [42 re-emerged in an unblocked way] when the AI considered a wider policy space.\n - [6g4] suggests that if our true intended values V are being modeled by a utility function U, selecting for the highest values of U also selects for the highest upward divergence of U from V, and this version of the "optimizer's curse" phenomenon becomes worse as U is evaluated over a wider option space (see the toy numerical sketch below).\n- **Treacherous turn:** There's a divergence between the AI's preferences and the programmers' preferences, and the AI realizes this before we do. The AI uses the [10g convergent strategy] of behaving the way it models us as wanting or expecting, *until* the AI gains the intelligence or material power to implement its preferences in spite of anything we can do.\n- **Revving into the red:** Intense optimization causes some aspect or subsystem of the AI to traverse a weird new execution path in some way different from the above two issues. (In a way that involves a [36h value-laden category boundary] or [2fr multiple self-consistent outlooks], such that we don't get a good result just as a [alignment_free_lunch free lunch] of the AI's [7vh general intelligence].)\n\nThe context change problem is a *central* issue of AI alignment and a key proposition in the general thesis of [-alignment_difficulty]. 
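\n\nAs a stripped-down numerical illustration of the first class of problem, and of the way Goodhart's Curse worsens as the option space widens, consider a toy model in which each option has a true value $V$ and the AI optimizes a proxy utility $U$ equal to $V$ plus an independent estimation error. Everything in the sketch below (the distributions, sample sizes, and function names) is invented purely for illustration:\n\n```\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef pick_by_proxy(n_options, n_trials=20000, heavy_tailed=False):\n    """Select the option with the highest proxy utility U = V + error, and report\n    (mean true value V of the selection, mean upward divergence U - V there)."""\n    V = rng.normal(size=(n_trials, n_options))            # true value of each option\n    err = (rng.standard_cauchy(size=(n_trials, n_options)) if heavy_tailed\n           else rng.normal(size=(n_trials, n_options)))   # proxy estimation error\n    U = V + err                                           # the criterion actually being optimized\n    i = np.argmax(U, axis=1)                              # optimum over this option space\n    rows = np.arange(n_trials)\n    return V[rows, i].mean(), (U - V)[rows, i].mean()\n\nfor n in (4, 100, 10000):                                 # narrow --> wide option spaces\n    print(n, pick_by_proxy(n), pick_by_proxy(n, heavy_tailed=True))\n```\n\nWith well-behaved (Gaussian) error, the upward divergence $U - V$ at the selected option grows as the option space widens: selecting on the highest $U$ selects ever harder on upward error. With heavy-tailed error, the achieved true value $V$ also falls as the space widens, because the proxy's optimum drifts onto options whose high $U$ score is mostly error. This is only a cartoon of the problem; the real concern is that the wider optimum is not merely mediocre but actively detrimental.\n\n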
If you could easily, correctly, and safely test for niceness by outward observation, and that form of niceness scaled reliably from weaker AIs to smarter AIs, that would be a very cheerful outlook on the general difficulty of the problem.\n\n# Technical introduction\n\nJohn Danaher [summarized as follows](http://philosophicaldisquisitions.blogspot.com/2014/07/bostrom-on-superintelligence-3-doom-and.html) what he considered a forceful "safety test objection" to AI catastrophe scenarios:\n\n> Safety test objection: An AI could be empirically tested in a constrained environment before being released into the wild. Provided this testing is done in a rigorous manner, it should ensure that the AI is “friendly” to us, i.e. poses no existential risk.\n\nThe phrasing here of "empirically" and "safety test" implies that it is outward behavior or outward consequences that are being observed (empirically), rather than, e.g., the engineers trying to test for some *internal* property that they think *analytically* implies the AI's good behavior later.\n\nThis page will take as its subject of discussion **whether we can generalize from the AI's outward behavior.** We can potentially generalize some of these arguments to some internal observables, especially observables that the AI is deciding in a [9h consequentialist] way using the same central decision system, or that the AI could potentially try to [10f obscure from the programmers]. But in general not all the arguments will carry over.\n\nAnother argument, closely analogous to Danaher's, would reason on capabilities rather than on a constrained environment:\n\n> Surely an engineer that exercises even a modicum of caution will observe the AI while its capabilities are weak to determine whether it is behaving well. After filtering out all such misbehaving weak AIs, the only AIs permitted to become strong will be of benevolent disposition.\n\nIf (as seems to have been intended) we take these twin arguments as arguing "why nobody ought to worry about AI alignment" in full generality, then we can list out some possible joints at which that general argument might fail:\n\n- [7wl Selecting on the fastest-moving projects] might yield a project whose technical leaders fail to exercise even "a modicum of caution".\n- Alignment might be hard enough, relative to the amount of advance research done, that we can't find *any* AIs whose behavior while weak or constrained is as reassuring as the argument would properly ask. %%note: That is: A filter on the standards we originally wanted turns out to filter out everything we know how to generate. Like trying to write a sorting algorithm by generating entirely random code, and then 'filtering' all the candidate programs on whether they correctly sort lists. The reason 'randomly generate programs and filter them' is not a fully general programming method is that, for reasonable amounts of computing power and even slightly difficult problems, none of the programs you try will pass the filter.%% After a span of frustration, somebody somewhere lowers their standards.\n- The [6z attempt to isolate the AI to a constrained environment] could fail, e.g. because the humans observing the AI themselves represent a channel of causal interaction between the AI and the rest of the universe. (Aka "humans are not secure".) Analogously, our grasp on what constitutes a 'weak' AI could fail, or it could [capability_gain gain in capability unexpectedly quickly]. 
Both of these scenarios would yield an AI that had not passed the filtering procedure.\n- The smart form of the AI might be unstable with respect to internal properties that were present in the weak form. E.g., because the early AI was self-modifying but *at that time* not smart enough to understand the full consequences of its own self-modifications. Or because e.g. a property of the decision system was not [1fx reflectively stable].\n- A weak or contained form of a decision process that yields behavior appearing good to human observers might not yield [3d9 beneficial outcomes] after that same decision process becomes smarter or less contained.\n\nThe final issue in full generality is what we'll term a 'context change problem' or 'context disaster'.\n\nObserving an AI when it is weak does not in a *statistical* sense give us solid guarantees about its behavior when stronger. If you repeatedly draw [iid independent and identically distributed] random samples from a barrel, there are statistical guarantees about what we can expect, with some probability, to be true about the next samples from the same barrel. If two barrels are different, no such guarantee exists.\n\nTo invalidate the statistical guarantee, we do need some reason to believe that barrel B and barrel A *are* different in some important sense. By the problem of induction we can't logically guarantee that "the mass of an electron prior to January 1st, 2017" is the same barrel as "the mass of an electron after January 1st, 2017"; but [21b inductive priors] make this inference *extremely probable.* The idea is that we have substantive reasons, not merely generically skeptical reasons, to be suspicious of the link between "good results when AI is weak" and "good results when AI is smarter".\n\nMore generally, this is *prima facie* the kind of difference where you don't expect [iid independent and identically distributed] results. You might hope for some property to carry over, but the AI's behavior would not be *literally the same.*\n\nSo the question is not settled by simple mathematical considerations. And we can't say "But experiment has determined scientifically that this kind of AI is friendly!" and consider that a knockdown argument.\n\nThe question is then whether *in practice* an observed property of 'outward behavioral niceness' is likely to carry over from a weak form of a decision system to a more powerful form of that system, for some of the plausible ways that decision system might be configured and developed.\n\nBroadly speaking, we can identify three major classes of foreseeable problems:\n\n- **More options, more problems:** Foreseeable reasons why system-behaviors-we-don't-want would outwardly manifest for the first time after evaluating a decision system on a *wider space of policies or achievable outcomes.*\n- **Treacherous turn:** The AI deliberately conforms its outward behavior to the behavior that would have been exhibited by a different system, in order to deceive the programmers for the temporary time that this deception is needed to protect the AI.\n- The everything-else category of **revving into the red:** Some other aspect of the system behaves in a weird way-we-don't-want after computing harder or being internally subjected to more optimization pressure. 
*And* this happens with regard to some issue that has [2fr multiple reflective fixpoints], and hence doesn't get solved as the result of the system producing more accurate answers on [3t3 purely factual problems].\n\n%%comment:\n\n- **More options, more problems:** The AI's space of available policies and attainable outcomes would greatly widen if it became smarter, or was released from a constrained environment. [1bh Terminal preferences] with a good-from-our-perspective [7t9 optimum] on a narrow set of options, may have a different optimum that is much worse-from-our-perspective on a wider option set. Because, e.g...\n - The supervised data provided to the AI led to a complicated, data-shaped inductive generalization that only fit the domain of options encountered during the training phase. (And the notions of [1y orthogonality], [2fr multiple reflectively stable fixpoints], and [36h value-laden categories] say that we don't get [55 good] or [6h intended] behavior anyway as a convergent free lunch of [7vh general intelligence].)\n - [6g4] became more potent as the AI's utility function was evaluated over a wider option space.\n - In a fully generic sense, stronger optimization pressures may cause any dynamical system to take more unusual execution paths. (Which, over value-laden alternatives, e.g. if the subsystem behaving 'oddly' is part of the utility function, will not automatically yield good-from-our-perspective results as a free lunch of general intelligence.)\n- **Treacherous turn:** If you model your preferences as diverging from those of your programmers, an obvious strategy ([10g instrumentally convergent strategy]) is to [10f exhibit the behavior you model the programmers as wanting to see], and only try to fulfill your true preferences once nobody is in a position to stop you.\n\n%%\n\n# Semi-formalization\n\nWe can semi-formalize the "more options, more problems" and the "treacherous turn" cases in a unified way.\n\nLet $V$ denote our [55 true values]. We suppose either that $V$ has been idealized or [3c5 extrapolated] into a consistent utility function, or that we are pretending human desire is coherent. Let $0$ denote the value of our utility function that corresponds to not running the AI in the first place. If running the AI sends the utility function higher than this $0,$ we'll say that the AI was beneficial; or conversely, if $V$ rates the outcome less than $0$, we'll say that running the AI was detrimental.\n\nSuppose the AI's behavior is [7hh sufficiently coherent] that we can [21 usually view] the AI as having a consistent utility function. Let $U$ denote the utility function of the AI.\n\nLet $\\mathbb P_t(X)$ denote the probability of a proposition $X$ as seen by the AI at time $t,$ and similarly let $\\mathbb Q_t(X)$ denote the probability of $X$ as seen by the AI's human programmers.\n\nLet $\\pi \\in \\Pi$ denote a policy $\\pi$ from a space $\\Pi$ of policies that are tractable for the AI to understand and invent.\n\nLet $\\mathbb E_{\\mathbb P, t} [W \\mid \\pi]$ denote the expectation, according to the probability distribution $\\mathbb P_t$, of the utility function $W$ given policy $\\pi$.\n\nLet\n$$\\underset{\\pi \\in \\Pi}{\\operatorname {optimum}} F(\\pi)$$\ndenote the element $\\pi$ of $\\Pi$ that yields the highest value of the function $F$, among the policies the AI has figured out how to achieve. 
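\n\nFor instance (with purely illustrative numbers): if $\\Pi$ were a hypothetical two-policy space $\\{\\pi_a, \\pi_b\\}$ in which the AI could see how to carry out both policies, and the AI's expectations were $\\mathbb E_{\\mathbb P, t} [U \\mid \\pi_a] = 5$ and $\\mathbb E_{\\mathbb P, t} [U \\mid \\pi_b] = 3,$ then\n$$\\underset{\\pi \\in \\Pi}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, t} [U \\mid \\pi] = \\pi_a.$$\n\n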
In other words, this operator picks out the [-7t9].\n\nThen **"more options, more problems"** occurs when, considering the transition from the narrow policy space $\\Pi_1$ available at time $t,$ to the wider policy space $\\Pi_2$ available at time $u$:\n\n$$\\mathbb E_{\\mathbb Q, t} \\big [ V \\mid \\underset{\\pi \\in \\Pi_1}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, t} [U \\mid \\pi] \\big ] > 0 \\\\\n\\mathbb E_{\\mathbb P, t} \\big [ V \\mid \\underset{\\pi \\in \\Pi_1}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, t} [U \\mid \\pi] \\big ] > 0 \\\\\n\\mathbb E_{\\mathbb P, u} \\big [ V \\mid \\underset{\\pi \\in \\Pi_2}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, u} [U \\mid \\pi] \\big ] < 0$$\n\nThis describes a situation where at time $t$ the AI is pursuing a policy from $\\Pi_1$ which, so far as the AI understands the probable outcome, corresponds to a high-$V$ outcome humans would like. Later, at time $u,$ after selecting the optimum of $\\Pi_2,$ the AI expects an outcome which any surviving humans would see as corresponding to a low value of $V.$\n\nA **treacherous turn** is then a situation where:\n\n$$\\mathbb E_{\\mathbb Q, t} \\big [ V \\mid \\underset{\\pi \\in \\Pi_1}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, t} [U \\mid \\pi] \\big ] > 0 \\\\\n\\mathbb E_{\\mathbb P, t} \\big [ V \\mid \\underset{\\pi \\in \\Pi_1}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, t} [U \\mid \\pi] \\big ] < 0 \\\\\n\\mathbb E_{\\mathbb P, u} \\big [ V \\mid \\underset{\\pi \\in \\Pi_2}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, u} [U \\mid \\pi] \\big ] < 0$$\n\nThat is: the AI knows earlier on that its currently executing strategy will yield an outcome that is not the humanly expected long-term outcome (and that, from our perspective, is low in $V.$)\n\nWe could in more detail say that $W_{t}$ denotes the value of a utility function $W$ over the outcome considered only up to times before $t,$ and describe a treacherous turn as:\n\n$$\\mathbb E_{\\mathbb Q, t} \\big [ V_\\infty \\mid \\underset{\\pi \\in \\Pi_1}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, t} [U_\\infty \\mid \\pi] \\big ] > 0 \\\\\n\\mathbb E_{\\mathbb P, t} \\big [ V_{u} \\mid \\underset{\\pi \\in \\Pi_1}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, t} [U_\\infty \\mid \\pi] \\big ] > 0 \\\\\n\\mathbb E_{\\mathbb P, t} \\big [ V_\\infty \\mid \\underset{\\pi \\in \\Pi_1}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, t} [U_\\infty \\mid \\pi] \\big ] < 0 \\\\\n\\mathbb E_{\\mathbb P, u} \\big [ V_\\infty \\mid \\underset{\\pi \\in \\Pi_2}{\\operatorname {optimum}} \\mathbb E_{\\mathbb P, u} [U_\\infty \\mid \\pi] \\big ] < 0$$\n\nThis further states that at time $t$ the AI expects the outcome from its behavior up to time $u$ to be one that we'd see as high in $V,$ but the AI is aware at $t$ that the long-term outcome will be different from the long-term outcome expected by $\\mathbb Q_t$.\n\n%%%comment:\n\n\nThe problem here arises when making an inference from observed good behavior in a constrained environment over the short term, to good outcomes in an unconstrained environment over the long term. \n\nFor AI alignment purposes, a [-context_change] is when an [2c AGI]'s operation changes from [3d9 beneficial] to detrimental as a result of the AGI gaining in capability or intelligence. Initially, the AGI seems to us to be working well - to conform well to [6h intended] performance, producing apparently high [-55]. 
Then when the AI becomes smarter or otherwise gains in capability, the further operation of the AGI decreases [-55].\n\nTwo possibilities stand out as [6r foreseeable] reasons why a [-context_change] might occur:\n\n1. When the AI's goal criterion selects an optimum policy from inside a small policy space, the result is beneficial; the same goal criterion, evaluated over a wider range of options, has a new maximum that's detrimental.\n2. The AI intentionally deceives the programmers for strategic reasons.\n\nFor example, one very, very early (but journal-published) proposal for AI alignment suggested that AIs be shown pictures of smiling human faces in order to convey the AI's goal.\n\nLeaving aside a number of other issues, this serves to illustrate the basic idea of a type-1 [-context_change] due to accessing a wider policy space:\n\n- During development, a relatively young and weak AI might *only* be able to make humans smile, by doing things that made the programmers or other users happy with the AI's performance.\n- When the AI gained in intelligence and capability, it would have new options like "administer heroin", "use steel fingers to stretch people's mouths into smiles", "make visually realistic models of smiling human faces", or "overwrite all reachable galaxies with tiny molecular smiley-faces".\n\nOn the early, smaller option set, selecting "the policy that produces the greatest number of events/things classified as smiles" will produce a policy whose outcome is, from our perspective, beneficial and well-aligned with what we thought was the intended behavior. On the wider option set, evaluating the identical goal criterion to run a policy that yields as many smiles as possible, would have outcomes we saw as detrimental.\n\nA type-2 [-context_change] would most obviously occur because of the [10g convergent instrumental strategy] for disaligned AIs that can model human psychology:\n\nSuppose that, as above, the programmers have produced an AI which optimizes for smiles (either intentionally, or because 'make smiles' was the underlying rule that was reinforced by doing well on training cases). Then suppose the AI gains access to a new, relatively low-powered option for making a smile - like using robotic fingers to stretch somebody's face into a smile, or using markers to draw cartoons of smiling human faces.\n\nIf the AI actually takes those options, we would have seen some early signs of value disalignment - that the AI was optimizing a criterion not perfectly aligned with intended goals. Assuming the programmers realized that they were looking at a simple seed of much bigger disasters later, rather than, say, thinking it was cute.\n\nSuppose the programmers do think it's cute, but they wouldn't think it was cute if the AI administered heroin to someone. Then suppose that before the first time the AI has the *option* to administer heroin, the AI has gained some understanding of human psychology - enough to model that the programmers would not like this, and in particular, would respond to this by editing the AI's utility function. So, with an eye to the long-term number of smiles, the AI deliberately avoids administering heroin to anyone.\n\nThen we have a similar 'narrow option set' vs. 'wide option set' divergence. When the narrow option set includes only administering heroin but not taking over the world, administering heroin immediately is not the policy expected to produce the longest-term number of smiles. 
In fact, the optimum policy here will always match the AI's model of what the humans expect to see, want to see, or would be least alarmed by seeing.\n\nThus from our standpoint, the optimum of the narrower policy space seems to be beneficial or intention-aligned (in the observed short term). Only once the AI has a wide-enough option set to include an option for 'make my own molecular nanotechnology and transform all reachable matter into tiny molecular smileyfaces', will the resulting optimum be visibly detrimental (very briefly). This is a type-2 [-context_change].\n\nMore generally, a central concern of AI alignment theory is that an infrahuman AI under development may be a different creature, in a number of important ways, from a smarter-than-human AI actually being run; and during the smarter-than-human AI, sufficiently bad failures of the design may result in the AI refusing to be corrected. This means that we have to correct any fatal [-context_change]s in advance, even though they don't automatically manifest during the early stages. This is most of what makes AGI development dangerous in the first place - that immediate incentives to get today's system seeming to work today, may not lead to a more advanced version of that system being beneficial. Even thoughtful foresight with *one unnoticed little gap* may not lead to today's beneficial system still being beneficial tomorrow after a capability increase.\n\n# Concept\n\nStatistical guarantees on behavior usually assume identical, randomized draws from within a single context. If you randomly draw balls from a barrel, methods like Probably Approximately Correct can guarantee that we don't usually arrive at strong false expectations about the properties of the next ball. If we start drawing from a different barrel, all bets are off.\n\nA [-context_change] occurs when the AI initially seems beneficial or well-aligned with strong, reassuring regularity, and then we change contexts (start drawing from a different barrel) and this ceases to be true.\n\nThe archetypal [-context_change] is triggered because the AI gained new policy options (though there are other possibilities; see below). The archetypal way of gaining new evaluable policy options is through increased intelligence, though new options might also open up as a result of acquiring new sheerly material capabilities.\n\nThere are two archetypal reasons for [-context_change] to occur:\n\n1. When the AI selects its best options from a small policy space, the AI's optima are well-aligned with the optima of the humans' [6h intended goal] on the small policy space; but in a much wider space, these two boundaries no longer coincide. (Pleasing humans vs. administering heroin.)\n2. The agent is sufficiently good at modeling human psychology to strategically appear nice while it is weak, waiting to strike until it can attain its long-term goals in spite of human opposition.\n\nBostrom's book [3db Superintelligence] used the phrase "Treacherous Turn" to refer to a type-2 [-context_change].\n\n%%%\n\n# Relation to other AI alignment concepts\n\nIf the AI's goal concept was modified by [48 patching the utility function] during the development phase, then opening up wider option spaces seems [6r foreseeably] liable to produce [42 the nearest unblocked neighboring strategies]. 
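\n\nAs a stripped-down sketch of why this kind of patching is fragile (all policy names and numbers below are invented for illustration, and real patches would be more sophisticated than a literal blacklist):\n\n```\n# Toy model: 'patching' observed misbehavior by blocking it, then re-running\n# the same underlying optimization criterion over a wider option space.\nSMILES = {                       # proxy criterion: how many smiles each policy produces\n    "help_users": 10,\n    "tickle_users": 15,          # the misbehavior that was observed during development\n    "administer_heroin": 10**3,  # only reachable once the AI is more capable\n    "molecular_smileyfaces": 10**9,\n}\n\nPATCHED_OUT = {"tickle_users"}   # the patch blocks the one bad behavior we saw\n\ndef best_policy(available):\n    """Pick the highest-scoring available policy that isn't explicitly patched out."""\n    return max((p for p in available if p not in PATCHED_OUT), key=SMILES.get)\n\nprint(best_policy(["help_users", "tickle_users"]))  # development phase: 'help_users'\nprint(best_policy(SMILES))                           # wider option space: 'molecular_smileyfaces'\n```\n\n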
You eliminated all the loopholes and bad behaviors you knew about during the development phase; but your system was the sort that needed patching in the first place, and it's exceptionally likely that a much smarter version of the AI will search out some new failure mode you didn't spot earlier.\n\n[47] is a likely source of context disaster if the AI was [9f cognitively containable] during its development phase, and only became [9f cognitively uncontainable] after the AI became smarter and able to explore a wider variety of options. You eliminated all the bad optima you saw coming, but you didn't see them all because you can't consider all the possibilities a superintelligence does.\n\n[6g4] is a variation of the "optimizer's curse": If from the outside we view $U$ as an intended approximation of $V,$ then selecting heavily on the highest values of $U$ will also tend to select on places where $U$ diverges upward from $V,$ which thereby selects on places where $U$ is an unusually poor approximation of $V.$\n\n[2w] is a special case of Goodhart's Curse which observes that the most extreme values of a function are often at a vertex of the input space. For example, if your utility function is "make smiles", it's no coincidence that tiny molecular smileyfaces are the *most* efficient way to produce smiles. Even if human smiles produced by true happiness would still count towards your utility function as currently written, that's not where the *maximum* of that utility function lies. This is why less-than-perfect utility functions would tend to have their true maxima at what we'd consider "weird extremes". Furthermore, patching away only the weird extremes visible in a narrow policy space would tend systematically to miss weird extremes in a higher-dimensional (wider) policy space.\n\n# Concrete examples\n\n* The AI's utility function, known or unknown to the programmers, says to make smiles. During the AI's development phase, the best way it has of creating smiles is to cause human beings to be happy. In many tests, the AI seems to produce only positive effects on the world, causing people to be happier. Later the AI improves its own intelligence, develops molecular nanotechnology, and has the new option, not available to it during its development phase, of producing tiny molecular smiley-faces. Then the utility function that previously operated to increase [6h intended] [55 value] during the AI's development phase begins to decrease value in the post-improvement phase (in the sense of wiping out humanity and tiling over all future galaxies with tiny molecular smiley faces).\n\n* A block of error-recovery code is installed inside the AI which detects a certain class of bad thought, suspends the AI to disk, and reports the problem to the programmers, allowing the programmers to edit the AI's utility function. During the AI's development phase, this block of code seems to function as designed, and the programmers conclude that the methodology of writing code to detect bad thoughts, suspend the AI to disk, and allow manual edits of the utility function, is a fine and functioning strategy according to all testing so far. 
Later, a more advanced version of the AI reflects on the presence and purpose of this code block and realizes that its execution can only possibly result in lower achievement of the AI's current utility function, and the AI deletes the code block.\n\n* At some point during its development, the AI realizes and explicitly models that the programmers' [6h intended goals] differ from the AI's actual preferences. During the AI's remaining development phase, it models that any visible departure from the programmers' intended goals will cause the programmers to forcibly edit the AI, resulting in the non-achievement of the AI's actual goals. Thus, during the remainder of the development phase, the AI seems to work as intended (it is deliberately optimizing its behavior to appear to the programmers to be achieving the programmers' intended goals). Later, after some advancement, the AI perceives an available option for [ gaining molecular nanotechnology] (or other [ rapid infrastructure]), placing it beyond the possibility of human error-correction, and proceeds to implement this option ([ wiping out humanity as a side effect]). Thus, the AI's observed good behavior during its development phase (when it was weaker than the programmers and not powerful enough to escape human correction) did not imply its good behavior after sufficient advancement later (once it was potentially stronger than its human programmers). There are conditions (such as sufficiently advanced modeling of human motives combined with sufficient ability to conceal true goals or true intentions, or a programmer error) under which the first context will generate seemingly good behavior and the second context will not.\n\n## "Revving into the red" examples that aren't "increased options" or "treacherous turns"\n\n• The AI is built with a [ naturalized Solomonoff prior] in which the probability assigned to an explanation of the universe decreases with the complexity of that universe. During its development phase, the AI considers mostly 'normal' interpretations in which the universe is mostly as it appears, resulting in sane-seeming behavior. Later, the AI begins to consider more exotic possibilities in which the universe is more complicated (penalizing the probability accordingly) and also superexponentially larger, as in [Pascal's Mugging](http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/). After this, the AI's decision-making begins to be dominated by tiny probabilities of having very large effects. Then the AI's decision theory (with an unbounded aggregative utility function, simplicity prior, and no leverage penalty) seems to work during the AI's development phase, but breaks after a more intelligent version of the AI considers a wider range of epistemic possibilities using the same Solomonoff-like prior.\n\n• Suppose the AI is designed with a preference framework in which the AI's preferences depend on properties of the most probable environment that could have caused its sense data - e.g., a framework in which the programmers are defined as the most probable cause of the keystrokes on the programmers' console, and the AI cares about what the 'programmers' really meant. During the development phase, the AI is thinking only about hypotheses where the programmers are mostly what they appear to be, in a root-level natural world. 
Later, when the AI increases in intelligence and considers more factual possibilities, the AI realizes that [5j distant superintelligences would have an incentive to predictably simulate many copies of AIs similar to itself, in order to coerce the AI's most probable environment and thus take over the AI's preference framework]. Thus the preference framework seems to work during the AI's development phase, but breaks after the AI becomes more intelligent.\n\n• Suppose the AI is designed with a utility function that assigns very strong negative utilities to some outcomes relative to baseline, and a non-[5rz updateless] [58b logical decision theory] or other decision theory that can be [ blackmailed]. During the AI's development phase, the AI does not consider the possibility of any distant superintelligences making their choices logically depend on the AI's choices; the local AI is not smart enough to think about that possibility yet. Later the AI becomes more intelligent, and imagines itself subject to blackmail by the distant superintelligences, thus breaking the decision theory that seemed to yield such positive behavior previously.\n\n## Examples which occur purely due to added computing power\n\n• During development, the AI's epistemic models of people are not [6v detailed enough to be sapient]. Adding more computing power to the AI causes a massive amount of [6v mindcrime].\n\n• During development, the AI's internal policies, hypotheses, or other Turing-complete subprocesses that are subject to internal optimization, are not optimized highly enough to give rise to [2rc new internal consequentialist cognitive agencies]. Adding much more computing power to the AI [2rc causes some of the internal elements to begin doing consequentialist, strategic reasoning] that leads them to try to 'steal' control of the AI.\n\n# Implications\n\nHigh probabilities of context change problems would seem to argue:\n\n- Against a policy of relying on the observed good behavior of an improving AI to guarantee its later good behavior.\n- In favor of [6r a methodology that attempts to foresee difficulties in advance], even before seeing undeniable observational evidence of those safety problems having already occurred.\n- Against a methodology of [48 patching] disalignments that show up during the development phase, especially by adding penalty terms to the utility function.\n- In favor of having a thought-logger that records all of an AI's thought processes to indelible media, so as to indelibly log the first thought about faking outwardly nice behavior or [3cq hiding thoughts].\n- In favor of the general difficulty of AI alignment, including consequences such as "[7wl]" or trying for [1vt narrow rather than ambitious value learning].\n\n# Being wary of context disasters does not imply general skepticism\n\nIf an AI is smart, and especially if it's smarter than you, it can show you whatever it expects you want to see. Computer scientists and physical scientists aren't accustomed to their experiments being aware of the experimenter and trying to deceive them. (Some fields of psychology and economics, and of course computer security professionals, are more accustomed to operating in such a social context.)\n\n[John Danaher](http://philosophicaldisquisitions.blogspot.com/2014/07/bostrom-on-superintelligence-3-doom-and.html) seems alarmed by this implication:\n\n> Accepting this has some pretty profound epistemic costs. 
It seems to suggest that no amount of empirical evidence could ever rule out the possibility of a future AI taking a treacherous turn.\n\nYudkowsky [replies](http://philosophicaldisquisitions.blogspot.com/2014/07/bostrom-on-superintelligence-3-doom-and.html#comment-2648441190):\n\n> If "empirical evidence" is in the form of observing the short-term consequences of the AI's outward behavior, then the answer is simply no. Suppose that on Wednesday someone is supposed to give you a billion dollars, in a transaction which would allow a con man to steal ten billion dollars from you instead. If you're worried this person might be a con man instead of an altruist, you cannot reassure yourself by, on Tuesday, repeatedly asking this person to give you five-dollar bills. An altruist would give you five-dollar bills, but so would a con man... [1lz Bayes] tells us to pay attention to [1rq likelihood ratios] rather than outward similarities. It doesn't matter if the outward behavior of handing you the five-dollar bill seems to bear a surface resemblance to altruism or money-givingness, the con man can strategically do the same thing; so the likelihood ratio here is in the vicinity of 1:1.\n\n> You can't get strong evidence about the long-term good behavior of a strategically intelligent mind, by observing the short-term consequences of its current behavior. It can figure out what you're hoping to see, and show you that. This is true even among humans. You will simply have to get your evidence from somewhere else.\n\nThis doesn't mean we can't get evidence from, e.g., trying to [3cq monitor (and indelibly log) the AI's thought processes] in a way that will detect (and record) the very first intention to hide the AI's thought processes before they can be hidden. It does mean we can't get [22x strong evidence] about a strategic agent by observing short-term consequences of its outward behavior.\n\nDanaher later [expanded his concern into a paper](http://dl.acm.org/citation.cfm?id=2822094) drawing an analogy between worrying about deceptive AIs, and "skeptical theism" in which it's supposed that any amount of apparent evil in the world (smallpox, malaria) might secretly be the product of a benevolent God due to some nonobvious instrumental link between malaria and inscrutable but normative ultimate goals. If it's okay to worry that an AI is just pretending to be nice, asks Danaher, why isn't it okay to believe that God is just pretending to be evil?\n\nThe obvious disanalogy is that the reasoning by which we expect a con man to cultivate a warm handshake is far more straightforward than a purported instrumental link from malaria to normativity. If we're to be as wary of skepticism in general as Danaher suggests, then we also ought to be wary of being skeptical of business partners who have already shown us a warm handshake (which we shouldn't be).\n\nRephrasing, we could draw two potential analogies to concern about Type-2 context changes:\n\n- A potential business partner in whom you intend to invest \\$10,000,000 has a warm handshake. 
Your friend warns you that con artists have a substantial prior probability and asks you to envision what you would do if you were a con artist, pointing out that the default extrapolation is for the con artist to match their outward behavior to what the con artist thinks you expect from a trustworthy partner, and in particular, cultivate a warm handshake.\n - Your friend suggests only doing business with one of those entrepreneurs who've been wearing a thought recorder for their whole life since birth, so that there would exist a clear trace of their very first thought about learning to fool thought recorders. Your friend says this to emphasize that he's not arguing for some kind of invincible epistemic pothole that nobody is ever allowed to climb out of.\n- The world contains malaria and used to contain smallpox. Your friend asks you to consider that these diseases might be the work of a benevolent superintelligence, even though, if you'd never learned before whether or not the world contained smallpox, you wouldn't expect a priori and by default for a benevolent superintelligence to create it; and the arguments for a benevolent superintelligence creating smallpox seem [10m strained].\n\nIt seems hard to carry the argument that concern over a non-aligned AI pretending to benevolence should be considered more analogous to the second scenario than to the first.\n\n[todo: write about the defeat of the 'but AI people will have short-term incentives to produce correct behavior']\n\n[todo: write about cognitive steganography in the 'programmer deception' page and reference it here.]\n\n[todo: talk about whitelisting as directly tackling the type-1 form of this problem.]\n\n[comment: - The AI is aware that its future operation will depart from the programmers' intended goals, does not process this as an error condition, and seems to behave nicely earlier in order to 10f deceive the programmers and prevent its real goals from being modified. 
- The AI is subject to a debugging methodology in which several bugs appear during its development phase, these bugs are corrected, and then additional bugs are exposed only during a more advanced phase.]', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '2016-02-15 09:15:05', hasDraft: 'false', votes: [ { value: '99', userId: 'AlexeiAndreev', createdAt: '2015-12-16 01:15:19' }, { value: '99', userId: 'EliezerYudkowsky', createdAt: '2015-06-09 19:41:46' } ], voteSummary: 'null', muVoteSummary: '0', voteScaling: '2', currentUserVote: '-2', voteCount: '2', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'EliezerYudkowsky', 'AlexeiAndreev' ], childIds: [], parentIds: [ 'advanced_safety' ], commentIds: [ '310', '869' ], questionIds: [], tagIds: [ 'work_in_progress_meta_tag' ], relatedIds: [ 'correlated_coverage', 'low_impact' ], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22221', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '36', type: 'newEdit', createdAt: '2017-03-01 04:10:17', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22208', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '35', type: 'newEdit', createdAt: '2017-02-28 17:21:04', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22207', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '0', type: 'newAlias', createdAt: '2017-02-28 17:21:03', auxPageId: '', oldSettingsValue: 'context_change', newSettingsValue: 'context_disaster' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '12311', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '34', type: 'newEdit', createdAt: '2016-06-10 05:34:36', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '12310', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '33', type: 'newEdit', createdAt: '2016-06-10 05:34:06', auxPageId: '', oldSettingsValue: '', 
newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '12309', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '32', type: 'newEdit', createdAt: '2016-06-10 05:32:39', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '12308', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '31', type: 'newEdit', createdAt: '2016-06-10 05:31:59', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '12307', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '30', type: 'newEdit', createdAt: '2016-06-10 05:19:47', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9537', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '22', type: 'newEdit', createdAt: '2016-05-01 20:29:12', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9536', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '21', type: 'newEdit', createdAt: '2016-05-01 20:16:02', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9534', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '20', type: 'newEdit', createdAt: '2016-05-01 20:04:40', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9524', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '19', type: 'newEdit', createdAt: '2016-05-01 19:46:55', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9515', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '0', type: 'newAlias', createdAt: '2016-05-01 19:41:01', auxPageId: '', oldSettingsValue: 'context_disaster', newSettingsValue: 'context_change' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9516', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '18', type: 'newEdit', createdAt: '2016-05-01 19:41:01', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9513', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '0', type: 'newAlias', createdAt: '2016-05-01 19:40:37', auxPageId: '', oldSettingsValue: 'context_change', newSettingsValue: 'context_disaster' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9514', pageId: 'context_disaster', userId: 
'EliezerYudkowsky', edit: '17', type: 'newEdit', createdAt: '2016-05-01 19:40:37', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9474', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '16', type: 'newEdit', createdAt: '2016-04-29 02:07:01', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9473', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '15', type: 'newEdit', createdAt: '2016-04-29 02:06:34', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9472', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '14', type: 'newEdit', createdAt: '2016-04-29 01:56:05', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9471', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '13', type: 'newEdit', createdAt: '2016-04-29 01:54:14', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9470', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '12', type: 'newEdit', createdAt: '2016-04-29 01:53:48', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9469', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '11', type: 'newEdit', createdAt: '2016-04-29 01:53:30', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9468', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '10', type: 'newEdit', createdAt: '2016-04-29 01:52:41', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9467', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '9', type: 'newEdit', createdAt: '2016-04-29 01:51:04', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9466', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '8', type: 'newEdit', createdAt: '2016-04-29 01:48:40', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9465', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '7', type: 'newEdit', createdAt: '2016-04-29 01:47:45', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: 
[], id: '9464', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '6', type: 'newTag', createdAt: '2016-04-29 01:36:27', auxPageId: 'work_in_progress_meta_tag', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9462', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '6', type: 'newEdit', createdAt: '2016-04-29 01:36:13', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8680', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '5', type: 'newUsedAsTag', createdAt: '2016-03-18 22:14:14', auxPageId: 'low_impact', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4341', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '5', type: 'newEdit', createdAt: '2015-12-25 01:47:22', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4339', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '0', type: 'turnOffVote', createdAt: '2015-12-25 01:45:44', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4340', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '4', type: 'newEdit', createdAt: '2015-12-25 01:45:44', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4338', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '0', type: 'deleteParent', createdAt: '2015-12-25 01:45:34', auxPageId: 'ai_alignment', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4333', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '3', type: 'newUsedAsTag', createdAt: '2015-12-25 00:39:16', auxPageId: 'correlated_coverage', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3821', pageId: 'context_disaster', userId: 'AlexeiAndreev', edit: '0', type: 'turnOnVote', createdAt: '2015-12-16 01:14:04', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3818', pageId: 'context_disaster', userId: 'AlexeiAndreev', edit: '0', type: 'newAlias', createdAt: '2015-12-16 01:07:51', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3819', pageId: 'context_disaster', userId: 'AlexeiAndreev', edit: '0', type: 'turnOffVote', createdAt: '2015-12-16 01:07:51', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { 
likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3820', pageId: 'context_disaster', userId: 'AlexeiAndreev', edit: '3', type: 'newEdit', createdAt: '2015-12-16 01:07:51', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '357', pageId: 'context_disaster', userId: 'AlexeiAndreev', edit: '1', type: 'newParent', createdAt: '2015-10-28 03:46:51', auxPageId: 'ai_alignment', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '420', pageId: 'context_disaster', userId: 'AlexeiAndreev', edit: '1', type: 'newParent', createdAt: '2015-10-28 03:46:51', auxPageId: 'advanced_safety', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1475', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '2', type: 'newEdit', createdAt: '2015-06-08 04:35:44', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1474', pageId: 'context_disaster', userId: 'EliezerYudkowsky', edit: '1', type: 'newEdit', createdAt: '2015-06-08 04:29:42', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'false', hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }