{ localUrl: '../page/value_alignment_value.html', arbitalUrl: 'https://arbital.com/p/value_alignment_value', rawJsonUrl: '../raw/55.json', likeableId: '2329', likeableType: 'page', myLikeValue: '0', likeCount: '1', dislikeCount: '0', likeScore: '1', individualLikes: [ 'NateSoares' ], pageId: 'value_alignment_value', edit: '32', editSummary: '', prevEdit: '31', currentEdit: '32', wasPublished: 'true', type: 'wiki', title: 'Value', clickbait: 'The word 'value' in the phrase 'value alignment' is a metasyntactic variable that indicates the speaker's future goals for intelligent life.', textLength: '12312', alias: 'value_alignment_value', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2016-06-01 19:56:26', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2015-04-24 20:23:31', seeDomainId: '0', editDomainId: 'EliezerYudkowsky', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '17', isEditorComment: 'false', isApprovedComment: 'true', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '433', text: '[summary: Different people advocate different views on what we should want for the outcome of a '[value aligned]' AI (desiderata like human flourishing, or a [ fun-theoretic eudaimonia], or [3c5 coherent extrapolated volition], or an AI that mostly leaves us alone but protects us from other AIs). These differences might not be [ irreconcilable]; people are sometimes persuaded to change their views of what we should want. Either way, there's (arguably) a tremendous overlap in the technical issues for aligning an AI with any of these goals. So in the technical discussion, 'value' is really a metasyntactic variable that stands in for the speaker's current view, or for what an AI project might later adopt as a reasonable target after further discussion.]\n\n### Introduction\n\nIn the context of [2v value alignment] as a subject, the word 'value' is a speaker-dependent variable that indicates our ultimate goal - the property or meta-property that the speaker wants or 'should want' to see in the final outcome of Earth-originating intelligent life. E.g., [ human flourishing], [ fun], [3c5 coherent extrapolated volition], [ normativity].\n\nDifferent viewpoints are still being debated on this topic; people [ sometimes change their minds about their views]. We don't yet have full knowledge of which views are 'reasonable' in the sense that people with good cognitive skills might retain them [313 even in the limit of ongoing discussion]. Some subtypes of potentially internally coherent views may not be sufficiently [ interpersonalizable] for even very small AI projects to cooperate on them; if, e.g., Alice wants to own the whole world and would go on wanting that even in the limit of continued contemplation, then this is not a desideratum on which Alice, Bob, and Carol can all cooperate. Thus, using 'value' as a potentially speaker-dependent variable isn't meant to imply that everyone has their own 'value' and that no further debate or cooperation is possible; people can and do talk each other out of positions which are then regarded as having been mistaken, and completely incommunicable stances seem unlikely to be reified even into a very small AI project.
But since this debate is ongoing, there is not yet any one definition of 'value' that can be regarded as settled.\n\nNonetheless, very similar technical problems of value alignment seem to arise on many of the views currently being advocated. We would need to figure out how to [6c identify] the objects of value to the AI, robustly ensure that the AI's preferences are [1fx stable] as the AI self-modifies, or create [45 corrigible] ways of recovering from errors in the way we tried to identify and specify the objects of value.\n\nTo centralize the very similar discussions of these technical problems while the outer debate about reasonable end goals is ongoing, the word 'value' acts as a metasyntactic placeholder for different views about the target of value alignment.\n\nSimilarly, in the larger [2z value achievement dilemma], the question of what the end goals should be, and the policy difficulties of getting 'good' goals to be adopted in name by the builders or creators of AI, are factored out as the [value_selection value selection problem]. The output of this process is taken to be an input into the value loading problem, and 'value' is a name referring to this output.\n\n'Value' is *not* assumed to be what the AI is given as its utility function or [5f preference framework]. On many views implying that [5l value is complex] or otherwise difficult to convey to an AI, the AI may be, e.g., a [6w Genie], where some of the stress is taken off the proposition that the AI exactly understands value and put onto the humans' ability to use the Genie well.\n\nConsider a Genie with an explicit preference framework targeted on a [ Do What I Know I Mean system] for making [ checked wishes]. The word 'value' in any discussion thereof should still only be used to refer to whatever the AI creators are targeting for real-world outcomes. We would say the 'value alignment problem' had been successfully solved to the extent that running the Genie produced high-value outcomes in the sense of the humans' viewpoint on 'value', not to the extent that the outcome matched the Genie's preference framework for how to follow orders.\n\n### Specific views on value\n\nObviously, a listing like this will only summarize long debates. But that summary at least lets us point to some examples of views that have been advocated, and not indefinitely defer the question of what 'value' could possibly refer to.\n\nAgain, keep in mind that by technical definition, 'value' is what we are using or should use to rate the ultimate real-world consequences of running the AI, *not* the explicit goals we are giving the AI.\n\nSome of the major views that have been advocated by more than one person are as follows:\n\n- **Reflective equilibrium.** We can talk about 'what I *should* want' as a concept distinct from 'what I want right now' by construing some limit of how our present desires would directionally change given more factual knowledge, time to consider more knowledge, better self-awareness, and better self-control. Modeling this process is called **extrapolation**, a term reserved for this process in the context of discussing preferences. Value would consist in, e.g., whatever properties a supermajority of humans would agree, in the limit of reflective equilibrium, are desirable.
See also [ coherent extrapolated volition].\n- **Standard desires.** An object-level view that identifies value with qualities that we currently find very desirable, enjoyable, fun, and preferable, such as [ Frankena's list of desiderata] (including truth, happiness, aesthetics, love, challenge and achievement, etc.). On the closely related view of **Fun Theory**, such desires may be further extrapolated, without changing their essential character, into forms suitable for transhuman minds. Advocates may agree that these object-level desires will be subject to unknown normative corrections by reflective-equilibrium-type considerations, but still believe that some form of Fun or standardly desirable outcome is a likely result. Therefore (on this view) it is reasonable to speak of value as probably mostly consisting in turning most of the reachable universe into superintelligent life enjoying itself, creating transhuman forms of art, etcetera.\n- **[ImmediateGoods Immediate goods].** E.g., "Cure cancer" or "Don't transform the world into paperclips." Such replies arguably have problems as ultimate criteria of value from a human standpoint (see linked discussion), but for obvious reasons, lists of immediate goods are a common early thought when first considering the subject.\n- **Deflationary moral error theory.** There is no good way to construe a normative concept apart from what particular people want. AI programmers are just doing what they want, and confused talk of 'fairness' or 'rightness' cannot be rescued. The speaker would nonetheless personally prefer not to be turned into paperclips. (This mostly ends up at an 'immediate goods' theory in practice, plus some beliefs relevant to the [value_selection value selection] debate.)\n- **Simple purpose.** Value can easily be identified with X, for some X. X is the main thing we should be concerned about passing on to AIs. Seemingly desirable things besides X are either (a) improper to care about, (b) relatively unimportant, or (c) instrumentally implied by pursuing X, qua X.\n\nThe following versions of desiderata for AI outcomes would tend to imply that the value alignment / value loading problem is an entirely wrong way of looking at the issue, which might make it disingenuous to claim that 'value' in 'value alignment' can cover them as a metasyntactic variable as well:\n\n- **Moral internalist value.** The normative is inherently compelling to all, or almost all, cognitively powerful agents. Whatever is not thus compelling cannot be normative or a proper object of human desire.\n- **AI rights.** The primary thing is to ensure that the AI's natural and intrinsic desires are respected. The ideal is to end up in a diverse civilization that respects the rights of all sentient beings, including AIs. (Generally linked are the views that no special selection of AI design is required to achieve this, or that special selection of AI design to shape particular motivations would itself violate AI rights.)\n\n## Modularity of 'value'\n\n### Alignable values\n\nMany issues in value alignment seem to generalize very well across the Reflective Equilibrium, Fun Theory, Immediate Goods, and Deflationary Error Theory viewpoints. In all cases we would have to consider stability of self-modification, the [2w Edge Instantiation] problem in [6c value identification], and most of the rest of 'standard' value alignment theory.
This seemingly good generalization of the resulting technical problems across such wide-ranging viewpoints, and especially the fact that it (arguably) covers the case of immediate goods, is what justifies treating 'value' as a metasyntactic variable in 'value loading problem'.\n\nA neutral term for referring to all the values in this class might be 'alignable values'.\n\n### Simple purpose\n\nIn the [ simple purpose] case, the key difference from an Immediate Goods scenario is that the desideratum is usually advocated to be simple enough to negate [5l Complexity of Value] and make [6c value identification] easy.\n\nE.g., Juergen Schmidhuber stated at the 20XX Singularity Summit that he thought the only proper and normative goal of any agent was to increase compression of sensory information [todo: find exact quote, exact Summit]. Conditioned on this being the sum of all normativity, 'value' is algorithmically simple. Then the problems of [2w Edge Instantiation], [47 Unforeseen Maximums], and Nearest Unblocked Neighbor are all moot. (Except perhaps insofar as there is an Ontology Identification problem for defining exactly what constitutes 'sensory information' for a [ self-modifying agent].)\n\nEven in the [ simple purpose] case, the [ value loading problem] would still exist (it would still be necessary to make an AI that cared about the simple purpose rather than paperclips), along with associated problems of [71 reflective stability] (it would be necessary to make an AI that went on caring about X through self-modification). Nonetheless, the overall problem difficulty and immediate technical priorities would be different enough that the Simple Purpose case seems importantly distinct from e.g. Fun Theory on a policy level.\n\n### Moral internalism\n\nSome viewpoints on 'value' deliberately reject [1y Orthogonality]. Strong versions of the [ moral internalist position in metaethics] claim as an empirical prediction that every sufficiently powerful cognitive agent will come to pursue the same end, which end is to be identified with normativity, and is the only proper object of human desire. If true, this would imply that the entire value alignment problem is moot for advanced agents.\n\nMany people who advocate 'simple purposes' also claim these purposes are universally compelling. In a policy sense, this seems functionally similar to the Moral Internalist case regardless of the simplicity or complexity of the universally compelling value. Hence an allegedly simple, universally compelling purpose is categorized here as Moral Internalist rather than Simple Purpose.\n\nThe special case of a Simple Purpose claimed to be universally [10g instrumentally convergent] also seems functionally identical to Moral Internalism from a policy standpoint.\n\n### AI Rights\n\nSomeone might believe as a proposition of fact that all (accessible) AI designs would have 'innate' desires, believe as a proposition of fact that no AI would gain enough advantage to wipe out humanity or prevent the creation of other AIs, and assert as a matter of morality that a good outcome consists of everyone being free to pursue their own value and trade. In this case the value alignment problem is implied to be an entirely wrong way of looking at the issue, with all associated technical issues moot.
Thus, it again might be disingenuous to have 'value' as a metasyntactic variable try to cover this case.', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '2016-02-27 12:14:23', hasDraft: 'false', votes: [], voteSummary: [ '0', '0', '0', '0', '0', '0', '0', '0', '0', '0' ], muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'EliezerYudkowsky', 'AlexeiAndreev' ], childIds: [ 'normative_extrapolated_volition', 'cev', 'beneficial', 'frankena_goods', 'detrimental', 'immediate_goods', 'value_cosmopolitan' ], parentIds: [ 'ai_alignment' ], commentIds: [ '1bz', '6js' ], questionIds: [], tagIds: [ 'definition_meta_tag', 'value_alignment_glossary' ], relatedIds: [], markIds: [], explanations: [], learnMore: [ { id: '7277', parentId: 'value_alignment_value', childId: 'value_cosmopolitan', type: 'subject', creatorId: 'EliezerYudkowsky', createdAt: '2017-01-10 21:31:11', level: '2', isStrong: 'false', everPublished: 'true' } ], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21596', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '0', type: 'newTeacher', createdAt: '2017-01-10 22:00:11', auxPageId: 'value_cosmopolitan', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21594', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '0', type: 'newChild', createdAt: '2017-01-10 22:00:10', auxPageId: 'value_cosmopolitan', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '12230', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '32', type: 'newChild', createdAt: '2016-06-09 23:15:00', auxPageId: 'detrimental', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '11873', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '32', type: 'newChild', createdAt: '2016-06-07 01:53:46', auxPageId: 'frankena_goods', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: 
'11580', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '32', type: 'newEdit', createdAt: '2016-06-01 19:56:26', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9533', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '31', type: 'newEdit', createdAt: '2016-05-01 20:02:35', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9532', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '30', type: 'newEdit', createdAt: '2016-05-01 20:01:39', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9527', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '28', type: 'newChild', createdAt: '2016-05-01 19:59:34', auxPageId: 'beneficial', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9454', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '28', type: 'newEdit', createdAt: '2016-04-28 21:41:57', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9419', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '27', type: 'newChild', createdAt: '2016-04-27 02:19:39', auxPageId: 'cev', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3806', pageId: 'value_alignment_value', userId: 'AlexeiAndreev', edit: '27', type: 'newEdit', createdAt: '2015-12-16 00:33:01', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3805', pageId: 'value_alignment_value', userId: 'AlexeiAndreev', edit: '0', type: 'newAlias', createdAt: '2015-12-16 00:33:00', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3764', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '26', type: 'newEdit', createdAt: '2015-12-15 06:29:41', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3763', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '25', type: 'newEdit', createdAt: '2015-12-15 06:27:42', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3705', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '24', type: 'newEdit', createdAt: '2015-12-14 21:40:08', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', 
likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1094', pageId: 'value_alignment_value', userId: 'AlexeiAndreev', edit: '1', type: 'newUsedAsTag', createdAt: '2015-10-28 03:47:09', auxPageId: 'definition_meta_tag', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1109', pageId: 'value_alignment_value', userId: 'AlexeiAndreev', edit: '1', type: 'newUsedAsTag', createdAt: '2015-10-28 03:47:09', auxPageId: 'value_alignment_glossary', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '830', pageId: 'value_alignment_value', userId: 'AlexeiAndreev', edit: '1', type: 'newChild', createdAt: '2015-10-28 03:46:58', auxPageId: 'immediate_goods', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '379', pageId: 'value_alignment_value', userId: 'AlexeiAndreev', edit: '1', type: 'newParent', createdAt: '2015-10-28 03:46:51', auxPageId: 'ai_alignment', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2040', pageId: 'value_alignment_value', userId: 'AlexeiAndreev', edit: '23', type: 'newEdit', createdAt: '2015-10-13 17:32:04', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2039', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '22', type: 'newEdit', createdAt: '2015-07-02 19:00:51', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2038', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '21', type: 'newEdit', createdAt: '2015-05-15 13:22:23', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2037', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '20', type: 'newEdit', createdAt: '2015-05-15 11:15:12', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2036', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '19', type: 'newEdit', createdAt: '2015-05-15 09:52:00', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2035', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '18', type: 'newEdit', createdAt: '2015-05-15 06:34:30', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2034', pageId: 'value_alignment_value', 
userId: 'EliezerYudkowsky', edit: '17', type: 'newEdit', createdAt: '2015-05-14 09:58:16', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2033', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '16', type: 'newEdit', createdAt: '2015-05-14 09:52:05', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2032', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '15', type: 'newEdit', createdAt: '2015-05-14 08:47:32', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2031', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '14', type: 'newEdit', createdAt: '2015-04-27 22:11:09', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2030', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '13', type: 'newEdit', createdAt: '2015-04-24 23:09:18', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2029', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '12', type: 'newEdit', createdAt: '2015-04-24 23:08:19', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2028', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '11', type: 'newEdit', createdAt: '2015-04-24 23:07:48', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2027', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '10', type: 'newEdit', createdAt: '2015-04-24 23:06:07', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2026', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '9', type: 'newEdit', createdAt: '2015-04-24 23:01:41', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2025', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '8', type: 'newEdit', createdAt: '2015-04-24 23:00:14', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2024', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '7', type: 'newEdit', createdAt: '2015-04-24 22:58:23', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: 
'0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2023', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '6', type: 'newEdit', createdAt: '2015-04-24 22:57:22', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2022', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '5', type: 'newEdit', createdAt: '2015-04-24 22:55:25', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2021', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '4', type: 'newEdit', createdAt: '2015-04-24 22:10:03', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '2020', pageId: 'value_alignment_value', userId: 'EliezerYudkowsky', edit: '3', type: 'newEdit', createdAt: '2015-04-24 22:09:20', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'true', hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }