{ localUrl: '../page/complexity_of_value.html', arbitalUrl: 'https://arbital.com/p/complexity_of_value', rawJsonUrl: '../raw/5l.json', likeableId: '2342', likeableType: 'page', myLikeValue: '0', likeCount: '1', dislikeCount: '0', likeScore: '1', individualLikes: [ 'LancelotVerinia' ], pageId: 'complexity_of_value', edit: '18', editSummary: '', prevEdit: '17', currentEdit: '18', wasPublished: 'true', type: 'wiki', title: 'Complexity of value', clickbait: 'There's no simple way to describe the goals we want Artificial Intelligences to want.', textLength: '16578', alias: 'complexity_of_value', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'true', voteType: 'probability', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2016-04-14 03:17:56', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2015-05-14 08:55:28', seeDomainId: '0', editDomainId: 'EliezerYudkowsky', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '14', isEditorComment: 'false', isApprovedComment: 'true', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '1180', text: '[summary: The proposition that there's no [5v algorithmically simple] [5t object-level goal] we can give to an [2c advanced AI] that yields a future of high [55 value]. Or: Any formally simple goal given to an AI, that talks directly about what sort of world to create, will produce disaster. Or: If you're trying to talk directly about what events or states of the world you want, then any sort of programmatically simple utility function, of the sort a programmer could reasonably hardcode, will lead to a bad end. 
(The non-simple alternative would be, e.g., an induction rule that can learn complicated classification rules from labeled instances, or a preference framework that explicitly models humans in order to learn complicated facts about what humans want.)]\n\n## Introduction\n\n"Complexity of value" is the idea that if you tried to write an AI that would do right things (or maximally right things, or adequately right things) *without further looking at humans* (so it can't take in a flood of additional data from human advice; the AI has to be complete as it stands once you're finished creating it), the AI's preferences or utility function would need to contain a large amount of data ([Kcomplexity algorithmic complexity]). Conversely, if you try to write an AI that directly wants *simple* things or try to specify the AI's preferences using a *small* amount of data or code, it won't do acceptably right things in our universe.\n\nComplexity of value says, "There's no simple and non-meta solution to AI preferences" or "The things we want AIs to want are complicated in the [5v Kolmogorov-complexity] sense" or "Any simple goal you try to describe that is All We Need To Program Into AIs is almost certainly wrong."\n\nComplexity of value is a further idea above and beyond the [1y orthogonality thesis], which states that AIs don't automatically do the right thing and that we can have, e.g., [10h paperclip maximizers]. Even if we accept that paperclip maximizers are possible, and simple and nonforced, this wouldn't yet imply that it's very *difficult* to make AIs that do the right thing. If the right thing is very simple to encode - if there are [55 value] optimizers that are scarcely more complex than [5g diamond maximizers] - then it might not be especially hard to build a nice AI even if not all AIs are nice. 
Complexity of Value is the further proposition that says, no, this is foreseeably quite hard - not because AIs have 'natural' anti-nice desires, but because niceness requires a lot of work to specify.\n\n### Frankena's list\n\nAs an intuition pump for the complexity of value thesis, consider William Frankena's list of things which many cultures and people seem to value (for their own sake rather than their external consequences):\n\n> "Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc."\n\nWhen we try to list out properties of a human or galactic future that seem like they'd be very nice, we at least *seem* to value a fair number of things that aren't reducible to each other. (What initially look like plausible-sounding "But you do A to get B" arguments usually fall apart when we look for [ third alternatives] to doing A to get B. Marginally adding some freedom can marginally increase the happiness of a human, so a happiness optimizer that can only exert a small push toward freedom might choose to do so. That doesn't mean that a *pure, powerful* happiness maximizer would instrumentally optimize freedom. If an agent cares about happiness but not freedom, the outcome that *maximizes* their preferences is a large number of brains set to maximum happiness. 
When we don't just seize on one possible case where a B-optimizer might use A as a strategy, but instead look for further C-strategies that might maximize B even better than A, then the attempt to reduce A to an instrumental B-maximization strategy often falls apart.) It's in this sense that the items on Frankena's list don't seem to reduce to each other as a matter of pure preference, even though humans in everyday life often seem to pursue several of the goals at the same time.\n\nComplexity of value says that, in this case, the way things seem is the way they are: Frankena's list is *not* encodable in one page of Python code. This proposition can't be established definitively without settling on a sufficiently well-specified [ metaethics], such as [ reflective equilibrium], to make it clear that there is indeed no a priori reason for normativity to be algorithmically simple. But the basic intuition for Complexity of Value is provided just by the fact that Frankena's list was more than one item long, and that many individual terms don't seem likely to have algorithmically simple definitions that distinguish their valuable from non-valuable forms.\n\n### Lack of a central core\n\nWe can understand the idea of complexity of value by contrasting it to the situation with respect to [ epistemic reasoning] aka truth-finding or answering simple factual questions about the world. In an ideal sense, we can try to compress and reduce the idea of mapping the world well down to algorithmically simple notions like "Occam's Razor" and "Bayesian updating". In a practical sense, natural selection, in the course of optimizing humans to solve factual questions like "Where can I find a tree with fruit?" or "Are brightly colored snakes usually poisonous?" or "Who's plotting against me?", ended up with enough of the central core of epistemology that humans were later able to answer questions like "How are the planets moving?" 
or "What happens if I fire this rocket?", even though humans hadn't been explicitly selected on to answer those exact questions.\n\nBecause epistemology does have a central core of simplicity and Bayesian updating, selecting for an organism that got some pretty complicated epistemic questions right enough to reproduce also caused that organism to start understanding things like General Relativity. When it comes to truth-finding, we'd expect by default for the same thing to be true about an Artificial Intelligence; if you build it to get epistemically correct answers on lots of widely different problems, it will contain a core of truth-finding and start getting epistemically correct answers on lots of other problems - even problems completely different from your training set, the way that humans understanding General Relativity wasn't like any hunter-gatherer problem.\n\nThe complexity of value thesis is that there *isn't* a simple core to normativity, which means that if you hone your AI to do normatively good things on A, B, and C and then confront the AI with a very different problem D, the AI may do the wrong thing on D. There's a large number of independent ideal "gears" inside the complex machinery of value, compared to epistemology, which in principle might contain only "prefer simpler hypotheses" and "prefer hypotheses that match the evidence".\n\nThe [1y Orthogonality Thesis] says that, contrary to the intuition that [10h maximizing paperclips] feels "stupid", you can have arbitrarily cognitively powerful entities that maximize paperclips, or arbitrarily complicated other goals. 
So while intuitively you might think it would be simple to avoid paperclip maximizers, requiring no work at all for a sufficiently advanced AI, the Orthogonality Thesis says that things will be more difficult than that; you have to put in some work to have the AI do the right thing.\n\nThe Complexity of Value thesis is the next step after Orthogonality; it says that, contrary to the feeling that "rightness ought to be simple, darn it", normativity turns out not to have an algorithmically simple core, not the way that correctly answering questions of fact has a central tendency that generalizes well. And so, even though an AI that you train to do well on problems like steering cars or figuring out General Relativity from scratch may hit on a core capability that leads the AI to do well on arbitrarily more complicated problems of galactic scale, we can't rely on getting an equally generous bonanza of generalization from an AI that seems to do well on a small but varied set of moral and ethical problems - it may still fail the next problem that isn't like anything in the training set. 
To the extent that we have very strong reasons to have prior confidence in Complexity of Value, in fact, we ought to be suspicious and worried about an AI that seems to be pulling correct moral answers from nowhere - it is much more likely to have hit upon the convergent instrumental strategy "say what makes the programmers trust you", rather than having hit upon a simple core of all normativity.\n\n## Key sub-propositions\n\nComplexity of Value requires [1y Orthogonality], and would be implied by three further sub-propositions:\n\nThe **intrinsic complexity of value** proposition is that the properties we want AIs to achieve - whatever stands in for the metasyntactic variable '[55 value]' - have a large amount of intrinsic information in the sense of comprising a large number of independent facts that aren't being generated by a single computationally simple rule.\n\nA very bad example that may nonetheless provide an important intuition is to imagine trying to pinpoint to an AI what constitutes 'worthwhile happiness'. The AI suggests a universe tiled with tiny Q-learning algorithms receiving high rewards. Some explanation and several labeled datasets later, the AI suggests a human brain with a wire stuck into its pleasure center. After further explanation, the AI suggests a human in a holodeck. You begin talking about the importance of believing truly and that your values call for apparent human relationships to be real relationships rather than being hallucinated. The AI asks you what constitutes a good human relationship to be happy about. The series of questions occurs because (arguendo) the AI keeps running into questions whose answers are not AI-obvious from the previous answers already given, because they involve new things you want such that your desire for them wasn't obvious from answers you'd already given. 
The upshot is that the specification of 'worthwhile happiness' involves a long series of facts that aren't reducible just to the previous facts, and some of your preferences may involve many fine details of surprising importance. In other words, the specification of 'worthwhile happiness' would be at least as hard to hand-code into the AI as it would be to hand-code a formal rule that could recognize which pictures contained cats. (I.e., impossible.)\n\nThe second proposition is **incompressibility of value**, which says that attempts to reduce these complex values into some incredibly simple and elegant principle fail (much like early attempts by e.g. Bentham to reduce all human value to pleasure); and that no simple instruction given to an AI will happen to target outcomes of high value either. The core reason to expect a priori that all such attempts will fail is that most 1000-byte strings aren't compressible down to some incredibly simple pattern no matter how many clever tricks you try to throw at them; fewer than 1 in 1024 such strings are compressible to 990 bytes, never mind 10 bytes. Due to the tremendous number of different proposals for why some simple instruction to an AI should end up achieving high-value outcomes or why all human value can be reduced to some simple principle, there is no central demonstration that all these proposals *must* fail, but there is a sense in which *a priori* we should strongly expect all such clever attempts to fail. Many disagreeable attempts at reducing value A to value B, such as [ Juergen Schmidhuber's attempt to reduce all human value to increasing the compression of sensory information], stand as a further cautionary lesson.\n\nThe third proposition is **[fragility of value](fragility-1)**, which says that if you have a 1000-byte *exact* specification of worthwhile happiness, and you begin to mutate it, the [55 value] created by the corresponding AI with the mutated definition falls off rapidly. E.g. 
an AI with only 950 bytes of the full definition may end up creating 0% of the value rather than 95% of the value. (E.g., the AI understood all aspects of what makes for a life well-lived... *except* the part about requiring a conscious observer to experience it.)\n\nTogether, these propositions would imply that to achieve an *adequate* amount of value (e.g. 90% of potential value, or even 20% of potential value) there may be no simple hand-coded object-level goal for the AI that results in that value's realization. E.g., you can't just tell it to 'maximize happiness', with some hand-coded rule for identifying happiness.\n\n## Centrality\n\nComplexity of Value is a central proposition in [2v value alignment theory]. Many [6r foreseen difficulties] revolve around it:\n\n- Complex values can't be hand-coded into an AI, and require [ value learning] or [ Do What I Mean] preference frameworks.\n- Complex / fragile values may be hard to learn even by induction because the labeled data may not include distinctions that give all of the 1000 bytes a chance to cast an unambiguous causal shadow into the data, and it's very bad if 50 bytes are left ambiguous.\n- Complex / fragile values require error-recovery mechanisms because of the worry about getting some single subtle part wrong and this being catastrophic. 
(And since we're working inside of highly intelligent agents, the recovery mechanism has to be a [45 corrigible preference] so that the agent accepts our attempts at modifying it.)\n\nMore generally:\n\n- Complex values tend to be implicated in [48 patch-resistant problems] that wouldn't be resistant if there were some obvious 5-line specification of *exactly* what to do, or not do.\n- Complex values tend to be implicated in the [6q context change problems] that wouldn't exist if we had a 5-line specification that solved those problems once and for all, and that we'd likely run across during the development phase.\n\n### Importance\n\nMany policy questions strongly depend on Complexity of Value, mostly having to do with the overall difficulty of developing value-aligned AI, e.g.:\n\n- Should we try to develop [ Sovereigns], or restrict ourselves to [6w Genies]?\n- How likely is a moderately safety-aware project to succeed?\n- Should we be more worried about malicious actors creating AI, or about well-intentioned errors?\n- How difficult is the total problem, and how much should we be panicking?\n- How attractive would any genuinely credible [2z game-changing alternative] to AI be?\n\nIt has been advocated that there are [ psychological biases] and [ popular mistakes] leading to beliefs that directly or by implication deny Complexity of Value. To the extent one credits that Complexity of Value is probably true, one should arguably be concerned about the number of early assessments of the value alignment problem that seem to rely on Complexity of Value being false (like just needing to hardcode a particular goal into the AI, or in general treating the value alignment problem as not panic-worthily difficult). 
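The counting argument behind the incompressibility-of-value proposition above can be made concrete. The following is a minimal illustrative sketch (the function name is mine, not from this page): of all 1000-byte (8000-bit) strings, only a vanishing fraction can have any shorter description, simply because there aren't enough short programs to go around.

```python
from fractions import Fraction

def max_compressible_fraction(n_bits: int, m_bits: int) -> Fraction:
    """Upper bound on the fraction of n-bit strings that have *any*
    description (program) of length at most m_bits: there are only
    2**(m_bits + 1) - 1 binary programs of length 0..m_bits, so at
    most that many n-bit strings can be compressed that far."""
    return Fraction(2 ** (m_bits + 1) - 1, 2 ** n_bits)

# 1000-byte strings compressed to 990 bytes: the compressible fraction
# is below 2**-79 -- far fewer than the "1 in 1024" quoted above.
bound = max_compressible_fraction(1000 * 8, 990 * 8)
assert bound < Fraction(1, 1024)
```

The same pigeonhole bound explains the "never mind 10 bytes" remark: shrinking the allowed description length makes the compressible fraction collapse exponentially.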
\n\n## Truth condition\n\nThe Complexity of Value proposition is true if, relative to viable and acceptable real-world [ methodologies] for AI development, there isn't any reliably knowable way to specify the AI's [ object-level preferences] as a structure of low [ algorithmic complexity], such that the result of running that AI is [2z achieving] [ enough] of the possible [55 value], for reasonable definitions of [55 value].\n\nCaveats:\n\n### Viable and acceptable computation\n\nSuppose there turns out to exist, in principle, a relatively simple Turing machine (e.g. 100 states) that picks out 'value' by re-running entire evolutionary histories, creating and discarding a hundred billion sapient races in order to pick out one that ended up relatively similar to humanity. This would use an unrealistically large amount of computing power and *also* commit an unacceptable amount of [6v mindcrime].', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '2', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '2016-02-25 07:18:48', hasDraft: 'false', votes: [ { value: '92', userId: 'AlexeiAndreev', createdAt: '2015-12-16 02:34:21' }, { value: '95', userId: 'BuckShlegeris', createdAt: '2017-10-29 02:10:52' }, { value: '96', userId: 'BrianMuhia', createdAt: '2018-03-21 08:01:51' }, { value: '93', userId: 'EricBruylant', createdAt: '2016-08-27 20:11:03' }, { value: '97', userId: 'EliezerYudkowsky', createdAt: '2015-05-15 14:15:20' }, { value: '84', userId: 'TravisRivera', createdAt: '2017-01-23 02:13:50' }, { value: '95', userId: 'PaulChristiano', createdAt: '2016-01-30 18:41:28' }, { value: '97', userId: 'NateSoares', createdAt: '2017-01-26 21:35:56' }, { value: '95', userId: 'RobBensinger2', createdAt: '2017-02-08 12:26:38' }, { value: '50', userId: 'EliTyre', createdAt: '2017-05-23 06:15:31' }, { value: '88', userId: 'MarkChimes', createdAt: '2016-10-24 
00:28:31' }, { value: '92', userId: 'Sauliusimikas', createdAt: '2017-01-21 19:35:21' }, { value: '91', userId: 'KonradSeifert2', createdAt: '2017-07-31 15:49:41' }, { value: '99', userId: 'PeterTapley', createdAt: '2017-07-14 03:27:17' } ], voteSummary: 'null', muVoteSummary: '0', voteScaling: '11', currentUserVote: '-2', voteCount: '14', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'EliezerYudkowsky', 'AlexeiAndreev' ], childIds: [ 'underestimate_value_complexity_perceputal_property', 'meta_unsolved' ], parentIds: [ 'ai_alignment' ], commentIds: [ '1q4', '7h', '9kz' ], questionIds: [], tagIds: [ 'work_in_progress_meta_tag' ], relatedIds: [ 'value_laden' ], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22149', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '0', type: 'newChild', createdAt: '2017-02-21 23:37:30', auxPageId: 'meta_unsolved', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: 
'0', individualLikes: [], id: '14616', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '0', type: 'newChild', createdAt: '2016-06-27 01:23:09', auxPageId: 'underestimate_value_complexity_perceputal_property', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '9302', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '18', type: 'newEdit', createdAt: '2016-04-14 03:17:56', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3833', pageId: 'complexity_of_value', userId: 'AlexeiAndreev', edit: '0', type: 'newAlias', createdAt: '2015-12-16 02:33:22', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3834', pageId: 'complexity_of_value', userId: 'AlexeiAndreev', edit: '17', type: 'newEdit', createdAt: '2015-12-16 02:33:22', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3768', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '16', type: 'newEdit', createdAt: '2015-12-15 07:38:07', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3767', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '15', type: 'newEdit', createdAt: '2015-12-15 07:26:55', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: 
'0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3765', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '14', type: 'newEdit', createdAt: '2015-12-15 07:09:10', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3761', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '13', type: 'newEdit', createdAt: '2015-12-15 06:19:52', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3759', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '0', type: 'turnOffVote', createdAt: '2015-12-15 06:13:53', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3760', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '12', type: 'newEdit', createdAt: '2015-12-15 06:13:53', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1121', pageId: 'complexity_of_value', userId: 'AlexeiAndreev', edit: '1', type: 'newUsedAsTag', createdAt: '2015-10-28 03:47:09', auxPageId: 'work_in_progress_meta_tag', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '360', pageId: 'complexity_of_value', userId: 'AlexeiAndreev', edit: '1', type: 'newParent', createdAt: '2015-10-28 03:46:51', auxPageId: 'ai_alignment', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', 
myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1549', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '10', type: 'newEdit', createdAt: '2015-05-26 22:16:28', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1548', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '9', type: 'newEdit', createdAt: '2015-05-26 21:53:09', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1547', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '8', type: 'newEdit', createdAt: '2015-05-16 13:23:10', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1546', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '7', type: 'newEdit', createdAt: '2015-05-16 11:39:43', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1545', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '6', type: 'newEdit', createdAt: '2015-05-16 09:07:24', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1544', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '5', type: 'newEdit', createdAt: '2015-05-15 14:18:03', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', 
likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1543', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '4', type: 'newEdit', createdAt: '2015-05-15 14:17:42', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1542', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '3', type: 'newEdit', createdAt: '2015-05-15 14:14:26', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1541', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '2', type: 'newEdit', createdAt: '2015-05-14 09:44:00', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1540', pageId: 'complexity_of_value', userId: 'EliezerYudkowsky', edit: '1', type: 'newEdit', createdAt: '2015-05-14 08:55:28', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'true', hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }