{ localUrl: '../page/value_achievement_dilemma.html', arbitalUrl: 'https://arbital.com/p/value_achievement_dilemma', rawJsonUrl: '../raw/2z.json', likeableId: '1887', likeableType: 'page', myLikeValue: '0', likeCount: '1', dislikeCount: '0', likeScore: '1', individualLikes: [ 'AndrewMcKnight' ], pageId: 'value_achievement_dilemma', edit: '11', editSummary: '', prevEdit: '10', currentEdit: '11', wasPublished: 'true', type: 'wiki', title: 'Value achievement dilemma', clickbait: 'How can Earth-originating intelligent life achieve most of its potential value, whether by AI or otherwise?', textLength: '7590', alias: 'value_achievement_dilemma', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2017-02-02 00:41:23', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2015-03-27 01:43:34', seeDomainId: '0', editDomainId: 'EliezerYudkowsky', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '9', isEditorComment: 'false', isApprovedComment: 'true', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '190', text: '[summary: The value achievement dilemma is the general, broad challenge faced by Earth-originating intelligent life in steering our [7cy cosmic endowment] into a state of high [55 value] - successfully turning the stars into a happy civilization.\n\nWe face potential existential catastrophes (resulting in our extermination or the corruption of the cosmic endowment) such as sufficiently lethal engineered pandemics, non-value-aligned AIs, or insane smart uploads. A strategy is [2s relevant] to value achievement only if success is a [6y game-changer] for the overall dilemma humanity faces. 
E.g., [5s value-aligned] [2c powerful AIs] or [ intelligence-enhanced humans] both seem to qualify as strategically relevant; but an AI restricted to *only* [70 prove theorems in Zermelo-Fraenkel set theory] has no obvious game-changing use.]\n\nThe value achievement dilemma is a way of framing the [2v AI alignment problem] in a larger context. This framing emphasizes that there might be possible solutions besides AI; and also that such solutions must meet a high bar of potency or efficacy in order to *resolve* our basic dilemmas, the way that a sufficiently value-aligned and cognitively powerful AI could resolve them. Or at least [6y change the nature of the gameboard], the way that a Task AGI could take actions to prevent destruction by later AGI projects, even if it is only [1vt narrowly] value-aligned and cannot solve the whole problem.\n\nThe point of considering posthuman scenarios in the long run, and not just an immediate [6w Task AGI] as a band-aid, can be seen in the suggestion by [2] [todo: find a citation - CFAI? PtS?] and [18k] [todo: cite Superintelligence] that we can see Earth-originating intelligent life as having two possible [ stable states], [41l superintelligence] and extinction. If intelligent life goes extinct, especially if it drastically damages or destroys the ecosphere in the process, new intelligent life seems unlikely to arise on Earth. If Earth-originating intelligent life becomes superintelligent, it will presumably expand through the universe and stay superintelligent for as long as physically possible. Eventually, our civilization is bound to wander into one of these attractors or the other.
[10d fun-loving] or [3c5 the reflective equilibrium of its creators' civilization] and hence achieving lots of [55 value], or a misaligned AI will go on [10h maximizing paperclips] forever.\n\nAmong the dilemmas we face in getting into the high-value-achieving attractor, rather than the extinction attractor or the equivalence class of paperclip maximizers, are:\n\n- The possibility of careless (or insufficiently cautious, or, much less likely, malicious) actors creating a non-value-aligned AI that undergoes an intelligence explosion.\n- The possibility of engineered superviruses destroying enough of civilization that the remaining humans go extinct without ever reaching sufficiently advanced technology.\n- Conflict between multipolar powers with nanotechnology resulting in a super-nuclear-exchange disaster that extinguishes all life.\n\nOther positive events seem like they could potentially prompt entry into the high-value-achieving superintelligence attractor:\n\n- Direct creation of a [41k fully] normatively aligned [1g3] agent.\n- Creation of a [6w Task AGI] powerful enough to avert the creation of other [ UnFriendly AI].\n- Intelligence-augmented humans (or 64-node clustered humans linked by brain-computer interfaces, etcetera) who are able and motivated to solve the AI alignment problem.\n\nOn the other hand, consider someone who proposes that "Rather than building AI, [ we should] build [ Oracle AIs] that just answer questions," and who then, after further exposure to the concept of the [ AI-Box Experiment] and [2j cognitive uncontainability], further narrows their specification to say that [70 an Oracle running in three layers of sandboxed simulation must output only formal proofs of given theorems in Zermelo-Fraenkel set theory], and a heavily sandboxed and provably correct verifier will look over this output proof and signal 1 if it proves the target theorem and 0 otherwise, at some fixed time to avoid timing attacks.\n\nThis 
doesn't resolve the larger value achievement dilemma, because there's no obvious thing we can do with a ZF provability oracle that solves our larger problem. There's no plan such that it would save the world *if only* we could take some suspected theorems of ZF and know that some of them had formal proofs.\n\nThe thrust of considering a larger 'value achievement dilemma' is that while imaginable alternatives to aligned AIs exist, they must pass a double test to be our best alternative:\n\n- They must be genuinely easier or safer than the easiest (pivotal) form of the AI alignment problem.\n- They must be game-changers for the overall situation in which we find ourselves, opening up a clear path to victory from the newly achieved scenario.\n\nAny strategy that does not putatively open a clear path to victory if it succeeds doesn't seem like a plausible policy alternative to trying to solve the AI alignment problem, or to doing something else whose success leaves us a clear path to victory. Trying to solve the AI alignment problem is intended to leave us a clear path to achieving almost all of the achievable value of the future and its astronomical stakes. Anything that doesn't open a clear path to getting there is not an alternative solution for getting there.\n\nFor more on this point, see the page on [6y pivotal events].\n\n# Subproblems of the larger value achievement dilemma\n\nWe can see the place of AI alignment in the larger scheme by considering its parent problem, its sibling problems, and examples of its child problems.\n\n- The **value achievement dilemma**: How does Earth-originating intelligent life achieve an acceptable proportion of its potential [55 value]?\n - The **AI alignment problem**: How do we create AIs such that running them produces (global) outcomes of acceptably high value?\n - The **value alignment problem**: How do we create AIs that *want* or *prefer* to cause events that are of high value? 
If we accept that we should solve the value alignment problem by creating AIs that prefer or want in particular ways, how do we do that?\n - The **[6c]** or **value learning** problem: How can we pinpoint, in the AI's decision-making, outcomes that have high 'value'? (Despite all the [6r foreseeable difficulties] such as [2w edge instantiation] and [6g4 Goodhart's Curse].)\n - Other properties of aligned AIs, such as **[-45]**: How can we create AIs such that, when we make an error in identifying value or specifying the decision system, the AI does not resist our attempts to correct what we regard as an error?\n - [7fx Oppositional] features, such as [6z boxing], that are intended to mitigate harm if the AI's behavior has gone outside expected bounds.\n - The **intelligence amplification** problem: How can we create smarter humans, preferably without driving them insane or otherwise ending up with evil ones?\n - The [ value selection] problem: How can we figure out what to substitute in for the metasyntactic variable 'value'? 
([313 Answer].)', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '2', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '2016-02-23 16:12:35', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'EliezerYudkowsky', 'AlexeiAndreev' ], childIds: [ 'moral_hazard', '4j', 'pivotal', 'cosmic_endowment', 'aligning_adds_time' ], parentIds: [ 'ai_alignment' ], commentIds: [], questionIds: [], tagIds: [ 'work_in_progress_meta_tag' ], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22167', pageId: 'value_achievement_dilemma', userId: 'EliezerYudkowsky', edit: '0', type: 'newChild', createdAt: '2017-02-22 01:14:14', auxPageId: 'aligning_adds_time', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: 
'0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21916', pageId: 'value_achievement_dilemma', userId: 'EliezerYudkowsky', edit: '11', type: 'newEdit', createdAt: '2017-02-02 00:41:23', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21915', pageId: 'value_achievement_dilemma', userId: 'EliezerYudkowsky', edit: '10', type: 'newEdit', createdAt: '2017-02-02 00:40:27', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21914', pageId: 'value_achievement_dilemma', userId: 'EliezerYudkowsky', edit: '9', type: 'newEdit', createdAt: '2017-02-02 00:39:35', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21913', pageId: 'value_achievement_dilemma', userId: 'EliezerYudkowsky', edit: '8', type: 'newEdit', createdAt: '2017-02-02 00:38:47', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21612', pageId: 'value_achievement_dilemma', userId: 'EliezerYudkowsky', edit: '0', type: 'newChild', createdAt: '2017-01-11 20:48:05', auxPageId: 'cosmic_endowment', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8984', pageId: 'value_achievement_dilemma', userId: 'EliezerYudkowsky', edit: '7', type: 'newChild', createdAt: '2016-03-23 22:39:20', auxPageId: 'moral_hazard', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', 
likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3906', pageId: 'value_achievement_dilemma', userId: 'AlexeiAndreev', edit: '7', type: 'newEdit', createdAt: '2015-12-16 16:04:05', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3841', pageId: 'value_achievement_dilemma', userId: 'AlexeiAndreev', edit: '0', type: 'newAlias', createdAt: '2015-12-16 02:56:44', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3842', pageId: 'value_achievement_dilemma', userId: 'AlexeiAndreev', edit: '6', type: 'newEdit', createdAt: '2015-12-16 02:56:44', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3704', pageId: 'value_achievement_dilemma', userId: 'EliezerYudkowsky', edit: '5', type: 'newEdit', createdAt: '2015-12-14 20:47:17', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1114', pageId: 'value_achievement_dilemma', userId: 'AlexeiAndreev', edit: '1', type: 'newUsedAsTag', createdAt: '2015-10-28 03:47:09', auxPageId: 'work_in_progress_meta_tag', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '566', pageId: 'value_achievement_dilemma', userId: 'AlexeiAndreev', edit: '1', type: 'newChild', createdAt: '2015-10-28 03:46:58', auxPageId: '4j', oldSettingsValue: '', 
newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '567', pageId: 'value_achievement_dilemma', userId: 'AlexeiAndreev', edit: '1', type: 'newChild', createdAt: '2015-10-28 03:46:58', auxPageId: 'pivotal', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '351', pageId: 'value_achievement_dilemma', userId: 'AlexeiAndreev', edit: '1', type: 'newParent', createdAt: '2015-10-28 03:46:51', auxPageId: 'ai_alignment', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1342', pageId: 'value_achievement_dilemma', userId: 'EliezerYudkowsky', edit: '4', type: 'newEdit', createdAt: '2015-06-12 07:16:09', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1341', pageId: 'value_achievement_dilemma', userId: 'EliezerYudkowsky', edit: '3', type: 'newEdit', createdAt: '2015-05-14 16:21:14', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1340', pageId: 'value_achievement_dilemma', userId: 'EliezerYudkowsky', edit: '2', type: 'newEdit', createdAt: '2015-04-24 23:01:10', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1339', pageId: 'value_achievement_dilemma', userId: 'EliezerYudkowsky', edit: '1', type: 'newEdit', createdAt: '2015-03-27 01:43:34', 
auxPageId: '', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'true', hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }