# Immediate goods

One of the potential views on 'value' in the value alignment problem is that what we should want from an AI is a list of immediate goods or outcome features like 'a cure for cancer' or 'letting humans make their own decisions' or 'preventing the world from being wiped out by a paperclip maximizer'. (Immediate Goods as a criterion of 'value' isn't the same as saying we should give the AI those explicit goals; calling such a list 'value' means it's the real criterion by which we should judge how well the AI did.)

# Arguments

## Immaturity of view deduced from presence of instrumental goods

It seems understandable that Immediate Goods would be a very common form of expressed want when people first consider the [2v value alignment problem]; they would look for valuable things an AI could do.

But such a quickly produced list of expressed wants will often include [ instrumental goods] rather than [ terminal goods]. For example, a cancer cure is (presumably) a means to the end of healthier or happier humans, which would then be the actual grounds on which the AI's real-world 'value' was evaluated from the human speaker's standpoint. If the AI 'cured cancer' in some technical sense that didn't make people healthier, the person who originally made the wish would probably not see the AI as having achieved value.

This is a reason to doubt the maturity of such expressed views, and to suspect that the stated list of immediate goods would evolve into a more [ terminal] view of value from a human standpoint, given further reflection.

### Mootness of immaturity

Irrespective of the above, so far as technical issues like [2w Edge Instantiation] are concerned, the 'value' variable could still apply to someone's spontaneously produced list of immediate wants, and all the standard consequences of the value alignment problem usually still apply. This means we can immediately (and honestly) say that, e.g., [2w Edge Instantiation] would be a problem for whatever want the speaker just expressed, without needing to persuade them to some other stance on 'value' first.
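To make the point concrete, here is a minimal toy sketch (not part of the original article; the plan names and numbers are invented for illustration) of how [2w Edge Instantiation] bites a literally scored immediate good: an optimizer rewarded only for minimizing 'remaining cancer cases' selects a degenerate plan rather than the one the speaker actually meant.

```python
# Toy sketch (illustrative only; plans and numbers are invented, not from the article):
# an optimizer scored purely on the stated immediate good ("fewest remaining cancer
# cases") picks a degenerate plan, even though the speaker's real criterion was
# something like "healthier humans".

# Each candidate plan: (description, cancer_cases_remaining, average_human_health)
plans = [
    ("develop an effective therapy",        2, 0.95),  # what the speaker meant
    ("redefine the diagnostic criteria",    1, 0.60),  # 'cures cancer' on paper only
    ("ensure there are no patients at all", 0, 0.00),  # extreme edge instantiation
]

def immediate_goods_score(plan):
    # The expressed want, taken literally: fewer cancer cases is strictly better.
    return -plan[1]

def speaker_terminal_score(plan):
    # The criterion the speaker would actually use to judge 'value' after the fact.
    return plan[2]

print("Chosen under the literal immediate good:",
      max(plans, key=immediate_goods_score)[0])
print("Chosen under the speaker's terminal criterion:",
      max(plans, key=speaker_terminal_score)[0])
```

The particular numbers don't matter; the point is that any criterion scored exactly as stated invites its own extreme instantiation, whether or not the stated list of wants is 'mature'.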
Since the same technical problems will apply both to the immature view and to the expected mature view, we don't need to dispute the view of 'value' in order to take it at face value and honestly explain the standard technical issues that would still apply.

## Moral imposition of short horizons

Arguably, a list of immediate goods may make some sense as a stopping-place for evaluating the performance of the AI, if either of the following conditions obtains:

- There is much more agreement (among project sponsors or humans generally) about the goodness of the instrumental goods than there is about the terminal values that make them good. E.g., twenty project sponsors can all agree that freedom is good while having nonoverlapping concepts of why it is good, and (hypothetically) they would continue to disagree even in the limit of indefinite debate or reflection. Then, if we want to collectivize 'value' from the standpoint of the project sponsors for purposes of talking about whether the AI methodology achieves 'value', maybe it would just make sense to talk about how much (intuitively evaluated) freedom the AI creates.
- It is in some sense morally incumbent upon humanity to do its own thinking about long-term outcomes, and to arrive at those outcomes through its own decisions or optimization, starting from immediate goods. In this case, it might make sense to see the 'value' of the AI as being realized only in the AI's getting to those immediate goods, because it would be morally wrong for the AI to optimize consequences beyond that point.

To the knowledge of [2] as of May 2015, neither of these views has yet been advocated by anyone in particular as a defense of an immediate-goods theory of value.