# Safe impact measure

A safe impact measure is one that captures all changes to every variable a human might care about, with no edge cases where a lot of value could be destroyed by a 'low impact' action. A safe impact measure must also not generate so many false alarms of 'high impact' that no strategy can be distinguished as 'low impact'.

### Importance / uses

A safe impact measure is an open problem of value alignment which, if solved, might be used in constructing:

- A [ low-impact AI], a subspecies of [6w Genie] that tries to carry out its goals while otherwise minimizing the number of important things that it affects.
- An [6x Oracle] that tries not to affect the world outside its box at all, apart from its output.
- A [ shutdown utility function] that incentivizes a [45 corrigible] AI to halt safely (perhaps after a button is pressed).

Some uses of a safe impact measure postulate that the impact measure has a 'hole' or some other form of permitted output channel. For example, one Stuart Armstrong proposal involves an Oracle in a sealed box, with a single output channel that is connected to the Oracle with high quantum probability, and connected to a quantum noise generator with low quantum probability. The Oracle is putatively given the goal of generating an output signal with some informational property, while otherwise minimizing its 'impact' relative to the counterfactual in which the same output was generated by the quantum noise generator instead of the Oracle. This is intended to capture the idea that the Oracle's effect on the world should occur only via the effect of the output message, and not take advantage of any side channels such as managing to send a radio signal outside the Oracle box.
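One way to sketch the shape of this proposal (an illustrative formalization only, not Armstrong's own; the divergence $D$ and the world-state distributions are stand-ins introduced here) is to score an output $o$ by how differently the world is expected to unfold depending on whether the Oracle or the noise generator emitted it:

$$\text{Impact}(o) \;=\; D\big(\,P(\text{world} \mid \text{Oracle emits } o)\,,\; P(\text{world} \mid \text{noise generator emits } o)\,\big)$$

The Oracle would then be asked to produce an $o$ with the desired informational property while keeping $\text{Impact}(o)$ near zero; any influence routed through a side channel other than the message itself would show up as a divergence between the two distributions.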
### Difficulty

To be used inside an [2c advanced agent], an impact measure must be [2l safe] in the face of whatever cognitive pressures and optimization pressures might tend to produce [2w edge instantiations] or [42] - it must capture so much variance that there is *no* clever strategy whereby an advanced agent can produce some special type of variance that evades the measure. Ideally, the measure will pass the [ Omni Test], meaning that even if the agent suddenly gained perfect control over every particle in the universe, there would still be no way for it to have what intuitively seems like a 'large influence' on the future without that strategy being assessed as having a 'high impact'.

The reason why a safe impact measure might be possible, and specifiable to an AI without having to solve the entire [ value learning problem] for [5l complex values], is that it may be possible to upper-bound the value-laden and complex quantity 'impact on literally everything cared about' by some much simpler quantity that says roughly 'impact on everything' - all causal processes worth modeling on a macroscale, or something along those lines.

The challenge of a safe impact measure is that we can't just measure, e.g., 'number of particles influenced in any way' or 'expected shift in all particles in the universe'. For the former case, consider that a one-gram mass on Earth exerts a gravitational pull that accelerates the Moon toward it at roughly 4 x 10^-31 m/s^2, and every sneeze has a *very* slight gravitational effect on the atoms in distant galaxies. Since every decision qualitatively 'affects' everything in its future light cone, this measure would generate too many false positives, approve no strategy, and fail to usefully discriminate unusually dangerous actions.
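As a sanity check on the figure above, a few lines of Python reproduce it from Newton's law of gravitation (the gravitational constant and the Earth-Moon distance below are rounded, approximate values):

```python
# Acceleration of the Moon toward a one-gram mass on Earth, a = G*m / r^2.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
m = 1e-3        # one gram, in kilograms
r = 3.84e8      # approximate mean Earth-Moon distance, in meters

a = G * m / r**2
print(f"{a:.1e} m/s^2")   # ~4.5e-31 m/s^2, i.e. roughly 4 x 10^-31 m/s^2
```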
For the proposed quantity 'expectation of the net shift produced on all atoms in the universe': if the universe (including the Earth) contains at least one process chaotic enough to exhibit butterfly effects, then any sneeze anywhere ends up producing a very great expected shift in total motions. Again we must worry that the impact measure, as evaluated inside the mind of a superintelligence, would just assign uniformly high values to every strategy, meaning that unusually dangerous actions would not be discriminated for alarms or vetoes.

Despite the first imaginable proposals failing, it doesn't seem like a 'safe impact measure' necessarily has the type of [ value-loading] that would make it [ VA-complete]. One intuition pump for 'notice big effects in general' not being value-laden is that if we imagine aliens with nonhuman decision systems trying to solve this problem, it seems easy to imagine that the aliens would come up with a safe impact measure that we would also regard as safe.