{ localUrl: '../page/minimality_principle.html', arbitalUrl: 'https://arbital.com/p/minimality_principle', rawJsonUrl: '../raw/7tf.json', likeableId: '4113', likeableType: 'page', myLikeValue: '0', likeCount: '1', dislikeCount: '0', likeScore: '1', individualLikes: [ 'JeremyJaffe' ], pageId: 'minimality_principle', edit: '5', editSummary: '', prevEdit: '4', currentEdit: '5', wasPublished: 'true', type: 'wiki', title: 'Minimality principle', clickbait: 'The first AGI ever built should save the world in a way that requires the least amount of the least dangerous cognition.', textLength: '1623', alias: 'minimality_principle', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2017-10-19 22:48:12', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2017-02-13 19:10:34', seeDomainId: '0', editDomainId: 'EliezerYudkowsky', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '0', isEditorComment: 'false', isApprovedComment: 'false', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '223', text: 'In the context of [2v], the "Principle of Minimality" or "Principle of Least Everything" says that when we are building the *first* [-7g1], we are operating in an extremely dangerous context in which building a marginally more powerful AI is marginally more dangerous. The first AGI ever built should therefore execute the least dangerous plan for [6y preventing immediately following AGIs from destroying the world six months later]. Furthermore, the least dangerous plan is not the plan containing the fewest material actions that seem risky in a conventional sense, but rather the plan that requires the *least dangerous cognition* from the AGI executing it. Similarly, inside the AGI itself, if a class of thought seems dangerous but sometimes necessary to execute, we want to execute the fewest possible instances of that class of thought.\n\nE.g., if we think it's a dangerous kind of event for the AGI to ask "How can I achieve this end using strategies from across every possible domain?" then we might want a design where most routine operations only search for strategies within a particular domain, and events where the AI searches across all known domains are rarer and visible to the programmers. 
Processing a goal that can recruit subgoals across every domain would be a dangerous event, albeit a necessary one, and therefore we want to do *less* of it within the AI (and require positive permission for all such cases and then require operators to validate the results before proceeding).\n\nIdeas that inherit from this principle include the general notion of [6w], [4mn taskishness], and [-2r8].', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'EliezerYudkowsky' ], childIds: [], parentIds: [ 'alignment_principle' ], commentIds: [], questionIds: [], tagIds: [ 'start_meta_tag' ], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22840', pageId: 'minimality_principle', userId: 'EliezerYudkowsky', edit: '5', type: 'newEdit', createdAt: '2017-10-19 22:48:13', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22839', pageId: 'minimality_principle', userId: 'EliezerYudkowsky', edit: '4', type: 'newEdit', createdAt: '2017-10-19 22:47:59', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22838', pageId: 'minimality_principle', userId: 'EliezerYudkowsky', edit: '3', type: 'newEdit', createdAt: '2017-10-19 22:47:10', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22837', pageId: 'minimality_principle', userId: 'EliezerYudkowsky', edit: '2', type: 'newEdit', createdAt: '2017-10-19 22:46:12', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22053', pageId: 'minimality_principle', userId: 'EliezerYudkowsky', edit: '0', type: 'newParent', createdAt: '2017-02-16 18:55:03', auxPageId: 'alignment_principle', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 
'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22051', pageId: 'minimality_principle', userId: 'EliezerYudkowsky', edit: '0', type: 'deleteParent', createdAt: '2017-02-16 18:54:59', auxPageId: 'AI_safety_mindset', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22014', pageId: 'minimality_principle', userId: 'EliezerYudkowsky', edit: '0', type: 'newParent', createdAt: '2017-02-13 19:10:35', auxPageId: 'AI_safety_mindset', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22015', pageId: 'minimality_principle', userId: 'EliezerYudkowsky', edit: '0', type: 'newTag', createdAt: '2017-02-13 19:10:35', auxPageId: 'start_meta_tag', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22012', pageId: 'minimality_principle', userId: 'EliezerYudkowsky', edit: '1', type: 'newEdit', createdAt: '2017-02-13 19:10:34', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'false', hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }