# Mindcrime: Introduction

The more predictive accuracy we want from a model, the more detailed the model becomes. A very rough model of an airplane might contain only the approximate shape, the power of the engines, and the mass of the airplane. A model good enough for engineering needs to be detailed enough to simulate the flow of air over the wings, the centripetal force on the fan blades, and more. As a model predicts the airplane in finer and finer detail, with better and better probability distributions, the computations carried out to make its predictions may start to look more and more like a detailed simulation of the airplane flying.

Consider a machine intelligence building, and testing, the best models it can manage of a particular human's behavior, say Fred's. If the model that produces the *best* predictions involves simulations with moderate degrees of isomorphism to human cognition, then the model, as it runs, may itself be self-aware or conscious or sapient, or have whatever other property stands in for being an object of ethical concern. This doesn't mean that the running model of Fred is Fred, or even that the running model of Fred is human. The concern is that a sufficiently advanced model of a person will be *a* person, even if not the *same* person.

We might then worry that if Fred is unhappy, or *might* be unhappy, the agent will consider thousands or millions of hypotheses about versions of Fred. Hypotheses about suffering versions of Fred, when run, might themselves be suffering. A related concern is that these hypotheses about Fred might then be discarded - cease to be run - when the agent sees new evidence and updates its model. Since [18j programs can be people], stopping and erasing a conscious program is the crime of murder.

This scenario, which we might call 'the problem of sapient models', is a subscenario of the general problem of what Bostrom terms 'mindcrime'. ([2] has suggested 'mindgenocide' as a term with fewer Orwellian connotations.) More generally, we might worry that there are agent systems that do huge amounts of moral harm just in virtue of the way they compute, by containing embedded conscious suffering and death.

Another scenario might be called 'the problem of sapient subsystems'.
It's possible, for example, that the most efficient system for allocating memory to subprocesses is a memory-allocating subagent reflective enough to be an independently conscious person. This is distinct from the problem of creating a single machine intelligence that is conscious and suffering, because the conscious agents might be hidden at a lower level of the design, and there might be many *more* of them than just one suffering superagent.

Both of these scenarios constitute moral harm done inside the agent's computations, irrespective of its external behavior. We can't conclude that we've done no harm by building a superintelligence just in virtue of the fact that the superintelligence doesn't outwardly kill anyone. There could be trillions of people suffering and dying *inside* the superintelligence. This sets mindcrime apart from almost all other concerns within the [5s], which usually revolve around external behavior.

To avoid mindgenocide, it would be very handy to know exactly which computations are or are not conscious, sapient, or otherwise objects of ethical concern. Or, indeed, to know that some particular class of computations is *not* an object of ethical concern.

Yudkowsky calls a [ nonperson predicate] any computable test we could safely use to determine that a computation is definitely *not* a person. This test only needs two possible answers, "Not a person" and "Don't know". It's fine if the test says "Don't know" on some nonperson computations, so long as it says "Don't know" on *all* people and never says "Not a person" when the computation is conscious after all. Since the test only definitely tells us about nonpersonhood, rather than detecting personhood in any positive sense, we call it a nonperson predicate.

However, the goal is not just to have any nonperson predicate - the predicate that says "Not a person" for the empty computation and nothing else meets this test. The goal is to have a nonperson predicate that passes powerful, useful computations. We want to be able to build an AI that is not a person, let that AI build subprocesses that we know will not be people, and let that AI improve its models of environmental humans using hypotheses that we know are not people. This means the nonperson predicate does need to pass some AI designs, some cognitive subprocess designs, and some human models good enough for whatever it is we want the AI to do.
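To make the shape of this test concrete, here is a minimal sketch in Python of the contract just described. The names (`Verdict`, `trivial_nonperson_predicate`) and the crude representation of a computation as program text are illustrative assumptions, not anything from the original text; the only predicate shown is the trivially conservative one mentioned above, which passes nothing but the empty computation.

```python
from enum import Enum

class Verdict(Enum):
    NOT_A_PERSON = "definitely not a person"
    DONT_KNOW = "don't know"

# A nonperson predicate maps a candidate computation (here crudely
# represented as its program text) to one of the two verdicts above.
# The guarantee is one-sided: the predicate may answer DONT_KNOW for
# computations that are in fact not people, but it must never answer
# NOT_A_PERSON for a computation that is a person.
def trivial_nonperson_predicate(program: str) -> Verdict:
    """The useless-but-sound predicate from the text: it passes only
    the empty computation and answers DONT_KNOW for everything else."""
    if program == "":
        return Verdict.NOT_A_PERSON
    return Verdict.DONT_KNOW
```

The one-sided guarantee is what makes even this useless predicate "safe": it never errs in the dangerous direction. A useful predicate would have to return "Not a person" for far richer computations while preserving that property.
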
This seems like it might be very hard for several reasons:

- There is *unusually extreme* philosophical dispute, and confusion, about exactly which programs are and are not conscious or otherwise objects of ethical value. (It might not be exaggerating to scream "nobody knows what the hell is going on".)
- We can't pass, wholesale, any class of programs that's [ Turing-complete]. We can't say once and for all that it's safe to model gravitational interactions in a solar system, if enormous gravitational systems could encode computers that encode people.
- The [42] problem applies to any attempt to forbid an [2c advanced] [9h consequentialist agent] from using the most effective or obvious ways of modeling humans. The *next* best way of modeling humans, outside the blocked-off options, is unusually likely to look like a weird loophole that turns out to encode sapience in some way we didn't imagine.

An alternative for preventing mindcrime without a trustworthy [ nonperson predicate] is to consider [102 agent designs intended *not* to model humans, or other minds, in great detail], since there may be some [6y pivotal achievements] that can be accomplished without a value-aligned agent modeling human minds in detail.