{ localUrl: '../page/alignment_difficulty.html', arbitalUrl: 'https://arbital.com/p/alignment_difficulty', rawJsonUrl: '../raw/8dh.json', likeableId: '0', likeableType: 'page', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], pageId: 'alignment_difficulty', edit: '2', editSummary: '', prevEdit: '1', currentEdit: '2', wasPublished: 'true', type: 'wiki', title: 'Difficulty of AI alignment', clickbait: 'How hard is it exactly to point an Artificial General Intelligence in an intuitively okay direction?', textLength: '3812', alias: 'alignment_difficulty', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2017-05-25 17:09:06', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2017-05-25 17:08:54', seeDomainId: '0', editDomainId: 'EliezerYudkowsky', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '0', isEditorComment: 'false', isApprovedComment: 'false', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '81', text: 'This page attempts to list basic propositions in computer science which, if true, would ultimately be responsible for making it difficult to get good outcomes from a [7g1 sufficiently advanced Artificial Intelligence].\n\n[auto-summary-to-here]\n\n# "Difficulty."\n\nBy saying that these propositions would, if true, seem to imply "difficulties", we don't mean to imply that these problems are unsolvable. We could distinguish possible levels of "difficulty" as follows:\n\n- The problem is straightforwardly solvable, but must in fact be solved.\n- The problem is straightforwardly solvable if foreseen in advance, but does not *force* a general solution in its early manifestations--if the later problems have not been explicitly foreseen, early solutions may fail to generalize. Projects that do not exhibit sufficient foresight may fail to future-proof for the problem, even though it is in some sense easy.\n- The problem seems solvable by applying added effort, but the need for this effort will contribute *substantial additional time or resource requirements* to the aligned version of the AGI project, implying that unsafe clones or similar projects would have an additional time advantage. E.g., computer operating systems can be made more secure, but it adds rather more than 5% to development time and requires people willing to take on a lot of little inconveniences instead of doing things the most convenient way. 
If there are enough manifested difficulties like this, and the sum of their severity is great enough, then...\n - If there is strongly believed to be a great and unavoidable resource requirement even for safety-careless AGI projects, then we have a worrisome situation in which coordination among the leading five AGI projects is required to avoid races to the bottom on safety, and arms-race scenarios where the leading projects don't trust each other are extremely bad.\n - If the probability seems great enough that "A safety-careless AGI project can be executed using few enough resources, relative to every group in the world that might have those resources and a desire to develop AGI, that there would be dozens or hundreds of such projects", then a sufficiently great [7wl added development for AI alignment] *forces* [closed_is_cooperative closed AI development scenarios]. (Because open development would give projects that skipped all the safety an insuperable time advantage, and there would be enough such projects that getting all of them to behave is impossible. (Especially in any world where, like at present, there are billionaires with great command of computational resources who don't seem to understand [1y Orthogonality].))\n- The problem seems like it should in principle have a straightforward solution, but it seems like there's a worrisome probability of screwing up along the way, meaning...\n - It requires substantial additional work and time to solve this problem reliably and know that we have solved it (see above), or\n - Feasible amounts of effort still leave a worrying residue of probability that the attempted solution contains a land mine.\n- The problem seems unsolvable using realistic amounts of effort, in which case aligned-AGI designs are constrained to avoid confronting it and we must find workarounds.\n- The problem seems like it ought to be solvable somehow, but we are not sure exactly how to solve it. 
This could imply that...\n - Novel research and perhaps genius is required to avoid this type of failure, even with the best of good intentions;\n - This might be a kind of conceptual problem that takes a long serial time to develop, and we should get started on it sooner;\n - We should start considering alternative design pathways that would work around or avoid the difficulty, in case the problem is not solved.', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'EliezerYudkowsky' ], childIds: [], parentIds: [ 'ai_alignment' ], commentIds: [], questionIds: [], tagIds: [ 'work_in_progress_meta_tag' ], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22569', pageId: 'alignment_difficulty', userId: 'EliezerYudkowsky', edit: '2', type: 'newEdit', createdAt: '2017-05-25 17:09:06', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22568', pageId: 'alignment_difficulty', userId: 'EliezerYudkowsky', edit: '0', type: 'newTag', createdAt: '2017-05-25 17:08:56', auxPageId: 'work_in_progress_meta_tag', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22567', pageId: 'alignment_difficulty', userId: 'EliezerYudkowsky', edit: '0', type: 'newParent', createdAt: '2017-05-25 17:08:55', auxPageId: 'ai_alignment', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22565', pageId: 'alignment_difficulty', userId: 'EliezerYudkowsky', edit: '1', type: 'newEdit', createdAt: '2017-05-25 17:08:54', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'false', hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }