{ localUrl: '../page/2qm.html', arbitalUrl: 'https://arbital.com/p/2qm', rawJsonUrl: '../raw/2qm.json', likeableId: '1652', likeableType: 'page', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], pageId: '2qm', edit: '1', editSummary: '', prevEdit: '0', currentEdit: '1', wasPublished: 'true', type: 'comment', title: '"> We don't have to explicit..."', clickbait: '', textLength: '4170', alias: '2qm', externalUrl: '', sortChildrenBy: 'recentFirst', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2016-03-19 23:27:24', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2016-03-19 23:27:24', seeDomainId: '0', editDomainId: 'EliezerYudkowsky', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '0', isEditorComment: 'false', isApprovedComment: 'true', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '480', text: '> We don't have to explicitly cover injunctions, just to provide information that allows the agent to predict human judgments.\n\nI should edit to clarify that I worry the process of *teaching* the agent the *category* of everything to avoid will fail, not that I'm assuming an explicit list with no further extrapolation. If I thought there was a straightforwardly trainable simple concept covering everything an AGI shouldn't do, which left enough space to include pivotal acts, and that didn't max out in weird edge cases when subjected to strong optimization pressures, I would be a lot less worried than I currently am!\n\n> If the AI is bad at making predictions, then it may just end up with lots of variables for which it thinks the human might say "changing that variable is problematic." Behaving appropriately with respect to this uncertainty could recover the desired behavior.\n\nThis is an issue of prioritization. There's some finite amount of checking humans can do which has to be distributed over everything there is to be checked. Let's say a team of 10 checkers can each do 4 detailed checks and 40 quick checks per day without getting sloppy. If the catastrophic act is on some level noticed, but it's assigned a probability-of-catastrophe level which ranks it as the 1000th item on the list, it's going to slip through if the AGI is allowed to do anything at all. Again, your intuition seems to be that there should be a few easy obvious things to check, whereas I'm nervous that any checking procedure strong enough to catch the disasters is going to produce a huge number of false positives because the AI will not internally contain the information and cognitive capacity required to tell the difference.\n\n> If we use such a learner appropriately, this seems like it can obtain behavior at least as good as if the agent was first been taught a measure of impact and then used that measure to avoid (or flag) high-impact consequences.\n\nWe differ in how much we think predictors can safely do automatically. 
My reason for wanting to think about low impact explicitly has two parts.\n\nFirst, I'm concerned that for a realistic limited AGI of the sort we'll actually see in the real world, we will not want to amplify its intelligence up to the point where all learning can be taken for granted; we will want to use known algorithms; and therefore, considering something like 'low impact' explicitly and as part of machine learning may improve our chances of ending up with a low-impact AGI.\n\nSecond, if there turns out to be an understandable core to low impact, then by explicitly understanding this core we can decrease our nervousness about what a trained AGI might actually have learned to do. By default we'd need to worry about an AGI blindly trained to flag possibly dangerous things, learning some unknown peculiar generalization of low impact that will, like a neural network being fooled by the right pattern of static, fail in some weird edge case the next time its option set expands. If we explicitly understand what generalization of low impact is being learned, that would boost our confidence (compared to the blind training case) that, on the next expansion of options, the learned generalization will not be fooled by the right kind of staticky image (under optimization pressure from a planning module trying to avoid dangerous impacts).\n\nThis appears to me to go back to our central disagreement-generator about how much the programmers need to explicitly understand and consider. I worry that things which seem like 'predictions' in principle won't generalize well from previously labeled data, especially for things with [2fr reflective degrees of freedom], double-especially for limited AGI systems of the sort that we will actually see in practice in any endgame with a hope of ending well. Or in simpler terms, I think that trying to have safety systems that we don't understand, generalized from labeled data without us fully understanding the generalization and its possible edge cases, is a nigh-inevitable recipe for disaster. 
Or in even simpler terms, you can't possibly get away with building a powerful AGI you understand that poorly.', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '0', maintainerCount: '0', userSubscriberCount: '0', lastVisit: '', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'EliezerYudkowsky' ], childIds: [], parentIds: [ 'low_impact', '2qh' ], commentIds: [], questionIds: [], tagIds: [], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8802', pageId: '2qm', userId: 'EliezerYudkowsky', edit: '1', type: 'newEdit', createdAt: '2016-03-19 23:27:24', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8799', pageId: '2qm', userId: 'EliezerYudkowsky', edit: '0', type: 'newParent', createdAt: '2016-03-19 23:09:57', auxPageId: 'low_impact', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8801', pageId: '2qm', userId: 'EliezerYudkowsky', edit: '0', type: 'newParent', createdAt: '2016-03-19 23:09:57', auxPageId: '2qh', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'false', hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }