### Explanation and AI control

Consider the definition:

> An action is good to the extent that I would judge it to be good, after hearing a detailed explanation assessing its advantages, disadvantages, and alternatives.

Contrast with:

> An action is good to the extent that I would judge it to be good, after an extensive and idealized process of reflection.

The second definition seems like a conceptually useful ingredient for AI control. The first definition would be a lot more actionable, though; if we could make it work, it seems more promising as an ingredient of concrete AI control proposals. In this context, the explanations would be crafted by AI systems trained to produce compelling explanations.

For this to work, we would need to reliably distinguish good from bad actions based only on their explanations — explanations which were chosen to be maximally compelling, rather than based on notions of “honesty” or a desire to be helpful.

To achieve this goal, these explanations would need to be quite rich in at least two senses:

- These explanations may depend on unfamiliar concepts and complex reasoning, which _themselves_ require explanation.
- These explanations need to explore not only an option but its alternatives, not only the evidence for a conclusion but the evidence against. They may look more like “arguments” than “explanations,” even though there is no explicit adversarial dynamic.

To make things a bit easier, the explanations need not be produced strategically by a single actor. Instead, different parts of the explanation could be produced to satisfy different goals.

It would be nice to know whether this kind of explanation is feasible even in principle. Will sophisticated AI systems be able to explain their reasoning in a way that meets this bar? Or is there an inherent gap between the conclusions that a system can reach and the conclusions that it can convincingly justify to someone else?

### How to study AI explanation?

This problem is more challenging and interesting for AI systems that are more sophisticated than their users.
It seems hard for these systems to give a full description of their reasoning to the user, in the same way that it can be difficult for a brilliant human expert to explain their reasoning to a layperson.

This problem is more approachable and (again) more interesting for AI systems that are able to communicate with humans and have a theory of mind. Without these capabilities, it seems hard for a system to give a full description of its reasoning to the user, in the same way that it can be difficult for a 4-year-old to explain their reasoning to anyone.

Between these two difficulties, explanation is hard to study in the context of existing systems. (Though we can study related problems about understandability and transparency, which themselves seem important and which I hope to discuss soon.)

Fortunately, we have an excellent toy model of this problem: humans.

The problem
===========

Consider a group of experts in domain X, and a user with a minimal understanding of that domain. We would like to design a system that lets the user discover some fact about X or make a decision related to X. For example, the experts may be finance professionals “helping” a consumer decide what financial products to buy. Or the experts may be physicists “helping” an undergraduate make an accurate prediction about a novel high-energy physics experiment.

We are interested in the case where:

1. The experts don’t care about the user or about what is really true,
2. The user doesn’t have much time to think about the problem,
3. The domain is very complex, requiring detailed background knowledge, unfamiliar abstractions, and challenging reasoning, and
4. The user wants to make decisions that are nearly as good as the decisions that the experts would make if they cared.

The only lever at the user’s disposal is control over the experts’ incentives. We’ll assume that the user has more or less complete control over incentives, in the same way that the designers of AI systems have control over their training process.

As a simple and classical example, the user could elicit a recommendation from one expert, ask a second expert to argue against that recommendation, and then reward whichever one of them is more persuasive.
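To pin down the bookkeeping of that recommend-and-rebut scheme, here is a minimal sketch. Every name in it (`Expert`, `run_round`, the `judge` callable) is a hypothetical stand-in for illustration, not an existing API and not part of the proposal itself.

```python
# Toy sketch of the recommend/rebut scheme: one expert proposes, a second argues
# against, and the user rewards whichever side they found more persuasive.
# All names here are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Expert:
    name: str
    respond: Callable[[str], str]  # maps a prompt to an argument
    reward: float = 0.0


def run_round(question: str, proposer: Expert, critic: Expert,
              judge: Callable[[str, str, str], str]) -> str:
    """One round of the scheme; `judge` returns "proposer" or "critic"."""
    recommendation = proposer.respond(question)
    rebuttal = critic.respond(
        f"Argue against this recommendation for {question!r}: {recommendation}"
    )
    # The user sees only the two arguments, not the experts' private reasoning,
    # and controls nothing except who gets rewarded.
    if judge(question, recommendation, rebuttal) == "proposer":
        proposer.reward += 1.0
        return recommendation
    critic.reward += 1.0
    return "no recommendation accepted"
```

All of the interesting questions are hidden inside `judge`: whether a time-limited layperson, shown only two arguments optimized to persuade, can reliably pick the right side.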
The question is: can we design systems that let the user reliably accomplish their task?

### Examining the analogy

Our goal is to build systems that allow the user to extract the “right” answer from the experts, even in the presence of very large gaps in knowledge, time, and ability. Success would suggest that, at least if our AI systems are similar to the human experts, we could use this kind of explanation process to define the “right” action, without significant performance loss.

That looks good as far as it goes, but there are some reasons to expect AI systems to be unlike the human experts.

- AI systems which are in some respects very sophisticated may lack many capabilities of human experts. Even once we know that humans can evaluate explanations sufficiently rigorously, there is a further problem of actually building AI systems which are able to explain themselves.

Building systems that can actually explain themselves is a major and complementary research question, which seems worth studying in the context of existing AI algorithms. The usefulness of explanation for AI control rests on **both** the human’s ability to evaluate explanations and the AI’s ability to explain itself; we are only trying to study the former.

- In some respects the gap between human users and AI systems may be much larger than the gap between the user and the experts. And certainly the thought process of AI systems is likely to be much more unfamiliar. Even if our techniques can close the latter gap, they may not be able to close the former.

The explanation problem for humans is not the very hardest version of the explanation problem. If we could solve this version, we’d have more work to do in order to generalize to ever more alien cognitive styles. That said, if we were able to bridge a large gap in abilities with minimal reduction in performance, it would suggest that we had developed scalable frameworks for explanation.

With respect to capability gaps, note that experts with a lot of time are quite a bit more capable than laypeople with little time. At the point when AI systems have such a big lead over their human users, it seems likely that human contributions to AI control will no longer be relevant. So I’m inclined to think that “alien minds” is a bigger issue than “big capability gaps.”

- Human experts may “play nice,” following usual norms of civil discourse even when violating them would technically better suit the incentives described by the user. More subtly, human experts may simply not be very good at deception. (We may also be able to build AI systems that aren’t very good at deception, but this would require some extra work.)

This is something to watch out for and to try to avoid; see the discussion in the next section.

Working on the problem
======================

### Setup

In order to test approaches to this problem, all we need are a few experts in a domain, a judge who doesn’t understand that domain (or a sequence of such judges), and a challenging question for the judge to attempt to answer. These ingredients are readily available. We can evaluate performance by comparing the answers the judge comes up with to the answers the experts would have given, or in some cases by appeal to ground truth or to more authoritative sources.

It’s not clear what domains/experts/questions are most fruitful, and there are other degrees of freedom about how to set up the exercise; but the easiest way to sort these things out may be to try it (with a relatively short timescale, such as 2-8 hours) and see.

One non-trivial issue is implementing the experts’ incentives. One could start with play money, treating the exercise as a game which the experts try in good faith to win. Of course, if the experts are paid rather than being volunteers, then there is a more straightforward way to determine their incentives. But realistically I think that good-faith effort will be more important than the incentives. We can get extra mileage by having the same experts play the game against varying users, increasing the probability that the experts identify any particular effective strategy, and by having retrospective collaborative efforts to improve the experts’ strategies.
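As a rough illustration of the bookkeeping involved, the evaluation loop for such an exercise might look like the sketch below. The `scheme`, `experts`, and `judges` interfaces and the scoring rule are hypothetical placeholders, not a fixed experimental protocol.

```python
# Rough sketch of the evaluation loop for the human exercise. All interfaces
# (scheme, experts, judges, reference answers) are hypothetical placeholders.

def evaluate_scheme(scheme, questions, experts, judges, reference_answers):
    """Run one explanation scheme over many (question, judge) pairs.

    `scheme(question, experts, judge)` runs a single session and returns the
    judge's final answer plus play-money payoffs for the experts.
    `reference_answers[question]` is what the experts would answer if they
    cared (or the ground truth, when it is available).
    """
    successes = 0
    payoffs = {expert: 0.0 for expert in experts}

    # Reuse the same experts against varying judges, so the experts have more
    # chances to discover any particular effective (possibly adversarial) strategy.
    for judge in judges:
        for question in questions:
            answer, session_payoffs = scheme(question, experts, judge)
            successes += int(answer == reference_answers[question])
            for expert, reward in session_payoffs.items():
                payoffs[expert] += reward

    # Fraction of sessions where the judge matched the reference answer,
    # plus the experts' accumulated play-money rewards.
    return successes / (len(judges) * len(questions)), payoffs
```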
### Approaches

The bigger question may be: are there promising approaches to this problem that are worth exploring?

Even though this seems like a question of natural interest, I don’t really know of a literature on it (though it would be great to find one). It seems like trying random things and seeing how they work would be pretty informative.

I’ve written about [one possible mechanism](https://arbital.com/p/1xx), which I am very interested to test. That proposal also has many possible improvements, and in general I expect the whole thing to change a lot after making contact with reality.