# Unsupervised learning and AI control

Reinforcement learning systems optimize for an objective defined by external feedback — anything from successful prediction of image labels to good performance in a game. (I am using “reinforcement” learning [very broadly](https://arbital.com/p/1v2?title=reinforcement-learning-and-linguistic-convention), including e.g. supervised learning.) I think it’s safe to say that reinforcement/supervised learning is overwhelmingly dominant in machine learning today.

I think it is also safe to say that many researchers think this will eventually change, even for the domains and techniques where supervised learning is currently most dominant — e.g. Yann LeCun [wrote](http://fastml.com/yann-lecuns-answers-from-the-reddit-ama/) last year “everyone [in deep learning] agrees that the future is in unsupervised learning,” while acknowledging “the recent practical success of deep learning in image and speech all use purely supervised backprop.”

Improved unsupervised learning might be good news for AI control, and the prospect of unsupervised learning plays a major role in informal discussions of AI control. If unsupervised learning can extract robust and meaningful concepts, these concepts may be available to communicate goals and preferences to AI systems, which can then pursue what we actually care about rather than an approximation defined by external feedback.

### My take

(See also: _Caveats_ below.)

I think that we should try to address the AI control problem using reinforcement learning, rather than assuming that the problem will be made much easier by progress in unsupervised learning (broadly construed).

More specifically: I think that we should assume that we can train systems to be good at specific tasks for which we can provide feedback on performance, but beyond that we should not rely on strong assumptions about their internal representations or other characteristics of their behavior.

By “broadly construed” I mean to include capabilities like efficient prediction, semi-supervised learning, and density estimation. These areas capture many _capabilities_ that people have in mind when they discuss unsupervised learning. But in terms of their relevance to AI control, they are quite similar to supervised learning, and they certainly fit within the more specific framework in the previous paragraph.
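To make this concrete, here is a minimal sketch (my own illustration, with a hypothetical model and dataset, not from the original post) of why density estimation fits the same framework: the training signal is still a scalar objective computed from data, in this case the negative log-likelihood of the observations under a learned categorical model.

```python
# Minimal sketch (hypothetical example): "unsupervised" density estimation
# still optimizes a scalar, data-defined objective -- the negative
# log-likelihood of the observations under a learned categorical model.
import numpy as np

rng = np.random.default_rng(0)
data = rng.choice(4, size=1000, p=[0.1, 0.2, 0.3, 0.4])  # unlabeled observations
freqs = np.bincount(data, minlength=4) / len(data)       # empirical frequencies

logits = np.zeros(4)  # model parameters, fit by gradient descent
for _ in range(500):
    probs = np.exp(logits) / np.exp(logits).sum()
    logits -= 0.5 * (probs - freqs)  # gradient of mean NLL w.r.t. the logits

print(np.round(np.exp(logits) / np.exp(logits).sum(), 2))  # ~ [0.1, 0.2, 0.3, 0.4]
```

Nothing here depends on labels, but the learner is still driven toward whatever optimizes an externally defined objective, which is the property that matters for control.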
In contrast, researchers often express the hope that sophisticated AI systems will discover many of the same robust concepts that humans use when they reason about the world, and that these concepts will be sufficiently precise, in the right format, and sufficiently aligned with the corresponding human concepts that they can be used directly to issue commands or specify goals.

My recommendation is to treat future capabilities as similar in kind to contemporary reinforcement learning, though applying in broader domains and with more efficient use of data, rather than making optimistic and somewhat vague assumptions about what unsupervised learning will do for us.

### Justification

My reasons for focusing on reinforcement/supervised learning are:

- I don’t think that we have much idea what future unsupervised learning will look like, and I think that specific assumptions about it are likely to be wrong. These assumptions are often very imprecise and intuitive, which I feel makes them especially suspect and hard to reason about. I think that our best _single_ model of future unsupervised learning may well be current unsupervised learning, which is pretty well-modeled as good prediction and semi-supervised learning.
- I think there is a [good case](https://arbital.com/p/1w2) that research in AI control should focus on existing techniques. We can understand these techniques much better than unknown future techniques; we can do empirical work on these techniques; and this work will be especially relevant if AI control happens to become important surprisingly soon. This argument suggests we should work with reinforcement learning while that’s what we have, and think more about unsupervised learning as the techniques mature.
- Even if unsupervised learning is the dominant paradigm in the future, it seems quite plausible that control techniques based on reinforcement learning will remain relevant. For example, reinforcement learning may still remain one useful technique amongst many, and may play a significant role in the internal organization of powerful AI systems. As a special case, many possible “unsupervised” learning strategies involve interacting reinforcement learning systems with cleverly designed objectives. Other strategies use unsupervised learning to build useful representations, but only extract useful behavior from these representations by supervised fine-tuning (see the sketch after this list).
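To illustrate that last point, here is a minimal sketch (my own, with a hypothetical dataset and a deliberately simple choice of techniques: PCA for the unsupervised stage, a logistic-regression head for the supervised stage). The unsupervised stage only builds a representation; the behavior we actually get is pinned down by the supervised feedback on top of it.

```python
# Minimal sketch (hypothetical example): unsupervised representation learning
# (PCA) followed by supervised fine-tuning (a logistic-regression head).
import numpy as np

rng = np.random.default_rng(0)
# Unlabeled data: 2 informative directions buried in 20 noisy dimensions.
latent = rng.normal(size=(2000, 2))
X = latent @ rng.normal(size=(2, 20)) + 0.1 * rng.normal(size=(2000, 20))

# Unsupervised stage: learn a representation (top-2 principal components).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
rep = Xc @ Vt[:2].T

# Supervised stage: a handful of labels defines the actual task.
y = latent[:, 0] > 0
w = np.zeros(2)
for _ in range(1000):  # logistic regression on 50 labeled examples
    p = 1 / (1 + np.exp(-rep[:50] @ w))
    w -= 0.1 * rep[:50].T @ (p - y[:50]) / 50

acc = ((rep @ w > 0) == y).mean()
print(f"accuracy with 50 labels on learned features: {acc:.2f}")
```

On the view recommended above, the control-relevant questions attach to the second stage: whatever the learned representation looks like internally, it is the supervised signal that ties it to behavior.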
### Caveats

To clarify:

- I think we will make significant advances in unsupervised learning, and these advances will be relevant to AI control. I am making a methodological claim about how to think effectively about and do useful research on AI control right now — not a strong prediction about the future of machine learning.
- I think that the prospect of improved unsupervised learning should make us more optimistic about AI control. I don’t think that modest changes in our overall level of optimism will have big effects on what we should do, unless we end up _very_ optimistic (which I don’t think is justified), or we started out _very_ pessimistic (which I don’t think is justified either).
- A better understanding of unsupervised learning may be helpful for AI control. At face value this doesn’t look like a promising project for a researcher concerned with AI control (since understanding unsupervised learning is already a hot topic and so is unlikely to be neglected), but it still has some positive differential impact and it might end up being a winner. And of course I am happy to be supportive of AI researchers who are working on differentially useful projects, even if they aren’t maximally differentially useful from my perspective.
- I assume that future systems will (eventually) be able to make superhumanly efficient use of data. I object to making more detailed assumptions about the features learned by unsupervised learning, _not_ to the assumption that we will eventually realize the practical goals of unsupervised learning. For example, I think that we should expect AI systems to develop conceptual understanding that allows them to quickly learn the meaning of a new word or even do zero-shot learning.
- I think it is worthwhile to pay attention to continuing progress in unsupervised learning and to adjust our approach to AI control as we learn more and can make firmer predictions about what will be possible.

### An example of the distinction

Suppose that we train a learner to recognize which scenes contain humans.

In order to analyze how the behavior of this system will scale, I would think in detail about the training process, the objective it is optimizing, and what behaviors would optimize that objective. For example, if the system is trained to reproduce human labels, then I expect more sophisticated systems to converge to the human’s labels.

We might hope that in the future an unsupervised approach would learn to identify which scenes “really” contain humans, and could make correct judgments even in cases where the human would err or in domains where we couldn’t actually elicit a human label.

I would recommend against making this kind of assumption until we have learned more about unsupervised learning.
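To make the reinforcement model’s prediction concrete, here is a minimal sketch (my own hypothetical setup, not from the original post): a learner trained to reproduce the labels of a systematically biased labeler converges to the labeler’s judgments rather than to the underlying concept.

```python
# Minimal sketch (hypothetical example): training on human labels converges
# to the labeler's judgments, systematic errors included.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2))
truth = X[:, 0] > 0                    # the "real" concept
human = X[:, 0] + 0.5 * X[:, 1] > 0    # a systematically biased labeler

w = np.zeros(2)
for _ in range(2000):  # logistic regression trained on the human labels
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - human) / len(X)

pred = X @ w > 0
print("agreement with human labeler:", (pred == human).mean())  # close to 1
print("agreement with true concept :", (pred == truth).mean())  # noticeably lower
```

As the learner gets stronger (more data, more optimization, richer models), the first number approaches 1 while the second stays capped by the labeler’s bias; that is what “converging to the human’s labels” means here.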
### The current situation

Very few people have thought seriously about how to handle AI control if reinforcement learning remains as dominant as it currently is; I think that few people are optimistic enough to think that the task is possible and pessimistic enough to think that it may be necessary.

When I’ve discussed the issue with AI researchers, they seem to have strong expectations that progress in unsupervised learning will obviate many of the concerns with AI control for reinforcement learning (and with AI control more broadly), allowing users to e.g. provide natural language instructions that will be correctly understood and implemented. I think this is a very reasonable hypothesis, and that the views of AI researchers are an important source of evidence for it.

But I’m not yet persuaded that this is much more likely than not. I also don’t think that most AI researchers have considered the question in much detail or engaged with substantive arguments that these problems are real. I’m not even sure that _they_ think this optimistic hypothesis is much more likely than not, as opposed to merely considering it the most plausible hypothesis or the most promising approach.

MIRI researchers mostly seem to take the opposite view — that unsupervised learning definitely won’t address these problems by default. But the MIRI research agenda responds by focusing on a number of problems that they see as necessary to make unsupervised learning work for goal specification: ontology identification, multi-level world models, ambiguity identification, and operator modeling. I think most MIRI researchers feel that we are probably doomed if we are stuck with the kind of reinforcement learning that is available today. (This is my unconfirmed impression based on informal discussions.)

### Upshot

Unsupervised learning will probably improve significantly before AI control becomes a serious problem. Nevertheless, I think that researchers interested in AI control should focus on handling reinforcement learning systems of the kind that already exist.