# Counterfactual oversight vs. training data

I have written a lot recently about [counterfactual human oversight](https://arbital.com/p/1tj?title=human-in-counterfactual-loop). This idea has made me much more optimistic about AI control proposals based on supervised learning. But counterfactual oversight looks superficially kind of weird, and unlike the kind of thing that would appear in practice. I think that is mostly a presentation issue, and that in fact the analysis of counterfactual oversight applies more or less directly to the kinds of supervised learning systems that people are likely to build.

### Status quo

Consider the normal workflow for training a supervised learning system:

- Collect and label training data.
- Train a learning system.
- Deploy that system.

This workflow can run into a few well-known problems:

- The problem is not stationary, and over time the training data becomes less relevant.
- There are spurious correlations in the training data that don’t generalize to the test data.

How do we address these problems?

- Continue to gather training data while the system is deployed. Periodically adjust the learned model (or maybe use an online approach if the setting calls for it).
- Try to make the training data as similar as possible to the test data.

Ideally, our training data would be a random subset of the test data, and we would train continuously. Of course, if data needs labelling and the task is performed frequently, it will be impractical to label it all by hand. So instead we might label a small fraction of it.

In most cases these measures aren’t necessary; if we are mindful of the possible problems, we will instead address them in the cheapest way available.

### Counterfactual oversight

Counterfactual oversight consists of labelling a random subset of data and using it as online training data. The key difference from the status quo is that any given data point _may_ become a training data point, with that choice made _after_ the learning system has made its decision about it. As long as the randomization is unpredictable to the learner, this gives us a formal guarantee that there can’t be any noticeable difference between the training and test data. And therefore if our learner behaves well on training data, it really must behave well on test data.
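To make this concrete, here is a minimal sketch of a single counterfactual oversight step. The `model` and `human_label` interfaces below are hypothetical stand-ins (none of these names come from the post itself); the only point being illustrated is the ordering of decision, randomization, and labelling.

```python
import random

LABEL_FRACTION = 0.01  # fraction of live inputs sent for human labelling

def oversight_step(model, x, human_label):
    """One counterfactual oversight step (hypothetical interfaces).

    model.predict(x)   -- the system's decision on input x
    model.update(x, y) -- one online training update
    human_label(x)     -- the human evaluator's rating of this data point
    """
    decision = model.predict(x)  # the system commits to its decision first

    # Only *after* the decision do we randomize whether this input becomes
    # training data. The draw must be unpredictable to the learner (in the
    # extreme case, drawn from a cryptographic RNG), so that training points
    # are statistically indistinguishable from the rest of the test data.
    if random.random() < LABEL_FRACTION:
        y = human_label(x)   # the evaluator rates this data point...
        model.update(x, y)   # ...and it becomes online training data

    return decision
```

Because the learner cannot tell at decision time whether the current input will later be labelled, behaving well on the labelled fraction and behaving well on everything else are the same problem.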
In most cases, I expect that this solution is overkill, so if it’s expensive we can probably do something cheaper. For example, I would be surprised if researchers ever really needed to use a secure cryptographic RNG to decide what cases to include in the training set.

But when thinking about scalability to extreme cases, it seems worthwhile to check that there is a (relatively) cheap solution which is essentially perfectly robust. This gives us a nice model to use when thinking about those extreme cases, and if we notice any problems with the extreme solution we can pay extra attention to them. If the extreme solution works and isn’t too expensive, we can be reassured that people will find _some_ solution that works and isn’t too expensive, whether or not it’s the particular one we imagined.

When describing counterfactual oversight I also usually imagine that our algorithms can deal with sequential data rather than discarding information about time. This allows them to adjust to trends in the data, or to anticipate changes based on available information, rather than anchoring their behavior to past experiences. I suspect this will be an important issue in the future, but for the most part it can be ignored for now. This ability isn’t a requirement for applying counterfactual oversight — the point is that counterfactual oversight allows us to apply this ability while retaining strong formal guarantees.
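As a toy illustration (my own, not from the post), here is a sketch of what keeping temporal information can buy: an exponentially discounted online estimator whose predictions track a drifting target instead of anchoring to the all-time average. Any sequential learner could play this role; counterfactual oversight only determines which points get labelled.

```python
class RecencyWeightedMean:
    """Toy online estimator: an exponentially discounted running mean.

    Older observations are down-weighted, so predictions follow trends in a
    non-stationary stream rather than the average of all past experience.
    """
    def __init__(self, decay=0.99):
        self.decay = decay          # per-step discount on old observations
        self.weighted_sum = 0.0
        self.total_weight = 0.0

    def predict(self):
        if self.total_weight == 0.0:
            return 0.0              # arbitrary prior before any data
        return self.weighted_sum / self.total_weight

    def update(self, y):
        # Discount everything seen so far, then incorporate the new label.
        self.weighted_sum = self.decay * self.weighted_sum + y
        self.total_weight = self.decay * self.total_weight + 1.0
```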
A final difference is one of language. I describe counterfactual oversight as the learner “trying to do what the evaluator would rate highly,” with the evaluator picking some random data points to actually rate. This would more traditionally be described as the learner trying to solve the underlying problem, with the evaluator providing some useful training data to help. I think that this difference in language results from a difference of perspective — I am trying to pay close attention to what exactly the system is actually doing. This vantage point seems especially suitable for thinking about AI control, but it doesn’t directly translate to any technical differences in the systems being discussed. I also happen to think that many practicing AI researchers could stand to be a bit more precise in this respect, though they probably shouldn’t go as far as I do and it’s not so important one way or the other.

### So why think about it?

If counterfactual oversight is very similar to existing practice, why bother thinking about it?

I am interested in understanding in advance what issues will arise as we try to scale existing approaches to AI control to very powerful systems — to whatever extent that is possible.

The supervised learning paradigm has a number of distinctive challenges, for example those arising from non-stationary data, spurious correlations in training data, and the availability and cost of supervision. Today these problems are real but typically manageable; it’s conceivable that they will become more severe as learning systems become more powerful. It’s natural to ask whether they are likely to become _much_ more severe, and in particular whether they call into question supervised learning as a paradigm for controlling very powerful AI systems.

Counterfactual oversight seems to address many of these problems in a robust way, suggesting that they won’t be deal-breakers “in the limit.” For example, a human-level online learner under counterfactual oversight is unlikely to predictably behave badly because of spurious correlations in the training data. This suggests that such spurious correlations are unlikely to be deal-breakers. Similarly, it seems that the amount of data required by a sophisticated semi-supervised learner using counterfactual oversight would be comparable to the amount of data needed by a completely unsupervised learner, suggesting that the availability of training data is not a deal-breaker either.

Prior to considering counterfactual oversight, I had expected that training processes involving humans were unlikely to be suitable as part of the _definition_ of correct behavior for sophisticated AI systems — that they could only be used to provide auxiliary information that would be useful to an AI in achieving goals defined by some other means. Thinking through the consequences of counterfactual oversight has largely addressed the narrow versions of this concern (though there are many closely related issues that remain open).

As far as I can tell, practicing AI researchers mostly didn’t have this concern, which is fine. I think there is room for some people to approach long-term issues with a more theoretical and cautious stance; I’m pretty optimistic that the problems with scalability that seem especially challenging in theory are also likely to generate interesting practical questions for AI researchers today.