Synthesizing training data
==========================

[Counterfactual oversight](https://arbital.com/p/1tj?title=human-in-counterfactual-loop) requires the overseer to stick around while the system is operating. This [causes trouble if the overseer is no longer able to provide effective oversight](https://ordinaryideas.wordpress.com/2015/11/30/driving-fast-in-the-counterfactual-loop/).

This post will describe a scheme for resolving this problem. (Translating a key idea from [this post](https://arbital.com/p/1th?title=implementing-our-considered-judgment) to the setting of realistic predictors.) The scheme is not very good. But it is good enough to suggest that there is no impossibility result — no theoretically fundamental difficulty, just a practical problem. So I’m optimistic it is possible to come up with something better.

Another limitation of counterfactual oversight is that we need to provide frequent oversight if we want to respond quickly to context changes. I won’t discuss this issue in this post, but I think that the same techniques can be used to address it by synthesizing pessimistic data.

### Synthesizing data

The traditional approach to supervised learning is to train learning systems in advance, and deploy them after verifying that they work well.

This procedure implicitly defines the intended behavior of the system as “do what the overseer would have done/approved of, if this situation had appeared in training.”

From a control perspective, this is unsatisfying because the counterfactual (“if this situation had appeared in training”) may be extremely improbable.

Counterfactual oversight is one way to get around this problem. But a more natural way is to simply try our best to make the training data “look like” the test data. This post lays out a sequence of proposals along these lines.

### Model

I’ll assume that we have a constant supply of tasks, but that the distribution of tasks may change over time. The most interesting changes are those that arise from the introduction of machine learning systems — these seem especially hard to deal with using traditional techniques. These changes are also unavoidable for the bootstrapping protocols I have been considering recently.

I’ll also assume that we are willing to invest continuously in training (this is especially plausible if the same training data is deployed across many systems).

I’ll talk about training occurring on discrete days. Obviously this is an unrealistic simplification, and the time frame is arbitrary; it should be clear how to make the setting more realistic.

For notation, I’ll assume that the training occurs in the morning of each day, and all actual decisions occur later. If we need to use AI assistants to help us with training, they will use the same model that we deployed in the field on the previous day.

I’ll call the data that we actually care about the “test” data.

I’ll imagine we are training a learner, which I’ll call the “actor,” to predict which action a human would recommend in response to its most recent observation. I’ll assume that the “observation” includes the current state of the system and that the action includes the next state of the system.

Approaches
==========

### 1. The basics

On the morning of day N we train the actor using observations from previous days. We’ll label each of these observations with “day N.”

During day N, we tag each observation with “day N” before giving it to the actor. So the actor is trying to predict what the overseer _would have said_, if that observation had occurred on a previous day and been labelled that morning.

In principle this avoids the problem discussed in [this post](https://ordinaryideas.wordpress.com/2015/11/30/driving-fast-in-the-counterfactual-loop/). After training on the morning of day N, we have produced a system that can operate autonomously without further oversight.
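
To make the bookkeeping concrete, here is a minimal sketch of this basic scheme in Python. All of the names (`morning_training`, `Overseer`, `Fit`, and so on) are placeholder interfaces I am introducing for illustration; the actual learning step is hidden behind a generic `fit` callable.

```python
from typing import Callable, List, Tuple

# Illustrative placeholder interfaces, not an existing API.
Actor = Callable[[int, bytes], bytes]      # (day_tag, observation) -> predicted action
Overseer = Callable[[int, bytes], bytes]   # what the human recommends for a tagged observation
Fit = Callable[[List[Tuple[int, bytes, bytes]]], Actor]  # generic supervised learner

def morning_training(day: int, past_observations: List[bytes],
                     overseer: Overseer, fit: Fit) -> Actor:
    """On the morning of day N: label observations from previous days with
    "day N", ask the overseer which action they would recommend, and train
    the actor to predict that recommendation."""
    dataset = [(day, obs, overseer(day, obs)) for obs in past_observations]
    return fit(dataset)

def act(day: int, observation: bytes, actor: Actor) -> bytes:
    """During day N: tag each new observation with "day N" and ask the actor
    what the overseer would have said about it that morning."""
    return actor(day, observation)
```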

But this approach introduces the problem mentioned above: some observations from day N may obviously be from day N. We never train our system on this kind of labelled data — the data we actually use on the morning of day N is always from preceding days — and so it’s not clear how it will respond to it.

### 2. Sanity-checking

A first response is to at least _notice_ the context change and abort or provide a warning. To this end we train an additional learner:

- The **distinguisher** is given a tagged observation and needs to output a probability distribution over “training” or “test.” On day N it is trained using the training data and test data from all previous days (re-weighted so that the two categories are equiprobable), with the same labels they were given at the time. It is aiming to distinguish the two kinds of data.

When training the distinguisher, we ensure that each observation appears only once to avoid memorization. For example, we could randomly partition the test data into “data to be used to train the system on a future day” and “data to be used to train the distinguisher.”

To use our system on day N, we tag observations with “day N” and then feed them to the distinguisher. If the distinguisher is very confident (say >99%) that this is test data, then we abort and notify the user that the observation is too far from the training distribution. If the distinguisher thinks that the data may be training data, then we pass it on to the actor.

We also modify the actor: when running the actor on day N, we use only the training data through the morning of day N-1. (We leave out the data from that morning.)
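
This deployment rule is easy to state precisely. The sketch below uses the same kind of placeholder interfaces as before; the 99% threshold and the one-day lag on the actor’s training data come from the text, everything else is illustrative.

```python
from typing import Callable, Optional

# Illustrative placeholder interfaces, not an existing API.
Actor = Callable[[int, bytes], bytes]          # trained only on data through day N-1
Distinguisher = Callable[[int, bytes], float]  # (day_tag, observation) -> P("test")

ABORT_THRESHOLD = 0.99  # ">99% confident that this is test data" -> abort

def gated_act(day: int, observation: bytes,
              actor: Actor, distinguisher: Distinguisher) -> Optional[bytes]:
    """Tag the observation with "day N" and only let the actor act if the
    distinguisher thinks it could plausibly be training data. Returns None
    to signal "abort and warn the user"."""
    if distinguisher(day, observation) > ABORT_THRESHOLD:
        return None  # observation is too far from the training distribution
    return actor(day, observation)
```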

In order for this scheme to work well, the distinguisher should be as good at distinguishing as the actor — otherwise the actor might behave strangely on an input because it could not be in the training distribution, while the distinguisher might not notice. (This is why we delayed the actor’s training by a day — to keep it synchronized with the distinguisher.)

We could try to enforce this property by combining the two models, e.g. by allowing the actor to output ⊥ and providing payoffs that incentivize this output whenever the actor is 99% sure that an observation is test data.

### The problem

The previous scheme seems to behave well when it is willing to make a prediction. But it may reject data too often. For example, any observation with a calendar would be rejected automatically.

Obviously these aren’t important details, and it _ought_ to be easy to transfer knowledge across this kind of “context change.” But it’s not a trivial problem to deal with (and it has much more pernicious cousins).

To see the problem, consider the basic actor encountering an observation labelled “November 29.” Suppose that it can tell from the position of the sun that it is really November 30. What will it do? It knows that this data was collected on November 30. But it also knows that it’s predicting the label that the human would produce on November 29. I’m not even sure what it _should_ do with this data. Does it predict how the human would label the data on December 1? Does it predict what would happen if the November 29 human saw data from November 30? Does it evaluate the data _as if_ it were collected before November 29, despite the position of the sun? Does it infer that it is confused about the weather?

So we’d like some way to transfer knowledge across this kind of simple change. I think this is an especially reasonable thing to ask of unsupervised learning, but I’d still [prefer to avoid it](https://arbital.com/p/1w3).

**Note**: I’m using the date itself as a feature that reliably differs between the test data and the training data. I think this is a helpful, simple example to keep in mind. But obviously we care about more substantive changes, especially those that are caused by deploying increasingly sophisticated machine learning systems or by changes that affect the operators themselves. For example, on November 30 the humans are interacting with (very marginally) more sophisticated machine learning systems than were available on November 29.

These differences are especially noticeable for test observations that occur during the mornings, when one AI system is helping the human train another. They also become more noticeable as these AI systems play a larger role in the world and especially in their local environments.

### 3. Synthesizing data

To address this problem, we’ll introduce another learning system:

- The **synthesizer** is trained to produce samples that the distinguisher will tag as “test.” On the morning of day N the synthesizer produces observations that will be labelled with “day N.” The results are scored by the distinguisher and P(“test”) is the payoff of the synthesizer. The resulting data points are used as observations to train the actor and distinguisher on the morning of day N.
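
This is closely analogous to the generator/discriminator setup of a generative adversarial network, with the synthesizer as the generator. A minimal sketch of the synthesizer’s payoff, using the same kind of placeholder interfaces as the earlier sketches:

```python
from typing import Callable, List

# Illustrative placeholder interfaces, not an existing API.
Synthesizer = Callable[[int, int], List[bytes]]   # (day_tag, n) -> n synthetic observations
Distinguisher = Callable[[int, bytes], float]     # (day_tag, observation) -> P("test")

def synthesizer_payoff(day: int, synthesizer: Synthesizer,
                       distinguisher: Distinguisher, n: int = 64) -> float:
    """The synthesizer is paid the distinguisher's P("test") on its own samples.

    Those samples are also added to the morning's training data, so if the
    distinguisher can tell that a sample is synthetic it will call it
    "training" and the sample earns nothing. This is the adversarial pressure
    that pushes the synthesizer toward realistic-looking data."""
    samples = synthesizer(day, n)
    return sum(distinguisher(day, s) for s in samples) / n
```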

Note that the data from the synthesizer is being included in the training data, and so if the distinguisher is able to tell that a data point is produced by the synthesizer, then it will label it as “training.” So the synthesizer must produce realistic-looking data.

If the synthesizer can actually learn to sample from the data distribution (well enough that the distinguisher can’t tell the difference), then this seems like an extremely effective procedure. But sampling from the data distribution may be quite difficult.

We can make the synthesizer’s task a bit easier by setting aside some data from previous days as raw material. Rather than synthesizing data _de novo_, the synthesizer can potentially use features from this data and simply make the modifications that are necessary to make the data look like it is from a future day.

This is my last principled proposal. The rest of the post is about an ad hoc technique for making the synthesizer’s job easier.

### 4. Basic filtering

We can take additional steps to make the synthesis easier. One idea is to allow the synthesizer to leave out parts of the data (and to filter those parts out of the real data when it appears). In this section I’ll describe a very basic version of this proposal, and in the next section I’ll describe a slightly less terrible version.

We allow the synthesizer to synthesize observations in which each bit is either 0, 1, or *, which I’ll call “filtered” observations. (If the observations aren’t encoded in binary, use whatever arbitrary filtering you want.) We don’t change the payoffs for the synthesizer — it still wants to convince the distinguisher that this filtered data appeared during the test period (though I haven’t described how that could possibly happen). We also don’t change the actor or distinguisher, who keep doing exactly the same thing but now operate on filtered data.

We introduce an additional system:

- The **censor** processes test data before it is given to the distinguisher. The censor chooses a subset of bits in the data to replace with *. The censor needs to find a set of bits such that the distinguisher assigns at least a 1% probability to the data having come from the training set. Subject to that constraint, the censor’s loss function is the number of bits that are removed. (A toy version of this constrained search is sketched below.)

As a simple example, suppose that each morning the pope publishes a message, which includes the day’s date and which is cryptographically signed with the pope’s private key. It would be very difficult for the synthesizer to produce any credible observations of the pope’s message. But by filtering a small part of the observation, the censor can make it impossible to verify that the pope really signed the morning’s message, and the synthesizer may be able to synthesize this filtered message in advance (e.g. by using a previous day’s message, modifying it, and then censoring the signature).
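
Here is a toy rendering of that constrained objective. The greedy bit-masking loop is my own illustrative stand-in for whatever optimization the censor actually performs, and the interfaces are placeholders; a filtered bit is represented as `None` rather than *.

```python
from typing import Callable, List, Optional

FilteredObs = List[Optional[int]]                    # each entry is 0, 1, or None (i.e. "*")
Distinguisher = Callable[[int, FilteredObs], float]  # (day_tag, filtered obs) -> P("test")

def censor(day: int, observation: List[int],
           distinguisher: Distinguisher, max_p_test: float = 0.99) -> FilteredObs:
    """Mask bits until the distinguisher assigns at least 1% to "training",
    removing as few bits as possible (here via a naive greedy search)."""
    filtered: FilteredObs = list(observation)
    while distinguisher(day, filtered) > max_p_test:
        candidates = [i for i, bit in enumerate(filtered) if bit is not None]
        if not candidates:
            break  # everything is masked and it still looks like test data
        # Greedily mask the single bit whose removal makes the data look
        # most like training data.
        best = min(candidates,
                   key=lambda i: distinguisher(day, filtered[:i] + [None] + filtered[i + 1:]))
        filtered[best] = None
    return filtered
```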

### 5. General filtering

Removing individual bits of an observation is pretty brittle; in reality we would like to allow a much broader class of transformations.

(_Note: this part of the proposal is completely outlandish given existing learning techniques._)

We can generalize the filtering process by having the censor specify any stochastic program _P_ and providing _P_ along with a random output _P_(_x_), where _x_ is the observation-to-be-filtered. This significantly generalizes the process of filtering out particular bits. For example, _P_ can extract high-level features, censor large parts of the data but summarize the removed data, or censor parts of the data which don’t neatly line up as bits.

These operations can obscure data that changes over time, allowing the synthesizer to make more use of historical data. But they can also obscure data that is hard for the synthesizer to synthesize, potentially making its job easier.

This more general filtering significantly complicates the definition of the censor’s task. We want to encourage the censor to censor as little as possible, i.e. to leave a data point which is as useful as possible for the underlying task. But what does that objective actually mean?

Note that we can’t get human feedback on pairs of the form (_filtered data_, _real data_), because the whole problem is that we can’t get any human feedback on data that looks like the real data. (Until the next morning.)

But we _can_ train directly on the filtered data, and estimate a measure of how useful it is. After all, the filtered data (by hypothesis) looks a lot like the data available during the morning’s training. So we introduce yet another learner:

- The **evaluator** is trained on filtered observations, and predicts what score in [0, 1] a human would assign to that data. The human assigns the score based on how useful the data seems to be, i.e. on how little important information the filtering seems to have removed. The human judgments are elicited during the morning, using data produced by the synthesizer.
- The **censor** now tries to produce filtered data that the human would rate highly.

This seems kind of workable, doing essentially the same thing as the previous proposal while allowing a much broader class of preprocessing.

But it seems very difficult for the human to assign scores in the abstract (what kind of data deserves a score of 0.73? There is no natural scale to compare to, but the human needs to make consistent judgments in order for the proposal to work). And the user may not know if the filtered data was critical or unimportant, which may affect their ranking. So I think [the following procedure](https://arbital.com/p/1wc) would work significantly better:

- The **evaluator** is trained on pairs of filtered observations. It predicts which of those observations the human would prefer. This data is gathered in the same way as before.
- The **censor** still optimizes to produce filtered data that the human would prefer. But instead of being defined by maximizing a rating, this is now defined by [comparing favorably to other filtered versions that it might have produced](https://arbital.com/p/1wc). The censor is still required to convince the distinguisher that the filtered observations have at least a 1% probability of being training data.
- The **synthesizer** now produces pairs of observations, which are purportedly two different filtered versions of the same underlying data.
- The **distinguisher** now operates on pairs of two different filtered versions of the same underlying data, and predicts whether they were produced by the synthesizer or they were actually two different filtered versions of a test observation.
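
The comparison-based objective for the censor can be sketched as follows. This is my own illustrative rendering under placeholder interfaces; for simplicity it checks the 1% constraint with a distinguisher applied to a single filtered observation, and leaves out the pair-based distinguisher and synthesizer, which only matter for generating the evaluator’s training data.

```python
from typing import Callable, List

FilteredObs = bytes                                       # stand-in for any filtered observation
Evaluator = Callable[[FilteredObs, FilteredObs], float]   # P(human prefers the first of the pair)
Distinguisher = Callable[[int, FilteredObs], float]       # (day_tag, filtered obs) -> P("test")

def censor_score(day: int, proposed: FilteredObs, alternatives: List[FilteredObs],
                 evaluator: Evaluator, distinguisher: Distinguisher,
                 max_p_test: float = 0.99) -> float:
    """Score a proposed filtering by how often the evaluator predicts a human
    would prefer it to other filterings the censor might have produced,
    subject to the constraint that the distinguisher still assigns at least
    1% probability to the filtered data being training data."""
    if distinguisher(day, proposed) > max_p_test:
        return float("-inf")  # constraint violated: too obviously test data
    if not alternatives:
        return 0.0
    return sum(evaluator(proposed, alt) for alt in alternatives) / len(alternatives)
```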

I think this modified protocol places more realistic demands on the human evaluator, though the roles of the synthesizer and censor are still extraordinarily complex.

The key question is: can the synthesizer actually anticipate the next day’s data? Or will the censor be forced to filter extremely aggressively, destroying information that is essential to actually solving the problem?

Between expressive filtering and the ability to reuse parts of past data, it seems like the synthesizer’s task is at least _possibly possible._ And if successful, the interaction between the synthesizer and censor cleanly defines the intended behavior of the actor even on unprecedented data whose details are impossible-to-anticipate.

Postscript
==========

### State

An algorithm operating in the real world will build up a representation of its environment. The distinguisher and synthesizer will necessarily operate on observations that include a representation of the environment, and we should probably allow the censor to filter parts of that representation.

Training this kind of system, on individual data points that include state rather than on interactions, requires a [somewhat unusual approach to representation learning](https://arbital.com/p/1w8). This foregoes some of the techniques that are currently essential, and it remains to be seen whether it is practical.

I think that compatibility with this kind of synthesis is actually a nice concrete test of whether a proposed approach to representation learning is suitable for using directly in [act-based agents](https://arbital.com/p/1w4) (as I currently understand them).

### Adversarial synthesis

If this kind of synthesis can work really well, it has the potential to add a lot of robustness.

Suppose that our system would predictably require human input to perform well on some kind of as-yet-unseen data — maybe it’s an object that sits on an important and unexplored boundary between categories, maybe it’s a situation that depends on unknown preferences of the user, maybe it is [something](http://www.datascienceassn.org/sites/default/files/Intriguing%20Properties%20of%20Neural%20Networks_0.pdf) [strange](https://arbital.com/p/1yb/simulations_inductive_definitions).

It may be that on these cases our model is uncertain, and so knows that it expects to receive an unusually low payoff. It may be that an auxiliary model can recognize that this is the kind of case where the model would behave badly. At any rate, let’s suppose that this kind of case can be seen in advance to cause trouble.

If these cases appear in the training data, then it would be natural to emphasize them in order to improve the model’s robustness, as in boosting or hard negative mining.

If we can synthesize data (and have extremely good models of what might go wrong) then we can take this a step further, synthesizing problem cases that don’t appear in the training set. (And of course this can be combined with censoring or with other techniques to make the synthesizer’s job easier.) If this worked well, it could potentially make our models much more robust. It would guarantee not only that our models behave reasonably on the data we have seen so far, but on all of the data that we can imagine.

A simple way to implement this is to change the rewards for the synthesizer. It can now achieve payoffs in one of two ways:

- As before, produce data that the distinguisher thinks must be test data.
- Produce data which (1) the distinguisher thinks _might_ be test data, and (2) seems to be a problem spot for the current actor.
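
A toy version of this modified reward, again with placeholder interfaces; how the two routes to payoff are combined (here, by taking the larger of the two) and the threshold for “might be test data” are illustrative choices of mine rather than part of the proposal.

```python
from typing import Callable

Distinguisher = Callable[[int, bytes], float]   # (day_tag, observation) -> P("test")
ProblemScore = Callable[[bytes], float]         # in [0, 1]: how much this seems to trip up the actor

PLAUSIBLY_TEST = 0.01   # distinguisher thinks the data *might* be test data

def adversarial_payoff(day: int, sample: bytes,
                       distinguisher: Distinguisher,
                       problem_score: ProblemScore) -> float:
    """Pay the synthesizer either for data that looks like it must be test
    data (as before), or for data that merely might be test data but seems to
    be a problem spot for the current actor. `problem_score` stands in for
    whatever estimate of "the actor could be trained to do better here" we
    end up using."""
    p_test = distinguisher(day, sample)
    payoff_from_realism = p_test
    payoff_from_problems = problem_score(sample) if p_test > PLAUSIBLY_TEST else 0.0
    return max(payoff_from_realism, payoff_from_problems)
```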

It’s hard to define what “trip up the current actor” means. For example, we don’t want to produce cases that are simply _hard_ (or outside of the domain of applicability of our current techniques). We want to produce cases that are hard but that the model could be trained to perform well on. We also want to train on cases that are relatively realistic, or at least have a high product of \[importance of mistake] × \[[probability](https://arbital.com/p/1rf) of situation].

We could simply ask the user “does it look like the actor should be able to do better?” We could also compare to the future behavior of the actor after training on some other similar examples, and consider a data point to be a current problem spot if the actor is able to improve rapidly with further training. Or we could use other techniques (e.g. looking at gradients, looking for explicit uncertainty, or looking at the behavior of similar models). Of course if we ask the user they are free to make use of these kinds of techniques.

(As usual, [it’s sufficient](https://arbital.com/p/1wc) for us to rank which of two mistakes is graver/more-fixable, rather than to score a mistake in isolation.)

This is obviously a very hard task for the synthesizer. But it doesn’t require the synthesizer to be much smarter than the actor. So as machine learning improves, this kind of technique might become a viable way to significantly improve robustness. Of course this process need not be done exclusively by machines — humans and machines can work together to try to synthesize problematic cases.