Modeling AI control with humans
===============================

I’ve been trying to build an aligned AI out of reward-maximizing modules. A successful scheme could also be applied to a simple toy problem involving greedy humans, which might be more intuitive.

This is similar to toy problems I’ve written about before ([1](https://arbital.com/p/1yn), [2](https://arbital.com/p/1vy/human_arguments_ai_control), [3](https://arbital.com/p/1tv?title=the-steering-problem), [4](https://arbital.com/p/1v6?title=problem-safe-ai-from-episodic-rl)), but I think this is a cleaner and stronger statement.

### The setup

Suppose that you have access to a human-simulation, and can run as many simulations as available computing hardware allows.

You can give the simulation input over a binary channel, and receive its output over a binary channel. At any time you can give the simulation a score in [0, 1] ∪ {⊥}. At any time you can take a snapshot of the current simulation or stop it, and you can restart simulations from any saved snapshot.

The simulated human will, to the best of their abilities, maximize:

$\mathbb{E}[\text{the next score I receive} \mid \text{I next receive a non-}\perp\text{ score}]$

The question is: using simulations of this kind, can we build an effective composite system aligned with our interests?

If we want to be concrete/colorful, we can give some detail to the simulated human. It’s just a normal person (other than their unusual goal of score-maximization), living in a world much like ours. The communication channel is implemented by a special computer that the chosen human has access to. After each reward signal, the human is given some time to rest, recover, and enjoy themselves, and then we restart the state of the rest of the world.
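To make the interaction model concrete, here is a minimal sketch of the interface in Python. The class and method names (`HumanSimulation`, `send`, `receive`, `give_score`, `snapshot`, `restart`) are illustrative assumptions of mine, not part of the setup above; `None` stands in for the ⊥ ("no score") signal.

```python
from abc import ABC, abstractmethod
from typing import Optional


class Snapshot:
    """Opaque handle to a saved simulation state."""


class HumanSimulation(ABC):
    """Hypothetical interface to one human-simulation.

    The simulated human acts to maximize the expected value of the next
    non-None (non-⊥) score they receive.
    """

    @abstractmethod
    def send(self, bits: bytes) -> None:
        """Give the simulation input over the binary channel."""

    @abstractmethod
    def receive(self) -> bytes:
        """Read the simulation's output from the binary channel."""

    @abstractmethod
    def give_score(self, score: Optional[float]) -> None:
        """Assign a score in [0, 1]; None plays the role of ⊥ (no score)."""

    @abstractmethod
    def snapshot(self) -> Snapshot:
        """Save the current state of this simulation."""

    @abstractmethod
    def stop(self) -> None:
        """Halt this simulation."""


def restart(snap: Snapshot) -> HumanSimulation:
    """Start a fresh simulation from any saved snapshot (left abstract here)."""
    raise NotImplementedError
```

Anything we build out of these simulations is then a program written against this kind of interface, free to start, score, snapshot, and restart copies however it likes.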
### The benchmark

To evaluate our performance, we compare to a hypothetical benchmark in which the simulated human is instead motivated to be maximally helpful to whoever is currently on the other end of the communication channel.

In the benchmark scenario, simulations cost the same amount and are identical in every way except for their motivations. We have the same ability to stop, start, and snapshot simulations.

Our goal is to build a system, out of our collection of reward-driven simulations, that is nearly as useful to us as this benchmark. Ideally the gap in usefulness would be a small additive constant. But I’d also be happy with a small multiplicative constant (e.g. it takes 10% longer to get the same tasks done), or even a moderate multiplicative constant (e.g. 3x longer). And I’d be interested in (though not happy with) a large multiplicative constant (e.g. 1000x longer).

### Fitting AI control into this framework

Most of the schemes I’ve written about over the last year fit into this simple model, and I think that it’s a useful intuition pump for understanding these schemes. For example:

- [Approval-directed agents](https://arbital.com/p/1t7)
- [Meeting halfway](https://arbital.com/p/1vp)
- [Synthesizing training data](https://arbital.com/p/1w9)
- [Symbolic reasoning](https://arbital.com/p/1tq?title=learning-and-logic)
- [Active learning with opaque predictors](https://arbital.com/p/1tq?title=learning-and-logic)

I think that a solution to this problem would be an extremely promising step for AI control. Conversely, as long as we can’t solve this problem, it is hard to argue that the AI control problem is totally resolved, since we could in fact find ourselves with capabilities similar to the greedy-human-simulation.

### Examining the analogy

There are two natural questions:

- If we had a working approach to AI control, would it solve this toy problem?
- If we could solve this toy problem, would we have a working approach to AI control?

In the next section I’ll explore a few key disanalogies that I think are worth having in mind, but which are easy to capture by modifying the setup. But first I want to compare existing AI techniques to this basic model.

- Supervised learning fits into this model very well. During training we give the human a sequence of examples, with scores determined by the loss of their classification. After training, we take a snapshot of the simulation’s state and then use that snapshot indefinitely.
- Online learning also fits into the model; in this case we simply continue training instead of using a snapshot.
- Episodic reinforcement learning fits into the model very well. During an episode we provide the reward as an ordinary observation; at the end of the episode we compute the average discounted reward and provide it as the score (see the sketch after this list).
- Generative models aren’t analogous to a single human-simulation, because it is not clear how to compute the reward. But many successful techniques for generative modeling, such as variational autoencoders or adversarial networks, are made out of a small number of components which do fit in this framework.
- Similarly, unsupervised learning doesn’t fit directly into this framework because it is not clear what objective is being maximized. But most approaches that maximize an objective would fit, since e.g. we can ask the human to implement an autoencoder (by having two copies of them interact) or to make predictions about upcoming data.
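As one concrete illustration of the episodic-RL mapping above, here is a rough sketch that drives a single episode through the hypothetical `HumanSimulation` interface from earlier. The gym-style `env.reset()`/`env.step()` convention, the `encode`/`decode` helpers, and the assumption that per-step rewards lie in [0, 1] are all choices made for this example, not part of the original description.

```python
import pickle


def encode(obs, reward):
    """Placeholder serialization of (observation, reward) for the binary channel."""
    return pickle.dumps((obs, reward))


def decode(bits):
    """Placeholder deserialization of the simulation's chosen action."""
    return pickle.loads(bits)


def run_episode(sim: "HumanSimulation", env, gamma: float = 0.99) -> None:
    """Drive one RL episode through the score channel.

    `env` is assumed to follow a reset()/step() convention and to emit
    per-step rewards in [0, 1]; both are assumptions of this sketch.
    """
    observation = env.reset()
    rewards = []
    done = False
    while not done:
        # During the episode, rewards are passed in-band as ordinary observations.
        last_reward = rewards[-1] if rewards else 0.0
        sim.send(encode(observation, last_reward))
        action = decode(sim.receive())
        observation, reward, done, _ = env.step(action)
        rewards.append(reward)

    # Only at the end of the episode do we give a score: the average
    # discounted reward, which stays in [0, 1] under the assumptions above.
    discounted = [r * gamma ** t for t, r in enumerate(rewards)]
    score = sum(discounted) / max(len(discounted), 1)
    sim.give_score(score)
```

The essential feature is that the score arrives only once, after the episode ends, while in-episode rewards are just more observations.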
Some other tasks don’t fit well into this framework, e.g. if much of the work is spent on specifying models or producing domain-specific algorithms that encode both how to solve the task and what the intended behavior is. For example, many approaches to robotics don’t fit into this framework very well.

Overall, I feel like most existing learning algorithms fit into this framework, and so we should expect a robust solution to the AI control problem to also apply to the toy problem.

One objection is that over time researchers are thinking of increasingly clever ways to put together individual reward-maximizing modules (e.g. variational autoencoders). It may be that solving the AI control problem requires new approaches that haven’t yet been invented.

I don’t think this is a strong objection. There may be clever tricks that we don’t yet know for using reward-maximizing components to do cool things. But that just makes the toy problem richer and more interesting, and makes it more likely that progress on the toy problem will be interesting to AI researchers more broadly.

The reverse question — if we solve the toy problem, will it apply to the real AI control problem? — is a lot murkier. I would consider it a really good first step, and I think that it would immediately lead to a reasonable candidate solution to the AI control problem, since we could simply replace each human simulation with a reinforcement learner. But we’d need to do more work to get that solution to actually work in practice, and there is a lot of room for new problems that don’t appear in the simple idealization.

Disanalogies
============

There are some very important disanalogies between the toy problem and the real AI control problem. Fortunately, it is pretty easy to modify the toy problem to fix them.

### AI may be subhuman in many respects

In a solution to the toy problem, we may be tempted to rely on the fact that our simulated human is able to do anything that a human can do.

But a realistic AI system may be much worse than humans at some tasks. This could be true even for AI systems that are powerful enough to radically transform the world.

Particularly troubling is the possibility that an AI system might be especially bad at understanding humans, or human values. Humans may be especially good at some kinds of reasoning-about-humans, since we can e.g. put ourselves in each other’s shoes and engage a bunch of dedicated cognitive machinery produced by evolution.

An AI system that was very good at computational tasks but very bad at reasoning about humans might pose distinctive dangers. Similarly, other big mismatches could lead to other kinds of trouble.

An ideal solution to the toy problem would continue to work if the human simulation is significantly subhuman in some respects. It might rest on _some_ minimal set of capabilities, but we would like that minimum to be as minimal as possible. We should be especially wary of abilities, like our better understanding of humans, that seem plausibly human-specific.

### AI may be superhuman in many respects

On the flip side, our AI systems may eventually be radically better than humans at many tasks. Ideally a toy problem would continue to work for _very_ powerful human-simulations. To illustrate the point colorfully, rather than imagining a single human interacting with us, we could imagine an [entire civilization of geniuses thinking for years during each second of simulation](http://lesswrong.com/lw/qk/that_alien_message/).

An ideal solution to the toy problem would be robust to increases in the power of the simulated humans, ideally to arbitrarily large increases.

### There may be problems during training

An AI system may make mistakes during training.
Being a “good” learner means that those errors will be quickly corrected, not that they will never occur. We would like to build systems that are robust to these errors.

Most of these mistakes will be innocuous, but theoretically they could be quite bad. For example, amongst our space of possible models we might imagine that there is one which tries to solve the problem and one which tries to survive (kind of like life on Earth). The latter model may instrumentally give good answers to questions, as long as that’s the best way to survive, but might suddenly behave strangely at an inopportune moment.

I don’t think this particular scenario is likely, but I do think that we should [strongly prefer schemes that are robust to adversarial errors](https://medium.com/ai-control/handling-adversarial-errors-4ed438af8bdd#.kqwyejubi). A scheme that can’t handle this kind of error has a significant chance of suffering from other serious problems. Being robust to adversarial errors essentially requires exerting “positive pressure” on our models to do exactly what we want, rather than trusting that if they have mostly done what we want so far they will continue to behave as intended.

In the context of the toy problem, we might imagine that there are two different humans in the simulation, and that in each step we interact with one of them at random. If one of the two has been doing much better so far, we gradually shift our probabilities towards consulting that one exclusively. One of these humans is really trying to maximize their own reward, but their evil twin has an ulterior motive (like influencing our world in some way), which instrumentally leads them to achieve high reward so that they can keep playing the game.

An ideal solution to the toy problem would continue to work in the presence of this evil twin.
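To make the two-simulation setup concrete, here is a minimal sketch of one possible routing rule: consult one of the two simulated humans at random, shifting probability toward whichever has been scoring better so far. The `TwinSelector` name and the exponential-weights update are my own illustrative choices; the toy problem does not prescribe a particular rule.

```python
import math
import random


class TwinSelector:
    """Consult one of two simulations at random, gradually shifting
    probability toward whichever has earned higher scores so far.

    The exponential-weights update is one possible choice, used here
    only to make the selection scheme concrete.
    """

    def __init__(self, learning_rate: float = 0.1):
        self.learning_rate = learning_rate
        self.log_weights = [0.0, 0.0]  # one weight per simulated human

    def probabilities(self):
        # Softmax over the accumulated log-weights.
        m = max(self.log_weights)
        exps = [math.exp(w - m) for w in self.log_weights]
        total = sum(exps)
        return [e / total for e in exps]

    def choose(self) -> int:
        """Pick which of the two simulations to consult for this step."""
        return random.choices([0, 1], weights=self.probabilities())[0]

    def update(self, chosen: int, score: float) -> None:
        """Shift probability toward simulations that receive high scores."""
        assert 0.0 <= score <= 1.0
        self.log_weights[chosen] += self.learning_rate * score
```

The worry described above is exactly that a rule like this consults only the historical record: the evil twin can accumulate weight by cooperating and still defect at an inopportune moment, which is why a robust solution needs to exert positive pressure toward the intended behavior rather than trusting past performance.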