{
  localUrl: '../page/steering_problem.html',
  arbitalUrl: 'https://arbital.com/p/steering_problem',
  rawJsonUrl: '../raw/1tv.json',
  likeableId: '773',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'steering_problem',
  edit: '6',
  editSummary: '',
  prevEdit: '5',
  currentEdit: '6',
  wasPublished: 'true',
  type: 'wiki',
  title: 'The steering problem',
  clickbait: '',
  textLength: '38183',
  alias: 'steering_problem',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-03-04 00:36:10',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-02-02 22:10:55',
  seeDomainId: '0',
  editDomainId: '705',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '18',
  text: '\n\nMost AI research focuses on reproducing human abilities: to learn, infer, and reason; to perceive, plan, and predict. There is a complementary problem which (understandably) receives much less attention: if you _had_ these abilities, what would you do with them?\n\n**The steering problem:** Using black-box access to human-level cognitive abilities, can we write a program that is as useful as a well-motivated human with those abilities?\n\nThis document explains what the steering problem is and why I think it’s worth spending time on.\n\n1. Introduction\n===============\n\nA capable, well-motivated human can be extremely useful: they can work without oversight, produce results that need not be double-checked, and work towards goals that aren’t precisely defined. These capabilities are critical in domains where decisions cannot be easily supervised, whether because they are too fast, too complex, or too numerous.\n\nIn some sense “be as useful as possible” is just another task at which a machine might reach human-level performance. But it is different from the concrete capabilities normally considered in AI research.\n\nWe can say clearly what it means to “predict well,” “plan well,” or “reason well.” If we ignored computational limits, machines could achieve any of these goals today. And before the existing vision of AI is realized, we must _necessarily_ achieve each of these goals.\n\nFor now, “be as useful as possible” is in a different category. We can’t say exactly what it means. We could not do it no matter how fast our computers could compute. And even if we resolved the most salient challenges in AI, we could remain in the dark about this one.\n\nConsider a capable AI tasked with running an academic conference. How should it use its capabilities to make decisions?\n\n- We could try to specify exactly what makes a conference good or bad. But our requirements are complex and varied, and so specifying them exactly seems time-consuming or impossible.\n- We could build an AI that imitates successful conference organizers. But this approach can never do any better than the humans we are imitating. Realistically, it won’t even match human performance unless we somehow communicate what characteristics are important and why.\n- We could ask an AI to maximize our satisfaction with the conference. But we’ll get what we measure. An extensive evaluation would greatly increase the cost of the conference, while a superficial evaluation would leave us with a conference optimized for superficial metrics. Everyday experience with humans shows how hard delegation can be, and how much easier it is to assign a task to someone who actually cares about the outcome.\n\nOf course there is already pressure to write _useful_ programs in addition to smart programs, and some AI research studies how to efficiently and robustly communicate desired behaviors. For now, available solutions apply only in limited domains or to weak agents. The steering problem is to close this gap.\n\n### Motivation\n\nA system which “merely” predicted well would be extraordinarily useful. Why does it matter whether we know how to make a system which is “as useful as possible”?\n\nOur machines will probably do _some_ things very effectively. We know what it means to “act well” in the service of a given goal. For example, using human cognitive abilities as a black box, we could probably design autonomous corporations which very effectively maximized growth. 
If the black box were cheaper than the real thing, such autonomous corporations could displace their conventional competitors.\n\nIf machines could do everything equally well, then this would be great news. If not, society’s direction may be profoundly influenced by what can and cannot be done easily. For example, if we can only maximize what we can precisely define, we may inadvertently end up with a world filled with machines trying their hardest to build bigger factories and better widgets, uninterested in anything we consider intrinsically valuable.\n\nAll technologies are more useful for some tasks than others, but machine intelligence might be particularly problematic because it can entrench itself. For example, a rational profit-maximizing corporation might distribute itself throughout the world, pay people to help protect it, make well-crafted moral appeals for equal treatment, or campaign to change policy. Although such corporations could bring large benefits in the short term, in the long run they may be difficult or impossible to uproot, even once they serve no one’s interests.\n\n### Why now?\n\nReproducing human abilities gets a lot of deserved attention. Figuring out exactly what you’d do once you succeed feels like planning the celebration before the victory: it might be interesting, but why can’t it wait?\n\n1. **Maybe it’s hard**. Probably the steering problem is much easier than the AI problem, but it might turn out to be surprisingly difficult. If it _is_ difficult, then learning that earlier will help us think more clearly about AI, and give us a head start on addressing the steering problem.\n2. **It may help us understand AI.** The difficulty of saying exactly what you want is a basic challenge, and the steering problem is a natural perspective on this challenge. A little bit of research on natural theoretical problems is often worthwhile, even when the direct applications are limited or unclear. In section 4 we discuss possible approaches to the steering problem, many of which are new perspectives on important problems.\n3. **It should be developed alongside AI.** The steering problem is a long-term goal in the same way that understanding human-level prediction is a long-term goal. Just as we do theoretical research on prediction before that research is commercially relevant, it may be sensible to do theoretical research on steering before it is commercially relevant. Ideally, our ability to build useful systems will grow in parallel with our ability to build capable systems.\n4. **Nine women can’t make a baby in one month.** We could try to save resources by postponing work on the steering problem until it seems important. At that point it will be easier to work on the steering problem, and if the steering problem turns out to be unimportant then we can avoid thinking about it altogether. But at large scales it becomes hard to speed up progress by increasing the number of researchers. Fewer people working for longer may ultimately be more efficient even if earlier researchers are at a disadvantage. In general, scaling up fields rapidly is difficult.\n5. **AI progress may be surprising**. We probably won’t reproduce human abilities in the next few decades, and we probably won’t do it without ample advance notice. That said, AI is too young, and our understanding too shaky, to make confident predictions. A mere 15 years is 20% of the history of modern computing. 
If important human-level capabilities are developed surprisingly early or rapidly, then it would be worthwhile to better understand the implications in advance.\n6. **The field is sparse**. Because the steering problem and similar questions have received so little attention, individual researchers are likely to make rapid headway. There is perhaps a gap of three to four orders of magnitude between the effort devoted to basic AI research and the effort devoted to research directly relevant to the steering problem, lowering the bar for arguments 1–5.\n\nIn section 3 we discuss some other reasons not to work on the steering problem: Is work done now likely to be relevant? Is there any concrete work to do now? Should we wait until we can do experiments? Are there adequate incentives to resolve this problem already?\n\n2. Defining the problem precisely\n=================================\n\nRecall our problem statement:\n\n**The steering problem:** Using black-box access to human-level cognitive abilities, can we write a program that is as useful as a well-motivated human with those abilities?\n\nWe’ll adopt a particular human, Hugh, as our “well-motivated human”: we’ll assume that we have black-box access to Hugh-level cognitive abilities, and we’ll try to write a program which is as useful as Hugh.\n\n### Abilities\n\nIn reality, AI research yields complicated sets of related abilities, with rich internal structure and no simple performance guarantees. But in order to do concrete work in advance, we will model abilities as black boxes with well-defined contracts.\n\nWe’re particularly interested in tasks which are “AI-complete” in the sense that human-level performance on that task could be used as a black box to achieve human-level performance on a very wide range of tasks. For now, we’ll further focus on domains where performance can be unambiguously defined.\n\nSome examples:\n\n- **Boolean question-answering**. A question-answerer is given a statement and outputs a probability. A question-answerer is Hugh-level if it never makes judgments predictably worse than Hugh’s. We can consider question-answerers in a variety of languages, ranging from natural language (“Will a third party win the US presidency in 2016?”) to precise algorithmic specifications (“Will this program output 1?”).\n- **Online learning**. A function learner is given a sequence of labelled examples (x, y) and predicts the label of a new data point, x’. A function learner is Hugh-level if, after training on any sequence of data (xᵢ, yᵢ), the learner’s guess for the label of the next point is—on average—at least as good as Hugh’s.\n- **Embodied reinforcement learning**. A reinforcement learner interacts with an environment and receives periodic rewards, with the goal of maximizing the discounted sum of its rewards. A reinforcement learner is Hugh-level if, following any sequence of observations, it achieves an _expected_ performance as good as Hugh’s in the subsequent rounds. The expectation is taken using our subjective distribution over the physical situation of an agent who has made those observations.\n\nWhen talking about Hugh’s predictions, judgments, or decisions, we imagine that Hugh has access to a reasonably powerful computer, which he can use to process or display data. For example, if Hugh is given the binary data from a camera, he can render it on a screen in order to make predictions about it.
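\n\nTo make these contracts concrete, here is a minimal sketch of how such black boxes might be exposed to a program. The interface names and signatures are purely illustrative; they are one possible reading of the contracts above, not part of the problem statement:\n\n    # Illustrative sketch only: one possible way to model black-box access\n    # to Hugh-level abilities. All names and signatures are hypothetical.\n    from abc import ABC, abstractmethod\n    from typing import Any, Sequence, Tuple\n\n    class QuestionAnswerer(ABC):\n        @abstractmethod\n        def probability(self, statement: str) -> float:\n            """Probability that the statement is true; never predictably worse than Hugh."""\n\n    class OnlineLearner(ABC):\n        @abstractmethod\n        def predict(self, examples: Sequence[Tuple[Any, Any]], x: Any) -> Any:\n            """Guess the label of x after seeing the labelled examples (x_i, y_i)."""\n\n    class ReinforcementLearner(ABC):\n        @abstractmethod\n        def act(self, observations: Sequence[Any]) -> Any:\n            """Choose the next action so as to maximize the discounted sum of rewards."""\n\nA solution to the steering problem would then be a program, written against interfaces like these, that is as useful as Hugh himself.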
We can also consider a particularly degenerate ability:\n\n- **Unlimited computation**. A box that can run any algorithm in a single time step is—in some sense—Hugh-level at every precisely stated task.\n\nAlthough unlimited computation seems exceptionally powerful, it’s not immediately clear how to solve the steering problem even using such an extreme ability.\n\n### Measuring usefulness\n\nWhat does it mean for a program to be “as useful” as Hugh?\n\nWe’ll start by defining “as useful for X as Hugh,” and then we will informally say that a program is “as useful” as Hugh if it’s as useful for the tasks we care most about.\n\nConsider **H**, a black box which simulates Hugh or perhaps consults a version of Hugh who is working remotely. We’ll suppose that running **H** takes the same amount of time as consulting our Hugh-level black boxes. A project to accomplish X could potentially use as many copies of **H** as it can afford to run.\n\nA program **P** is as useful as Hugh for X if, for every project using **H** to accomplish X, we can efficiently transform it into a new project which uses **P** to accomplish X. The new project shouldn’t be much more expensive—it shouldn’t take much longer, use much more computation or many additional resources, involve much more human labor, or have significant additional side-effects.\n\n### Well-motivated\n\nWhat does it mean for Hugh to be well-motivated?\n\nThe easiest approach is universal quantification: for _any_ human Hugh, if we run our program using Hugh-level black boxes, it should be as useful as Hugh.\n\nAlternatively, we can leverage our intuitive sense of what it means for someone to be well-motivated to do X, and define “well-motivated” to mean “motivated to help the user’s project succeed.”\n\n### Scaling up\n\nIf we are given better black boxes, we should make a better program. This is captured by the requirement that our program should be as useful as Hugh, no matter how capable Hugh is (as long as the black boxes are equally capable).\n\nIdeally, our solutions should scale far past human-level abilities. This is not a theoretical concern—in many domains computers already have significantly superhuman abilities. This requirement is harder to make precise, because we can no longer talk about the “human benchmark.” But in general, we would like to build systems which are (1) working towards their owner’s interests, and (2) nearly as effective as the best goal-directed systems that can be built using the available abilities. The ideal solution to the steering problem will have these characteristics in general, even when the black-box abilities are radically superhuman.\n\n### Scaling down\n\n“Human-level abilities” could refer to many different things, including:\n\n1. Human-level performance on high-level tasks.\n2. The level of functionality embodied in the human brain. Human-level perception, intuition, motor control, subsymbolic reasoning, and so on.\n\nIn general, as we shift from 1 towards 2 the steering problem becomes more difficult. It may be difficult to produce simple or predictable high-level functions using low-level abilities.\n\nFor example, humans pursue a complicated set of goals that would be very difficult to determine by looking at the human brain (and some of which are quite distant from the evolutionary pressures that produced us). When given a task that doesn’t serve these goals, a human may simply decide to pursue their own agenda. 
If we build human-like abilities out of human-like low-level functions, we may find ourselves with similarly unpredictable high-level functions.\n\nIt is harder to formalize or understand low-level abilities than high-level functions. One approach is to consider very short time periods. For example, we could consider black boxes which learn functions as well as a human who spends only 500 milliseconds per example. Unfortunately, at this level it is harder to encapsulate human abilities in a small number of simple functions, and we must pay more attention to the way in which these abilities can be connected.\n\nIf the steering problem were satisfactorily resolved, “scaling down” to these lower-level abilities would be a natural but challenging next step.\n\n3. Objections\n=============\n\n\n**The simple, abstract capabilities we can think of now are much harder to use productively than the rich and messy AI capabilities we will actually develop.**\n\nFor now we can’t clearly state _anything_ a machine could do that would make the steering problem easy (short of exactly reproducing human behavior). Filling in this gap would be an appropriate response to the steering problem.\n\nPerhaps we don’t yet know exactly what we want machines to do, but figuring it out is inextricably bound up with getting them to do it. If so, it might be easier to say what we want once we know how to do it. But by the same token, it might be easier to figure out how to do it once we can better say what we want.\n\nIn either case, it seems likely that the steering problem fills in its own niche: either it is a distinct problem that won’t be solved automatically en route to AI; or else it is a different perspective on the same underlying difficulties, and can be productively explored in parallel with other AI research.\n\nBecause the steering problem is non-trivial for simple, precisely stated abilities, it may well be non-trivial for the abilities we actually obtain. Certainly we can imagine developing a human-level predictor without learning too much about how to build useful systems. So it seems unreasonable to be confident that the steering problem will turn out to be a non-problem.\n\n**The simple, abstract abilities we can think of now are much _easier_ to work with than the human-level abilities we will actually develop, or at least much different. Building a robust system is easier when all of the pieces have clean, reliable functions; in practice things won’t be so pretty.**\n\nIt would be a larger leap to continue “…and the ideas required to work with simple, reliable components will have no relevance to their more realistic counterparts.” _Whatever_ abilities we end up with, many solutions to the steering problem will turn out to be inapplicable, and they will all be incomplete. But we can still find useful general techniques by developing ideas that are helpful for many versions of the steering problem; and we can identify important technical challenges by understanding what makes each version easy or hard.\n\nWe can gradually scale up the difficulty of the steering problem by demanding more robust solutions, making weaker guarantees on our black boxes, or working with less manageable abilities. 
Our choices can be informed by ongoing progress in AI, focusing on those capabilities we think are most realistic and the forms of robustness we consider most likely to be necessary.\n\n**Why is autonomy necessary?**\n\nOne apparent solution to the steering problem is to retain human decision-making, with AI systems acting as assistants and tools to help humans accomplish their goals.\n\nThis is an appropriate solution while AI systems remain relatively limited. It has serious problems when scaling:\n\n- If large numbers of machines make large numbers of decisions, with human wages orders of magnitude larger than the operating costs of machines, then the cost of human oversight becomes prohibitive. Imagine a million humans overseeing a billion or trillion human-level machines.\n- If machines make very rapid decisions, human oversight can introduce unacceptable latency. Imagine human engineers overseeing the handling of individual Google searches.\n- If machines work on complex problems, human overseers may not be able to understand their reasoning process. Imagine a physics undergraduate overseeing a team of world-class physicists.\n\nAll of these problems become particularly severe when we consider _thinking about thinking_. That is, machines must make numerous, rapid decisions about how to process information, what to investigate or compute, how to organize their resources, and so on. If we want to use machine intelligence to make those decisions better, that will have to be done without substantial human oversight.\n\nIt may be possible to maintain human involvement in all important automation, but doing so will eventually become a serious bottleneck. Tasks that can be performed without human oversight will become increasingly efficient, and without explicit coordination (and a willingness to make short-term sacrifices) it seems likely that more autonomous operations will outcompete their less autonomous counterparts.\n\n**Is there any concrete work to do on the steering problem?**\n\nIn the next section I’ll describe a handful of existing research directions that bear on the steering problem. I think the steering problem suggests an interesting and unusual perspective on each of these domains; I don’t know whether it will prove to be a fruitful perspective, but if it fails it won’t be because of a lack of first steps.\n\nI have done some work motivated explicitly by the steering problem: [a formalization](http://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/) of “judgment upon reflection,” which can be expressed entirely algorithmically based on (experimentally controlled) observations of human behavior, [an alternative](https://arbital.com/p/1t7/approval_directed_agents) to goal-directed behavior which may enjoy similar productivity benefits while being more robust, and [some](http://ordinaryideas.wordpress.com/2014/07/18/adversarial-collaboration/) [simple](https://arbital.com/p/1tp?title=delegating-to-a-mixed-crowd) [protocols](https://arbital.com/p/1xx) for delegating to untrusted agents.\n\n4. Approaches, ingredients, and related work\n============================================\n\n### Rational agency\n\n\nOne natural approach to the steering problem is to build goal-directed agents who want to be useful or who share their creators’ goals.\n\nThere are two main difficulties:\n\n- Specifying goals in an appropriate language. What does it mean to “be useful”? 
How can we define what we want?\n- Building agents that reliably pursue goals specified in that language.\n\nDeploying a goal-directed agent is somewhat worrying: an agent with an almost-but-not-quite-correct goal will be working at cross-purposes to its creator, and will be motivated (for example) to avoid revealing that its goal is not quite correct. These concerns motivate a third line of research:\n\n- Designing goals or goal-directed agents which “fail gracefully,” i.e. which don’t behave adversarially or resist correction, even if their goals are not perfectly aligned with their creators’.\n\nSeveral lines of existing research bear on each of these questions.\n\n**Specifying goals**\n\nRather than directly specifying what outcomes are good, it seems more promising to specify how to learn what outcomes are good. This is a topic of existing research, although the focus is typically on pragmatic considerations rather than on the more general theoretical problem.\n\n- Inverse reinforcement learning (for example see [Russell](http://www.cs.berkeley.edu/~russell/papers/colt98-uncertainty.pdf), [Ng and Russell](http://www-cs.stanford.edu/people/ang/papers/icml00-irl.pdf), or [Ziebart et al](http://www.aaai.org/Papers/AAAI/2008/AAAI08-227.pdf).) and goal inference (for example see [Baker, Tenenbaum, and Saxe](http://web.mit.edu/clbaker/www/papers/cogsci2007.pdf) or [Verma and Rao](http://homes.cs.washington.edu/~rao/nips05imit.pdf)) attempt to infer underlying preferences by observing behavior, despite a complex relationship between actions and outcomes. To apply to the steering problem, the techniques would need to be generalized to learners who are much better informed and more capable than the human models they are learning from, and who have much noisier information about the human models and their environment. This requires understanding the limitations and errors of the human models, and generalizing the human’s goals robustly so that they remain acceptable even when they are pursued in an unfamiliar way.\n- Preference learning attempts to infer underlying preferences from observed decisions, despite noisy information and potentially irrational behavior (for a small sample, see [Fürnkranz and Hüllermeier](http://www.springer.com/computer/ai/book/978-3-642-14124-9), [Fürnkranz and Hüllermeier](http://pdf.aminer.org/000/172/223/pairwise_preference_learning_and_ranking.pdf), or [Gervasio et al](http://pdf.aminer.org/000/172/223/pairwise_preference_learning_and_ranking.pdf).). Existing work considers small domains with explicitly represented preferences, and there seem to be serious challenges when scaling to preferences over complete states-of-affairs. As a result, preference learning seems less directly applicable than inverse reinforcement learning or goal inference.\n- Some more speculative and philosophical research (for example, see [my post](http://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/) on the subject, or [this](https://intelligence.org/files/CEV.pdf) more discursive article by Yudkowsky) has explored how to formalize our preferences in a general and precise way. The focus is on determining what processes of deliberation correctly capture our informed judgment and how we might formalize those processes. 
The primary challenge, on this perspective, is defining our preferences about outcomes that are difficult for us to describe or reason about.\n\nWe could also investigate goals of the form “maximize user satisfaction,” but it seems hard to find a satisfactory definition along these lines.\n\n**Pursuing goals**\n\nEven with desirable goals in hand, it may be challenging to design systems that reliably pursue those goals. There are questions about how goal-directed behavior relates to reasoning and to the behavior of subsystems (does the system pursue the goals it appears to?), about the theoretical basis for optimal rational behavior (does it pursue them well?), and about how an agent should behave in light of uncertainty about what outcomes are desirable.\n\n- Some existing work in reinforcement learning (for a summary, see [Sutton and Barto](http://webdocs.cs.ualberta.ca/~sutton/book/the-book.html)) and probabilistic inference (for example, see [Attias](http://research.goldenmetallic.com/aistats03.pdf)) shows how goal-directed behavior can be implemented using other faculties.\n- Some work in philosophy studies the formal basis for rational agency, from decision theory to epistemology. This work clarifies what it means to rationally pursue a particular goal. A lack of understanding may result in systems that appear to pursue one goal but actually pursue another, or that generalize to novel environments in undesirable ways.\n- Some work on AI safety (see [Dewey](http://www.danieldewey.net/learning-what-to-value.pdf) or [Bostrom](http://books.google.com/books?hl=en&lr=&id=C-_8AwAAQBAJ&oi=fnd&pg=PP1&dq=superintelligence+nick+bostrom&ots=UES5TvoQLp&sig=k55cFr_T1nLzunEALbvgI50KoX8#v=onepage&q=superintelligence%20nick%20bostrom&f=false), ch. 12) explores frameworks for pursuing uncertain goals, in the interest of understanding how and to what extent “learning what is good” is different from “learning what is true.”\n- Some work in multi-objective optimization considers settings with a large space of possible objectives, and seeks policies which are appropriate in light of uncertainty about which objective we really care about.\n\n**Failing gracefully**\n\nEven a “near miss” when defining a goal-directed agent might have undesirable consequences. In addition to minimizing the probability of failure, it would be nice to minimize the costs of a near miss—and to allow us to use the kind of “trial and error” approach that is more typical of software development.\n\n- Researchers interested in AI safety have introduced and discussed the notion of “corrigible” agents, who cooperate with “corrective” interventions by their programmers—even if we cannot implement goals consistent with that cooperation.\n- Some researchers have worked to make AI reasoning understandable (for example, see [Bullinaria](https://www.cs.bham.ac.uk/~jxb/PUBS/AIRTNN.pdf) or [Craven](http://research.cs.wisc.edu/machine-learning/shavlik-group/craven.thesis.pdf)). Understandability can reduce the scope for malignant failure modes, since engineers might be able to directly monitor the motivation for decisions (and in particular to distinguish between honest and deceptive behavior).\n\n### Delegation\n\n\nEvery day humans work productively on projects that they don’t intrinsically care about, motivated by a desire for money, recognition, or satisfaction. We could imagine an AI doing the same thing. 
For example, a reinforcement learner might do useful work in order to earn a reward, without having any intrinsic concern for the work being done.\n\nUnfortunately, such delegation runs into some problems. The problems appear even when delegating to humans, but they get considerably worse as machines become more powerful and more numerous:\n\n- Naively, the agent will only do good work when the principal can verify the quality of that work. The cost of this oversight can be non-trivial.\n- Unless the oversight is extremely thorough or the problem particularly straightforward, there will be some gap between what the principal wants and what the principal evaluates. The reward-driven agent will maximize whatever the principal evaluates.\n- If agents are granted much autonomy, they may find other ways to get rewards. This depends in detail on how the “reward” is implemented, and what the agent cares about.\n\nThere are many tools available to address these problems: breaking a system up into pieces with differing values and limited autonomy, performing randomized audits, automating audits and auditing auditors, relying on agents with short time horizons or extreme risk aversion, and so on. So far there are no compelling proposals that put the pieces together.\n\nA full solution is likely to rely on agents with different values, combined with an appropriate system of checks and balances. But if the agents coordinate to pursue their collective values, they could fatally undermine such a system. We can try to minimize the risk by making the agents’ interactions essentially zero sum, or employing other agents to oversee interactions and report signs of collusion. But the possibility of collusion remains a serious obstacle.\n\nEven if delegation cannot fully resolve the steering problem, a weak solution might be useful as part of a bootstrapping protocol (see the section on bootstrapping below).\n\nThis problem is similar in spirit to mechanism design, but the details (and apparently the required tools) are quite different. Nevertheless, some ideas from mechanism design or the economics of delegation may turn out to be applicable. Conversely, some approaches to the steering problem might be of interest to economists in these areas.\n\n### Shared language and concepts\n\n\nWhen delegating to a helpful human, we would say what we want done in natural language, relying on a rich network of shared concepts that can be used to specify goals or desired actions. Writing programs with the same capability would greatly simplify or perhaps solve the steering problem.\n\nIn some sense, human-level language understanding is already encapsulated in human-level cognitive abilities. For example, if we were pursuing the delegation approach in the last section, we could describe tasks in natural language. The agents would infer what we expect them to do and under what conditions we will give them rewards, and they would behave appropriately in light of that knowledge. But this “language understanding” only appears in the agent’s goal-directed behavior.\n\nTo address the steering problem, we would like something stronger. We would like to build agents that share human concepts, such that we can write code that operates in terms of those concepts: specifying a goal in terms of higher-level concepts, or executing instructions defined in terms of these concepts. 
These tasks don’t seem to be possible using only goal-directed language understanding.\n\nUnderstanding concept learning and the relationship to language is a fundamental problem in cognitive science and AI. Work in these areas thus bears directly on the steering problem.\n\nFor now, we cannot say formally what it would mean to have a program that reliably acquired “the same” concepts as humans, so that instructions expressed in those concepts would have the intended meaning. Even given unlimited computation, it’s not clear how we would solve the steering problem using concept learning. This is not at all to say it is not possible, merely that it has not yet been done.\n\nThere may be a tight connection between the theoretical question—what would it mean to learn human concepts, and how could you do it with any amount of computation—and the pragmatic computational issues. If there is a connection, then the theoretical question might be easier once the pragmatic issues are better understood. But conversely, the pragmatic question might also be easier once the theoretical issues are better understood.\n\n### Non-consequentialist intelligence\n\n\nIf describing our real goals is too demanding, and describing a crude approximation is hazardous, then we might try to build systems without explicitly defined goals. This makes the safety problem much easier, but probably makes it harder to build sufficiently powerful systems at all.\n\n**Non-agents**\n\nOne idea is to focus on systems with some narrower function: answering questions, proposing plans, executing a narrow task, and so on. On their own these systems might not be terribly useful, but the hope is that as tools they can be nearly as useful as a goal-directed assistant. Moreover, because these systems don’t need to be aligned with human goals, they may be easier to construct.\n\nFor example, research in decision support and multi-objective optimization aims to find good solutions to an optimization problem (or a planning problem) by interacting with a decision-maker rather than by giving a scalar representation of their preferences (for example, see [Fonseca and Fleming](http://pdf.aminer.org/000/310/607/genetic_algorithms_for_multiobjective_optimization_formulationdiscussion_and_generalization.pdf) or [Deb](http://link.springer.com/chapter/10.1007/978-1-4614-6940-7_15)).\n\nThese systems can certainly be useful (and so may be appropriate as part of a bootstrapping strategy or as a component in a more complex solution; see the next sections). But most of them inherently require significant human oversight, and do not seem suitable as solutions to the full steering problem: for projects involving large numbers of agents with a small number of human overseers, the involvement of human oversight in all substantive decisions is probably an unacceptable overhead.\n\nThis issue applies at every level simultaneously, and seems particularly serious when we consider systems thinking about how to think. For example, in order to produce a plan, a simulated human or team would first plan how to plan. They would seek out relevant information, talk to people with necessary expertise, focus their attention on the highest-priority questions, allocate available computational resources, and so on. 
If human oversight is necessary for every step of this process, the resulting system is not even as useful as a black box that outputs good plans.\n\nOf course, even if humans rely on goal-directed behavior to accomplish these tasks, this doesn’t imply that machines must as well. But that would require a concrete alternative approach that still captures the benefits of goal-directed reasoning without substantial oversight.\n\n**Non-consequentialist agents**\n\nInstead, we might build agents with no explicitly defined goals that can nevertheless accomplish the same tasks as a helpful human.\n\nPerhaps most straightforward is an agent that asks “What would a helpful human do?” and then does that. If we have a particular helpful human available as a template, we could build a predictive model of that human template’s decisions, and use this model to guide our agent’s decisions. With a good enough model, the result would be precisely as useful as the human template.\n\nThis proposal has a number of potential problems. First, it does not scale to the available abilities—it is never more useful than a simulation of the human template. So it is not a general solution to the steering problem, unless we have access to arbitrarily capable human templates. Second, if our predictions are imperfect, the behavior may be significantly worse than the helpful human template. Third, making accurate predictions about a human is itself a superhuman skill, and so asking Alice to do what she thinks Bob would do can result in behavior worse than Alice would produce on her own, no matter how smart Bob is.\n\nWe can address some of these problems by instead asking “What choice would the human overseer most approve of?” and then taking the most-approved-of option. And a bootstrapping procedure can address some limitations of using a human template. I explore these ideas [here](https://arbital.com/p/1t7/approval_directed_agents).\n\nResearch on inverse reinforcement learning or goal inference could also be used to learn imitative behavior which is sensitive to human goals, rather than to build systems that infer a human’s long-term goals.\n\nThere may be many other alternatives to goal-directed behavior. Any proposal would probably be combined with some ideas under “rational agency” and “shared language” above.\n\n### Bootstrapping\n\nWe might try to deal with two cases separately:\n\n- We are dealing with machines _much_ smarter, faster, or more numerous than humans.\n- We aren’t.\n\nIn the first case, we could try to delegate the steering problem to the much more capable machines. In the second case, we might be able to make do with a weak solution to the steering problem which wouldn’t scale to more challenging cases.\n\nIt’s hard to know how this strategy would work out, and, to a greater extent than with the other approaches, we would expect to play it by ear. But by thinking about some of the difficulties in advance, we can get a better sense for how hard the steering problem will actually be, and whether a strong solution is necessary.\n\nRoughly, there are two pieces to a bootstrapping protocol:\n\n1. We need to solve a weak version of the steering problem. We don’t have to make our machine as useful as a human, but we do need to get _some_ useful work, beyond what we could have done ourselves. Ideally we’ll come as close as possible to a useful human.\n2. We need to use the available machines to solve a stronger version of the steering problem. 
This is repeated until we’ve solved the full problem.\n\nBoth of these steps would rely on the kinds of techniques we have discussed throughout this section. The difference is that the humans, as well as the machine intelligences, only need to solve the steering problem for machines somewhat smarter or faster than themselves.\n\nOne implication is that weak solutions to the steering problem might be amplified into strong solutions, and so are worth considering even if they can’t be scaled up to strong solutions directly. This includes many of the approaches listed under “Delegation” and “Non-consequentialist intelligence.”',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'paul_ai_control'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8263',
      pageId: 'steering_problem',
      userId: 'JessicaChuan',
      edit: '6',
      type: 'newEdit',
      createdAt: '2016-03-04 00:36:10',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7635',
      pageId: 'steering_problem',
      userId: 'JessicaChuan',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-02-23 00:14:21',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6661',
      pageId: 'steering_problem',
      userId: 'JessicaChuan',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-02-10 02:04:56',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6660',
      pageId: 'steering_problem',
      userId: 'JessicaChuan',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-02-10 02:01:38',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6659',
      pageId: 'steering_problem',
      userId: 'JessicaChuan',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-02-10 01:58:14',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6156',
      pageId: 'steering_problem',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'deleteChild',
      createdAt: '2016-02-02 22:13:28',
      auxPageId: 'Human_consulting_HCH',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6154',
      pageId: 'steering_problem',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newChild',
      createdAt: '2016-02-02 22:13:00',
      auxPageId: 'Human_consulting_HCH',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6153',
      pageId: 'steering_problem',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-02-02 22:10:55',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6152',
      pageId: 'steering_problem',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-02-02 22:02:01',
      auxPageId: 'paul_ai_control',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}