{
  localUrl: '../page/consequentialist.html',
  arbitalUrl: 'https://arbital.com/p/consequentialist',
  rawJsonUrl: '../raw/9h.json',
  likeableId: '2451',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '1',
  dislikeCount: '0',
  likeScore: '1',
  individualLikes: [
    'EliezerYudkowsky'
  ],
  pageId: 'consequentialist',
  edit: '10',
  editSummary: '',
  prevEdit: '9',
  currentEdit: '10',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Consequentialist cognition',
  clickbait: 'The cognitive ability to foresee the consequences of actions, prefer some outcomes to others, and output actions leading to the preferred outcomes.',
  textLength: '13430',
  alias: 'consequentialist',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-06-11 05:04:41',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-07-01 21:52:19',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '195',
  text: '[summary(Gloss):  "Consequentialism" is picking out immediate actions on the basis of which future outcomes you predict will result.\n\nE.g: Going to the airport, not because you really like airports, but because you predict that if you go to the airport now you'll be in Oxford tomorrow.  Or throwing a ball in the direction that your cerebellum predicts will lead to the future outcome of a soda can being knocked off the stump.\n\nAn extremely basic and ubiquitous idiom of cognition.]\n\n[summary:  "Consequentialism" is the name for the backward step from preferring future outcomes to selecting current actions.\n\nE.g:  You don't go to the airport because you really like airports; you go to the airport so that, in the future, you'll be in Oxford.  (If this sounds extremely basic and obvious, it's meant to be.)   An air conditioner isn't designed by liking metal that joins together at right angles, it's designed such that the future consequence of running the air conditioner will be cold air.\n\nConsequentialism requires:\n\n- Being able to predict or guess the future outcomes of different actions or policies;\n- Having a way to order outcomes, ranking them from lowest to highest;\n- Searching out actions that are predicted to lead to high-ranking futures;\n- Outputting those actions.\n\nOne might say that humans are empirically more powerful than mice because we are better consequentialists.  If we want to eat, we can envision a spear and throw it at prey.  If we want the future consequence of a well-lit room, we can envision a solar power panel.\n\nMany of the issues in [2v AI alignment] and the [2l safety of advanced agents] arise when a machine intelligence starts to be a consequentialist across particular interesting domains.]\n\nConsequentialist reasoning selects policies on the basis of their predicted consequences - it does action $X$ because $X$ is forecasted to lead to preferred outcome $Y$.  Whenever we reason that an agent which prefers outcome $Y$ over $Y'$ will therefore do $X$ instead of $X',$ we're implicitly assuming that the agent has the cognitive ability to do consequentialism at least about $X$s and $Y$s.  
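\n\nAs a toy sketch - with hypothetical names and a one-line stand-in for real forecasting - the backward step from preferring future outcomes to selecting current actions might look like this:\n\n
```python\n# Toy sketch of consequentialist action selection (hypothetical names):\n# forecast each action's consequence, rank the outcomes by preference,\n# and output the action with the best forecast.\ndef choose_action(state, actions, predict_outcome, utility):\n    best_action, best_value = None, float("-inf")\n    for action in actions:\n        outcome = predict_outcome(state, action)  # forecast the consequence of this action\n        value = utility(outcome)                  # preference ordering over outcomes\n        if value > best_value:\n            best_action, best_value = action, value\n    return best_action\n\n# You don't go to the airport because you like airports; "go to airport" wins\n# because its forecasted outcome ("in Oxford tomorrow") ranks higher.\nactions = ["go to airport", "stay home"]\npredict = lambda state, action: "in Oxford tomorrow" if action == "go to airport" else "at home tomorrow"\nutility = lambda outcome: 1.0 if outcome == "in Oxford tomorrow" else 0.0\nassert choose_action("now", actions, predict, utility) == "go to airport"\n```
\n\nEverything substantive is hidden inside `predict_outcome` and `utility`; the consequentialist idiom itself is just this forecast-rank-select loop.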
Put differently, consequentialist reasoning is means-end reasoning: it selects means on the basis of their predicted ends plus a preference over ends.\n\nE.g:  When we [2vl infer] that a [10h paperclip maximizer] would try to [3ng improve its own cognitive abilities] given means to do so, the background assumptions include:\n\n- That the paperclip maximizer can *forecast* the consequences of the policies "self-improve" and "don't try to self-improve";\n- That the forecasted consequences are respectively "more paperclips eventually" and "fewer paperclips eventually";\n- That the paperclip maximizer preference-orders outcomes on the basis of how many paperclips they contain;\n- That the paperclip maximizer outputs the immediate action it predicts will lead to more future paperclips.\n\n(Technically, since the forecasts of our actions' consequences will usually be uncertain, a coherent agent needs a [1fw utility function over outcomes] and not just a preference ordering over outcomes.)\n\nThe related idea of "backward chaining" is one particular way of solving the cognitive problems of consequentialism: start from a desired outcome/event/future, figure out what intermediate events are likely to have the consequence of bringing about that outcome, and repeat the question until the chain arrives back at a particular plan/policy/action.\n\nMany narrow AI algorithms are consequentialists over narrow domains.  A chess program that searches far ahead in the game tree is a consequentialist; it outputs chess moves based on the expected result of those chess moves and your replies to them, into the distant future of the board.\n\nWe can see one of the critical aspects of human intelligence as [cross_consequentialism cross-domain consequentialism].  Rather than only forecasting consequences within the boundaries of a narrow domain, we can trace chains of events that leap from one domain to another.  Making a chess move wins a chess game that wins a chess tournament that wins prize money that can be used to rent a car that can drive to the supermarket to get milk.  An Artificial General Intelligence that could learn many domains, and engage in consequentialist reasoning that leaped across those domains, would be a [2c sufficiently advanced agent] to be interesting from most perspectives on interestingness.  It would start to be a consequentialist about the real world.\n\n# Pseudoconsequentialism\n\nSome systems are [-pseudoconsequentialist] - in some ways they *behave as if* they were outputting actions on the basis of those actions leading to particular futures, without using an explicit cognitive model and explicit forecasts.\n\nFor example, natural selection has a lot of the power of a cross-domain consequentialist; it can design whole organisms around the consequence of reproduction (or rather, inclusive genetic fitness).  It's a fair approximation to say that spiders weave webs *because* the webs will catch prey that the spider can eat.  Natural selection doesn't actually have a mind or an explicit model of the world; but millions of years of selecting DNA strands that did in fact previously construct an organism that reproduced gives an effect *sort of* like outputting an organism design on the basis of its future consequences.  (Although if the environment changes, the difference suddenly becomes clear: natural selection doesn't immediately catch on when humans start using birth control.  
Our DNA goes on having been selected on the basis of the *old* future of the ancestral environment, not the *new* future of the actual world.)\n\nSimilarly, a reinforcement-learning system learning to play Pong might not actually have an explicit model of "What happens if I move the paddle here?" - it might just be re-executing policies that had the consequence of winning last time.  But there's still a future-to-present connection, a pseudo-backwards-causation, based on the Pong environment remaining fairly constant over time, so that we can sort of regard the Pong player's moves as happening *because* it will win the Pong game.\n\n# Ubiquity of consequentialism\n\nConsequentialism is an extremely basic idiom of optimization:\n\n- You don't go to the airport because you really like airports; you go to the airport so that, in the future, you'll be in Oxford.\n- An air conditioner is an artifact selected from possibility space such that the future consequence of running the air conditioner will be cold air.\n- A butterfly, by virtue of its DNA having been repeatedly selected to *have previously* brought about the past consequence of replication, will, under stable environmental conditions, bring about the future consequence of replication.\n- A rat that has previously learned a maze is executing a policy that previously had the *consequence* of reaching the reward pellets at the end:  A series of turns or a behavioral rule that was neurally reinforced in virtue of the future conditions to which it led the last time it was executed.  This policy will, given a stable maze, have the same consequence next time.\n- Faced with a superior chessplayer, we enter a state of [9g Vingean uncertainty] in which we are more sure about the final consequence of the chessplayer's moves - that it wins the game - than we are about any of the particular moves made.  To put it another way, the main abstract fact we know about the chessplayer's next move is that the consequence of the move will be winning.\n- As a chessplayer becomes strongly superhuman, its play becomes [6s instrumentally efficient] in the sense that *no* abstract description of the moves takes precedence over the consequence of the move.  A weak computer chessplayer might be described in terms like "it likes to move its pawns" or "it tries to grab control of the center", but as the chess play improves past the human level, we can no longer detect any divergence from "it makes the moves that will win the game later" that we can describe in terms like "it tries to control the center (whether or not that's really the winning move)".  In other words, as a chessplayer becomes more powerful, we stop being able to give any description of its moves that takes priority over our belief that the moves have a certain consequence.\n\nAnything that Aristotle would have considered as having a "final cause", or teleological explanation, without being entirely wrong about that, is something we can see through the lens of cognitive consequentialism or pseudoconsequentialism.  A plan, a design, a reinforced behavior, or selected genes:  Most of the complex order on Earth derives from one or more of these.
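\n\nTo make the contrast concrete, here is a toy sketch (hypothetical names, not anyone's actual algorithm) of the pseudoconsequentialist pattern exemplified by the maze-running rat and the Pong player above: nothing in it forecasts the future; it just reinforces whichever policy previously led to a good outcome.\n\n
```python\nimport random\n\n# Toy pseudoconsequentialist learner (hypothetical names): no model of\n# "what happens if...", just reinforcement of previously rewarded policies.\ndef reinforce(scores, policy, reward, lr=0.1):\n    # Nudge the executed policy's score toward the reward it just produced.\n    scores[policy] += lr * (reward - scores[policy])\n\ndef pick_policy(scores, epsilon=0.1):\n    # Mostly re-execute the highest-scoring policy; occasionally explore.\n    if random.random() < epsilon:\n        return random.choice(list(scores))\n    return max(scores, key=scores.get)\n\nscores = {"turn left at the fork": 0.0, "turn right at the fork": 0.0}\nmaze_reward = lambda policy: 1.0 if policy == "turn left at the fork" else 0.0  # a stable maze\nfor _ in range(50):\n    policy = pick_policy(scores)\n    reinforce(scores, policy, maze_reward(policy))\n```
\n\nBecause nothing in this loop predicts anything, its apparent goal-directedness leans entirely on the environment staying the way it was when the policy was reinforced - which is exactly where pseudoconsequentialism comes apart from explicit consequentialism once the environment changes.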
\n\n# Interaction with advanced safety\n\nConsequentialism or pseudoconsequentialism, over various domains, is an [2c advanced agent property] that is a key requisite or key threshold in several issues of AI alignment and advanced safety:\n\n- You get [47 unforeseen maxima] because the AI connected up an action you didn't think of with a future state it wanted.\n- It seems [6r foreseeable] that some issues will be [48 patch-resistant] because of the [-42] effect: after one road to the future is blocked off, the next-best road to that future is often a very similar one that wasn't blocked.\n- Reasoning about [-2vl] generally relies on at least pseudoconsequentialism - they're strategies that *lead up to* or would be *expected to lead up to* improved achievement of other future goals.\n   - This means that, by default, lots and lots of the worrisome or problematic convergent strategies like "resist being shut off" and "build subagents" and "deceive the programmers" arise from some degree of consequentialism, combined with some degree of [3nf grasping the relevant domains].\n\nAbove all:  The human ability to think of a future and plan ways to get there, or think of a desired result and engineer technologies to achieve it, is *the* source of humans having enough cognitive capability to be dangerous.  Most of the magnitude of the impact of an AI, such that we'd want to align it in the first place, would come in a certain sense from that AI being a sufficiently good consequentialist or solving the same cognitive problems that consequentialists solve.\n\n# Subverting consequentialism?\n\nSince consequentialism seems tied up in so many issues, some of the proposals for making alignment easier have in some way tried to retreat from, limit, or subvert consequentialism.  E.g:\n\n- [6x Oracles] are meant to "answer questions" rather than output actions that lead to particular goals.\n- [44z Imitation-based] agents are meant to imitate the behavior of a reference human as perfectly as possible, rather than selecting actions on the basis of their consequences.\n\nBut since consequentialism is so close to the heart of why an AI would be [6y sufficiently useful] in the first place, getting rid of it tends not to be that straightforward.  E.g:\n\n- Many proposals for [6y what to actually do] with Oracles involve asking them to plan things, with humans then executing the plans.\n- An AI that [44z imitates] a human doing consequentialism must be [1v0 representing consequentialism inside itself somewhere].\n\nSince 'consequentialism' or 'linking up actions to consequences' or 'figuring out how to get to a consequence' is so close to what would make advanced AIs useful in the first place, it shouldn't be surprising if some attempts to subvert consequentialism in the name of safety run squarely into [42k an unresolvable safety-usefulness tradeoff].\n\nAnother concern is that consequentialism may to some extent be a convergent or default outcome of optimizing anything hard enough.  
E.g., although natural selection is a pseudoconsequentialist process, it optimized for reproductive capacity so hard that [2rc it eventually spit out some powerful organisms that were explicit cognitive consequentialists] (aka humans).\n\nWe might similarly worry that optimizing any internal aspect of a machine intelligence hard enough would start to embed consequentialism somewhere - policies/designs/answers selected from a space general enough that "do consequentialist reasoning" is embedded in some of the most effective answers.\n\nOr perhaps a machine intelligence might need to be consequentialist in some internal aspects in order to be [6y smart enough to do sufficiently useful things] - maybe you just can't get a sufficiently advanced machine intelligence, sufficiently early, unless it is, e.g., choosing on a consequentialist basis which thoughts to think about, or engaging in consequentialist engineering of its internal elements.\n\nIn the same way that [18t expected utility] is the only coherent way of making certain choices, or in the same way that natural selection optimizing hard enough on reproduction started spitting out explicit cognitive consequentialists, we might worry that consequentialism is in some sense central enough that it will be hard to subvert - hard enough that we can't easily get rid of [10g instrumental convergence] on [2vl problematic strategies] just by getting rid of the consequentialism while preserving the AI's usefulness.\n\nThis doesn't say that the research avenue of subverting consequentialism is automatically doomed to be fruitless.  It does suggest that this is a deeper, more difficult, and stranger challenge than, "Oh, well then, just build an AI with all the consequentialist aspects taken out."',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-17 09:17:28',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky',
    'OliviaSchaefer',
    'AlexeiAndreev'
  ],
  childIds: [],
  parentIds: [
    'advanced_agent'
  ],
  commentIds: [
    '47g'
  ],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12394',
      pageId: 'consequentialist',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteTag',
      createdAt: '2016-06-11 05:04:52',
      auxPageId: 'stub_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12392',
      pageId: 'consequentialist',
      userId: 'EliezerYudkowsky',
      edit: '10',
      type: 'newEdit',
      createdAt: '2016-06-11 05:04:41',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12391',
      pageId: 'consequentialist',
      userId: 'EliezerYudkowsky',
      edit: '9',
      type: 'newEdit',
      createdAt: '2016-06-11 05:01:10',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12390',
      pageId: 'consequentialist',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newEdit',
      createdAt: '2016-06-11 04:59:55',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12242',
      pageId: 'consequentialist',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2016-06-09 23:55:39',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3923',
      pageId: 'consequentialist',
      userId: 'AlexeiAndreev',
      edit: '0',
      type: 'newAlias',
      createdAt: '2015-12-16 16:35:24',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3924',
      pageId: 'consequentialist',
      userId: 'AlexeiAndreev',
      edit: '6',
      type: 'newEdit',
      createdAt: '2015-12-16 16:35:24',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3655',
      pageId: 'consequentialist',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2015-12-04 20:06:39',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3654',
      pageId: 'consequentialist',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2015-12-04 19:51:49',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1100',
      pageId: 'consequentialist',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newUsedAsTag',
      createdAt: '2015-10-28 03:47:09',
      auxPageId: 'stub_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '145',
      pageId: 'consequentialist',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'advanced_agent',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1682',
      pageId: 'consequentialist',
      userId: 'OliviaSchaefer',
      edit: '3',
      type: 'newEdit',
      createdAt: '2015-10-13 22:53:35',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1681',
      pageId: 'consequentialist',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-07-01 22:11:45',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1680',
      pageId: 'consequentialist',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-07-01 21:52:19',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}