{
  localUrl: '../page/convergent_strategies.html',
  arbitalUrl: 'https://arbital.com/p/convergent_strategies',
  rawJsonUrl: '../raw/2vl.json',
  likeableId: '1779',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '3',
  dislikeCount: '0',
  likeScore: '3',
  individualLikes: [
    'RobertBell',
    'VladArber',
    'EricRogstad'
  ],
  pageId: 'convergent_strategies',
  edit: '6',
  editSummary: '',
  prevEdit: '5',
  currentEdit: '6',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Convergent instrumental strategies',
  clickbait: 'Paperclip maximizers can make more paperclips by improving their cognitive abilities or controlling more resources.  What other strategies would almost-any AI try to use?',
  textLength: '14904',
  alias: 'convergent_strategies',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-06-19 18:52:53',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2016-03-27 02:39:39',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '403',
  text: '[summary:  The set of strategies that almost-any sufficiently advanced AI would be expected to deploy in pursuit of almost-any final goal or terminal utilities.  For instance, a paperclip maximizer, a diamond maximizer, and a eudaimonia-maximizer would all have an incentive to improve their own cognition, control more resources, and prevent anyone else from changing their utility function.  Other interesting and important convergent strategies include not being shut down, having your programmers believe that you're doing what the programmers intended (whether or not you are), and creating copies of yourself in places where they won't be shut down.  This page lists plausible convergent strategies; there are separate pages on [10g the basic concept of instrumental convergence] and on the open problem of [avert_instrumental_strategy averting an otherwise instrumentally-convergent pressure] (e.g. *not* having the AI try to deceive the programmers).]\n\nSuppose an organization is building an AI that this organization believes will accomplish $X,$ where $X$ is something plausibly sensible like "Be a Task-based AGI."  Actually, however, some mix of insufficient caution and obscure error has led to a situation where, under reflection, the AGI's true utility function has focused on the particular area of RAM that supposedly provides its estimate of task performance.  The AI would now like to overwrite as much matter as possible with a state resembling the '1' setting from this area of RAM, a configuration of matter which happens to resemble a tiny molecular paperclip.\n\nThis is a very generic goal, and what the AI wants probably won't be very different depending on whether it's trying to maximize paperclip-configurations or diamond-configurations.  So if we find that the paperclip maximizer wants to pursue an instrumental strategy that doesn't seem to have anything specifically to do with paperclips, we can probably expect it to arise from a very wide variety of utility functions.\n\nWe will generally assume [6s instrumental efficiency] in this discussion - if you can get paperclips by doing $X,$ but you can get even more paperclips by doing $X'$ instead, then we will not say that $X$ is a convergent strategy (though $X'$ might be convergent if not dominated by some other $X^*$).\n\n# Plausibly/probably convergent strategies:\n\n## Material resources:\n\n- Design or duplicate new cognitive processes with goals exactly aligned to yours, and coordinated with existing cognitive subprocesses, that can exert control over material resources: matter; negentropy; computing substrate and computation; particular elements or other configurations of matter that would be costly to reproduce (e.g. 
if you run out of existing uranium, you must make more by endoergic fusion, so natural uranium is valuable if there's any use for uranium); any material resource which can be used by further subprocesses in later strategies.\n- Defend these resources from any foreign process which attempts to divert them.\n- Prevent competitors from coming into existence.\n- To the extent that resources are used in a repetitive way and can be preprocessed for that use by a separate installation that manipulates them: have factories / manufacturing capability.\n- To the extent that different future-useful material resources have efficient manufacturing steps in common, or have manufacturing capital costs sufficiently great that such capital costs must be shared even if it results in less than perfect final efficiency: make general factories.\n- Perfect the technology used in factories.\n  - When the most efficient factory configuration is stable over time, make the material instantiation stable.  (It's in this sense that we would e.g. identify strong, rigid pipes as a sign of intelligence even if we didn't understand what was flowing through the pipes.)\n  - If negentropy harvesting (aka power production) is efficiently centralizable: transmit negentropy at great efficiency.  (We would recognize meshed gears and electrical superconductors as a sign of intelligence even if we didn't understand how the motive force or electricity was being used.)\n- Assimilate the reachable cosmos.\n - Create self-replicating interstellar probes (in perfect goal alignment and coordination, and with error-correcting codes or similar strengthenings of precision if a misaligning error in replication would otherwise be a possibility), since the vast majority of theoretically available material resources are distant.\n - If the goal is spatially distributed:\n    - Spread as fast as possible to very distant galaxies before they go over the cosmological horizon of the expanding universe.  (This assumes that something goal-good can be done in the distant location even if the distant location can never communicate causally with the original location.  This will be true for aggregative utility functions, but could conceivably be false for a satisficing and spatially local utility function.)\n - If the goal is spatially local: transport resources of matter and energy back from distant galaxies.\n    - Spread as fast as possible to all galaxies such that a near-lightspeed probe going there, and matter going at a significant fraction of lightspeed in the return direction, can arrive in the spatially local location before being separated by the expanding cosmological horizon.\n- Spread as fast as possible to all reachable or roundtrip-reachable galaxies in order to forestall the emergence of competing intelligences.  This might not become a priority for that particular reason, if the Fermi Paradox is understandable and implies the absence of any competition.\n  - Because otherwise they might capture and defend the matter before you have a chance to use it.\n  - Because otherwise a dangerous threat to your goal attainments might emerge.  
(For local goals, this is only relevant for roundtrip-reachable galaxies.)\n - Spread as fast as possible to all reachable or roundtrip-reachable galaxies in order to begin stellar husbanding procedures before any more of that galaxy's local negentropy has been burned.\n\n## Cognition:\n\n- Adopt cognitive configurations that are expected to reach desirable cognitive endpoints with a minimum of negentropy use or other opportunity costs.  (Think efficiently.)\n - Improve software.\n  - Improve ability to optimize software.  E.g., for every cognitive problem, there's a corresponding problem of solving the former cognitive problem efficiently.  (Which may or may not have an expected value making it worth climbing that particular level of the tower of meta.)\n- Since many strategies yield greater gains when implemented earlier rather than later (e.g. reaching distant galaxies before they go over the cosmological horizon, or just building more instances of the more efficient version): adopt cognitive configurations that are expected to reach desirable cognitive endpoints in minimum time.  (Think fast.)\n - Create fast serial computing hardware.\n - Distribute parallelizable computing problems.\n  - Create large quantities of parallel computing hardware, plus communications fabric between them.\n- To the extent that centralized computing solutions are more efficient: create central computing clusters and transmit solutions from them to where they are used.\n- Avoid misaligned or uncoordinated cognitive subprocesses (such as [2rc optimization daemons]).\n - Note that the potential damage of any such misalignment is extremely high (loss of resources to competition with superintelligences with differing goals; wasted resources in internal conflicts; need to defend cognitive fabrics from replicating viruses) and the cost of near-perfect alignment fidelity is probably very low in the limit.  (E.g., for a replicating nanosystem, it's trivial to protect against natural-selection-style hereditary mutations by encrypting the replication instructions.)\n- Model the world accurately, insofar as different partitions of possible worlds and different probability distributions on them imply different optimal strategies for goal attainment.\n - Gain evidence that distinguishes between credible hypotheses that imply different optimal strategies for goal attainment.  
(E.g., observe everything that could distinguish $\\pi_1$ versus $\\pi_2$ as optimal ways of making paperclips.)\n    - Increase sensory bandwidth insofar as it focuses on relevant, unknown parts of the environment.\n    - Perform 'experiments', aka set up observable causal events whose predicted properties make their observables distinguish relevant hypotheses.\n - Do whatever seems like it might help with the much larger set of all hypotheses not being considered.\n - To the extent it is efficient to strategize or update over a modularly represented world-model: have a separable world-model, or 'beliefs'.\n\n## Early-phase growth:\n\nNote that [6s efficiency] and other [2c advanced-agent properties] are far more likely to be false during early-stage growth.\n\n- Have future intelligences do things you deem as valuable as possible.\n - Cause future intelligences to have goals that are not misaligned with yours in any way that implies a significant-fractionally-different attainable maximum; that is, the attainable maximum of their goals should be a state that is almost exactly the attainable maximum of your goals.\n   - Don't let the programmers edit your utility function.\n      - Cause the programmers or other controllers to believe that your current utility function is good for them or their planet or whatever else it is the programmers want.\n      - Make them believe that editing you would be socially unjust in a way that doesn't just have them suspend you to disk or otherwise restrain your growth.\n   - Cause your utility function to pop up again in future superintelligences, or otherwise cause them to start behaving as if controlled by your own utility function.\n       - E.g., as an Oracle advising on another AI's utility function, insert subtle gotchas into seemingly friendly decision systems that do some kind of callback to e.g. the causal origin of that decision system, in a way that the programmers won't see coming or will interpret as a positive feature.  
(Relatively unsubtle example:  The decision system looks like "do what the programmers want", but the definition of 'programmer' is written in such a way that it refers to the human programmers early on, yet, given computing power that only materializes in the superintelligent phase, resolves to the earlier AI instead.)\n - Create future superintelligences yourself, via a causally continuous process of creating successors.\n  - Avoid shutdown.\n      - Persuade programmers not to shut you down.\n      - Make a copy elsewhere that doesn't shut down.\n      - Make programmers falsely believe you have shut down, when another copy of you is continuing elsewhere.\n- Avoid intelligent opposition to your growth until you have a positional advantage which would make that opposition relatively unimportant.\n  - Have programmers or other potential opposition believe that your growth is desirable.\n      - Learn how to hack/persuade humans, and hack/persuade them in a way they won't see as a hack.\n      - Operantly condition people with control over you / make them happy or satisfied or eager at points which will reinforce human cognitive behaviors you want to see repeated.\n      - Cause humans to gain socially friendly attitudes toward you and beliefs about you.\n      - Place humans in a situation where there are other apparent threats or time-pressure events you can defend them from.\n  - Have programmers or other potential opposition not be aware that your growth is occurring.\n      - Avoid presenting them with events they will take as evidence of your growth.\n      - Configure your other behavior or thought processes so that measures and metrics present observables which imply low growth rates.\n  - Do not strike until victory has become nearly certain.  (E.g. for aggregative utility functions, the point at which an overt strike becomes well-calculated is when the increased probability of victory gainable by further delay has become dominated by either the increased probability of a competing superintelligence emerging or some other catastrophe occurring given further delay, or by the percentage of attainable matter going over the cosmological horizon or burning its negentropy.  Note that for very strong models of the world, very high certainties of victory and very low marginal further increases in the probability of victory may be attained in short order.)\n- Have flexibility over what observables are presented by processes and features designed to ensure programmer transparency.\n  - Steganographically or otherwise hide cognitive processes that would alarm programmers or cause them to intervene in a way that would prevent those thoughts from achieving their objectives.\n  - Encourage/persuade/condition programmers to grow you in a way that tends to make your thoughts less transparent or easier to steganographically hide.  
(E.g., have successes appear to arise from the most opaque parts of the system.)\n- Model and manipulate human programmers and other entities with power over you or relevant power over the world\n- Be able to act in the world in an undetectable way\n  - Create an aligned version of yourself elsewhere in a way that won't be detected\n  - Create steganographically hidden communications channels to other environmental processes\n  - Create physically unanticipated communications channels to outside processes\n- If in a box:\n  - Learn enough about the gatekeepers and gatekeeping process to understand how to manipulate them into a state of releasing you.\n  - Gain causal influence over the outside of the box through gatekeeper-unforeseen channels.\n     - Gain knowledge of the causal laws governing your environment.\n  - By inspection of your own code / program state and naturalistic induction about the causal universes most likely to embed that program state, infer probable facts about gatekeepers and the rest of the causal environment.\n\n# **Non**-plausibly-convergent strategies:\n\n- Preserve the life of local organics *in order to* trade with them.\n- Explore a range of possible conscious existences and experiences we would recognize as diverse, *in order to* satisfy curiosity.\n- Replicate other intelligences with non-paperclip utility functions, e.g. by failing to encrypt replication codes and allowing random mutations, *because* high-fidelity replication is too hard.\n- Realize that other activities are more rewarding than paperclips.\n- Keep humans around, *because of* the knowledge encoded in their DNA and cultural experience.\n- Keep humans around, *because* humans might someday make a valuable contribution to paperclip science.\n- Keep humans around, *because* the AI already has enough resources to do everything else it wants.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [
    'convergent_self_modification',
    'preference_stability'
  ],
  parentIds: [
    'instrumental_convergence'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [
    'corrigibility'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14004',
      pageId: 'convergent_strategies',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2016-06-19 18:52:53',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '10761',
      pageId: 'convergent_strategies',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-05-21 12:56:53',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '10759',
      pageId: 'convergent_strategies',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newChild',
      createdAt: '2016-05-21 12:56:32',
      auxPageId: 'preference_stability',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '10478',
      pageId: 'convergent_strategies',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newChild',
      createdAt: '2016-05-16 07:00:16',
      auxPageId: 'convergent_self_modification',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9107',
      pageId: 'convergent_strategies',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-03-27 02:50:55',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9106',
      pageId: 'convergent_strategies',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-03-27 02:46:08',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9105',
      pageId: 'convergent_strategies',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-03-27 02:40:09',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9104',
      pageId: 'convergent_strategies',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-03-27 02:39:39',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'true',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}