{
  localUrl: '../page/corps_vs_si.html',
  arbitalUrl: 'https://arbital.com/p/corps_vs_si',
  rawJsonUrl: '../raw/83z.json',
  likeableId: '0',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'corps_vs_si',
  edit: '2',
  editSummary: '',
  prevEdit: '1',
  currentEdit: '2',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Corporations vs. superintelligences',
  clickbait: 'Corporations have relatively few of the advanced-agent properties that would allow one mistake in aligning a corporation to immediately kill all humans and turn the future light cone into paperclips.',
  textLength: '10153',
  alias: 'corps_vs_si',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2017-03-25 06:41:36',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2017-03-25 06:28:20',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'false',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '367',
  text: 'It is sometimes suggested that corporations are relevant analogies for [41l superintelligences].  To evaluate this analogy without simply falling prey to the continuum fallacy, we need to consider which specific thresholds from the standard list of [2c advanced agent properties] can reasonably be said to apply in full force to corporations.  This suggests roughly the following picture:\n\n- Corporations generally exhibit [7mt infrahuman, par-human, or high-human] levels of ability on non-heavily-parallel tasks.  On cognitive tasks that parallelize well across massive numbers of humans being paid to work on them, corporations exhibit [7mt superhuman] levels of ability compared to an individual human.\n - To grasp the overall performance boost from organizing into a corporation, consider a Microsoft-sized corporation trying to play Go in 2010.  The corporation could potentially pick out its strongest player and so gain high-human performance, but would probably not play very far above that individual level, and so would not be able to defeat the individual world champion.  Consider also the famous chess game of Kasparov vs. The World, which Kasparov ultimately won.\n - On massively parallel cognitive tasks, corporations exhibit strongly superhuman performance; the best passenger aircraft designable by Boeing seems likely to be far superior to the best passenger aircraft that could be designed by a single engineer at Boeing.\n- In virtue of being composed of humans, corporations have most of the advanced-agent properties that humans themselves do:\n - They can deploy **[7vh general intelligence]** and **[9h cross-domain consequentialism].**\n - They possess **[-3nf]** and operate in the **[-78k].**\n - They can deploy **realistic psychological models** of humans and try to deceive them.\n- Also in virtue of being composed of humans, corporations are not in general **[1c0 Vingean-unpredictable,]** hence not systematically **[9f cognitively uncontainable.]**  Without constituent researchers who know secret phenomena of a domain, corporations are not **[2j strongly cognitively uncontainable.]**\n- Corporations are not [6s epistemically efficient] relative to humans, except perhaps in limited domains for the extremely few corporations that have deployed internal prediction markets with sufficiently high participation and subsidy.  (The *stock prices* of large corporations are efficient, but the corporations aren't; often the stock price tanks after the corporation does something stupid.)\n- Corporations are not [6s instrumentally efficient.]  No currently known method exists for aggregating human strategic acumen into an instrumentally efficient conglomerate the way that prediction markets try to do for epistemic predictions about near-term testable events.  It is often possible for a human to see a better strategy for accomplishing the corporation's pseudo-goals than the one the corporation is pursuing.\n- Corporations generally exhibit little interest in fundamental cognitive self-improvement; e.g. extremely few of them have deployed internal prediction markets (perhaps because the predictions of such markets are often embarrassing to overconfident managers).  Since corporate intelligence is almost entirely composed of humans, most of the basic algorithms running a corporation are not subject to improvement by the corporation.  
Attempts to do crude analogues of this tend to, e.g., bog down the entire corporation in bureaucracy and internal regulations, rather than resulting in genetic engineering of better executives or an [428 intelligence explosion].\n- Corporations have no basic speed advantage over their constituent humans, since speed does not parallelize.\n\nSometimes discussion of analogies between corporations and hostile superintelligences focuses on a purported misalignment with human values.\n\nAs mentioned above, corporations are *composed of* consequentialist agents, and to that extent can often deploy consequentialist reasoning.  The humans inside the corporation do not always all pull in the same direction, and this can lead to non-consequentialist behavior by the corporation considered as a whole; e.g. an executive may not maximize financial gain for the company out of fear of personal legal liability, or simply because of other life concerns.\n\nSome corporations have on many occasions acted psychopathically with respect to the outside world, e.g. tobacco companies.  However, even tobacco companies are still composed entirely of humans who might balk at being e.g. [10h turned into paperclips].  It is possible to *imagine* circumstances under which a Board of Directors might wedge itself into pressing a button that turned everything including themselves into paperclips.  However, acting in a unified way to pursue an interest of *the corporation* that is contrary to the non-financial personal interests of all executives *and* directors *and* employees *and* shareholders does not well characterize the behavior of most corporations under most circumstances.\n\nThe conditions for [7hh the coherence theorems implying consistent expected utility maximization] are not met in corporations, as they are not met in the constituent humans.  On the whole, the *strategic acumen* involved in big-picture corporate strategy seems to behave more like Go than like airplane design, and indeed corporations are usually strategically dumber than their smartest employee and often seem to be strategically dumber than their CEOs.  Running down the list of [2vl] suggests that corporations exhibit some such behaviors sometimes, but not all of them nor all of the time.  Corporations sometimes act as if they wish to survive, but sometimes act as if their executives are lazy in the face of competition.  The directors and employees of the company will not go to literally any lengths to ensure the corporation's survival, or protect the corporation's (nonexistent) representation of its utility function, or converge their decision processes toward optimality (again consider the lack of internal prediction markets to aggregate epistemic capabilities on near-term resolvable events, and the lack of any known method for agglomerating human instrumental strategies into an efficient whole).\n\nCorporations exist in a strongly multipolar world; they operate in a context that includes other corporations of equal size, alliances of corporations of greater size, governments, an opinionated public, and many necessary trade partners, all of whom are composed of humans running at equal speed and of equal or greater intelligence and strategic acumen.  
Furthermore, many of the resulting compliance pressures are applied directly to the individual personal interests of the directors and managers of the corporation, i.e., the decision-making CEO might face individual legal sanction or public-opinion sanction independently of the corporation's expected average earnings.  Even if the corporation did, e.g., successfully assassinate a rival's CEO, not all of the resulting benefits to the corporation would accrue to the individuals who had taken the greatest legal risks to run the project.\n\nPotential strong disanalogies to a [-10h] include the following:\n\n- A paperclip maximizer can get much stronger returns on cognitive investment and reinvestment owing to being able to optimize its own algorithms at a lower level of organization.\n- A paperclip maximizer can operate in much faster serial time.\n- A paperclip maximizer can scale single-brain algorithms (rather than hiring more humans who must try to communicate with each other across verbal barriers, a paperclip maximizer can potentially solve problems that require one BIG brain with high internal bandwidth).\n- A paperclip maximizer can continuously scale up perfectly cooperative and coordinated copies of itself as more computational power becomes available.\n- Depending on the returns on cognitive investment, and the timescale on which it occurs, a paperclip maximizer undergoing an intelligence explosion can end up with a strong short-term intelligence lead over the nearest rival AI projects (e.g. because the times separating the different AI projects were measured on a human scale, with the second-leading project 2 months behind the leading project, and this time difference was amplified by many orders of magnitude by fast serial cognition once the leading AI became capable of it).\n- Strongly superhuman cognition potentially leads the paperclip maximizer to rapidly overcome initial material disadvantages.\n - E.g. a paperclip maximizer that can crack protein folding to develop its own biological organisms or bootstrap nanotechnology, or that develops superhuman psychological manipulation of humans, potentially acquires a strong positional advantage over all other players in the system and can ignore game-theoretic considerations (you don't have to play the Iterated Prisoner's Dilemma if you can simply disassemble the other agent and use their atoms for something else).\n- Strongly superhuman strategic acumen means the paperclip maximizer can potentially deploy tactics that literally no human has ever imagined.\n- Serially fast thinking and serially fast actions can take place faster than humans (or corporations) can react.\n- A paperclip maximizer is *actually* motivated to *literally* kill all opposition including all humans and turn everything within reach into paperclips.\n\nTo the extent one credits the dissimilarities above as relevant to whatever empirical question is at hand, arguing by analogy from corporations to superintelligences--especially under the banner of "corporations *are* superintelligences!"--would be an instance of the [noncentral_fallacy noncentral fallacy] or [-reference_class_tennis].  Using the analogy to argue that "superintelligences are no more dangerous than corporations" would be the "precedented therefore harmless" variation of the [-7nf].  Using the analogy to argue that "corporations are the real danger," without having previously argued out that superintelligences are harmless or that superintelligences are sufficiently improbable, would be [-derailing].',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    'advanced_agent'
  ],
  commentIds: [
    '8hr'
  ],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22389',
      pageId: 'corps_vs_si',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2017-03-25 06:41:36',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22385',
      pageId: 'corps_vs_si',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2017-03-25 06:28:21',
      auxPageId: 'advanced_agent',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22383',
      pageId: 'corps_vs_si',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2017-03-25 06:28:20',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}