{
  localUrl: '../page/relative_ability.html',
  arbitalUrl: 'https://arbital.com/p/relative_ability',
  rawJsonUrl: '../raw/7mt.json',
  likeableId: '3974',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '2',
  dislikeCount: '0',
  likeScore: '2',
  individualLikes: [
    'EricRogstad',
    'RyanCarey2'
  ],
  pageId: 'relative_ability',
  edit: '7',
  editSummary: '',
  prevEdit: '6',
  currentEdit: '7',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Infrahuman, par-human, superhuman, efficient, optimal',
  clickbait: 'A categorization of AI ability levels relative to human, with some gotchas in the ordering.  E.g., in simple domains where humans can play optimally, optimal play is not superhuman.',
  textLength: '8118',
  alias: 'relative_ability',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2017-03-08 07:04:22',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2017-01-29 20:23:38',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'false',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '72',
  text: '[summary:\n- **Strictly infrahuman**: The AI can't do better than a human in any regard (in that domain).\n- **Infrahuman**:  The AI almost always loses to the human (in that domain).\n- **Par-human**:  The AI sometimes wins and sometimes loses; it's weaker in some places and stronger in others (in that domain).\n- **High-human**:  The AI performs around as well as exceptionally competent humans.\n- **Superhuman**:  The AI almost always wins.\n- **[6s Efficient]**:  Human advice contributes no marginal improvement to the AI's competence.\n- **Strongly superhuman**:  The AI is *much* better than human; [9j the domain is rich enough] for humans to be [9f surprised] at the AI's tactics.\n- **Optimal**:  Perfect performance for the domain.\n\nThese thresholds aren't always ordered as above.  For example, [9s logical Tic-Tac-Toe] is simple enough that humans and AIs can both play optimally; so, in the Tic-Tac-Toe domain, optimal play isn't superhuman.]\n\nSome thresholds in '[2c sufficiently advanced]' machine intelligence are not absolute ability levels within a domain, but abilities relative to the human programmers or operators of the AI.  When this is true, it's useful to think about *relative* ability levels within a domain; and one generic set of distinguished thresholds in relative ability is:\n\n- **Strictly infrahuman:**  The AI cannot do anything its human operators / programmers cannot do.  Computer chess in 1966 relative to a human master.\n- **Infrahuman:**  The AI is definitely weaker than its operators but can deploy some surprising moves.  Computer chess in 1986 relative to a human master.\n- **Par-human** (or more confusingly "**human-level**"):  If competing in that domain, the AI would sometimes win, sometimes lose; it's better than human at some things and worse at others; it just barely wins or loses.  Computer chess in 1991 on a home computer, relative to a strong amateur human player.\n- **High-human**:  The AI performs as well as exceptionally competent humans.  Computer chess just before [1bx 1996].\n- **Superhuman:**  The AI always wins.  Computer chess in 2006.\n- **[6s Efficient]:**  Human advice contributes no marginal improvement to the AI's competence.  Computer chess was somewhere around this level in 2016, with "advanced" / "freestyle" / "hybrid" / "centaur" chess starting to lose out against purely machine players. %note: Citation solicited. Googling gives the impression that nothing has been heard from 'advanced chess' in the last few years.%\n- **Strongly superhuman:**\n  - The ceiling of possible performance in the domain is far above the human level; the AI can perform orders of magnitude better.  E.g., consider a human and computer competing at *how fast* they can do arithmetic.  In principle the domain is simple, but competing with respect to speed leaves enormous headroom for the computer to do literally billions of times better.\n  - [9j The domain is rich enough] that humans don't understand key generalizations, leaving them shocked at *how* the AI wins.  Computer Go relative to human masters in 2017 was just starting to exhibit the first signs of this ("We thought we were one or two stones below God, but after playing AlphaGo, we think it is more like three or four").  Similarly, consider a human grandmaster playing Go against a human novice.\n- **Optimal:**  The AI's performance is perfect for the domain; God could do no better.  Computer play in checkers as of 2007.\n\nThe *ordering* of these thresholds isn't always as above.  
For example, in the extremely simple domain of [9s logical Tic-Tac-Toe], humans can play optimally after a small amount of training.  Optimal play in Tic-Tac-Toe is therefore not superhuman.  Similarly, if an AI is playing in a rich domain but still has strange weak spots, the AI might be strongly superhuman (its play is *much* better and shocks human masters) but not [6s efficient] (the AI still sometimes plays wrong moves that human masters can see are wrong).\n\nThe term "human-equivalent" is deprecated because it confusingly implies a roughly human-style balance of capabilities, e.g., an AI that is roughly as good at conversation as a human and also roughly as good at arithmetic as a human.  This seems pragmatically unlikely.\n\nThe [other Wiki](https://en.wikipedia.org/wiki/Progress_in_artificial_intelligence) lists the categories "optimal, super-human, high-human, par-human, sub-human".\n\n# Relevant thresholds for AI alignment problems\n\nConsidering these categories as [2c thresholds of advancement] relevant to the point at which AI alignment problems first materialize:\n\n- "Strictly infrahuman" means we don't expect to be surprised by any tactic the AI uses to achieve its goals (within a domain).\n- "Infrahuman" means we might be surprised by a tactic, but not surprised by overall performance levels.\n- "Par-human" means we need to start worrying that humans will lose in any event determined by a competition (although this seems to imply the [7g0 non-adversarial principle] has already been violated); we can't rely on humans winning some event determined by a contest of relevant ability.  Or this may suppose that the AI gains access to resources or capabilities that we have strong reason to believe are protected by a lock of roughly human ability levels, even if that lock is approached in a different way than usual.\n- "High-human" means the AI will *probably* see strategies that a human sees in a domain; it might be possible for an AI of par-human competence to miss them, but this is much less likely for a high-human AI.  It thus behaves like a slightly weaker version of postulating [6s efficiency] for purposes of expecting the AI to see some particular strategy or point.\n- "Superhuman" implies at least [9f weak cognitive uncontainability] by [1bt Vinge's Law].  Also, if something is known to be difficult or impossible for humans, but seems possibly doable in principle, we may need to consider it becoming possible given some superhuman capability level.\n- "Efficiency" is a fully sufficient condition for the AI seeing any opportunity that a human sees; e.g., it is a fully sufficient condition for many instrumentally convergent strategies.  Similarly, it can be postulated as a fully sufficient condition to refute a claim that an AI will take a path such that some other path would get more of its utility function.\n- "Strongly superhuman" means we need to expect that an AI's strategies may deploy faster than human reaction times, or overcome great starting disadvantages.  Even if the AI starts off in a much worse position it may still win.\n- "Optimality" doesn't obviously correspond to any particular threshold of results, but is still an important concept in the hierarchy, because only by knowing the absolute limits on optimal performance can we rule out strongly superhuman performance as being possible.  See also the claim [9t].\n\n# 'Human-level AI' confused with 'general intelligence'\n\nThe term "human-level AI" is sometimes used in the literature to denote [42g].  
This should probably be avoided, because:\n\n- Narrow AIs have achieved par-human or superhuman ability in many specific domains without [7vh general intelligence].\n- If we consider [7vh general intelligence] as a capability, a kind of superdomain, it seems possible to imagine infrahuman levels of general intelligence (or superhuman levels).  The apparently large jump between chimpanzees and humans means that we mainly see human levels of general intelligence, with no biological organisms exhibiting the same ability at a lower level; but, at least so far as we currently know, AI could possibly take a different developmental path.  So alignment thresholds that could plausibly follow from general intelligence, like [3nf big-picture awareness], aren't necessarily locked to par-human performance overall.\n\nArguably, the term 'human-level' should just be avoided entirely, because it's been pragmatically observed to function as a [7mz gotcha button] that derails the conversation some fraction of the time, with the interrupt being "Gotcha!  AIs won't have a humanlike balance of abilities!"\n',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    'advanced_agent'
  ],
  commentIds: [
    '89l'
  ],
  questionIds: [],
  tagIds: [
    'value_alignment_glossary'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22261',
      pageId: 'relative_ability',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2017-03-08 07:04:22',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22260',
      pageId: 'relative_ability',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2017-03-08 07:02:44',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22065',
      pageId: 'relative_ability',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2017-02-17 20:48:36',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21927',
      pageId: 'relative_ability',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2017-02-06 03:03:05',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21889',
      pageId: 'relative_ability',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2017-01-29 21:48:28',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21883',
      pageId: 'relative_ability',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2017-01-29 20:27:43',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21882',
      pageId: 'relative_ability',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newEditGroup',
      createdAt: '2017-01-29 20:23:54',
      auxPageId: 'EliezerYudkowsky',
      oldSettingsValue: '123',
      newSettingsValue: '2'
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21880',
      pageId: 'relative_ability',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2017-01-29 20:23:39',
      auxPageId: 'advanced_agent',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21881',
      pageId: 'relative_ability',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2017-01-29 20:23:39',
      auxPageId: 'value_alignment_glossary',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21878',
      pageId: 'relative_ability',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2017-01-29 20:23:38',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}