Superintelligent

https://arbital.com/p/superintelligent

by Eliezer Yudkowsky Jun 6 2016 updated Jun 8 2016

A "superintelligence" is strongly superhuman (strictly higher-performing than any and all humans) on every cognitive problem.


[summary: A supernova isn't infinitely hot, but it's still pretty darned hot; in the same sense, a superintelligence isn't infinitely smart, but it's pretty darned smart: strictly superhuman across all cognitive domains, by a significant margin, or else as a fallback merely optimal. (A superintelligence can't win against a human at logical tic-tac-toe, though in real-world tic-tac-toe it could disassemble the opposing player.) Superintelligences are epistemically and instrumentally efficient relative to humans, and have all the other advanced agent properties as well.]

Machine performance inside a domain (class of problems) can potentially be:

- Optimal (impossible to perform better)
- Strongly superhuman (better than any and all humans, by a significant margin)
- Weakly superhuman (better than most humans, or better than all humans on only some problems in the domain)
- Par-human (about as good as a typical human, better in some ways and worse in others)
- Subhuman / infrahuman (worse than most humans)

A superintelligence is either 'strongly superhuman', or else at least 'optimal', across all cognitive domains. It can't win against a human at logical tic-tac-toe, but it plays optimally there. In a real-world game of tic-tac-toe that it strongly wanted to win, it might sabotage the opposing player, deploying superhuman strategies on the richer "real world" gameboard.
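
To make the logical tic-tac-toe point concrete, here is a minimal sketch (mine, not from the article; the board encoding and the `winner`/`value` helpers are illustrative choices) of an exhaustive minimax search confirming that the game-theoretic value of 3x3 tic-tac-toe is a draw, so even an optimal player cannot force a win against sound play:

```python
# A minimal sketch (not from the article): exhaustive minimax over 3x3
# tic-tac-toe, confirming that the game-theoretic value is a draw, so even
# an optimal player cannot force a win against an opponent who plays soundly.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals


def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None


@lru_cache(maxsize=None)
def value(board, player):
    """Value of the position for X under optimal play by both sides:
    +1 if X can force a win, 0 if it is a draw, -1 if O can force a win."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if '.' not in board:
        return 0  # board full, no winner: draw
    children = [value(board[:i] + player + board[i + 1:],
                      'O' if player == 'X' else 'X')
                for i, cell in enumerate(board) if cell == '.']
    return max(children) if player == 'X' else min(children)


if __name__ == '__main__':
    print(value('.' * 9, 'X'))  # prints 0: a draw under optimal play
```

The point of the exercise: 'optimal' is a ceiling imposed by the structure of the game itself, not by the player's intelligence; superhuman ability only yields a win when there is a richer gameboard to exploit.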

I. J. Good originally used 'ultraintelligence' to denote the same concept: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever."

To say that a hypothetical agent or process is "superintelligent" will usually imply that it has all the advanced-agent properties.

Superintelligences are still bounded (if the character of physical law at all resembles the Standard Model of physics). They are (presumably) not infinitely smart, infinitely fast, all-knowing, or able to achieve every describable outcome using their available resources and options. However: a supernova isn't infinitely hot, but it's still pretty darned hot. In the same sense, a superintelligence isn't infinitely smart, but it is strictly superhuman across all cognitive domains by a significant margin, and epistemically and instrumentally efficient relative to humans.

If we're talking about a hypothetical superintelligence, we're probably either supposing that an intelligence explosion has happened, or talking about a limit state approached by a long period of progress.

Many, perhaps most, problems in AI alignment seem like they ought to first appear at some point short of full superintelligence. As part of the project of making discourse about advanced agents precise, we should try to identify the key advanced agent property behind each such problem more exactly than saying "this problem would appear on approaching superintelligence": supposing superintelligence is usually sufficient for the problem to appear, but rarely necessary.

For the book of that name, see Nick Bostrom's Superintelligence.