"This topic consistently fru..."


by Paul Christiano Dec 29 2015

This topic consistently frustrates me; the proposed typology is obviously incomplete, and I don't think it produces any useful conclusions except by equivocating between definitions (e.g. establishing that X is a sovereign under one definition and later that sovereigns have property P under another), by assuming exhaustiveness without justification, or by straightforwardly smuggling in associations.

Note that "an AI intended to act freely in the world according to its own preferences" need not entail "without further direction," since the preferences of the AI may make reference to human direction. And neither of these directly entail the need to get it right on the first try to any greater extent than any other AI system.

And the complement of these properties doesn't really imply anything at all, certainly not that a system is a genie or an oracle.