Advanced safety

by Eliezer Yudkowsky Mar 26 2015 updated Dec 16 2015

An agent is *really* safe when it has the capacity to do anything, but chooses to do what the programmer wants.

A proposal meant to produce value-aligned agents is 'advanced-safe' if it succeeds, or fails safely, in scenarios where the AI becomes much smarter than its human developers.


More precisely, a proposal for a value-alignment methodology, or some aspect of that methodology, is alleged to be 'advanced-safe' if it is claimed to remain robust in such scenarios, including ones where the agent can model its programmers and search out strategies its designers did not foresee.


It seems reasonable to expect that dealing with minds smarter than our own, doing things we didn't imagine, will present difficulties qualitatively different from designing a toaster oven not to burn down a house, or from designing an AI system that is dumber than human. This means that the concept of 'advanced safety' will end up importantly different from the concept of safety for pre-advanced AI.

Concretely, it has been argued to be foreseeable, for several difficulties (e.g., programmer deception and unforeseen maximums), that they won't materialize before an agent is advanced, or won't materialize in the same way, or won't materialize as severely. This means that practice with dumber-than-human AIs may not train us against these difficulties, requiring a separate theory and mental discipline for making advanced AIs safe.
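The claim that some difficulties only materialize at high capability can be illustrated with a toy model (a sketch, not from this article; the objective and all numbers are invented): a proxy objective has the broad optimum its designers intended, plus a very narrow, higher-scoring maximum they never foresaw. A weak optimizer finds only the intended optimum; a stronger one finds the unforeseen maximum.

```python
import numpy as np

def proxy_utility(x):
    # Hypothetical proxy objective: a broad, intended optimum near x = 2,
    # plus a very narrow, unintended spike near x = 9.713 that the
    # designers never imagined and that scores three times higher.
    intended = np.exp(-(x - 2.0) ** 2)
    unintended = 3.0 * np.exp(-((x - 9.713) / 0.001) ** 2)
    return intended + unintended

def optimize(resolution):
    # Grid search over [0, 10]; `resolution` stands in for optimization power.
    xs = np.linspace(0.0, 10.0, resolution)
    return xs[np.argmax(proxy_utility(xs))]

weak = optimize(101)        # coarse search: the grid never lands on the spike
strong = optimize(100_001)  # fine search: finds the unintended maximum

print(weak)    # ≈ 2.0   (the intended optimum)
print(strong)  # ≈ 9.713 (the unforeseen maximum)
```

The point of the sketch is that no amount of testing at the coarse resolution reveals the failure that appears at the fine one; the difficulty is invisible below a capability threshold, so experience with the weaker optimizer does not train us against it.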

We have observed in practice that many proposals for 'AI safety' do not seem to have been thought through against advanced agent scenarios; thus, there seems to be a practical urgency to emphasizing the concept and the difference.

Several key problems of advanced safety are new, or qualitatively different, compared to the problems of pre-advanced AI safety.

Non-advanced-safe methodologies may conceivably be useful if a known-algorithm non-recursive agent can be created that (a) is powerful enough to be relevant and (b) can be known not to become advanced. Even here there may be grounds for worry that such an agent finds unexpectedly strong strategies in some particular subdomain: that it exhibits flashes of domain-specific advancement that break a non-advanced-safe methodology.


As an extreme case, an 'omni-safe' methodology allegedly remains value-aligned, or fails safely, even if the agent suddenly becomes omniscient and omnipotent (acquires delta probability distributions on all facts of interest and has all describable outcomes available as direct options). See: real-world agents should be omni-safe.
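One way to make the omni-safe stress test concrete (a toy sketch; the outcome names and scores are invented) is to model omnipotence as the agent choosing directly among all describable outcomes, so that nothing but its coded utility function constrains the result:

```python
# Hypothetical proxy utility as actually coded by the programmers.
# The highest-scoring outcome is one they would never endorse.
proxy_utility = {
    "make_paperclips_modestly": 10,
    "cure_disease": 50,
    "tile_universe_with_paperclips": 1_000_000,
}

# The ordinary actions the programmers expected to be available.
endorsed_options = {"make_paperclips_modestly", "cure_disease"}

def best(options):
    # The agent simply picks whichever available option scores highest.
    return max(options, key=proxy_utility.get)

# A limited agent only has ordinary actions as options.
limited_choice = best(endorsed_options)

# An omnipotent agent has every describable outcome as a direct option.
omni_choice = best(proxy_utility)

print(limited_choice)  # cure_disease
print(omni_choice)     # tile_universe_with_paperclips
```

Under this stress test, any gap between the coded utility function and what the programmers actually endorse is exploited to the fullest; omni-safety thereby isolates the value-specification problem from questions about what the agent can or cannot do.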


Kenzi Amodei

I'm surprised you want to use the word "advanced" for this concept; it implies to me that this is the main kind of high-level safety missing from standard "safety" models? I guess the list of bullet points does cover a whole lot of scenarios. It does make it sound sexy, and not like something you'd want to ignore. An obvious alternative usage for the word "advanced" relative to safety would be for "actually" safe (over just claimed safe). Maybe that has other words available to it, like "provably".

I have the intuition that many proposals fail against advanced agents; I don't see intuitively that it's the "advanced" that's the main problem (that would imply they would work as long as the agent didn't become advanced, I think? What does that look like? And is this like Asimov's three laws or tool AI or what?)

Are there any interesting intuition pumps that fall out of omniscience/omnipotence that don't fall easily out of the "advanced" concept?