"I expect you know my answer..."


by Paul Christiano Dec 30 2015 updated Dec 30 2015

I expect you know my answer on this one.

I agree that if there is a really fast transition (e.g. doubling capability in a day), starting from a world that looks generally like the world of today (and in particular one that isn't already moving incredibly quickly), then it could result in world takeover, depending on the conditions of AI development. Maybe I'd call it more likely than not in that case, with the main uncertainty being how concentrated the relevant information is and how well-coordinated the people with that information already are. (Of course, measured in calendar time they might quickly form a singleton anyway as their coordination ability improves, but that's precisely the uninteresting sense I was describing before.)

You could reserve "intelligence explosion" for the really fast transition + "standing start" scenario. But from my perspective the broader notion is quite useful: it looks like a probable consequence of our understanding of technological development, it is consistent with history and with the contemporary understanding of AI, and yet no one takes it seriously despite its being one of the most important facts about the future. The broader notion is also what people normally say the definition is; e.g. it's what Chalmers argues for, and I think it's the definition Nick uses.

The narrower notion is perhaps even more important if it will actually occur, but it is also (at a minimum) highly controversial, and it rests on a view of AI progress that few experts endorse. It seems best to reserve "intelligence explosion" for the crazy but probable event that no one takes seriously, to continue trying to get the broader intellectual community to understand why that event seems likely, and to have a more nuanced discussion (amongst people who already take the basic claim seriously) about how fast and abrupt progress is likely to be.

I don't have a better name for "sovereign" in part because I don't think it's a useful or entirely coherent concept---it feels practically designed to smuggle in assumptions. I do think that we can make better names for various more precise versions of it, e.g. "fully autonomous agent."