"Normally I think that you s..."

https://arbital.com/p/7m

by Paul Christiano Jun 18 2015


Normally I think that you set the bar too high for yourself. In this case, I think that you would be justified in setting the bar much higher (I guess if we disagreed in the same direction in every case, it wouldn't be clear that we were really disagreeing).

If you design a "safe" AI which is much less efficient (say 10x more expensive to do the same things) than an unsafe AI, that may be useful but it does not seem to resolve what you call the value achievement dilemma. It would need to be coupled with very good coordination to prevent people from deploying the more efficient, unsafe AI.

So I think it is reasonable to set the bar at safe systems that act in the world (acquire resources, produce things, influence politics…) nearly as effectively as any unsafe system that we could construct using the same underlying technologies.

This kind of requirement seems much more important than (e.g.) ensuring that your system remains safe if it were to suddenly become infinitely powerful.

This disagreement probably relates to our disagreement about the likely pace and dynamics of AI development. One difference is that in this case, assuming a fast takeoff may actually be less conservative. So if you want to push the "plan for the worst" line, it seems like you should be pessimistic about an intelligence explosion where that would be inconvenient, but also pessimistic about the tolerable gaps in efficiency where that would be inconvenient.


Comments

Eliezer Yudkowsky

The definition of 'relevant & limited' seems sensitive to beliefs about fast vs. slow takeoff, check. Will need to flag that dependency (I was in the process of revising this page set).