"Are you asking for safety..."


by Eliezer Yudkowsky Jul 14 2015 updated Jul 14 2015

Are you asking for safety even if one of these systems or subsystems becomes omniscient while others do not?

Yes! If your system behaves unsafely when subsystem A becomes too much smarter than subsystem B, that's bad. You should have designed your AI to detect when A gets too far ahead of B, and then to limit A, suspend itself to disk, or otherwise fail safely.
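The shape of that fail-safe can be sketched as a watchdog that compares proxy capability scores for the two subsystems and escalates through "limit A" to "suspend" as the gap widens. This is only an illustrative sketch, not anything from the original text: the class, the ratio thresholds, and the idea of a scalar capability score are all assumptions introduced here for concreteness.

```python
from dataclasses import dataclass
from enum import Enum


class SafeMode(Enum):
    RUNNING = "running"      # capability gap within tolerance
    LIMITED = "limited"      # subsystem A throttled
    SUSPENDED = "suspended"  # state checkpointed to disk, execution halted


@dataclass
class CapabilityMonitor:
    """Hypothetical watchdog comparing capability scores of subsystems A and B.

    The scores are assumed to be some scalar proxy for capability; how to
    measure that is exactly the hard part and is not addressed here.
    """
    limit_ratio: float = 2.0    # A at most 2x B before throttling (assumed)
    suspend_ratio: float = 4.0  # beyond 4x, fail safe by suspending (assumed)

    def check(self, score_a: float, score_b: float) -> SafeMode:
        # Fail safe on unmeasurable input: treat a zero or negative score
        # for B as maximally alarming rather than as "fine".
        if score_b <= 0:
            return SafeMode.SUSPENDED
        ratio = score_a / score_b
        if ratio >= self.suspend_ratio:
            return SafeMode.SUSPENDED
        if ratio >= self.limit_ratio:
            return SafeMode.LIMITED
        return SafeMode.RUNNING
```

The design choice worth noting is that every abnormal condition maps to the more restrictive mode: the monitor defaults toward suspension rather than toward continued operation, which is what "fail safely" requires.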

I've noticed that in a lot of cases, you seem convinced that various classes of problem would be handled… I want to say 'automatically', but the more charitable interpretation would be 'as special cases of solving some larger general problem that I'm not worried about getting solved'. Can you state explicitly what background assumption leads you to think that an AI which behaves badly when subsystem A is very overpowered relative to subsystem B is still safe? Like, what is the mechanism that makes the AI safe in this case?