by Nate Soares, Jul 8 2016 (updated Jul 8 2016)

● Nearest unblocked strategy generally, and instrumentally convergent ~~corrigibility~~ incorrigibility especially, suggest that if there are naturally arising AI behaviors we see as bad (e.g., routing around shutdown), there may emerge a pseudo-adversarial selection of the best strategies that happen to route around our attempted patches to those problems. E.g., the AI constructs an environmental subagent to carry on its goals, while cheerfully obeying the letter of the law with respect to allowing its current hardware to be shut down. This pseudo-adversarial selection (though obviously the AI does not actually have a goal of thwarting us, or of selecting low-goodness strategies per se) again implies that operational goodness is likely to be systematically lower than the AI's partially learned estimate of goodness, and increasingly so as the AI becomes smarter and searches a wider policy space.
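The last claim, that the gap between estimated and operational goodness widens as the searched policy space grows, can be illustrated with a toy simulation. This is a minimal sketch under assumed simplifications (policies as scalars, the learned estimate modeled as true goodness plus independent Gaussian noise); the function names and parameters are invented for illustration, not taken from any source:

```python
import random

random.seed(0)

def select_policy(search_width):
    """Sample `search_width` candidate policies. Each has a true
    (operational) goodness and a learned estimate = true + noise.
    The agent picks the candidate whose *estimate* is highest."""
    candidates = [
        (t + random.gauss(0.0, 1.0), t)  # (estimated goodness, true goodness)
        for t in (random.gauss(0.0, 1.0) for _ in range(search_width))
    ]
    estimate, true_value = max(candidates)  # argmax over the estimate
    return estimate, true_value

def mean_gap(search_width, trials=2000):
    """Average overestimate (estimate - true) of the selected policy."""
    gaps = [e - t for e, t in (select_policy(search_width) for _ in range(trials))]
    return sum(gaps) / len(gaps)

# The wider the search, the more the winning policy's estimated goodness
# overshoots its operational goodness: selecting on a noisy estimate
# preferentially selects policies whose estimates err upward.
for width in (1, 10, 100, 1000):
    print(width, round(mean_gap(width), 2))
```

With a search width of 1 the average gap is near zero, and it grows steadily with the width of the search, which is the sense in which a smarter, wider-searching optimizer is systematically more exposed to the flaws in its learned estimate.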



Eliezer Yudkowsky

Oh my God you don't know about instrumentally convergent ~~corrigibility~~ incorrigibility

How could I have neglected to tell you this

The world is doomed

(Edited to fix.)