"The what, the huh?"

https://arbital.com/p/3nl

by Alexei Andreev May 16 2016


Any consequentialist agent which has acquired sufficient big\-picture savviness to understand that it has code, and that this code is relevant to achieving its goals, would \(by default acquire subgoals relating to its code\. \(Unless this default is averted\.\) For example, an agent that wants \(only\) to produce smiles or make paperclips, whose code contains a shutdown procedure, will not want this procedure to execute because it will lead to fewer future smiles or paperclips\. \(This preference is not sui generis but arises from the execution of the code itself; the code is reflectively inconsistent\.\)

The what, the huh?