"Another, speculative point: If $V$ and $U$ we..."

https://arbital.com/p/8s5

by Sören Mind Oct 28 2017 updated Oct 28 2017


Another speculative point:

If $V$ and $U$ were my utility function and my friend's, my intuition is that an agent optimizing the wrong one of the two would still act fairly robustly, compared with an agent optimizing an arbitrary misspecified proxy for $V$. If true, this may support the view that Goodhart's curse for AI alignment is to a large extent a problem of defending against adversarial examples by learning robust features similar to human ones. Namely, the robust behavior may come from the fact that my friend and I have learned similar robust, high-level features; we just give them different importance.
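
As a rough illustration of this intuition (a toy sketch, not part of the original argument), suppose $V$, $U$, and a misspecified proxy are all linear in a candidate outcome's features: $V$ and $U$ weight the same robust features differently, while the proxy leans mostly on non-robust features that $V$ does not care about. Under heavy selection pressure, optimizing $U$ loses less of $V$ than optimizing the proxy. All feature counts, weights, and variable names below are assumptions made up for the sketch.

```python
# Toy sketch (illustrative assumptions only): how much of the true utility V
# is achieved when hard-optimizing V itself, a friend's utility U that shares
# V's robust features, or a proxy weighted mostly on non-robust features.
import numpy as np

rng = np.random.default_rng(0)

N_CANDIDATES = 100_000   # strength of optimization pressure
N_ROBUST = 5             # shared high-level ("robust") features
N_SPURIOUS = 20          # low-level, non-robust features

# Candidate outcomes, described by robust + spurious features.
robust = rng.standard_normal((N_CANDIDATES, N_ROBUST))
spurious = rng.standard_normal((N_CANDIDATES, N_SPURIOUS))

def unit(v):
    return v / np.linalg.norm(v)

# True utility V: some weighting of the robust features.
w_V = unit(rng.standard_normal(N_ROBUST))

# Friend's utility U: the same robust features, weighted differently
# (a perturbed, correlated copy of V's weights).
w_U = unit(w_V + 0.3 * rng.standard_normal(N_ROBUST))

# Misspecified proxy: only partly aligned with V via the robust features;
# most of its weight falls on non-robust features irrelevant to V.
w_P_robust = np.sqrt(0.3) * w_V
w_P_spurious = np.sqrt(0.7) * unit(rng.standard_normal(N_SPURIOUS))

V = robust @ w_V
U = robust @ w_U
proxy = robust @ w_P_robust + spurious @ w_P_spurious

# Hard optimization: pick the single best candidate under each objective,
# then ask how much V it actually attains.
print("V achieved by optimizing V itself  :", V[np.argmax(V)])
print("V achieved by optimizing U (friend):", V[np.argmax(U)])
print("V achieved by optimizing the proxy :", V[np.argmax(proxy)])
```

In this setup the $U$-optimizer typically retains most of the $V$ that direct $V$-optimization would achieve, while the proxy-optimizer loses much more of it, because its selection pressure is largely spent on features $V$ assigns no value to.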