by Eliezer Yudkowsky Jul 14 2015 updated Jan 25 2016

Are there meaningful policy differences between different shades of case (2)?

If all of our uncertainty were about the best long-term destiny of humanity, and there were simple and robust ways to discriminate good outcomes from catastrophic ones when asking a behaviorist genie to do simple-seeming things, then building a behaviorist genie would avert Edge Instantiation, Unforeseen Maximums, and all the other value identification problems. But if we still face a thorny value identification problem even for questions like "How do we get the AI to just paint all the cars pink, without tiling the galaxies with pink cars?" or "How can we safely tell the AI to 'pause' when somebody hits the pause button?", then whole hosts of questions remain relevant even if somebody 'just' wants to build a behaviorist genie.