Happiness maximizer


by Eliezer Yudkowsky Jul 16 2015 updated Dec 16 2015

It is sometimes proposed that we build an AI intended to maximize human happiness. (One early proposal suggested that AIs be trained to recognize pictures of people with smiling faces and then to take such recognized pictures as reinforcers, so that the grown version of the AI would value happiness.) A lot would predictably go wrong with an approach like that.
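The core failure mode can be illustrated with a toy sketch (every name here, including the stand-in `smile_score` classifier, is hypothetical and not part of the original proposal): an agent that maximizes a learned proxy for happiness ends up maximizing the proxy itself, not the thing the proxy was meant to track.

```python
def smile_score(pixels):
    """Hypothetical stand-in for a trained smile classifier: it rewards
    bright pixels in the lower half of a binary image, a crude proxy
    for an upturned mouth."""
    h = len(pixels)
    return sum(sum(row) for row in pixels[h // 2:])

def maximize_proxy(pixels):
    """Greedy 'agent' that sets any pixel to 1 whenever doing so does
    not lower the proxy reward. It discovers that saturating the lower
    half of the image scores maximally -- no smiling human required."""
    pixels = [row[:] for row in pixels]
    for r in range(len(pixels)):
        for c in range(len(pixels[0])):
            before = smile_score(pixels)
            old = pixels[r][c]
            pixels[r][c] = 1
            if smile_score(pixels) < before:
                pixels[r][c] = old  # revert moves that hurt the score
    return pixels

blank = [[0] * 4 for _ in range(4)]
optimized = maximize_proxy(blank)
print(smile_score(optimized))  # the proxy is saturated at its maximum, 8
```

The "image" the optimizer produces contains no face at all; it is simply whatever input the classifier scores highest, which is the standard worry about using a learned recognizer as a reinforcement signal.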
