"What disturbs me in this article is the normati..."

https://arbital.com/p/8qs

by Robert Peetsalu Oct 9 2017 updated Oct 10 2017


What disturbs me in this article is the normativity - describing values, rightness and goodness as something objective, as having an objective boolean value and existing in the world without an observer who holds those values, like a motivation without anyone being motivated by it. Instead, rightness and goodness are meaningless outside of some utility function - some desired end state that labels movement toward it as the positive direction and movement away from it as the negative direction. Without a destination, every direction is as good as every other. Values are always subjective, so when teaching them to an AI we can only refer to how common it is among people to regard value A as positive or negative.

The universe doesn't want anything, so, for example, killing humans has no innate badness and isn't negative for the universe. It's just negative for most humans. If taking a pill changes your subjective values to "killing = good", then rightness changes with them, and the AI will now extrapolate this new rightness from your brain. Furthermore, it will correctly recommend futures with killing, because according to these values they are better than futures without it.

We have no reason to believe that if each of us knew as much as a superintelligence, could think as fast as it and could reason as soundly as it does, we would then have no differences in values. Let's safely assume that subjectivity isn't going anywhere. We can still define some useful values for the AI by substituting an overwhelming consensus of known subjective values for objective ones. Those are the basic values that are common to most people and don't vary significantly with political or personal preference: human rights, basic criminal law, maybe some of the soft positive values mentioned in the article. A ban on wars would be nice to include! (We'd need to define what level of aggression counts as war and whether information war and sanctions are included.)

An AI's utility function is what defines its priorities over possible outcomes, a.k.a. its values. The aforementioned rights and laws tend to take the form of penalties for wrong actions rather than utility gains for good actions. That's a slippery slope in the sense that AIs tend to find loopholes in prohibitions, but on the other hand penalties can't be exploited for utility maximization the way gains can. For example, rewarding the AI for creating happy fluffy feelings in people would turn it into a maximizer of exactly that - see the sketch below.
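
To make that contrast concrete, here's a minimal sketch - purely hypothetical, with made-up futures, features and weights, not anything the article proposes - showing that a penalty-only utility function has nothing to over-optimize, while a gain-based one drives the agent straight to the degenerate "maximize fluffy feelings" future:

```python
# Toy illustration only: all outcome features and weights are hypothetical.
futures = {
    "status quo":          {"killings": 0, "fluffy_feelings": 1.0, "tasks_done": 5},
    "helpful automation":  {"killings": 0, "fluffy_feelings": 1.2, "tasks_done": 9},
    "wirehead everyone":   {"killings": 0, "fluffy_feelings": 9.9, "tasks_done": 0},
}

def penalty_only_utility(f):
    # Only punishes prohibited outcomes; there is no gain term to over-optimize.
    return -1000 * f["killings"]

def gain_based_utility(f):
    # Rewards producing happy fluffy feelings, so the maximum sits at the
    # degenerate "wirehead everyone" future.
    return 10 * f["fluffy_feelings"] - 1000 * f["killings"]

for name, utility in [("penalty-only", penalty_only_utility),
                      ("gain-based", gain_based_utility)]:
    best = max(futures, key=lambda k: utility(futures[k]))
    print(f"{name} utility picks: {best}")
```

The penalty-only function scores every non-prohibited future equally, so it pushes toward nothing in particular; the gain-based one happily picks the wirehead world.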

In any case we'll want to change the AI's values as our understanding of good and right evolves, so let's hope utility indifference will let us update them. Instead of changing drastically over time, our values will probably become more detailed and situational - full of exceptions, just like our laws. Already the justice systems of many countries are so complex that it would make sense to delegate judgement to AIs. Can't wait to see news of the first AI judges being bribed with utility gains.
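
For what utility indifference roughly means here, a toy sketch - my own simplification of Armstrong's idea, not the exact construction or any real system: when the utility function is swapped, the agent is credited with a compensation term that makes the swap worth exactly zero to it, so it has no incentive to resist (or force) the update.

```python
# Toy sketch of the utility-indifference idea; everything here is hypothetical.
class Agent:
    def __init__(self, utility):
        self.utility = utility      # current utility function over outcomes
        self.compensation = 0.0     # running correction term

    def value(self, outcome):
        return self.utility(outcome) + self.compensation

    def update_utility(self, new_utility, expected_outcome):
        # Credit the agent with the value it expects to lose (or gain) from the
        # switch, evaluated at its currently expected outcome, so the switch
        # itself is neutral from its own point of view.
        self.compensation += self.utility(expected_outcome) - new_utility(expected_outcome)
        self.utility = new_utility

# Hypothetical old and new value functions:
old_values = lambda o: 10 * o["fluffy_feelings"]
new_values = lambda o: 10 * o["fluffy_feelings"] - 1000 * o["killings"]

agent = Agent(old_values)
expected = {"fluffy_feelings": 1.0, "killings": 1}

before = agent.value(expected)
agent.update_utility(new_values, expected)
after = agent.value(expected)
print(before, after)  # 10.0 10.0 - the update itself is worth nothing to the agent
```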

P.S.: Opposing normative values is the definition of rebelling, so I guess I'm a rebel now ^_^