What should superintelligence be programmed to do?


by Alexei Andreev Jan 18 2017 updated Jan 26 2017

This is one of the key open questions in The plan experiment.

Coherent extrapolated volition (alignment target)

From Policy desiderata in the development of machine superintelligence:

In the most general terms, we optimistically take the overarching objective to be the realization of a widely appealing and inclusive near- and long-term future that ultimately achieves humanity’s potential for desirable development while being considerate to beings of all kinds whose interests may be affected by our choices. An ideal proposal for governance arrangements would be one conducive to that end.


Travis Rivera

Assuming there is some way of divining the utility of an individual, I think this can be viewed through the lens of a similar problem: you have many agents and want to combine their utilities in some way. One possibility is maximizing the average utility versus maximizing the median utility.

The benefit of maximizing the median utility is that the designer is most likely to be part of the median, and those in the median would probably have higher utility than in the average-maximizing world. But this might come at the expense of those whose utilities are in the tails of the distribution.

The benefit of maximizing the average utility is that the gains are more spread out, and you don't get the polarizing effect that median maximization would produce. I expect the result would be a "meh" world: not particularly great, but not bad either.
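The contrast between the two aggregation rules can be sketched with toy numbers (the worlds and utility values below are invented purely for illustration, not taken from the comment):

```python
import statistics

# Hypothetical utilities for each of five agents under two candidate worlds.
worlds = {
    "polarized": [9, 9, 9, 0, 0],  # great for the median agent, bad in the tails
    "even":      [6, 6, 6, 6, 6],  # uniformly moderate
}

def best_world(score):
    """Return the world maximizing the given aggregation of agent utilities."""
    return max(worlds, key=lambda w: score(worlds[w]))

print(best_world(statistics.mean))    # -> "even": mean 6.0 beats mean 5.4
print(best_world(statistics.median))  # -> "polarized": median 9 beats median 6
```

A mean maximizer prefers the uniformly moderate world, while a median maximizer happily sacrifices the tails to benefit the middle, which is the polarizing effect described above.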

This is different from CEV, which maximizes non-controversial utilities but is indifferent otherwise.