AI alignment open problem

https://arbital.com/p/value_alignment_open_problem

by Eliezer Yudkowsky Apr 11 2015 updated Feb 6 2017

Tag for open problems under AI alignment.

A tag for pages that describe at least one major open problem identified within the theory of value-aligned advanced agents: powerful artificial minds such that the effect of running them is good / nice / normatively positive ('high value').

To qualify as an 'open problem' for this tag, a problem should be relatively crisply stated, unsolved, and considered important.