"> Consider the first AI sys..."

https://arbital.com/p/1h7

by Eliezer Yudkowsky Dec 30 2015


> Consider the first AI system that can reasonably predict your answers to questions of the form "Might X constitute mindcrime?"…
>
> Do you think that this kind of question is radically harder than other superficially comparable question-answering tasks?

Yes! It sounds close to FAI-complete in the capacities required. It sounds like trying to brute-force an answer to it via generalized supervised learning might easily involve simulating trillions of Eliezer-models. In general you and I seem to have very different intuitions about how hard it is to get a good answer to "deep, philosophical questions" via generalized supervised learning.