As I see it, there are two cases that are meaningfully distinct:
(1) What we want is so simple, and we are so confident in what it is, that we are prepared to irrevocably commit in the near future to a particular concrete specification of "what we want" (or to a good-enough approximation with high-enough probability, and so on).
(2) What we want is not that simple, or we are not that confident.
It is more or less obvious that we are in (2). For example, even if every human were certain that the only thing they wanted was to produce as much diamond as possible (to use your example), we'd still be deep into case (2): even pinning down a concrete specification of "diamond" that holds up robustly in every edge case is an open problem. And that's just about the easiest imaginable case. (The only exception I can see is some sort of extropian complexity-maximizing view.)
Are there meaningful policy differences between different shades of case (2)? I'm not yet convinced that there are.