A subproblem of building a task-directed AGI (genie) is communicating the next task to the AGI and identifying which outcomes count as fulfilling that task. For the superproblem, see Safe plan identification and verification.

This seems to be primarily a communication problem. It may carry additional constraints, e.g., if the AGI is a behaviorist genie. In the known-fixed-algorithm case of AGI, we may not have much freedom in aligning the AGI's planning capabilities with its task representation, and may therefore need to work with a particular task representation (i.e., we can't just use language to communicate the task; we need to use labeled training cases).

This is currently a stub page, and is mainly being used as a parent or tag for subproblems.
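To make the labeled-training-cases idea concrete, here is a minimal toy sketch: rather than describing the task in language, the operators label example outcomes as fulfilling (1) or not fulfilling (0) the task, and a simple learner induces a decision boundary over outcome features. Everything here is a hypothetical illustration under strong simplifying assumptions (the task concept is a linearly separable predicate over a hand-chosen feature encoding); it is not part of any proposed AGI design.

```python
def train_task_concept(labeled_outcomes, epochs=50, lr=0.1):
    """Learn a linear task concept from (features, label) training cases.

    labeled_outcomes: list of (feature_tuple, label) pairs, label in {0, 1}.
    Uses the classic perceptron update rule; assumes linear separability.
    """
    dim = len(labeled_outcomes[0][0])
    w = [0.0] * dim  # one weight per outcome feature
    b = 0.0          # bias term
    for _ in range(epochs):
        for features, label in labeled_outcomes:
            score = sum(wi * xi for wi, xi in zip(w, features)) + b
            pred = 1 if score > 0 else 0
            err = label - pred
            if err:  # misclassified: nudge the boundary toward the example
                w = [wi + lr * err * xi for wi, xi in zip(w, features)]
                b += lr * err
    return w, b


def fulfills_task(w, b, features):
    """Classify a new outcome under the learned task concept."""
    return sum(wi * xi for wi, xi in zip(w, features)) + b > 0


# Hypothetical toy encoding: (strawberries_on_plate, house_on_fire).
# The task is fulfilled iff at least one strawberry is on the plate
# and the house is not on fire -- labels communicate this, not language.
cases = [((1, 0), 1), ((2, 0), 1), ((0, 0), 0), ((1, 1), 0), ((0, 1), 0)]
w, b = train_task_concept(cases)
```

Even this toy version shows why the approach is narrower than language: the learned concept is only as good as the feature encoding and the coverage of the labeled cases, which is part of what makes task identification a distinct subproblem.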