I don't see indirect specifications as encountering these difficulties; all of the contenders so far go straight for the throat (defining behavior directly in terms of perceptions) rather than trying to pick out the programmer in the AI's ontology. Even formal accounts of, e.g., language learning seem like they will have to go for the throat in this sense (learning the correspondence between language and an initially unknown world, based on perceptions), rather than manually binding nouns to parts of a particular ontology or something like that. So whatever mechanism you used to learn what a "programmer" is in the first place, it seems like you can use that same mechanism to learn what a programmer is under your new physical theory (or, more likely, your beliefs about the referent of "programmer" will adjust automatically along with your beliefs about physics, and will indeed be used to help inform your changing beliefs about physics).
The "direct" approaches, which pick out what is valuable directly in the AI's hard-coded ontology, seem clearly unsatisfactory on other grounds.