Methodologically, I am trying to anticipate which problems are hard and which are easy, in order to understand which approaches may or may not work and what the key difficulties are. I wouldn't describe this as "taking things for granted"; I think we are probably miscommunicating.
> it's always possible for an equally smart adversary to tell the difference
This is a big problem; I think it's the more realistic version of "perfect simulation will be out of the question." Note that this is only a concern for some processes (e.g. if the simulation output is a single bit, the problem doesn't arise, since matching a one-bit distribution only requires matching one probability).
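To make the one-bit case concrete, here is a toy sketch (my own illustration, not from the discussion above, with the "true process" assumed to be a hidden Bernoulli draw): fitting a simulator only requires estimating a single probability, and the best possible distinguisher's advantage is just the gap between the two probabilities, which sampling drives toward zero.

```python
import random

def true_process():
    # Stand-in for a complicated computation whose visible output is one bit.
    return 1 if random.random() < 0.37 else 0

def fit_simulator(n=100_000):
    # Matching a one-bit output distribution only requires estimating one number.
    p_hat = sum(true_process() for _ in range(n)) / n
    return (lambda: 1 if random.random() < p_hat else 0), p_hat

def distinguisher_advantage(p, q):
    # For one-bit outputs, the optimal distinguisher's advantage is the
    # total variation distance between the two distributions: |p - q|.
    return abs(p - q)

sim, p_hat = fit_simulator()
print(distinguisher_advantage(0.37, p_hat))  # shrinks as n grows
```

For richer output spaces there is no such shortcut: the simulator has to match a high-dimensional distribution, and that is where an equally smart adversary can find a gap.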
(Note that in practice generative adversarial networks are extremely finicky to train, at least partly for this reason.)
I think the other big problem is the complementary one: even an equally smart adversary can't reliably distinguish a crappy simulation from a good simulation. (A dumb example: no distinguisher can detect a steganographically encoded message, even though carrying one implies the simulation was poor.)
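The steganography point can be made precise with a one-time-pad sketch (my own illustration, assuming the "true process" emits uniformly random bytes): XOR-ing a message with a uniform key produces output whose distribution is exactly uniform, so no distinguisher, however smart, can tell it from the true process by looking at outputs alone; yet the output smuggles a message the true process never contained.

```python
import secrets

def cover_process(n):
    # The "true" process: n uniformly random bytes.
    return secrets.token_bytes(n)

def stego_process(message, key):
    # A "simulation" that secretly carries a message. Because the key is
    # uniform and independent of the message, message XOR key is also
    # uniform -- identical in distribution to cover_process, so the encoding
    # is statistically undetectable from the output.
    return bytes(m ^ k for m, k in zip(message, key))

def decode(stego, key):
    # Anyone holding the key recovers the message: the simulation was "poor"
    # in a way no distinguisher could have flagged.
    return bytes(s ^ k for s, k in zip(stego, key))

message = b"hidden payload"
key = secrets.token_bytes(len(message))
stego = stego_process(message, key)
assert decode(stego, key) == message
```

So "indistinguishable to an equally smart adversary" is not the same as "actually faithful": the distinguisher game is blind to exactly this kind of failure.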