Consider an AI system composed of many interacting subsystems, or a world containing many AI systems. Are you asking for safety even if one of these systems or subsystems becomes omniscient while the others do not? Clearly this would be a nice property to have if it were attainable, but it seems pretty ambitious. I'm also not convinced it's a big deal one way or the other, because I don't expect massive disparities in power to go unnoticed by the AI systems that are designing new AI systems during normal operation. So whether designing for such disparities is useful seems to depend on an empirical claim about the plausibility of big differentials.
You could restate your original point in terms of differentials: "if it fails for a large enough differential, then why think the real differential is small enough?" But I don't find this very compelling when we can say relatively precisely what kind of differential is small enough.