Many issues in AI alignment depend on what we can factually say about the general design space of cognitively powerful agents, or on which background assumptions yield which implications about advanced agents. The Orthogonality Thesis, for example, is a claim about the general design space of powerful AIs. That design space is very wide, and only very weak statements seem likely to hold true of the whole of it; but we can still try to assert conditionals of the form 'If X then Y', and refute claims of the form 'No need for if-X, Y happens anyway!'
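A minimal sketch of the logical structure being pointed at, using notation not in the original: let $\mathcal{D}$ stand for the design space of advanced agents, and let $X$ and $Y$ be properties an agent design may have.

$$\underbrace{\forall a \in \mathcal{D}:\ X(a) \rightarrow Y(a)}_{\text{conditional claim we can try to defend}} \qquad \text{versus} \qquad \underbrace{\forall a \in \mathcal{D}:\ Y(a)}_{\text{unconditional claim}}$$

Exhibiting even one design $a$ with $\neg Y(a)$ refutes the unconditional claim while leaving the conditional 'If X then Y' untouched, which is why a very wide design space favors conditional statements over unconditional ones.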