Comments

Eric Bruylant

Neat, I'm a contrarian. I guess I should explain why my credence is about 80% different from everyone else's :)
Obviously, being off earth would provide essentially no protection from a uFAI. It may, however, shift the odds of us getting an aligned AI in the first place.
Maybe this is because I'm reading the question as meaning more than most do (I only think it helps if the colony is well-established and significant), but by my models both the rate of technological progress and the ability to coordinate seem to be proportional to something like the density of awesome people with a non-terrible incentive structure. Filtering by "paid half a million dollars to get to Mars" and designing the incentive structure from scratch seems like an unusually good way to create a dense pocket of awesome people focused on important problems, in a way which is very hard to dilute.
I claim that if we have long enough timelines for a self-sustaining off-earth colony to be created, the first recursively self-improving AGI has a good chance of being built there. And that a strongly filtered group, immersed in other hard challenges and setting up its decision-making infrastructure intentionally rather than working with all the normal civilizational cruft, is more likely to coordinate on safety than earth-based teams.
I do not expect timelines to be long enough that this is an option, so I do not endorse this as a sane use of funding. But having an off-earth colony seems way, way more useful than a warm scarf.
I would agree with:
There are currently much better ways to reduce AI x-risk than funding off-earth colonies. (~96%)
It is unlikely that off-earth colonies will be sufficiently established in time to mitigate AI x-risk. (~77%)
Paul Christiano

I don't think the existence of such a colony would directly mitigate AI risk, but it could help in the same way that e.g. improved governance or public discourse could help. I think that over the long term, off-Earth colonies have a significant positive expected effect on institution quality (analogously with European colonization of North America). And "warm scarf" sets the bar low.