CFAR should explicitly focus on AI safety

https://arbital.com/p/6wx

by Stephanie Zolayvar Dec 16 2016


The Center for Applied Rationality has historically had a "cause-neutral" mission, but it recently revised that mission to focus in part on AI safety efforts.


Comments

Anna Salamon

I want a wrong question button!! :/

Timothy Chu

Addressing the post, a focus on AI risk feels like something worth experimenting with.

My lame model suggests that the main downside is the risk to the brand. If so, experimenting with AI risk in the CFAR context seems like a potentially high-value avenue of exploration, and brand damage can be mitigated.

For example, if the focus turned out to be toxic for the CFAR brand, the same group of people could spin off a new program under a different name, and people may not remember or care that it was the old CFAR folks.

Connor Flexman

Along with "Growing EA is net-positive", anything with a large search space + value judgment seems like it's going to have this issue.