"Of course, the game is typi..."

https://arbital.com/p/1fh

by Paul Christiano Dec 28 2015


Of course, the game is typically about costs and benefits. Saying "it is good to adopt the security mindset" is (often) an implicit claim about the relative costs of extra work vs. attacks. It's not totally clear whether this article is making a similar claim, but it sounds like it.
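
Schematically (the notation here is mine, just to make the implicit claim concrete): adopting the mindset pays off when something like

$$C_{\text{work}} < p_{\text{attack}} \cdot C_{\text{attack}}$$

holds, where $C_{\text{work}}$ is the cost of the extra engineering effort, $p_{\text{attack}}$ is the probability of a successful attack without that effort, and $C_{\text{attack}}$ is the damage such an attack would do.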

In terms of costs and benefits, the AI case is quite different from typical security applications.

In the case of my disagreements with MIRI (which I think are relatively mild in this domain), here is how things look to me:

In this case, the balance is not between "extra work" and "failure." It is between "failing because of X" and "failing because of Y." So to make the argument that addressing Y deserves priority, you need to do one of:

- Argue that failing because of Y is significantly more likely than failing because of X.
- Argue that addressing Y sacrifices relatively little of our ability to address X.

(The situation is a bit more subtle than that, since at this point we are mostly arguing about whether a particular class of research problems is promising, or about whether any AI control approaches informed by that research would inevitably be rejected by a more conservative approach. But that dichotomy gives the general picture.)
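
Schematically again, under the simplifying assumption that the two failure probabilities just add up: with a fixed budget of effort split as $e_X + e_Y$ across the two failure modes, shifting effort toward Y lowers overall risk only when

$$\left| \frac{\partial}{\partial e_Y} P(\text{fail because of } Y) \right| > \left| \frac{\partial}{\partial e_X} P(\text{fail because of } X) \right|$$

at the current allocation. The two arguments above are the informal ways of claiming that the left-hand side dominates: either Y-failures are so much more probable that there is a lot of risk left to remove, or effort spent on Y is unusually cheap in terms of X.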

I don't think that such an argument has really been made yet, and the attempted arguments seem to mostly go through claims about future AI progress (especially with respect to fast takeoff) that I find pretty implausible.

So: my inclination is to go on being relatively unconservative (with respect to these particular concerns), and then to shift towards the security mindset once we start to understand the landscape of approaches to AI control that could actually work.

My guess is that a similar strategy would have been appropriate in the early days of cryptography. The first order of business was to find the ideas needed for a plausible, practical infrastructure for secure communication.