

by Kenzi Amodei Jun 21 2015

"Boundedly rational" means rational even when you don't have infinite computing power? "Naturalistic" refers to naturalized induction, where you're not a Cartesian dualist who thinks your processes can't be messed with by stuff in the world, and you're also not just thinking of yourself as a little black dot in the middle of Conway's Game of Life? Google says an economic agent is one who has an impact on the economy by buying, selling, or trading; I assign 65% to that being roughly the meaning in use here.

Somehow the epistemic efficiency thing reminds me of the halting problem; that whatever we try to do, it can just do it more. Or… somehow it actually reminds me more of the other way around: that it's solved the halting problem on us. Apologies for abuse of technical terms.
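For what it's worth, the halting problem I'm abusing here is the claim that no program can decide, for every program-input pair, whether it halts. A minimal sketch of the classic diagonalization, assuming a hypothetical oracle `halts(f, x)` (the names are mine, purely for illustration):

```python
def halts(f, x):
    """Hypothetical oracle claiming to decide whether f(x) terminates.

    No such total, correct decider can exist, so this stub just raises.
    """
    raise NotImplementedError("no such decider exists")

def diagonal(f):
    """The adversarial program: do the opposite of whatever the oracle predicts."""
    if halts(f, f):
        while True:  # oracle said f(f) halts, so loop forever
            pass
    return "halted"  # oracle said f(f) loops, so halt immediately

# Feeding diagonal to itself makes any answer from `halts` wrong:
# if halts(diagonal, diagonal) returns True, diagonal(diagonal) loops forever;
# if it returns False, diagonal(diagonal) halts. Either way the oracle erred.
```

So "it's solved the halting problem on us" is loose talk: the theorem forbids a decider for all programs, not a smarter agent out-predicting a particular bounded one like us.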

So an epistemically efficient agent, for example, has already overcome all the pitfalls you see in movies of "not being able to understand the human drive for self-sacrifice", or love, and so on.

Is there an analogue of efficient markets for instrumental efficiency? Some sort of master-strategy-outputting process that exists (or maybe plausibly exists in at least some special cases) in our world? Maybe Deep Blue at chess, I guess? Google Maps for driving directions (for the most part)? [reads to next paragraph] Well, I'm not sure whether to update against Google Maps being an example from the fact that it's not mentioned in the "instrumentally efficient agents are presently unknown" section.

That said, "outside very limited domains": well, I guess "the whole stock market, mostly" is a fair bit broader than "chess" or even "driving directions". Ah, I see; so although chess programs are overall better than humans, they're not yet hitting the "every silly-looking move is secretly brilliant" bar. Oh, and that's definitely not true of Google Maps: if it looks like it's making you do something stupid, you should assign something like 40% to it in fact being stupid. Got it.

I can't tell if I should also be trying to think about whether there's a reasonable de


Kenzi Amodei

I seem to have found max comment length? Here's the rest:

I can't tell if I should also be trying to think about whether there's a reasonable definition of "the goals of Google Maps" under which it actually is maximizing its goals right now in a way we can't improve on. I don't think there is one?

I don't know why this hasn't happened to corporations. You'd think someone would try it at some point, and that if it actually worked pretty well it would eventually allow them to outcompete the rest. Even if it were the sort of innovation that meant you had to climb uphill for a bit, you'd expect people to keep periodically trying, and for one of them eventually to overcome the activation-energy barrier?