Ad-hoc hack (alignment theory)

https://arbital.com/p/hack

by Eliezer Yudkowsky May 18 2016 updated May 18 2016

A "hack" is when you alter the behavior of your AI in a way that defies, or doesn't correspond to, a principled approach for that problem.


An "ad-hoc hack" is when you modify or patch the algorithm of the AI with regards to something that would ordinarily have simple, principled, or nailed-down structure, or where it seems like that part ought to have some simple answer instead. E.g., instead of defining a von Neumann-Morgenstern coherent utility function, you try to solve some problem by introducing something that's almost a VNM utility function but has a special case in line 3 which activates only on Tuesday. This seems unusually likely to break other things, e.g. reflective consistency, or anything else that depends on the coherence or simplicity of utility functions. Such hacks should be avoided in advanced-agent designs whenever possible, for analogous reasons to why they would be avoided in [cryptographic_analogy cryptography] or [probe_analogy designing a space probe]. It may be interesting and productive anyway to look for a weird hack that seems to produce the desired behavior, because then you understand at least one system that produces the behavior you want - even if it would be unwise to actually build an AGI like that, the weird hack might give us the inspiration to find a simpler or more coherent system later. But then we should also be very suspicious of the hack, and look for ways that it fails or produces weird side effects.

An example of a productive weird hack was [Benya_Fallenstein]'s Parametric Polymorphism proposal for tiling agents. You wouldn't want to build a real AGI like that, but it was helpful for showing what could be done: which properties could definitely be obtained together within a tiling agent, even if by a weird route. This in turn helped suggest less hacky proposals later.