Solving Intelligence

https://arbital.com/p/8w6

by Lancelot Verinia Dec 11 2017

What does it mean exactly to "solve intelligence"?


I decided some months ago that I wanted to "solve" intelligence. At the time, I had no concrete idea what I meant by "solve". I knew I wanted to develop Human Level Machine Intelligence (HLMI), but not much beyond that. After some meditation (and learning a little more about AI), I settled on what exactly it is I want to do: solve the theoretical stumbling blocks limiting progress in Artificial General Intelligence (AGI). In particular, I want to take a first-principles approach: formulate intelligence (and intelligent agents) from first principles, then refine the theory this approach generates. I do not expect to be in a position to make progress on these goals for the next 4 to 6 years (depending on how my education progresses), so this post will be edited in the future to better reflect my current position. It should be read as a statement of desires which I intend to pursue in a postgraduate program, and then (if all goes well) as postdoctoral research.

My goal can be summarised as developing a satisfactory model of intelligence; doing for intelligence what has been done for computation. A good example of such a model is the Turing machine (a toy simulator is sketched after the list below). Some criteria which the Turing machine model satisfies and/or which I would want in my model (in no particular order) are:

  1. Timelessness: New practical advancements in the field should not render the model obsolete.
  2. Explanatory power: The model should explain the phenomenon being modelled. It should serve as a framework through which we understand and can reason about what we're modelling. Using the model to reason about the phenomenon should take less mental bandwidth than reasoning about the phenomenon in the abstract. The model should reduce the inferential distance between us and whatever it is we're trying to learn, and it should reduce (not increase) the complexity of our mental map of the phenomenon.
  3. Accuracy: The model should be accurate. It should cut reality at its joints and correspond to whatever it is we're trying to model.
  4. Predictive power: We should be able to make (falsifiable) predictions about the phenomenon we're trying to model. A good model helps constrain our anticipations of observations regarding the phenomenon. This ties back into the accuracy of the model: if we discover new relationships in the model, they should correspond to relationships in the real world. Turing machines wouldn't be a very good (universal) model of computation if super-Turing computation were feasible.
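
To make the computation analogy concrete, here is a minimal sketch (my own illustration, not part of the original post) of how little machinery the Turing machine model needs. An implementable model of intelligence would have to be specifiable at something like this level of precision:

```python
# A toy single-tape Turing machine simulator. The point is that a good model
# of computation fits in a few lines, which is the bar an implementable
# model of intelligence would also have to clear.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine until it halts or exhausts max_steps.

    transitions maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right). The machine halts when no
    transition is defined for the current (state, symbol) pair.
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break  # no rule for this configuration: halt
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    if not cells:
        return state, ""
    return state, "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# Example: a two-rule machine that flips every bit, halting at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
print(run_turing_machine(flip, "0110"))  # ('start', '1001')
```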

The above is by no means a complete list. If the model is not useful, then the goal was not achieved. The principal aim is an implementable model of intelligence: a model that would enable the construction of a provably optimal intelligent agent (I expect my analysis of intelligence to be asymptotic and resource-independent, so "provably optimal" means "there does not exist a more efficient and/or effective algorithm"). If theoretical research doesn't lead to HLMI, then it's not a victory.
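
For concreteness, one existing formalisation in this spirit (my pointer, not something named in the plan above) is Legg and Hutter's universal intelligence measure, which scores an agent $\pi$ by its expected reward across all computable environments, weighted by simplicity:

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}$$

where $E$ is the class of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V_{\mu}^{\pi}$ is the expected cumulative reward $\pi$ earns in $\mu$. Hutter's AIXI is the agent that maximises this measure: it is optimal in exactly the asymptotic, resource-independent sense above, yet incomputable, which illustrates why "implementable" is a separate and harder requirement.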

In order to develop a model of intelligence, I expect I'll take the following research path.

Goals

  1. Foundations of Intelligence
  2. Formalise learning (one existing formalisation is sketched after this outline)
     - Bonus
  3. Formalise knowledge
     - Bonus
  4. "Solve" Intelligence
     - Bonus
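
As a sketch of what "formalise learning" could mean (my gloss on the step above, using Valiant's existing PAC model rather than anything new): a concept class $\mathcal{C}$ is PAC-learnable if there is an algorithm $A$ such that for every target $c \in \mathcal{C}$, every distribution $D$ over inputs, and every $\varepsilon, \delta \in (0, 1)$, given polynomially many (in $1/\varepsilon$ and $1/\delta$) labelled examples drawn i.i.d. from $D$, $A$ outputs a hypothesis $h$ satisfying

$$\Pr_{x \sim D}\left[ h(x) \neq c(x) \right] \le \varepsilon$$

with probability at least $1 - \delta$. Any formalisation of learning I produce would have to at least recover guarantees of this shape.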

Nota Bene

"Develop" doesn't mean that one doesn't already exist, more that I plan to improve on already existing models, or if needed build one from scratch. The aim is model that is satisfactorily (for a very high criteria for satisfy) useful (the criteria I listed above is my attempt at dissolving "useful". The end goal is a theory that can be implemented to build HLMI). I don't plan to (needlessly) reinvent the wheel. When I set out to pursue my goal of formalising intelligence, I would build on the work of others in the area).