# Solving Intelligence

I decided some months ago that I wanted to "solve" intelligence. At the time, I had no concrete idea what I meant by "solve". I knew I wanted to develop Human Level Machine Intelligence (HLMI), but not much beyond that. After some meditation (and learning a little more about AI), I settled on what exactly I wanted to do: resolve the theoretical stumbling blocks limiting progress in Artificial General Intelligence (AGI). In particular, I wanted to take a first-principles approach: formulate intelligence (and intelligent agents) from first principles, then refine the theory this approach generates. I do not expect to be in a position to make progress on these goals for the next 4 to 6 years (depending on how my education progresses), so this post will be edited in the future to better reflect my current position. It should be read as a statement of intent: aims I plan to pursue in a postgraduate program and then, if all goes well, as postdoctoral research.

My goal can be summarised as developing a satisfactory model of intelligence: doing for intelligence what has been done for computation. An example of a good model is the Turing machine (a minimal simulator sketch follows the list below). Some criteria which seem desirable in the Turing machine model and/or which I would want in my model (in no particular order) are:

1. **Timelessness:** New practical advancements in the field should not render the model obsolete.
2. **Explanatory power:** The model should explain the phenomenon being modelled. It should serve as a framework through which we understand and can reason about what we're modelling: using the model should take less mental bandwidth than reasoning about the phenomenon in the abstract, reduce the inferential distance between us and whatever we're trying to learn, and reduce (not increase) the complexity of our mental map of the phenomenon.
3. **Accuracy:** The model should be accurate. It should cut reality at its joints and correspond to whatever it is we're trying to model.
4. **Predictive power:** We should be able to make falsifiable predictions about the phenomenon we're trying to model. A good model helps constrain our anticipated observations, which ties back into accuracy: if we discover new relationships in the model, they should correspond to relationships in the real world. Turing machines wouldn't be a very good (universal) model of computation if super-Turing computation were feasible.
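To make the Turing machine example concrete, here is a minimal simulator sketch in Python. The encoding (a transition table keyed on state and symbol, `_` as the blank symbol, and a binary-increment example machine) is an illustrative choice, not any canonical formulation:

```python
# A minimal Turing machine simulator. Everything here (the halting
# conventions, the blank symbol '_', the example machine) is an
# illustrative sketch, not a canonical definition.
from collections import defaultdict

def run_tm(transitions, accept_states, tape, state='q0', steps=10_000):
    """Run a single-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right). The machine halts on an
    accept state, when no transition applies, or after `steps` steps.
    """
    cells = defaultdict(lambda: '_', enumerate(tape))
    head = 0
    for _ in range(steps):
        if state in accept_states:
            break
        key = (state, cells[head])
        if key not in transitions:
            break  # no applicable rule: halt
        state, cells[head], move = transitions[key]
        head += move
    out = ''.join(cells[i] for i in range(min(cells), max(cells) + 1))
    return state, out.strip('_')

# Example machine: increment a binary number. Scan right to the end of
# the input, then propagate the carry leftward.
rules = {
    ('q0', '0'): ('q0', '0', +1),
    ('q0', '1'): ('q0', '1', +1),
    ('q0', '_'): ('carry', '_', -1),
    ('carry', '1'): ('carry', '0', -1),
    ('carry', '0'): ('done', '1', -1),
    ('carry', '_'): ('done', '1', -1),
}

print(run_tm(rules, {'done'}, '1011'))  # ('done', '1100'), i.e. 11 + 1 = 12
```

The value of the model is criterion 2 in action: once computation is packaged as states, a tape, and a transition table, questions about what computers can do become questions about this one small object.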
These criteria are by no means a complete list. If the model is not useful, then the goal has not been achieved. The principal aim is an implementable model of intelligence: a model that would enable the construction of a provably optimal intelligent agent (I expect my analysis of intelligence to be asymptotic and resource-independent, so "provably optimal" means "there does not exist a more efficient and/or effective algorithm"). If the theoretical research doesn't lead to HLMI, then it's not a victory.

To develop a model of intelligence, I expect to take the following research path.

# Goals

## Foundations of Intelligence

* Define "intelligence".
* Develop a model of intelligence.
* Develop a method for quantifying and measuring the intelligence of arbitrary agents in agent space (a toy sketch of one such measure appears after these lists).
* Understand intelligence and what makes certain agent designs produce more intelligent agents.
* Develop a hierarchy of intelligent agents over all of agent space.
* Answer: is there a limit to intelligence?

## Formalise Learning

* Develop a model of learning.
* Answer: what does it mean for one learning algorithm to be better than another?
* Develop a method for analysing and comparing (asymptotically, at least for now) the performance of learning algorithms: on a particular problem, across a particular problem class, and across problem space; using a particular knowledge representation system (KRS), using various KRS, and across the space of possible KRS.
* Understand what causes the differences in performance between learning algorithms.
* Determine the scope/extent of knowledge a given learning algorithm can learn.
* Develop a hierarchy of learning algorithms capturing the entire space of learning algorithms.
* Synthesise the results into a rigorous theory of learning ("learning theory").

### Bonus

* Develop a provably optimal (for some sensible definition of "optimal") learning algorithm.

## Formalise Knowledge

* Develop a model of knowledge and of KRS.
* Develop a method for quantifying and measuring "knowledge" (for example, we might consider the utility of the information contained, the complexity of that body of knowledge, and its form: structure, relationships, etc.).
* Develop a method for analysing and comparing KRS: using a particular learning algorithm, using various types of learning algorithms, and across the space of learning algorithms; on a particular problem, across a particular problem class, and across problem space.
* Determine the scope/extent of knowledge a given KRS can represent.
* Develop a theory for the transfer of knowledge among similar (for some sensible notion of "similarity") KRS, and among dissimilar KRS.
* Understand what makes certain KRS "better" (according to however we measure KRS) than others.
* Develop a hierarchy of KRS capturing the entire space of KRS.
* Synthesise the above results, together with learning theory, into a rigorous theory of knowledge ("knowledge theory").

### Bonus

* Develop a provably optimal (for some sensible definition of "optimal") KRS.
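For the goal of quantifying intelligence over agent space, a natural starting point already exists: Legg and Hutter's universal intelligence measure, $\Upsilon(\pi) = \sum_{\mu} 2^{-K(\mu)} V_\mu^\pi$, which scores an agent $\pi$ by its complexity-weighted expected reward across environments $\mu$. Below is a toy, finite sketch of the idea; the two-armed bandit environments, the hand-assigned complexity values standing in for $K(\mu)$, and both agents are illustrative stand-ins, not part of the actual measure:

```python
# A toy, finite sketch of a Legg-Hutter-style intelligence measure:
# score an agent by complexity-weighted average reward over a small
# class of environments. All numbers here are illustrative.
import random

class Bandit:
    """Two-armed bandit; `complexity` is a hand-assigned stand-in for K(env)."""
    def __init__(self, p0, p1, complexity):
        self.p = (p0, p1)
        self.complexity = complexity
    def reset(self):
        return 0  # observations are trivial in this environment class
    def step(self, action):
        reward = 1.0 if random.random() < self.p[action] else 0.0
        return 0, reward

class EpsilonGreedy:
    """A simple learner: explore with probability eps, else pick the best arm."""
    def __init__(self, eps=0.1):
        self.eps, self.counts, self.sums, self.last = eps, [0, 0], [0.0, 0.0], 0
    def act(self, obs):
        if random.random() < self.eps or 0 in self.counts:
            self.last = random.randrange(2)  # try each arm at least once
        else:
            self.last = max((0, 1), key=lambda a: self.sums[a] / self.counts[a])
        return self.last
    def learn(self, obs, reward):
        self.counts[self.last] += 1
        self.sums[self.last] += reward

class RandomAgent:
    """Baseline: no learning at all."""
    def act(self, obs):
        return random.randrange(2)
    def learn(self, obs, reward):
        pass

def value(make_agent, env, episodes=200, horizon=50):
    """V(agent, env): mean per-step reward over fresh-agent episodes."""
    total = 0.0
    for _ in range(episodes):
        agent, obs = make_agent(), env.reset()
        for _ in range(horizon):
            action = agent.act(obs)
            obs, reward = env.step(action)
            agent.learn(obs, reward)
            total += reward
    return total / (episodes * horizon)

def upsilon(make_agent, envs):
    """Complexity-weighted score: sum over envs of 2^-K(env) * V(agent, env)."""
    return sum(2.0 ** -env.complexity * value(make_agent, env) for env in envs)

envs = [Bandit(0.9, 0.1, 1), Bandit(0.2, 0.8, 2), Bandit(0.5, 0.6, 3)]
print(upsilon(EpsilonGreedy, envs))  # the learner should score higher...
print(upsilon(RandomAgent, envs))    # ...than the non-learning baseline
```

Even in this toy setting the measure behaves as intended: the learner should outscore the non-learning baseline, with simpler (higher-weight) environments dominating the total.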
\n \n##"Solve" Intelligence \n* Synthesise all of the above into a *useful* theory of intelligent agents.\n###Bonus \n* Develop a provably optimal (for some sensible definition of "optimal") intelligent agent.\n \n#Nota Bene\n"Develop" doesn't mean that one doesn't already exist, more that I plan to improve on already existing models, or if needed build one from scratch. The aim is model that is satisfactorily (for a very high criteria for satisfy) useful (the criteria I listed above is my attempt at dissolving "useful". The end goal is a theory that can be implemented to build HLMI). I don't plan to (needlessly) reinvent the wheel. When I set out to pursue my goal of formalising intelligence, I would build on the work of others in the area).\n', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'LancelotVerinia' ], childIds: [], parentIds: [], commentIds: [], questionIds: [], tagIds: [], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22934', pageId: '8w6', userId: 'LancelotVerinia', edit: '1', type: 'newEdit', createdAt: '2017-12-11 06:21:35', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'false', hasParents: 'false', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }