{ localUrl: '../page/80g.html', arbitalUrl: 'https://arbital.com/p/80g', rawJsonUrl: '../raw/80g.json', likeableId: '0', likeableType: 'page', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], pageId: '80g', edit: '1', editSummary: '', prevEdit: '0', currentEdit: '1', wasPublished: 'true', type: 'wiki', title: 'Teleological Measure and agency', clickbait: 'A crackpot's research proposal', textLength: '6250', alias: '80g', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: 'approval', votesAnonymous: 'false', editCreatorId: 'JaimeSevillaMolina', editCreatedAt: '2017-03-02 16:45:54', pageCreatorId: 'JaimeSevillaMolina', pageCreatedAt: '2017-03-02 16:45:54', seeDomainId: '0', editDomainId: 'arbital_featured_project', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '0', isEditorComment: 'false', isApprovedComment: 'false', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '15', text: '\n##Introduction and motivation\nCertain physical processes admit an elegant characterization when described as systems obeying an optimization law rather than as complex systems of many particles interacting through mechanical rules.\n\nOne fascinating example is the law of least action, which abstracts the entire behaviour of a physical system into the minimization of the integral of the Lagrangian.\n\nOn a different level, we humans reason about our fellows by characterizing them as having "goals" that they try to achieve. Furthermore, it is natural for us to attribute intentions to processes of the physical world when attempting to explain their behaviour.\n\nAll of this points to the usefulness of a teleological point of view when dealing with abstract reasoning, but we are still confused about the notion of a goal. The goal (geddit?) of this essay is to introduce a new conceptual framework defining a non-anthropocentric notion of goal, hoping that it will capture some of the intuitions we have about this concept.\n\nWe then apply the framework to define a new concept of agent.\n\n##Teleological measure\nWe will model the universe as a computable mathematical entity which evolves in discrete steps according to a fully deterministic function mapping states of the universe to one another.\n\n(here goes justification of why computable and deterministic)\n\nIntuitively, it seems that powerful optimization processes tend to coerce the universe to align with their goals.\n\nIn more abstract terms, in universes where the optimization process exists the evolution function behaves contractively, with the states "preferred" by the optimizer acting as attractors.\n\nThis suggests that by repeatedly applying the function to the universes where the optimizer exists we can derive its goals as a utility function over the set of possible states of the universe.\n\nWe can use measure theory to try to formalize this as follows (a toy computational sketch follows the list):\n\n1. Start with a mass function over the set of computable universes, i.e. the Occamian prior.\n2. Keep only those universes in which the target optimization process exists, and renormalize the measure over them.\n3. For each timestep, apply the evolution function and associate to each universe the sum of the measures of the states which get mapped to it.\n4. Now associate to each universe the sum over all timesteps of its associated measure, discounted by a time factor to ensure convergence.\n$$U(S) = \\sum_{t=0}^\\infty \\sum_{\\{S' \\in Universes \\mid f^t(S') = S\\}} \\Pi(S') \\cdot \\gamma^t$$\n5. ???\n6. Profit!
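\n\nTo make the construction concrete, here is a minimal Python sketch of the procedure on a toy finite universe, truncating the infinite sum of step 4 at a finite horizon. The state space, the drift-toward-zero dynamics, the stand-in prior, and the `optimizer_exists` predicate are all assumptions made up for illustration rather than part of the proposal.\n
```python
from collections import defaultdict

def teleological_measure(states, step, prior, optimizer_exists,
                         gamma=0.9, horizon=50):
    """Approximate U(S) = sum_t gamma^t * sum over {S' : f^t(S') = S} of Pi(S'),
    conditioning on the optimizer existing and truncating the sum at `horizon`."""
    # Step 2: keep only the universes where the optimizer exists and renormalize.
    mass = {s: prior[s] for s in states if optimizer_exists(s)}
    total = sum(mass.values())
    mass = {s: m / total for s, m in mass.items()}

    utility = defaultdict(float)
    for t in range(horizon):
        # Step 4: accumulate the discounted mass sitting on each state at time t.
        for s, m in mass.items():
            utility[s] += (gamma ** t) * m
        # Step 3: push the mass forward through the evolution function.
        new_mass = defaultdict(float)
        for s, m in mass.items():
            new_mass[step(s)] += m
        mass = new_mass
    return dict(utility)


# Toy universe: states 0..9, the dynamics drift toward the attractor 0, the prior
# is a stand-in for an Occamian prior, and "the optimizer exists" in even states.
# All of these choices are made up purely for illustration.
states = range(10)
step = lambda s: max(s - 1, 0)
prior = {s: 2.0 ** -(s + 1) for s in states}
U = teleological_measure(states, step, prior, lambda s: s % 2 == 0)
print(max(U, key=U.get))  # prints 0: the attractor accumulates the most measure
```
\nIn this toy run the attractor state ends up with by far the largest measure, which is the kind of behaviour the teleological measure is meant to capture.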
\n\n##The problem of separation\nThe conditioning process of step two is the most difficult part to conceptualize, as it requires identifying a process happening inside the universe.\n\nThis is especially hard to do since we have not defined what a process is, or even whether there is a meaningful concept of a process which is separate from the rest of the universe.\n\nIn the same sense that you cannot split an interacting physical system into separate Lagrangians, you cannot know where a process begins and ends.\n\nWe tend to think of processes as information patterns, but the boundaries between them and the rest of the universe remain blurry.\n\n##Agency\nAmong the optimization processes of the Universe, a certain set becomes salient. We humans, and other animals, seem to have a degree of independence which distinguishes us from rocks and stars.\n\nIt is hard to pinpoint exactly what makes those processes, which we will call 'agents', different. One suggestion is the degree of information they possess about the external world (insofar as the concept of an external world makes sense).\n\nIt seems that agents can use their knowledge about the world to make better decisions. We call this property **fractality**, because ~~we are snobbish elitists~~ it measures the degree of entanglement of a process with its environment, akin to how a portion of a fractal contains information about its whole structure.\n\nHowever, it seems that it is not actual fractality which defines agency: a human dropped into an unfamiliar universe would have very little information about its environment, yet it would not lose its agency. Thus it makes more sense to relate agency to potential fractality.\n\nOn the other hand, there seem to be some trivial counterexamples. We can imagine an encyclopaedia, which has lots of fractality but intuitively very little agency.\n\nThus we need to refine the concept. Another addition we can make is the degree of **influence** a process has over the universe.\n\nDefining influence is tricky as well, but we can employ the teleological measure framework for an initial approximation. The influence of a process relates to the flatness of its teleological measure; flatter functions indicate the absence of powerful attractors, and thus a not very influential process.\n\n##Relationship with physical teleology\nPerhaps the most important example of a physical teleological framework is the concept of **entropy**.\nEntropy can be thought of as a function over system states which describes the statistical behaviour of the system, since the system naturally progresses from states of low entropy to states of greater entropy.\n\nWe would expect entropy to roughly correspond to the teleological measure of the universe as a whole agent; i.e., what results from omitting the filtering step.\n\n##Reversibility\nOne big problem with this approach is that it results in rather dull measures when the evolution function of the universe is reversible, as our formulations of quantum mechanics suggest it is: if every state has exactly one predecessor, the measure is merely shuffled around and never concentrates on attractors.\n\nThen again, the concept of entropy fails in this scenario too. However, we can adopt the solution employed by physics: we define an equivalence relation over the graph of possible universes, grouping together similar states, and then define a new evolution function as a stochastic extension of the old one, which maps each class of states to another class (or to itself) at random, according to the proportion of arrows going from members of the class to the target class (see the sketch below).\n\nThis eliminates the problem of in-degree $1$ which resulted in boring measures.
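\n\nAs a sketch of this coarse-graining, the following Python snippet builds the stochastic class-to-class transition from a toy reversible dynamics (a rotation of eight states, so every state has exactly one predecessor) and an arbitrary grouping into two classes; the dynamics, the equivalence relation, and the helper name `coarse_grain` are all made up for illustration.\n
```python
from collections import Counter, defaultdict

def coarse_grain(states, step, key):
    """Group states into equivalence classes by `key`, then build the stochastic
    class-to-class transition: P(A -> B) is the fraction of arrows leaving
    members of A that land in members of B."""
    classes = defaultdict(list)
    for s in states:
        classes[key(s)].append(s)

    transition = {}
    for label, members in classes.items():
        arrows = Counter(key(step(s)) for s in members)
        total = sum(arrows.values())
        transition[label] = {target: n / total for target, n in arrows.items()}
    return transition


# Toy reversible dynamics: a rotation of 8 states (a permutation, so every state
# has exactly one predecessor), coarse-grained into two arbitrary classes.
step = lambda s: (s + 1) % 8
T = coarse_grain(range(8), step, key=lambda s: s // 4)
print(T)  # {0: {0: 0.75, 1: 0.25}, 1: {1: 0.75, 0: 0.25}}
```
\nOnce the evolution is stochastic in this way, in-degrees are no longer forced to be $1$, so the teleological measure defined above can again concentrate on attracting classes.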
\n\n##Mathematical properties of the teleological measure\nTo be researched.\n\n', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'JaimeSevillaMolina' ], childIds: [], parentIds: [ 'JaimeSevillaMolina' ], commentIds: [], questionIds: [], tagIds: [], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22230', pageId: '80g', userId: 'JaimeSevillaMolina', edit: '0', type: 'turnOffVote', createdAt: '2017-03-02 16:46:24', auxPageId: '', oldSettingsValue: 'true', newSettingsValue: 'false' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22229', pageId: '80g', userId: 'JaimeSevillaMolina', edit: '0', type: 'newParent', createdAt: '2017-03-02 16:45:56', auxPageId: 'JaimeSevillaMolina', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '22227', pageId: '80g', userId: 'JaimeSevillaMolina', edit: '1', type: 'newEdit', createdAt: '2017-03-02 16:45:54', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'false', hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }