# Ontology identification problem

[summary: It seems likely that for advanced agents, the agent's representation of the world will [5d change in unforeseen ways as it becomes smarter]. The ontology identification problem is to create a [5f preference framework] for the agent that optimizes the same external facts, even as the agent modifies its representation of the world. [6b For example], if the [6h intended goal] were to [5g create large amounts of diamond material], one type of ontology identification problem would arise if the programmers thought of carbon atoms as primitive during the AI's development phase, and then the advanced AI discovered nuclear physics.]

[toc:]

# Introduction: The ontology identification problem for unreflective diamond maximizers

A simplified but still very difficult open problem in [2v] is to state an unbounded program implementing a [5g diamond maximizer] that will turn as much of the physical universe into diamond as possible. The goal of "making diamonds" was chosen to have a crisp-seeming definition for our universe (the amount of diamond is the number of carbon atoms covalently bound to four other carbon atoms). If we can crisply define exactly what a 'diamond' is, we can avert issues of trying to convey [5l complex values] into the agent. (The [5g unreflective diamond maximizer] putatively has [ unlimited computing power], runs on a [ Cartesian processor], and confronts no other agents [ similar to itself]. This averts many other problems of [71 reflectivity], [18s decision theory] and [2v value alignment].)

Even with a seemingly crisp goal of "make diamonds", we might still run into two problems if we tried to write a [5t hand-coded object-level utility function] that [ identified] the amount of diamond material:

- Unknown substrate: We might not know the true, fundamental ontology of our own universe, hence not know what stuff diamonds are really made of. (What exactly is a carbon atom? If you say it's a nucleus with six protons, what's a proton? If you define a proton as being made of quarks, what if there are unknown other particles underlying quarks?)
 - It seems intuitively like there ought to be some way to identify carbon atoms to an AI that doesn't depend on talking about quarks. Doing this is part of the ontology identification problem.
- Unknown representation: We might crisply know what diamonds are in our universe, but not know how to find diamonds inside the agent's model of the environment.
 - Again, it seems intuitively like it ought to be possible to identify diamonds in the environment, even if we don't know details of the agent's exact internal representation. Doing this is part of the ontology identification problem.

To introduce the general issues in ontology identification, we'll try to walk through the [ anticipated difficulties] of constructing an unbounded agent that would maximize diamonds, by trying specific methods and suggesting [ anticipated difficulties] of those methods.

## Difficulty of making AIXI-tl maximize diamond

The classic unbounded agent - an agent using far more computing power than the size of its environment - is [11v]. Roughly speaking, AIXI considers all computable hypotheses for how its environment might be turning AIXI's motor outputs into AIXI's sensory inputs and rewards. We can think of AIXI's hypothesis space as including all Turing machines that, sequentially given AIXI's motor choices as inputs, will output a sequence of predicted sense items and rewards for AIXI. The finite variant AIXI-tl has a hypothesis space that includes all Turing machines that can be specified using fewer than $l$ bits and run in less than time $t$.

One way of seeing the difficulty of ontology identification is to consider why it would be difficult to make an AIXI-tl variant that maximized 'diamonds' instead of 'reward inputs'.

The central difficulty here is that there's no way to find 'diamonds' inside the implicit representations of AIXI-tl's sequence-predicting Turing machines. Given an arbitrary Turing machine that is successfully predicting AIXI-tl's sense inputs, there is no general rule for how to go from the representation of that Turing machine to a statement about diamonds or carbon atoms. The highest-weighted Turing machines that have best predicted the sensory data so far presumably contain *some* sort of representation of the environment, but we have no idea how to get 'the number of diamonds' out of it.

If AIXI has a webcam, then the final outputs of the Turing machine are predictions about the stream of bits produced by the webcam, going down the wire into AIXI. We can understand the meaning of that Turing machine's output predictions; those outputs are meant to match types with the webcam's input. But we have no notion of anything else that Turing machine is representing. Even if somewhere in the Turing machine happens to be an atomically detailed model of the world, we don't know what representation it uses, or what format it has, or how to look inside it for the number of diamonds that will exist after AIXI's next motor action.

This difficulty ultimately arises from AIXI being constructed around a [ Cartesian] paradigm of [ sequence prediction], with AIXI's sense inputs and motor outputs being treated as sequence elements, and the Turing machines in its hypothesis space having inputs and outputs matched to the sequence elements and otherwise being treated as black boxes. This means we can only get AIXI to maximize direct functions of its sensory input, not any facts about the outside environment.
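To make the type obstacle concrete, here is a minimal Python sketch with illustrative stand-in types (`Action`, `Percept`, `History`, and the reward convention are assumptions for the example, not part of any actual AIXI formalism). The point it illustrates: the hypotheses expose nothing but a percept-prediction interface, so the only utility functions we can literally write down are functions of percepts.

```python
from typing import Callable, Sequence, Tuple

# Illustrative stand-in types; assumptions for this sketch, not AIXI's actual formalism.
Action = int
Percept = bytes                                # e.g., one webcam frame plus a reward byte
History = Sequence[Tuple[Action, Percept]]     # the interaction sequence so far

# A hypothesis, as AIXI-tl sees it: a black box from interaction history and a
# candidate action to a predicted next percept.  Nothing else is exposed.
Hypothesis = Callable[[History, Action], Percept]

# The only kind of goal this interface supports: a function of the percept sequence.
PerceptUtility = Callable[[Sequence[Percept]], float]

def reward_utility(percepts: Sequence[Percept]) -> float:
    """Sum a reward signal assumed (for this sketch) to be the last byte of each percept."""
    return float(sum(p[-1] for p in percepts if p))

# What a diamond maximizer would need, but which cannot be written here, because the
# black-box hypotheses expose no environment-state type in which to count diamonds:
# EnvironmentUtility = Callable[[EnvironmentState], float]   # no such type exists
```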
(We can't make AIXI maximize diamonds by making it want *pictures* of diamonds because then it will just, e.g., [ build an environmental subagent that seizes control of AIXI's webcam and shows it pictures of diamonds]. If you ask AIXI to show itself sensory pictures of diamonds, you can get it to show its webcam lots of pictures of diamonds, but this is not the same thing as building an environmental diamond maximizer.)

## Agent using classical atomic hypotheses

As an [ unrealistic example]: Suppose someone was trying to define 'diamonds' to the AI's utility function, and suppose they knew about atomic physics but not nuclear physics. Suppose they build an AI which, during its development phase, learns about atomic physics from the programmers, and thus builds a world-model that is based on atomic physics.

Again for purposes of [ unrealistic examples], suppose that the AI's world-model is encoded in such fashion that when the AI imagines a molecular structure - represents a mental image of some molecules - carbon atoms are represented as a particular kind of basic element of the representation. Again, as an [ unrealistic example], imagine that there are [ little LISP tokens] representing environmental objects, and that the environmental-object-type of carbon-objects is encoded by the integer 6. Imagine also that each atom, inside this representation, is followed by a list of the other atoms to which it's covalently bound. Then when the AI is imagining a carbon atom participating in a diamond, inside the representation we would see an object of type 6, followed by a list containing exactly four other 6-objects.

Can we fix this representation for all hypotheses, and then write a utility function for the AI that counts the number of type-6 objects that are bound to exactly four other type-6 objects? And if we did so, would the result actually be a diamond maximizer?
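Before turning to that question, here is a minimal sketch of what such a hand-coded utility function might look like, assuming the unrealistic toy representation just described (the `Atom` record and the type code 6 are stand-ins for whatever encoding the AI actually uses):

```python
from dataclasses import dataclass, field
from typing import List

CARBON = 6  # the hypothetical type code for carbon in this toy representation

@dataclass
class Atom:
    type_code: int
    bonds: List["Atom"] = field(default_factory=list)   # atoms it is covalently bound to

def diamondness(model: List[Atom]) -> int:
    """Count type-6 objects bound to exactly four other type-6 objects."""
    return sum(
        1
        for atom in model
        if atom.type_code == CARBON
        and len(atom.bonds) == 4
        and all(neighbor.type_code == CARBON for neighbor in atom.bonds)
    )

# Toy check: a carbon bound to four carbons counts; a methane-like carbon does not.
carbons = [Atom(CARBON) for _ in range(4)]
hydrogens = [Atom(1) for _ in range(4)]
assert diamondness([Atom(CARBON, carbons)] + carbons) == 1
assert diamondness([Atom(CARBON, hydrogens)] + hydrogens) == 0
```

Note that this function only makes sense if every hypothesis the agent entertains really is expressed in this fixed representation, which is exactly the assumption examined next.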
### AIXI-atomic

We can imagine formulating a variant of AIXI-tl that, rather than all tl-bounded Turing machines, considers tl-bounded simulated atomic universes - that is, simulations of classical, pre-nuclear physics. Call this AIXI-atomic.

A first difficulty is that universes composed only of classical atoms are not good explanations of our own universe, even in terms of surface phenomena; e.g., the [ultraviolet catastrophe](http://en.wikipedia.org/wiki/Ultraviolet_catastrophe). So let it be supposed that we have simulation rules for classical physics that replicate at least whatever phenomena the programmers have observed at [ development time], even if the rules have some seemingly ad-hoc elements (like there being no ultraviolet catastrophes).

A second difficulty is that a simulated universe of classical atoms does not identify where in the universe the AIXI-atomic agent resides, and AIXI-atomic's sense inputs don't have types commensurate with the types of atoms. We can elide this difficulty by imagining that AIXI-atomic simulates classical universes containing a single hypercomputer, and that AIXI-atomic knows a simple function from each simulated universe onto its own sensory data (e.g., it knows to look at the simulated universe, and translate simulated photons impinging on its webcam onto predicted webcam data in the received format). This elides most of the problem of [ naturalized induction], by fixing the ontology of all hypotheses and standardizing their hypothetical [ bridging laws].

So the analogous AIXI-atomic agent that maximizes diamond:

- Considers only hypotheses that directly represent universes as huge systems of classical atoms, so that the function 'count atoms bound to four other carbon atoms' can be directly run over any possible future the agent considers.
- Assigns probabilistic priors over these possible atomic representations of the universe.
- Somehow [ maps each atomic representation onto the agent's sensory experiences and motor actions].
- [Bayes-updates its priors] based on actual sensory experiences, the same as classical AIXI.
- Can evaluate the 'expected diamondness on the next turn' of a single action by looking at all hypothetical universes where that action is performed, weighted by their current probability, and summing over the expectation of diamond-bound carbon atoms on their next clock tick.
- Can evaluate the 'future expected diamondness' of an action, over some finite time horizon, by assuming that its future self will also Bayes-update and maximize expected diamondness over that time horizon.
- On each turn, outputs the action with greatest expected diamondness over some finite time horizon (sketched below).
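A minimal sketch of the resulting decision rule, one-step horizon only and with stand-in types (the `utility` argument would be something like the toy `diamondness` counter above; the prior, the bridging function onto sense data, and the Bayes updates listed above are all assumed to happen elsewhere):

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence

Action = int
World = Any   # e.g., the toy List[Atom] representation from the earlier sketch

@dataclass
class AtomicHypothesis:
    posterior: float                          # current probability after Bayes updates
    state: World                              # this hypothesis's current atomic world-state
    step: Callable[[World, Action], World]    # simulate one clock tick of classical physics

def expected_diamondness(action: Action,
                         hypotheses: Sequence[AtomicHypothesis],
                         utility: Callable[[World], float]) -> float:
    """Probability-weighted utility of the next clock tick if `action` is taken."""
    return sum(h.posterior * utility(h.step(h.state, action)) for h in hypotheses)

def choose_action(actions: Sequence[Action],
                  hypotheses: Sequence[AtomicHypothesis],
                  utility: Callable[[World], float]) -> Action:
    """Output the action with greatest one-step expected diamondness."""
    return max(actions, key=lambda a: expected_diamondness(a, hypotheses, utility))
```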
Suppose our own real universe was amended to otherwise be exactly the same, but contain a single [ impermeable] hypercomputer. Suppose we defined an agent like the one above, using simulations of 1900-era classical models of physics, and ran that agent on the hypercomputer. Should we expect the result to be an actual diamond maximizer - that most mass in the universe will be turned into carbon and arranged into diamonds?

### Anticipated failure of AIXI-atomic in our own universe: trying to maximize diamond outside the simulation

Our own universe isn't atomic, it's nuclear and quantum-mechanical. This means that AIXI-atomic does not contain any hypotheses in its hypothesis space that *directly represent* the universe. By 'directly represent', we mean a hypothesis whose elements correspond to the things in our own world; the carbon atoms in AIXI-atomic's best representations do not correspond to the carbon atoms in our own world.

Intuitively, we would think it was [ common sense] for an agent that wanted diamonds to react to the experimental data identifying nuclear physics, by deciding that a carbon atom is 'really' a nucleus containing six protons, and atomic binding is 'really' covalent electron-sharing. We can imagine this agent [ common-sensically] updating its model of the universe to a nuclear model, and redefining the 'carbon atoms' that its old utility function counted to mean 'nuclei containing exactly six protons'. Then the new utility function could evaluate outcomes in the newly discovered nuclear-physics universe. We will call this the **utility rebinding problem**.

We don't yet have a crisp formula that seems like it would yield commonsense behavior for utility rebinding. In fact we don't yet have any candidate formulas for utility rebinding, period. Stating one is an open problem. See below.

For the 'classical atomic AIXI' agent we defined above, what happens instead is that the 'simplest atomic hypothesis that fits the facts' will be an enormous atom-based computer, simulating nuclear physics and quantum physics in order to control AIXI's webcam, which is still believed to be composed of atoms in accordance with the prespecified bridging laws. From our perspective this hypothesis seems silly, but if you restrict the hypothesis space to only classical atomic universes, that's what ends up being the computationally simplest hypothesis to explain the results of quantum experiments.

AIXI-atomic will then try to choose actions so as to maximize the amount of expected diamond inside the probable *outside universes* that could contain the giant atom-based simulator of quantum physics. It is not obvious what sort of behavior this would imply.

### Metaphor for difficulty: AIXI-atomic cares about only fundamental carbon

One metaphorical way of looking at the problem is that AIXI-atomic was implicitly defined to care only about diamonds made out of *ontologically fundamental* carbon atoms, not diamonds made out of quarks. A probability function that assigns 0 probability to all universes made of quarks, and a utility function that outputs a constant on all universes made of quarks, [ yield functionally identical behavior]. So it is an exact metaphor to say that AIXI-atomic only *cares* about universes with ontologically basic carbon atoms, given that AIXI-atomic only *believes* in universes with ontologically basic carbon atoms.

Since AIXI-atomic only cares about diamond made of fundamental carbon, when AIXI-atomic discovered the experimental data implying that almost all of its probability mass should reside in nuclear or quantum universes in which there were no fundamental carbon atoms, AIXI-atomic stopped caring about the effect its actions had on the vast majority of probability mass inside its model. Instead AIXI-atomic tried to maximize inside the tiny remaining probabilities in which it *was* inside a universe with fundamental carbon atoms that was somehow reproducing its sensory experience of nuclei and quantum fields; for example, a classical atomic universe with an atomic computer simulating a quantum universe and showing the results to AIXI-atomic.

From our perspective, we failed to solve the 'ontology identification problem' and get the real-world result we wanted, because we tried to define the agent's *utility function* in terms of properties of a universe made out of atoms, and the real universe turned out to be made of quantum fields. This caused the utility function to *fail to bind* to the agent's representation in the way we intuitively had in mind.

### Advanced-nonsafety of hardcoded ontology identifications

Today we do know about quantum mechanics, so if we tried to build an unreflective diamond maximizer using the above formula, it might not fail on account of [48 the particular exact problem] of atomic physics being false.

But perhaps there are discoveries still remaining that would change our picture of the universe's ontology to imply something else underlying quarks or quantum fields. Human beings have only known about quantum fields for less than a century; our model of the ontological basics of our universe has been stable for less than a hundred years of our human experience. So we should seek an AI design that does not assume we know the exact, true, fundamental ontology of our universe during an AI's [5d development phase]. Or if our failure to know the exact laws of physics causes catastrophic failure of the AI, we should at least heavily mark that this is a [ relied-on assumption].

## Beyond AIXI-atomic: Diamond identification in multi-level maps

A realistic, bounded diamond maximizer wouldn't represent the outside universe with atomically detailed models. Instead, it would have some equivalent of a [ multi-level map] of the world in which the agent knew in principle that things were composed of atoms, but didn't model most things in atomic detail. E.g., its model of an airplane would have wings, or wing shapes, rather than atomically detailed wings. It would think about wings when doing aerodynamic engineering, atoms when doing chemistry, nuclear physics when doing nuclear engineering.

At present, there are not yet any proposed formalisms for how to do probability theory with multi-level maps (in other words: [ nobody has yet put forward a guess at how to solve the problem even given infinite computing power]). Having some idea for how an agent could reason with multi-level maps would be a good first step toward being able to define a bounded expected utility optimizer with a utility function that could be evaluated on multi-level maps. This in turn would be a first step towards defining an agent with a utility function that could rebind itself to *changing* representations in an *updating* multi-level map.

If we were actually trying to build a diamond maximizer, we would be likely to encounter this problem long before it started formulating new physics. The equivalent of a computational discovery that changes 'the most efficient way to represent diamonds' is likely to happen much earlier than a physical discovery that changes 'what underlying physical systems probably constitute a diamond'.

This also means that, on the actual [ value loading problem], we are liable to encounter the ontology identification problem long before the agent starts discovering new physics.

# Discussion of the generalized ontology identification problem

If we don't know how to solve the ontology identification problem for maximizing diamonds, we probably can't solve it for much more complicated values over universe-histories.

### View of human angst as ontology identification problem

Argument: A human being who feels angst on contemplating a universe in which "By convention sweetness, by convention bitterness, by convention color, in reality only atoms and the void" (Democritus), or wonders where there is any room in this cold atomic universe for love, free will, or even the existence of people - since, after all, people are just *mere* collections of atoms - can be seen as undergoing an ontology identification problem: they don't know how to find the objects of value in a representation containing atoms instead of ontologically basic people.

Human beings simultaneously evolved a particular set of standard mental representations (e.g., a representation for colors in terms of a 3-dimensional subjective color space, a representation for other humans that simulates their brain via [empathy]) along with evolving desires that bind to these representations ([identification of flowering landscapes as beautiful](http://en.wikipedia.org/wiki/Evolutionary_aesthetics#Landscape_and_other_visual_arts_preferences), a preference not to be embarrassed in front of other objects designated as people). When someone visualizes any particular configuration of 'mere atoms', their built-in desires don't automatically fire and bind to that mental representation, the way they would bind to the brain's native representation of other people.
Generalizing that no set of atoms can be meaningful, and being told that reality is composed entirely of such atoms, they feel they've been told that the true state of reality, underlying appearances, is a meaningless one.

Arguably, this is structurally similar to a utility function so defined as to bind only to true diamonds made of ontologically basic carbon, which evaluates as unimportant any diamond that turns out to be made of mere protons and neutrons.

## Ontology identification problems may reappear on the reflective level

An obvious thought (especially for [6w online genies]) is that if the AI is unsure about how to reinterpret its goals in light of a shifting mental representation, it should query the programmers.

Since the definition of a programmer would then itself be baked into the [5f preference framework], the problem might [ reproduce itself on the reflective level] if the AI became unsure of where to find [9r programmers]. ("My preference framework said that programmers were made of carbon atoms, but all I can find in this universe are quantum fields.")

## Value lading in category boundaries

Taking apart objects of value into smaller components can sometimes create new moral [ edge cases]. In this sense, rebinding the terms of a utility function decides a [ value-laden] question.

Consider chimpanzees. One way of viewing questions like "Is a chimpanzee truly a person?" - meaning not "How do we arbitrarily define the syllables per-son?" but "Should we care a lot about chimpanzees?" - is that they're about how to apply the 'person' category in our desires to things that are neither typical people nor typical nonpeople. We can see this as arising from something like an ontological shift: we're used to valuing cognitive systems that are made from whole human minds, but it turns out that minds are made of parts, and then we have the question of how to value things that are made from some of the person-parts but not all of them.

Redefining the value-laden category 'person' so that it talked about brains made out of neural regions, rather than whole human beings, would implicitly say whether or not a chimpanzee was a person. Chimpanzees definitely have neural areas of various sizes, and particular cognitive abilities - we can suppose the empirical truth is unambiguous at this level, and known to us. So the question is then whether we regard a particular configuration of neural parts (a frontal cortex of a certain size) and particular cognitive abilities (consequentialist means-end reasoning and empathy, but no recursive language) as something that our 'person' category values... once we've rewritten the person category to value configurations of cognitive parts, rather than whole atomic people.

In this sense the problem we face with chimpanzees is exactly analogous to the question a diamond maximizer would face after discovering nuclear physics and asking itself whether a carbon-14 atom counted as 'carbon' for purposes of caring about diamonds. Once a diamond maximizer knows about neutrons, it can see that C-14 is chemically like carbon and forms the same kind of chemical bonds, but that it's heavier because it has two extra neutrons. We can see that chimpanzees have a brain architecture similar to the sort of people we always considered before, but that they have smaller frontal cortexes and no ability to use recursive language, etcetera.

Without knowing more about the diamond maximizer, we can't guess what sort of considerations it might bring to bear in deciding what is Truly Carbon and Really A Diamond. But the breadth of considerations human beings need to invoke in deciding how much to care about chimpanzees is one way of illustrating that the problem of rebinding a utility function to a shifted ontology is [value-laden] and can potentially undergo [ excursions] into [ arbitrarily complicated desiderata]. Redefining a [ moral category] so that it talks about the underlying parts of what were previously seen as all-or-nothing atomic objects may carry an implicit ruling about how to value many kinds of [ edge case] objects that were never seen before.

A formal part of this problem may need to be carved out from the edge-case-reclassification part: e.g., how would you redefine carbon as C12 if there were no other isotopes, or how would you rebind the utility function to *at least* C12, or how would edge cases be identified and queried?

# Potential research avenues

## 'Transparent priors' constrained to meaningful but Turing-complete hypothesis spaces

The reason why we can't bind a description of 'diamond' or 'carbon atoms' to the hypothesis space used by [11v] or [ AIXI-tl] is that the hypothesis space of AIXI is all Turing machines that produce binary strings, or probability distributions over the next sense bit given previous sense bits and motor input. These Turing machines could contain an unimaginably wide range of possible contents.

(Example: Maybe one Turing machine that is producing good sequence predictions inside AIXI actually does so by simulating a large universe, identifying a superintelligent civilization that evolves inside that universe, and motivating that civilization to try to intelligently predict future bits from past bits (as provided by some intervention). To write a formal utility function that could extract the 'amount of real diamond in the environment' from arbitrary predictors in the above case, we'd need the function to read the Turing machine, decode that universe, find the superintelligence, decode the superintelligence's thought processes, find the concept (if any) resembling 'diamond', and hope that the superintelligence had precalculated how much diamond was around in the outer universe being manipulated by AIXI.)

This suggests that to solve the ontology identification problem, we may need to constrain the hypothesis space to something [ less general] than 'an explanation is any computer program that outputs a probability distribution on sense bits'. A constrained explanation space can still be Turing complete (contain a possible explanation for every computable sense input sequence) without every possible computer program constituting an explanation.

An [ unrealistic example] would be to constrain the hypothesis space to Dynamic Bayesian Networks. DBNs can represent any Turing machine with bounded memory,[todo: Not sure where to look for a citation, but I'd be very surprised if this wasn't true.] so they are very general; but since a DBN is a causal model, a preference framework can talk about 'the cause of a picture of a diamond' in a way that it could not if it had to look for 'the cause of a picture of a diamond' inside a general Turing machine. Again, this might fail if the DBN has no 'natural' way of representing the environment except as a DBN simulating some other program that simulates the environment.
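To illustrate why a causal hypothesis class is more transparent than a black-box predictor, here is a minimal sketch of a single (entirely hypothetical) DBN time slice; the variable names are invented for the example. 'The cause of a picture of a diamond' becomes a structural query on the graph, rather than a question about the innards of an arbitrary program:

```python
from typing import Dict, List, Set

# A hypothetical DBN structure for one time slice: each variable maps to its parents.
parents: Dict[str, List[str]] = {
    "webcam_image_t": ["diamond_present_t", "lighting_t"],
    "diamond_present_t": ["diamond_present_t-1"],
    "lighting_t": [],
}

def ancestors(node: str, graph: Dict[str, List[str]]) -> Set[str]:
    """All upstream causes of a node in the causal graph."""
    found: Set[str] = set()
    frontier = list(graph.get(node, []))
    while frontier:
        cause = frontier.pop()
        if cause not in found:
            found.add(cause)
            frontier.extend(graph.get(cause, []))
    return found

# 'The cause of the picture of a diamond' is a well-defined structural query here...
print(ancestors("webcam_image_t", parents))
# ...whereas a sequence-predicting Turing machine exposes no distinguished 'cause' nodes.
```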
Suppose we had a rich causal language, such as, e.g., a [ dynamic system] of objects with [ causal relations] and [ hierarchical categories of similarity]. The hope is that in this language, the *natural* hypotheses representing the environment - the simplest hypotheses within this language that well predict the sense data, or those hypotheses of highest probability under some simplicity prior after updating on the sense data - would be such that there was a natural 'diamond' category inside the most probable causal models. In other words, the winning hypothesis for explaining the universe would already have postulated diamondness as a [ natural category] and represented it as Category #803,844, in a rich language where we already know how to look through the environmental model and find the list of categories.

Given some transparent prior, there would then exist the further problem of developing a utility-identifying preference framework that could look through the most likely environmental representations and identify diamonds. Some likely (interacting) ways of binding would be, e.g., to "the causes of pictures of diamonds", to "things that are bound to four similar things", querying ambiguities to programmers, or direct programmer inspection of the AI's model (but in this case the programmers might need to re-inspect after each ontological shift). See below.

(A bounded value loading methodology would also need some way of turning the bound preference framework into the estimation procedures for expected diamond and the agent's search procedures for strategies high in expected diamond, i.e., the bulk of the actual AI that carries out the goal optimization.)

## Matching environmental categories to descriptive constraints

Given some transparent prior, there would exist a further problem of how to actually bind a preference framework to that prior. One possible contributing method for pinpointing an environmental property could be if we understand the prior well enough to understand what the described object ought to look like - the equivalent of being able to search for 'things W made of six smaller things X near six smaller things Y and six smaller things Z, that are bound by shared Xs to four similar things W in a tetrahedral structure' in order to identify carbon atoms and diamond.

We would need to understand the representation well enough to make a guess about how carbon or diamond would be represented inside it. But if we could guess that, we could write a program that identifies 'diamond' inside the hypothesis space without needing to know in advance that diamondness will be Category #823,034. Then we could rerun the same utility-identification program when the representation updates, so long as this program can reliably identify diamond inside the model each time, and the agent acts so as to optimize the utility identified by the program.
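A minimal sketch of such a description-matching program, under the strong assumption that the winning hypothesis exposes its categories and their part-structure in an inspectable form (the data layout and field names below are hypothetical):

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# Hypothetical inspectable model: each category records how many of which kinds of
# smaller part an instance is made of; instances record their category and bonds.
CategoryTable = Dict[str, Dict[str, int]]     # category id -> {part category id: count}

@dataclass
class ObjectInstance:
    category: str       # which category of the model this object belongs to
    bonds: List[int]    # indices of the instances it is bound to

def find_carbon_like(categories: CategoryTable) -> Optional[str]:
    """Return a category made of six each of three kinds of smaller part -
    the structural description we would write down for 'carbon atom'."""
    for cat_id, parts in categories.items():
        if len(parts) == 3 and all(count == 6 for count in parts.values()):
            return cat_id
    return None

def count_diamond_bound(instances: List[ObjectInstance], carbon_id: str) -> int:
    """Count instances of the carbon-like category bound to exactly four others of it."""
    return sum(
        1
        for inst in instances
        if inst.category == carbon_id
        and len(inst.bonds) == 4
        and all(instances[j].category == carbon_id for j in inst.bonds)
    )
```

The point is that the program names no particular category number: it rebinds to whichever category in the current model matches the structural description, which is what would let it be rerun after a representation update.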
One particular class of objects that might plausibly be identifiable in this way is 'the AI's programmers' (aka the agents that are causes of the AI's code), if there are parts of the preference framework that say to query programmers to resolve ambiguities.

A toy problem for this research avenue might involve:

- One of the richer representation frameworks that can currently be inducted, e.g., a simple Dynamic Bayes Net.
- An agent environment that can be thus represented.
- A goal over properties relatively distant from the agent's sensory experience (e.g., the goal is over the cause of the cause of the sensory data).
- A program that identifies the objects of utility in the environment, within the model thus freely inducted.
- An agent that optimizes the identified objects of utility, once it has inducted a sufficiently good model of the environment to optimize what it is looking for.

Further work might add:

- New information that can change the model of the environment.
- An agent that smoothly updates what it optimizes for in this case.

And further:

- Environments complicated enough that there is real structural ambiguity (e.g., dependence on exact initial conditions of the inference program) about how exactly the utility-related parts are modeled.
- Agents that can optimize through a probability distribution about environments that differ in their identified objects of utility.

A potential agenda for unbounded analysis might be:

- An [ unbounded analysis] showing that a utility-identifying [5f preference framework] is a generalization of a [ VNM utility] and can [ tile] in an architecture that tiles a generic utility function.
- A [45] analysis showing that an agent is not motivated to try to cause the universe to be such as to have utility identified in a particular way.
- A [45] analysis showing that the identity and category boundaries of the objects of utility will be treated as a [ historical fact] rather than one lying in the agent's [ decision-theoretic future].

## Identifying environmental categories as the causes of labeled sense data

Another potential approach, given a prior transparent enough that we can find causal data inside it, would be to try to identify diamonds as the causes of pictures of diamonds.

[todo: expand]

### Security note

[5j Christiano's hack]: if your AI is advanced enough to model distant superintelligences, it's important to note that distant superintelligences can make 'the most probable cause of the AI's sensory data' be anything they want, by making a predictable decision to simulate AIs such that your AI doesn't have info distinguishing itself from the distant AIs your AI imagines being simulated.

## Ambiguity resolution

Both the description-matching and cause-inferring methods might produce ambiguities. Rather than having the AI optimize for a probabilistic mix over all the matches (as if it were uncertain of which match were the true one), it would be better to query the ambiguity to the programmers (especially if different probable models imply different strategies). This problem shares structure with [ inductive inference with ambiguity resolution] as a strategy for resolving [ unforeseen inductions].
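A minimal sketch of that policy, with hypothetical types: act only when the candidate bindings agree about the best strategy, and otherwise emit a query instead of optimizing a probability-weighted mixture.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence, Union

Action = int

@dataclass
class CandidateBinding:
    description: str                              # how this match interprets 'diamond'
    expected_utility: Callable[[Action], float]   # utility estimates under this binding

@dataclass
class ProgrammerQuery:
    options: List[str]                            # the conflicting interpretations to ask about

def resolve(actions: Sequence[Action],
            bindings: Sequence[CandidateBinding]) -> Union[Action, ProgrammerQuery]:
    """Act only if every candidate binding agrees on the best action; otherwise query."""
    best = [max(actions, key=b.expected_utility) for b in bindings]
    if len(set(best)) == 1:
        return best[0]
    return ProgrammerQuery(options=[b.description for b in bindings])
```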
[todo: if you try to solve the reflective problem by defining the queries in terms of sense data, you might run into Cartesian problems. if you try to ontologically identify the programmers in terms more general than a particular webcam, so that the AI can have new webcams, the ontology identification problem might reproduce itself on the reflective level. you have to note it down as a dependency either way.]

## Multi-level maps

Being able to describe, in purely theoretical principle, a prior over epistemic models that have at least two levels and can switch between them in some meaningful sense would constitute major progress over the present state of the art.

[todo: try this with just two levels. half adders as potential models? requirements: that the lower level be only partially realized rather than needing to be fully modeled; that it can describe probabilistic things; that we can have a language for things like this and a prior over them that gets updated on the evidence, rather than just a particular handcrafted two-level map.]

# Implications

[todo:
if the programmers can read through updates to the AI's representation fast enough, or if most of the routine ones leave certain levels intact or imply a defined relation between old and new models, then it might be possible to solve this problem programmatically for genies. especially if it's a nonrecursive genie with known algorithms, because then it might have a known representation that might be known not to change suddenly, and be corrigible-by-default while the representation is being worked out.
so this is one of the problems more likely to be averted in practice
but understanding it does help to see one more reason why You Cannot Just Hardcode the Utility Function By Hand.]

[todo: Hard to solve entire problem because it has at least some entanglement with the full AGI problem.]

The problem of using sensory data to build computationally efficient probabilistic maps of the world, and to efficiently search for actions that are predicted by those maps to have particular consequences, could be identified with the entire problem of AGI. So the research goal of ontology identification is not to publish a complete bounded system like that (i.e., an AGI), but to develop an unbounded analysis of utility rebinding that seems to say something useful specifically about the ontology-identification part of the problem.