{
  localUrl: '../page/intro_utility_coherence.html',
  arbitalUrl: 'https://arbital.com/p/intro_utility_coherence',
  rawJsonUrl: '../raw/7hh.json',
  likeableId: '4070',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '2',
  dislikeCount: '0',
  likeScore: '2',
  individualLikes: [
    'TravisRivera',
    'AretsPaeglis'
  ],
  pageId: 'intro_utility_coherence',
  edit: '15',
  editSummary: '',
  prevEdit: '13',
  currentEdit: '15',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Coherent decisions imply consistent utilities',
  clickbait: 'Why do we all use the 'expected utility' formalism?  Because any behavior that can't be viewed from that perspective, must be qualitatively self-defeating (in various mathy ways).',
  textLength: '48748',
  alias: 'intro_utility_coherence',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2018-11-29 07:38:02',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2017-01-20 04:41:34',
  seeDomainId: '0',
  editDomainId: '15',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '4',
  isEditorComment: 'false',
  isApprovedComment: 'false',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '1491',
  text: '[summary:  A tutorial that introduces the concept of 'expected utility' by walking through some of the bad things that happen to an agent which can't be viewed as having a consistent utility function and probability assignment.]\n\n# Introduction to the introduction: Why expected utility?\n\nSo we're talking about how to make good decisions, or the idea of 'bounded rationality', or what sufficiently advanced Artificial Intelligences might be like; and somebody starts dragging up the concepts of 'expected utility' or 'utility functions'.\n\nAnd before we even ask what those are, we might first ask, *Why?*\n\nThere's a mathematical formalism, 'expected utility', that some people invented to talk about making decisions.  This formalism is very academically popular, and appears in all the textbooks.\n\nBut so what?  Why is that *necessarily* the best way of making decisions under every kind of circumstance?  Why would an Artificial Intelligence care what's academically popular?  Maybe there's some better way of thinking about rational agency?  Heck, why is this formalism popular in the first place?\n\nWe can ask the same kinds of questions about [1bv probability theory]:\n\nOkay, we have this mathematical formalism in which the chance that X happens, aka $\\mathbb P(X),$ plus the chance that X doesn't happen, aka $\\mathbb P(\\neg X),$ must be represented in a way that makes the two quantities sum to unity: $\\mathbb P(X) + \\mathbb P(\\neg X) = 1.$\n\nThat formalism for probability has some neat mathematical properties.  But so what?  Why should the best way of reasoning about a messy, uncertain world have neat properties?  Why shouldn't an agent reason about 'how likely is that' using something completely unlike probabilities?  How do you *know* a sufficiently advanced Artificial Intelligence would reason in probabilities?  You haven't seen an AI, so what do you think you know and how do you think you know it?\n\nThat entirely reasonable question is what this introduction tries to answer.  There are, indeed, excellent reasons beyond academic habit and mathematical convenience for why we would by default invoke 'expected utility' and 'probability theory' to think about good human decisions, talk about rational agency, or reason about sufficiently advanced AIs.\n\nThe broad form of the answer seems easier to show than to tell, so we'll just plunge straight in.\n\n# Why not circular preferences?\n\n*De gustibus non est disputandum,* goes the proverb; matters of taste cannot be disputed.  If I like onions on my pizza and you like pineapple, it's not that one of us is right and one of us is wrong.  We just prefer different pizza toppings.\n\nWell, but suppose I declare to you that I *simultaneously*:\n\n- Prefer onions to pineapple on my pizza.\n- Prefer pineapple to mushrooms on my pizza.\n- Prefer mushrooms to onions on my pizza.\n\nIf we use $>_P$ to denote my pizza preferences, with $X >_P Y$ denoting that I prefer X to Y, then I am declaring:\n\n$$\\text{onions} >_P \\text{pineapple} >_P \\text{mushrooms} >_P \\text{onions}$$\n\nThat sounds strange, to be sure.  But is there anything *wrong* with that?  Can we disputandum it?\n\nWe used the math symbol $>$ which denotes an ordering.  If we ask whether $>_P$ can be an ordering, it naughtily violates the standard transitivity axiom $x > y, y > z \\implies x > z$.\n\nOkay, so then maybe we shouldn't have used the symbol $>_P$ or called it an ordering.  
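\n\nAs a quick aside (an illustrative sketch in Python, not part of the original article), here is one mechanical way to check whether a finite set of strict preferences can be given a consistent ordering at all--the three pizza preferences above cannot:\n\n
```python\n
# Sketch (hypothetical helper, not from the article): a consistent ordering exists\n
# exactly when the 'preferred to' graph has no cycle.\n
prefers = [("onions", "pineapple"), ("pineapple", "mushrooms"), ("mushrooms", "onions")]\n
\n
def has_consistent_ordering(prefers):\n
    options = {x for pair in prefers for x in pair}\n
    for start in options:\n
        frontier, seen = [start], set()\n
        while frontier:  # follow chains of 'preferred to' links outward from `start`\n
            current = frontier.pop()\n
            for a, b in prefers:\n
                if a == current and b not in seen:\n
                    if b == start:\n
                        return False  # `start` beats itself via a chain: a cycle\n
                    seen.add(b)\n
                    frontier.append(b)\n
    return True\n
\n
print(has_consistent_ordering(prefers))  # False: onions > pineapple > mushrooms > onions\n
```\n\n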
Why is that necessarily bad?\n\nWe can try to imagine each pizza as having a numerical score denoting how much I like it.  In that case, there's no way we could assign consistent numbers $x, y, z$ to those three pizza toppings such that $x > y > z > x.$\n\nSo maybe I don't assign numbers to my pizza.  Why is that so awful?\n\nAre there any grounds besides "we like a certain mathematical formalism and your choices don't fit into our math," on which to criticize my three simultaneous preferences?\n\n(Feel free to try to answer this yourself before continuing...)\n\n%%hidden(Click here to reveal and continue):\nSuppose I tell you that I prefer pineapple to mushrooms on my pizza.  Suppose you're about to give me a slice of mushroom pizza; but by paying one penny ($\\$0.01$) I can instead get a slice of pineapple pizza (which is just as fresh from the oven). It seems realistic to say that most people with a pineapple pizza preference would probably pay the penny, if they happened to have a penny in their pocket. %note: It could be that somebody's pizza preference is real, but so weak that they wouldn't pay one penny to get the pizza they prefer.  In this case, imagine we're talking about some stronger preference instead.  Like your willingness to pay at least one penny not to have your house burned down, or something.%\n\nAfter I pay the penny, though, and just before I'm about to get the pineapple pizza, you offer me a slice of onion pizza instead--no charge for the change!  If I was telling the truth about preferring onion pizza to pineapple, I should certainly accept the substitution if it's free.\n\nAnd then to round out the day, you offer me a mushroom pizza instead of the onion pizza, and again, since I prefer mushrooms to onions, I accept the swap.\n\nI end up with exactly the same slice of mushroom pizza I started with... and one penny poorer, because I previously paid \\$0.01 to swap mushrooms for pineapple.\n%%\n\nThis seems like a *qualitatively* bad behavior on my part.  By virtue of my incoherent preferences which cannot be given a consistent ordering, I have shot myself in the foot, done something self-defeating.  We haven't said *how* I ought to sort out my inconsistent preferences.  But no matter how it shakes out, it seems like there must be *some* better alternative--some better way I could reason that wouldn't spend a penny to go in circles.  That is, I could at least have kept my original pizza slice and not spent the penny.\n\nIn a phrase you're going to keep hearing, I have executed a 'dominated strategy': there exists some other strategy that does strictly better. %%note: This does assume that the agent prefers to have more money rather than less money.  "Ah, but why is it bad if one person has a penny instead of another?" you ask.  If we insist on pinning down every point of this sort, then you can also imagine the \\$0.01 as standing in for the *time* I burned in order to move the pizza slices around in circles.  That time was burned, and nobody else has it now.  If I'm an effective agent that goes around pursuing my preferences, I should in general be able to sometimes convert time into other things that I want.  
In other words, my circular preference can lead me to incur an [ opportunity cost] denominated in the sacrifice of other things I want, and not in a way that benefits anyone else.%%\n\nOr as Steve Omohundro put it:  If you prefer being in Berkeley to being in San Francisco; prefer being in San Jose to being in Berkeley; and prefer being in San Francisco to being in San Jose; then you're going to waste a lot of time on taxi rides.\n\nNone of this reasoning has told us that a non-self-defeating agent must prefer Berkeley to San Francisco or vice versa.  There are at least six possible consistent orderings over pizza toppings, like $\\text{mushroom} >_P \\text{pineapple} >_P \\text{onion}$ etcetera, and *any* consistent ordering would avoid paying to go in circles. %note: There are more than six possibilities if you think it's possible to be absolutely indifferent between two kinds of pizza.%  We have not, in this argument, used pure logic to derive that pineapple pizza must taste better than mushroom pizza to an ideal rational agent.  But we've seen that eliminating a certain kind of shoot-yourself-in-the-foot behavior, corresponds to imposing a certain *coherence* or *consistency* requirement on whatever preferences are there.\n\nIt turns out that this is just one instance of a large family of *coherence theorems* which all end up pointing at the same set of core properties.  All roads lead to Rome, and all the roads say, "If you are not shooting yourself in the foot in sense X, we can view you as having coherence property Y."\n\nThere are some caveats to this general idea.\n\nFor example:  In complicated problems, perfect coherence is usually impossible to compute--it's just too expensive to consider *all* the possibilities.\n\nBut there are also caveats to the caveats!  For example, it may be that if there's a powerful machine intelligence that is not *visibly to us humans* shooting itself in the foot in way X, then *from our perspective* it must look like the AI has coherence property Y.  If there's some sense in which the machine intelligence is going in circles, because *not* going in circles is too hard to compute, well, *we* won't see that either with our tiny human brains.  In which case it may make sense, from our perspective, to think about the machine intelligence *as if* it has some coherent preference ordering.\n\nWe are not going to go through all the coherence theorems in this introduction.  They form a very large family; some of them are a *lot* more mathematically intimidating; and honestly I don't know even 5% of the variants.\n\nBut we can hopefully walk through enough coherence theorems to at least start to see the reasoning behind, "Why expected utility?"  And, because the two are a package deal, "Why probability?"\n\n# Human lives, mere dollars, and coherent trades\n\nAn experiment in 2000--from a paper titled "[The Psychology of the Unthinkable:\nTaboo Trade-Offs, Forbidden Base Rates, and Heretical Counterfactuals](http://scholar.harvard.edu/files/jenniferlerner/files/2000_the_psychology_of_the_unthinkable.pdf?m=145089665)"--asked subjects to consider the dilemma of a hospital administrator named Robert:\n\n> Robert can save the life of Johnny, a five year old who needs a liver transplant, but the transplant procedure will cost the hospital \\$1,000,000 that could be spent in other ways, such as purchasing better equipment and enhancing salaries to recruit talented doctors to the hospital. 
Johnny is very ill and has been on the waiting list for a transplant but because of the shortage of local organ donors, obtaining a liver will be expensive. Robert could save Johnny's life, or he could use the \\$1,000,000 for other hospital needs.\n\nThe main experimental result was that most subjects got angry at Robert for even considering the question.\n\nAfter all, you can't put a dollar value on a human life, right?\n\nBut better hospital equipment also saves lives, or at least one hopes so. %note: We can omit the 'better doctors' item from consideration:  The supply of doctors is mostly constrained by regulatory burdens and medical schools rather than the number of people who want to become doctors; so bidding up salaries for doctors doesn't much increase the total number of doctors; so bidding on a talented doctor at one hospital just means some other hospital doesn't get that talented doctor. It's also illegal to pay for livers, but let's ignore that particular issue with the problem setup or pretend that it all takes place in a more sensible country than the United States or Europe.%  It's not like the other potential use of the money saves zero lives.\n\nLet's say that Robert has a total budget of \\$100,000,000 and is faced with a long list of options such as these:\n\n- \\$100,000 for a new dialysis machine, which will save 3 lives\n- \\$1,000,000 for a liver for Johnny, which will save 1 life\n- \\$10,000 to train the nurses on proper hygiene when inserting central lines, which will save an expected 100 lives\n- ...\n\nNow suppose--this is a supposition we'll need for our theorem--that Robert *does not care at all about money,* not even a tiny bit.  Robert *only* cares about maximizing the total number of lives saved.  Furthermore, we suppose for now that Robert cares about every human life equally.\n\nIf Robert does save as many lives as possible, given his bounded money, then Robert must *behave like* somebody assigning some consistent dollar value to saving a human life.\n\nWe should be able to look down the long list of options that Robert took and didn't take, and say, e.g., "Oh, Robert took all the options that saved more than 1 life per \\$500,000 and rejected all options that saved less than 1 life per \\$500,000; so Robert's behavior is *consistent* with his spending \\$500,000 per life."\n\nAlternatively, if we can't view Robert's behavior as being coherent in this sense--if we cannot make up *any* dollar value of a human life, such that Robert's choices are consistent with that dollar value--then it must be possible to move around the same amount of money, in a way that saves more lives.\n\nWe start from the qualitative criterion, "Robert must save as many lives as possible; it shouldn't be possible to move around the same money to save more lives".  We end up with the quantitative coherence theorem, "It must be possible to view Robert as trading dollars for lives at a consistent price."\n\nWe haven't proven that dollars have some intrinsic worth that trades off against the intrinsic worth of a human life.  By hypothesis, Robert doesn't care about money at all.  It's just that every dollar has an *opportunity cost* in lives it could have saved if deployed differently; and this opportunity cost is the same for every dollar because money is fungible.\n\nAn important caveat to this theorem is that there may be, e.g., an option that saves a hundred thousand lives for \\$200,000,000.  But Robert only has \\$100,000,000 to spend.  
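\n\nTo make both the theorem and this caveat concrete, here is a minimal sketch (in Python, with made-up option numbers; not part of the original article) of an allocator that cares only about total lives saved.  Buying greedily by lives-per-dollar makes its choices look like "take everything cheaper than some threshold price per life"--except that an option too large for the remaining budget can get skipped:\n\n
```python\n
# Sketch with hypothetical numbers: the allocator cares only about lives saved.\n
options = [\n
    ("dialysis machine",     100_000,           3),\n
    ("liver for Johnny",     1_000_000,         1),\n
    ("central-line hygiene", 10_000,          100),\n
    ("huge option",          200_000_000, 100_000),  # 1 life per $2,000, but unaffordable\n
]\n
budget = 100_000_000\n
\n
# Take the best lives-per-dollar options first, whenever they fit the remaining budget.\n
remaining, bought = budget, []\n
for name, cost, lives in sorted(options, key=lambda o: o[2] / o[1], reverse=True):\n
    if cost <= remaining:\n
        bought.append(name)\n
        remaining -= cost\n
\n
print(bought)  # the $200,000,000 option is skipped despite its excellent rate\n
```\n\n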
In this case, Robert may fail to take that option even though it saves 1 life per \\$2,000.  It was a good option, but Robert didn't have enough money in the bank to afford it.  This does mess up the elegance of being able to say, "Robert must have taken *all* the options saving at least 1 life per \\$500,000", and instead we can only say this with respect to options that are in some sense small enough or granular enough.\n\nSimilarly, if an option costs \\$5,000,000 to save 15 lives, but Robert only has \\$4,000,000 left over after taking all his other best opportunities, Robert's last selected option might be to save 8 lives for \\$4,000,000 instead.  This again messes up the elegance of the reasoning, but Robert is still doing exactly what an agent *would* do if it consistently valued lives at 1 life per \\$500,000--it would buy all the best options *it could afford* that purchased at least that many lives per dollar.  So that part of the theorem's conclusion still holds.\n\nAnother caveat is that we haven't proven that there's some specific dollar value in Robert's head, as a matter of psychology.  We've only proven that Robert's outward behavior can be *viewed as if* it prices lives at *some* consistent value, assuming Robert saves as many lives as possible.\n\nIt could be that Robert accepts every option that spends less than \\$500,000/life and rejects every option that spends over \\$600,000/life, and there aren't any available options in the middle.  Then Robert's behavior can equally be *viewed as* consistent with a price of \\$510,000 or a price of \\$590,000.  This helps show that we haven't proven anything about Robert explicitly *thinking* of some number.  Maybe Robert never lets himself think of a specific threshold value, because it would be taboo to assign a dollar value to human life; and instead Robert just fiddles the choices until he can't see how to save any more lives.\n\nWe naturally have not proved by pure logic that Robert must want, in the first place, to save as many lives as possible.  Even if Robert is a good person, this doesn't follow.  Maybe Robert values a 10-year-old's life at 5 times the value of a 70-year-old's life, so that Robert will sacrifice five grandparents to save one 10-year-old.  A lot of people would see that as entirely consistent with valuing human life in general.\n\nLet's consider that last idea more thoroughly.  If Robert considers a preteen equally valuable with 5 grandparents, so that Robert will shift \\$100,000 from saving 8 old people to saving 2 children, then we can no longer say that Robert wants to save as many 'lives' as possible.  That last decision would decrease by 6 the total number of 'lives' saved.  So we can no longer say that there's a qualitative criterion, 'Save as many lives as possible', that produces the quantitative coherence requirement, 'trade dollars for lives at a consistent rate'.\n\nDoes this mean that coherence might as well go out the window, so far as Robert's behavior is concerned?  Anything goes, now?  Just spend money wherever?\n\n"Hm," you might think.  "But... if Robert trades 8 old people for 2 children *here*... and then trades 1 child for 2 old people *there*..."\n\nTo reduce distraction, let's make this problem about apples and oranges instead.  
Suppose:\n\n- Alice starts with 8 apples and 1 orange.\n- Then Alice trades 8 apples for 2 oranges.\n- Then Alice trades away 1 orange for 2 apples.\n- Finally, Alice trades another orange for 3 apples.\n\nThen in this example, Alice is using a strategy that's *strictly dominated* across all categories of fruit.  Alice ends up with 5 apples and one orange, but could've ended with 8 apples and one orange (by not making any trades at all).  Regardless of the *relative* value of apples and oranges, Alice's strategy is doing *qualitatively* worse than another possible strategy, if apples have any positive value to her at all.\n\nSo the fact that Alice can't be viewed as having any coherent relative value for apples and oranges, corresponds to her ending up with qualitatively less of some category of fruit (without any corresponding gains elsewhere).\n\nThis remains true if we introduce more kinds of fruit into the problem.  Let's say the set of fruits Alice can trade includes {apples, oranges, strawberries, plums}.  If we can't look at Alice's trades and make up some relative quantitative values of fruit, such that Alice could be trading consistently with respect to those values, then Alice's trading strategy must have been dominated by some other strategy that would have ended up with strictly more fruit across all categories.\n\nIn other words, we need to be able to look at Alice's trades, and say something like:\n\n"Maybe Alice values an orange at 2 apples, a strawberry at 0.1 apples, and a plum at 0.5 apples.  That would explain why Alice was willing to trade 4 strawberries for a plum, but not willing to trade 40 strawberries for an orange and an apple."\n\nAnd if we *can't* say this, then there must be some way to rearrange Alice's trades and get *strictly more fruit across all categories* in the sense that, e.g., we end with the same number of plums and apples, but one more orange and two more strawberries.  This is a bad thing if Alice *qualitatively* values fruit from each category--prefers having more fruit to less fruit, ceteris paribus, for each category of fruit.\n\nNow let's shift our attention back to Robert the hospital administrator.  *Either* we can view Robert as consistently assigning some *relative* value of life for 10-year-olds vs. 70-year-olds, *or* there must be a way to rearrange Robert's expenditures to save either strictly more 10-year-olds or strictly more 70-year-olds (without saving fewer in the other category).  The same logic applies if we add 50-year-olds to the mix.  We must be able to say something like, "Robert is consistently behaving as if a 50-year-old is worth a third of a ten-year-old".  If we *can't* say that, Robert must be behaving in a way that pointlessly discards some saveable lives in some category. \n\nOr perhaps Robert is behaving in a way which implies that 10-year-old girls are worth more than 10-year-old boys.  But then the relative values of those subclasses of 10-year-olds need to be viewable as consistent; or else Robert must be qualitatively failing to save one more 10-year-old boy than could've been saved otherwise.\n\nIf you can denominate apples in oranges, and price oranges in plums, and trade off plums for strawberries, all at consistent rates... then you might as well take it one step further, and factor out an abstract unit for ease of notation.\n\nLet's call this unit *1 utilon,* and denote it &euro;1.  (As we'll see later, the letters 'EU' are appropriate here.)
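\n\nBefore going on, here is a minimal sketch (in Python, not part of the original article) that replays Alice's apple-and-orange trades from above and compares her final basket with the no-trade baseline, category by category--the check that "strictly dominated across all categories of fruit" is pointing at:\n\n
```python\n
# Sketch: replay Alice's trades and compare against simply keeping the starting basket.\n
basket = {"apples": 8, "oranges": 1}\n
trades = [  # (what Alice gives up, what she receives)\n
    ({"apples": 8}, {"oranges": 2}),\n
    ({"oranges": 1}, {"apples": 2}),\n
    ({"oranges": 1}, {"apples": 3}),\n
]\n
\n
for gives, gets in trades:\n
    for fruit, n in gives.items():\n
        basket[fruit] -= n\n
    for fruit, n in gets.items():\n
        basket[fruit] += n\n
\n
print(basket)  # {'apples': 5, 'oranges': 1}: no more of anything, and strictly fewer\n
               # apples, than the untouched starting basket of 8 apples and 1 orange,\n
               # no matter what relative prices we might have assigned.\n
```\n\n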
If we say that apples are worth &euro;1, oranges are worth &euro;2, and plums are worth &euro;0.5, then this tells us the relative value of apples, oranges, and plums.  Conversely, if we *can* assign consistent relative values to apples, oranges, and plums, then we can factor out an abstract unit at will--for example, by arbitrarily declaring apples to be worth &euro;100 and then calculating everything else's price in utilons from its relative value in apples.\n\nHave we proven by pure logic that all apples have the same utility?  Of course not; you can prefer some particular apples to other particular apples.  But when you're done saying which things you qualitatively prefer to which other things, if you go around making tradeoffs in a way that can be *viewed as* not qualitatively leaving behind some things you said you wanted, we can *view you* as assigning coherent quantitative utilities to everything you want.\n\nAnd that's one coherence theorem--among others--that can be seen as motivating the concept of *utility* in decision theory.\n\nUtility isn't a solid thing, a separate thing.  We could multiply all the utilities by two, and that would correspond to the same outward behaviors.  It's meaningless to ask how much utility you scored at the end of your life, because we could subtract a million or add a million to that quantity while leaving everything else conceptually the same.\n\nYou could pick anything you valued--say, the joy of watching a cat chase a laser pointer for 10 seconds--and denominate everything relative to that, without needing any concept of an extra abstract 'utility'.  So (just to be extremely clear about this point) we have not proven that there is a separate thing 'utility' that you should be pursuing instead of everything else you wanted in life.\n\nThe coherence theorem says nothing about which things to value more than others, or how much to value them relative to other things.  It doesn't say whether you should value your happiness more than someone else's happiness, any more than the notion of a consistent preference ordering $>_P$ tells us whether $\\text{onions} >_P \\text{pineapple}.$\n\n(The notion that we should assign equal value to all human lives, or equal value to all sentient lives, or equal value to all Quality-Adjusted Life Years, is *utilitarianism.*  Which is, sorry about the confusion, a whole 'nother separate different philosophy.)\n\nThe conceptual gizmo that maps thingies to utilities--the whatchamacallit that takes in a fruit and spits out a utility--is called a 'utility function'.  Again, this isn't a separate thing that's written on a stone tablet.  If we multiply a utility function by 9.2, that's conceptually the same utility function because it's consistent with the same set of behaviors.
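\n\nHere is a minimal sketch of that point (in Python, with made-up fruit utilities; not part of the original article): rescaling every utility by the same positive constant never changes which option gets picked:\n\n
```python\n
# Sketch: a utility function and a positively rescaled copy make the same choices.\n
utility  = {"apple": 1.0, "orange": 2.0, "plum": 0.5}\n
rescaled = {fruit: 9.2 * u for fruit, u in utility.items()}\n
\n
def choices(menus, u):\n
    # Pick the highest-utility option from each menu.\n
    return [max(menu, key=lambda fruit: u[fruit]) for menu in menus]\n
\n
menus = [("apple", "plum"), ("apple", "orange"), ("orange", "plum")]\n
print(choices(menus, utility))   # ['apple', 'orange', 'orange']\n
print(choices(menus, rescaled))  # same choices: only relative utilities matter\n
```\n\n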
But in general:  If we can sensibly view any agent as doing as well as qualitatively possible at *anything*, we must be able to view the agent's behavior as consistent with there being some coherent relative quantities of wantedness for all the thingies it's trying to optimize.\n\n# Probabilities and expected utility\n\nWe've so far made no mention of *probability.*  But the way that probabilities and utilities interact, is where we start to see the full structure of *expected utility* spotlighted by all the coherence theorems.\n\nThe basic notion in expected utility is that some choices present us with uncertain outcomes.\n\nFor example, I come to you and say:  "Give me 1 apple, and I'll flip a coin; if the coin lands heads, I'll give you 1 orange; if the coin comes up tails, I'll give you 3 plums."  Suppose you relatively value fruits as described earlier: 2 apples / orange and 0.5 apples / plum.  Then *either* possible outcome gives you something that's worth more to you than 1 apple.  Turning down a so-called 'gamble' like that... why, it'd be a dominated strategy.\n\nIn general, the notion of 'expected utility' says that we assign certain quantities called *probabilities* to each possible outcome.  In the example above, we might assign a 'probability' of $0.5$ to the coin landing heads (1 orange), and a 'probability' of $0.5$ to the coin landing tails (3 plums).  Then the total value of the 'gamble' we get by trading away 1 apple is:\n\n$$\\mathbb P(\\text{heads}) \\cdot U(\\text{1 orange}) + \\mathbb P(\\text{tails}) \\cdot U(\\text{3 plums}) \\\\\n= 0.50 \\cdot €2 + 0.50 \\cdot €1.5 = €1.75$$\n\nConversely, if we just keep our 1 apple instead of making the trade, this has an expected utility of $1 \\cdot U(\\text{1 apple}) = €1.$  So indeed we ought to trade (as the previous reasoning suggested).\n\n"But wait!" you cry.  "Where did these probabilities come from?  Why is the 'probability' of a fair coin landing heads $0.5$ and not, say, $-0.2$ or $3$?  Who says we ought to multiply utilities by probabilities in the first place?"\n\nIf you're used to approaching this problem from a [1zq Bayesian] standpoint, then you may now be thinking of notions like [1rm prior probability] and [occams_razor Occam's Razor] and [4mr universal priors]...\n\nBut from the standpoint of coherence theorems, that's putting the cart before the horse.\n\nFrom the standpoint of coherence theorems, we don't *start with* a notion of 'probability'.\n\nInstead we ought to prove something along the lines of: if you're not using qualitatively dominated strategies, then you must *behave as if* you are multiplying utilities by certain quantitative thingies.\n\nWe might then furthermore show that, for non-dominated strategies, these utility-multiplying thingies must be between $0$ and $1$ rather than say $-0.3$ or $27.$\n\nHaving determined what coherence properties these utility-multiplying thingies need to have, we decide to call them 'probabilities'.  And *then*--once we know in the first place that we need 'probabilities' in order to not be using dominated strategies--we can start to worry about exactly what the numbers ought to be.
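\n\nAs a tiny illustration in code (a sketch, not part of the original article), here is the fruit-gamble comparison from above with the outcome-weighting thingies written out as explicit numbers:\n\n
```python\n
# Sketch: expected utility of the coin-flip gamble vs. just keeping the apple,\n
# using the made-up utilon values from earlier: apple 1.0, orange 2.0, and 3 plums at 1.5.\n
U = {"1 apple": 1.0, "1 orange": 2.0, "3 plums": 1.5}\n
\n
gamble = [(0.5, "1 orange"), (0.5, "3 plums")]  # (weight, outcome) pairs\n
keep   = [(1.0, "1 apple")]\n
\n
def expected_utility(lottery):\n
    return sum(p * U[outcome] for p, outcome in lottery)\n
\n
print(expected_utility(gamble))  # 1.75\n
print(expected_utility(keep))    # 1.0 -- so taking the gamble is the better trade\n
```\n\n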
## Probabilities summing to 1\n\nHere's a taste of the kind of reasoning we might do:\n\nSuppose that--having already accepted some previous proof that non-dominated strategies dealing with uncertain outcomes, must multiply utilities by quantitative thingies--you then say that you are going to assign a probability of $0.6$ to the coin coming up heads, and a probability of $0.7$ to the coin coming up tails.\n\nIf you're already used to the standard notion of probability, you might object, "But those probabilities sum to $1.3$ when they ought to sum to $1!$" %%note: Or maybe a [4mq tiny bit less] than $1,$ in case the coin lands on its edge or something.%%  But now we are in coherence-land; we don't ask "Did we violate the standard axioms that all the textbooks use?" but "What rules must non-dominated strategies obey?"  *De gustibus non est disputandum;* can we *disputandum* somebody saying that a coin has a 60% probability of coming up heads and a 70% probability of coming up tails?  (Where these are the only 2 possible outcomes of an uncertain coinflip.)\n\nWell--assuming you've already accepted that we need utility-multiplying thingies--I might then offer you a gamble.  How about you give me one apple, and if the coin lands heads, I'll give you 0.8 apples; while if the coin lands tails, I'll give you 0.8 apples.\n\nAccording to you, the expected utility of this gamble is:\n\n$$\\mathbb P(\\text{heads}) \\cdot U(\\text{0.8 apples}) + \\mathbb P(\\text{tails}) \\cdot U(\\text{0.8 apples}) \\\\\n= 0.6 \\cdot €0.8 + 0.7 \\cdot €0.8 = €1.04.$$\n\nYou've just decided to trade your apple for 0.8 apples, which sure sounds like one of 'em dominated strategies.\n\nAnd that's why *the thingies you multiply utilities by*--the thingies that you use to weight uncertain outcomes in your imagination, when you're trying to decide how much you want one branch of an uncertain choice--must sum to 1, whether you call them 'probabilities' or not.\n\nWell... actually we just argued %note: Nothing we're walking through here is really a coherence theorem *per se*, more like intuitive arguments that a coherence theorem ought to exist.  Theorems require proofs, and nothing here is what real mathematicians would consider to be a 'proof'.% that probabilities for [1rd mutually exclusive] outcomes should sum to *no more than 1.*  What would be an example showing that, for non-dominated strategies, the probabilities for [1rd exhaustive] outcomes should sum to no less than 1?\n\n%%hidden(Why exhaustive outcomes should sum to at least 1):\nSuppose that, in exchange for 1 apple, I credibly offer:\n\n- To pay you 1.1 apples if a coin comes up heads.\n- To pay you 1.1 apples if a coin comes up tails.\n- To pay you 1.1 apples if anything else happens.\n\nIf the probabilities you assign to these three outcomes sum to say 0.9, you will refuse to trade 1 apple for 1.1 apples.\n\n(This is strictly dominated by the strategy of agreeing to trade 1 apple for 1.1 apples.)\n%%\n\n## Dutch book arguments\n\nAnother way we could have presented essentially the same argument as above, is as follows:\n\nSuppose you are a market-maker in a prediction market for some event $X.$  When you say that your price for event $X$ is $x$, you mean that you will sell for $\\$x$ a ticket which pays $\\$1$ if $X$ happens (and pays out nothing otherwise).  
In fact, you will sell any number of such tickets!\n\nSince you are a market-maker (that is, you are trying to encourage trading in $X$ for whatever reason), you are also willing to *buy* any number of tickets at the price $\\$x.$  That is, I can say to you (the market-maker) "I'd like to sign a contract where you give me $N \\cdot \\$x$ now, and in return I must pay you $\\$N$ iff $X$ happens;" and you'll agree.  (We can view this as you selling me a negative number of the original kind of ticket.)\n\nLet $X$ and $Y$ denote two events such that *exactly one* of them must happen; say, $X$ is a coin landing heads and $Y$ is the coin not landing heads.\n\nNow suppose that you, as a market-maker, are motivated to avoid combinations of bets that lead into *certain* losses for you--not just losses that are merely probable, but combinations of bets such that *every* possibility leads to a loss.\n\nThen if exactly one of $X$ and $Y$ must happen, your prices $x$ and $y$ must sum to exactly $\\$1.$  Because:\n\n- If $x + y < \\$1,$ I buy both an $X$-ticket and a $Y$-ticket and get a guaranteed payout of $\\$1$ minus costs of $x + y.$  Since this is a guaranteed profit for me, it is a guaranteed loss for you.\n- If $x + y > \\$1,$ I sell you both tickets and will at the end pay you $\\$1$ after you have already paid me $x + y.$  Again, this is a guaranteed profit for me of $x + y - \\$1 > \\$0.$\n\nThis is more or less exactly the same argument as in the previous section, with trading apples.  Except that: (a) the scenario is more crisp, so it is easier to generalize and scale up much more complicated similar arguments; and (b) it introduces a whole lot of assumptions that people new to expected utility would probably find rather questionable.\n\n"What?" one might cry. "What sort of crazy bookie would buy and sell bets at exactly the same price?  Why ought *anyone* to buy and sell bets at exactly the same price?  Who says that I must value a gain of \\$1 exactly the opposite of a loss of \\$1?  Why should the price that I put on a bet represent my degree of uncertainty about the environment?  What does all of this argument about gambling have to do with real life?"\n\nSo again, the key idea is not that we are assuming anything about people valuing every real-world dollar the same; nor is it in real life a good idea to offer to buy or sell bets at the same prices. %%note: In real life this leads to a problem of 'adversarial selection' where somebody who knows more about the environment than you, can decide whether to buy or sell from you.  To put it another way, from a [1zq Bayesian] standpoint, if an *intelligent* counterparty is deciding whether to buy or sell from you a bet on $X$, the fact that they choose to buy (or sell) should cause you to [1ly update] in favor (or against) $X$ actually happening.  
After all, they wouldn't be taking the bet unless they thought they knew something you didn't!%% Rather, Dutch book arguments can stand in as shorthand for some longer story in which we only assume that you prefer more apples to less apples.\n\nThe Dutch book argument above has to be seen as one more added piece in the company of all the *other* coherence theorems--for example, the coherence theorems suggesting that you ought to be quantitatively weighing events in your mind in the first place.\n\n## Conditional probability\n\nWith more complicated Dutch book arguments, we can derive more complicated ideas such as 'conditional probability'.\n\nLet's say that we're pricing three kinds of gambles over two events $Q$ and $R$:\n\n- A ticket that costs $\\$x$, and pays $\\$1$ if $Q$ happens.\n- A ticket that doesn't cost anything or pay anything if $Q$ doesn't happen (the ticket price is refunded); and if $Q$ does happen, this ticket costs $\\$y,$ then pays $\\$1$ if $R$ happens.\n- A ticket that costs $\\$z$, and pays $\\$1$ if $Q$ and $R$ both happen.\n\nIntuitively, the idea of [1rj conditional probability] is that the probability of $Q$ and $R$ both happening, should be equal to the probability of $Q$ happening, times the probability that $R$ happens assuming that $Q$ happens:\n\n$$\\mathbb P(Q \\wedge R) = \\mathbb P(Q) \\cdot \\mathbb P(R \\mid Q)$$\n\nTo exhibit a Dutch book argument for this rule, we want to start from the assumption of a qualitatively non-dominated strategy, and derive the quantitative rule $z = x \\cdot y.$\n\nSo let's give an example that violates this equation and see if there's a way to make a guaranteed profit.  Let's say somebody:\n\n- Prices at x=\\$0.60 the first ticket, aka $\\mathbb P(Q)$\n- Prices at y=\\$0.70 the second ticket, aka $\\mathbb P(R \\mid Q)$\n- Prices at z=\\$0.20 the third ticket, aka $\\mathbb P(Q \\wedge R),$ which ought to be \\$0.42 assuming the first two prices.\n\nThe first two tickets are priced relatively high, compared to the third ticket which is priced relatively low, suggesting that we ought to sell the first two tickets and buy the third.\n\nOkay, let's ask what happens if we sell 10 of the first ticket, sell 10 of the second ticket, and buy 10 of the third ticket.\n\n- If $Q$ doesn't happen, we get \\$6, and pay \\$2.  Net +\\$4.\n- If $Q$ happens and $R$ doesn't happen, we get \\$6, pay \\$10, get \\$7, and pay \\$2.  Net +\\$1.\n- If $Q$ happens and $R$ happens, we get \\$6, pay \\$10, get \\$7, pay \\$10, pay \\$2, and get \\$10.  Net:  +\\$1.\n\nThat is: we can get a guaranteed positive profit over all three possible outcomes.\n\nMore generally, let $A, B, C$ be the (potentially negative) amount of each ticket $X, Y, Z$ that is being bought (buying a negative amount is selling).  Then the prices $x, y, z$ can be combined into a 'Dutch book' whenever the following three inequalities can be simultaneously true, with at least one inequality strict:\n\n$$\\begin{array}{rrrl}\n-Ax & + 0 & - Cz & \\geqq 0 \\\\\nA(1-x) & - By & - Cz & \\geqq 0 \\\\\nA(1-x) & + B(1-y) & + C(1-z) & \\geqq 0\n\\end{array}$$\n\nFor $x, y, z \\in (0..1)$ this is impossible exactly iff $z = x * y.$  The proof via a bunch of algebra is left as an exercise to the reader. 
%note: The quick but advanced argument would be to say that the left-hand-side must look like a singular matrix, whose determinant must therefore be zero.%\n\n## The Allais Paradox\n\nBy now, you'd probably like to see a glimpse of the sort of argument that shows in the first place that we need expected utility--that a non-dominated strategy for uncertain choice must behave as if multiplying utilities by some kinda utility-multiplying thingies ('probabilities').\n\nAs far as I understand it, the real argument you're looking for is [Abraham Wald's complete class theorem](https://projecteuclid.org/download/pdf_1/euclid.aoms/1177730345), which I must confess I don't know how to reduce to a simple demonstration.\n\nBut we can catch a glimpse of the general idea from a famous psychology experiment that became known as the Allais Paradox (in slightly adapted form).\n\nSuppose you ask some experimental subjects which of these gambles they would rather play:\n\n- 1A:  A certainty of \\$1,000,000.\n- 1B:  90% chance of winning \\$5,000,000, 10% chance of winning nothing.\n\nMost subjects say they'd prefer 1A to 1B.\n\nNow ask a separate group of subjects which of these gambles they'd prefer:\n\n- 2A:  50% chance of winning \\$1,000,000; 50% chance of winning \\$0.\n- 2B:  45% chance of winning \\$5,000,000; 55% chance of winning \\$0.\n\nIn this case, most subjects say they'd prefer gamble 2B.\n\nNote that the \\$ sign here denotes real dollars, not utilities!  A gain of five million dollars isn't, and shouldn't be, worth exactly five times as much to you as a gain of one million dollars.  We can use the &euro; symbol to denote the expected utilities that are abstracted from how much you relatively value different outcomes; \\$ is just money.\n\nSo we certainly aren't claiming that the first preference is paradoxical because 1B has an expected dollar value of \\$4.5 million and 1A has an expected dollar value of \\$1 million.  That would be silly.  We care about expected utilities, not expected dollar values, and those two concepts aren't the same at all!\n\nNonetheless, the combined preferences 1A > 1B and 2A < 2B are not compatible with any coherent utility function.  We cannot simultaneously have:\n\n$$\\begin{array}{rcl}\nU(\\text{gain \\$1 million}) & > & 0.9 \\cdot U(\\text{gain \\$5 million}) + 0.1 \\cdot U(\\text{gain \\$0}) \\\\\n0.5 \\cdot U(\\text{gain \\$0}) + 0.5 \\cdot U(\\text{gain \\$1 million}) & < & 0.45 \\cdot U(\\text{gain \\$5 million}) + 0.55 \\cdot U(\\text{gain \\$0})\n\\end{array}$$\n\nThis was one of the earliest experiments seeming to demonstrate that actual human beings were not expected utility maximizers--a very tame idea nowadays, to be sure, but the *first definite* demonstration of that was a big deal at the time.  Hence the term, "Allais Paradox".\n\nNow by the general idea behind coherence theorems, since we can't *view this behavior* as corresponding to expected utilities, we ought to be able to show that it corresponds to a dominated strategy somehow--derive some way in which this behavior corresponds to shooting off your own foot.\n\nIn this case, the relevant idea seems non-obvious enough that it doesn't seem reasonable to demand that you think of it on your own; but if you like, you can pause and try to think of it anyway.  
Otherwise, just continue reading.\n\nAgain, the gambles are as follows:\n\n- 1A:  A certainty of \\$1,000,000.\n- 1B:  90% chance of winning \\$5,000,000, 10% chance of winning nothing.\n- 2A:  50% chance of winning \\$1,000,000; 50% chance of winning \\$0.\n- 2B:  45% chance of winning \\$5,000,000; 55% chance of winning \\$0.\n\nNow observe that Scenario 2 corresponds to a 50% chance of playing Scenario 1, and otherwise getting \\$0.\n\nThis, in fact, is why the combination 1A > 1B; 2A < 2B is incompatible with expected utility.  In terms of [one set of axioms](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem#The_axioms) frequently used to describe expected utility, it violates the Independence Axiom: if a gamble $L$ is preferred to $M$, that is $L > M$, then we ought to be able to take a constant probability $p > 0$ and another gamble $N$ and have $p \\cdot L + (1-p)\\cdot N > p \\cdot M + (1-p) \\cdot N.$\n\nTo put it another way, if I flip a coin to decide whether or not to play some entirely different game $N,$ but otherwise let you choose $L$ or $M,$ you ought to make the same choice as if I just ask you whether you prefer $L$ or $M$.  Your preference between $L$ and $M$ should be 'independent' of the possibility that, instead of doing anything whatsoever with $L$ or $M,$ we will do something else instead.\n\nAnd since this is an axiom of expected utility, any violation of that axiom ought to correspond to a dominated strategy somehow.\n\nIn the case of the Allais Paradox, we do the following:\n\nFirst, I show you a switch that can be set to A or B, currently set to A.\n\nIn one minute, I tell you, I will flip a coin.  If the coin comes up heads, you will get nothing.  If the coin comes up tails, you will play the gamble from Scenario 1.\n\nFrom your current perspective, that is, we are playing Scenario 2: since the switch is set to A, you have a 50% chance of getting nothing and a 50% chance of getting \\$1 million.\n\nI ask you if you'd like to pay a penny to throw the switch from A to B.  Since you prefer gamble 2B to 2A, and some quite large amounts of money are at stake, you agree to pay the penny.  From your perspective, you now have a 55% chance of ending up with nothing and a 45% chance of getting \\$5M.\n\nI then flip the coin, and luckily for you, it comes up tails.\n\nFrom your perspective, you are now in Scenario 1B.  Having observed the coin and updated on its state, you now think you have a 90% chance of getting \\$5 million and a 10% chance of getting nothing.  By hypothesis, you would prefer a certainty of \\$1 million.\n\nSo I offer you a chance to pay another penny to flip the switch back from B to A.  
And with so much money at stake, you agree.\n\nI have taken your two cents on the subject.\n\nThat is:  You paid a penny to flip a switch and then paid another penny to switch it back, and this is dominated by the strategy of just leaving the switch set to A.\n\nAnd that's at least a glimpse of why, if you're not using dominated strategies, the thing you do with relative utilities is multiply them by probabilities in a consistent way, and prefer the choice that leads to a greater expectation of the variable representing utility.\n\n### From the Allais Paradox to real life\n\nThe real-life lesson about what to do when faced with Allais's dilemma might be something like this:\n\nThere's *some* amount that \\$1 million would improve your life compared to \\$0.\n\nThere's some amount that an additional \\$4 million would further improve your life after the first \\$1 million.\n\nYou ought to visualize these two improvements as best you can, and decide whether another \\$4 million can produce at least *one-ninth* as much improvement, as much true value to you, as the first \\$1 million.\n\nIf it can, you should consistently prefer 1B > 1A; 2B > 2A.  And if not, you should consistently prefer 1A > 1B; 2A > 2B.\n\nThe standard 'paradoxical' preferences in Allais's experiment are standardly attributed to a certainty effect: people value the *certainty* of having \\$1 million, while the difference between a 50% probability and a 55% probability looms less large.  (And this ties in to a number of other results about certainty, need for closure, prospect theory, and so on.)\n\nIt may sound intuitive, in an Allais-like scenario, to say that you ought to derive some value from being *certain* about the outcome.  In fact this is just the reasoning the experiment shows people to be using, so of course it might sound intuitive.  But that does, inescapably, correspond to a kind of thinking that produces dominated strategies.\n\nOne possible excuse might be that certainty is valuable if you need to make plans about the future; knowing the exact future lets you make better plans.  This is admittedly true and a phenomenon within expected utility, though it applies in a smooth way as confidence increases rather than jumping suddenly around 100%.  But in the particular dilemma as described here, you only have 1 minute before the game is played, and no time to make other major life choices dependent on the outcome.\n\nAnother possible excuse for certainty bias might be to say:  "Well, I value the emotional feeling of certainty."\n\nIn real life, we do have emotions that are directly about probabilities, and those little flashes of happiness or sadness are worth something if you care about people being happy or sad.  If you say that you value the emotional feeling of being *certain* of getting \\$1 million, the freedom from the fear of getting \\$0, for the minute that the dilemma lasts and you are experiencing the emotion--well, that may just be a fact about what you value, even if it exists outside the expected utility formalism.\n\nAnd this genuinely does not fit into the expected utility formalism.  In an expected utility agent, probabilities are just thingies-you-multiply-utilities-by.  
If those thingies start generating their own utilities once represented inside the mind of a person who is an object of ethical value, you really are going to get results that are incompatible with the formal decision theory.\n\nHowever, *not* being viewable as an expected utility agent does always correspond to employing dominated strategies.  You are giving up *something* in exchange, if you pursue that feeling of certainty.  You are potentially losing all the real value you could have gained from another \\$4 million, if that realized future actually would have gained you more than one-ninth the value of the first \\$1 million.  Is a fleeting emotional sense of certainty over 1 minute, worth *automatically* discarding the potential \\$5-million outcome?  Even if the correct answer given your values is that you properly ought to take the \\$1 million, treasuring 1 minute of emotional certainty doesn't seem like the wise reason to do that.  The wise reason would be if the first \\$1 million really was worth that much more than the next \\$4 million.\n\nThe danger of saying, "Oh, well, I attach a lot of utility to that comfortable feeling of certainty, so my choices are coherent after all" is not that it's mathematically improper to value the emotions we feel while we're deciding.  Rather, by saying that the *most valuable* stakes are the emotions you feel during the minute you make the decision, what you're saying is, "I get a huge amount of value by making decisions however humans instinctively make their decisions, and that's much more important than the thing I'm making a decision *about.*"  This could well be true for something like buying a stuffed animal.  If millions of dollars or human lives are at stake, maybe not so much.\n\n# Conclusion\n\nThe demonstrations we've walked through here aren't the professional-grade coherence theorems as they appear in real math.  Those have names like "[Cox's Theorem](https://en.wikipedia.org/wiki/Cox's_theorem)" or "the complete class theorem"; their proofs are difficult; and they say things like "If seeing piece of information A followed by piece of information B leads you into the same epistemic state as seeing piece of information B followed by piece of information A, plus some other assumptions, I can show an isomorphism between those epistemic states and classical probabilities" or "Any decision rule for taking different actions depending on your observations either corresponds to Bayesian updating given some prior, or else is strictly dominated by some Bayesian strategy".\n\nBut hopefully you've seen enough concrete demonstrations to get a general idea of what's going on with the actual coherence theorems.  We have multiple spotlights all shining on the same core mathematical structure, saying dozens of different variants on, "If you aren't running around in circles or stepping on your own feet or wantonly giving up things you say you want, we can see your behavior as corresponding to this shape.  Conversely, if we can't see your behavior as corresponding to this shape, you must be visibly shooting yourself in the foot."  Expected utility is the only structure that has this great big family of discovered theorems all saying that.  
It has a scattering of academic competitors, because academia is academia, but the competitors don't have anything like that mass of spotlights all pointing in the same direction.\n\nSo if we need to pick an interim answer for "What kind of quantitative framework should I try to put around my own decision-making, when I'm trying to check if my thoughts make sense?" or "By default and barring special cases, what properties might a sufficiently advanced machine intelligence *look to us* like it had at least approximately, if we couldn't see it *visibly* running around in circles?", then there's pretty much one obvious candidate:  Probabilities, utility functions, and expected utility.\n\n# Further reading\n\n- To learn more about agents and AI:  [ Interesting cognition and behavior that can be derived just from the notion of expected utility], followed by [ Is expected utility a good way to think about the default behavior of sufficiently advanced Artificial Intelligences?]\n- To learn more about decision theory:  [ The controversial counterfactual at the heart of the expected utility formula.]',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: [
    '0',
    '0',
    '0',
    '0',
    '0',
    '0',
    '0',
    '0',
    '0',
    '0'
  ],
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {
    Summary: 'A tutorial that introduces the concept of 'expected utility' by walking through some of the bad things that happen to an agent which can't be viewed as having a consistent utility function and probability assignment.'
  },
  creatorIds: [
    'EliezerYudkowsky',
    'EvgeniiPashkin'
  ],
  childIds: [],
  parentIds: [
    'expected_utility_formalism'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [
    'b_class_meta_tag'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [
    {
      id: '7394',
      parentId: 'expected_utility_formalism',
      childId: 'intro_utility_coherence',
      type: 'subject',
      creatorId: 'EliezerYudkowsky',
      createdAt: '2017-01-19 19:03:27',
      level: '3',
      isStrong: 'true',
      everPublished: 'true'
    },
    {
      id: '7466',
      parentId: 'coherence_theorems',
      childId: 'intro_utility_coherence',
      type: 'subject',
      creatorId: 'EliezerYudkowsky',
      createdAt: '2017-02-07 21:07:34',
      level: '2',
      isStrong: 'false',
      everPublished: 'true'
    }
  ],
  lenses: [],
  lensParentId: 'expected_utility_formalism',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '23134',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '15',
      type: 'newEdit',
      createdAt: '2018-11-29 07:38:02',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22866',
      pageId: 'intro_utility_coherence',
      userId: 'EvgeniiPashkin',
      edit: '14',
      type: 'newEditProposal',
      createdAt: '2017-11-04 16:54:42',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22738',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '13',
      type: 'newEdit',
      createdAt: '2017-08-28 04:10:33',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22272',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '12',
      type: 'newEdit',
      createdAt: '2017-03-09 22:13:06',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21955',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '11',
      type: 'newEdit',
      createdAt: '2017-02-08 03:25:19',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21954',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newSubject',
      createdAt: '2017-02-07 21:07:35',
      auxPageId: 'coherence_theorems',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21831',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '10',
      type: 'newEdit',
      createdAt: '2017-01-25 05:21:08',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21830',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2017-01-25 05:18:26',
      auxPageId: 'b_class_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21829',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteTag',
      createdAt: '2017-01-25 05:18:18',
      auxPageId: 'work_in_progress_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21827',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '9',
      type: 'newEdit',
      createdAt: '2017-01-25 05:16:47',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21805',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newEdit',
      createdAt: '2017-01-21 01:55:24',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21803',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2017-01-20 05:40:05',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21796',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2017-01-20 05:34:24',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21795',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2017-01-20 05:06:29',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21794',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2017-01-20 04:47:02',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21793',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2017-01-20 04:45:32',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21792',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2017-01-20 04:44:11',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21788',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2017-01-20 04:41:35',
      auxPageId: 'expected_utility_formalism',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21790',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newSubject',
      createdAt: '2017-01-20 04:41:35',
      auxPageId: 'expected_utility_formalism',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21791',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2017-01-20 04:41:35',
      auxPageId: 'work_in_progress_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21786',
      pageId: 'intro_utility_coherence',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2017-01-20 04:41:34',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}