{
  localUrl: '../page/causal_dt.html',
  arbitalUrl: 'https://arbital.com/p/causal_dt',
  rawJsonUrl: '../raw/5n9.json',
  likeableId: '0',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'causal_dt',
  edit: '4',
  editSummary: '',
  prevEdit: '3',
  currentEdit: '4',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Causal decision theories',
  clickbait: 'On CDT, to choose rationally, you should imagine the world where your physical act changes, then imagine running that world forward in time.  (Therefore, it's irrational to vote in elections.)',
  textLength: '17408',
  alias: 'causal_dt',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-08-02 00:36:46',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2016-07-29 22:15:27',
  seeDomainId: '0',
  editDomainId: '123',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '2',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '139',
  text: '[summary(Gloss):  Causal decision theory says, "To choose rationally, imagine that just your own physical act changes, and ask what the resulting universe would look like."  CDT is the implicit background theory being invoked when somebody suggests that [rationality_of_voting voting in elections] is 'irrational', or that two 'rational' agents can't help but defect against each other in the [5py].\n\nCausal decision theory is currently (2016) the most academically popular [18s decision theory].  It is being challenged by [58b].]\n\n[summary:  Causal decision theory is a (currently academically dominant) view which says that the [rational_choice principle of rational choice] is to choose based on the [causal_counterfactual physical consequences] of your act.  To figure out what the universe looks like if you do X, you should imagine a world where nothing changes up to the moment of X, the physical act changes to be X, and then your model of the world's laws is run forward from there.\n\nCDT contrasts to [-5px evidential decision theory], the original form of decision theory, which was later realized to imply, "Act so that your decision is the best news you could get, if somebody told you about it as news."  It also contrasts to the more recent [-58b], which says, "Choose as if you are selecting the logical output of your decision algorithm."\n\nCausal decision theory has been criticized on grounds of giving counterintuitive advice such as "Don't bother [rationality_of_voting voting in elections]" or "Defect against your own clone in the [5py Prisoner's Dilemma]", and for other agents getting higher payoffs in dilemmas such as [5pv].  Logical decision theorists also critique CDT on grounds such as its alleged [2rb reflective inconsistency].]\n\n[summary(Technical):  Causal decision theory says that the action-conditionals inside the [18v expected utility formula] should be treated as [causal_counterfactual causal counterfactuals] intervening on your physical act, that is, the expectation of utility $\\mathcal U$ summed over outcomes $\\mathcal O$ given an action $a_x$ should be written:\n\n$$\\mathbb E[\\mathcal U|a_x] = \\sum_{o_i \\in \\mathcal O} \\mathcal U(o_i) \\cdot \\mathbb P(a_x \\ \\square \\!\\! \\rightarrow o_i)$$\n\nIf causal counterfactuals are computed as in the standard theory of [causal_model causal models] using $\\operatorname{do}()$ interventions, then the expected utility formula would be written:\n\n$$\\mathbb E[\\mathcal U| \\operatorname{do}(a_x)] = \\sum_{o_i \\in \\mathcal O} \\mathcal U(o_i) \\cdot \\mathbb P(o_i | \\operatorname{do}(a_x))$$\n\nSome proposed technical refinements of CDT include [tickle_defense updating on your own suspicion of action]; and choosing mixed strategies to break infinite loops in dilemmas like [death_in_damascus] or [5qh the Absent-Minded Driver].\n\n[58b Logical decision theorists] have critiqued standard causal decision theory upon several intuitive and technical grounds.]\n\nCausal decision theory (CDT) and its many variants are the academically dominant form of [18s decision theory] (as of 2016).  CDT contrasts with the older formalism of [5px evidential decision theory] and, more recently, the new principle of [58b logical decision theory].  CDT says that "The [principle_rational_choice principle of rational choice] is to decide according to the [causal_counterfactual counterfactual] consequences of your physical act."  
That is:  To figure out the consequences of a choice, imagine the universe as it will already exist at the moment when you physically act; imagine only your physical act changing; and then run the laws of physics forwards from there.\n\nOther overviews of causal decision theory:\n\n- [Stanford Encyclopedia of Philosophy](http://plato.stanford.edu/entries/decision-causal/).\n- [Wikipedia](https://en.wikipedia.org/wiki/Causal_decision_theory).\n- [Book by James M. Joyce](https://www.amazon.com/Foundations-Decision-Cambridge-Probability-Induction/dp/0521641640/).\n- [Early paper by Gibbard and Harper which helped to establish causal decision theory](http://www.mit.edu/~djr/24.118/Counterfactuals.pdf).\n\n# Causal versus evidential decision theory\n\nCausal decision theory gained widespread acceptance based on critiques of the policies implied by the previous way of writing down the [18t expected utility formula], which we now think of as [5px evidential decision theory] (EDT).\n\nWe can think of EDT as the accidental result of writing down expected utilities in the most obvious way:  The expected consequence of an act $a_0$ is just the probability distribution over outcomes $o_i$ given by $\\mathbb P(o_i|a_0).$ That is, on EDT, to imagine the consequence of choosing an act $a_0,$ we imagine what we would believe about the world if somebody told us that we'd actually chosen $a_0.$\n\nTo see an example of a case that pries apart EDT and CDT, consider the [toxoplasmosis_dilemma Toxoplasmosis Dilemma].  Suppose that a certain parasitic infection, often carried by cats, has been found to make humans enjoy petting cats more (thus helping to spread the infection).  Suppose that statistics have found that in a certain experimental setup, 10% of the people who don't pet a cute kitten, and 20% of the people who do pet the kitten, have toxoplasmosis.  The kitten itself is guaranteed to have been sterilized and free of toxoplasmosis.  The disutility of toxoplasmosis as a parasitic infection greatly outweighs the pleasure of petting the kitten.  Do you pet the kitten?\n\nAn EDT agent might reason, "If I learn as news that I pet the kitten, I would estimate a 10-percentage-point higher chance that I have toxoplasmosis, compared to the world in which I do not learn that I pet the kitten.  Therefore, I will not pet the kitten."\n\nA CDT agent would reason, "When I imagine the world up to the point where I pet the kitten, either I already have toxoplasmosis or I don't.  Petting the kitten can't *cause* me to get toxoplasmosis.  Therefore, I should pet the kitten... now, having realized that I intend to pet the kitten, I realize that I have a 20% chance of having toxoplasmosis.  But in the counterfactual world where I *don't* pet the kitten, my probability of having toxoplasmosis would counterfactually still be 20%, and I'd miss out on petting the kitten as well."\n\nBecause it was widely agreed that the CDT agent was being more reasonable in the above case, CDT was widely adopted as a replacement for the previous formalism, which was then relabeled as EDT.\n\nEDT and CDT are computed in formally different ways.  When we condition on our actions inside EDT, we are computing a [1rj conditional probability], whereas in CDT, we are computing a [causal_counterfactual causal counterfactual].  The difference between the two is sometimes explained by contrasting this pair of sentences:\n\n- If Lee Harvey Oswald didn't shoot John F. Kennedy, somebody else did.\n- If Lee Harvey Oswald hadn't shot John F. 
Kennedy, somebody else would have.\n\nIn the first sentence, we imagine being told as news that Oswald didn't shoot Kennedy, and [1ly updating our beliefs] to integrate this with the rest of our observations.  Formally, we take whatever tiny shred of probability we might have assigned to possible worlds where the history books are wrong and Oswald didn't actually shoot Kennedy, and imagine that tiny shred of probability expanding to become the whole of our [1rp posterior] probability distribution.  In particular, even in those improbable worlds, Kennedy was still shot (presumably by somebody else).\n\nLet $O$ denote the proposition that Oswald shot Kennedy, $\\neg O$ denote $O$ being false, and $K$ denote the proposition that Kennedy was shot.  Our revised probability of Kennedy being shot if $O$ were actually false, written as $\\mathbb P(K|\\neg O),$ would still be quite high.\n\nThe second sentence asks us to imagine how a counterfactual world would play out if Oswald had acted differently.  To visualize this counterfactual:\n\n- We imagine everything in the world being the same up until the point where Oswald decides to shoot Kennedy.\n- We surgically intervene on our imagined world to change Oswald's decision to not-shooting, without changing any other facts about the past.\n- We rerun our model of the world's mechanisms forward from the point of change, to determine what *would have* happened in this alternate universe.\n\nThis [causal_counterfactual causal counterfactual] is often written as $\\mathbb P(\\neg O \\ \\square \\!\\! \\rightarrow K).$  If you believe that Lee Harvey Oswald acted alone (and did in fact shoot Kennedy), then you should estimate a low probability for $\\mathbb P(\\neg O \\ \\square \\!\\! \\rightarrow K),$ contrasting with your presumably high probability for $\\mathbb P(K|\\neg O).$\n\n# Computing causal counterfactuals\n\nMany academic discussions of causal decision theory take for granted that we 'just know' a counterfactual distribution $\\mathbb P(\\bullet \\ || \\ \\bullet)$ which is treated as heaven-sent.  However, one formal way of computing causal counterfactuals is given by the theory of [causal_model causal models] developed by Judea Pearl and others.\n\n%%todo: put real diagrams into this section; note that it duplicates a section in [5d6]. %%\n\nThe backbone of a causal model is a directed acyclic graph showing which events causally affect which other events:\n\n- $X_1$ -> {$X_2$, $X_3$} -> $X_4$ -> $X_5$\n\nOne standard example of such a causal graph is:\n\n- SEASON -> {RAINING, SPRINKLER} -> SIDEWALK -> SLIPPERY\n\nThis says, e.g.:\n\n- That the current SEASON affects the probability that it's RAINING, and separately affects the probability of the SPRINKLER turning on.  (But RAINING and SPRINKLER don't affect each other; if we know the current SEASON, we don't need to know whether it's RAINING to figure out the probability the SPRINKLER is on.)\n- RAINING and SPRINKLER can both cause the SIDEWALK to become wet.  (So if we did observe that the sidewalk was wet, then even already knowing the SEASON, we would estimate a different probability that it was RAINING depending on whether the SPRINKLER was on.  The SPRINKLER being on would 'explain away' the SIDEWALK's observed wetness without any need to postulate that it was RAINING.)\n- Whether the SIDEWALK is wet is the sole determining factor for whether the SIDEWALK is SLIPPERY.  
(So that if we *know* whether the SIDEWALK is wet, we learn nothing more about the probability that the path is SLIPPERY by being told that the SEASON is summer.  But if we didn't already know whether the SIDEWALK was wet, whether the SEASON was summer or fall might be very relevant for guessing whether the path was SLIPPERY!)\n\nA causal model goes beyond the graph by including specific probability functions $\\mathbb P(X_i | \\mathbf{pa}_i)$ for how to calculate the probability of each node $X_i$ taking on the value $x_i$ given the values $\\mathbf{pa}_i$ of $X_i$'s parents (its immediate ancestors in the graph).  It is implicitly assumed that the causal model [ factorizes], so that the probability of any value assignment $\\mathbf x$ to the whole graph can be calculated using the product:\n\n$$\\mathbb P(\\mathbf x) = \\prod_i \\mathbb P(x_i | \\mathbf{pa}_i)$$\n\nThen the counterfactual conditional $\\mathbb P(\\mathbf x | \\operatorname{do}(X_j=x_j))$ is calculated via:\n\n$$\\mathbb P(\\mathbf x | \\operatorname{do}(X_j=x_j)) = \\prod_{i \\neq j} \\mathbb P(x_i | \\mathbf{pa}_i)$$\n\n(We assume that $\\mathbf x$ has $x_j$ equaling the $\\operatorname{do}$-specified value of $X_j$; otherwise its probability under the intervention is defined to be $0$.)\n\nThis just says that when we set $\\operatorname{do}(X_j=x_j),$ we ignore the ordinary parent nodes for $X_j$ and stipulate that, whatever the values of $\\mathbf{pa}_j,$ the probability of $X_j = x_j$ is $1$.\n\nThis formula implies that conditioning on $\\operatorname{do}(X_j=x_j)$ can only affect the probabilities of variables $X_k$ that are "downstream" of $X_j$ in the directed graph of the causal model.  (Which is why choosing to pet the kitten can't possibly affect whether you have [toxoplasmosis_dilemma toxoplasmosis].)\n\nThen expected utility should be calculated as:\n\n$$\\mathbb E[\\mathcal U| \\operatorname{do}(a_x)] = \\sum_{o_i \\in \\mathcal O} \\mathcal U(o_i) \\cdot \\mathbb P(o_i | \\operatorname{do}(a_x))$$\n\nUnder this rule, we will calculate that [toxoplasmosis_dilemma we can't affect the probability of having toxoplasmosis by petting the kitten], since our choice to pet the kitten is causally downstream of whether we have toxoplasmosis.\n\n[todo: put diagram here]\n\n# Proposed technical refinements of CDT\n\nThe semantics of the $\\operatorname{do}()$ operation, or causal counterfactuals generally, imply that in [5pt Newcomblike problems] the first pass of a CDT expected utility calculation may return quantitatively wrong utilities, or even a qualitatively bad option, since the CDT agent will not yet have updated background beliefs based on observing its own decision.\n\nIn [5pv], the mischievous [5b2 Omega] places two boxes before you, a transparent Box A containing \\$1,000, and an opaque Box B.  Omega then departs.  You can take either Box B alone, or both boxes.  If Omega predicted that you would take only Box B, then Omega has already put \\$1,000,000 into Box B.  If Omega predicted you would two-box, Box B already contains nothing.\n\nSuppose that in the general population, the base rate of taking only Box B is 2/3.  Then at the first moment of making the decision to two-box, a CDT agent will believe that Box B has a 2/3 probability of being full.\n\nBesides this being an inaccurate expectation of future wealth, in a slightly different version of Newcomb's Problem, it leads to potential losses.  
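\n\nTo make the inflated expectation concrete before turning to that variant, here is a minimal sketch (illustrative code, not part of the original dilemma) of the first-pass CDT calculation in the standard problem above, where the unconditional 2/3 base rate is plugged in as the probability that Box B is full and, per the $\\operatorname{do}()$ semantics, the agent's own act is treated as uncorrelated with Omega's prediction:\n\n```python\n# Hypothetical illustration: first-pass CDT expected values for Newcomb's\n# Problem, using the population base rate of one-boxing (2/3) as the\n# probability that Box B is already full.  Under the do() semantics, the\n# agent's own act is treated as uncorrelated with Omega's prediction.\n\nP_BOX_B_FULL = 2 / 3\n\n# Payoffs in dollars: (Box B full, Box B empty).\nPAYOFFS = {\n    "one-box": (1_000_000, 0),\n    "two-box": (1_001_000, 1_000),\n}\n\ndef cdt_expected_value(act):\n    full, empty = PAYOFFS[act]\n    return P_BOX_B_FULL * full + (1 - P_BOX_B_FULL) * empty\n\nfor act in PAYOFFS:\n    print(act, round(cdt_expected_value(act)))\n# one-box 666667\n# two-box 667667\n# Two-boxing wins by exactly 1,000 dollars, and the agent expects roughly\n# 667,667 dollars -- even though, if Omega predicts accurately, an agent\n# that two-boxes actually walks away with only 1,000 dollars.\n```\n\n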
Suppose you must press one of four buttons $W, X, Y, Z$ to determine (a) whether to one-box or two-box, and (b) whether to pay an extra \\$900 fee to make the money (if any) tax-free.  If your marginal tax rate is otherwise 50%, then the payoff chart in after-tax income might look like this:\n\n$$\\begin{array}{r|c|c}\n& \\text{One-boxing predicted} & \\text{Two-boxing predicted} \\\\\n\\hline\n\\text{W: Take both boxes, no fee:} & \\$500,500 & \\$500 \\\\ \\hline\n\\text{X: Take only Box B, no fee:} & \\$500,000 & \\$0 \\\\ \\hline\n\\text{Y: Take both boxes, pay fee:} & \\$1,000,100 & \\$100 \\\\ \\hline\n\\text{Z: Take only Box B, pay fee:} & \\$999,100 & -\\$900\n\\end{array}$$\n\nA CDT agent that has not yet updated on observing its own choice, thinking that it has the 2/3 prior chance of Box B being full, will press button Y.\n\nAn obvious amendment is to have CDT observe its first impulse, update its background beliefs if required, recalculate expected utilities, possibly change the option selected, and possibly update again, and continue until arriving at a stable state.  This closely resembles the [tickle_defense] in that the CDT agent notices the 'tickle' of an impulse to choose a particular option, and tries updating on that tickle.\n\nA problem with this first amendment is that it can go into infinite loops.\n\nIn [death_in_damascus], a man of Damascus sees Death, and Death looks surprised, then remarks that he has an appointment with the man tomorrow.  The man immediately purchases a fast horse and rides to Aleppo, where the next day he is killed by falling roof tiles.\n\nThe premise of Death in Damascus is that Death, who like Omega is an excellent predictor of human behavior, has already informed you that whichever choice you end up taking will turn out to be the one that leads to the appointed place of your death.  If you decide to stay in Damascus, then observing this, you should expect staying in Damascus to be fatal and Aleppo to be less dangerous.  If you observe yourself choosing to ride to Aleppo, you should expect that Aleppo kills you while Damascus would be quite safe.  Faced with this dilemma, a causal decision theory that repeatedly updates on the 'tickles' of its observed decision-impulses will go into an infinite loop.\n\nAn obvious second amendment is to allow a CDT agent to use mixed strategies, for example to 'choose' to stay in Damascus or go to Aleppo with 0.5 : 0.5 probability.  This permits stability in the Death in Damascus case and also some degree of self-observational updating.\n\nHowever, as [2 Yudkowsky] has observed, this twice-amended version of CDT is still subject to predictable losses.  At the moment of making the 'mixed' decision to stay in Damascus or go to Aleppo with 0.5 : 0.5 probability, the agent reasons as if it has a 50% chance of surviving (by the semantics of the $\\operatorname{do}()$ operation, the counterfactual for the agent's action cannot, inside that calculation, be correlated with any background variables).  So if there were a further compounded decision which included, e.g., a chance to purchase for \\$1 a ticket that pays out \\$10 if the agent survives, the agent would buy that ticket (and then try to sell it back immediately afterwards).  Similarly, once the CDT agent has started on its way to Aleppo (if that was the result of the randomized decision), nothing prohibits it from suddenly realizing that Aleppo is certainly fatal and Damascus is safe, and trying to turn back.  
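\n\nAs a minimal sketch of that predictable loss (the \\$1 ticket, \\$10 payout, and 50% survival figure are from the example above; treating post-update survival as roughly zero is an added assumption, given Death's accuracy), here is how the $\\operatorname{do}()$-based calculation values the side bet at the moment of the mixed decision versus immediately afterwards:\n\n```python\n# Hypothetical illustration of the side-bet loss in Death in Damascus.\n# Inside the mixed-strategy do() calculation, the agent's act is treated as\n# uncorrelated with Death's prediction, so it credits itself a 0.5 chance of\n# survival; once it observes which city the randomization picked, it expects\n# (given Death's accuracy) to die there.\n\nTICKET_PRICE = 1    # dollars; the ticket pays out only if the agent survives\nTICKET_PAYOUT = 10  # dollars\n\np_survive_at_decision = 0.5   # as computed inside the mixed do() step\np_survive_after_update = 0.0  # after observing its own destination\n\nev_of_buying_now = p_survive_at_decision * TICKET_PAYOUT - TICKET_PRICE\nvalue_assigned_later = p_survive_after_update * TICKET_PAYOUT\n\nprint(ev_of_buying_now)      # 4.0 -> the agent buys the ticket\nprint(value_assigned_later)  # 0.0 -> and immediately wants to sell it back\n```\n\n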
In this sense, the stability and internal consistency of CDT agents might still be regarded as an unsolved problem.\n\n%%todo:\n\ntechnical details: tickles, infinite loops, mixed strategies\nmotivation and history: newcomb's problem, critiques, critiques from logical decision theory\n\n%%',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: [
    '0',
    '0',
    '0',
    '0',
    '0',
    '0',
    '0',
    '0',
    '0',
    '0'
  ],
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'true',
  proposalEditNum: '5',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {
    Gloss: 'Causal decision theory says, "To choose rationally, imagine that just your own physical act changes, and ask what the resulting universe would look like."  CDT is the implicit background theory being invoked when somebody suggests that [rationality_of_voting voting in elections] is 'irrational', or that two 'rational' agents can't help but defect against each other in the [5py].\n\nCausal decision theory is currently (2016) the most academically popular [18s decision theory].  It is being challenged by [58b].',
    Summary: 'Causal decision theory is a (currently academically dominant) view which says that the [rational_choice principle of rational choice] is to choose based on the [causal_counterfactual physical consequences] of your act.  To figure out what the universe looks like if you do X, you should imagine a world where nothing changes up to the moment of X, the physical act changes to be X, and then your model of the world's laws is run forward from there.\n\nCDT contrasts to [-5px evidential decision theory], the original form of decision theory, which was later realized to imply, "Act so that your decision is the best news you could get, if somebody told you about it as news."  It also contrasts to the more recent [-58b], which says, "Choose as if you are selecting the logical output of your decision algorithm."\n\nCausal decision theory has been criticized on grounds of giving counterintuitive advice such as "Don't bother [rationality_of_voting voting in elections]" or "Defect against your own clone in the [5py Prisoner's Dilemma]", and for other agents getting higher payoffs in dilemmas such as [5pv].  Logical decision theorists also critique CDT on grounds such as its alleged [2rb reflective inconsistency].',
    Technical: 'Causal decision theory says that the action-conditionals inside the [18v expected utility formula] should be treated as [causal_counterfactual causal counterfactuals] intervening on your physical act, that is, the expectation of utility $\\mathcal U$ summed over outcomes $\\mathcal O$ given an action $a_x$ should be written:\n\n$$\\mathbb E[\\mathcal U|a_x] = \\sum_{o_i \\in \\mathcal O} \\mathcal U(o_i) \\cdot \\mathbb P(a_x \\ \\square \\!\\! \\rightarrow o_i)$$\n\nIf causal counterfactuals are computed as in the standard theory of [causal_model causal models] using $\\operatorname{do}()$ interventions, then the expected utility formula would be written:\n\n$$\\mathbb E[\\mathcal U| \\operatorname{do}(a_x)] = \\sum_{o_i \\in \\mathcal O} \\mathcal U(o_i) \\cdot \\mathbb P(o_i | \\operatorname{do}(a_x))$$\n\nSome proposed technical refinements of CDT include [tickle_defense updating on your own suspicion of action]; and choosing mixed strategies to break infinite loops in dilemmas like [death_in_damascus] or [5qh the Absent-Minded Driver].\n\n[58b Logical decision theorists] have critiqued standard causal decision theory upon several intuitive and technical grounds.'
  },
  creatorIds: [
    'EliezerYudkowsky',
    'JMoros'
  ],
  childIds: [],
  parentIds: [
    'decision_theory'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [
    'work_in_progress_meta_tag'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [
    {
      id: '5935',
      parentId: 'causal_dt',
      childId: 'ldt_intro_econ',
      type: 'subject',
      creatorId: 'EliezerYudkowsky',
      createdAt: '2016-08-03 22:42:33',
      level: '2',
      isStrong: 'true',
      everPublished: 'true'
    },
    {
      id: '5939',
      parentId: 'causal_dt',
      childId: 'ldt_intro_compsci',
      type: 'subject',
      creatorId: 'EliezerYudkowsky',
      createdAt: '2016-08-03 22:48:10',
      level: '2',
      isStrong: 'true',
      everPublished: 'true'
    },
    {
      id: '5794',
      parentId: 'causal_dt',
      childId: 'causal_dt',
      type: 'subject',
      creatorId: 'EliezerYudkowsky',
      createdAt: '2016-08-02 00:30:35',
      level: '3',
      isStrong: 'true',
      everPublished: 'true'
    }
  ],
  learnMore: [
    {
      id: '5793',
      parentId: 'causal_dt',
      childId: 'absentminded_driver',
      type: 'subject',
      creatorId: 'EliezerYudkowsky',
      createdAt: '2016-08-02 00:29:46',
      level: '3',
      isStrong: 'false',
      everPublished: 'true'
    },
    {
      id: '5819',
      parentId: 'causal_dt',
      childId: 'death_in_damascus',
      type: 'subject',
      creatorId: 'EliezerYudkowsky',
      createdAt: '2016-08-02 04:09:06',
      level: '3',
      isStrong: 'false',
      everPublished: 'true'
    }
  ],
  requirements: [
    {
      id: '5795',
      parentId: 'reads_algebra',
      childId: 'causal_dt',
      type: 'requirement',
      creatorId: 'EliezerYudkowsky',
      createdAt: '2016-08-02 00:31:10',
      level: '3',
      isStrong: 'true',
      everPublished: 'true'
    }
  ],
  subjects: [
    {
      id: '5794',
      parentId: 'causal_dt',
      childId: 'causal_dt',
      type: 'subject',
      creatorId: 'EliezerYudkowsky',
      createdAt: '2016-08-02 00:30:35',
      level: '3',
      isStrong: 'true',
      everPublished: 'true'
    }
  ],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {
    '5n9': [
      '5qh',
      '5qn'
    ]
  },
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22735',
      pageId: 'causal_dt',
      userId: 'JMoros',
      edit: '5',
      type: 'newEditProposal',
      createdAt: '2017-08-19 21:45:05',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '18310',
      pageId: 'causal_dt',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTeacher',
      createdAt: '2016-08-03 22:48:11',
      auxPageId: 'ldt_intro_compsci',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '18304',
      pageId: 'causal_dt',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTeacher',
      createdAt: '2016-08-03 22:42:33',
      auxPageId: 'ldt_intro_econ',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '18070',
      pageId: 'causal_dt',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTeacher',
      createdAt: '2016-08-02 04:09:06',
      auxPageId: 'death_in_damascus',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '18025',
      pageId: 'causal_dt',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-08-02 00:36:46',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '18022',
      pageId: 'causal_dt',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-08-02 00:33:30',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '18021',
      pageId: 'causal_dt',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2016-08-02 00:32:49',
      auxPageId: 'work_in_progress_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '18017',
      pageId: 'causal_dt',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteTag',
      createdAt: '2016-08-02 00:32:38',
      auxPageId: 'work_in_progress_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '18015',
      pageId: 'causal_dt',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-08-02 00:32:29',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '18013',
      pageId: 'causal_dt',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newRequirement',
      createdAt: '2016-08-02 00:31:10',
      auxPageId: 'reads_algebra',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '18011',
      pageId: 'causal_dt',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTeacher',
      createdAt: '2016-08-02 00:30:36',
      auxPageId: 'causal_dt',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '18012',
      pageId: 'causal_dt',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newSubject',
      createdAt: '2016-08-02 00:30:36',
      auxPageId: 'causal_dt',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '18007',
      pageId: 'causal_dt',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTeacher',
      createdAt: '2016-08-02 00:29:46',
      auxPageId: 'absentminded_driver',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '17762',
      pageId: 'causal_dt',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-07-29 22:15:28',
      auxPageId: 'decision_theory',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '17763',
      pageId: 'causal_dt',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2016-07-29 22:15:28',
      auxPageId: 'work_in_progress_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '17760',
      pageId: 'causal_dt',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-07-29 22:15:27',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}