{
  localUrl: '../page/executable_philosophy.html',
  arbitalUrl: 'https://arbital.com/p/executable_philosophy',
  rawJsonUrl: '../raw/112.json',
  likeableId: 'NateSoares',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '11',
  dislikeCount: '0',
  likeScore: '11',
  individualLikes: [
    'AlexeiAndreev',
    'OliviaSchaefer',
    'BrandonReinhart',
    'EliezerYudkowsky',
    'MarianAndrecki',
    'TravisRivera',
    'ChrisHibbert',
    'BenjyForstadt',
    'EricRogstad',
    'MiddleKek',
    'ChaseRoycroft'
  ],
  pageId: 'executable_philosophy',
  edit: '10',
  editSummary: '',
  prevEdit: '9',
  currentEdit: '10',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Executable philosophy',
  clickbait: 'Philosophical discourse aimed at producing a trustworthy answer or meta-answer, in limited time, which can be used in constructing an Artificial Intelligence.',
  textLength: '14354',
  alias: 'executable_philosophy',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-06-06 23:06:32',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-07-31 19:20:06',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '1',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '1190',
  text: '[summary:  'Executable philosophy' is [+2]'s term for discourse about subjects usually considered in the realm of philosophy, meant to be used for designing an [2c Artificial Intelligence].  Tenets include:\n\n- It is acceptable to take reductionism, and computability of human thought, as a premise.  We take that as established, don't argue it further, and move on to other issues.\n - The low-level mathematical unity of physics - the reducibility of complex physical objects into mathematically simple components, etcetera - has been better established than any philosophical argument which purports to contradict it.\n - We don't have infinite time to arrive at workable solutions.  Continuing debates about non-naturalism *ad infinitum* is harmful and prevents us from moving on.\n- Most "philosophical issues" worth pursuing can and should be rephrased as subquestions of some primary question about how to design an Artificial Intelligence.\n - E.g. rather than the central question of metaethics being "What is goodness made out of?", we begin with the central question "What algorithm would compute goodness?"\n - This imports the discipline of programming into philosophy, and the mindset that programmers use to identify whether an idea will compile.\n- Faced with any philosophically confusing issue, our first task is to **identify what cognitive algorithm humans are executing which feels from the inside like this sort of confusion**, rather than trying to clearly define terms and weigh up all possible arguments for 'positions' within the confusion.\n - E.g., if the standard philosophical question is "Are free will and determinism compatible?" then there might or might not be any coherent thing we mean by free will. But there is definitely some algorithm running in our brain that, when faced with this particular question, generates a confusing sense of a hard-to-pin-down conflict.\n- You've finished solving a philosophy problem when you're no longer confused about it, not when a 'position' seems very persuasive.\n - There's no need to be intimidated by how long a problem has been left open, since all confusion exists in the map, not in the territory.  Any place where a satisfactory solution seems impossible is just someplace your map has an internal skew, and it should be possible to wake up from the confusion.]\n\n\n"Executable philosophy" is [+2]'s term for discourse about subjects usually considered to belong to the realm of philosophy, meant to be applied to problems that arise in designing or [2v aligning] [2c machine intelligence].\n\nTwo motivations of "executable philosophy" are as follows:\n\n1.  We need a philosophical analysis to be "effective" in Turing's sense: that is, the terms of the analysis must be useful in writing programs.  We need ideas that we can compile and run; they must be "executable" like code is executable.\n2.  We need to produce adequate answers on a time scale of years or decades, not centuries.  In the entrepreneurial sense of "good execution", we need a methodology we can execute on in a reasonable timeframe.\n\nSome consequences:\n\n- We take at face value some propositions that seem extremely likely to be true in real life, like "The universe is a mathematically simple low-level unified causal process with no non-natural elements or attachments".  
This is almost certainly true, so as a matter of fast entrepreneurial execution, we take it as settled and move on rather than debating it further.\n   - This doesn't mean we know *how* things are made of quarks, or that we instantly seize on the first theory proposed that involves quarks.  Being reductionist isn't the same as cheering for everything with a reductionist label on it; even if one particular naturalistic theory is true, most possible naturalistic theories will still be wrong.\n- Whenever we run into an issue that seems confusing, we ask "What cognitive process is executing inside our minds that [feels from the inside](https://wiki.lesswrong.com/wiki/How_an_algorithm_feels) like this confusion?"\n  - Rather than asking "Is free will compatible with determinism?" we ask "What algorithm is running in our minds that feels from the inside like free will?"\n      - If we start out in a state of confusion or ignorance, then there might or might not be such a thing as free will, and there might or might not be a coherent concept to describe the thing that does or doesn't exist, but we are definitely and in reality executing some discoverable way of thinking that corresponds to this feeling of confusion.  By asking the question on these grounds, we guarantee that it is answerable eventually.\n  - This process terminates **when the issue no longer feels confusing, not when a position sounds very persuasive**.\n  - "Confusion exists in the map, not in the territory; if I don't know whether a coin has landed heads or tails, that is a fact about my state of mind, not a fact about the coin.  There can be mysterious questions but not mysterious answers."\n  - We do not accept as satisfactory an argument that, e.g., humans would have evolved to feel a sense of free will because this was socially useful.  This still takes a "sense of free will" as an unreduced black box, and argues about some prior cause of this feeling.  We want to know *which cognitive algorithm* is executing that feels from the inside like this sense.  We want to learn the *internals* of the black box, not cheer on an argument that some reductionist process *caused* the black box to be there.\n- Rather than asking "What is goodness made out of?", we begin from the question "What algorithm would compute goodness?"\n  - We apply a programmer's discipline to make sure that all the concepts used in describing this algorithm will also compile.  You can't say that 'goodness' depends on what is 'better' unless you can compute 'better'.\n\nConversely, we can't just plug the products of standard analytic philosophy into AI problems, because:\n\n• The academic incentives favor continuing to dispute small possibilities because "ongoing dispute" means "everyone keeps getting publications".  As somebody once put it, for academic philosophy, an unsolvable problem is "like a biscuit bag that never runs out of biscuits".  As a sheerly cultural matter, this means that academic philosophy hasn't accepted that e.g. everything is made out of quarks (particle fields) without any non-natural or irreducible properties attached.\n\nIn turn, this means that when academic philosophers have tried to do [41n metaethics], the result has been a proliferation of different theories that are mostly about non-natural or irreducible properties, with only a few philosophers taking a stand on trying to do metaethics for a strictly natural and reducible universe.  
Those naturalistic philosophers are still having to *argue for* a natural universe rather than being able to accept this and move on to do further analysis *inside* the naturalistic possibilities.  To build and align Artificial Intelligence, we need to answer some *complex* questions about how to compute goodness; the field of academic philosophy is stuck on an argument about whether goodness ought ever to be computed.\n\n•  Many academic philosophers haven't learned the programmers' discipline of distinguishing concepts that might compile.  If we imagine rewinding the state of understanding of computer chess to what obtained in the days when [38r Edgar Allan Poe proved that no mere automaton could play chess], then the modern style of philosophy would produce, among other papers, a lot of papers considering the 'goodness' of a chess move as a primitive property and arguing about the relation of goodness to reducible properties like controlling the center of a chessboard.\n\nThere's a particular mindset that programmers have for realizing which of their own thoughts are going to compile and run, and which of their thoughts are not getting any closer to compiling.  A good programmer knows, e.g., that if they offer a 20-page paper analyzing the 'goodness' of a chess move in terms of which chess moves are 'better' than other chess moves, they haven't actually come any closer to writing a program that plays chess.  (This principle is not to be confused with greedy reductionism, wherein you find one thing you understand how to compute a bit better, like 'center control', and then take this to be the entirety of 'goodness' in chess.  Avoiding greedy reductionism is part of the *skill* that programmers acquire of thinking in effective concepts.)\n\nMany academic philosophers don't have this mindset of 'effective concepts', nor have they taken as a goal that the terms in their theories need to compile, nor do they know how to check whether a theory compiles.  This, again, is one of the *foundational* reasons why, despite there being a very large edifice of academic philosophy, the products of that philosophy tend to be unuseful in AGI.\n\nIn more detail, [2 Yudkowsky] lists these as some tenets or practices of what he sees as 'executable' philosophy:\n\n- It is acceptable to take reductionism, and computability of human thought, as a premise, and move on.\n - The presumption here is that the low-level mathematical unity of physics - the reducibility of complex physical objects into small, mathematically uniform physical parts, etcetera - has been better established than any philosophical argument which purports to contradict it.  Thus our question is "How can we reduce this?" or "Which reduction is correct?" rather than "Should this be reduced?"\n - Yudkowsky further suggests that things be [reduced to a mixture of causal facts and logical facts](http://lesswrong.com/lw/frz/mixed_reference_the_great_reductionist_project/).\n- Most "philosophical issues" worth pursuing can and should be rephrased as subquestions of some primary question about how to design an Artificial Intelligence, even as a matter of philosophy qua philosophy.\n - E.g. rather than the central question being "What is goodness made out of?", we begin with the central question "How do we design an AGI that computes goodness?"  This doesn't solve the question - to claim that would be greedy reductionism indeed - but it does *situate* the question in a pragmatic context.\n - This imports the discipline of programming into philosophy. 
 In particular, programmers learn that even if they have an inchoate sense of what a computer should do, when they actually try to write it out as code, they sometimes find that the code they have written fails (on visual inspection) to match up with their inchoate sense.  Many ideas that sound sensible as English sentences are revealed as confused as soon as we try to write them out as code.\n- Faced with any philosophically confusing issue, our task is to **identify what cognitive algorithm humans are executing which feels from the inside like this sort of confusion**, rather than, as in conventional philosophy, to try to clearly define terms and then weigh up all possible arguments for all 'positions'.\n - This means that our central question is guaranteed to have an answer.\n - E.g., if the standard philosophical question is "Are free will and determinism compatible?" then there is not guaranteed to be any coherent thing we mean by free will, but it is guaranteed that there is in fact some algorithm running in our brain that, when faced with this particular question, generates a confusing sense of a hard-to-pin-down conflict.\n - This is not to be confused with merely arguing that, e.g., "People evolved to feel like they had free will because that was useful in social situations in the ancestral environment."  That merely says, "I think evolution is the cause of our feeling that we have free will."  It still treats the feeling itself as a black box.  It doesn't say what algorithm is actually running, or walk through that algorithm to see exactly how the sense of confusion arises.  We want to know the *internals* of the feeling of free will, not argue that this black-box feeling has a reductionist-sounding cause.\n\nA final trope of executable philosophy is to not be intimidated by how long a problem has been left open.  "Ignorance exists in the mind, not in reality; uncertainty is in the map, not in the territory; if I don't know whether a coin landed heads or tails, that's a fact about me, not a fact about the coin."  There can't be any unresolvable confusions out there in reality.  There can't be any inherently confusing substances in the mathematically lawful, unified, low-level physical process we call the universe.  Any seemingly unresolvable or impossible question must represent a place where we are confused, not an actually impossible question out there in reality.  This doesn't mean we can quickly or immediately solve the problem, but it does mean that there's some way to wake up from the confusing dream.  Thus, as a matter of entrepreneurial execution, we're allowed to try to solve the problem rather than run away from it; trying to make an investment here may still be profitable.\n\nAlthough all confusing questions must be places where our own cognitive algorithms are running skew to reality, this, again, doesn't mean that we can immediately see and correct the skew; nor that it is compilable philosophy to insist in a very loud voice that a problem is solvable; nor that when a solution is presented we should immediately seize on it because the problem must be solvable and behold here is a solution.  
An important step in the method is to check whether there is any lingering sense of something that didn't get resolved; whether we really feel less confused; whether it seems like we could write out the code for an AI that would be confused in the same way we were; whether there is any sense of dissatisfaction; whether we have merely chopped off all the interesting parts of the problem.\n\nAn earlier guide to some of the same ideas was the [Reductionism Sequence](https://goo.gl/qHyXwr).\n\n[todo: tutorial:  finishable philosophy applied to 'free will'.  (don't forget to distinguish plausible wrong ways to do it on each step.  is there a good example besides free will that can serve as a homework problem?  maybe something actually unresolved like 'Why does anything exist -> why do some things exist more than others?' with Tegmark Level IV as a considered, but not accepted answer.)]',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '2',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-21 07:17:32',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky',
    'AlexeiAndreev'
  ],
  childIds: [],
  parentIds: [
    'ai_alignment'
  ],
  commentIds: [
    '3zs'
  ],
  questionIds: [],
  tagIds: [
    'philosophy'
  ],
  relatedIds: [
    'rescue_utility'
  ],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '13802',
      pageId: 'executable_philosophy',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2016-06-18 01:27:26',
      auxPageId: 'philosophy',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11861',
      pageId: 'executable_philosophy',
      userId: 'EliezerYudkowsky',
      edit: '10',
      type: 'newEdit',
      createdAt: '2016-06-06 23:06:32',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11859',
      pageId: 'executable_philosophy',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newAlias',
      createdAt: '2016-06-06 23:05:00',
      auxPageId: '',
      oldSettingsValue: 'ai_grade_philosophy',
      newSettingsValue: 'executable_philosophy'
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11860',
      pageId: 'executable_philosophy',
      userId: 'EliezerYudkowsky',
      edit: '9',
      type: 'newEdit',
      createdAt: '2016-06-06 23:05:00',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11577',
      pageId: 'executable_philosophy',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newUsedAsTag',
      createdAt: '2016-06-01 00:38:02',
      auxPageId: 'rescue_utility',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11539',
      pageId: 'executable_philosophy',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newEdit',
      createdAt: '2016-05-31 16:16:09',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11538',
      pageId: 'executable_philosophy',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2016-05-31 16:03:38',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9327',
      pageId: 'executable_philosophy',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2016-04-17 21:43:39',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9325',
      pageId: 'executable_philosophy',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newAlias',
      createdAt: '2016-04-17 21:35:12',
      auxPageId: '',
      oldSettingsValue: 'finishable_philosophy',
      newSettingsValue: 'ai_grade_philosophy'
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9326',
      pageId: 'executable_philosophy',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-04-17 21:35:12',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8594',
      pageId: 'executable_philosophy',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-03-15 20:19:16',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3792',
      pageId: 'executable_philosophy',
      userId: 'AlexeiAndreev',
      edit: '0',
      type: 'newAlias',
      createdAt: '2015-12-15 23:56:58',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3793',
      pageId: 'executable_philosophy',
      userId: 'AlexeiAndreev',
      edit: '3',
      type: 'newEdit',
      createdAt: '2015-12-15 23:56:58',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '366',
      pageId: 'executable_philosophy',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'ai_alignment',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1650',
      pageId: 'executable_philosophy',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-07-31 19:22:39',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1649',
      pageId: 'executable_philosophy',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-07-31 19:20:06',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}