{
  localUrl: '../page/lobstacle.html',
  arbitalUrl: 'https://arbital.com/p/lobstacle',
  rawJsonUrl: '../raw/64y.json',
  likeableId: '0',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'lobstacle',
  edit: '2',
  editSummary: '',
  prevEdit: '1',
  currentEdit: '2',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Löbstacle',
  clickbait: '',
  textLength: '2558',
  alias: 'lobstacle',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'JohnSteidley',
  editCreatedAt: '2016-09-19 15:16:30',
  pageCreatorId: 'JaimeSevillaMolina',
  pageCreatedAt: '2016-09-17 12:37:43',
  seeDomainId: '0',
  editDomainId: 'arbital_featured_project',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '7',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '21',
  text: 'Imagine you have an artificial intelligence $D1$ that uses a logical [ knowledge base] and a certain system of [ logic] to guarantee that its actions always provably have the best outcome possible.\n\nIn other words, $D1$ has a bunch of [ axioms] and some [ deduction rules], and it can relate the logical terms to the real world. Before it takes an action, it first checks that, when the action is formalized as a logical formula, the action [ entails] a good outcome according to the inference rules it is using and the axioms in its knowledge base.\n\nSidestepping complexity issues%%note: Deducing logical consequences in reasonable logical systems is not cheap!%%, such a setup is very desirable, as opposed to a black-box artificial intelligence, because it allows us to better understand the AI's reasoning process and to design formal constraints that it has to satisfy.\n\nHowever, we also want the AI to become smarter and able to reach better conclusions. For that, it may need to add new axioms to its knowledge base, so that it can prove more things. Let's call the reasoner that results from adding those axioms $D2$.\n\nOn the other hand, it has to be careful not to add any axiom that is in [inconsistency contradiction] with the old axioms, because then the new [ knowledge base] would entail every logical statement!\n\nTherefore, before enacting the modification, $D1$ has to prove that $D2$ is [-consistent].\n\nBut since $D2$ is stronger than $D1$ (everything $D1$ proves is also proved by $D2$, because every axiom of $D1$ is also an axiom of $D2$), the consistency of $D2$ implies the consistency of $D1$, and if $D1$ extends minimal arithmetic then it can prove this implication.\n%%note: This is certainly true, but is it formalizable? It turns out it is, as long as $D1$ includes [3ft]. A brief, hand-wavy argument: since every axiom of $D1$ is an axiom of $D2$, every $D1$-proof is also a $D2$-proof, so $\\square_{D1}A\\rightarrow \\square_{D2}A$ for any $A$; taking $A$ to be a contradiction and contraposing shows that the consistency of $D2$ implies the consistency of $D1$.%%\n\nTherefore, $D1$ cannot show that $D2$ is consistent, because otherwise $D1$ would prove that it itself is consistent, and if $D1$ is actually consistent then this is forbidden by [godel_second_incompleteness_theorem]%%note: Or, [5hs equivalently], by [55w], which is where the catchy name Löbstacle comes from%%.\n\nThis means that our idealized AI will only ever build successors that are strictly weaker than itself, which is clearly undesirable. Extended discussions of the Löbstacle can be found [here](http://intelligence.org/files/TilingAgentsDraft.pdf) and [here](http://intelligence.org/files/ProblemsSelfReference.pdf).',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'JaimeSevillaMolina',
    'JohnSteidley'
  ],
  childIds: [],
  parentIds: [],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '3519',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '2',
      dislikeCount: '0',
      likeScore: '2',
      individualLikes: [],
      id: '19645',
      pageId: 'lobstacle',
      userId: 'JohnSteidley',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-09-19 15:16:30',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '3520',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '1',
      dislikeCount: '0',
      likeScore: '1',
      individualLikes: [],
      id: '19643',
      pageId: 'lobstacle',
      userId: 'JaimeSevillaMolina',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-09-17 12:37:43',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'false',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}