{
  localUrl: '../page/daemons.html',
  arbitalUrl: 'https://arbital.com/p/daemons',
  rawJsonUrl: '../raw/2rc.json',
  likeableId: '1676',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '5',
  dislikeCount: '0',
  likeScore: '5',
  individualLikes: [
    'AndrewMcKnight',
    'EricBruylant',
    'ConnorFlexman3',
    'DanRyan',
    'PaulTorek'
  ],
  pageId: 'daemons',
  edit: '3',
  editSummary: '',
  prevEdit: '2',
  currentEdit: '3',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Optimization daemons',
  clickbait: 'When you optimize something so hard that it crystallizes into an optimizer, like the way natural selection optimized apes so hard they turned into human-level intelligences',
  textLength: '5261',
  alias: 'daemons',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-03-28 00:54:00',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2016-03-22 01:41:58',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '1',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '1777',
  text: '[summary:  We can see natural selection spitting out humans as a special case of "If you dump enough computing power into optimizing a Turing-general policy using a consequentialist fitness criterion, it spits out a new optimizer that probably isn't perfectly aligned with the fitness criterion."  After repeatedly optimizing brains to reproduce well, the brains in one case turned into general optimizers in their own right, more powerful ones than the original optimizer, with new goals not aligned in general to replicating DNA.\n\nThis could potentially happen anywhere inside an AGI subprocess where you were optimizing inside a sufficiently general solution space and you applied enough optimization power - you could get a solution that did its own internal optimization, possibly in a way smarter than the original optimizer and misaligned to its original goals.\n\nWhen heavy optimization pressure on a system crystallizes it into an optimizer - especially one that's powerful, or more powerful than the previous system, or misaligned with the previous system - we could term the crystallized optimizer a "daemon" of the previous system.  Thus, under this terminology, humans would be daemons of natural selection.]\n\nIf you subject a dynamic system to a *large* amount of optimization pressure, it can turn into an optimizer or even an intelligence.  The classic example would be how natural selection, in the course of extensively optimizing DNA to construct organisms that replicated the DNA, in one case pushed hard enough that the DNA came to specify a cognitive system capable of doing its own consequentialist optimization.  Initially, these cognitive optimizers pursued goals that correlated well with natural selection's optimization target of reproductive fitness, which is how these crystallized optimizers had originally come to be selected into existence.  However, further optimization of these 'brain' protein chunks caused them to begin to create and share cognitive content among themselves, after which such rapid capability gain occurred that a [6q context change] took place and the brains' pursuit of their internal goals no longer correlated reliably with DNA replication.\n\nAs much as this was, from a *human* standpoint, a wonderful thing to have happened, it wasn't such a great thing from the standpoint of inclusive genetic fitness of DNA or just having stable, reliable, well-understood optimization going on.  In the case of AGIs deploying powerful internal and external optimization pressures, we'd very much like to not have that optimization deliberately or accidentally crystallize into new modes of optimization, especially if this breaks goal alignment with the previous system or breaks other safety properties.  (You might need to stare at the [1y Orthogonality Thesis] until it becomes intuitive that, even though crystallizing daemons from natural selection produced creatures that were more humane than natural selection, this doesn't mean that crystallization from an AGI's optimization would have a significant probability of producing something humane.)\n\nWhen heavy optimization pressure on a system crystallizes it into an optimizer - especially one that's powerful, or more powerful than the previous system, or misaligned with the previous system - we could term the crystallized optimizer a "daemon" of the previous system.  Thus, under this terminology, humans would be daemons of natural selection.  If an AGI, after heavily optimizing some internal system, was suddenly taken over by an erupting daemon that cognitively wanted to maximize something that had previously correlated with the amount of available RAM, we would say this was a crystallized daemon of whatever kind of optimization that AGI was applying to its internal system.\n\nThis presents an AGI safety challenge.  In particular, we'd want at least one of the following things to be true *anywhere* that *any kind* of optimization pressure was being applied:\n\n- The optimization pressure is (knowably and reliably) too weak to create daemons.  (Seemingly true of all current systems, modulo the 'knowably' part.)\n- The subject of optimization is not Turing-complete or otherwise programmatically general and the restricted solution space cannot *possibly* contain daemons no matter *how much* optimization pressure is applied to it.  (3-layer non-recurrent neural networks containing less than a trillion neurons will probably not erupt daemons no matter how hard you optimize them.)\n- The AI has a sufficient grasp on the concept of optimization and the problem of daemons to reliably avoid creating mechanisms outside the AI that do cognitive reasoning.  (Note that if some predicate is added to exclude a particular type of daemon, this potentially runs into the [42 nearest unblocked neighbor] problem.)\n- The AI only creates cognitive subagents which share all the goals and safety properties of the original agent.  E.g. if the original AI is [2pf low-impact], [2r8 softly optimizing], [ abortable], and targeted on [6w performing Tasks], it only creates cognitive systems that are low-impact, don't optimize too hard in conjunction with the original AI, are abortable by the same shutdown button, and are targeted on performing the current task.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '3',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    'advanced_safety'
  ],
  commentIds: [
    '9d9'
  ],
  questionIds: [],
  tagIds: [
    'work_in_progress_meta_tag'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9146',
      pageId: 'daemons',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-03-28 00:54:00',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8892',
      pageId: 'daemons',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-03-22 02:31:14',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8891',
      pageId: 'daemons',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newTag',
      createdAt: '2016-03-22 02:27:34',
      auxPageId: 'work_in_progress_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8889',
      pageId: 'daemons',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteTag',
      createdAt: '2016-03-22 02:27:30',
      auxPageId: 'stub_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8881',
      pageId: 'daemons',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-03-22 01:41:58',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8880',
      pageId: 'daemons',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2016-03-22 01:35:38',
      auxPageId: 'stub_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8878',
      pageId: 'daemons',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-03-22 01:35:28',
      auxPageId: 'advanced_safety',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}