{
  localUrl: '../page/timemachine_efficiency_metaphor.html',
  arbitalUrl: 'https://arbital.com/p/timemachine_efficiency_metaphor',
  rawJsonUrl: '../raw/4gh.json',
  likeableId: '2805',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '1',
  dislikeCount: '0',
  likeScore: '1',
  individualLikes: [
    'NateSoares'
  ],
  pageId: 'timemachine_efficiency_metaphor',
  edit: '3',
  editSummary: '',
  prevEdit: '2',
  currentEdit: '3',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Time-machine metaphor for efficient agents',
  clickbait: 'Don't imagine a paperclip maximizer as a mind.  Imagine it as a time machine that always spits out the output leading to the greatest number of future paperclips.',
  textLength: '4156',
  alias: 'timemachine_efficiency_metaphor',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2017-07-27 15:44:15',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2016-06-16 20:40:12',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '64',
  text: 'The time-machine metaphor is an [4kf intuition pump] for [6s instrumentally efficient agents] - agents smart enough that they always get at least as much of their [109 utility] as any strategy we can imagine would get them.  Taking a [41l superintelligent] [10h paperclip maximizer] as our central example, rather than visualizing a brooding mind and spirit deciding which actions to output in order to achieve its paperclippy desires, we should consider visualizing a time machine hooked up to an output channel, which, for every possible output, peeks at how many paperclips will result in the future given that output, and which always yields the output leading to the greatest number of paperclips.  In the metaphor, we're not to imagine the time machine as having any sort of mind or intentions - it just repeatedly outputs the action that actually leads to the most paperclips.\n\nThe time-machine metaphor isn't a perfect way to visualize a [2rd bounded] superintelligence; the time machine is strictly more powerful.  E.g., the time machine can instantly unravel a 4096-bit encryption key because it 'knows' the bitstring that is the answer.  So the point of this metaphor is not as an intuition pump for capabilities, but rather as an intuition pump for overcoming [-anthropomorphism] in reasoning about a paperclip maximizer's policies; or as an intuition pump for understanding the sense-update-predict-act agent architecture.\n\nThat is:  If you imagine a superintelligent paperclip maximizer as a mind, you might [43g imagine persuading it] that, really, [3tm it can get more paperclips] by trading with humans instead of turning them into paperclips.  If you imagine a time machine, which isn't a mind, you're less likely to imagine persuading it, and instead ask more honestly the question, "What is the maximum number of paperclips the universe can be turned into, and how would one go about doing that?"  Instead of imagining ourselves arguing with Clippy about how humans really are very productive, we ask the question from the time machine's standpoint - which universe actually ends up with more paperclips in it?\n\nThe relevant fact about instrumentally efficient agents is that they are, from our perspective, *unbiased* (in the [statistical_bias statistical sense of bias]) in their policies, relative to any kind of bias we can detect.\n\nAs an example, consider a 2015-era chess engine, contrasted to a 1985-era chess engine.  The 1985-era chess engine may lose to a moderately strong human amateur, so it's not [6s relatively efficient].  It may have humanly-perceivable quirks such as "It likes to move its queen", that is, "I detect that it moves its queen more often than would be strictly required to win the game."  As we go from 1985 to 2015, the machine chessplayer improves beyond the point where we, personally, can detect any flaws in it.  You should expect the reason why the 2015 chess engine makes any particular move to be only understandable to *you* (without machine assistance) as "because that move had a great probability of leading to a winning position later", and not in any other psychological terms like "it likes to move its pawn".\n\nFrom your perspective, the 2015 chess engine will only move its pawn on occasions where that probably leads to winning the game, and does not move the pawn on occasions where it leads to losing the game. \nIf you see the 2015 chess engine make a move you didn't think was high in winningness, you conclude that it has seen some winningness you didn't know about and is about to do exceptionally well, or you conclude that the move you favored led into futures surprisingly low in winningness, and not that the chess engine is favoring some unwinning move.  We can no longer personally and without machine assistance detect any systematic departure from "It makes the chess move that leads to winning the game" in the direction of "It favors some other class of chess move for reasons apart from its winningness."\n\nThis is what makes the time-machine metaphor a good intuition pump for an instrumentally efficient agent's choice of policies (though not a good intuition pump for the magnitude of its capabilities).',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'true',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    'efficiency'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22878',
      pageId: 'timemachine_efficiency_metaphor',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteTeacher',
      createdAt: '2017-11-26 22:36:41',
      auxPageId: 'instrumental_goals_equally_tractable',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22876',
      pageId: 'timemachine_efficiency_metaphor',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTeacher',
      createdAt: '2017-11-26 22:36:35',
      auxPageId: 'instrumental_goals_equally_tractable',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22723',
      pageId: 'timemachine_efficiency_metaphor',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2017-07-27 15:44:15',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22722',
      pageId: 'timemachine_efficiency_metaphor',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2017-07-27 15:30:34',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '13383',
      pageId: 'timemachine_efficiency_metaphor',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-06-16 20:40:12',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '13379',
      pageId: 'timemachine_efficiency_metaphor',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newParent',
      createdAt: '2016-06-16 20:21:51',
      auxPageId: 'efficiency',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}