{
  localUrl: '../page/relevant_limited_AI.html',
  arbitalUrl: 'https://arbital.com/p/relevant_limited_AI',
  rawJsonUrl: '../raw/2y.json',
  likeableId: '1855',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'relevant_limited_AI',
  edit: '7',
  editSummary: '',
  prevEdit: '6',
  currentEdit: '7',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Relevant limited AI',
  clickbait: 'Can we have a limited AI that's nonetheless relevant?',
  textLength: '3023',
  alias: 'relevant_limited_AI',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'RobBensinger2',
  editCreatedAt: '2016-09-25 00:04:01',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-03-27 01:26:01',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '4',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '115',
  text: 'It is an open problem to propose a [5b3 limited AI] that would be [2s relevant] to the [2z value achievement dilemma] - an agent cognitively constrained along some dimensions that render it much safer, but still able to perform some task useful enough to prevent catastrophe.\n\n### Basic difficulty\n\nConsider an [6x Oracle AI] that is so constrained as to be allowed only to output proofs, in HOL, of input theorems; these proofs are then checked by a simple, secure-seeming verifier running in a sandbox whose exact code is unknown to the Oracle.  The verifier outputs 1 if the proof is valid and 0 otherwise, then discards the proof data.  Suppose also that the Oracle is in a shielded box, etcetera.\n\nIt's possible that this Provability Oracle has been so constrained that it is [2j cognitively containable] (it has no classes of options we don't know about).  If the verifier is unhackable, it gives us trustworthy knowledge that a theorem is provable.  But this limited system is not obviously useful in a way that enables humanity to extricate itself from its larger dilemma.  Nobody has yet stated a plan that could save the world *if only* we had a superhuman capacity to detect which theorems were provable in Zermelo-Fraenkel set theory.\n\nSaying "The solution is for humanity to only build Provability Oracles!" does not resolve the [2z value achievement dilemma], because humanity does not have the coordination ability to 'choose' to develop only one kind of AI over the indefinite future, and the Provability Oracle has no obvious use that would prevent non-Oracle AIs from ever being developed.  Thus our larger value achievement dilemma would remain unsolved.  It's not obvious how the Provability Oracle would even constitute significant strategic progress.\n\n### Open problem\n\nDescribe a cognitive task or real-world task for an AI to carry out, *one that makes great progress upon the [2z value achievement dilemma] if executed correctly*, and that can be done with a *limited* AI that:\n\n1.  Has a real-world solution state that is exceptionally easy to pinpoint using a utility function, thereby avoiding some of [2w edge instantiation], [47 unforeseen maximums], [6q context change], [ programmer maximization], and the other pitfalls of [2l advanced safety], if there is otherwise a trustworthy solution for [ low-impact AI]; or\n2.  Seems exceptionally implementable using a [1fy known-algorithm non-self-improving agent], thereby averting problems of stable self-modification, if there is otherwise a trustworthy solution for a known-algorithm non-self-improving agent; or\n3.  Constrains the agent's option space so drastically that the strategy space is no longer rich (and the agent is hence containable), while that space still contains a trustworthy, otherwise unfindable solution to some challenge that resolves the larger dilemma.\n\n[todo: ### Additional difficulties]\n\n[todo: (Fill in this section later; all the things that go wrong when somebody eagerly says something along the lines of "We just need AI that does X!")]',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-26 01:58:42',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky',
    'AlexeiAndreev',
    'RobBensinger2'
  ],
  childIds: [],
  parentIds: [
    'ai_alignment'
  ],
  commentIds: [
    '7m'
  ],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '3542',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '1',
      dislikeCount: '0',
      likeScore: '1',
      individualLikes: [],
      id: '19720',
      pageId: 'relevant_limited_AI',
      userId: 'RobBensinger2',
      edit: '7',
      type: 'newEdit',
      createdAt: '2016-09-25 00:04:01',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3835',
      pageId: 'relevant_limited_AI',
      userId: 'AlexeiAndreev',
      edit: '0',
      type: 'newAlias',
      createdAt: '2015-12-16 02:45:19',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3836',
      pageId: 'relevant_limited_AI',
      userId: 'AlexeiAndreev',
      edit: '6',
      type: 'newEdit',
      createdAt: '2015-12-16 02:45:19',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '365',
      pageId: 'relevant_limited_AI',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'ai_alignment',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1628',
      pageId: 'relevant_limited_AI',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2015-03-27 01:55:00',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1627',
      pageId: 'relevant_limited_AI',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2015-03-27 01:54:14',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1626',
      pageId: 'relevant_limited_AI',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2015-03-27 01:50:13',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1625',
      pageId: 'relevant_limited_AI',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-03-27 01:38:53',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}