{
  localUrl: '../page/correlated_coverage.html',
  arbitalUrl: 'https://arbital.com/p/correlated_coverage',
  rawJsonUrl: '../raw/1d6.json',
  likeableId: 'GlennField',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'correlated_coverage',
  edit: '4',
  editSummary: '',
  prevEdit: '3',
  currentEdit: '4',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Correlated coverage',
  clickbait: 'In which parts of AI alignment can we hope that getting many things right will mean the AI gets everything right?',
  textLength: '5128',
  alias: 'correlated_coverage',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-06-27 01:37:41',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-12-25 01:43:17',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '2',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '68',
  text: '"Correlated coverage" occurs within a domain when - going to some lengths to avoid words like "competent" or "correct" - an [2c advanced agent] handling some large number of domain problems the way we want, means that the AI is likely to handle all problems in the domain the way we want.              \n\nTo see the difference between correlated coverage and not-correlated coverage, consider humans as general [9c epistemologists], versus the [5l] problem.\n\nIn [5l], there's [1y Humean freedom] and [ multiple fixed points] when it comes to "Which outcomes rank higher than which other outcomes?"  All the terms in [5l Frankena's list of desiderata] have their own Humean freedom as to the details.  An agent can decide 1000 issues the way we want, that happen to shadow 12 terms in our complex values, so that covering the answers we want pins down 12 degrees of freedom; and then it turns out there's a 13th degree of freedom that isn't shadowed in the 1000 issues, because [6q later problems are not drawn from the same barrel as prior problems].  In which case the answer on the 1001st issue, that does turn around that 13th degree of freedom, isn't pinned down by correlation with the coverage of the first 1000 issues.  Coverage on the first 1000 queries may not correlate with coverage on the 1001st query.\n\nWhen it comes to [9c], there's something like a central idea:  Bayesian updating plus simplicity prior.  Although not every human can solve every epistemic question, there's nonetheless a sense in which humans, having been optimized to run across the savanna and figure out which plants were poisonous and which of their political opponents might be plotting against them, were later able to figure out General Relativity despite having not been explicitly selected-on for solving that problem.  If we include human [ subagents] into our notion of what problems, in general, human beings can be said to cover, then any question of fact where we can get a correct answer by building a superintelligence to solve it for us, is in some sense "covered" by humans as general epistemologists.\n\nHuman neurology is big and complicated and involves many different brain areas, and we had to go through a long process of bootstrapping our epistemology by discovering and choosing to adopt cultural rules about science.  Even so, the fact that there's something like a central tendency or core or simple principle of "Bayesian updating plus simplicity prior", means that when natural selection built brains to figure out who was plotting what, it accidentally built brains that could figure out General Relativity.\n\nWe can see other parts of [5s value alignment] in the same light - trying to find places, problems to tackle, where there may be correlated coverage:\n\nThe reason to work on ideas like [4l] is that we might hope that there's something like a *core idea* for "Try not to impact unnecessarily large amounts of stuff" in a way that there isn't a core idea for "Try not to do anything that decreases [55 value]."\n\nThe hope that [3ps anapartistic reasoning] could be a general solution to [45] says, "Maybe there's a core central idea that covers everything we mean by an agent B letting agent A correct it - like, if we really honestly wanted to let someone else correct us and not mess with their safety measures, it seems like there's a core thing for us to want that doesn't go through all the Humean degrees of freedom in humane value."  
This doesn't mean that there's a short program that encodes all of anapartistic reasoning, but it means there's more reason to hope that if you get 100 problems right, and then the next 1000 problems are gotten right without further tweaking, and it looks like there's a central core idea behind it and the core thing looks like anapartistic reasoning, maybe you're done.\n\n[2s1 Do What I Know I Mean] similarly incorporates a hope that, even if it's not *simple* and there isn't a *short program* that encodes it, there's something like a *core* or a *center* to the notion of "Agent X does what Agent Y asks while modeling Agent Y and trying not to do things whose consequences it isn't pretty sure Agent Y will be okay with" where we can get correlated coverage of the problem with *less* complexity than it would take to encode values directly.\n\nFrom the standpoint of the [1cv], understanding the notion of correlated coverage and its complementary problem of [48 patch resistance] is what leads to traversing the gradient from:\n\n- "Oh, we'll just hardwire the AI's utility function to tell it not to kill people."\n\nTo:\n\n- "Of course there'll be an extended period where we have to train the AI not to do various sorts of bad things."\n\nTo: \n\n- "*Bad impacts* isn't a compact category and the training data may not capture everything that could be a bad impact, especially if the AI gets smarter than the phase in which it was trained.  But maybe the notion of being *low impact in general* (rather than blacklisting particular bad impacts) has a simple-enough core to be passed on by training or specification in a way that generalizes across sharp capability gains."',
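\nAs a toy sketch of the unpinned-degree-of-freedom point above - using made-up weights and features that stand in for no particular proposal - here are two value functions over 13 binary features which agree on 1000 training issues that never vary the 13th feature, and then diverge on the 1001st issue that does:\n
```python
# Two "value functions" over 13 binary features.  They agree on every
# training issue because the training issues never vary the 13th feature,
# so getting 1000 answers right pins down only 12 degrees of freedom.
import random

random.seed(0)

WEIGHTS_A = [1, -2, 3, 1, -1, 2, -3, 1, 2, -1, 1, 2, 0]     # 13th weight: 0
WEIGHTS_B = [1, -2, 3, 1, -1, 2, -3, 1, 2, -1, 1, 2, -100]  # 13th weight: -100

def value(weights, issue):
    return sum(w * x for w, x in zip(weights, issue))

# 1000 training issues in which the 13th feature is always absent (0).
train = [[random.randint(0, 1) for _ in range(12)] + [0] for _ in range(1000)]
assert all(value(WEIGHTS_A, issue) == value(WEIGHTS_B, issue) for issue in train)

# The 1001st issue turns on the 13th degree of freedom.
novel = [1] * 13
print(value(WEIGHTS_A, novel), value(WEIGHTS_B, novel))  # 6 vs. -94
```
\nBoth functions were indistinguishable on every issue the training process happened to raise, so getting the first 1000 issues right said nothing about the 1001st.  Correlated coverage is the hope that some domains have a core regularity that rules this situation out.',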
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-25 01:28:55',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    'ai_alignment'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [
    'context_disaster'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14621',
      pageId: 'correlated_coverage',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-06-27 01:37:41',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4342',
      pageId: 'correlated_coverage',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2015-12-25 01:50:21',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4335',
      pageId: 'correlated_coverage',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-12-25 01:43:17',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4336',
      pageId: 'correlated_coverage',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-12-25 01:43:17',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4334',
      pageId: 'correlated_coverage',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2015-12-25 00:39:16',
      auxPageId: 'context_disaster',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4332',
      pageId: 'correlated_coverage',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2015-12-25 00:38:53',
      auxPageId: 'ai_alignment',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}