{
  localUrl: '../page/conservative_concept.html',
  arbitalUrl: 'https://arbital.com/p/conservative_concept',
  rawJsonUrl: '../raw/2qp.json',
  likeableId: '1654',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '3',
  dislikeCount: '0',
  likeScore: '3',
  individualLikes: [
    'PatrickLaVictoir',
    'NathanRosquist',
    'EricRogstad'
  ],
  pageId: 'conservative_concept',
  edit: '7',
  editSummary: '',
  prevEdit: '6',
  currentEdit: '7',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Conservative concept boundary',
  clickbait: 'Given N example burritos, draw a boundary around what is a 'burrito' that is relatively simple and allows as few positive instances as possible.  Helps make sure the next thing generated is a burrito.',
  textLength: '11981',
  alias: 'conservative_concept',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-03-22 19:26:05',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2016-03-19 23:54:44',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '349',
  text: '[summary:  A conservative concept boundary is a boundary which is (a) relatively simple and (b) classifies as few things as possible as positive instances of the category.  If we see that 3, 5, 13, and 19 are positive instances of a category and 4, 14, and 28 are negative instances, then a *simple* boundary which separates these instances is "All odd numbers" and a *simple and conservative* boundary is "All odd numbers between 3 and 19" or "All primes between 3 and 19".\n\nIf we imagine presenting an AI with smiling faces as instances of a goal concept during training, then a conservative concept boundary might lead the AI to produce only smiles attached to a human head, rather than tiny molecular smileyfaces (not that this necessarily solves everything).  If we imagine presenting the AI with 20 positive instances of a burrito, then a conservative boundary might lead the AI to produce a 21st burrito very similar to those, rather than needing to be explicitly presented with negative instances of burritos that force the simplest boundary around the goal concept to exclude poisonous burritos.]\n\nThe problem of conservatism is to draw a boundary around positive instances of a concept which is not only *simple* but also *classifies as few instances as possible as positive.*\n\n# Introduction / basic idea / motivation\n\nSuppose I have a numerical concept in mind, and you query me on the following numbers to determine whether they're instances of the concept, and I reply as follows:\n\n- 3:  Yes\n- 4:  No\n- 5:  Yes\n- 13:  Yes\n- 14:  No\n- 19:  Yes\n- 28:  No\n\nA *simple* category which covers this training set is "All odd numbers."\n\nA *simple and conservative* category which covers this training set is "All odd numbers between 3 and 19."\n\nA slightly more complicated, and even more conservative, category is "All prime numbers between 3 and 19."\n\nA conservative but not simple category is "Only 3, 5, 13, and 19 are positive instances of this category."\n\nOne of the (very) early proposals for value alignment was to train an AI on smiling faces as examples of the sort of outcome the AI ought to achieve.  Slightly steelmanning the proposal so that it doesn't just produce *images* of smiling faces as the AI's sensory data, we can imagine that the AI is trying to learn a boundary over the *causes of* its sensory data that distinguishes smiling faces within the environment.\n\nThe classic example of what might go wrong with this alignment protocol is that all matter within reach might end up turned into tiny molecular smiley faces, since heavy optimization pressure would pick out an [2w extreme edge] of the simple category that could be fulfilled as maximally as possible, and it's possible to make many more tiny molecular smileyfaces than complete smiling faces.\n\nThat is:  The AI would by default learn the simplest concept that distinguished smiling faces from non-smileyfaces within its training cases.  Given [6q a wider set of options than existed in the training regime], this simple concept might also classify as a 'smiling face' something that had the properties singled out by the concept, but was unlike the training cases with respect to other properties.  This is the metaphorical equivalent of learning the concept "All odd numbers", and then positively classifying cases like -1 or 9^999 that are unlike 3 and 19 in other regards, since they're still odd.
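\n\nTo make the contrast concrete, here is a small Python sketch of the number example above (purely illustrative: the two rules are just the "All odd numbers" and "All odd numbers between 3 and 19" boundaries named earlier, and nothing here is a proposed learning algorithm):\n\n    positives = [3, 5, 13, 19]\n    negatives = [4, 14, 28]\n\n    def simple_rule(n):\n        # "All odd numbers" -- the simplest boundary consistent with the data.\n        return n % 2 == 1\n\n    def conservative_rule(n):\n        # "All odd numbers between 3 and 19" -- still simple, but admits\n        # far fewer positive instances.\n        return n % 2 == 1 and min(positives) <= n <= max(positives)\n\n    # Both rules fit the training data...\n    assert all(simple_rule(n) for n in positives) and not any(simple_rule(n) for n in negatives)\n    assert all(conservative_rule(n) for n in positives) and not any(conservative_rule(n) for n in negatives)\n\n    # ...but they generalize very differently to cases unlike the training data.\n    print(simple_rule(-1), simple_rule(9**999))              # True True\n    print(conservative_rule(-1), conservative_rule(9**999))  # False False\n\nUnder heavy optimization pressure, the difference between these two generalization behaviors is the difference between every odd number whatsoever and a short interval around the training cases.\n\n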
On the other hand, suppose the AI had been told to learn a simple *and conservative* concept over its training data.  Then the corresponding goal might demand, e.g., only smiles that came attached to actual human heads experiencing pleasure.  If the AI were moreover a conservative *planner*, it might try to produce smiles only through causal chains that resembled existing causal generators of smiles, such as only administering existing drugs like heroin and not inventing any new drugs, and only breeding humans through pregnancy rather than synthesizing living heads using nanotechnology.\n\nYou couldn't call this a solution to the value alignment problem, but it would - arguendo - get significantly *closer* to the [6h intended goal] than tiny molecular smileyfaces.  Thus, conservatism might serve as one component among others for aligning a [6w Task AGI].\n\nIntuitively speaking:  A genie is hardly rendered *safe* if it tries to fulfill your wish using 'normal' instances of the stated goal that were generated in relatively more 'normal' ways, but it's at least *closer to being safe.*  Conservative concepts and conservative planning might be one attribute among others of a safe genie.\n\n# Burrito problem\n\nThe *burrito problem* is to have a Task AGI make a burrito that is actually a burrito, and not just something that looks like a burrito, and that is not poisonous and is actually safe for humans to eat.\n\nConservatism is one possible approach to the burrito problem:  Show the AGI five burritos and five non-burritos.  Then, don't have the AGI learn the *simplest* concept that distinguishes burritos from non-burritos and then create something that is *maximally* a burrito under this concept.  Instead, we'd like the AGI to learn a *simple and narrow concept* that classifies these five things as burritos according to some simple-ish rule which labels as few objects as possible as burritos.  But not the rule "Only these five exact molecular configurations count as burritos", because that rule would not be simple.\n\nThe concept must still be broad enough to permit the construction of a sixth burrito that is not molecularly identical to any of the first five.  But not so broad that the concept includes burritos containing botulinum toxin (because, hey, anything made out of mostly carbon-hydrogen-oxygen-nitrogen ought to be fine, and the five negative examples didn't include anything with botulinum toxin).
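\n\nAs a toy illustration of this requirement (the ingredient-set representation and the particular rule are simplifying assumptions made up for this sketch, not a claim about how an AGI would actually represent burritos):\n\n    # Each burrito candidate is represented as a set of ingredient labels.\n    training_burritos = [\n        {"tortilla", "beans", "cheese"},\n        {"tortilla", "beans", "rice"},\n        {"tortilla", "rice", "cheese", "salsa"},\n        {"tortilla", "beans", "salsa"},\n        {"tortilla", "rice", "cheese"},\n    ]\n    seen_ingredients = set().union(*training_burritos)\n\n    def simple_rule(candidate):\n        # A simple but non-conservative boundary: anything wrapped in a tortilla.\n        return "tortilla" in candidate\n\n    def conservative_rule(candidate):\n        # A toy conservative boundary: a tortilla-containing combination of\n        # ingredients that all appeared in the positive training examples.\n        return "tortilla" in candidate and candidate <= seen_ingredients\n\n    sixth_burrito = {"tortilla", "beans", "rice", "salsa"}          # new, but made of familiar parts\n    poisoned_burrito = {"tortilla", "beans", "cheese", "botulinum"}\n\n    print(conservative_rule(sixth_burrito))     # True: broad enough to admit a new burrito\n    print(simple_rule(poisoned_burrito))        # True: the simple boundary lets the poison through\n    print(conservative_rule(poisoned_burrito))  # False: excluded without any poisoned negative example\n\nThe toy conservative rule rejects the poisoned burrito without ever being shown a poisoned negative example, which is the kind of workload shift described next.\n\n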
The hope is that via conservatism we can avoid needing to think of every possible way that our training data might not properly stabilize the 'simplest explanation' along every dimension of potentially fatal variance.  If we're trying to draw only *simple* boundaries that separate the positive and negative cases, there's no reason for the AI to add on a "cannot be poisonous" codicil to the rule unless the AI has seen poisoned burritos labeled as negative cases, so that the slightly more complicated rule "but not poisonous" needs to be added to the boundary in order to separate out cases that would otherwise be classified positive.  But then maybe even if we show the AGI one burrito poisoned with botulinum, it doesn't learn to avoid burritos poisoned with ricin, and even if we show it botulinum and ricin, it doesn't learn to avoid burritos poisoned with the radioactive iodine-131 isotope.  Rather than our needing to think of what the concept boundary needs to look like and including enough negative cases to force the *simplest* boundary to exclude all the unsafe burritos, the hope is that via conservatism we can shift some of the workload to showing the AI *positive* examples which happen *not* to be poisonous or have any other problems.\n\n# Conservatism over the causes of sensed training cases\n\nConservatism in AGI cases seems like it would need to be interpreted over the causes of sensory data, rather than the sensory data itself.  We're not looking for a conservative concept about *which images of a burrito* would be classified as positive; we want a concept over which *environmental burritos* would be classified as positive.  Two burrito candidates can cause identical images while differing in their poisonousness, so we want to draw our conservative concept boundary around (our model of) the causes of past sensory events in our training cases, not draw a boundary around the sensory events themselves.\n\n# Conservative planning\n\nA conservative *strategy* or conservative *plan* would *ceteris paribus* prefer to construct burritos by buying ingredients from the store and cooking them, rather than building nanomachinery that constructs a burrito, because this would be more characteristic of how burritos are usually constructed, or more similar to the elements of previously approved plans.  Again, this seems like it might be less likely to generate a poisonous burrito.\n\nAnother paradigmatic example of conservatism might be to, e.g., show the AI some human players running around inside some game engine, and then give the AI control of an agent whose goal is, e.g., to move a box to the end of the room.  If the agent is given the ability to fly, but the AI generates a plan in which the box-moving agent only moves around on the ground because that's what the training examples did, then this is a conservative plan.\n\nThe point of this isn't to cripple the AI's abilities; the point is that if e.g. your [2pf low impact measure] has a loophole and the AI generates a plan to turn all matter within reach into pink-painted cars, some steps of this plan like "disassemble stars to make more cars and paint" are likely to be non-conservative and hence not happen automatically.\n\n## Flagging non-conservative plan steps\n\nIf a non-conservative plan seems better along other important dimensions - for example, there is no other plan that has an equally low impact and equally few side effects compared to just synthesizing the burrito using a nanomachine - then we can also imagine that the critical step might be flagged as non-conservative and presented to the user for checking.\n\nThat is, on 'conservative' planning, we're interested in both the problem "generate a plan and then flag and report non-conservative steps" and the problem "try to generate a plan that has few or no non-conservative steps".
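\n\nA minimal sketch of these two problems side by side (the step names, and treating "resembles a previously approved step" as exact matching, are placeholder assumptions; a real system would need some much richer notion of similarity):\n\n    # Steps that appeared in previously approved or demonstrated plans (hypothetical).\n    approved_steps = {"buy_ingredients", "cook_filling", "assemble_burrito", "serve"}\n\n    def flag_nonconservative(plan):\n        # Problem 1: flag and report steps unlike anything previously approved.\n        return [(i, step) for i, step in enumerate(plan) if step not in approved_steps]\n\n    def nonconservative_count(plan):\n        # Problem 2: a crude plan-level score for preferring plans with fewer flagged steps.\n        return len(flag_nonconservative(plan))\n\n    plan_a = ["buy_ingredients", "cook_filling", "assemble_burrito", "serve"]\n    plan_b = ["synthesize_nanofactory", "assemble_burrito", "serve"]\n\n    print(flag_nonconservative(plan_b))   # [(0, 'synthesize_nanofactory')] -> refer to the user\n    print(min([plan_a, plan_b], key=nonconservative_count))  # prefers the conventional plan\n\n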
# Role in other safety problems\n\nConservatism and conservative planning seem like they might directly tackle some standard concerns head-on and in a sufficiently basic way to avoid loopholes, and might also be subject to those concerns.  E.g.:\n\n- [2w] - if in full generality we don't go to the edge of the graph but try to stay in the center of what's already been positively classified, maybe we can avoid this.\n- [47] - if we stick to things very similar to already-positively-classified instances, we won't automatically go into the unimagined parts of the graph.\n- [6q] - a sufficiently conservative optimizer might go on using options similar to previously whitelisted ones even if large new sections of planning space opened up.\n\nHowever, to the extent we rely on conservatism to prevent any of these things from happening, it's a mission-critical component that itself has to be [2l advanced-safe] with no loopholes.  If a 'conservatism' constraint is being applied to [2c very powerful optimization pressures], we need to worry about this [42 seeking out any loophole] in what is 'conservative'.  It might be that the central notion of 'conservatism' is simple enough to have no loopholes.  But it's also possible that even a simple-ish and conservative concept would still include some dangerous instances, if there's enough optimization pressure seeking out a maximal-under-some-criterion instance within everything that's been classified conservatively.\n\nTwo possible meta-approaches to making conservatism even safer:\n\n- Use conservatism to flag non-conservative steps in plans, or expected non-conservative instances of goal achievements, and refer these for user checking before taking action.  (Rather than automatically generating a plan containing only 'conservative' steps.)  This would have the [2qq standard problems with user checking].\n- Have a definition of conservatism, relative to the AI's current world-model and conceptual language, which would automatically catch as 'exceptional' (hence not conservative) anything which had the weird property of being the only first-order-conservative instance of a concept that had some other special property being sought out by the optimization pressure.  This might involve weird reflective problems, such as any planned event being special in virtue of the AI having planned it.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '3',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    'task_agi'
  ],
  commentIds: [
    '2rt',
    '7j9',
    '89c'
  ],
  questionIds: [],
  tagIds: [
    'taskagi_open_problems',
    'value_alignment_open_problem'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8910',
      pageId: 'conservative_concept',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2016-03-22 19:26:05',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8831',
      pageId: 'conservative_concept',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteTag',
      createdAt: '2016-03-20 01:45:48',
      auxPageId: 'work_in_progress_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8829',
      pageId: 'conservative_concept',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2016-03-20 01:45:23',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8827',
      pageId: 'conservative_concept',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-03-20 01:41:39',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8819',
      pageId: 'conservative_concept',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-03-20 01:04:47',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8818',
      pageId: 'conservative_concept',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-03-20 00:58:17',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8817',
      pageId: 'conservative_concept',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-03-20 00:57:48',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8816',
      pageId: 'conservative_concept',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-03-19 23:54:44',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8815',
      pageId: 'conservative_concept',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2016-03-19 23:54:41',
      auxPageId: 'work_in_progress_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8813',
      pageId: 'conservative_concept',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2016-03-19 23:53:29',
      auxPageId: 'taskagi_open_problems',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8811',
      pageId: 'conservative_concept',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2016-03-19 23:53:26',
      auxPageId: 'value_alignment_open_problem',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8809',
      pageId: 'conservative_concept',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-03-19 23:53:22',
      auxPageId: 'task_agi',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}