{
  localUrl: '../page/inductive_ambiguity.html',
  arbitalUrl: 'https://arbital.com/p/inductive_ambiguity',
  rawJsonUrl: '../raw/4w.json',
  likeableId: '2319',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '2',
  dislikeCount: '0',
  likeScore: '2',
  individualLikes: [
    'AndrewCritch',
    'AnnaSalamon'
  ],
  pageId: 'inductive_ambiguity',
  edit: '6',
  editSummary: '',
  prevEdit: '5',
  currentEdit: '6',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Identifying ambiguous inductions',
  clickbait: 'What do a "red strawberry", a "red apple", and a "red cherry" have in common that a "yellow carrot" doesn't?  Are they "red fruits" or "red objects"?',
  textLength: '5496',
  alias: 'inductive_ambiguity',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-03-20 03:44:19',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-04-17 01:47:59',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '166',
  text: '[summary:  An 'inductive ambiguity' is when there's more than one simple concept that fits the data, even if some of those concepts are much simpler than others, and you want to figure out *which* simple concept was intended.   Suppose you're given images that show camouflaged enemy tanks and empty forests, but it so happens that the tank-containing pictures were taken on sunny days and the forest pictures were taken on cloudy days.  Given the training data, the key concept the user intended might be "camouflaged tanks", or "sunny days", or "pixel fields with brighter illumination levels".  The last concept is by far the simplest, but rather than just assume the simplest explanation is correct with most of the probability mass, we want the algorithm (or AGI) to detect that there's more than one simple-ish boundary that might separate the data, and [2qq check with the user] about *which* boundary was intended to be learned.]\n\nOne of the old fables in machine learning is the story of the "tank classifier" - a neural network that had supposedly been trained to detect enemy tanks hiding in a forest.  It turned out that all the photos of enemy tanks had been taken on sunny days and all the photos of the same field without the tanks had been taken on cloudy days, meaning that the neural net had really just trained itself to recognize the difference between sunny and cloudy days (or just the difference between bright and dim pictures).  ([Source](http://lesswrong.com/lw/7qz/machine_learning_and_unintended_consequences/#d6o1).)\n\nWe could view this problem as follows:  A human looking at the labeled data might have seen several concepts that someone might be trying to point at - tanks vs. no tanks, cloudy vs. sunny days, or bright vs. dim pictures.  A human might then ask, "Which of these possible categories did you mean?" and describe the difference using words; or, if it was easier for them to generate pictures than to talk, generate new pictures that distinguished among the possible concepts that could have been meant.  Since learning a simple boundary that separates positive from negative instances in the training data is a form of induction, we could call this problem noticing "inductive ambiguities" or "ambiguous inductions".\n\nThis problem bears some resemblance to numerous setups in computer science where we can query an oracle about how to classify instances and we want to learn the concept boundary using a minimum number of instances.  However, identifying an "inductive ambiguity" doesn't seem to be exactly the same problem, or at least, it's not obviously the same problem.  Suppose we consider the tank-classifier problem.  Distinguishing levels of illumination in the picture is a very simple concept, so it would probably be the first one learned; then, treating the problem in classical oracle-query terms, we might imagine the AI presenting the user with various random pixel fields at intermediate levels of illumination.  The user, not having any idea what's going on, classifies these intermediate levels of illumination as 'not tanks', and so the AI soon learns that only quite sunny levels of illumination are required.\n\nPerhaps what we want is less like "figure out exactly where the concept boundary lies by querying the edge cases to the oracle, assuming our basic idea about the boundary is correct" and more like "notice when there's more than one plausible idea that describes the boundary" or "figure out if the user could have been trying to communicate more than one plausible idea using the training dataset".\n\n# Possible approaches\n\nSome possibly relevant approaches that might feed into the notion of "identifying inductive ambiguities":\n\n- [2qp Conservatism].  Can we draw a much narrower, but somewhat more complicated, boundary around the training data?\n- Can we get a concept that more strongly predicts or more tightly predicts the training cases we saw?  (Closely related to conservatism - if we suppose there's a generator for the training cases, then a more conservative generator concentrates more probability density into the training cases we happened to see.)\n- Can we detect commonalities in the positive training cases that aren't already present in the concept we've learned?\n   - This might be a good fit for something like a [generative adversarial](http://arxiv.org/abs/1406.2661) approach, where we generate random instances of the concept we learned, then ask if we can detect the difference between those random instances and the actual positively labeled training cases.\n- Is there a way to blank out the concept we've already learned so that it doesn't just get learned again, and ask if there's a different concept that's learnable instead?  That is, whatever algorithm we're using, is there a good way to tell it "Don't learn *this* concept, now try to learn" and see if it can learn something substantially different?\n- Something something Gricean implication.\n\n# Relevance in value alignment\n\nSince inductive ambiguities are meant to be referred to the user for resolution rather than resolved automatically (the whole point is that the necessary data for an automatic resolution isn't there), they're instances of "[2qq user queries]" and all [2qq standard worries about user queries] would apply.\n\nThe hope about a good algorithm for identifying inductive ambiguities is that it would help catch [2w edge instantiations] and [47 unforeseen maximums], and maybe just simple errors of communication.\n',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-21 21:59:24',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky',
    'AlexeiAndreev'
  ],
  childIds: [],
  parentIds: [
    'ai_alignment'
  ],
  commentIds: [
    '6n',
    '7j'
  ],
  questionIds: [],
  tagIds: [
    'taskagi_open_problems',
    'value_alignment_open_problem',
    'work_in_progress_meta_tag'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8854',
      pageId: 'inductive_ambiguity',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2016-03-20 03:44:19',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8848',
      pageId: 'inductive_ambiguity',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newAlias',
      createdAt: '2016-03-20 03:34:12',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8849',
      pageId: 'inductive_ambiguity',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-03-20 03:34:12',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8846',
      pageId: 'inductive_ambiguity',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newTag',
      createdAt: '2016-03-20 02:37:13',
      auxPageId: 'taskagi_open_problems',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3824',
      pageId: 'inductive_ambiguity',
      userId: 'AlexeiAndreev',
      edit: '4',
      type: 'newEdit',
      createdAt: '2015-12-16 01:56:55',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1137',
      pageId: 'inductive_ambiguity',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newUsedAsTag',
      createdAt: '2015-10-28 03:47:09',
      auxPageId: 'work_in_progress_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '383',
      pageId: 'inductive_ambiguity',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'ai_alignment',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '434',
      pageId: 'inductive_ambiguity',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'value_alignment_open_problem',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2108',
      pageId: 'inductive_ambiguity',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2015-06-07 22:01:45',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2107',
      pageId: 'inductive_ambiguity',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-04-17 01:49:00',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2106',
      pageId: 'inductive_ambiguity',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-04-17 01:47:59',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}