{
  localUrl: '../page/2ql.html',
  arbitalUrl: 'https://arbital.com/p/2ql',
  rawJsonUrl: '../raw/2ql.json',
  likeableId: '1651',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: '2ql',
  edit: '1',
  editSummary: '',
  prevEdit: '0',
  currentEdit: '1',
  wasPublished: 'true',
  type: 'comment',
  title: '"It seems critical to distin..."',
  clickbait: '',
  textLength: '2484',
  alias: '2ql',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-03-19 20:07:50',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-03-19 20:07:50',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: 'To put it another way, the task is to have the AI generate a safe burrito\\.  One way to try to do this is making sure that the AI's explicit training data contains a burrito with botulinum toxin, labeled as a negative example, so that the AI knows not to include botulinum\\.  The hope is that via conservatism we can avoid needing to think of every possible way that our training data might not properly stabilize the 'simplest explanation' along every dimension of potentially fatal variance, and shift some of the workload to just showing the AI positive examples which happen not to contain botulinum toxin\\.',
  anchorText: 'One way to try to do this is making sure that the AI's explicit training data contains a burrito with botulinum toxin, labeled as a negative example, so that the AI knows not to include botulinum\\.  The hope is that via conservatism we can avoid needing to think of every possible way that our training data might not properly stabilize the 'simplest explanation' along every dimension of potentially fatal variance, and shift some of the workload to just showing the AI positive examples which happen not to contain botulinum toxin\\.',
  anchorOffset: '77',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '354',
  text: 'It seems critical to distinguish the cases where\n\n1. We are hoping the AI generalizes the concept of "burrito" in the intended way to new data, or\n2. The definition of burrito is "something our burrito-identifier would identify as a burrito given enough time," and we are just hoping the AI doesn't make mistakes. (The burrito-identifier is some process that we can actually run in order to determine whether something is a burrito.)\n\nAs you've probably gathered, I feel hopeless about case (1).\n\nIn case (2), any agent that can learn the concept "definitely a burrito" could use this concept to produce definitely-burritos and thereby achieve high reward in the RL game. So the mere existence of the easy-to-learn definitely-a-burrito concept seems to imply that our learner will behave well. We don't have to do any explicit work on conservative concepts (except to better understand the behavior of our learner).\n\nI've never managed to get quite clear on your picture. My impression is that:\n\n* you think that case (2) is doomed because there is no realistic prospect for creating a good enough burrito-evaluator, and\n* you think that even with a good enough burrito-evaluator, you would still have serious trouble because of errors.\n\nI think your optimism about case (1) is defensible; I disagree, but not for super straightforward reasons. The main disagreement is probably about case (2).\n\nI think that your concern about generating a good enough burrito-evaluator is also defensible; I am optimistic, but even on my view this would require resolving a number of big research problems.\n\nI think your concern about mistakes, and especially about something like "conservative concepts" as a way to reduce the scope for mistakes, is less defensible. I don't feel like this is as complex an issue; the case for delegating this to the learning algorithm seems quite strong, and I don't feel you've really given a case on the other side.\n\nNote that this is related to what you've been calling [4w], and I do think that there are techniques in that space that could help avoid mistakes. (Though I would definitely frame that problem differently.) So it's possible we're not really disagreeing here either. But my best guess is that you are underestimating the extent to which some of these issues could/should be delegated to the learner itself, supposing that we could resolve your other concerns (i.e., supposing that we could construct a good enough burrito-evaluator).',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '2',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'taskagi_open_problems'
  ],
  commentIds: [
    '2qn',
    '2qs'
  ],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8797',
      pageId: '2ql',
      userId: 'PaulChristiano',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-03-19 20:07:50',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8796',
      pageId: '2ql',
      userId: 'PaulChristiano',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-03-19 19:49:41',
      auxPageId: 'taskagi_open_problems',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}