{
  localUrl: '../page/1st.html',
  arbitalUrl: 'https://arbital.com/p/1st',
  rawJsonUrl: '../raw/1st.json',
  likeableId: 'KyleBogosian',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: '1st',
  edit: '3',
  editSummary: '',
  prevEdit: '2',
  currentEdit: '3',
  wasPublished: 'true',
  type: 'comment',
  title: '"These arguments seem weak t..."',
  clickbait: '',
  textLength: '2450',
  alias: '1st',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-01-30 19:33:20',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-01-30 19:26:54',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '89',
  text: 'These arguments seem weak to me.\n\n - I think the basic issue is that you are not properly handling uncertainty about what will be practically needed to train an agent out of infrahuman errors in language understanding. Your arguments seem much more reasonable under a particular model (e.g. a system that is making predictions or plans and develops language understanding as a tool for making better predictions), but it seems hard to justify 90% confidence in that model.\n - It\'s not at all clear that language understanding means identifying "natural" categories. Whether values have high information content doesn\'t seem like a huge consideration given what I consider plausible approaches to language learning---it\'s the kind of thing that makes the problem linearly harder / require linearly more data, rather than causing a qualitative change.\n - It seems clear that "right" does not mean "a human would judge right given a persuasive argument." That\'s a way we might try to define right, but it\'s clearly an alternative to a natural language understanding of right (an alternative I consider more plausible), not an aspect of it.\n - "Do the right thing" does not have to cash out as a function from outcomes --> rightness followed by rightness-maximization. That\'s not even really an intuitive way to cash it out.\n - The key issue may be how well natural language understanding degrades under uncertainty. Again, you seem to be imagining a distribution over vague maps from outcome --> rightness which is then maximized in expectation, whereas I (and, I think, most people) am imagining an incomplete set of tentative views about rightness. The incomplete set of tentative views about rightness can include strong claims about things like violations of human autonomy (even though autonomy is similarly defined by an incomplete set of tentative views rather than a distribution over maps from outcome --> autonomy).\n\nI agree that many commenters and some researchers are too optimistic about this kind of thing working automatically or by default. But I think your post doesn\'t engage with the substantive optimistic view.\n\nIt would be easier to respond if you gave a tighter argument for your conclusion, but it might also be worth someone actively making a tighter argument for the optimistic view, especially if you actually don\'t understand the strong optimistic view (rather than initially responding to a weak version of it for clarity).',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-23 11:46:17',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    '4s'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '5963',
      pageId: '1st',
      userId: 'PaulChristiano',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-01-30 19:33:20',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '5962',
      pageId: '1st',
      userId: 'PaulChristiano',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-01-30 19:29:03',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '5961',
      pageId: '1st',
      userId: 'PaulChristiano',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-01-30 19:26:54',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '5960',
      pageId: '1st',
      userId: 'PaulChristiano',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-01-30 19:07:52',
      auxPageId: '4s',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}