{
  localUrl: '../page/7n.html',
  arbitalUrl: 'https://arbital.com/p/7n',
  rawJsonUrl: '../raw/7n.json',
  likeableId: '2407',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '1',
  dislikeCount: '0',
  likeScore: '1',
  individualLikes: [
    'EricRogstad'
  ],
  pageId: '7n',
  edit: '2',
  editSummary: '',
  prevEdit: '1',
  currentEdit: '2',
  wasPublished: 'true',
  type: 'comment',
  title: '"This (and many of your conc..."',
  clickbait: '',
  textLength: '1619',
  alias: '7n',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-03-23 03:30:23',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2015-06-18 18:56:26',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '2885',
  text: 'This (and many of your concerns) seems basically sensible to me. But I tend to read them more broadly as a reductio against particular approaches to building aligned AI systems (e.g. building an AI that pursues an explicit and directly defined goal). And so I tend to say things like "I don\'t expect X to be a problem," because any design that suffers from problem X is likely to be totally unworkable for a wide range of reasons. You tend to say "X seems like a serious problem." But it\'s not clear if we disagree. \n  \nOne way we may disagree is about what we expect people to do. I think that for the most part reasonable people will be exploring workable designs, or designs that are unworkable for subtle reasons, rather than trying to fix manifestly unworkable designs. You perhaps doubt that there are any reasonable people in this sense.\n  \nAnother difference is that I am inclined to look at people who say "X is not a problem" and imagine them saying something closer to what I am saying. E.g. if you present a difficulty with building rational agents with explicitly represented goals and an AI researcher says that they don\'t believe this is a real difficulty, it may be because your comments are (at best) reinforcing their view that sophisticated AI systems will not be agents pursuing explicitly represented goals.\n  \n(Of course, I agree that both happen. If we disagree, it\'s about whether the charitable interpretation is sometimes accurate vs. almost never accurate, or perhaps about whether proceeding under maximally charitable assumptions is tactically worthwhile even if it often proves to be wrong.)',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '3',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-27 14:32:20',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'nearest_unblocked'
  ],
  commentIds: [
    '1h3',
    '2rs',
    '2rv',
    '9k'
  ],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8926',
      pageId: '7n',
      userId: 'PaulChristiano',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-03-23 03:30:23',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '164',
      pageId: '7n',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'nearest_unblocked',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1258',
      pageId: '7n',
      userId: 'PaulChristiano',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-06-18 18:56:26',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}