{
  localUrl: '../page/underestimate_value_complexity_perceputal_property.html',
  arbitalUrl: 'https://arbital.com/p/underestimate_value_complexity_perceputal_property',
  rawJsonUrl: '../raw/4v2.json',
  likeableId: '2857',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '1',
  dislikeCount: '0',
  likeScore: '1',
  individualLikes: [
    'EricRogstad'
  ],
  pageId: 'underestimate_value_complexity_perceputal_property',
  edit: '1',
  editSummary: '',
  prevEdit: '0',
  currentEdit: '1',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Underestimating complexity of value because goodness feels like a simple property',
  clickbait: 'When you just want to yell at the AI, "Just do normal high-value X, dammit, not weird low-value X!" and that \'high versus low value\' boundary is way more complicated than your brain wants to think.',
  textLength: '1285',
  alias: 'underestimate_value_complexity_perceputal_property',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-06-27 01:23:07',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2016-06-27 01:23:07',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '73',
  text: 'One potential reason why people might tend to systematically underestimate the [5l complexity of value] is that the "goodness" of a policy or goal-instantiation *feels like* a simple, direct property.  That is, our brains compute the goodness level and make it available to us as a relatively simple quantity, so we *feel like* it\'s a simple fact that tiling the universe with tiny agents experiencing maximum simply-represented \'pleasure\' levels is a *bad* version of happiness.  We feel like it ought to be simple to yell at an AI, "Just give me high-value happiness, not this *weird low-value* happiness!"  Or we feel like the AI ought to learn, from a few examples, that it\'s meant to produce *high*-value X and not *low*-value X, especially if the AI is smart enough to learn other simple boundaries, like the difference between red objects and blue objects.  But the actual boundary between "good X" and "bad X" is [36h value-laden], far more wiggly, and would require far more examples to delineate.  What our brain computes as a seemingly simple, perceptually available, one-dimensional quantity does not always correspond to a simple, easy-to-learn gradient in the space of policies or outcomes.  This is especially true of the seemingly readily-available property of [3d9 beneficialness].',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    'complexity_of_value'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [
    'start_meta_tag',
    'psychologizing'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14617',
      pageId: 'underestimate_value_complexity_perceputal_property',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-06-27 01:23:09',
      auxPageId: 'complexity_of_value',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14618',
      pageId: 'underestimate_value_complexity_perceputal_property',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2016-06-27 01:23:09',
      auxPageId: 'start_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14619',
      pageId: 'underestimate_value_complexity_perceputal_property',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2016-06-27 01:23:09',
      auxPageId: 'psychologizing',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14615',
      pageId: 'underestimate_value_complexity_perceputal_property',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-06-27 01:23:07',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}