{
  localUrl: '../page/88.html',
  arbitalUrl: 'https://arbital.com/p/88',
  rawJsonUrl: '../raw/88.json',
  likeableId: '2427',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: '88',
  edit: '1',
  editSummary: '',
  prevEdit: '0',
  currentEdit: '1',
  wasPublished: 'true',
  type: 'comment',
  title: '"I didn't know that about Ba..."',
  clickbait: '',
  textLength: '1712',
  alias: '88',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'KenziAmodei',
  editCreatedAt: '2015-06-22 07:46:11',
  pageCreatorId: 'KenziAmodei',
  pageCreatedAt: '2015-06-22 07:46:11',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '186',
  text: 'I didn\'t know that about Bayesian-inference-ish updating baking in an Occam-ish prior.  Does it need to be complexity-penalizing, or would any consistent prior-choosing rule work?  I assume the former from the phrasing.  \r  \nWhy is that?  "Does not much constrain the end results" could just mean that unless we assume the agent is Occam-ish, we can\'t tell from its posteriors whether it did Bayesian inference or something else.  But I don\'t see why *that* couldn\'t be true of some non-Occam-ish prior-picking rule, as long as we knew what *that* was.\r  \nI think this definition includes agents that only care about their sensory inputs, since sensory inputs are a subset of states of the world.\r  \nThis makes me think that the definition of economic agent that I googled *isn\'t* what was meant, since this one seems to be primarily making a claim about efficiency, rather than about impacting markets ("an agent who is part of the economy").  Something more like Homo economicus?\r  \n"Naturalistic agent" seems to be primarily a claim about the situation that agent finds itself in, rather than a claim about that agent\'s models (e.g., a Cartesian dualist which was in fact embedded in a universe made of atoms, and was itself made of atoms, would still be a "naturalistic agent", I think).\r  \nThe last point reminds me of Dawkins-style extended phenotypes; not sure how analogous/comparable that concept is.  I guess it makes me want to go back and figure out whether we defined what "an agent" is.  So does a beehive count as "an agent" (I believe that, conditioned on it being an agent at all, it would be a naturalized agent)?\r  \n...does Arbital have search functionality right now?  Maybe not :-/',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-06 00:41:54',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'KenziAmodei'
  ],
  childIds: [],
  parentIds: [
    'standard_agent'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '102',
      pageId: '88',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'standard_agent',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1276',
      pageId: '88',
      userId: 'KenziAmodei',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-06-22 07:46:11',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}
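
The comment's opening question asks whether Bayesian updating must come with a complexity-penalizing prior, or whether any consistent prior-choosing rule would do. A minimal sketch of that distinction follows. Everything in it is hypothetical: the hypothesis names, description lengths, and likelihoods are invented for illustration and do not come from the Arbital page. The point is that the update rule is identical under either prior; only the choice of prior carries the Occam-ish penalty.

```typescript
// Toy sketch: the hypothesis names, description lengths (in bits), and
// likelihoods below are invented for illustration only.
// Bayesian updating itself is prior-agnostic; the "Occam-ish" behavior
// comes entirely from choosing a complexity-penalized prior.

type Dist = Record<string, number>;

function normalize(weights: Dist): Dist {
  const total = Object.values(weights).reduce((a, b) => a + b, 0);
  const result: Dist = {};
  for (const h of Object.keys(weights)) {
    result[h] = weights[h] / total;
  }
  return result;
}

// Posterior(h) is proportional to prior(h) * likelihood(h),
// regardless of which prior was chosen.
function bayesUpdate(prior: Dist, likelihood: Dist): Dist {
  const unnormalized: Dist = {};
  for (const h of Object.keys(prior)) {
    unnormalized[h] = prior[h] * likelihood[h];
  }
  return normalize(unnormalized);
}

// Made-up hypotheses: description length in bits, and the likelihood each
// assigns to some observed data.
const descriptionLength: Dist = { hSimple: 2, hMedium: 5, hComplex: 12 };
const likelihood: Dist = { hSimple: 0.4, hMedium: 0.5, hComplex: 0.9 };

// Occam-ish prior: mass proportional to 2^(-description length).
const occamWeights: Dist = {};
for (const h of Object.keys(descriptionLength)) {
  occamWeights[h] = Math.pow(2, -descriptionLength[h]);
}
const occamPrior = normalize(occamWeights);

// Uniform prior for contrast: no complexity penalty at all.
const uniformWeights: Dist = {};
for (const h of Object.keys(descriptionLength)) {
  uniformWeights[h] = 1;
}
const uniformPrior = normalize(uniformWeights);

console.log(bayesUpdate(occamPrior, likelihood));   // weight stays on the simpler hypotheses
console.log(bayesUpdate(uniformPrior, likelihood)); // weight tracks the best-fitting hypothesis
```

Under the uniform prior the posterior simply tracks the likelihoods, so posteriors alone would not reveal whether the agent was Occam-ish; as the comment notes, the constraint comes from the prior-choosing rule rather than from the update rule itself.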