{
  localUrl: '../page/behaviorist.html',
  arbitalUrl: 'https://arbital.com/p/behaviorist',
  rawJsonUrl: '../raw/102.json',
  likeableId: 'EricRogstad',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '4',
  dislikeCount: '0',
  likeScore: '4',
  individualLikes: [
    'EliezerYudkowsky',
    'NopeNope',
    'RolandPihlakas',
    'NathanFish'
  ],
  pageId: 'behaviorist',
  edit: '10',
  editSummary: '',
  prevEdit: '9',
  currentEdit: '10',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Behaviorist genie',
  clickbait: 'An advanced agent that's forbidden to model minds in too much detail.',
  textLength: '5812',
  alias: 'behaviorist',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-03-31 18:34:00',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-07-14 02:33:27',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '1',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '383',
  text: '[summary:  A behaviorist [6w genie] is an [2c advanced AI] that can understand, e.g., material objects and technology, but does not model human minds (or possibly its own mind) in unlimited detail.  If creating a behaviorist agent were possible, it might meliorate several [6r anticipated difficulties] simultaneously, like the problems of [6v creating models of humans that are themselves sapient] or [309 the AI psychologically manipulating its users].  Since the AI would only be able to model humans via some restricted model class, it would be metaphorically similar to a [Skinnerian behaviorist](http://plato.stanford.edu/entries/behaviorism/#1) from the days when it was considered unprestigious for scientists to talk about the internal mental details of human beings.]\n\nA behaviorist [6w genie] is an AI that has been [1g4 averted from modeling] minds in more detail than some whitelisted class of models.\n\nThis is *possibly* a good idea because many [6r possible difficulties] seem to be associated with the AI having a sufficiently advanced model of human minds or AI minds, including:\n\n- [6v Mindcrime]\n- [10f Programmer deception] and [10f programmer manipulation]\n- [ Recursive self-improvement]\n- [5j Modeling distant adversarial superintelligences]\n\n...and yet an AI that is extremely good at understanding material objects and technology (just not other minds) would still be capable of some important classes of [6y pivotal achievement].\n\nA behaviorist genie would still require most of [6w genie theory] and [45 corrigibility] to be solved.  But it's plausible that the restriction away from modeling humans, programmers, and some types of reflectivity would collectively make it significantly easier to build a safe form of this genie.\n\nThus, a behaviorist genie is one of fairly few open candidates for "AI that is restricted in a way that actually makes it safer to build, without it being so restricted as to be incapable of game-changing achievements".\n\nNonetheless, limiting the degree to which the AI can understand cognitive science, other minds, its own programmers, and itself is a very severe restriction that would prevent a number of [2s1 obvious ways] to make progress on the AGI subproblem and the [6c value identification problem], even for [2rz commands given to Task AGIs] ([6w Genies]).  Furthermore, there could perhaps be easier types of genies to build, or there might be grave difficulties in restricting the model class to some space that is useful without being dangerous.\n\n# Requirements for implementation\n\nBroadly speaking, two possible clusters of behaviorist-genie design are:\n\n- A cleanly designed, potentially self-modifying genie that can internally detect modeling problems that threaten to become mind-modeling problems, and route them into a special class of allowable mind-models.\n- A [1fy known-algorithm non-self-improving AI], whose complete set of capabilities has been carefully crafted and limited, which was shaped to not have much capability when it comes to modeling humans (or distant superintelligences).\n\nBreaking the first case down into more detail, the potential desiderata for a behavioristic design are:\n\n- (a) avoiding mindcrime when modeling humans\n- (b) not modeling distant superintelligences or alien civilizations\n- (c) avoiding programmer manipulation\n- (d) avoiding mindcrime in internal processes\n- (e) making self-improvement somewhat less accessible.\n\nThese are different goals, but with some overlap between them.  Some of the things we might need:\n\n- A working [1fv] that was general enough to screen the entire hypothesis space AND that was resilient against loopholes AND passed enough okay computations to compose the entire hypothesis space\n- A working [1fv] that was general enough to screen the entire space of potential self-modifications and subprograms AND was resilient against loopholes AND passed enough okay computations to compose the entire AI\n- An allowed class of human models that was clearly safe in the sense of not being sapient, AND a reliable way to tell *every* time the AI was trying to model a human (including modeling something else that was partially affected by humans, etc.) (possibly with the programmers as a special case that allowed a more sophisticated model of some programmer intentions, but still not one good enough to psychologically manipulate the programmers)\n- A way to tell whenever the AI was trying to model a distant civilization, which shut down the modeling attempt or avoided the incentive to model (this might not require healing a bunch of entanglements, since there are no visible aliens and therefore their exclusion shouldn't mess up other parts of the AI's model)\n- A reflectively stable way to support any of the above, which are technically [1g4 epistemic exclusions]\n\nIn the KANSI case, we'd presumably be 'naturally' working with limited model classes (on the assumption that everything the AI is using is being monitored, has a known algorithm, and has a known model class), and the goal would just be to prevent the KANSI agent from spilling over and creating other human models somewhere else, which might fit well into a general agenda against self-modification and subagent creation.  Similarly, if every new subject is being identified and whitelisted by human monitors, then just not whitelisting the topic of modeling distant superintelligences or devising strategies for programmer manipulation might get most of the job done to an acceptable level *if* the underlying whitelist is never being evaded (even emergently).  This would require a lot of *successfully maintained* vigilance and human monitoring, though, especially if the KANSI agent is trying to allocate a new human-modeling domain once per second and every instance has to be manually checked.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '2',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-25 07:25:01',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky',
    'NateSoares',
    'AlexeiAndreev'
  ],
  childIds: [],
  parentIds: [
    'task_agi'
  ],
  commentIds: [
    '1fj'
  ],
  questionIds: [],
  tagIds: [
    'KANSI',
    'work_in_progress_meta_tag',
    'mindcrime'
  ],
  relatedIds: [
    'distant_SIs',
    'probable_environment_hacking'
  ],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9173',
      pageId: 'behaviorist',
      userId: 'EliezerYudkowsky',
      edit: '10',
      type: 'newEdit',
      createdAt: '2016-03-31 18:34:00',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4619',
      pageId: 'behaviorist',
      userId: 'EliezerYudkowsky',
      edit: '9',
      type: 'newEdit',
      createdAt: '2015-12-28 22:32:34',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4618',
      pageId: 'behaviorist',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newTag',
      createdAt: '2015-12-28 22:32:11',
      auxPageId: 'KANSI',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4615',
      pageId: 'behaviorist',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newAlias',
      createdAt: '2015-12-28 22:31:51',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4616',
      pageId: 'behaviorist',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newEdit',
      createdAt: '2015-12-28 22:31:51',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4551',
      pageId: 'behaviorist',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newUsedAsTag',
      createdAt: '2015-12-28 21:14:32',
      auxPageId: 'distant_SIs',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4548',
      pageId: 'behaviorist',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newUsedAsTag',
      createdAt: '2015-12-28 21:14:12',
      auxPageId: 'probable_environment_hacking',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4506',
      pageId: 'behaviorist',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newTag',
      createdAt: '2015-12-28 19:42:59',
      auxPageId: 'mindcrime',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4330',
      pageId: 'behaviorist',
      userId: 'NateSoares',
      edit: '7',
      type: 'newEdit',
      createdAt: '2015-12-24 23:54:35',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4324',
      pageId: 'behaviorist',
      userId: 'NateSoares',
      edit: '6',
      type: 'newEdit',
      createdAt: '2015-12-24 23:51:25',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4323',
      pageId: 'behaviorist',
      userId: 'NateSoares',
      edit: '0',
      type: 'newAlias',
      createdAt: '2015-12-24 23:51:15',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4321',
      pageId: 'behaviorist',
      userId: 'NateSoares',
      edit: '5',
      type: 'newEdit',
      createdAt: '2015-12-24 23:49:55',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3915',
      pageId: 'behaviorist',
      userId: 'AlexeiAndreev',
      edit: '0',
      type: 'newAlias',
      createdAt: '2015-12-16 16:21:17',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3916',
      pageId: 'behaviorist',
      userId: 'AlexeiAndreev',
      edit: '4',
      type: 'newEdit',
      createdAt: '2015-12-16 16:21:17',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1119',
      pageId: 'behaviorist',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newUsedAsTag',
      createdAt: '2015-10-28 03:47:09',
      auxPageId: 'work_in_progress_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '316',
      pageId: 'behaviorist',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'task_agi',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1464',
      pageId: 'behaviorist',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2015-08-05 23:42:40',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1463',
      pageId: 'behaviorist',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-07-14 02:37:10',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1462',
      pageId: 'behaviorist',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-07-14 02:33:27',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}