{
  localUrl: '../page/cognitive_alignment.html',
  arbitalUrl: 'https://arbital.com/p/cognitive_alignment',
  rawJsonUrl: '../raw/7td.json',
  likeableId: '0',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'cognitive_alignment',
  edit: '1',
  editSummary: '',
  prevEdit: '0',
  currentEdit: '1',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Generalized principle of cognitive alignment',
  clickbait: 'When we're asking how we want the AI to think about an alignment problem, one source of inspiration is trying to have the AI mirror our own thoughts about that problem.',
  textLength: '2205',
  alias: 'cognitive_alignment',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2017-02-13 18:55:58',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2017-02-13 18:55:58',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'false',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '59',
  text: 'A generalization of the [7g0] is that whenever we are asking how we want an AI algorithm to behave with respect to some alignment or safety issue, we might ask how we ourselves are thinking about that problem, and whether we can have the AI think conjugate thoughts.  This may sometimes seem like a much more complicated or dangerous approach than simpler avenues, but it's often a source of useful inspiration.\n\nFor example, with respect to the [-2xd], this principle might lead us to ask: "Is there some way we can have the AI [3ps truly understand that its own programmers may have built the wrong AI], including the wrong definition of exactly what it means to have 'built the wrong AI', such that [7rc the AI thinks it *cannot* recover the matter by optimizing any kind of preference already built into it], and so the AI itself wants to shut down before having a great impact?  That is, when the AI sees the programmers trying to press the button, or contemplates the possibility of their pressing it, updating on this information would cause the AI to expect its further operation to have a net bad impact, in some sense it cannot overcome through any clever strategy besides just shutting down."\n\nThis in turn might imply a complicated mind-state we're not sure how to get right, such that we would prefer a simpler approach to shutdownability along the lines of a perfected [1b7 utility indifference] scheme.  If we're shutting down the AI at all, it means something has gone wrong, which implies that something else may have gone wrong earlier, before we noticed.  That seems like a bad time to have the AI enthusiastically trying to shut down even 'better' than its original design called for (unless we can get the AI to [3ps understand even *that* part too], the danger of that kind of 'improvement', during its normal operation).\n\nTrying for maximum cognitive alignment isn't always a good idea; but it's almost always worth thinking through a safety problem from that perspective, for inspiration about what we'd ideally want the AI to be doing.  It's often a good idea to move closer to that ideal when doing so doesn't introduce greater complication or other problems.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    'nonadversarial'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [
    'c_class_meta_tag'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22010',
      pageId: 'cognitive_alignment',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newEditGroup',
      createdAt: '2017-02-13 18:56:13',
      auxPageId: 'EliezerYudkowsky',
      oldSettingsValue: '123',
      newSettingsValue: '2'
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22008',
      pageId: 'cognitive_alignment',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2017-02-13 18:55:59',
      auxPageId: 'nonadversarial',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22009',
      pageId: 'cognitive_alignment',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2017-02-13 18:55:59',
      auxPageId: 'c_class_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22006',
      pageId: 'cognitive_alignment',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2017-02-13 18:55:58',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}