{
  localUrl: '../page/otherizer.html',
  arbitalUrl: 'https://arbital.com/p/otherizer',
  rawJsonUrl: '../raw/2r9.json',
  likeableId: '1674',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'otherizer',
  edit: '1',
  editSummary: '',
  prevEdit: '0',
  currentEdit: '1',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Other-izing (wanted: new optimization idiom)',
  clickbait: 'Maximization isn\'t possible for bounded agents, and satisficing doesn\'t seem like enough.  What other kind of \'izing\' might be good for realistic, bounded agents?',
  textLength: '3007',
  alias: 'otherizer',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-03-22 01:33:10',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2016-03-22 01:33:10',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '199',
  text: 'The open "other-izer" problem is to find something besides maximizing, satisficing, meliorizing, and several other existing but unsatisfactory idioms, which is actually suitable as an optimization idiom for [bounded_agent bounded agents] and is [1fx reflectively stable].\n\nIn standard theory we tend to assume that agents are expected utility *maximizers* that always choose the available option with highest expected utility.  But this isn\'t a realistic idiom, because a realistic, [bounded_agent bounded agent] with limited computing power can\'t compute the expected utility of every possible action.\n\nAn expected utility satisficer, which e.g. might approve any policy so long as its expected utility is at least 0.95, would be much more realistic.  But it also doesn\'t seem suitable for an actual AGI, since, e.g., if policy X produces expected utility of at least 0.98, then it would also satisfice to randomize between mostly following policy X and a small chance of following a policy Y with expected utility 0; this seems to give away a needlessly large amount of utility.  We\'d probably be fairly disturbed if an otherwise aligned AGI were actually doing that.\n\nSatisficing is also [reflectively consistent](2rb) but not [1fx reflectively stable] - while [1mq tiling agents theory] can give formulations of satisficers that will approve the construction of similar satisficers, a satisficer could also tile to a maximizer.  If your decision criterion is to approve policies which achieve expected utility at least $\\theta,$ and you expect that an expected utility *maximizing* version of yourself would achieve expected utility at least $\\theta,$ then you\'ll approve self-modifying to be an expected utility maximizer.  This is another reason to prefer a formulation of optimization besides satisficing - if the AI is strongly self-modifying, there\'s no guarantee that the \'satisficing\' property would stick around so that our analysis would go on being applicable; and even if it is not strongly self-modifying, it might still create non-satisficing chunks of cognitive mechanism inside itself or in the environment.\n\nA meliorizer has a current policy and only replaces it with policies of increased expected utility.  Again, while it\'s possible to demonstrate that a meliorizer can approve self-modifying to another meliorizer, and hence that this idiom is reflectively consistent, it doesn\'t seem like it would be reflectively stable - becoming a maximizer or something else might have higher expected utility than staying a meliorizer.\n\nThe "other-izer" open problem is to find something better than maximization, satisficing, and meliorization that actually makes sense as an idiom of optimization for a resource-bounded agent, and that we\'d think would be an okay thing for e.g. a [6w Task AGI] to do; it should be at least reflectively consistent, and preferably reflectively stable.\n\nSee also "[2r8]" for a further desideratum, namely an adjustable parameter of optimization strength, that would be nice to have in an other-izer.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    'reflective_stability'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [
    'value_alignment_open_problem',
    'stub_meta_tag'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8876',
      pageId: 'otherizer',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-03-22 01:33:10',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8870',
      pageId: 'otherizer',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2016-03-21 23:49:10',
      auxPageId: 'stub_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8868',
      pageId: 'otherizer',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newTag',
      createdAt: '2016-03-21 23:49:06',
      auxPageId: 'value_alignment_open_problem',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8866',
      pageId: 'otherizer',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-03-21 23:47:54',
      auxPageId: 'reflective_stability',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}
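
The `text` field above contrasts three optimization idioms: maximizing, satisficing, and meliorizing, and argues that a satisficer may approve self-modifying into a maximizer. As a rough illustration, here is a minimal Python sketch of those decision rules over a toy set of policies; the policy names, the expected utilities, and the 0.95 threshold are assumptions mirroring the article's own example numbers, not any actual agent implementation.

```python
import random

# Toy expected utilities for a handful of candidate policies.  The numbers
# are illustrative assumptions echoing the 0.95 / 0.98 / 0 figures in the
# article text, not outputs of any real system.
POLICY_EU = {
    "policy_X": 0.98,                       # a good policy
    "policy_Y": 0.00,                       # a worthless policy
    "mix_X_Y": 0.97 * 0.98 + 0.03 * 0.00,   # mostly X, small chance of Y
}

THETA = 0.95  # satisficing threshold


def maximizer(policies):
    """Pick the single policy with the highest expected utility."""
    return max(policies, key=POLICY_EU.get)


def satisficer(policies, theta=THETA):
    """Approve any policy whose expected utility is at least theta,
    then pick arbitrarily among the approved ones."""
    acceptable = [p for p in policies if POLICY_EU[p] >= theta]
    return random.choice(acceptable) if acceptable else None


def meliorizer(current, candidates):
    """Keep the current policy unless some candidate strictly improves on it."""
    best = max(candidates, key=POLICY_EU.get)
    return best if POLICY_EU[best] > POLICY_EU[current] else current


if __name__ == "__main__":
    policies = list(POLICY_EU)
    print("maximizer picks:", maximizer(policies))
    print("satisficer may pick:", satisficer(policies))   # mix_X_Y also qualifies
    print("meliorizer keeps/upgrades:", meliorizer("policy_Y", policies))

    # Reflective instability of satisficing: if the satisficer expects that a
    # maximizing successor would achieve expected utility >= theta, then
    # "self-modify into that maximizer" is itself an acceptable policy.
    eu_of_becoming_maximizer = POLICY_EU[maximizer(policies)]
    print("satisficer approves tiling to a maximizer:",
          eu_of_becoming_maximizer >= THETA)
```

Running the sketch, the satisficer treats the mostly-X mixture as just as acceptable as pure X, and it also approves switching to a maximizer because that switch itself clears the 0.95 threshold; this is the reflective-instability worry the page raises.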