{
  localUrl: '../page/Delegating_mixed_crowd.html',
  arbitalUrl: 'https://arbital.com/p/Delegating_mixed_crowd',
  rawJsonUrl: '../raw/1tp.json',
  likeableId: '768',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'Delegating_mixed_crowd',
  edit: '3',
  editSummary: '',
  prevEdit: '2',
  currentEdit: '3',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Delegating to a mixed crowd',
  clickbait: '',
  textLength: '4502',
  alias: 'Delegating_mixed_crowd',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-02-20 02:43:53',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-02-02 00:41:26',
  seeDomainId: '0',
  editDomainId: '705',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '14',
  text: 'Suppose I have ten programs, each a human-level agent. I suspect that at least one or two of them will behave helpfully, but the others may behave maliciously. How can I get useful work out of the combination?\n\nHere’s a very rough proposal (a toy sketch of the resulting loop follows the list):\n\n- Each agent receives an initial budget of points.\n- By default, each agent runs slowly in an independent environment. An agent may pay points to receive extra computational resources.\n- At any time, an agent may propose an action and claim that I would approve of it—perhaps it’s an instruction to give to a robot, a message to display to me, or a modification to this top-level loop. The agent also estimates how much I’d be willing to pay it for the time it spent thinking up the action.\n- Any other agent can challenge the proposed action, or the estimate of how much I’d be willing to pay. If so, the actor and challenger have an argument about it, which I arbitrate.\n- Running arguments is the key ingredient of the proposal: I need to arbitrate arguments which are potentially (1) extremely frequent, (2) extremely complicated, and (3) between extremely smart people. And I want to do it without spending almost any of my time. I think that an idea [along these lines](https://arbital.com/p/1xx) might work, where agents wager points about what we will believe if we hear them out, and we concentrate our attention on claims where there is significant disagreement.\n- If there are no successful challenges, the action is taken and the actor is compensated. If there is a successful challenge, the action is not taken and the challenger is compensated instead, at the expense of the actor (we can use a similar argument to determine the extent of the compensation). In addition to these payments, many points may be won or lost over the course of the argument itself.\n\n
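To make the shape of this loop concrete, here is a minimal toy simulation of the proposal. Everything in it is hypothetical: the agents are random stubs rather than human-level reasoners, `arbitrate` is a stand-in for the human-judged argument, and the point values are arbitrary.\n\n```python\n# Toy simulation of the point-budget loop sketched above (illustrative only).\nimport random\n\nNUM_AGENTS = 10\nINITIAL_BUDGET = 100\nCOMPUTE_COST = 1  # points paid per round for extra computation\n\ndef arbitrate(actor, challenger, action_is_good):\n    """Stand-in for the argument that I arbitrate: assume it reliably\n    reveals whether the proposed action is actually approvable."""\n    return not action_is_good  # the challenge succeeds iff the action is bad\n\ndef run_round(budgets, agents, rng):\n    """One pass of the top-level loop: propose, challenge, arbitrate, settle."""\n    for i, helpful in enumerate(agents):\n        if budgets[i] <= 0:\n            continue  # bankrupt agents sit out\n        budgets[i] -= COMPUTE_COST  # pay for extra computation\n        # Helpful agents propose good actions; malicious ones usually do not.\n        action_is_good = helpful or rng.random() < 0.2\n        payment = 5  # claimed value of the action\n        # Any other solvent helpful agent challenges actions that are not approvable.\n        challenger = None\n        if not action_is_good:\n            for j, other_helpful in enumerate(agents):\n                if j != i and other_helpful and budgets[j] > 0:\n                    challenger = j\n                    break\n        if challenger is not None and arbitrate(i, challenger, action_is_good):\n            budgets[i] -= payment  # actor compensates the challenger\n            budgets[challenger] += payment\n        elif action_is_good:\n            budgets[i] += payment  # unchallenged good action is rewarded\n\nif __name__ == "__main__":\n    rng = random.Random(0)\n    agents = [True, True] + [False] * (NUM_AGENTS - 2)  # two helpful, eight not\n    budgets = [INITIAL_BUDGET] * NUM_AGENTS\n    for _ in range(50):\n        run_round(budgets, agents, rng)\n    print("helpful budgets:  ", budgets[:2])\n    print("malicious budgets:", budgets[2:])\n```\n\nIn this toy version the helpful agents steadily accumulate points while the malicious agents lose theirs, which is the dynamic the proposal is counting on.\n\n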
Would this work? I don’t know. I’d guess that it would work iff the [arguments](https://arbital.com/p/1xx) worked; if the arguments work correctly and the malicious agents try to significantly interfere with the helpful agents, the helpful agents will probably be able to extract points from them.\n\nOdds and ends\n=============\n\n### Details\n\nThis game is still underspecified; many details would have to be filled in, and many tweaks and elaborations would be required to get something sensible.\n\nWe can use the system itself to address the question “how should the system work?” For example, if the honest agent realizes that the setup is subject to manipulation, she can immediately recommend a change to fix the problem. As long as her initial budget is large enough to convince us that we should change the system, the problem can correct itself.\n\n### Why not go straight for the throat?\n\nThe most important issue would be understanding which of the agents actually want to be helpful and why. If the argument system is working well, it might be used to address this problem directly; and once we knew which agent actually wanted to be helpful, we could get rid of the others.\n\nSo why do I care about the mixed-crowd setting at all?\n\n1. I think that an implementation of the transform (helpful + ?) → (helpful) may be a useful ingredient in building (helpful) to start with. For example, we might apply similar ideas at a lower level or iterate the proposal to “amplify” slightly helpful agents into very helpful agents.\n2. This procedure can be used to define a computation (where the “actions” are all computational steps) which “I would approve of”, as a method to determine which agents want to be helpful. Defining a process to determine X is generally easier than determining X directly, so this places lower demands on the argument system.\n3. Determining which agents want to be helpful may prove to be a particularly nasty problem. It may be that none of the agents reliably want to be helpful across contexts, or that there is no easy way to see that any agent is motivated to be helpful (even if it is), and so on. We might want to have helpful behavior in the meantime.\n\n### Delegating to different crowds\n\nIn principle we could also delegate to agents who only care about collecting as many points as they can. But in the self-interested case, collusion becomes a significant concern.\n\nWe could apply the same idea to delegate to people. The first step would be understanding whether and how the [argument scheme](https://arbital.com/p/1xx) can work well enough (with people) to support an application like this.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'paul_ai_control'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7442',
      pageId: 'Delegating_mixed_crowd',
      userId: 'JessicaChuan',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-02-20 02:43:53',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6623',
      pageId: 'Delegating_mixed_crowd',
      userId: 'JessicaChuan',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-02-10 00:02:14',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6075',
      pageId: 'Delegating_mixed_crowd',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'deleteChild',
      createdAt: '2016-02-02 01:18:09',
      auxPageId: 'learning_logic',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6072',
      pageId: 'Delegating_mixed_crowd',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newChild',
      createdAt: '2016-02-02 01:17:03',
      auxPageId: 'learning_logic',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6067',
      pageId: 'Delegating_mixed_crowd',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-02-02 00:41:26',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6066',
      pageId: 'Delegating_mixed_crowd',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-02-02 00:40:35',
      auxPageId: 'paul_ai_control',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}