{
  localUrl: '../page/76p.html',
  arbitalUrl: 'https://arbital.com/p/76p',
  rawJsonUrl: '../raw/76p.json',
  likeableId: '3895',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '2',
  dislikeCount: '0',
  likeScore: '2',
  individualLikes: [
    'EricBruylant',
    'StephanieZolayvar'
  ],
  pageId: '76p',
  edit: '4',
  editSummary: '',
  prevEdit: '3',
  currentEdit: '4',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Approaches to strategic disagreement',
  clickbait: 'Organizations self-select staff to agree with their strategies. By default, this causes them to sacrifice the fulfillment of others' plans. How can we resolve these strategic disagreements?',
  textLength: '6866',
  alias: '76p',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'RyanCarey2',
  editCreatedAt: '2017-01-03 04:38:17',
  pageCreatorId: 'RyanCarey2',
  pageCreatedAt: '2017-01-02 01:03:36',
  seeDomainId: '0',
  editDomainId: '2142',
  submitToDomainId: '2069',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'false',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '152',
  text: 'Suppose a philanthropic community contains groups that have the same goals but disagree about how to achieve them. One organization pursues strategy X and the other pursues strategy Y. Each has opportunities to carry out actions that advance strategy X a lot but fail to help (or even hinder) strategy Y, and vice versa. In supporting these projects, the community probably wants to follow some principles:\n\n - **Self-knowledge**. Individuals should decide whether to help an\n   organization working on X by estimating their own impact there.\n - **Donor-independence**. Donors should refrain from telling\n   executives how to trade off between the success of X and\n   Y.\n - **Pareto improvement**. When organizations trade off between X and Y, they should do so at a similar\n   exchange rate.\n\nIt's possible to achieve all of these at once, but this is not the default outcome. By the self-knowledge principle, people who work at an organization working on X are self-selected to think X is more important than it actually is. By donor-independence, most donors will hang back from interfering with strategic trade-offs. Then, the organization that favors X will sacrifice Y, while the organization that favors Y will do the reverse.\n\nThis can be fixed. Basically, we need allied decision makers to coordinate to counteract their biases. I see three main approaches:\n\n 1. **Convergence**. Each leader learns why other groups pursue different strategies, then incorporates that reasoning into their own plans.\n 2. **Compromise**. Each leader takes small steps to avoid trading off others' goals too aggressively.\n 3. **Let a thousand flowers bloom / throw everything at the wall and see what sticks**. Each leader deploys their own strategy. When some fail, various leaders' views converge, and then everyone commits to the projects that are succeeding.\n\nIt could also help to weaken the original principles. When deciding whom to work for (or advise), people can mitigate self-selection effects (and the winner's curse) by giving others' estimates more weight. Donors may flag concern if they notice an organization trading off too harshly against others' priorities.\n\nBut most of the impact will come from these three approaches. To say a little more about the process of convergence: clearly, the buck for strategic decision-making stops with the leadership. Ultimately, they are biased. The worst response would be to interpret this as a call for them to be replaced by a committee. Rather, the point is that they must guard against their bias by engaging with those who disagree most strongly with their strategic views, and must pursue the arguments and evidence [that would flip their strategies](http://lesswrong.com/lw/o6p/double_crux_a_strategy_for_resolving_disagreement/).\n\nPerhaps I have mostly stated the obvious so far. So let's use this analysis to deal with some actual strategic disagreement. Consider a classic source of strategic disagreement in the effective altruism community:\n\n*There is a deficit of people able to carry out planning, research and executive roles in the EA community. How can we best find such people? If the EA movement is larger, they are more likely to encounter it, but if its average quality is low, they may be turned away. 
Tradeoffs must be made with respect to both: i) allocation of talent: to what extent should high-value staff work on growing the EA movement? and ii) branding: to what extent should the EA movement be a mass movement rather than a close-knit intellectual elite?*\n\nThese tradeoffs are important, and to a large extent they can be broken down into empirical subquestions. To what extent have top contributors arrived from the broader effective altruist movement rather than its precursors? To what extent are top contributors spending their time on outreach? To what extent was it overdetermined that top contributors would encounter the effective altruism movement, even when it was small, e.g. did they encounter many adjacent mailing lists? To the extent that we have data on these questions, it would be useful to discuss them in order to set shared strategic priorities. This would also help with the admittedly more subjective question of branding. [76r EA leaders should analyze their disagreements about branding].\n\nTo the extent that there is residual disagreement regarding branding, each party could agree to temper its most extreme actions. Those promoting off-putting yet noncentral intellectual claims regarding politics, diet or pharmaceuticals could shelve these, while those pushing for the kinds of rapid growth that would have the greatest dilutional effects, such as banner advertisements, could do the same.\n\nIf these parties still disagree and are not prepared to spend further time and effort working out a compromise, then all we can ask is that they are responsive to evidence of failure. It's hard to discuss when you think an organization is failing (this is something that could be independently worth discussing), but let me give an example of past strategic mistakes. I previously started three EA-branded projects: EA Melbourne, the EA Handbook and the EA Forum. Each reached at least hundreds of individuals. However, none was ever going to get past a few thousand, because there were simply not that many people interested in effective altruism. My lesson: a project that is going to appeal mostly to effective altruists (such as most that are branded “EA”) must be extremely intensive to be worthwhile. On the other hand, for a startup-style outreach-focused project, substantial value comes from the case where the project outgrows the EA movement. So to reach a larger scale, you usually don’t want to use just the EA brand. Since we have just said that a main bottleneck is researchers and executives, you might want to take things in a direction that would attract the kind of person who would do that work (and ideally, would do it in priority areas). To my mind, good examples are [Envision](http://envision-conference.com/), [80,000 Hours](http://www.80000hours.org) and Arbital. [76v "EA Blank" projects should disband or rename].\n\nApart from the issue of movement size, similar discussions could be had in relation to: the importance of epistemic norms, the importance of risk-neutrality and risky ventures, sources for recruitment, and so on. It's tricky to argue that leaders need to put more work into resolving these disagreements on the margin, but based on working through one example, my intuition is that this could be a helpful project. 
In general, it seems that it could be useful to put more work into achieving strategic convergence, displaying strategic consensus about work in a given cause area (where it exists), and collecting evidence from past projects in order to make ongoing strategic progress. [76w EAs should do more strategy research]\n\n\n\n',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '2',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'RyanCarey2'
  ],
  childIds: [],
  parentIds: [],
  commentIds: [
    '77l',
    '77m',
    '77p',
    '77r',
    '77s',
    '7kr'
  ],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21241',
      pageId: '76p',
      userId: 'RyanCarey2',
      edit: '4',
      type: 'newEdit',
      createdAt: '2017-01-03 04:38:17',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21220',
      pageId: '76p',
      userId: 'RyanCarey2',
      edit: '3',
      type: 'newEdit',
      createdAt: '2017-01-02 02:22:54',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21219',
      pageId: '76p',
      userId: 'RyanCarey2',
      edit: '2',
      type: 'newEdit',
      createdAt: '2017-01-02 02:20:55',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21217',
      pageId: '76p',
      userId: 'RyanCarey2',
      edit: '1',
      type: 'newEdit',
      createdAt: '2017-01-02 01:03:36',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [
    {
      domainId: '2069',
      pageId: '76p',
      submitterId: 'RyanCarey2',
      createdAt: '2017-01-02 01:03:36',
      score: '40.36125491488713',
      featuredCommentId: ''
    }
  ],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'false',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}