Suppose a philanthropic community contains groups that share the same goals but disagree about how to achieve them. One organization pursues strategy X and the other pursues strategy Y. Each has opportunities to carry out actions that advance its own strategy a lot while failing to help (or actively hindering) the other's, and vice versa. In supporting these projects, the community probably wants to follow some principles:
- Self-knowledge. Individuals should decide whether to help an organization working on X by estimating their own impact there.
- Donor-independence. Donors should refrain from telling executives how to trade off between the success of X and Y.
- Pareto improvement. When organizations trade off between X and Y, they should do so at a similar exchange rate; if their rates diverge, there are unexploited gains from trade.
It's possible to achieve all of these at once, but that is not the default outcome. By the self-knowledge principle, the people who work at an organization pursuing X are self-selected to overestimate the importance of X. By donor-independence, most donors will hang back from interfering with strategic trade-offs. The result is that the organization that favors X will sacrifice Y, while the organization that favors Y will do the reverse.
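To make the cost of mismatched exchange rates concrete, here is a toy illustration (the numbers are invented for the example). Suppose the X-focused organization will give up as much as 3 units of Y for 1 unit of X, and the Y-focused organization will do the reverse. Then the first happily takes an action worth (+1 X, −2 Y), and the second an action worth (−2 X, +1 Y). Summed across the community, the result is (−1 X, −1 Y): both strategies are set back, even though each organization acted rationally by its own lights. Any single shared exchange rate would have rejected at least one of the two actions.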
This can be fixed. Basically, we need allied decision makers to coordinate to counteract their biases. I see three main approaches:
- Convergence. Each leader learns why other groups pursue different strategies, then incorporates that reasoning into their own plans.
- Compromise. Each leader takes small steps to avoid trading off too aggressively against others' goals.
- Let a thousand flowers bloom. Each leader deploys their own strategy; as some fail, leaders' views converge, and everyone commits to the projects that are succeeding.
It could also help to weaken the original principles. When deciding whom to work for (or advise), individuals can mitigate self-selection effects (and the winner's curse) by giving others' estimates more weight. Donors may flag concerns if they notice an organization trading off too harshly against others' priorities.
But the major impact will come from these three approaches. To say a little more about the process of convergence: clearly, the buck for strategic decision-making stops with the leadership, and leaders are ultimately biased. The worst response would be to interpret this as a call for them to be replaced by committees. Rather, the point is that they must guard against their bias by engaging with those who disagree most strongly with their strategic views, and by pursuing the arguments and evidence that would flip their strategies.
Perhaps I have mostly stated the obvious so far, so let's apply this analysis to an actual strategic disagreement. Consider a classic one from the effective altruism community:
There is a deficit of people able to carry out planning, research, and executive roles in the EA community. How can we best find such people? If the EA movement is larger, they are more likely to encounter it; but if its average quality is low, they may be turned away. Tradeoffs must be made on two fronts: i) allocation of talent: to what extent should high-value staff work on growing the EA movement? and ii) branding: to what extent should the EA movement be a mass movement rather than a close-knit intellectual elite?
These tradeoffs are important, and to a large extent they can be broken down into empirical subquestions. To what extent have top contributors arrived via the broader effective altruism movement rather than its precursors? To what extent are top contributors spending their time on outreach? To what extent was it overdetermined that top contributors would encounter the effective altruism movement even when it was small, e.g. because they were already on many adjacent mailing lists? To the extent that we have data on these questions, discussing them would be useful for setting shared strategic priorities. It would also help with the admittedly more subjective question of branding, where EA leaders should likewise analyze the sources of their disagreement.
To the extent that there is residual disagreement regarding branding, each party could agree to temper its most extreme actions. Those promoting off-putting yet noncentral intellectual claims regarding politics, diet, or pharmaceuticals could shelve them, while those pushing for the kinds of rapid growth with the greatest dilutional effects, such as banner advertisements, could do the same.
If these parties still disagree, and are not prepared to spend further time and effort on compromise, then all we can ask is that they be responsive to evidence of failure. It's hard to discuss when you think an organization is failing (a problem that may be worth examining in its own right), but let me give an example of past strategic mistakes. I previously started three EA-branded projects: EA Melbourne, the EA Handbook, and the EA Forum. Each reached at least hundreds of individuals, but none was ever going to grow past a few thousand, because there simply were not that many people interested in effective altruism. My lesson: a project that will mostly appeal to effective altruists (as most projects branded "EA" will) must be extremely intensive to be worthwhile. For a startup-style, outreach-focused project, on the other hand, substantial value comes from the case where the project outgrows the EA movement, so to reach a larger scale you usually don't want to rely on the EA brand alone. And since a main bottleneck is researchers and executives, you might want to take things in a direction that would attract the kind of person who would do that work (and ideally, do it in priority areas). To my mind, good examples are Envision, 80,000 Hours, and Arbital. "EA Blank" projects should disband or rename.
Apart from the issue of movement size, similar discussions could be had about the importance of epistemic norms, the importance of risk-neutrality and risky ventures, sources for recruitment, and so on. It's tricky to argue that leaders need to put more work into resolving these disagreements on the margin, but having worked through one example, my intuition is that this could be a helpful project. In general, it seems useful to put more work into achieving strategic convergence, displaying strategic consensus regarding work in a cause area (where it exists), and collecting evidence on past projects in order to make ongoing strategic progress.
Comments
Alexei Andreev
Yup, this is the classic "build your own market" vs "take over existing market." Companies that do really well usually end up expanding or creating a market. (For example, AirBnB allowed a lot more people than before to easily rent out their homes.) However, it's often easier to start by capturing / competing in a small market. So it seems best to me to start with a relatively small market, like EA, but have the brand and product aimed at a broader market. (Which is what I think you are saying.)
Alexei Andreev
This seems hard, especially depending on what kind of project you are doing. Startups, for example, are often pretty extreme in their trade-offs. Compromising would lead to a sub-par solution.
Eric Rogstad
I'm not sure what you mean about an exchange rate. Isn't a Pareto improvement something that makes everyone better off (or rather: someone better off and no one worse off)?
Eric Rogstad
Is the idea that a single organization should pursue X or Y and not worry about the fact that any given donors will value both X and Y to varying degrees?
(If so I might have called this organization-independence, or single-focus.)
Eric Rogstad
The question of tradeoffs between X and Y and winners' curses reminds me of Bostrom's paper, The Unilateralist's Curse.
From the abstract:
Chris Leong
Any ideas about which projects would work better when separated from the EA brand? Perhaps EA policy, for one, because of the difficulties posed by political interests.
There are already several projects/organisations associated with EA under different names, such as GiveWell (which pre-dated EA), Founders Pledge, Giving What We Can, and Effective Giving. Even LessWrong is an excellent source of people to join EA, and it wouldn't work as well if it were "EA Rationality".
Definitely would like to see more projects developing out of EA. I'm curious if you have any thoughts on what else could be split out of EA.