February 9, 2024:
This past weekend I attended the latest Effective Altruism Global conference, which was held in the Bay Area, where I live. These events always have me reflecting on two things: how to effectively do good in the world — the prime directive of EA — and the culture and history of the EA movement itself.
Here’s one puzzle the weekend had me mulling over: What should you tell someone who asks, as the young people I meet at EA events often do, what they can do to best make the world a better place?
There is a lot of obvious advice: Point them at an important problem where there’s actionable work that’ll allow for professional development, think about which problems are neglected, etc. But today I want to highlight a nonobvious one: The best advice you can give about how to improve the world depends more than you might think on how many people will actually listen to the advice.
It might be a great idea to advise a dozen people to apply for a particularly critical job doing important policy work. But advising a thousand people to all apply for that job is just setting up most of them for failure. There are some incredibly impactful charities out there doing great work, and they can absorb a lot of funding. But if every Vox reader donated, we’d flood the charities with more money than they can effectively spend in a responsible way — thus turning an effective charity into a less effective one.
When I was in college a decade ago, the fledgling EA movement was just starting to talk about its work on precisely this question. The most common response I encountered was “Well, if everyone did that, it’d go really badly.”
Sure, I would reply — but right now there are five of us in this student group, and a few hundred signatories worldwide to the Giving What We Can pledge to donate to cost-effective charities. Our advice was all about what to do on current margins, as economists put it — which means what to do as small fish in a big pond. Of course, you’d have to do things differently if there were millions of people donating based on these recommendations, but there weren’t. And it didn’t seem like that would change any time soon.
The effective altruism movement has grown a lot since then, but there still aren’t millions of people donating based on those recommendations, and those highly effective charities still aren’t flooded with more funds than they know what to do with. In fact, they remain badly in need of more funding. So, in a sense, I was right to be dismissive of those worries. But upon reflection, I think I was deeply wrong.
The magnitude of the problems that the effective altruism movement tries to address is still greater than the scale of the movement itself. There are still billions of people living in poverty worldwide, and tens of billions of animals subject to cruel and inhumane conditions on factory farms. There are enormous corporations trying to build AI as fast as possible, and scientists warning that foreseeable technological advances will bring a lot of good and also a lot of ways to kill everyone in the world. I was right to assume that no movement that largely originated on college campuses was going to accumulate enough social sway to solve every one of those problems.
But long before a movement gets that much social sway, it does get enough to face profound new challenges, even a backlash — which is precisely what EA is facing now.
Charities and organizations affiliated with the effective altruists are still a very small share of all charities, however you measure it. Americans gave more than $499 billion to charity last year, but even by the most expansive possible definition, just a fraction of that could likely be classified as influenced by effective altruism.
But within various niche areas that ordinary Americans generally ignore and that have few other funders — like farmed-animal welfare, AI safety, and biosafety and biosecurity — the movement’s influence has been much larger. While more funding for an important cause is almost always a good thing, a sudden increase in money can change the dynamics of a field overnight.
And while effective altruist organizations and their professional networks were growing, the cause areas that the movement largely focused on — especially pandemics and AI, the chief subjects for the EA Global conference this past weekend — hit the international spotlight in a huge way. Far more people, attention, money, and energy have been focused in recent years on AI and, to a lesser extent, pandemics. And that’s also led to vastly more — and vastly better-funded — opposition to efforts in once-niche fields like AI safety than there was a few years ago. Partially as a result, statements like “we should probably prepare more for the next pandemic” have gone from obvious (if often ignored) to politically divisive.
I think these issues are important ones that affect everyone alive, and the massive increase in their popularity is probably a necessary step in building safe AI systems that are subject to fair regulation, or in preventing the next pandemic. But I think it’s easy to underrate how different the dynamics of a medium-sized movement working on hugely controversial issues are from the dynamics of a tiny movement working on issues no one else is thinking about.
When this shift occurs, as it has for EA, missteps — including the FTX scandal — are magnified dramatically. It’s no longer sufficient to give advice that’s good on the margin. You have to do the hard work of building coalitions, making research happen, and ensuring that the best and clearest voices are heard discussing your issue. You need to make allies while differentiating yourself from groups with different interests and different reputations, and while hopefully still staying focused on fixing the original problem.
I think the jury’s still out on whether the issues the effective altruists have faced over the past few years are inevitable growing pains or the first signs of an implosion. But to find a way forward, the movement has to recognize that there’s no going back. The dynamics surrounding everything they do are very different now.
A version of this story originally appeared in the Future Perfect newsletter.