Thoughts on Effective Altruism
Since last year, I’ve been engaging with the effective altruist community. I discovered it through a friend who recommended a rationality camp to me. It’s been an interesting experience, with lots of good and bad. I also don’t have much direct exposure to other activist movements, like climate activism or feminism, so I can’t really compare.
Effective altruism is a framework based on maximizing your positive impact on the world. This involves thinking critically about how your altruistic actions actually make the world better, and building a framework for comparing different charities in terms of their life-saving cost-effectiveness. It’s also a community dedicated to attacking under-addressed problems and expanding one’s moral circle. Longtermism is an increasingly influential idea in EA which holds that future people should matter to us as much as present people, and that ensuring the survival of civilization is therefore extremely important because of all the potential that could be lost (i.e. trillions of lives or more).
The duality between longtermism and global development is an important thing to understand: EA both seeks to improve the world now and wants to make sure our civilization survives to see more of it. However, longtermist causes sometimes claim an inordinate impact for initiatives that might save humanity far in the future, at which point things get fuzzy and hard to evaluate because of the difficulty of predicting the future. This has led to community criticism and some divide, as money gets poured into helping intellectual elites work on important issues as opposed to donations to the global poor.
I believe both of these facets are important and worthwhile, especially if you believe in risks that could endanger the future of civilization soon, like AI, biotech, or nuclear weapons. I’m not sure I buy the moral assumption of longtermism, and I get skeptical when people pull out numbers like 10^50 lives saved, but I believe the problems EAs call “longtermist” are pressing enough that it doesn’t matter.
I want to reflect a bit on my position and thoughts about EA, as a diverse community that stands for great things but that I don’t truly identify with. I don’t mean to attack or misrepresent anyone; this is just my personal opinion.
I love the fact that we have intelligent and dedicated people working to improve the lives of others around the world. EA has managed to mobilize a uniquely talented set of people to come together, talk about important problems, and do meaningful work on them. EAs are leading the charge on research into the potential dangers of AI, they did great work during COVID (having actually foreshadowed many of the problems with engineered pandemics, and sponsoring good research in bio), and they are generally interesting people to talk to.
Lots of people ignore the harsh realities EA confronts head-on: so many people are suffering in the world, and a lot of governance processes are really bad. How can we, as individuals, work towards making the situation better instead of acting as bystanders?
Engaging with EA has personally allowed me to meet great, insanely smart people and to learn valuable things. I am quite grateful this community exists.
It’s also a prime example of a problem-based approach to the world: if you focus on suffering, the world can be seen as a cacophony of tragedies happening over and over again. So what can I do about it? How can I make things better, so that others can appreciate the same things I enjoy?
This way of looking at the world is unnatural to some because it kind of subverts your personal preferences if you take it seriously, insofar as it neglects the fact that you will do better (be comparatively more impactful) at the things you enjoy. However, this is similar to what happens when someone picks a job or career for status and/or money, which is much more common than the Western “work on object-level things that you love” narrative would have us believe.
What I don’t like
I don’t fully buy consequentialism
I believe consequentialism can often fail as a moral framework, and EAs should be more careful about recognizing its limits. We can’t really claim to understand the utility function of our human lives or the implications of our actions; a lot of life, to me, is trying to piece that together, not just running a simple optimization process to make a number (e.g. “wellbeing”) go up. Compressing life to such a simple understanding doesn’t resonate with me, and a lot is lost in translation when you do. If we’re maximizing a function we don’t really understand, we’ll make lots of mistakes, and I think EA is a victim of this at times.
It feels like a lot of EAs resolve this by invoking negative externalities in situations where consequentialism gives you weird results (e.g. lying to people, or ruthlessly trying to convert people to impact), but I claim this is kind of BS, and that the actual problem of over-optimizing an over-simplified metric gets overlooked.
General ugh fields I’ve had interacting with some EAs
This stuff isn’t a general fact about EAs; it’s just something I’ve noticed a lot.
Social Reality stuff
People in EA going “oh, who do you know that’s smart, and why? *takes notes* how can we get them to work on problems we think are important?” This conversion mindset is quite culty, and when I’ve seen it happen it has made me thoroughly uneasy. You can always argue it’s justified by a certain utilitarian framework (modulo the negative externalities if people find out you’re doing this), but it isn’t something I would ever agree with.
There’s a focus in EA/rationality on the idea of a “hero mindset”: you can go out in the world and truly change things if you just try hard to make it happen. This is awesome, but it also leads, in my opinion, to EAs kind of segregating themselves from the rest of the world, which hasn’t been explicitly “agency/impact-pilled”. It also implies a sense that “EAs are better at getting shit done in general”. This is bullshit. EA is a small pocket of the world; statistically, the best people at X shouldn’t be expected to be EAs. Being insular reinforces the community’s kind of cultish vibes. People should have other communities and be less stuck together with each other.
If you want to meaningfully improve the world, actually engage with it. Try to understand what people care about and why. Young people who get into EA early should be especially careful about this. Don’t build your epistemic and social foundations on just one thing.
“Progress Studies (and effective altruism and AI safety for that matter) have become so popular because they fill the religion-shaped hole in the hearts of frustrated nerds who are desperately searching for something to make their lives feel meaningful.” - 20 Modern Heresies
Something else I find problematic arises from EA’s totalizing nature. On top of the insularity, this manifests in people who feel pressure to be constantly making the world better going down bad mental health spirals. The standard EA response is “oh, take care of yourself and take vacations, because if you burn out you’ll be less effective”. Although I get that this type of carrot-and-stick rationalization can be useful, I think we shouldn’t want people caring about and optimizing for literally one thing in the first place, suppressing all their other desires. Then again, maybe this is just me not having viscerally internalized EA ideas: I view my goal of doing something impactful as an important priority, but still only one of my goals, and EA is not something I apply to my entire life, just to the part of me trying to do good.
Lots of EAs agree with me on this, but judging from content on the EA Forum and interactions with a fraction of EAs, there is also a lot of sentiment pushing in the more totalizing direction. The essay “EA is about maximization, and maximization is perilous” addresses this well.
My personal disposition towards this is that I have three main goals:
- do something that meaningfully improves the world
- build cool things in cool places with cool people (vague but it’s something)
- enjoy my life and explore the diversity of things that are possible, experience lots of different cultures and sensations
I want the part of me that cares about being impactful to actually accomplish this in a way I find meaningful, in the sense that it’s a substantial contribution that also resonates with my personal understanding of the world. This is messier than “oh, we can all optimize for the same thing because our closest approximation to what’s good is good enough”, which feels to me like the EA perspective. In my case these two attitudes overlap somewhat, but I still think it’s important to note the difference in mindset.
EA has lots of beautiful ideas and people. It also has warts, and I’m not sure I’d describe myself as an EA. But when I think of all the fucked up shit out there, I’m pretty happy there are people rationally trying their best to work on the problem, warts and all. I’m personally still somewhat lost about my perspective on the world, and I’m cautiously accepting that “still stumbling in the dark” is a much better position than being too eager to be part of something. In this last year of exposure to EA, I’ve had moments where I thought, wow, it’s amazing something like this exists, but also feelings of confusion and sometimes disgust. I predict I’ll keep working on the sidelines towards a version of myself that can do, and is doing, something good for the world, without labels.