EA Forum Podcast (Curated & popular)

40 Episodes

By: EA Forum Team

Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma. If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.

“Please Donate to CAIP (Post 1 of 3 on AI Governance)” by Jason Green-Lowe
Last Sunday at 11:30 PM

I am Jason Green-Lowe, the executive director of the Center for AI Policy (CAIP). Our mission is to directly convince Congress to pass strong AI safety legislation. As I explain in some detail in this post, I think our organization has been doing extremely important work, and that we’ve been doing well at it. Unfortunately, we have been unable to get funding from traditional donors to continue our operations. If we don’t get more funding in the next 30 days, we will have to shut down, which will damage our relationships with Congress and make it harder for futu...


“Doing Prioritization Better” by arvomm, David_Moss, Hayley Clatterbuck, Laura Duffy, Derek Shiller, Bob Fischer
Last Saturday at 4:15 AM

Or on the types of prioritization, their strengths, pitfalls, and how EA should balance them

The cause prioritization landscape in EA is changing. Prominent groups have shut down, others have been founded, and everyone is trying to figure out how to prepare for AI. This is the first in a series of posts examining the state of cause prioritization and proposing strategies for moving forward.

 

Executive Summary

Performing prioritization work has been one of the main tasks, and arguably achievements, of EA. We highlight three types of prioritization: Cause Prioritization, W...


“The Soul of EA is in Trouble” by Mjreard
Last Friday at 4:15 AM

This is a Forum Team crosspost from Substack.

Whither cause prioritization and connection with the good?

There's a trend towards people who once identified as Effective Altruists now identifying solely as “people working on AI safety.”[1] For those in the loop, it feels like less of a trend and more of a tidal wave. There's an increasing sense that among the most prominent (formerly?) EA orgs and individuals, making AGI go well is functionally all that matters. For that end, so the trend goes, the ideas of Effective Altruism have exhausted their usefulness. They pointed us t...


“12x more cost-effective than EAG - how I organised EA North 2025 (and how you could, too)” by matthes
Last Thursday at 10:00 AM

I put on a small one-day conference. The cost per attendee was £50 (vs £1.2k for EAGs) and the cost per new connection was £11 (vs £130 for EAGs).

Intro

EA North was a one-day event for the North of England. 35 people showed up on the day. In total, I spent £1765 (≈ $2.4k), including paying myself £20/h for 30h total. This money will be reimbursed by EA UK[1].

The cost per attendee was £50 and the cost per new connection was £11. These are significantly lower than for EAG events, suggesting that we should be putting on more, smaller events. ...


“E2G help available” by Will Kirkpatrick
05/05/2025

If you’re interested in having a meaningful EA career but your experience doesn’t match the types of jobs that the typical white collar, intellectual EA community leans towards, then you’re just like me.

I have been earning to give as a nuclear power plant operator in Southern Maryland for the past few years, and I think it's a great opportunity for other EAs who want to make a difference but don’t have a PhD in philosophy or public policy.

Additionally, I have personal sway with Constellation Energy's Calvert Cliffs plant, so I can in...


“Cultivating doubt: why I no longer believe cultivated meat is the answer” by Tom Bry-Chevalier🔸
05/03/2025

Introduction

In this post, I present what I believe to be an important yet underexplored argument that fundamentally challenges the promise of cultivated meat. In essence, there are compelling reasons to conclude that cultivated meat will not replace conventional meat, but will instead primarily compete with other alternative proteins that offer superior environmental and ethical benefits. Moreover, research into and promotion of cultivated meat may potentially result in a net negative impact. Beyond critique, I try to offer constructive recommendations for the EA movement. While I've kept this post concise, I'm more than willing to elaborate on...


“Reflections on 7 years building the EA Forum — and moving on” by JP Addison🔸
05/03/2025

I’m ironically not a very prolific writer. I’ve preferred to stay behind the scenes here and leave the writing to my colleagues who have more of a knack for it. But a goodbye post is something I must write for myself.

Perhaps I’m getting old and nostalgic, because what came out wound up being a wander down memory lane. I probably am getting old and nostalgic, but I also hope I’ve communicated something about my love for this community and the gratefulness for the chance to serve you all.

My story of the E...


“Prioritizing Work” by Jeff Kaufman 🔸
05/01/2025

I recently read a blog post that concluded with:

When I'm on my deathbed, I won't look back at my life and wish I had worked harder. I'll look back and wish I spent more time with the people I loved.

Setting aside that some people don't have the economic breathing room to make this kind of tradeoff, what jumps out at me is the implication that you're not working on something important that you'll endorse in retrospect. I don't think the author is envisioning directly valuable work (reducing risk from international conflict, pandemics, or...


“Reflections on the $5 Minimum Donation Barrier on the Giving What We Can Platform — A Student Perspective from a Lower-Income Country.” by Habeeb Abdul
04/29/2025

I wanted to share a small but important challenge I've encountered as a student engaging with Effective Altruism from a lower-income country (Nigeria), and invite thoughts or suggestions from the community.

Recently, I tried to make a one-time donation to one of the EA-aligned charities listed on the Giving What We Can platform. However, I discovered that I could not donate an amount less than $5.


While this might seem like a minor limit for many, for someone like me — a student without a steady income or job — $5 is a significant amount.

To pr...


[Linkpost] “Scaling Our Pilot Early-Warning System” by Jeff Kaufman 🔸
04/25/2025

This is a link post.

Summary: The NAO will increase our sequencing significantly over the next few months, funded by a $3M grant from Open Philanthropy. This will allow us to scale our early-warning system to where we could flag many engineered pathogens early enough to mitigate their worst impacts, and also generate large amounts of data to develop, tune, and evaluate our detection systems.

One of the biological threats the NAO is most concerned with is a 'stealth' pathogen, such as a virus with the profile of a faster-spreading HIV. This could cause a devastating pandemic...


“Why you can justify almost anything using historical social movements” by JamesÖz 🔸
04/25/2025

[Cross-posted from my Substack here]

If you spend time with people trying to change the world, you’ll come to an interesting conundrum: Various advocacy groups reference previous successful social movements as to why their chosen strategy is the most important one. Yet, these groups often follow wildly different strategies from each other to achieve social change. So, which one of them is right?

The answer is all of them and none of them.

This is because many people use research and historical movements to justify their pre-existing beliefs about how social change ha...


“AI for Animals 2025 Bay Area Retrospective” by Constance Li, AI for Animals
04/19/2025

Our Mission: To build a multidisciplinary field around using technology—especially AI—to improve the lives of nonhumans now and in the future.

Overview

Background

This hybrid conference had nearly 550 participants and took place March 1-2, 2025 at UC Berkeley. It was organized by AI for Animals for $74k by volunteer core organizers Constance Li, Sankalpa Ghose, and Santeri Tani.

This conference has evolved since 2023:

The 1st conference mainly consisted of philosophers and was a single track lecture/panel. The 2nd conference put all lectures on one day and follow...


“ALLFED emergency appeal: Help us raise $800,000 to avoid cutting half of programs” by Denkenberger🔸, JuanGarcia, Laura Cook
04/18/2025

SUMMARY:

ALLFED is launching an emergency appeal on the EA Forum due to a serious funding shortfall. Without new support, ALLFED will be forced to cut half our budget in the coming months, drastically reducing our capacity to help build global food system resilience for catastrophic scenarios like nuclear winter, a severe pandemic, or infrastructure breakdown. ALLFED is seeking $800,000 over the course of 2025 to sustain its team, continue policy-relevant research, and move forward with pilot projects that could save lives in a catastrophe. As funding priorities shift toward AI safety, we believe resilient food solutions...


“Cost-effectiveness of Anima International Poland” by saulius
04/17/2025

Summary

In this article, I estimate the cost-effectiveness of five Anima International programs in Poland: improving cage-free and broiler welfare, blocking new factory farms, banning fur farming, and encouraging retailers to sell more plant-based protein. I estimate that together, these programs help roughly 136 animals—or 32 years of farmed animal life—per dollar spent. Animal years affected per dollar spent was within an order of magnitude for all five evaluated interventions.

I also tried to estimate how much suffering each program alleviates. Using SADs (Suffering-Adjusted Days)—a metric developed by Ambitious Impact (AIM) that accounts for specie...


“Announcing our 2025 strategy” by Giving What We Can
04/14/2025

We are excited to share a summary of our 2025 strategy, which builds on our work in 2024 and provides a vision through 2027 and beyond!

Background

Giving What We Can (GWWC) is working towards a world without preventable suffering or existential risk, where everyone is able to flourish. We do this by making giving effectively and significantly a cultural norm.

Focus on pledges

Based on our last impact evaluation[1], we have made our pledges – and in particular the 🔸10% Pledge – the core focus of GWWC's work.[2] We know the 🔸10% Pledge is a powerful institution, as we’ve s...


“EA Reflections on my Military Career” by Tom Gardiner 🔸
04/12/2025

Introduction

Four years ago, I commissioned as an Officer in the UK's Royal Navy. I had been engaging with EA for four years before that and chose this career as a coherent part of my impact-focused career plan, and I stand by that decision.

Early next year, I will leave the Navy. This article is a round-up of why I made my choices, how I think military careers can sensibly align with an EA career, and the theories of impact I considered along the way that don't hold water. Military service won't be the right...


“GWWC is retiring 10 initiatives” by Giving What We Can
04/12/2025

In our recent strategy retreat, the GWWC Leadership Team recognised that by spreading our limited resources across too many projects, we are unable to deliver the level of excellence and impact that our mission demands.

True to our value of being mission accountable, we've therefore made the difficult but necessary decision to discontinue a total of 10 initiatives. By focusing our energy on fewer, more strategically aligned initiatives, we think we’ll be more likely to ultimately achieve our Big Hairy Audacious Goal of 1 million pledgers donating $3B USD to high-impact charities annually. (See our 2025 strategy.)

We...


“Maybe We Aren’t Stubborn Enough” by emre kaplan🔸
04/11/2025

I sometimes worry that focus on effectiveness creates perverse incentives in strategic settings, leading us to become less effective. Here are a few observations illustrating this concern.

Effectiveness-focused advocacy creates perverse incentives for adversaries

When we conduct cage-free campaigns, the target companies frequently ask us why they are being targeted instead of some other company. While trying to answer that, one immediately realises the following tension. If we say that "because targeting you is the most effective thing we can do", we incentivise them to not budge. Because they will know that willingness to compromise...


“EA Adjacency as FTX Trauma” by Mjreard
04/10/2025

This is a Forum Team crosspost from Substack.
Matt would like to add: "Epistemic status = incomplete speculation; posted here at the Forum team's request"

When you ask prominent Effective Altruists about Effective Altruism, you often get responses like these:

For context, Will MacAskill and Holden Karnofsky are arguably, literally the number one and two most prominent Effective Altruists on the planet. Other evidence of their ~spouses’ personal involvement abounds, especially Amanda's. Now, perhaps they’ve had changes of heart in recent months or years – and they’re certainly entitled to have those – but being evasive an...


“What I learned from a week in the EU policy bubble” by Joris 🔸
04/06/2025

Last week, I participated in Animal Advocacy Careers’ Impactful Policy Careers programme. Below I’m sharing some reflections on what was a really interesting week in Brussels!

Please note I spent just one week there, so take it all with a grain of (CAP-subsidized) salt. Posts like this and this one are probably much more informative (and assume less context). I mainly wrote this to reflect on my time in Brussels (and I capped it at 2 hours, so it's not a super polished draft). I’ll focus mostly on EU careers generally, less on (EU) animal welfare-related career...


“What if I’m not open to feedback?” by frances_lorenz
04/03/2025

It is standard form in EA to state one's welcomingness of feedback, both in a personal and professional capacity. Individuals and organisations alike often have many means by which you can deliver feedback, whether through anonymous forms or direct communication, and forum posts will often begin or end with:

"I'm open to feedback..."
"I'm looking for feedback of the following nature..."
"I'm very full because I ate feedback for breakfast, but there's always room for more..."
And so on.

I'm now wondering: what happens if you write, "I am not open to...


“New Cause Area: Low-Hanging Fruit” by Tandena Wagner, Jackson Wagner
04/02/2025

Hi, I’m Tandena Wagner. As part of my research for EcoResilience Initiative (an EA organization searching for the best ways to preserve biodiversity into the long-term future), I’ve investigated several common claims that various resource limitations could be disastrous for civilization – i.e., that we’re approaching “peak oil”, or imminently running out of phosphorus, soil nitrogen, chromium, etc. For the most part, I’ve found these claims to be overblown, often thanks to systematic exaggeration caused by the poor epistemic environment of activist environmentalism. In general, Paul-Ehrlich-style resource limitations do not seem pressing compared to other risks to civilizatio...


“Centre for Effective Altruism Is No Longer ‘Effective Altruism’-Related” by Emma Richter🔸
04/01/2025

For immediate release: April 1, 2025

OXFORD, UK — The Centre for Effective Altruism (CEA) announced today that it will no longer identify as an "Effective Altruism" organization.

"After careful consideration, we've determined that the most effective way to have a positive impact is to deny any association with Effective Altruism," said a CEA spokesperson. "Our mission remains unchanged: to use reason and evidence to do the most good. Which coincidentally was the definition of EA."

The announcement mirrors a pattern of other organizations that have grown with EA support and frameworks and eventually distanced themselves fr...


“Mitigating Risks from Rouge AI” by Stephen Clare
04/01/2025

Introduction

Misaligned AI systems, which have a tendency to use their capabilities in ways that conflict with the intentions of both developers and users, could cause significant societal harm. Identifying them is seen as increasingly important to inform development and deployment decisions and design mitigation measures. There are concerns, however, that this will prove challenging. For example, misaligned AIs may only reveal harmful behaviors in rare circumstances, or perceive detection attempts as threatening and deploy countermeasures – including deception and sandbagging – to evade them.

For these reasons, developing a range of efforts to detect misaligned behavior, incl...


“80,000 Hours: Job Birds” by Conor Barnes 🔶
04/01/2025

Hi everybody!

I'm Conor. I run the 80,000 Hours Job Board. Or I used to. As of today — April 1 — we are becoming Job Birds!

We've been talking to users for the last few years about making this change, and people have overwhelmingly been in favour (remember, there are six or more birds for every human on Earth). Whether it's the daily emails asking me to finally switch, or the flocks of people accosting me at conferences to urge a migration to Job Birds, the demand is overwhelming! Luckily, the wait is over!

I've included an F...


“Introducing The Spending What We Must Pledge” by Thomas Kwa 💸
04/01/2025

Epistemic status: highly certain, or something

The Spending What We Must 💸11% pledge 

In short: Members pledge to spend at least 11% of their income on effectively increasing their own productivity.

This pledge is likely higher-impact for most people than the Giving What We Can 🔸10% Pledge, and we also think the name accurately reflects the non-supererogatory moral beliefs of many in the EA community.

Example

Charlie is a software engineer for the Centre for Effective Future Research. Since Charlie has taken the SWWM 💸11% pledge, rather than splurge on a vacation, they decide to...


“Anthropic is not being consistently candid about their connection to EA” by burner2
03/31/2025

In a recent Wired article about Anthropic, there's a section where Anthropic's president, Daniela Amodei, and early employee Amanda Askell seem to suggest there's little connection between Anthropic and the EA movement:

Ask Daniela about it and she says, "I'm not the expert on effective altruism. I don't identify with that terminology. My impression is that it's a bit of an outdated term". Yet her husband, Holden Karnofsky, cofounded one of EA's most conspicuous philanthropy wings, is outspoken about AI safety, and, in January 2025, joined Anthropic. Many others also remain engaged with EA. As early employee Amanda...


“Five insights from farm animal economics” by LewisBollard
03/29/2025

How the dismal science can help us end the dismal treatment of farm animals


By Martin Gould

Note: This post was crossposted from the Open Philanthropy Farm Animal Welfare Research Newsletter by the Forum team, with the author's permission. The author may not see or respond to comments on this post.

This year we’ll be sharing a few notes from my colleagues on their areas of expertise. The first is from Martin. I’ll be back next month. - Lewis

In 2024, Denmark announced plans to introduce the world's first carb...


“Stewardship: CEA’s 2025-26 strategy to reach and raise EA’s ceiling” by Oscar Howie, Zachary Robinson🔸
03/29/2025

“I” refers to Zach, the Centre for Effective Altruism's CEO. Oscar is CEA's Chief of Staff. We are grateful to all the CEA staff and community members who have contributed insightful input and feedback (directly and indirectly) during the development of our strategy and over many years. Mistakes are of course our own.

Exec summary

As one CEA, we are taking a principles-first approach to stewardship of the EA community.

During the search for a new CEO, the board and search committee were open to alternative strategic directions, but from the beginning of my t...


“Joey/AIM transition announcement” by Joey🔸
03/28/2025

After ~7 years, I am stepping away from being the CEO of AIM and will transition to a board member. My current planned last day is December 1st, giving ample time for a smooth transition. As is true for most times a CEO or co-founder leaves an organization, this is for a pretty large variety of reasons. The biggest three are that: 1) AIM is in a stable place, 2) My counterfactual impact, and 3) My fit for founder vs. CEO roles. You can see more details on each of these in my Substack post.

AIM has been one of the...


“Money, Population, and Insecticide Resistance: Why malaria cases haven’t declined since 2015” by Paul SHC
03/24/2025

Note: I am not a malaria expert. This is my best-faith attempt at answering a question that was bothering me, but this is a large and complex field, and I’ve almost certainly misunderstood something somewhere along the way.

Summary

While the world made incredible progress in reducing malaria cases from 2000 to 2015, the past 10 years have seen malaria cases stop declining and start rising. I investigated potential reasons behind this increase through reading the existing literature and looking at publicly available data, and I identified three key factors explaining the rise:

Population Growth: Af...


“You probably won’t solve malaria or x-risk, and that’s ok” by Rory Fenton
03/21/2025

Cross-posted from my blog.

Contrary to my carefully crafted brand as a weak nerd, I go to a local CrossFit gym a few times a week. Every year, the gym raises funds for a scholarship for teens from lower-income families to attend their summer camp program. I don’t know how many Crossfit-interested low-income teens there are in my small town, but I’ll guess there are perhaps 2 of them who would benefit from the scholarship. After all, CrossFit is pretty niche, and the town is small.

Helping youngsters get swole in the Pacific Northwest is n...


“80,000 Hours is shifting our strategic approach to focus more on AGI” by 80000_Hours, Niel_Bowerman
03/21/2025

TL;DR

In a sentence:
We are shifting our strategic focus to put our proactive effort towards helping people work on safely navigating the transition to a world with AGI, while keeping our existing content up.

In more detail:

We think it's plausible that frontier AI companies will develop AGI by 2030. Given the significant risks involved, and the fairly limited amount of work that's been done to reduce these risks, 80,000 Hours is adopting a new strategic approach to focus our efforts in this area.

During 2025, we are prioritising:

Deepening...


“Projects I’d like to see in the GHW meta space” by Melanie Basnak🔸
03/20/2025

In my past year as a grantmaker in the global health and wellbeing (GHW) meta space at Open Philanthropy, I've identified some exciting ideas that could fill existing gaps. While these initiatives have significant potential, they require more active development and support to move forward.

The ideas I think could have the highest impact are:

Government placements/secondments in key GHW areas (e.g. international development), and expanded (ultra) high-net-worth ([U]HNW) advising

Each of these ideas needs a very specific type of leadership and/or structure. More accessible options I’m excited about — particularly for...


“Am I Missing Something, or Is EA? Thoughts from a Learner in Uganda” by Dr Kassim
03/18/2025

Hey everyone, I’ve been going through the EA Introductory Program, and I have to admit some of these ideas make sense, but others leave me with more questions than answers. I’m trying to wrap my head around certain core EA principles, and the more I think about them, the more I wonder: Am I misunderstanding, or are there blind spots in EA's approach?

I’d really love to hear what others think. Maybe you can help me clarify some of my doubts. Or maybe you share the same reservations? Let's talk.

Cause Prioritization. Does I...


“Consider haggling” by Sam Anschell
03/12/2025

*Disclaimer* I am writing this post in a personal capacity; the opinions I express are my own and do not represent my employer.

I think that more people and orgs (especially nonprofits) should consider negotiating the cost of sizable expenses. In my experience, there is usually nothing to lose by respectfully asking to pay less, and doing so can sometimes save thousands or tens of thousands of dollars per hour. This is because negotiating doesn’t take very much time[1], savings can persist across multiple years, and counterparties can be surprisingly generous with discounts. Here are a fe...


“Forethought: A new AI macrostrategy group” by Amrit Sidhu-Brar 🔸, MaxDalton, William_MacAskill, Tom_Davidson, Forethought
03/12/2025

Forethought[1] is a new AI macrostrategy research group cofounded by Max Dalton, Will MacAskill, Tom Davidson, and Amrit Sidhu-Brar.

We are trying to figure out how to navigate the (potentially rapid) transition to a world with superintelligent AI systems. We aim to tackle the most important questions we can find, unrestricted by the current Overton window.

More details on our website.

Why we exist

We think that AGI might come soon (say, modal timelines to mostly-automated AI R&D in the next 2-8 years), and might significantly accelerate technological progress, leading...