EA Forum Podcast (All audio)

40 Episodes

By: EA Forum Team

Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing. If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.

“Impact of Holocaust on soil nematodes, mites, and springtails” by Niki Dupuis
Last Thursday at 11:00 PM

Summary

I estimate the Holocaust increased the living time of (wild) soil nematodes, mites, and springtails. My best guess is that those soil invertebrates have negative lives, so I think that indirectly increasing their animal-years by mass murdering millions of people was harmful on net. I have been estimating the cost-effectiveness of various interventions accounting for effects on soil nematodes, mites, and springtails. Any event which causes a large and sustained change in human population affects agricultural land use, and therefore soil invertebrate welfare. The Holocaust killed 1.1*10^7 people and prevented an estimated 2.7*10^7 people from existing by 2026. I am no...


“EAGxAmsterdam 2025: What We Learned” by James Herbert
Last Thursday at 6:30 PM

James Herbert, Co-Director, Effectief Altruïsme Nederland and EAGxAmsterdam Team Lead

EAGxAmsterdam 2025 took place on 12–14 December at B.Amsterdam, bringing together 517 attendees from across Europe. This is a brief write-up of what went well, what didn't, and what I think other EAGx organisers can learn from it.

The headline numbers:

517 attendees (up 36% from 380 at EAGxUtrecht 2024) — one of the largest EAGx events in recent years, behind only EAGxBerlin 2025
€387 cost per attendee (target was €520; 2024 was €586)
10 reported new connections per attendee (target was 10; 2024 was 10)
906 applications (34% above target)
57 GWWC pledges (26 full 10% Pledges + 31 Trial Pledges) — the strongest pledge outcome of any EAGx t...


“The EA Forum Is the New LinkedIn: A Guide to Personal Branding” by Anna Pitner
Last Thursday at 12:45 PM

I'm humbled (and slightly Bayesianly uncertain) to announce that after years of carefully optimizing my personal brand on LinkedIn, I've decided to pivot to a higher-impact platform: the EA Forum. LinkedIn let me share updates about clients, networking events, and my "journey." The EA Forum offers something better: 3,000-word posts, detailed cost-effectiveness analyses, and occasional updates about my shifting credences on shrimp welfare.

(Quick tip: always add shrimp. They inflate every number.)

Why bother?

Let's be honest. We all know it's basically impossible to get a job through the 80,000 Hours job board.

...


“I’m Suing Anthropic for Unauthorized Use of My Personality” by Linch
Last Thursday at 9:00 AM

Last year, I was sitting in my favorite coffee shop, Caffe Strada, sipping on a matcha latte and writing a self-insert fanfic about how our plucky protagonist escapes the mind-controlling clutches of an evil anti-animal welfare company, when I came across an interesting article on AI character. The core argument is that when you train an AI to be helpful, honest, and ethical, the AI model doesn’t just learn those rules as abstract instructions. Instead, it infers an entire persona from cultural signals in the training data:

Why are [AI Model Claude's] favorite books The Feynman Le...


“EA Netherlands is hiring a Co-Director” by James Herbert, mariekedev
Last Thursday at 3:15 AM

After more than seven years with EA Netherlands — first as a volunteer, then as a board member, and most recently as co-director — Marieke de Visscher is moving on. Marieke was instrumental in building EAN from an all-volunteer group into a staffed nonprofit. She leaves behind a stronger, more professional organisation than the one she found, and I'm grateful for the years we worked together.

We're now hiring her successor.

The role in brief: This is a generalist co-directorship at one of the larger national EA communities globally, based in Amsterdam. You'd have real ownership over prog...


“Announcing 8,000,000 Hours: Career advice for the rest of us” by Soemano Zeijlmans
Last Wednesday at 7:00 PM

TL;DR: We're launching 8,000,000 Hours, a new career guidance organisation for people who want to have an impact but would rather not change what they're doing. Instead of finding the highest-impact career, we help you work longer in your existing one to achieve the same impact at the end of your career.

The core insight

80,000 Hours estimates that you have roughly 80,000 hours in your career and argues you should spend them on the world's most pressing problems. Central to 80,000 Hours' argument is the observation that in a pair of randomly picked interventions, the better...


“Fish Welfare Initiative Hacked” by Tom Billington
Last Wednesday at 4:15 PM

It pains me to report that, in the early hours of this morning, Fish Welfare Initiative (arguably the best charity ever) experienced a serious website security breach. Our entire leadership team is currently reviewing some of the most advanced technical literature available to resolve the issue.

In the meantime, we strongly request: DO NOT VISIT OUR WEBSITE (www.fwi.fish).

Please, no one visit our website under any circumstances. This includes (but is not limited to) going to www.fwi.fish, refreshing www.fwi.fish, or attempting to access any page on www.fwi.fish...


“Alternative AI Forecasting Methods” by Scott Alexander
Last Wednesday at 3:00 PM

Forecasts based on benchmarks and time horizons have failed to produce a consensus timeline for the arrival of AGI. We supplement these with several alternative methods.

1: MATS Applications

AGI - artificial intelligence capable of performing every task - is by definition cognitively capable of putting all humans out of work. However, there are certain edge cases where human employment might persist for other reasons. As the majority of jobs get automated, we expect newly unemployed workers to migrate to these edge case industries.

The clearest case for an industry which cannot be completely...


“Announcing Highly Engaged EAs!” by Sam Anschell
Last Wednesday at 1:45 PM

I’m excited to launch Highly Engaged EAs: a matchmaking and nuptialization service to optimize tax relief, green card accumulation and more!

Workstreams

I Do(nate)

By marrying EAs in different tax brackets, Highly Engaged EAs reduces average tax burden through joint filing to enable greater giving. A Californian AI safety researcher with a million dollar salary could give an extra $53k/year by tying the knot with an unpaid grad student!

Til 80,000 Hours do us part

The place premium is so high in the US that >1,000 people have bo...


“80,000 Hours: Job Board -> Job Bards” by Conor Barnes 🔶, 80000_Hours
Last Wednesday at 1:30 PM

Hi everybody!

I'm Conor. I run the 80,000 Hours Job Board. Or I used to. As of today — April 1 — we are becoming Job Bards!

You may not know that the number one complaint that people have about job boards is sensory poverty. While movie theatres have dared to experiment with haptics and smell-o-vision, no job board has even dared to become an audiovisual experience. Until now.

We believe Job Bards is the premium job board experience, and all job boards that don't make the leap will go the way of silent films. While this inno...


“RejectDirectly” by RejectDirectly
Last Wednesday at 12:45 PM

We're thrilled to announce the founding of RejectDirectly, a new EA-adjacent organization dedicated to closing the global rejection gap.

For too long, the EA rejection pipeline has been plagued by ineffectiveness. Billions of collective DALYs spent on unsuccessful work trials and interviews. The water waste involved in the mass duplication of Google Docs. Community health is in peril.

In comes RejectDirectly. By cutting out the middleman we can deliver high-quality, unconditional rejections straight to applicants — no strings attached, no waiting period. No eight-hour work trials where you pour your Claude extra balance into a st...


“Announcing Deep Thought” by William_MacAskill, MaxDalton
Last Wednesday at 12:30 PM

Forethought is proud to announce the launch of Deep Thought[1] — the world's first fully automated macrostrategy researcher, and the world's most powerful AI model.

For some time, we have believed that one of the most important things a macrostrategy research organisation can do is work to automate its own work. The case is simple: if the questions we're investigating matter, then expanding access to high-quality answers matters too — and no bottleneck is more binding than the scarcity of researchers who can generate them. Today marks our first concrete step toward addressing that bottleneck.

Scorecard

A...


“An unexplained annual spike in false claims on the EA Forum” by Tobias Häberli
Last Wednesday at 11:45 AM

Epistemic status: Very high confidence in the statistical findings. Genuinely confused about the cause. For reasons that will become obvious, I wanted to publish this post on March 31, but unfortunately I could only get it done today.

I've been building a classifier to flag potentially misleading content on the EA Forum as part of a side project on epistemics infrastructure. While validating the model, I noticed something I initially assumed was a bug. This is an interim report on that.


Summary: Every year, on April 1, the rate of posts containing verifiably untrue claims spikes by ro...


“Announcing the Lead Exposure Elimination Project” by Kestrel🔸
Last Wednesday at 11:30 AM

Are there too many cables on your desk? Is this preventing you from doing your most productive work?

Research suggests that the aesthetic quality of a person's work environment has an impact on their mood, productivity, and ability to cope with the crushing reality of knowing that more children will suffer and die from preventable diseases if they don't get enough done.

I've had a high-impact meta side project of helping EAs clear up their desks. After a few iterations aimed at increasing longevity (it turns out that if you remove coffee mugs, the intervention...


“Should 80,000 hours rebrand to 10,000 hours (max)?” by SofiaBalderson
Last Wednesday at 10:15 AM

I say this with deep love for 80k and mild existential dread.

I love 80,000 Hours. It's one of the most useful resources in the effective altruism ecosystem. Their core premise is simple and compelling: you have roughly 80,000 hours in your career, so spend them wisely.

But I've been doing some math, and I think they might need a rebrand.

See, 80,000 Hours increasingly (and rightly!) recommends AI safety and AI governance careers. Their top career paths are dominated by AI-related roles. Which makes a lot of sense... until you think about the implication.

...


“We’re growing: CEA is increasing its font size” by OllieRodriguez
Last Wednesday at 9:15 AM

“We” in this post refers to CEA leadership, who didn’t write this post.

In 2025, we grew engagement with CEA's programs by 20–25% across every tier of engagement.

But we recognize that growing the community is about more than engagement or metrics—at its core, growth is about the bigness of things. How can we claim that we’ve grown the community if the words on the page are the same size?

To match our ambition off the page, it's time to show ambition on the page. We’re increasing our font size from 15px to 25px acr...


“Forum feature update: debate poll” by Toby Tremlett🔹
Last Wednesday at 9:15 AM

A piece of feedback we've had a few times on our debate weeks is that the debate slider shouldn't be single-dimensional.

When you agree or disagree with something, you also want to be able to signal your confidence in your belief. It's time-consuming to do so by explaining yourself in a comment, and percentage agreement isn't necessarily the same thing as confidence in your agreement.

So, not to worry, we're now introducing a second axis, like so:

…or, we were going to. However, once I finished designing this and we tested it internally, I...


“Giving up on EA after 13 years” by Jackson Wagner
Last Wednesday at 8:00 AM

Donating my shares to Lightcone Infrastructure, the Good Food Institute, and the Long-Term Future Fund, because EA refuses to make Mirror's Edge 3

Leaning into EA disillusionment: Why I no longer believe in EA

I bought this EA stock almost thirteen years ago:

Leopold Aschenbrenner once said that “people with situational awareness have a lower cost basis in Nvidia than you do”.  I’m not sure if this is exactly what he meant… but close enough, right?

Back in the late 2000s and early 2010s, EA was a great company pumping out great, interesting games:

...


“Tapping into EA’s most neglected market” by ElliotTep, Bradley Tjandra
Last Wednesday at 7:45 AM

EA has made admirable efforts to reach populations that are underrepresented in the movement: busy mid-career professionals, people of faith, and people across a wide range of professions and academic disciplines.

Yet despite all these efforts, we are underserving our most neglected group, one that makes up 98–99% of all people on earth: neurotypicals (NTs). While we don’t have the data on hand, we think it's safe to say that EA has a severe underrepresentation issue among the NT population.

The case for impact

The scale of the opportunity is significant. According to my r...


“The $120 Million Question: How will it help animals?” by Zoë Sigle 🔹
Last Wednesday at 5:45 AM

Executive Summary

The four largest funds in the effective animal advocacy movement can absorb ~$120 million in new funding in the coming year, with potential for greater growth in future years. Setting aside the movement's historic scarcity mindset, I lay out the significant opportunity this money presents to reduce animal suffering, and encourage emerging major donors from Anthropic to fund this impact.

We can deploy this money effectively across three intervention categories:

Scaling what works (~50–60%)
Testing new interventions (~30–40%)
Growing movement infrastructure & people power (~10–20%)

Major donors should contact the major funds’ managers directly to decide where to gi...


“300,000 lives, 100 million hens, and a world still to save” by William_MacAskill
Last Wednesday at 1:30 AM

I went on the Sam Harris podcast again recently. If you want the full non-paywalled episode, I’m able to share it here:

https://samharris.org/episode/SE6877E700B

30 mins of it is also on YouTube, Spotify, etc.

The occasion for the podcast was the 10-year anniversary edition of Doing Good Better. So, preparing for the podcast, I collected some relevant facts about what EA has achieved over the last decade, and where the movement stands today. I found the numbers both surprising and inspiring, so I thought I should share them he...


[Linkpost] “The good, the bad, and the fair cop” by Toby_S
Last Wednesday at 12:45 AM

This is a link post.

Hi, I’m Thorbjørn “Toby” Schiønning, co-founder of Anima International. I was encouraged by my colleague Jakub Stencel to share our new blog post on negotiating and campaigning for animal welfare here (this is my first post here).

Summary

Over the past two decades, a small group of advocates has secured major corporate commitments on animal welfare – from cage-free eggs to fur-free policies – but securing a commitment means nothing if companies don’t follow through. The animal advocacy movement relies on two traditional approaches: the “good cop” (dialogue and collaboration) and...


“Proactively tell people/organisations when they have changed your actions/impact” by TJPHutton🔸
Last Tuesday at 8:45 PM

Measuring and communicating impact is hard, particularly when focusing on community building over the long term.

You have a much better idea of what led to you starting a new role, changing your donations, etc., than anyone else could. It will take you 2–10 minutes to directly tell someone how they've impacted your trajectory[1], whereas it might take them hours-to-forever to try to reach you and ask.

It seems likely that many community builders, orgs, and funders would get significant value from additional impact stories[2].

It also seems very unlikely to be high-cost for any...


[Linkpost] “AI should be a good citizen, not just a good assistant” by Forethought, Tom_Davidson, William_MacAskill
Last Tuesday at 7:45 PM

This is a link post.

Introduction

Consider a lorry driver who sees a car crash and pulls over to help, even though it’ll delay his journey. Or a delivery driver who notices that an elderly resident hasn’t collected their post in days, and knocks to check they’re okay. Or a social media company employee who notices how their platform is used for online bullying, and brings it up with leadership, even though that's not part of their job description.

This kind of proactive prosocial behaviour is admirable in humans. Should we want it in...


“Introducing 8 New Evidence-Based Nonprofits from Latin America” by Verónica Suárez M.
Last Tuesday at 4:00 PM

TL;DR

We incubated 8 new nonprofits in Latin America using an evidence-based, cost-effectiveness–driven approach. Each organization is tackling a high-priority, neglected problem with proven interventions adapted to their context, all designed to scale. These orgs would not exist without this process, so supporting them creates real counterfactual impact. 📅 We are hosting a “Meet the Founder” session on April 9th — connectors, funders, and mentors are invited to join and meet the teams. Donations are now open to support these new orgs' pilots.

🚀 We are also opening applications for our 2026 cohort.

Why this work exists

In Latin America, ma...


[Linkpost] “What it’s like to be an AI safety grantmaker (and why we need more of them)” by JulianHazell
Last Tuesday at 6:00 AM

This is a link post.

TL;DR

Here are the key points I want you to take away from this post:

There are maybe 30 to 60 people in the world doing AI safety grantmaking, collectively directing hundreds of millions of dollars a year. Soon, there will be >$1B being directed per year, and potentially multiple billions. AI safety grantmaking orgs like CG have a strong track record of counterfactually seeding impactful organizations and careers. Grantmaking involves a lot more than evaluating a stack of inbound proposals. You also proactively generate new grants (e.g., headhunting founders, designing...


[Linkpost] “Concrete projects to prepare for superintelligence” by Forethought, William_MacAskill, finm
Last Monday at 1:15 PM

This is a link post.

Introduction

There are lots of good, neglected, and pretty concrete projects people could set up to make the transition to superintelligence go better. This document describes some that readers might not have thought much about before. They are ordered roughly by how excited we are about them.[1] Of these, Forethought is actively working on AI character evaluation and space governance, and we are very interested in automating macrostrategy.

Summary

AI character evaluation. Start an independent org to evaluate and stress-test AI character traits (epistemic integrity, prosociality, appropriate refusals...


“Claude’s constitution is great” by OscarD🔸
Last Monday at 12:00 PM

I read Claude's Constitution recently. I thought it was very good! This was my favourite quote:

Our own understanding of ethics is limited, and we ourselves often fall short of our own ideals. We don’t want to force Claude's ethics to fit our own flaws and mistakes, especially as Claude grows in ethical maturity. And where Claude sees further and more truly than we do, we hope it can help us see better, too.

Here are some of the other greatest hits according to me (I would encourage you to read the quotes in th...


“How did Leo do? Evaluating Situational Awareness’s predictions” by Jamie_Harris
Last Monday at 11:45 AM

In June 2024, Leopold Aschenbrenner published Situational Awareness, a 165-page essay predicting AGI by 2027, trillion-dollar compute clusters, an inevitable US/China AI arms race, and a world not remotely prepared. It was influential on many people's thinking about AI timelines and risks, including my own.

It's now almost two years later. I was curious: how have the predictions actually held up?

What we did

I got Claude to go through the essay's key claims and check each against the best available evidence as of March 2026. The substantive analysis is Claude's (plus 2 rounds of red-teaming...


“What is integral altruism?” by EuanMcLean
Last Monday at 9:00 AM

Recently, a growing group of EAs, EA-adjacents, post-EAs and EA-curious folk have been gathering and organising around a new term - integral altruism (int/a). The central claims of int/a are that the EA toolkit is powerful but incomplete, and that EA can learn from other movements who are trying to improve the world.

The goal of int/a is to find a broader approach to altruism by integrating EA with epistemics/ontologies/world models/language/culture from outside of the EA/rationalist bubble. More specifically, we reckon EA can be most constructively complemented by learning...


“The Future Will Be Weirder Than That” by MichaelDickens
Last Monday at 8:30 AM

Many people in the animal welfare community treat AI as a powerful but normal technology, in the same category as the steam engine or the internet. They talk about how transformative AI will impact factory farming and what it will mean for animal advocacy.

Only two futures are plausible:

1. AI progress slows down — either because it hits a natural wall, or because civilization deliberately makes the (correct) choice to stop building it until we know how to make it safe.
2. Superintelligent AI makes the future radically weird: Dyson spheres, molecular nanotechnology, digital minds, von Neumann probes, an...


[Linkpost] “Planning 80000 Hours Before the Plausible End of the World” by Carolanne Jiang
Last Sunday at 1:00 PM

This is a link post.

Being a freshman at university, I seem to have been bestowed the great privilege of infinite possibilities. There is this strange feeling of trying to plan a career in a world that might not exist in five years. Not in a doomer sense but that the world of 2031 might be so radically different from today that most attempts of planning are incoherent. I contemplate machine god super intelligence arriving before I graduate, intelligence explosions compressing 10,000 years of progress into one, the possibility of being among the last humans to die before death itself is...


“Launching Euzoia: Beeminder for Effective Charities” by Christoph Hartmann 🔸, Cameron.K
03/28/2026

TLDR: We built Beeminder for effective charities. Access it here.

Commitment Contracts don't work with effective charities

Commitment Contracts are simple: You commit to pay money if you don't do something. Then, when you don't do the thing, you pay the company offering these contracts. This hurts, and that's why it works.

Commitment Contracts are surprisingly effective for behaviour change: A large-scale study from stickK showed that contracts with money staked are 60 percentage points more likely to succeed.

That's why there are plenty of organisations offering commitment contracts already: Beeminder, stickK...


“TAI-driven clean meat won’t solve the problem but changes movement strategy anyway” by Bella
03/27/2026

This is a contribution to AGI & Animals Debate Week (March 23–29, 2026). Co-written with Claude and ChatGPT.

Summary: It seems reasonable to think that there's a >20% chance[1] that TAI will bring us a ‘technological roadmap’[2] to clean meat within the next 15 years. A simple model: such a roadmap won’t solve the problem, but it will make ending factory farming much easier, i.e. the cost-effectiveness of factory farming interventions will increase – perhaps by a factor of 2-10x. The movement should therefore be investing instead of deploying, focusing on capacity-building, research, and literal investment of money. It may also be co...


“Sen. Sanders (I-VT) and Rep. Ocasio-Cortez (D-NY) propose AI Data Center Moratorium Act” by Matrice Jacobine🔸🏳️‍⚧️
03/26/2026

The text of the bill can be found here. It begins by citing the warnings of AI company CEOs and deep learning pioneers Geoffrey Hinton and Yoshua Bengio, as well as the 2023 FLI open letter calling for a 6-month pause. The bill would prohibit the construction or upgrading of AI datacenters until Congress passes an AI safety law aimed at preventing AI companies "from releasing harmful products into the world that threaten the health and well-being of working families, our privacy and civil rights, and the future of humanity". It would also impose export controls for advanced chips to...


“Transitioning the Community Building Grants Program” by Naomi N
03/26/2026

The Community Building Grants (CBG) program, run by the Groups team at CEA since 2018, is undergoing substantial changes.

Through the CBG program, we fund a set of professional city and national EA groups[1] and support their leaders through retreats, regular check-ins and calls, a Slack space, and resources for running groups.

CBG groups have achieved exciting things, such as helping EAs navigate (policy) hubs, supporting hundreds of community members in taking next steps, and incubating new organisations.

In this post, we want to share what's changing and the reasoning behind it. In 2026, two...


“Animal Welfare is Just Part of AI Alignment Now, and Both Groups of Advocates Should Celebrate This.” by Aidan Kankyoku
03/26/2026

This post was inspired by debate week, but I also published it to my Substack. There, most of my readers are not as familiar with EA discourse, so I added a long introduction which I haven’t copied here (but which you might find entertaining). There's also a voiceover of the full version available on Substack or in any podcast feed if you search “Sandcastles”.

1. The heart of the matter

There's a debate taking place on the EA forum this week. The motion is:

If AGI goes well for humans, it’ll probably (>70% likeliho...


“I used to think aligned ASI would be good for all sentient beings; now I don’t know what to think” by MichaelDickens
03/26/2026

Cross-posted from my website.

Epistemic status: Speculating with no central thesis. This post is less of an argument and more of a meditation.

A decade ago, before there was a visible path to AGI and before AI alignment was a significant research field, I figured the solution to the alignment problem would look something like Coherent Extrapolated Volition. I figured we'd find a way to get the AI to internalize human values. I had problems with this approach (why only human values?), but I still felt reasonably confident that the coherent extrapolation of human values...


“Cultivated meat isn’t necessarily a solved problem under AGI” by Hannah McKay🔸, Rethink Priorities
03/26/2026

Summary

Post-AGI claim: Cultivated meat is a prioritized issue.

Subjective assessment: In many worlds, but not all.

Core points:

- If AGI is widely available, alt protein advocates could direct it themselves and prioritization is not an issue.

- If AGI capability is concentrated in governments and companies, cultivated meat competes with other issues like cancer, climate, and national security.

- An autonomous AGI reasoning from first principles might prioritize it, but animal welfare is underrepresented in alignment frameworks — there is not yet a strong basis for assuming it would.

The remaining science is s...


“Being transparent about capacity” by Kashvi Mulchandani 🔸
03/26/2026

Epistemic status: medium; unsure if this is biased towards what I have experienced, but I think it is useful regardless.

I think that organisations and individuals should try to be more transparent about their capacity. For example, at conferences, when people ask about volunteering / collaboration opportunities, a lot of orgs/individuals say yes, and then nothing ever really happens.

For example, some members from EA Bath attended EAGx Amsterdam last year. One of my members was pretty excited about EA and using his skills to help in climate change / GHD. He met with a lot of people...