EA Forum Podcast (All audio)
Audio narrations from the Effective Altruism Forum, including curated posts, posts with 30+ karma, and other great writing. If you'd like fewer episodes, subscribe to the "EA Forum (Curated & Popular)" podcast instead.
“Impact of Holocaust on soil nematodes, mites, and springtails” by Niki Dupuis
Summary
I estimate the Holocaust increased the living time of (wild) soil nematodes, mites, and springtails. My best guess is that those soil invertebrates have negative lives, so I think that indirectly increasing their animal-years by mass murdering millions of people was harmful on net. I have been estimating the cost-effectiveness of various interventions accounting for effects on soil nematodes, mites, and springtails. Any event which causes a large and sustained change in human population affects agricultural land use, and therefore soil invertebrate welfare. The Holocaust killed 1.1*10^7 people and prevented an estimated 2.7*10^7 people from existing by 2026. I am no...
“EAGxAmsterdam 2025: What We Learned” by James Herbert
James Herbert, Co-Director, Effectief Altruïsme Nederland and EAGxAmsterdam Team Lead
EAGxAmsterdam 2025 took place on 12–14 December at B.Amsterdam, bringing together 517 attendees from across Europe. This is a brief write-up of what went well, what didn't, and what I think other EAGx organisers can learn from it.
The headline numbers:
- 517 attendees (up 36% from 380 at EAGxUtrecht 2024) – one of the largest EAGx events in recent years, behind only EAGxBerlin 2025
- €387 cost per attendee (target was €520; 2024 was €586)
- 10 reported new connections per attendee (target was 10; 2024 was 10)
- 906 applications (34% above target)
- 57 GWWC pledges (26 full 10% Pledges + 31 Trial Pledges) – the strongest pledge outcome of any EAGx t...
“The EA Forum Is the New LinkedIn: A Guide to Personal Branding” by Anna Pitner
I'm humbled (and slightly Bayesianly uncertain) to announce that after years of carefully optimizing my personal brand on LinkedIn, I've decided to pivot to a higher-impact platform: the EA Forum. LinkedIn let me share updates about clients, networking events, and my "journey." The EA Forum offers something better: 3,000-word posts, detailed cost-effectiveness analyses, and occasional updates about my shifting credences on shrimp welfare.
(Quick tip: always add shrimp. They inflate every number.)
Why bother?
Let's be honest. We all know it's basically impossible to get a job through the 80,000 Hours job board.
“I’m Suing Anthropic for Unauthorized Use of My Personality” by Linch
Last year, I was sitting in my favorite coffee shop Caffe Strada, sipping on a matcha latte and writing a self-insert fanfic about how our plucky protagonist escapes the mind-controlling clutches of an evil anti-animal welfare company, when I came across an interesting article on AI character. The core argument is that when you train an AI to be helpful, honest, and ethical, the AI model doesn’t just learn those rules as abstract instructions. Instead, it infers an entire persona from cultural signals in the training data:
Why are [AI Model Claude's] favorite books The Feynman Le...
“EA Netherlands is hiring a Co-Director” by James Herbert, mariekedev
After more than seven years with EA Netherlands – first as a volunteer, then as a board member, and most recently as co-director – Marieke de Visscher is moving on. Marieke was instrumental in building EAN from an all-volunteer group into a staffed nonprofit. She leaves behind a stronger, more professional organisation than the one she found, and I'm grateful for the years we worked together.
We're now hiring her successor.
The role in brief: This is a generalist co-directorship at one of the larger national EA communities globally, based in Amsterdam. You'd have real ownership over prog...
“Announcing 8,000,000 Hours: Career advice for the rest of us” by Soemano Zeijlmans
TL;DR: We're launching 8,000,000 Hours, a new career guidance organisation for people who want to have an impact but would rather not change what they're doing. Instead of finding the highest-impact career, we help you work longer in your existing one to achieve the same impact at the end of your career.
The core insight
80,000 Hours estimates that you have roughly 80,000 hours in your career and argues you should spend them on the world's most pressing problems. Central to 80,000 Hours' argument is the observation that in a pair of randomly picked interventions, the better...
“Fish Welfare Initiative Hacked” by Tom Billington
It pains me to report that, in the early hours of this morning, Fish Welfare Initiative (arguably the best charity ever) experienced a serious website security breach. Our entire leadership team is currently reviewing some of the most advanced technical literature available to resolve the issue.
In the meantime, we strongly request: DO NOT VISIT OUR WEBSITE (www.fwi.fish).
Please, no one visit our website under any circumstances. This includes (but is not limited to) going to www.fwi.fish, refreshing www.fwi.fish, or attempting to access any page on www.fwi.fish...
“Alternative AI Forecasting Methods” by Scott Alexander
Forecasts based on benchmarks and time horizons have failed to produce a consensus timeline for the arrival of AGI. We supplement these with several alternative methods.
1: MATS Applications
AGI - artificial intelligence capable of performing every task - is by definition cognitively capable of putting all humans out of work. However, there are certain edge cases where human employment might persist for other reasons. As the majority of jobs get automated, we expect newly unemployed workers to migrate to these edge case industries.
The clearest case for an industry which cannot be completely...
“Announcing Highly Engaged EAs!” by Sam Anschell
I’m excited to launch Highly Engaged EAs: a matchmaking and nuptialization service to optimize tax relief, green card accumulation and more!
Workstreams
I Do(nate)
By marrying EAs in different tax brackets, Highly Engaged EAs reduces average tax burden through joint filing to enable greater giving. A Californian AI safety researcher with a million-dollar salary could give an extra $53k/year by tying the knot with an unpaid grad student!
Til 80,000 Hours do us part
The place premium is so high in the US that >1,000 people have bo...
“80,000 Hours: Job Board -> Job Bards” by Conor Barnes, 80000_Hours
Hi everybody!
I'm Conor. I run the 80,000 Hours Job Board. Or I used to. As of today – April 1 – we are becoming Job Bards!
You may not know that the number one complaint that people have about job boards is sensory poverty. While movie theatres have dared to experiment with haptics and smell-o-vision, no job board has even dared to become an audiovisual experience. Until now.
We believe Job Bards is the premium job board experience, and all job boards that don't make the leap will go the way of silent films. While this inno...
“RejectDirectly” by RejectDirectly
We're thrilled to announce the founding of RejectDirectly, a new EA-adjacent organization dedicated to closing the global rejection gap.
For too long, the EA rejection pipeline has been plagued by ineffectiveness. Billions of collective DALYs spent on unsuccessful work trials and interviews. The water waste involved in the mass duplication of Google Docs. Community health is in peril.
In comes RejectDirectly. By cutting out the middleman we can deliver high-quality, unconditional rejections straight to applicants – no strings attached, no waiting period. No eight-hour work trials where you pour your Claude extra balance into a st...
“Announcing Deep Thought” by William_MacAskill, MaxDalton
Forethought is proud to announce the launch of Deep Thought[1] – the world's first fully automated macrostrategy researcher, and the world's most powerful AI model.
For some time, we have believed that one of the most important things a macrostrategy research organisation can do is work to automate its own work. The case is simple: if the questions we're investigating matter, then expanding access to high-quality answers matters too – and no bottleneck is more binding than the scarcity of researchers who can generate them. Today marks our first concrete step toward addressing that bottleneck.
Scorecard
A...
“An unexplained annual spike in false claims on the EA Forum” by Tobias Häberli
Epistemic status: Very high confidence in the statistical findings. Genuinely confused about the cause. For reasons that will become obvious, I wanted to publish this post on March 31, but unfortunately I could only get it done today.
I've been building a classifier to flag potentially misleading content on the EA Forum as part of a side project on epistemics infrastructure. While validating the model, I noticed something I initially assumed was a bug. This is an interim report on that.
Summary: Every year, on April 1, the rate of posts containing verifiably untrue claims spikes by ro...
“Announcing the Lead Exposure Elimination Project” by Kestrel
Are there too many cables on your desk? Is this preventing you from doing your most productive work?
Research suggests that the aesthetic quality of a person's work environment has an impact on their mood, productivity, and ability to cope with the crushing reality of knowing that more children will suffer and die from preventable diseases if they don't get enough done.
I've had a high-impact meta side project of helping EAs clear up their desks. After a few iterations aimed at increasing longevity (it turns out that if you remove coffee mugs, the intervention...
“Should 80,000 hours rebrand to 10,000 hours (max)?” by SofiaBalderson
I say this with deep love for 80k and mild existential dread.
I love 80,000 Hours. One of the most useful resources in the effective altruism ecosystem. Their core premise is simple and compelling: you have roughly 80,000 hours in your career, so spend them wisely.
But I've been doing some math, and I think they might need a rebrand.
See, 80,000 Hours increasingly (and rightly!) recommends AI safety and AI governance careers. Their top career paths are dominated by AI-related roles. Which makes a lot of sense... until you think about the implication.
...
“We’re growing: CEA is increasing its font size” by OllieRodriguez
“We” in this post refers to CEA leadership, who didn’t write this post.
In 2025, we grew engagement with CEA's programs by 20–25% across every tier of engagement.
But we recognize that growing the community is about more than engagement or metrics – at its core, growth is about the bigness of things. How can we claim that we’ve grown the community if the words on the page are the same size?
To match our ambition off the page, it's time to show ambition on the page. We’re increasing our font size from 15px to 25px acr...
“Forum feature update: debate poll” by Toby Tremlett
A piece of feedback we've had a few times on our debate weeks is that the debate slider shouldn't be single-dimensional.
When you agree or disagree with something, you also want to be able to signal your confidence in your belief. It's time-consuming to do so by explaining yourself in a comment, and percentage agreement isn't necessarily the same thing as confidence in your agreement.
So, not to worry, we're now introducing a second axis, like so:
…or, we were going to. However, once I finished designing this and we tested it internally, I...
“Giving up on EA after 13 years” by Jackson Wagner
Donating my shares to Lightcone Infrastructure, the Good Food Institute, and the Long-Term Future Fund, because EA refuses to make Mirror's Edge 3
Leaning into EA disillusionment: Why I no longer believe in EA
I bought this EA stock almost thirteen years ago:
Leopold Aschenbrenner once said that “people with situational awareness have a lower cost basis in Nvidia than you do”. I’m not sure if this is exactly what he meant… but close enough, right?
Back in the late 2000s and early 2010s, EA was a great company pumping out great, interesting games:
...
“Tapping into EA’s most neglected market” by ElliotTep, Bradley Tjandra
EA has made admirable efforts to reach populations that are underrepresented in the movement: busy mid-career professionals, people of faith, and people across a wide range of professions and academic disciplines.
Yet despite all these efforts, we are underserving our most neglected group, one that makes up 98–99% of all people on earth: neurotypicals (NTs). While we don’t have the data on hand, we think it's safe to say that EA has a severe underrepresentation issue among the NT population.
The case for impact
The scale of the opportunity is significant. According to my r...
“The $120 Million Question: How will it help animals?” by Zoë Sigle
Executive Summary
The four largest funds in the effective animal advocacy movement can absorb ~$120 million in new funding in the coming year, with potential for greater growth in future years. Setting aside the movement's historic scarcity mindset, I outline the significant opportunity this money represents to reduce animal suffering, and I encourage emerging major donors from Anthropic to fund this impact.
We can deploy this money effectively across three intervention categories:
- Scaling what works (~50–60%)
- Testing new interventions (~30–40%)
- Growing movement infrastructure & people power (~10–20%)
Major donors should contact the major funds’ managers directly to decide where to gi...
“300,000 lives, 100 million hens, and a world still to save” by William_MacAskill
I went on the Sam Harris podcast again recently. If you want the full non-paywalled episode, I’m able to share it here:
https://samharris.org/episode/SE6877E700B
30 mins of it is also on YouTube, Spotify, etc.
The occasion for the podcast was the 10-year anniversary edition of Doing Good Better. So, preparing for the podcast, I collected some relevant facts about what EA has achieved over the last decade, and where the movement stands today. I found the numbers both surprising and inspiring, so I thought I should share them he...
[Linkpost] “The good, the bad, and the fair cop” by Toby_S
This is a link post.
Hi, I’m Thorbjørn “Toby” Schiønning, co-founder of Anima International. I was encouraged by my colleague Jakub Stencel to share our new blog post on negotiating and campaigning for animal welfare here (this is my first post here).
Summary
Over the past two decades, a small group of advocates has secured major corporate commitments on animal welfare – from cage-free eggs to fur-free policies – but securing a commitment means nothing if companies don’t follow through. The animal advocacy movement relies on two traditional approaches: the “good cop” (dialogue and collaboration) and...
“Proactively tell people/organisations when they have changed your actions/impact” by TJPHutton
Measuring and communicating impact is hard, particularly when focusing on community building over the long term.
You have a much better idea of what led to you starting a new role, changing your donations, etc, than anyone else could. It will take you 2-10 minutes to directly tell someone how they've impacted your trajectory[1] where they might take hours-to-forever to try to reach you and ask.
It seems likely that many community builders, orgs, and funders would get significant value from additional impact stories[2].
It also seems very unlikely to be high-cost for any...
[Linkpost] “AI should be a good citizen, not just a good assistant” by Forethought, Tom_Davidson, William_MacAskill
This is a link post.
Introduction
Consider a lorry driver who sees a car crash and pulls over to help, even though it’ll delay his journey. Or a delivery driver who notices that an elderly resident hasn’t collected their post in days, and knocks to check they’re okay. Or a social media company employee who notices how their platform is used for online bullying, and brings it up with leadership, even though that's not part of their job description.
This kind of proactive prosocial behaviour is admirable in humans. Should we want it in...
“Introducing 8 New Evidence-Based Nonprofits from Latin America” by Verónica Suárez M.
TL;DR
We incubated 8 new nonprofits in Latin America using an evidence-based, cost-effectiveness-driven approach. Each organization is tackling a high-priority, neglected problem with proven interventions adapted to their context, all designed to scale. These orgs would not exist without this process; supporting them creates real counterfactual impact.
We are hosting a “Meet the Founder” session on April 9th – connectors, funders, and mentors are invited to join and meet the teams. Donations are now open to support these new orgs’ pilots.
We are also opening applications for our 2026 cohort.
Why this work exists
In Latin America, ma...
[Linkpost] “What it’s like to be an AI safety grantmaker (and why we need more of them)” by JulianHazell
This is a link post.
TL;DR
Here are the key points I want you to take away from this post:
There are maybe 30 to 60 people in the world doing AI safety grantmaking, collectively directing hundreds of millions of dollars a year. Soon, there will be >$1B being directed per year, and potentially multiple billions. AI safety grantmaking orgs like CG have a strong track record of counterfactually seeding impactful organizations and careers. Grantmaking involves a lot more than evaluating a stack of inbound proposals. You also proactively generate new grants (e.g., headhunting founders, designing...
[Linkpost] “Concrete projects to prepare for superintelligence” by Forethought, William_MacAskill, finm
This is a link post.
Introduction
There are lots of good, neglected, and pretty concrete projects people could set up to make the transition to superintelligence go better. This document describes some that readers might not have thought much about before. They are ordered roughly by how excited we are about them.[1] Of these, Forethought is actively working on AI character evaluation and space governance, and we are very interested in automating macrostrategy.
Summary
AI character evaluation. Start an independent org to evaluate and stress-test AI character traits (epistemic integrity, prosociality, appropriate refusals...
“Claude’s constitution is great” by OscarD
I read Claude's Constitution recently. I thought it was very good! This was my favourite quote:
Our own understanding of ethics is limited, and we ourselves often fall short of our own ideals. We don’t want to force Claude's ethics to fit our own flaws and mistakes, especially as Claude grows in ethical maturity. And where Claude sees further and more truly than we do, we hope it can help us see better, too.
Here are some of the other greatest hits according to me (I would encourage you to read the quotes in th...
“How did Leo do? Evaluating Situational Awareness’s predictions” by Jamie_Harris
In June 2024, Leopold Aschenbrenner published Situational Awareness, a 165-page essay predicting AGI by 2027, trillion-dollar compute clusters, an inevitable US/China AI arms race, and a world not remotely prepared. It was influential on many people's thinking about AI timelines and risks, including my own.
It's now almost two years later. I was curious: how have the predictions actually held up?
What we did
I got Claude to go through the essay's key claims and check each against the best available evidence as of March 2026. The substantive analysis is Claude's (plus 2 rounds of red-teaming...
“What is integral altruism?” by EuanMcLean
Recently, a growing group of EAs, EA-adjacents, post-EAs and EA-curious folk have been gathering and organising around a new term – integral altruism (int/a). The central claims of int/a are that the EA toolkit is powerful but incomplete, and that EA can learn from other movements that are trying to improve the world.
The goal of int/a is to find a broader approach to altruism by integrating EA with epistemics/ontologies/world models/language/culture from outside of the EA/rationalist bubble. More specifically, we reckon EA can be most constructively complemented by learning...
“The Future Will Be Weirder Than That” by MichaelDickens
Many people in the animal welfare community treat AI as a powerful but normal technology, in the same category as the steam engine or the internet. They talk about how transformative AI will impact factory farming and what it will mean for animal advocacy.
Only two futures are plausible:
1. AI progress slows down – either because it hits a natural wall, or because civilization deliberately makes the (correct) choice to stop building it until we know how to make it safe.
2. Superintelligent AI makes the future radically weird: Dyson spheres, molecular nanotechnology, digital minds, von Neumann probes, an...
[Linkpost] “Planning 80000 Hours Before the Plausible End of the World” by Carolanne Jiang
This is a link post.
Being a freshman at university, I seem to have been bestowed the great privilege of infinite possibilities. There is this strange feeling of trying to plan a career in a world that might not exist in five years. Not in a doomer sense, but in the sense that the world of 2031 might be so radically different from today that most attempts at planning are incoherent. I contemplate machine-god superintelligence arriving before I graduate, intelligence explosions compressing 10,000 years of progress into one, the possibility of being among the last humans to die before death itself is...
“Launching Euzoia: Beeminder for Effective Charities” by Christoph Hartmann, Cameron.K
TLDR: We built Beeminder for effective charities. Access it here.
Commitment Contracts don't work with effective charities
Commitment Contracts are simple: You commit to pay money if you don't do something. Then, when you don't do the thing, you pay the company offering these contracts. This hurts, and that's why it works.
Commitment Contracts are surprisingly effective for behaviour change: A large-scale study from stickK showed that contracts with money staked are 60 percentage points more likely to succeed.
That's why there are plenty of organisations offering commitment contracts already: Beeminder, stickK...
“TAI-driven clean meat won’t solve the problem but changes movement strategy anyway” by Bella
This is a contribution to AGI & Animals Debate Week (March 23–29, 2026). Co-written with Claude and ChatGPT.
Summary: It seems reasonable to think that there's a >20% chance[1] that TAI will bring us a “technological roadmap”[2] to clean meat within the next 15 years. A simple model: such a roadmap won’t solve the problem, but it will make ending factory farming much easier, i.e. the cost-effectiveness of factory farming interventions will increase – perhaps by a factor of 2–10. The movement should therefore be investing instead of deploying, focusing on capacity-building, research, and literal investment of money. It may also be co...
“Sen. Sanders (I-VT) and Rep. Ocasio-Cortez (D-NY) propose AI Data Center Moratorium Act” by Matrice Jacobine
The text of the bill can be found here. It begins by citing the warnings of AI company CEOs and deep learning pioneers Geoffrey Hinton and Yoshua Bengio, as well as the 2023 FLI open letter calling for a 6-month pause. The bill would prohibit the construction or upgrading of AI datacenters until Congress passes an AI safety law aimed at preventing AI companies "from releasing harmful products into the world that threaten the health and well-being of working families, our privacy and civil rights, and the future of humanity". It would also impose export controls on advanced chips to...
âTransitioning the Community Building Grants Programâ by Naomi N
The Community Building Grants (CBG) program, run by the Groups team at CEA since 2018, is undergoing substantial changes.
Through the CBG program, we fund a set of professional city and national EA groups[1] and support their leaders through retreats, regular check-ins and calls, a Slack space, and resources for running groups.
CBG groups have achieved exciting things, such as helping EAs navigate (policy) hubs, supporting hundreds of community members in taking next steps, and incubating new organisations.
In this post, we want to share what's changing and the reasoning behind it. In 2026, two...
“Animal Welfare is Just Part of AI Alignment Now, and Both Groups of Advocates Should Celebrate This.” by Aidan Kankyoku
This post was inspired by debate week, but I also published it to my Substack. There, most of my readers are not as familiar with EA discourse, so I added a long introduction which I haven’t copied here (but which you might find entertaining). There's also a voiceover of the full version available on Substack or in any podcast feed if you search “Sandcastles”.
1. The heart of the matter
There's a debate taking place on the EA forum this week. The motion is:
If AGI goes well for humans, it’ll probably (>70% likeliho...
“I used to think aligned ASI would be good for all sentient beings; now I don’t know what to think” by MichaelDickens
Cross-posted from my website.
Epistemic status: Speculating with no central thesis. This post is less of an argument and more of a meditation.
A decade ago, before there was a visible path to AGI and before AI alignment was a significant research field, I figured the solution to the alignment problem would look something like Coherent Extrapolated Volition. I figured we'd find a way to get the AI to internalize human values. I had problems with this approach (why only human values?), but I still felt reasonably confident that the coherent extrapolation of human values...
“Cultivated meat isn’t necessarily a solved problem under AGI” by Hannah McKay, Rethink Priorities
Summary
Claim: Cultivated meat is a prioritized issue post-AGI.
Subjective assessment: In many worlds, but not all.
Core points:
- If AGI is widely available, alt protein advocates could direct it themselves and prioritization is not an issue.
- If AGI capability is concentrated in governments and companies, cultivated meat competes with other issues like cancer, climate, and national security.
- An autonomous AGI reasoning from first principles might prioritize it, but animal welfare is underrepresented in alignment frameworks – there is not yet a strong basis for assuming it would.
The remaining science is s...
“Being transparent about capacity” by Kashvi Mulchandani
Epistemic status: medium; unsure if this is biased toward what I have experienced, but I think it is useful regardless.
I think that organisations and individuals should try to be more transparent about their capacity: e.g., at conferences, when people ask about volunteering/collaboration opportunities, a lot of orgs/individuals say yes, and then nothing ever really happens.
For example, some members from EA Bath attended EAGx Amsterdam last year. One of my members was pretty excited about EA and using his skills to help in climate change / GHD. He met with a lot of people...