Future of Life Institute Podcast

40 Episodes

By: Future of Life Institute

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

Defense in Depth: Layered Strategies Against AI Risk (with Li-Lian Ang)
Last Thursday at 7:48 PM

Li-Lian Ang is a team member at BlueDot Impact. She joins the podcast to discuss how society can build a workforce to protect humanity from AI risks. The conversation covers engineered pandemics, AI-enabled cyber attacks, job loss and disempowerment, and power concentration in firms or AI systems. We also examine BlueDot's defense-in-depth framework and how individuals can navigate rapid, uncertain AI progress.

LINKS:

Li-Lian Ang personal site

BlueDot Impact organization site

CHAPTERS:

(00:00) Episode Preview

(00:48) BlueDot beginnings

(03:04) Evolving AI risk concerns

(06:20) AI agents in cyber...


What AI Companies Get Wrong About Curing Cancer (with Emilia Javorsky)
03/20/2026

Emilia Javorsky is a physician-scientist and Director of the Futures Program at the Future of Life Institute. 

She joins the podcast to discuss her newly published essay on AI and cancer. She challenges tech claims that superintelligence will cure cancer, explaining why biology’s complexity, poor data, and misaligned incentives are bigger bottlenecks than raw intelligence. The conversation covers realistic roles for AI in drug discovery, clinical trials, and cutting unnecessary medical bureaucracy.

You can read the full essay at: curecancer.ai

CHAPTERS:

(00:00) Episode Preview

(01:10) Introduction and essay motivation

...


AI vs Cancer - How AI Can, and Can't, Cure Cancer (by Emilia Javorsky)
03/16/2026

Tech executives have promised that AI will cure cancer. The reality is more complicated — and more hopeful. This essay examines where AI genuinely accelerates cancer research, where the promises fall short, and what researchers, policymakers, and funders need to do next.

You can read the full essay at: curecancer.ai

CHAPTERS:

(00:00) Essay Preview

(00:54) How AI Can, and Can't, Cure Cancer

(17:05) Reckoning with Past Failures

(35:23) Misguiding Myths and Errors

(59:15) AI Solutions Derive from First Principles or Data

(01:31:31) Systemic Bottlenecks & Misalignments

(02:08:46) Conclusion

...


How AI Hacks Your Brain's Attachment System (with Zak Stein)
03/05/2026

Zak Stein is a researcher focused on child development, education, and existential risk. He joins the podcast to discuss the psychological harms of anthropomorphic AI. We examine attention and attachment hacking, AI companions for kids, loneliness, and cognitive atrophy. Our conversation also covers how we can preserve human relationships, redesign education, and build cognitive security tools that keep AI from undermining our humanity.

LINKS:

AI Psychological Harms Research Coalition

Zak Stein official website

CHAPTERS:

(00:00) Episode Preview

(00:56) Education to existential risk

(03:03) Lessons...


The Case for a Global Ban on Superintelligence (with Andrea Miotti)
02/20/2026

Andrea Miotti is the founder and CEO of Control AI, a nonprofit. He joins the podcast to discuss efforts to prevent extreme risks from superintelligent AI. The conversation covers industry lobbying, comparisons with tobacco regulation, and why he advocates a global ban on AI systems that can outsmart and overpower humans. We also discuss informing lawmakers and the public, and concrete actions listeners can take.

LINKS:

Control AI

Control AI global action page

ControlAI's lawmaker contact tools

Open roles at ControlAI

ControlAI's theory of change...


Can AI Do Our Alignment Homework? (with Ryan Kidd)
02/06/2026

Ryan Kidd is a co-executive director at MATS. This episode is a cross-post from "The Cognitive Revolution", hosted by Nathan Labenz. In this conversation, they discuss AGI timelines, model deception risks, and whether safety work can avoid boosting capabilities. Ryan outlines MATS research tracks, key researcher archetypes, hiring needs, and advice for applicants considering a career in AI safety. Learn more about Ryan's work and MATS at: https://matsprogram.org

CHAPTERS:

(00:00) Episode Preview

(00:20) Introductions and AGI timelines

(10:13) Deception, values, and control

(23:20) Dual use and alignment

(32:22...


How to Rebuild the Social Contract After AGI (with Deric Cheng)
01/27/2026

Deric Cheng is Director of Research at the Windfall Trust. He joins the podcast to discuss how AI could reshape the social contract and global economy. The conversation examines labor displacement, superstar firms, and extreme wealth concentration, and asks how policy can keep workers empowered. We discuss resilient job types, new tax and welfare systems, global coordination, and a long-term vision where economic security is decoupled from work.

LINKS:

Deric Cheng personal website

AGI Social Contract project site

Guiding society through the AI economic transition

CHAPTERS:

(00:00) Episode Preview

(01:01) Introducing Deric and AGI

...


How AI Can Help Humanity Reason Better (with Oly Sourbut)
01/20/2026

Oly Sourbut is a researcher at the Future of Life Foundation. He joins the podcast to discuss AI for human reasoning. We examine tools that use AI to strengthen human judgment, from collective fact-checking and scenario planning to standards for honest AI reasoning and better coordination. We also discuss how we can keep humans central as AI scales, and what it would take to build trustworthy, society-wide sensemaking.

LINKS:

FLF organization site

Oly Sourbut personal site

CHAPTERS:

(00:00) Episode Preview

(01:03) FLF and human reasoning

(08:21) Agents and epistemic virtues

(22:16) Human...


How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann)
01/07/2026

Nora Ammann is a technical specialist at the Advanced Research and Invention Agency in the UK. She joins the podcast to discuss how to steer a slow AI takeoff toward resilient and cooperative futures. We examine risks of rogue AI and runaway competition, and how scalable oversight, formal guarantees and secure code could support AI-enabled R&D and critical infrastructure. Nora also explains AI-supported bargaining and public goods for stability.

LINKS:

Nora Ammann site

ARIA safeguarded AI program page

AI Resilience official site

Gradual Disempowerment website

CHAPTERS:

(00:00) Episode Preview

(01:00) Slow takeoff expectations

(08:13...


How Humans Could Lose Power Without an AI Takeover (with David Duvenaud)
12/23/2025

David Duvenaud is an associate professor of computer science and statistics at the University of Toronto. He joins the podcast to discuss gradual disempowerment in a post-AGI world. We ask how humans could lose economic and political leverage without a sudden takeover, including how property rights could erode. Duvenaud describes how growth incentives shape culture, why aligning AI to humanity may become unpopular, and what better forecasting and governance might require.

LINKS:

David Duvenaud academic homepage

Gradual Disempowerment

The Post-AGI Workshop

Post-AGI Studies Discord

CHAPTERS:

(00:00) Episode Preview

(01:05) Introducing gradual disempowerment

(06:06) Obsolete labor...


Why the AI Race Undermines Safety (with Steven Adler)
12/12/2025

Steven Adler is a former safety researcher at OpenAI. He joins the podcast to discuss how to govern increasingly capable AI systems. The conversation covers competitive races between AI companies, limits of current testing and alignment, mental health harms from chatbots, economic shifts from AI labor, and what international rules and audits might be needed before training superintelligent models.


LINKS:

Steven Adler's Substack: https://stevenadler.substack.com


CHAPTERS:
(00:00) Episode Preview
(01:00) Race Dynamics And Safety
(18:03) Chatbots And Mental Health
(30:42) Models Outsmart Safety Tests
(41:01) AI Swarms And Work
(54:21) Human B...


Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)
11/27/2025

Tyler Johnston is Executive Director of the Midas Project. He joins the podcast to discuss AI transparency and accountability. We explore applying animal rights watchdog tactics to AI companies, the OpenAI Files investigation, and OpenAI's subpoenas against nonprofit critics. Tyler discusses why transparency is crucial when technical safety solutions remain elusive and how public pressure can effectively challenge much larger companies.

LINKS:

The Midas Project Website

Tyler Johnston's LinkedIn Profile


CHAPTERS:

(00:00) Episode Preview
(01:06) Introducing the Midas Project
(05:01) Shining a Light on AI
(08:36) Industry Lockdown and Transparency
(13:45...


We're Not Ready for AGI (with Will MacAskill)
11/14/2025

William MacAskill is a senior research fellow at Forethought. He joins the podcast to discuss his Better Futures essay series. We explore moral error risks, AI character design, space governance, and persistent path dependence. The conversation also covers risk-averse AI systems, moral trade between value systems, and improving model specifications for ethical reasoning.

LINKS:
- Better Futures Research Series: https://www.forethought.org/research/better-futures
- William MacAskill Forethought Profile: https://www.forethought.org/people/william-macaskill

CHAPTERS:

(00:00) Episode Preview

(01:03) Improving The Future's Quality

(09:58) Moral Errors and AI...


What Happens When Insiders Sound the Alarm on AI? (with Karl Koch)
11/07/2025

Karl Koch is founder of the AI Whistleblower Initiative. He joins the podcast to discuss transparency and protections for AI insiders who spot safety risks. We explore current company policies, legal gaps, how to evaluate disclosure decisions, and whistleblowing as a backstop when oversight fails. The conversation covers practical guidance for potential whistleblowers and challenges of maintaining transparency as AI development accelerates.


LINKS:

About the AI Whistleblower Initiative

Karl Koch


PRODUCED BY:

https://aipodcast.ing

CHAPTERS:

(00:00) Episode Preview

(00:55) Starting the Whistleblower Initiative...


Can Machines Be Truly Creative? (with Maya Ackerman)
10/24/2025

Maya Ackerman is an AI researcher, co-founder and CEO of WaveAI, and author of the book "Creative Machines: AI, Art & Us." She joins the podcast to discuss creativity in humans and machines. We explore defining creativity as novel and valuable output, why evolution qualifies as creative, and how AI alignment can reduce machine creativity. The conversation covers humble creative machines versus all-knowing oracles, hallucination's role in thought, and human-AI collaboration strategies that elevate rather than replace human capabilities.

LINKS:
- Maya Ackerman: https://en.wikipedia.org/wiki/Maya_Ackerman
- Creative Machines: AI, Art & Us...


From Research Labs to Product Companies: AI's Transformation (with Parmy Olson)
10/14/2025

Parmy Olson is a technology columnist at Bloomberg and the author of Supremacy, which won the 2024 Financial Times Business Book of the Year. She joins the podcast to discuss the transformation of AI companies from research labs to product businesses. We explore how funding pressures have changed company missions, the role of personalities versus innovation, the challenges faced by safety teams, and power consolidation in the industry.

LINKS:
- Parmy Olson on X (Twitter): https://x.com/parmy
- Parmy Olson’s Bloomberg columns: https://www.bloomberg.com/opinion/authors/AVYbUyZve-8/parmy-olson
- Supremacy (bo...


Can Defense in Depth Work for AI? (with Adam Gleave)
10/03/2025

Adam Gleave is co-founder and CEO of FAR.AI. In this cross-post from The Cognitive Revolution Podcast, he joins to discuss post-AGI scenarios and AI safety challenges. The conversation explores his three-tier framework for AI capabilities, gradual disempowerment concerns, defense-in-depth security, and research on training less deceptive models. Topics include timelines, interpretability limitations, scalable oversight techniques, and FAR.AI’s vertically integrated approach spanning technical research, policy advocacy, and field-building.


LINKS:
Adam Gleave - https://www.gleave.me
FAR.AI - https://www.far.ai
The Cognitive Revolution Podcast - https://www.cognitiverevolution.ai...


How We Keep Humans in Control of AI (with Beatrice Erkers)
09/26/2025

Beatrice Erkers works at the Foresight Institute, running their Existential Hope program. She joins the podcast to discuss the AI pathways project, which explores two alternative scenarios to the default race toward AGI. We examine tool AI, which prioritizes human oversight and democratic control, and d/acc, which emphasizes decentralized, defensive development. The conversation covers trade-offs between safety and speed, how these pathways could be combined, and what different stakeholders can do to steer toward more positive AI futures.

LINKS:
AI Pathways - https://ai-pathways.existentialhope.com
Beatrice Erkers - https://www.existentialhope.com/team/beatrice-erkers...


Why Building Superintelligence Means Human Extinction (with Nate Soares)
09/18/2025

Nate Soares is president of the Machine Intelligence Research Institute. He joins the podcast to discuss his new book "If Anyone Builds It, Everyone Dies," co-authored with Eliezer Yudkowsky. We explore why current AI systems are "grown not crafted," making them unpredictable and difficult to control. The conversation covers threshold effects in intelligence, why computer security analogies suggest AI alignment is currently nearly impossible, and why we don't get retries with superintelligence. Soares argues for an international ban on AI research toward superintelligence.


LINKS:
If Anyone Builds It, Everyone Dies - https://ifanyonebuildsit.com
...


Breaking the Intelligence Curse (with Luke Drago)
09/10/2025

Luke Drago is the co-founder of Workshop Labs and co-author of the essay series "The Intelligence Curse", which explores what happens if AI becomes the dominant factor of production, thereby reducing incentives to invest in people. We explore pyramid replacement in firms, economic warning signs to monitor, automation barriers like tacit knowledge, privacy risks in AI training, and tensions between centralized AI safety and democratization. Luke discusses Workshop Labs' privacy-preserving approach and advises taking career risks during this technological transition.

"The Intelligence Curse" essay series by Luke Drago & Rudolf Laine: https://intelligence-curse.ai/
Luke's S...


What Markets Tell Us About AI Timelines (with Basil Halperin)
09/01/2025

Basil Halperin is an assistant professor of economics at the University of Virginia. He joins the podcast to discuss what economic indicators reveal about AI timelines. We explore why interest rates might rise if markets expect transformative AI, the gap between strong AI benchmarks and limited economic effects, and bottlenecks to AI-driven growth. We also cover market efficiency, automated AI research, and how financial markets may signal progress.

Basil's essay on "Transformative AI, existential risk, and real interest rates": https://basilhalperin.com/papers/agi_emh.pdf

Read more about Basil's work here: https://basilhalperin.com/

CHAPTERS:

...


AGI Security: How We Defend the Future (with Esben Kran)
08/22/2025

Esben Kran joins the podcast to discuss why securing AGI requires more than traditional cybersecurity, exploring new attack surfaces, adaptive malware, and the societal shifts needed for resilient defenses. We cover protocols for safe agent communication, oversight without surveillance, and distributed safety models across companies and governments.   

Learn more about Esben's work at: https://blog.kran.ai  

00:00 – Intro and preview 

01:13 – AGI security vs traditional cybersecurity 

02:36 – Rebuilding societal infrastructure for embedded security 

03:33 – Sentware: adaptive, self-improving malware 

04:59 – New attack surfaces 

05:38 – Social media as misaligned AI 

06:46 – Personal vs societal de...


Reasoning, Robots, and How to Prepare for AGI (with Benjamin Todd)
08/15/2025

Benjamin Todd joins the podcast to discuss how reasoning models changed AI, why agents may be next, where progress could stall, and what a self-improvement feedback loop in AI might mean for the economy and society. We explore concrete timelines (through 2030), compute and power bottlenecks, and the odds of an industrial explosion. We end by discussing how people can personally prepare for AGI: networks, skills, saving/investing, resilience, citizenship, and information hygiene.  

Follow Benjamin's work at: https://benjamintodd.substack.com  

Timestamps: 

00:00 What are reasoning models?  

04:04 Reinforcement learning supercharges reasoning 

05:06 Reaso...


From Peak Horse to Peak Human: How AI Could Replace Us (with Calum Chace)
07/31/2025

On this episode, Calum Chace joins me to discuss the transformative impact of AI on employment, comparing the current wave of cognitive automation to historical technological revolutions. We talk about "universal generous income", fully-automated luxury capitalism, and redefining education with AI tutors. We end by examining verification of artificial agents and the ethics of attributing consciousness to machines.  

Learn more about Calum's work here: https://calumchace.com  

Timestamps:  

00:00:00  Preview and intro 

00:03:02  Past tech revolutions and AI-driven unemployment 

00:05:43  Cognitive automation: from secretaries to every job 

00:08:02  The “peak horse” analogy and av...


How AI Could Help Overthrow Governments (with Tom Davidson)
07/17/2025

On this episode, Tom Davidson joins me to discuss the emerging threat of AI-enabled coups, where advanced artificial intelligence could empower covert actors to seize power. We explore scenarios including secret loyalties within companies, rapid military automation, and how AI-driven democratic backsliding could differ significantly from historical precedents. Tom also outlines key mitigation strategies, risk indicators, and opportunities for individuals to help prevent these threats.  

Learn more about Tom's work here: https://www.forethought.org  

Timestamps:  

00:00:00  Preview: why preventing AI-enabled coups matters 

00:01:24  What do we mean by an “AI-enabled coup”? 

00:01:59...


What Happens After Superintelligence? (with Anders Sandberg)
07/11/2025

Anders Sandberg joins me to discuss superintelligence and its profound implications for human psychology, markets, and governance. We talk about physical bottlenecks, tensions between the technosphere and the biosphere, and the long-term cultural and physical forces shaping civilization. We conclude with Sandberg explaining the difficulties of designing reliable AI systems amidst rapid change and coordination risks.  

Learn more about Anders's work here: https://mimircenter.org/anders-sandberg  

Timestamps:  

00:00:00 Preview and intro 

00:04:20 2030 superintelligence scenario 

00:11:55 Status, post-scarcity, and reshaping human psychology 

00:16:00 Physical limits: energy, datacenter, and waste-heat bottlenecks 

00:23:48 Technos...


Why the AI Race Ends in Disaster (with Daniel Kokotajlo)
07/03/2025

On this episode, Daniel Kokotajlo joins me to discuss why artificial intelligence may surpass the transformative power of the Industrial Revolution, and just how much AI could accelerate AI research. We explore the implications of automated coding, the critical need for transparency in AI development, the prospect of AI-to-AI communication, and whether AI is an inherently risky technology. We end by discussing iterative forecasting and its role in anticipating AI's future trajectory.  

You can learn more about Daniel's work at: https://ai-2027.com and https://ai-futures.org  

Timestamps:  

00:00:00 Preview and intro 

00:00:50 Why...


Preparing for an AI Economy (with Daniel Susskind)
06/27/2025

On this episode, Daniel Susskind joins me to discuss disagreements between AI researchers and economists, how we can best measure AI’s economic impact, how human values can influence economic outcomes, what meaningful work will remain for humans in the future, the role of commercial incentives in AI development, and the future of education.  

You can learn more about Daniel's work here: https://www.danielsusskind.com  

Timestamps:  

00:00:00 Preview and intro  

00:03:19 AI researchers versus economists  

00:10:39 Measuring AI's economic effects  

00:16:19 Can AI be steered in positive directions?  

00:22:10 Human val...


Will AI Companies Respect Creators' Rights? (with Ed Newton-Rex)
06/20/2025

Ed Newton-Rex joins me to discuss the issue of AI models trained on copyrighted data, and how we might develop fairer approaches that respect human creators. We talk about AI-generated music, Ed’s decision to resign from Stability AI, the industry’s attitude towards rights, authenticity in AI-generated art, and what the future holds for creators, society, and living standards in an increasingly AI-driven world.  

Learn more about Ed's work here: https://ed.newtonrex.com  

Timestamps:  

00:00:00 Preview and intro  

00:04:18 AI-generated music  

00:12:15 Resigning from Stability AI  

00:16:20 AI industry attitudes...


AI Timelines and Human Psychology (with Sarah Hastings-Woodhouse)
06/13/2025

On this episode, Sarah Hastings-Woodhouse joins me to discuss what benchmarks actually measure, AI’s development trajectory in comparison to other technologies, tasks that AI systems can and cannot handle, capability profiles of present and future AIs, the notion of alignment by default, and the leading AI companies’ vague AGI plans. We also discuss the human psychology of AI, including the feelings of living in the "fast world" versus the "slow world", and navigating long-term projects given short timelines.  

Timestamps:  

00:00:00 Preview and intro

00:00:46 What do benchmarks measure?  

00:08:08 Will AI develop like other t...


Could Powerful AI Break Our Fragile World? (with Michael Nielsen)
06/06/2025

On this episode, Michael Nielsen joins me to discuss how humanity's growing understanding of nature poses dual-use challenges, whether existing institutions and governance frameworks can adapt to handle advanced AI safely, and how we might recognize signs of dangerous AI. We explore the distinction between AI as agents and tools, how power is latent in the world, implications of widespread powerful hardware, and finally touch upon the philosophical perspectives of deep atheism and optimistic cosmism.

Timestamps:  

00:00:00 Preview and intro 

00:01:05 Understanding is dual-use  

00:05:17 Can we handle AI like other tech?  

00:12:08 Can...


Facing Superintelligence (with Ben Goertzel)
05/23/2025

On this episode, Ben Goertzel joins me to discuss what distinguishes the current AI boom from previous ones, important but overlooked AI research, simplicity versus complexity in the first AGI, the feasibility of alignment, benchmarks and economic impact, potential bottlenecks to superintelligence, and what humanity should do moving forward.   

Timestamps:  

00:00:00 Preview and intro  

00:01:59 Thinking about AGI in the 1970s  

00:07:28 What's different about this AI boom?  

00:16:10 Former taboos about AGI 

00:19:53 AI research worth revisiting  

00:35:53 Will the first AGI be simple?  

00:48:49 Is alignment achievable?  

01:02:40...


Will Future AIs Be Conscious? (with Jeff Sebo)
05/16/2025

On this episode, Jeff Sebo joins me to discuss artificial consciousness, substrate-independence, possible tensions between AI risk and AI consciousness, the relationship between consciousness and cognitive complexity, and how intuitive versus intellectual approaches guide our understanding of these topics. We also discuss AI companions, AI rights, and how we might measure consciousness effectively.  

You can follow Jeff’s work here: https://jeffsebo.net/  

Timestamps:  

00:00:00 Preview and intro 

00:02:56 Imagining artificial consciousness  

00:07:51 Substrate-independence? 

00:11:26 Are we making progress?  

00:18:03 Intuitions about explanations  

00:24:43 AI risk and AI consciousness  

00:40...


Understanding AI Agents: Time Horizons, Sycophancy, and Future Risks (with Zvi Mowshowitz)
05/09/2025

On this episode, Zvi Mowshowitz joins me to discuss sycophantic AIs, bottlenecks limiting autonomous AI agents, and the true utility of benchmarks in measuring progress. We then turn to time horizons of AI agents, the impact of automating scientific research, and constraints on scaling inference compute. Zvi also addresses humanity’s uncertain AI-driven future, the unique features setting AI apart from other technologies, and AI’s growing influence in financial trading.  

You can follow Zvi's excellent blog here: https://thezvi.substack.com  

Timestamps:  

00:00:00 Preview and introduction  

00:02:01 Sycophantic AIs  

00:07:28 Bottlenecks for AI ag...


Inside China's AI Strategy: Innovation, Diffusion, and US Relations (with Jeffrey Ding)
04/25/2025

On this episode, Jeffrey Ding joins me to discuss diffusion of AI versus AI innovation, how US-China dynamics shape AI’s global trajectory, and whether there is an AI arms race between the two powers. We explore Chinese attitudes toward AI safety, the level of concentration of AI development, and lessons from historical technology diffusion. Jeffrey also shares insights from translating Chinese AI writings and the potential of automating translations to bridge knowledge gaps.  

You can learn more about Jeffrey’s work at: https://jeffreyjding.github.io  

Timestamps:  

00:00:00 Preview and introduction  

00:01:36 A US-Chi...


How Will We Cooperate with AIs? (with Allison Duettmann)
04/11/2025

On this episode, Allison Duettmann joins me to discuss centralized versus decentralized AI, how international governance could shape AI’s trajectory, how we might cooperate with future AIs, and the role of AI in improving human decision-making. We also explore which lessons from history apply to AI, the future of space law and property rights, whether technology is invented or discovered, and how AI will impact children. 

You can learn more about Allison's work at: https://foresight.org  

Timestamps:  

00:00:00 Preview 

00:01:07 Centralized AI versus decentralized AI  

00:13:02 Risks from decentralized AI  

00...


Brain-Like AGI and Why It's Dangerous (with Steven Byrnes)
04/04/2025

On this episode, Steven Byrnes joins me to discuss brain-like AGI safety. We discuss learning versus steering systems in the brain, the distinction between controlled AGI and social-instinct AGI, why brain-inspired approaches might be our most plausible route to AGI, and honesty in AI models. We also talk about how people can contribute to brain-like AGI safety and compare various AI safety strategies.  

You can learn more about Steven's work at: https://sjbyrnes.com/agi.html  

Timestamps:  

00:00 Preview  

00:54 Brain-like AGI Safety 

13:16 Controlled AGI versus Social-instinct AGI  

19:12 Learning from t...


How Close Are We to AGI? Inside Epoch's GATE Model (with Ege Erdil)
03/28/2025

On this episode, Ege Erdil from Epoch AI joins me to discuss their new GATE model of AI development, what evolution and brain efficiency tell us about AGI requirements, how AI might impact wages and labor markets, and what it takes to train models with long-term planning. Toward the end, we dig into Moravec’s Paradox, which jobs are most at risk of automation, and what could change Ege's current AI timelines.  

You can learn more about Ege's work at https://epoch.ai  

Timestamps:

00:00:00 – Preview and introduction

00:02:59 – Compute scaling and automation - GATE model 

...