Sitemap - 2025 - Doom Debates
I Debated Beff Jezos and His "e/acc" Army
Doom Debates LIVE Call-In Show! Listener Q&A about AGI, evolution vs. engineering, shoggoths & more
DOOMER vs. BUILDER — AI Doom Debate with Devin Elliot, Software Engineer & Retired Pro Snowboarder
Doom Debates Live Q&A this Thursday @ 12-2pm PT!
PhD AI Researcher Says P(Doom) is TINY — Debate with Michael Timothy Bennett
Nobel Prizewinner SWAYED by My AI Doom Argument — Prof. Michael Levitt, Stanford University
Facing AI Doom, Lessons from Daniel Ellsberg (Pentagon Papers) — Michael Ellsberg
Max Tegmark vs. Dean Ball: Should We BAN Superintelligence?
The AI Corrigibility Debate: MIRI Researchers Max Harms vs. Jeremy Gillen
These Effective Altruists Betrayed Me — Holly Elmore, PauseAI US Executive Director
DEBATE: Is AGI Really Decades Away? | Ex-MIRI Researcher Tsvi Benson-Tilsen vs. Liron Shapira
Liron Debunks The Most Common “AI Won't Kill Us” Arguments
Why AI Alignment Is 0% Solved — Ex-MIRI Researcher Tsvi Benson-Tilsen
Eben Pagan (aka David DeAngelo) Interviews Liron — Why 50% Chance AI Kills Everyone by 2050
Former MIRI Researcher Solving AI Alignment by Engineering Smarter Human Babies
Robert Wright Interrogates the Eliezer Yudkowsky AI Doom Position
Climate Change Is Stupidly EASY To Stop — Andrew Song, Cofounder of Make Sunsets
David Deutschian vs. Eliezer Yudkowskian Debate: Will AGI Cooperate With Humanity? — With Brett Hall
Debating People On The Street About AI Doom
Wes & Dylan Join Doom Debates — Violent Robots, Eliezer Yudkowsky, & Who Has the HIGHEST P(Doom)?!
The Merch Store Is Open For Business!
Are We A Circular Firing Squad? — with Holly Elmore, Executive Director of PauseAI US
Ex-OpenAI CEO Says AI Labs Are Making a HUGE Mistake — Emmett Shear
Donate to Doom Debates — YOU can meaningfully contribute to lowering AI x-risk!
Liv Boeree Has a Strategy to Stop the AI Death Race
Max Tegmark Says It's Time To Protest Against AI Companies
Unofficial "If Anyone Builds It, Everyone Dies" launch party is going LIVE soon!
Eliezer Yudkowsky — If Anyone Builds It, Everyone Dies
ANNOUNCEMENT: Eliezer Yudkowsky interview premieres tomorrow!
How AI Kills Everyone on the Planet in 10 Years — Liron on The Jona Ragogna Podcast
Get ready for LAUNCH WEEK!!! “If Anyone Builds It, Everyone Dies” by Eliezer Yudkowsky & Nate Soares
Tech CTO Has 99.999% P(Doom) — “This is my bugout house” — Louis Berman, AI X-Risk Activist
Rob Miles, Top AI Safety Educator: Humanity Isn’t Ready for Superintelligence!
Debate with Vitalik Buterin — Will “d/acc” Protect Humanity from Superintelligent AI?
Why I'm Scared GPT-9 Will Murder Me — Liron on Robert Wright’s Nonzero Podcast
The Man Who Might SOLVE AI Alignment — Dr. Steven Byrnes, AGI Safety Researcher @ Astera Institute
Top Professor Condemns AGI Development: “It’s Frankly Evil” — Geoffrey Miller
Zuck’s Superintelligence Agenda is a SCANDAL | Warning Shots EP1
Rationalist Podcasts Unite! — The Bayesian Conspiracy ⨉ Doom Debates Crossover
His P(Doom) Doubles At The End — AI Safety Debate with Liam Robins, GWU Sophomore
AI Won't Save Your Job — Liron Reacts to Replit CEO Amjad Masad
Every Student is CHEATING with AI — College in the AGI Era (feat. Sophomore Liam Robins)
Carl Feynman, AI Engineer & Son of Richard Feynman, Says Building AGI Likely Means Human EXTINCTION!
Richard Hanania vs. Liron Shapira — AI Doom Debate
Emmett Shear (OpenAI Ex-Interim-CEO)'s New “Softmax” AI Alignment Plan — Is It Legit?
Will AI Have a Moral Compass? — Debate with Scott Sumner, Author of The Money Illusion
Searle's Chinese Room is DUMB — It's Just Slow-Motion Intelligence
Doom Debates Live @ Manifest 2025 — Liron vs. Everyone
Poking holes in the AI doom argument — 83 stops where you could get off the “Doom Train”
Q&A: Ilya's AGI Doomsday Bunker, Veo 3 is Westworld, Eliezer Yudkowsky, and much more!
🥳 5,000 subscribers live Q&A! Ask me anything…
This $85M-Backed Founder Claims Open Source AGI is Safe — Debate with Himanshu Tyagi
Emergency Episode: Center for AI Safety Chickens Out
Gary Marcus vs. Liron Shapira — AI Doom Debate
Mike Israetel vs. Liron Shapira — AI Doom Debate
Doom Scenario: Human-Level AI Can't Control Smarter AI
The Most Likely AI Doom Scenario — with Jim Babcock, LessWrong Team
AI Could Give Humans MORE Control — Ozzie Gooen
Top AI Professor Has 85% P(Doom) — David Duvenaud, Fmr. Anthropic Safety Team Lead
“AI 2027” — Top Superforecaster's Imminent Doom Scenario
Top Economist Sees AI Doom Coming — Dr. Peter Berezin, BCA Research
AI News: GPT-4o Images, AI Unemployment, Emmett Shear's New Safety Org — with Nathan Labenz
How an AI Doomer Sees The World — Liron on The Human Podcast
Gödel's Theorem Says Intelligence ≠ Power? AI Doom Debate with Alexander Campbell
Alignment is EASY and Roko's Basilisk is GOOD?!
Roger Penrose is WRONG about Gödel's Theorem and AI Consciousness
We Found AI's Preferences — What David Shapiro MISSED in this bombshell Center for AI Safety paper
Does AI Competition = AI Alignment? Debate with Gil Mark
Toy Model of the AI Control Problem
Superintelligent AI vs. Real-World Engineering | Liron Reacts to Bryan Cantrill
2,500 Subscribers Live Q&A Recording
Effective Altruism Debate with Jonas Sota
🥳 2,500 subscribers live Q&A! Ask me anything…
God vs. AI Doom: Debate with Bentham's Bulldog
Debate with a former OpenAI Research Team Lead — Prof. Kenneth Stanley
