
Liron Enters Bannon's War Room to Explain Why AI Could End Humanity

Joe Allen and I urge War Room viewers to make averting AI extinction a top voting issue

I joined Steve Bannon’s War Room Battleground to talk about AI doom.

Hosted by Joe Allen, we cover AGI timelines, raising kids with a high p(doom), and why improving our survival odds requires a global wake-up call.

Timestamps

00:00:00 — Episode Preview

00:01:17 — Joe Allen opens the show and introduces Liron Shapira

00:04:06 — Liron: What’s Your P(Doom)?

00:05:37 — How Would an AI Take Over?

00:07:20 — The Timeline to AGI

00:08:17 — Benchmarks & AI Passing the Turing Test

00:14:43 — Liron Is Typically a Techno-Optimist

00:18:00 — Raising a Family with a High P(Doom)

00:23:48 — Mobilizing a Grassroots AI Survival Campaign

00:26:45 — Final Message: A Wake-Up Call

00:29:23 — Joe Allen’s Closing Message to the War Room Posse

Links

Rumble — WarRoom Battleground EP 922: AI Doom Debates with Liron Shapira — https://rumble.com/v742oo4-warroom-battleground-ep-922-ai-doom-debates-with-liron-shapira.html

Joe’s Substack — https://substack.com/@joebot

Joe’s Twitter — https://x.com/JOEBOTxyz

Bannon’s War Room Twitter — https://x.com/Bannons_WarRoom

Transcript

Episode Preview

Joe Allen 00:00:00
...If you had to pick the most likely paths by which an artificial intelligence system were to overtake the human race, and as you say, spread across the solar system and the galaxy like a cancer, how do you see it going down?

Liron Shapira 00:00:16
I want to give people a sense of perspective that the intelligence scale goes a lot higher than humanity. Einstein, with all due respect... it’s possible to make a mind that’s much, much smarter than Einstein’s mind, and that’s what we’re doing with AI in as short as five or 10 years.

Liron 00:00:35
When you see a mind like that on the same planet as you, you should expect things that are pretty miraculous. Because what the human race has already done, just using little two-pound pieces of meat in our heads, is already quite miraculous.

Liron 00:00:49
We’re about to see fireworks in terms of the level of superhuman technology that’s probably going to exist soon. Things like nanotechnology, things like building a Dyson swarm—a swarm of satellites harvesting the sun’s entire power so Earth doesn’t get any sunlight. I want to set expectations that those kind of crazy technological feats are likely to happen.

Joe Allen opens the show and introduces Liron Shapira

Joe 00:01:17
Good evening. It is Thursday, January 8th, in the year of our Lord, 2026. I am Joe Allen, and this is War Room Battleground.

Joe 00:01:29
As you know, posse, artificial intelligence has spread out across the world, infecting brains like algorithmic prions, giving the sense that perhaps the entire human race is under threat of getting digital mad cow disease. We’ve seen instances of AI psychosis. We’ve seen instances in which artificial intelligence has lured children into suicide.

Joe 00:01:57
What happens when jobs en masse are replaced by AI? And then on the deepest level, the catastrophic risks: What happens if AI systems allow any simpleton to create novel viruses, for instance, or any other type of bioweapon? What happens if these AI companies create a system that they can’t control at all?

Joe 00:02:19
What happens when they create first a human-level artificial intelligence—artificial general intelligence? What happens if they create a system, or a system of systems, which is smarter than all human beings on Earth combined? Here to talk about that possibility is Liron Shapira, host of Doom Debates. If Denver will roll, I just want to give you a sense of what Liron has going on over there. It’s fantastic, and I encourage you to dig in.

Liron 00:02:56
Welcome to Doom Debates. Professor Gary Marcus, what’s your P(Doom)?

Gary Marcus 00:03:00
P is a number that should be updated daily, depending on the circumstances in the world, just like the midnight clock for nuclear war. Mine has gone up.

Max Tegmark 00:03:10
I would argue that artificial super intelligence is vastly more powerful in terms of the downside than hydrogen bombs would ever be.

Dean Ball 00:03:17
Let me make an uninterrupted point for a few minutes if you don’t mind.

Max 00:03:20
Of course.

Dean 00:03:20
Okay. I think that there will be tons of side effects, and I think that we will stave off a lot of wonderful possibilities for the future.

Vitalik Buterin 00:03:27
It’s very possible that super intelligent AI alignment is intractable.

Liron 00:03:31
Vitalik Buterin, what’s your P(Doom)?

Noah Smith 00:03:34
My probability of total extinction by 2050 is so low that Daniel Kahneman would yell at me for giving a number. It’s 0.1%.

Liron 00:03:41
You did agree that one data center pretty soon could be better than a doctor at doctoring. Maybe it could be better than a general at commanding an army. Maybe it could be better than Hitler or David Koresh.

Noah 00:03:49
We need to think about the good futures more, instead of just reacting and being terrified by things and wanting everything to stay the same, because otherwise you end up being like, “I warned you!” And then nothing’s gonna happen. Imagine the good scenario and push through.

Joe 00:04:06
Liron Shapira, welcome to the War Room.

Liron 00:04:10
Joe, great to be with you, and thanks so much for showing the montage. A lot of great stuff to talk about there.

Joe 00:04:15
The War Room audience, we’ve talked a lot about AI risk, catastrophic risk, existential risk. What I really appreciate about your show is that you’re not just simply berating people. You’re not necessarily an evangelist. You are holding your ideas and other people’s ideas up to scrutiny, and I really appreciate that. Now, my first question for you: What is your P(Doom)?

Liron: What’s Your P(Doom)?

Liron 00:04:42
I appreciate the question. My probability of doom is about 50%. About even odds that in the next 10 or 20 years, humanity is just going to be over in a bad way. There’s not going to be a human future.

Liron 00:04:57
The whole universe is just gonna get conquered by some AI virus, some AI cancer, and it’s over. We lost our chance on Earth. We lost our chance to have kids, descendants. That’s how I see the world right now, most likely.

How Would an AI Take Over?

Joe 00:05:08
Brother, that’s harsh. Now, the War Room is not at all unfamiliar with harsh evaluations, but if you had to pick, say, three most likely paths by which an artificial intelligence system or multiple systems were to overtake the human race and, as you say, spread across the solar system and the galaxy like a cancer, what would those three paths be?

Joe 00:05:46
Would it be nanotechnology? Would it be something more mundane, like just driving humanity insane? How do you see it going down?

Liron 00:05:58
The first place I would go is all the way to what you’d call science fiction, except it’s not gonna be fiction, it’s gonna be real. I would go all the way to nanotechnology, new forms of life. The reason I insist on going there is, even though it might not happen—nobody can predict the future—I want to give people a sense of perspective that the intelligence scale goes a lot higher than humanity.

Liron 00:06:21
Einstein, with all due respect... it’s possible to make a mind that’s much, much smarter than Einstein’s mind, and that’s what we’re doing with AI in as short as five or 10 years. When you see a mind like that on the same planet as you, you should expect things that are pretty miraculous.

Liron 00:06:39
Because what the human race has already done in the year 2026, relative to humans in biblical times, is already quite miraculous. And we’ve pulled that off just using little two-pound pieces of meat in our heads. We’ve done it with very little hardware over the course of 2,000 years of human-level intelligence.

Liron 00:06:57
We’re about to have superhuman intelligence. So I do want to set expectations that we’re about to see fireworks in terms of the level of superhuman technology that’s probably going to exist soon. Things like nanotechnology, things like building a Dyson swarm—a swarm of satellites harvesting the sun’s entire power so Earth doesn’t get any sunlight. I want to set expectations that those kind of crazy technological feats are likely to happen.

The Timeline to AGI

Joe 00:07:20
If you have a definite timeline, what does your timeline say for the arrival of artificial general intelligence?

Liron 00:07:29
I don’t even have a unique timeline. I would just encourage people to go look at the consensus timeline of the experts. If you go to Metaculus.com, which is a prediction site, they will tell you roughly 2032.

Liron 00:07:44
If you’d asked them five or 10 years ago, they would’ve been like, “Oh, don’t worry, 2050, 2060.” But now they’re converging to 2032, which is in about six years. And they don’t know for sure. So when they say 2032, they really mean it could happen this year, it could happen in three years, it could happen in nine years.

Liron 00:08:00
If you listen to the experts, Elon Musk is saying, “Yeah, it could happen in 2026.” If you want my personal opinion, I just agree. I think it could happen in one year to five years. If it doesn’t happen in 10 years, I start to get surprised, because even people who have traditionally been pessimists are now saying it’ll probably happen within 10 years.

Benchmarks & AI Passing the Turing Test

Joe 00:08:18
I came at this quite skeptical of the possibility of superhuman AI or even human-equivalent AI. It was going over the evaluations that really drove home the real possibilities of what these systems could do.

Joe 00:08:40
The METR benchmark, for instance—how long a coding task can an AI complete, at a 50% success rate, compared to the time it takes a human? Benchmarks like the Omniscience Index or Humanity’s Last Exam: how well can AI go into its own mind and draw out meaningful answers to incredibly difficult questions on health, business, science?

Joe 00:09:10
I know that you’ve been at this for a decade and a half plus. Do those evaluations come into play as a way of judging where we’re at in relation to this possible artificial general or super intelligence?

Liron 00:09:29
The METR benchmark that you’re referring to is very interesting, and it’s talking about the dimension of task length. Can an AI do a task that would traditionally take a human two hours to do, like write a software program or a simple checkers game? Can the AI also do that? And if a human can do it in two hours, can the AI also do it with 80% reliability?

Liron 00:09:54
That time length, like two hours, it’s turning into four hours. We’re roughly at this point where if a human can do something in four hours, an AI can do it with maybe 80% reliability if you run it now. Maybe the AI will even do it faster than the human.

Liron 00:10:10
To your question about me following this for the last 20 years: I have been a self-described AI doomer for the last 20 years, but the difference is that I used to think we had a lot of time. I used to think we had a century, and it’s okay, not the biggest rush. We’ll discover new theories.

Liron 00:10:25
The problem is that the timeline got accelerated with ChatGPT. Recent developments have pulled the timeline forward. As you saw on Metaculus, now I don’t think it’s gonna happen in 2100. I think it’s gonna happen around 2030 or something like that.

Liron 00:10:43
Looking at these benchmarks, we have to realize how weird it is that these benchmarks already exist. The METR benchmark presupposes that there’s such a thing as artificial general intelligence. The idea that you could ask about a general task—any task that a human can do—that wasn’t even on the table.

Liron 00:11:03
That’s now the language that we’re talking in. We’re talking like, “Here’s a human, here’s an AI,” and we’re now watching the AI ascend past humanity as we speak, in a matter of months or years.

Joe 00:11:14
When I think about the history of this, just the last nine or 10 years... the development of the transformer, its adoption by OpenAI, the release of GPT-1 in 2018. At the time it was very clunky. It wasn’t a whole lot better than something like ELIZA.

Joe 00:11:39
Then all of a sudden, by 2022, you have a very sophisticated chatbot. ChatGPT released in November of 2022. Even then, it’s wonky and it’s only a large language model. You had DALL-E coming out. It has just been an onslaught ever since. These models are now multimodal. They are much more accurate in the ability to gather and interpret information.

Joe 00:12:19
I’ve seen your posts on LessWrong go back to 2009. You’ve been thinking about this for a long time. Was there any moment or incident that really changed your mind on how soon something like artificial general intelligence could actually develop?

Liron 00:12:41
I changed my mind roughly the same time everybody else did. If you dig up Metaculus and look at the history of the predictions the community has been making, you can see around 2022, when ChatGPT and GPT-3 come out, the timeline just crashes. It crashes from 2050 to 2030. My own opinion was roughly coincident with that.

Liron 00:13:05
What you’re seeing with ChatGPT is the famous Turing test. Alan Turing proposed it in 1950: can you talk to an AI in natural language, bring up any subject, and not be able to tell whether you’re talking to a human or a bot? You used to be able to tell, and now the only reason you can tell is because they programmed it to act like an AI.

Liron 00:13:25
But if somebody programs it to pretend to be a human, they’ve done tests where you really can’t tell. This was a famous test. I didn’t think the Turing test was going to fall in my lifetime, and now there’s been studies to show: nope, we’re past the Turing test now.

Liron 00:13:38
This is such a brave new world. We’re past the Turing test, watching the METR evaluation where the AI is getting better than humans at every single task. The time horizon is going up at a rate faster than doubling every year. Soon it will be able to grind through things that take humans a whole year—in a single day. What’s it gonna do the rest of the year? It’s going to do superhuman amounts of work in a single data center.

Joe 00:14:07
Absolutely. And the scale of adoption is so remarkable. Google’s Gemini has some 650 million users. OpenAI’s ChatGPT has over 800 million users. Meta AI claims a billion users. You’re talking about anywhere from a tenth to perhaps a sixth of the entire planet. Liron, if you would, hang on through the break.

00:14:38
(Music: “In America’s heart...”)

Liron Is Typically a Techno-Optimist

Joe 00:14:45
War Room posse, welcome back. We are here with Liron Shapira of Doom Debates. I cannot recommend enough the Doom Debates platform. You can find it on YouTube and on Liron’s social media.

Joe 00:15:00
You’ll see some War Room favorites like Max Tegmark, Geoffrey Miller. You’ll find people like Robert Wright. You can find Liron debating Beff Jezos, who has still not accepted the invitation to come on the War Room. Gary Marcus, Holly Elmore, Roman Yampolskiy—whose P(Doom) beats everyone’s, I think it’s almost 100%.

Joe 00:15:27
You can really sink your teeth into the technical details. As Liron and his various opponents go over the possibilities of either a wonderful future of abundance or a horrific, doom-inflected end to all humanity, they’re teaching you the underlying mechanisms of artificial intelligence. You can gauge where it’s at now and where it’s going.

Joe 00:16:06
Liron, if we can come back with a breath of fresh air, a little bit of optimism. You have been involved in Silicon Valley firms and technology for a long time. As an outsider, I would describe you generally as a techno-optimist. Is that correct?

Liron 00:16:27
Very much a techno-optimist. This cuts against some people’s assumptions about AI doomers. I’ve never suffered from depression. I’ve never been a pessimistic guy. I’ve loved technology my whole life.

Liron 00:16:41
If you ask me about self-driving cars or virtual reality, I’m like, “Yep, that’s great. I love that. I love the internet.” I’m even fine with social media. It’s just in the case of artificial intelligence, I don’t think we’re ready to survive sharing the planet with a smarter species. It’s purely logical.

Joe 00:17:01
It’s so funny, I don’t know whether I would want to debate you on the possibility of doom. It’s not a huge concern of mine. If I had any thesis, it would be a reformulation of Yudkowsky and Nate Soares: “If anyone builds it, everything sucks.”

Joe 00:17:19
What I would argue about is whether fully autonomous vehicles, Bug Man mobiles, or people lost in virtual reality is beneficial to humanity. But maybe we can coexist, assuming we’re not all destroyed.

Liron 00:17:39
There’s different levels of doom. Some people like to focus on the problem like, “How are we gonna have privacy in the age of AI?” I’m like, “Okay, sure, you can think about that. It’s just that we’re all about to get annihilated.” You really have to prioritize the concerns here. If we can survive 10 or 20 years so we have time to worry about things like privacy or amusing ourselves to death, those are good problems to have.

Raising a Family with a High P(Doom)

Joe 00:18:02
If I can ask a more personal question: You’re a father. What has that done to your perception of technology and its potential consequences?

Liron 00:18:12
It does make me conflicted about whether I should have had kids or have more kids. It’s tough because I’m partially responsible for creating more victims of getting annihilated by AI. One thing that helps is that my P(Doom) isn’t 100%. So I’m still optimistic that we’re not going to destroy ourselves.

Liron 00:18:34
I have to live much of my life according to the good outcome. I haven’t thrown away my retirement savings. I’m still hoping I’ll have a retirement or live forever. I haven’t completely committed to the idea of annihilation. The other thing about having kids is I can see that the AI is getting smarter faster than my kids are.

Joe 00:18:53
That is a very eerie phenomenon. I think it was on the Joe Rogan show where Elon Musk was talking about watching his kids grow up and weaving it in with artificial intelligence. He talked about how watching an AI being trained is like watching a baby grow up. There came a point where it wasn’t clear if he was talking about a digital mind or his baby.

Joe 00:19:25
Beyond just the capabilities, you described the Turing test as this major milestone that’s already been passed. This tendency for humans to anthropomorphize these systems and the vast number of people using them—it’s as if we’ve been invaded by artificial immigrants.

Joe 00:19:44
Without a total ban on development of AI, what is a comfortable limit for you? How far do you think these companies should take AI capabilities?

Liron 00:20:01
I wish I could tell you a really crisp answer because then we would just go right up to that line and stay there. Unfortunately, because of the nature of this research, nobody knows where the line is. It feels like we’re driving in the fog toward a cliff, and all the AI research companies are just flooring the gas.

Liron 00:20:21
The closer you get to the cliff, it’s like shuffleboard—more points, trillions of dollars. Today, I don’t think we’re over the cliff yet. Some people say AI has caused so much damage, but I think today it’s still net good. It’s very useful.

Liron 00:20:41
The problem is, I think the cliff is coming, and the cliff is when it gets smarter than humanity. At the very least, we need to build an off button. We need a brake pedal because right now there is no brake pedal, only gas. At the very minimum, let’s get ready to hit the brakes a little later.

Joe 00:20:58
We have SB 53 in California, the RAISE Act in New York, and the legislation introduced by Hawley and Blumenthal, the AI Risk Evaluation Act. These are steps towards something like an E-stop. Do you see these attempts at legislation as positive? Or is it giving people a false sense of comfort?

Liron 00:21:34
The short answer is: it’s not enough. We’re making a smarter species, and we’re going to lose control. In 10 years or less, we may have no levers of control because all the levers are at the hands of the AI. There’s no undo button. It’s game over.

Liron 00:21:55
This was going to be our galaxy. Now it’s never going to be. We’re never gonna have grandkids. This is a major disaster we’re trying to avoid, and the regulators are saying, “Hey, can you send us a report when you’re creating this AI?” There’s a big disconnect between the magnitude of the emergency and these little baby step regulations.

Joe 00:22:25
In Marsha Blackburn’s proposed Trump America AI Act, it’s a framework that gives a sense of where a federal standard might go. One recommendation is to have agencies such as the Department of Energy, which has been responsible for tracking nuclear risks, be involved. Do you think the Department of Energy has the expertise to address out-of-control AI?

Mobilizing a Grassroots AI Survival Campaign

Liron 00:23:12
The problem is that all of humanity has to cooperate. This solution is complex; it requires an international treaty. Think about nuclear proliferation—it’s not about one country managing itself. It’s about all the countries policing everybody in a shared, centralized way.

Liron 00:23:33
I’m no fan of centralization. I like free markets. I like everybody defending themselves. Unfortunately, when it comes to creating a smarter species, you really do need some oversight so that random hackers don’t decide to create a smarter species and unleash it on the whole human race.

Liron 00:23:52
So you do need something like nuclear proliferation enforcement happening through a consortium of nations, and this has to happen fast. When I see these little efforts, one state at a time, it’s better than nothing. The funny thing is the AI companies are aggressively fighting even these token efforts.

Liron 00:24:12
We need to get serious. The grassroots people watching right now need to consider this an urgent voting issue. Whatever your number one voting issue is, consider surviving the next decade to also be an important voting issue.

Joe 00:24:23
You hear from older people saying, “I’m not gonna be alive. It’s not my problem.” But whether the issue is brains getting melted, massive job loss, or the ultimate out-of-control AI, the salience is really sinking in. Do you think that populism plays into this? Is this a task appropriate to a populist approach, as opposed to a moneyed political activism?

Liron 00:25:28
It has to be grassroots, because leaders aren’t going to lead from the front. You’re not going to have a leader saying, “I’ve heard the argument, I’ve looked at Metaculus, trust me America, we need international treaties and a stop button.” There’s not going to be a forward-thinking leader who gets elected and pulls the nation along.

Liron 00:25:51
It has to be what the voters are demanding in the polls. “Raising awareness” is usually hippies wasting their time, but on this issue, I think raising awareness helps to make it a voting priority.

Liron 00:26:10
The War Room Posse, most of them probably agree this is important, but they haven’t been treating it like the number one voting issue. They don’t have politicians promising to build that stop button and negotiate with China. It’s crazy how little time we have left. Only people in Silicon Valley have opened their eyes to how little time we have left. The rest of the world is completely head in the sand.

Final Message: A Wake-Up Call

Joe 00:26:45
I’d like to give you the opportunity to give any final message that I haven’t prompted you to give. The floor is yours, sir.

Liron 00:26:59
It really is this idea of waking up. See how serious the threat is. Listen to what the AI companies are saying in Silicon Valley. They know this is coming. They’ve driven the progress where AI went from language translation to “It can do anything, it’s an agent, it’s about to replace jobs.”

Liron 00:27:20
If you extrapolate the curve, we don’t have much time left. Take it seriously. Vote on it. For more information, I recommend watching my show, DoomDebates.com, where I discuss this every week.

Joe 00:27:31
I actually have one final question. Of the various guests you’ve had or opponents you’ve taken on, who has given you pause? Who has swayed your opinion the most, if at all?

Liron 00:28:00
There’s been a couple smart insiders from different AI companies. OpenAI has this employee named Roon, who came onto the show.

Joe 00:28:09
I’ve met him! He’s a fantastic guy.

Liron 00:28:10
He’s saying, “Look, I think that the AI will probably keep listening to our orders,” and he has some arguments why. The problem is, if you watch my show, the different people who are saying why we’re going to survive say different reasons. They haven’t gotten their story straight about why we’re going to survive. That makes me anxious again.

Joe 00:28:33
Well, I hope they’re not watching right now, because they’re gonna gang up on you. Again, Liron, thank you so much for coming on. Where can they find Doom Debates, and perhaps a suggestion for one or two first episodes?

Liron 00:28:55
DoomDebates.com, or search Doom Debates on YouTube or any podcast player. For a gentle introduction, check out my debate with Mike Israetel. I’ve also got one with Gary Marcus, and a debate with Dean Ball who wrote America’s AI Action Plan.

Joe 00:29:19
Fantastic. Thank you very much, sir.

Liron 00:29:22
Thank you, Joe.

Joe Allen’s Closing Message to the War Room Posse

Joe 00:29:23
Well, Posse, I think we have just enough time for a little bit of entertainment.

Reporter 00:29:30
You said recently “tens of billions of robots,” but that’s decades away.

Elon Musk 00:29:35
At least one decade away. I think humanoid robots will be the biggest product ever. The demand will be insatiable. You really have task extensibility that is dramatic, ‘cause it can learn anything very quickly.

Reporter 00:29:51
To a lot of people, that sounds scary. You don’t foresee a world of Terminators?

Elon Musk 00:29:58
Absolutely not.

Reporter 00:29:59
The Unitree G1, you can actually buy it right now via Looking Glass XR. Unitree’s been advertising it as starting at $16,000, but via Looking Glass XR, the starting price is actually 20,000...

Joe 00:30:11
War Room Posse, I do not recommend buying the Unitree robot, nor do I recommend inviting these beasts into your home. Consider them algorithmic immigrants, and bar them at the border. Stay human. God bless, War Room Posse. Till next time.


Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏
