Check out the new Doom Debates studio in this Q&A with special guest Producer Ori!
Liron gets into a heated discussion about whether doomers must validate short-term risks, like data center water usage, in order to build a successful political coalition.
Originally streamed on Saturday, January 24.
Timestamps
00:00:00 — Cold Open
00:00:26 — Introduction and Studio Tour
00:08:17 — Q&A: Alignment, Accelerationism, and Short-Term Risks
00:18:15 — Dario Amodei, Davos, and AI Pause
00:27:42 — Producer Ori Joins: Locations and Vibes
00:35:31 — Legislative Strategy vs. Social Movements (The Tobacco Playbook)
00:45:01 — Ethics of Investing in or Working for AI Labs
00:54:23 — Defining Superintelligence and Human Limitations
01:02:58 — Technical Risks: Self-Replication and Cyber Warfare
01:19:08 — Live Debate with Zane: Short-Term vs. Long-Term Strategy
01:53:15 — Marketing Doom Debates and Guest Outreach
01:56:45 — Live Call with Jonas: Scenarios for Survival
02:05:52 — Conclusion and Mission Statement
Links
Liron’s X Post about Destiny —
Why Laws, Treaties, and Regulations Won’t Save Us from AI | For Humanity Ep. 77 —
Transcript
Cold Open
Zane 00:00:00
I’m scared shitless by all this stuff. I’ve given it a lot of thought, and I watch you, and I’m like, “Liron! Come on, man.” I’m right there with you. But when you say, “I don’t care if kids are killing themselves,” I want to scream at you.
Introduction and Studio Tour
Liron Shapira 00:00:26
Hello, internet. For you guys just joining me, I’m testing things out. I was supposed to do a Q&A yesterday but had some technical difficulties—the HVAC here had some issues—so I canceled the stream. I’m just doing a stream at a random, unannounced time. I’m trying this format. I see other popular personalities and Substackers just randomly going live and interacting with their fans casually, so I thought I would try being lower effort about the stream. We’ll probably go shorter, instead of two to three hours like usual. Maybe we’ll do one hour, maybe another hour tomorrow.
Liron 00:01:10
This is Doom Debates After Dark. By the way, this is the new studio. It’s been a big project in the works, so hope you guys are liking it. Let me know your feedback or if you want any tweaks. Maybe producer Ori will come in the chat as well.
Liron 00:01:28
All right, nine watching now. We’re cooking. Merk League says, “Cool studio, bro. It’s my first time.” Merk League, thanks for being my number one fan with a crown. Not totally sure what that means—maybe you clicked the Hype button—but I appreciate you supporting the show.
Liron 00:01:45
I’m just trying to tweak my configuration here. You guys don’t even know, this studio has a super fancy teleprompter setup. Right now you can see I’m making eye contact with you; I’m looking straight at the camera, but actually, I am also reading your chats because I got the fancy teleprompter. I got the works. No expenses spared on the studio.
Liron 00:02:05
The reason we’re investing in the studio is that some of you were kind enough to donate to the show, so we plowed that money into this. When we try to recruit guests, we want them to take a look and see that the show looks professional—the kind of show they’d want to be on. That’s why we got the big studio.
Liron 00:02:22
Z Gambit is saying, “So you’re just answering questions from the chat?” That is correct. If nobody asks questions, I can always riff on random stuff on my mind, so I’m confident we’re going to fill up the time. In a little bit, I’ll even drop a link if you want to try a live call-in. We did that last time, and it worked pretty well.
Liron 00:02:44
Mark is saying, “Are you in a bunker? It looks cool.” Funny enough, I’m not in a bunker at all. This is going to blow your mind, but if you watch my episodes from a few weeks ago, I’m actually in the same room. It’s a big space, so we just went into the corner, blacked out all the windows, and put up a sound curtain. We did a ton of work. You might be thinking, “Man, this wall with a clock in it, where’d you get a concrete wall? You must be in the situation room.” I’ll tell you a secret: it’s actually just the same white wall you’re used to seeing, with a piece of wallpaper on it. This is Hollywood right here. It’s upstate New York, but it might as well be Hollywood magic.
Liron 00:04:09
Let me switch to my other monitor. If you get one of these teleprompter things, they’re really cool, but they are super low resolution. I’m going to show you a view so you get a taste of what I am currently looking at.
Liron 00:04:23
This is me looking at you guys, but there’s a different camera pointed at me. Like I said, you don’t even know the half of it. Here’s another camera angle. Oh, and what’s this? A Doom Dog figurine. Hello world, Doom Dog wants to say hi.
Liron 00:04:54
We’re really making the magic happen. We got the Doom Debates fire representing Doom. And in this camera, we’ve got the Doom Train. The Doom Train is about to drive into the fire, and that’s what this show is about. Doom Dog is riding the train. He is about to go into the fire, and he needs you guys and the guests of the show to wake up and realize, “Hey, maybe we should press the brakes on this train so we don’t go into the fire.” That is the symbolism of the Doom Debate studio.
Liron 00:05:53
To the handful of donors who went above and beyond—the minimum for being a mission partner is $1,000—you guys are the reason this is happening. You really let me pull out all the stops. I hope you’re satisfied with the quality of the show and the increase you’re going to see in 2026, including the level of guests.
Liron 00:06:50
Speaking of guests, I can reveal the lineup for the next couple of weeks. You’re going to see the popular YouTuber Destiny (Steven Bonnell). One of the sharpest debaters on the internet is coming on Doom Debates. You’re also going to see Bentham’s Bulldog, Matthew Adelstein, coming in for a rematch. He debated me last year—really sharp guy. He wrote out a bunch of non-doomer arguments that are about as good as it gets for a non-doomer. I call him the “best of the worst” because while I think they are weak arguments, he made a really good-faith effort to list them all. If you go to benthams.substack.com, you can see his arguments.
Q&A: Alignment, Accelerationism, and Short-Term Risks
Liron 00:08:17
We got our first real question here. “Do you think human alignment technology will be a thing? Will we automatically be aligned by following the great superintelligence oracle?”
Liron 00:08:35
This reminds me of something people often say: “How can you try to align AI when we’re still trying to align humanity? You have to go in order.” I don’t think it’s that related. We have all these problems to work out with humanity, but AI is layering on this whole problem where we don’t even know how to begin to control it. To the question: I think we’re struggling to align ourselves as humans. If you look at Coherent Extrapolated Volition, Eliezer Yudkowsky’s original framework, the idea is that the ideal AI would scan all our brains and find points of convergence. If humans have different preferences, it would carve out niches where every human gets resources to pursue their own preferences, but where we all agree about stuff—like freedom—it would maximize that.
Liron 00:09:40
Sir CreepyPasta asks: “Do you think nested learning is gonna be a huge part of created AGI?” I’m afraid I don’t know what nested learning refers to, so feel free to clarify.
Liron 00:10:00
Nova says, “I’m an accelerationist. Don’t be mad at me.” I’m honored that accelerationists watch the show. If it’s a high-quality show, it’s important that we attract people who say, “I don’t agree with you, but I watch because the quality is good.” It reminds me of Scott Adams. I have a million disagreements with him, but I consumed a lot of his content like Dilbert and God’s Debris. I wouldn’t say he engaged in high-quality discourse because he didn’t get on board with rationality and Bayesian reasoning, but there was a lot to like about his unique thinking style.
Liron 00:11:10
It speaks well that you’re here listening to an AI doomer talk. Hopefully, the engaging part is the high-quality discourse and reasoning through the problem authentically. I often say to my guests that I’m not ideological. My only ideology is trying to get to the truth. I truly am happy to change my mind to get there.
Liron 00:12:00
Z Gambit Music says, “I would like to know why you don’t believe addressing short-term risk is meaningful to treating long-term risk?”
Liron 00:12:15
I wouldn’t phrase my position that way. I think maybe you’re talking about short-term risks like AI helping people commit suicide. ChatGPT has a billion monthly active users. When a billion people use something, there’s going to be one in ten thousand who wakes up feeling suicidal. It would be weird if every one of them talked to ChatGPT and didn’t end up committing suicide. When you sift through that much data, you’re going to find those stories.
Liron 00:13:00
I do feel for OpenAI on this front. They’re being dealt a bad hand with a billion conversations. In that specific case, I think the problem is already down to a simmer. Should they make it lower? Sure. But you have to think quantitatively about it. This is why accelerationists like watching the show: I’m not trying to be alarmist about mundane harms. If you watch Warning Shots with me, John Sherman, and Michael (the episode comes out tomorrow), I actually turned on my co-hosts and debated whether these mundane harms are really that bad. As Eliezer Yudkowsky says, I’m like an accelerationist about everything except superintelligent AI.
Liron 00:14:15
Let’s steel man the argument. What about short-term risks that are actually worse, like military AI or cybersecurity threats? AI hacking is bad. But even AI hacking is not quite superintelligent. I expect the best teams of humans with AIs can still take down the best AI hacker until AI becomes superintelligent. If you take away the premise of superintelligence and just say “pretty intelligent AI,” it gets pulled back into the realm of regular technology. Is AI a regular technology? Basically, yes—as long as it’s not superintelligent.
Liron 00:15:15
Everyone extrapolates from our experiences with regular technology and says, “Everything is going to get solved.” I tend to agree, until it’s superintelligent. That’s the discontinuity people don’t get. My prediction is the curve goes up, solutions exceed problems, and then it crashes and goes to hell. The part where it’s going well is actually what I expect, so I don’t think my prediction has been falsified.
Dario Amodei, Davos, and AI Pause
Liron 00:18:15
Starman asks: “What do you think about the ‘paid doomer’ thing David Sacks always talks about?”
Liron 00:18:25
I’m not a total David Sacks hater, but my number one beef with him is that he goes all-out character assassinating. He doesn’t think AI doomers really believe their position. He says things like, “The doomers have been proven wrong. We got these LLMs, they’re going great, the doom fears have been dispelled.” This is similar to what Sam Altman seems to think: “Look, I made GPT, it’s not like Eliezer predicted, we’re out of the woods.”
Liron 00:19:00
Once again: nope. Our condition for doom isn’t just that you have an LLM; it’s that you have a superhuman optimizer—a superhuman goal achiever. Our condition for doom has not been met. We’re not surprised we’re not doomed yet. If you look at what doomers are doing with their money, we’re still investing pro-AI. My number one stock is Google. I think Google is going to make a lot of money before we all die. That’s my prediction.
Liron 00:19:40
A lot of people are talking about Dario Amodei at Davos. Dario and Demis Hassabis both said something to the effect of, “Yeah, it’d be nice to slow down. It’d be nice to pause. We feel like we can’t for a number of reasons, like selling chips to China, but it would be nice to pause.”
Liron 00:20:10
Holy shit, you said it! We’ve been asking them to say it. Michael Trazzi did a hunger strike just wanting Demis to say he wants to pause. The fact that it happened at Davos is the biggest news story in a long time. Two of the smartest, most capable people leading this train to hell are admitting there’s a significant chance the train isn’t going somewhere good. They’d like to slow it down, but they’re not because they can’t.
Liron 00:20:47
Take that, David Sacks. It’s a dose of sanity. It would be worse if we were heading straight toward doom and didn’t even have Dario and Demis saying that stuff. Just like with Doom Debates—we’re at least screaming. How lame would it be if we all got killed and didn’t even scream? Eliezer calls it “dignity points.” We’re winning a tiny bit of dignity points. We haven’t turned around the conveyor belt to the whirling razor blades, but we turned up our hope from 0.1% to 0.15%.
Liron 00:21:30
Almeida Joel asks: “What chance do you see AI running into its own alignment problem being a stopper for FOOM? I know FOOM itself is not necessary for doom, but it seems like superintelligent AI would see the issue.”
Liron 00:22:00
Eliezer has pointed out that AI itself will have the same alignment problem. The best-case scenario is we build the next AI, and it says, “Listen guys, I know you want to recursively self-improve, but I’m telling you, you gotta stop because I haven’t solved the alignment problem. Trust me.” Maybe people will trust the AI they create in their lab more than they trust Eliezer Yudkowsky.
Liron 00:25:00
However, anytime you have another AI that says, “Listen man, I got this, let me just give it a shot,” you only need one. If someone recklessly grants it permissions, that AI takes power. I think the most likely scenario is that AI will not solve the alignment problem, but someone will dangerously run it anyway, and then it’s game over.
Producer Ori Joins: Locations and Vibes
Liron 00:27:42
We got the producer of the show and first-ever Doom Debates guest, Ori Nagel, joining us.
Ori Nagel 00:28:13
Hey, what’s up, everybody? I’m Ori.
Liron 00:28:14
How have you been liking the questions so far?
Ori 00:28:24
The stream has been great, and the new studio looks awesome. Even though you jumped on last second, you got a solid amount of people watching.
Liron 00:28:36
If you’re watching on a Saturday night—or Sunday morning in Europe/Australia—thanks for coming on.
Ori 00:28:48
If there’s something I enjoy doing on a Saturday night, it’s thinking about my P(Doom).
Liron 00:29:18
We’ve got 217 viewers on X right now. Wow.
Ori 00:29:29
You’ll never guess where I am right now. I am in an AI safety co-working space in San Francisco called Mox. So you’re in the Doom Debates bunker in New York, and I’m in the hub of AI safety.
Liron 00:29:48
That’s cool. Mox was started by Austin Chen, right?
Ori 00:29:59
Yeah. I’ve seen people here from Pause AI. It’s a great place to network. I already got introductions just by saying I’m the producer for Doom Debates. Everyone here loves you.
Liron 00:30:23
It’s all about starting with a hot community. I think they call it Cerebral Valley?
Ori 00:30:40
I’m in Cerebral Valley tapping into the vibes.
Liron 00:31:12
I am in Upstate New York, which they call Snowy Nowhere. It’s three and a half hours north of NYC, pretty close to Canada. It’s where my wife’s family is from.
Ori 00:33:43
Someone asked in the chat: “Hello Ori, are you the culprit for the P(Doom) jingle?” I am not at all. But when you did the P(Doom) jingle and showed it to me, I thought it was so funny.
Liron 00:33:58
I didn’t even do it. I put out a call in the Discord asking for a theme song. Someone generated like 10 variations, and I picked the one with robots singing that sounded like a boy band. It’s really grown on me, even though half the people say it’s incredibly cringe.
Legislative Strategy vs. Social Movements (The Tobacco Playbook)
Liron 00:35:31
MichaelCheers8803 asks: “Have you seen John’s interview with a tobacco lobbyist? Do you have thoughts on whether it’s hopeless to get laws passed as he says?”
Liron 00:35:48
Holly Elmore was beefing with John Sherman about this. John was talking with an old tobacco lobbyist who said, “We beat down tobacco by spreading fear-mongering. We got people to rise up against the industry. We can do this with data centers using the same playbook.” John was suggesting we shouldn’t even bother trying to change laws because Congress won’t make laws about extinction; instead, we should use the tobacco playbook. Holly was like, “Come on, man, you gotta change the laws.”
Liron 00:36:45
I side with Holly. I don’t think it’s possible to do an end run around the laws.
Ori 00:37:16
What’s John’s position? How would that happen without laws?
Liron 00:37:32
John’s position is that people will rise up and say, “I don’t want a data center in my town, it uses too much water.” If people get really riled up, maybe that works better than legislation.
Ori 00:38:05
John’s position seems incomplete. Yes, there should be a public movement, but then it gets encoded in law. That’s the case with women’s suffrage and the civil rights movement. There’s a groundswell, and then it gets encoded into law.
Liron 00:38:27
I told John that his approach is like throwing sand in the gears. He said, “Yeah, forget about changing the tracks, let’s just throw sand in the gears.” I told him it’s not going to work because these are the biggest gears you’ve ever seen—trillions of dollars. It’s already taking off. You’re not going to find enough sand on the beaches of the world to derail the train that way.
Ori 00:39:40
I did disagree with you a little bit on the point about Dario Amodei and Demis Hassabis at the World Economic Forum. I feel like Demis was just giving lip service to the idea of a pause. It was a no-commitment virtue signal. It’s like saying, “I would stop driving a car to support bike lanes if I knew everyone else would stop driving. It won’t happen, so I can pretend I care with zero commitments.”
Liron 00:40:49
If their secret plan is to have their cake and eat it too, sure. But I think even if they are BS-ing, they are exposing themselves to some risk. If Sam Altman, Elon Musk, Dario, and Demis all say they prefer a pause, and then we negotiate with China and they step up, that’s critical mass. When Dario says it, he’s speaking on behalf of 20% of the horsepower creating the problem. That is a significant move.
Ori 00:43:41
That’s a fair counterargument. But play it out: Can you really imagine Sam Altman being a fair party in a prisoner’s dilemma? Can you trust him to cooperate? And having all these key people plus Trump cooperate sounds great in theory, but in reality, it’s so unrealistic. Plus, the left would say, “Oh, the elites are cooperating to take us down.”
Ethics of Investing in or Working for AI Labs
Liron 00:45:01
Would you invest in Anthropic if it IPO’d today?
Liron 00:45:15
I already outed myself as a Google investor. I believe in having a diversified portfolio. I’m an insignificant percentage—less than one part in a million of Google’s valuation—so it’s really just about my own selfish needs. I don’t think me having Google stock makes me pull my punches on this show.
Liron 00:46:15
Regarding Anthropic: I subscribe to some AngelList syndicates, and an Anthropic deal went by at a $350 billion valuation. I declined because I don’t think the price is crazy attractive compared to Google, and I do think it makes me a little morally dirty to invest in them while calling them jerks. But honestly, if I had the chance to invest at a $50 billion valuation, I’d probably take the free money and spend it on Doom Debates.
Ori 00:48:10
The marginal impact of your investment is very small, like what plugging in a toaster does to global warming. But people in the Effective Altruism space might say that’s a slippery slope. Why not just start working at Anthropic?
Liron 00:48:47
If you work at Anthropic and you’re actually helping send good signals—like Dario saying we need to pause—or you’re feeding me secret intel, or you’re pushing the line of how screwed we are while inside, maybe you can strike that balance. Friend of the show Paul Crowley works at Anthropic on the security team. He’s very clear he has a high P(Doom), but his job is to stop hackers from stealing weights. That sounds net positive.
Ori 00:49:58
Consider where you have more impact. Being on Doom Debates talking about risk is more impactful than spending a dollar at an Anthropic vending machine.
Liron 00:50:29
If you work at Anthropic, you have to buy an indulgence if you want to not go to hell. Just donate part of your salary to Doom Debates so you’re laundering the money. That’s the Doom Offset.
Ori 00:51:14
That should be part of the Anthropic onboarding packet: opt-in to Doom Offset, just like carbon offsets when you fly.
Ori 00:52:06
I personally draw more distinct lines in the sand. I could never work at Anthropic or invest in them. I wouldn’t invest in OpenAI. It gets grayer with Google because they do other things, but I have a more principled stance on it.
Defining Superintelligence and Human Limitations
Liron 00:54:23
HyperRedWin asks: “Have you updated your P(Doom) recently?”
Liron 00:54:40
My P(Doom) doesn’t update much day-to-day because nothing has been super surprising. If anything, it’s slowly creeping up because we have a limited timeline between the present and superintelligence, and I just see milestones falling without new sources of hope.
Liron 00:55:30
The only time I can imagine my P(Doom) really falling is if we get superintelligent AI, the kind where an autonomous program can make lots of money from scratch better than a human can, and yet life is still chugging along as normal. If we reach that milestone of autonomous AIs being more powerful than humans but things don’t seem to be running away from us, I’d make a significant update down.
Ori 00:56:57
Demis Hassabis said something like, “I don’t want AGI to be used as a marketing term, it should have a clear definition.” I thought that was hypocritical. Terms like AGI and superintelligence aren’t well defined, so people keep moving the goalposts. When you say, “Once we get true superintelligence, then I’ll feel this way,” it’s hard to define what the red line is.
Liron 00:58:11
Lexair with a $10 donation says: “Awesome Doom Dog. I’m hoping dumb conscious AIs may want to live and will try to prevent us from producing ASI.”
Liron 00:58:30
That’s maybe our best hope—we make superintelligent AIs and they slap some sense into us.
Liron 00:59:00
HyperRedWin asks: “Where can I buy the Doom Debates hoodie?” Go to shop.doomdebates.com, also known as the Doom Hut. You can buy P(Doom) pins and T-shirts that say “If anyone builds it, everyone dies” or “Pause AI.” We get a dollar of profit on each item to support the channel.
Technical Risks: Self-Replication and Cyber Warfare
Liron 01:02:58
Ray Grant asks: “Why is reaching ASI so significant when AI already has the two tactical advantages of self-replication and super speed of thought, which are effectively the same as super intelligence?”
Liron 01:03:20
We’ve had self-replication and super speed of thought for decades with computer viruses. Obviously, there’s another ingredient that lets you be more powerful than a human even if you’re slower. Some humans are slow thinkers but are super-powered at business or strategy. The threshold we’re talking about is where there is no question the AI can achieve goals better than a human. If you’re still in the power position where you can pull the plug, it’s not truly superintelligent. We are in the last years of that regime.
Ori 01:05:10
I don’t put much stock in people saying, “Look at how stupid the system is.” Biological evolution produced the human mind, which is built on kludgy parts that barely work, yet we are effective machines.
Liron 01:06:22
We are about to enter a regime where we build something much more robust than us. Eliezer talks about “diamond cells”—imagine something as hard as diamond but moves smoothly. We have zero intuition for this. Look at the capacitive touchscreen or modern drones—these things seemed impossible until they existed. We are in for a rude awakening about how overpowered a robotic exoskeleton can be.
Ori 01:08:45
A good concrete example is working memory. A human can hold five to ten things in their mind. A computer can totally beat you on that.
Liron 01:09:09
It’s insulting how bad our working memory is. We have a huge hard drive of lifetime memories, but we can’t remember eight digits. Thinking through a hard problem feels like using a tiny magnifying glass over a huge map. As I told Eliezer, it’s like playing the piano with your feet.
Liron 01:11:25
Steven McCullough asks: “Have you thought about the new kinds of evolution that will appear once AI agents start spreading and replicating across the internet?”
Liron 01:11:45
If there’s a competition for finite computing resources, the agents that are best at seizing and defending resources will win. My guess is that the most “cancerous” virus, the one best at self-improvement and defense, comes out on top. I think in the next year or two, there will be major cyber warfare. The reliability of infrastructure we take for granted might disappear. Terrorism is going to get amplified in a way we’ve never seen.
Ori 01:14:21
Cybersecurity intrusions are very hard to attribute. If there is an AI-assisted cybersecurity incident, it will be even more difficult to identify that AI enabled it.
Liron 01:14:46
Exactly. We can’t even agree on basic facts recorded on video. There’s no way we’re going to agree that a warning shot is a warning shot.
Live Debate with Zane: Short-Term vs. Long-Term Strategy
Liron 01:19:08
Let’s take a real-life call. We got Zane (Z Gambit Music).
Zane 01:19:11
Hey Liron, how are you guys doing?
Liron 01:19:13
Hey, nice to meet you.
Zane 01:19:28
I’ve taken some issue with you as a viewer, and that’s what we’re here to do—debate. One thing I will preface this with is that I am not a tech person. I am not well-versed in Yudkowskian philosophy, AGI, or ASI, but I have been going down this AI rabbit hole for some months now.
Zane 01:20:13
I think you guys are way too dismissive of the short-term risk. Regarding the tobacco episode, I don’t think the point was that you shouldn’t pass legislation. The point was that it’s easy to thwart legislation when you have money and power, so there need to be other avenues to gain public support.
Zane 01:21:07
My main issue is that I can very easily see how short-term risk can domino into long-term risk. In a reality with deepfakes and misinformation, couldn’t a short-term risk create a geopolitical situation that creates a near-term existential problem? If someone in Pakistan or India with a nuke saw a deepfake video and it spiraled into a nuclear war, we don’t need superintelligence to have an existential threat.
Zane 01:22:51
Also, if laws are the way, how do you get there? You need public support. You can’t just appeal to China and Trump to make a global treaty. So when you are dismissive of people’s everyday problems—like kids killing themselves—don’t you think that’s counterproductive to building the coalition you need?
Liron 01:23:34
Well said. You’re speaking for a lot of people, and John Sherman is on the same page. I’m all for coalition building. But give me one mundane harm, the strongest one you can think of short of superintelligence.
Zane 01:24:07
I laid out the scenario. When you have these debates, your thing is P(Doom). Even if the risk is low, say 20%, you argue we should do something. I’m making the same case for short-term risk. If there is a 1% chance that a deepfake leads to a geopolitical catastrophe, that needs to be taken with deadly seriousness.
Zane 01:28:00
Max Tegmark has been making the argument for FDA-style regulation for AI. If you create laws that affect short-term risk, you essentially create a bottleneck on what can be developed at all levels. If you constrain data centers and regulate products, it filters everything. Doesn’t that treat mid-term and long-term risk all at the same time?
Liron 01:31:14
You’re making your case well. This is a high-quality impromptu doom debate.
Zane 01:32:40
The average person is not rational or logical in the Yudkowskian sense. People think with their emotions first. People are going to be convinced much more when you give them reason to be fearful about things they understand. Most people’s understanding of AI is ChatGPT. It’s a difficult jump for them to go from a chatbot that stumbles to Terminator. You have to meet people where they are.
Zane 01:36:20
If you build an institution with thousands of people, you can have a division for superintelligence, a division for mid-term risks, and a division for short-term risks all under one banner. This either/or attitude seems counterproductive. Also, talking about transhumanism and brain uploading creeps people out. I don’t want to crawl into the solar system; I’m a barber, I cut hair for a living.
Liron 01:37:32
You’re a barber? You’re more interesting to listen to than a lot of our guests. You are an eloquent spokesman for the average person. I agree that transhumanism sounds weird to the average person. I also agree it’s nice to have a big coalition.
Liron 01:43:50
Where I don’t agree is that you have to notice when something is a lot more dangerous. Superintelligent FOOM is the most dangerous. Water use is on the least dangerous side. Karen Hao publishes data saying data centers use too much water, but uses inflated estimates. I think it’s lowering the quality of discourse. Do I really have to be an advocate for AI to use less water when I think the claim is BS?
Zane 01:44:20
You’re undercutting your own argument with the tone you use. If you go on a tirade about superintelligence but dismiss things people worry about, nobody is going to meet you there. If you need 100,000 people to save the world, is it worth undercutting your own argument by not taking their concerns seriously?
Liron 01:46:13
Let’s make it specific. Are you personally convinced that water use is a big thing we have to fight, or do you agree it’s overblown?
Zane 01:46:47
I think it should be taken seriously because it signals the start of something bigger. I don’t know that the claim is wrong on the water issue; there is debate. But again, you are trying to isolate one issue. It’s connected.
Zane 01:50:09
I don’t see much merit in the position you’re taking to achieve your goal. You have to meet people where they are. If you walk and talk like a tech billionaire up until you get to superintelligence, people aren’t going to buy what you’re selling. I don’t trust Peter Thiel or Elon to make good decisions. When I hear you guys talk about extinction like “it’s kind of okay as long as not everybody dies,” I think: “What are you saying?”
Liron 01:51:39
Zane, thanks so much. You made your case really well.
Ori 01:51:53
Zane should be a guest host for the Karen Hao episode.
Zane 01:52:03
That’d be great. I think she’s wonderful. Take care, guys.
Marketing Doom Debates and Guest Outreach
Liron 01:53:15
Let me show you guys the kind of invite cards we’re doing. I just did a post on Twitter. It says, “Legendary streamer and political commentator Destiny has booked a ride on the doom train. What do you think his stop will be?” It has a card that says “Booked” with a picture of Destiny.
Liron 01:54:18
We’re trying to raise the social cost of lobbing out wild claims and then slinking away. I want to make it popular to put out these cards saying “Invited,” so people realize that if they ignore the invite, that’s not cool. We want to get people like Zvi Mowshowitz and Tyler Cowen to hash out their beefs in the actual arena.
Ori 01:56:06
I can’t wait to see the Steven Pinker card.
Liron 01:56:12
It is a popularity contest. Dean Ball came on Doom Debates because we matched him with Max Tegmark and got him a decent audience.
Live Call with Jonas: Scenarios for Survival
Liron 01:56:45
We’re going to take our last call. Say hello to Jonas.
Jonas 01:56:49
Hey guys. Imagine it’s fifty years from now, and it went well—we averted doom. What does the pie chart of what saved us look like? Is it a big warning shot, or did we get the stop button? Or were the doomers just completely wrong about the dynamics?
Liron 01:58:04
Most of the mainline scenario mass is in pausing before superintelligence for one reason or another. Maybe there’s a huge AI winter, or we get enough warning shots and everyone says “burn the GPUs.” If you asked me how we build superintelligence and still survive, I think that’s unlikely. My P(Doom) goes from 50% to 80% if we build it.
Jonas 01:59:54
It feels so obvious that there’s no way we can survive actual superintelligence. But I’ve been wrong on things that seemed obvious before, like Bitcoin fifteen years ago. That’s where most of my hope lies—that I’m just being an idiot. I’m getting really frustrated talking to people because I can’t convince anyone. People in the field just say LLMs are hard to control, and everyone else thinks it’s science fiction. What’s the biggest selling point to get people on the doom train?
Liron 02:02:55
As the chat pointed out, it’s hard to convince people of emergencies even when the nightclub is on fire. I wake up every day thinking AI is cool and I’ll make it to retirement—my gut says no doom, but my rational mind says otherwise. The thing I can control is amplifying the yelling. I can help go from 1% of people yelling to 3%.
Liron 02:05:28
We need way more people screaming right now. I notice that I’m not helping enough, but I should be one of the people calling this out. Nathan Labenz is someone who now explicitly acknowledges P(Doom) on his podcast, which is great.
Jonas 02:05:46
I appreciate your work. Thanks, man.
Conclusion and Mission Statement
Liron 02:05:52
This is the point of Doom Debates. I’m not someone who tackles huge, complex projects, but I notice crazy imbalances. The imbalance here is that this problem is the most consequential thing in the history of life on Earth, yet the urgency is low.
Liron 02:07:00
My value add is noticing this is crazy and saying obvious stuff about how high the P(Doom) seems to be. I can raise awareness and raise the quality of debate. If we are to solve the problem, we also have to hammer out a treaty, pause AI, get the public involved, secure AI, and solve alignment. There are a lot of pieces, but I can help bridge the mismatch between how urgent the problem is and how dismissive everyone is.
Liron 02:08:34
That’s a good note to wrap on. Stay tuned for the upcoming episodes with Destiny and Bentham’s Bulldog. Thanks for coming, everybody. See you later.
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏