Doomsday Clock Physicist Warns AI Is Major Threat to Humanity!

Renowned scientists just set the Doomsday Clock closer than ever to midnight. UChicago Professor Daniel Holz, a top physicist and Chair of the Science and Security Board for the Bulletin of the Atomic Scientists, joins me to defend his organization's Doomsday Clock and debate society’s top existential risks.

I agree our collective doom seems imminent, but why don't the clocksetters emphasize rogue AI as the most urgent, irreversible force that could kill us all?

Timestamps

00:00:00 — Cold Open

00:00:51 — Introducing Professor Holz

00:02:08 — The Doomsday Clock is at 85 Seconds to Midnight!

00:04:37 — What’s Your P(Doom)?™

00:08:09 — Making A Probability of Doomsday, or P(Doom), Equation

00:12:07 — How We All Die: Nuclear vs Climate vs AI

00:21:08 — Nuclear Close Calls from The Cold War

00:28:38 — History of The Doomsday Clock

00:30:18 — The Threat of Biological Risks Like Mirror Life

00:33:40 — Professor Holz’s Position on AI Misalignment Risk

00:44:49 — Does The Doomsday Clock Give Short Shrift to AI Misalignment Risk?

00:59:09 — Why Professor Holz Founded UChicago’s Existential Risk Laboratory (XLab)

01:06:22 — The State of Academic Research on AI Safety & Existential Risks

01:12:32 — The Case for Pausing AI Development

01:17:11 — Debate: Is Climate Change an Existential Threat?

01:28:48 — Call to Action: How to Reduce Our Collective Threat

Transcript

Cold Open

Liron Shapira 0:00:00
Daniel Holz chairs the committee of scientists who set the Doomsday Clock.

Daniel Holz 0:00:04
It’s our assessment that this is the most dangerous moment ever in the history of the clock.

Liron 0:00:10
I don’t think that you guys have ever explicitly acknowledged AI misalignment risk.

Daniel 0:00:15
We may not explicitly list it, but it’s definitely one of the reasons that we talk about AI as a growing concern.

Liron 0:00:23
I hope people watch this conversation. That is huge news. Somebody could read the Doomsday Clock statement and not get that.

Daniel 0:00:28
Okay, so I’ll go back and read it again.

Liron 0:00:31
With nuclear risk, it’s like every year there’s a good chance we won’t nuke each other. Muddle through is a proven strategy for nuclear risk.

Daniel 0:00:37
But then the question for you is, why isn’t that true for AI?

Introducing Professor Holz

Liron 0:00:51
Welcome to Doom Debates. Daniel Holz is a professor of physics at the University of Chicago. He’s a world-class astrophysics researcher with a whopping 140 h-index. For those of you not in academia, it means that he publishes a lot and gets a lot of citations. Very impressive.

He’s leading the charge to legitimize the field of existential risk studies in academia. He teaches a course that’s literally called “Are We Doomed?” He founded the University of Chicago’s Existential Risk Lab. He also chairs the committee of scientists and experts who set the Doomsday Clock, the famous Doomsday Clock. That brings us to our conversation today.

On January 27th, the symbolic clock moved closer to midnight than ever before, generating a wave of news headlines across the globe. The clock is determined by the Bulletin of the Atomic Scientists, a nonprofit created by Albert Einstein and Robert Oppenheimer in the 1940s.

In other words, the Doomsday Clock is one of society’s strongest signals of our collective doom threat, and when it hits a new record, you know Doom Debates will be on the scene. So I’m excited to talk with Professor Holz about tracking existential risks and compare our views about one risk in particular, which is AI extinction. Daniel Holz, welcome to Doom Debates.

Daniel 0:02:06
Liron, it’s great to be here.

The Doomsday Clock is at 85 Seconds to Midnight!

Liron 0:02:08
Okay, so we got to start here. Weeks ago, the Bulletin of the Atomic Scientists issued a press release with the headline, quote, “It is now eighty-five seconds to midnight,” the closest to midnight that it’s ever been. You were part of that decision.

Alexandra Bell 0:02:22
The risks we face from nuclear weapons, climate change, and disruptive technologies are all growing. Every second counts, and we are running out of time. It is now eighty-five seconds to midnight.

Liron 0:02:36
So you see 2026 as an all-time record for proximity to disaster, correct?

Daniel 0:02:41
Yeah, that’s right. That’s exactly what this is supposed to capture. And it’s our assessment that this is the most dangerous moment ever in the history of the clock. The clock is specifically risk to all of civilization. I would claim that the first time we really had the ability to end civilization was with nuclear weapons, and that’s why the clock was founded.

And therefore, really, the statement is, this is the closest we’ve ever been in the history of humanity to the end of humanity by our own hand.

Liron 0:03:17
I respect that. I agree with your conclusion. I also think this is the closest to the end of humanity that we’ve ever been. I suspect I’m going to have different reasons that we’re going to get into.

Daniel 0:03:26
Mm-hmm.

Liron 0:03:26
But I just want to say right off the bat, game recognizes game. Both of us understand that communicating about the impending end of the world is a very important thing to be doing. You do it through the Doomsday Clock, I do it through Doom Debates. So we’re joining forces. This is the most doom communication that’s ever appeared on a single screen, I think.

Daniel 0:03:48
Yes, totally, and it’s critical, and we agree, now is the time. This has to be happening. We need these discussions.

Liron 0:03:55
By the way, this is the new Doom Debates studio. We just built this all-new studio for the show. There’s an homage right here to the Doomsday Clock: you can see, according to my show, it is now five minutes to midnight.

Daniel 0:04:07
Okay, nice. I love it. So I’d move it a little forward.

Liron 0:04:12
Well, I would have done eighty-five seconds, but then it would just look like midnight because it’s kind of far away.

Daniel 0:04:17
What we agree is, regardless, it’s too close.

Liron 0:04:20
Yes, exactly. Now, is it okay if I ask you to translate the Doomsday Clock into my preferred ontology for modeling uncertain events?

Daniel 0:04:29
Yeah. I can guess where this is going, but yeah.

What's Your P(Doom)?™

Liron 0:04:37
Professor Daniel Holz, what’s your P(Doom)?

Daniel 0:04:41
Okay, so I’m gonna do the professor thing, and I’m gonna first try to define a bunch of terms, which maybe you’ve already defined, but it helps me before just giving a number. I mean, I can just say, “Look, it’s eighty-five seconds,” but that doesn’t help.

Liron 0:04:57
Right.

Daniel 0:04:57
So when you ask this, there are a bunch of structural issues in the way it’s asked. What do you mean by doom? Do we mean human extinction or just some really, really bad things? I’m gonna take the version which we generally use with the Doomsday Clock, which is sort of this end of civilization type statement. There may be human beings left, but the world that they’re inhabiting is not at all like ours, much, much worse. All the things that we take for granted are gone: electricity, cell phones, functioning governments.

By taking some definition like that, and then the question of probability here, what is the timescale? I think probably you mean on the order of a century, but maybe it’s longer.

So, taking all of that for granted: for me, P(Doom) is from everything, not just AI. I think usually when people talk about P(Doom), they mean AI, but my job is all the possible vectors for doom. At eighty-five seconds, I come up with something which, for me, sounds ridiculously high. And before I say how high, one other caveat: I don’t really have a number. I have a distribution.

Liron 0:06:27
Right. Okay.

Daniel 0:06:27
There’s a lot of uncertainty. I think we all know there’s uncertainty. Anyone that’s like, “I know exactly what it is,” makes me very nervous. So in my head, I have a kind of distribution. I would be shocked if the real P(Doom) is less than twenty percent. I just see no possible argument for that, and we can talk about why.

I don’t think it’s ninety-nine percent. I have hope. So I end up with some kind of distribution, probably peaked in the thirty or forty, maybe up to fifty percent, something like that. Which is a very long-winded way not to answer your question.
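As a hypothetical illustration, a belief state like this can be encoded as a Beta distribution. The parameters below are my guess at the described shape (peaked in the mid-thirties, little mass below twenty percent or near ninety-nine), not Professor Holz’s actual numbers:

```python
import random

# Illustrative parameters only: mode = (a-1)/(a+b-2) = 7/20 = 0.35
a, b = 8, 14

random.seed(0)
samples = [random.betavariate(a, b) for _ in range(100_000)]

# How much belief mass falls below 20% or above 90%?
mass_below_20 = sum(s < 0.20 for s in samples) / len(samples)
mass_above_90 = sum(s > 0.90 for s in samples) / len(samples)

print(f"mode ≈ 0.35, P(p < 20%) ≈ {mass_below_20:.2%}")  # small
print(f"P(p > 90%) ≈ {mass_above_90:.2%}")               # essentially zero
```

The point of writing it down this way is that “I would be shocked if it’s under twenty percent” becomes a checkable property of the distribution rather than a vibe.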

Liron 0:07:14
No, I think you answered it great, because I’m not expecting an answer that’s plus or minus one percent. I’m really happy with plus or minus twenty-five percent, or more accurately, geometrically, if you’re within a factor of two or three in odds. In other words, forty percent is odds of one point five to one, compared to zero point seven five to one. For me, that’s all in the same range.

But then there are some people who come on this show, and they’re like, “Oh, yeah, it’s less than one in a thousand,” which is orders of magnitude difference.

So that’s the kind of ballparks we’re working in. You have what I consider the obviously correct ballpark.
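The “same ballpark versus orders of magnitude apart” distinction is naturally expressed in log-odds space. A quick sketch with illustrative values (the specific estimates are stand-ins, not anyone’s quoted numbers):

```python
import math

def log_odds(p: float) -> float:
    """Convert a probability to natural log-odds."""
    return math.log(p / (1 - p))

# Illustrative P(Doom) estimates
estimates = {"host": 0.50, "guest_peak": 0.40, "skeptic": 0.001}
lo = {name: log_odds(p) for name, p in estimates.items()}

# "Same ballpark" = odds within a factor of ~2-3, i.e. log-odds within ~log(3)
same_ballpark = abs(lo["host"] - lo["guest_peak"]) < math.log(3)
orders_apart = abs(lo["host"] - lo["skeptic"]) / math.log(10)

print(f"50% vs 40%: same ballpark? {same_ballpark}")  # True
print(f"50% vs 0.1%: ~{orders_apart:.1f} orders of magnitude apart in odds")
```

In odds space, 50% and 40% are a hair apart, while 50% and one-in-a-thousand differ by about three orders of magnitude, which is exactly the separation being described.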

Daniel 0:07:48
We agree on this then, yes. No, I think that’s right. And I think the main message is precisely what you just said, which is it’s not infinitesimally small. Anything that’s kind of order one — that’s unacceptably high.

Liron 0:08:07
We got a real doom echo chamber here. I like it.

Making A Probability of Doomsday, or P(Doom), Equation

Liron 0:08:09
Let’s talk about probabilities for a second, because I’m a Bayesian. I think that this idea of epistemic probabilities, of modeling your own belief state using the math of probability theory, makes a lot of sense. And by the way, it looks like you’ve got a conditional probability on the top left of your blackboard over there. So is that also your native ontology for reasoning about uncertainty?

Daniel 0:08:32
So yes. That is actually a cosmology problem that I’m working on. But yeah, that’s exactly right. You like to have priors, and then state them, and work through everything, and come to whatever the conclusion is.

As you know, one of the hardest aspects of this business is we can’t get statistics by looking at the historical data, because we, fortunately, as far as we know, haven’t wiped ourselves out yet. We’ve had one run at it, and here we are, and we’re trying to make some statement about a future probability of something that has not happened before.

Liron 0:09:15
Right. So you can’t do naive frequentist probabilities.

Daniel 0:09:17
Frequentist, that falls apart, so it’s fundamentally Bayesian.

And as in many things, Bayesian priors play a major role. However, there’s a lot of information. Everything is going to be, at the end, some sort of extrapolation, but it can be, I think, extremely informed. And a lot of the discussion, I think you can come to what I would claim are very reasonable positions, defensible positions.
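The “one run at it, zero observed catastrophes” problem has a classic Bayesian workaround: Laplace’s rule of succession. A minimal sketch, where the prior and the timescale are my illustrative choices, not anything from the Bulletin’s process:

```python
# With zero observed catastrophes, a naive frequentist estimate is 0 -- useless.
# With a uniform Beta(1,1) prior over the annual catastrophe probability,
# n event-free "trials" give a posterior mean of 1/(n + 2) (rule of succession).

def posterior_mean_annual_risk(years_survived: int) -> float:
    """Posterior mean annual catastrophe probability under a uniform prior."""
    return 1 / (years_survived + 2)

n = 80  # roughly the length of the nuclear era, for illustration
annual = posterior_mean_annual_risk(n)
century = 1 - (1 - annual) ** 100  # chance of at least one event per century

print(f"annual ≈ {annual:.3f}, century ≈ {century:.0%}")
```

Even this crude prior-driven extrapolation lands at an order-one risk per century, which is why the choice of prior, rather than the (empty) event data, does most of the work in these estimates.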

One of the things I would love to do is come up with a doomsday equation. That’s something I’ve been working on, where right now, the process of setting the Doomsday Clock and coming up with eighty-five seconds is — we have a lot of expertise, we do a lot of deep dives on various things, but then at the end of the day, it’s informed discussion and consensus, but there’s no math involved. I’d like to formalize that a little more.

Liron 0:10:28
I’ll give you my two cents because I know we’re among friends. I think Bayesian probability is such a powerful tool that it’s important to embrace it. So — far be it from me to critique your process — if I were you, I would want to first come up with the Bayesian probability, and then, as a final step — I know the clock is your brand, it’s famous, it was started by Einstein and friends — I’m not saying you should abandon the PR win of the clock, but maybe you should publish a translation table of what the real probability is.

Daniel 0:10:57
Yeah. I love the idea of doing that, and that’s something I’m trying to do. One, it would be great to actually do it at the level you’re talking about and have that inform the actual clock setting. But even if it doesn’t get there, for me, going through and coming up with the formalism has been really fascinating because, as you expect with these things, you really have to write everything down. What are the different terms? How do they interact? What are the priors for each of these terms? How do I defend this? And then, what do I end up with at the end?

I’ve been starting to do that, and it’s a sobering exercise, but I think it’s super important. And the thing I’ve been finding is that the interaction terms are critical.

Whenever you take each thing individually, you get some reasonable-sounding number. But then, if you even just do very mild couplings, things change radically.
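One way to see why mild couplings change things radically is a toy version of such an equation. All numbers and the coupling form below are hypothetical, invented for illustration, and are not the Bulletin’s model:

```python
# Hypothetical per-century risks for each vector of doom
risks = {"nuclear": 0.10, "climate": 0.02, "bio": 0.05, "ai": 0.10}

def p_doom(risks: dict, coupling: float = 0.0) -> float:
    """P(at least one catastrophe). With coupling > 0, each risk is
    inflated in proportion to the sum of the other risks -- a crude
    stand-in for threat-multiplier interaction terms."""
    total = sum(risks.values())
    p_ok = 1.0
    for p in risks.values():
        effective = min(1.0, p * (1 + coupling * (total - p)))
        p_ok *= 1 - effective
    return 1 - p_ok

independent = p_doom(risks)            # treat risks as independent
coupled = p_doom(risks, coupling=3.0)  # add a mild threat-multiplier effect

print(f"independent: {independent:.1%}, with coupling: {coupled:.1%}")
```

Each individual number looks modest, and even the independent combination looks survivable, but a fairly gentle interaction term pushes the total up substantially, which matches the observation that the couplings dominate.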

How We All Die: Nuclear vs Climate vs AI

Liron 0:12:07
So I think the other thing that’s gonna be a big focus of the rest of the conversation is, how much does AI factor into your position on the risks we face?

Daniel 0:12:17
So that’s a complex question. I can both channel the Doomsday Clock and eighty-five seconds, and then my own personal way of thinking about it, and those are distinct. The Doomsday Clock — we have this science and security board, a bunch of experts, and then we talk to lots of other people. We try to make an overall assessment of the field and then distill that all into a number.

Usually, what we do is we assess over the past year, what has happened, are things getting better or worse in terms of all the ways that civilization could end? And that’s our basic question — are things getting better or worse? What have we learned over the last year? What has changed? What has been developed? And that can go either way. Some years, the clock goes back, some years, the clock goes forward.

Liron 0:13:10
Okay, let’s — Daniel, instead of the Doomsday Clock, just you personally, if we hypothetically condition on the outcome that humanity ends up dying, what would be your top three causes why that would have happened?

Daniel 0:13:26
So I’m still gonna do the professor thing and say it’s very hard to distinguish, but let me explain why, and this is also related to the Doomsday Clock. The point is, everything is interconnected. So if you ask, what is the most dramatic way that civilization as we know it ends, the easiest to conceptualize? Well, it’s definitely nuclear war. Nuclear war can happen in the next hour, and it’s all over.

And that is definitely — we have thousands of weapons. The launch happens very quickly. Once they’re launched, there’s nothing that can be done. The decisions to launch are done by certain individuals. So there are a number of people, each of them, if they decide to end civilization, they can do it. For any reason, no checks and balances, that’s our system, and if they do it, within an hour, everything is finished.

This is both the initial impact of the nuclear weapons — and I’ll remind you, we’re talking about thermonuclear weapons, these are totally different. So all of that can happen. Now, that can happen because an AI system is in the command and control. So let’s suppose there’s an AI system there, and the AI system has complete control and decides, for whatever reason, there’s some weird interaction or hallucinates something or who knows what, and civilization ends. Is that because of nuclear weapons or is that because of AI? That’s a question for you.

Liron 0:14:59
Right. You’re saying how would you classify that type of doom?

Daniel 0:15:03
Yeah.

Liron 0:15:03
You can give credit to both, but you’re confident that regardless, the most likely doom scenario will involve nuclear, correct?

Daniel 0:15:10
I think it’s one of the scenarios that’s easy to play out for me. I can imagine many things that happen that end up resulting in nuclear exchange. There are many different scenarios in which there’s a coupling, and in the end, nuclear weapons are the clearest way to think about it.

I can easily imagine other things where nuclear weapons don’t play a role. I’m sure you have many examples of that just from AI. Climate change can also do some things which make life as we know it — the planet basically becomes inhospitable to life. That doesn’t require nuclear weapons either.

Liron 0:15:53
Does that register? I mean, because remember, I’m curious to get your breakdown of, let’s say, we all die by 2050. Don’t you think it’s a little fast for climate change to kill everybody by 2050?

Daniel 0:16:01
Yes. However, again, I’m just gonna do the same thing. You’re gonna hear a lot of this. Climate change is happening. We all know it’s happening. We’re now projecting forward, and the last three years were the hottest on record. We’re getting record snow, storms. Things are bad from the climate perspective and will get worse, and that stuff could accelerate.

So we can easily imagine cases where there are climate refugees in the tens of millions or hundreds of millions, dwarfing any sort of refugees that we’re used to. We can imagine wars over resources, clean water, agriculture, arable land, because what was previously arable is no longer arable. All this stuff could happen and will happen. It’s just a question of scale. It’s definitely already happening, and it’s just gonna get worse over the coming decades.

That’s gonna lead to huge instability, and instability then can lead to conflict, which could lead to nuclear war. For example, is that a climate thing or is that a nuclear thing? Is it even worth spending a lot of time on the distinction? I don’t know, but if you try to do the counterfactual world without climate change and world with climate change, it is possible that climate change plays a major role in increasing the likelihood of doom.

Liron 0:17:28
So just to compare with my view: if you ask me, what’s the probability that by 2050, half the human population or more gets killed, directly attributable to climate change? I feel like that probability is very low, like one percent. And if you ask the probability that that happens because of nuclear war, I’d give you more like twenty percent. So I see a pretty big separation there. You just don’t see as much of a separation, correct?

Daniel 0:17:49
No, I would agree with the separation in that if you ask how many people will die directly because of climate change — because there’s a heatwave and people die, and we’re gonna do attribution and say that’s because of climate change — I agree that number is likely smaller for the coming decades. Long term, that’s a different story, but for the coming decades, I would agree with that.

And if you say the fact that there’s been all these societal and political pressures because of climate change leads to conflict, which then leads to some war that ends up with many people dying, we’re gonna call that nuclear, or we’ll call that war, and we won’t ascribe it to climate change, then I agree with you. But it’s a little tricky. How do you do the attribution of all these things?

So it’s this thought experiment — world with climate change and world without — and the world with climate change, all these other risks go up significantly. That’s basically what I’m saying.

Liron 0:19:04
Some people might even use climate change doom as a litmus test for whether somebody is too hair-trigger about their doomerism. Maybe it’s actually important to pass the litmus test by pointing out that climate change isn’t a doom-grade threat. What are your thoughts on that?

Daniel 0:19:18
I don’t know about litmus test. I would agree that the idea that climate change, just in and of itself, if you could keep everything else the same, is not at the level of a full-on nuclear war. For the timescales we’re talking about, there are lots of uncertainties, tipping points, but for reasonable assumptions and extrapolations, I would agree with that.

But I would disagree with writing it off: as a threat multiplier, climate change and its impacts on everything else can push us over to the point where doom becomes much more likely than not. I think it can be a significant threat multiplier.

Liron 0:20:08
One difference that I see between climate change and nuclear, besides the odds or the directness of the threat, is that on climate change, I personally feel good about kicking the can down the road, which is why I like solutions like geoengineering, like dumping sulfur dioxide in the atmosphere, because people point out, “Yeah, but in fifty years, we’re creating a somewhat harder problem for ourselves.” But I like that because I’m thinking: Yeah, we’re gonna have so much progress in geoengineering over fifty years. I feel good about kicking the can down the road.

But then I look at nuclear, and I just don’t see progress being made on how we’re dealing with the coordination around nuclear. And I think your institute has been good about pointing out that we’re really acting like clowns. You guys had the clock at seventeen minutes to midnight, which is a lot of minutes, relatively, because the Cold War was ending, and people seemed to be cooperating to not nuke each other, and now that cooperation is really dissolving. The world is kind of going into chaos.

Do you agree there’s a difference, where climate change, you can be optimistic that solutions are coming down the road, and nuclear, it’s like we’re just really treading water here?

Nuclear Close Calls from The Cold War

Daniel 0:21:08
Yeah, I’d love to share your optimism about climate, so we’ll get to that. But I agree about your pessimism about nuclear. I think it’s very hard to look at the last few years, even the last months, even the last week, and say everything’s going great in nuclear.

Last week, New START expired. New START being the last treaty governing strategic nuclear weapons between the US and Russia. And so after over half a century of arms control, there’s no more arms control of the key stockpiles that would end civilization. And that’s insane.

One of the few things that we accomplished coming out of the Cold War was decreasing the number of weapons from over seventy thousand nuclear weapons to now something like thirteen thousand, with maybe five thousand deployed, ready-to-go weapons. And New START played a role in capping these. That’s good. It’s still way too many weapons, still enough to wipe out civilization many times over, but it’s a much smaller number, and I think it’s generally agreed that fewer weapons make nuclear war less likely.

Okay, that’s been progress. Now we’re tossing that away. It happened, and I think it’s one of the most consequential things that’s happened. If over the coming years we end up with stockpiles ballooning, a race to develop new types of nuclear weapons and new delivery systems, an arms race between the US, Russia, and China — if all that happens and ends up in a nuclear World War III, and it’s all over, one of the things you’d be able to point back at clearly, if there were anyone left, is the expiration of New START as the inflection point.

It happened, no one really talked about it, and most people didn’t even notice. That’s super depressing, very alarming, very dramatic in the circles I’m in, where people worry about nuclear war. How can we be forgetting everything? All this hard-won knowledge from the Cold War, when we came so close to nuclear annihilation so many times and built up these mechanisms of trust and control. It’s the one thing that we got out of it. We managed to survive. A lot of that was luck.

We ended up with these sorts of agreements and relationships between nations so that we don’t end up blowing ourselves up, and we’re just tossing that away.

Liron 0:24:17
This is what I like most about the doom clock idea, is that healthy respect, where you look back at how history played out in the Cold War. Oh, we survived. I think you guys are very much on the same page as me, which is — you run it again, you run the simulation again, and we both agree that counterfactually, it easily just could have gone a different way, but then we got lucky, correct?

Daniel 0:24:36
Exactly. That’s exactly the thing, and that’s exactly the way I think about it. If we imagine doing this many times, I think many of the times we don’t make it through the Cold War.

That’s not hypothetical. People that were involved in these decisions will say, “Yeah, it was like a coin toss, whether it would’ve gone this way or that way.” Kennedy was very famous for saying this about the Cuban Missile Crisis, and even at the time, he didn’t know many of the close calls. There were so many little things.

Liron 0:25:13
You read Kennedy’s account or the US side, and it’s like: Yeah, we’re so brave. We stood our ground, and the Russians backed down. And then I read The Doomsday Machine by Daniel Ellsberg, and he’s like: Yeah, what the Americans didn’t know was that at the time, the Russians actually didn’t have good control over when the Cubans would decide to launch the nukes. And so they were like: We can’t even negotiate.

And so Khrushchev was just like, “Okay, I’m just gonna back down. This is getting too crazy. The nukes could get launched accidentally before I even mean to.” The Americans didn’t know this was going on, so they learned the wrong lesson from the negotiation. The nukes could easily have been launched, is basically what I’m saying.

Daniel 0:25:52
Totally. And I think that Ellsberg book, The Doomsday Machine, should be mandatory reading. It just captures it from someone who was there and really knows.

And one of the other lessons which is so important is, Kennedy and Khrushchev, if you had asked them, would have said, “The one thing we’re definitely not gonna do is end civilization over this. Not worth it.” And they were, I think, genuinely not irrational. And yet, because of miscommunication, because of a lack of understanding of command and control and the fact that there were nuclear torpedoes in play, because of just mistakes, they came so close so many times. And they really had the right frame of mind for it.

So now run the movie, and you happen to have someone else in power, or we just don’t get as lucky — I don’t think it works out.

Liron 0:26:59
Exactly. So the Doomsday Clock, I feel like that’s the most valuable contribution it’s giving to society, is just reminding us that doom is always here. We’re not that far from doom. I know daily life doesn’t feel like it, but we constantly have these near misses. We might have another one.

I think this take is important because I go on my corner of social media, basically tech Twitter, that’s where I hang out, a lot of people building things, the builders of the world, and everybody acts like the lesson that they learned from history is that everything’s gonna be fine. You should be optimistic. That’s the lesson they learned. And so you guys are there reminding us — no, it could have been different, and it could be different tomorrow. And yeah, I appreciate that.

Daniel 0:27:41
That’s exactly right. And that is the goal, and it’s a tricky thing, because we’re not — the Doomsday Clock is not predictive. We’re not trying to say, “This is what’s gonna happen.” It’s informed by the past, it’s informed by the last year, and it’s making some assessment over the history of the clock, where are we?

And it’s exactly what you said. Our assessment is: This is super risky. Part of the reason is exactly that people don’t seem to think it’s risky. If things are dangerous, like during the Cold War, people were like, “Oh yeah, this is bad. We gotta take this seriously.” That is helpful. If you’re at the same level of danger, but everyone’s like, “There’s no problem here. We don’t have to think about this,” that’s much more dangerous because then the likelihood that you stumble into something or some miscalculation gets much, much higher.

History of The Doomsday Clock

Liron 0:28:37
Exactly. And now, for the viewers, I looked up the history of where the Doomsday Clock was moving. It started in 1947. It debuted at seven minutes to midnight. Then in 1953, it was the first major scare. It ticks forward to two minutes to midnight after the US and the Soviet Union both test thermonuclear hydrogen bombs, and this was the closest point for decades — two minutes to midnight.

And then in 1963 to 1972, the clock moved all the way back to twelve minutes because there was the Partial Test Ban Treaty and other arms control agreements. And then we get to 1991, the post-Cold War optimism. The signing of the Strategic Arms Reduction Treaty, START. The clock is set back seventeen minutes, and this is the furthest it’s ever been from catastrophe.

And then we get to 1998, the clock jumps forward to nine minutes because India and Pakistan conducted nuclear weapons tests. And then we get to 2015 to 2018 — I remember Trump versus Kim Jong Un, Rocket Man — the time drops to three minutes to midnight, then two minutes to midnight, matching the 1953 record because of North Korean tests, the rising threat of climate change.

And now we’re in this new era, 2020 to 2026, the era of counting down the seconds. 2020, a hundred seconds to midnight, breaking the 1953 record. 2023, ninety seconds to midnight because of the war in Ukraine. 2025, eighty-nine seconds to midnight. And finally, you’ve shifted it all the way to eighty-five seconds to midnight because basically international politics are worse than ever on the nuclear front and other coordination problems, correct?

Daniel 0:30:08
Yeah, that’s a very good summary. There’s a lot of detail in there and additional movements, but yeah, that’s the basic trajectory.

The Threat of Biological Risks Like Mirror Life

Liron 0:30:18
I want to touch on one other thing. When I asked you about how do we all die, you hit on some things. Did you wanna also make sure to include bio risk, mirror life, what about that whole category?

Daniel 0:30:29
Yeah. That’s one of the things that we talk about a lot. And I have to say, this is an inside baseball thing, but we have this committee, all these experts, and they’re in separate groups, writing white papers about what they think the issues are. Then we all meet and discuss, and the different groups present: Here’s what’s happened over the last year, here’s what we’re worried about, here’s what we’re tracking.

And every year, especially over the last few years, the committee that does the mic drop — like, this is terrifying — it’s always the bio people. They always seem to come up with these new, inventive, terrible things that might happen.

And the latest that you alluded to was mirror life, which is not something I knew about a few years ago, but they’ve started to talk about. Basically, most life on Earth has a handedness. The DNA twirls one way, and basically, all the fundamental biological mechanisms seem to have a handedness, and there’s just one flavor of that for life on Earth. That’s the way we’ve evolved.

And it’s a cool, abstract question to ask, can we just synthesize the opposite handedness? Wouldn’t that be neat? A lot of scientists thought, “Let’s try to do that. That seems like a good challenge,” and maybe it has some applications. You could imagine some sort of drug delivery — if you have the opposite handedness, and then you ingest it, it won’t be attacked by your stomach acid. It’s like a protective layer, and so you can do drug delivery better.

But then at some point, biologists realized, “Wait a second, this stuff has no predators. Nothing has evolved to compete with it or consume it.” So if you actually synthesize this stuff and it can reproduce, it would just keep reproducing without limit. Nothing in nature can touch it, and eventually it just takes over and kills everything.

A lot of scientists, a lot of people that study this think, “Yeah, that’s a reasonable possibility.” We can make this. There’s nothing that prevents us from building this technology, and there’s a reasonable chance that if it were released, it exponentiates very quickly, and then that’s it for all of us.

It’s just one example where the technology develops, there are all these tools, and these tools can be great — new drugs, addressing diseases — the biological advances have been extraordinarily beneficial, but there’s the potential that there could be something catastrophic. AI, of course, falls right in the same bucket.

Liron 0:33:33
I see eye to eye with you guys on mirror life, and I think bio is trending in the wrong direction for now.

Professor Holz’s Position on AI Misalignment Risk

Liron 0:33:38
So we’ve hit on a lot of the major landmarks of the risk landscape. I think a lot of the conversation I want to have with you is about super intelligent AI, because I’m getting the sense that it’s not as central in your doom worldview as it has become in mine in the last few years. So maybe I can walk you through what I think are the major points of the AI doom argument, which I’ve lined up in a sequence. I call it the doom train. You’re gonna ride the doom train, you’re gonna see all the different arguments, all the stops.

Daniel 0:34:09
Yep.

Liron 0:34:10
I’m curious if you’re gonna get off on a specific stop, but to start high level — what if we narrow the aperture to just look at AI? I know you said it would be related to other things. I guess the one thing I’ll start by asking you is, what’s your probability of doom from autonomous AI that goes uncontrollable before we make it robustly care about what humanity wants? Because that’s my number one doom scenario — uncontrollable, rogue, super intelligent AI.

Daniel 0:34:35
Okay. Again, I’m not gonna make statements as hard as you’re probably hoping for. What I will say is that I’m probably reasonably aligned with your concerns, and it may just be a question of timescales.

I am very hesitant to make some statement like, “AI will never be able to do blah.” I don’t know how you can have any confidence along those lines. So I’m not saying AI will for sure — we’re gonna definitely get to super intelligence and all this stuff. I’m not saying that necessarily. But to say we know for sure it won’t ever happen, I’m very uncomfortable with that.

I think it could happen. The timescales could be shorter than people imagine. And the things you’re talking about — alignment issues — those are clear concerns. People have been talking about AI capabilities and possible areas of concern for a while, and now we start to see some of that emerge. The fact that in some sense that was predicted and now starts happening — it’s early stages — but I don’t know anyone that actually isn’t alarmed by that.

That trajectory says the practice of really thinking through how things might go wrong is valuable, and the people who have been doing it are onto something.

Liron 0:36:19
Yeah, just to quantify it: my own ballpark probability that rogue AI — superintelligent, unaligned, uncontrollable AI, we’ve lost control — happens by 2050, twenty-four years from now, is around fifty percent. Incredibly huge. Even more than the nuclear threat.

And I don’t think you’re quite there, but I don’t think you’re at zero. It sounds like you’re at least at one percent. So roughly, where are you between one and fifty?

Daniel 0:36:45
That’s many e-foldings. That’s many generations of improvement. I can see why people might say, “Yeah, over that many years, on the order of twenty-five years, that could happen.” I would probably end up in the ten percent range or twenty percent, but again, with big uncertainties and totally open to the possibility that maybe it should be much higher.

Liron 0:37:28
Got it. I mean, ten to twenty is already pretty high. Even I have difficulty going much higher than that, because there is so much uncertainty. Once you’ve said ten to twenty, I don’t even see it as a meaningful disagreement anymore. It’s like, “Oh, crap!”

Daniel 0:37:44
That’s probably right now. What exactly goes wrong and how, there are lots of question marks there. But just in terms of the raw capability of these systems and some of the concerns we may have, I think that’s a totally reasonable number.

Liron 0:38:02
So when I do the whole doom train, and I say, “Which part of the argument do you find unconvincing?” It’s possible you’re just gonna find all the arguments convincing because you’re gonna end up in a similar place that I already am.

Daniel 0:38:13
Yeah. And what I would be interested in for you is sort of this level of confidence. I view it as there’s this possibility, some probability of these things. The consequences of getting this wrong are very large, and since I’m really interested in the long-term survival of humanity, there’s a huge penalty if some technology has the possibility of curtailing the future of humanity.

And there are some things where the nuclear case — I think the probabilities are large, and the consequences are clear. And for AI, there’s just this cloud of uncertainty around it. As the years go by, that cloud becomes less uncertain. But I’m very curious to know how you think about it and what your uncertainty is on your number.

Liron 0:39:08
Oh, totally, the uncertainty is high. I think it would be crazy not to allow ten percent plus, and I also think ninety percent plus is crazy. The world is so complicated. How are you gonna get greater than ninety or less than ten? I think both of those are crazy.

And I’m shocked that people today act like experts. Yann LeCun is a top expert on the building of AI. He doesn’t seem to be an expert on the consequences of having superintelligence roam the planet, but he’s saying his P(Doom) is less than point one percent. I know plenty of really smart people who are saying this shocking number, and normally, that would make me question, “Okay, well, then I must be wrong because I’m not as smart as them.” But there’s also a bunch of smart people who have a high P(Doom), so I can’t just use the test of who’s smarter because you just get all over the map.

But yeah, it’s crazy to me that they’re assigning such a low probability. A thing that’s weird to me, though, is — you’re coming out with this super sane estimate in this conversation, ten to twenty, plus or minus, but then in February 2024, Rachel Bronson, she used to be the CEO of the Bulletin of the Atomic Scientists, right?

Daniel 0:40:07
That’s right.

Liron 0:40:07
And she said, “We’re not yet convinced that it’s an existential risk.”

Rachel Bronson 0:40:12
When the recent breakthroughs in ChatGPT came through, and you have this surge right now — AI having their Oppenheimer moment, if you will — the leaders are grappling with consequences, and many in that realm are saying it’s existential.

Very early, we’re like, “This is very familiar to us, and we should listen.” So I think where we are — and Daniel can speak to the debates that we had internally as a board — we’re not yet convinced that it’s existential. We’re not convinced yet that it is, or rather, that it’s not just a tool that individuals still have control over.

Liron 0:40:49
So what do you think of that?

Daniel 0:40:50
Yeah, I think the discussion within the clock has been nuanced in the sense of, when we talk about AI risk, do we mean a threat multiplier of many possible things, or do we mean solely this outcome where AI becomes super intelligent and something dramatic happens, it enslaves us? What is the likelihood of that specific scenario compared to nuclear war over the next decade or a couple of decades?

And we try to balance that, and that gets harder because of this cloud of uncertainty. Nuclear, it’s pretty — I think we can start to make real estimates, given history. And with AI, it’s harder. And I think that’s been part of what she was trying to capture.

Liron 0:41:48
When I hear a statement like that, it’s frustrating to me because I feel like that’s a kind of statement that’s delegitimizing Bayesian epistemology.

Because she’s saying, “We’re not yet convinced it’s existential.” Not yet convinced. There’s no — as a Bayesian, we don’t really flip our belief state from not being convinced to being convinced. We just have a probability distribution. So when you’re telling me ten to twenty percent, that is huge news. The world should know that top people like yourself are thinking ten to twenty percent.

And then when your former CEO is saying, “We’re not yet convinced it’s existential,” I just feel like that communication is missing an opportunity to teach the world how smart grown-ups are using Bayesian epistemology.

Daniel 0:42:32
Yeah, I can see that, and that’s fair. One thing to keep in mind is the field has been evolving. If we were having this discussion five years ago, my number would likely be different, again, with huge uncertainty around it.

And I think it’s important to normalize and give grace to people’s inability to really take what’s happening now and try to project. It’s a tricky business. I know Rachel, and I know she would say, yeah, there’s certainly a tail which is of great concern. And what she’s trying to do, which we often are stuck trying to do at the Bulletin, is — we come up with one number, eighty-five seconds, and we don’t get to have a lot of nuance. We don’t have a probability distribution over the clock. We get one number, and so our message has to be pretty clear.

And right now, I think there are some things that are extraordinarily clear. Nuclear risk, very high. Climate is really a grave concern, but of a different nature. Biological threats, very high. And then what we call disruptive technology, which includes AI, and in many ways is increasingly dominated by AI, is also very high, and we say that very explicitly in our clock statement.

But we also mention things like mis- and disinformation, also fueled by AI, huge risk, and that’s something we’re seeing right now. And so you get to a point where — we’re not trying to be too much in the business of projecting and making predictions as opposed to looking at what has happened over the last year, how are things going, and saying, “This is of concern.”

Does The Doomsday Clock Give Short Shrift to AI Misalignment Risk?

Liron 0:44:49
In general, you guys have a good attitude about letting all the different doom risks into the tent, because doom risks pop up, you want to consider them seriously. And you’ve mentioned AI misuse as a doom risk. I’ll read a quote that you gave to CBS News. You said, “AI is a significant and accelerating disruptive technology. AI is also supercharging mis- and disinformation, which makes it even more difficult to address all the other threats we considered,” and we talked about that in this conversation. Maybe it’ll even accelerate nuclear war.

So you guys have made statements about AI misuse. This idea of AI misalignment, where we have the super intelligence, and even if an evil human isn’t constantly giving it commands, maybe it just accidentally runs away and the off button stops working — which is a big concern of mine. I don’t think that you guys have ever explicitly acknowledged AI misalignment risk. Do you think that that’s worth acknowledging?

Daniel 0:45:43
We certainly discuss it. That is definitely something that has come up. It is clearly a concern, and we already see it in some AI systems. We may not explicitly list it, but it’s definitely one of the reasons that we talk about AI as a growing concern, which we do.

And my guess is as the years go on, we’ll probably be more specific about the different types of concerns associated with AI. But let me ask, because this is — so what specifically, if you had to pick one and say, “Okay, this is the route by which things go wrong, here is the full scenario, beginning to end,” what is it?

Liron 0:46:43
Out of all the different problems, I think it’s basically some AI company gets a little ahead of the others, or it could even be a private researcher, and they’re like: “Oh, wow, this system is more robust than before. It really doesn’t need any input from me. It could run a long time, and it’s just so good at executing on these tactics. It figures out how to be a virus and seize a bunch of computing infrastructure. Oh, this is pretty cool. Oh, oops, I forgot to program myself an off button.”

Kind of like the Morris worm, I think in the 1980s, it accidentally took down some huge fraction of the internet because there was a bug in the code. It was supposed to stop spreading, and then Robert Morris, the original designer, was like: “Oh, oops, forgot to fix that part.” But then he couldn’t fix it because it had already left his control.

I think the analog of that is very likely to happen when AIs get super humanly powerful. I think it’s very easy to just forget the off switch.

Daniel 0:47:34
And then what happens?

Liron 0:47:36
So then the AI just gets to do whatever it wants with the world. Whatever it has in mind, that’s just what’s going to happen. So then the only question becomes: Okay, so what did it have in mind?

Unfortunately, we haven’t really figured out the science of actually getting AIs to robustly have something in mind. Some people will claim we do, but we really don’t. For example, there’s this transformation that AIs can do, where they just rewrite their code, and this kind of AI would totally be able to do that. It would be like: “Hey, I know how to write a better version of myself,” so it rewrites a better version, and this idea that it can pass forward some core of what the new version wants to do — we don’t have a good handle on that.

We have some ideas, in the vague stage, but we’re about to just unleash this thing, and I don’t think we have a good architecture for the off button, for the robustness of the future successors. But we’re just tinkering with it anyway.

Daniel 0:48:22
Yeah. I agree completely with all of that. I think all of that is a concern. How quickly that’ll happen, what might end up in that regime where we can’t turn it off and we don’t really know what its alignment is, what happens next — I’m more fuzzy on. I’m not gonna make strong statements about “and then we’re all doomed.”

But of course, it’s a concern if this scenario plays out, and it’s hard to argue that it won’t play out, especially in the context of this arms race, AI arms race, where everyone — it’s winner take all, we have to go as hard as we can, and any discussion of AI safety or regulating or anything means we lose. If we’re in that context, then that’s terrifying.

And so the message we have, which is a very common message and actually is common across all our risk areas, but certainly for AI, is there is a huge risk here. It’s a potential threat. We want this — this is a powerful new technology, and it makes sense to be deliberate about it. I assume that’s also what you would advocate, and I agree a hundred percent.

And the sad thing is that’s not where we’re headed. It’s very clear that we’re in this take no prisoners, accelerate as fast as possible — this is a goal, we have to win.

Liron 0:50:12
You’re being super calibrated in this conversation. It seems like we’re on the same page about misalignment risk, but I feel like we got short-shrifted by the statement of the Doomsday Clock, because it doesn’t talk about misalignment risk.

Daniel 0:50:24
Well, I think we talk about AI risk, and we talk about having some sort of regulation, guardrails. We talk about all those things. And underneath that is definitely the alignment problem, whether you have a kill switch, all these things — basically, very straightforward AI safety questions are part of what we have in mind. At least that’s what I have in mind when I read this document.

Liron 0:51:02
I hope people watch this conversation. Because reading your guys’ statement, you wouldn’t get the sense that you think there’s a ten to twenty percent or higher chance, in the next couple of decades, of extincting everybody because of misaligned AI. That is huge news, and somebody could read the Doomsday Clock statement and not get that.

Daniel 0:51:20
Okay, well, that would be unfortunate. I would hope — okay, so I’ll go back and read it again, but just to be clear, the Doomsday Clock statement is saying, “Here are all these different things we’re worried about: nuclear, climate, bio, AI, disinformation, and all of them are a concern.” We’re very careful not to say we’re only worried about one. We treat them all equally because they’re all equally a concern.

And in particular, one of the overarching concerns is the fact that addressing any of these concerns requires international engagement, multilateralism, all this infrastructure, which we’re currently in the process of tearing apart. So if you want to have guardrails on AI, it’s gonna be very hard if we’re in an arms race with China and everything is zero sum. That is then what we highlight as our key concern.

But I don’t — I would hope no one reads our statement or hears about eighty-five seconds and says, “Everything’s fine,” because that’s definitely not the message.

And no one should read it and think, “Yeah, everything’s fine. AI is definitely not a problem.” That is definitely not our message. It is a problem. It’s not the only problem. I think this might be the only place where we differ. I think it sounds like you’re quite convinced AI is a problem, but I view bio as also a problem — an equivalent-level problem. I view climate as also a problem. It sounds like you’re less convinced of that. I view nuclear as — these are all major problems which are being exacerbated by the geopolitical situation, being exacerbated by disinformation.

All this stuff is this perfect storm, where as these risks are increasing, our ability to address the risk is decreasing. If that continues, doom approaches, and that’s why we’re at eighty-five seconds.

Liron 0:53:43
I mean, I see what you’re saying. With nuclear risk, it’s like every year there’s a good chance we won’t nuke each other, and we just have to keep muddling through. Muddling through is a proven strategy for nuclear risk. Sure, every year there’s a one percent chance or whatever that it doesn’t work, but at least it’s like, okay, we’ll probably survive a few decades. If we can kick nuclear down the road a little bit — that’s what it feels like to me.
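[As a rough sanity check on the “one percent a year, muddle through” framing: here is a minimal sketch of how a constant annual risk compounds over decades. It assumes, as a simplification, that the risk is the same every year and independent across years; the 1% figure is Liron’s ballpark in the conversation, not a measured rate.]

```python
# Sketch: cumulative catastrophe risk under a constant, independent annual risk.
# The 1%/year figure is the conversational ballpark, not an empirical estimate.

def cumulative_risk(annual_risk: float, years: int) -> float:
    """Probability of at least one catastrophe occurring within `years` years."""
    return 1 - (1 - annual_risk) ** years

for years in (10, 25, 50, 100):
    print(f"{years:3d} years at 1%/year -> {cumulative_risk(0.01, years):.0%}")
```

[At a constant 1% per year, cumulative risk reaches roughly 10% over a decade, about 22% over 25 years, and about 63% over a century, which is the arithmetic behind Daniel’s pushback below that “muddling through” does not stay cheap over long horizons.]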

And then bio is always a wild card. You never know how bad the pandemic is gonna be, but at least we can hope to have really good bio. So as long as we accelerate good bio forward, then we’ve got to hope that the good can outweigh the bad.

But the thing is with AI, AI is in a league of its own for me because I really don’t think that we have a handle on what to do about superintelligent AI, but I also really think that we’re about to create superintelligent AI. Some people are giving the timeline as two years. And if you ask me the timeline, I would just tell you a few years, I don’t know. We’re so close to this, and it’s irreversible. And the consequences, there’s no way to stop them. That’s the problem. It’s a hundred percent extinction.

So for me, AI is just in a separate category. What do you think of that?

Daniel 0:54:42
Yeah, I would argue that there’s this cloud of uncertainty about AI, so it might be longer. It could be decades. I don’t know, and you talk to different people — it’s hard for me to have a firm statement, but there’s definitely a risk there, and I don’t want to downplay that risk.

For nuclear, this idea that it’s all gonna be fine and maybe we can muddle along — I would sharply disagree with that. I think most people that have looked into it and done a sober assessment of the Cold War would say we were very lucky, and that fifty-fifty is maybe an optimistic view of everything that happened: the odds of making it through were probably less than fifty-fifty.

And we’re now entering a stage where, because of the way things are going, no one’s paying attention. We’re about to go into a new arms race. The arms race is not bilateral — it’s now not a two-body problem, it’s a three-body problem, which is fundamentally unstable from a physics perspective. There’s no way to get parity among three nations.

So China, Russia, and the US — the US is saying we want to have more weapons than Russia and China combined. If everyone says that, we end up with infinite weapons on all sides. That’s not a good solution. That seems to be the mentality and maybe where we’re headed, so that’s super alarming. Meanwhile, arms control is going away. There’s a threat of resumption of testing.

Last year, there were wars involving nuclear-armed states — India and Pakistan, Russia and Ukraine. These are states with nuclear weapons attacking other states. And Putin has said multiple times, “I’m gonna use nuclear weapons. You gotta be careful. This is not a bluff.”

Now, to say it’s one percent, we’ll muddle through — I find that very optimistic. I think the risk is high, and it is now, and that makes me extremely nervous. And I agree, there’s also a risk with AI. It has a slightly different character to it, but so does climate. So I basically agree with you on AI, but disagree with you in the characterization of nuclear risk, because I think it’s actually much more present and in our faces right now. Especially if people don’t think it is and think this is not the major concern — that I find especially alarming.

Liron 0:57:44
I’m glad you’re standing up for nuclear because I actually agree with you. Nuclear risk is vastly underrated. I agree with you, The Doomsday Machine is really good. You’re clearly a true connoisseur of doom. You know a good doom when you see one.

I agree with that. On the AI front, though, have you ever dived into the writings of Eliezer Yudkowsky?

Daniel 0:58:03
I’ve read some. I am not — I don’t have nearly as much expertise on that as I do of others, but I know the contours of those arguments, and I do find many of them compelling.

Liron 0:58:23
Nice. Yeah, so he has this 2025 New York Times bestselling book, If Anyone Builds It, Everyone Dies, co-authored with Nate Soares. You have a board member of the Doomsday Clock, John Wolfsthal, and he reviewed the book. He wrote: “A compelling case that superhuman AI would almost certainly lead to global human annihilation. Governments around the world must recognize the risks and take collective and effective action.” You basically agree with that?

Daniel 0:58:50
Yeah. And that’s what we say in the statement. John Wolfsthal is a member of the board, of the Science and Security Board, and I agree. It’s consistent with what we say in our statement, which is governments need to take action.

Why Professor Holz Founded UChicago’s Existential Risk Laboratory (XLab)

Liron 0:59:09
So let’s talk about your other organization that you started, the X Lab. Tell us about that.

Daniel 0:59:14
Yeah. The Existential Risk Lab — this is a laboratory at University of Chicago. X Lab, for short, is an attempt to create an academic environment focused on these existential risks.

So I should say that the Doomsday Clock, the Bulletin of the Atomic Scientists, which sets the Doomsday Clock, was founded at the University of Chicago by Manhattan Project scientists who were freaked out by this new technology they had developed. And so the analogies with AI are very clear. You have scientists, the scientists developed this, and then got very worried.

And these scientists, the experts, the people that understood the power of the atom, which they had just unlocked — those scientists in 1945 were like: “We’re worried, and let us tell you why. We’re worried that other nations will develop this expertise. We’re worried that these fission weapons are just the beginning.” They didn’t — I mean, it was classified, but they knew that fusion weapons were probably around the corner.

“We’re worried that there could be an arms race. And we’re worried that this will lead to a World War III with nuclear weapons, and that would be catastrophic for civilization.” All of that in 1945, when only the US had these weapons, and they had zero or a few, depending on what month you’re talking about. They already recognized all this, created the Bulletin of the Atomic Scientists, and then created the Doomsday Clock, and the whole point was to have this organization warn and use its expertise. I think the analogy with what’s happening in AI right now is extremely instructive.

But okay, to your question about X Lab. The Bulletin, founded at the University of Chicago, is housed in the Harris School of Public Policy on campus, but it has no formal connection with the university. Then I’m on campus, I’m a faculty member, students come to me and say, “I want to learn about this and make a difference,” and I have nothing to offer them. The Bulletin doesn’t — it’s not a research unit.

There is no real research community on existential risk. There’s no journal on existential risk. There’s no major in existential risk.

Liron 1:02:02
Give us an example: a student who found the X Lab, and what you’re enabling them to do in particular. Give us a taste.

Daniel 1:02:09
Okay, we have all sorts of students involved. We have many projects. Right now, we have a working group on nuclear issues where students are looking at improving models of nuclear winter to try to better understand what’s going on. We have students working on AI safety related work. One of our students actually went off afterwards and helped pass the RAISE bill in New York, played a major role in getting that through.

And we have projects like — one of our projects has to do with using AI tools to map AI data centers, and that’s one of these cross-cutting projects where it shows the power of AI tools. It creates a database to capture what AI development looks like, but in some ways can also be thought of as a target list if you’re worried that some nation is about to develop superintelligence and you want to stop that from happening. It also ties into energy — if they’re gonna be small modular reactors at every AI data center, and then they become targets, what happens next? It gets to be a very complicated project and problem, and that’s something we’re exploring.

We have an AI safety primer course, which is available on our website, which is a way to bring students up to speed if they want to make contributions. If any students are listening, and you want to learn about these things and develop some tools and capabilities, we can help you do that.

We run a summer fellowship, a research fellowship, where we train students and then set them loose with faculty research advisors from all over on a whole range of projects, and they drive their own research projects. We brought Stuart Russell to campus. I teach this class called Are We Doomed, which is part of X Lab and is meant to eventually turn into potentially a minor or a major in existential risk.

And I should say, Geoffrey Hinton was one of the lecturers in my Are We Doomed class. AI has always been part of what we’re about.

And then we do things like we ran this Nobel Laureate Assembly for the Prevention of Nuclear War this past summer, where we brought Nobel Laureates to campus, along with experts on the nuclear threat, and produced a set of recommendations for how the world could reduce the nuclear risk. Very concrete, clear, achievable recommendations, and we’re now going out and trying to push those recommendations. We presented them already, and we’ll present again to the United Nations. We’re working with other groups around the world.

So basically, we’re just trying to do anything we can to have impact from an academic perspective — produce research, produce recommendations that can go on and make the world safer.

Liron 1:05:52
This class that you’re teaching at UChicago called Are We Doomed: on the subject of AI existential risk, do you think you could make “If Anyone Builds It, Everyone Dies” one of the assigned readings?

Daniel 1:06:02
Yeah, I think it likely will be. We have a pretty deep set of readings, and my guess is for the next iteration, it’ll be on there.

Liron 1:06:14
Very cool. I think it should be because I think the title says it all: If Anyone Builds It, Everyone Dies. I think there’s good reason to think that’s literally true.

The State of Academic Research on AI Safety & Existential Risks

Liron 1:06:21
What is the state of academic research on the safety of these kinds of misaligned AIs? There’s one professor, Tyler Cowen, maybe you’ve heard of him, who’s basically saying that there aren’t any convincing peer-reviewed papers on why we should expect a high probability of near-term AI existential risk. But I would flip it around and ask you: do you feel like the state of academia around how to survive misaligned AI is anywhere near adequate?

Daniel 1:06:50
So it just turns out one of the student projects is a reproducibility project, where they’re trying to reproduce AI results and AI safety results. In part because of what you’re talking about — the peer review is lacking, and you hear claims, and it’s very hard — so they’re trying to systematize that. I’m quite excited about it, I think it’s very promising and interesting, and they’ve already found some interesting results trying to replicate and failing to replicate various results.

So there’s a lot to be done there. The idea that now the field is mature and everything is in place, I would not agree with. I think it’s very early. This is essentially a new discipline, and so there’s an incredible amount of activity and opportunity, and just a lot that’s unknown, which is part of the reason reasonable people can have pretty different conceptions of the risk.

And given the consequences, we really should understand it better and develop tools to mitigate it and control it. We need to do it more. Our students are trying, but there’s a lot to do.

Liron 1:08:11
I want to compare your perspective with that of the leaders of the AI companies. The most recent statement I heard on the matter was from Dario Amodei. He published an essay a couple of weeks ago called The Adolescence of Technology. Anthropic is one of the frontier companies building the most powerful AIs, and they’re taking a straight shot to the superintelligence threshold. They want to be in the lead.

Dario has published about how important it is for the United States to get to that superintelligence threshold before China does, and then we can give China an ultimatum — really intense stuff that he’s trying to do as part of this race.

And in that essay, he says, “I don’t want to take a doom attitude about this. I don’t want to say it, because it’s unproductive when you say that this might lead to doom.” So he’s kind of calling you out, because you have doom in the name, Doomsday Clock.

Daniel 1:08:58
Yep.

Liron 1:08:59
Do you think that these companies right now are doing enough to acknowledge the risk of what they’re doing, or do you think that they’re out of line?

Daniel 1:09:08
I think I would hope that they would take the potential risk more seriously. I think many of these people, if you get them aside, will acknowledge that there are these risks. And they’ve either decided, or genuinely believe, that it’s worth the risk.

And I don’t know if we want the leaders of the companies that are building the technologies and have the most to gain from those technologies to be deciding what level of risk is acceptable. You can debate whether or not it’s earnest, but even some of these leaders have said, “We welcome some sort of regulation as long as it’s applied globally.” Well, people say, “Everyone knows we’re never gonna be able to get global agreement, and verification is very hard, and there are all these issues that make it unlikely to happen.”

That may all be true, but nonetheless, we should try. There are things that can be done. We should prioritize that stuff.

Liron 1:10:19
So what’s the idea from here? All these AI companies are racing, and you’re saying they have pressure to race, but it would be nice if they didn’t. Have you thought about the ideal equilibrium here or the ideal global solution?

Daniel 1:10:30
A number of things have been proposed, like the RAISE Act in New York, and there are variations of that in California. There are European regulations. People are struggling with this. I don’t know if there’s one easy solution now. I can’t point to one thing and say: If we do this, it’s all gonna be fine.

I think the communities that are working on solutions should be supported, and it is completely reasonable for the public and for policymakers to say, “We want some sort of assurances that what you’re gonna do is gonna be okay.” These companies do bear some responsibility for what they’re producing, and we can create a climate where the technology still moves forward, but we get more assurances that what they’re doing will be okay — if it’s aligned or has certain criteria.

I think there are many ways forward that way. What do you think?

Liron 1:11:37
So I think because we're maybe two years away — it could even happen next month. At this point, I would say nobody has a right to be confident. Nobody has a right to be ninety-eight percent confident that we won't have recursively improving superintelligence next month. There are now too many unknown unknowns. You never know what some underground data center in China is doing — maybe the Chinese government has a few months' lead, whatever. You never know. I'm not saying this is likely, but there's now a single-digit percent chance that we're that close to superintelligence, and then it becomes more like fifty percent likely that we'll get superintelligence in ten years. That's the middle of the bell curve.

So my point is, the situation is extremely urgent, and so in terms of what policy to do, I don’t think it’s enough to be like, okay, yeah, we’ll make the cooperation a little better every year. I think we have to either pause development now or get ready to pause. This whole idea of pausing has to be on the table just because we’re about to hit the point of no return. What do you think of pausing?

The Case for Pausing AI Development

Daniel 1:12:32
I mean, I like the idea of pausing. There’s always this question — let me just give the analogy of nuclear risk, just to give you a sense. So there’s always a debate when you talk about nuclear risk, and the debate is, if we have nuclear weapons, then there’s always a risk of using nuclear weapons. And so the goal is no nuclear weapons.

As a goal, I think many, many people agree with that. There’s a non-proliferation treaty. There are lots of ways in which people, countries, express their desire for a world free of nuclear weapons.

However, we’re not in that world. There are nuclear weapons, and the question of how do you get there is a major question. And the question of whether arms control prevents us from getting there, because arms control tries to create a stable equilibrium instead of saying there is no stable equilibrium, we have to go all the way to zero — yeah, this is an active debate. You can have different philosophies, even if the goal, and I think the goal for almost everyone involved in this, is no nuclear war. That’s the one main thing.

Okay, so now for AI, the question is: we do not want these worse AI risks to happen. What is the step that we can take now that is achievable? Is the abolition of all nuclear weapons in the next year achievable? That seems unlikely, given the state of the world. Is a pause on AI development at this moment — given the incredible, earth-shattering investments in AI right now, where it’s like many Manhattan projects over and over in these data centers and in these capabilities, and given the political alignments and the relationships between nations — I would have no problem with a pause. I think that would likely be better for the world.

But whether that’s the place to put all our chips, I’m less sure of. It’s all a very long-winded way to say, I think a pause would be great, but I think we should also be making plans for assuming things continue — what sort of regulations can we put in place? What developments can we make in AI safety? Let’s also require major investments to the extent that we can to try to keep the worst cases from happening.

Liron 1:15:36
Yeah, and one good regulation that we can put in place is preparations to prepare for pausing if and when we decide to pause.

I’ve never been a fan of centralized control. I’m a libertarian at heart. But I just think we’re so close to runaway AI. I think desperate times call for desperate measures. So, yeah, phone home, centralized control, or a centralized off button. I think these are necessary precautions right now.

The framing I would use is that we're accelerating capabilities, and we need to make sure not to have superintelligent capabilities before we have the theory of how to control a superintelligence, which we're totally lacking right now. It's this crazy thing where we know how to turn the crank to get us to superintelligence, and we just have no idea how we're going to control the thing that we're going to make.

Daniel 1:16:27
Yeah, no, I agree with that. And if we could get — these are common sense things that we could do now that could help us in the future. We should probably do those things.

Liron 1:16:42
Yeah. So pause policy is good. And another thing you said is it's gonna be so hard to pause. It's hard to coordinate, and I agree with that too. So unfortunately, I'm just here to make the best of a bad situation. I agree we're stuck between a rock and a hard place. If we fight to achieve a pause, that's a hard fight, but I also think it's an even harder fight to build superintelligent AI and then regret it and then be screwed.

Daniel 1:17:03
Yeah. I agree with that as well.

Liron 1:17:07
Okay. All right, sounds good. Yeah, so a lot of agreement in this conversation.

Debate: Is Climate Change an Existential Threat?

Daniel 1:17:11
Yeah, a lot of agreement. So I will say, given everything you’ve said, I want to push back on one thing you mentioned earlier, which is we don’t really have to worry about climate change because geoengineering can fix it.

Liron 1:17:24
You're the omni-doomer. I can't get away with not being a doomer about something.

Daniel 1:17:28
Yeah. So we know climate change is happening. We agree on this. There are various projections, most of them are pretty alarming. They involve significant additional warming — probably two degrees, probably higher. The idea that that’s fine because eventually we’ll have a technological fix makes me nervous.

You could imagine someone saying, “Well, with AI, we’ll somehow have some magic fix.” Overall, the trend is pretty well understood, and the easiest intervention is really well understood, which is stop burning fossil fuels.

Liron 1:18:21
Yeah, well, in terms of stuff we’re gonna be able to do, have you heard of this company Make Sunsets?

Daniel 1:18:26
Yeah.

Liron 1:18:26
Okay, so I feel like they can buy us fifty years.

Daniel 1:18:29
So this is geoengineering. This is the idea that we have been, in some sense, geoengineering — we’ve been changing our planet already through our own actions — and then the only question is, can we deliberately do some things that would help us out?

And I think the answer is clearly yes, in the sense that there are things we can do that would offset some of the warming. For example, one version of this I’ve heard which I find very compelling is — we’ve already been reducing sulfur emissions. Sulfur turns out to have been having a cooling effect.

Liron 1:19:15
Right, and that’s what Make Sunsets would do, is get that sulfur dioxide back in the atmosphere and get that cooling effect back.

Daniel 1:19:20
Right. And so you could imagine doing it in a way that doesn’t pollute, that is sort of cleaner for the atmosphere and offsets and causes a little cooling. And if the option is the warming, then there is something appealing about that.

But it’s not a solution, and I don’t think anyone is proposing it as a solution. And my understanding is, it’s unlikely to scale all the way to what we need. And there are lots of — it changes the temperature, but it doesn’t change any of the chemistry that’s underlying it — acidification and many of the other things that may be unpleasant and potentially catastrophic. There’s just lots of uncertainty. I think as something to study and understand, that makes sense to me.

As “don’t worry, we have the solution, we can keep burning fossil fuels, it’s all going to be fine” — that I find extremely alarming.

Liron 1:20:33
Well, the IPCC reports — this is the organization where the scientists get together, and they explain to the world what’s up. Those reports never go out and say, “Hey, we think there’s a high chance of extinction from this.” They never say that. Their scenario is always like, “Hey, the economy might suffer a ten percent GDP reduction or whatever in a few decades.” So they’re not even claiming that. So you don’t have to be a bigger doomer than the IPCC.

Daniel 1:20:57
Yeah. Well, IPCC — first, there’s a range. It’s a consensus document, and they say: Here is roughly where we think we are. The last few years have been quite warm. All within the range of projections because there are uncertainties.

But to say the projections for the next decades imply that there won’t be any severe impacts on society — I think that’s a misreading of the reports, personally.

Liron 1:21:32
Oh, I’m sure there will be severe impacts on society. I just think there’s a pretty big gap between that and more than one or two percent chance of major death — really decimating the population. That’s where nuclear and AI and bio risk would go.

Daniel 1:21:49
Yeah. In my reading of this, again, it’s a question of timescale. In my reading, if we keep burning fossil fuels, really lean into this, everyone burns all the coal or whatever, then the likelihood of catastrophe — where there really is mass dislocation of a good fraction of the human population — I think that gets very high.

Liron 1:22:20
So now I’m getting curious because you’re falling into this pattern where anything that sounds icky, or is traditionally maybe a left-wing cause — and I’m pretty apolitical myself — but it sounds like you’re embracing every type of doom. So maybe I can find something that traditionally people might expect you to say is doomy, and then you can break the mold, and you can be like, “Actually, that’s fine.”

Daniel 1:22:45
Okay.

Liron 1:22:45
Let me try you. Okay, what about landfills? Do you have a problem with landfills?

Daniel 1:22:51
Like, from a doom perspective, or what do you mean?

Liron 1:22:55
Do you think that when we put a bunch of stuff in a landfill, that’s bad for the earth?

Daniel 1:22:59
I mean, not from a planetary perspective.

Liron 1:23:04
Okay, so landfills are good. I agree. So at least we can agree on that, because there are some people who will keep going, they’ll be like, “Okay, this is bad. We should stop landfills. We should recycle everything.”

Daniel 1:23:12
Mm-hmm.

Liron 1:23:13
When we use a bunch of water, is that a problem?

Daniel 1:23:17
Well, again, I’m not understanding the question. If we’re projecting a scenario where there’s gonna be water shortages that will impact a billion people, then that’s a problem.

Liron 1:23:31
No, for sure. Yeah, but you understand what I was trying to do? Because you’re saying yes to all the possible doom scenarios, I was just trying to find what your limit is when you’d be like, “Okay, that’s actually not doom.”

And you said it was landfill. You don’t have a problem with landfills.

Daniel 1:23:42
Well, I mean, in a place like Chicago, water is not as precious a resource as in California. There's a lot of wasted water in Chicago. As a general principle, I think it's probably better not to waste resources and be inefficient, but is it a doom scenario for Chicago? No.

But for California, the issue of water is potentially a doom scenario for that state and for that economy, and it doesn’t take that much projection to end up in a regime where there really is a catastrophic problem there. And then the question is, what happens next?

Liron 1:24:30
Somebody will innovate a way out of it.

Daniel 1:24:33
Yeah, you can believe that we’ll just innovate, but then — the question for you is, why isn’t that true for AI?

Liron 1:24:43
Oh, geez, yeah. So markets do a lot. In the majority of realistic scenarios, you really can just put a price on things and have somebody supply it because of the profit motive.

It’s a great question why that won’t work for AI. It’s basically because the feedback loop gets attenuated. If everybody had a free fifty years to go do any experiment they want, to go build any AI they want, and then if it goes rogue, they can hit the reset button, I do think the profit incentive would solve it. I think you would have the Google of fifty years from now having the upgraded google.com that can do everything for you because they figured out how to make it work.

The problem is, I think people are going to try to make profit, and then they’re going to get overwhelmed, and they can’t take another crack at it.

Daniel 1:25:22
Okay. And now let’s just try that exact same argument with climate.

Liron 1:25:27
With climate, I just — we have to react. We’re like, “Oh, okay, it’s really getting crazy. Okay, pump the sulfur now, sulfur dioxide.”

Daniel 1:25:32
This is all a question of timescales, so I agree. With climate, the timescales are long. You do something, then it takes maybe a decade or a few decades for it to really kick in, and at that point, you can’t just fix it like that, because that’s not the way the climate works.

By the time — if we wait a few decades, all the projections are bad. Even if you take the best-case IPCC projection, it’s much worse than it is now. You can’t just then say, “Oh, okay, well, that’s bad.” Now we’re totally screwed. Huge sections of the Earth, the mid-latitudes, are now uninhabitable. We’re getting famines and floods — at a point where it’s just completely catastrophic. Every time you pick up a newspaper, it’s another disaster. Now we want to fix it? No, it’s the same argument. It doesn’t work anymore.

Liron 1:26:33
If we can buy fifty years of time for climate change, I feel like a lot is going to come on the table in terms of tools for geoengineering the atmosphere. I feel like that's quite a tool bag. Whereas if you just run the clock forward two years, which is the time to superintelligence, and you ask me, what tools are we gonna have there for AI safety? It's nada. So that's the difference.

Daniel 1:26:52
Okay. I mean, to me, I hear that being like, geoengineering is gonna be a magic solution. And I could say, “Well, maybe someone will use one of these AI systems to come up with a magic solution for AI.” I don’t believe that, but I also don’t believe that geoengineering can address all the threats.

And one of the things — it’s funny, because I am a physicist, and I am optimistic about technology. But I think especially when you’re talking about the entire climate, the idea that there’s just gonna be a technological fix, that we can just hit a switch and everything’s gonna be fine, makes me extremely nervous.

I mean, it’d be awesome if there were some technology where we could do especially carbon removal, capture, get rid of it all. And if I thought in ten years we were gonna invent that, that would be fantastic. I don’t see any evidence of that, and there are lots of in-principle arguments that say it’s gonna be very hard to achieve that at scale on the timescale that’s relevant. It might be possible—

Liron 1:28:03
I have a lot of optimism about what we can do technologically — about how much we can manhandle the atmosphere in the year 2074. I'm a lot more optimistic than you.

Daniel 1:28:09
Well, 2074 is a long time away, and you might be right, but if you’re not, that’s not great.

Liron 1:28:20
Yeah, yeah. Okay, fair enough. I mean, look, I'm not saying you're zero percent. I'm not gonna be one of those guys. I'm not gonna be the Yann LeCun saying the thing you're warning about is one in a thousand. I just see it as — I'll give you two percent, okay? I'm just not willing to go higher than two percent that climate change is a decimate-the-population-level doom risk. But I appreciate that we were able to have a legitimate doom debate on something.

Daniel 1:28:40
Yeah, okay. I was trying to find something to kind of spice this up.

Liron 1:28:44
Yeah, yeah.

Call to Action: How to Reduce Our Collective Threat

Liron 1:28:46
That’s what the viewers like. Okay, great. So I think we can head into the wrap-up here. Anything else you want to hit on, and then we can end with talking about the most productive action that the average viewer can take?

Daniel 1:28:57
I do want to say that I really think it's important that the next generation get involved. I think this is a major part of what XLab is about — informing students, this next generation, about these threats. They're the ones most impacted. And giving them tools to address those threats.

There's a real hunger for it. We're just on one campus. We have all these incredibly capable, passionate UChicago students, and we can't keep up with them. The goal would be to scale this to all campuses globally. This should be part of what an informed student — when you graduate from college, when you graduate from high school — just as a part of your basic education, there should be a holistic view of the planet. We're all on this one planet. We share it. There are certain risks to all of us.

I think we agree that AI, if things go wrong, is gonna impact every single person. If things go wrong on nuclear, it's gonna impact every single person. This is the sort of thing that we just need to work on. We need to have these conversations. Everyone needs to be engaged in this conversation and then find ways to make a difference. And that is the philosophy of XLab. I know you share it.

Liron 1:30:43
Yeah. Do you want to recommend a particular call to action? Should people apply to your program, or what should they do?

Daniel 1:30:49
Yeah. So this is something I always struggle with, so I’m very curious to know what your general call to action is, but I’ll give you mine. There are some general things: please be informed and talk to your neighbors, your friends, your family. Create this shared reality connected to the world based on facts.

Vote, contact your leaders. That stuff matters. Americans — their representatives get calls about all sorts of stuff, but they almost never get calls about doom, nuclear threat, or AI. That’s not the calls they’re receiving. That’s the generic stuff.

Then there is all the other stuff which ends up being very personal. Once you take it on board, as you have, as I have — the future of civilization is not assured. We’re doing things that are endangering all of us. We’ve been lucky, and our luck may run out.

Once you take that on board, it’s up to each of us to try to figure out how to make a difference, and that ends up being very personal, and it depends a lot on what your interests are and what your capabilities are. I’m a physicist. I study black holes. It’s an incredible time to be studying black holes. We’re making all these wonderful discoveries. There’s a temptation to just only do that and pretend the rest of the world isn’t happening.

But I feel like I want a future, and I care about this stuff, and so I'm doing my part. I'm trying. I'm at UChicago, there's the Bulletin, and it's been incredibly gratifying. I've learned so much. This is what I'm doing.

Everyone has — you’re doing a podcast. It’s a way to have impact. If you’re an artist, make some art. If you’re a lawyer, go think about it — a lot of these issues end up having legal aspects. Whatever it is, however you go into the world, you want to carry this knowledge with you, not as a burden, but as an inspiration to make a difference. And I think if everyone were to do that, these problems — they’re not insurmountable. These are problems we’re doing to ourselves. There’s plenty of opportunities to make a difference. We just have to.

Liron 1:33:45
What do you think of this call to action? You mentioned vote, prioritize doom as one of your voting issues. What if I could throw in specifically prioritize international coordination on pausing AI urgently as a voting issue?

And I think you said this to kind of move the Overton window. Make it okay to discuss with your friends when you’re sitting around talking about the issues, talking about sports, but also politics. Just mention, “Hey, you know what’s a good political issue for me? The fact that AI is likely to literally kill everybody on a very short timeline. We should vote on that.”

Daniel 1:34:16
Yeah.

Liron 1:34:17
So what a call to action.

Daniel 1:34:18
Yeah, I think that’s a great call to action, and I would add, of course, the nuclear risk, and we should renew New START. That’s a no-brainer. Don’t restart explosive nuclear testing. That would be idiotic.

Things like that. All of those things should be discussed. We should have basic pandemic preparedness. We should turn away from burning fossil fuels. They're dirty, and they're expensive. Really invest in leaning into renewables, which are cheaper and cleaner, and also have the tendency not to destroy our planet.

All of those things we should be leaning into, and we should demand that our leaders do likewise, and we have to prioritize that. It won’t happen without all of us saying we care. And we do care, because if we get it all wrong, we’re all impacted. It’s terrible for every single person. So even out of naked self-interest, this is what we should be doing.

Liron 1:35:24
All right, Professor Daniel Holz, this was a great conversation. Two giants in the field of Doomsday. I just want to call out your impact — holding that Nobel Laureate Conference, being on the board of the Doomsday Clock, helping see through that process. I think you’re having a really huge positive impact. You’re helping our civilization be a little bit mature instead of running straight into the whirling razor blade. So thank you so much for all you do.

Daniel 1:35:49
Likewise, Liron. Thanks for having me on and having this discussion.


Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏
