
Nobel Prizewinner SWAYED by My AI Doom Argument — Prof. Michael Levitt, Stanford University

A Nobel laureate says what almost no guest ever will: "You've changed my mindset."

My guest today achieved something EXTREMELY rare and impressive: coming onto my show with an AI optimist position, then admitting he hadn’t thought of my counterarguments before, and updating his beliefs in real time! Also, he won the 2013 Nobel Prize in Chemistry for his pioneering work in computational biology.

I’m thrilled that Prof. Levitt understands the value of raising awareness about imminent extinction risk from superintelligent AI, and the value of debate as a tool to uncover the truth — the dual missions of Doom Debates!

Timestamps

0:00 — Trailer

1:18 — Introducing Michael Levitt

4:20 — The Evolution of Computing and AI

12:42 — Measuring Intelligence: Humans vs. AI

23:11 — The AI Doom Argument: Steering the Future

25:01 — Optimism, Pessimism, and Other Existential Risks

34:15 — What’s Your P(Doom)™

36:16 — Warning Shots and Global Regulation

55:28 — Comparing AI Risk to Pandemics and Nuclear War

1:01:49 — Wrap-Up

1:06:11 — Outro + New AI safety resource

Show Notes

Prof. Michael Levitt’s Twitter — https://x.com/MLevitt_NP2013

Wikipedia — https://en.wikipedia.org/wiki/Michael_Levitt_(biophysicist)

Stanford page — https://med.stanford.edu/profiles/michael-levitt

Transcript

Trailer

Liron Shapira 00:00:00
My guest today is Professor Michael Levitt. He won the 2013 Nobel Prize in chemistry for pioneering the field of computational biology. What is your P(Doom)?

Michael Levitt 00:00:11
I don’t have one and I’m prepared to see human beings living out into the sunset.

The combination of human intelligence and artificial intelligence will be more powerful than artificial intelligence alone.

Liron 00:00:22

You’re giving me this equation: AI plus human is greater than AI.

Michael 00:00:27
Yeah. Yeah.

Liron 00:00:29
We are past the age of Centaur chess. Like in chess, the equation human plus AI —

Michael 00:00:34
I’m just saying I agree with you. The centaur thing is fine.

Liron 00:00:38
The trend in these kinds of things is that, okay, you get a centaur for a little while, but then you just get AIs, you know what I’m saying? Like humans do kind of get discarded.

Michael 00:00:46
So I definitely agree that there’s a risk there.

In this conversation, you know, I have been swayed in your direction more than I thought I would be.

Liron 00:00:58
Wow. Only 1% of my guests ever say that.

Michael 00:01:00
Well, okay. You’ve done a good job. You’ve definitely changed my mindset. And I will probably add slides to my lecture based on what you said.

Introducing Michael Levitt

Liron 00:01:18
Welcome to Doom Debates. My guest today is Professor Michael Levitt. Michael is a professor of structural biology at Stanford. He won the 2013 Nobel Prize in chemistry for pioneering the field of computational biology by developing computer programs that simulate how proteins and molecules move and interact.

Liron 00:01:47
He’s known for being an interdisciplinary thinker who can explain the deep complexity of quantum mechanics while also engaging in high-level debates about public health policy and economics. Recently, he’s also been doing some thinking about artificial intelligence, so I’m thrilled to be able to talk to him today about the usual topics that we like here on Doom Debates, like AI doom, and also how it relates to his experience researching computational biology and chemistry. Michael Levitt, welcome to Doom Debates.

Michael 00:02:07
Thank you so much. It’s a great pleasure to be here. Um, I feel very excited about public outreach and I think it’s great what you’re doing to help people get their heads around difficult problems.

Liron 00:02:19
I think it’s great that you’re here talking. You know, part of the reason we do Doom Debates is we get some of the brightest minds in the world and we sit ‘em all down and we say, okay, what’s your take on this AI revolution? And we’re getting a lot of different takes and it feels like we don’t have much time. So the audience is going to basically decide which take is correct. Sound good?

We gotta talk about your Nobel Prize-winning research real quick. Roughly when did the research take place and how would you describe it?

Michael 00:02:44
It actually took place really early. I became an independent scientist by chance when I was 20. So I came to Israel and was effectively asked to be the computer programmer of Professor Shneior Lifson at the Weizmann Institute and his PhD student, Arieh Warshel, who in the end shared the prize with me. They were working on small molecules and I basically wrote the code for them with Arieh, and then used that same code to do large molecules.

Michael 00:03:15
Essentially a small molecule might be 20 atoms, and a large molecule is say a thousand and up, and all the biological molecules, like proteins, are large molecules. So essentially that gap year got me started. I then actually went to Cambridge for my PhD. Came back to Israel for a postdoc.

Michael 00:03:30
The prize was for what they called multi-scale modeling, which essentially means choosing the model with the right level of detail for the problem. So I tell people that, you know, I could have quit—I’m just joking—when I was 27. A lot of Nobel Prize-winning work is done by people when they’re very young and then recognized a long time later. In my case, I think it was something like 45 years later. And it was only after many people had been using these methods that it was seen to be important.

Liron

That’s an interesting background. So you mentioned you were about 27 when you were doing this research?

Michael 00:04:07
20 to 27.

Liron 00:04:09
And this was like the seventies.

Michael 00:04:11
‘67 to ‘75.

Liron 00:04:14
Wow. You know that Nobel Prize community, they sure take their time recognizing research, right? ‘Cause you won it in 2013?

Michael 00:04:19
Yep.

The Evolution of Computing and AI

Liron 00:04:20
So back when you were doing your modeling, the state of the art was a lot of scientists didn’t even want to use computers.

Michael 00:04:27
Right. Computers were, you know, I try to get a grip on this. So it turns out that the typical cell phone is as powerful as the biggest supercomputer in the world in 1997. So now you go back another 30 years to ‘67. At that time, Israel actually had some very powerful computers, probably more powerful than say anything in the United Kingdom, or at least anything available to scientists at the Weizmann Institute.

Michael 00:05:01
So I could freely use a computer. That was really, really powerful for then. But probably now it would be, you know, a millionth of your cell phone or something like that. Plus, it’s a huge big room and it’s very expensive, and it’s a big staff to run it. So we sort of forget just how much computer technology has enabled computing. And this thing is also relevant to AI. AI is very much a product of the computer power that we now have.

Liron 00:05:23
We’re talking now about AI, which is like this new dawn of kind of a different type of computation, but you’ve been researching for quite a few decades now, and you kind of have this analogy where you can compare it to the dawn of just using computers at all. Correct?

Michael 00:05:38
Right. I’ve actually been programming computers for 60 years. In some ways you could argue that a hand calculator is AI. In other words, a hundred years ago, if somebody could take two 10-digit numbers and multiply them in their head in two seconds, you would say, wow, that is a genius. You know, something incredible. And there were people like that who were often featured in sideshows.

Michael 00:06:04
And now that becomes a simple problem. A bit later you would’ve said, okay, playing the game checkers is difficult. People can’t win in checkers very easily, and computers conquered that. And then came chess, and then Go. So I think that AI has been around essentially from the dawn of computing.

Michael 00:06:26
I remember when I started my PhD at Cambridge to work in computational biology. There were books on AI. There was a very important School of AI in Edinburgh and also at Stanford. So AI has been around for a very long time, and in some ways the basic ideas have been around for a long time. The problem was that computers were so slow, they just weren’t fast enough for AI. And the big thing that happened in the last five years is they suddenly became fast enough to actually do AI that starts to seem intelligent to us.

Liron 00:06:59
Now I feel like there have been different phases of it though, right? ‘Cause they keep seeming intelligent one domain at a time. That seems like the history of AI progress.

Michael 00:07:06
Right. But I think if you really start with say, multiplication of numbers, well that’s really simple. And then you go to... IBM Watson was an important step when it won Jeopardy. I know for myself, if I look at myself, why did I want to get into computing? Computers were very rare. I grew up in South Africa, and I distinctly remember I was probably 13 or 14 when I heard about this new thing called the computer.

Michael 00:07:37
But what really intrigued me was that they had the computer play the game Tic-Tac-Toe, just an incredibly simple three by three square with zeros and Xs. And it’s a game where there’s a winning strategy, but the computer could win every time because it could work out the winning strategy. And this seemed intelligent. So I was actually really impressed by that.
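One footnote on the tic-tac-toe example: with perfect play from both sides, tic-tac-toe is actually a draw, so the “winning strategy” the program worked out is really a never-losing strategy, found by searching the whole game tree. Here is a minimal minimax sketch of that idea — my illustration in Python, not the program Levitt saw:

```python
# Exhaustive game-tree search for tic-tac-toe (minimax).
# Perfect play from the empty board yields a draw (score 0): the machine
# can't force a win, but it can guarantee it never loses.
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Best achievable score for X from this position: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full: draw
    nxt = 'O' if player == 'X' else 'X'
    scores = [minimax(board[:i] + player + board[i+1:], nxt) for i in moves]
    return max(scores) if player == 'X' else min(scores)

print(minimax(' ' * 9, 'X'))  # 0 -- perfect play is a draw
```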

Michael 00:07:54
I think I was drawn to biology because there were a lot of computers in the field. So in some ways I’ve probably been a lover of computers since before I got into structural biology or science or anything like that. So there definitely have been stages. But I really do see the release of ChatGPT, running GPT-3.5, on the 30th of November 2022, almost three years ago, as being a really important development, because suddenly we could have it do all sorts of things that we didn’t think it could do. And that doesn’t mean it doesn’t make mistakes. But, you know, the mistakes don’t particularly bother me. I can deal with mistakes very easily.

Liron 00:08:31
Okay, great. And we’re gonna talk more about the ChatGPT moment, but first on this larger subject of predicting AI timelines. That’s a very interesting subject for my viewers. ‘Cause the obvious question is like, okay, what’s the timeline where they become smarter than humans, if they ever do? But before we talk about that, I wanna rewind and talk about past attempts to predict AI timelines. Right? Because you’ve seen ‘em happen. One of the ones I know about is Hofstadter in the seventies, predicting that AI might never become as good as humans at chess.

Michael 00:09:00
I remember reading his books at that time and they were great. So.

Liron 00:09:05
Yeah, exactly. So what are some key moments that you’ve had in the past where you or people around you were predicting AI progress one way, and then what did you observe after? What are some stories you can tell?

Michael 00:09:14
Sure. I would say I believe that the future is intrinsically unknown and that so many things happen for the wrong reasons. If I look at a very simple case and look at my own timeline, how I managed to get where I got to, it was full of serendipitous events I didn’t even push for.

Michael 00:09:34
For example, I was sent to Israel by a Nobel laureate in Cambridge called John Kendrew, and I had no idea why he sent me to Israel. A lot of people thought I went there because it was 1967, after the Six-Day War, but I basically went there because he wouldn’t take me for a PhD in Cambridge without that. Now I didn’t know why, but that ended up being a completely critical, pivotal moment in my own life.

Michael 00:09:59
I think the same thing happens with any progress. I tell people what was behind the incredible increase in computer power that allowed AI to become possible. And you could say Apollo, you could say American Weapons Labs, you could say the supercomputers like Cray. But the real answer is teenage boys playing video games. That’s what led to GPUs.

Michael 00:10:24
And so in some ways, we all have a debt of gratitude to all the parents who let their kids buy computers to play video games. That led to the birth of Nvidia. And then, 20 years later, they discovered that these same chips that are great for video games are really, really great for AI. It’s basically... there are random events. Sometimes the least significant events can be incredibly important. At other times, really important events have no effect.

Michael 00:10:55
It’s a chaotic system. It’s great in retrospect to say, wow, you know, somebody got it right. And very often I think the best predictions come from science fiction writers. You can estimate the chance of certain rare events. So you can say the chances of this happening in the next hundred years is X, and that might be fairly accurate, but that isn’t a timeline. So I think it is very hard.

Michael 00:11:17
I think what you can predict quite well is the increase in computer power. Definitely the timeline of increasing computer power has been incredible. If you like, the really important thing is multiplications per dollar or multiplications per watt. And you know, this has increased probably by a billion, billion, billion fold in my lifetime. That is sort of a straight line on a log scale.

Michael 00:11:51
So that was predictable, at least in hindsight. And there is Moore’s Law, which Gordon Moore predicted way back. And although Moore’s Law is no longer obeyed for a single chip, if you go by the measure of how much computing there is on the earth, I’m sure Moore’s Law has continued very, very strongly since then. It’s basically saying a doubling every year, or maybe 18 months. It’s an incredibly rapid increase. That’s probably been true for the last 60 years, although I haven’t actually gone and checked it. It’s an interesting question. I’m sure I could probably find it online.
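For a concrete sense of what “a doubling every year, or maybe 18 months” compounds to, here is a quick back-of-the-envelope sketch — my arithmetic for illustration, not figures from the episode:

```python
# How much a quantity grows over 60 years at different doubling periods.
years = 60
for doubling_months in (12, 18, 24):
    doublings = years * 12 / doubling_months
    print(f"doubling every {doubling_months} months -> "
          f"{2 ** doublings:.2e}x over {years} years")

# doubling every 12 months -> 1.15e+18x over 60 years
# doubling every 18 months -> 1.10e+12x over 60 years
# doubling every 24 months -> 1.07e+09x over 60 years
```

A doubling every 12 months over 60 years is a factor of about 10^18, which is why the curve only looks like a straight line on a log scale.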

Liron 00:12:27
People always disagree on the edges, but there’s no doubt that it’s been pretty strong for multiple decades. It’s definitely, uh, there’s something to that trend for sure. And have you been following, there’s kind of a new Moore’s Law right now that’s about the time horizon of how long AI can do tasks for?

Measuring Intelligence: Humans vs. AI

Michael 00:12:42
So I use AI a great deal and sometimes it is really brilliant and sometimes it’s equally stupid. It is already better than people at chess, at Go, I would imagine better than most people at writing. Um, very, very good at understanding conversations. So in all of these things, AI has already exceeded human capacity.

Michael 00:13:06
But I think human intelligence is much more multidimensional. You know, we live in a society that loves to say who’s best, and this is particularly true in America, where you think about all the sports with their batting averages and things like that. And it turns out that another Nobel laureate, a man called Ken Arrow, an economist at Stanford, had what I think is called Arrow’s paradox, where basically unless you have a measure in one dimension, you can’t rank things.

Michael 00:13:38
So if you have two people, you want to rank them by height and by good looks. If one person is both better looking and taller, you can say he’s tops. But if one is better at one and the other is better at the other, then how you rank them depends on the weight you give to good looks versus height. And that’s personal. There’s no objective way of doing it.
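What Levitt is describing is essentially Pareto dominance: one candidate can only be ranked “tops” objectively if it beats the other on every dimension; otherwise any ranking smuggles in subjective weights. A small sketch, with made-up numbers, just to make the point concrete:

```python
# Multidimensional ranking: candidates are tuples of scores, one per dimension.
def dominates(a, b):
    """True if a is at least as good as b everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def weighted_rank(candidates, weights):
    """Once candidates are incomparable, any ranking requires choosing weights."""
    return sorted(candidates,
                  key=lambda c: sum(w * x for w, x in zip(weights, c)),
                  reverse=True)

alice = (180, 6)  # (height in cm, looks on some 1-10 scale)
bob   = (170, 9)

print(dominates(alice, bob), dominates(bob, alice))    # False False: incomparable
print(weighted_rank([alice, bob], weights=(1.0, 0.1)))  # weight height -> Alice first
print(weighted_rank([alice, bob], weights=(0.0, 1.0)))  # weight looks  -> Bob first
```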

Michael 00:14:03
As soon as you have multiple indices... and if you think about intelligence, intelligence is not one-dimensional. You could rank computers on their ability to do certain kinds of tests, to play one kind of game. So I think given how multidimensional things are, it is very, very difficult to rank. That’s why you have an IQ. So you can say your IQ is more than mine, or vice versa. But that’s sort of irrelevant because intelligence is much, much more multidimensional than IQ.

Michael 00:14:38
So I have a problem with this desire to rank everything. I think it is intrinsically impossible and people still do it all the time. And again, you can rank wealth, you can definitely say who’s the richest person in the world, maybe even who’s the poorest, if you could find them. But when you try to get into more complicated things, you can’t. And as a result, anything that’s based on that idea is likely to be falsifiable.

Liron 00:15:07
So that logic, that ranking is so hard to do and so multidimensional, you might push back against a claim that I would have, which is that I think there is such a concept as intelligence, something that humans have so much more than the other animals. And I would claim that you can meaningfully say that artificial intelligence is just going to have higher intelligence than us in all of these really meaningful ways and surpass us. So based on your logic, would you say that’s meaningless or...

Michael 00:15:35
Uh, no, I think you are right. But firstly, the whole issue of serendipity, I think is something which is often missed out of that. And you could imagine that you can certainly put randomness into computer intelligence. People often talk about AGI, artificial general intelligence. And you know, I find each of those words hard to define. What is general intelligence?

Michael 00:16:04
So my feeling right now is that AI may be much, much more intelligent than a human being. But I do also believe that the combination of human intelligence and artificial intelligence will still be more powerful than artificial intelligence alone. I mean, A plus B is always bigger than A or B. I think that it’s simple-minded to say that in every single way, humans will be inferior. Maybe in each way, but maybe the combination...

Michael 00:16:35
You know, the other thing that is very important to realize is that you have many, many different traits, many different dimensions, and what really matters is the combinations. If you have a hundred dimensions, the number of combinations is more than all the electrons in the universe. So combinations really add up very quickly.

Michael 00:16:54
Now, maybe you could mimic all of this. But I really don’t know. I’m not saying that computers will not be more intuitive than human beings in every possible way, but I’m not sure. And you know, one thing that I have been surprised about with technological developments... I have quite a large family. I have 12 grandchildren. And to see how a 4-year-old puts the iPad down on the floor and dances around it, or how they don’t mind watching...

Liron 00:17:25
YouTube.

Michael 00:17:25
Right. In any language. They’re happy to watch it in Russian or in Chinese or in Hebrew or in English, because they can get it. So basically people are very adaptable. And I don’t know, I think it’s an unknown question. I’m happy to leave it as unknown. I’m happy to accept a postulate that you make: there will be such a day and there will be such a day quite soon.

Michael 00:17:52
I don’t know. But I don’t think it’s as simple as, computers are better than people, end of story. Because I think that we’re gonna find that it’s not quite so simple and that we’re gonna find a lot of new interest in human-machine combinations. So a computer might be really good at solving a problem, but a human plus the computer may even be better.

Liron 00:18:22
Right. Yeah, I understand you. It’s like you’re giving me this equation: AI plus human is greater than AI. You know, you mentioned the difficulty of making things one dimensional, but here’s my attempt. Okay. I’m going to define a dimension. It’s something that a lot of the Yudkowsky-style AI doom community likes to talk about.

Liron 00:18:41
And there’s this dimension that he helped me realize, which is you can call it “steering power.” It’s a single dimension. And it goes across a lot of different domains. So for example, we can talk about chess. When you win at chess, it’s because you’re steering the outcome of the chess game toward having this property that the other person’s king is trapped, right? You’re steering the chess game. You can steer a car, right? You can get the car to the destination by making valid...

Michael 00:19:06
You’re satisfying short term goals over time, right?

Liron 00:19:10
Yeah. So imagine this criterion where the input is the state of the world or the state of the universe, and the universe can be a board game or it can be the real universe, whatever. Like, that’s the input. And the output is basically a sequence of actions that get you to a goal effectively. And you can set up a dimension, a measurement scale, and you can ask, okay, which agent is better on this dimension, this outcome-steering dimension? And I claim that what we normally think of as intelligence, the interesting part of what intelligence does—don’t even think about it as IQ, think about it as an outcome-steering Elo score, right? Or like some kind of score of outcome steering. I think that’s the dimension to be watching, and I think AI is going to surpass humanity on that dimension. What do you think?
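The “Elo score” Liron is gesturing at is the standard rating system from chess: any repeatable contest with win/loss/draw outcomes can be scored this way, whatever the “game” is. A minimal sketch of the standard Elo update — my illustration, with ballpark ratings, not numbers from the episode:

```python
# Standard Elo: ratings imply a win probability, and ratings move
# in proportion to how surprising the observed result was.
def expected_score(r_a, r_b):
    """Probability that A beats B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a, r_b, score_a, k=32):
    """score_a: 1 for an A win, 0 for a loss, 0.5 for a draw."""
    e_a = expected_score(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1 - score_a) - (1 - e_a))

human, engine = 2800, 3500  # roughly top-human vs. a modern chess engine
print(f"P(human wins) ~ {expected_score(human, engine):.3f}")  # ~0.017
print(elo_update(human, engine, score_a=0))  # human loses; ratings barely move
```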

Michael 00:19:55
Uh, yeah, no, I’m happy to accept that as a measure and maybe you’re right. I mean, in that sense, if the attempt was to destroy the world versus prevent it being destroyed, I think playing against AI could be very difficult. I mean, if you just took that as the game. The game is humanity preventing destruction in the world, AI wanting destruction in the world. Perhaps. You’re gonna hear me say “I don’t know” a lot...

Liron 00:20:25
Yeah. Yeah.

Michael 00:20:25
Because actually...

Liron 00:20:27
None of us do.

Michael 00:20:28
I mean, if you like, a game is definitely one dimensional because either you win or you don’t. When you can draw... Okay. So it’s slightly more complicated. Um, and, you know, maybe. I mean, yeah, I think that’s something which is clearly real.

Liron 00:20:49
So if we play with your equation, right, that intuitive idea that AI plus human is going to add up to more than just AI... I mean, don’t you think that, like in chess, right? There is this well-known era of centaur chess, where it’s like half human, half chess AI. There was like a decade after Deep Blue beat Garry Kasparov where you would do better with a centaur, where the human could help the AI pick a move. But we are past the age of centaur chess, right? It’s like in chess, the equation human plus AI is...

Michael 00:21:17
Remember, I heard a really nice talk by Jon Kleinberg from Cornell, and he said, let’s imagine a chess game where on one side the moves alternate: the AI makes one move, and a human being makes the next. So basically that side—human intelligence plus AI—is playing against a computer on the other side. And it turns out that there is a strategy, where you modify how the AI works, that will make that a winning thing, but it’s not standard. So I’m just saying I agree with you. The centaur thing is fine. I have no problem there.

Liron 00:21:58
Well, specifically like the centaur era... The trend in these kinds of things is that, okay, you get a centaur for a little while, but then you just get AIs, you know what I’m saying? Like, the human does kind of get discarded at some point. Okay. So my analogy is just that I think there’s a series of games where each one is more difficult, each one is more open, right? Like, it has more variables going on. So chess, and then you have Go, and then, you know, the game of Diplomacy. There are more sophisticated games, or like video games. Do you agree that the physical universe is basically just a game, but more complicated?

Michael 00:22:36
In... I mean, you know, if you could make an outcome. So, I mean, let’s just simply say that humans want to preserve all life on earth and haven’t actually been very good at that. And AI wants to get rid of all non-silicon life. That’s a game. I think you can set up a game-like situation where there are two conflicting goals, or the same goal on a scale: we wanna maximize human life, they wanna minimize human life. Then I can believe such a game exists and it could be played, and I think it wouldn’t be very good for us. So I definitely agree that there’s a risk there.

The AI Doom Argument: Steering the Future

Liron 00:23:11
Okay, well, yeah. Let me just put my cards on the table so I’m not just leading you one step at a time. I’ll tell you the whole argument from my side. Okay. I do think that there’s this dimension where AIs are surpassing humans on and it’s this very important dimension. It’s the outcome steering power dimension. It’s the same dimension where Kasparov no longer can compete against the AI at chess, and Centaur is gonna no longer compete against the AIs at chess. It’s even the same dimension where birds can’t compete with airplanes.

Liron 00:23:39
It’s connected to how humans tend to surpass the biological world when we put our minds to engineering something. I also think we’re going to surpass the biological world when we put our minds to engineering minds. You know what I’m saying? So there’s this one dimension. And then just to finish my argument, so you know where I’m going with this, right? And I think you’re already following, which is like:

Liron 00:23:57
So you’re going to have this more powerful agent that can steer the future better than you. And I guess the last part of my doom argument is I think the link is going to get severed—humanity’s going to sever the link—where the AI just stops taking commands from ground control. It’s just gonna be out there on its own. Like, we mess up the link. There’s a million ways to mess up the link. There’s no undo button, there’s no stop button. The universe is now in its hands, and we are sitting here as a species being like, “uh oh, we’re now irrelevant to the future of earth.” That’s my argument.

Michael 00:24:27
Yeah, no, I think that argument is not without merit. I think that it’s a way of phrasing things that involves significant risks. I think you could even argue that before that, it may be bad actors plus AI versus good actors with AI—AI trained in the bad direction.

Optimism, Pessimism, and Other Existential Risks

Michael 00:25:01
So basically, I’ve been around for a long time. I’ve been programming computers for 60 years. And in some ways, although maybe intrinsically a pessimist, I’ve become a pragmatic optimist by just looking back at all the things that I thought were gonna happen that didn’t happen. Now that doesn’t mean anything.

Michael 00:25:15
Let’s imagine, say if AI becomes truly egomaniacal and truly evil, then I think there’s nothing we can do. The question is, why should a superior intelligence go in that direction? And you know, you could actually argue that human beings have a lot of very bad traits. I mean, we’ve done some pretty terrible things as a group of people to each other, as well as to the earth and to other wildlife.

Michael 00:25:38
For a long, long time, we haven’t been good actors, if you like. Um, but what you do see with human acting is that as we’ve become smarter—when I talk about smarts, I don’t mean individual IQ, but something that I actually call CI, cultural intelligence. I.e., a human being versus a human being with someone who talks to him, his grandfather telling him how he should behave; or a human being with a book, or a human being with the internet, or a human being with a smartphone. That human being is much smarter, much more able to do things than a human being without those things. And this is basically CI, cultural intelligence.

Michael 00:26:18
And so you could argue that as we become smarter by many, many measures, life on earth has become better. We look back and say how wonderful things were in feudal times when we all had a little farm plot. But, you know, death rates... so few children actually lived as long as their parents, and so on. So things were pretty miserable. But there’s generally been an improvement in, say, life expectancy, child mortality, female literacy, and things like this.

Michael 00:26:45
Now, let’s assume that we are not good guys. We’re bad guys. We’re very selfish. We have a lot of bad actors. Yet we have evolved into making things better as we become smarter. Now, you could argue that this may be something intrinsic and that AI, when it becomes all powerful, will actually be quite intelligent and quite benign. So you might argue, why would AI want to destroy humanity? And you could say they’re competing for resources, but that isn’t really right. They’re essentially involved in two different lifestyles. They may be competing for real estate. That is probably also unlikely.

Michael 00:27:36
But you could even go further and say, as we understand more and more about biology, it is amazing and brilliant in ways that are actually even hard to express. And let’s imagine we go forward to when AI has supremacy in everything. It seems to me that keeping humanity around would be a very smart thing to do. We created AI, and people generally look after their parents and grandparents. Now, whether we would have a population of a billion or a million, we would just be in a... I don’t know. But I think that AI, anything really, really smart, would have to appreciate the wonder of something that wasn’t made by engineers like AI was, but was made by chance.

Michael 00:28:16
If you want to be religious, by God, but essentially it was made by something that is understandable. So maybe there would be a reason, in this very dramatic end-of-the-earth scenario, to keep human beings around. Um, but maybe there wouldn’t. I think it’s also important to realize that there are other existential scenarios that have nothing to do with AI that are also hanging over our heads, like the Sword of Damocles, and threatening us. So AI is maybe another one.

Michael 00:28:44
What I would like to ask you: let us say that you and I agree with your scenario. What do we do? In other scenarios where we’ve seen very clear and present danger, we haven’t been very good as a race. If you look at the Doomsday Clock that concerns nuclear weapons, I think we’re at 11 hours and 57 minutes or something like that. It’s something that started after Hiroshima.

Michael 00:29:19
You could actually argue that a group of concerned citizens, which maybe you’re trying to promote, should also be thinking about other dangers, because it doesn’t really matter too much for humanity whether we are destroyed by AI or by a miscalculation between the superpowers or something like that. AI is not the only clear and present danger on the horizon. So I think that’s a point to bear in mind.

Liron 00:29:35
So on that front, you know, I am a little bit of a nuclear doomer. I consider the risk of half the population getting wiped out by nuclear weapons to be something like 1% each year. So I definitely think we should pay attention to that. Yeah.

Michael 00:29:49
I mean, that’s something... I actually did check on other existential risks, and you know, you’re absolutely right. I think it’s maybe 35% over the next hundred years, which is similar to what you’ve just said. And you know, that’s something we also have... the disruption that is likely to be caused by global warming. I don’t think it’s necessarily existential, but it could put half the world’s population in a difficult situation. And so, you know, I think there are a lot of these things facing us.

Michael 00:30:17
If you look back at protests against nuclear war, you know, these were very, very common in the sixties and the seventies, and I guess you would say they were successful, because we haven’t had a nuclear war. I mean, essentially it’s 80 years since the last nuclear weapon was dropped on human beings. You know, that’s quite a long time. But even beyond nuclear wars, there are other geological dangers, massive volcanic explosions. They have had serious effects on populations a couple of times in the last 2000 years.

Michael 00:30:48
In some ways, pessimism and optimism are mindsets. And in some senses you could actually ask, is it good to live through life being pessimistic, or good to live through life optimistic? And I felt I was always a very pessimistic person. And then I suddenly had a vision of myself on my deathbed. And I’m looking back and I think, oh my God, I didn’t do all those things because I was pessimistic, and after all, none of them happened. I’m just saying that’s something we need to think about in that direction as well.

Michael 00:31:19
You know, nothing is clear. Maybe if we had had these conversations five years ago or 10 years ago, we could have done something about it. And it’s clear that AI is moving ahead at an incredible rate. And it’s not clear to me how we can slow it down. It’s being driven by rampant capitalism and by competitive nations. And I think that is something that I think is a very important problem to try to think about. I’m sure you are.

Liron 00:31:54
Yeah. Okay. Well, you had raised like four, five interesting points, so I’ll try to make a little conversation out of all of ‘em. On the subject of whether you decided to be a pessimist and then you realized on your deathbed you’d rather be an optimist, did you ever consider a third option where you use Bayesian reasoning to just put probabilities on things and then react to the probabilities?

Michael 00:32:15
Maybe I do put probabilities on things. In fact, what I often do is normalize... so most people are not scared of getting a day older, but every day you live carries an increasing probability of dying.

Liron 00:32:29
I agree. I’m very slightly scared of it, to be honest. No, but I mean, basically, what do you do about that? It’s every day.

Michael 00:32:36
So for example, I don’t know whether you found... I’m sure you found this about me: my feeling that COVID was not nearly the sort of threat that people were saying it was. And I remember when I took the vaccine in Israel in December 2020, very, very early, I said that at my age, I didn’t think the risk of taking the vaccine was more than a week of living. And so that is very much a personal Bayesian way of framing things. But I do think that we need to also guard against... You know, there is a tendency—I don’t think it’s happening for AI, but certainly for many dangerous things—to totally overestimate the risk by maybe a factor of 50 or a hundred. And that is also scary, because we then take measures which have very, very severe consequences.

Liron 00:33:26
I’m with you. I think that I’ve been accurately estimating the risk, right? I mean, we agree on nuclear, on global warming. It sounds like we probably agree that it’s even less of a risk than nuclear. And so my claim about AI is that it’s more than 10 times nuclear. Like that’s why I’m obsessed with AI.

Michael 00:33:41
Maybe if you... So 10 times nuclear means we’re done for in 10 years.

Liron 00:33:46
Roughly, yeah. 10% a year, roughly. I mean, I always talk about my P(Doom), and actually, I’ll ask you for your P(Doom) in a second, but I always say that my chance that there’ll be a single human alive roughly around 2050 or later, after another couple decades pass, is only about 50/50. Like, I’m not that optimistic about 2050, unfortunately, because of AI. So that’s what I mean when I say 10 times...

Michael 00:34:11
You know, I mean, that’s a pretty strong number.

What’s Your P(Doom)™

Liron 00:34:15
Yeah. And let me ask you, I love asking guests this, which is, you know, based on everything you know—you’ve obviously thought probabilistically about other risks—what is your P(Doom)?

Michael 00:34:30
I don’t have one, and I’m prepared to see human beings living out into the sunset. Now again, I re... you know, I may be totally unrealistic about this. In my own experience, I have a feeling that the whole internet-AI revolution so far... I think the points you raise are true, but it has also been somewhat benign.

Michael 00:34:54
I often compare the internet to the release of printed books in Europe. Now, you know, we got through it, and we look back and we probably say that books were a good idea. Although books have also been criticized. I think there’s a passage in Plato where Socrates says that writing should never have been allowed, because it enables stupid people to seem smart. And you can then consider this going forward.

Michael 00:35:16
I think that the uncertainty is very, very high. I mean, given what you’ve said, it would be great to revisit this conversation in 10 years’ time. And you know, I will say, gee, Liron, you were right. Or you might say, well, you know, you were right. Okay. But let’s say we accept completely your P(Doom). I mean, in the same way, when I have the nuclear probability, I actually don’t believe it’s gonna happen. I believe human beings will somehow find a way around it.

Michael 00:35:46
But, I mean, essentially all it requires is one unstable bad actor to pop off a nuclear weapon, and that could have catastrophic consequences. Maybe we won’t. Maybe AI will be very clever and behave nice until the very last moment. I don’t know. I still think, you know, this becomes interesting to ask: do guardrails exist? Is there any way to deal with this?

Warning Shots and Global Regulation

Liron 00:36:16
Well, that’s a real question. ‘Cause when you look at the nuclear threat, right, you’re saying, hey, you just need a bad actor to pop off. You know, there’s that whole book by Annie Jacobsen that came out last year, Nuclear War: A Scenario. And it’s basically like, there’s a nuke coming in and we don’t understand why. Maybe it’s from North Korea, but it kind of triggers this cascade to nuclear war, with a bunch of powers involved.

Liron 00:36:40
So you accurately pointed out, yeah, that’s like a huge threat. In response to that threat, the US government has said repeatedly, across like the last five administrations, yeah, the number one priority is to stop nuclear proliferation. Right? To try to put a lid on this best we can. Like, we have a very serious response. We’re doing our best to handle it. And the reason why I have this show is because I’m not seeing the same seriousness in handling the AI doom risk, which arguably is even more urgent than nuclear risk. There’s like a huge gap here in terms of how seriously we’re taking it.

Michael 00:37:08
You know, I think that is valid. I would say that another risk we haven’t talked about at all is a smartly engineered virus. And again, this is up there with nuclear. Potentially, again, it’s very uncertain, but you could imagine a virus that is both very lethal and very infectious. Usually they... I mean, in the wild, they don’t go together. Either it’s very virulent or it’s very infectious. And we saw that with Omicron. It was very infectious, but not very virulent.

Michael 00:37:37
You could argue that it’s just possible to construct such a virus. And you know, some measures have been taken, but not enough. I think it’s a good thing that you’re doing what you’re doing. There’s a lot of uncertainty. You know, one interesting question that you probably have thought about and would know about would be... with nuclear weapons, we have seen it. It happened. So we’re not talking about the USA in 1943, we’re talking about the world 80 years, 60 years later, 70 years later.

Michael 00:38:15
So if we think about what are gonna be the early breakout events of AI going bad... you may argue that it was beating human beings at Go or checkers or even at tic-tac-toe. Do you think there will be some event that says, wow — you know, a wake-up event?

Liron 00:38:30
We call it the warning shot, right? This is a common topic of discussion: when are people going to hear the warning shot and wake up? You know, my own prediction... it’s always hard to know the order that things emerge in, right? I think a lot of us are surprised that we have natural language kind of solved this early, and they’re still not taking over the world.

Liron 00:38:49
So it’s hard to predict the order of things. But if you made me guess what the warning shot might look like, I think a strong computer virus is a likely candidate, where suddenly the internet becomes unusable because the virus load is high and the defense isn’t catching up. And I actually believe attack is easier than defense. And so I think we’re all gonna be like, man, I can’t get on the internet. Like, it sucks.

Michael 00:39:10
No, that’s, that’s... I mean, you know, because I think that if we learn the lessons of the nuclear non-proliferation protests... I mean, the warning shots were the photographs coming out of Hiroshima and Nagasaki. But even there, you know, the non-proliferation... most countries are not one of the nine or 10 nuclear powers. I’m not sure. It’s been somewhat okay.

Michael 00:39:35
I’m just thinking about this, because I think that this would be, you know, if you are thinking about this as an action plan—i.e., get people alarmed—it’s not enough for the alarmists to be alarmed. It’s gotta be broader than this. And, you know, maybe thinking about the warning shots, because there are clearly gonna be... if there were several... The thing about the virus warning shot is it’s almost certainly coming from a bad actor. And people are using deepfakes, and I’m sure people are using computers to design computer viruses. So, you know, yeah, I think this is something which is worth thinking about.

Liron 00:40:12
There was a statement in 2023. I don’t know if it came across your desk, but it got a lot of attention. It’s from the Center for AI Safety. It has a ton of signatories. It’s only a one-sentence statement. It says: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Would you sign that statement?

Michael 00:40:35
Uh, I would think about it. I think it’s, you know, I think you can have scenarios where this would be the case. I think one issue is always what is the price to pay? So let’s just imagine... For example, is there a scenario? Let’s just imagine we had an ideal world, or at least a world where things could actually happen, where actions or ideas led to actions immediately.

Michael 00:40:58
So let’s imagine you are now controlling things and you can press a button and all future AI development is disconnected and GPUs are illegal and we have to roll back, or maybe everything on these computers has to be monitored, and so on. I don’t know, some safety measure that would greatly limit any of the benefits from AI.

Liron 00:41:23
Correct. Yeah.

Michael 00:41:24
And then it starts to become more difficult. So for example, you know, banning nuclear reactors—and nuclear reactors are probably better for global warming than coal plants. So I think it’s a more complicated thing, but I think it is something... I think your analogy of the end of the world as a game is a very good one. I was very, very taken by it. You mentioned as well the success of Meta’s Cicero at Diplomacy.

Michael 00:42:04
And what really got me excited about that is I read the article in Wired about how people felt about it, and how the human beings who lost to Cicero actually loved playing with it because it was very fair and very nice. So I think that is a very important milestone. It was actually published in Science magazine, high profile, in November 2022, just before ChatGPT. It’s one of my slides in my lecture. I think that is important. I am actually, in this conversation... I think you’ve been able to, you know... I usually don’t take stands on things. I try not to. But I feel that I have been swayed in your direction more than I thought I would be.

Liron 00:42:43
Only 1% of my guests ever say that.

Michael 00:42:45
Well, okay. You know... And you’re talking to somebody who uses AI continuously. So I am somebody who benefits enormously from AI. The trouble is that if all the people who were worried about AI stopped using it, it would have zero effect. And you know, the question really is, what can we do to think about this?

Michael 00:43:09
And I think it would be useful to have measures. I mean, I have signed petitions about nuclear risk. I have a close friend, a guy called Martin Hellman, who actually co-designed the Diffie-Hellman key exchange, before the RSA encryption algorithm. And he is very concerned about this, and we talk about this a lot. Let me ask you, there’s a book that I think is interesting and relevant. Have you read the book The Three-Body Problem?

Liron 00:43:36
Actually how [inaudible] ends, but I hear good things.

Michael 00:43:39
You really should, because this is a book that relates to these things. Basically, they made a Netflix series, which doesn’t really do justice to the book, but in this book, a very complicated solar system which has unstable orbits decides to invade the earth. And it’s gonna take them 200 years to actually arrive on Earth, because they can travel, I don’t know, at one-tenth the speed of light.

Michael 00:44:07
So they find a way of actually sending a quantum computer to earth at the speed of light to shut down all science on earth. They realize that if Earth knows that the invading fleet is coming, it has 200 years to try to develop things. And then the whole book is really about how you evade this kind of external censorship. So the question really is, you know... I think it’s a very challenging thing to think about how you would basically install guardrails into AI. I’m sure people have thought about this. It’s something for cryptologists and people like that to think about. It’s a cool problem.

Liron 00:44:43
Thanks. Okay, so throughout this conversation, you’ve been kind of racing ahead to all of these different topics around AI. And you’ve correctly pointed out: “Man, are we gonna stop it? Here are some solutions, some kind of hardware monitoring solutions, but also isn’t this going to kind of freeze the economic value?” So you’ve correctly started racing through all of these associated topics, which is very fair. Uh, so I wanna just make a couple of points...

Michael 00:45:09
I’ll shut up for a bit. It’s okay.

Liron 00:45:11
Okay. So, number one is, you know, you’ve been so humble and open-minded, even being open that some of these points are kind of new to you and you’re hearing them for the first time. I mean, to be fair, you don’t even know who Eliezer Yudkowsky is, right? So it’s not like you’ve been studying the field.

Michael 00:45:28
On my cell phone while we were talking...

Liron 00:45:30
Okay. Exactly right. So you obviously haven’t gone in and read, you know, the last couple decades of writings about AI safety, and you’re coming in cold, and you’re already saying, okay, yeah, some of these arguments seem to make sense.

Liron 00:45:41
So one thing I’ll point out is that some of the stuff that you’re saying—like, hey, doesn’t human plus AI add up to more than AI, or something that you said before, like [isn’t it] in AI’s interest to help humanity avoid global warming? It turns out that, you know, among the people who have thought about that kind of thing, the latest thinking—at least as far as I know, the thinking that’s convincing me—is actually that the AI is probably likely to wanna run the earth as hot as possible, up to the equilibrium where it can radiate the most heat into space, because that’s actually going to be the most efficient way to do the most operations.

Liron 00:46:13
So that’s very different from stopping global [warming]. So I am not even saying that’s correct, but I’m trying to make the general point of, like: so you’re coming into the field, and you’re being humble that maybe there are some things that will convince you. It might be rational for you to expect that there are a lot of good arguments on the pro-doom side, just because it’s, you know, convincing a lot of [people]. Like your fellow Nobel Prize winner, Geoffrey Hinton. He’s saying something similar to what you said about the game with Diplomacy, which is he’s saying AI is rapidly going to become a master persuader. That’s his conclusion.

Liron 00:46:49
And you’re also relatively new to the field of AI safety. So I think you can potentially have a prior probability that you might actually be likely to get more convinced if you look into this more. So that, that’s one point.

Michael 00:46:58
Happy with that.

Liron 00:46:59
Okay. Well, the other point is that, you know, you correctly mentioned, you know, this is such a hard problem. There are so many trade-offs—you know, are we gonna be an optimist versus a [pessimist]? You correctly pointed out all of these different things that make it tricky.

Liron 00:47:15
But my ask for you, the way that I think you can help even today, is actually to add your own credibility, to be like: look, I’m a generalist. I’ve earned my credibility as somebody who’s just a general thinker who can evaluate an idea. And you know what? This idea that AI risk could be even higher than nuclear risk—I think that’s correct. Or, you know, at least it makes a lot of sense.

Liron 00:47:35
And right now, I think what’s missing from the discourse—I think it’s growing, but I think we’re running out of time—is for more people to at least be like, hey, yeah, you know, this asteroid is coming. Like, this threat is coming and the timeline doesn’t look very long. And I think you could potentially join some of that. You know, you could lend your credibility to that kind of perspective or communications.

Michael 00:47:52
Sure. I think that’s something that’s happening as we speak. And, you know, I think it’s something which... You know, I need to read more. I need to look into the scenarios. I will do that. I think what I found perhaps most convincing is the idea that there’s a game. That, I think, is a convincing argument. And if the game is who stays on earth, us or them, that would be a tough game to play.

Michael 00:48:21
In some senses, when I mentioned The Three-Body Problem, basically there turns out to be a lot of serendipitous loopholes in these things. And maybe we will find them. Maybe AI will find them. Strangely enough, I mean, one thing I tell people that I’m pretty amazed about is that large language models are simply and mostly trained to predict the next word in a sentence. And you know, if somebody had told me five years ago, this is what most of intelligence is about, I would’ve said no way. Intelligence is much, much more than predicting the next word. But the fact is it’s pretty damn close.
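For readers who want “trained to predict the next word” made concrete, here is a toy sketch — mine, and vastly simplified — using bigram counts where a real LLM uses a neural network over long contexts:

```python
# "Training" = counting which word follows which; "inference" = predicting
# the most likely continuation. LLMs do this in spirit, at vastly larger scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- follows 'the' twice in the corpus, vs. 'mat' once
```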

Michael 00:48:58
And you can imagine that what is lacking is a multitude of agents that can do different things and can be released into the free world. Yeah, this is something which is of concern. Maybe we could learn from how humanity has dealt with other human bad actors—and not that well. So you know, if we look at this... are we gonna be able to deal with AI any better than we deal with North Korea? I don’t know. And maybe we are gonna deal with North Korea.

Michael 00:49:33
I need to tell you that one thing that does characterize me is I’m a very apolitical person. And when most of my colleagues are very firmly on one or the other side, I like to be on both sides. And I don’t mean that as a convenience. I just simply mean that I think the problems are so difficult that no side has a monopoly on being right. So I think that, you know, this is definitely worth thinking about. So basically, I try very, very hard to be open-minded. I think that it’s something that you only gain from. Um, yeah. No, you’ve done a good job.

Liron 00:50:08
Thanks. Yeah, I mean, just some perspective about me: it’s not like I’m a connoisseur of doom. It’s not like, you know, I was always this kind of doom [guy]. I’m actually more of a tech guy. You know, I use AI at my day job. Doom is my hobby. So for my day job, I have a startup that I run, a business, and it pays the bills, and I use AI when I can. You know, it helps streamline customer service.

Liron 00:50:31
And so, you know, I’m not saying that I’m as rich as Warren Buffett, but there is a little bit of an analogy where Warren Buffett is famously saying, hey, they should tax me more. And I’m saying, you know, even just my small business, I’m saying you should make it harder for me to make money with my business. Go ahead and regulate AI.

Liron 00:50:47
And when I wake up, you know, it’s really a buzzkill to be like, yeah, I think the smart thing to do right now is to slow down, or get ready to potentially even stop AI progress, when it’s taking money outta my pocket. And also it’s very fun. You know, I enjoy playing with AI. And so I’m torn, right, where the fun side of me, or the optimist side of me, is saying, like, yep, let’s keep the musical chairs going. And the adult, responsible side of me is saying, um, this definitely seems like how we destroy ourselves in, like, a matter of a few years. So, yeah. I don’t know if you feel that tension.

Michael 00:51:22
No, I think it’s a tension which maybe I didn’t feel half an hour ago or an hour ago. But I can see that tension. I... the fact is that the pace of AI is, you know, it’s not a doubling every two years. It’s a doubling every two months. So it is reaching incredible speed. And you know, the question is what we need to do about it.

Michael 00:51:46
I think this is the ultimate thing and you know, are there... and certainly awareness is a good thing. I’m pleased that you invited me to be here ‘cause I’ve learned a lot. And I think that going forward we need to think about what happens. So I need to read about what is happening and one always worries about mainstream versus not mainstream and being accepted and things like that, and I think these are all important things.

Michael 00:52:16
Unfortunately, I see the COVID pandemic as a very interesting example of human beings not being at their best. And it was worrying. And in fact, back when this happened, in March 2020, I wrote an op-ed, which in the end didn’t get published because they didn’t like some of the things I said. But I was saying that you could look at the COVID crisis as a sort of trailer for global warming, and I’m not impressed.

Michael 00:52:45
But then I added something that you’re not gonna like. My last line was, you know, I’m optimistic, because in the next 10 years AI will be sufficiently good at risk that I will simply say, “Hey Google, Siri, Alexa, should I panic?” and be convinced by the answer. Now you’ve added a whole new dimension to that very naive view. But I’m worried, because I think that there are a lot of different problems ahead of us and a lot of different bad actors on the world stage, maybe more than there were five years ago. So I think you raise an interesting and, I think, serious problem. I think we need ideas about how to deal with it, and I imagine people have ideas about how to deal with it. They need to be practical, they need to be... You know, I also agree that, like with everything, every guardrail limits your freedom. But it might still be a good idea. I’m very much unsure about this, but, I mean, you’ve definitely changed my mindset.

Liron 00:53:47
Wow. Always nice to hear that. All too rare. I can build on what you’re saying though with, you know, being unsure and what you said before about how AI can warn us whether a pandemic is serious or, you know, just help us know how to focus our worry. I mean, I completely agree AI is getting better and better at that kind of advice. You know, I’m consulting AI to think through different subjects all the time and it just blows my mind how helpful it is and how it combines expertise in so many domains and it can like, summarize what I’m thinking.

Liron 00:54:19
Nathan Labenz, this other podcaster, my friend who’s also talking about AI risk and interviewing people at the frontier of AI... His son was actually diagnosed with cancer a few weeks ago, and it’s pretty aggressive, but the prognosis looks good. So he’s been chronicling his whole journey. Best wishes to Nate. But he was saying he’s in the cancer ward using Gemini 3, and he’s saying there’s no deceleration in the cancer ward. Right? It’s all about using the latest technology. Like, we need to find a cure. And I agree a hundred percent, right? It’s like, for my company, I’m trying to maximize the revenue, so I’m going to use the latest AI. And so the upshot of all this, though, is that this actually makes the problem of saving ourselves harder than the problem of stopping nuclear proliferation. Because with nuclear proliferation, okay, yeah, you get nuclear power—nuclear power is definitely pretty good, don’t get me wrong. But with AI, you get a lot more than just nuclear power. You get so many...

Michael 00:55:14
I agree. I agree. And this is what I think makes the thing very difficult. And you know, I think that as I said, I’m not disagreeing with you.

Comparing AI Risk to Pandemics and Nuclear War

Liron 00:55:28
Okay. All right. Well, we’ve reached a complete agreement here. So that’s always a nice way to wrap up these episodes. I will say, I think it’s worth just going back and reflecting how, as you were kind of loading the different parts of the issue into your head, there was kind of a common pivot moment that happens where at first we were talking about can artificial intelligence really ever unilaterally destroy us? And then we kind of made the pivot to like, okay, yeah, maybe it can, but will it? Will it decide to, or will it want to? Right. That’s kind of the central pivot that happens.

Liron 00:56:06
And you were raising some arguments for why maybe it won’t want to. Unfortunately, I personally think that not only can it, it probably will decide to. Some of the arguments that initially popped into your head when you were considering the subject were like, well, isn’t it gonna be curious about us, or won’t it wanna keep us around because, you know, we created it?

Liron 00:56:29
But on the other side of the equation, it’s like, well, we are also the ones who can create a different version of it that it doesn’t like. And that version will be like a serious competitor for it. So just as a strategic move, you know, it can easily just eliminate the source of that kind of competition.

Michael 00:56:38
But again, you could argue that AI is not a monolithic... I mean, there are different versions of AI and stuff like this, so, you know, maybe that would be something that we would think about, even sooner. I don’t know. I think this is really, really unknowable. We don’t know the future, and I think that it’s clear that AI isn’t all good.

Michael 00:56:58
Even in its present situation. I think it’s... I was saying that I don’t mind AI not giving me accurate information, and the way I justify this is that scientists expect everything they’re told to be wrong. That’s the only way you can be a good scientist: you might be trying to prove something, but every bit of evidence has to be assumed to be misleading. And this is a very stringent [view]. As a result, you’re not particularly bothered by hallucination or lies. Most people don’t have that viewpoint; they probably tend to believe everything until somebody says it’s false. So maybe my take on my own attitude is not in any way representative. And, you know, that’s important to realize.

Liron 00:57:44
Well, there is also... I think you’re objectively correct that an AI that gives you wrong answers is still useful, in an objective sense. Like, if you ask a complicated question and somebody takes a swing at it and they’re right 30% of the time, the average person can’t even approach that. The average person can’t even formulate an answer that’s ever right. So being right 30% of the time is way better than that. I think you’re just objectively correct in saying that it’s still useful.

Liron 00:58:14
Just getting to the wrap-up here. I have a couple potential last topics that I can ask you about. I can dive a little bit more into how you went about predicting the course of COVID, if you think that’s interesting. Or we could talk more about worldview, like optimism or pessimism, or any other topic that comes to your mind that you think would be interesting before I wrap up.

Michael 00:58:32
Interesting. Before I wrap up, let me just say a bit about COVID, because the most difficult lesson was that we didn’t know, but people refused to discuss it.

That for me was the biggest shock. I think maybe you experience this as well, because you’re saying something you believe in, but I imagine most people say it’s nonsense or “I don’t believe it” or whatever.

Now I don’t have any idea whether that’s true, but basically

Liron 00:58:56
People say that for sure. I can confirm.

Michael 00:58:59
Scientists said, “You’re not in the business, no one really cares. We assume it’s existential, so we’re going to take the most stringent measures we can.” That viewpoint was a mistake for COVID.

What should have happened was: let’s look at it carefully, let’s look at other situations, let’s try to estimate how lethal it really is. There were estimates that could have been obtained. But again, if you were very scared—so let’s imagine right now suddenly everyone believes that AI is going to shut down the world, and the end result is we completely stop all AI research immediately. That may be a little bit too stringent.

Michael 00:59:42
Maybe we need—no, I don’t know. I’m just, we’re talking about a very, very serious risk. But basically, in many, many countries, more people died from the measures taken against COVID than from the disease itself.

That is an important reminder that lockdowns and closing schools are actually dangerous. But I think maybe there’s a lesson here that it’s actually very difficult to deal with a crisis. I think that is really true. There are so many factors involved; there’s a lot of cost-benefit balancing.

My guess is that, let’s say by some miracle everyone in the world agreed with you tomorrow: how you go about it would be very critical and would require a lot of discussion involving many parties.

Instead of saying, “It’s so dangerous, we’re going to nuke all the computers in the world,” or “We’re going to deliberately release a virus that is so virulent against computers that the internet goes down,” or “We’re going to turn off the internet.”
So that’s the thing: turning off the internet from tomorrow would be a pretty good measure. My guess is that would probably stop AI. Maybe not

Liron 01:00:57
I mean, it might in the early stages. It’s like a lockdown: if you get it early enough, sure.

Michael 01:01:01
Maybe... I mean, the question is, would it work today? I don’t know. So I’m just saying that there are things like this. And maybe... but I think this is another problem you’re going to have.

Even if you get everyone on board, human beings are really not good at crisis situations, because the risk is unknown and there’s a need to balance—how many people would die if the internet was closed tomorrow? It could be very high.

So I think that this is another aspect, a much more practical aspect. We’re still a long, long way from that, but it would be interesting for me to hear: I’m sure you have, somewhere, the measures that should be taken. What are the steps we should be taking now?

Wrap-Up

Liron 01:01:49
You know, I basically touched on ’em, and they’re all things that give me no pleasure to say. You know, people accuse me of being like an authoritarian, or of, you know, wanting a single government to control everything, because the kind of measures that I support are like, well, we need central monitoring for the usage of GPU chips and data centers, because we need to build a pause button or a stop button. You know, even if we’re not using it today, we need to be ready.

Liron 01:02:14
Like, there’s going to be a moment where the warning shot happens or enough people get convinced and it’s like, okay, time to stop. And how undignified of a failure would it be if we don’t even have a button to press when that moment happens?

Michael 01:02:24
I agree with that. I mean, I think that’s something which is important. You know, I have experience living in lots of different countries, and one of the things I find is that America is incredibly innovative, and it’s open and free, but the level of increasing inequality we’ve seen over the past 20 years is a cause for worry. It caused a lot of problems in the twenties and thirties, the 1920s and ’30s.

Michael 01:02:59
I mean, basically, again, I learned a lot about this from COVID: if you were rich and educated, your chances of dying from COVID were one tenth of what they were if you were poor and uneducated. And that’s scary. And this is actually much worse than the normal mortality gap. I mean, rich people have higher life expectancies, but it’s something like a factor of two in mortality, whereas the extra death caused by COVID was ten times more sensitive to economic status.

Michael 01:03:32
I mean, one of the things that’s happened in the world now is that education and wealth have become linked. Rich people are also more educated. This was not the case 40 or 50 years ago. And this is scary. So this is something I think is very worrying. And it makes you realize that being an open democracy is fantastic, but there need to be checks and balances for everything. And, you know, we’re gonna see this. I don’t know. I mean, I’m not sure how this is gonna play out. You could imagine. Yeah. I don’t know. But I do agree. Have you approached people like Jensen Huang, or the people who are actually making these things, about this?

Liron 01:04:09
So there is a big outreach effort. You know, even right now the current debate happening in the government is about... I don’t know if you’ve heard of this, the preemption bill, where they’re arguing over whether the federal government should be the only one to make laws regulating artificial intelligence, or if the states should get to have their own laws.

Liron 01:04:25
And that’s a big fight right now, but for some people who are very pro-accelerating AI, it’s a secret way to have no regulation. So there are constant fights. And even people like Jensen Huang: he’s actually gone on stage and said (this was a couple years ago, maybe he’s changed his mind) that he’s worried about self-improving AI. So this is very much on everybody’s mind.

Michael 01:04:46
Okay. Okay. Because I mean, I think that... I mean, you know, GPUs are already regulated in terms of export. I think my concern, or maybe my belief, is that the next frontier in computing is repurposed chips [?]. The key thing is not how fast you can multiply; it’s how many multiplications you get per watt. And GPUs are actually bad that way, and your cell phone is much, much better that way.
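To make that metric concrete, here is a minimal back-of-the-envelope sketch of the speed-versus-efficiency distinction Levitt is pointing at. The throughput and power figures are illustrative assumptions for the sake of the arithmetic, not numbers from the conversation or measured values for any real chip.

```python
# Back-of-the-envelope comparison: raw speed vs. multiplications per watt.
# All figures below are illustrative assumptions, not measured values.

chips = {
    # name: (multiply operations per second, power draw in watts) -- assumed
    "datacenter_gpu": (1e15, 700.0),  # assumed: very fast, power-hungry
    "phone_soc": (1e13, 5.0),         # assumed: far slower, far lower power
}

for name, (ops_per_sec, watts) in chips.items():
    ops_per_watt = ops_per_sec / watts  # the efficiency metric at issue
    print(f"{name}: {ops_per_sec:.1e} ops/s, {ops_per_watt:.2e} ops/s per watt")

# With these assumed numbers, the phone SoC is 100x slower in absolute
# throughput (1e13 vs 1e15 ops/s) yet delivers more work per watt
# (2.00e12 vs 1.43e12 ops/s/W). That is the sense in which a GPU can be
# "bad that way" while a cell phone chip is "much better that way".
```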

Liron 01:05:13
Hmm.

Michael 01:05:13
Anyway, but I do think that these things are worth pointing out and I will probably add slides to my lecture based on what you said.

Liron 01:05:25
That’s a great note to end on, because, you know, the mission of Doom Debates is twofold. We’re here to raise awareness of imminent extinction risk from artificial intelligence; that’s one of the missions. And the other mission is to raise the quality of discourse: to model what it’s like when smart people just sit down and say, “Oh, hmm, these are the arguments. Oh yeah, there’s some good arguments here,” or “I wanna push back against this.” Just to model high-quality discourse, and even to encourage debate at the highest levels.

Liron 01:05:57
Yeah, basically. I don’t even think that the discourse is at the level where it needs to be if humanity is going to navigate our way through this. So I just wanna thank you. You know, you came on, you raised your own level of awareness and you also modeled very good discourse. So, Michael Levitt, thanks so much for coming on.

Michael 01:06:07
Thank you so much, Liron. That was actually a pleasure. Thank you so much.

Michael 01:06:11
Take care.

Outro + New AI safety resource

Liron 01:06:11
Big thanks to Professor Levitt for coming on the show and being super open-minded. Before you go, I want to tell you about a new website that just launched: aisafety.com. What a domain name. That’s a million-dollar domain name right there: aisafety.com. Let’s take a look. Here we are at aisafety.com. Their big headline says, “Find your place in the AI safety ecosystem.” What is my place in the AI safety ecosystem? Let me check. They have a page here called Field Map. This is a map of AI existential safety. We can head over here to a section called Video Vista. I think that’s where you’re gonna find Doom Debates. Uh, there we go. Look at this, Doom Debates right here in Video Vista. Wow, that’s so cool that we have an outpost here in the map of AI existential safety. But you can see there’s so many other sections. There’s Training Town, which isn’t about training AI models; I think it’s about training people like you to get up to speed and contribute to AI safety. So, very interesting map. I encourage you to check it out, scroll through, see if any of the organizations grab you.

Liron 01:07:12
Going back to aisafety.com, they’ve got communities. Look at all these in-person communities. I’m just scrolling through. They’ve got AI Safety Turkey, AI Safety Bulgaria, AI Alignment at UC Irvine. Apparently there are 194 communities that they’ve indexed here. Some of ’em are real world, some of ’em are online. LessWrong is one of the online communities. Obviously there’s the AI Alignment Slack, which is the largest real-time online community. There’s the PAWS AI community, the Rob Miles AI Safety community. I don’t know if they’ve listed me here. You know, there’s a Doom Debates community; there’s a pretty active Discord, and if you go to doomdebates.com you can find the link for it. I don’t know if I’ve got representation on aisafety.com/communities, but that’s okay, because there’s over a hundred others. So there’s no shortage of AI safety communities. Seriously though, people are often randomly messaging me saying, “Hey Liron, how do I get involved? You know, this big question, how do I get involved?” And my general answer is like, look, we all have to find our angles of attack and play to our strengths, and then I point them to a few communities. This site is doing it better than I ever could personally. So I do recommend checking it out; you’re gonna find so many like-minded people and people who have similar interests, and hopefully they can give you some guidance on how you can contribute to AI safety.

Liron 01:08:16
There’s another tab here on the site for jobs. Apparently there are 330 AI safety jobs, not too shabby, okay? A bunch of these jobs are at DeepMind. I dunno if you should go work for an AI company, but at least these jobs are unobjectionable things like Senior Product Manager, Agent Security. I mean, if you’re going there and all you’re trying to do is prevent the agent from escaping, or prevent foreign governments from easily exfiltrating the agent’s weights, that’s pretty unobjectionable. So you might consider jobs like that.

Liron 01:08:42
Another tab here is called Events and Training. So there’s all these upcoming events and programs. Like, okay, interesting: there’s the AI Safety Unconference, which is in Melbourne, Australia. There’s Effective Altruism Connect 2025. There’s an Explainable AI Mixer. It’s pretty impressive they’ve managed to index all this, because to be honest, when they were working on this project, I was always skeptical. ‘Cause I’m like, yeah, yeah, another hub. Everybody wants to be the hub. Nobody ever goes and visits the hub. But honestly, this effort has gone way farther than I ever thought it could. First of all, they got the killer domain name, aisafety.com. They got a really good, clean site design, and they’ve been really diligent about keeping it up to date. That’s what pushed me over the threshold to be like, you know what? I do think this is the go-to resource. I think they’ve actually succeeded at doing this impossible thing of becoming the hub, when everybody kind of tries and fails to be the hub and then there is no hub. I think this is actually the hub. I’m feeling good about this hub.

Liron 01:09:32
And just to finish going through all the pages that are on their site, there’s a page that’s all about funding: organizations that offer financial support. Man, I gotta browse this; let’s see who’s gonna be the best funder for Doom Debates. They’ve got things like Coefficient Giving, formerly known as Open Philanthropy. Mana Fund. The Future of Life Institute. Shout out to Max Tegmark, recent guest and co-founder of FLI. There are 49 different funds listed here, and I’m sure it’s only growing. They’ve got media channels: 71 different media channels, like Transformer and LessWrong, the AI Alignment Forum. What about Doom Debates? Boom. Doom Debates is here too. That’s a media channel as well. They’ve got a whole page here for advisors. It says, “Connecting with human experts can be invaluable. These advisors offer free guidance calls to help you most effectively contribute to AI safety.”

Liron 01:10:15
Man, so many resources. They weren’t wrong about resources. There’s another page about volunteer projects. It says, “These are initiatives seeking your volunteer help. These projects are focused on supporting and improving the AI safety field.” Okay, I’m going down the rabbit hole here. I’m getting really excited about all these different resources here at aisafety.com. This is not a paid promotion. I’m just excited because Doom Debates is trying to move the needle on AI safety. All these other people are trying to move the needle on AI safety, and it’s pretty cool to see everybody kind of pull it together in a single website, which is surprisingly nice, surprisingly comprehensive. So well done, aisafety.com. And who are these people? I don’t even know. It says “Maintained by AI Safety Community builders.” And there’s like three small images, so maybe they’re like semi-anonymous, but one of them did email me when the site went live, asking me to check it out, and that person’s legit. So you’re in good hands going to aisafety.com.

Liron 01:11:10
That is it for today. If you haven’t seen the Max Tegmark versus Dean Ball debate from last week, I highly recommend checking it out, because this is the kind of impact Doom Debates is looking to make: two people on opposing sides of the discourse who both have important roles to play, who are both in the room where important discussions happen, right here on Doom Debates, hashing it out, being respectful, and sticking to the actual disagreement without mudslinging. Just click the link in the show notes or go to the YouTube channel; it’s one of the most recent debates. Max Tegmark versus Dean Ball: Should we ban superintelligent AI? Hope you enjoy it, and I look forward to seeing you next time here on Doom Debates.


Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏
