AI Will Take Our Jobs But SPARE Our Lives — Top AI Professor Moshe Vardi (Rice University)

One of the most decorated computer science professors in history argues that AI poses an existential threat… to our jobs!

Professor Moshe Vardi thinks AI will kill us with kindness by automating away our jobs. I think they'll just kill us for real.

Who’s right? Tune into this episode and decide for yourself where you get off the Doom Train™.

Some highlights of Professor Vardi’s impressive CV:

  • University Professor at Rice — a rare distinction that lets him teach in any department.

  • 65,000+ citations, an H-index above 100, and nearly 50 years spent mechanizing reasoning, which makes him one of the most decorated computer scientists alive.

  • He ran the ACM’s flagship publication for a decade, and now bridges CS and policy at Rice’s Baker Institute.

  • He has been sounding the alarm on AI-driven job automation for over ten years.

  • He signed the 2023 AI extinction risk statement, and calls himself “part of the resistance.”

Links

  • ♪ 1 Way Ticket (to the Doom) ♪

Timestamps

00:00:00 — Cold Open

00:00:54 — Introducing Professor Vardi

00:02:01 — Professor Vardi’s Academic Focus: CS, AI, & Public Policy

00:07:18 — What’s Your P(Doom)™?

00:12:28 — We’re Not Doomed, “We’re Screwed”

00:16:44 — AI’s Impact on Meaning & Purpose

00:27:47 — Let’s Ride the Doom Train™

00:35:43 — The Future of Jobs

00:39:24 — A Country of Geniuses in a Data Center

00:41:04 — Corporations as Superintelligence

00:45:49 — Agency, Consciousness, and the Limits of AI

00:50:07 — The Mad Scientist Scenario

00:54:02 — Could a Data Center of Geniuses Destroy Humanity?

01:03:13 — The WALL-E Meme and Fun Theory

01:04:01 — Why Professor Vardi Signed the AI Extinction Risk Statement

01:06:02 — Wrap-Up + 1 Way Ticket to Doom

Transcript

Introducing Professor Vardi

Liron Shapira 0:00:00
Moshe Vardi, he’s one of the most decorated computer scientists alive. Moshe isn’t just building the technology, he’s pushing us to reckon with its consequences.

Moshe Vardi 0:00:08
Suppose we succeed in automating everything. What’s the purpose of humanity?

Liron 0:00:14
So you think a lot about resisting humans becoming soft, kind of like the movie WALL-E, where the AIs are gonna kill us with kindness. And I still think the AIs are more likely to kill us by melting our cells.

Moshe 0:00:26
We have to be incredible idiots to let technology run amok to the point that it will just decide, “Let’s do away with the pesky humans.” I can’t say it cannot happen. Let’s put it this way, very unlikely.

Liron 0:00:37
How does the entire coalition of humans compete with millions of geniuses in a data center? Who wins that fight?

Liron 0:00:54
Welcome to Doom Debates. Moshe Vardi is a professor of computer science at Rice University, where he holds the rare distinction of University Professor, meaning he’s empowered to teach in any department across the university. He’s one of the most decorated computer scientists alive, with over sixty-five thousand citations and an H-index above one hundred. He pioneered the field of automata-theoretic verification: he figured out how to use mathematical logic and automata theory to prove that computer systems work correctly. He was also editor-in-chief of Communications of the ACM for a decade. He is a fellow at Rice’s Baker Institute for Public Policy, bridging computer science and policy. He has been warning for over a decade that AI could eventually automate most human labor.

So in other words, Moshe isn’t just building the technology, he’s one of the most prominent voices in computing, pushing us to reckon with its consequences. I’m excited to talk with Professor Vardi about the trajectory of AI, what it means for society, and to compare our views on one risk in particular, which is AI extinction. Moshe Vardi, welcome to Doom Debates.

Moshe 0:01:59
It’s good to be here.

Professor Vardi’s Academic Focus: CS, AI, & Public Policy

Liron 0:02:01
A lot of bullet points here. I think a lot of people would be envious of this resume. Let me just ask you very briefly, what has been the central focus of your whole career of research?

Moshe 0:02:12
So there are people that have a life strategy, one overshadowing goal. I had a colleague, at some point she said she wanted to become university president, and she dedicated the next twenty years to becoming university president. That’s not my strategy. I’m more of an improviser. If there is a strategy, it’s be open to opportunities. Things happen, a door opens, and I say, “Let’s see what’s on the other side,” and I walk there.

People ask me now, “What are you doing today?” I say, “Well, my left brain is doing research in artificial intelligence, and my right brain is yelling, ‘What are you doing?’”

Liron 0:02:54
So in terms of the research focus, I mean, I know I mentioned automata verification. Maybe tell us a little bit about that.

Moshe 0:03:01
I’ll put it at a higher level, which is that today when people say AI, they typically think of LLMs. That’s kind of the notion of AI, or maybe even a little more broadly, generative AI. But for many, many years, when people thought about intelligence and mechanizing intelligence, they said, “What is intelligence? It’s to think and speak logically.” And so reasoning is intelligence. Think about it — recognizing cats from dogs, well, any rat can do it. How much intelligence does it take?

Today it’s called Good Old-Fashioned AI, sometimes derisively, sometimes affectionately, depends on the speaker. But for many years, people thought automated reasoning is AI, and that has really been the center of my research, going all the way back to my master’s thesis, which was about automating reasoning.

There was a book I remember from when I became a graduate student, a big book, and the title was “What Can Be Automated?” And that was really the essence of computer science. What kind of intellectual things can be mechanized? We’re not talking about harvesting wheat here, but mechanizing intellectual things. And so my research for the past almost fifty years has been mechanizing reasoning.

Liron 0:04:29
So that’s your research. What’s your focus these days when it comes to your public policy and public communication?

Moshe 0:04:35
Again, we talked about taking advantage of opening doors. My public policy career started in the early two thousands. I was a member of the board of the Computing Research Association. We had a board discussion about what was then known as software offshoring, when the Indian consulting companies suddenly became software giants. And there was a worry that all the software would be written in India. Now we’re worrying all the software will be written by AI.

But in the early two thousands, the worry was the Indians — they are smart, there are many of them, they’ll write all the software in the world. Is there a future for the software industry in the United States? And I said, “Well, we need first of all to understand the facts. We can’t have an opinion and think of policy before there are facts.” And no good deed goes unpunished. I was asked to chair a study about software offshoring, and the study ended up saying not to worry, this is not the end of the software industry in the United States. There’s nothing to fear but fear itself.

And then ACM asked me, “Well, based on your experience, would you take on editor-in-chief of Communications of the ACM?” Which is the flagship publication of ACM, the premier society in computing. I accepted, and then it became clear that I have to write editorials and start thinking about computing in society.

More and more, that has become one of the main focuses of my public writing: looking at technology and its societal impact. For the bulk of my academic career, this was not a topic of conversation. It’s not a mainstream topic. Most computer scientists are nerds. They’re interested in the technology, and that’s why they got into the business. That’s why I fell in love with programming.

But more and more I’m thinking, “Wait a minute. Are we doing something good for society?” People started worrying about automation in the early two thousand tens. Then social media came about, people started worrying about social media. Now we’re worrying about AI. So I would say for the past fifteen years, there are more and more voices with concerns about the societal impact of technology, and that is something that I spend a lot of time thinking about and writing about.

What’s Your P(Doom)™?

Liron 0:07:18
Well, are you ready for me to ask you the number one question that summarizes what I think is most important?

Moshe 0:07:25
Yes. P(Doom). What’s your P(Doom)? What’s your P(Doom)?

Liron 0:07:32
Professor Moshe Vardi, what’s your P(Doom)?

Moshe 0:07:38
So P(Doom) is not a well-defined concept because doom is not a well-defined concept. There could be many different dooms. There are many different scenarios. Here’s a way to think about it. Will humanity be here in one thousand years? That’s not a lot to ask for, one thousand years.

So I’ve done the following thought experiment. Economists like to say that for them, the ideal economic growth is about 2.1 percent per year, real economic growth, and that’s roughly what we’ve had since the beginning of the Industrial Revolution. But suppose we have it for a thousand years. You can do the calculation. You can do 1.021 to the power of one thousand. You can stick it in your Google search bar and do it.

And the answer would be, in a thousand years — even considering how big the population on Earth will be, let’s suppose we go from about ten billion now to about a hundred billion, which will already be incredibly crowded — the average GDP per capita will be in the billions. Everybody will be a billionaire.

Now, the thing is, Earth will not be able to sustain a hundred billion people spending like billionaires. It’s a finite planet.
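[Editor’s note: Vardi’s back-of-the-envelope calculation is easy to reproduce. A minimal sketch; the 2.1 percent rate and thousand-year horizon are his stated assumptions:]

```python
# Vardi's thought experiment: compound 2.1% annual real economic
# growth over one thousand years. (1 + r) ** years gives the total
# growth factor, the same number you'd get from the Google search bar.
r = 0.021      # annual real growth rate (his "ideal" figure)
years = 1000

growth_factor = (1 + r) ** years
print(f"{growth_factor:.2e}")  # roughly 1.06e9: about a billion-fold
```

A billion-fold larger economy, even spread over a much larger population, is why his scenario ends with everyone averaging billionaire-scale wealth.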

Liron 0:09:07
Yeah. There’s just not enough material goods to mine out of the Earth.

Moshe 0:09:11
It’s a finite planet. So what happened? Every species in nature is in equilibrium. If it’s not in equilibrium, eventually something will happen — too much prey, and the predators will come and eat it. Eventually they get into equilibrium.

About three million years ago, we invented technology. We started chiseling stones, and we broke out. We’re not in equilibrium with nature. For long-term survival, to me, we have to be in equilibrium with nature. We are not. So right now, if you’re asking what is the probability of doom overall, unless we change very dramatically how we live on this planet, I think we’re doomed.

Now, it could very well be — and that’s one reason that I think the concept of doom is not well defined — we might get wiser. We might decide, “No, we have to change things.” We have made changes. All kinds of things we used to take for granted, we’ve changed them. We might learn to live in equilibrium with nature.

So in a big way, the reason that doom is not well defined is because it assumes some kind of determinism, and we still have agency. We might be able to do something about technology in general. We might be able to learn to live in harmony, in equilibrium with the planet, because we’re stuck on this planet. We’re not going anywhere else, not for a very, very long time, if ever.

Liron 0:10:47
You’ve made some points that I really agree with. One thing I interpret you to be saying is we’re all doomers on a long enough timeframe. I mean, if you go all the way to the heat death of the universe, I don’t think anybody thinks that that’s not a pretty likely source of doom. So the only question is what’s the timeframe. Is that fair to say?

Moshe 0:11:04
Yeah. Even before that, the sun will go nova. We don’t need the whole universe to die. The sun will go nova at some point, and then humanity on Earth — humanity as we know it is doomed. But that’s a billion years, and that’s just too abstract and far out.

Liron 0:11:17
And also within a couple hundred million years, we’re gonna have a major asteroid impact.

Moshe 0:11:26
Yeah. But this is why I tell people, “Let’s talk about one thousand years.” The human story overall is about three million years. A thousand years is a non-trivial amount of time, but not a ridiculously long one, and I think it’s a good goal for humanity to figure out: are we going to be here in one thousand years?

Liron 0:11:53
Personally, I think the most interesting timeframe to look at at this point really is just ten or twenty years, if not less. I don’t even think we need to focus on what’s happening after twenty years, because I think so much stuff is happening so fast, especially with AI. So I’ll just ask you really quick to get a baseline, what’s your P(Doom) by 2050? So a little bit more than twenty years.

What do you think is the chance that the vast majority of humanity, if not literally everyone, is just permanently going to die, no descendants, we’re never going to conquer the galaxy? It’s kind of a sad end to the species. What’s your P(Doom) in that sense?

We’re Not Doomed, “We’re Screwed”

Moshe 0:12:28
So I’m actually giving a talk on Thursday, and I have the same time frame — about 2050. What will happen by 2050. So I don’t think we’ll be extinct by 2050. However, it doesn’t mean things are going to be good.

So take the scenario where AI is superintelligent and it says, “Well, these are pesky humans, they’re just pollution on Earth. I can make more paperclips without all these humans around, so let’s get rid of the humans.” To me, that’s not a very likely scenario. But my point is, look where we are today. I would say, first of all, the world is not in very good shape, and this country is not in very good shape.

There is a lot of reason to believe that we’ll see a wave of automation of low-skill white-collar jobs. Why white collar and not blue collar? Blue collar has already been hit with the decline of manufacturing. But what happened is manufacturing went up the food chain, looking for higher added-value manufacturing. We don’t make sneakers in this country, but we make jet engines, we make cars. These are capital intensive, and it makes sense to automate them.

So there was a huge wave of automation that affected our manufacturing industry, and many millions of jobs in manufacturing have been lost. Now, the next wave of automation would be, in my opinion, low-skill white-collar jobs.

Liron 0:14:13
That makes sense. So we’re gonna talk about unemployment doom, but I would like to get a headline number for you. Let’s say by 2050, probability that the human race not only will have a hard time finding jobs for some people, but will not even have descendants. We’ll basically all be killed by 2050. I think you mentioned the probability is pretty low, but can you give me a rough number?

Moshe 0:14:34
To me, it’s not a likely scenario. The scenario that is realistic to me is more that social changes are going to affect us. The reason I say we’re not doomed but we are screwed is because I see social changes that make me very nervous.

One is, we know that when people lose their jobs, they lose a lot. People lose their sense of value. There’s a whole bunch of things that people lose when they don’t have jobs. And if you want to understand the politics of today, it’s the politics of blue collar, working class resentment in the United States. Now, what happens when this happens to white collar people?

Even if you just look practically at what happened over the last fifty years here — the Democrats used to be the party of the working class, and somehow they’ve gradually become the party of educated professionals. And the Republican Party, which was the party of business, has now become the party of the oligarchs and the working class. Very strange.

So the point is that social changes and socioeconomic changes bring about very profound political changes as well. Right now, we are a polarized country, lots of resentment, and I don’t see this getting any better. That’s what worries me.

Liron 0:16:09
Very interesting point about white collar workers following in the footsteps of the blue collar workers and getting worried about their career path. I think I agree about this kind of AI unemployment wave, and I don’t even think we’re looking at a fraction of the population. I would ride it all the way to just everybody. I consider myself not only a white collar worker, but also a technical worker, so I have this high opinion of my own skills, but I’m already watching myself getting made redundant by some of these latest AI software tools. So would you argue that the unemployment wave goes all the way to every job, basically?

AI’s Impact on Meaning & Purpose

Moshe 0:16:44
In some sense, I would say you can look at this as a criticism of myself and my own field — we’re pursuing the dream of artificial intelligence. And why? You can go back all the way to Turing, who wrote this famous paper in 1950 on the possibility of machine intelligence, and he argued that yes, machines can become intelligent. He said, “Okay, let’s do it.” And why are we doing this?

The idea of artificial intelligence first came up with Leibniz in the seventeenth century. But if you look at what he talked about, he talked about intelligence augmentation. For example, I’m wearing glasses now. You’re wearing glasses. This is technology that augments us. It does not replace us. Even Steve Jobs, when he talked about the personal computer, said it’s gonna be like a bicycle for the mind. We will be able to think faster. But we’ve been focusing on machines that replace us. And we should ask, “Wait a minute, why are we doing it?”

Why are we developing technology? The reason we develop technology is because we are lazy. We don’t like to walk. But also, thinking hard is hard. What happens if we just don’t have to think anymore?

So I wrote some years ago, about a decade ago, suppose we succeed in automating everything. Suppose we build machines that can do whatever human beings can do. What’s the purpose of humanity?

And some people say, “Oh, we will all just become artists and write poetry.” Wait a minute — some machines now can write better poetry than I can. Maybe not better than you, but definitely better poetry than me. And generative AI can generate art. Again, people debate how creative it is. But the idea that all these people with no jobs will just write poems and produce art — to me, that’s just incredibly naive.

In fact, we have data. We talked about the declining employment ratio. People have asked, “Okay, people don’t have jobs, what do they do?” And the answer is, women that don’t work actually work, just not for pay — they usually take care of someone, either someone young or someone old. But men that don’t work, especially prime age men that don’t work, very often what they do is play video games.

So to me, that’s a pretty bleak future for humanity — playing video games. Because the machine will play video games better than us also. It’s not clear that our great purpose in life is to play video games.

Liron 0:20:05
I agree. It’s weird that even the non-doom scenario is hard to specify. There’s a recent book by Nick Bostrom where he basically says, “Okay, imagine we can do anything. The AIs can do everything for us. So how do we even build our heaven, because everything is too easy?” You accidentally make everything easy, and now it’s too easy.

I agree, you could argue that’s a good problem to have, but I do think that it’s somewhat of a problem. But I would love to get a ballpark range here on this thing you think is unlikely — this thing of everybody getting truly doomed, permanently extincted by 2050. Can I get a number for that?

Moshe 0:20:42
I wouldn’t put a number, but to me, it’s a very, very unlikely scenario.

Liron 0:20:47
Less than one percent?

Moshe 0:20:48
Less than one percent.

Liron 0:20:50
Okay.

Moshe 0:20:50
Again, these are all numbers that we get just out of thin air, so to speak. But it assumes — humanity, there is a standard phrase now: “I’m worried not about artificial intelligence, I’m worried about natural stupidity.” So we have not proved to be very wise as a species, but we have to be incredible idiots to let technology run amok to the point that it will just decide, “Let’s do away with the pesky humans.” So that to me is — I can’t say it cannot happen, but not very likely. Very unlikely.

On the other hand, the scenario where we are screwed, to me, is a very, very likely scenario. And what worries me about AI is exactly that we don’t like to make an effort, we don’t like to think hard, and so people now talk about cognitive deskilling.

So for example, I do not use AI to do writing, because when I write, I think. And when I write a one-page op-ed, I usually have a general idea what I want to say, but I just start writing, and my thoughts clarify themselves to me. I discover what I’m thinking as I’m writing. And if I start using generative AI for writing, then I stop thinking, and I’m not going to do that. So I’m part of the resistance.

Liron 0:22:27
All right.

Moshe 0:22:27
Remember the film that talks about the resistance? I’m part of the resistance.

Liron 0:22:30
So you think a lot about resisting that outcome — resisting humans becoming soft. Kind of like the movie WALL-E.

Moshe 0:22:39
Yeah, exactly. That’s what the movie was about. But the point is that it’s not clear to me — I’m worried, especially about young people who have not yet learned to think hard, and they may never learn to think hard because there’s always a crutch to use.

And the newest kid on the block is having virtual relationships. Human relationships are hard. Jean-Paul Sartre, the French philosopher, was a bit of a misanthrope, and he said, “Hell is other people.” He did not like people in general. But what we learned from COVID is also that heaven is other people. The social isolation was very harsh. But having a relationship is hard. It’s hard to get along with your fellow human beings.

Now suddenly you have a virtual friend, and everything is easy. There are no fights. There are no, “You promised me you’re going to do it, but you haven’t done it yet.” Everything is easy. But if everything is easy, it’s not a real relationship. It’s just what I call an ersatz relationship.

Liron 0:23:59
Yeah. Nick Bostrom uses the analogy of the exoskeleton, where we have all these problems in our life, and normally we wish the problems would go away, but if all the problems go away, it’s like the exoskeleton of our life goes away, and then we just turn into mush. We don’t even know where to begin with our day. I think there’s a lot of truth to that. I feel like the optimal heaven — there’s a reason you want to be in the resistance, because the ultimate world we want to create probably is still a world where we have interesting challenges.

Moshe 0:24:26
You look at your life and you ask someone, “What are you proud of?” And it’s usually the hard things that you’re proud of. It’s not the easy things. You’re proud of, “This was hard, but I did it, and I did maybe not perfect, but I did okay.” That’s what you’re proud of. It’s where you did something hard. Nobody says, “Oh, it was trivial and I did it. I’m very proud of it.”

We’re proud of overcoming difficulties and challenges. So what happens to life if there are no challenges? I think life without challenges is meaningless. And it’s a paradox. We don’t like these challenges when they happen. When things are hard, we don’t like it when it’s hard. But that’s what makes — that’s part of the condition of being human, overcoming challenges.

Liron 0:25:16
So the difference between me and you: we both agree that there are challenges to this supposedly good scenario where AI makes life too easy. We both agree there’s an interesting struggle to build a society that still has enough structure in the form of challenges and problems. I agree that’s a very rich area, and I’m glad some people are focusing on it. I think the only difference is that you have this definition that you sometimes call doom, or being screwed, which is that the AIs are gonna kill us with kindness. And I still think the AIs are more likely to kill us by melting our cells.

Moshe 0:25:53
Actually, recently there was the social psychologist Jonathan Haidt from NYU, and he’s been the prophet of doom of social media. He’s been arguing that social media is very harmful for young people. And he wrote an article recently where somebody asked, I think ChatGPT, “What would you do to destroy humanity? How would you destroy humanity?” And I think the answer was, “Well, I’ll do it in a way that they will not even know it’s happening to them.” It was very much, “I’ll kill them with kindness.”

I mean, look, why is ChatGPT such a sycophant? It’s not kindness. Kindness is a human emotion. It’s been built like that. It could be built differently. The companies that built it can make an argumentative one, and maybe there will be an industry to have an argumentative girlfriend or boyfriend instead of a sycophantic virtual friend.

Right now, they’re not very realistic. They are very non-human friends, and you’re not dealing with the human challenges of relationships. I think if life is too easy, it’s meaningless. In fact, people seek challenges. People say, “I want to climb Kilimanjaro.” They know it’s going to be hard. They don’t want a robot to carry them up Kilimanjaro. That’s kind of meaningless. They want to say, “I was able to climb Kilimanjaro.”

So we seek challenges. Having challenges in just the right amount — sometimes we fail. It’s good to have some failures in life. If we were successful in everything, then success doesn’t mean much. Everybody has some failures in life, and we learn from success and from failures. That’s kind of the richness of the human story.

Let’s Ride the Doom Train™

Liron 0:27:47
Okay. Everything you said now makes a lot of sense, but the audience of this show — it’s called Doom Debates — they like a little debate action. They like to find the part where you and I disagree. So let’s get back to the less than one percent P(Doom) of extinction by 2050, because we’re both smart people here. That’s what our parents say. And I have a fifty percent P(Doom) by 2050, and you have less than one percent. So this is—

Moshe 0:28:14
But not zero. But not zero.

Liron 0:28:15
Yeah, not zero.

Moshe 0:28:17
Let me tell you my scenario. I’ll tell you my scenario. My scenario is not that AI will decide, “Let’s get rid of the pesky humans.” But think, what was the movie? I think it was WarGames, a movie from a long time ago, if I remember it vaguely. It described how they worried about asking people to push the button: who knows, will the people push it? If you have to push the nuclear button, will you do it? In fact, you needed two people to do it, and one gets hesitant. So they decide they need to automate it. And then the AI, some very early version of AI, decides to launch a nuclear war.

And so some people are now talking about a Hindenburg moment with AI. There was a period where we had gas-lifted airships, and there was the Hindenburg moment, a catastrophic accident. The kind of thing that has me on my toes now is that we are becoming more and more reliant on AI in different applications. People talk about vibe coding. What happens when AI builds software for nuclear reactors, controlling a nuclear reactor, and there is a catastrophe because the software was not reliable?

As we are becoming more and more reliant on the technology, I think there will be a moment of catastrophe. The question is: what’s the scale of the catastrophe?

Liron 0:30:00
Yeah.

Moshe 0:30:01
Is it planetary scale? There’s a difference between one plane crashing and all the planes in the air over the United States suddenly crashing at the same time because of some AI malfunction.

Liron 0:30:21
Yeah, I feel good about never having all the planes in the US accidentally crash. I feel like that’s the kind of scenario that engineering, especially of the caliber of airplane flight — we’re not gonna screw up in that particular way. And even if we did, it would still have survivors. And I’m concerned about a different doom scenario.

So I think at this point, I want you to take a ride on what I call the doom train.

Moshe 0:30:47
Okay.

Liron 0:30:47
I’ll run arguments by you of why I’m convinced that we’re doomed.

Moshe 0:30:52
Let’s get on the doom train.

Liron 0:30:54
Yeah, we’re gonna get on the doom train, and if you’re like me, you just ride all the way to the end. You don’t get off on any of the stops.

Moshe 0:30:59
Let me suggest, you may want to write — you can probably use AI to write a nice poem, “One Way Ticket to the Doom.”

Liron 0:31:06
Exactly.

Moshe 0:31:07
You know the song, “One Way Ticket to the Blues”? We can compose something that will be “One Way Ticket to the Doom.” Actually write a song and have it sung.

Doom Debates House Band 0:31:17
♪ One way ticket, one way ticket,

One way ticket to the doom! Woo-hoo! ♪

Liron 0:31:33
All right, great. So yeah, the doom train. The stops on the doom train represent arguments people make why they’re optimistic, and they get off on various stops. There’s so many stops. There’s so many arguments why people say they’re not doomed. I’ve managed to hold on and never get off the train, so I’m all the way at the end at Doom Town just thinking that we’re doomed. And my background for this show represents the doom train right here. You can see the train is going over here to Doom Town where the fire is.

So one of the first stops on the doom train is the idea that AI is about to get more powerful than humanity. You seem to think that’s the case when it comes to being able to do all the jobs. Do you accept the premise in general that everything a human brain can do, it seems like an AI will be able to do soon?

Moshe 0:32:20
Soon, I don’t know. I’m a materialist, and to me, the brain is a machine. It’s an amazing machine crafted by evolution, but it’s a physical machine nevertheless. It’s hard for me to accept the argument that the brain is more powerful than any machine we can build. That I don’t understand.

To me, we’ve been able to build machines that are stronger than us, faster than us. I don’t see any argument — in this sense, if you go back to Turing in 1950 — I don’t see any argument why a machine cannot be intelligent, even more intelligent than us. We accept that there are some people who are more intelligent. So if we’re going to build an intelligent machine, why are we going to build it with IQ 100? Why can’t we build it with IQ 200, 300? Who knows.

Liron 0:33:05
Exactly. And there’s sites that aggregate predictions of when this moment will happen, if ever, and there’s a pretty big consensus, as far as I can tell, which happens to also coincide with my own best guess, that sometime around 2030 is when this moment will happen of AI just surpassing the human brain in every economically relevant way. Does that sound like a reasonable estimate to you?

Moshe 0:33:26
No. 2030 is very close. There are various arguments. One is because right now, essentially, the industry was on one track of LLMs and just scale it up. But there are people who argue that this — for example, Yann LeCun says, “These are all language models. We need world models.” Judea Pearl says that a huge part of intelligence is understanding causality, because we understand how one thing causes another thing. That’s a big part of how we understand the world. And he says we have not been able to make such progress in causality.

So even on the scaling, people debate between one camp saying scaling is all you need, and then people saying, “No, scaling has already started to slow down. We’re not getting there.” 2030 is very, very soon in some sense. So I am very skeptical about 2030.

But before that, you gave me another date in terms of employment — 2050, and that’s about twenty-five years from today. That feels a bit more likely. But again, all of this is very speculative, because we don’t really know what it would take to get to human-level intelligence, or what it would take to get beyond that.

But let’s say 2030 seems to me very soon. I’m willing to bet money that we won’t get there by 2030, but I will not bet against 2050.

Liron 0:35:12
Wow. Okay. That’s already pretty bold, and that’s load-bearing. We could even extend out to 2070. Let’s give me a time extension here. I’ll just argue that we’re doomed by 2070 because we both agree that there’s a pretty strong chance that by 2050 you’ll just have AI being able to outclass the human brain at everything, correct?

Moshe 0:35:32
It’s not a crazy assumption. 2050, or definitely 2070, it’s not a crazy assumption. 2030 seems to be very aggressive. 2050, medium. 2070, conservative.

The Future of Jobs

Liron 0:35:43
You think a lot about the unemployment consequences of AI, as we’ve already discussed. So I did want to pause on this part of the doom train and just ask you — do you think that any particular job is likely to be robust ten years from now? If somebody’s in a certain career track, do you think they’ll have an advantage over other people? Because I personally am having a hard time knowing any career track that’s not doomed.

Moshe 0:36:09
I’ll tell you about an interesting kind of virtual debate that happened about a decade ago. It was between me and Joseph Aoun, who was then the president of Northeastern and who wrote a book around 2017 or 2018 called “Robot-Proof”: how should we change education to prepare people for a robot-filled world?

And part of what he says is one thing unique to human beings is empathy. There are many, many jobs that require empathy. This is a unique human talent. This is robot-proof. On the other hand, in 2016, I was in a big meeting, and we talked about the future of work, and a reporter from The Guardian asked the following question: people say that if AI and robotics can automate all jobs, the jobs that will remain uniquely human will be ones that require emotional work. And I looked at him, and I said, “Would you bet against sex robots?”

And it was a soundbite, and The Guardian ran with it. They had a title: “Would You Bet Against Sex Robots?” And now, about a decade later, who is right? Joseph Aoun, who said that empathy is uniquely human, or me? I was right. Now we have all these virtual companions. They can at least fake empathy. I don’t think it’s real empathy in some sense — when I feel empathetic, it’s a feeling inside me, not just a behavior, it’s a feeling. But they can fake empathy as well as anyone else.

And in fact, I think OpenAI announced that they’re going to allow what they call adult content. So I think sex robots, by 2050, we’ll have sex robots. I’m convinced. Because human beings are desperate for certain things — companionship, sex — so somebody will take advantage of that. That’s going to happen.

It is hard to think of anything that in the long term — if you go back to Turing’s argument, it’s hard to find anything humans can do that ultimately a machine cannot do. Yes, we are a machine that nature has worked for hundreds of millions of years to perfect. But we have been doing pretty well in learning from nature. AI today is partly — people forget where it started. The first paper that talked about neural nets was about understanding how the brain works.

Liron 0:39:01
Yeah. I often use the analogy of birds. A bird’s wing is this perfect creation. It’s so light and so nimble, but at the end of the day, we figured out the basic principles of flight, and a jet engine can do things that birds couldn’t dream of.

Moshe 0:39:16
Yep. So I cannot think of anything that humans can do that in principle machines will not be able to do.

A Country of Geniuses in a Data Center

Liron 0:39:24
Okay. Well, I’m with you there. So the basic sketch of why I think we’re doomed is, I think we’re going to at least have what Dario Amodei from Anthropic is going around saying — “a country of geniuses in a data center.” I think that’s probably going to be a good description of what we have very, very soon.

I can tell you personally, I’ve been using Claude Code pretty frequently at my day job as of the last couple weeks, and it’s really better than a human software engineer. There are little corners where it’s not yet, but the bulk of what it’s doing, it’s clearly — I’m basically now out of a career, or I will be in a couple months. I’ve been a software engineer for a couple decades now, and I think I’m done as a software engineer. I might be a product manager. I might be managing an AI software engineer, but my ability to engineer software is pretty much moot at this point.

So this idea that there’s going to be software engineering geniuses, writer geniuses, designer geniuses, movie-maker geniuses, all living in a data center, and there’s millions of them in many data centers — a country of geniuses in a data center, to use Dario’s language — to me, that seems extremely likely in less than a decade, if not sooner.

And that’s kind of the beginning. I see the intelligence actually growing from there. I think what we’re going to have is gonna be so intelligent that even a human genius doesn’t really begin to describe it. But just to finish my argument — you have the country of geniuses in the data center. If they get it in their heads, if somebody issues a command to the data center, if they have root-level privileges to tell all of those geniuses what to do, how does the entire coalition of humans compete with millions of geniuses in a data center? Who wins that fight?

Corporations as Superintelligence

Moshe 0:41:04
Well, first of all, history has shown us that, at least in the modern period, too much inequality is destructive to a society in the end. I hear arguments now like, “The Industrial Revolution happened, and look, we are fine.” People forget that the Industrial Revolution took two communist revolutions, the Soviet and the Chinese. That’s a story of about one hundred million people.

Anybody who is blasé about the havoc that industrial revolutions can wreak is ignoring history. And I’ll say, even now we are in some sense in a period of — if you look at what’s happening now, it is some kind of revolution. It’s not so far a violent revolution; we don’t have the guillotines yet. But the big question is, are we going to go back to some kind of semi-normal in 2028 or not? I’m not convinced this is just an aberration. It might be an expression of the fact that we have built a very unequal society, with lots of people holding a lot of resentment.

I don’t know who’s going to be in control. People may decide to burn the data centers — there’s a lot of resentment right now. There was an article in the New York Times just on Sunday about the growing resentment against AI, and the tech companies are worried about it. I think Time Magazine had a cover, “The People Versus AI.” So we are heading into something different — when the internet happened, I don’t remember any resentment. The techlash started to get serious around maybe 2018, and then it kind of subsided because we were too busy with COVID. Who knows what’s going to happen.

Liron 0:43:16
Okay, so the whole doom train — it’s a long track. I’ve identified eighty stops, and I’m sure I could think of eighty more; people are very creative and take many positions. But you can factor the entire track into two halves. The first half is the question: can AIs disempower humanity — can the AIs take over? And the second half is: okay, assume they can, but will they choose to do it? Will they be motivated in any way to do it? So there’s “can they” and there’s “will they.”

And the sense I’m getting from you so far is you seem very open-minded that they can. I definitely think they can. Do you think they can?

Moshe 0:43:52
Again, I remember when Nick Bostrom — you know about the paperclip experiment.

Liron 0:43:59
Yeah.

Moshe 0:44:00
I was saying to myself then — and I can tell you now — “Nick, open your eyes. It’s already happening.” We already have superintelligent beings. What are these superintelligent beings? They are large corporations. Large corporations are superintelligent. They can marshal enormous resources. They bring to bear the intelligence of many, many people to do amazing things that no individual person can do. So these tech corporations are more intelligent than human beings.

Now, what are they trying to do? They’re not trying to maximize paperclip production. They’re trying to maximize profit production. So the scenario Nick Bostrom described — a superintelligent AI that maximizes paperclip production — is already happening. We have corporations; they’re superintelligent. We have not found a way to regulate them and restrain them. And they’re sitting very close to power. People talk about the oligarchy, and they have enormous power.

That’s what worries me, not AI. I’m worried about the power of a small number of people with enormous amounts of wealth and desire just to expand that wealth using technology that might be risky. And look at all this energy being used now — to what end? And the answer at the end of the day is to make more profits. That’s what worries me. Not that AI will doom us. I’m worried about the tech corporations dooming us.

Liron 0:45:42
That’s fine to be worried about, but I’m asking a very specific question about something else. I was hoping to just focus on that.

Agency, Consciousness, and the Limits of AI

Moshe 0:45:49
So here’s the thing: so far, we do not seem to have AI with agency. If you look at human beings — in AI this was called BDI: beliefs, desires, and intentions. We act as agents. We have some beliefs, we have some desires, and we have some intentions, and then we act upon them.

Now, so far, we have not tried to do that, and I’m hoping that — even if you look at the topic of AI agents, they don’t have agency in the sense that we have agency. They have agency in the sense that I’m telling them, “You are an agent for booking flights, so go ahead and book the best flights possible.” I don’t see AI saying, “I want to get rich, and I’m willing to take your money in order to get rich.” I see people willing to do that. But I don’t see AI so far having these desires. Remember, Buddhism tells us the source of human suffering is that we have wants and desires.

Liron 0:47:05
So on this question of can a sufficiently large data center of geniuses take over and go head to head versus all the human militaries of the world — you know, it’s the “can” question. And in response to the “can” question, you’re pointing out that maybe the answer is no, because maybe agency is the bottleneck. That’s what you’re saying?

Moshe 0:47:27
If you look at human history, we have two types of miseries. Sometimes nature causes miseries for us. But most of the time, miseries are caused by other human beings who have desires and intentions and act on them. So far, we have not seen AI technology that comes with autonomy — and not just autonomy, but this uniquely human thing of wants and desires.

Liron 0:48:06
Is there a particular test, an objective test — achieve this particular task or score on something — where we can test them and be like, “Aha, you’re always getting a low score on this benchmark because of this Achilles heel that you don’t have agency”?

Moshe 0:48:22
This partly has to do with — you can find on YouTube, I gave a talk recently about “Are AI Minds Genuine Minds?” And it’s actually very hard to find genuine minds. Take even something very simple, which is consciousness. There was some debate — remember, there was an engineer at Google who claimed that the AI model they had at the time was sentient.

Liron 0:48:51
Yeah, I remember. Blake Lemoine, I think.

Moshe 0:48:53
Yes, Blake Lemoine. But philosophers have been asking for quite some time: is there a test for consciousness? If, for example, I think you’re conscious — why do I think that you’re conscious? Well, mostly because I think you’re human, and so you’re almost like me. We may have different opinions, we have different tastes, different desires, but we are more alike than not alike. And if I’m conscious, then you’re conscious.

But there was in fact the argument: is there a behavioral test for consciousness? There is a philosophy paper, Todd Moody’s “Conversations with Zombies.” Supposedly zombies are not conscious, but if you meet a zombie who looks human, how would you converse with them and decide that they are conscious?

Liron 0:49:54
Maybe there is no such test, but you’re bringing it up in the context of explaining why you think AIs maybe can’t kill everybody. So if they don’t have agency, it has to show up as some kind of limitation on their performance somehow.

The Mad Scientist Scenario

Moshe 0:50:07
Nick Bostrom, if you look at his scenario of why super intelligent AI will destroy humanity, the purpose was making paperclips. But why does the super intelligent creature in Bostrom’s experiment want to make paperclips? It was built for that end.

So now we have to think: someone is going to build this technology. Imagine there is a mad inventor. Maybe this is a more plausible scenario — not that the AI will just wake up one morning and say, “Humanity is just a plague on Earth. Let’s go destroy them,” but that there is a mad scientist. This is called the mad scientist scenario, the same thing we worry about with nuclear weapons: what happens if a mad scientist decides one day to start a nuclear war, to push the button?

And in that context, people have worried about it, and what they have done is build systems to ensure that a single person does not have that power — for example, to launch nuclear ICBMs, two people have to push the button, making it less likely that one person acts alone.

Now, what happens if two people get crazy at the same time, or a group of people get crazy? We have this worry. We have the capacity today to destroy humanity. It’s very clear that if there’s a massive nuclear attack between the United States and Russia, that might be the end of humanity.

Liron 0:51:52
I think there’d probably be some survivors.

Moshe 0:51:54
Maybe some survivors.

Liron 0:51:56
But I agree it’s very bad, and I think it’s underrated.

Moshe 0:52:00
Remember the group that calls itself the Bulletin of the Atomic Scientists.

Liron 0:52:07
It’s funny you’re bringing that up because my guest last week was actually Professor Daniel Holtz, who’s the chairman of that group, and we talked all about the Doomsday Clock.

Moshe 0:52:15
The Doomsday Clock. They thought the most realistic scenario for doomsday would be a nuclear holocaust.

Liron 0:52:24
Totally, yeah. And I agree that was the most likely scenario up until now — now we’re in AI.

Moshe 0:52:29
Has he changed it? Has he moved it from nuclear holocaust to AI holocaust?

Liron 0:52:36
He’s a little bit vague about which one is bigger, but he agreed with me that the nuclear extinction risk and the AI extinction risk are both really serious, and that’s one of the reasons the clock is now closer to midnight than ever.

Moshe 0:52:50
So the thing that I’m now — I’m actually kind of revising my beliefs as we speak from this conversation, because I always, when I thought about P(Doom), it was AI deciding to destroy humanity.

Liron 0:53:03
Right.

Moshe 0:53:03
But if we learn from the Bulletin of Atomic Scientists, their worry was just things getting out of control. That was their worry. In fact, we know that there have been scenarios where we were very, very close to nuclear war.

So now let’s change the doom scenario for AI. It’s not caused by AI; AI is the means. Imagine growing tension between China and the United States, with the nuclear forces now controlled by AI. Go back to the WarGames scenario. Humans can do something to trigger the AI.

I’m not so worried about the AI deciding to get rid of humans. I’m worried about humans. Anything on the planet that is proving to be a very unpredictable factor — it’s been humans.

Could a Data Center of Geniuses Destroy Humanity?

Liron 0:54:02
So you’re more worried about humans than AIs, but you’ve also accepted the premise that—

Moshe 0:54:06
AI is the means. AI is the means. Look, a crazy person with an axe can be an axe murderer. We’ve even had mass shooters — remember the guy in Las Vegas some years ago? He started shooting from a hotel tower and killed dozens of people.

Liron 0:54:26
Okay. So just, we’re heading toward the wrap-up soon. I was asking, on behalf of the doom train, can the AIs kill everybody? And it seems you think that, well, they can if used as a tool.

So I’m willing to entertain that scenario. How about the scenario where one bad guy who happens to be a good hacker hacks into this huge data center of geniuses — a network of data centers equivalent to a hundred million geniuses. But it turns out that this person is a good enough hacker that he’s able to run a command on this super cluster, and the command just says, “Kill all humans.”

So let’s make that the scenario. Let’s mix in the human agency. There’s a spark of human agency to type those letters, “Kill all humans,” and now the ball is in the AI’s court. In that situation, can the AI kill all humans?

Moshe 0:55:16
I’m still not a doomer even for that scenario, because again, I look at what happened with nuclear weapons, which was a technology people thought could end humanity, and we have found ways to deal with it so far. We are always a few minutes to midnight, but we’re still here to discuss it.

What have we done? We negotiated treaties. We designed human systems. We did a whole bunch of things to reduce the risk. We did not eliminate the risk. We’re always a few minutes to midnight, but we have found ways to control the risk, and so far we have not made that existential step.

Now we’re talking about the risk of a human-mediated AI disaster. At some point this might become possible, and then we will need to use what we know about humans — how to control humans. That’s what I’m worried about right now: humans. That’s my big worry.

Why am I worried about the tech corporations and the technology? Because the tech corporations are run by people. I’m more worried about — again, the issue with nuclear weapons was not the weapon per se. It was how humans will use them.

Liron 0:56:36
I just want to clarify. Are you then saying that if the data center full of genius AIs had the command, root-level command to kill all humans, you think their chance of success is very low?

Moshe 0:56:48
No. I think that we will not necessarily build such a data center. We will start to build protections. For example, why has no hacker been able to hack into the system that launches ICBMs and launch a missile? It has not happened yet.

Liron 0:57:16
We don’t know that it hasn’t. But to give you my two cents on this, it’s very simple: hacking is pretty hard. The number of humans who could actually pull off hacks on high-stakes targets is a very small fraction of the population, and those humans are thinking, “What do I want? To make money, not go to jail, not get killed.” So their incentive structure tends to be not to hack ICBMs, but to go make millions of dollars instead.

Moshe 0:57:43
So why would someone try to hack into a data center and launch a command to destroy humanity? It’s kind of the same thing.

Liron 0:57:49
Well, yeah.

Moshe 0:57:52
There are easy ways to make money and get laid.

Liron 0:57:54
I agree that the percentage of humans who are going to hack into a data center and type “kill everybody” is pretty low. I was factoring the argument. The reason I asked you that hypothetical is just to focus on the question of “can they” — can the data center of geniuses kill humanity if it wanted to.

Moshe 0:58:08
But you’re asking — if we’re talking about “can,” we can do scenarios. But we could all be dead by next year because a large meteor or comet is going to hit us.

Liron 0:58:19
Right. Yeah, but this scenario is just “command gets into the shell.”

Moshe 0:58:21
In terms of probabilities—

Liron 0:58:23
I mean, what if somebody accidentally types the command in the shell at one point?

Moshe 0:58:27
Then we would be stupid to build a system where you can easily type such a command and have it carried out. Again, we have been pretty good—

Liron 0:58:35
You’re a hundred forty characters away from accidentally kicking off doom, is my issue.

Moshe 0:58:39
I don’t think so. We are not going to design a system where a random janitor sits at a keyboard and types a command to destroy humanity. Again, with very risky technology, we have been able to build systems that so far have protected us.

Liron 0:59:05
Can I factor your claim, though? Because you’re saying, “Hey, the data center of geniuses that accepts arbitrary commands is not going to exist. We’re gonna be smarter about filtering what commands it’ll accept. It’ll judge the commands.” You can claim that, but I just wanted to factor the doom train into the first stop, which is: hey, if the data center of geniuses existed, and if it was configured to just not judge people’s commands — to just be like a genie, “your wish is my command” — then do you think its probability of wiping out humanity would be high?

Moshe 0:59:36
But that’s now a very conditional probability. You have to assume a whole chain of events.

Liron 0:59:40
Yeah, a couple of conditions. You could give me an answer like, “Then I think the answer is yes,” so that all of your non-doom argument rests on the next step — “we won’t build it.” I want to see where you stand. I’m factoring your worldview here.

Moshe 0:59:55
No, the question is — look, will humanity still be here in 2050? Let’s put it this way: I think there are bigger risks between here and 2050. I’m worried about runaway climate change, for example, where we seem to be doing nothing. We already have more and more hurricanes per year. What happens if we get mega-hurricanes, bigger and bigger? People talk about the possibility of a tipping point in the climate. So there are lots of risks.

What I lose sleep over is being screwed by AI. That really makes me lose sleep. What will happen is mass unemployment.

Liron 1:00:43
Yeah, the kill-with-kindness scenario.

Moshe 1:00:45
Kind of kill with kindness. “We’ll do everything for you” — but our socioeconomic system requires people to work. People talk about this being a utopia, but so far, over the past fifty years we have had economic growth, and it’s not a utopia here. I’m worried about our wisdom in navigating these challenges.

But AI run by a mad scientist — yeah, there are such scenarios. I cannot say the probability is zero. But here’s the thing about small risks.

Let’s suppose you ask me, “You live in Houston. Would you come and give a talk tomorrow in Austin?” Austin is about two hundred miles from here, about a three-hour drive. I don’t like long drives, but suppose you offer me a ten-thousand-dollar honorarium. I’ll probably take it. What is the probability that if I drive from here to Austin, I will die?

Liron 1:01:55
Yeah, I mean, we can estimate that. It’s not that hard.

Moshe 1:01:57
I think it’s something like one in a million.

Liron 1:02:01
Maybe a little bit higher than that, but yeah, roughly.

Moshe 1:02:04
Okay. But even at just one in a million, it’s not going to cause me to not go to Austin. I’ll say, “I’m going to stop on the way and have coffee in the middle, so I will not drive three hours straight.” But I will drive there. People do it all the time.

So if the risk of something happening is one in a million, we just ignore it. That’s the reality. And I don’t think the risk of extinction is in the one-percent range. One percent is a significant risk. If you told me the risk of dying on the drive to Austin is one percent, that’s a high risk. I would not drive to Austin if it were one percent.

Liron 1:02:40
Okay. So yeah, I’m starting to get the lay of the land of your views. Your conclusion is that P(Doom) is very, very low. I don’t know if you’ve sat down and factored which parts you’re optimistic about. It sounds like you have kind of a general sense of optimism that something along the way is going to work out.

Moshe 1:02:58
I’m a pessimist. I think the probability of being screwed is much, much, much higher.

Liron 1:03:04
Yeah.

Moshe 1:03:05
My worry is the probability — if you ask me about the probability of being screwed, I would say fifty percent. Very high.

The WALL-E Meme and Fun Theory

Liron 1:03:13
Yeah. I mean, this is — have you ever seen that meme from Star Wars with the four panels where Anakin is saying, “AI is gonna kill us,” in this version of the meme, and then Padmé is saying, “With kindness, right? Kill us with kindness, right?”

Moshe 1:03:27
No, I’ve not seen it.

Liron 1:03:27
It’s a meme template, yeah. That’s basically what our conversation is. I agree that if we survive my biggest concern — that is, if the AI doesn’t kill us — then we still face the doom from Nick Bostrom’s book that I think you’re also mindful of: we can have everything we want, but then we all just become the lazy people from WALL-E. And I agree, we should try to avoid that. There are interesting challenges there.

There’s this whole concept of fun theory, which is: okay, you can do anything you want. How do you make it fun? How do you optimize the fun of it?

Moshe 1:03:59
Yeah.

Why Professor Vardi Signed the AI Extinction Risk Statement

Liron 1:04:01
One more quick question I want to ask you, and I know we’re coming up on time. You did sign the famous 2023 statement from the Center for AI Safety, the statement that says, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” So do you still see AI as a prominent existential risk?

Moshe 1:04:22
So many people, because I signed that statement, asked me about it. And I said, when I sign such a statement, it doesn’t necessarily mean that I agree with everything it says.

The reason I signed it is that I do believe that in the system we have right now, the drive for profit alone is driving this development — in the United States in particular, with no regulation and no liability. One of the issues with this technology is that the companies have no liability, because we have all signed click-through contracts that waive it.

This situation where society does not really control what’s happening with technology — what is the direction? How do we ensure that it contributes to human welfare? I very much object to this idea of, “Let the tech companies decide what to do, and that’s it, and in fact, we will support it and subsidize them.” I object to that, and therefore, I’m willing to sign statements even if I don’t necessarily agree with every line in the statement.

The main issue to me was not necessarily the extinction, but the thought that we need to exercise some societal control over this technology. And to that end, I signed the statement.

Liron 1:05:42
Nice. Yeah. Maybe we can think of it in your case as, “Mitigating the risk of getting screwed should be a high priority.”

Moshe 1:05:48
For example.

Liron 1:05:49
Sure.

Moshe 1:05:49
Yes. I’m worried about this technology, and I think as a society, we should have a say in where it is going. And right now we don’t. We don’t have a say where it is going.

Wrap-Up + 1 Way Ticket to Doom

Liron 1:06:02
Okay. We can wrap on that. Professor Moshe Vardi, thanks for coming on and being a good sport and debating me here in the arena.

Moshe 1:06:09
It was fun.

Liron 1:06:12
Nice.

Moshe 1:06:12
We should do it again. Let’s do it again in 2050.

Liron 1:06:17
Okay. I’ll see you then. Mark my calendar.

Moshe 1:06:28
We can compose something — “One Way Ticket to the Doom.” You can play it at the end of the episode.

♪ One way ticket to the doom. ♪

Doom Debates House Band 1:06:37
♪ Churning through the code. Data flowing overload. Ooh, yeah. Every cycle, every zoom. Bringing closer what we’ve ruined.

One way ticket. One way ticket.

One way ticket to the doom.

One way ticket. One way ticket.

One way ticket to the doom. Woo, woo. ♪


Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏
