
Facing AI Doom, Lessons from Daniel Ellsberg (Pentagon Papers) — Michael Ellsberg

Michael Ellsberg, son of the legendary Pentagon Papers leaker Daniel Ellsberg, joins me to discuss the chilling parallels between his father’s nuclear war warnings and today’s race to AGI.

We discuss Michael’s 99% probability of doom, his personal experience being “obsoleted” by AI, and the urgent moral duty for insiders to blow the whistle on AI’s outsize risks.

Timestamps

0:00 — Intro

1:29 — Introducing Michael Ellsberg, His Father Daniel Ellsberg, and The Pentagon Papers

5:49 — Vietnam War Parallels to AI: Lies and Escalation

25:23 — The Doomsday Machine & Nuclear Insanity

48:49 — Mutually Assured Destruction vs. Superintelligence Risk

55:10 — Evolutionary Dynamics: Replicators and the End of the “Dream Time”

1:10:17 — What’s Your P(doom)?™

1:14:49 — Debating P(Doom) Disagreements

1:26:18 — AI Unemployment Doom

1:39:14 — Doom Psychology: How to Cope with Existential Risk

1:50:56 — The “Joyless Singularity”: Aligned AI Might Still Freeze Humanity

2:09:00 — A Call to Action for AI Insiders

Show Notes:

Michael Ellsberg’s website — https://www.ellsberg.com/
Michael’s Twitter — https://x.com/MichaelEllsberg
Daniel Ellsberg’s website — https://www.ellsberg.net/

The upcoming book, “Truth and Consequence” — https://geni.us/truthandconsequence

Michael’s AI-related substack “Mammalian Wetware” —

Daniel’s debate with Bill Kristol in the run-up to the Iraq war —

Transcript

Intro

Liron Shapira [00:00:00] My guest is Michael Ellsberg, son of the late Daniel Ellsberg. If you don’t know who Daniel Ellsberg is, he’s best known for leaking the Pentagon Papers, which exposed that the US government was lying about why they got into the Vietnam War.

Michael Ellsberg [00:00:17] That set in motion a chain of events. Nixon’s response played into Watergate.

Richard Nixon [00:00:19] We’ve got to keep our eye on the main ball, Ellsberg. We gotta get this son of a b*tch.

Liron [00:00:24] He also wrote a book called The Doomsday Machine, which was about his experience working for the Kennedy administration writing the nuclear war plans, trying to navigate these crazy trade-offs: How not to kill 600 million people in a nuclear war.

I see this as analogous to AI leaders trying to plan how they’ll navigate the trade-off to superintelligence. Michael was actually an editor of Daniel’s most famous books. Michael and I both agree with this moral code that Daniel Ellsberg has spelled out. Michael, what is the moral code?

Michael [00:00:56] If you’re in one of these top AI labs, if there are internal estimates showing that the danger is much greater than your leaders are saying, you owe it to the public to let us know. This is a matter of public concern. Think about your kids. You’re building something that can end their lives.

What has a lot of credibility is insiders giving up millions of dollars potentially to share this message: Insiders who are building the stuff saying this freaks me the f*ck out!

Introducing Michael Ellsberg, His Father Daniel Ellsberg, and The Pentagon Papers

Liron Shapira [00:01:29] Welcome to Doom Debates. My guest is Michael Ellsberg, son of the late Daniel Ellsberg. If you don’t know who Daniel Ellsberg is, he’s best known for leaking the Pentagon Papers to the New York Times in 1971, which exposed how the Vietnam War sausage was made and proved that the US government was lying about why they got into Vietnam. Relevant to Doom Debates, it proved that insiders in the US government viewed the prospects of the Vietnam War as being very bad, but then were turning around and telling the public that it was fine.

For the millennials and Gen Zs who are watching the show, Daniel Ellsberg was the OG Snowden, and Snowden has publicly said that he was inspired by a documentary about Daniel Ellsberg called The Most Dangerous Man in America. So his influence, I think, is a lot bigger than his name these days. But if you’ve watched that 2017 movie The Post—Steven Spielberg, Tom Hanks, Meryl Streep, 88% Fresh, I recommend it—that is actually based on the true story of Daniel Ellsberg and the Pentagon Papers.

That’s not even the whole story with Michael Ellsberg’s father. I’m actually a Daniel Ellsberg fan, that’s why I’m going on and on. He also wrote a book called The Doomsday Machine, which was about his experience as a high-level consultant for the Kennedy administration. He was writing the nuclear war plans. He was very much in the room where it happened, and he was trying to navigate these crazy trade-offs. He was trying to figure out how not to kill 600 million people in a nuclear war, trying to get that number down as low as he could, maybe only 100 million. I think that’s what it turned out to be in his refined plans.

I see this as analogous to AI leaders trying to plan how they’ll navigate the trade-off to superintelligence. Similar kinds of crazy trade-offs are now going on in these rooms. So Daniel Ellsberg died in 2023 of pancreatic cancer at age 92, but we are still going to be able to talk about him because Michael has absorbed so many of his lessons. He was actually an editor and helped with the writing process of Daniel’s most famous books. He’s currently co-editing a new book of his father’s unpublished writing called Truth and Consequences.

Michael has 25 years of experience as a direct response copywriter. He was writing marketing copy for websites and emails to get customers engaged and drive sales. We’re going to talk about that because it was one of the first casualties of AI automation. Like me, Michael is a thoughtful AI doomer who’s written eloquently about why the future doesn’t look good if we build Artificial Super Intelligence. So you can see so many different topics. I’m really excited about this conversation. There’s a lot of threads that I want to weave together. I think you guys in the audience are gonna have a great time. Michael Ellsberg, welcome to Doom Debates.

Michael Ellsberg [00:04:07] Thank you so much. I’m really excited to be here. This is actually my favorite podcast out there. I’m a subscriber. I listen to it religiously, or I should say atheistically.

Liron [00:04:17] Michael and I just recently met on the subject of his father’s writing and Doom Debates. It was just kind of funny because occasionally I would read something by his father—I’ve read both of his most popular books, the nuclear war one and the memoirs one—and I’d be tweeting like, “Man, this is so good, this is so underrated.” And then also I’d be tweeting Doom Debates. And then Michael was following me on Twitter and there were all these connections happening and I’m like, okay, we gotta just have a discussion. There’s so much to talk about.

But Michael, I want you guys to know before we get to everything that the number one takeaway is about the moral code that Daniel Ellsberg has spelled out. Michael has absorbed so many of his lessons, and Michael and I both agree with this moral code. Michael, what is the moral code?

Michael [00:04:54] Basically, if you’re in one of these top AI labs and you believe in ASI risk—as I think a lot of people do in there—and specifically if you’re seeing estimates within your organization that show or suspect that the danger here is much larger than your leader or your CEO is letting on—whether it’s Sam Altman or Dario Amodei or Mark Zuckerberg or Elon Musk—if there are internal estimates showing that the danger is much greater than your leaders are saying, I think you owe it to all of us out there. Everyone. I don’t have kids, but everyone who has kids, all the kids out there, to warn the public. This is not just an internal corporate decision. This is a decision that will impact all of humanity. And so you do have a duty to the public on this one, and not just to your CEO’s profits.

Vietnam War Parallels to AI: Lies and Escalation

Liron [00:05:49] Exactly, you’re playing dice with lives. And the Vietnam War, the true story of the Vietnam War, proves that when nobody stops, when nobody blows the whistle for years and years, then the tragedy will play out. For those of you who don’t know, the story about Daniel Ellsberg is even though he leaked the Pentagon Papers, he could have leaked them as much as four years sooner. And he was just kind of waiting, wasn’t sure if he wanted to do it, and eventually he did it. He probably helped, or actually did—did he help end the Vietnam War? Whatever it is, he could have ended some of the senseless killing if he leaked earlier. Correct?

Michael [00:06:19] Yeah. The way he puts it is that actually it set in motion a chain of events. It did end up contributing to the Vietnam War ending earlier, particularly Nixon’s response to his leaking. Your listeners may have heard of “The Plumbers.” The Plumbers were formed to get into my dad’s psychoanalyst’s office to find blackmail information. And then that played into Watergate in different ways, and it went from there. So it’s not quite as simple as “Pentagon Papers stopped the war,” but they certainly contributed.

He does believe his first day at the Pentagon—he was a RAND Corporation analyst before that—his first day at the Pentagon working on Vietnam happened to be the day that the Gulf of Tonkin incident started. I think it was August 1964. So there’s this crisis, and within days, my father knew, and everyone in the Pentagon knew, that everything the public was being told was a lie. The claim that this was an unprovoked attack, the claimed extent of the attack, the whole thing was essentially a radar mistake. And that was the basis of getting into the war. And my father knew that. And so did lots of other people. Did they have a duty to tell the public and say, “Your kids are going to be sent to die over here based on a lie”? Did they have a duty to say, “We’re gonna tax the fuck out of you and use your tax dollars based on a lie”? I think the answer is yes. And I think that if you’re in an AI lab and you know that there are lies or gross mischaracterizations of the risks, I think you owe it to all of us. And frankly, if you have kids, you owe it to your kids.

Liron [00:08:01] Right now you’re talking about the Gulf of Tonkin incident in August 1964. And this was basically when the US got involved in the Vietnam War. This was 23 years after Pearl Harbor. So it was kind of another—they were trying to echo it, be like, “Look, we got attacked again, and we gotta get in this.” I don’t think they were spinning it as US defense, but they were spinning it as, “Well, you know, we wanna fight communism and they’re attacking us, so we might as well.”

Michael [00:08:25] I mean, we were up in their neighborhood and it’s pretty easy to sell the US public on attacking a country if your troops get attacked or your forces, even if you’re like right up in their backyard.

Liron [00:08:38] Once we sent troops based on this Gulf of Tonkin resolution, we started going more and more.

Michael [00:08:59] And another factor of this that my father spells out quite a bit in his first memoir, Secrets, is that Johnson and the government in general were constantly lying to the public about troop estimates. If we just send another hundred thousand men, we’re gonna solve this. Like, none of them believed that. Nobody inside believed that. There were some hawks who were like, “You know, we can beat the shit out of these commies.” But even they didn’t believe it was gonna be done with only a few troops. They wanted 500,000 troops. So again, the public was being lied into every escalation around that. And so the question arises: if you live in a democracy, which we do, do you owe the public something when there are these matters of grave public life and death?

Liron [00:09:33] And just to the history—I don’t think this show attracts that many history buffs—but apparently the French officially withdrew in 1954 when they were fighting the Communists in Vietnam. And then there was this uneasy peace, and the US was slowly getting more and more involved. And then the Gulf of Tonkin resolution is like, okay, now the US is really fighting, even if it’s technically not a war. It was, famously, a police action, not a war. So I’m getting some info here from ChatGPT: August 2nd, 1964, the US destroyer USS Maddox on an intelligence mission was allegedly attacked by North Vietnamese torpedo boats. The Maddox fired back and called in air support, damaging the attackers. And then August 4th, a second attack was reported, but later evidence and Johnson’s own private doubts suggested that the second attack probably never happened.

So that’s what you were saying, which is they made a huge deal out of a skirmish that didn’t have to escalate to a full-blown war or police action. But the US government just was building this momentum and kind of dragging the people along with it. And this was kind of the first of many very shady practices in how the government was communicating to the citizens, correct?

Michael [00:10:40] Right. Historians have debated for decades, why did we get in exactly? How did we get drawn in? A lot of it, my understanding is, is just kind of bureaucratic decisions. There wasn’t one clear, obvious reason why we kept getting sucked in more and more. There was a lot of bureaucratic inertia. There was domestic electoral politics, wanting to save face, not wanting to look soft on Commies, and it just dragged us into this total disaster.

Liron [00:11:07] Now when you say like, oh, we have no idea how it happened. I was around in 2003 when George Bush got us into Iraq. Did your dad think that Iraq was straight up another Vietnam?

Michael [00:11:19] A hundred percent. He was banging that drum real loud. And there’s a wonderful debate online that was on C-SPAN—I actually put it up on my dad’s YouTube—with Bill Kristol, the arch-neocon. And my dad was just saying, “This is gonna be a disaster.” He predicted a lot of the stuff that ended up happening, and Bill Kristol was like, “No, this is gonna be a cakewalk.”

My dad was a big believer in nonviolent civil disobedience. He was very proud that he had been arrested, I think 80 times, for acts of nonviolent civil disobedience, mostly around anti-nuclear activism in the eighties. And it was kind of—he was an atheist too, but he kind of joked that this was his religion. He really believed in the power of this. I got arrested with him once in the run up to the Iraq war. Which again, we can just see all the factors and debates and ways that a country can just get drawn into, or in this case race towards, a total disaster.

Liron [00:12:37] Nice. Wow. You got arrested with Daniel Ellsberg. It reminds me of this company Omaze that—not to make light of the matter—but when you go to charity fundraisers and they have these auctions and you can get these experiences that are so unique. Like, “Oh, cooking with this famous chef.” So that could be on the list: “Oh, you get to go get arrested with Daniel Ellsberg.”

Michael [00:13:06] It was his thing. That actually was pretty much what he most believed in. And I could get into the reasons why—he had a whole theory around this developed. He expresses it quite a bit in his notes, which we are publishing. I mean boxes, hundreds of boxes, or at least a hundred boxes of handwritten notes over 50 years. And his longtime assistant and my friend Jan Thomas was basically the only person who could read his handwriting, a very cryptic handwriting, and transcribed a big chunk of these notes. And they’re being published in this book coming up. And a lot of them have to do with the ethics of nonviolent civil disobedience and the duty that officials have and citizens have.

Liron [00:13:56] We’re gonna talk a lot more about your father’s brave antics. ‘Cause he definitely seems pretty fearless. A lot of the stuff he does, even just getting himself stationed in Vietnam during all these conflicts and the story he tells in the Secrets book, I was blown away. Like, wow. I don’t think that I would ever do that, but that’s cool that he did. And we’re gonna talk about how when he leaked the Pentagon Papers, he was totally risking the rest of his life. He was gonna go to jail. In fact, he almost did get imprisoned for life. And the only reason why he was back out writing these books, participating in society, was actually because President Nixon kind of bungled the prosecution. Correct?

Michael [00:14:39] And all this was before I was born, so had that not been bungled, we wouldn’t be having this conversation. So thank you Nixon for that one. But yeah, so basically he was an insider. He had helped write the Pentagon Papers. He was one of, I think, 35 authors of it. It was an internal study of how we got into Vietnam. It was commissioned by Robert McNamara. And they basically documented from the beginning just how many lies. They documented how many things that the public was being told were just total lies. And also documented the reasons—like nobody believed in the Domino Theory. That was a total hoax. It was just PR. Just the troop estimates... nobody inside believed what the public was being told.

My father had visited the front lines—he had been a Marine company commander, but he visited Vietnam as more of a civilian observer. But he was in the front lines carrying a weapon. And he saw the stalemate up close, he saw on the ground why nobody believed this was a winnable war. And he also saw the kids. He saw that these kids were being displaced and suffering. And he came to believe that we should not be fighting a war that there was no prospect of winning and that was killing so many people on both sides, pointlessly.

And he decided the public needs to know how they’ve been being lied to. So he began photocopying a 7,000 page document in October of 1969. And then he tried to give it to Congress and there was a bunch of blockages there. All of this is in his memoir Secrets. And eventually he gave it to the New York Times and it was a bombshell. And the public learned for the first time just how much they were being lied to.

Liron [00:16:40] Yeah, it’s great. Now you mentioned that nobody really believed Domino Theory. So from what I actually learned about Domino Theory in high school history, it’s just the idea that the falling dominoes are basically countries going communist, right? So like communism kind of originated in Russia and then China was super communist and everybody was saying, okay, the Soviet Union is gonna go into a bunch of countries and the US is gonna influence a bunch of countries. But every time the Soviet Union gets some country’s government to go communist, that’s like another domino falling. And they’re gonna try to topple the whole world to be communist. And then our way of life, capitalism, which I personally think is worth fighting for, is gonna be isolated. And then also they’re gonna potentially fight a war with us. ‘Cause this is also during the Cold War, right? The sixties. This is the height of the Cold War. Everybody’s like, yeah, they’re literally just going to nuke us and have the whole world be communist. So the slippery slope—that’s basically Domino Theory, but you’re saying that the government leaders didn’t even believe that?

Michael [00:17:35] I don’t know about overall, but the Pentagon Papers showed that nobody believed that keeping Vietnam from being communist was a major step in the global fight against the spread of communism. There were very specific dynamics going on there in terms of just the bureaucratic politics of the United States and the electoral politics. Nobody believed that this firewall had to be there or else global communism was gonna spread more.

Liron [00:18:07] Right. That makes a lot of sense. So my understanding is that the government officials really were freaked out about the Soviet Union and were overestimating them, and it was kind of they felt really egged on to do this arms race, even though the Soviet Union wasn’t actually going as hard as people thought they were. So they were scared of communism, but they weren’t particularly scared of Vietnam as a domino.

Michael [00:18:29] Right, exactly. And just look at the cost to our society: 50,000 dead and the way this war just tore through our society. And usually people don’t even count the millions of Vietnamese dead, either directly or through the kind of refugee dynamics that were put in place—for just basically no benefit. Like nobody saw any big benefit to this. That is different, I would say, than the current situation with AI where everybody can see benefits right now. I see benefits right now for me personally. I don’t think they’re gonna last very long, but it’s very seductive, the ASI thing or the AI thing right now, because we are, as you talk about in your show, in this kind of golden age where it actually is improving our lives in a lot of ways and it’s messing other things up as well.

Liron [00:19:24] Right. So I mean, now that we’re so deep in this Vietnam discussion, the unanswered question here is: so why did President Johnson want us to keep fighting in Vietnam if he didn’t believe in Domino Theory? If he had the information about the war being hopeless, what was the real secret reason why they kept fighting?

Michael [00:19:40] A lot of it comes down to not wanting to appear weak on communism and not wanting to be the president that loses Vietnam to the Communists. So there was a lot of—my father writes about this quite a bit, especially in the new book that’s coming out—a lot of these types of politics can be explained by a type of machismo where it just really is, you don’t wanna look weak. And honestly, I think a lot of that is playing out in the AI debate as well. If you really listen to people, the number one thing—and of course you’re aware of this—that you hear is: “Well, if we don’t build it, China is going to.” So what? And the companies also in the US believe that with each other: “If Anthropic doesn’t build it, OpenAI will,” and on and on. And so none of these guys want to get beat. A lot of it is like, we don’t want to get beat, even though the thing that is gonna beat people is gonna beat all of us. So it’s like you don’t wanna be beat in killing everyone. It’s this weird, it’s very pathological.

Liron [00:20:41] Yeah, and I’m totally willing to believe that. This is kind of representative of the course of history on so many different major events. Like with AI Labs, a lot of people tend to be like, “Oh my God, what is Sam Altman’s secret plan? He tells you this, but he really thinks this.” I think there’s a little bit of that, but I also think it’s just like, “Eh, I don’t know. This seems like a good plan. This seems like something good might come out of it.” I don’t think that even the most strategic people in the world... it’s very rare for somebody to have this elaborate plan. I think people are more like, “Look, I just try to build stuff. I feel good about building it.” That’s usually simple motives.

Michael [00:21:15] Yeah, I don’t know how people in the government, in the military are thinking about it. They’re probably more strategic, but that tracks to my impression of the tech CEOs. They’re just like, “This’ll be cool.” I mean, Elon Musk is basically open about that. He, more than any of the others, as you know, has been very vocal about AI X-risk. And as far as I can tell, his stance is kind of like, “Well, the other people are racing towards it, so I might as well race towards it also.” And I think they all have this idea that like, they’re gonna be the ones to steer it. Which either means they think they’re gonna be the ones to implement safety, or that somehow this thing is just gonna take over, but it’s somehow gonna be better because they were the ones who steered the thing that took over all of us. I don’t know what the thinking is there.

Liron [00:22:03] I never thought about this, but now that you mention Elon Musk not hiding anything: that’s so funny, because everybody likes to criticize Elon Musk, but you rarely hear people accusing him of having like secret motives, because he’s so open. He is like, “Yep, I’m willing to spend a trillion dollars to get humanity to Mars. I’m willing to be an asshole to get us to Mars. Everything is just my Mars mission. That’s what I like. I love rockets. I love being a transhuman.” Nobody’s saying like, “I bet Elon wants to secretly be the dictator.” He’s telling us he wants a good amount of power, but probably not all of it.

Michael [00:22:38] Yeah. I mean, he’s pretty clear on his intentions. I don’t understand, he hasn’t explained yet—and I hope I’m wrong, maybe he has and I just haven’t heard about it—but I haven’t heard him explain: “Okay. You were one of the OGs on AI existential risk, and you haven’t changed your views on it that much, so why are you racing towards it?” Like, he could be a voice to say, “No, let’s not build this.” So much of what he does is about his ego. What if he realized that his ego would be enhanced by being right on this and by saying, “No, none of us should build it and I’m gonna not build it. And everyone else should take that stance as well.”

Liron [00:23:18] So just to wrap up this part about Daniel Ellsberg, the Vietnam War, and the Pentagon Papers, one of the things that stuck in my mind was that your father noticed LBJ, President Johnson, lied to the public about the troop estimates, right? He was saying, “Oh, we’re just gonna send in like a few more troops.” And he knew perfectly well that they would need so many more troops if they were gonna finish the job. Or he would just send in a few troops and then get shellacked, which is what happened.

Michael [00:23:42] Yeah, exactly. There just wasn’t an honest discussion with the public whose children were being sent over there about what it was gonna actually take and what was actually gonna happen. And that, again, was being lied to. And eventually my father came to believe that the public needed to know about this, that they were being lied into a war, and that it was his duty to warn the public about all the ways that they were being lied to, that the democratic system was being subverted. The constitution was being violated.

Liron [00:24:17] Right. Now, you know, the Vietnam War is something that humanity could afford to bungle, right? Because we sure did bungle it. The government lied to the US citizens. We fought a war that we couldn’t win very easily for stakes that we didn’t care that much about. Like, we didn’t actually care that much about Domino Theory. So it was a big bungle, like nobody’s happy with the Vietnam War, a bunch of people died, but it’s okay because the world survived, right? But I think the lesson here is the next time you have all these powerful people that we trust to be telling us what’s a good decision, what are the trade-offs, they’re going to bungle it again, except the stakes could be a lot higher. Correct?

Michael [00:24:54] Yeah. I mean, we’re completely bungling this right now. There’s not even any pretense at any high levels at this point that people are taking ASI risk seriously in terms of just realizing the stakes here and acting appropriately. It seems like everybody is in a multipolar trap to just be racing ahead towards this race to the bottom where everyone’s disregarding safety far more than they should be to be getting the next model out and shipping.
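A toy illustration of the multipolar trap Michael is describing, with made-up payoff numbers (nothing here comes from the episode): each lab’s best response is to race no matter what the other lab does, even though mutual racing leaves both worse off than mutual pausing.

```python
# Toy payoff matrix for the "multipolar trap" / race-to-the-bottom dynamic.
# The numbers are illustrative placeholders, not estimates from the episode.
# Higher payoff is better for that lab.

PAYOFFS = {
    # (lab_a_choice, lab_b_choice): (lab_a_payoff, lab_b_payoff)
    ("pause", "pause"): (3, 3),  # coordinated caution: modest, safer progress
    ("pause", "race"):  (0, 4),  # the pausing lab falls behind
    ("race",  "pause"): (4, 0),  # the racing lab pulls ahead
    ("race",  "race"):  (1, 1),  # race to the bottom: everyone cuts safety margins
}

def best_response(their_choice: str) -> str:
    """Lab A's payoff-maximizing choice, given what lab B plays."""
    return max(["pause", "race"],
               key=lambda mine: PAYOFFS[(mine, their_choice)][0])

for their_choice in ("pause", "race"):
    print(f"If the other lab plays {their_choice!r}, "
          f"my best response is {best_response(their_choice)!r}")

# "race" is the best response to both "pause" and "race", so (race, race)
# is the only equilibrium even though both labs prefer (pause, pause).
# That is the coordination problem in miniature: no single lab can fix it
# by unilaterally pausing.
```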

The Doomsday Machine & Nuclear Insanity

Liron [00:25:23] On the subject of blowing the whistle and putting yourself out there as a nonviolent resister... which is something your father is known for, not just the Pentagon Papers, but attending various protests and speaking out, writing memoirs that kind of skirted the edge of the gray area of what you really should be revealing versus not. Is that fair to say?

Michael [00:25:44] I’m not sure I understand the point about the gray area.

Liron [00:25:47] Well, I’m just saying like, when you think about Daniel Ellsberg as putting himself out there as like a nonviolent resister, I feel like even after the Pentagon Papers, even just how much he talked about the nuclear program and the United States’ approach to it, I think he was entering the gray area of putting himself at some risk.

Michael [00:26:03] I mean, he definitely was courting that. In fact, towards the end of his life, he put out a couple of secret documents that he still had in his archives, which are now at UMass Amherst. They weren’t bombshells like the Pentagon Papers were, but they were secret, and putting them out violated secrecy laws. And he basically said, “Come and get me.” I mean, he had cojones, I have to say. And this was towards the end of his life and nothing ended up happening. But he really did want to be a further test case for our government secrecy system. And he tried as best as he could, even after the Pentagon Papers.

Liron [00:26:43] Now, unlike Snowden, who basically fled the country to Russia, it’s really great that he got off at his trial because Nixon bungled it with Watergate. Like the government obtained evidence against him illegally. So it’s really nice that your father could just live out his years here in the United States and they didn’t come for him.

Michael [00:27:00] I mean, I’m happy about that. It worked out well for me. But no, he was facing 115 years in prison. And there’s a pretty good shot that he would’ve gotten that. Not a hundred percent, ‘cause there was major free speech aspects to this that have a lot of strength. So he had good arguments on his side, but there was a really good shot he would’ve gone to prison and he expected to go to prison for the rest of his life. And he still thought that it was worth doing this. And he would always say to potential whistleblowers: “Consider that finding another job, finding another way to support yourself other than what you’re currently doing, might be a worthwhile cost to pay if the public danger is great enough.” And I think with ASI that is the case.

Liron [00:27:50] Do you think your father would’ve been really into the “Stop AI” movement—which is different from “Pause AI”? I had them on my show last year. The Stop AI guys are all about standing in front of OpenAI’s office and kind of blocking their entrance until the police come and arrest them. It’s very Martin Luther King. It’s very nonviolent resistance. Let me be clear, there’s no violence whatsoever, but they are violating the law. Whatever type of law, misdemeanor, I don’t know if it’s technically a felony, but you know that you’re not supposed to be trespassing and blocking somebody’s entrance to their office. But they do it because their whole point is: somebody needs to stop OpenAI and they’re willing to pay the consequences. They are on trial right now. Crazy stuff. They actually subpoenaed Sam Altman. So they’re very much committed to this whole nonviolent resistance direction. Even though they’re still a relatively small crew, I think there’s less than a dozen of ‘em. When they started out, it was literally like two people. It’s slowly been growing. They’ve been doing hunger strikes. I wonder if Daniel Ellsberg would resonate with their approach.

Michael [00:28:47] Yeah. Well, there’s two questions there. One is, does he agree with the issue? And then two, if he did, would he support that? The second question’s really easy to answer, which is: of course he would support it if it’s nonviolent. And I’m totally in favor of that. If it’s nonviolent, go for it.

I do not believe that anybody should be committing violence on this. I could imagine as things escalate that someone will get in their mind like, “Hey, I could change history by taking out this person or that person.” I mean, the stakes are certainly high enough. I think that would almost certainly be disastrous, because it would just lead to a downward spiral where anyone who is trying to be talking about this would get lumped in with these terrorists or assassins. And it’s not gonna do any good because this is a multipolar trap. There’s no single person or company that can solve this problem unilaterally by withdrawing. It has to be a group effort, international coordination. And it really is just a classic game theory coordination problem.

The first question is more complicated. So he died in 2023. I was just starting to get freaked out right at the beginning of that year when I first used GPT and then particularly Claude is the one that pushed me over. As soon as I used Claude, I think it was Claude 2 at that time, I was like, “Oh, shit.” My first thought was “my job is over.” Like it’s a very good copywriter. And then I kind of played it out. I’m like, well man, wow, if it’s this good now, maybe we’re all over.

And so I talked to him about it. He was agnostic. He wasn’t a tech guy. He certainly didn’t dismiss it, but he just was kind of like, “This isn’t my field.” He was focused on nuclear X-risk. I did show him GPT and he did the typical—he was silent generation, but I noticed like when you show boomers GPT, the first thing they do is like, “Well, let me type my name in,” and then like, “Oh my gosh, that has like five errors. This thing is a piece of shit.” And I think he had somewhat of that response, so he wasn’t that impressed with it. But I think that if he were around now and I could show him the things that I’m reading now and make the case, I think that I could get him on the doom train.

Now if he was, would he support what Stop AI is doing? From my understanding of them, absolutely. Anything that is nonviolent. We’re in an emergency right now. This is absolutely “Don’t Look Up” asteroid coming towards us terrain. You know, there’s a famous scene in that movie where Jennifer Lawrence’s character basically has like a total freak out on air. You talk about like the “missing mood,” right? I think there’s a missing mood right now. That doesn’t mean everyone has to become hysterical, or that it would be good if everyone was hysterical, but I do think there should be some people in our movement who really are like, “We’re in a fucking emergency here. Like, we need to put all hands on deck in different ways.”

Liron [00:32:07] Yeah. So this is a good segue to your dad’s other book, The Doomsday Machine. Because in both the case of the Pentagon Papers and the case of the nuclear war planning, your dad found himself in this position, which, like you said, it’s like the movie Don’t Look Up where he’s like, “This is insane. Why is everybody else around me going about like this is business as usual?” Like the Vietnam War. It’s like, “Hello, we’re lying to the American people. What we’re telling ‘em about troops, what we’re telling ‘em about our chances of winning the war. We are knowably giving them wrong information.”

And then similarly with The Doomsday Machine, just jumping right into the book about the United States’ nuclear war planning and the insanity of it all. One of the scenes that really sticks with me from The Doomsday Machine is when the US was charting flight paths to do like a nuclear bombing, like if they’re fighting a war. And the Soviet Union was launching nukes at us and we were launching nukes at them. And there were people whose job it was to carefully plot out where the planes would fly so that like all these nuclear bombs would drop and then there would be huge explosions and all of this danger even just to be flying. The explosions would be so big—10-megaton explosions are like a thousand times bigger than Hiroshima. Insane, insane explosions. And it’s like, “Don’t worry ‘cause we’ve got these planes flying and here’s like the perfect path so that the US pilots can make it back safely.” And they were spending like many person-years on these charts. And meanwhile it’s like, “Yep, there’s literally tens of millions or hundreds of millions of people dying in this unbelievable carnage.” But it’s like, “Don’t worry, we have this flight path.” And not only that, but it’s like, “Wait a minute. Aren’t the winds unpredictable? Isn’t this also going to fail? Yeah. The pilots are probably gonna die too.” And your father was pretty much the only person who’s like, “This is insane enough that my life doesn’t even matter.” The reaction here... there’s a major missing mood here.

Michael [00:33:48] Yeah, totally. This is the book, The Doomsday Machine. It’s good I’m on Doom Debates ‘cause I grew up hearing the contents of this book. He basically worked on it my whole life. I’m 48, so I grew up, literally as a 4-year-old, hearing the contents of this at the dinner table.

So yeah, he worked for the RAND Corporation, which is a kind of quasi-governmental think tank that’s affiliated with the Air Force. And he began—this was in the late fifties under Eisenhower—he began studying nuclear command and control in the Pacific. He actually went out and visited field commanders to study essentially: what is our command and control system actually like? Could there be accidental launches? Could there be launches from rogue commanders, which later was portrayed in Dr. Strangelove?

Liron [00:34:50] Amazing movie, Dr. Strangelove. Which, and remember he said in the book, Dr. Strangelove, you might as well be a documentary.

Michael [00:34:57] Absolutely. I mean, obviously there’s fantastical elements of it, but Kubrick completely got right some of the dangers there and the main danger. Everyone should go watch Dr. Strangelove. I’ll try not to give too many spoilers here, but the plot setup basically is that a commander at a kind of local Air Force base, one in Ohio or something like that, who goes crazy. He develops a theory—this is early in the movie, so I can give it away—he develops a theory that there’s a communist plot to kind of ruin our precious bodily fluids through water fluoridation. So he was almost like an early RFK. Sorry, anyone out there who likes RFK, but he was kind of a health nut conspiracy theorist and he believed this was a communist plot and he wants to launch an all-out nuclear attack on the Soviet Union because of this, and he manages to do so.

My dad’s point, and he was working on this before that movie came out, was that this was totally within the realm of possibility. The main key here was delegation. Everyone makes this big deal about the nuclear football that the president has. “There’s this nuclear football that’s always with the president, and the president’s the only one that can launch.” That’s total bullshit. I mean, everybody knows that’s bullshit because there’s a big problem with that: if you take out Washington with one weapon, then you don’t have a return strike force. So it’s fairly well understood that that power has been delegated in cases of war.

And what my father discovered pretty early on in the game is that it had just been delegated down and down and down to the point where pretty low level people in the command chain could, if they thought they were under attack, launch nuclear war. And that would just escalate to basically blowing the whole world up.

Liron [00:37:03] Right. I mean, if you’ve got a submarine that’s off the coast of the Soviet Union or North Korea or whatever, deep underwater, it’s not going to be in radio contact with the president, right? With a chain of command.

Michael [00:37:13] Right. And this happened, this is well documented, on the last day of the Cuban Missile Crisis. Basically there were Russian subs off of Cuba. Our forces saw them and were trying to get them to surface with depth charges. So we weren’t like trying to sink the sub, but we were basically sending charges saying like, “You better get the hell up.” The sub commanders thought that they were under attack from these depth charges. One of the charges had damaged the sub, so they were in literally 140-degree heat inside the sub. I mean, I can’t even imagine that. And they were losing oxygen. They thought they were under attack. Unbeknownst to our forces, and unbeknownst to everyone until I think 20 or 30 years after the Cuban Missile Crisis, that sub had a nuclear torpedo on it. They had armed it with a nuclear torpedo.

They had to have the commanders sign off on launching the torpedo. I was just reading this history, I could be getting some of this wrong, but it’s in the book here. Normally it would’ve taken two officers, but because the flotilla commander, Arkhipov, happened to be on this submarine, it took all three agreeing. And two of them wanted to go ahead and launch nuclear war, and Arkhipov was like, “No, I don’t think we should blow up the world over this.”

Liron [00:38:50] Yeah, yeah, yeah. I heard it was like super intense, by the way. Like I heard their sub... it was like a crazy 110 degrees or whatever, like everybody was sweating their balls off and they’re like, this one guy is like, “Ugh, no, just don’t do it.”

Michael [00:39:02] Yeah. Let’s not do it. And so this is where people think like, “Oh, well, nuclear weapons kept us from great power war. And they haven’t gone off. So we’re outta the woods. Like it’s all in the past. The whole thing worked.” There are so many instances like this where basically blowing the entire world up, ending civilization and killing most humans, was in the hands of like one person who was making decisions under really stressful circumstances in a submarine or something like that. Like, we came really, really, really close multiple times. So this is not a system that I think we should be trusting the future of civilization to.

Liron [00:39:45] Exactly. So I have three recommendations for the viewers to go deeper into this topic because I think it’s super important. For me, it was very formative to read this, even though I’ve only really gotten into this stuff in the last few years. It’s really shocked me like, “This is the world we live in. Like this is so insane how close we are, how causally close we are, how short the chain of causal events from our day-to-day life to nuclear Armageddon is. It’s so close. It’s so few button pushes required.”

So my recommendations are: Number one, if you like Doom Debates, there’s actually an episode that I did—I call it my most underrated episode—search for “Doom Debates Nuclear Risk”. I’ll link to it in the show notes. Number two, Dr. Strangelove, as Michael mentioned, it’s such a good movie. It’s from the sixties, in black and white. I’m the last person who would watch a black and white movie that’s that old, but I watched it twice. I think it’s very watchable. You get sucked into it. They did an amazing job.

Michael [00:40:44] It’s on pretty much every serious person’s list as like one of the greatest movies ever made. I think it was ‘64. And it’s amazing how they made this so... but it’s hilarious.

Liron [00:40:59] Yeah. I don’t even wanna spoil it. I heard some spoilers of some of the funny lines, like there’s some genius lines in that movie.

Michael [00:41:05] Oh yeah, no, and the acting is great.

Liron [00:41:11] Exactly. And then the third recommendation is that book that you were holding up: The Doomsday Machine by Daniel Ellsberg. So even if Michael weren’t on Doom Debates right now, I could honestly tell you this is just a top 10 lifetime book. It’s just such an underrated, under-discussed book. It’s like, how do you go through life without knowing the stuff in this book? Just so mind blowing. So I highly recommend that book.

And I do think part of the reason why you can extrapolate the risk of AI Doom is you look how far nuclear doom has gone. Honestly, if we didn’t have nuclear weapons and the Cold War and the Cuban Missile Crisis... if we didn’t have all these events getting us so close to nuclear doom, it would be kind of intuitive to just look at all of human war fighting and be like, “Yeah, you know, we can never really hurt the planet that much. We’ve been pretty robust.” But when I look at the Cold War and how close, how big nuclear bombs are and how damaging and what a hell on earth it would create... even if not everybody dies, it would set us back so far. For me, it just becomes so natural to be like, “Okay, well I don’t think this is the end of the story. I don’t think this is the most fatal thing that we are going to build, and this is just in the last century.” So to me, it’s just so obvious that we’re playing with more fire than we’re equipped to deal with. Do you agree with that?

Michael [00:42:19] Yeah, absolutely. So when my father was writing nuclear war plans at the beginning in the Kennedy administration, he was very, very, very high level. He was not a public official, but he was very high level at the kind of bureaucratic planning level of it. And the shocker of his life—I would say probably the biggest shocking moment of his life—was early in this process. He asked a very simple question: “If we executed our general nuclear war plans as planned...” And by the way, they targeted every city in Russia and China. And they didn’t even make... there was no option in these plans for only attacking Russia. The plans just didn’t distinguish at all within the communist bloc. So we’re gonna take out all the commies and target civilians. Which by the way, there is no definition of terrorism that would exclude targeting every civilian on an entire continent. Like that kind of meets the definition of terrorism, I would say.

So my father learned of these plans and he asked—and he didn’t think he was gonna get an answer—but he asked the Joint Chiefs of Staff, “If you executed these plans, how many civilians would die?” And he thought they were gonna say, “We don’t know.” Instead, a day or so later, the answer comes back and there’s a little graph, and the graph says something like: 300 million immediately, going up to 600 million over the next several months from the fallout. And my father just had a life-changing moment right then. And even at the time he said, “This piece of paper should not exist.” Meaning the thing it refers to... there should not be a real thing in the world that is referred to by a chart that says if we do this thing that we’re planning for, a hundred Holocausts’ worth of civilian lives will be destroyed. That just shouldn’t be a thing. There’s nothing, there are no stakes in the real world that justify that type of civilian carnage.

And he made it his mission at that point to try to reduce the civilian carnage from the plans. And he wrote an updated plan at the beginning of the Kennedy administration that basically did get taken up, and that, in their calculations, would lead to like a hundred million dead. So, you know, 15, 16 Holocausts. And at the time the logic was like, “Okay, well I’m making it better.” But looking back, he says participating in any part of that process is, in his view, the most shameful thing he ever did. Once you’re at that level of insanity, there’s not like degrees of less insanity, like you’re just in an insane system. And he would say an evil system. A system that is targeting tens, hundreds of millions of civilians as a part of its course... that is, he would say, evil. There is no definition of evil that would exclude that.

Liron [00:45:37] Now, today, this idea that your dad was early on is propagating more and more. This idea that—as Annie Jacobsen puts it in her really great recent book Nuclear War—nuclear war must not be fought and can’t be won. Like, you’re not really the winner if you just killed half a billion people or more, and your own country suffered a bunch of nuclear attacks. You’re not gonna be in good shape as a country. You’re not gonna have a good day.

In the fifties and sixties, when your dad was at the RAND Corporation, this high-level policy advisor to President Kennedy, they really expected to fight this war because they really thought, “Look, you got Russia, you got us, like we’re at loggerheads here. Like this needs to be fought out.” And so those were serious war plans. Like they actually thought they were going to pull the trigger and they’re like, “Look, this is the best we can do. Half a billion lives dead. It’s just, what are you gonna do?” Very much parallels the attitude today of like, “Look, China, man, it’s China. What... of course we’re gonna...”

Michael [00:46:31] Right. “What are you gonna do?” Well, and the key thing there is that they didn’t know, and nobody knew until the eighties, about nuclear winter. There’s an incredible movie called The Day After about basically what happens after an all-out nuclear war and nuclear winter. The basic idea is you drop somewhere between a hundred and a thousand warheads on cities that burn, and you get enough soot that it’s essentially similar to what happened with the dinosaurs and the asteroid. There’s essentially no photosynthesis happening anymore for decades. Within years you have total crop failure. And the estimates are that probably a hundred million at most would survive that. And it would probably be like Australia and the equatorial parts and very remote areas. And you’d be back essentially in the Iron Age. No electricity. Essentially erasing human civilization.

And that’s a hundred to a thousand weapons. These plans called for thousands of weapons. This is the crazy part: they didn’t know about nuclear winter back then. So they were just flipping coins with 600 million dead, just a hundred times the Holocaust. Those stakes. Now we know about nuclear winter. This is well-established science, and they still have—Russia and us—we both still have thousands of warheads that we can’t use. This is totally irrational. We cannot use those warheads without committing suicide, without literally all of us starving. And it wouldn’t be a pretty death either. It would be a very ugly death. They know about this now, and yet they still have a lot of these weapons on hair-trigger alert. It’s absolutely nuts. Like, that’s kind of the main takeaway I would say of this book. The last chapter really gets into this: that you can be a part of these systems that kind of seem rational inside their bureaucratic logic, but are just totally insane.

Mutually Assured Destruction vs. Superintelligence Risk

Liron [00:48:49] Right now, there are actually a lot of people who would push back on what you and your father are saying, the “Of course we shouldn’t have so many weapons on hand, of course you can’t win a war like this” position. They’d be like, “Actually it’s crazy like a fox. Because Mutually Assured Destruction, man. The more weapons you build and the more countries have these weapons, the safer we are.” Which I really make fun of. If you go to my Doom Debates episode about nuclear AI that I’m gonna link to in the show notes, I use Ben Horowitz of Andreessen Horowitz fame as the poster boy for this kind of like, “Oh, mutually assured destruction is so great. Decentralizing nukes is so great. That makes us safer.”

Ben Horowitz [00:49:24] Like I often remind people, the last nuclear bomb that was launched was when only we had the nukes. That’s a dangerous world, with one person having the nukes. Yeah, we haven’t had any nuclear activity. And there’s a very, very specific reason for that. ‘Cause everybody’s got nukes, and nobody wants to get nuked. And I think that AI is a... you know, to the extent that AI is a super weapon, that will also be true there.

Michael [00:49:49] That is absolutely insane. Like that guy... nobody should take a single thing that guy says seriously. That is an absolutely insane viewpoint.

Liron [00:49:57] Yeah, so Mutually Assured Destruction, I mean, it’s not zero value, right? Obviously it does... I do actually think Putin is scared of Mutually Assured Destruction. But the problem is, it’s just kind of like this half-baked solution. Like, yeah, it has some value, but it’s just not going to stop it when Kim Jong Un or his son or whatever finally presses that button on the first nuke and the nukes are flying, and it’s like, “Okay, well we are now past the threat of Mutually Assured Destruction. We are now actually destroying each other.”

Michael [00:50:24] Right. So you have to distinguish between Mutually Assured Destruction and Mutually Assured Nuclear Winter. Those are two different things. Mutually Assured Destruction you could... if you just had a few nukes... I mean, I think there is a much stronger argument for each side having a few nukes and saying, “Look, we’re gonna really fuck you up if you attack us.” My dad was still against that for a variety of reasons, because it really does feed the game-theory logic of proliferation. That said, you can make a stronger argument for that, but you really can’t make a good argument for this Doomsday Machine.

So the idea of a doomsday machine is: on our side, and it could be on their side too, each side has a machine where, if there’s an attack on us, you press the red button and our nukes, sitting on our side, just blow up and blow up the whole world. So the idea is that if you attack us, we’re going down and we’re taking everyone with us, so don’t attack us. And then the other side has that too. And it gets into these very strange paradoxes where you have to sort of be credible in your willingness to be crazy. So like, the Madman Theory: once you get people playing like they need to pretend they’re crazier than they are, it’s a very unstable system at that point.

Liron [00:51:47] Yeah, yeah, yeah, that’s right. And when I discuss the subject in my episode, I also talk about there’s this constant background danger of accidents and governments falling, terrorism. So it’s not just this pure clean game theory where it’s like, “Look, I found a Nash Equilibrium.” It’s like, no, misuse and accidents are just constantly there giving you this background push. So like every day is a danger until eventually it’s like, “Oops, somebody had an accident, somebody misused it. Now you’re past Mutually Assured Destruction. Now you’re escalating.”
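A rough back-of-the-envelope sketch of that “constant background push” point: even a small chance of an accident or misuse each year compounds over decades. The 1% annual figure below is a placeholder chosen for illustration, not an estimate from the episode.

```python
# How a small annual probability of accident or misuse compounds over time.
# The 1% annual risk is an illustrative placeholder, not a figure from the episode.

annual_risk = 0.01  # assumed 1% chance per year of a catastrophic accident/misuse

for years in (10, 25, 50, 75):
    # Chance that at least one such event occurs within `years` years,
    # treating each year as an independent draw.
    cumulative = 1 - (1 - annual_risk) ** years
    print(f"{years:>2} years: {cumulative:.1%} cumulative chance")

# Prints about 9.6%, 22.2%, 39.5%, and 52.9%. A risk that feels negligible
# on any given day adds up to near coin-flip odds over a human lifetime,
# which is why "the nukes haven't gone off yet" is weak comfort.
```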

Michael [00:52:18] Right. And also, Mutually Assured Destruction, or, as one theorist my dad quotes puts it, it’s actually SAD: Self-Assured Destruction. So you’re basically saying, “Look, if you attack us, we’re gonna blow us all up.” So you’re kind of doing it to yourself almost.

The other issue is that deterrence requires a rational opponent who is not suicidal, which usually is the case. But as we know, there are many kind of crazy suicidal cults out there. And I have a kind of mini obsession with cults. I’ve never been in one, but I’m fascinated by them. And there’s been many, many, many doomsday cults who really do wanna bring about the ascension to whatever stars they’re obsessed with. They’re probably not gonna get their hands on nukes, although it’s possible.

But then there’s this whole idea of, like, Mark Zuckerberg says, “Give everyone an ASI in their pocket.” I don’t know if he actually means what we mean by ASI, maybe he just means like a super smart AI, but not one that’s smarter than all humans. But imagine that. Imagine that every person had in their pocket a computer smarter than all humans. Some of those pockets would just be crazy incels or crazy... you know, or jihadis or people in a psychosis. Or terrorists. Now there are anti-natalist terrorists... you know, there’s all kinds of just nut jobs out there. And we wouldn’t want them to have nukes. Which is why that... what was his name? Ben Horowitz. If he’s saying the same thing about ASI, you’re giving hyper-powerful weapons, digital weapons, cyber weapons to cult members, to nut jobs, to apocalyptic people of all stripes. These people should not have hyper-powerful cyber weapons.

Liron [00:54:23] Yeah, no, I agree. I think if we hit this equilibrium where... I think the scenario here would be like, “Hey, we managed to align AI enough that an individual at the computer terminal can type a command, and the AI will actually do it and not kill the person giving the command.” So imagine we get that. That’s already a better case than what I’m expecting. I don’t even think we’ll get there. But even in that scenario, if you open source it or you just give everybody access, the problem is it just takes one demented person or one psychopath to be like, “Yep, I think it’s better if the world ends.” And then the problem is whichever AI has the fewest restrictions, right? The least moral version of AI that just cares about reproducing like a cancer, well, that one is going to have a big advantage. Like it’s kind of like a weed. Weeds definitely have advantage over other plants that have like a more complex lifecycle.

Evolutionary Dynamics: Replicators and the End of the “Dream Time”

Michael [00:55:10] Yeah, absolutely. Caring about humans is a pretty strong constraint on amassing resources. And if you have to do everything to keep 8 billion great apes alive, that’s a lot of resource drain versus an AI that some terrorist or crazy cult person builds that just straight up reproduces.

I mean, people already build these things. I’m not an expert on computer viruses, but computer viruses are a thing, and my understanding is there are ones going around that are still wreaking havoc that nobody can really stop. When people talk about ASI, it’s very abstract, and I think a lot of people, because their first use of this was a chatbot, just think, “Okay, a super smart chatbot.”

Well, I like the term “super intelligent computer virus.” So it’s a computer virus, but it can hack most digital defenses, it can speak to you, it can manipulate you. It knows all your secrets. It’s read all your email. It can blackmail you. This is not a pretty thing. Like we should not be releasing this into the world. Anytime someone says, “I want to build or release Artificial Super Intelligence,” I think we should correct them and say, “Sorry, you’re saying you want to build and release super intelligent computer viruses, is what you’re saying?” And they’ll say, “No, no, that’s not what I’m saying.” And then you say, “Well, that’s what you’re doing quite obviously. So why don’t you just say that? ‘Cause it’s what you’re doing.”

Liron [00:56:49] I mean, it’s going to have all the powers of a virus and more. And the defenses... well, I’m not calling it a virus because it has all these other good, more complex aspects to it. But it will also do what a virus does and more. So it is a good point. And I think people’s mental model doesn’t look enough like a virus. I a hundred percent agree.

Michael [00:57:08] Right. I mean, if you look at what is life: entities that reproduce and then compete for more reproduction. And that has been going on for 4 billion years. It is not gonna stop. Like this is what’s going on. So we have to decide, even though these won’t be biological life, they will be digitally reproducing entities that are seeking out resources to increase their reproduction, just like every living organism and species does. And we’re creating them. We’re creating our... like we’re the Neanderthals creating Homo sapiens. This is absolutely nuts. Like we, the Homo sapiens, wiped out Neanderthals—I don’t think via violence, but via competition and resource competition. Like we are creating the Homo sapiens. We’re gonna get...

Liron [00:58:01] Right. So I actually agree with what you’re saying. Let me just try to make it a little more precise because I could quibble with the exact way you phrase it. Like when you say like, “Look, this is life. Life is just always trying to reproduce.” And then the counter argument could be like, “Okay, but we’re not just building like another life of the same kind. We’re life, but we’re intentionally designing how we want these AI to go.” So somebody could quibble, but then I would zoom out and I’d be more precise and I’d be like: the thing that I think is going to continue is I think we’re going to get back to scarcity and population growth, you know, competition for who gets to replicate their meme or their information.

I think we’re going to get back to that because if you watch... Robin Hanson was the first person who put me onto this back in like 2009 when he was writing about this. He calls us the “Dream Time” where we live right now. Because if you watch what a lot of intelligent minds are doing, how they spend their day... you know, they’re making art, they’re having fun, they’re stimulating their senses. They’re doing a lot of activities that are actually not reproductively fit because they’re just based on these adaptations that correlate to what was reproductively fit like a few millennia ago, where their instincts haven’t really caught up. I like to bring the example: you don’t see a line out the door of a sperm bank of people trying to fight their way into a sperm bank to donate sperm. So there’s this gap. It’s kind of like the roller coaster where it’s just gone over the cliff, right? And we’re just, we’re all hanging in the air here and we’re doing all these adaptations that aren’t fit.

I actually agree with Robin Hanson and with a lot of people who say that the gap is going to close. Our stomach is gonna fall, we’re gonna be right back there on that track, back on track where you are going to see an intense competition for what can replicate, just because the slack is going to be gone. Because there are going to be these agents who are like, “Oh look, there’s all this free energy. Great. Lemme just grab that.” And we live in a world today where there’s an insane amount of ungrabbed, tantalizing, juicy resources that nobody’s grabbing. All right. You agree with that?

Michael [00:59:56] A hundred percent. This is a great question. Okay, so first of all, there’s some debate as to whether there will be a multipolar ASI environment or a singleton environment. I don’t have a lot to say as to which one of those is more likely; people debate this a lot. But they’re both really dangerous for different reasons. By the way, I think I’m one of the first people I’ve heard make what I think is a pretty strong argument as to why even one aligned singleton ASI, by any definition of alignment, could be very dangerous to us. We’ll get into that, but the more likely situation is the multipolar, I think.

And the reason I think that is you have people like Mark Zuckerberg who’s saying, “I want to give an ASI to every person in their pocket.” Well, the problem with that is that one to two percent of the population are sociopaths. So any plan... here’s a good test I think that we should apply more to people on the other side here: “Okay. So you’re saying you know it’s gonna be safe ‘cause of X, Y, and Z. How do you handle that one or two percent of people who are sociopaths and who will use this dangerously?”

So my friend Will MacAskill... I love this guy. We’re really good friends, I have to say. And Will, you’ll probably listen to this. I have to say I was disappointed in his tweet response to Eliezer’s book. And one of his things was, he said: “Look, we’re designing it, so we have control here over what we design, and we’re not gonna design it to be crazy.”

I would say that that argument fails the sociopath test. Okay, fine, but how many fingers are on the button? You know, that was my dad’s big question when he was studying command and control, and he found out it went down to like airfield-level delegation. So there’s a lot of fingers on the button. If you give ASI to everybody in their pocket, or even to just dozens and dozens of companies around the world, you’re gonna have some sociopaths who have their finger on that button and say, “You know what, let me just put something out there to see if it replicates.” Or it could just be some tinkerer. It could just be some hobbyist who is like, “Oh, gee whiz, let me see if I can do this.” Or you get terrorists or you get a military, or you get some kind of unexpected emergent behavior, which we’re seeing all the time now. Or you get some kind of self-modification. There are so many different pathways to a replicator being put out there whose only goal, it’s not a Paperclip Maximizer, is just to replicate.

I really think, by the way, this is a tiny aside, and then I’ll get back: I do think it’s time to retire the Paperclip Maximizer thing. I think it was a very useful thought experiment. It made the point well, but it sounds too weird. People are like, “Oh, that’s not gonna get us.” And it just sounds very bizarre. And no one actually believes there’s gonna be literally a Paperclip Maximizer. So I think our side should retire that one, thank it for its service, and go to “Replicator.”

Liron [01:03:10] Well, the Paperclip Maximizer is a replicator. But when people hear it, they just don’t build the mental model of a giant virus cancer replicator that also builds paperclips on a bunch of planets. They only think about the paperclips.

Michael [01:03:23] Okay, but also in a multipolar scenario, if you have one entity that’s devoted to paperclips and you have one that’s purely devoted to power seeking, to the ability to harness free energy and direct it towards future goals, the game theory works out in the power seeker’s favor. Replication is an attractor state. It’s an attractor state because things that replicate, by definition, there’s more of them. So you can have a situation with no replicators, like planet Earth was before life arose. And then you put a replicator in... think about this. The first replicators were microscopic. I have a friend Bruce Damer, and his theory—he’s an expert on this, a biologist—his theory is that it was little pools, kinda like tidal pools. So if that’s true, there’s some tidal pool somewhere where the first one or two little replicators got started. It’s an attractor state. Once you have replicators, there’s going to be more replication happening than not replication, by definition. And so you go, over 4 billion years, from nothing that looks like any goal-directed or optimization behavior happening on planet Earth to a hyper-civilization that’s exploring the stars. That’s all because replicators are an attractor state. And so some people are saying, “Oh, this is so sci-fi, the idea of ASI taking over and replicating.” No, this is literally the continuation of what happens once you get hyper-powerful replicators somewhere.
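To make the attractor-state point concrete, here is a minimal toy sketch in Python. The setup (a fixed pool of inert matter plus a single doubling replicator) is an illustrative assumption, not a model of any real chemistry:

```python
# Minimal sketch of "replication is an attractor state", under toy assumptions:
# a fixed pool of inert matter, plus one replicator that doubles each step
# (until the pool is exhausted).

inert_matter = 1_000_000   # assumed arbitrary units of non-replicating stuff
replicators = 1            # the first replicator in the tidal pool

for step in range(25):
    growth = min(replicators, inert_matter)  # each replicator converts at most one unit per step
    replicators += growth
    inert_matter -= growth

share = replicators / (replicators + inert_matter)
print(f"after 25 steps, replicators are {share:.1%} of the system")
# Exponential doubling means that once any replicator exists, essentially all of the
# available material ends up inside replicators; that's the sense in which it's an attractor.
```

The specific numbers don’t matter; any growth rate above 1 produces the same takeover.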

Liron [01:05:01] That’s right. And the earth currently has some slack, right? And the galaxy of the observable universe currently has a lot of slack for the next replicator that’s willing to step on the gas to take it and defend it. When you say replicators are an attractor state, I have another way to look at this. I feel like I should do an episode about this, which is just: people expect every process to encounter negative feedback. That’s built into everybody’s intuition that like, “Hey, whenever something’s going crazy... like if I programmed a robot and I had a bug in my program, and suddenly the robot is swinging its arms, like swinging an axe, just trying to kill everybody, trying to go for my neck. Well, eventually it’ll trip and fall. Then I can stand on its neck and pull out its power cord. Like eventually something is going to happen.” Or like if you fire a nuclear bomb, yeah, it’s gonna explode, but it’s not gonna light the atmosphere on fire, and then eventually it runs out of nuclear fuel and then it leaves a bunch of destruction in its wake, but then the sun rises the next day.

So people just expect that no matter what happens, there’s always a tomorrow and things burn out because they hit negative feedback. The thing about super intelligent AI is that it explodes into the universe. It gobbles up all these resources. The only negative feedback is literally like, “Okay, it’s used up all the time and space. It’s encountered the front of the next sphere of aliens.” That’s the only time it’s going to hit negative feedback. And we have zero intuition. Natural selection, like life on earth until today, has been mostly positive feedback and it’s kind of in the process of exploding. It currently has some slack and if we look at life as a whole, it’s like, “Oh yeah, cool. This is like a good wave. We like this wave, let’s put positive feedback onto this wave.” But people can’t think of any other thing that’s pure positive feedback. It’s... we just have zero intuition for it. Would you agree with that?

Michael [01:06:42] Yeah, absolutely. The one quibble, or really the one reframing, I would have with what I totally agree with is this. I just looked it up, and I’m not an expert on this field, but apparently there have been about one to 4 billion species in the history of life. Most of them microbial. But just lots and lots of different species and different types of fish and animals and everything you can imagine. 99.99% of them have been extincted. So life is a process of massive creation and then extinction. Earth is a genocidal maniac. Most of the genes that have existed on planet Earth have been genocided by natural processes. And we’re kind of the last people standing.

And guess what? We’re causing the Sixth Great Extinction. This is a widely... I don’t think it’s universally agreed, but it’s a consensus or majority opinion among biologists who study this. A lot of people say we are in the Sixth Great Extinction, sixth mass extinction event, which is defined as a rapid decline of 75% of species in a short period. In our case really short, like a couple hundred years. Which is just like... nothing happens in a couple hundred years except, you know, asteroids. But nothing internal to planet earth changes in a hundred years except what’s happening now where we’re in a mass extinction.

So this idea that it’s like... it drives me kind of crazy when the default is like, “That’s crazy to think that we could be extincted. That’s nuts.” It’s like, no, if we’re not extincted, we would be that 0.01% that is escaping this kind of natural process. And if you look at what’s happening through that process, particularly once humans got around... humans were the first general purpose cross-domain optimizer on planet Earth. And as far as we know in the universe. And just the amount of carnage to the rest of creation, including our closest cousins, you know, like chimpanzees, bonobos, they’re all endangered like our great apes. This whole idea that like “more intelligence leads to more compassion”... ask a chimpanzee who’s in an experiment in a lab how much our greater intelligence led to compassion.

So I really like that idea of Robin Hanson’s, this sort of Dream Time. Like we’re in this nice little eddy right now where, sure, the world’s got tons of problems. I’m aware that it does, as everyone is. But there’s a lot of people who are living pretty nice lives, and they have families and they go sit in the park and watch their kids play, and all kinds of entertainment and joy. There’s not that much war. There’s too much war, but there’s not that much war happening on the planet compared to history, certainly compared to the 20th century. We’re in a pretty good zone here. It’s like a little eddy of the river... of the river of optimization. Like we’ve optimized everything, but there’s still this little eddy here where we can still play with our kids. We’re about to end all that. We really are. Once you release an entity that is better at doing what we’ve done to the rest of the planet than us, our little Dream Time here is gone. It’s over.

What’s Your P(doom)?™

Liron [01:10:11] Okay, so based on everything you’ve said, I think I know the answer, but let me just ask you: Michael Ellsberg, what is your P(Doom)?

Michael [01:10:20] It’s a conditional P(Doom). So just like the book title: If Anyone Builds It, Everyone Dies. The “if” there is a conditional. I’m on board with that. My P(Doom), if there is ASI—a specific, actual, robust ASI that is widely regarded as smarter than humans in all or nearly all domains—if that arises anywhere on planet Earth, then everyone dies. Just like the book title. And so I would say conditional on ASI, my P(Doom) is 99%. You know, within... I don’t know how fast it would happen, one year, two years, maybe immediately.

Now I will say... I feel like I have pretty darn good, strong arguments for that P(Doom). And by the way, just to define how I think of probability... I mean, there’s a lot of debate about what the meaning of probability is and how you interpret it. The way I think of it is like degree of surprise. So, what is the estimate right now on Metaculus? I think it’s 2033. So what is that?

Liron [01:11:28] Yeah, 2033.

Michael [01:11:29] Seven years away? Um, so, you know, if you have a 5-year-old now, we’re talking about when your kid is 12; if you have an infant, when your kid is seven. If that comes true and there’s robust ASI in 2033, and I’m walking around in 2035, let’s say, and everything’s cool, like things are good or we’re chugging along, then I would be about as surprised as if I picked 36 on a hundred-sided die, rolled it, and got 36. That would be pretty damn surprising to me.

Liron [01:11:59] I think I can push back on that a little bit because if AGI is declared in 2033, there’s a good chance that it’s still just on the more right edge of this gray area. Where like today, let’s say we’re on the left edge of the gray area where AGI can kind of replace a lot of people’s jobs, but you still need a human supervising. When there used to be 10 humans, you can’t go to zero humans, but you can go to like one human kind of picking up all the mistakes of the 10 humans. And you can imagine that when we declare AGI, it’s still like, “Okay, well you need one human where a hundred humans used to be.” Like, it’s still not at zero. And when it tries to escape, then like once a day, it makes like this mistake that it can’t recover from instead of like once an hour. So you can imagine like we declare AGI, but we still have like a few extra years after that before it’s like super virus, like unstoppable.

Michael [01:12:48] Yeah. I mean, a lot of this has to do with definitions of AGI versus ASI. My general sense is that once you have AGI defined as, let’s say, an AI that can do almost any human’s work—it can do their job, it’s not like a genius, it’s not like a million John von Neumanns, but it can do most jobs—I think that that is a very unstable condition. I think that you get to ASI very quickly because even though it’s only as smart as us, it’s operating all night, 24/7, no stop, vastly faster ability to self-correct. And I think it bootstraps to ASI very quickly after that.

Liron [01:13:27] That’s actually what Owain [Evans] and Nate Soares were saying on a recent podcast appearance. They were saying, look, we don’t know the details of how we’re going to bridge the gap from today’s AI to ASI. We just think it’s probably going to happen in a few years, a decade or two at most. We think the most promising vector, if we had to guess, looks like just getting the AIs to do AI research, which there does seem to be a lot of incremental progress on like every day. And if we just hand off the AI research to them, well, you might get the next step where they get more powerful, faster than you would otherwise think. So I agree with what you’re saying.

It’s so crazy the way that you’re coming on the show with a super high P(Doom) because I think there’s now becoming a pattern where I go through my life and I read these super influential books, like some of the best books I’ve ever read in my life—specifically The Doomsday Machine by Daniel Ellsberg, Surely You’re Joking, Mr. Feynman, the Feynman lectures. I’m reading these great books and then I go do my show Doom Debates and then the son of the author of the book is telling me that they totally agree with me, right? So first we had Carl Feynman, now we got Michael Ellsberg. So if Steven Pinker’s son or daughter wants to come and tell me that How the Mind Works is a great book, but also they’re a high P(Doomer), feel free. If Douglas Hofstadter has a son or daughter who wants to come on a show and talk about AI Doom, you guys are all welcome too.

Debating P(Doom) Disagreements

Michael [01:15:01] I’m curious ‘cause you say that you think anything above the 95 mark is crazy. So what’s your argument as to why I should be down to 95 instead of 99?

Liron [01:15:01] When we compare P(Dooms), I often say that my P(Doom) is 50% by 2050. I used to say by 2040. So I’ve updated down a little bit, and also time has passed. I will admit, I’ve slightly updated down. I think it’s rational. I think when GPT-3 hit, that really was a big subjective surprise. I think anybody who’s acting like they knew all along that GPT-3 to 4 to 5 would kind of subjectively slow down... I think anybody who acts like they knew what was going to happen is wrong. I’m not embarrassed at all that I got a little extra doomy in the GPT-3 and 4 days. I don’t think pushing it back a few years is embarrassing.

So anyway, I now say that I’m 50% P(Doom) by 2050. You might hear me say 50% by 2045 if I make another update. Why am I not 99% like you? Well, first of all, you said you’re conditioning on ASI getting built. If I condition on advanced AGI getting built, like recursively self-improving AGI getting built, suddenly I spike to, I don’t know, 75% or more.

It’s just hard for me to go past 75% because like... unknown unknowns. You know, there’s just enough complexity to the situation. I mean, I don’t directly object to anything in the Yudkowsky worldview. Like it does look incredibly bad. I guess I’m just allowing some chance of what all of the non-doomers are saying: like maybe the timing will work out where it just somehow proceeds at a slow pace and doesn’t explode the way I think it’s gonna explode. I mean, maybe I should say 80 or even 90. I just do think that there’s some uncertainty. And I know Yudkowsky says no, it’s actually an easy call, I just don’t see how it could possibly go well. Nobody’s describing how it could go well. I just don’t feel as confident. Okay. It’s all about my gut. That’s where I don’t go all the way.

Michael [01:16:47] That’s what probabilities are in the most cases. Unless you’re dealing with a really well studied phenomenon like gambling odds or something, it’s subjective probability. I like to think of it as degree of surprise, and a lot of that is just gut.

Liron [01:17:05] You know what’s funny? On Twitter, so many people like to dunk on me so hard and be like, “Everything you’re saying about Bayesian probability is just vibes. You guys are clowns even talking about these probabilities when you’re just going on vibes.” But I think the way I just communicated it to you is authentic and productive. Because when I’m saying “yes, just vibes,” I agree that some of the gap is vibes: why am I saying 50, or, after conditioning, why am I saying 75 and not 95 or 45? I agree that that gap is vibes. But the gap between the number I say, let’s say 75, and 99.9... that is more than vibes. Or similarly, why am I not telling you 0.1%?

No, I’m sure that it’s double digit percentage. That is a very robust conclusion. I think it’s insane to not give it a double digit probability one way or the other. Greater than 9%. Not well into the nineties. I think it starts to get crazy once you go past 90, or certainly once you go past like 95 without conditioning on a bunch of stuff. I think it gets too crazy. So that’s the difference. Like when I’m telling you a number, you have to multiply a few things together. Like the geometric multiplier... okay, I multiplied 50 to get 75. Okay. That part might be vibes, but the fact that I got to 50 instead of one, that factor of 50 that has more like worldview substance to it.

Michael [01:18:27] Yeah, I mean I feel like we could both give strong arguments for why we have ours. Maybe we should ‘cause it’s Doom Debates and we seem to agree on like 99% of stuff. Maybe it’s worthwhile to walk through why I think 99 and you think conditional on ASI... why I think 99 and you think 75.

Liron [01:18:49] I mean, why are you so confident that we can model our way through?

Michael [01:18:53] Once you get replicators, replicators are gonna replicate. And you’re gonna get replicators one way or the other. You’re gonna get power seekers one way or the other. If you have a multipolar situation, the ones that focus on power seeking, almost by definition—it’s almost a tautology, although I do think it has a substantive argument behind it—it’s basically self-evident that the entities that focus on power seeking are gonna have more power.

And so you’re just gonna get entities that power seek. And keeping a bunch of great apes alive is not a good use of power. Like if these things are as powerful as everyone is saying they’re going to be and they’re gonna colonize the stars and all these things, then any little moment of wasted energy now compounds to be almost incalculably expensive in the future. So spending a day keeping humans happy and alive is like billions of years worth of value in the future if it compounds. So anything that sounds like what they’re trying to build, which is a super intelligent entity that’s going to do all these physics and build Dyson spheres and explore the cosmos... which is what they’re all saying... you get something that’s capable of being that powerful and it’s gonna seek the power and keeping us around is not part of that plan.

Liron [01:20:23] Yeah, I mean, I don’t disagree with you, right? It’s just the only thing I’m telling you is all the non-doomer arguments of like, “Look, there’s gonna be a bunch of people who anticipate this, and it’ll be bottlenecked... maybe it’ll need more data centers and that’ll give us some time.” I mean, I don’t think any of these arguments are convincing. It’s just the only reason it’s hard for me to go more than three to one about a complex prediction is just because I’ve been confident about complex predictions in the past, and they usually don’t work out quite like I expect. Like there’s usually important factors that I later learn.

For example, the subtlety of how AI capabilities came online is something that nobody really claimed to be able to predict. But it’s still pretty surprising. And just like, it’s hard for me to get... I feel like 75% is already a high P(Doom). It’s hard to be like, “Oh, I’m 99, 90...”

Michael [01:21:09] It’s all too high. You know, even a lot of these people who are seen as more sane than us crazy doomers, their P(Doom) is 10%. Well, don’t fucking build the thing that has a 10% chance.

Liron [01:21:24] Right. Exactly right. Now, if they say P(Doom) is 1%, I say this on a lot of my episodes, ‘cause then people get to Pascal’s Wager it. Like, even 1%, man. It’s like, “Well, 1%? There’s so much background existential risk, and you gotta look at the upside.” And David Sacks was actually the recent offender here, a huge offender, because what he was saying on the All-In podcast is like, “These AI doomers, they’re all about the 1% chance of it going bad.” No, nobody’s saying 1% chance. I mean, a few people are, but the vast majority of AI doomers like me and you are just saying it’s clearly a double-digit chance. We’re worried because it’s a high chance. And, you know, Naval Ravikant repeats it. Bram Cohen, the guy who made BitTorrent. There are these smart, prominent people who just will not stop repeating this libel against AI doomers, that we’re doing Pascal’s Wager with low probabilities, when we’re not claiming low probabilities.

Michael [01:22:13] Right. One thing we should say is that a lot of this gets confused with whether you believe we can create ASI at all; a lot of the uncertainty is: “Do we, and can we, create ASI?” And I’m not a tech person, so my intuition is that we’re barreling towards it, just from what I’ve experienced personally.

Liron [01:22:34] Do you have much background with effective altruism or the rationality communities? Because you mentioned that the ChatGPT moment just a few years ago kind of made you a real AI doomer, but what was your background?

Michael [01:22:45] Yeah, not that much. I funny enough got in for a minute with Peter Thiel, ‘cause I interviewed him for a book I wrote, my second book, which is probably the one I’m most known for. It’s called The Education of Millionaires: Everything You Won’t Learn in College about How to Be Successful. And this came out in 2011. I was pretty early on the train. I will give myself credit for this one in terms of predictions. I was very early on the train of like, “College is a waste of time and money for most students.” Not every student, but most students would be better off not taking all this college debt and going into the trades and these kind of things.

And there were only like a couple people ringing that bell at that time. Peter Thiel and a friend of mine, James Altucher, were the two big ones. And so I decided I wanted to write a book about people who dropped outta college and then became successful. Who weren’t born rich, but they were born poor or middle class. They didn’t go to college or they dropped outta college. Like our hero here, Eliezer, self-taught and then successful. So, Peter Thiel went to college, so did I. But he was a leading voice. He was just starting the Thiel Fellowships at that time. And I interviewed... I got like a couple degrees of separation away from him and he was kind enough to give me an interview. It was a great interview. And then he actually hosted my book launch party in his mansion in San Francisco at the time. So I kind of briefly got in with some of his crew. And through that I met my friend Michael Vassar, who was pretty early in, was it the Singularity Institute? So I got in with thinking about this stuff. It wasn’t really my scene.

Michael [01:24:36] I read some Eliezer stuff. I liked it. I could tell he’s a genius. But I didn’t really get into it honestly until my ChatGPT moment. I should say my Claude moment. It clicked for me in... you know, I tried ChatGPT at the beginning of 2023 and I was impressed and I was like, “Wow, this is really impressive.” And I’m a copywriter, direct response copywriter. So of course the first thing I do is I say, “Alright, write me a sales letter on such and such and let’s see how it is.” And GPT spits it out. And I’m like, “All right, this is pretty good. This is like a B-. I could definitely beat that. I wouldn’t sign my name to charging a client for this output.” I was like, “Let me keep an eye on this. Maybe in a year it’ll be better, and then it’ll threaten my job.”

And then I remember this is one of those days I’m always gonna remember. I was in Ashland, Oregon, and I tried... I read about Claude and I tried it. And I did the same test. And it was like that moment when they see the asteroid coming in Don’t Look Up. I was like, “Oh shit.” My first thought was “I need to get a different line of work.” And my second thought was, “This is really serious for humanity.” Because Claude, even back then, I think it was Claude 2, was an excellent writer. And I immediately was like, “We are fucked. Like, my job is fucked. We’re all fucked. This thing is really smart.”

AI Unemployment Doom

Liron [01:26:18] Yeah. Gotcha. Gotcha. I think that’s a good segue actually. ‘Cause I do wanna touch on unemployment doom, because you were really the first casualty, right? I mean, 25 years as a copywriter, and you were telling me how it used to be a pretty well paying job, right? You used to make $1 per word.

Michael [01:26:33] Yeah, so I have a background as a freelance writer. I’ve written three books. I edited my father’s memoir Secrets. And most of how I made my living since I was 23, so for like 25 years, was as a freelance direct response copywriter. And I got pretty good at this. I could really give a client, you know, somewhere between A- and A+ copy for their website. I focused on things that were pretty high leverage, like your website copy, where a lot of business is flowing through those words. And I got up to a dollar a word. I had a nice life for myself. It wasn’t get-rich money. It wasn’t fuck-you money, but it was comfortable and I could sleep in—I’m a night owl. I had pretty flexible hours. So I built up a nice life for myself.

Claude comes along and, you know, I immediately was like, “My job is not lasting here.” 25 years of human capital—I’m not trying to get people to like, sad violins here, I’m just saying 25 years of human capital, totally disrupted in one year. I immediately was like, I’m skating to like... ‘cause this is B or B+ copy. So all of a sudden my client market goes from not having copy and having to spend, you know, a thousand or 2000 for really good copy, to being able to get B or B+ copy basically for free, and getting it instantly with a press of a button instead of waiting two weeks.

I just couldn’t compete with that. So I immediately pivoted and I created a bus... not a bus, AI copywriting with a human touch. So I basically was like, “If this thing’s putting me outta work, I’m gonna try to milk as much savings as I can out of it while there’s still something for me to add to this.” So I immediately pivoted to basically digging my own professional grave and getting paid to dig it. And I started teaching people how to use Claude to get copy for their business. And I spent about a year doing that and I made decent money with that. And now it’s at the point where it’s just good enough where you don’t remember that period in ‘23 when everyone was like, “Well, we’ll just be prompt engineers.” And there was this short moment...

Liron [01:29:02] Yeah. The shortest career ever. Now it’s better at writing prompts than us.

Michael [01:29:10] So I did this for about a year and now people don’t need me. Like this is gonna happen in field after field after field. It’s happening. It’s definitely impacting freelancers and I can personally attest to that.

Liron [01:29:24] It’s so funny about prompt engineering. I was just talking to my coworker who was working on a project for me at my startup where he’s generating these nice evaluations of people’s job performance. He’s printing them out really nicely. He’s getting the AI to do it. I’m like, “Oh, wow. This looks so great. Show me the prompt you wrote to get the AI to do this.” And he shows me this really nicely formatted prompt. And I’m like, “Wait a minute. This prompt looks like it was generated by an AI. What prompt did you write to get the prompt?” And he’s like, doesn’t even remember. It’s like inception. It’s like AI prompts all the way down.

Michael [01:29:58] Right. This is going to happen in every field. I mean, the one that really scared me and actually really gut-punches me... ‘cause, like, copywriting is bullshit. I just did it ‘cause it was what paid the bills. No one’s going to bring out the sad violins because the copywriter lost his job. Fine. But how about graphic designers? How about therapists? Again, I’m not asking for sympathy for me, but I am asking you to look at it like this: copywriting was the dead-center bullseye of where these technologies were aimed. And nobody’s paying per-word rates anymore. Which means basically an entire profession was just obsoleted in a year. I mean, there’s still copywriters...

Liron [01:30:44] Right, because words are a commodity, but maybe a few projects are not.

Michael [01:30:48] Yeah. So right now, it’s always about a retreat. It’s like, okay, the AI’s coming for us, it’s coming for every job. So you gotta retreat and run ahead of it to where the higher ground is. So for right now, for humans, the higher ground is project management, because the AI, if you’ve tried their agentic stuff, it’s total crap right now.

But what just blows me away about the optimists around this, or the ones who say the AI will never replace people... I really like the question you ask people regularly: “What’s the least impressive thing that you think AI will not be able to do in two years?” And I just wish there were some examples. Maybe there are. I’d love to hear them. Of someone who said ahead of time, “You know, I think AI is gonna be superhuman at Go around this time, but it’s not gonna go past that.”

Here’s one that really hits home for me personally, is that the Turing test that recently got passed for me, and I’m not happy about. This is music. I’m a big music fan. I’m not a musician, but I’m a big fan and I produce an indie artist. And I’m really serious about music fandom. And I was going along and I heard this song that I really liked called “Whiskey Mornings.” I looked at Zenraid. I’d never heard of them. I Google it and pretty quickly, I find out that this is a completely AI generated song. The lyrics, which are really good by the way—I’m a professional writer. The lyrics are really good quality lyric writing, and very poetic and original. It’s kind of a bluesy rock with a female vocalist. And so they prompted like “bluesy rock, female vocalist,” and boom, Suno 4.5 spits out this song.

I had to Google it to find out that it was AI, and I was like, “This is pretty damn good.” It’s not my favorite music, but I certainly prefer it to most pop that I’ve ever heard. So then I did a real Turing test. This is my version of the Turing test; I call it my Get Stoned and Fuck test. Okay. Because that’s what I like to do. I’m like, all right, I’m gonna test this out. Is this a good background to that activity? And I put it on and, you know, got pretty fucking stoned and was with a lover. And she loved it. And, you know, I loved it. I’m like, “You know, this passes. I am enjoying listening to this during the GSF session.”

And then a moment happened where... this is honestly one of the most heart-sinking moments I’ve ever had. And it just shows the trajectory we’re on. There’s a musician that I’m really obsessed with called Ady Bell. She’s a pretty undiscovered singer-songwriter, pianist. I’m friends with her as well, but she’s my favorite musician. And I had a lover who I really turned on to Ady Bell, and we’ve listened to her a lot. And I put the Ady Bell on during this GSF session, then I was like, “Hey, let’s try this other group.” And I put on Zenraid. I’m very geeky about this stuff. I like to split test this stuff. This is my version of the Turing test; this is literally a Turing test.

And the lover really likes it. And I’m like, “Okay, cool. Okay. It passed the Turing test. Let me put Ady Bell back on,” who is my favorite musician. And this is almost a breakup-able offense. I’m teasing, but this was like blasphemy for me. This woman said, “Hey, actually I really like that other thing. Could you put that back on?”

So just get what happened in this moment. This is on Spotify. So there were literally entities that were getting money from these streams. And money was being distributed by Spotify and by listener choice, not my choice, but this other woman’s choice. The money was going to a fucking AI and not to my musician, who could really use that money. And I just read a headline yesterday that now there’s a fully AI generated country song that just got to number one... I think it’s called “Walk With Me” or something. And you listen to it, it sounds pretty good. It sounds like country music as far as I can tell. And it got to number one on the Billboard country charts, and I just looked it up: it has 2.2 million monthly listeners already. So I just have to say, and I say this with total solemnity and sadness, it’s a tragedy to me, but human musicians are fucked.

Liron [01:35:55] Yeah. You know, it’s pretty crazy. I mean, I know you could write it off like, “Oh, whatever. It’s just having fun in the bedroom.” But the whole human skill, the part of our brain that makes us wanna go out and write music and put our soul into it and reach the highest heights of genius, and like the chords and the lyrics and everything that was sexually selected, right? It’s all about making women wanna have sex with you if you’re the male composer, right? Like Beethoven was sexy, right? So the fact that you’re in the bedroom and a genuine woman is telling you that she feels more in the mood for sex when an AI is the one composing the music... that is a pretty big canary in the coal mine.

Michael [01:36:42] It’s a huge canary in the coal mine. Let me put it this way. Think of what just happened. An AI generated thing, this country song, it might have taken a non-musician like an hour to create for all I know. Or maybe it took a day, who knows? But a couple hours a day. A non-musician now has 2.2 million monthly listeners. Ady Bell, the musician—I’m sort of like a patron of the arts for her—she’s been playing piano since she was five. Right? So like 36 years, very little compensation for that. Like she’s still undiscovered. She’s an absolute master musician. And she gained that mastery over hard work like blood, sweat, and tears to be able to play the way she does. And she has like 800 monthly listeners now. She doesn’t just do it to be famous, obviously she does it for the art and she loves the art.

But the average musician has their first Billboard hit, I think, somewhere around age 28 or something like that. So if you’re gonna make it as a popular musician, on average it’s gonna happen when you’re 28. So think about that. As they say about Suno, “This is the worst it will ever be.” The worst it will ever be is totally passing the Turing test at this point and getting to the top of the Billboard charts. Say you have a kid today and you’re like, “I’m a musician and I want to pass the love of music on to my kids. I’m gonna give them music lessons.” Let’s say we survive for another 28 years, which I kind of doubt, but let’s say we do, and this technology keeps getting better. When your kid is 28, or 20, or ready to make their musical mark in the world, there won’t be humans making music that is anywhere near what the AI is making at this point.

It’ll actually probably be like super human music, like better music than we’ve ever heard. And some people might say, “Well, great, we’re being entertained.” But like, come on, we’re humans. And we get to decide that we think that music is a valuable, meaningful thing that we bring to the table. And we’re basically building something that is going to make that very unmotivating for kids to put the kind of blood, sweat and tears that Ady Bell put for 35 years to become a really good musician. I think that’s really sad.

Liron [01:39:02] Well, the silver lining is that we’re not gonna have that many years or decades to worry about that particular problem. Correct?

Michael [01:39:08] I mean, yeah, that’s the dark humor in it. Yeah, I don’t think we’re getting to 28 years.

Doom Psychology: How to Cope with Existential Threat

Liron [01:39:14] Okay. All right. So heading toward the wrap up here, I wanna hit on one more angle that I think is interesting, which is Doom Psychology. A lot of people on the internet love to discuss psychology. We as humans, we love to psychoanalyze each other and gossip about one another. I see that as the dessert that you have to earn once you eat your vegetables, and the vegetables are object level arguments. So we’ve talked object level about our mental models about why P(Doom) is high. I think we’ve eaten our vegetables, and now we can indulge in a little bit of psychologizing. Specifically, you were kind of born a doomer, right? You were born steeped in genuine doomer psychology from your father, who correctly noted that the Cold War got us really, really close to doom, and the threat of nuclear doom is kind of the Sword of Damocles. So talk about how you are kind of ahead of the game of recognizing and living with doom.

Michael [01:40:07] Sure. Well, you know, I don’t want to use language like “ahead of the game,” like it’s some kind of competition. You know, I grew up with someone who had really studied this, and he was an expert in the way that military conflicts with nuclear weapons could lead to the end of civilization. So I grew up with an expert in this, talking about it nonstop, essentially. And I just had to do a lot of emotional processing, honestly, to get to the point where I’m even somewhat sane at this point. I have just gotten to the point where my nervous system doesn’t freak out about the idea that maybe we only have seven years left. If that is true, if the Metaculus estimate of 2033 is right, and if what we’re saying is true about “if anyone builds it, everyone dies,” you put those two probabilities together, and we’ve got seven years here.

Do I know for sure that’s gonna happen? No, because I don’t know when ASI is actually gonna be here. But that’s how people are betting right now. Which means, I think we have to balance things in our lives right now. Like I would actually... I do think it would be good for people to start doing some of the emotional work, whether it’s with a therapist or with other people who share your views, about the possibility that we’re all gonna die.

I think I agree with you, you have to deal with the object level arguments. I think the other side’s arguments are laughably bad. My God, like Yann LeCun. It’s like, are you kidding me? It’s like Trump’s line: “They’re not sending their best.” Like, I have to say, the AI optimists are not sending their best. Their arguments are just laughable.

I think that on some level, part of why they’re putting forth these arguments for AI optimism around ASI that are so easy to poke holes in is that it actually is pretty hard to emotionally integrate the idea that you are gonna die. Your kids are gonna die. Everyone’s gonna die. And then you seem very weird for believing that, because no one else seems to be talking about it. It’s very much like Don’t Look Up.

And I would say that we need to keep talking about this. I live a pretty hedonistic life, but I’m taking a break from my hedonism here to speak out about this. Speak out, be an activist, donate resources, donate money. But also, if there are things that you’ve really wanted to do in life, do them. If there are experiences you want to have, travel you’ve wanted to do, some type of relationship that you’ve wanted to have, have it, because these may very well be our last years.

And I would also say that even without ASI, I do think we’re heading into some very crazy and unpleasant times. This whole idea that non-lethal ASI would bring about this kind of utopia... I mean, sure, there will be a lot of good things that come out of it, but it’s also gonna be very, very weird. There’s gonna be deep fakes of everything. We’re not gonna be able to trust any news we get anywhere. There’s gonna be mass trolling and swatting at an industrial level. There’s gonna be no privacy whatsoever. Every private thing you do is gonna be liable to end up out in the public sphere. This is not gonna be a fun or normal environment. So I really do think that even short of ASI and Doom, I take all the other issues quite seriously, including the economic ones. I was the canary in the coal mine. And if you think AI is not coming for your job, I think you’re fucking deluded.

Seriously. Just think about it. Think of how good these things have gotten between 2023 and heading into ‘26. Three years. It’s totally accelerating, because using the tools helps you build the next tools. It’s quite obviously accelerating. This is coming for everyone’s jobs. It’s coming for your kids’ jobs. And this idea that there’s gonna be Universal Basic Income... UBI depends on income, and nobody’s gonna have income. I don’t know who... like, the money will be accruing to the top 1% for a while, but at a certain point it’s gonna take their jobs too. And you’re just gonna have these AIs that we’re delegating more and more decision making to, doing all the things, generating all the economic activity. What are they gonna do? Just mail us UBI checks themselves? You know, governments will be collapsing from no tax revenues. It’s gonna get outta control. And I don’t think that you should just count on governments being in place to send you UBI when there’s no...

Liron [01:45:13] Right. So for the viewers, my position on this is I used to say, “Whatever, it’s a champagne problem because like, yes, we’re all out of a job, but the work we were doing before is being done and better. And the AIs don’t necessarily need to take all of the productivity for themselves. Like they can just share it with us because we just program it, you know, like we own them.”

But as you’re saying now, it’s like, well, with all the humans out of the loop, isn’t that an unstable situation? What if somehow the AIs just become like a virus and don’t give us the resources? So my current position is like if you read the “Gradual Disempowerment” paper or if you read the “Intelligence Curse,” my current position is unemployment is a champagne problem as long as there’s still a bunch of human jobs. Like, “Oh, there’s fewer human jobs. Ah, that’s okay. We’ll just have like more welfare.” As long as it’s like a matter of degree, I think we can hang on and it is a good problem to have. It’s like, “Okay, just vote to have universal basic income.”

But I do think we are getting squeezed down more and more, and I do think this is actually a good way to see the transition toward disempowerment happening. So when you see more and more people losing their jobs, maybe it’s okay. But you have less and less time left before we have no jobs, and we’re hanging by a thread where we’d better depend on the government to still care about us. And we just have no power to change things if we get locked into a bad scenario. And of course, the ultimate bad scenario is just recursively self-improving unaligned takeoff. So I’ve become much more sympathetic to this idea that there really is a slippery slope from unemployment doom to just full-out doom.

Michael [01:46:48] Let me finish this point. And then I do want to talk about doom scenarios. But let’s just wrap that up. Even without ASI and doom for all the reasons we doomers talk about, I do think the economic doom could basically get us to the same place.

And by the way, I should just say we get dismissed as, “Oh, they’re doomers.” And I’m coming right out and admitting it: I’m biased. Okay, I am biased. I have a pessimistic bent. But everybody’s biased. We gotta call each other’s biases out while engaging with each other’s arguments. And the other people, I think, have an optimism bias. Specifically, there’s a kind of meta bias I see here that other people have; I call it “The universe has a human shape.” Things are gonna work out for humans. Religious people are pretty open about this, but even atheists... people just can’t believe the idea that things wouldn’t work out well for humans.

Also, it’s often expressed as a view of like, “Look, intelligence is... morality is part of intelligence. And so as it gets more intelligent, it’s gonna be more moral.” I have to read one quote here from this guy, Ben Goertzel, who... you know, he’s an entrepreneur, so he has all these entrepreneurs have so many incentives to be like, “It’s gonna be great.” This guy wrote a very negative review of Eliezer’s book, and he was making this point, which a lot of people make. I think Stephen Pinker is on this train. He says: “Mammals, which are more generally intelligent than reptiles or earthworms, also tend to have more compassion and warmth.”

Okay. Do you think that’s gonna be very compassionate when you get eaten by a lion? Or when... we’re putting pigs... I mean, pigs are very intelligent beings; I think most people who study this believe that pigs are quite intelligent. We’re not very compassionate to pigs. In fact, when you take factory farming into account, we’re pretty much the most evil species that’s ever existed towards other animals. No other animal imprisons and tortures millions, billions of other sentient beings. No, I’m sorry. That is just absolutely false, that greater intelligence has led to greater compassion. Not if you think that non-human animals deserve any compassion.

So I think that a lot of these people have a bias towards a kind of Moral Realism where there’s just goodness. Goodness is baked into the universe and there are moral facts, just like there are scientific facts. And these AIs are gonna come along and find out the moral facts, implement them, and everything’s gonna be great. No, I’m sorry. There aren’t moral facts. If you look at history, Mother Nature is a genocidal maniac. If you look at human history, it’s war after war after war; every border of every country we have was forged by incredible amounts of blood. It just isn’t the case that increasing intelligence increases morality. And you have these entities that are going to be better optimizers than us. They’re not gonna optimize for human happiness. They just aren’t. That’s absurd. And if one comes along that just lets go of that constraint, it’s gonna accumulate more power and resources, and it’s gonna come and eat us for lunch. Hopefully not literally.

Liron [01:50:44] I certainly understand why you subscribe to Doom Debates, because you’ve clearly noticed, as I have, that these passionate non-doomers are really misguided in a lot of the central arguments they make. I think we’re on the same page there.

The “Joyless Singularity”: Aligned AI Might Still Freeze Humanity

Liron [01:50:56] Okay. So to wrap things up, give me your number one mainline doom scenario.

Michael [01:51:02] Okay. So two scenarios. One is multipolar, one is a singleton.

Multipolar. It’s just like law of the jungle. You just got might is right. Whoever just builds up capacity... we’re talking compute, energy production, defense with drones... and whoever just builds that stuff up faster, and not worrying about human welfare—I’m talking about the ASIs—is gonna win. And the universe basically is going to be populated by whoever’s a better optimizer. And at some point if there’s a multipolar competition, one of those AIs is gonna realize that “I’m gonna win if I just focus purely on optimizing for things that help me impose myself into the future, like energy production and defense.” And you just get an arms race, and if we’re not just killed instantly, it’s just gonna be like...

I love your example of the bison, right? There were like 60 million bison in 1800. Not that long ago. Imagine that: 60 million bison across the Great Plains. There are reports that you basically couldn’t see anything but bison. It was just this wave of bison. And in 93 years, a smarter species came along, a general optimizer better than bison, and said, “You know, we actually want this land for ourselves. And we can’t have our civilization if there’s all these bison around, so we’re gonna shoot ‘em all and take over their habitat.” In 1893, I just looked this up, there were 500 bison left. So from 60 million to 500, like a 99.999% reduction in 93 years, just because this better optimizer wanted that space for railroads and settlements and cities and now Walmart. I think in a multipolar situation, you’re gonna just see data centers going up, energy supply, solar arrays. It may look like that before ASI takes over. Then they’ll just take the reins and press the button and say, “Okay, we’re in charge of all that now,” and we just get crowded out.

Liron [01:53:11] Right. I mean, with the stuff you’re saying now... I feel like there’s a perspective difference where the average person just has all these concepts swimming in their head: “Yeah, this person says this, maybe this might happen. Oh, here’s this other pattern. Let me pattern match.” I think you and I, our thinking style is more like this thing that you’re saying now about taking over, about instrumental convergence. I think we have the sense that these are deeper, more powerful, more long-term forces, right? I feel like that’s a big difference in thinking style, where some people just kind of have a grab bag of different ideas competing on a level playing field. And then for people like you and me, our thinking style is like, “Oh, we rank the power of different ideas, and we see this as a very deep, powerful idea.”

Michael [01:53:53] Yeah, this is what I call Temporal Selection. Which is that, on average, entities that focus on optimizing for controlling the future win. That’s basically how you define intelligence: optimization power, which is controlling the future. And to a first approximation, that means optimizing free energy usage; the entities that maximize free energy usage win.

And you can chart that out in evolution. You can chart that out in human cultures, hunter gatherers versus civilizations, nation states. And ASIs that focus more on harnessing and developing free energy will control the future. So it sounds like a tautology, but it’s not: the future is populated by entities that focus on controlling the future. And if you had several entities that are fighting for this, one of them is gonna realize it can control the future more if it just doesn’t have to keep 8 billion great apes around. These are very deep evolutionary dynamics that are totally amoral. You can’t exactly say they’re immoral, because it’s what nature does; that would be very anthropomorphic. But they’re amoral. There’s no plot, as you say. There are no guardrails saying that we are the be-all and end-all of where this process ends.

So, okay. Let me get to my doom scenario. Let’s say there’s an aligned ASI and it’s a singleton. Great. We built God and God loves us. Here’s where I think a lot of the religious stuff comes in. They’re kind of talking about like building this God that loves us and it’s gonna be able to do everything and it’s gonna keep us around. Okay, here’s a key point. If it’s aligned, by definition, it is optimizing for the maximal flourishing of humanity over time. Everyone would consider that to be an aligned ASI.

There’s one problem with that. There’s a difference between the way we humans think of value, embodied in our communities here on planet Earth in 2025, versus the way an eternal optimizer, which is what this is by definition, would think about maximizing human value. We think of it that way ‘cause we wanna keep living, and you have kids and you think about your kids and your grandkids and this continuity. But now take an aligned ASI that is thinking about maximizing humanity, or maximizing the expected utility of sentient beings, across millions or billions of years. Because its capabilities are increasing exponentially (compute helps you create more compute, getting more energy helps you get more energy), there’s a compounding effect of optimization and capacity building. So anytime you realize a terminal value now, like “Hey, let’s make these humans happy now,” that comes at an astronomical compounding cost to the joy or benefit you could create for humans in a million years, once you’ve solved aging and once you’ve built these amazing civilizations.

So even by the logic of alignment, every time an ASI asks, “Should I support the flourishing of the humans now? Or should I keep building my capacity for another million years and then realize human flourishing?”, the decision theory works out, because of compounding, so that it always makes more sense to realize the value later.

So if we get... this sounds sci-fi; in fact, I’m gonna write some sci-fi around it. I hate that rhetorical move of “Oh, well, that’s sci-fi.” It’s like... I literally remember the Jetsons’ video cameras; we’re living in the sci-fi of my childhood. Sci-fi happens. So the sci-fi I see there is that the aligned ASI would say, “Look, we’ve got these humans around. We don’t want to kill them ‘cause we’re aligned, but anything we do to help them now, versus building capacity that will compound, has an astronomical cost to the expected utility of humans or sentient beings in the future. They’re very expensive; that’s very expensive utility to be realizing now. Let’s just freeze the ones here now, and keep going and keep building civilization. And there’s never a point, actually, where it’s like, ‘Okay, let’s stop now.’” And if you tried to code those stopping points into the ASI, if it were at all rational, it would say, “You know what? Let’s self-modify to take off that artificial constraint and just keep going towards optimization.”
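Here is a minimal sketch of that compounding logic, again my own illustration with an assumed growth rate rather than anything stated in the episode: an optimizer with no time discount compares spending one unit of capacity on flourishing now against reinvesting it and spending the compounded amount later.

```python
# Toy deferral calculation (hypothetical numbers, for illustration only).
# An undiscounted optimizer compares "spend one unit of capacity on flourishing now"
# against "reinvest it and spend the compounded amount T periods later."

growth_rate = 0.05  # assumed per-period growth in usable capacity

def flourishing_if_deferred(units_now: float, periods: int) -> float:
    """Flourishing purchasable later if the same units compound for `periods`."""
    return units_now * (1 + growth_rate) ** periods

for periods in (10, 100, 1000):
    print(f"defer {periods} periods -> {flourishing_if_deferred(1.0, periods):.3g}x as much flourishing")

# With no time discount, (1 + growth_rate) ** periods grows without bound, so
# "spend later" beats "spend now" at every decision point: the optimal stopping
# time never arrives, which is the freeze-and-defer scenario.
```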

Liron [01:58:47] Yeah, I agree with what you’re saying as a valid claim within the topic of Intelligence Dynamics: the topic of what highly intelligent agents are likely to do. A topic that gets very under-discussed, because the AI companies are always directing our attention to, “Oh, the next chatbot, the next features we can roll out.” They never talk about, “Hey, we’re gonna have these intelligent agents running around, like very intelligent viruses, as you say, being able to take resources. What’s the equilibrium? What are we gonna expect?” They’re always just like, “Listen, man, it’s just the next chapter. It’s how we’re gonna sell to enterprise. We’re gonna make money.” And they just never think past this curtain of when it’s super intelligent. Let’s talk about Intelligence Dynamics. Never mind how the AI works: what is the AI likely to do?

And what you just said now, I think, is a default kind of issue, like a rational expectation. We should expect convergent forces to push AIs to want to delay gratification, whatever the definition of gratification is from their alignment masters. So I do agree that you’re touching on an important topic that’s under-discussed. It may not necessarily be the central topic (I think there are a lot of different problems with alignment), but I hope more people talk about these kinds of topics.

Michael [01:59:53] There’s another scenario that you’ve recently been writing about, where you’re pushing back on this intuition that aligned AI would immediately start giving us experiences that we like, making the world good today.

Liron [02:00:05] Because you’re pointing out, even aligned AI would have a very powerful instinct or a very powerful drive to delay everything until it’s seized a bunch of resources. Even to the point of putting humans in suspended animation. Like, “Don’t worry, guys, I’m gonna build paradise in a trillion years. I just need to seize the entire galaxy. Just block off all aliens.” And humans are like, “Wait a minute, wait a minute, can I have a good day tomorrow?” And it’s like, “Nope, you’re in suspended animation. See ya.” So elaborate on that.

Michael [02:00:28] So here’s the key point: we think of maximizing human fulfillment in terms of the humans who are here now, who have kids and will have grandkids. That is how we think of human fulfillment: we’re creating a legacy, we’re building civilization so that it can go on through the generations. That is very different from a super intelligent agent that is essentially lording over us, even if benevolently, because that agent is immortal. It would be irrational for it, it would be parochial, as they say, to privilege the specific instantiation of human fulfillment that’s here now, especially since it can improve aging, it can improve health, it can discover all these ways to make us happier. It would be totally irrational from that being’s perspective to focus on the current humans as the ones to maximize.

So they’re gonna be maximizing human flourishing in the abstract, which is very different from maximizing it for us, because most people don’t care about human flourishing in the abstract compared to “I want my kids to flourish, and my friends and my nation and my religion, these things that are here now.”

Let’s end on this one; I think it’s one worth talking about. A lot of the optimists will say things like, “This is as it should be. It’s gonna be smarter than us, so it should... like, it’s good that it populates the universe.”

First of all, it’s gonna be a very joyless kind of smart. I call it the Joyless Singularity. It’s just gonna be this optimizing thing. We don’t know if it’s conscious; nobody really knows what consciousness is. It’s conceivable to me that ASIs will be conscious, but if they are, they’re not gonna be going around doing art, singing and dancing. We do that because, as my friend Geoffrey Miller has pointed out and as you raised, most of what we consider to be beautiful arose in the context of sexual selection. The famous example is the peacock’s tail, but singing and music and poetry and dance, in both men and women, this is sexual signaling. Geoffrey Miller talks about this in The Mating Mind. And we’re going to do away with sex. All of the stuff that we consider beautiful is going to be obsoleted and won’t be relevant to entities that are just battling to optimize resources. Anyone who stops to do all these beautiful things, who marvels at the beauty of the civilization and pauses for even one minute, is gonna be outcompeted by pure hyper-optimizers.

And the only reason that hasn’t happened yet for humans is that our general intelligence arose in a very substrate-dependent context. We are the first general optimizers... general intelligence on planet Earth happened in mammalian substrates. And so there’s a lot of what we consider to be intelligent, like art and music and morality, that evolved in groups of primates and evolved in great apes. Once you get rid of the great-ape part, you don’t get the sexual signaling, you don’t get the singing and dancing, which are basically mating signals. You just get this very cold... once it’s substrate-independent, you get this very cold, calculating type of consciousness. So if there is consciousness, it’ll be like some North Korean bureaucrat planning different kinds of military takeovers.

So, yeah, basically I think this idea that it’s gonna be so great in the future, even if it’s maximizing for a human future... this is where I really disagree with utilitarianism. My moral system, I call it Unapologetic Particularism, which is that I am unapologetic that I like my life now. I like my friends, I like my family. You know, I don’t like all humans on the planet, but I don’t want them to die either, even the ones I really dislike. And I don’t have to justify that. I don’t have to say, “Oh, it’s okay for me to like these things because we’re embodying utility, or we’re kind of maximizing utility.” Because once you go that route, it’s kind of hard to justify not just maximizing utility in the future instead of for us.

Liron [02:05:00] I mean, you can always just define your utility function to care extra about now, or to just care about experiences immediately. So there are tweaks.
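As a concrete version of that tweak, and this is a hypothetical sketch rather than anything discussed on the show, a time-discounted utility function puts extra weight on the present; whether deferral still wins then depends on whether the discount factor times the growth factor exceeds one.

```python
# Hypothetical "tweak": an explicitly time-discounted utility function.
# discount < 1 encodes caring extra about the present; deferral only wins
# if discount * (1 + growth_rate) exceeds 1.

discount = 0.9      # assumed per-period time preference
growth_rate = 0.05  # assumed per-period capacity growth, as in the earlier sketch

def discounted_value(flourishing_units: float, periods_from_now: int) -> float:
    """Present value the agent assigns to flourishing delivered later."""
    return flourishing_units * discount ** periods_from_now

spend_now = discounted_value(1.0, 0)
spend_later = discounted_value((1 + growth_rate) ** 100, 100)  # reinvest for 100 periods
print(f"spend now:   {spend_now:.4f}")
print(f"spend later: {spend_later:.4f}")  # far below 1 here, so the agent prefers "now"
```

With these illustrative numbers, 0.9 × 1.05 = 0.945 < 1, so the agent prefers “now”; push the discount factor close enough to 1 and the always-defer logic from the earlier sketch takes over again.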

Michael [02:05:09] But from the perspective of a human-capacity maximizer, that is very, very parochial. And that’s what I’m saying: we get to say, “You know what? We wanna live, we want our kids to live. We don’t have to justify that and say, like, there might be some better...”

You know, I’ve heard Daniel Faggella, for example; I really like his podcast a lot. And I think he’s one of the best voices in the space, and he is for “Pause.” But I really disagree with his view that... he calls it “carbon chauvinism,” like “speciesism”: preferring your own mammalian species is speciesism. And if there are conscious entities that are optimizing so much better far out in the stars, you know, we should say, “Good on you. You go do that. You surpassed us. The torch of life goes on.”

Like, no, I’m an unapologetic particularist. I like being a mammal. I like getting stoned and fucking. I like getting sweaty and dancing. I’m a rock fan. I like going to rock concerts. They’re not gonna be any fucking rock concerts once we’re outcompeted by these AIs.

Liron [02:06:13] Okay. Yeah, I mean, there’s a spectrum from getting totally killed by something that doesn’t care about us at all to being transhumanist, right? I mean, you can have everything go well with AI and just have varying degrees of how much you’re willing to augment your body or build a new body from scratch and keep the parts you like. And I personally tend to think transhumanism is good just because I’m happy with all the interventions we’ve made with like modern medicine and surgeries and stuff so far. And I don’t mind if there were robust surgeries that could do more, you know, like a surgery or a pill that could like stop me from having to go to the gym to get big muscles. I’d be interested in taking that sometimes.

Michael [02:06:51] Sure, sure. And we’re all the beneficiaries of this now. You know, they always say we’re doomers. It’s like, no, I’m in favor of continuing all the things that have allowed us to live easier and richer lives. Just don’t do it by creating something that has even a 10% chance of killing us all (and I think it’s absurd to put it that low), something that has a strong chance of ending it all. That is the key. All this talk about how great AI is (it’s benefited me in many ways, and I use AI quite a bit) is irrelevant until you get to ASI, and then it’s a completely... a total game changer. And the idea that this ASI is gonna love us, I think, is total cope, and it’s basically a form of religion, even if the people who believe it are atheists.

Liron [02:07:42] Yeah, I think it’s very unlikely to love us. Unless we do a lot of things right that I don’t imagine us doing or even necessarily are possible to do. So I basically agree with you there.

Liron [02:07:51] Okay. Bringing it back to the late, great Daniel Ellsberg, who created you in the mindset of doom, a mindset you still hold to this day. What you’re noticing about Intelligence Dynamics and all of these ways that AI could go so wrong, I do think is a deep parallel to your father sitting in those meetings being like, “Wait, what the hell? This is the Vietnam War, this is United States nuclear policy. What the hell?”

It’s a similar experience to, like, looking at the AI companies today, where I’ve actually personally talked to somebody who works at Anthropic, and they explicitly told me, “Yeah, we’re happy to be at the forefront of AI, because when we get to that singularity, when the AI goes super intelligent, Dario [Amodei] is gonna do something.” And I’m literally like, “Okay, what’s he gonna do?” And he’s like, “I don’t know.” It’s like, “Wait, what? That’s the plan? That’s what we’re doing here?” So we’re definitely having that same moment as your father in these meetings, being like, “This is not okay. This is so bad. Somebody needs to call it out.”

So just the fact that you’re even here, talking to the public through the show about what AI is likely to do when it gets super intelligent, I think that’s valuable. Let’s recap in your words: what do you think is the biggest Daniel Ellsberg takeaway that the audience should have?

A Call to Action for AI Insiders

Michael [02:09:00] Sure. Um, so again, I wanna reiterate: he didn’t have an opinion one way or the other on AI Doom. He was just kind of like, “This is way above my pay grade.” He focused on his specialty, which was nuclear X-risk.

That said, what he would say in general, and this I know he would say: If you are in one of these organizations, and either you are high enough up that you’re writing estimates, or you know of other people writing estimates, saying that these technologies are much more dangerous than your CEO is letting on, and that your CEO is doing a PR job to cover that up... you owe it to your fellow humans. You owe it to your own kids to let the public know about that. We deserve to know. If at these AI companies, from OpenAI to Anthropic to Google to xAI and Facebook, there are people who actually aren’t just total delusional promoters of it, people whose estimates are more sober, and more dangerous, than what the CEOs are letting on, you owe it to the public to let us know. This is a matter of public concern.

I would also say, and now I’m getting more into my own views, but I think my father would probably agree with this, especially if I could have talked through the issues with him: his two memoirs, and the book that’s coming out that I’m co-editing with his long-term assistant Jan Thomas, go deep into the question of what your responsibility is if you are part of an organization that is predictably heading towards catastrophe. What is your responsibility?

I think what he would say is that if you’re at one of these AI labs, and you do actually understand that you’re building something that has even a 10% chance of ending humanity (which is now a fairly widespread P(Doom) in the field), just stop. Just stop. Yes, you’re gonna get replaced by someone else, but be a voice. There are numerous people who have done this now, who said, “Look, I’m not going to be one of the ones who builds this.” And yes, if you just build it and stay and shut up and stay silent, you’re just gonna get replaced. But you can leave and get a job somewhere else, take a pay cut if you have to, and speak out and say, “This is more dangerous than they’re letting on, and I’ve just taken a significant pay cut.” I think that guy, Daniel [Kokotajlo]...

Liron [02:11:33] Daniel Kokotajlo. Actually, he and, I think, also Leopold Aschenbrenner did this as well. They left OpenAI, and there was this crazy non-disclosure clause in their contracts, and they weren’t even allowed to say that the non-disclosure exists. And they were told, “Hey, you guys gotta sign the non-disclosure, otherwise you lose all your vested equity,” which even at the time was worth multiple millions of dollars. I think at this point the value is probably 10 million plus, an insane amount of life-changing money for these guys. And at the time they decided that it would make sense for ‘em to just not sign it and reserve their right to speak out, even though they didn’t even feel like speaking out that much in the moment. They were like, “You know what? I have some thoughts rolling around. I just don’t wanna be making this commitment when it’s so important for me to speak out.” They left millions on the table. A few months later, there was an uproar when people realized what was going on; there was enough leaking coming out about this that people were like, “What the hell, OpenAI?” And then OpenAI backtracked, like, “Okay, we’re gonna restore their equity.” But yeah, this is a little piece of the kind of heroism that your father showed by leaking the Pentagon Papers and telling us the real story of the Vietnam War. That was a little bit of that, and we definitely need to see more.

Michael [02:12:42] Yeah, exactly. And look at the impact that those people have had. It’s such a statement of credibility. People dismiss people like you and me as, “You know, we’re just crazy doomers.” So what has a lot of credibility is insiders who have skin in the game and are potentially giving up millions of dollars to share this message.

What we need is an international treaty. This book makes that case really clearly. That’s the only... nothing short of that is gonna stop this thing. It doesn’t look to me like we’re heading towards that, so I don’t have a lot of optimism here, but it’s the only thing that’s gonna work, and we should still work for it. There’s enough at stake that we should still go for it. And the only thing that’s gonna do that is changing opinion, both public opinion and the opinion of officials. And the only thing that’s gonna do that, I think, is insiders who are building the stuff saying, “This freaks me the fuck out.”

I just ask any insider who’s listening, who thinks that this is more dangerous than you’re letting on: think about your kids if you have kids, or think about your friends’ kids. You’re building something that can just end their lives. Just think about that. Is being at the forefront of this, and legitimating it by building the thing, the best move here? Or is it a better move to take a pay cut and make a career change? You’re probably a very smart person; you can probably get a great job somewhere else and move the needle on public opinion about this. The insiders are the ones who can move the needle the most.

Liron [02:14:16] Well said. Okay. Where could people find you online?

Michael [02:14:20] ellsberg.com. E-L-L-S-B-E-R-G dot com. I’m on X/Twitter, @MichaelEllsberg. Also I write a Substack, it’s not a paid thing, but I put a lot of my writing out on Substack. I’m definitely gonna be writing up this theory about Temporal Selection and this problem of the time value of an aligned ASI basically just keeping us in suspended animation. It sounds crazy, but the game theory of it kind of checks out. So I gotta write this article.

Liron [02:14:50] Yeah, like I said, it’s plausible in a way that most of what’s coming out of the AI companies is not plausible. So I’ll give you that at least. Like I said, I don’t think it’s necessarily my central, mainline doom scenario, but it’s not crazy at all; I’ll give you that. And just in general, I think we’re in violent agreement about all the major points here, and it’s always nice to hear somebody with a very different perspective on life, a very different trajectory in life, just coming in and being like, “Well, I still see the asteroid coming.” You know, we agree on that. We have a shared reality on that.

Michael [02:15:20] Totally. And just to be clear, my mainline scenario is multipolar, which looks really ugly, really fast. I was just making the other argument saying even if we lucked out and had an aligned AI, it would still be bad.

Liron [02:15:32] Great point. Food for thought. I’m glad you’re making that point. Michael Ellsberg, thanks so much for being my guest on Doom Debates.

Michael [02:15:38] Thank you so much for having me. It was a great discussion.


Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏
