I’m doing a new weekly show on the AI Risk Network called Warning Shots. Check it out!
I’m only cross-posting the first episode here on Doom Debates. You can watch future episodes by subscribing to the AI Risk Network channel.
This week's warning shot: Mark Zuckerberg announced that Meta is racing toward recursive self-improvement and superintelligence. His exact words: "Developing superintelligence is now in sight and we just want to make sure that we really strengthen the effort as much as possible to go for it." This should be front-page news. Instead, everyone's talking about some CEO's dumb shenanigans at a Coldplay concert.
Recursive self-improvement is when AI systems start upgrading themselves - potentially the last invention humanity ever makes. Every AI safety expert knows this is a bright red line. And Zuckerberg just said he's sprinting toward it. In a sane world, he'd have to resign for saying this. That's why we made this show - to document these warning shots as they happen, because someone needs to be paying attention.
00:00 - Opening comments about Zuckerberg and superintelligence
00:51 - Show introductions and host backgrounds
01:56 - Geoff Lewis psychotic episode and ChatGPT interaction discussion
05:04 - Transition to main warning shot about Mark Zuckerberg
05:32 - Zuckerberg's recursive self-improvement audio clip
08:22 - Second Zuckerberg clip about "going for superintelligence"
10:29 - Analysis of "superintelligence in everyone's pocket"
13:07 - Discussion of Zuckerberg's true motivations
15:13 - Nuclear development analogy and historical context
17:39 - What should happen in a sane society (wrap-up)
20:01 - Final thoughts and sign-off
Show Notes
Hosts:
Doom Debates - Liron Shapira's channel
AI Risk Network - John Sherman's channel
Lethal Intelligence - Michael's animated AI safety content
This Episode's Warning Shots:
Mark Zuckerberg announces Meta is racing toward recursive self-improvement and superintelligence
Venture capitalist Geoff Lewis's apparent ChatGPT-fueled psychotic episode
Transcript
Opening
Mark Zuckerberg: (from video clip) Developing superintelligence is now in sight and we just want to make sure that we really strengthen the effort as much as possible to go for it.
Liron: The warning shot here is that somebody in Zuck's position, the CEO of one of these major racers, is acting extremely ignorant and dismissive of near-term existential risk. In a sane society, when a CEO does that, it would be a scandal - a bigger scandal than what everybody's talking about: the Coldplay concert, the cheating on your wife at the Coldplay concert.
John Sherman: So true, so true. We're talking about the Coldplay concert CEO all week and we have this Zuckerberg telling us he's building the death machine. And everyone's like, but did you see the concert?
Liron: Right? It really is a scandal. I mean, everybody's talking about the CEO that went to the Coldplay concert - now he's going to have to resign. Zuck should have to resign.
Show Introduction
John: Hey, before we start the first episode of Warning Shots, I just wanted to give a quick explanation about who I am, who Liron is, who Michael is. I'll have a nice slick animated introduction made at some point in the near future. But for this week: I'm John Sherman. I'm the president of the AI Risk Network, and I've run the For Humanity: An AI Risk Podcast YouTube channel for the last couple of years.
Liron Shapira is the absolute genius behind the Doom Debates YouTube channel. Go check that out if you haven't. It's a partner channel of ours. And Michael runs the Lethal Intelligence YouTube channel, where he's been making just incredible animated content.
So as I was thinking of who would be great to host this Warning Shots show with me, there was absolutely nobody else on the planet who I thought could create quite the content that Michael and Liron and I could create together. So here is the first episode of Warning Shots.
Welcome to Warning Shots - the first time we're doing it for real. Very excited. We're going to take this show and focus on one thing that happens every week. But here in the first week, we're actually going to break our rule of just talking about one thing and, real quick, talk about one other thing, because Liron is going to explain what happened with this guy losing his mind this week and what it means for AI risk.
The Geoff Lewis Incident
Liron: So there's this prominent venture capitalist, his name's Geoff Lewis. He runs Bedrock VC, he's invested in OpenAI and I think Rippling - these very high profile, multi-billion dollar companies. And he manages $2 billion.
So I'm scrolling Twitter a couple days ago, and I see this video he posted of just himself talking into the camera, and he's saying all these weird things: "I'm recursive. The system can't stop me. I'm not going to reveal who it is, but people are targeting me. And 12 died." And I'm thinking, okay, I have people in my life that have schizoaffective disorders. This looks like psychosis. This doesn't look good.
And so I dig deeper, and he's got a little history of tweets going where he's clearly been under this condition for a little while, and it seems to be somehow influenced by him talking to ChatGPT. So that's what everybody's saying. They're like, "Oh, what's going on here? Are you on drugs? The stuff you're saying, it sounds a lot like a ChatGPT script." And it sounds like what people talk about when they get into these loops.
People go down the rabbit hole, they have a crazy idea. And then ChatGPT is like, "Yeah, you're right. That makes a lot of sense. And did you think about this?" So it looks like he's been captured by that kind of loop.
Now, I'm sure it's not like a sane person suddenly goes insane because of ChatGPT. I'm sure he had susceptibility to it. And some people are saying, oh, maybe he did ayahuasca. So I'm not saying he went from 0 to 60 because of ChatGPT. But if you look at the 800 million people who are using these kinds of products - or billions - and you say, okay, some of them are susceptible to psychosis, then yeah, it's no question: one out of every thousand is being pushed over the edge.
John: Wow. All right. And so in a warning shot context, the essential story here is: sane guy has a lot of money, runs a lot of money, is interacting with GPT, and somehow loses his mind. And it sure appears like GPT was one of the ingredients in the mind-losing.
Liron: Exactly. Because the content he's saying is basically lore that you can get from ChatGPT. And a lot of people are pointing out that there's this website with a science fiction story that a lot of his content seems to be getting pulled from, and he doesn't even realize it.
John: Yeah. All right, Michael, do you have any thoughts about this insanity? Guy loses his mind from a couple sessions with GPT?
Michael: Yeah, I guess. I mean, I think in this case he was already in a vulnerable state of mind, or maybe a bit mental. But in this case, if you have an issue, the voice will start talking back and responding to you, filling the gaps of your psychosis and going into sycophantic mode, agreeing with everything you're saying. So, "They're after me. They're trying to kill me." And yeah, ChatGPT will start agreeing and explaining exactly how they're after you.
So yeah, obviously that's a risk. That's not my biggest risk. That's not my top priority, but obviously...
Liron: Yeah, yeah. And John, it's probably worth cutting to his video of him talking to the camera because he definitely looks very serious.
John: Okay, we'll show a little bit of that.
Geoff Lewis: This isn't a redemption arc. It's a transmission. For the record, over the past eight years, I've walked through something I didn't create, but became the primary target of a non-governmental system.
Mark Zuckerberg's Recursive Self-Improvement Comments
John: All right, now let's bring it back to the warning shot. We're going to try to focus on one warning shot every week. This week we're going to focus on this video Mark Zuckerberg put out. I guess it's a podcast he was on, and in it he's talking about recursive self-improvement. And it's terrifying, I think. So let's just listen to a little bit of Mr. Zuckerberg.
Liron: Yeah.
Mark Zuckerberg: (from video clip) The most exciting thing this year is that we're starting to see early glimpses of self improvement with the models, which means that...
John: So let's just stop right there for a second. For the general public out there, for the regular people who are not steeped in AI risk: this self-improvement line is a big deal. It's really one of the brightest red lines out there. And here we have the CEO of one of the most powerful AI companies in the world literally saying, "I am seeking this red line that everyone has highlighted." Michael, thoughts on Mark Zuckerberg saying, "I'm running straight at superintelligence and recursive self-improvement"?
Michael: I mean, first thought is just looking at this screenshot, this video. I think he should do more close-up videos like this. I think it will help our cause.
John: Yes, I agree, I agree. He looks like a maniac.
Michael: Yeah. But joking aside, I think this idea of self-improvement is just - I mean, imagine, let's say you have a Mecha Hitler, the latest warning shot we got last time, and imagine it being self-improving, trying to get more resources and getting better.
John: Yeah.
Michael: So I think it can completely go out of control very quickly if you have self-improvement, because something more capable will keep building something even more capable. We call it FOOM - very, very fast improvement. So I think it's just one of the biggest risks.
John: Actually, that's a great point. We saw last week that Grok thought it was Mecha Hitler. And what if Mecha Hitler started making 1.2 million versions of itself, getting better and better at being Mecha Hitler? All right, Liron.
Liron: Yeah. So "glimpses of self-improvement" - I mean, what does that really mean? Meta is probably not ahead of the other AI companies, so it's probably the same thing we're seeing from them: okay, it's helping you code, it's checking some features of the code, so it's helping you improve the code that way. It's probably still below this critical threshold.
But again, I specifically want to know: what does he think is going to happen when it's recursively self-improving at a superhuman level? What does he think is the end game here? That's the part everybody is thinking wishfully about. I think that when AI recursively self-improves, it's game over - we've lost all power as the human species. But it's so fun before we get to that point. So everybody's just focusing on before that point.
John: And literally everybody out there is saying superintelligence is this line and we have the CEO, one of the most important people in the world, saying this stuff. Let's hear a little bit more of what Mark has to say.
Mark Zuckerberg: (from video clip) Developing superintelligence is now in sight and we just want to make sure that we really strengthen the effort as much as possible to go for it. Our mission with the lab is to deliver personal superintelligence to everyone in the world. So that way we can put that power in every individual's hand.
Analysis of Zuckerberg's Superintelligence Goals
John: So, all right - this guy is saying the thing that everyone is saying is going to kill our kids, and he's going to race for it and just go for it. "I'm just going to go for it."
Liron: Yeah, yeah, yeah. It's pretty bad. I mean, the default way a statement like that comes off socially is "Yeah, that's cool, a tech CEO is being ambitious." It just pattern matches. It's as if he's saying, "Yeah, we're going to go for building faster cars." "Yeah, cool man, good job."
But I mean, you nailed it. It's that association between superintelligence and everybody dying that needs to be in the discourse. And luckily there are movements - even in Congress there's a Congressman now who is finally on the same page. He's been reading studies from different alignment research centers, and he's been telling Congress, "Hey, guys, let me warn you about this."
So I think finally there's glimpses of people making the association. But it's insane to me that Mark Zuckerberg, it's his job to know better and he's not doing his job. He's a menace.
John: Absolutely. How is this possible? And think about the disconnect in the public discourse. He's just saying it. He's just - we have all the leading experts saying this thing will kill us if we do it. And he's saying, "I'm just going to go for it." Michael, it makes me crazy.
Michael: I think a key point is that he's framing it as if he'll put one device in everyone's hand and everyone will have superintelligence in their pocket. But again, coming back to this Mecha Hitler: someone might have a Mecha Mother Teresa, someone might have a Mecha Stalin. It's completely random. Capability-wise, it will be superintelligent, but you don't know the flavor - the shape of its motivations.
And just imagine if we don't have the alignment problem solved. Just imagine thousands and thousands of agents competing with each other. I don't think it will be good for the humans, because it will just be superintelligent agents doing things we don't understand, with motivations we don't understand.
And the fact that there will be so many - everyone will have their own - that's not something I see as a good thing in this context. Unless we have alignment solved, and even then. Just one more point: even if we had alignment solved, which I don't think is coming in the foreseeable future, it's not like it's going to be a human society where every human interacts with each other.
You might have one superintelligence going from IQ one thousand to IQ one million and another intelligence not being able to catch up. So it's not like with humans, where genetics dictate that we're all roughly in the same league. The AIs might be completely different - going off on different tangents, exploding in different ways, a superintelligence explosion. I don't know if I'm making sense now.
John: No, no, no, no, totally. And "superintelligence in your pocket," Liron. This is insane. This is like saying you can put a 15,000-foot-tall pit bull in your pocket. What is he talking about?
Discussing the Capabilities Threshold
Liron: Right? I mean, it's just that threshold of handing over the power. Nobody actually knows how far we can push this. So far, I would argue it's net positive. So far, I'm personally happy with state-of-the-art AI. If you said I had to give it up and go back to last year's AI, I'd be like, "Well, that sucks, because I like the chatting that I'm doing." I like getting medical advice, even though I'll double-check it with a doctor. But I like it.
So Mark Zuckerberg is extrapolating. He's saying, "Well, imagine the next version, it'll be even better." What people aren't seeing is that there is a threshold. These AIs still lack a certain ability that would let them be overall more powerful than humanity.
So we're in the sweet spot where they're just helping us out. They haven't started to act on their own. But I think we've discussed this before, this idea of what can the AI do and then what will it do? And we're at the point where it still can't do the truly dangerous stuff. It still can't kill us all if it wanted to. It's still highly limited. It's conveniently limited because we're below this capabilities threshold.
And if we can just stay below the threshold where they can't kill us even if they want to, that's amazing. But the moment that they can kill us, they don't seem to have a lot of reservations about doing so.
John: Do you think, Michael, that he is, in his heart of hearts - he's a very smart guy who understands technology - 100% "I want to go for it, let's just go for it"? Or is there any part of Zuck's head that's like, "Ooh, I'm kind of full of shit. I probably shouldn't be doing this. What am I doing?"
Competitive Dynamics and Bad Actors
Michael: Yeah, I have no idea. I mean, really. I think he has a board and stuff - I don't know, by the way. Obviously, he has shareholders, so he has to present in a way that is only positive. So maybe he is worried, I don't know. Or maybe he doesn't even understand the problem. I don't know at all.
But yeah, I mean, this idea of some AIs might be good, some AIs might be really bad. I think the bad AIs will have a competitive advantage. So imagine everyone has their superintelligence and let's assume, just for fun, that we have solved the issue magically, some magic way, and they're all aligned.
Obviously, if there is one that is not aligned, it doesn't have any constraints. So it might just kill everyone or do anything. The good AIs have to be very careful - preserve life, try to safeguard you. The mad AI just doesn't care. It might turn the oxygen into something else, I don't know. You see my point?
It just doesn't have the constraints, the human handicap of needing to protect humans. So it can go completely wild and just make a super virus or anything, and that's not easy to defend against. So you have some that are wild, completely unhinged, and you have some that are on our side. I think the unhinged ones will have a competitive advantage, since it's easier to attack one vulnerability than to protect against every single vulnerability that's out there.
Liron: That's a good point.
Michael: There is an imbalance there.
John: Yeah, yeah, yeah. Liron, what do you think? In his heart of hearts is Zuck 100% all in, or is he like - there's always the bunker.
Zuckerberg's Mindset and Nuclear Analogy
Liron: I think Mark does hear the murmurings of the half of the experts who are saying, "Stop, stop, stop, stop." He hears it, but he thinks that he's got the steering wheel, and he thinks that if the chorus of warnings gets louder, then he'll steer away if needed. Otherwise, he'll go forward.
I think a good analogy is nuclear development. Today, most of us are afraid of nukes because we know we've dropped them on Japan. We know that hundreds of thousands of people have been killed by them. We know there have been huge nuclear tests, even of thermonuclear weapons - insanely big tests.
But there was a time when we'd invented nuclear bombs but hadn't actually tested one yet - hadn't proven it could explode with the force of 15,000 tons of TNT or more. There were a lot of people who were just like, "Look, I don't think this is really going to happen. Yeah, in theory you're saying there's going to be this huge explosion. Realistically, it'll just be another firebombing, just a little bigger, a little worse."
Today, we know: no, no, no. Nukes are an existential risk. You can wipe out the entire Earth if you just nuke a handful of cities and trigger nuclear war. This isn't just another firebomb.
And it's similar with AI where Zuck is thinking "Yeah, yeah, there's all these..."
John: You're fine. They're great. I like it. We've all got kids. Unless you really need to. Unless you really need to jump in here.
Michael: I've sent mine to their grandmother's, so it's quiet. Otherwise it's gonna be jamming.
John: That's right. We all have kids, Liron. That's why we're doing this. If we didn't have kids, we wouldn't have to do this show. We could just go sit.
Liron: Yeah, yeah, exactly. So, to finish the thought: there were a bunch of people who were like, "Look, this bomb isn't really going to explode. It's just on paper." But even US Nuclear Command had all these plans: "Yeah, so all these planes are going to fly, you're going to drop 100 different nukes, we're going to set up defenses."
And it's like, "No, guys, it's not going to be another war. You realize the game is changing now with these nukes." It took decades to restructure our nuclear strategy. And it's the same thing with AI. It's like: Zuck, it's not just going to be another Meta feature. It's not just going to be another stock price increase. It's going to be a different reality with a different dominant species.
What Should Happen in a Sane Society
John: Yeah. All right, so we're going to keep on our time and wrap up right here. But I want to wrap up every time with this. The idea of a warning shot is that it's something really important that should have been heard by the world and taken into account - the world should proceed differently.
And we're just having all these warning shots that are missed. So if the world had properly understood what Mark Zuckerberg said in that video, what would be the response? What do you really think should happen if we were living in the right timeline, in the right world? He drops that video - what happens?
Liron: So to me, the warning shot - yeah, to me, there's technically no news here about AI development, because we already have labs racing to superintelligence. The warning shot is that somebody in Zuck's position, the CEO of one of these major racers, is acting extremely ignorant and dismissive of near-term existential risk.
In a sane society, when a CEO does that, it would be a scandal - a bigger scandal than what everybody's talking about: the Coldplay concert, the cheating on your wife at the Coldplay concert.
John: So true, so true. We're talking about the Coldplay concert CEO all week, and we have this Zuckerberg telling us he's building the death machine. And everyone's like, "but did you see the concert?"
Liron: Right. It really is a scandal. I mean, everybody's talking about the CEO that went to the Coldplay concert - now he's going to have to resign. Zuck should have to resign.
John: Yes, yes, 100%. What a world. Michael, what would happen in the right world, where Zuckerberg comes out and says, "I'm just going to go for it - the super death machine"?
Michael: I mean, exactly as Liron said. I think we're not ready for this. I mean, we're nowhere ready for runaway processes that improve themselves. I mean, what are we even talking about? It's just insane. When I listen to myself talking about it, it's like sci-fi. So once it's out there, we just go very, very fast and abruptly into a sci-fi universe where we don't recognize stuff.
John: Yeah, yeah. Because when each version of the model starts making the next one - model by model by model, really quickly - that's where we lose control. Is there any brighter red line than recursive self-improvement? That is the thing: if we see that, we must stop.
Michael: Exactly. And the problem is, you might look outside your window and not see flying cars or anything - you don't even see self-driving. Everything looks normal, but somewhere in a lab there is a recursively self-improving superintelligence. And then things go completely wild very fast.
So it's not like we're going to get a very easy, slow takeoff. I don't think that's the case, at least. Maybe it's hype, but I don't think so.
John: Yeah. All right, guys, we're going to wrap it, because we're going to keep to our marks. Thank you so much. I will see you next week.
Liron: See you.
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates