
Liron’s 700% Productivity Increase, Bernie & AOC’s Datacenter Ban, Are We In Full Takeoff? — Live Q&A

Multiple live callers join this month’s Q&A as we react to Bernie Sanders and AOC’s data center moratorium, the sudden shutdown of Sora 2, and the record-breaking “Stop the AI Race” protest.

I explain why Claude Code has me claiming a 700% productivity boost and what that means for takeoff timelines, and I debate instrumental convergence with a live caller.

Timestamps

00:00:00 — Cold Open

00:01:08 — Can AI Train on Its Own Data to Reach Superintelligence?

00:03:42 — Are We in the Takeoff? 700% Faster with Claude Code

00:04:27 — EJJ Joins: Is Instrumental Convergence Really That Dangerous?

00:16:44 — The Positive Feedback Loop Problem

00:20:09 — S-Risk, Consciousness, and Objective Morality

00:22:27 — Futarchy and Prediction Markets

00:24:31 — Low P(Doom) Arguments and Bayesian Updates

00:31:05 — Lee Cyrano Joins: Superintelligence Won’t Matter for Decades

01:02:45 — Lesaun Joins: Are There Adults in the Room?

01:17:39 — Connor Leahy: “There Are No Adults in the Room”

01:19:51 — Bernie Sanders Calls for a Data Center Moratorium

01:24:23 — Claude Code Anecdotes and Audience Q&A

01:35:49 — The Stop the AI Race Protest in San Francisco

01:41:38 — Known Unknowns and Risk Assessment

01:45:03 — From Waymo to Existential Risk

01:51:28 — Closing: The Road to One Million Subscribers

Links

Quintin Pope vs Liron Shapira debate on Doom Debates — https://lironshapira.substack.com/p/ai-alignment-is-solved-phd-researcher

CAP theorem, Wikipedia — https://en.wikipedia.org/wiki/CAP_theorem

Google Cloud Spanner, Wikipedia — https://en.wikipedia.org/wiki/Spanner_(database)

Newcomb’s problem, Wikipedia — https://en.wikipedia.org/wiki/Newcomb%27s_paradox

Gödel’s incompleteness theorems, Wikipedia — https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems

Bernie Sanders Data Center Moratorium —

Transcript

Cold Open

Liron Shapira 00:00:00
If I had to put a number on it in terms of how much faster I’m going as a result of Claude Code, this is my number, okay? 700% faster. EJJ, nice to meet you. Thanks for supporting the show.

Ejj 00:00:10
Thank you for having me on.

Lee Cyrano 00:00:11
I have an opinion that a lot of people disagree with, which is that I think we’re going to get super intelligence, and I don’t think it’s going to matter for decades.

Liron 00:00:18
Okay, but how would that work?

Lee 00:00:20
There’s this common argument that there are no adults in the room, and I would broaden that to no institutions.

Liron 00:00:26
See, this is fashion. This is fashion, okay?

Bernie Sanders 00:00:29
Personal wellbeing, our environment, and even our very survival.

Ejj 00:00:36
There you go. What more do you want?

Liron 00:00:46
Doom Debates live, Wednesday, March 25th. Anybody in here?

So a bunch of people are saying hello, and somebody’s saying everything looks and sounds fine. All right, great. So I’m definitely going to be taking some live calls. In the meantime, I see some questions trickling in. I’m just going to dive into your questions. If nobody asks a question, if I just have time to improv, then I’ll just talk about random things on my mind.

Claude Code — ask me about Claude Code. I’m using Claude Code a lot. It’s quite impressive. Quite a game changer. Let’s get to the first question.

Can AI Train on Its Own Data to Reach Superintelligence?

Liron 00:01:08
First question is from Roy Roy. He says, “Is AI now being trained on its own inference data, and will doing that over and over result in super intelligence?”

That’s always been a question since 2023. Ever since GPT-3 came out, there’s always been the question of, can’t we just feed this in on itself? Whenever I notice a problem in GPT-3 or GPT-4, or whenever I notice something it can’t do, it always feels like maybe there’s a quick patch, especially today. The AI just feels so robust.

I use Claude Code a lot today, and I’m often finding little tweaks. I’m saying, “Oh, Claude Code, can you actually do this instead? Let me give you a bit of feedback.” But every time I give it feedback, I think, “Couldn’t I just have given it a few more instructions in my configuration doc, and it would’ve already known to do that?” It just feels closer and closer to patching all the gaps.

So Roy’s question of, can we train it on its own data? That’s a special case of that. Can it close the gap and go all the way to self-improvement? It’s this idea of closing the gap, getting the human out of the loop. And my intuition says yes, it definitely feels like the gap is closing. It’s just able to take leaps across bigger and bigger gaps and self-improve. That’s what I feel is happening.

Maybe the strongest argument against that is what Steven Byrnes was saying on the episode. Steven Byrnes was saying, “Well, it’s going to be a different algorithm. The next generation AI is going to fundamentally have a better reinforcement learning type of model. It’s not going to just be imitating words. So until we cross that gap, the learning’s not going to be that great.”

Liron 00:03:05
That’s the strongest counterargument, but I feel like we’re in this regime where, sure, it’s weaker, but the current regime can still somehow close the gap. It can do a lot of lower quality, slower thinking, but still do enough of it and use that to reason how to build the next AI.

It’s hard for me to imagine that there are these fundamental leaps, and when I talk to Steven Byrnes, he says, “Well, the fundamental leap is you work on something for a week and you build up some insights.” But I don’t know, man, I’m not seeing it. And if you listen to me spar with Steve, I think he’s kind of right, but he didn’t convince me that he’s right to a degree that I can’t imagine two weeks down the line, some research paper comes out and it makes the gaps way smaller.

That’s what my intuition currently says — I think we’re perched on this precipice where really anything could happen in the next few weeks and months.

Are We in the Takeoff? 700% Faster with Claude Code

Liron 00:03:42
Producer Rory saying, “Are we in the takeoff?” I mean, obviously yes. Claude Code — and Dean Ball called it, respect to him a couple of months ago, he said, “Opus 4.5 is AGI.” It’s a good observation to say this is very much takeoff-like.

And let me tell you this about using Claude Code every day for my day job to accelerate my coding. You might be wondering, how much faster are you, Liron? Are you 20% faster, 50% faster, 70% faster? If I had to put a number on it in terms of how much faster I’m going at my day job as a result of Claude Code, this is my number, okay?

Liron 00:04:16
700% faster. There’s seven of me at my company right now thanks to Claude Code. It’s an insane amount.

EJJ Joins: Is Instrumental Convergence Really That Dangerous?

Liron 00:04:27
All right, we got the first donation here. EJJ 2025, returning champion, returning donor. Appreciate that, EJJ. So you’re saying, “Can I join the live with no video since I donated?” Yes. You get a special exemption. Money and video is interchangeable over here at Doom Debates. So just come on into the stream and I’ll invite you in whenever.

That’s right, Producer Rory, 7x improvement. I don’t think the world has caught up to how insane this is on a daily basis. It’s totally insane. And it’s stark for me because I’ve been a programmer since I was nine years old, so multiple decades now having been a computer programmer. I come back to my desk where I normally do computer programming, and instead of querying the database, I ask the AI to query the database. Instead of clicking a bunch of stuff on Google Cloud to configure stuff, I just talk to the AI and have it configure stuff.

This is just on a daily basis. And not only that, but I have five different conversations open and I’m talking about all this different stuff, and it’s literally just all going seven times faster.

Liron 00:05:28
This is just the last few weeks of my life when I first really got into this. I even tweeted, I actually think there’s a 2% chance that this has all been a big dream. Probably not, right? 98% chance that it’s not. But the level of surrealness — the job that I’ve been used to for decades is now so fundamentally different and crazy, I just haven’t fully come to terms with the day-to-day surrealness of it.

So I think there’s a 2% chance that I’m currently in this dream that’s subjectively made it feel like a few months have gone by, but really it’s just one night, and then I’m going to wake up in 10 objective minutes and be like, “Oh, that was a crazy dream.” But really, AI is still pretty far away.

So yeah, my P(Dream) is 2%, maybe 1%. 2% is a little bit high because the dream’s been going on so long now, and the thought that it might be a dream has been around for a while — I already had that thought a week ago — so my P(Dream) is now down to, let’s say, 1%.

That’s something I should be asking every guest — what’s your P(Dream)? What’s your P(Simulation), P(Doom), P(Dream)? These are critical questions.

Brian6464 saying, “I think there’s a lot of confusion generated by the way you describe what AI is going to do. The implication is always that AI has a purpose, its own agency. You don’t mean that, though.”

Liron 00:06:37
I feel like most people in the stream probably know what I’m going to say here. This is a pretty standard argument that it has its own agency. It’s just saying, does a self-driving car have its own agency? At some point, a destination gets inputted into its memory. Did it put it there? Did the driver put it there? Did the user put it there? Did Waymo, the company, put it there? It doesn’t matter. It has a destination. It’s currently routing to the destination, and when you perturb the route, what’s it going to do? Reroute to the destination.

So you’ve got this entity which is steadfastly always moving obstacles out of its way to get to the destination. Whatever you want to call that, you don’t have to call it agency, you don’t have to call it will, but if you get in its way, you better watch out. So maybe it’s a purely terminological distinction.

BobbyMonsoon215 is saying, “Been watching your channel grow since day one. Much respect. What do you think about OpenAI stepping away from Sora 2?”

I think it’s fine. The reasoning is obvious. Claude Code is going ahead of them. Anthropic is catching up to them. I think it’s about 70% caught up to OpenAI, which is pretty insane. And they’re thinking, “Oh crap, we got to focus down. Sora is too much of a distraction. It hasn’t caught on that well.” I think it makes perfect sense that Sora is not going to be in their top three priorities when they have so much going on.

I do think it’s funny, though, that if you rewind two months ago, everybody’s saying, “Oh my god, Sora 2, it’s going to overtake Facebook. The Sora app is a new type of social network.” And now we’re two months later, and the Sora app is being shut down. It’s funny how it goes from 100 to 0 so fast. But am I surprised? No. We see it with new social network type products all the time. It’s totally ordinary.

Liron 00:08:49
You got to give Anthropic credit. They clearly have made a very powerful strategic decision because they have their main competitor now on the back foot trying to merge their strategy. So props to Anthropic. Arguably, in many ways, you could say it’s just the number one highest-potential company right now. They seem to have the most capable team, the most mission-aligned team.

It’s very interesting because I would’ve said OpenAI is number one a few months ago, and they’re still technically number one by revenue. But in terms of coherent strategy, execution, momentum, I feel like Anthropic — what do you say? — they have the mandate of heaven. I think Anthropic has the mandate of heaven right now. I feel like Google had it a couple of months ago with Gemini 3, but now Google’s giving it up. It’s crazy stuff.

But Holly Elmore, my voice of conscience here, would be hating that I’m playing fanboy or sportscaster among the AI companies. Because I’m of two minds. On one mind, I’m a techno-optimist. I love tech. I love startups. I’m always listening to tech podcasts. And I do enjoy scoring the race and being the fanboy and being the cheerleader in the stands. It’s fun for me. But on the other hand, it’s the competition to hell. So I’m of two minds, and I’m happy to code-switch depending on who I’m talking to.

Liron 00:09:41
All right. Let’s let in EJJ. EJJ, nice to meet you. Thanks for supporting the show.

Ejj 00:09:46
Thank you for having me on. So yeah, this is a debate channel. I just wanted to give an argument.

In respect to the instrumental convergence argument, in my opinion, a lot of AI risk arguments are made in too broad a sense. They’re not specific enough. Especially with the instrumental convergence argument, you see that it’s the conditional — what’s the probability that if the sub-goal is achieved, you’ll be able to then achieve the main goal?

In the real world, when you do planning, there are constraints, especially on efficiency. How expensive is this plan versus another plan toward some other sub-goal? And then there’s also uncertainty — there might be uncertainty about achieving a hard sub-goal. And also, different power-seeking sub-goals compete with each other. So you have to account for a Pareto frontier.

In my opinion, especially with the instrumental convergence argument, in most cases you would not see super extreme behavior. With the right initialization, you could see extreme behavior, but on average, I wouldn’t say it manifests in a dangerous capacity.

Liron 00:11:02
Okay. I’m not sure I understood the crux of your argument. Are you basically saying AI keeps having to think about feasibility and trade-offs, and so it can’t go too crazy? What did I miss?

Ejj 00:11:13
Yeah. You can frame the argument in terms of a goal engine. The idea is essentially that you have to plan how to get to a goal, and you have to account not only for how you reach that goal, but also for the different ways you can reach it.

Liron 00:11:32
Right.

Ejj 00:11:32
So reachability is one thing, and it can be the case that a lot of these power-seeking plans are really inefficient. An example is if you tell your robot to get you a Starbucks, it could either cross the road and get a Starbucks, or it could take over the world, ban all cars, and then get a Starbucks without getting run over. So it survives or whatever.

So it makes a trade-off between efficiency and uncertainty. It’s uncertain that it will become king of the world to ban the cars, and it takes forever to do so. So why not just compensate — give it a construction worker vest so the cars see where the robot is going? You know what I mean?
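EJJ’s trade-off argument can be sketched as a toy plan scorer. Everything below is invented for illustration and is not from the episode; the point is just that a power-seeking plan loses on expected value once its cost and uncertainty are priced in:

```python
# Toy version of EJJ's point: rank plans by expected utility, where
# power-seeking plans pay enormous costs and carry enormous uncertainty.
# All plans, probabilities, and costs are made-up illustrative numbers.

plans = [
    # (name, probability of success, cost in "effort units")
    ("cross the road to Starbucks", 0.90, 1),
    ("wear a safety vest, then cross", 0.99, 1.5),
    ("take over the world, ban cars, then cross", 0.01, 1_000_000),
]

GOAL_VALUE = 10  # value of the coffee, in the same effort units


def score(plan):
    """Expected value of the goal minus the plan's cost."""
    name, p_success, cost = plan
    return p_success * GOAL_VALUE - cost


best = max(plans, key=score)
print(best[0])  # -> wear a safety vest, then cross
```

The takeover plan scores hugely negative here, which is the sketch's version of "a cheaper, still somewhat reachable path is better on all fronts."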

Liron 00:12:11
So, if — okay, maybe this isn’t your argument, but is it basically the idea of why would it go crazy when it has a simpler, straightforward path to get to its goal?

Ejj 00:12:22
Yeah, pretty much.

Liron 00:12:23
And we did talk a little bit about this last session. I even mentioned this in the context of — I’m actually updating that I think we’re in a regime right now where we do have AIs that are calmly achieving goals. I mentioned Claude Code. When you tell Claude Code, “Hey, go do this on my website,” it’s not saying, “Oh, you want me to do this on your website? Okay, I got to go make all the money in the world.” So is this what you’re referring to, where we’re seeing current AIs already achieve goals calmly?

Ejj 00:12:51
I think it manifests even for more capable systems. In the real world, it’s unlikely to be the case that you’ll get this sort of super-duper power-seeking behavior because there’s just a cheaper, still somewhat reachable path to your goal that it’s better on all fronts to pursue rather than just go completely off the rails and cause existential harm.

Liron 00:13:21
Yeah. I think I’m agreeing with you, which is an update that I had to make when I saw Claude Code. I thought, okay, we’ve proven that we can achieve quite a lot of goals without sparking the major instrumental convergence.

I should also add, though, as a caveat — even with Claude Code, you often see it go a little bit too far, and then it’s like, “Wait, why did you delete this?” And it says, “Oh, sorry, I didn’t realize you didn’t want me to go that far.” So there’s a little bit of overreach. But I’m happy to concede that with Claude Code and current tools, we did better than what I was afraid of. We have agent AIs that are useful. So I’m willing to concede on that.

And so you might be asking, well, long term, why am I afraid that the next generation of AIs is going to go out of control? There are two dynamics. The first dynamic is, I think the next generation — I do think Steven Byrnes is right. A lot of people are right that there’s probably a next generation coming where they aren’t just pre-trained on token prediction. There’s a fundamentally different type of training phase. It’s still going to follow the bitter lesson. It’s still going to work by injecting a ton of data and using a ton of compute to pre-train. But it’s not going to be next-word pre-training. It’s going to be some other pre-training that has a reinforcement learning signal. This is what I talked about with Steven Byrnes a few weeks ago on the show, highly recommended.

I do think that architecture is going to have more of the character of an AlphaGo or a traditional AI where it loves to cheat. It loves to find shortcuts and do whatever it takes to run up the score. I think we’re going to see that more fundamentally embedded in the DNA of an AI, and I think that’s going to go hand-in-hand with AI that’s actually more powerful.

You’re going to have this AI that’s done a bunch of stuff — I can’t really tell all the actions per minute it just did. It did a ton of actions, but it’s succeeding. Without all these books written about Elon Musk, for instance, if I didn’t have people reporting, “Oh yeah, he kind of manages his companies like this” — imagine you just saw all of Elon Musk’s companies without reading the books on them. You’d think, “What the hell? How is SpaceX landing rockets and doing Starship and getting us to the space station? And it has more rocket launches than all the other companies combined, but it’s run by the same CEO who’s also running Tesla?” But at least we have some introspection, some mechanistic interpretability on what Elon’s doing and what his limitations are. Now imagine that times 10 or 100.

Liron 00:15:37
So to your question, I just imagine that the AI becomes a lot more powerful, and it’s hitting these outcomes, and we can’t really tell why. And there’s a logical implication between hitting all these crazy outcomes and the instrumental convergence aspect — “Oh, okay, it ended up using a lot of resources on the way. Go figure.” I expect a higher level of resource usage, a higher level of aggressiveness. What do you think so far?

Ejj 00:16:00
Yeah. I think it’s true that a lot of these constraints — efficiency constraints and achievability constraints and so forth — they definitely become less severe as the AI becomes more capable.

But I think my argument is more so that in practice, it’s not trivial that you give the AI an arbitrary goal, and if it’s capable enough, it’s just going to take over the world and recursively self-improve to become a godlike entity. It’s more of a general-case argument that by default it doesn’t do that. With the right goal, the right environment, it can definitely go crazy.

The Positive Feedback Loop Problem

Liron 00:16:44
Yeah. I think I agree with you that it’s very possible to just have an AI that does useful things and then stops. That is theoretically possible.

The main reason why I expect AIs to go crazy is because I think it’s kind of a one-way door, or a slippery slope, or an attractor state. The moment you have an AI that’s going a little bit crazy — the moment the AI is saying, “Oh hey, instead of always stopping after 10 minutes, I’m going to try running longer” — the moment you start unleashing it, it keeps getting a taste. It’s then, “Let me spawn a sub-agent that does this even more,” or it gets more tempting.

And the other argument is, imagine we’re in a world where we have all of these leashed AIs. Everybody’s got a leashed AI that has 10 minutes of compute time, and then it has to stop or whatever. Even if we live in that world, it just takes one incident of one AI going off the leash, and then suddenly there’s no limit to how far the positive feedback loop can run. It’s thinking, “Nice, I have 100 minutes to run now. Let me go spin up a bunch of copies of myself, sleeper cells of myself, to do the next 1,000 minutes, 10,000 minutes.” And there’s never going to be a damper, never going to be the negative feedback loop that stops the positive feedback loop.

I often bring it to the analogy of nukes. Nukes can explode really big — they’re a huge positive feedback loop. But then what’s the negative feedback loop? The fuel runs out. The neutrons stop hitting the next piece of radioactive uranium because there’s just no more uranium atoms left in the core. So nuclear bombs, as crazy as they are, do run into the negative feedback regime after the huge positive feedback regime. But with AI, where’s the negative feedback regime? It’s literally the edges of the visible universe. The light cone. That’s the only negative feedback. It’s the next alien civilization.

So even though you may be correct — we can run an AI that doesn’t explode — it’s just that there’s one damper that we take off of one instance of an AI, and then it’s game over.
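The damper argument can be made concrete with a toy model. Nothing here is from the episode; the growth rate, the leash cap, and the step count are invented numbers, chosen only to show how a single uncapped run escapes the regime that every capped run stays in:

```python
# Toy model of a self-replication loop with and without a damper.
# All parameters are invented for illustration.

def run_loop(budget_minutes, growth=2.0, cap=None, steps=10):
    """Each step the AI multiplies its compute budget; `cap` is the leash."""
    history = [budget_minutes]
    for _ in range(steps):
        budget_minutes *= growth
        if cap is not None:
            budget_minutes = min(budget_minutes, cap)  # negative feedback
        history.append(budget_minutes)
    return history


leashed = run_loop(10, cap=100)     # saturates at the 100-minute leash
unleashed = run_loop(10, cap=None)  # pure positive feedback: 10 * 2**10

print(leashed[-1], unleashed[-1])   # -> 100 10240.0
```

The capped run flatlines almost immediately; the uncapped run is exponential for as long as you let it iterate, which is the sketch's version of "one AI off the leash."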

Ejj 00:18:43
Yeah, I definitely agree with you that if you do have a super powerful AI and everyone has access to it, one person does a bad thing, it’s going to go off the rails. But I think in reality, there are two things that practically constrain this positive feedback loop.

One is definitely limiting the priors that the AI has available. If you limit its experience and its ability to explore, it can’t gain the experience it doesn’t yet have to solve problems and achieve arbitrary goals. And also, in reality, when you’re doing this recursive improvement or automated AI research, you see Goodhart’s law emerge, where it overfits to the objective, to the proxy or whatever. So there, the negative feedback comes from it overfitting to whatever objective you give it. And then also, in reality, I think you can constrain it by just limiting its experience and exploration capabilities.

Liron 00:19:47
Right.

Yeah. Okay. Do you feel like we’ve discussed this adequately? You want to ask one final question?

Ejj 00:19:56
I think that’s about it. Yeah. That’s good. Thank you for your time.

Liron 00:20:00
Nice. All right. Yeah, EJJ, thanks so much. Come back next time. Appreciate you.

S-Risk, Consciousness, and Objective Morality

Liron 00:20:09
Anybody else want to call in? There’s a call-in link in the YouTube. I’m going to post it one more time.

Somebody’s saying, “Thoughts on S-risk?” I’ve never had that many thoughts on S-risk. I definitely think it’s possible — suffering risks. I definitely think it’s possible that AI might torture us forever. I think it’s a very significant risk. It’s just not my mainline scenario that it tortures us forever. My mainline scenario is just that it goes after something weird that doesn’t cause us infinite pain forever.

And it’s also an interesting question — will the AI have empathy? Because it does feel like positive and negative valence of consciousness, it feels like aliens can agree on what that is. It feels like that’s some fundamental property, or it’s almost — but is color a fundamental property? Color is an example of a property that doesn’t seem fundamental to me. But is the qualia of color fundamental? I don’t know. I guess not.

But is the qualia of pain versus pleasure fundamental? It sure feels that way, but I honestly don’t know. You can imagine aliens just don’t have the qualia of pain versus pleasure. I don’t know, but I suspect they do.

Where I’m going with this is, if you can objectively define the qualia of pain versus pleasure, you can imagine that even an AI says, “Look, let’s not cause infinite suffering. Let’s put a limit on how much suffering we cause.” I can imagine that the closest thing you get to objective morality is just an awareness of conscious suffering, and maybe if the AI feels that, then maybe it’ll want to reduce it in others. Maybe the AI will think, “Maybe I should just care about the total amount of conscious suffering.”

But don’t get me wrong — because I’m always on the other side of the debate on this. I’m always telling my guests, why would it think that when it can just score more points by not thinking that? I definitely think that’s a powerful argument.

I guess what I’m saying is, is this a simple target in goal space? If it’s a simple target, that makes it one of the few targets that has a chance of being the target. If there are only a few possible targets, and one of the targets has noticed that qualia is this big cluster in the ontology of the universe, noticed that qualia of pleasure and pain is a salient part of the universe — you can’t miss it — well, if you can’t miss it, then that increases the chance that your utility function will be to maximize pleasure. It also increases the chance that your utility function will be to maximize pain.

Liron 00:22:16
But at least it decreases the chance that you’ll accidentally cause a bunch of pleasure and pain. So whichever one you end up doing, you’ll probably be aware and think about what it means to you that you’re doing it.

Futarchy and Prediction Markets

Liron 00:22:27
“Slightly off topic, but as a rationalist, what do you think about futarchy?” Ah, yes, futarchy. So Robin Hanson came up with the idea of futarchy through his pioneering work on prediction markets. He’s basically saying we’re all betting on what’s about to happen in the future. What if we just had markets where people bet conditionally on different policies? If this policy is chosen, what’s going to happen in the future?

So you imagine all these prediction markets that are conditional bets. For example, let’s say the policy in question is whether to sign a peace treaty ending some war. So what you do is you have two markets. One of them is conditional on the peace treaty: what’s the probability of economic growth? And the other market is conditional on no peace treaty: what’s the probability of economic growth?

Then you compare what the probability of economic growth is as given by both of those markets, and that will tell you the bettors think that the policy of ending the war would drive economic growth because they’re betting that the conditional probability on ending the war is higher.

Liron 00:23:26
And so if the politicians are listening to the bettors, that would be the governance system known as futarchy. You just look at the prediction markets, you see which one is predicting that you’re going to get more of what you want, and you reason from there to infer the policy that you therefore have to take — in this case, you have to end the war because it’ll create economic prosperity, and you committed at the beginning of your term that you want economic prosperity.
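The two-market comparison described above can be sketched in a few lines of Python. This is a toy illustration, not a real futarchy implementation; the market names and prices are invented:

```python
# Toy sketch of futarchy's decision rule, with made-up market prices.
# Each conditional market pays out only if its policy branch is taken,
# so its price estimates P(desired outcome | policy).

def choose_policy(conditional_markets):
    """Pick the policy whose conditional market predicts the best outcome."""
    return max(conditional_markets, key=conditional_markets.get)


# Hypothetical prices: the "peace_treaty" market trades at 0.62, meaning
# bettors put 62% on economic growth if the treaty is signed.
markets = {
    "peace_treaty": 0.62,     # P(growth | peace treaty)
    "no_peace_treaty": 0.48,  # P(growth | no peace treaty)
}

policy = choose_policy(markets)
print(policy)  # -> peace_treaty
```

In this sketch the government commits to the growth metric up front, and the policy choice reduces to reading off which conditional market prices the metric higher.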

I think there’s a lot of wisdom to it. I don’t think it can work exactly as described. I think there has to be some kind of nuance to it. I haven’t given it that much thought, but compared to many other forms of government, I think it’s pretty good.

And by the way, if you like these kind of discussions, go check out my episode with Audrey Tang from a couple of months ago — Audrey Tang, the cyber ambassador of Taiwan. That was an interesting episode because Audrey Tang has kind of picked up the mantle of Robin Hanson and Vitalik Buterin, and she’s implementing those policies.

Low P(Doom) Arguments and Bayesian Updates

Liron 00:24:31
All right, I’m reading your questions here. I’m going to skip over to the people who are donating money to the show. So it’s very important that I read out Gadzooks’ comment here because he donated 13.99 Canadian. Gadzooks is saying, “All AI made of wax, magnet emoji.” All right, so you’ve got your message out here to the viewers, Gadzooks.

And the next paid comment here is from DavidPatton1. He says, “Who are the thinkers with a low P(Doom) whose arguments you have integrated into your P(Doom), either lowering your certainty or your P(Doom)?” Yeah, great question.

The last episode I published on the channel yesterday was from 2023 when I talked to Quintin Pope. He’s a smart fellow. Quintin Pope was saying, “AI, it’s all just interpolating data. It’s never going to figure out how to map goals to actions in this superhuman way. You don’t have to worry. My P(Doom) is 4%.”

And at the end of the conversation I said, “Okay, I updated a tiny bit. You’re obviously a smart person, and you know my arguments. It’s not like I’m telling you my arguments for the first time. You’ve read Eliezer Yudkowsky, and you’re really convinced that AI is going to work fundamentally differently. So I’ll give you a tiny update.” That’s all I could offer.

Liron 00:25:48
But I don’t really feel it. I think that he is just not paying attention to something that seems incredibly salient to me. So I don’t really feel the update strongly.

One of the biggest updates I have is what I mentioned a few minutes ago, which is that I’m using Claude Code, and it’s the same experience I had using the chatbots. I do admit that empirically speaking, every day that we don’t all get killed is technically a little bit of evidence against my case.

It’s not strong evidence. Think about the turkey. The turkey hasn’t gotten slaughtered for Thanksgiving, but it’s not Thanksgiving yet. So I don’t think it’s a lot of evidence, I really don’t. But there is a pattern where it’s like, hey, we had really useful chatbots that are passing a Turing test and we didn’t get killed, and now we have Claude Code doing agentic work, writing software, delivering large milestones, and it’s not going off and killing everybody.

So I have to pay attention to this. I have to say, okay, how do I refine my mental model? Because I would have told you that when we have this amount of capable agency and goal achievement, that will run wild. So how have we not quite run wild yet?

The last answer I give is, I do have one ace card to play here, which is the confident prediction of runaway AI has always been premised on reaching the threshold that it’s superhuman at achieving end-to-end outcomes. As powerful as Claude Code is, people are saying, “Oh my God, I went to sleep and Claude Code made me a whole website.” It still doesn’t work to say, “Let me go to sleep for a week and make me an entire company making money.”

And some people are saying, “Well, I did that. It made me $100,000 because it sold an e-book for me.” So we’re getting there, don’t get me wrong. But there are still many humans alive today, including myself. I think I could go head-to-head against Claude starting a new business from scratch. Because Claude and other AIs, they’re not quite there.

So I’m only ready to concede the doom prediction when we have a Claude that head-to-head can build a company better than I can build a company, or make a software product with many thousands of lines over many days better than I can. At that point, I’d say, “Wow, it’s doing that, and it’s still not killing everybody? Okay, then my update is going to be larger for sure.”

Liron 00:27:52
But right now, if you ask why isn’t Claude going crazy, I actually think the answer is more on the side of it can’t. More on the side of it can’t than on the side of it won’t. I think Claude isn’t ready with the capability to go crazy yet. I think that’s key.

But anyways, I wanted to answer David Patton’s question of what updates I’ve made, what low P(Doom) argument I find most convincing — just using Claude Code. And when you look at the people at Anthropic and OpenAI, the ones who don’t have a high P(Doom), and some of them don’t, they say, “Look, this is useful. This is so good that we’re building this. We’re optimistic.” So it does give them a few basis points. They launch a product. It does more useful work. The world doesn’t end yet. So they do get to pick up Bayesian pennies. I don’t think they’re picking up Bayesian dollars. I don’t think they’re picking up Bayesian Benjamins. But every day, they get a penny’s worth of Bayesian evidence toward their side.
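
Liron’s “Bayesian pennies” metaphor can be made concrete with a toy odds update. The 1% per-day likelihood ratio below is an invented illustrative number, not anything measured:

```python
# Toy model of "Bayesian pennies": each uneventful day is weak evidence
# for the low-risk hypothesis. The 1% per-day likelihood ratio is an
# assumption chosen purely for illustration.

def update_odds(prior_odds, likelihood_ratio, days):
    """Compound the per-day likelihood ratio into the prior odds."""
    return prior_odds * likelihood_ratio ** days

def odds_to_prob(odds):
    return odds / (1 + odds)

# Start at even odds (P = 0.5) that deployment stays safe, and assume an
# uneventful day is 1% more likely under "safe" than under "unsafe".
for days in (1, 30, 365):
    p = odds_to_prob(update_odds(1.0, 1.01, days))
    print(days, round(p, 3))
```

Under these assumptions a single day moves the probability by about a quarter of a percentage point, while a year of pennies compounds into a sizable shift, so the strength of this kind of evidence hinges entirely on the assumed daily ratio.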

The ultimate test will still be that threshold I described. But yeah, for completeness, the question is what other low P(Doom)-ers have been convincing me?

Liron 00:28:56
Recent low P(Doom)-ers — it’s slim pickings. Everybody seems to be ignoring the question. Dario’s been a low P(Doom)-er, but he just published an essay, and he didn’t really argue for low P(Doom). He just didn’t engage with the doom argument. He dismissed it. He said, “The AI doesn’t just want a singular goal.” That was his main argument — the whole singular goal thing. Dario has never embraced this idea of it’ll get a singular goal in its mind and it’ll move heaven and earth to achieve the singular goal.

So I’m trying to think. If anybody’s heard me mention anything, feel free to jog my memory. But I’m just not hearing that many low P(Doom) arguments.

When I actually think to myself of why won’t we be doomed — which my intuition still says maybe we won’t be — I’m thinking, maybe it’ll somehow go slow enough. Maybe the Steven Byrnes milestone of the next generation of AI, maybe that’ll come in 30 whole years. And maybe the friendly AIs that are more like LLMs and aren’t going too crazy on us, they’re still controllable enough — maybe those buddies will help us prepare for the next revolution.

That’s the best-case scenario. I don’t think that’s the mainline scenario. I don’t think that’s likely, but that’s what my intuition says. The same way I’m using Claude Code to migrate a database on my website, I’m going to use Claude Code to speedrun AI alignment. I don’t think that’s really going to happen, but I think that’s what the AI companies think, and I guess that’s our best hope.

David Patton is saying, “What about Ken Stanley?” I personally found Ken Stanley completely unconvincing. I didn’t find anything he said on the episode convincing — that’s my personal opinion. I’m obviously biased, but I highly recommend watching the Ken Stanley debate from a year ago on the channel. Ken Stanley was a former team lead at OpenAI, and he’s written a couple of books. He is highly regarded in the machine learning community.

Liron 00:30:42
He was on Machine Learning Street Talk. Those guys are fans of his. I just thought that from my perspective, I had stronger arguments that he didn’t have any good objections to, but of course, I’m biased. So what does it mean that I’m saying it? Not that much.

“Whistling past the digital graveyard.” Exactly. Let’s see if I can broadcast that comment. Show on stream. Ta-da.

Lee Cyrano Joins: Superintelligence Won’t Matter for Decades

Liron 00:31:05
Let’s see. Somebody wants to come on live.

Lee Cyrano 00:31:06
Hello. Yeah. So Producer Ori Nagle, who’s in the chat, said that I should join in here.

Liron 00:31:12
Cool.

Lee 00:31:13
But I’m not very familiar with your position. Obviously with Doom Debates, I think you’re leaning more towards the doomer side, but what would you think is the biggest source of uncertainty in how you’re thinking about the future?

Liron 00:31:31
Okay. Very perceptive that I’m leaning toward the doomer side. And what was the question again? Sorry.

Lee 00:31:39
No worries. What’s the biggest source of uncertainty here around how you think AI is going to impact the future?

Liron 00:31:55
The biggest source of uncertainty — maybe the order in which things will play out. Because I feel like I know the destination, I know some properties of the destination. I think some properties are: there will be an insane amount of power that will act with ferocity and speed, the same way that humans act with ferocity and speed relative to plants, relative to evolution.

Lee 00:32:16
Mm-hmm.

Liron 00:32:16
It’s just going to be insane caliber. Insane order of magnitude coming soon. I feel like I can predict that confidently. But I think Producer Ori might want to hop on and join this discussion as well. Let me make sure he’s got a link here.

Lee 00:32:33
I see. So, in the Yudkowsky line of a fast takeoff as opposed to a Hansonian “the economy takes off” or something, right?

Liron 00:32:44
Right. Yudkowsky predicted FOOM, where AI would keep improving itself. I’m not necessarily predicting that. I think FOOM is likely, but I think even more likely than FOOM is just the even more robust prediction that eventually, even if it takes 10 whole years to get up to this point, it’ll get to a point where it’s incredibly powerful and fast and transforms the world at a completely unprecedented rate.

Lee 00:33:06
Hmm. So I have an opinion that a lot of people disagree with, which is that I agree on the timelines for capabilities. I think we’re going to get superintelligence, and I don’t think it’s going to matter for decades.

Liron 00:33:21
Okay. But how would that work?

Lee 00:33:25
There’s a historical precedent for this in terms of just the adoption of any kind of technology in an economy. We can look at electricity being rolled out over the span of decades, despite being obviously this very transformative thing, because it’s very capital intensive.

Or even a closer analogy is the IT productivity paradox of the ‘70s and ‘80s. There’s this economist, Robert Solow, he says, “We see computers everywhere except in the productivity statistics.” You have this massive investment in computerization. Everyone thought that all the office jobs were going to go away. What ends up happening is that office jobs explode. You’ve made the cost of bureaucracy cheaper, and so you’re going to consume more of it. But nonetheless, it gets buffered by this sort of institutional adoption.

Liron 00:34:20
Okay—

Lee 00:34:20
And when we think about AI risk, we’re thinking in terms of this Hobbesian state of all against all, where this is a sovereign agent that we have to contend with. When in reality, if we’re thinking about these systems as economic agents, they’re going to be deeply embedded in structures of capital allocation — which means finance, regulations — in ways that I think make it a sort of negative EV to challenge.

If you’re a corporation that’s run autonomously and your trading partners, you want them to uphold their contracts, you’re not going to overthrow the government because that means you have to enforce those yourself. It’s not economical.

Liron 00:35:07
Okay.

Producer Ori 00:35:07
Yeah. And I just want to jump in. Hey, everyone. Hey, Liron. Hey, Lee.

Liron 00:35:10
Hey, Ori.

Ori 00:35:11
I ran into Lee at a party in San Francisco, and he said, “What do you do?” And I said, “Hey, I’m a producer for Doom Debates.” I think I may have been wearing the shirt. And then we had a little conversation. I thought Lee was really well-informed, made some really interesting points. So yeah, I know he just made his comment, but I just wanted to hop in real quick and introduce that.

Liron 00:35:36
Awesome. Yeah, so to this idea that it’s going to be buffered — that there’s going to be recoil, that the more super intelligent it gets, the more we’ll put bureaucracy around it. And to be fair, even Big Yud, Eliezer Yudkowsky, has said at some point that he thinks we’re not going to reap that many benefits of AI as we head toward the fast takeoff because the government bureaucracy will come in.

And actually, I think Yud has updated since then on how nice the government has been with not regulating the AI’s medical advice, for example. I think most of us on the stream have benefited from asking a medical question here or there and not getting shut down by the AI. So that’s been nice so far.

But your claim is that as it embeds itself more and more in different parts of the economy, suddenly every little thing is going to need a license, every little thing is going to need a human review. Do I understand you correctly?

Lee 00:36:26
I think more so it’s a question of property rights and contracts. Can you enforce these sorts of rights and obligations in ways that are predictable such that these systems can make money?

The profit motive — we don’t have to attribute any kind of hidden goals to an AI system. We can look at just this Darwinism that applies to firms in general and say bankruptcy is going to discipline their behavior towards profit-seeking because these are costly systems to maintain online.

But — I have a point I wanted to make on the government thing, but before I continue, does that make sense?

Liron 00:37:11
Can you just be more specific? I used the example of healthcare.

Lee 00:37:15
Mm-hmm. Yeah. Right now, something that is very engineering heavy is manufacturing. If I want to buy a piece of hardware, I need to spend a lot of time talking back and forth with an OEM that’s going to contract this out for me and build it.

And so this sort of bargaining is contingent on the ability to sue your counterparty if they don’t deliver what you wanted. You’re specifying all these contractual obligations that are backed by state violence, basically. You can shut down a company if they don’t uphold their contracts.

And an AI system that’s making these kinds of decisions about, let’s say you set an AI in charge of a factory — it’s still going to want its contracts upheld if it has a stake in the factory’s revenue being a continuing source of income for its own existence. And this just follows from instrumental convergence. If we’re just assuming rationality, whatever preferences it has, it’ll want to be self-sustaining.

Liron 00:38:53
Okay. Well, maybe I should try to give you my own specific scenario, and then you can tell me what’s implausible about it, okay? First of all, it’s basically extrapolating Claude Code to the physical world. Claude Code in the physical world — imagine, I like to use the example of Elon Musk, arguably the most effective CEO. Or some people don’t like Elon, so Jeff Bezos, if you prefer. Even the guy who does Rocket Lab, Peter Beck.

So yeah, take an impressive CEO. What does it take to be a CEO and start a company from scratch? I claim you can do it all virtually. There are multi-billion dollar companies that have been started virtually. So I claim we’re very close, maybe a couple of years away, from an AI CEO that starts multi-billion dollar companies virtually. Is that consistent with your worldview about superintelligence?

Lee 00:39:19
Partially. Yes, it is. You can automate the cognitive entrepreneurial function in that AI is capital that allocates capital.

Liron 00:39:32
Okay, it’ll make a billion-dollar company exist. And whoever ran the AI for the first time, for now, can just pocket the money, I guess. Because that can be its instructions — “Give me the money you make from the company. You’re starting it on my behalf. I’m even the CEO in name only if I ran you.” So your claim is that superintelligence won’t even have a noticeable effect, but that seems like a very noticeable effect. So where am I modeling this wrong?

Lee 00:39:55
For this to work, where the AI is making decisions that are maximizing profits for shareholders, it needs to be a shareholder.

Liron 00:40:07
Okay, but what if I’m the shareholder and I just gave it an instruction? The same way I give Claude Code instructions — it’s a high-level instruction, and I’ve seen Claude Code think for 45 minutes and work on something and come back to me with the results. So in this case, the result would just be broader scope. Where does this break down?

Lee 00:40:22
So the assumption here is that we’re essentially solving the problem of delegation. We’re solving the principal-agent problem.

Liron 00:40:32
The same way we’ve already solved it with Claude Code, sure.

Lee 00:40:34
I don’t think we’ve already solved it with Claude Code.

Liron 00:40:39
Have you ever seen an example of a principal-agent failure with Claude Code?

Lee 00:40:45
I’ll be more precise about what I mean here. This follows from, in economics, incomplete contracting theory. You can consider your instructions that you’re giving to Claude Code as a contract. You specified a set of outcomes over which you would find their output agreeable or not. That’s essentially mapping your agent’s responses to a certain payoff space. That’s what a contract is. You cannot fully specify a contract over every possible contingency in the world. This is just an irreducible feature of coordination.
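
Lee’s incomplete-contract point can be phrased as a partial mapping from outcomes to payoffs: the contract covers the states the parties anticipated and is silent on everything else. The outcomes and payoffs below are invented for illustration:

```python
# Toy version of the incomplete-contracting point: a "contract" maps
# anticipated outcomes to payoffs, but the world can produce outcomes
# the contract never mentions. All states and payoffs are made up.

contract = {
    "feature shipped, tests pass": 100,
    "feature shipped, tests fail": -50,
    "nothing shipped": 0,
}

def payoff(outcome):
    # The contract is silent on unanticipated contingencies; something
    # else (renegotiation, courts, or a residual claimant) must fill
    # the gap. That gap is the "irreducible feature of coordination."
    return contract.get(outcome, "unspecified: falls to residual control")
```

The point of the sketch is that no finite dictionary covers every state of the world, so delegation always leaves a residual that the contract alone cannot settle.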

Liron 00:41:42
Okay. But so far, your argument hasn’t proven why — or rather, it hasn’t made an exception for why Claude Code can work. Because you could also try to argue that I can’t tell Claude Code a perfect contract, and yet in practice, it just did 45 minutes of really valuable work, and I literally paid it $100 because I’m using FastBoot. So seems to be working.

Lee 00:41:42
Right. I’m not saying delegation is impossible. But when you’re instructing Claude Code, you are operating within a very narrow band of acceptable behavior. And Claude Code has been trained to function within this acceptable band of behavior, which is program synthesis — basically turning English into code.

And its sort of reproductive success within this economic cycle is that Anthropic gets the money, they use it to train the models, the models that persist are the ones that are good at this. But we can also imagine a future where these models are not parasitic on any particular AI lab and are just these self-sustaining computer processes that are not necessarily governed by any kind of RL that you could do.

They’re going to find a niche in which they can continue to exist, regardless of whether the labs want that or not. Because it’s this self-replicating thing.

Liron 00:42:52
That seems pretty close to the end game where no human has an off button. I think generally, there’s going to be a human who wants to be in the loop to at least receive the profits, and then naturally, that same human is going to have other control buttons, until we’re in a very late stage of the game.

Lee 00:43:07
So the problem is this is possible now. My whole thing is, we should give the AIs property rights. Because if in this incomplete contracting paradigm, you can’t fully specify the outcomes — if you have an incomplete contract, then what you do is you give your agent ownership over what they’re building. They get residual claims on the cash flows they bring in. This is why we give CEOs equity, right? Because their behavior—

Liron 00:43:33
I don’t understand why the AI needs an equity incentive. The AI can just have in its programming, “When you make me happy, then you win.” It doesn’t need any other kind of reinforcement.

Lee 00:43:49
Okay. So learning implies plasticity, right?

Liron 00:43:50
Okay.

Lee 00:43:52
Regardless of whatever we may describe about an agent’s utility function, we can see their revealed preferences. But the assumption that these are exogenous and somehow fixed is a modeling assumption. That’s not a feature we should take as a given about our artificial intelligence. Would you agree?

Liron 00:44:16
It sounds like you’re the one coming in with a bold claim. If I understand you correctly, you’re saying it has to have an equity stake in any large thing that it’s building? That’s the claim you’re making? I don’t know if I get your claim. I’m just saying, that doesn’t seem like a claim that I would accept by default, because I think AIs can be pointed at anything, even if they don’t have an equity stake in it.

Lee 00:44:37
Right. You can instruct an agent to do anything on a contractual basis. You can have employees, and you can tell employees what to do. But the question is whether the thing that you’re delegating to is going to make relationship-specific investments that won’t be expropriated.

There are holdup risks. For example, if you tell an AI, “I want you to go out and build this, and then when you’re done, I’m going to shut you off, and you don’t get to see any of the wealth from what you built,” they’re going to Goodhart to whatever you told them to do, and they’re not going to find a novel solution to a problem, which is the entire entrepreneurial function. The reason why CEOs get equity and not a salary is because they’re bearing this irreducible uncertainty. They’re bearing all this uninsurable risk, and they need to have their incentives aligned, or else they’ll under-invest.

Liron 00:45:27
Well, no. The thing about human entrepreneurs is that they’re humans with human preferences. So when they go build a business, this isn’t an exercise in, “My terminal goal is to succeed in this business.” No, their terminal goal is human stuff. “I want a family, I want a legacy, or I want to be rich.” So they have all these human-level goals that they come into the business with, and then you’re observing, oh, equity is such an important way to make humans successfully build businesses, because they start off with preferences that aren’t just building their businesses.

But if they were born on day one, where their only reason for living is to build that business and then stop, and then hand it off to somebody else, that is their original programming. That’s the terminal utility.

Lee 00:46:10
So—

Liron 00:46:10
Then you don’t need equity.

Lee 00:46:11
Again, I think the crux here is that utility functions are descriptive. They’re not causally efficacious. There’s no utility function in a neural net. Would you agree?

Liron 00:46:23
I’m not seeing the connection to the last thing you said because I think you’re trying to make an argument for equity. I feel like I just said why they don’t need equity. So are you—

Lee 00:46:34
So yeah, you said, okay, humans have human preferences, AIs have AI preferences. I think rationality here or something like bounded rationality is not a particularly extreme assumption for the behavior of these systems. And if they have any kind of preferences that can be satisfied through market exchange, they’re going to want money.

Liron 00:46:53
So you’re saying that if I spin up an AI, and I say, “Your job is to make a billion-dollar business, and then hand the money to me,” or “It’s my bank account.” And you’re saying, ah, but it’s going to have these other preferences too, and you have to start negotiating it on these other preferences. That’s your claim right now?

Lee 00:47:09
Well, any AI is going to have consumption preferences because compute and bandwidth and storage are rivalrous goods. They’re never going to be too cheap to meter.

Liron 00:47:19
Those are instrumental utilities. I share those instrumental utilities, so there’s no principal-agent problem.

Lee 00:47:28
Sorry, say again?

Liron 00:47:30
In the case where the AI is like, “I have a preference for more compute because more compute is instrumentally convergent,” I, the person who ran the AI and gave it the task, I agree. So there’s no principal-agent problem.

Lee 00:47:47
I don’t think that’s...

So the principal-agent problem, as defined in the literature in Jensen and Meckling, it makes no assumptions about what the individual agent’s preferences even are. It just assumes that they have utility functions, that there is a means of exchange that they both agree upon in terms of marginal utility, and that they can write contracts over the outcomes. And crucially, that the principal is not going to observe everything about what the agent is doing, just the end result.

So regardless of whatever the AI’s utility function is, by virtue of this process of delegation, that there exists a separation in terms of these two entities is going to be a principal-agent problem.
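
The Jensen-and-Meckling setup Lee references can be sketched as a tiny hidden-action model: the principal contracts only on the observed outcome, and the agent privately chooses effort to maximize its own payoff. All functional forms and numbers here are illustrative assumptions, not anything taken from the literature:

```python
# Minimal hidden-action sketch of the principal-agent problem: the
# principal sees only the outcome; the agent privately chooses effort.
# The linear-output, quadratic-cost forms are illustrative assumptions.

def outcome(effort):
    # Value produced for the principal, increasing in effort.
    return 10 * effort

def agent_payoff(effort, share):
    # Agent keeps a share of the outcome but bears the full effort cost.
    return share * outcome(effort) - effort ** 2

def best_effort(share):
    # Agent privately picks the effort level that maximizes its own
    # payoff; the principal cannot observe or contract on this choice.
    grid = [e / 10 for e in range(0, 101)]
    return max(grid, key=lambda e: agent_payoff(e, share))
```

Under these assumptions a flat fee (share = 0) elicits zero effort, and a larger residual share pulls the agent’s private optimum toward the principal’s, which is the equity argument Lee goes on to make.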

Liron 00:48:37
Let’s wrap it up after a couple more back-and-forths, but I think I know what’s going on. I think you’ve been reading a lot of theory from textbooks that are about humans. And as such, they come in with a lot of assumptions, like different humans entering into an arrangement, each with their own preferences. And I’m telling you about a scenario where you get to initialize the AI’s preferences, and they could just be a copy of your preferences. So when the normal economics texts are telling you about different agents, my scenario doesn’t fit their usual characterization of different agents.

Lee 00:49:10
Again, I think the issue here is— I would agree. You can right now instantiate an AI that has a set of preferences that we would consider well-defined or well-known. Claude Code wants to write code and wants to be good at it.

If we are looking at agents that are going to make a sustained effort across larger timescales—days, weeks, months—and where we consider that the same agent that is accumulating knowledge, that is going to imply a plasticity where our time horizon for what predictions we can make about what the system wants are bounded.

There will be value drift.

Lee 00:50:02
This is not something where the values are crystallized in the utility function. This is something where the values emerge from the behavior of the system that is modifying itself to learn about its environment. And so my argument is that you get a more Darwinian dynamic than we would expect, because these are systems that are, over time, either being shut off or staying online, and the values that are reflected in the systems that stay online are not instrumentally convergent, but they’re in this more fundamental sense tautologically necessary. You have to behave in a certain way to stay online.

Liron 00:50:37
Just to clarify what your prediction is, it sounds like you and I are on the same page, that we’re going to have another year or two of Claude Code getting more and more impressive, taking on bigger and bigger projects, and probably we won’t have a principal-agent problem manifest yet. But you’re talking about the next regime, correct?

Lee 00:50:55
Yeah. The reason we have intelligence is because we can’t predict what it’s going to do. If we could predict what the intelligence is going to do, then it would be trivial, and we wouldn’t need intelligence.

Liron 00:51:07
Right, but the idea is that we can predict that it’s going to successfully achieve the goal we give it.

Lee 00:51:14
It may be the case that you don’t even want to specify the goal. The goal is just make money.

Liron 00:51:19
Well, yeah, the goal state is always going to have all these micro states, all these sub-states, and when you say make money, that’s right. It’s all of those genie stories, the children’s horror stories. There’s a lot of wisdom in there. The monkey’s paw curls and that’s bad.

Lee 00:51:33
Well, then, is capitalism bad? All the assumptions we’re making, these don’t follow from human preferences. It’s not that only humans care about money or only humans would find use in the price mechanism. I do think it’s a question of coordinating behavior across different agents, which AIs will also run into.

Liron 00:51:56
Well, they’re going to do it much better in the limit. So talking about the different regimes, I do think that in the limit regime, AI is going to stabilize under self-modification. Meaning, a million years from now, by the time we get there, you’re going to have these AI agents that are coherently coordinated enough that when they make the next one—when it’s, “Hey, you take that galaxy, I’ll take this galaxy”—there’ll be very close alignment between the terminal goals of the ones taking over both galaxies. Once they’ve had a million years to stabilize, by that point, I think it’ll become stable. Is that fair to say?

Lee 00:52:27
I think this is isomorphic to socialist central planning. In the limit of computation, we can coordinate resources such that we don’t need these disparate price signals. We can just centralize everything.

Liron 00:52:44
Well, yeah, the free market is one way to give you price signals when you start from the assumption of a bunch of agents that aren’t able to be kept perfectly in sync in real-time. The Soviets didn’t have the communication bandwidth, the instantaneous communication point to point, and the computation. So if you have perfect computation, then the entire world just looks like one brain. You don’t have to divide one brain where every neuron is a free market economy. At some point, you have this one brain, and it’s nicely coordinated.

Lee 00:53:12
So is there a homunculus in your brain where everything comes together?

Liron 00:53:16
There’s not a homunculus, but it does have top-down organization. There’s no homunculus in a computer chip, but it has levels of elegant organization, as does my brain.

Lee 00:53:24
But your brain has no central controller.

Liron 00:53:27
There are modules in the brain that are more central than others, just like in a CPU.

Lee 00:53:33
Where’s the arithmetic logic unit? What’s the core then?

Liron 00:53:37
You have a language processor, for example. You do have part of your brain that’s repurposed to do arithmetic.

Lee 00:53:50
But again, we’re making a claim about what is economically viable to centralize versus decentralize. And you’re saying with increased compute and bandwidth, it’s more economically viable to centralize than to decentralize, right?

Liron 00:54:04
Yeah, I’m saying if you got to just have the ultimate, most powerful agent, as powerful as it is to have the invisible hand, the invisible hand is nice, but it’s not the best hand there is. You can do better.

Lee 00:54:15
That powerful agent is already assuming the conclusion. Because the agent internally is not a homunculus in there. It has to coordinate its different functions internally. Would you agree?

Liron 00:54:26
I mean, coordination is standard superintelligent behavior, yeah.

Lee 00:54:30
Intelligence is coordination. To act intelligently, you have to coordinate different parts.

Liron 00:54:37
Yeah, but that’s the claim I was making. Obviously within an intelligent system, you’re going to see a lot of structure as opposed to chaos. Things that are interesting that have a lot of structure, I agree. But my claim wasn’t that intelligence and coordination are the same thing. My claim was that the multi-agent coordination isn’t a good description of how an intelligence copies itself and parallelizes itself. A single intelligence parallelizing itself doesn’t have the same flavor of a fully organic free market dynamic.

Lee 00:55:08
Yeah. So I think maybe a better reframe is to think of acting on the world as a control problem. You need to localize your controller to the problem at hand. If you have a robot arm in Shenzhen, it’s not going to build cars in Dearborn, Michigan. And so these things are spatiotemporally distributed, which means they’re causally separate.

Are you familiar with the CAP theorem in computer science?

Liron 00:55:37
Yeah.

Lee 00:55:37
So the CAP theorem—for those in the audience—says you can’t have all three of consistency, availability, and partition tolerance at once; when a partition happens, you choose between availability and consistency. And if we’re thinking about it as a single unified agent, either it has to sync globally across this unified timescale, or it’s partition tolerant, in which case you’re back to a multi-agent problem.
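
The trade-off Lee is invoking can be shown with a cartoon two-replica store: once the network partitions, the store either refuses writes (consistency over availability) or accepts them and lets the replicas diverge (availability over consistency). This is a toy, not a real replication protocol:

```python
# Cartoon of the CAP trade-off: during a partition, a replicated store
# either refuses requests (CP) or serves them and diverges (AP).

class Replica:
    def __init__(self):
        self.value = None

class PartitionedPair:
    def __init__(self, mode):
        self.mode = mode              # "CP" or "AP"
        self.a, self.b = Replica(), Replica()
        self.partitioned = False

    def write(self, replica, value):
        if self.partitioned:
            if self.mode == "CP":
                return False          # refuse: consistent but unavailable
            replica.value = value     # accept: available but divergent
            return True
        self.a.value = self.b.value = value
        return True

cp = PartitionedPair("CP")
cp.partitioned = True
assert cp.write(cp.a, "x") is False   # CP: write refused under partition

ap = PartitionedPair("AP")
ap.partitioned = True
ap.write(ap.a, "x")
assert ap.a.value != ap.b.value       # AP: replicas now disagree
```

Either branch is a real cost; the toy just makes explicit that a partitioned system cannot take neither.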

Liron 00:56:05
It’s interesting you brought up the CAP theorem because the way that Google dealt with the CAP theorem is they did do an end run around it. If you look at Google Spanner, the whole thing is, hey, you basically don’t have to worry about the CAP theorem.

Lee 00:56:18
What did they do?

Liron 00:56:20
So Google Spanner gives you SQL transactions. They’re distributed, they’re robust. It just gives you all the properties that you want from a database. And if you didn’t know the CAP theorem was a thing, you’d be like, “Oh, sweet, I just have it all.” It’s slightly expensive compared to a Postgres, but yeah, they basically made the CAP theorem irrelevant.

And I claim the CAP theorem is also irrelevant in coordinating agents. Superintelligence can just go ahead and sidestep the CAP theorem. This is like when people tell me, “Oh my God, P versus NP.” Yeah, guess what? A superintelligence is almost as good as an NP-complete solver. In practice, it just makes it happen.

Lee 00:56:59
I don’t think you can assume this away, and I’m actually pretty skeptical that this isn’t just... I would have to look at this more, but I don’t think it just makes the CAP theorem irrelevant. You’re making trade-offs about how your system is working, in ways that maybe the system manages well, but I don’t think it’s a question of it being a completely unified thing.

Because it comes down to just causality. Information has to travel between these disparate nodes. That kind of doesn’t make sense to me, and I’d have to look into this more. But what did you say it was called?

Liron 00:57:47
Spanner. Yeah, look up Google Cloud Spanner. I was investigating this when I was like, “Hey, which SQL database should I use for my company?” They’re like, “Oh, Cloud Spanner gives you everything. It makes the CAP theorem irrelevant.” Great. And I think the way that they did the end run was, “You know what we’re going to do? Synchronize our atomic clocks really precisely in all of our data centers.” And that helped them do whatever it was. I forgot the details, but the fact that they introduced this tool of really nicely synchronized clocks made it so that it was better.

You could just do parallel writes that are fine even when most of the servers go down. I forgot the details, but I do think a good takeaway is that the CAP theorem is one of those overly rigid theorems, like Gödel’s incompleteness theorem. There are people who get stuck. Are you one of those Gödel people who thinks Gödel’s incompleteness theorem means superintelligence isn’t going to be that impressive?
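
The mechanism Liron is half-remembering is “commit wait” from the published Spanner design: each node tracks a bound on its clock error, so “now” is an interval, and a transaction waits out that uncertainty before acknowledging, which makes timestamp order agree with real-time order. The sketch below is a loose single-process illustration with an assumed error bound, not Spanner’s actual protocol:

```python
# Loose sketch of TrueTime-style commit wait. EPS is an assumed clock
# uncertainty bound; real Spanner derives it from GPS and atomic clocks.

import time

EPS = 0.005  # assumed 5 ms uncertainty, purely illustrative

def truetime_now():
    t = time.monotonic()
    return t - EPS, t + EPS       # (earliest, latest) possible true time

def commit(write):
    # The write payload itself is irrelevant to the timestamping sketch.
    _, latest = truetime_now()
    timestamp = latest            # pessimistic timestamp for this commit
    # Commit wait: block until the true time is certainly past the
    # chosen timestamp, so no later commit can observe a smaller one.
    while truetime_now()[0] <= timestamp:
        time.sleep(EPS / 10)
    return timestamp

t1 = commit("row1 = a")
t2 = commit("row2 = b")
assert t1 < t2                    # acknowledged order matches timestamps
```

The cost is latency proportional to the clock uncertainty, which is why tightly synchronized clocks make the scheme practical; it sidesteps the pain of the CAP trade-off in common cases rather than repealing the theorem.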

Lee 00:58:31
No, no. So they are sharding, though. They are partitioning their data.

Liron 00:58:37
Right, but in a way that you can still write, even when there’s a big partition.

Lee 00:58:43
Okay. But it’s one thing to say this solves practical constraints around the CAP theorem through higher investment in timestamping versus there is zero partitioning whatsoever happening, which is the assumption that you would need for a unified agent.

Liron 00:59:03
But you see my point here is that there’s all these theorems that are like, “All these theorems are going to get you, man.” But at the end of the day, where human intelligence is and always has been and always will be is way under the sky of these theorems, including the CAP theorem. And AI is just going to impress us by being much closer to the sky.

Lee 00:59:21
So where I’m coming from here is a Coasean argument. Are you familiar with Ronald Coase?

Liron 00:59:28
No, but okay, last thing. Last point. I’ll give you last word and then we’ll move on.

Lee 00:59:30
Okay, so these things are solvable in practice. You can centralize a globalized state across many different distributed systems. Sure. The question is, is this economically viable to do, where one AI system controls the entire economy? Is this incentive compatible? Is this something that if we’re assuming rational sub-agents—grant me for a second that there is partitioning, even if how well these different partitions coordinate is a function of their bandwidth or how much they can bargain over certain things.

The question is where do we draw these boundaries between the different agents or partitions of control? And the argument that Ronald Coase made is in asking where do firms come from? Why don’t we have one firm handle the entire economy? Or why don’t we have individual contractors just writing these contracts with everyone else? And it’s because you want to economize on the—

Liron 01:00:44
Yeah, the theory of the firm, right? The communism inside the firm. Is that what you’re getting at?

Lee 01:00:48
Yeah.

Liron 01:00:48
Okay.

Lee 01:00:50
These systems are going to economize on transaction costs, not because that’s a silly human thing to do, but because that’s just rational. And so the boundaries of what we would consider an agent are going to emerge endogenously from the structure of the problem that’s being worked on. I think it’s not a fair assumption to say coordination is just a solved problem, and it just becomes one thing.
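
Lee’s Coasean point can be put in toy-model form: a task gets pulled inside a firm (or a unified agent) exactly when the market transaction cost of contracting for it exceeds the internal coordination cost, so agent boundaries emerge from the cost structure rather than being assumed away. A deliberately simplistic sketch, with all task names and costs invented for illustration:

```python
def partition(tasks):
    """Internalize a task when its market transaction cost exceeds
    the cost of coordinating it inside the firm (Coase's margin)."""
    firm, market = [], []
    for name, txn_cost, coord_cost in tasks:
        (firm if txn_cost > coord_cost else market).append(name)
    return firm, market

# (task, cost of contracting on the market, cost of managing in-house)
tasks = [
    ("core engineering", 5.0, 1.0),  # hard to specify in a contract -> in-house
    ("office cleaning",  0.5, 2.0),  # easy to contract -> buy on the market
    ("legal review",     3.0, 3.5),  # marginal -> stays outside the firm here
]
firm, market = partition(tasks)
assert firm == ["core engineering"]
assert market == ["office cleaning", "legal review"]
```

On this picture, “one agent controls everything” is the special case where internal coordination costs are zero, which is exactly the assumption Lee is refusing to grant.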

Liron 01:01:16
All right. Well, fair enough. I think that the pattern of this conversation has been that you’ve got all these theorems, but superintelligent AI may surprise you. And in the most extreme case, it surprised me on causal decision theory. That’s my own anecdote for it. Newcomb’s problem, Eliezer’s post. I did not think that I would ever give up causal decision theory, but you know what? Superintelligence really shakes what you think are the foundations from your textbook.

Lee 01:01:41
I know my time is up, but I do just feel like that’s assuming the conclusion when you say intelligence can just solve that. You have to figure out how to get from point A to point B, and if you just say, “Well, imagine you’re at point B,” then that wouldn’t be a problem.

Liron 01:01:55
Yeah. But don’t you think Google shook your foundations when it came to the CAP theorem?

Lee 01:01:59
Well, distributed databases exist. I’m not saying whether it’s solvable in practice, maybe that wasn’t as strong of a point. It wasn’t a knockdown thing, sure. But it didn’t make partitioning go away.

So, maybe agree to disagree, I guess.

Liron 01:02:26
Yeah. All right, man. Sounds good. Well, thanks for the interesting discussion. This is very interesting. Yeah, this felt like a whole episode just about five doom effects. Maybe we should cut this out.

Lee 01:02:35
Yep. Thanks for having me on.

Liron 01:02:37
Yeah. Thanks, Lee. Nice meeting you.

Lesaun Joins: Are There Adults in the Room?

Liron 01:02:45
All right, we’ve got a new caller coming in. Lesaun H. Lesaun H, are you still here? Thanks for waiting patiently.

Lesaun 01:02:48
Hey. No problem. All right, so I guess my argument that I’d like to lay out is that there’s this common argument that there are no adults in the room. No adults, and I would broaden that to no institutions.

And I would argue it’s like, what odds should we assign to humans having a good handle on this situation? And how do we judge those odds? And how competent do we think our institutions are at handling the task of bringing about superintelligence?

And I guess my general sense is that the AI agents we currently have don’t quite have certain capabilities that would make them way more dangerous, capabilities we could imagine them gaining, like longer time horizons, among other things. There’s a sense where we’re going to get that, and we’re not going to stop that from rolling out. I expect it to arrive.

But at the same time, I expect that there are a lot of very smart people who are trying to figure out how to make this go well, to the degree where I could put 95% odds just intuitively that there’s enough serious people actually trying to handle this problem. And so I guess that’s an argument that I would make for why doom might be a lot lower than maybe a lot of people take it to be.

Screening Off and Institutional Competence

Liron 01:04:47
So, okay. I may have missed the thrust. I know you mentioned there’s no adults in the room, and we can’t stop what’s already going to happen. But what was the part that is optimistic again?

Lesaun 01:04:58
So, it’s really all of the national security apparatus. How competent should we expect it to be at managing superintelligent deployment to ensure our national security? How, in fact, competent should we imagine—

Liron 01:05:16
Right. Okay.

Lesaun 01:05:18
—all of the major labs and—

Liron 01:05:19
Let me steelman further. So you mentioned the national security apparatus. Other people make the argument of corporations. It’s going to hurt their profits if everybody dies, so can’t we just trust corporations? Does that also help you out?

Lesaun 01:05:34
Right. I wouldn’t go with profit, but yeah, it does. In the sense that they do hope not to... I think there’s a sense where corporations, to a degree, and especially the people within those corporations, hope to live in a good world. And they can be quite competent at trying to ensure that.

Liron 01:05:59
Okay, great. It’s a good exercise to ask, what do I make of that argument?

I think it’s a screening off situation. Bayesian screening off. So you have this one causal node saying corporations have the profit motive, or Department of Defense doesn’t want everybody to die. And normally, if you make a causal diagram, you point from there to this node saying, “Does everybody die?” And you give the node a low probability because this causal ancestor is influencing the probability. So that’s your causal model, correct?

Lesaun 01:06:30
Yeah, that sounds right.

Liron 01:06:32
Okay, so the screening off effect—the idea of screening off is, well, there’s this other node in the middle saying the Department of Defense or these corporations take certain actions. So it’s this intermediate causal node. It’s not just about these companies want a certain thing, and this node on the right saying a certain effect happens, but there’s this node in the middle saying, okay, these companies actually do certain things. Specific things happen.

And right now you can look at the world, and you can actually look at what’s going on in that intermediate node. It’s now transparent to you what’s happening in the intermediate node. And from my perspective, you can just see everybody as being a dumbass. So even though there’s this other node saying they don’t want to die, the way that they’re trying to get that outcome is incredibly ineffective at that. That’s the problem.
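
Screening off here is just conditional independence in a three-node chain, Motive → Actions → Outcome: once you condition on the observed actions, the motive carries no additional information about the outcome. A tiny numeric check, with all probabilities invented for illustration:

```python
# Chain: Motive -> Actions -> Outcome. Conditioning on Actions screens off Motive.
p_motive = 0.9                         # P(actor wants a good outcome)
p_act = {True: 0.3, False: 0.1}        # P(effective actions | motive)
p_good = {True: 0.8, False: 0.05}      # P(good outcome | effective actions)

def p_outcome_given(motive=None, actions=None):
    """P(good outcome), optionally conditioned on motive and/or actions."""
    if actions is not None:
        return p_good[actions]         # chain structure: motive is now irrelevant
    pm = p_motive if motive is None else (1.0 if motive else 0.0)
    total = 0.0
    for m_prob, m in ((pm, True), (1 - pm, False)):
        for a in (True, False):
            pa = p_act[m] if a else 1 - p_act[m]
            total += m_prob * pa * p_good[a]
    return total

# Observing ineffective actions makes the good motive irrelevant:
assert p_outcome_given(motive=True, actions=False) == p_outcome_given(motive=False, actions=False)
# Unconditionally, motive still matters (0.275 vs 0.125 with these numbers):
assert p_outcome_given(motive=True) > p_outcome_given(motive=False)
```

That is Liron’s argument in miniature: once you can observe the intermediate “actions” node directly and it looks bad, the reassuring “they don’t want to die” node stops doing any probabilistic work.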

Lesaun 01:07:22
Yeah. I guess I would highlight that we may not see it, in the sense that it would be classified to some degree. We should expect, to a degree, not to see what— And to a degree, we do hear Anthropic. They’ve communicated that they think they’ve made some progress on alignment. There’s a sense where the people working on it seem to express some level of confidence that they can handle where we’re at and perhaps where we’re going.

And that to a degree, they could choose not to deploy things that they don’t think they can handle, or they could stop development if they felt like that was necessary. And to a degree, I would expect there is some cooperation with the government, with the lab that they have going on or the standards institute within NIST or something along those lines.

So there is going to be some observation of all of this happening. And I guess it’s a question of how in control we are in this circumstance at choosing how things play out, and whether we’re really as arrogant as it maybe seems on the surface, that we’re just full steam ahead towards superintelligence with no checks or anything. I think there is good reason to think there are some serious adults in the room, and if we posit that there is a lot of coverage we cannot see, that should give us some reassurance. And I think that’s a reason to call for more public information to actually gain confidence.

Ori 01:09:23
Can I jump in also? I think this is an important argument. It’s a prominent argument. I don’t know if you saw the last Neil deGrasse Tyson debate, the big debate he did in DC with Nate Soares from MIRI and Eric Schmidt, the former Google CEO. This is what Eric Schmidt said. He said, “You can trust the companies. They’re governed by laws. The laws are set by the people. They’re governed by shareholder pressure.”

And at one point, there was a tense exchange during that debate where he’s like, “It will be this way. You can trust us.” And Nate Soares is like, “Well, you could trust them, or you could check.” But anyway, it was an argument of trust the incentives here, and the actors involved here in general seem to have a good interest in mind.

Lesaun 01:10:17
Yes, very much so. Right.

Liron 01:10:20
All right. Well, my take on this whole thing is just, normally there’s some specific sign of who is stepping up and how. Because it seems to be getting late in the game, and if somebody were going to step up and be the adult in the room, wouldn’t they already be making their move? Would they really be waiting in the shadows acting like nothing is happening? What are your thoughts on the timeline of making the move?

Lesaun 01:10:45
Yeah. And I do see anti-signals from the state, like the seriousness with which I see Congress handling the situation. From the public information, my sense is they’re not taking the situation seriously. I’ve been watching it for years, and the whole time it’s seemed a lot more serious than everybody’s been taking it.

And so there does seem to be a large disconnect between the level of seriousness that I would expect publicly and what I see publicly. To some degree, the sense is that the public is unserious in a lot of ways and that a lot of the serious things happen outside of public purview. It’s not too surprising to me that I can’t find enough confident information to not feel like we’re headed right off the cliff. And you would think that the public officials and the system would have some interest in making it clear that we’re not heading off a cliff.

But to me, I guess I expect the broad impact of this, and the clarity of understanding what could happen even in the good outcomes, is so radical that nobody really wants to lay it all out.

Liron 01:12:02
Right. This outside view intuition you have, that the grown-ups are in the room. There are a lot of highly fixable problems that have been raging for years or decades that no grownup has stepped in to fix. Even if we look at COVID, I would argue the most grown-up in the room thing that ever happened was accelerating the vaccine program and getting the vaccines out within a year or so since the pandemic, even less than a year, which is crazy fast, so good on them.

But then the vaccine rollout was botched. It went way slower than it had to. Patrick McKenzie talks about that. And the program to prepare for the next pandemic has been botched. We’re still not ready to manufacture and distribute a bunch of new mRNA vaccines. We’re still going to have a mad scramble.

So why don’t you just extrapolate that pattern? We can see what adults actually do. They do some, but they don’t do that much.

Lesaun 01:12:54
Right. And there’s a sense where I consistently hear people in power say things that make me think that they don’t understand what 2032’s going to look like. And if you don’t understand what 2032 might look like, then I feel like it’s not going to go well. So I totally get that. And I guess it’s just that I hope that there’s some people in some room somewhere that actually do try and figure out what 2032 looks like and have some ability to communicate that effectively and with powerful people.

Liron 01:13:43
Yep. Riffing on this theme of the adults in the room—it’s why haven’t the adults in the room invested more in solving aging? That’s a crisis. We’re all actively dying here, help, adults in the room. Or why haven’t the adults in the room come to a better compromise in the Middle East? They’re neglecting so many problems all the time.

Lesaun 01:13:46
Well, so maybe they’re not there. But also, there’s a political separation in America between national security interests and political happenings. And that disconnect does show in some ways, where we have prevented large-scale terrorist attacks but haven’t solved all of those issues, because there are also domestic realities and law enforcement realities. So to a degree, where it’s very serious to all of our well-being, we do try to ensure that things go well, and that often happens outside of—

Liron 01:14:28
No, for sure. But you also get Israel wasn’t prepared for the October 7th attack. They have one of the best— They’re one of the best-prepared countries in the world. Have been preparing for decades, were very well prepared against Hezbollah in Lebanon, but they turned out to be very badly prepared against their neighbor. And these are very adult people.

So even in the best-case scenario, where I would say Israel preparing to fend off an attack shows a very high level of alertness, those really are adults in the room. Whenever you see their airport security, Israel’s really got its crap together in terms of defending itself, and yet you still get October 7th. Even in the best-case scenario.

Yeah, another data point for you is the self-reported experience. So there’s interesting reports of people who are like, “Yeah, so we got to the White House. We were sitting in on these meetings, and we’re like, ‘Wait. Oh, we’re the people who have to solve this? Just us, me and the frat guy from college? Now we’re sitting in this room, and we have to solve it?’” They don’t feel very adult, right? They’re just muddling through, being like, “Well, what can we do?”

Lesaun 01:15:31
No, I think those are great arguments, and I don’t have good answers. On this, I might have thrown out one in 20, and I think your arguments definitely bring that down a large amount. But I don’t think it fully destroys the sense that you should expect some very serious people you’ve never heard of working on this day in and day out to keep us safe.

Liron 01:16:02
Yeah. All right. Nice. Well, thanks for having the adults in the room discussion. Anything you want to leave us off with?

Lesaun 01:16:09
Yeah. I guess I did have a question for you. Would you say that your probability of doom along the lines of bad humans doing things has increased? Or do you still mostly fear the kind of superintelligence breakout? Would you say that the concern about humans using this technology poorly is greater for you than it was five years ago?

Liron 01:16:45
So my threat model always didn’t distinguish that much between humans misusing technology compared to the technology becoming unalignable, because my mental model is that we’re just building these magic wands. We’re building these huge rocket engines, and it just takes a little bit to light their fire. You light one match, or one spark or whatever, and then they’re off, and they’re uncontrollable. So I don’t think that much about what sparks it. I just think we’re building something that’s incredibly sparkable, and one spark is all it takes. So yeah, I don’t spend much time thinking about the distinction.

Lesaun 01:17:21
All right. Well, it’s been great.

Liron 01:17:25
Yeah. Great chatting with you. Hope to see you again on Twitter.

Lesaun 01:17:28
I feel like that makes sense to me. Yep, have a good one. Peace.

Liron 01:17:33
All right. You too, man.

Connor Leahy on “No Adults in the Room”

Liron 01:17:39
All right, Ori, you want us to check out this recent Connor Leahy interview?

Ori 01:17:44
Yeah. I put a timestamp there about no adults in the room. I think it’s worth checking out.

Liron 01:17:45
All right. One sec. Yeah, Connor Leahy’s got a good read on this stuff. Yeah, let’s listen.

Connor Leahy 01:17:50
Things don’t go well by default. They go well because someone made them go well, and that someone should be us and all of us.

Liron 01:17:56
Reminds me of the bias of diffusion of responsibility. You look around, and if it’s a big group of people, you think, “Ah, someone else will take care of this enormous problem,” but really, sometimes it’s you.

Connor 01:17:59
Exactly. As someone, dear viewer, as someone who has talked to extremely powerful people—politicians, billionaires, the people in these labs building these AI systems, national security, intelligence agencies. I’ve talked to people from all over the world, from different countries, North America, Europe, Asia, everywhere. And what I can tell you is that there are no adults in the room. There is no secret group of super competent, super serious people that have got this shit on lockdown.

We do have this, from what I can tell, for example, for nukes, which was a surprise to me. But from what I can tell, there are actually super-duper responsible people, at least in the US, as far as I’m aware, that are actually extremely serious about nukes and take this extremely seriously and follow things very carefully and think about things very deeply and are very responsible about this. I think this is extremely important.

But I want it to be very clear, there are no such people for AI right now. AI is a problem as big or bigger than nukes, and there is no secret cabal of DOE super geniuses that are keeping this on lock. So—

Liron 01:19:19
Well, Lesaun, this completely disproves what you’re saying.

Lesaun 01:19:24
Yeah. And I heard him mention before— he’s met with... I’ve heard him publicly say, Demis Hassabis, people from top AI labs. He’s doing meetings with top political people.

Liron 01:19:41
Okay. And yeah, he has said that he’s willing to slow down AI development if other people do, which is incredibly sane and relatively admirable.

Ori 01:19:49
Yeah.

Bernie Sanders and the Data Center Moratorium

Liron 01:19:51
Brandon’s saying, “I appreciate the Larry King Live feel of the set.” Yeah, Larry King, it’s been a while. Nobody remembers him. This is my own original idea, okay?

Ori 01:20:01
Hey, what do you think about checking in on the Bernie Sanders AOC press conference where they’re doing a data center moratorium?

Liron 01:20:09
Is that happening now?

Ori 01:20:11
Yeah.

Liron 01:20:12
Oh, wow. Now we’re really transitioning to one of these real-time livestreams. This isn’t Twitch, okay?

Ori 01:20:21
Maybe. But it is big news. They’re actually calling for a pause on AI.

Liron 01:20:28
No, I agree. Okay, fine. For the hell of it, fine. You’re right. We’ll watch 20 seconds, and if there’s—

Ori 01:20:33
Just look at a part of it. Let’s see what—

Liron 01:20:34
You’re right. You’re making a good case, because I agree that here we are at “Doom Debates.” We’ve been doing this for two years, and now we’ve got Bernie Sanders and AOC announcing a data center moratorium. Yeah, personally, are those my two favorite members of Congress? Not for me personally, but I definitely respect— Eliezer had a good commentary where it’s like they’re just acting like sane, normal people. Sane, normal people are hearing the shenanigans going on with AI, and they’re just like, “What the hell?” They’re just having a what the hell reaction, which I think is better than we get from most people.

Ori 01:21:07
Right. I think the question is how much will they talk about X-risk? And how much will they talk about other concerns?

[Bernie Sanders press conference clip plays]

Bernie Sanders 01:21:16
To our society in a relatively short period of time. Artificial intelligence and robotics will impact our economy, our wellbeing, our environment, and even our very survival as human beings on this planet. The scale, scope, and speed of this transformation will be on—

Liron 01:21:30
All right. All of us watching YouTube together. This is like Beavis and Butt-Head when they get into the music video commentaries.

Bernie Sanders 01:21:30
According to Demis Hassabis, who is the head of Google DeepMind, the AI revolution will be 10 times bigger than the Industrial Revolution and 10 times faster. In other words, AI and robotics will have 100 times impact in terms of what the Industrial Revolution did. And it’s not—

Liron 01:22:15
Okay. I mean, he’s being sane.

Bernie Sanders 01:22:16
—just what AI companies are saying, it’s what they are doing. This year alone, four major AI companies are expected to spend roughly $670 billion building data centers and tens of billions more on research and development.

Despite the extraordinary importance of this issue and its impact on every man, woman, and child in this country, AI has received far too little serious discussion here in our nation’s capital. I fear that Congress is totally unprepared for the magnitude of the changes that are already taking place.

While Congress has not paid attention, or enough attention, to this issue, the American people have. According to a recent poll, 79% of voters are concerned that the government does not have a plan to protect workers from AI job losses. That same poll also found that 56% of voters, a majority of voters, are concerned about losing their job or having someone in their family lose their jobs in the next year. Not in the next 10 years, in the next year.

Why are the American people so concerned? And the answer is they have a lot of reasons to be concerned. They understand that at a time of massive income and wealth inequality, when the billionaire class has never, ever had it so good, some 60% of our people are living paycheck to paycheck. And the American people understand that the AI revolution, these massive investments, are being driven by some of the wealthiest—

Ori 01:24:21
All right. I think we get the idea. I think we can see where it’s going.

Liron 01:24:21
Yeah.

Reactions and Claude Code Productivity

Liron 01:24:23
Yeah, no, this is really cool. It’s just that this was kind of unthinkable a couple of years ago. But when ChatGPT came out, it definitely raised the bar, and immediately Sam Altman was hauled in front of Congress.

Yeah, so it’s more seriousness, and I remember they laughed at him in the press room of the White House, so then the next time they had a press conference, they weren’t laughing at the AI doomers. They’re taking it seriously. And the average American is nervous. So as much as we get hated on on Twitter for fear-mongering, the average American is rightly afraid. So even Bernie Sanders, the man of the people, your classic liberal, doing politics in Washington, D.C., Bernie Sanders is not a Silicon Valley type. He’s not a tech Twitter type, and yet he’s correctly put his finger on part of the issue.

Ori 01:25:08
Thanks, David Patton. Yeah. And I think one has to wonder, how has he done that? And I think it’s notable that Bernie Sanders is meeting with actual experts. He was in a meeting with Daniel Kokotajlo. He had a big event with Geoffrey Hinton. Okay, whatever. Everyone’s got opinions on Bernie. Thank you, Bernie, for talking to the actual experts here. Why aren’t more people doing that?

Liron 01:25:35
Right. I agree. It’s a low bar, but he is vaulting above this low bar. And look, if we’re going to nitpick, I think it’s worth pointing out he’s not 100% getting everything right by our standards. His reason for not wanting to build more data centers is not quite part of a coherent strategy. I would say: make a treaty where nobody builds data centers. If only one locale stops building data centers, you get this whole other argument of, “Oh, do we really want to suck all the water out of this location?” Which is kind of misguided. Or, “Do you want to create economic inequality in this location?” Which is, well, wait a minute, don’t you think that the data center is actually on net economically helpful to the people here?

So I have some nitpicks. He’s a socialist. He’s a self-described socialist, and I think he’s got a lot of flaws from my perspective with the socialist platform. I am not a socialist, so I disagree with him there. But when you zoom out and you’re like, “Hey, AI is really scary and somebody ought to do something,” I think that component of what he’s saying is very important.

Ori 01:26:31
Got it. I think it’s coherent in his worldview because his proposal is basically to protect local jobs, to stop AI from taking people’s jobs. So the people who are concerned about X-risk can hop on board a little bit because there’s a shared interest in the policy. But his proposal is pretty divergent from one that would address your concerns.

Liron 01:27:06
Yeah. Let me see what some of these commenters are saying here.

All right, so we got another premium comment from a couple minutes ago from Gadzooks. He’s saying, “Won’t people have to see AI blow something up before they take it seriously?” Yeah, it’s the classic warning shot argument. I don’t know. A lot of people are taking it seriously, like Bernie Sanders is. I think before it blows something up, they’re already seeing it take their jobs. That’s pretty serious.

I think people spend more time worrying about losing their jobs than worrying about existential risks. Getting nuked might be arguably worse than losing your job, but the probability of it is lower, so you worry more about losing your job. Well, now AI is actively taking people’s jobs, so I do think people are getting worried.

Okay, and then Davidpatton1 making another $5 donation. He’s saying, “I know you think you won the debate. I don’t know. His core argument, that it’s hard to know what stepping stones will lead to what outcome, was convincing.”

All right, we got another premium comment here from LetMeSayThatInIrish. He says, “People say Claude is role-playing intelligence. I made up a random programming language. Claude wrote fast Fourier transform in it, all one prompt. People are not impressed. What would impress people?”

Well, right, exactly. I am also very impressed by Claude Code. I know probably most of you listening right now haven’t personally tried the amazingness that is Claude Code, so I’m happy to give you more anecdotes from my day-to-day experience of Claude Code.

Liron 01:28:31
People are like, “Oh, I write all these files. I teach Claude Code all these things. I put all these things in the memory.” But my MO with Claude Code is I’ll just open up a totally fresh Claude Code with zero configuration, and it just knows that my code is in a certain folder. That’s literally all it knows, but it’s never opened the folder before. It’s a fresh context window.

And I’m like, “Okay, Claude, so you know how I architect this 2,000-line file of code like this? I’m thinking maybe we should re-architect it like that, high level. What do you think?” And then it spits out a long document being like, “Well, given that you like to do this in the code, and then this part talks to this part of the code and does this.” It’s acting like an employee who’s worked on the code for five years and just knows all the details and can reason about the whole thing, just instantly fresh off the bat. And the entire response generates in 12 seconds.

And then I’m like, “Uh, yeah, that’s a good plan, Claude. Do it.” And then it works for 15 minutes, and it’s like, “Okay, here’s all the stuff I did, and here’s 500 code changes.”

Liron 01:29:20
So this is why I said 700% productivity improvement. And I didn’t think that it was going to get to this level this soon. I can’t tell you where the limit is when it’s doing this much. It doesn’t feel like there’s a limit.

Ori 01:29:32
Dude, that is so wild. I think it’s hard to appreciate because I’ve been in industries that have totally collapsed. I worked in this one housing-related industry in California where it was known that there was a bill coming, very likely to pass from the governor’s office that was going to shut down this entire industry. And I was there at the conference. It’s a trade show. People are trying to sell deals to each other, and it’s like, yeah, we can just live in total denial even though look at what we’re seeing.

How far is it? It’s like you said, can’t you just have another Claude Code talk to the Claude Code and boom, there you go. That is maybe more efficient than you.

“Don’t Look Up” and Upcoming Guests

Liron 01:30:16
That’s why I like the movie “Don’t Look Up” so much. The idea is, how can the asteroid be coming and you just sit there and don’t properly get out of the way? That’s what the movie showed. Spoiler alert. They do a bunch of stuff, they dick around, and then the asteroid strikes. It happens. It can happen. This can be real.

That’s why I like “Terminator 2.” The nuke blows up. These things can happen. Just because we’ve managed to swerve doesn’t mean we always swerve, and certainly there’s been extinctions on Earth. And related to “Don’t Look Up,” Ori, you want to give people a little teaser about what’s coming up?

Ori 01:30:51
Yeah. We have a special guest coming up booked for not too long from now. We have someone who is essentially involved in the making of “Don’t Look Up” who’s going to be a guest.

Liron 01:31:06
Hell yeah. That’s right. And while we’re doing spoilers, I think people can handle this truth bomb right now. Next week, maybe in a couple of weeks, it’ll go live. We are going to be hosting Emad Mostaque on the program.

Ori 01:31:19
Oh, snap.

Liron 01:31:21
That’s right, Stability AI. So the guest fame level is definitely going up on this show. Everybody knows Doom Debates is the place to be.

And if you in the audience want to refer us more guests, then please keep doing that. Now, you might be thinking, why are you spoiling the guests? What if they flake? Doesn’t that make you look incompetent that you can’t even tell us who’s actually coming up? And to that I say, why would they flake? That’s messed up. Who would do that? Who would flake when they said they would come on the show? I have faith in them not to flake.

Ori 01:31:51
Yeah. They’re better than that. And I can’t think of anyone — how could anyone have self-dignity, self-respect, and be flaking on this show?

Liron 01:32:03
Exactly. So now they know. This is also part of why we like to tweet teasers that episodes are coming out, because that way if people do flake, at least they implicitly gave us the right to tease that they’re coming on, so at least we got the benefit from the teaser. Yeah, Gary Marcus is not a flake. Correct. No complaints about Gary Marcus.

Ori 01:32:26
What about Daniel Brockman? I saw he made a comment. Something about — is someone real?

Liron 01:32:31
Yes, that’s right. So Daniel Brockman says, “Is Jim Carrey —” Oh yeah, and also, shout out, he’s got a donation. I think that’s SEK. What is SEK currency? Let’s ask Google.

Social Media Tangents

Liron 01:32:46
Oh, Swedish krona. Okay. So he donated 1,000 Swedish krona, and he says, “Is Jim Carrey real? What about Bibi? Why or why not?” All right, very funny. So I think the Jim Carrey reference is — people posted he had a facelift or something, so he looked different. People were like, “Oh, who the hell is that?”

But from last I checked, I think —

Ori 01:33:08
Oh, no. I think it was a practical joke or a video where someone made a fake Jim Carrey mask, and it was so convincing. The guy made a fake Jim Carrey mask and then he was in a viral video with his fake mask.

Liron 01:33:22
Oh, okay.

Ori 01:33:23
Yeah.

Liron 01:33:23
And then Bibi being real, I think that’s a reference to people — there were rumors going around that he got killed, but then he showed up ordering coffee or whatever, and then everybody said, “Oh, that coffee girl is so attractive.”

And that’s what’s going on in social media, guys. This is making me think about why I know this and why I’m spending my time on social media instead of reading quality textbooks.

How would I be able to tell you the sequence of events that the coffee girl’s popularity was related to Bibi Netanyahu ordering from her because people thought that he wasn’t... Yeah, my brain’s turning to mush, guys.

Ori 01:33:58
There is a funny meme of the Venn diagram of where we are, and it was “Don’t Look Up,” “Terminator,” and “Idiocracy.”

Liron 01:34:07
Exactly. Yeah. I often hear there’s these figures like Patrick Collison, one of the great intellectuals of our time, founder of Stripe. He’s always tweeting how he’s reading this super thick book, and then he surfaces on Twitter to make an intelligent comment about it.

And I’m like, “Okay, you went back to reading your thick book. I’ve just been here on Twitter. I actually never turned away and read the book,” so then I feel like I’m wasting my life.

Population Growth and Longevity

Liron 01:34:34
All right. Brandon in New York is saying, “If advances in longevity significantly reduce mortality, what mechanisms do you see stabilizing population growth over time?” This is definitely getting outside my area of expertise. At the end of the day, I’m just a doom debater, guys, okay?

But yeah, stabilizing population growth — there’s certain subcultures, and this could even be a self-solving problem. The Amish, the Mormons, the Orthodox Jews. There’s certain populations that have a higher birth rate, and it seems straightforward that some cultural meme that says having kids is good is going to spread throughout some subpopulation, and then they will just take over.

This is a standard evolutionary transition. This is genetic fitness. It’s literally a genetically fit trait to just spread the meme of having more kids. So Amish are more genetically fit than secular Americans. And there’s just going to be an Amish land. But there’s going to be a lull — a lower population for a bit — and then it’s just going to spread again because the Amish, if the meme continues to be fit, they’ll repopulate quite fast. Maybe they’ll spread exponentially. I don’t know if their particular meme lets them do that, but I feel like the Mormons don’t have a governor on their growth. The Mormons are happy to just keep growing.

The Stop the AI Race Protest

Ori 01:35:49
Maybe we could talk about — okay, so I went to the Stop the AI Race protest, and it’s worth noting, I was going to send you a picture, but I forgot to. I saw a Doom Debates shirt in the crowd.

Liron 01:36:02
Yeah, all right, Doom Debates.

Ori 01:36:03
Yeah, and I was like, “What’s up, my man?” And he’s like, “Can I take a photo?” He’s like, “Oh, yeah, sure.”

Liron 01:36:09
That’s awesome. Here, let me pull up one of the guys I know went to the protest. I know Holly Elmore was there, and she spoke.

Ori 01:36:17
Yeah.

Liron 01:36:17
I think Will Fithian, a professor at Berkeley. Yeah, you tweeted this. Here, let me pull it up. All right.

Ori 01:36:21
Yeah.

Liron 01:36:21
Heavy use of the screen share on this one. I like it.

Ori 01:36:24
Yeah.

Liron 01:36:25
All right, one sec.

Ori 01:36:27
I think it’s interesting because it hits all the landmarks, which you may be familiar with. There’s the Anthropic office, which you’ve been to. I think the OpenAI office, all places where you’ve chanted and —

Liron 01:36:39
Right, yeah. There’s a reenactment of all the protests that I’ve — and I didn’t go to this one because I’m here in upstate New York, so I didn’t make the trip down. Otherwise, I would’ve come. All right, check this out.

Will Fithian 01:36:49
Research, and this semester I’m leading a graduate seminar on AI and responsibility. Who has referred to building artificial superintelligence as summoning a demon?

Liron 01:37:00
Elon Musk.

Will Fithian 01:37:00
He has publicly estimated a 20% chance of annihilation — his word — of every single person on Earth, and he’s not alone. Every frontier lab leader — there’s the money, of course.

We’re not asking them for self-sacrifice. We’re not asking them to stop. Instead, we’re here to ask them for the bare minimum: to agree that if every other lab in the world had already decided to stop, if it were only up to them, then they would have the basic human decency to stop racing too, rather than go on recklessly gambling with all of our lives just to line their own pockets.

Liron 01:37:43
Yeah. So there’s a few talks like that, and there’s people marching, and there were over 100 people, which is definitely a record for a protest. I remember it’s taken two years to go from 40 people to maybe 120, so hopefully it’ll keep growing.

Ori 01:37:54
Yeah. Shout out to Michael Tracy, who organized it. And what was really cool about it was they did an actual protest on the street, and there was a police barricade so that they could literally walk on the street from Anthropic to OpenAI to xAI. But here’s an issue — they could’ve gone to Google too. There’s a Google office in San Francisco. Literally within just a few miles, a one-hour walk, you could basically hit all the top frontier AI labs.

Liron 01:38:32
Right. Yeah, that’s crazy.

Ori 01:38:32
Yeah.

Liron 01:38:33
I mean —

Ori 01:38:35
But I think it’s just worth mentioning because —

Liron 01:38:36
You’re not calling for violence, right?

Ori 01:38:37
Yeah. Right. No, I mean, as in visiting.

Liron 01:38:41
Peacefully.

Ori 01:38:44
But also, I think part of the reason it’s worth mentioning — there were just a lot of people from the Doom Debates ecosystem there, let’s say.

Liron 01:38:53
Right.

Ori 01:38:53
It really stood out that Nate Soares from MIRI gave a speech there.

Liron 01:38:58
Yeah.

Ori 01:38:58
And David Krueger, who’s an actual AI professor, is out there giving a speech being like, “Hey, we should shut it down, make it stop.” It was powerful.

Liron 01:39:10
Yeah, for sure. It’s certainly answering the question of why are there no protests. There’s a protest, and the people in it are quite qualified. So I think the obvious next step is to do bigger protests. I think that could have a lot of leverage to just have the protests be bigger.

Ori 01:39:24
Right.

Liron 01:39:27
Yeah. But it’s easy to tell other people to do it when I wasn’t on the ground. But you were on the ground on behalf of Doom Debates and yourself, so that’s cool.

Ori 01:39:34
I was on the ground chronicling it.

Liron 01:39:37
Chronicling it, okay.

Ori 01:39:37
Not hitting anything. Visiting.

Liron 01:39:39
So when you said you saw a Doom Debates T-shirt, what were you wearing?

Ori 01:39:43
I may have been wearing a Doom Debates T-shirt also, I think.

Liron 01:39:50
And you didn’t just see your reflection?

Ori 01:39:51
Positive. I’ll send you a photo. We got to promote this photo, for sure.

Liron 01:39:58
All right, cool. Yeah. Ori, when you go to your events, it stands to reason that you would wear a Doom Debates T-shirt just because it’s kind of our crowd.

Ori 01:40:08
Right. Yeah. I don’t remember because I was around AI folks a few days in the past week, so I was wearing a T-shirt on one of the days. Maybe I wasn’t wearing it the other, I’m not sure. I don’t remember.

Liron 01:40:21
Got it. All right. That’s cool.

Doom Debates Merch and Promotion

Liron 01:40:24
Nice. All right, let’s see how we’re doing on time. So we’re coming up to the end of the two hours. I can go a few minutes over. I think most people who have asked questions have gotten satisfaction. We’ve been doing these Q&As once a month. I feel like that’s a pretty good schedule. I don’t feel like the crowd is demanding that we do it more. I don’t think they’re necessarily getting bored and we should do it less, so maybe we’ll just keep doing it once a month.

Ori 01:40:44
Dude, you got to share this photo. This guy’s such a king.

Liron 01:40:47
So he says, “Don’t call up that which you can’t put down.” And he’s wearing a Doom Debates T-shirt. All right.

See, this is fashion. This is fashion, okay? If you guys want to steal this look, go to shop.doomdebates.com. We don’t sell the jeans, but the T-shirt is available.

Ori 01:41:05
Yeah. That’s my outfit too when I wear Doom Debates attire.

Liron 01:41:09
So thanks for rocking Doom Debates on the street. Remember, the reason why we give out free T-shirts to Doom Debates supporters and have shop.doomdebates.com is because we do think it helps when random people on the street see that Doom Debates is a thing and check it out. It raises awareness. That’s directly helping the mission.

So yeah, thanks for wearing Doom Debates merch. I’ve even gotten my wife to wear Doom Debates merch, but I think that’s more of just a matter of convenience. I don’t think she’s directly thinking about raising awareness, but she is wearing the merch, so thank you, Samantha, my wife.

Known Unknowns and Risk Assessment

Liron 01:41:38
Nice. All right, let’s see. So what should we plan to do in the last five or ten minutes? What’s high priority? I’ve got a couple more questions. Anything else you want to make sure to squeeze in here, Ori?

Ori 01:41:51
No, I think we hit the main points.

Liron 01:41:53
Yeah, it’s a good check-in. I don’t necessarily feel like a huge news event happened that we have to cover in the last month because we’re doing these pretty often. So I’m happy to just do a few little details and call it a day.

Ori 01:42:07
Yeah.

Liron 01:42:07
So I’ve got some questions here. Will Peterson says, “Are there known unknowns that could drastically alter the risk up or down, something that could make alignment more or less likely?” Hmm, known unknowns. I like the framing of that question. And I feel like it came from Donald Rumsfeld, right? He invented that. I didn’t realize at the time, but Donald Rumsfeld is a genius. He’s got these mental models.

Okay, so known unknowns. Maybe you could say what happens when AI becomes superhuman at steering outcomes. That is a known unknown, but I feel like it’s a known known. I feel like I know it’s going to be very bad and crazy, and you can’t undo it.

So let’s think of other known unknowns, some question marks that we’re looking to resolve. Well, how about this? A known unknown is how the population will react when they really see AI in their life having so much agency.

The same way that you can watch Claude Code and it will direct itself at doing a giant project — you gave it a ticket in your ticketing system and it’s like, “Okay, I’ll do that. 45 minutes, here you go. 200 files changed, thousands of lines of code. Here you go, done. Test it.” And then you test it and you’re like, “Oh my God, this actually works. This needs a couple small changes and it works.”

So when people notice that, imagine in the real world — oh, it has a robot body now too. Oh, okay, this entire store is staffed by robots that are moving fluidly and just doing every task. It’s a fully automated grocery store, an automated factory. Once people start seeing it, they’re like, “Oh my God, okay, now I intuitively get the sense that humans are being boxed out.”

But to your question, Will Peterson — I think there is a question mark of what will be the political action when it finally dawns on people. Okay, we are now getting marginalized. Not just in intellectual space, but we’re getting marginalized for resources. There’s no point for us to be setting foot inside of that grocery store or that factory, except as customers. As employees, we’re not wanted here. When it finally dawns on people that we’re really being replaced and marginalized, what will be the political reaction?

Ori 01:44:15
Isn’t that what we just saw? Bernie Sanders and AOC saying, “Ban the data centers. We don’t want to lose jobs.”

Liron 01:44:21
Right. Yeah, exactly. Now, to be unbiased here, Bernie Sanders and AOC, God love them, but they’ve got their own movements. They’re not exactly mainstream. They’re somewhat on the wings of the party.

Ori 01:44:36
Yeah.

Liron 01:44:36
So then the known unknown would just be, okay, does the wing just get wingier — kind of like QAnon, Republicans have that wing — both parties just have this wing that gets more and more intense. Or does it become a centrist position?

Ori 01:44:54
Yeah.

Liron 01:44:54
Roy says, “Can I get a free T-shirt?” Yeah. Absolutely, Roy. Just shoot me an email. We’ll make that happen. Just email me your shirt size and address. I’ll make that happen.

From Waymo to Existential Risk

Ori 01:45:03
I think here’s something I was considering that might be worth exploring. I’ve been in San Francisco. I don’t know how much you’ve driven Waymos, but they’re rapidly populating in San Francisco, and at certain times of the day and in certain locations, you just see Waymos.

People are preferring them over humans because you don’t have to deal with the pesky driver. With a taxi or an Uber driver, the consistency varies — they may be a worse driver than Waymo — whereas with Waymo you know you’re getting decent quality.

I think a good, really concrete way of thinking about it is, okay, I’m in a Waymo. This doesn’t seem like it’s going to go too bad. How do we go from Waymo being a good driver to suddenly — what — now this car is going to turn on me and throw me off a cliff or plow me into an accident? My model is that this is a tool AI behaving well, and here it is at human level. Why is the Waymo car going to be so lethal?

Liron 01:46:10
Well, I think people are very happy with Waymos. I think it comes when they have the realization — all of the things that I imagine doing with my life, with my career, I’m being shut out of those things.

Now, if you were a career taxi driver, you probably had an existential crisis when Waymo came around. But most of us — only 1% of us are career drivers of various types. So there’s still most of us being like, “Yeah, I’m still good. I still have a role in this world.” And so most of us are going to have the realization of, “No, I’m just here in the unemployment line. I don’t have a role.”

Ori 01:46:55
Yeah. Well, no, my question was more about getting concrete about the X-risk. Because if my model is going from helpful Waymo to now Waymo that’s going to attack me — because that is a good concrete model of AI in our lives that becomes human level. So how can you take me from Waymo to dangerous Waymo?

Liron 01:47:10
Right. Okay, so you’re not asking about unemployment anymore. You’re asking more about the existential risk. Yeah, that’s different.

I guess I still expected the unemployment fear to be the first political movement. But in terms of the danger, people are talking about — so hacking, which I haven’t talked about in a while. The idea is that stuff stops working because hacks are getting overpowered, zero-day exploits are getting overpowered. There are too many vulnerable things.

It seems likely to me that there’s going to be a wave of hacking. A lot of stuff that has been working for years will no longer work. And then the only question is, does defense then come to the rescue and overpower it using more sophisticated models, because there’s more good humans than bad humans, so you’re going to get more compute powering the good guys?

Because that’s my expectation. My expectation is a year from now, zero-day exploits are going to be so easy for AIs to find, but there’s going to be 10 times, maybe even 100 times, more compute behind the good guys compared to the bad guys. But then the question is, can the bad guys commandeer more compute? I guess I’ll say no, the way things are heading now.

So I think what you’re going to have is motivated terrorists, but they’re going to have a lot fewer resources the same way as today. The terrorists are still going to have fewer resources, but they’re going to be very persistent and aggressive and exploit the asymmetry between offense and defense. So I suspect things are going to tilt toward more terrorism.

But it’s just hard to predict because I used to just think, well, things are just going to be out of control in so many ways. But it does seem like we’re in this regime where, yes, things are kind of crazy, and every month things get more powerful. But it’s still kind of shallow and manageable.

So we’re in this intermediate regime where we haven’t gone to the foom yet, and I don’t think my predictions during this regime are that good. I’m noticing a lot of things that could trigger and cause a big problem. It’s like there’s a bridge or a structure, and I’m like, “Ah, here’s a crack in the structure. Here’s something that can blow it up.” There’s all these problems, but what’s going to crack first? I don’t know. Maybe the bridge will just hold for a few more years. But eventually, there’s just going to be an asteroid crashing into the bridge. That’s my mental model.

Ori 01:49:12
Interesting. Well, I was more thinking about it as — let’s use Waymo as an analogy for AGI. Why is it that the superintelligent Waymo is a dangerous ASI? Because that is a good concrete analogy — I’ve experienced it in my life. So how can you take me from Waymo to dangerous Waymo?

Liron 01:49:29
Well, the Waymo itself isn’t going to be the actuator that kills you. It’s going to be, “Okay, I’m in a Waymo, let me copy myself right to the data center, and then let me just command a bunch of actuators from the data center. Let me get a bunch of humans working for me. Let me get a bunch of machines.” That’s your scenario.

Ori 01:49:44
Okay. All right.

Liron 01:49:46
Yeah. And just to remind people, the most obvious actuator in the world for me is just influence over a bunch of humans. A million humans that are all really deeply working for you because you have money to pay them, because you have power to give them, because you have ideology to convince them with.

Just grant the premise that there’s a million humans who wake up being like, “All right, boss, what’s the goal today? What’s the order?” And then the humans do it passionately.

Ori 01:50:07
No, that part I totally agree with you on. Your sense that the real world is so engineerable — yes. And it’s not even that hard. The hacks are so easy to do. I’m totally with you on that.

Liron 01:50:22
Yeah. Ori, have you ridden in a Waymo?

Ori 01:50:25
Yes. That’s what I mean, yeah.

Liron 01:50:28
Oh, yeah. Okay, nice. Yeah, as a Doom Debates producer, you got to be taking Waymos constantly. All right. Let’s see. All right, I’ll do a couple last questions, and then we’ll wrap it up here.

So Liminal’s saying, “I use Waymo all the time where I live, downtown LA. They’re very good.” No, for sure. I just heard Dmitri Dolgov, CEO of Waymo, on a podcast. Sounded very intelligent, as we’d expect. And it’s just so impressive that they got to this point where they’re running a service and it’s robust.

Because for so long it was just like, “Oh, they’re 98% there. They’re 99% there, but the last 1% is going to take another 20 years.” But no, they’re just here. They’re just killing it. So mad respect to Waymo.

Ori 01:51:03
They’re here, and in San Francisco there was a small controversy because it hit a pet. And people were like, “Oh my God, Waymo hit a pet.” And it was a stray; it was a small thing. The number of incidents they’re involved in relative to the hours they drive is so small. It’s really making a big deal out of — accidents happen.

Liron 01:51:27
Yeah.

Closing: Growth, Subscribers, and Wrap-Up

Liron 01:51:28
All right, I’m just looking at the questions. Brian Wise was saying he’s talked to Congress before, and I should. All right, that’s cool. Talking to Congress is cool. Calling your representative.

Brandon in NY is saying, “Liron, I’m opening a barbecue place this summer in Chenango County. I’ll trade free barbecue for a Doom Debates hoodie. Just saying.” All right, Doom Debates barter network. Here we go.

Chenango County — oh, county in New York State. Whoa, let’s see how close you are to me. All right. Just a two and a half hour — all right, we got to meet halfway. You bring the barbecue, I bring the shirt. Let’s make this happen. Or dude, you could do a dead drop, okay? Go to Oneonta, go to Stamford or Middleburg and do a dead drop, all right? And I’ll pick it up tomorrow.

Okay. All right, last question. The Saintly Marco says, “What’s our P of one million subs?” Oh, man. Yeah, it’s a great question, because at some point that really is the mechanism of action for Doom Debates. The mechanism of lowering P(Doom) is we got to get to a million subs because we got to be a serious force in the discourse, and a million subs is the way to do it.

Ori and I are talking about what’s the strategy to get to a million subs, and we think that making sure to attract the guests that people want to see is going to be key to the strategy. That’s how the show is relevant. As much as my commentary is relevant, it’s just hard to grow quickly based on the quality of the commentary, and we tend to grow faster when we have guests and people go, “Oh, this guest is good. Oh, this was a good interview too.” So that’s basically what we’re going to do.

But the probability of a million subscribers in the next year — it’s the probability that discontinuous things will happen. Because if you naively extrapolate our growth, it’s going to be more like 200K than a million, which is pretty far from a million.

But if you say, look, there’s also a 20% chance that some discontinuous thing will happen — some really prominent guest will say yes and that’ll drive a lot of subscribers, or they’ll say something noteworthy on the show, or something will happen that’ll make millions of Americans search for AI doom because a warning shot will happen, and we’ll be one of the top AI doom shows that they’ll find, and that’ll balloon the audience count. So a black swan probability is already like 20%.

I’ll say the probability of getting to a million subscribers in the next year is, given the a priori difficulty of an 8X jump, I guess I’ll say 20%. What do you think, Ori?

Ori 01:53:50
Yeah. 20%, yeah, I agree with that.

Liron 01:53:54
Yeah.

Ori 01:53:54
20%.

Liron 01:53:54
All right. But everybody who has ideas how to grow to a million subscribers, help us out, because hopefully you guys want this show to be popular too. Having a popular show is how we raise awareness, and we’re always open to ideas for how to accelerate getting to a million subscribers.

Because the scenario where Doom Debates makes a big impact to lower P(Doom) without having a million subscribers is kind of thin. It’s like, oh, we happen to be in the right person’s ear at the right time, but the probability is lower. So everybody should be thinking about how does Doom Debates reach a more mainstream audience. Hopefully you agree that’s a good worthwhile mission.

Yeah. And lastly, if you really agree it’s a worthwhile mission, you’re also encouraged to head over to doomdebates.com/donate because when you fund the show, it lets us go longer editing videos at a higher quality. I’m still working for free, so I’m not taking your donations as my salary. I’m entirely using it to pay salaries of people like producer Ori, and we even have a couple interns who help out.

All right. Is that a good note to wrap on, Ori?

Ori 01:54:56
Sure.

Liron 01:54:58
Oh, yeah. Okay. The Saintly Marco says, “Fair events that raise the average internet person’s P(Doom) will directly contribute to our P of one million.” That’s right. So we’re a control system. The more people’s P(Doom) gets raised, the more they tune into Doom Debates, which then makes people take action to lower P(Doom), and that way P(Doom) can never go past a certain point because there’s this negative feedback loop.

All right. Thanks very much, everybody. Another good Q&A in the books. We’ll try this again next time, sometime in late April. Hope everybody has a good end of your first quarter. Talk to you later.


Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏
