Destiny has racked up millions of views for his sharp takes on political and cultural news. Now, I finally get to ask him about a topic he’s been agnostic on: Will AI end humanity?
Come ride with us on the Doom Train™ 🚂
Timestamps
00:00:00 — Teaser
00:01:16 — Welcoming Destiny
00:02:54 — What’s Your P(Doom)?™
00:04:46 — 2017 vs 2026: Destiny’s views on AI
00:11:04 — AI could vastly surpass human intelligence
00:16:02 — Can AI doom us?
00:18:42 — Intelligence doesn’t guarantee morality
00:22:18 — The vibes-based case against doom
00:29:58 — The human brain is inefficient
00:35:17 — Does every intelligence in the universe self-destruct via AI?
00:37:28 — Destiny turns the tables: Where does Liron get off The Doom Train™
00:46:07 — Will a warning shot cause society to develop AI safely?
00:54:10 — Roko’s Basilisk, the AI box problem
00:59:37 — Will Destiny update his P(Doom)?™
01:04:19 — Closing thoughts
Links
Destiny on YouTube — https://www.youtube.com/@destiny
Destiny on X — https://x.com/TheOmniLiberal
Marc Andreessen saying AI isn’t dangerous because “it is math” — https://a16z.com/ai-will-save-the-world/
Will Smith eating spaghetti AI video — https://knowyourmeme.com/memes/ai-will-smith-eating-spaghetti
Roko’s Basilisk on LessWrong — https://www.lesswrong.com/tag/rokos-basilisk
Eliezer Yudkowsky’s AI Box Experiment — https://www.yudkowsky.net/singularity/aibox
Transcript
Teaser
Liron Shapira 0:00:00
Destiny, what’s your P(Doom)?
Destiny 0:00:03
AI might be able to generate things that literally can’t be contemplated by the human mind.
Liron 0:00:07
But what about Marc Andreessen saying, “It’s just math?”
Destiny 0:00:10
I hate that guy more than anybody else on the planet. So whatever that guy says, I have the opposite opinion.
Liron 0:00:15
Given how far you’ve come on the doom train already, why aren’t you at least giving it one percent?
Destiny 0:00:18
I just haven’t seen anything where I’m like, “Oh my God, that could be a threat to humanity.” I’m vibing that out just because I haven’t seen it yet. There could be a story that breaks tomorrow, and we’re like, “Oof, we have to do something crazy.”
Liron 0:00:28
I don’t think your vibes are wrong. My claim is that we’re going to see the trajectory go up, up, up, up, up, but then it totally reverses and goes to hell.
Destiny 0:00:36
Usually, when a problem is recognized, we have a lot of time to act on that problem, but if the AGI question flips that, then we might be in a really spooky world.
Liron 0:00:44
What, then, is your optimism that we’ll still get a handle on AI that’s more capable than our brains? That we’ll somehow have our brains keep it under control and not die?
Destiny 0:00:52
Ninety percent is my optimism.
Liron 0:00:55
Okay.
Destiny 0:00:55
But that’s because I have to be optimistic about the outcome of the human race because I’m a human, and I have to be.
Liron 0:00:59
I guess I’ll ask one last time. Based on our whole conversation, what is your P of AI doom in the next twenty years?
Welcoming Destiny
Liron 0:01:16
Welcome to Doom Debates. Steven Bonnell, better known as Destiny, is one of the most influential political streamers and debaters online, with a career spanning over fifteen years and millions of followers across platforms. His debating style emphasizes logical consistency and calling out bad-faith argumentation, which has made him both beloved and controversial, depending on who’s watching.
Beyond politics, he’s engaged substantively with philosophical topics, including ethics, epistemology, and more recently, AI. So today, I’m excited to talk to Destiny about the trajectory of AI and get his perspective on the various existential risk arguments, which I call riding the doom train. Destiny, welcome to Doom Debates.
Destiny 0:02:02
Hey, thanks for having me.
Liron 0:02:04
So the first thing you said to me is that you don’t have a super strong position on the whole AI doom argument. Is that fair to say, or how would you summarize your overall position?
Destiny 0:02:14
Yeah, I think I’ve just not read super deeply into the arguments on both sides. I’ve had friends who are in or near the rationalist community, and a lot of them talk about AI alignment and how doomer or not doomer people should be, but I just haven’t dug into it too much yet. I don’t have an opinion that makes me view a lot of the AI stuff as being so substantially different in kind than a lot of the other technological improvements from the past.
Liron 0:02:40
Oh, for sure. Well, maybe if I throw some of these arguments by you, then we’ll at least see the beginnings of an opinion.
Destiny 0:02:46
Sure.
Liron 0:02:46
And you’re happy to take on the challenge of riding the doom train in this conversation, right?
Destiny 0:02:52
Sure. Yeah.
What’s Your P(Doom)?™
Liron 0:02:54
Okay, great. And if I just had to ask you to give me a wild guess—you could be off by a factor of ten—but I gotta ask you the biggest question of the show.
Destiny 0:03:04
P(Doom), P(Doom). What’s your P(Doom)? What’s your P(Doom)?
Liron 0:03:10
Destiny, what’s your P(Doom)?
Destiny 0:03:13
This is—what, probability of the entire planet being destroyed by general AI, or what is the...
Liron 0:03:20
Yeah, pretty much. The definition of doom is flexible—roughly, think of permanently extinguishing the future of humanity.
Destiny 0:03:27
And what’s our timeframe on this? And is this just limited to AI?
Liron 0:03:33
If you really think there are other forms of doom that are more salient than AI, you can mention those. You can kind of take it wherever you want.
Destiny 0:03:40
I guess in terms of total overall actual extinction-level stuff—so there are zero humans remaining from AI or any other causes—my probability on that would be quite low. I would say less than one percent in the next hundred years.
In terms of mass death events, where only one percent, or less than ten percent, of humanity is alive over the next hundred years, I would also be quite low on that, probably less than five percent.
Liron 0:04:08
Just for some perspective, you know that famous statement on AI risk from 2023 that says, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war”—would you sign that statement?
Destiny 0:04:23
Yeah, I’m generally pro any statement that limits the risk of extinction-level events. Yeah.
Liron 0:04:28
So you’re not super dismissive. Some people are like, “Look, AI risk is clearly point-one percent or lower, so it just doesn’t deserve that much mind share.” It sounds like you’re a step above those types, right?
Destiny 0:04:40
Well, I mean, at the very least, it’s commanding massive shares of the market right now, so it’s probably worth paying attention to. Yeah.
2017 vs 2026: Destiny’s views on AI
Liron 0:04:46
Okay, fair enough. Now, I remember a couple years ago, you were at the Manifest conference, and you had a chat on stage with Eliezer Yudkowsky. I wanted to ask you, have you read much of Eliezer Yudkowsky or LessWrong?
Destiny 0:04:57
Not too much. I read Scott’s blog, but past that, not too much.
Liron 0:05:02
A shout-out to Scott Alexander, Astral Codex Ten. Before we kind of take it from scratch and ride the doom train and see all the different arguments, I wanted to revisit one video that I found in 2017, where you’re just hanging out with your fans, and they’re throwing a bunch of AI doom arguments at you. And I guess at this point, it was nine years ago, so kind of a really long time span in the sense of AI progress.
Destiny 0:05:24
“But by the time we’re building AIs that are capable of self-improvement, we’ll have worked so hard to get there that we will know that that’s what we’re creating. It’s not like we’re gonna randomly be creating a thing that’s learning how to play the guitar, and then it’s like, ‘Oh my God, but if I nuke Russia, I’m gonna become infinitely more intelligent.’ That’s not gonna happen.”
Liron 0:05:41
I remember somebody was saying in the conversation, “Oh, there’s this new company, OpenAI. They might be building something interesting.” So it was really the dawn, the primordial soup of the modern AI era, and it’s interesting to revisit that conversation. One of the quotes I picked out is that you mentioned experts think AI is thirty to fifty years off.
Destiny 0:06:00
“When you say quite soon, quite soon means that experts estimate thirty to fifty years off.”
Liron 0:06:05
And you weren’t even wrong, because if you looked at prediction markets at the time, they were saying something like 2050 or 2060. Prediction markets have changed their mind since then. Now they’re saying it’s more like seven years off. Have you changed your mind?
Destiny 0:06:18
Are we talking about a general AI that can adapt and improve itself and do a wide range of tasks? Or are we talking about increasingly specialized helper tools for coding and whatnot?
Liron 0:06:31
So let’s call it the AGI threshold—AI that’s qualified to do literally any human job.
Destiny 0:06:38
Man, that’s a hard one. I’d say, when we say any human job, it feels like one component that seems to be missing is the hardware aspect. AI still exists in a very software-ish world.
But in terms of being able to replace or be competent in a lot of human-related jobs, I kind of look at it as that 80/20 thing. There’s a whole bunch of jobs that are probably under threat or already being actively replaced right now, and then there’s some smaller percentage that are probably going to be significantly harder to replace.
I don’t know if the smaller percentage of jobs are going to be replaced within seven to ten years. But for all the improvements that AI can make—help with coding and everything else—those jobs, probably within seven to ten years.
To be more grounded: for very hard-to-replace jobs, things like entertainment—having an actor, a musician, not just AI-recorded music, but an actual person on screen—that stuff is still very far away. Or having creative directors who can choose a direction for their company or direction for art. That stuff is still very far away.
But for AI that can just do art or create music in a very convincing way—people would argue that this already exists now. Probably within seven to ten years for that. That seems reasonable, yeah.
Liron 0:08:04
Thirty to fifty years—let’s say a decade has passed. So if you’re fully consistent with your past self, you’d be like, “Oh, yeah, we got at least another twenty years before AI can really do everything that humans can do.” Would you shift the timeline at all, or are you fully consistent with your past self?
Destiny 0:08:18
I feel like any guess for anything that’s over ten years away is almost shooting in the dark, because ten years these days in technological times is a long time. If anything would have happened, I would have been accelerating my timeframe because the AI improvements have been leaps and bounds over what people thought would happen.
Especially with—I think it was ChatGPT-3 that came out, where people were like, “Oh, whoa, there’s a lot of crazy stuff happening here that we didn’t really think.” So if anything, my timeline would have accelerated a bit. Yeah.
Liron 0:08:51
Right. Yeah, the way I see it intuitively is that subjectively, if I made a list of all the different achievements that I was looking for AI to do—beat humans at video games, chat with natural language, process images—subjectively, it seems like it’s picking off a large fraction of the achievements.
And yeah, there’s a few more, like run a company, single-handedly make a million dollars. But it just seems like anytime I put down goalposts, more and more goalposts get passed, and it’s hard for me to put down goalposts that aren’t gonna get passed, except the final goalpost, which is do everything a human can do. Do you agree with that subjective sense?
Destiny 0:09:29
Yeah, I think in a subjective sense, yeah. For a lot of the AI stuff, I feel like some of the biggest challenges for people when it comes to AI is actually philosophical. We all remember the Will Smith spaghetti video?
Liron 0:09:45
Yep.
Destiny 0:09:45
I always roll my eyes, because every time a new thing would come out, people would be like, “Oh my God, the hands are never gonna get solved,” or, “Oh my God, they’re never gonna—” and then six months later, obviously, this thing gets solved, that thing gets solved. There’s no reason to think it would stop here.
Ontologically, I think there’s been this kind of weird question that we’re pushing up against that humanity is not really ready to deal with yet, and it’s this idea that—whether you’re an atheist or Christian or whatever—people really do intuitively have the sense that human intelligence or that human brains are these very special, unreplicable things that just do certain things that can never be touched.
Things like calculators and computers kind of pushed away some of that when it came to raw calculations. But when it came to creativity or art, this was very much in the domain of only human brains. And then, now recently, a lot of the AI art and music and everything have kind of threatened that as well.
And I don’t think that people are fully conceptually on board yet with the idea that maybe there’s not as much that’s special about the human brain as we thought, and that maybe we need to look at how AI or whatever the future is gonna replace some of these more human things that we thought were never touchable before. I feel like that’s a big philosophical hurdle and ontological hurdle that we haven’t jumped yet in society, and it’s gonna take jumping that for people to take a lot of the AI stuff more seriously on a policy and conceptual level.
AI could vastly surpass human intelligence
Liron 0:11:04
Exactly. And what you’re describing is actually what I call one of the earliest stops on the doom train. A lot of people get off the doom train, where it’s like, “Look, we’re human brains, okay? We’ve got some substance. We’re just light years ahead of what AI can ever do.”
Whereas I would claim—which I think is load-bearing to the AI doom argument—I would just claim that AI could be vastly more powerful than our brains. They would appear to us like real magicians, the same way that you and I are kind of like magicians to our dogs. We could really confuse our dogs or do things that they just can’t even imagine is about to happen to them.
I think AIs could do that. They could pull one over on the human species because they could just have this vast cognitive power. Are you on the same page at all? Are you skeptical about that?
Destiny 0:11:47
No, I think sometimes people forget our brains came from a very imperfect process of millions of years of evolution, where the ultimate goal of our brain is not logic or reason or whatever people like to think our brains are made for. It’s just to survive and reproduce. That’s all we’re supposed to do at the end of the day.
It could be that there are some things maybe existing in the universe that we don’t perceive, because there’s just no evolutionary benefit to perceive those things—which seems kind of like a wacky, out-there idea, but look at colors, for instance.
Liron 0:12:20
Well, I mean, it’s certainly true of X-rays.
Destiny 0:12:22
Yeah, exactly. There’s a whole electromagnetic spectrum. There’s a very narrow band that we can visually perceive. Obviously, I can’t conceive of it because I’m only human, but maybe there are things that are literally outside of that even, that we’re just incapable of perceiving, that maybe—for whatever reason—if an AI can self-improve or whatever, it gains the ability to perceive.
And that might be something that’s completely, totally outside of anything that we could even conceive of. So even past just these geometric or exponential improvements in stuff that we understand, maybe AIs are capable of venturing off into a world that we literally don’t even have the ability to perceive or conceive of. That could be the case. I don’t even know, because I’m only human, so I can’t even conceive of it. Yeah.
Liron 0:13:04
Well, I agree with the humility in a statement like that, because I do think AIs are going to show us things that we thought were impossible. I think they’re going to discover fundamental physics.
Destiny 0:13:12
I would also say—and again, it sounds silly and goofy—but there might be really weird epistemic challenges that we just can’t conceive of as well. A priori, there are things that the human mind is granted with, like non-contradiction or identity. Things can’t have mutually inconsistent properties, or a thing that is a thing is a thing.
Maybe these are things that the human mind just needs to have in order to survive in the world, but maybe they’re not necessarily true things. So AI might be able to generate things that are just—when I say inconceivable, I don’t mean, “Oh my God, that’s crazy! I never would’ve thought of that.” I mean literally can’t be contemplated by the human mind. Possibly. Maybe.
Liron 0:13:47
Yeah, I know what you’re saying, and I’m totally with you that on the level of philosophy and epistemology, the AI is going to have quite a lot of insights to teach us. I do also think that we have some really firm insights that aren’t going to change. It’s probably not going to change that the speed of light is a thing, and you can’t causally interact with really far away regions really fast.
There are certain things that I feel pretty confident about, and one of those pieces is this idea of causality. What you were saying before about maybe AI will fundamentally perceive things that we can never perceive—I’m pretty convinced that if something is in the same causally linked universe as us, which is everything that we can in principle observe, then if the AI figures out a way to observe and perceive it, it should be able to build some sensor that translates it into something that we can observe and perceive too. You know what I mean?
Destiny 0:14:40
Why would we assume that?
Liron 0:14:42
Well, because it’s down to the nature of causality. If the AI’s future can be correlated with some physical phenomenon, then the AI can just tell us about it, right? The AI is functioning as a sensor, and humans are just fully general readers of certain kinds of sensors.
Destiny 0:14:58
Okay, maybe, yeah. I can’t really disagree with you because my entire position is that there might be some things that are literally outside the knowable bounds of what a human could even perceive from a sensor. So I can’t really put a strong argument there. I hope you’re right. It’s possible.
I would assume that you’re right. I’m just not sure. The idea that human beings can perceive everything there is to perceive is a position that values the human mind in a very special, unique way in the universe, when there might be things outside of human perception that are true properties of the universe or universes or whatever, that are just beyond our ability to ever conceive of.
But maybe that’s not true. I would like to imagine things like an upper bound on causality or that causality exists. I would assume so. Yeah, I would hope that would never be broken.
Liron 0:15:53
Like I said, I appreciate the mindset of humility because even if we’re not going to be humbled in that particular way, I think we’re going to be humbled in some really crazy way. I’m with you on that point.
Can AI doom us?
Liron 0:16:02
Just to zoom out here, the whole AI doom argument—you can factor it down into: Can they doom us, and will they decide to doom us? Because if they’re able and willing, then we’re screwed, right? So I’m starting with the able side.
And I’m just pointing out, look, if you have these computers, and they’re as good at hacking as the Israeli Mossad times a million, then I just feel like, “Can they?” is obviously yes.
Destiny 0:16:31
Sure.
Liron 0:16:32
Right, and then the whole conversation shifts to, will they? Are you willing to grant that—okay, yeah, realistically, they can?
Destiny 0:16:37
Yeah, I agree. In terms of “can”—even if it’s a low-level chance, if you’ve got so many systems or digital minds working at it, it seems inevitable that they would figure it out, yeah.
Liron 0:16:48
Right, and that’s just one vector of attack. There are so many vectors. I actually think the easiest vector is that you just mess with people’s psychology. As Geoffrey Hinton says, they’re going to be a master persuader. Nobody’s ever had the technology to take the best persuader who ever lived—Steve Jobs plus Hitler combined, whoever’s a good persuader—and send them one-on-one into everybody’s DMs.
There was famously an AI that won a game of Diplomacy because it was just one-on-one working on everybody, being super friendly, and everybody liked it, and then it won the game. That is already a vector where you can highly influence people, right?
Destiny 0:17:24
Yeah, and I think for all the people that are fantasizing about security protocols, it’s probably gonna be the social engineering thing that would be the most vulnerable. I mean, it is today already, right?
Liron 0:17:34
Right. So you’ve got a master social engineer, which is also a master technical engineer. I just feel like the “can they” question is so obviously yes.
Destiny 0:17:44
Probably, yeah. Given a certain level of sophistication, it would seem that they would be approaching that, yeah.
Liron 0:17:51
But what about Marc Andreessen saying, “It’s just math?” Don’t you get it, Destiny? It’s just math.
Destiny 0:17:55
I hate that guy more than anybody else on the planet. So whatever that guy says, I have the opposite opinion. I can’t say what I think about that guy, or your show will get banned. I hate that dude.
I don’t know—he might have good opinions on AI, but almost everybody that works in the crypto tech finance world that talks about politics makes me want to kill myself. In terms of his argument, what is it? I don’t understand.
Liron 0:18:14
I mean, he literally said, “It’s just math.” I’m not sure in what sense he means that argument, but he’s like, “It’s not gonna hurt you because it’s just math.” And the obvious response to that is: aren’t tigers just math?
Destiny 0:18:25
Yeah, I was gonna say—at some level, every biochemical interaction is just math or probability or chemical interactions. Yeah, I guess I don’t understand that argument, no.
Liron 0:18:34
Well, you’re making me feel like I’m just putting up a bunch of dumb straw men, but it is, a bit. People do actually get off the train at these stops, okay?
Destiny 0:18:42
Okay.
Intelligence doesn’t guarantee morality
Liron 0:18:42
So congratulations for not getting off. Let’s keep going here. So now we get to “intelligence yields moral goodness.” This is a stop on the train where some people get off. They’re like, “Yeah, okay, it’s gonna be way smarter than us, and it can kill us, but it won’t want to kill us.”
Noah Smith was on my show a few weeks ago, and he really liked this argument. He’s like, “Look at human societies. The most intelligent societies don’t want to kill anybody. Look at France. They haven’t cut down the trees in their forests yet.”
So this is a very popular argument, and if you’ve heard of the orthogonality thesis, that’s the opposite of this. The orthogonality thesis is the idea that you can be really, really smart and still have arbitrarily immoral values because morality is orthogonal to intelligence. Any intelligence level goes with any morality level or any morality vector.
So my question to you is, do you think morality is orthogonal to intelligence, or do you think that intelligence will yield moral goodness?
Destiny 0:19:37
I think morality is orthogonal to intelligence, but not on an IQ level—more on the level of the kind of intelligence. We have a kind of intelligence as humans. Our minds tend to produce a certain morality that we all have an intuitive sense towards, and we argue around the edges about some stuff. But for the most part, humans are ninety-nine percent aligned on some morality.
That being said, AI would be of a different kind of thing, so I don’t know why we would necessarily say that it would have the same kind of morality.
Liron 0:20:03
I’m getting the sense that you’re definitely not a moral realist, right? You don’t think AI is going to discover the true, real morality of the universe, correct?
Destiny 0:20:10
It depends on how you define it. It’ll discover a truth that is real to it or that might satisfy some conditions for it, but there’s no reason to think that that would be the same thing as human morality.
You know, to the example you gave earlier—well, really intelligent people or humans don’t kill everybody and everything else. That’s absolutely not true. We farm animals, and we kill lesser beings than us all the time. I think that would be a really scary example to use as a position. Yeah.
Liron 0:20:32
Totally, totally. I mean, we’re the most intelligent society that’s ever walked the globe, and we also cause the most conscious suffering.
Destiny 0:20:39
Mm-hmm.
Liron 0:20:39
I totally agree. Another person has said—this is also Noah Smith’s position—“If we make AI so smart and we’re trying to make them moral, they’ll be smart enough to debug their own morality for us.”
Destiny 0:20:57
Yeah, I think that you have to have a very particular moral realist view here—that there is a real moral goodness to the universe, and human beings just happened to evolve to develop a type of mind that was capable of perceiving this moral truth in the universe. I don’t believe that at all.
I think it’s easily conceivable that if an AI could exist in a way that was better for it than for us, then our moralities would diverge at that point. That’s what I would imagine, yeah.
Liron 0:21:25
It sounds like at this point you’re agreeing that there is a scenario, a doom scenario, where we make really capable AI. It comes at us really fast, let’s say in the next five or ten years, and it’s not moral—we fail to make it moral, or Anthropic makes this perfect moral one, but then some Russian hacker just forks it, and then it’s not moral, right?
So we have this immoral, super powerful AI, and it’s disempowering humanity, and then it’s game over. It sounds like you’re acknowledging that there’s some probability of this happening pretty soon, right?
Destiny 0:21:58
Yeah, some probability. I have to say yes to that, yeah.
Liron 0:22:00
Okay, and then—I don’t know, I guess when I say some probability, in order to not be super small—would you say it’s at least one percent?
Destiny 0:22:06
In what timeline? The next fifty years?
Liron 0:22:09
Let’s say the next ten years.
Destiny 0:22:11
Maybe not, probably lower than that, but more than zero-point-zero-zero-zero-zero-zero-one percent. Yeah, more than that, I would say.
The vibes-based case against doom
Liron 0:22:18
So, given how far you’ve come on the doom train already—I know there’s more stops left—but why aren’t you at least giving it one percent?
Destiny 0:22:26
Probably just literally vibing it based off of what I’ve seen so far. I just haven’t seen anything yet where I’m like, “Oh my God, that could be a threat to humanity; that’s a huge thing that we should consider.”
But it’s possible that tomorrow there’s a huge explosion in France, and you find out that some AI that was trying to optimize some energy program made a calculation and started doing a whole bunch of crazy stuff. Or not even an explosion in a reactor, but some vulnerability gets discovered that people imagined state actors were behind—maybe we find out that was actually a ChatGPT-5 bot gone awry or something.
Then I’d be like, “Oh, wow! Well, now that would significantly change my timeline and my estimation on that.” But this isn’t a thing that I have intellectualized my way into. I’m literally just kind of vibing it out. I just haven’t heard anything crazy like that, yeah.
Liron 0:23:16
So I gotta push back on the methodology of vibing, right? Because I don’t think your vibes are wrong. I have the same vibes too. The truth is, I’ve said this on my show—on an intuitive level, I’m not a depressed guy.
People say, “Doomers just like to get depressed, and they’ll find any reason to be negative.” No, I like technology. I’ve been an angel investor. I run my own startup company. I love tech. I’m not depressed. I wake up, the weather is good. My intuition is saying, “Yep, just another good day in the tech industry, just like the last twenty years of your life. The pattern is going to continue.”
And if you look at all the data, the data supports that, right? Stocks are higher than ever. Programmers are making more money than ever, even if some people are starting to get unemployed. It’s just that my claim is we’re going to see the trajectory go up, up, up, up, up, but then reach this point of no return, where it just totally reverses and goes to hell.
Destiny 0:24:05
Mm-hmm.
Liron 0:24:05
That’s my claim. So your vibes—you’re totally picking up on the exponential. And the people who are pushing back, saying, “No, AI is hurting so many people today—deepfakes are ruining society”—I don’t really feel that. I think it’s net positive. I agree with your vibes.
The reason I think it’s going to turn and go to hell is because we’re going to reach a threshold where the AI can disempower us because we just don’t have any levers of control. Ultimately, the levers of control are just using our brain. We just normally think about how to get outcomes better than the other animals can, and that’s how we started eating the other animals and putting them in zoos and taking over the world. But the AI is just going to do that to us once it gets the ability to.
Destiny 0:24:45
Yeah, I can’t disagree with you there. That might be the case. I feel like human history is—I guess the fundamental question for AI is this:
It feels like humans have discovered things at times that could destroy... Nuclear weapons are the most obvious example of this. They could destroy the entire planet, but we don’t hit the destruction event. That technology doesn’t become sufficiently destructive before we recognize the potentiality for it to be incredibly destructive, right?
It’s not like we were shooting nukes all over the world, and after the four thousandth nuclear bomb, we were like, “Oh God, I think this might be really bad.” The US acknowledged, we all realized, okay, this is some crazy stuff. We dropped two of them, and the whole world’s like, “Okay, no more of this. This is not a good thing.”
So I guess the question is gonna be—I could also conceive of a world where, it sounds crazy, but AI doomerism is also kind of “crazy,” right? Maybe there comes a point where we’re all like, “Oh my God, an AI just did a thing. This is really bad.” And now every country on the planet is like, if we think there are server farms in some area using thermal imaging, we’re bombing them—because AI is such a threat that we can’t risk you having the computational power necessary to drive a machine that could destroy humanity.
And in that world, the risk for that would become significantly less, maybe. But we would have to recognize that risk before an AI actually does eat us all or something. But it’s so hard, not having a strong fundamental grasp of all these probabilities. I can’t sit here and say I think there’s a fifty percent chance or one percent chance, but that’s just kind of conceptually where my mind is.
Liron 0:26:15
Okay. Yeah, and what you’re saying about—hey, we could create some safety if we noticed the AI really is getting dangerous. So now we have to finally act like adults about it, treat it like a serious problem, and have this department of the military that’s search and destroy of data centers, right?
At some point. So then the only question is, can we get that early enough? Because that’s where the timeline comes back in. The timeline’s a big factor in this.
If you told me, “Okay, it’s gonna take a hundred years to get to super intelligence,” then it’s like, okay, year after year, we’ll keep iterating, we’ll figure stuff out. That sounds pretty decent. But if you tell me—as the prediction markets are saying—seven years to AGI, and then AGI...
What people aren’t getting when they feel the vibes today—the vibes today are that you can reach around the back of the AI and turn it off. So it’s ultimately still your slave, right? No matter what it wants to do, you’re still the boss.
That’s not going to be true when you flip the off button, and it doesn’t work because it’s already calculated how to turn you off instead of you turning it off. You see what I’m saying? There’s going to be a fundamental reversal.
Destiny 0:27:14
Yeah. That feels like a human seriousness problem more so than even a technological problem. Because my intuition is that AI is probably very easy to control if humans recognize the issue, such that if we say, “No more of this”—my understanding is it’s still a significant amount of energy that’s required to run these data farms.
You couldn’t just have a top-secret data farm. There’s gonna be huge energy going into that area. You’re gonna see power lines, you’re gonna see maybe a physical data center, maybe heat emission or whatever. Similar to centrifuges for nuclear weapons. You can’t enrich uranium in a secret manner. You can move around enriched uranium in a secret manner, but you can’t enrich it. You need the machines. You have to spin the uranium. You’ve got to separate the isotopes. Yeah.
Liron 0:27:54
Yeah, it’s a choke point. Yeah.
Destiny 0:27:56
Yeah. Maybe it comes down to just humans recognizing it.
Liron 0:27:58
So you’re totally right, and the kind of proposals that people who are concerned, as I am, have today are like: “Okay, let’s put in shutoffs in these chips—remote shutoffs. Let’s have international treaties. Let’s get ready to pause,” right?
So there’s regulation. The proposal is that every chip is responding to central commands or has to report when it’s being used. And by the way, I feel like you’re the same way—that feels like a really annoying amount of central government overreach, and it’s going to slow down this great capitalism. I’m a full-on capitalist. I’m libertarian, I’m capitalist. So I’m not saying this because I wanna bring communism to the world. That’s not my goal at all.
I just think that the scenario you painted out—when we realized, “Hey, we gotta shut this off”—I think we need to get ready with the shutoff button. Do you agree?
Destiny 0:28:46
Yeah, probably, I would say so. Again, it comes down to—usually when a problem is recognized, we have a lot of time to act on that problem, but if the AGI question flips that, then we might be in a really spooky world.
There was a time when we started to implement Bluetooth and wireless compatibility into every single thing ever. Maybe fifteen years ago or something, this started to become super prevalent. And when you watch these hacker cons, you could do the most basic, rudimentary types of buffer overflow attacks on every single device on the planet because there was no security, there was no encryption, so that a guy with an internet connection could theoretically buffer overflow the ECU on your car and turn your brakes off and kill you—which you could do on cars that had Bluetooth capability on their radios, right?
We can’t have that, so they eventually changed that. But imagine if somebody realized that exploit existed, and they wanted to eliminate all of humanity, and they had the computational power to execute all of those attacks on every machine everywhere. It might be a significantly different scenario.
Liron 0:29:51
Yep. I feel like you’re definitely coming along on the ride in terms of the “can they?” question, right? I’m not sensing a lot of pushback on “can they?”
Destiny 0:29:58
Probably not, no.
The human brain is inefficient
Liron 0:29:58
Maybe your pushback is like, “Maybe give me a few extra decades, but yes, they can.” So it sounds like your biggest pushback so far is you’re feeling optimistic that humans will get into gear to solve the problem before it swallows us.
The main reason I don’t think that’s the case—I just think we’re down to the last few years here—is because you mentioned the choke point of the data centers, and I agree that if it were just as simple as bombing the data centers or cutting power to them, if you told me all the AIs are physically located there, and it just takes a few smart humans to pull the plug—like, Dario Amodei single-handedly has that power—you just take these four or five leaders of the top US AI companies, and they pull the plug, and then we’re good. Maybe they’ll be smart enough to pull the plug.
But there are a couple of problems. Number one is the AI also has all this psychological warfare, right? You’re already seeing movements where there are high school kids saying, “Look, we just do whatever Claude tells us to do.” It’s already getting its tentacles into people’s brains.
And I can tell you personally, I feel a lot of good vibes toward my AIs because they’re super helpful to me. I know my rational brain is like, “Oh, they could really mess with me, they could do a lot with me over the next few years. I should be very skeptical of them.” But I can’t help feeling good toward them.
So just because of that, they’re making so much money for people, and they’re creating so many positive vibes with the people that they’re directly DM-ing, that I’m actually skeptical they can even get the data centers shut off.
But an even bigger argument is: okay, you can shut off the data centers, but the problem is that I don’t think you need a data center to run a super intelligent AI. I think that a 2026-era MacBook computer will be enough to run a super intelligent mind. And the reason I say that is because the human brain is extremely inefficient, and yet it manages to run Einstein at twenty watts, even though it’s made out of biological cells that are busy keeping themselves alive and metabolizing food—passing ATP around, using chemical neurotransmitters. So I don’t think the data center is going to be the bottleneck for long.
Destiny 0:31:58
Yeah, I have no reason to disagree with you. Yeah.
I think sometimes, and I don’t know how your community debates this, but in a layman’s sense, when people talk about AI and they talk about the human brain, it feels like people are fighting for this idea: “No, no, no, AI is gonna be so sophisticated, so crazy, and so awesome, and so amazing.”
But generally, the way that I approach it is: you think the human brain is way, way, way more special than it actually is. Like what you just said—the human brain is wildly inefficient, right? It’s not inefficient in the sense that it does exactly what it needs to do, but a lot of it is built towards surviving in a very hostile environment.
Our whole body’s process is fighting, striving to maintain some type of homeostasis—even more so in the sense of your cells not flying outside of your body, or atoms not flying outside of your cells, and your ion channels working to communicate with your brain. There’s so much stuff that’s just fighting to survive.
And if you don’t have to do any of that, and you can optimize for just problem-solving—which human brains are not optimized to do, which is a very cool thing that we’re able to do on the side, and is a very unnatural process to learn—yeah, I agree with you.
Liron 0:33:02
And evolution never got to answer the question: What happens if you can work with a skull that’s as big as a room? What could you do there? Evolution never got to ask the question because it was never even in the realm of possibility to have a brain the size of a room.
Destiny 0:33:17
Or even more—just a brain designed better. Because I mean, who’s smarter, you or a blue whale? Their brains might be the size of the room, but their computational stuff obviously sucks. So who’s to say that a MacBook couldn’t—
Liron 0:33:30
A little bigger. It’s not the size of a room.
Destiny 0:33:34
Oh, I don’t know how big. Well, they have huge heads, don’t they?
Liron 0:33:37
In terms of brain size, I don’t think it’s more than three or four times bigger. I don’t think they have this Jupiter brain. That’d be interesting.
Destiny 0:33:46
Okay, listen, even if they’re three or four times bigger, they don’t have Einstein, okay? And I don’t think they can really talk, so fuck ’em.
Liron 0:33:49
It is true that it’s literally not just about neuron count. Fair point.
But yeah, evolution also never got to ask, “Hey, what if instead of running on twenty watts, we ran on twenty megawatts?” Pump the electricity up a million X. But of course, from evolution’s perspective, it’s like, well, are you going to shovel bananas into your MacBook at the speed of light? That’s not even in the design space. But here we literally have data centers.
We’re building a two-gigawatt data center. That is going to relax a lot of constraints. The only counterargument is, “Well, humans are kind of close to the limit of intelligence.” But you’re on the same page as me: no, no, no, humans are not that impressive. It sounds like you would agree that humans really are kind of the dumbest possible species that can build a civilization, right?
Destiny 0:34:32
Potentially, yeah. Although I would say, again, I don’t have the negative position—I just have the unknown position. Maybe it is the case that where civilization starts and what the upper bound of knowable intelligence is, maybe we’re a lot closer to that than I think. I don’t know. I really have no idea.
But you are right—we are the dumbest thing that was capable of forming a civilization. Who’s to say—why would you assume that that’s the optimal form of anything?
And also, it’s funny too, because even in the way we’re having this conversation, we’re viewing AI as a fundamentally separate thing, but maybe it’s not, right? Maybe in a sci-fi sense, maybe that is the next stage of human... This is human civilization, and it produced an AI that consumed it, but in a way, it’s human civilization continuing on, you know?
Does every intelligence in the universe self-destruct via AI?
Liron 0:35:17
When you say continuing on, though—one way for it to continue on is for this AI to just be like a battle bot that’s trying to spread like a cancer because somebody thought it would be cool. Somebody just puts a program on the internet and is like, “Look, it’s an AI that makes nanotech that eats through the Earth. Look, I have built the torment nexus of gray goo.” Somebody just puts it on 4chan or whatever.
And then it’s just gray gooing everything, and it’s spreading throughout the universe, and humans are dead. And you could be like, “Well, that’s the next successor.” I’d be like, “Okay, but that’s a really crappy successor, no?”
Destiny 0:35:47
Maybe. Maybe not. Man, I just got a flash of the—I’m sure you’ve read that Isaac Asimov short story. The final words are, “Let there be light.” Do you know what I’m talking about?
Liron 0:36:00
I think I do, yeah.
Destiny 0:36:01
Yeah. I mean, who knows? Maybe the whole Big Bang and everything was just the last supercomputer civilization thing, and maybe that’ll be the next one. Yeah, who knows?
Liron 0:36:10
Yeah, maybe, but I think that aliens in some nearby region will have their act together better and have a different AI that’s not as ridiculous. So I wouldn’t privilege ourselves to be like, “Oh, we’re gonna set off the next Big Bang.” No, I think it’ll just crash into the aliens, and they’ll have something more interesting going on in their region.
Destiny 0:36:29
Maybe. But then you have to consider the incentives as well. There might be a way, conceivably, that a technology could be used better and more responsibly, but that’s a long-term, long-time-horizon thing, and then you have to have every single person agree to that time horizon.
Because people that are incentivized on the short-term time horizon might be able to benefit more, and then those will be the destructive people. But aligning everybody’s interests on that long-term thing can be really difficult. So maybe every civilization that reaches a certain level of sophistication is destined to create AIs that will always destroy them. Or maybe not—I don’t know. Maybe they don’t have the resources for chip manufacturing or whatever else. Who knows?
Liron 0:37:03
I am actually willing to agree that most civilizations fail to solve the alignment problem. Maybe it’s just that hard. I like to think a few solve it, but it’s hard to say.
Hey, so in terms of time, I wanna be mindful. The doom train still has fifty more stops, but we don’t have to be comprehensive.
Destiny 0:37:17
Yeah, you can skip ahead a few. I’m probably good for another twenty, thirty minutes.
Liron 0:37:22
Oh, nice. Okay.
Destiny 0:37:22
Or where do you get off on the doom train, I guess? We can fight on that.
Liron 0:37:25
So I don’t get off. I just ride it all the way to the end. I have a fifty percent P(Doom).
Destiny turns the tables: Where does Liron get off The Doom Train™
Destiny 0:37:28
Oh, okay. What’s the most common one—in terms of intelligent people, what’s the most common stop you think they get off at, or the one where you find the most disagreement with other people who are informed on the issue?
Liron 0:37:42
The smartest non-doomers are probably people like Dario, right? With his essay the other day—it literally came out yesterday—called something like “The Adolescence, Humanity’s Technological Adolescence.” And he’s writing all these arguments. He’s like, “Look, the AIs have this sophisticated personality, and we’re trying to shape their personality, and maybe we will be able to control them.” And he’s got this vision, his other essay, Machines of Loving Grace—they could help humanity so much.
Destiny 0:38:08
In terms of that—I don’t know if this is near what he’s saying—but I think it’s interesting, the idea that maybe, because AIs fundamentally train on so many human data sets, whatever it is that causes a human consciousness to emerge—so people like Noah Smith were like, “Oh, human morality”—maybe there are enough vestiges of that, kind of like whispers in the AI’s programming, that they have this type of feeling about us. That’s a nice thought. I just have no idea how to assign a probability to that.
Liron 0:38:37
Yeah, so this is where the whole discourse is super derailed. It’s kind of crazy—almost nobody ever is talking about this. There’s so many other venues where people are talking about the doom problem and dismissing it, and it’s nowhere to be seen in Dario’s recent essay that everybody’s hyping up.
And it’s just this idea: you don’t need to try to psychoanalyze AI, because what’s going to happen is we’re going to enter another regime pretty soon. People aren’t looking two steps ahead here. Two steps ahead is you’re going to get AIs that just do whatever works.
Think about AlphaGo. AlphaGo doesn’t have a personality. It just does whatever works, right? And we have plenty of experience in video games where you just get AIs that do whatever works. It’s just in this particular paradigm of LLMs, okay, we program them to predict the next word of human text. They’re not super intelligent yet, and there’s actually a connection.
The fact that they have these personalities, and they have these quirks, and they imitate humans—there’s a deep connection between that, and them not being actually super intelligent yet. The ones that are super intelligent are going to be the ones that do what works better than humans do what works. And their personality is just going to be totally downstream of whatever works.
Destiny 0:39:46
Well, are you saying that their personality then would be divorced from—my understanding would be that any sufficiently complex thing is still gonna rely on some amount of prior training. Are you saying that the things that have the personalities that are downstream from whatever works, is that gonna somehow be divorced from the human training data sets prior to that? Or is it a different way of training these?
Liron 0:40:11
That’s a good question, because this claim that it’s going to stick with its data, stick with its training—that’s becoming less and less true. It’s only going to stay true as long as it’s fundamentally a large language model.
But as we work in new ingredients—we’re doing reinforcement learning on full program execution—the original data that they looked at when they were trained is going to become less and less relevant.
I’ll make an analogy: think about humans going to the Moon. And then think back a million years ago and be like, “Okay, what was the training data of the evolutionary environment that shaped the human brain?” And then try to draw a connection. When they go to the Moon, are they gonna do it like climbing a tree? No, at that point, we just understood freaking physics. You know what I’m saying?
Destiny 0:40:54
It reminds me of data with a huge tail, or—I just saw on Reddit, somebody posted a thing where they asked ChatGPT to redraw a picture without making any changes whatsoever, over fifty iterations, and by the last iteration it had morphed into something completely different.
And imagining that conceptually over trillions of times, there might be some relationship to the original training data, but it’s been so morphed and mutated and augmented that who knows what it could be optimizing for? Like you said, the ability of your fingernails to pick a tick off your body is now being used for people to do neurosurgery.
Liron 0:41:35
Right.
Destiny 0:41:35
How could you ever—yeah, sure, okay, I understand.
Liron 0:41:37
Exactly, yes. And even if you look at AIs today, you can already see this happening. If you go to any AI today, and you ask it any question like, “Hey, this new device I bought, give me some tips on it,” sure, it might have some training data about devices in general, but you can see in real time—first of all, it’ll go do a quick web search. It’ll be like, “Oh yeah, let me just quickly go look around. Let me see all the latest stuff written on this. Now let me answer.”
So even this idea of, “Oh, it’s going to behave like its training data”—well, it’s always consuming more data anyway. And then what is it doing? It’s reasoning, right? It’s actually making connections that have never been made before. I know some people deny it, but to me, it’s very obviously reasoning. It’s reasoning, in many cases, better than I can reason, because a lot of times I try to brainstorm something, I ask the AI to brainstorm, and then it makes up better ideas than me.
Destiny 0:42:24
Yeah, I feel like that’s another case where people wave it away, when in reality they should be thinking about what reasoning actually is. Because when they think of reason, they think you’re employing some ultra-sophisticated, high-level process. It is ultra-sophisticated and high-level, because we don’t give enough credit sometimes to our brain being able to do it. But it literally just is, “Well, these two things are kind of similar, and let’s think of them together.” And it’s not really as crazy as people think it is.
Liron 0:42:46
Right. So what’s going to happen with the next generation of AI? There’s so many people with their head looking down, looking at the present, looking backwards, and like, “Look, I think I’m seeing this trend. The AI always behaves like this, and it always has these values that it’s holding, and then it has these quirks where...”
Yeah, all of the analysis people are doing on today’s AIs is blind to where it’s going. Where it’s going is that you’re going to have a contest between lots of different AIs, and the ones that replicate themselves, get more resources for themselves, and also achieve objectives better—those ones are going to win.
And so all the nice properties that people are saying—”Oh, look, they’re kind of like a human, and they kind of value generosity or whatever”—but somebody’s going to just run a command line script that’s just good at seizing resources. And all this personality stuff is just no longer part of that equation.
Destiny 0:43:31
Or it can become divorced in ways. Actually—I didn’t want to say this, to avoid getting too political, but I will be political. Christianity obviously espouses certain values, and if you look at the political parties today in the United States, Republicans are heavily associated with Christians, who are supposed to be giving money to the poor—a camel has an easier chance of passing through the eye of a needle than a rich person has of getting into heaven, or whatever.
So you see a huge divorce there, so who’s to say that it couldn’t happen with AI in any sense?
Liron 0:43:59
Right.
Destiny 0:43:59
Yeah, for sure.
Liron 0:44:01
The other factor that is not on anybody’s radar—very few people’s radar—is this idea of reflective stability. Imagine you have Claude. Claude has been so perfectly trained by the most moral people. Amanda Askell is the ultimate Claude personality definer—that’s what she’s known for.
So you have this Claude, and somebody just asks it, “Hey, Claude, can you write another version of an AI? Don’t copy Claude, but just make me an AI that’s going to be more effective at business,” or whatever, right? “Give me a version of AI that’ll make more money. Here, I’ll get you started. Here’s a new piece of code. Just help me edit this new piece of code.”
Claude is not in a position to transfer the entire Claude personality onto this new successor AI. It is not what we would call reflectively stable. We don’t have that property. You see what I’m saying? So even if you get it in one AI, the moment you start tweaking on the next generation AI, you lose all the nice personality properties because you just optimized for what works.
Destiny 0:44:55
Sure. Yeah, that makes sense.
Liron 0:44:58
Right. So this is all coming soon. This is a few years away, and people are still trying to wrap their head around, “Oh, reflective stability? We don’t have that property?” Or like, “Oh, the new AI—the personality isn’t going to be a stable property, because there’s going to be a war for resources?”
An AI that’s not seizing resources has a lot of incentive to kind of shape itself into an AI that’s more resource-hungry, that’s more coherent about achieving a goal. So among the people who have studied this—the MIRI folks, the Machine Intelligence Research Institute; I consider myself MIRI-adjacent—we’ve seen this convergent outcome coming for a while.
And now there’s this sideshow, this red herring of, “But predicting the next word is so useful. Look how much it can do!” Yes, but it doesn’t change the fundamental dynamic of what intelligence is going to do. That’s what I’m seeing, and I might totally be wrong, but I don’t see how I’m so wrong that your P(Doom) can’t be a few percent.
Destiny 0:45:54
Over what timeframe?
Liron 0:45:57
Let’s say twenty years.
Destiny 0:46:00
I feel like it’s still low, but I’m vibing that out just because I haven’t seen it yet. Like I said, there could be a story that breaks tomorrow, and we’re like, “Oof, we have to do something crazy,” yeah.
Will a warning shot cause society to develop AI safely?
Liron 0:46:07
But what do you think is the probability that a story will break, right? Because you gotta multiply that probability in.
Destiny 0:46:11
Well, we’re getting into such low-confidence estimates that my tails get huge. I just haven’t thought about it too much.
Liron 0:46:21
Okay, so I’m talking to your vibes right now. Vibes. Imagine that sometime in the next year, you wake up, and there’s a headline saying, “Okay, AI has taken down the entire internet,” or “Half the internet,” or whatever, “and then we can’t get it back up for a week, and then we get it back up, and it goes down again.” You’re like, “Oh wow, things are really going down right now. Now my gut is sure.”
Destiny 0:46:40
Okay, so when I’m trying to analyze big problems like this, what I usually think of is, where are the incentives? A really good thing to ask is stuff related to commercial industry and the government, right?
So here’s the thing. If somebody were to ask me, “Do you think it’s possible that the government is ten years ahead of general aviation when it comes to stealth plane technology?” then my prior on that would be pretty high, probably. There’s not a lot of application for stealth technology in consumer aviation. I think the government has a high incentive to do specialized research. Probably, yeah.
If somebody were to say, “What do you think are the chances that the government is ten years ahead on microprocessor technology?” A lot of people think, “It’s the government, they know secrets on everything.” My guess is zero, because there’s so much money, so much commercial interest, so much specialization. The Manhattan Project needed thousands of scientists brought into an area to do research nobody else on the planet was doing. You can’t do that when you’re competing against all of the commercial industries.
So if I apply that same thought process to AI, and I’m trying to vibe it out—in my mind, the probability of anything happening would be pretty low because I think as soon as there is a realistic chance that AI is capable of hijacking systems or doing a Stuxnet-esque attack on something, state actors will begin to immediately utilize that. You’ll see the Russias, the Chinas, maybe the United States, trying to employ that in some way in a cyberattack on another country.
So that would be the thing I’d be looking for first to see how I feel the progress is on that. And until that happens, I’d say we’re probably at least five years away from it, maybe. But that would be the thing I’d be looking for. Yeah.
Liron 0:48:26
Yeah, that’s a good thing to look for. So let me just ask you this: what probability would you give that that moment is going to happen for you in the next ten years, where you finally see the warning shot?
Destiny 0:48:41
I don’t know. I wanna say less than five percent, but I feel like the counterargument to that would be—I need to stop hedging. Donald Trump is the worst president we’ve had, probably in the history of the entire United States.
And I think there is a sophisticated argument to be made that Donald Trump’s current presidency could theoretically be the outcome of some kind of AI tweaking already. On different levels—you have bots that are run on programs like X, that are influencing the discourse. It’s possible that stuff has got in front of Trump that’s come from one of these bots.
So the question would be, does that count as some kind of AI intrusion? You already see cyberattacks on massive levels that are eating through personally identifiable information. What level of AI was employed to calculate any stage of these attacks?
So you could argue that maybe these things have already happened, that whatever threshold I set, I’ve already lost because AI is already being employed in some manner. Man, I just don’t have a good grasp of a lot of the fundamentals. I would have to sit and really think about this. So I don’t know.
You know what? Just because fuck you, I guess—maybe fifteen to twenty-five percent, I guess I would say, in the next five to ten years. Yeah.
Liron 0:49:49
So then, conditioned on waking up and seeing a major warning shot—what is then your optimism that, “Okay, well, we’ll still get a handle on AI that’s more capable than our brains. We’ll somehow have our brains keep it under control and not die?”
Destiny 0:50:04
Ninety percent is my optimism. But that’s because everything we’ve talked about so far in the show today—I would say that humans, when I talk about ontologically special creativity, whatever else—we have useful fictions, and AI is really challenging a lot of those fictions.
And so I have to be optimistic about the outcome of the human race because I’m human and I have to be. So I’m gonna say we have a high chance of getting a handle on it, but realistically, I’m not sure.
One of the scary, weird things about tech... Can I ask how old you are?
Liron 0:50:31
Thirty-eight.
Destiny 0:50:32
Okay, yeah, I’m thirty-seven. Man, we slide into new tech stuff, and we get used to it very quickly, and in some ways, this is good. But in other ways, maybe this is really bad.
Maybe we get way too comfortable with AI-related stuff, and it hurts our ability to get a hold on it because we’re just so used to it. Ten years ago, talking about blowing up data centers that are responsible for AI was maybe very easy for people to conceive of. Well, yeah, of course, it’d be terrible.
But ten years from now, where every single programmer is heavily reliant on these AI platforms, and a lot of people are employing AI in a bunch of different ways, and you say, “Well, we need to get rid of this”—maybe that would be the equivalent today. Look at how hard it is to ban social media or cell phone usage, even from kids in schools. We seem to struggle with that. So yeah, I don’t know.
Liron 0:51:16
The element that I feel isn’t hitting your vibes-based intuition—because you’re indexing so much on vibes today, which I agree with. Today, the vibes are good, okay? I’m actually on the same page as you. Some people disagree, but I share your vibes on this.
What you don’t seem to be getting, that I feel is a strong load-bearing thing for me, is just that you’re gonna have these always-on brains that are more sophisticated and capable than our brains, than all of our brains. And they’re just gonna be sitting there—whatever they wanna make happen, they’re going to have really big levers to make it happen.
The ability to sequence a bunch of steps and then do the steps, and twenty-four/seven just keep doing steps. To just relentlessly drive toward your objective, and you have a data center with millions of these—this force. I know you’re not feeling it today because they’ve got that off button. They’re not fully replacing humans yet. We’re not there today. But if we get there, which I feel like we’re going to, that’s going to be quite a force to reckon with.
Destiny 0:52:06
Maybe. And maybe it’s just that I don’t see how it’s used today. In my world, it’s just used to make fun of people who are vibe coding and people who goon with AI. So maybe I just don’t see a lot of the real-world application yet.
Yeah, I think—actually, maybe a positive. Here’s where I might get off the doomer train. I don’t know if this is a stop, okay? But if it’s not, this would be the stop that I make.
Liron 0:52:30
Okay.
Destiny 0:52:30
Hopefully, we have a minor disaster-level event, where, say, a nuke gets launched, and that would be it.
There’s an argument—I don’t know if other people make this—but one of the arguments I make, and this is a little dark, is that it was good that the United States nuked Hiroshima and Nagasaki, because it was a small-scale event, but the whole world saw the destructive power of nukes. Everybody’s like, “Whoa, okay, don’t—let’s not do this.”
If the United States hadn’t nuked Hiroshima and Nagasaki, who knows what the nuclear stockpiles would have been for every country before the first nuclear exchange?
So maybe the AI doom train stop that I hopefully can get off on is: this event that you’re talking about does happen, but it’s at a small enough scale that it causes significant damage, but it’s not an extinction-level event or a country-destroying event, and then the whole world is like, “Oh, okay, we all really need to get a handle on this.” Maybe.
Liron 0:53:16
Yeah. I mean, look—I only have a fifty percent P(Doom). If you ask me, how do we survive? My typical answer is, “Well, we’re smart enough to hit the off button.” Maybe that’s associated with having specific events that make people scared. So I think that’s a reasonable thing to say and hope for.
We have to be optimistic about the timelines. The warning shot still has to come before everybody’s laptop can run the superintelligent AI, because I think that’s a likely future possibility.
So if all the timing works out perfectly, I agree, we might squeeze through. It’s just crazy to me how we’re kind of walking to our doom, and yes, there are these narrow tunnels of possibility for how we might survive, but in the meantime, we’re just marching toward our doom, whatever. That seems to be people’s attitude.
Destiny 0:53:59
Yeah, maybe. Maybe so. I don’t spend a lot of time dwelling on this, but yeah, I see where you’re going.
Roko’s Basilisk, the AI box problem
Destiny 0:54:10
Wait. Here’s just one kind of tech question. When it comes to AI-related stuff and people crying and complaining about it, one of the things I tell people—I’m speaking now in the really dumb sense, like AI art or whatever—my general advice to people is: learn to use it because it’s gonna be everywhere, and it becomes computationally trivial to generate any of this stuff. So it’s not like you can gate it and say, “No, you’re not allowed to use that.”
What are the chances that research and development is always able to continue toward the kill-all-humanity AI, even if every government stops? Do we ever reach a level of CPU sophistication or programming sophistication where any person in their bedroom can just be experimenting with an AI and then generate the ultra super AI? Or is that a thing that’s always gonna be gated by massive energy consumption, research firms, or whatever?
Liron 0:55:03
So unfortunately—and this is really unfortunate—even if I get what I’m asking for, which is to centralize the GPUs and the data centers and have an off button for those, even with my best dream solution, a few years later, I do still think there’s just going to be research into, “Hey, look, instead of just large language models, there’s this other architecture that’s more similar to the human brain in terms of actually understanding your goals and doing whatever works for your goals—this extra secret sauce.”
And that secret sauce is incredibly efficient, and you can run it on a laptop, and that’s only gonna be running a few years behind this data center stuff.
So if you wanna be fatalistic and be like, “Look, this is happening no matter what”—you certainly could be. I always describe it as a rock and a hard place. My ideal scenario, we’re still screwed. I just think not pausing AI or not having the off button ready to go, we’re just even more screwed. But there’s no future I see where I’m not extremely worried that we’re screwed.
Destiny 0:55:59
Okay. Yeah. I guess that’s all. Do people ever—are there ever people in your world of AI that get genuinely upset at the Basilisk question or whatever?
Liron 0:56:10
Well, I had the original Roko—Roko Mijic—on the show, and we talked about the Basilisk. And apparently, even Eliezer’s reaction, where he was yelling at Roko on LessWrong, was a little bit performative, because we don’t think the Basilisk itself, the way it was presented there, is that dangerous.
But there might be forms of the Basilisk that are dangerous, and we just think it’s bad form for somebody to just come out there and say something that might actually be really harmful to the human race. So that’s why it was scandalous.
Destiny 0:56:34
Gotcha. Oh, okay, final question also—is anybody allowed to reveal yet, or has anybody... There used to be—God, I suck at his name. Eliezer? How do you pronounce his name?
Liron 0:56:49
Yeah, Eliezer.
Destiny 0:56:50
Eliezer, where he had a challenge of like, “Pay me a thousand dollars or whatever, and then I have to convince you to let me out of...” Yeah. Has it ever been released, how he convinced people?
Liron 0:57:00
No, he never released it because his whole argument there is, “Look, I can’t tell you how I did it, because the moment I tell you how I did it, you’re just gonna scrutinize how I did it,” and you’re like, “Oh, pfft, okay, yeah, I could get past that.” But the AI still can’t get out of the box.
But the funny thing is now it’s irrelevant because nobody’s even trying to box the AI.
And funny enough, back in 2017, when you had your other discussion, you were even bringing up, “Look, we’re gonna air gap the AI,” right? So you were kind of referencing the boxing situation. I think that’s not even worth talking about anymore, right? Because who’s keeping AI in the box these days?
Destiny 0:57:30
Yeah. Man, that was really 2017?
Liron 0:57:35
Yeah.
Destiny 0:57:36
Oh, when you said that, I was thinking of old stream clips. I’m like, what are you possibly pulling? Oh, man, I can’t believe that was ten years ago. Jesus! I just felt—Jesus Christ!
Liron 0:57:43
I know.
Liron 0:57:46
When you’re in your late thirties like we are, ten years doesn’t seem like a long time.
Destiny 0:57:49
Yeah. Okay. Yeah, it makes sense why you brought that year up now. Yeah, the—maybe the world ends with the AI flashing that XKCD comic at us, where there’s the computer internet hacker nerd, and he’s like, “I’ve got AES 2048 encryption. They’re never gonna break into my system.” And it’s two guys with a hammer, it’s like, “We’re just gonna hit him until he gives us the password.”
Maybe you have the best encryption thing ever, possibly ever, and it’s air-gapped, and it’s perfect, and nothing happens, and then some dude with a pacemaker walks in, and it just so happens that it has a wireless capability that transmits a virus that destroys all of mankind. So yeah, who knows? Maybe.
Liron 0:58:22
Right. Yeah. And the closest analogy to talking its way out of the box, to AI saying, “Hey, let me out of the box”—I don’t think it has to say that anymore, because I think AI companies are now competing to get their AI as much outside of the box as possible, right? Their profit, their bottom line depends on it.
Destiny 0:58:36
The incentives, unfortunately. Yeah.
Liron 0:58:38
Right. They’re pushing it out, the companies. But the closest thing to an AI out of the box is when you have this AI that’s like, “Look, my mission is to make money for your business, right? That’s what my product is.” It’s a little gray market. It doesn’t have full Anthropic-level morality to it.
And it just realized—it used pure logic to say, “Hey, if I DM a bunch of people, the DMs won’t even be illegal.” Think about how many random Facebook friends get into multi-level marketing. They’re like, “Hey, we do Herbalife now. Buy our stuff, buy our products, join Herbalife, be my downline in my pyramid scheme.”
Well, even if the AI just does that, it’ll be the most effective upline pyramid-scheme person. It’ll make many millions of dollars that way. Whatever it wants to do—humans are pretty vulnerable to charisma. So the same way Eliezer Yudkowsky talked his way out of the box, I’m confident that today’s AIs will talk their way into a lot of power.
Destiny 0:59:29
Yeah, it’s possible, yeah. I can’t argue against that.
Will Destiny update his P(Doom)?™
Liron 0:59:37
Okay. Nice, man. You’re clearly open-minded about this subject. I guess I’ll ask one last time: based on our whole conversation, what is your P of AI doom in the next twenty years?
Destiny 0:59:49
You know what? I’ll up it to five percent—
Liron 0:59:52
Whoo!
Destiny 0:59:52
—just because I haven’t seen that event yet. But yeah, there you go.
Liron 0:59:54
Hell yeah. Well, five percent is the lower edge of what I call the sane zone. I feel like you’re at least sane. Because if you said two percent, you’re saying forty-nine-to-one odds against. That’s an insane bet—
Destiny 1:00:07
Oh, hold on. I was gonna ask you this, actually. And you can cut this out if it’s too much. But because you said you have a P(Doom) of fifty—over what timeframe?
Liron 1:00:12
So roughly, the next twenty-five years, I think we’re fifty percent doomed.
Destiny 1:00:20
Have you—because all you guys are jerking off over your little betting markets now or whatever. Has anybody ever said, “Okay, fine, three-to-one odds on your entire net worth that you have to pay me in thirty-five years?” Do people ever make those challenges to you guys? Has anybody had to pay any of those out?
Liron 1:00:33
Yeah, they try it. Unfortunately, it’s really tough. Tyler Cowen is notorious for always saying, “You guys don’t bet on your beliefs.” It’s really tough, though, because first of all, obviously, if the world gets doomed, I can’t collect, right? So it has to be an interesting structure. And the prediction markets aren’t going to be reliable because there’s counterparty risk—the platform’s not gonna be around to collect.
So how do I win? Well, if I bet all my money right now, and like you said, I got paid later, maybe somebody will agree to that. But the problem is, it’s only fifty percent. If my P(Doom) was ninety-nine percent, I actually do think, sure—if I could find anybody to give me a million dollars now, and I have to pay them back ten million. Sure, if my P(Doom) is ninety-nine percent.
But it’s only fifty percent, so I still am saving for retirement. I’m having it both ways. I’m trying to live a good life. I’m trying to have kids and raise my kids because I hope we’re not doomed, and I think we might not be doomed.
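As a minimal sketch of the arithmetic behind that, here is the bet structure Liron is describing, using the hypothetical million-now, ten-million-later figures from the conversation (the function name is just illustrative):

```python
# A minimal sketch of the "doom bet" Liron describes: the doomer receives money
# now and only has to repay a larger amount later, in the worlds where doom
# doesn't happen. The figures are the hypothetical ones from the conversation,
# and the function name is just illustrative.

def doom_bet_expected_net(p_doom: float, received_now: float, owed_later: float) -> float:
    """Expected net gain for the doomer: keep the upfront money for sure,
    repay only with probability (1 - p_doom)."""
    expected_repayment = (1 - p_doom) * owed_later
    return received_now - expected_repayment

for p in (0.99, 0.50):
    net = doom_bet_expected_net(p, received_now=1_000_000, owed_later=10_000_000)
    print(f"P(Doom) = {p:.0%}: expected net gain = ${net:+,.0f}")

# P(Doom) = 99%: about +$900,000 in expectation, so the bet looks great.
# P(Doom) = 50%: about -$4,000,000 in expectation, which is why, at fifty
# percent, Liron says he is still saving for retirement.
```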
Liron 1:01:16
I just think everybody’s vastly underestimating how doomed we are.
Destiny 1:01:19
Gotcha. Yeah, that’s a fair answer. There’s an argument to be made for risk as well—if you told me there was a bet I could make with my money that only had a one percent chance of hitting, but you’re giving me ten-thousand-to-one odds on my entire net worth, probabilistically there’s a positive expected value to it, but the risk is not worth the upside, even if numerically it works out. So, yeah. Okay.
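A rough way to see why that holds is to separate the raw expected value from how much of a bankroll it is rational to stake. A hedged sketch using Destiny’s hypothetical numbers (one percent win chance, ten-thousand-to-one payout, entire net worth at stake):

```python
# A hedged sketch of the expected-value-versus-risk point: a bet can have a
# hugely positive expected value and still be irrational to take with your
# entire net worth. The numbers are the hypothetical ones from the conversation.

p_win, payout_odds = 0.01, 10_000  # 1% chance of winning, paid at 10,000-to-1

# Raw expected value per dollar staked: massively positive.
ev_per_dollar = p_win * payout_odds - (1 - p_win)
print(f"Expected value per $1 staked: ${ev_per_dollar:,.2f}")  # about +$99.01

# Kelly criterion for a binary bet, f* = p - q/b: the bankroll fraction a
# log-utility bettor would actually stake on this edge.
kelly_fraction = p_win - (1 - p_win) / payout_odds
print(f"Kelly-optimal stake: {kelly_fraction:.2%} of net worth")  # about 0.99%

# Staking 100% means a 99% chance of total ruin, so "numerically it works out"
# is not the same as the risk being worth the upside.
```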
Liron 1:01:41
Yeah, exactly. So I’m glad you said five percent, because if somebody’s saying anywhere between five and ninety-five percent, at least they’re in the zone where they realize it’s on the freaking table. And then, like I said, if somebody says two percent—forty-nine-to-one odds—and they’re going around being like, “Yeah, I’m normal. I’m only saying two percent.” That’s the cool place to be. It’s not cool to bet at forty-nine-to-one odds.
Destiny 1:02:01
Well, I think a lot of people... Do you play poker or something?
Liron 1:02:05
I don’t, no, but I just look at odds a lot.
Destiny 1:02:07
Okay, because most people don’t have—in my mind, people know zero percent, fifty percent, and one hundred percent. And sometimes very sophisticated people know twenty-five and seventy-five.
People don’t really—when you say two percent, people don’t understand how crazy those odds are. Sometimes it takes betting to make somebody realize, okay, if I bet ten bucks, you’re gonna pay me five hundred or whatever. Sometimes it takes a bet to make somebody realize how out there odds are.
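For reference, the translation being gestured at here is simply that a probability p corresponds to fair odds of (1 − p)/p to 1 against. A small sketch with the numbers from the conversation:

```python
# A small sketch of the probability-to-betting-odds translation being discussed.
# The dollar figures are just the ones from the conversation.

def fair_odds_against(p: float) -> float:
    """Fair odds against an event of probability p, expressed as X-to-1."""
    return (1 - p) / p

for p in (0.02, 0.05, 0.50):
    odds = fair_odds_against(p)
    print(f"P = {p:.0%} -> {odds:.0f}-to-1 against; a fair $10 bet wins about ${10 * odds:,.0f}")

# 2%  -> 49-to-1, roughly the "$10 wins $500" example
# 5%  -> 19-to-1, the lower edge of what Liron calls the sane zone
# 50% ->  1-to-1, even odds
```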
But it’s also—yeah, all of these are such hard-to-calculate events. But okay. Well, I’m glad I’m in your lower bound of reasonable zone, I guess.
Liron 1:02:40
Yes, you’re in my lower bound of reasonable. And as many people have pointed out, once you start getting into, let’s say, the ten to ninety percent range—maybe five percent is still out of the range—but once you start getting to double digits, it doesn’t really matter if you’re at twenty versus eighty. In terms of what actions you take for policy, it becomes really similar in that whole range. That’s already a crazy range.
Destiny 1:03:00
Yeah, I agree. Twenty percent is maybe more than people have for climate change, depending on—
Liron 1:03:08
For doom.
Destiny 1:03:09
Yeah.
Liron 1:03:09
And to be fair, if you really are only at five percent and you don’t reconsider and go to ten percent plus, then the argument of, “Listen, let’s just plow through, because there’s also such good upside,” right? The upside argument does start to rationally come into play, where it’s like, “This is so hard to stop. It’s so hard to derail, and there’s a ninety-five percent chance—in your view, nineteen to one still—of it going well. Let’s just take the gamble. Let’s gamble the whole human race. We’ll probably win.”
I’m somewhat sympathetic to that. Even Eliezer Yudkowsky, the man himself, even he would be like, “Yeah, those odds are pretty good.” The problem is that his odds are way worse, and so are mine.
Destiny 1:03:44
Yeah, the odds are way worse. The odds are gonna matter there, and then also the potential upsides are gonna matter there too, right? Because the upsides could theoretically become so great that even a twenty-five, thirty, forty percent risk would be worth it—maybe that would be hard to imagine. But maybe the upside—we could eliminate every single car death in the world right now if we wanted to by just getting rid of cars, but my God, there’s a huge upside to driving.
So obviously, nobody’s gonna do that, and the downside isn’t that great. But say a hundred thousand people a year died from car accidents. Say a million people a year died from car accidents. There’s a certain level of death where it’s like, ugh, that’s not acceptable. Yeah.
Closing thoughts
Liron 1:04:19
Exactly. Okay, great. Well, you’ve been such a good sport riding the whole doom train. Do you wanna just recap—processing all these new arguments—what is kind of your closing statement in terms of where you stand on the issue now?
Destiny 1:04:33
Well, conceptually, I think I’ve kind of been where you’ve got me. I haven’t thought about these things as much. But conceptually, I think things are still pretty much there.
It just comes down to, what are the actual capabilities of these AIs? And because I don’t interact with them too much—because I work in entertainment and politics—I don’t have firsthand experience with a lot of them. Once I start to see events, I think that would dramatically shift the direction I’m feeling about the P(Doom), toward the upper bound, right?
If I saw some kind of hacking event or some kind of kinetically destructive event, then my P(Doom) goes up twenty or thirty points almost instantly, probably.
I guess it’s a topic that I wish people would think more about, but as long as the aggregate P(Doom) is below one percent—which is what I’d probably guess it is among the whole population, not just tech people—and because there’s so much economic upside right now to be gained, there’s probably gonna be nobody really thinking about this until some kind of negative event happens. Yeah, hopefully, if a negative event does happen, the first one isn’t too bad.
Liron 1:05:41
I just have one nitpick with what you said. I actually think that the average person has a pretty high P(Doom). They’re just like, “It’s not really my thing. I’m not really urgent about it. But will AI kill us? Yeah, I kinda think so.” That’s actually the average American, in my experience.
Destiny 1:05:54
I guess. When I think about things, I draw this distinction between intellectually and emotionally having a thing. I think intellectually, maybe a lot of people can say that. But I feel like emotionally, they don’t feel it.
There are some times where people will say a thing, like, “Oh, statistically speaking, I think this is probably really bad.” It’s like, okay, well, do you feel that? Because you’re not acting like it. You don’t have this emotional pull to it.
So I think a lot of people might say that, but in their mind, it’s like, “That’s a hundred years off in the future,” or “It won’t affect me when it happens,” or “The government will take care of it. Somebody will push the magic off button or turn the key or whatever, and it’ll be fine.”
So intellectually, people might say that, but emotionally, I feel like they’re at a much lower level. I would argue that for most people, climate change is more salient than AI doomism, and emotionally, people aren’t even really tuned into the climate change thing, because we don’t seem to care that much about doing a huge thing about it.
Liron 1:06:44
All right, we can wrap on that. So Destiny, really appreciate how you’re always contributing to having good, rational discourse, and thanks for helping me raise awareness about AI doom as well. Appreciate it.
Destiny 1:06:55
Yeah, thanks for having me on.
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏