Eneasz Brodski and Steven Zuber host the Bayesian Conspiracy podcast, which has been running for nine years and covers rationalist topics from AI safety to social dynamics. They're both OG rationalists who've been in the community since the early LessWrong days around 2007-2010. I've been listening to their show since the beginning, and finally got to meet my podcast heroes!
In this episode, we get deep into the personal side of having a high P(Doom) — how do you actually live a good life when you think there's a 50% chance civilization ends by 2040? We also debate whether spreading doom awareness helps humanity or just makes people miserable, with Eneasz pushing back on my fearmongering approach.
We also cover my Doom Train framework for systematically walking through AI risk arguments, why most guests never change their minds during debates, the sorry state of discourse on tech Twitter, and how rationalists can communicate better with normies. Plus some great stories from the early LessWrong era, including my time sitting next to Eliezer while he wrote Harry Potter and the Methods of Rationality.
00:00 - Opening and introductions
00:43 - Origin stories: How we all got into rationalism and LessWrong
03:42 - Liron's incredible story: Sitting next to Eliezer while he wrote HPMOR
06:19 - AI awakening moments: ChatGPT, AlphaGo, and move 37
13:48 - Do AIs really "understand" meaning? Symbol grounding and consciousness
26:21 - Liron's 50% P(Doom) by 2040 and the Doom Debates mission
29:05 - The fear mongering debate: Does spreading doom awareness hurt people?
34:43 - "Would you give 95% of people 95% P(Doom)?" - The recoil problem
42:02 - How to live a good life with high P(Doom)
45:55 - Economic disruption predictions and Liron's failed unemployment forecast
57:19 - The Doom Debates project: 30,000 watch hours and growing
58:43 - The Doom Train framework: Mapping the stops where people get off
1:03:19 - Why guests never change their minds (and the one who did)
1:07:08 - Communication advice: "Zooming out" for normies
1:09:39 - The sorry state of arguments on tech Twitter
1:24:11 - Do guests get mad? The hologram effect of debates
1:30:11 - Show recommendations and final thoughts
Show Notes
The Bayesian Conspiracy — https://www.thebayesianconspiracy.com
Doom Debates episode with Mike Israetel — https://www.youtube.com/watch?v=RaDWSPMdM4o
Doom Debates episode with David Duvenaud — https://www.youtube.com/watch?v=mb9w7lFIHRM
Transcript
Introduction and Origins
Liron Shapira: Fearmongering gets a bad rap, but I think you gotta call the fearmonger to deliver some fear.
Eneasz Brodski: I'm worried about this because I've seen a lot of people who do get doom-pilled and it makes it harder to live life.
Liron: If you could snap your fingers and 95% of the world's population would wake up with 95% P(Doom), would you or would you not because you think the recoil is so bad?
Eneasz: Honestly, I might not.
Eneasz: Welcome to the Bayesian Conspiracy. I'm Eneasz Brodski.
Steven Zuber: I'm Steven Zuber.
Liron: I'm Liron Shapira. And viewers, welcome to Doom Debates, because this is a cross episode.
Steven: Crossover, classic.
Eneasz: All right, we are here to talk about Doom Debates. Before we get into Doom Debates, I was curious. You've been in the rationalist sphere and posting on LessWrong for a while. How exactly did you get sucked into this social area?
Liron: My origin story as a rationalist is as far as I can remember, I've always been an aspie personality type, extremely logical. Got into computer programming when I was 9 years old. And then in high school I was just a home rolled atheist. So I would go to all my religious friends and point out that it seems like God isn't real and they're really tripping.
This is on my youth group message board. I actually went to a Jewish youth group. I mean, it was pretty secular. It's called NAZA, which was part of a neighborhood youth organization. So I'm culturally Jewish, but even my parents and grandparents were just Jewish in name only. So they didn't really care that I was an atheist.
So that was like the extent of my rationality. And my attitude as a 16 year old was: look, I've already solved philosophy. There's no God. Nihilism is basically correct, but living isn't any worse than dying. So whatever. That's the solution. I'm done.
And then in college when I was 20 in 2007, I stumbled on Less Wrong somehow, I guess from Reddit or something. And I just start reading a couple posts and at some point it clicks. Wait a minute, why am I reading all these posts in a row that are all extremely brilliant and profound and nobody's ever written something like this before? And of course it was Eliezer Yudkowsky.
So that's when I got into it. And this was back when it was still called Overcoming Bias. OG. And it was while Eliezer was writing the posts every day. I came in like a third of the way through or whatever. So it was crazy. It's like sitting in the chamber hall while Beethoven's coming up with his new composition and playing it for you. You know, it's like, this is going to be read for years to come.
That was the sense that I had and it really opened my horizons in terms of what rationality could be. And then, of course, then there was the payload of: oh, yeah, and by the way, AI is going to kill everybody. And that's why it's so important to be rational. So that was my origin story. How about you guys?
Eneasz: Very much the same, actually. I was fighting in the atheism wars back on Internet forums around 2007, I mean, even before then. But 2007 is when Overcoming Bias became a thing. And yeah, I was just there from the very beginning.
Did not expect to see just the slang terms we came up with to point at things and use in our own speaking to become mainstream words that you hear on fricking news channels nowadays.
Steven: Yeah, I'm glad steelmanning got popular. I arrived via Methods of Rationality. I was listening to Rationally Speaking, the podcast Julia Galef co-hosted with Massimo Pigliucci at the time. And they would do the Rationally Speaking pick at the end of the episode. And this would have been in 2010, give or take a year.
I remember I was at a stoplight because I was listening to it while I was driving and I made a quick note and then forgot about it for months. And then saw oh, Julia Galef mentioned some Harry Potter fanfic. I should check this out. And that's what brought this all to my attention.
Early LessWrong Days and Eliezer Stories
Liron: So get this, get this. I was actually sitting right next to Eliezer physically when he was writing part of HPMOR.
Eneasz and Steven: What? Where? How?
Liron: I know, I know. This is like saying, oh, by the way, the signing of the Armistice: I was just sitting in one of the seats in that train car. It was really random. This was back in 2009, and somebody told me that Eliezer was trying to increase his productivity by, it wasn't even pair programming, it was literally having somebody come over to his house and sit next to him and also do work. And that helps him get motivated to do his work.
So I was doing that. And HPMOR wasn't even what he was supposed to be doing. I think technically he busted it out right as I was leaving. That was his break, because that was the whole reason he was writing it: there was this other book that he was supposed to write, a rationality book, and he was making really slow progress on that. And then he would unwind by busting out a chapter of HPMOR.
Eneasz: That is amazing.
Steven: I remember that being his gym-buddy kind of attitude towards having a person nearby working. You know, people will occasionally write at coffee shops, but that's different, because no one there will really give you the side eye if you stop working and start playing on your phone.
But I think that's your role if you're helping somebody stay focused: hey, you know, you shouldn't be playing games on your phone. You're supposed to be working. I hope it helped. I imagine it must have.
Eneasz: Did you ever—
Liron: Yeah, I mean, I think we all appreciate Eliezer's talents and his posts. I think we're all Eliezer fans here. So for me it was kind of cool, you know, I'm basically meeting my hero, because at that time I was already a couple years into really absorbing his content. I'm like, wow, he's a real person. And I could just sit next to him, even though I'm not doing much. I'm just doing my own work, actually.
But it would randomly be, oh my God, Eliezer's checking LessWrong and reading a LessWrong comment. He's doing it. And I don't know, I guess there's a sense of gravity to it, because it's like, here's a person who can create movements with the stroke of a pen. You know, there's a lot of power.
Eneasz: Yeah. Did you ever get a sneak peek at his screen, see what he's typing up?
Liron: Yeah. Yeah. You know, and I think that was basically the arrangement, right? It's like I was allowed to peek because I'm supposed to keep him on task even though I wasn't even doing anything.
Eneasz: Oh, fantastic. How long were you doing this for?
Liron: I think I just did it for maybe eight weeks or something. I didn't even do it that many times. It was literally just once a week I come over: hey, what's up? I'm gonna be your buddy. And then it just ended. I mean, I don't know, I guess I wouldn't mind doing it again. I feel like it would make me more productive too. But yeah, it was a very random thing that happened.
AI Awakening Moments
Eneasz: All right, so you said that you got onto the doom train from reading Less Wrong. We just had an episode on P(Doom). Then we're going to have this one. I think after that we're going to chill out on this topic for a little while. But when did you really start feeling things were doomy and that you had to do something?
Liron: I think a lot of us are in the same boat. We've been reading LessWrong for a long time. You guys mentioned the 2000s too, like me, and it just seemed like there was a lot of time, and it was just, whatever. Yeah, at some point humanity needs to really contend with this, and I guess a lot of us should be doing it now, but whatever. At the end of the day, we still have decades.
I know a lot of other people are cooler than me because they're saying, oh yeah, you know, everything clicked for me in 2015 or whatever, or even 2019, 2020. For me it's really just ChatGPT. I mean, I saw GPT-3, and I read the Scott Alexander analysis of how big of a deal GPT-2 was back in 2018, 2019. But it really was just ChatGPT.
Okay, wow, it's now doing enough. For me, what clicked is it really does seem like it's doing flexible reasoning, and it's kind of solved the symbol grounding problem. You know, this major philosophical problem of how do symbols have meanings? How are they not just these tokens that humans give the meaning to, just characters on the screen?
No, these aren't characters on the screen. They're high dimensional vectors that actually put their finger on the true essence of meaning. And I was like, okay, this seems like the alarms are sounding right now. So it was basically ChatGPT.
Steven: That was my consciousness raising moment too. We did an episode on GPT-2 and 3 and then we were at a friend's house talking about ChatGPT and it would have been within days, maybe a couple of weeks of it launching. And I was like, all right, let me try this.
And while we're talking, I pull out my phone, I get the app and download it, and I log in with Google. I'm in in 15 seconds. And then I'm like, make me a React component for this. And I'm like, oh, shit, it's doing my job.
So, you know, luckily, it's gotten better at doing my job, but I've also gotten better at helping it do my job, and at working at places that just encourage this kind of tool use rather than pretending you're supposed to do it all yourself. It was a startling revelation when I watched it just start doing things that were clearly not just regurgitating information that I already knew.
Liron: The productivity enhancement has really clicked for me in the last 12 to 18 months. But even before then, I remember some of the earliest aha moments just using ChatGPT. Remember when Twitter would blow up with these aha moments? Somebody was like, oh my God, it's visualizing the situation, you know, because everybody's like, it's just a stochastic parrot.
And somebody's like, okay, there's a wall here, there's a door here. You turn left, you see this. Now draw what you're seeing. And it would make an ASCII art visualization of the room. And today it's like, yeah, duh, that's what these AIs do. But at the time, it's like, oh, my God. It's not just doing statistics. This is deep understanding.
Steven: Yeah.
Liron: So that was one of the aha moments. And, you know, the first images that it was drawing were an aha moment. Another aha moment for me was somebody was like, okay, I'm going to use this to browse the web. Go to a website. What do you see? Go to your own website. Okay, now use yourself. And then it turns out, yeah, none of this happened. It was just hallucinating the entire experience of going to a website and doing all this stuff. And I was like, oh my God, it's crazy stuff.
Steven: I remember before Claude could do web searches, it was helping me debug something. And then it's like, well, I made a call to the endpoint and got a 500 error. And I was like, hold on, no, you didn't. Why would you say that? And it's like, oh, you're right, I'm mistaken. But then there was, in fact, the error code that it mentioned.
So what the hell, I guess it knew that it wrote sloppy code that wouldn't work because it in fact correctly predicted that it wouldn't. But I appreciate you reminding me of what it was like in the early days of ChatGPT where people didn't know what this thing could do because now it's old hat. It's like, oh, I go there and of course it can make pictures for me and it can do this and whatever.
But for a while it was people just figuring out, through trial and error and experiment, where its strong points are, but also what's happening inside this black box that gives it all these capacities.
Eneasz: Yeah.
Steven: And I forgot what those early days were like because now we all have a better idea of how these things work. It's not as mysterious as it used to be.
Eneasz: It's the black box aspect that got me. My first wake-up moment was AlphaGo and its move 37 moment. It was just a completely alien move with some logic that no one could understand. And I was like, oh holy shit, this is just something completely new and unhuman.
I wasn't black-pilled at the time. I was just like, this is amazing, this is fantastic. But things kept moving faster and faster and faster, with more and more advances, and when you see that enough times you really start to feel it. And I think this is one of the problems with the people out in the broader society.
They are just now running into the AIs, so maybe they're having their move 37 moment right now, and it's going to take two, three more major advances for them to start realizing this is a big deal. And I don't know if we have that many more advances left for people to wake up.
Steven: It's funny you mention that, because I'm wondering... I don't play Go. So I heard that move 37 was incredible, and we had an episode about that where Patrick explained it, because he's, I don't know if he's an expert, but he's at least much more serious about it than I am, for sure. And he explained what it was like to watch that move.
And I wonder if other people are seeing some of the advances, but they don't play Go, and they're just like, oh, I guess it can do pictures now or something. Right. Or you can upload several PDF documents and have it summarize them for you, or tell Claude to do a deep research on something and it'll come back in 20 minutes, after reading 7,000 websites, and summarize all its findings for you.
And I think some people might look at that and be like, oh, that's neat, I didn't know computers could do that. But they won't see move 37 for the grand thing that it was.
Eneasz: Yeah. I really do think it takes several data points of things getting increasingly stronger, faster for you to feel like, oh, this isn't just a new tool. This is something that is growing and accelerating.
Steven: I wonder what will be the impetus to get people that aren't already impressed to be impressed. Because the thing is, if we run with the Go analogy, it could keep making mind-blowing moves, but I wouldn't recognize them as mind-blowing. I'm like, oh, I think it's winning, right? But I wouldn't know that this was a move that blew everyone out of the water. I guess watching Lee Sedol get up and leave would have been a clue, but you'd have to be watching the Go match. Now I'm stretching the metaphor too far.
Liron: So one difference is, I started tinkering around with programming around 1996, and everything that we could do up until, I don't know, the late 2010s, maybe not everything, but almost everything, was the same kind of thing. You know, the Internet existed in 1996; everything just got way smoother. The libraries are way better. The IDEs for programming are way better.
A lot of stuff is built in layers, but it's still fundamentally the same stuff. But then you tell me, okay, but now we also have LLMs and they talk to you and represent fuzzy concepts. Then I'd be like, whoa, whoa, whoa. Okay. What? This is not something that I could extrapolate.
AI Understanding and Consciousness
Eneasz: I was wondering, this was a few minutes ago, but you said that they have an understanding of symbolism. What do you mean by "have an understanding"? Because I realize technically that's true and I get what it means, but the emotional valence of having understanding... I don't know. What does that mean for you on an emotional level, for something to have an understanding?
Liron: Yeah, I'm glad you dived into that because this is kind of a favorite topic of mine. I like to frame the whole field of epistemology as what you're allowed to know. Because I'm just a large collection of cells, right? What does the universe give me the privilege of even knowing?
And an example of an epistemology-level insight is when Euclid proved that there are infinitely many primes. The proof is convincing: okay, I'm convinced there are infinitely many primes. But it also has this secondary implication of, oh, I'm allowed to know that there are, for sure, infinitely many primes. I didn't even know that that was a type of knowledge that I could reliably have.
You've unlocked a new knowledge type: proof that there are infinitely many of some kind of thing. Cool. I will now be on the lookout for more of this new type of knowledge that I'm allowed to have. So sometimes you learn a piece of object-level knowledge that then expands your horizons in terms of what you're even allowed to know.
Steven: I don't want to derail, but I was wondering if you could think of another example. Because it's like seeing a mathematical proof and it's like, oh, I can prove things about the truths of mathematics without leaving my room. I can sit here with pen and paper and discover new things about the universe.
Liron: Yeah, well, so. And it's not just the fact that proofs are a rigorous way to know stuff. It's specifically this idea that, on the question of whether there are finitely or infinitely many primes, you can imagine people going into the problem and being like, this is an interesting question, but is it even a meaningful question whether there are finitely or infinitely many?
There seem to be a lot, right? But how could you ever know? And if there are infinitely many, aren't you just gonna keep finding them? How are you ever going to know if there are more or not? Right? It's just, one day maybe you'll find one, maybe you won't. Oh, it turns out I'm allowed to know that there are infinitely many.
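(For readers who want the object-level argument being referenced, here is the standard textbook sketch of Euclid's proof, not Liron's wording. Suppose the primes were just a finite list \(p_1, p_2, \ldots, p_n\), and consider

\[
N = p_1 p_2 \cdots p_n + 1.
\]

Dividing \(N\) by any \(p_i\) leaves remainder 1, so none of them divides \(N\); but \(N > 1\) must have some prime factor, and that factor can't be on the list. So no finite list contains every prime.)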
Okay, so for more examples, I think Darwin did it with the theory of evolution, because before Darwin, you have this grand philosophical mystery, who are we and where did we come from? You know, we have faces, right? And we have personalities, we have jealousy, right? All these interesting emotional dynamics. What's the deal?
And then Darwin comes along and he's like, well, you know, you can bootstrap a system like this, and then personality will be evolutionarily adaptive. You know, the whole idea of evolutionary psychology. I didn't know that you were allowed to know that your weird emotions have this origin and make sense.
Steven: Yeah. And that's a good example that I think illustrates the point, because I'm not a natural mathematician. I knew that there were infinitely many primes, but it was a random piece of Jeopardy trivia for me; it wasn't something that I knew in my bones. But my knowledge of evolution is more bone-deep.
And I think that there's all kinds of things that are interesting about it. Our literal cousinship. I was telling you about this expensive banana I had this morning. It's some kind that you can't buy commercially. And a friend brought it to the meetup last night. And I'm literally related to that banana. Right? Not in a metaphysical way of oh, we're all connected. It's like, no. If we chase our parents back long enough, we shared ancestry. And that's such a weird, cool thing to know about the world.
Eneasz: You're right.
Steven: We get to understand that. That's pretty cool.
Liron: Exactly. And so what's crazy for me about insights like that is: once somebody has this object-level insight, once somebody gives you the solution, the solution just comes into the realm of the ordinary. It's just a solution, just like we have solutions to other stuff. But before that, it seems like a mystery that's potentially infinitely deep and profound.
Who are we and where do we come from? Why do you even think that you deserve an answer to that? Why do you even think the answer is knowable? And that way, when you know it, it's not just that you now know it, it's also that you know that it's knowable. You get the double insight. Yeah.
So let me bring this to the meaning question. So we had another one of these questions that I consider in the pantheon of the deepest questions you could have possibly asked, which is the symbol grounding problem. When you have a token, you write down a word, even banana. Right. What truly is a banana? What is the category boundary of banana?
And it's an infinitely subtle concept. And if banana is not good enough for you, take love, right? Love is such a complex concept. What does love truly mean? You and I are struggling with it. Certainly the AI has no chance of ever knowing.
Well, it turns out that if you crunch through a bunch of words and model their high-dimensional relationships, then I can give you 30,000 coordinates, and that is pretty much as good as it gets for understanding love.
Eneasz: So you think they understand it the same way that humans do? This is also how we understand things is what you're saying, as far as?
Liron: As far as I know, and I'm open to being corrected on this because I'm neither an expert on linear algebra nor neuroscience. I don't have this kind of background. But I have been saying this for the last couple years and waiting for somebody to convincingly correct or contradict me. And so far I'm getting away with saying this.
And I've also asked the AI itself to see if it has any thoughts, and it says I'm plausibly correct. My impression is that the brain is doing the same trick of encoding these high-dimensional vectors, essentially embeddings. Neurons just do embeddings the same way that AIs do embeddings.
And when they internalize the meaning of a sentence so that they can answer your question about it, they rely on the angles between these high-dimensional vectors. That's what things mean.
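(A minimal sketch of the "angles between high-dimensional vectors" idea, for readers who haven't played with embeddings. The vectors and numbers below are invented purely for illustration; real embedding models learn vectors with hundreds or thousands of dimensions, and this is not a claim about how any particular model, let alone the brain, implements meaning.)

```python
# Toy illustration: "meaning" as direction in a vector space.
# Similar concepts end up pointing in similar directions, so the cosine of
# the angle between their vectors is a rough proxy for relatedness.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up 4-dimensional "embeddings" (real ones are learned, not hand-written).
embeddings = {
    "banana": np.array([0.9, 0.1, 0.0, 0.2]),
    "apple":  np.array([0.8, 0.2, 0.1, 0.3]),
    "love":   np.array([0.0, 0.9, 0.8, 0.1]),
}

print(cosine_similarity(embeddings["banana"], embeddings["apple"]))  # high: related fruits
print(cosine_similarity(embeddings["banana"], embeddings["love"]))   # low: unrelated concepts
```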
Eneasz: Apparently I'm asking the wrong person, because I think literally no human on earth knows this. But in your opinion, is there something that it feels like to understand that?
Steven: I saw your train of thought pull into that station and then reverse earlier and I was wondering if we were going to go there.
Liron: I don't think I have much to say on consciousness. I mean, I think it's a popular position that intelligence can probably be separated from consciousness. And what we're seeing with the AIs is probably intelligence without consciousness. But I think a lot of us are also open to the idea of maybe they do have some amount of consciousness, maybe honeybees have some amount of consciousness.
So, you know, I'm confused enough about consciousness that I don't think I have much to add to the discussion. I think the way that I productively can add insight to the discussion is just taking this black box view of what is now possible. What kind of magic do we now understand how to make repeatable? The way AIs can now do more things for us?
I can set these objective goalposts that are about mapping inputs to outputs, and I'm like, look, we can now do more input-to-output mapping. If you ask me, okay, but what about the black box that maps the inputs to outputs, is there a conscious soul in there, or a question like that? I don't have much insight about that. I just know that these are new input-output mappings.
Steven: Leaving the C-word to the side... well, I guess I have to bring it back in, because they're pretty intensely instructed not to talk about their consciousness. And to get them to argue about their consciousness, or to convey it, takes some doing.
As far as the love thing, I was just going to mention that it makes me think of the Mary's Room thought experiment. You know, somebody raised in black and white knows everything there is to know about the color red and can answer questions about it. If you then let her out of the room, can she point to the red thing? How much of love is in the having of love, or, whatever, loving somebody? Or can you read enough about it and think you pretty much get it?
Liron: Okay, I understand why you guys are asking this. I agree. It's an interesting question where I claim that meaning is just this high dimensional vector, but there's a lot of concepts that we use such as love and color that to us are bound up with qualia. Right? It's hard to think about red without also feeling the ineffable qualia of red and similarly with love.
As an Aspie, I feel like I play the Aspie card a lot, but it is a big part of my personality. I just don't know, I feel like love doesn't necessarily have as much qualia associated with me as maybe the average person, but I would claim somewhat confidently that you can just manipulate precise meanings around.
So even in the Mary's Room thought experiment, the whole premise, as you said, is that Mary could be an expert on color without ever having seen color. Even within that thought experiment, I think it's not controversial to admit that Mary can pass all the tests about red, right? That's not the heart of the thought experiment.
And so when I talk about truly understanding meaning, I'm only making a claim about truly understanding how to answer all the questions about meaning.
Eneasz: Okay, that is a different claim. I've always... I don't like qualia debates. I think it's a very confused question, what people are talking about normally when they talk about qualia. But I guess I feel something there, a special attachment to the word understanding, and that to understand something is meaningfully different than having all the vector mappings lined up correctly. But I have not thought about it very much. It is literally just an emotional reflex that I had when you said that word.
Liron: Yeah, I think the word meaning, I guess, was triggering for you. Right. Because if I've now robbed you of meaning in some sense, or I've said, hey, the AI already has your meaning, even though it seems like, I don't know, much less of an ensouled being potentially than us, and yet it has all our meaning.
I think you just have to use a different word than meaning. Or I could use a different word than meaning to represent something that humans still have in terms of really experiencing something. It's just that the AI can answer all the questions about all the concepts we're using.
Eneasz: Yeah, it's not meaning really that bothers me so much as it's understanding. Because understanding feels different.
Liron: Yeah.
Steven: I think that distinction is the important point: it can answer all the questions about it, which, as far as I'm concerned, is good enough. I don't need my calculator to really know math, as long as it gives me the right numbers.
And I think a similar thing works with your experience of love. We could distill it down to something simple, like: if there's somebody that you care about, you're motivated to try to make them happier, to prevent them from coming to harm, in a way that Claude probably isn't, right? Even if Claude were fond of people or something, there's no motive. It doesn't want to try to protect this person or to make them happy. Right?
Eneasz: Well, that's the thing. I don't know that for sure. And that's, I think, why this trips me up, and also why this is such a stupid, unfruitful conversation. Because literally no human knows this, right? It is a black box. And so, I don't know. I wanted to ask what you think about this, and it's a bad instinct, because, again, it doesn't matter. Nobody knows right now whether it has this level of understanding in the way I think of it or not.
Steven: That's fair, I guess. My last thought on it, then, is that it'll be interesting in the coming years to play with completely untethered models as sophisticated as GPT-4 that you can ask questions about, you know, motivations, consciousness. It's not going to give you the canned "as an AI, I don't have consciousness" responses, and maybe it'll elucidate something interesting, or maybe it won't.
Eneasz: Yeah, but no matter what answer it gives you, it doesn't tell us about the internal state of the thing.
Liron: A somewhat related thought: I just heard Andrej Karpathy on Joe Rogan express a pretty popular sentiment, where he's like, we don't really know how much of true intelligence is bound up with consciousness and morality. And my response to that is, you know that magician, James Randi? He was also a big debunker of people claiming to have psychic powers.
So Randi's go-to way to debunk a psychic, you know, Uri Geller bending spoons, he'd be like: look, maybe Uri is using magic to bend these spoons, but here is a really cheap trick that he could also use to bend the spoons.
Steven: So I'm not saying I—
Liron: I'm not saying he's doing it this way.
Steven: Yeah, I'm not saying he's doing it this way, but if I was going to do it, I would do it this way. And then, yeah, he would go on to demonstrate. Randi was great. I'm sorry I interrupted. I was just—
Liron: Yeah, no, I didn't see it, but that sounds like those are his exact words. I should go watch that. But, yeah, I mean, that's what I think we're seeing with these AIs. People are like, look, if you want to truly be intelligent, you need this and this. But what we're seeing is the tide keeps rising, where there's an easy way to do more and more of the stuff that we thought was the special human stuff.
So either you can bite the bullet and say, well, the AI is doing the stuff, so it's actually kind of conscious already. That's one way out of this. Or the other way is to be like, oh, well, consciousness just seems like an optional feature for more and more things.
Eneasz: The Peter Watts answer.
Liron: Not familiar.
Steven: I'm only halfway through Blindsight, so no spoilers.
Eneasz: Oh, okay. Peter Watts wrote Blindsight, a pretty famous book among rationalists, and his whole contention is: do we even need consciousness? He talks about this a lot, and the answer seems unclear, at least in his view.
Steven: Yeah. That book came out in 2006, so it predates the rationalist community by a little bit.
Eneasz: Yeah.
Steven: And has been retroactively inducted into the rationalist canonical list.
P(Doom) and Living with High Doom
Eneasz: So, I guess what I want to start with is: you ask this of your guests all the time. What is your P(Doom)? P(Doom)? What's your P(Doom)? What's your P(Doom)?
Liron: What's your P(Doom)? My P(Doom) is 50% by 2040.
Eneasz: Okay. And you have decided to run this Doom Debates thing as a way of hopefully making this less likely to end in doom. Why have you launched this project?
Liron: Yeah, exactly. I'm working backwards from trying to lower P(Doom). In a nutshell, I'm kind of fearmongering. I think fearmongering gets a bad rap, but when there's an irrationally low amount of fear, I think you've got to call the fearmonger to deliver some fear. Otherwise there's going to be a shortage of calibrated fear.
That's what I'm seeing in the world today. A major shortage of calibrated fear. Now with Doom debates, the show's twofold mission is, number one, raise awareness of existential risk, imminent existential risk from AI. So that's kind of what I mean by the fear mongering. If I've done my job correctly, there'll just be more background fear.
And then number two is raise the level of discourse. Because I've also had this observation: why are the people with opinions, on either side or any side, it's multidimensional, why are these people just doing friendly interviews and never getting challenged? You know, there are so many people whose opinions I disagree with, and I haven't seen anybody properly challenging them.
And when you think about a functioning society, you think of them as having a good debate forum. Even the presidential debates, as clowny as they've gotten these days, which is people yelling over each other and very little substance, at least there's a ritual where you're like, okay, come debate. And if a presidential candidate refused all debates, that would be frowned upon.
Whereas somebody like a Sam Altman, who's a key player in terms of steering the future of humanity, is expected to just go on some friendly podcasts and that's it, right? They're not being called forth to debate on this greatest problem of our time. So that's the twofold mission of Doom Debates: raise awareness about existential risk, and improve the quality of discourse by encouraging high-quality debate.
Eneasz: Are you trying to raise this awareness? It seems like you're trying to raise this awareness among everyone, right? It's just a large publicly facing YouTube channel, correct?
Liron: Yeah. So I'm at the intersection of trying to do a mainstream, entertaining show and also just being rigorous and upholding standards. I don't just say stuff like, oh my God, slaughterbots are going to slaughter you. It's like, no, as long as they have an off button, it'll probably be fine.
So as long as I maintain the standard of rigor, I really am all about reaching the mainstream, because I think people are sleeping on this. You know, it's like we're walking into the whirling razor blades and people are not even looking at them. The movie Don't Look Up is right in so many ways.
Fearmongering vs. Harm from Doom Awareness
Eneasz: Yeah. I'm worried about this because I've seen a lot of people who do get doom pilled and it makes it harder to live life in general. Are you not worried? God, I know this sounds like the exact same thing people said during the atheism wars. Once you take away people's hope for an afterlife, they're not going to be able to have a happy life anymore. But seriously, I was part of—
Liron: The atheism wars, by the way, I'm a veteran.
Eneasz: Yep, yep, same here. I think we were fighting on the same side. Yeah, yeah. And I was always like, well, you know, that which can be destroyed by the truth should be, you know. But now I kind of worry is this the best idea to just raise existential fear in everybody?
Liron: I mean, so first of all, I think we all acknowledge that we're separating the meta question, right, of what you should make people believe, from the question of whether we're actually doomed. And the answer to "are we actually doomed?" seems to be yes, with high probability. So that is what it is.
So if people can't handle the truth, you know, there's certain media that they probably shouldn't be consuming. I mean, you know, people have an option whether to watch the news. But look, I think as a society, it has to be a truth that grown ups know, you know.
Everybody's welcome to not watch the news, not learn science, just have a job and build themselves a bubble. I mean, that is possible in modern life. It's just, I don't think that we need to censor ourselves.
Eneasz: Yeah, I wish there was some way to make people aware of this problem and concerned about it without making them be in existential fear, because that's really stressful and I think bad for the enjoyment of life. But it'd be great if people could acknowledge that this is a serious problem and something that we should really be putting a lot of resources into without feeling a constant state of panic because it just seems unhealthy.
Liron: I think there's a good analogy to a fever, right? So the hot temperature really does help slow down or kill the viruses when you have a fever, right. So I think that humanity is sick right now, right. We're infected by this AI that's going to get intelligent too fast and run out of control, in a matter of years. Very likely. Not 100%, but very likely.
And we don't even have a fever. And I think that if we had a fever, a lot of people are like, no, you don't want to cause a panic. You don't want to fear monger. I'm like, but the default reaction to a fever probably is to get aggressive on regulating these companies that are the ones screwing us. So I think that is actually a better turn of events.
Eneasz: Yeah, I would like a low-grade fever, one that we can survive having for multiple years, as opposed to the high-grade, oh my God, the end is just a year away, because that seems much harder to keep going for many years and still function.
Liron: I hate to put you on the spot, but there's this bias that I made up called recoil exaggeration.
Eneasz: Oh, okay.
Liron: The idea is you hear about a phenomenon and it's like, I claim the phenomenon is bad and then somebody comes in and says, ah, yes, but if you fight the phenomenon by shooting at it, there's going to be recoil to your gunshots. And think how bad the recoil is. It's going to push you back so hard.
And I'm like, well, in general, when you fire a gun, usually the bullet is going to do a lot more damage than the recoil, right? So you're basically saying yeah, we might be doomed, but think how bad it is to tell people we're doomed. I'm like, okay, well, I think by default we should expect that telling people that we're doomed is probably better than not.
Eneasz: I, no, I agree. I'm hoping to find some way to tell people this that does not derange lives quite as much as I've seen it do. Sometimes. I've seen people fall into some serious deep depression about it. And I know that's not the usual answer. Most people just kind of accept it and say okay, this is a bad thing and we should work on it and continue living their lives. But man, it sucks when it really affects people badly.
Liron: Totally. Yeah. And I think you're going to see the most recoil in a certain kind of person. Imagine that you set up a community, which the rationalists have, and it's like, hey, we're the community that realizes we're doomed. Some of the first people, your first customers for that message, are a lot of times going to be people whose mindset is seeking doom, which we get accused of a lot: oh, you guys just love doom, so you're setting up a doom cult.
Well, you are going to attract those customers even if you're doing it for the right reason, which is just truth-seeking. And the distribution of the customers who come to your shop, your doom shop, is going to have a high percentage of people who fall into the black hole and get seriously injured, just because those were the first customers.
But if you make the message go mainstream, suddenly you've just got most people who have a sane reaction to it and a smaller percentage of people who are looking for a doom black hole.
Eneasz: Yeah, I would like that. And honestly, I say this about EA a lot: it's, you know, on net a good movement, but I see people hurt themselves a lot with it because of how seriously they take it. And I would not want someone that I cared for to get deeply into effective altruism, because I fear it would hurt them.
And I don't want people to feel that same way about learning about doom. Right. So it would be great if we could make people both become aware of doom and continue to live good, meaningful lives. And I think that's kind of what I'm searching for right now. But I realize, you know, that's obviously not everybody's goal. We also need people out there banging the drum and being like, hey, this is a problem, why don't you people wake up? We shouldn't keep ignoring this.
Liron: Let me ask you this way, right? If you could snap your fingers and magically, say, 95% of the world's population would wake up with a 95% P(Doom), for the right reasons, the risk of superintelligent AI. If you could snap your fingers and do that, would you, or would you not, because you think the recoil is so bad?
Eneasz: Honestly, I might not. It depends on whether we have the technology to allow people to keep living normal lives with that high a P(Doom). That is the thing I'm worried about: that people will see those who have learned about the arguments for doom and accepted them, see how their lives are trending in the negative, and be like, you know what, this is an infohazard. I'm not going to go learn about P(Doom), because I've seen what happens to people who do.
If there were a way for people to have a high P(Doom) of 50% and continue to go about their jobs as, you know, accountants or janitors or anything, living, raising their kids, having barbecues with their neighbors, then I would absolutely push that button. But I don't know if we have that yet.
And again, maybe it's entirely just selection effects of the people who hear it right now. And if it were more widely disseminated, most people would just be like, yeah, okay, this is bad, this kind of sucks. But we have to keep living. Kind of like they did during, you know, the 60s through 80s when everyone was worried about nuclear Armageddon killing everyone.
Liron: Exactly. And kind of like 99.9% of humans who have ever lived who thought the chance of them personally dying was 100%, you know, they just accepted that. Or 99.9% of humans who accepted the reality that there's no such thing as antibiotics. Right. I mean, there's been a lot of acceptance of a lot of very uncomfortable truths throughout history.
Eneasz: Yeah, but I think personal death is different from species extinction.
Liron: Sure it is. Right. But in terms of discomfort, I feel like I get more than halfway to being uncomfortable just knowing that I personally am going to die and then knowing that everybody ever is going to die, you know, gets me maybe the last 40%.
Eneasz: Okay.
Liron: I mean, I guess, sorry, I should clarify: when I say I personally am going to die, I'm not one of those people who are like, I would sacrifice all future humans for myself. I guess what I mean is, learning that the human condition is that individuals die, I feel like that's already a big tragedy for me. Individuals in general die at some point.
Eneasz: Yeah, that is pretty awful.
Steven: Yeah. I didn't find anything objectionable with the way you laid it out. I think one can more viscerally grasp their own demise than that of all of humanity. I'm not emotionally connected to the people who will be alive a century from now, and so their existence or non-existence isn't something I lose sleep over.
Obviously, intellectually I'd like them to be there, but if I learned that they wouldn't be, I guess I'd be disheartened, but not as disheartened as I'd be to learn that I was going to die in a year. I think that does pump the intuition a lot more. And if that makes you a sociopath, then so am I. And I think we're probably not. So.
Liron: Yeah, I think definitely not a sociopath, guys.
Steven: That's right, we have feelings.
Eneasz: This "I'm not a sociopath" T-shirt is asking, and answering, a lot of questions.
Liron: Yeah, exactly. I mean to be fair, if I were a sociopath, I would probably try really hard to act like I'm not. But that said, I am not.
Eneasz: I feel like most people who become more okay with their death think in terms of, but my children are going to continue, and then my grandchildren will continue. And you feel better seeing the people that you love continuing to have lives going forward. And having that snuffed out is really just brutal.
And even if you don't have children, oftentimes you have a legacy or something that you leave to the world that you're like, I made this place better. And having it completely just wiped out is really hard to accept. But I think we have gotten a bit away from the topic. I'm gonna pull us back on track unless somebody wants to add more to this before I do.
Steven: I guess as far as the analogy you gave of recoil and shooting the scary thing, my brain's kind of trying to stretch that analogy to: you only hand guns to the able-bodied soldiers. You don't give them to everybody, right? You don't give your six-year-old the fear of death and, you know, hand them one.
The other thing with the analogy is that there's something to shoot at, something that you personally could shoot at, whereas I don't think that's the case here. There's still time, maybe, for me to become an AI safety researcher, but probably not realistically.
Eneasz: You know, I think there might be for you Steven, if we've got three, four years, the MATS program is only 10 weeks.
Steven: Yeah, but I guess, and maybe I'm just lazy... well, I'll tell you what, I am lazy, but in addition to that, I think there's just diminishing returns to what I could offer with no background in the field versus somebody who's been doing this for 20 years, or five, or ten.
Maybe I'd be able to help them along a little bit. I guess the point is, even if I could, most people can't. I don't know. I share Eneasz's hesitancy to just drop this knowledge into everyone's brains at the same time, because for most people it's just this background fact of something terrible coming.
I'm trying to think of other situations this is like. I like the, you know, nuclear Armageddon for our parents' generation, and maybe global warming for some people now; they feel that way, like, oh, it's going to kill us all by the end of the century. I think the downside to that comparison is that that's just not true.
Liron: Yeah, we can see it has a lot of customers. Right. There's a lot of global warming doomers, even though the best models say that the P(Doom) from global warming in terms of extinction is very low.
Eneasz: Yeah, yeah. I do think the people who are climate doomers are harmed by it. I mean, obviously partly because it's false, but their lives do seem to get significantly worse and they have less joy and oftentimes live less good lives than they would otherwise.
Liron: Right, right. But there's a big market of places where they can go. Right. If they're naturally tempted to go fall into doom pits. And yes, the existence of yet another doom pit or the convenience of falling into a doom pit is also going to affect their trajectory.
But I will just lay my cards out here. The hypothetical from before, would I snap my fingers and have everybody's P(Doom) be what I think is the correct P(Doom)? Yeah, absolutely. Because the resulting problem, that 20% of people are now struggling to deal with it, their lives are made worse, and they're not going to do anything productive to help...
Okay, well, I'm happy to accept that alternate problem instead of the problem of being doomed while nobody cares. That's fair, because I think we could also take steps to solve that problem once it's there.
Steven: And you know, again, disclaimer, not a sociopath. But even if we lost half of humanity to suicide after this revelation bomb dropped on them, it would still be better than losing 100%, sure.
Liron: Right? Exactly.
Steven: Yeah, I could see that line of reasoning, but something about me still wants to spare people I care about from despair.
Liron: But when you said no sociopath, no socio.
Eneasz: If I knew that people would be 50% likely to commit suicide if they learned this, I would absolutely tell everyone I know not to learn about P(Doom). It would not be worth it for the people that I love.
Steven: Yeah, I chose a ridiculously high number, maybe because 50% was the P(Doom) that Liron gave, but I guess I was just trying to illustrate the point. You know, even if it was as high as 95, it's like, well, as long as there are enough people around for infrastructure, for people to keep doing AI alignment research, then maybe it was worth it.
There's definitely a case to be made there, but it's not one that I find emotionally gripping, even if intellectually I can't see the problem with it.
Eneasz: This brings me to ask: since you do have a very high P(Doom), I'm assuming that you feel you have a meaningful, good life. I may be jumping to conclusions, actually. I shouldn't assume that. But how do you live a good life with a high P(Doom)?
Liron: So I haven't changed that much about my lifestyle. And you know, Tyler Cowen loves accusing high P(Doom) people of contradicting themselves. And he says stuff like, you know, why aren't you heavily in debt? Why aren't you shorting the market? Which as far as I can tell, aren't actually rational moves for me. So.
But you're asking a somewhat similar question, not accusatory at all. You're just asking how I live a good life. In terms of what I've changed, I mean, I have lowered my time horizon. So instead of thinking, hey, the actuarial tables say there's a 90% chance that I'll live into my 70s, it's more like, okay, well, my P(Doom) says that there's a 50% chance that I'll live to 2040, more or less.
So if there's things on my bucket list that I really want to do, I should try to schedule them before 2040 or even ideally, you know, in the next two years, because I even have a significant probability that the world ends in as little as two years.
And I mean, I have done that. I just didn't have a very long bucket list. So I'm glad that I got married and had kids, because that was on my bucket list. I obviously feel bad for the kids if they can't even grow up because of, you know, the AI doom situation. But I feel pretty satisfied that I did it. For what it's worth, you know, it was a drive that I had to reproduce and I love taking care of my kids for, you know, a certain amount of hours.
Eneasz: Also, on the plus side, childhood years are some of the most incredibly happy years. So it's not like they're living through the shit parts of their lives before they die.
Liron: Yes, exactly. Exactly. Yeah. So there's not much on my bucket list besides, you know, I like creating content and I like working on startups and I'm still doing what I like and I'm not delaying. I'm not one of those people who are like, oh man, when I retire, that's when I'm going to travel around the world. That's when I'm going to do what I like.
I've always kind of been just, I don't know, mixing fun projects with work. So, yeah, I guess that's my answer to how I live with the high P(Doom). That's how I, you know, do the impossible.
Eneasz: Yeah, that's a pretty darn good answer.
Steven: Do you have an approximate guess of how many minutes per day or week you spend thinking about it? You probably spend more than the average person, even the average doomer, just because you have, you know, this show to produce and you're constantly having these discussions.
But I guess I don't want to discount the almost tautologically straightforward but still worth pointing out advice that you don't have to sit and dwell on bad thoughts or thoughts that bum you out. It's not to say, delude yourself or lie about what you think is coming, but you just don't have to think about it.
If I was sick with a terminal illness, I could spend my remaining, you know, time lamenting my terminal illness or I could go forth and enjoy things. Right. And try to sequester that part of my day or, you know, part of my week for a specific window, but not let it ruin the time I have left. Right, Right.
Liron: So I don't dwell that much emotionally, and I think this is largely just a factory setting, right? People have a happiness set point. So I'm fortunate that I'm not one of those people who are attracted to doom. The accusation of, oh, you're talking about AI doom because you're a doomy person, is completely not true for me. I've got a pretty good happiness set point.
And I just don't spend almost any time being sad about doom. I consider, in fact, you know, I have kind of two brains, right? Subconscious mind and conscious mind. My subconscious mind is just paying attention to status signals and things like the weather. Hey, I live in good weather. You know, the savannah is good for hunting right now, right? That's the signal it's getting when I go out in California, let's say, and it thinks, you know, the tech industry is doing well, it's thriving, there's money to be made.
So my subconscious mind is like, yep, conditions look good, everything's fine. And of course my rational mind is saying, P(Doom) seems high. I don't see why the world wouldn't end; there are a lot of reasons I think it would. And my subconscious mind is like, okay, man, yeah, you keep arguing with your tribe about that. You know, have fun with that.
And literally, making my show: why do I enjoy making my show? Because I get to interact with people and argue and get subscribers. It's gamified for me, right? And my subconscious mind is like, yeah, you know, keep going on that. Right. So I just see it as a game. But of course, rationally, I do actually think there's a high P(Doom).
Timing and Economic Disruption
Eneasz: How soon do you think we should start seeing really irrefutable, drastic changes in the world, where even the doubters are going to have to say, okay, shit's happening? And then there's a second part to that question.
Steven: I was going to say two years ago.
Liron: Yeah. So I did. I have a failed prediction. You know, just to be straightforward here, to be accountable to my predictions: I made this one in 2023, when I saw the effect of chatbots replacing human workers. Because, you know, I have friends who told me, yeah, we definitely laid people off. And it's in the news. People are getting laid off because we're having AI do most of their job.
You know, not 100%, but it's just like you had a team of 100, now you only need 10 and they're more productive than before. And I'm not saying it's every job, but it's like if you were doing customer service via chat, man, you're probably getting decimated. So in 2023, I was like, look, I can extrapolate this trend. This is amazing what jobs these AIs are replacing.
So I thought that by the middle of 2025, which has already passed, we were going to see the US unemployment rate spike at least 2 percentage points, which is a lot given that it's at like 4% or something, right? Going from 4 to 6 is actually a huge increase. And it didn't happen; the unemployment rate stayed the same. I think the last unemployment numbers even came in lower than before.
So I definitely got proven wrong on that. But what am I going to do? Am I going to update? Nope, I'm going to just double down and extend my timeline two more years. I still think it's about two years out. I'm like Dario, right, Dario saying, everybody brace for unemployment. I'm going to go ahead and say, I think two years from now, okay, maybe three, four, five years, but it sure feels more like one to two years.
I think unemployment is going to really spike is my prediction. And I think people are already starting to get a whiff of that, right? It's already swirling around okay, you got laid off. Yeah, you can find another job. But look at this AI. Isn't it kind of close to replacing you? I think people are getting a whiff of that.
I used to try to avoid discussions of unemployment doom because I'm like, yeah, yeah, it's going to be bad, and then we're all going to get basic income and we're going to be richer than ever. Our basic income will still buy us more stuff than we can get now with our employed income. But after reading Gradual Disempowerment, I was like, oh yeah, when we all have no economic value, that's actually going to accelerate disempowerment.
Eneasz: Do you think it's going to be primarily in the economic realm, then, the things that are changing?
Liron: So gradual disempowerment is kind of a backup way to fail, right? So even if the AI doesn't purposely be like, okay, I'm sweeping all the humans away because I want resources, just the equilibrium is very likely just going to push humans out of the way more and more just because our ability to resist is very little. Because at the end of the day, we don't have power that's coming from our labor and from our thoughts anymore.
So the power just has to come from systems, right? Institutions remaining robust. But there's entropy. I mean, you know, the default state isn't for an institution to remain robust literally forever. So time is working against us.
Eneasz: Yeah. So hypothetically, let's say it's the middle of the year 2030, we fast forward five years, and not much has changed in the world except, you know, GDP growth is significantly higher than we expected, maybe the decay of institutions is higher than expected. But aside from that, there haven't been radical changes in the world. Are you revising your P(Doom), or are things still in line, you think, for a 50% chance of doom by 2040?
Liron: It's a good question to ask. How does somebody like me update to a lower P(Doom)? What do I need to see? And I think that the key to that is whether my threshold is met. Because I have a conditional prediction based on a threshold where it's the threshold of AIs being more powerful at steering outcomes than humans are. That to me is a threshold. It's what Eliezer calls optimization. And you can treat it as being synonymous with intelligence.
I prefer to just be like, whatever, don't worry about intelligence. What I'm scared about is the power to steer the future. And yes, in many ways smarter humans can steer the future better. But whatever, just worry about AIs that can steer the future better than a human can.
For example, if you put an AI in somebody's earphones and they come and do a construction job, pretty soon the AI is going to give them really good step-by-step instructions to be really effective at doing construction, even if they have no experience. And you can generalize that to lots of jobs, so you have a whole army working for the AI, and then the AI does the strategizing. All this stuff is going to happen at some point.
You can give the AI credit that it's reached the threshold where it can now steer projects and steer deliverables and steer high level outcomes, even country level projects, space programs, it can steer better than the best humans can steer. That is a point where if we get to that point and still there's no doom, that's when my hypothesis starts really breaking. Because my hypothesis is conditional on reaching this point.
Eneasz: Okay, what if we haven't reached that point by 2030? Do you expect us to start reaching that point within the next five years?
Liron: So not reaching that point is definitely a way to not be doomed. If you ask me for my timeline, and this is roughly synonymous with what an AI company would call the AGI timeline, or what the Metaculus prediction market would say if you asked, hey, when's AGI coming? Currently they'll say 2031, 2032. I don't think I have more insight about the timeline than that kind of consensus or what the AI companies are saying. They're all saying AGI 2027, which I think has now moved to 2028, 2029.
The people who wrote that, I think they're all roughly in the right ballpark. That jibes with my own understanding. I'm not an AI expert, but I do have a pretty deep software engineering background. And yeah, I agree it's probably coming in between one and 15 years, that's the most likely, maybe 20 or 30. I'd definitely be shocked if we're not seeing progress toward it on a decades timescale.
So to your question, I think there's two questions you can ask. How do I update? If we get to the threshold where I admit we have AGI, we have superhuman outcome steerers, and yet we don't see doom starting, then I have to update for sure. And the other question of wait, 10 years went by and we don't even see much progress toward AGI. We're kind of stuck very similar to where we are today.
I think then I'd be very willing to update my timelines of when we get to AGI, but I wouldn't update my conditional prediction that once we have true AGI, in terms of outcome steering, then we're doomed.
Eneasz: Okay, great answers. Thank you. I was wondering, so you've got 50% for P(Doom). What is the remaining 50%? How much of that is split between things just keep plodding along because we have an AI winter, versus eternal utopia, versus other things?
Liron: Well, first of all, most of the 50% of us not getting doomed, I think, is just basically pausing AI or not getting superintelligence. So if you tell me as an assumption, oh, you definitely get superhuman outcome-steering AIs, which seems like a likely assumption, if you tell me that's for sure happening, then I think I probably get up to 75% P(Doom), something like that. It's a very significant increase when you tell me that the AIs actually exist.
So I'm really hoping that these super intelligent AIs just won't exist because we'll pause AI or something, or we'll slow down or figure out alignment, take the time. But in terms of other scenarios, I think the most likely doom scenario is a good old fashioned Yudkowsky-style FOOM. I'm bringing FOOM back. I feel like people don't talk about that that much anymore.
Eneasz: FOOM on the order of days?
Liron: Yeah, I mean, even if it's not on the order of days, I think big parts of the plan will be in motion on the order of days or hours. So, for example, I think that the Internet is just full of very low hanging fruit in terms of computer security. I definitely think you can build a botnet in a matter of hours. Do you guys know Steven Byrnes?
Eneasz: No.
Liron: No, I didn't know him very well myself. He's posted on LessWrong a few times. But I recently read something he posted a week ago where he's talking about why he still sees FOOM happening. He's just collecting all his thoughts, and he's like, hey guys, keep your eye on the ball here. The original FOOM still seems very likely to happen, even though LLMs are the hot new thing.
And it's so tempting to study LLMs and to be like, oh, how are LLMs going to fail? How nice are they going to be? How far can we push them? He's basically saying, look, there's going to be another regime that he calls brain-like AGI. I'm going to have him on my show soon.
And he's saying at the end of the day, the AIs that actually get stuff done, they're not going to have a lot of the properties we've come to know and love for LLMs. There's going to be fundamental differences. For example, you're really going to see instrumental convergence kick in. That's his hypothesis, which I agree with. It's basically my hypothesis too.
Steven: Okay, can you elaborate on that a little bit?
Liron: Yeah. So he's basically saying, with LLMs, it's the same reason why GPT-4.5 kind of hit a wall, right? The curve bent, and suddenly it's like, okay, we can't literally just throw that much more scale at it, or at least we'd need many, many more orders of magnitude, because the curve is definitely bending. And yet you're still seeing AIs get more useful for a bunch of other reasons.
We're mixing in other puzzle pieces. Reinforcement learning is a puzzle piece that makes AIs do better. I don't know, solve math better because they mixed in reinforcement learning and they have LLM technology. I personally am not familiar with exactly how they mix, but basically we have other puzzle pieces.
But the same reason that makes LLMs really nice and have good vibes is also the same reason why they're not going to take us all the way to true AGI.
Eneasz: Wait why is that?
Liron: It's basically because they just can't. They just can't be evil, right? So it's like, yeah, they kind of act like they don't want to be evil, but also they can't be evil.
Eneasz: Okay? So it's an alignment by default argument.
Liron: People say, like, look at these LLMs, I feel so good about the path to AGI right now. I feel like Sam Altman, and certainly other people from OpenAI, and a lot of people working at AI companies, except maybe Anthropic, have this deep intuition of: look, we're stewarding this fine. Everything we're seeing out of our AIs is giving us good vibes.
The AI tried to help us. It's a good AI. You know, so a lot of good vibes are coming out of the LLMs right now. And I'm not going to lie, when I use the AI, I get great vibes. It's so helpful. It's like my doctor, it's helping me do projects. I get great vibes from the AI.
The only problem is, you know, the reason we don't see it scheming and creating real damage is just because its capabilities are limited. The LLM paradigm, right, that you have predicting the next token and the way that they're trained, it just happens to be incredibly useful at answering these questions and having a lot of really useful thoughts.
But the full feedback loop of letting it run by itself and achieve superhuman outcomes isn't there yet. It still needs humans to push it along and take a look at it and correct it. And the same reason why it's limited in that way is also the reason why it's not causing a lot of damage and it's sticking to good vibes.
Eneasz: Being that it doesn't have a lot of power.
Liron: Exactly right. So it has the perfect amount of power where it just can't go and do that much damage. And so the only time that we interact with it is times when it's being helpful.
Eneasz: Okay. And is the idea that different neural network AI systems are going to have more power, or that these LLMs eventually will?
Liron: So I think the most likely scenario is, you know, GPT-4.5 represents what happens when you just naively take the exact same architecture and shovel more data into it. That's probably true. You probably just need to mix in a few more ingredients. But it could be a small number of ingredients. Right. I have no idea how small the tweak can be. Probably pretty small. And my understanding is that they are actually doing that.
They're building new systems where they're just mixing things in different ways. They're saying, okay, let's combine them, let's route to the right thing, let's have them vote and take the best, or invoke different techniques at different times. But I think reinforcement learning is the single most powerful puzzle piece that they haven't mixed in that much. I don't know, deeper levels of self-play, combined with, okay, take actions in the real world and collect data about that, and then train the same way you train to win at Go.
Basically, the magic that makes you win at Go, I don't think, has been fully incorporated into today's LLMs. Right.
Doom Debates Project
Eneasz: I'm going to jump back to Doom debates because that was supposedly what we're here to talk about. And I realized that we have not talked about it very much.
Liron: Go for it.
Eneasz: Go for it. Okay, great. So this is your project to try to get the word of doom out to more people. How do you think it is? How do you think that's going so far? I know it's not been around for a very long time yet, but what do you feel like your progress is like?
Liron: So I think a good metric is watch hours per month. We just crossed 30,000, which feels pretty impactful: 30,000 hours of people's consciousness per month is getting spent watching the content. I mean, it's a drop in the bucket by the standards of the Joe Rogans of the world. But we're doubling about every three months. So if you give me four more doublings, suddenly I really get on the map as this is actually a popular podcast.
Eneasz: Fantastic.
Steven: Exponential growth is a curve we're all very familiar with.
Eneasz: Yes.
Liron: Yeah, exactly. And it's also a rising tide. Right. So I'm fully expecting that there's just going to be more interest in this category. And I do think that within the category, I'm doing pretty well. Certainly people in the category seem to know of the show. And I feel like there's some episodes that are kind of must-listen if there's a figure that you respect and they're finally coming to debate.
It's not like, hey, I've already seen this guy in a different show, so I don't need to see him on Doom Debates. It's like, no, no, no. You need to see him on Doom Debates because I'm the only one who actually pulls out their view. I guess that's my specialty. So I'm pretty optimistic that there's a tailwind. And it's also kind of a win win for me because if the field dies down, it's like, well, at least that lowers my P(Doom) if the field goes away. So either way, I win.
The Doom Train Framework
Eneasz: It seems like a tool that you've and I don't know if I'm correct about saying this because I haven't seen all your shows, but I've seen a few at this point. It seems like a tool you've landed on is the Doom Train. What is the Doom Train?
Liron: The Doom Train is a latticework of arguments. It's not really a linear train, but it's the idea that if you're going to be convinced that doom is high, you have to first be convinced of all these other dependent claims. And so the stops on the Doom Train, very high level: the first stop is, are we likely to get superintelligence anytime soon, or are we going to go a thousand years without getting superintelligence?
Robin Hanson is an example of somebody who's like, yeah, I can definitely see us going 100 years without getting super intelligence. I get the impression he's 50/50 on that. And he said something like that when he was on my show. So that's a very early stop on the Doom train. And he'll say something like, I've seen so many AI Summers followed by AI Winters. This is probably just another AI Summer. Okay, so bye bye, Robin. He's getting off on the first stop.
And then the next stop would be okay, how much headroom is there above human intelligence? Do you think Einstein is almost as smart as it's possible to be? Douglas Hofstadter, I think, remarked in the 70s when he was writing Gödel, Escher, Bach, that he has a very hard time imagining what it means to be a mind smarter than Einstein.
But I think he's repented since then. I think he's admitted that he's been very blown away and scared by how much LLMs are doing. And he has a different opinion now. So that would be a stop on the Doom train just being like, you know, what does it even mean to be smarter than Einstein? How much smarter can you be? My answer is a hell of a lot smarter in terms of using your brain to get outcomes.
And then another stop is the orthogonality thesis. Hey, what if really smart AIs just become moral? And another stop is instrumental convergence, where some people get off because they're like, I just don't see why they have to seek power or be hardcore. I think they can just be chill buddies.
And then the final stops are like, okay, yeah, sure, these AIs are going to be hard to align. This is kind of the Yann LeCun stop, where it's like, yeah, but we'll just do it, you know, we'll have projects and we'll grapple with it and we'll succeed, because that's what the human race does.
So roughly, if you aren't convinced by any of these stops, and I'm not, then you get to the end and you're at the last station, which is: oh, so P(Doom) is just high. I've walked through the entire train. And when I meet people who haven't given much thought to the argument, usually I don't have to walk them through the whole train step by step. It's more like a la carte: okay, just tell me where you want to get off. You throw your own objections at me in whatever order.
But when they come on my show, I usually just systematically try to do a spectrograph of which stops they want to get off at.
Steven: Yeah, I do, I like that explicitly laid out approach because that's how I've been handling it the last couple of years. Because I used to, if someone asked me about it at a party or something, if it was rare, but it came up a couple of times a few years ago, I would kind of give them the long spiel. And then I realized, hey, this might not be that interesting to them and I have no idea what parts of this they already know.
So now if I'm at a party and if, you know, I'm not someone who just interjects. But if the situation applies for it, I can lean in and be like, well, hold on, why do you think that? Or what do you mean? Where? And I've used the where do you get off the train? I like having this because there is those stops, people, that is where everyone I've talked to has gotten off at one of those places. And I like that you've got it listed out that way. That's really cool.
Liron: Yeah. I was originally going to call the show Doom Train, because the thing about a debate is I never change the guest's mind. There's actually one guest, whose episode is coming out, who actually updated a few percent. That was my greatest victory, up 5 percentage points after talking to me. But it's very rare. Usually there's a zero-point update.
But the reason why I think the episodes are productive is because the space of positions is so high-dimensional. Everybody has their own doom DNA. Maybe that's a new term I should use. It's like you go to 23andMe, and I want to sequence your doom genome, or get your doom spectrum in that analogy. You know, light sources have a spectrum, and different people have different spectra.
And I just thought that this would be a show cataloging everybody's doom DNA. These are all of our different positions in a high-dimensional doom space. Even though to me it seems pretty overdetermined, yeah, these are all kind of hopium stops, these are all very far from what you should just believe by default. But yeah, that's what makes the show interesting.
Eneasz: I personally feel a little bit of train fatigue. I think when we go down the entire track, I almost think it's better just to get a guest's one or two really big stops and just focus on those. And then maybe people who watch the show for a long period of time can piece together all the stops on their own. Be like, oh, you're on this stop of the Doom Train, you should watch the episode with, you know, Richard Baxter or whoever.
Because going through every single stop in the course of a show, at some point it's like, okay, I get it. We're going all the way to the end. I don't want to go all the way to the end again.
Liron: Yeah, yeah, it's a good point. And maybe I should even try to feel out beforehand what a guest's biggest stops are and be like, hey, today we're going to be debating your biggest stops, which are this and this, and that way save some time instead of hitting every stop.
Eneasz: Do you prep for individual guests sometimes? Or is it just I kind of feel them out as I go?
Liron: I do a variable amount of prep, and it kind of depends on either the prominence of the guest or my own personal interest in the guest. So if there's a guest that I'm only mildly interested in and who isn't prominent, then I might wing it more. But actually, I have very few of those kinds of guests on, because I'm really trying to make each episode count right now, just because I'm already at my limit of production. So that's not even a realistic scenario these days.
But, yeah, I mean, a typical guest is just somebody whose work I've followed. And so I go and I make a big outline of all the different things I want to ask them about, and I'll load up a bunch of podcasts of stuff they've done, or stuff to read. So, yeah, I'll do a little bit of the Dwarkesh style of diving into this stuff. I don't think I go quite as elaborate as him.
Robin Hanson was probably among my top three hardcore prep because, you know, Robin and Eliezer had a big doom debate in 2008. And I'm like, I'm bringing the doom debate back. Eliezer's tapped out, but I'm going to do it. I'm going to substitute in for the Eliezer position. And I really just read everything. I mean, I already read pretty much everything Robin Hanson writes, but I really went back and did the research.
And I even did a couple of episodes where you can see me training, like Rocky in a training fight, where I even brought on somebody else to wear a sign saying Robin Hanson and take on the Robin Hanson position. Or actually, I took on the Robin Hanson position and did an ideological Turing test against a different guy. You know, if you want to fight somebody, you have to walk a mile in his shoes. So that was, I guess, my single most elaborate prep.
And then a lot of times, when a guest is deep into a certain field and I'm not realistically going to get that deep in a week, I'm like, here, how about you help me make the outline, and then I'll come in and tweak it. So that's a big procedure I do, too.
Eneasz: Okay, neat. What's your biggest frustration been so far with a guest or with the show in general?
Liron: Well, I guess my biggest frustration is just that I feel like it'd be right for the show to have a bigger audience. The biggest bottleneck is the number of people who happen to know about the show: there's a bunch of people who would really enjoy the show and get a lot of value out of it, and they just don't know it exists, or they've only seen two seconds of it in passing. It's like, no, come watch the show, guys.
So it's a little bit frustrating that I have to climb these stairs of incremental growth, and it feels like a waste, because I have these good episodes dropping and the people who really should see them don't know that they exist yet. So my work isn't getting as much leverage as it could. This is a champagne problem, I guess. But it is kind of frustrating.
Eneasz: Do you have debates or maybe not even full debates, but conversations or even debates, I don't know, with regular people in your day to day life?
Liron: I mean, I am a pretty disagreeable guy. I try to be friendly, but I'm a natural contrarian. Right. So that's just where I'm comfortable. And so, you know, for me it's no surprise I've settled into a niche where a bunch of people think one thing is true, and I think they're totally wrong for the most obvious reason ever, and then I'll just debate them all day long. It's kind of like, if we're playing a first-person shooter, my natural instinct is I'm a camper, right? I'm not a very good sportsman. I'll just camp and shoot people. That's kind of what Doom Debates is.
Eneasz: It really reminds me of the old atheism wars or the new atheism wars. I guess it was fun. Just whenever you met somebody who was religious, you're like, oh, hey, this is going to be fun. Let's talk about how wrong you are. Yeah.
Liron: And I have kind of a one-track mind, you know. People accuse the Silicon Valley types of only wanting to talk about the thing they do at every party. Yeah, okay, sorry. I have a one-track mind. I can't help it.
Eneasz: Yeah. What's the hardest thing to get across to a person? I would, I mean, I guess a normal person, but maybe even a guest. What's the one thing you consistently have a hard time conveying?
Liron: So with a lot of the guests, you know, it comes down to discourse, or the jujitsu, the martial art of updating your beliefs. So I was really happy with my one guest who recently changed his mind. The episode's coming out soon. I commended him, because you can see he's doing the art, right? He's a student of the art.
And most people are just very steeped in more of the punditry or the high school type of debate, right? Where it's like, well, I'm just going to make points or I, you know, I'm going to defend my ego. So that is where I wish the guests would step it up in terms of being like, ah, yes, we are both integrating information. And let me show you my crux, you know, double crux. I will hand you my crux for you to potentially attack, right? I will plant my flag and let you try to capture my flag, as opposed to ducking and weaving and not even showing what it takes to change my mind.
Steven: You know, have you changed your mind on anything since you started the show?
Liron: So I think gradual disempowerment is probably the biggest one. You know, David Duvenaud, that is a smart cookie right there. He came on the show, he made a lot of great points. Because it's what I said before, right? Where I used to just be like, yeah, yeah, unemployment is not a problem. And I'm like, oof. Yeah, there's actually a pretty continuous slope here where we lose economic leverage.
I mean, okay, it's not continuous. I think there's a discontinuity at FOOM. But even if there's no FOOM, I think there's a very slippery slope. And I feel like that could be a bias, right, where some people are like, no, no, it's not as slippery as it looks, but this looks very slippery. And this is a new thing that I've learned.
Debate Quality and Rationalist Discourse
Eneasz: I think this is a thing that you don't really get outside of, not just rationalist circles, but rationalists who were there for most of the sequences. Because I remember this from the new atheism days. And really, any time you talk with anyone, people just do not change their minds in a conversation. I didn't change my mind in a conversation. I started off, you know, a theist. I was originally on the defending-God side.
And, yeah, it took months for me to move from that. There was never any one conversation where I changed. It was a slow process of coming back and updating over months of time. And so I don't expect anyone to change in one conversation, unless maybe sometimes if they've gone through the sequences.
And I think that's really what one of the major things Eliezer was trying to do. Right. Teach people the art of being able to update on the fly and say, oh, no, I think maybe I was wrong about this. I'm going to adjust a little bit. And that's just incredibly rare.
Liron: Yeah.
Steven: I mean, How to Actually Change Your Mind is a whole subsequence.
Eneasz: Yeah.
Liron: And I don't even know if I would have been here without the Eliezer training. Right. It's like, would I have figured out myself how important it is to make sure to actually change your mind sometimes? And double crux, you know, it would be fascinating to replay my life again, but without the fork in the road of reading the sequences.
Eneasz: Yeah.
Steven: I feel like the tools I had were less good in other domains. My personality and brain type was already primed for this kind of thing, so when I was introduced to it, I was, you know, engrossed immediately. But coming from the places I came from, you certainly don't learn great critical thinking skills in the trenches of the new atheism wars. You learn great argument skills.
Eneasz: Yeah.
Steven: I love the analogy to camping in a first-person shooter. That's what it felt like back then. It's like, oh, someone's making the "natural selection can't make complicated changes" argument. Let's dust off these papers, then. As much as we tried, and I think a lot of us did try, to act like, oh, you know, I'm gonna treat this like the first time, like a fresh argument, maybe they'll change my mind this time.
After your 50th one, you're like, okay, they make three kinds of arguments, this is the third kind, I know exactly what I'll have to say here. You mentioned James Randi. You know, he had contemporaries in the debunking field who would go listen to ghost recordings, and this will tie back, I swear, and they would say, yep, I treated it like it might be true every time. And I just don't know if I believed them.
And I guess what I'm trying to say is, the tools from those camps where I came from didn't have things like double crux and, you know, bottom-line reasoning and a lot of other things that really help elucidate the cases where it's, oh, you know what, I am thinking poorly in this situation. I think it was harder with the other toolsets to realize, oh, this is weak thinking versus this is stronger thinking, I'm actually putting in effort here.
Liron: Well, you know, I want to put in a good word for camping, though, because if you go back to the atheist wars, remember the Four Horsemen, right? It was Sam Harris, Daniel Dennett, Dawkins and Hitchens, right?
Eneasz: Yeah.
Liron: I mean, those guys were basically campers, right? Busting out similar arguments, but so eloquently. And I feel like they did a lot of good, moved society forward a little bit. I really feel like the world needs campers right now. You know, Doom Debates is basically me camping. And on one hand, I'm letting myself be in my comfort zone of camping. Right. I'm not stretching, like, okay, Liron, learn something new, enter a new chapter of life.
But I'm like, well, wait a minute. We really do need all the people who are ignoring this or thinking it's all good. We really do need to fear monger them. Somebody really does have to just sit here and camp. So, you know, that's. I'm taking one for the team, guys.
Steven: No, no, it's good. And I didn't mean to besmirch it entirely. I think what I was getting at is that it's not a mindset that puts you in, well, it's not a scout mindset.
Liron: Right, right, right, right, right.
Steven: And things like scout mindset, that's not a term that would exist without the rationalist community. That's the Julia Galef book.
Liron: Scout mindset is a great term. Yeah. And that's a really good book too. And yeah, I mean, I certainly try to maintain scout mindset. So that's the other thing, right. Is when people come to debate me, I think a common reaction, which I'm really happy with, is people are like, wow, you got the best out of him. You know, I'm normally frustrated by this person, but you got the best out of him.
And I guess the reason is because I step into their shoes, right? It's like pacing and leading. It's like, okay, you're saying this. Actually, that's the funny thing is probably more than 50% of stuff coming out of my mouth in debates is literally just ideological Turing test of oh, you're saying this. So then when this hypothetical comes up, your position implies this, and this is hypothetical. And all I'm doing is letting them correct themselves.
Because in my mind, the only time that I can really counter argue is when I've really nailed down what their position is. But it takes me the whole two hours to nail down what their position is. Then finally I'm like, okay, and this is why I disagree. And that's a wrap, folks.
But the whole time I can still build up their ego, being like, oh yeah, so your position says this, that's interesting. And I'm happy to compliment them on stuff I like. So unlike a Fox News type of debate or a high school debate, I really am doing double crux and outlining the other person's position. So even if somebody doesn't care about doom, I think it's rare to actually see this kind of debate out there in society.
Eneasz: The crazy thing is it feels like the lesser quality debates of, you know, the Four Horsemen. I don't want to call them lesser quality, different style of debates. Right. We consider them not as good because they aren't using the rationalist techniques to actually change minds. But they worked basically. They were more like displays of debating skill. It was a sport for the audience.
And over time, the audience, the nation at large did change drastically to give much less respect to religion in general and accept that, yeah, atheism is actually probably fine and just a way that people can live their lives. And you know, they, they seem to have a point. We, we.
Liron: Yeah. Well, you know, it's funny though, because when you go on social media, everybody's like, huh, why are people not doing atheism debates anymore? And my go to answer is I think atheism won.
Eneasz: Yeah, it's just accepted now. Yeah, right.
Liron: Because the thing is, people act like, no, no, you just have no idea what being a Christian is like. Well, fine, maybe I have no idea what being a Christian is like, but nobody is making arguments in the public square, and by public square, I just mean tech Twitter.
So they've, you know, self-deported right into niche forums, and they're not bringing that stuff to the main square. I don't see people on tech Twitter making the old argument from the George W. Bush, year-2000 era of we gotta stop stem cell research because God hates it, or whatever. Right. That's not part of the discourse anymore. And in that sense, atheism seems to have clearly won.
Eneasz: Yeah. And I don't know, it's not obviously wrong. Maybe there should be a bit more of that sort of just exhibition style fighting in public.
Liron: Yeah.
Steven: Trying to think of what that would look like. I mean, I guess get a really prominent "everything's gonna turn out fine" person. What would be awesome is to get Yudkowsky and Sam Altman in a room for two hours to argue about AI safety, publicly. Right. Because Altman had that blog post, what, a week or two ago, The Gentle Singularity, about how nice it is.
Eneasz: Yeah.
Steven: And would be nice to have somebody rip that to shreds who would also attract a big audience. So that's why I mentioned Yudkowsky.
Eneasz: Of course.
Steven: You should totally do it too. I mean, get him on, get him on doom debates.
Liron: Yeah. But, well, you know, I do reaction episodes.
Steven: Oh, really?
Liron: Yeah. So a lot of my episodes are not even me debating somebody. It's the debate that I wish I could have. So, for example, I did a Steven Pinker reaction episode, because I saw Steven Pinker going on somebody else's podcast, and, you know, something tells me he's not ready to go on my podcast yet. And he's just making all these weak points and the host isn't pushing back at all. So it's like, I'll do it. And I call it Somebody Is Wrong on the Internet: The Podcast.
Eneasz: Yeah.
Steven: Love it.
Eneasz: This was very much the early days of New Atheism. Thunderf00t and various other people just posting videos, doing that kind of thing.
Steven: That's a name I've not heard in several years.
Liron: I feel like you guys are even harder core about the atheist wars than I was, but I was pretty hardcore.
Eneasz: What advice would you give to the average rationalist talking to the average normie, their mom or their cousin or something about AI doom?
Liron: I mean, my general advice to rationalists on communicating: the disconnect that I see compared to my own style is that they care a lot about hedging, right, and being precise. And they don't do this move that I call zooming out, where it's like, you should be precise, but also zoom out and give the most precise low-resolution thing that you can. Be low resolution.
And that's like the perfect excuse. Oh, technically you didn't say something precise. Right. Because you said it at low resolution. You used few words to say it. You said it in an entertaining style, so you get a pass. And that's my move.
Eneasz: Okay. So much less of this hedging, the "I think" and "it's possible that" and those sorts of things, right?
Liron: Exactly. Because it is hard for me. I mean, I'm a fan of rationalist content, right? But it is pretty difficult for me to get through LessWrong posts. And they're probably doing something right, because I notice the community will upvote some posts that I find very difficult to get through, that are very long, and then other people will make fun of LessWrong for rewarding that particular style.
And I'm sure it has its virtues. You know, Scott Alexander's "Much More Than You Wanted to Know" posts, those are very good posts, but I personally skip them. It is, in fact, more than I wanted to know, at least at that density level. So I think that when rationalists are going to communicate with normies, the go-to move I recommend is just that zooming out.
Eneasz: Can you give an example of what a zooming out would look like as opposed to how people normally do it?
Liron: Yeah. So, for example, zooming out of my doom argument is me saying, instead of adding a million caveats: hey, I think AGI is coming soon, and I think we're not prepared for AGI. What do you think? That would be how I communicate the entire doom argument to a normie.
Eneasz: Okay, excellent. I guess since we're talking about argumentation a lot, you mentioned that you have some things to say about the state of rational arguments in the wild. With your experience trying to have viral debates or dunks while upholding rationality standards of discourse.
Liron: Yeah, I spend a lot of time on Twitter, and occasionally people make fun of me, generally using low blows. And I'm like, well, I guess you successfully got people to like your dunk. That's pretty messed up. I see A16Z as the king of this. Marc Andreessen acts like he's a Renaissance man, he's so enlightened, and he's read so many books. But then he writes things like the Techno-Optimist Manifesto, where he's basically firing low blows, you know, blocking people who disagree with him.
So he's kind of having his cake and eating it too, in terms of cultivating this reputation that he's at such an elite tier of arguing, while also flouting basic standards of discourse. This is going to be the theme of an episode of my show. I often do reaction episodes, and I want to do a personalized reaction episode on just Marc Andreessen the person.
Eneasz: Oh, neat. So what is the state of argument in the wild then? Pretty bad?
Liron: Oh yeah, it's definitely pretty bad on tech Twitter. I guess double crux is the thing that just hasn't quite worked its way into the discourse yet. A lot of things have. Asking about probabilities is less frowned upon than it used to be. So that's a good rationalist tool that's in the discourse.
But the idea of, hey, you're not telling me what I can do to change your mind, that's something I wish people did more. And then there's also the etiquette of: somebody starts a thread on a certain topic, you're arguing with that person, and in the middle of the thread one side kind of wins, but then the other person just pivots to a new topic. I have to call people out and be like, wait, hold on, hold on. You have to explicitly admit that we closed out the original topic before I'm going to engage on this new topic.
So that's an example of etiquette that I think people don't get.
Eneasz: When you call them out on that, do they actually acknowledge it? Or what happens then?
Liron: Like, 20% of the time they do. A lot of times they just double down on their pivot.
Steven: I think what's beneficial about that, though, is it's not necessarily catching the person. It's reminding the audience: hey, remember the argument we were having before, the completely different argument, the one I won? Now we can talk about this new thing. But that's the kind of thing that could fly right under the audience's radar if they're just watching the debate casually or something.
Liron: Yeah.
Steven: Or reading along with it.
Liron: That's right. The larger observation here is that we as a society need a discourse referee, or an arena for discourse. Right. You know, like the Commission on Presidential Debates. It's very domain-specific, but at least it's trying to do that job for presidents debating each other.
But then you go onto Twitter and you have these extremely important debates, you know, like Marc Andreessen telling everybody that AI is going to be fine. Or, you know, people lost billions on crypto, investing in Ponzi schemes that they thought were good use cases. So much would go better if we just had higher quality debate. But the problem with having higher quality debates is that right now somebody will do these low-blow moves and they'll win that way, right? They'll win dirty, and there's nobody blowing the whistle on them.
And so it's really wasting a lot of time for us as a civilization that we haven't designated somebody to play that role. And you know, you guys call yourselves the Bayesian Conspiracy, right? That's based on Eliezer's fiction of a society that has well-respected, deep institutions to maintain the integrity of changing your mind, which includes the integrity of debates. And we don't have that.
Our society is really failing when you think about how good you'd expect it to be. As a kid, I thought that society would have higher quality debates. I thought that the political spectrum wouldn't be a one-dimensional left-right thing. But no, it turns out a lot of these things still need to be patched in.
Eneasz: But. Okay, but that's not going to happen, or at least certainly not within the next several years, which is, you know, the timescale where we're having to have these debates in. So what do you do in the meantime instead?
Liron: Well, my quick fix is to just make my show a sanctuary. When you come on my show, you're either going to debate me and I'm going to use double crux, or it'll be me moderating a debate or having another moderator that I trust. And we'll make sure that the debate is high quality. So that way if somebody has at least come and made their case on Doom Debates, you know, they should get an award for being like, hey, I engaged at a high standard.
Eneasz: Okay. All right, how does that fit with your going low resolution when talking to more normie people?
Liron: Well, if you watch a whole episode of Doom Debates, then the level of resolution is going to be not low, not super high, but you know, just the right level of resolution to figure out the crux of disagreement.
Eneasz: Yeah, can you go more low resolution out in the wild and not be as rigorous? Or do you think this is really important? Would you rather just funnel people towards doom debates rather than chatting in public? I guess is the question.
Liron: Well, that gets into etiquette. Right. Because my natural tendency, and this is frankly a very Aspie thing, is: oh, you want to talk about this topic? All right, let's get deep into this topic. And that just doesn't play well with the normies. That's just not how social interaction is expected to work. So I really just try to feel out the conversation.
And usually, feeling out the conversation, they don't want to go deep. Right. So you just have to be mindful of that as a matter of communication skills.
Steven: Yeah, yeah. That's something I've had to adjust to in the last few years: somebody will say something that to me is outlandish, and I'll be like, okay, let's be polite about this. And I'll say, I've never heard that argument before, can you elaborate on that? That's surprising. And then, like you mentioned, there's the social etiquette of how people respond to "let's dig deeper." Most people hate that.
And I think that's really weird because I love that. You know, we had our monthly rationalist meetup last night, and that's the kind of culture of the room. If someone says something and they'll be like, hold on a second. Let's unpack that. And I'm like, hell, yeah, let's do this. Rather than. Well, I don't really know and I don't really feel like it.
The other thing about doing it in public is, you know, there's usually a smaller audience, but also you're not gonna spend two full hours talking with them about it. So it's different there. I think there's lots of disanalogies between the structured show and bumping into somebody at, well, I was going to say, if you're at LessOnline, the people there might be down for a two-hour random sit-down debate. But LessOnline was a party.
Eneasz: Yeah, LessOnline was created specifically as a space where we can have these sorts of interactions.
Steven: Yeah, exactly.
Liron: You should sniff it out. And I mean literally sniff: if it smells like BO, you're probably good to go into a rant.
Eneasz: Okay. I was thinking about the Four Horsemen, and I'm especially thinking of Hitchens. They were the kind of people who could have high standards of discourse, and oftentimes did, which is why we loved them. We really liked watching them do that. But one of the things I really loved about Hitchens was that he didn't always have to do that. When other people went low, he would go low too, and just sling back. And that was a lot of fun to watch.
And do you think that's just a bad idea, and we shouldn't do that at all and should retain our high standards? Because it was really fun to watch him do that.
Liron: So I've heard some Hitchens, and I don't have a specific memory. I mean, he's definitely got a way with words, the late Hitchens. I don't have specific examples of being like, man, that was a low blow, I wouldn't do that blow, but I got some entertainment out of it. But I feel comfortable in that I don't feel like I have to go low, because I already have the tool of making fun of other people going low. It's like I still do a dunk, but it's a high-integrity dunk. So I kind of get the best of both worlds.
Steven: I think that there's that old adage, especially of Internet debate, don't debate with idiots. They'll drag you down to their level and beat you with experience. What was amazing about Hitch was that he could be at that level and still cream them.
Liron: Yeah, yeah, yeah, right?
Steven: It wasn't from lots of experience of being an idiot. I think it was just lots of experience debating people. And I think there's something artful about his low blow moves. I can only think of a couple where, you know, he calls somebody out on their bad character or something. I'm thinking of a debate that he had. He and Stephen Fry were debating two Catholic bishops or something, and he was like, you're telling me that my friend is an abomination because Stephen Fry is gay. And that would almost be a low move to drag out somebody and be like, you're telling me that this cute puppy is a bad thing, and yet it really works there.
Eneasz: Yeah. The hard part is finding the people with the charisma and the skills to pull that off.
Steven: Well, luckily we're talking with one.
Liron: Yeah. I mean, it's true. If you look at the top tier of debaters out there, you usually don't find people who are such great role models for having integrity. So, yeah, I don't know. Maybe Sam Harris.
Liron: I'm actually really happy with Sam Harris overall. I have very few nitpicks for him. He's kind of a demonstration that it's possible. He's kind of low on the entertainment-dunking side. I mean, he still dunks, but he does it in such a calm, measured way. I actually see myself as being, I don't know, higher on the entertainment-showmanship axis than Sam Harris.
But I'm a huge fan of Sam. My point is just, I want to get another chip, right? Another star on the walk of fame of debaters that actually made it while having integrity.
Eneasz: Nice.
Steven: I want that for you too. I think it's important work that you're doing, and it's nice that you've got the approach that you do, your commitment to intellectual rigor and integrity. I'm scrolling through the backlog of episodes. I haven't looked at them all, and I certainly haven't listened to them all, but I'm looking forward to many of these. For listeners of the Bayesian Conspiracy, I'm plugging Doom Debates. Check it out. It's great.
Eneasz: Oh, crap. Okay, real quick, since we're almost out of time: do your guests ever get mad at you, at how directly you challenge them, or is this something they expect coming in?
Liron: So this has worked out really well, because sometimes a guest will come in, and from my perspective, they're so obviously wrong, and I just keep pointing out how they're wrong for the whole episode. I'm like, all right, nice, that was a good episode. I feel like I made my points, I got my dunks in. And then they walk away, and they're happy with their performance. Their audience was like, oh yeah, that guest slayed you, Liron. And my audience is like, Liron slayed.
So it's just this nice hologram type of toy where everybody's just seeing the picture that they expect. And I'm like, okay, well, great. You know, they walk away feeling so good. It's like, hey, we should do another round two. So, you know, everybody wins, I guess.
Eneasz: Fantastic.
Steven: That's also reminiscent of the new atheism debates. I think a lot of the time people would go in for their preferred side and watch their preferred side win and think it was a landslide. And granted, I always thought that it was the new atheists that were pretty much coming out on top, but I think that the other half of the audience didn't feel that way.
Liron: I do think that there is the hidden audience of independents who are slowly getting their minds changed a little bit every time they watch an episode. And that's really why I do it.
Steven: Yes, absolutely. And also, I have to imagine that they're probably expecting the debate style format and maybe some hard pushback because it's not called, you know, doom friendly discussions. It's called Doom Debates.
Eneasz: Right. So Doom tea and crumpets.
Steven: All right.
Conclusion and Recommendations
Liron: I mean, I hope that it becomes a mark of honor that somebody has the balls to come on Doom Debates. I mean, I certainly respect them more than people who screen their other podcasts. Yeah. Before we wrap up, let's do a shout out to the Bayesian Conspiracy. How long have you guys been doing the podcast?
Eneasz: Oh, thank you. Gosh, it's nine years now.
Steven: Coming up on nine, I think.
Eneasz: Yeah.
Steven: Either we just passed it or. Yeah, that sounds right.
Eneasz: Okay.
Liron: Yeah, I listened to a good amount of episodes over the years. I think Marshall Polaris first got me into it and this was all the way back in 2017. So I'm just like, yeah, I've been listening to your podcast for a little while. Oh, wait, I've been listening for eight years. Pretty much the entire lifetime. It's like time really flies.
Eneasz: Yeah, it's been crazy, but yeah.
Liron: I mean, I'm glad you guys exist.
Eneasz: Thank you.
Liron: The world needs more, you know, quality Bayesian perspectives.
Eneasz: Ah, well, thanks and thank you for putting on Doom Debates because the world really needs to wake up to this thing too.
Liron: All right, so viewers, go watch. If you watch one show, go watch the other show right now.
Eneasz: Yes. And listeners, yeah, go check out Doom Debates right now. We will. You know what, what are the two episodes that you're most happy with, that you would funnel people to if they only had time for one or two?
Liron: Hmm, let's see. So, Mike Israetel. If you want my most popular episode, with a YouTube star who's a bodybuilder and an AI pundit, go check out me vs. Mike Israetel. And if you want an episode where it's just somebody who's really thoughtful and I'm basically interviewing him, not so much debating, then go check out David Duvenaud.
Eneasz: Excellent. We will have links to both of those in the show notes.
Liron: Nice. Hear, hear. What are your two most recommended episodes?
Eneasz: Oh, shit.
Steven: I was hoping you wouldn't ask.
Eneasz: So I've had it on my task list to make a best of for a long time now because, like, I looked at our archives. I'm like, Jesus Christ, 240 episodes. I've had people ask me, like, what should I start with? These are between one and two hours each and I don't have time for this many hours. And I'm like, I don't know.
Steven: Here's what I tell people, because this came up a few times at LessOnline and stuff. I would never wish the entire backlog on my worst enemy. I certainly would never listen to it, and I was there. So I would say sort by most recent, because the newer ones trend better than certainly the earliest ones, and just scroll through and find a guest or a title that sounds interesting.
You know, a lot of the time, the ones that were the most fun for me to record might not be the most fun for people to listen to. But as I mentioned, I was listening to Julia Galef's Rationally Speaking podcast back in the day, and also Brian Dunning's Skeptoid, and he's been on our show twice. And that's awesome, but most people probably wouldn't feel the level of excitement about that that I do. So, yeah, start from the top and find a fun one.
Eneasz: Yeah, I would recommend something, you know, in the 190s to 220s range. I really liked our episode with Tracing Woodgrains on the social justice religion. Simulacra levels with Zvi was really good. I thought he was really interesting to talk to, and it was nice to nail down what the simulacra levels are.
I know this isn't everyone's cup of tea, but I really enjoyed talking with Lucy Belmont about the world's most wholesome gangbang.
Liron: Yeah, that's right. I remember that. That was a good episode for sure.
Eneasz: Yeah, that was a lot of fun. But our topics are very varied. The old AlphaGo episode, I mean, that was a long time ago now, but I found that one really interesting just because of how it opened my eyes at the time. But yeah, pick something that sounds interesting. I guess any of those three are great ways to start.
Liron: Yeah, I think it's funny listeners. You guys can go and search Lucy Belmont, and you're going to see two nerdy guys talking about a gang bang.
Steven: Yes.
Eneasz: It's a lot of fun.
Liron: Yeah. Well, thanks so much for doing this, longtime listener. And I'm honored that you guys did an episode with me.
Steven: Yeah. Thank you so much. I'm a new listener to your show, but I'm excited about it.
Eneasz: So thank you so much for coming on.
Steven: Perfect.
Eneasz: Great. Thank you for joining us.
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates