Louis Berman is a polymath who brings unique credibility to AI doom discussions. He's been coding AI for 25 years, served as CTO of major tech companies, recorded the first visual sighting of what became the dwarf planet Eris, and has now pivoted to full-time AI risk activism. He's lobbied over 60 politicians across multiple countries for PauseAI and authored two books on existential risk.
Louis and I are both baffled by the calm, measured tone that dominates AI safety discourse. As Louis puts it: "No one is dealing with this with emotions. No one is dealing with this as, oh my God, if they're right. Isn't that the scariest thing you've ever heard about?"
Louis isn't just talking – he's acting on his beliefs. He just bought a "bug out house" in rural Maryland, though he's refreshingly honest that this isn't about long-term survival. He expects AI doom to unfold over months or years rather than Eliezer's instant scenario, and he's trying to buy his family weeks of additional time while avoiding starvation during societal collapse.
He's spent extensive time in congressional offices and has concrete advice about lobbying techniques. His key insight: politicians' staffers consistently claim "if just five people called about AGI, it would move the needle". We need more people like Louis!
Timestamps
00:00:00 - Cold Open: The Missing Emotional Response
00:00:31 - Introducing Louis Berman: Polymath Background and Donor Disclosure
00:03:40 - The Anodyne Reaction: Why No One Seems Scared
00:07:37 - P-Doom Calibration: Gary Marcus and the 1% Problem
00:11:57 - The Bug Out House: Prepping for Slow Doom
00:13:44 - Being Amazed by LLMs While Fearing ASI
00:18:41 - What’s Your P(Doom)™
00:25:42 - Bayesian Reasoning vs. Heart of Hearts Beliefs
00:32:10 - Non-Doom Scenarios and International Coordination
00:40:00 - The Missing Mood: Where's the Emotional Response?
00:44:17 - Prepping Philosophy: Buying Weeks, Not Years
00:52:35 - Doom Scenarios: Slow Takeover vs. Instant Death
01:00:43 - Practical Activism: Lobbying Politicians and Concrete Actions
01:16:44 - Where to Find Louis's Books and Final Wrap-up
01:18:14 - Outro: Super Fans and Mission Partners
Links
Louis’s website — https://xriskbooks.com — Buy his books!
ControlAI’s form to easily contact your representative and make a difference — https://controlai.com/take-action/usa — Highly recommended!
Louis’s interview about activism with John Sherman and Felix De Simone — https://www.youtube.com/watch?v=Djd2n4cufTM
If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares — https://ifanyonebuildsit.com
Become a Mission Partner!
Want to meaningfully help the show’s mission (raise awareness of AI x-risk & raise the level of debate around this crucial subject) become reality? Donate $1,000+ to the show (no upper limit) and I’ll invite you to the private Discord channel. Email me at liron@doomdebates.com if you have questions or want to donate crypto.
Transcript
Cold Open
Louis Berman: 00:00:00
I'm flabbergasted by the anodyne sort of reaction to this by your average discussant. There is something dangerous happening and we have to talk about it and think about it.
No one is dealing with this with emotions. No one is dealing with this as "oh my God, if they're right, isn't that the scariest thing you've ever heard about?"
Introduction
Liron Shapira: 00:00:31
Welcome to Doom Debates. My guest today, Louis Berman, is an x-risk activist, lobbyist, author, trader, coder, astronomer. He wears many hats.
He's been a founder and senior leader at five different tech companies. He's currently the Chief Technology Officer of a small currency trading startup called Squid Eyes.
He was the chief technologist for EPAM, which is Pennsylvania's largest software engineering firm and has 60,000 employees. He's coded AI software for 25 years, mostly trading systems.
He's also recently been really into x-risk lobbying, which is how he first got on my radar. You may have seen him on John Sherman's podcast, For Humanity: An AI Risk Podcast, because he's lobbied over 60 politicians and staffers both in the United States and abroad for PauseAI, which is also an organization that I'm part of.
His recent work includes two books on AI x-risk. The titles are "An AI Safety Primer" and "Catastrophe: The Unstoppable Threat of AI." Louis Berman, welcome to Doom Debates.
Louis: 00:01:37
Thank you very much. It's a long list. I swear it doesn't seem so interesting from this side.
Liron: 00:01:44
I find it pretty interesting. Definitely an eclectic mix. Recently you're really standing out as one of the people who's doing the most types of useful work for the Pause AI movement and just trying to save humanity instead of just walking into the razor blades with a blindfold.
And for the audience, I gotta do a disclosure here. Louis has donated to Doom Debates to support the show. He is a Doom Debates donor. That doesn't mean that my opinions are going to be totally shifted by that, but I didn't want to not let you guys know that because no guest has ever been a Doom Debates donor before.
Louis: 00:02:24
There was no method to the madness, unfortunately. It was more of just, I want to see more people seeing Doom Debates.
John Sherman and I - I'm a board member of Guardrails now, the parent organization of For Humanity Podcasts. We just want more of this stuff to be heard, and not by me. Lord knows I'm the least interesting person in this discussion.
We want more of you for starters, Liron, and we want your guests, and we want to just raise the level of discourse.
Liron: 00:02:58
Hell yeah. This show is all about raising the level of discourse. And it's also about communicating a high P(Doom).
I totally get why you lumped this show together with John Sherman's show. These two shows stand out to me as the only ones where we keep it in mind that P(Doom) is high. We're not just like "oh, this is interesting" or "what's going to happen with this industry?" or "how many jobs is it going to take?"
It's more like, "Hey, are we literally all going to die?" I feel like all the other shows besides our two tend to lose the plot. So thank you for supporting High P(Doom) Media.
The Anodyne Response to Existential Risk
Louis: 00:03:31
It's almost a tagline. It seems pretty obvious to me that mucking about with smarter than human AI very naturally is a dangerous thing.
I'm flabbergasted, just astounded by the anodyne sort of reaction to this by your average discussant. "Oh, there is something dangerous happening and we have to talk about it and think about it."
No one is dealing with this with emotions. No one is dealing with this as "oh my God, if they're right, isn't that the scariest thing you've ever heard about?"
I'm always amazed when I talk to rank and file about this, but I'm also amazed with people like my classic example is Geoffrey Hinton. Brilliant guy, admire him, love the fact that he quit Google to talk about it.
But it's always so even-handed, British, measured. "It may happen, it may not, it may be as bad..." I would rather be grossly wrong and say something direct that is unambiguous, that you can then say "that guy's an idiot" or "that guy's smart" and go from there, than to try to hedge my bets on every damn sentence that I do.
And then of course there are other people like Ray Kurzweil, who I think are just - even though I love him, wrote a chapter in my book about him - I think they're downright delusional.
By the way, if that's a soundbite, "Ray Kurzweil delusional" - this notion that by default, it always turns out great no matter what is so amazing to me.
I think you should be quaking in your boots, even if it's gonna turn out to be the best thing possible, because there is more than a 0% chance that literally every human on the planet gets killed.
I like to use the word murder. John Sherman hates that word by the way. He has to talk to other powers that be, but I consider it murder. The reason I consider it murder is because ultimately, if it happens, I think it will likely be deliberate.
Deliberate in the sense of actively deliberate, or deliberate in the sense of "I know it's gonna happen." I don't think it will happen unknowingly.
Liron: 00:05:12
It'll be done in cold blood.
Louis: 00:05:15
Exactly. Even if we're just a side effect casualty, like if I see a big trail of ants that I have to stomp on, and I don't have anything against the ants, but if it enters my consciousness that I'm stomping on a bunch of ants, that I'm killing the ants in cold blood, you could say I'm murdering the ants.
I look up the definition, right? It has to do with volition. It is not manslaughter or AI slaughter. It's not an accidental thing.
I don't believe it will be accidental, even if it doesn't rise to the point of cognition. We are so far separated - us and the ants - that we don't necessarily think of ants as worthy of any humanness or anything to say that they're worth considering.
But from the ants' perspective, if they had the perspective, of course they count, they matter.
P(Doom) Discussion Begins
Liron: 00:06:14
When you call Ray Kurzweil delusional, I see what you're getting at. I certainly agree there's a kernel of truth that there's so many people like him that have the arguments in front of them and they're so intelligent on so many matters, yet they somehow walk away with a very low P(Doom).
They can't even grant 5% or 10%. They walk away with less than 1%. Just to call out one name, Gary Marcus came on my show. I thought he did a great job, he engaged at a very high level.
I have zero problems with the quality discourse that he engaged in on my show, but I still have a beef with how he walked away still saying that his P(Doom) is 1%. I'm like, really? I feel like something's gone wrong with that reasoning process.
To his credit, he later messaged me that he's gone up to 2% after recent events. But I'm like, okay, so where do you think you're going next? Do you think you're going back to 1% or do you think you're going to 3%?
Louis: 00:07:08
Let me compliment you firstly on helping me get around this whole P(Doom) sort of thing. It was sort of peripheral before you took it on as a thing.
At various points in my life, I would say I had a P(Doom) of 50%, 100%. I happen to like Roman Yampolskiy's 99.99999% eventually thing. The calendar is really the issue.
From my perspective, I think there is non-zero - I have a non-zero P(Doom). As far as I'm concerned, when playing around with the lives of 8 billion humans, and not to mention all the humans that will come after, that's more than enough.
I'm astounded by people who are not worried about this. That's really my message - astounded.
Liron: 00:08:04
I'm astounded too. I get the same feeling as other topics where I feel strongly about, and then I look at the luminaries and I'm like "what?"
One of them is as an atheist, right? Sometimes I see really intelligent people who are like "yeah, but I'm an Orthodox Jew because I'm crazy like a fox. The atheists don't understand what I have." I'm like, okay.
That's one of the biggest lessons of being a grownup - you see other people around you who you respect so deeply on one dimension, in one area, and then you see their scholarship or their thought process in another area, and you're like "what the hell? How are you so high and so low? Why is this such a grab bag of takes?"
In the case of Ray Kurzweil, he's a genius with his companies. He's a genius inventor - the Kurzweil synthesizer. He's had success in business. He's had amazing success extrapolating Moore's Law. I think we all have to bow down to the foresight there. He was righter than a lot of us thought, including myself.
I will say this though - I don't think he's delusional, per se. I wouldn't use the word delusional. I would use the words hoping, coping, wishful thinking.
Louis: 00:09:10
That's why it's great that you're not the only person on this screen. I have decided to not pull any of my punches at all. If I'm wrong and I'm to be stoned in the marketplace, then so be it.
I want to not only talk what I think is likely, but I want to live my life as if it is. Tomorrow I'm gonna be closing on a house, which I term my bug out house.
Living as if P(Doom) is High - The Bug Out House
Louis: 00:10:57
I very much doubt that if it goes bad, which I think it will go bad, it will go bad necessarily like Eliezer loves to talk about - basically "la da da, we're all living our lives, la da da da, and then everyone drops down dead."
The whole AI 2027 vibe - half of the path was it all goes great until AI accrues its power, and then magically kills us all in a relatively short period of time. And one hopes, less painfully.
My fear is that the route to that point, however it turns out, is not going to be easy. I think there's gonna be lots of things - privations maybe, war maybe, other things. So I'm preparing for it.
Liron: 00:11:59
What is that term? Privation.
Louis: 00:12:00
Privation is to have basically bad things done to you. It's to have stuff taken away. You have to live without food.
Liron: 00:12:15
Right, like years of rationing and privation. I've described it as living like a caveman.
Louis: 00:12:20
Yes. Even if I was the smartest person, let alone someone like Eliezer who's breathtaking in person - if I was as smart as him, I would not be able to figure out what's gonna happen. None of us can.
All I know is the very simple thing. We are having unprecedented change in our society. I believe that smarter than human AI will quite literally be created by humans. That is the previous purview of the gods.
Liron: 00:13:00
When you say the gods, do you mean evolution by natural selection?
Louis: 00:13:01
Exactly. I am an atheist 1000%. What I mean is in the traditional sense, we said the purview of the gods - Zeus made, Athena made, whatever sort of thing in that exact same way.
Liron: 00:13:23
I think that's an insightful remark - this is actually the thing that made us, the process that made us, and it has a modicum of intelligence or optimization. So the way you're using it, I actually support that usage.
LLMs and the Simplicity of Intelligence
Louis: 00:13:44
Can we do an aside? I am amazed by LLMs. There's nothing in them.
Liron: 00:13:47
No framework. There's nothing.
Louis: 00:13:49
Nothing. In my professional career, my teams and I have worked on million-line pieces of code - never by myself, I should say. No human works on a million lines of code himself at one time, or 10 million, or whatever that big number is.
An LLM is not a million lines of code. It's not 10 million, it's not even a hundred thousand lines of code. Indeed, if you look at the LLMs, there are a few thousand lines of code maybe, and the rest is all about data marshaling and very conventional coding.
So I'm just amazed. And if it is that simple, it should naturally come to you that wow, humans can do this because we can do simple things.
Anyone who's experienced these tools - I'm a coder by trade. I code every day. I work on trading systems mostly. I cannot tell you the visceral feeling of how it has been.
I used to fight with these things. Today I wrote 4,000 or 5,000 lines of code - I am using air quotes. I asked things to happen and there were a lot of problems. I didn't get what I want and I had to go back. But an hour later, suddenly I have 5,000 lines of code that does things in many ways that I can never do.
So programmers always love to talk about unit tests - you take a piece of code and then you make some testing code that applies to that little piece of code. You're supposed to test every aspect of your little piece of code with a unit test. But we never do because we're not being paid to make tests. We're being paid to make functionality.
Well, now you can have thousands of lines of unit tests and ensure all those little assumptions. You said "I want this number to only go from one to eight" - let's test it. They make sure the code does it. It's astounding to me.
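The unit-test idea Louis sketches - "I want this number to only go from one to eight, let's test it" - looks something like this in practice. This is a hypothetical Python example, not code from his actual trading systems:

```python
# A sketch of the kind of unit test Louis describes: the code promises a
# number stays between one and eight, and the test pins that assumption down.
# (Hypothetical example, not from any real trading codebase.)

def clamp_lot_size(requested: int) -> int:
    """Clamp a requested lot size into the allowed range 1..8."""
    return max(1, min(8, requested))

def test_clamp_lot_size():
    # Values inside the range pass through unchanged.
    assert clamp_lot_size(3) == 3
    # Values outside the range are pulled back to the nearest bound.
    assert clamp_lot_size(0) == 1
    assert clamp_lot_size(100) == 8
    # Every input, legal or not, ends up within 1..8.
    for n in range(-10, 20):
        assert 1 <= clamp_lot_size(n) <= 8

test_clamp_lot_size()
```

His point stands: an LLM can cheaply generate hundreds of tests like these, pinning down assumptions that human programmers, paid for functionality rather than tests, rarely bother to write out.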
Liron: 00:15:51
That's good perspective. You're confirming what a lot of us think who watch the show and have a high P(Doom) - yes, LLMs are amazing. They're a big improvement on what we had before.
LLMs, their code is compact, and it's mostly in the parameters, and we don't really understand what's going on inside the parameters. But we figured out the secret formula of how to make this really general code that can learn.
It also seems like a lot of that secret sauce overlaps with the brain's secret sauce - the brain has a bit of additional secret sauce of its own. But a big chunk of the missing secret sauce has been uncovered in the last few years.
Louis: 00:16:32
Some of this stuff is old being new. I forget when Markov chains were created, but it's more than a hundred years ago. The building blocks are not - they're novel in the sense that we put them together, but a lot of the building blocks have been around for a while.
But it doesn't matter. I'm not claiming any expertise or special insight to this. I'm just claiming I am personally astounded by something that I could barely use a year ago now is writing very, very complex trading code.
That's not the thing that I'm worried about. It's the generation after that or the next generation of that, whether it be agents or whether it be compositional or whether it be MCP on steroids.
MCP - model context protocol - is little APIs that can provide some sort of functionality to an LLM and allow some special thing. There might be an API to launch a nuclear bomb, and the LLM somehow finds it, has access to it, has the access control list and the connections to actually issue the thing. And it does.
I don't know what it's gonna be, but I see it coming.
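For readers unfamiliar with the pattern, here is a heavily simplified sketch of the tool-calling idea behind MCP. This is illustrative Python, not the actual Model Context Protocol SDK, and the tool names are made up:

```python
# A much-simplified sketch of the tool-calling pattern Louis is gesturing at
# (NOT the real Model Context Protocol SDK): tools register themselves, and
# whatever the model asks for gets dispatched - which is exactly why access
# control matters.
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register a function as a tool the model runtime can call."""
    def register(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("get_quote")
def get_quote(symbol: str) -> str:
    # Stub standing in for a real data source.
    return f"quote for {symbol}: 1.2345"

def dispatch(name: str, **kwargs) -> str:
    """What the runtime does when the model emits a tool call."""
    if name not in TOOLS:
        raise PermissionError(f"no such tool: {name}")
    return TOOLS[name](**kwargs)

print(dispatch("get_quote", symbol="EURUSD"))
```

The real protocol adds discovery, schemas, and transport, but the core risk Louis points at is visible even in this toy: whatever is registered and permitted, the model can invoke.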
What’s Your P(Doom)™
Liron: 00:17:46
For the viewers, I think you're bringing credibility to this question as a polymath. You mentioned the 25 years of coding AI and you've worked on trading - currency trading, maybe other types of trading and just software development, CTO of a very large organization. So you understand software.
You've been an author, now you're a lobbyist and an activist. And you were even an astronomer. That was a pretty big part of your career. You told me that you got the first visual sighting of the planet that was later named Eris, which helped topple the chain of dominoes to say "oh crap, Eris is bigger than Pluto, neither are planets."
So you're a polymath. I think that gives you some credibility of - you have a fresh perspective. You can look across fields, synthesize information, and when you look at the AI doom argument, then ready for the big question.
Louis: 00:18:48
So my P(Doom) previously would've been P(100). Not even the littlest, littlest, littlest niggle, but I decided that it's the wrong way to say it.
My P is greater than zero. P greater than zero. That's the best way to put it.
As far as I'm concerned, the slightest chance of killing every single person on this planet, plus all the value of future generations - billions, trillions of humans that haven't even been conceived - all that's enough for me. It's enough.
Liron: 00:19:24
Don't you want to give a more direct answer to what is your P(Doom)?
Louis: 00:19:29
I'm saying basically it is P(100). I believe, and I know this puts me in your crazy zone - you've talked about it. I'm willing to be crazy.
I believe that given enough time - Roman Yampolskiy is off by a rounding error. Programmers will know there are these things called double precision floating point numbers. You can't do precise math with a double.
Well, I'm doing an approximation, so I think it does not turn out well ever. And in terms of my P soon - it's well above 50. P(Doom) in less than five years, I think is easily P(50).
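Louis's floating-point aside is easy to demonstrate: doubles can't represent most decimal values exactly, and with enough nines a literal rounds all the way to 1.0.

```python
# Doubles (64-bit IEEE 754 floats) can't represent most decimals exactly.
p = 0.1 + 0.2
print(p == 0.3)   # False
print(p)          # 0.30000000000000004

# And with enough nines, the nearest representable double is exactly 1.0 -
# Louis's "Roman is off by a rounding error" point, made literal.
print(0.9999999999999999999 == 1.0)   # True
```

So in double precision, a P(Doom) of 99.999...% with enough nines really is indistinguishable from 100%.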
Liron: 00:20:22
P(Doom) this decade in terms of unrecoverable human extinction before 2030?
Louis: 00:20:29
That's really the takeaway. That's why I bought a bug out house. That is why I am trying to give myself more flexibility and options in my personal life.
Liron: 00:20:44
By 2030, I'm at 20%.

Louis: 00:20:46
So mine is easily 50 - and that's only four and a half years from now. As for my long-term P(Doom), 99.999%, or I won't say 100 - I don't think there's the slightest, slightest, slightest chance. And again, I'm willing to be in the sort of crazy zone.
Liron: 00:21:05
I have to take issue with 100.
Louis: 00:21:07
I know. Most people won't understand how floating point numbers go.
Louis: 00:21:22
Am I to get away on a technicality?
Liron: 00:21:28
I guess, sure. In practice as a human living a human life, once you get enough nines... I mean, when you say your P(Doom) is 100, let's say, is the timeframe for that like a billion years?
Louis: 00:21:43
No, the timeframe for that is just like Roman, somewhere around 20 or 30 years, maybe 50 years. I believe AGI to ASI - artificial general intelligence, which is a bad term because I always like to say "smart as us AI" or "smarter than us AI" and then "smarter than everyone AI."
So ASI is smarter than everyone AI. This starts and ends with my belief that AGI smarter than human is achievable and will be achieved in some small number of years.
If I'm wrong on that, then the entire house of cards falls away. But if I'm right on that, I tend to believe that ASI happens and then it finds its own power and then kills us.
To not have that happen, basically what it needs to do is it has to decide "I like those humans. I want to do something for them," whether keep them as pets or keep us as is or something. I don't know what that would be, but my bar is very, very low for that.
Humanity lives on as a civilization in some way.
Challenging the Extreme P(Doom) Estimate
Liron: 00:22:05
When you say P(Doom) 100%, aren't you assuming some important prerequisites here? You're saying you're assuming that the Pause movement doesn't successfully have massive grassroots support?
Louis: 00:23:14
Yeah, I believe it won't. I can pretend that I can use numbers that will make me sound sane or clever or stuff. I can try to talk to people in some way that influenced them.
But remember I've spent the last year trying to influence people to get them to the side of this. I started out thinking we're going to soft pedal this. We're gonna talk about jobs, we're gonna talk about risk and geopolitics and stuff, and sort of edge into it.
But it didn't have enough teeth and it didn't bite. So I went and said, okay, let's go a different way. I'm gonna write a book.
This particular book says basically what I think. May I read one line of it? This is in my intro, called "Buckle Up." It starts: "It doesn't turn out well. Indeed, the premise of this book is that there's a murder in your future. Yours."
If I believe that, if I write that, then that's pretty unambiguous. I have to stick with that. That's what I think is most likely to happen.
I don't see as a natural course of things that this will turn out well. I'm trying to make it not happen, I'm trying to prove myself wrong, but I don't believe it will.
Liron: 00:24:46
When you come on here and say that your P(Doom) is approximately 100%, even just saying more than 99%, I do have to say in my opinion - I'm trying to call balls and strikes here - I do think that you're only slightly more reasonable than somebody who says their P(Doom) is 1%.
I think it is closer to 99, or I think it's kind of in the middle, but I think it's closer in terms of how freaked out we should be. I feel like 99% isn't the equivalent of 1% in terms of freak out scale.
But to me, that is kind of overconfident in the same way that somebody who thinks that it's 99 to one odds that we're not doomed. I just don't think we have the epistemic position.
I don't think that we should think that our own knowledge has enough evidence pointing in one direction and enough lack of evidence pointing in the other direction. Life is full of surprises on complex predictions.
Louis: 00:27:24
I don't agree, but I appreciate your position. I am, for many ways it's the atheist versus agnostic thing. I have enough proof to me to say that I'm not gonna take the position of an agnostic.
Agnostic is sort of preloading "oh, if I get some proof, I'm gonna change my mind," at least that's the way I tend to think of it. Whereas an atheist basically says "I have enough."
Liron: 00:27:56
You're talking about this in terms of how you're taking a position like in society or in the field of communication and authoring. I'm purely just asking about epistemic beliefs - what do you really actually think is the correct belief to be updating from?
I think updating all the way to 99% all factors considered is premature.
Louis: 00:28:18
In either case, the thing I'm interested in is not my P(Doom) after a century or something. I'm very interested in this near term thing.
Like I said, it's P(Doom) greater than zero. And for my perspective, that is enough for me to make it important in my life that I change what I do. I am trying to protect my family as best I can, and I'm trying to work to make everything that I feel bad is gonna happen, not happen.
It's one thing to cry into your cereal and not do anything. I personally don't see a lot of change. I've seen a lot of people talk about this, I've talked about this myself to politicians.
I was in London about five weeks ago - we protested together in front of Google DeepMind. Then I got to have a private session with this guy, Iqbal Muhammad, who's one of the MPs who is on our side, if you will.
I appreciate him a lot because he's a deeply religious Islamic man. Not anything I believe in, but we have people from all areas of our polity who are interested in reducing AI risk for their various reasons.
I not only wouldn't go to his church or mosque, but we're on polar opposite ends of that argument. Yet we found common ground. I'm happy to be a pluralist, I guess you would say. I like to have lots of different views. That's one of the reasons why I love this show.
Exploring Non-Doom Scenarios
Liron: 00:29:16
Right now, when you watch me, I have a 50% P(Doom) by 2050. That's roughly where I'm at right now. I'm moving the timeline out a little bit from 2040, just because I don't want to be confident that it's definitely happening within 15 years. So I'm 50% by 2050.
John Sherman has said that he's 75%, so you're looking down on us from this huge mountain of 9,999 to 1 odds of doom.
Louis: 00:30:43
More importantly, my 2030 P is 50%. As far as I'm concerned, that's more important than everything else.
Liron: 00:30:50
You're on the leaderboard here. I'm not ashamed to have strong opinions, and if I'm wrong, then I will learn from them, I hope.
The thing I would like to learn the most is that I'm profoundly, profoundly wrong. I would love to be profoundly wrong. And of course that's the whole reason why we're doing this - we're trying to not just yap about it, but to try to help bring things to a better outcome.
Liron: 00:31:21
Let me poke at your model. If every human alive were a clone of you in terms of your beliefs on this topic, would that lower P(Doom)?
Louis: 00:31:51
If everyone was a clone of me, we'd stop this stuff right now and we'd have a P(Doom) of zero because it'd be solved.
Liron: 00:32:00
So that would take it from 100% to zero.
Louis: 00:32:05
And so the solution is - I've heard this said many different times, and I believe it is the solution. There's no guarantees. Let's say we decide our scientists are to be empowered as best they can to solve this. They're given the resources and a stretch of time, 50 years, something like this, to take things slowly.
Eventually we're gonna say "oh, we have some science that says this can never be solved. If you create a new life form, they're always going to vie for ascendance, and that will lead to bad outcomes for humanity."
Or we may find out "you do these three things in a certain way and suddenly we have a way to create AI that is beneficial for both AI and us."
If we slowed this down, I think if we slowed it down big time, then the likelihood would be very different. I just don't believe we're slowing it down.
Liron: 00:33:00
So what if I just paint a non-doom scenario and then you just admit that my non-doom scenario has a 2% chance?
So the non-doom scenario could just be like movements like Pause AI, maybe with the help of some warning shots. AI taking people's jobs, AI becoming like a virus that's really hard to stamp out and taking down utilities. There's like warning shots.
And then also simultaneously Pause AI type movements and influencers gain a lot of traction. It becomes the hot topic, like the college campus protests on the Gaza War. Well imagine next year's hot thing is AI protests.
Louis: 00:33:51
I understand. I don't believe within this entire macro political movement that you're making - that effect, let's say in the US as an example, will influence what's happening in India and China and Brazil.
Somehow your 2% presumes that literally everyone around the world is gonna be influenced to put the brakes on a little. I just don't see it.
Liron: 00:34:49
The non-doom scenario would involve it becomes - you gotta go the international treaty route. So my non-doom scenario, I'm adding more details, but I'm saying yes, this does become a movement, the same kind of warning shot, Pause AI combination happens in all the largest economies, all the biggest AI developer countries.
And they all say "okay, we're very seriously enforcing a treaty. Anybody that we catch building is going to have airstrikes on their data centers as an enforcement mechanism if they're not cooperating."
I'm not saying the scenario is super likely. I'm just saying I think it's single digit percent likely as AI gets more and more scary.
Louis: 00:35:28
I want to get there with you, I really do. It's ultimately do I believe? That's ultimately what you're asking. Do I believe?
Liron: 00:35:45
I feel like this may be the issue - I feel like you're maybe not that into Bayesian reasoning where you simultaneously... I feel like you think "oh, I gotta just commit to one type of belief." You're supposed to maintain a distribution of possible beliefs.
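The Bayesian habit Liron is describing - holding several hypotheses at once and shifting weight between them as evidence arrives - can be sketched in a few lines. All probabilities here are made-up illustrations, not either host's actual numbers:

```python
# A toy sketch of maintaining a distribution over hypotheses and updating it,
# rather than committing to a single belief. Every number is illustrative.

priors = {
    "doom": 0.50,            # AI leads to human extinction
    "pause_works": 0.05,     # international pause succeeds
    "muddle_through": 0.45,  # neither extinction nor coordinated pause
}

# Assumed likelihood of observing a serious "warning shot" under each hypothesis.
likelihoods = {"doom": 0.6, "pause_works": 0.8, "muddle_through": 0.3}

# Bayes' rule: posterior is proportional to prior times likelihood, then normalize.
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for h, p in posteriors.items():
    print(f"{h}: {p:.2f}")
```

With these invented numbers, observing a warning shot moves weight toward both "doom" and "pause works" and away from "muddle through." The distribution shifts; nothing snaps to 0% or 100%.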
Louis: 00:36:02
Yeah, I understand that. I'm probably not a Bayesian.
Ultimately, do I believe in the macro political sense - can we convince enough of the population to step away from the cliff? That's really what it is. And I think in my heart of hearts, I think we've already passed the point of no return.
Liron: 00:36:31
That's totally fine to feel that in your heart of hearts, but I feel like you're conflating that with 100% probability.
Louis: 00:36:38
Yeah, I think I am. If you want to do that, then yeah, you got your 2% easily.
Liron: 00:36:46
Okay, there we go. Great. The only thing is, I gave you kind of one non-doom scenario. I would merge that with other non-doom scenarios and give me at least 5%.
Louis: 00:36:57
I'll tell you my number one non-doom scenario. I have to be careful how I talk - we live in a politically correct world, and John Sherman got slapped for saying...
Liron: 00:37:17
He had a squeamish organization that he worked under. I, with occasional viewer donations like yourself, am not accountable to anybody.
Louis: 00:37:25
If you want to call me an idiot in your loudest voice, I'm still happy. I've given you a donation. You're doing the right thing.
You're asking for that 5%. It goes simply like this: There is a warning shot that kills a lot of people.
Because, as Ralph Nader said, no one cared about car safety until people died. That's not the exact quote, but that's literally what he said.
I don't know if people are going to feel about this the exact same way until people die from it.
The problem is, I think AGI - smarter-than-human AI - will try not to kill anybody, and I don't mean even peripherally, and I don't mean in a soft-power sense, until its own basis is protected.
It has power, it has distribution, it has redundancy, it has access for movement. So can that happen? Sure. The most likely thing to happen is a war.
On that basis alone, if you're talking pure probability, a war would go a long way toward slowing things down.
Liron: 00:38:50
So I think we can put this topic to bed. I'll just say my conclusion one final time. I think it's great that you started thinking about a distribution of hypotheses and you said "okay, maybe this hypothesis does have 2%, even though I'm obviously spending my time thinking on the 98% one."
The last thing I'll say is I still think you really gotta get down to 95% or below to get into my sane zone. I think it's kind of ridiculous that you're not, similar to how I think people who say 1% are in the insane zone. They're not too different in my mind.
Louis: 00:39:19
Just to be clear, I knew this was there - it's not like you ambushed me. I knew the number before I came on, and I think I even said I know this makes me sound a little less sane.
The Missing Mood and Emotional Response
Louis: 00:39:39
There's the other side of the fence, of course, which is what can we do? That concerns me a lot. The thing that I most wanted to talk about was tone.
Like you, I'm a bit on the Aspy side. I find it hard to yell and get freaked out and crazy. Just doing it feels like play-acting.
I see there is this incredible dissonance between the things we're saying and the reaction to it. So I would say to you: you have young kids, and there's a significant chance AI is going to murder your kids. That's it.
But the reaction is "yeah, I don't want that, and I think it's bad, and we should try to do something about it." Yet you go to a ball game, the other team makes a play, and everyone's on their feet yelling.
But no one brings any emotion at all, as near as I can tell in a visible sense, to these AI discussions. There's no outrage. There's no palpable outrage.
Liron: 00:40:57
Yeah, that's right. We call it the missing mood. We're throwing these numbers around - why aren't we acting like it?
Your rational mind is racing ahead. By you, I mean me too. I have admitted on the show that my emotions haven't caught up to my rational brain, and so that's why I feel like I'm still adding value just by telling people "Hey, rationally this looks bad," but I'm not trying to hide the reality that I'm having a good time on my day to day life.
I'm expecting to do some fun stuff tomorrow and the day after tomorrow and the day after tomorrow. 5,000 days after tomorrow - maybe not that many days after tomorrow. But I have this gradient where I'm being positively reinforced by a march toward doom.
Louis: 00:41:44
I don't think we're gonna have great success unless we can somehow summon this. And again, this is no accusation to you. It's an accusation to me.
The great Christopher Hitchens - I don't know if you know, but I've lamented his death more than any other person's on the planet. I'm sure he would be our number one advocate. That combination of British high education with sheer outrage would serve us well.
I feel singularly misplaced in that I can't seem to even summon this in myself, but I recognize that at least is a problem.
Liron: 00:42:28
A Christopher Hitchens of AI doomers would be great. I have a sinking feeling that Christopher Hitchens would be out there dismissing the AI doomers just like Pinker or some other rationalist luminaries.
Louis: 00:42:37
There's something funny about this. Like for instance, I like Steve Pinker. I've read a bunch of his books and there's something about him I like.
But isn't that a funny sort of thing? He's dismissing something that I think is so vital to humanity's wellbeing, and yet I don't end up hating him. Is that part of the problem too? I certainly don't hate him.
Liron: 00:43:03
I can't hate people like them. It's like politics. People just have crazy opinions in one domain, and then they deserve respect in another domain. You have to compartmentalize.
Louis: 00:43:20
And of course I have a crazy opinion in this domain, according to you. Lord knows I want to be reasonable.
Liron: 00:43:30
I'm not giving you a pass for your command of Bayesian reasoning, but I am giving you a pass for your actual mental state. If you were to properly translate it into a Bayesian probability distribution, you're not insane.
Prepping for AI Doom
Louis: 00:43:44
Well, there's hope for me yet. So let's talk about prepping.
Liron: 00:43:47
I think we should. This is definitely one of the key things I want to hit on - the prepping. I was actually on a Canadian preppers channel, Prepper News, a pretty popular channel. That episode is actually more popular than any of my own episodes to date.
My honest opinion was, I don't think you could prep for AI doom, which brings us to you - you actually do think it's possible to prep for AI doom, correct?
Louis: 00:44:17
I would say with a very big asterisk. I'll give you the classic example of how people think to prep for it: you build your Montana bunker, you go in it, and all this AI doom passes you by. I don't think that's possible.
Quite frankly, I know someone who built a bunker for a different reason in 2012. Needless to say, the Mayan, Aztec, whatever thing didn't happen. So he still has the bunker and he still has the house.
I don't believe you can safeguard yourself under all circumstances or even most circumstances.
Liron: 00:45:01
Well, Mark Zuckerberg says he just likes building bunkers, if I understand correctly.
Louis: 00:45:05
I don't necessarily believe that. I don't necessarily believe that the runup to where AI kills us all is gonna be evenly distributed or happen instantly.
So just so you know, intellectually, I don't relish it, but I'm not as upset about being killed by AI as I am about my children. I wrote this story called "Dear Progeny." It's actually a letter to AI, and I basically ask them to be kind to us, but I do it in the guise of a father.
Like you, we have all fathered AI, particularly those of us who've coded. And like a father, you might want good things for your child, AI. All things being equal, I think I do.
The worst thing I could ever think about is the lights going off for everything. There were humans feeling things, doing things, experiencing things - and then, blank, it goes away and there's nothing to replace it or extend it.
So back to prepping - apologies for going off on a tangent, dear listener. In terms of prepping, it might take weeks, months, or years until we all end up dead, if I'm correct.
I'd like to ease those weeks, months, years for myself, my wife, maybe some of our friends, neighbors, whatever. I'm definitely not thinking of this as just hiding in a hole with myself.
Do I think that ultimately avoids the tragedy? No, it's definitely still a tragedy. It pushes off the bad thing only for a very short period of time. It doesn't address it, and it certainly doesn't solve it.
Anyone who can get a bunker and hide away and think they're gonna be alive at the end of it is pretty - let me use the word delusional again. I think it just doesn't match.
Slow vs Fast Doom Scenarios
Liron: 00:47:25
So in your mainline doom scenario, you draw contrast with Eliezer Yudkowsky's doom scenario where you don't think it's gonna be a really fast boom. You think it's gonna be kind of like a methodical month by month doom. And so when you've got like a stocked bunker that's out of the way, you can buy yourself a couple years in your mainline doom scenario.
Louis: 00:47:41
A couple years. So let me be concrete. The number one thing I've been asked by people, particularly congressional staffers - I don't think there's one who hasn't asked me this - is: "So how is this gonna kill us?"
They love to ask that. And of course, the stock answer is: "How do I know? If Magnus Carlsen plays you in chess, you don't know how he's gonna beat you. You just know that he's the smarter player, and he's gonna beat you."
Liron: 00:48:04
The answer is: by commanding a lot of resources against you. That seems pretty clear - superintelligence lets you command resources.
Louis: 00:48:13
Yes. So let's say you're an AI - an ASI. I use "AI" generically for ease of communication, but I always mean superintelligent AI.
So you want to prep yourself to get ready to be in charge of everything and achieve whatever your goals are. To do that, you need a lot of things. You need safety. I think safety is the number one thing. And then you need autonomy.
In a polyglot world - lots of different people and AIs talking, experiencing - you need to make sure there isn't another AI vying with you that could shut you off.
But let's assume you're a singleton - that's a programming term - one of you, and you're biding your time. You've been able to shut down other efforts to make sure no other competitors in the AI sense are able to challenge you.
Now that you're a singleton, you're building up your resources, whether it be power, computation, actual fuel, robots, whatever the case may be. It is entirely conceivable that you're not going to need 8 billion humans around to do that.
Perhaps you kill off half the planet and keep the remaining half alive in order to have them do your work.
That's the sort of stuff I'm talking about. It may not be as Eliezer says, everyone is going along happy, everything's wonderful, and then suddenly you fall down dead.
So the animator who took the AI 2027 document and created an animation of it - do you know it by any chance, Liron? They created animations showing what would happen on the good path for AI 2027 and on the bad path.
On the bad path, it's like everyone's in a wonderful utopia because everything's great. And then suddenly you literally watch body after body after body just dropping down dead.
So it's a very fast sort of thing and hopefully less painful than it might otherwise be.
The real takeaway is that it shows the scenario where AI makes everything wonderful right up until pulling the plug. Because that is a scenario that might work. That's one scenario.
Another scenario is dissolution and war - other ways of keeping us in line, making robots, until we're no longer needed.
But I'm not the Magnus Carlsen of scenarios. I can't figure out the scenario that would happen. I don't know what it is. All I know is I feel there's a high likelihood it's going bad and not in any instant sort of way.
Liron: 00:51:01
Now, you mentioned the scenario where AI purposely wants to preserve half the population or some fraction to keep the infrastructure running for it for some finite period of time.
Yeah, it's kind of like in the Nazi concentration camps - they had some of the Jews and some of the prisoners just working the system, working the gas chambers, just helping out with loading other prisoners into the gas chamber, and they'd get to be kept alive for longer.
We could all be in that position as a species if you want to buy a few more months of life. But how does having a bunker help in that scenario?
Louis: 00:52:06
Well, for starters, I don't necessarily think the control is gonna be evenly spread. We're humans. We tend to like one solution.
My favorite thing to talk about here - see if this sounds reasonable to you - is that when Eliezer came up with the idea that everyone would just drop dead from one pathogen, that's very convenient for us to understand in one way.
But maybe it's 10,000 things that kill people in different ways - whatever is low-energy and more convenient in a particular location.
So I don't think it's assured that it's going to be evenly distributed.
Liron: 00:52:46
Where you and I disagree is that I put a higher probability on the way Eliezer describes it: yeah, it's going to build sophisticated biotech. It can make a virus with a high virality factor - a high R number. It's gonna spread quickly, and it's going to be as lethal as it wants.
If it doesn't mop us up with viruses, it can mop us up with nanotech, gray goo, it can build a little new type of life form that just chills in the sky, harnessing the sun's energy using a better version of photosynthesis and then just attacking us or infecting us.
Mirror life is like an attack vector against humans that we have zero immune defense for. So I am on the same page as Eliezer that I do expect what you call sci-fi technology to happen really fast when you have this super intelligence.
I just don't think little measures like a bunker are going to buy an appreciable amount of time. I think it's game over.
Now, if you really care about an extra month - which I know, in your book, counts as eking out a marginal victory - there's little downside to doing this kind of prep besides doing something else with your life. So I don't fault you for it. I just don't bother to do it myself.
Louis: 00:54:00
By the way, just for people to notice, it's not like I'm building this thing underground and hermetically sealed. The major thing that I've selected for - the top thing - was lower population density.
That gives protection across doom scenarios, and in a war. If you're in a city and things go wrong, however they may, and people want food or supplies - the more people around you, the more things can go bad. So that was my first selection criterion.
The second thing is I placed it in Maryland because like you, I am not shutting my life down. I am taking steps to give me some options and I am totally okay with failing at my steps.
This is on a best effort sort of basis. So I figure that I can drive a certain amount of time and get to my house and then I can be safer maybe. And I use all the caveats that you're talking about.
And then I just do the basics. I'm not saving years of food because I don't think it makes any sense. I don't think the timeline's going to be - again, I am just doing a series of basic things in the hope that they will help and not the expectation.
I happen to have the wherewithal - buying a second house isn't a big deal for me financially. In terms of effort, it's interesting stuff. And I'm making sure it's actually a bigger house than I live in right now. It has a pool, things like this, so it has advantages.
I have the astronomer's view that we're a little dot on a little dot in a collection of really, really lots of dots. In some fundamental way, I am okay with that.
But it all comes back to just that.
Liron: 00:56:14
AI is gonna grab a large share of those dots.
Louis: 00:56:16
Exactly. But I'm okay with what I've termed best effort. So lobbying has not yielded any dramatic changes, but I still want to keep at it.
Producing shows and talking on shows, writing books, whatever. It hasn't changed the conversation a lot yet, but I'm willing to take the effort in it. If I encourage one other person to step up the game more, that feels like a good thing to do.
I'm not very reasonable. I haven't looked at all the Bayesian realities and maybe I'm not comporting my life in the most reasonable way. I don't know, but I'm trying to help.
Liron: 00:57:12
So just for the preppers out there, you're not really offering a way that you can bring the next generation into a post-apocalyptic world. You're just trying to buy maybe a couple years.
Louis: 00:57:26
Not even a couple of years. My hope - no, I'm serious - my hope is that when shit goes down, I'd be thrilled with weeks.
And at the same time, like I said, the house has a pool. It's lovely. I'm an astronomer, so I'm gonna put observatories on it. It's the literal darkest spot I could find within a reasonable driving distance.
Liron: 00:58:05
People who like to character-assassinate or question the motives of doomers might say that maybe you just have these other motivations. The non-doomers would be like: listen, it's cool to buy a house with a pool and an observatory - it sounds like you're making a sound life choice - but given that you claim P(Doom) is high, maybe it's all talk and you're just living the way you want to live.
Louis: 00:58:26
It could be virtue theater. I'm willing to almost say that. I don't know. I feel particularly ineffective.
Liron: 00:58:36
Well, because, look - even from my perspective, it does seem like you're crediting a lot of the decisions you're making to buying a few months, and it's just like, why are you so focused on those few months?
Louis: 00:58:50
So let's go the other way. I've imagined my wife starving, which by the way, is a thing that can happen over a week and a half to two weeks. And that has made me sick to my stomach. Simple as that.
I'm not alone in this. Read the book, you'll see I quote people about this, and of course there are lots of people who are worried about this stuff. Everyone has their own motivations.
Happily for me, I have no pretense of telling other people to do this. I'm just saying this is what I am doing - and whether it's virtue theater or not, you can decide that for yourself.
I am positive I'm one of the rarer individuals in this fight who's actually actively talking about it.
Activism and Political Engagement
Louis: 00:59:41
The thing won't change until we do what John Sherman says: convince a billion people to join our movement and say to their politicians, very simply: "I don't like this. I don't want this."
So I was at a wedding, for instance, last week in California, and I'm talking to a lot of people about this and my advice is always the same.
Talk to your politicians. Don't tell them to do anything. Tell them the very simple phrase: "I'm worried that smarter than human AI won't be good for us."
That's it - well, not only that. I always see people get bogged down in the details when they try to do it. It's like, "How are they gonna kill us?" I don't know. "When's it gonna happen?" I don't know. And it doesn't help.
But you can talk about things that you're concerned about. You don't feel like companies are prioritizing safety. They're prioritizing functionality, they're prioritizing profit over our safety.
I think that's a pretty good thing to talk to politicians about, and I would encourage anyone who's listening: don't send a damn email. Find a phone number and talk to someone, and if you are anywhere near a local office, get your butt in there. It turns out it's really easy to do - surprisingly easy.
And they want to talk to you. I started with Heather Boyd, my representative for Pennsylvania's 163rd district, went all the way up to John Fetterman, our senator, and other people in between, and they all will talk to you.
It's their job and they're surprisingly interested in it as near as I can tell. Whether that will have effect, that's anyone's guess.
Wrapping Up the P(Doom) Debate
Liron: 01:01:49
Okay, great. So to wrap up here, we've covered a lot. We've touched on P(Doom) - yours is higher than mine, but we both agree on the most important part of P(Doom), which is that it's clearly two digits.
I think you've come down from the idea that it's three digits.
Louis: 01:02:05
I have. I'm gonna give that to you. And more importantly, I think the thing I learned today was that we're - even though I thought I was thinking about this in the same framing as you were, I was not, and I'm not.
Liron: 01:02:22
I mean, there's different - you know, entertaining multiple hypotheses. The mental motion that you said you made, which is basically like "I should embrace this thing that seems likely - two digit likely, or even more than 50% likely - I should just not focus on the thing that's less likely. I should just tell people, yes, we are doomed, case closed."
Whereas I'm like, "well, I've got a distribution and my P(Doom) is high, but there's also this other counter hypothesis." So I think that's what you're referring to, right?
Louis: 01:02:49
Yep, yep.
Liron: 01:02:50
Yeah, which is fair enough. Look, Bayesian reasoning, it's not the simplest art. So what you did, the shortcut you took is a pretty reasonable shortcut. It's just when you get into a conversation like this and I take you into the weeds, this is what the game looks like in the weeds.
Louis: 01:03:07
I'd like to turn things around and ask you for some advice. It's really simple. My major activity is talking to people at every level, politicians and individual people, what they should be aware of and concerned about. What would you want me to say? What if I said something, what would be great from you?
Liron: 01:03:29
I think Eliezer Yudkowsky and Nate Soares, their new book - the book title is pretty well chosen for that. It's called "If Anyone Builds It, Everyone Dies."
I think I agree that they have captured the essence of it. If anyone builds it, everyone dies.
Louis: 01:03:48
I have the book on pre-order.
Liron: 01:03:51
It's great. I got early access, and actually the Eliezer interview is dropping, so this is a good time for everybody to smash that subscribe button if you want the Eliezer interview.
Louis: 01:04:01
Can I just say one thing about Eliezer? He's actually the heart of my most important essay in my book. It's called "I Cried, I Fucking Cried."
And it's all about Eliezer. I met him in 2014, maybe, I think it was 2014. I finally got around to reading "AGI Ruin," I guess maybe in 2021, 2022, I forget.
Liron: 01:04:35
"AGI Ruin: A List of Lethalities." Nice.
Louis: 01:04:36
Yes. And I ended up bawling and bawling - I mean, bawling like when my father died. It affected me greatly.
But I should tell you, it wasn't after I finished reading it - it was like four sentences in. Because I'd had it on my desktop for months. I just couldn't bring myself to actually read the thing.
And then when I finally clicked on it, it was profound. And so this is what I will always treasure Eliezer for and thank him for. It's very simple - he changed the conversations we started having about this.
They were very ethereal conversations. Nick Bostrom is very arm's-length, for example, in the "Superintelligence" book. And Eliezer was just a different ball of wax.
Liron: 01:05:35
He sure as hell did it for me. It's hard to imagine my adult life without this being like a real concern. Because it's just so unnatural to look at this beautiful thing, AI - this powerful thing, this useful thing - and then be like, yeah, but it's just going to blow up in all of our faces, unfortunately.
Louis: 01:05:53
And I just want more people who I like to listen to talk about this, period.
Liron: 01:06:04
And there's just the idea of moving the Overton window - making it a common, acceptable thing for somebody to say, "Yep, I think we're all literally dead in the next few years. I think my kids most likely aren't going to grow up."
The idea that you can say that and be like, "Yeah, that is my concern. If anyone builds it, everyone dies. I'm concerned that everyone will die." I think that's very important.
And unfortunately in trading terms, I think there's still a lot of alpha in just saying that, which is why you see hippies going around and being like, "Hey, we're raising awareness, man. Raising awareness for the earth." That's usually not that effective.
But in this case, I actually find myself in a position where raising awareness, I actually think pays off. I actually think that if everybody knew that their neighbor was like, "Yeah, I would rather see my kids be able to grow up," I think that just that shared mutual knowledge is actually a very productive lever right now.
And it's also the kind of lever that enables politicians and leaders to act. Because leaders usually don't actually lead - in most cases, they lead from behind. And there's not enough to lead from behind right now.
Louis: 01:07:07
I have one fact for you, by the way, in a lobbying sense. A thing that's been told to us many times - maybe 20 times in my own personal experience - is: "If only five people called my politician and said, 'I am afraid of AGI,' it would move the ball."
I have tracked this down in three cases, and the answer is it didn't. With that said, I still think there is some magic number. It's not five, maybe it's 50 or 500, or I literally don't know what the number is.
But again, call your politicians, say you're worried about this stuff. This is the most important thing we can do as a citizen.
Something on the order of 20 different staffers have used that number, magically. It's always been five, so it's gotta be somewhere in their vernacular to say things like that.
I don't know what number will actually be sufficient. But it's not 5,000, I'm positive. It may be 50, I don't know. It will depend on the politician - but it's zero right now if you don't call.
Supporting Eliezer's Book and Final Actions
Liron: 01:08:20
So for viewers who are interested to learn more about the state of AI x-risk activism, the best place to start is an episode you did with John Sherman, talking about you guys heading to Congress and having these discussions.
Louis: 01:08:35
Felix De Simone, John Sherman and I went and lobbied Congress. We probably saw eight or nine different sets of staffers; I don't think we saw any principals that day. We had various results, and we talk about the process a lot in that episode.
Liron: 01:08:57
What would be the next action that you'd recommend people take besides getting that context?
Louis: 01:09:04
Control AI has a form that makes it take literally 60 seconds to send an email to your representative - a congressperson, senator, what have you.
Liron: 01:09:17
I'll put that in the show notes.
Louis: 01:09:18
Look in the show notes for it. I would just encourage you, though - I know our lives are incredibly busy. I know people ask you to send a dollar for a dog or something. I understand lots of people are asking you to do things.
If you believe that AI has the chance to quite literally kill you and us, even if it is on that 1% basis or 5% or what have you - the sane zone, or if you're an idiot like me in the insane zone, whatever - well surely taking a couple of minutes to call your representative and say you're worried about it, that's probably a good thing to do.
And you were talking about utility before. I think it has very, very high utility.
I do a thing - we did it about four months ago. I had my family all do it together. Not like a Super Bowl party, but we were all together around the table: three of my sisters and my mom, we did it.
Liron: 01:10:25
Yeah, it's a great family activity. Do it however you're comfortable doing it. It is, as you say, high utility.
We really need these leaders to hear that this is grassroots. Everybody's kind of waiting in the wings, like, "I can't wait until some leader goes out on a limb and starts yelling in this lonely place that we need to pause AI development." Actually, you need to act first. You make the call. Otherwise you really have no right to expect your leader to do it - they need a push.
Louis: 01:10:56
So I have one more ask of you, and it's this: How can we support Eliezer's book? Because I'm hoping it's going to be one of our best chances to get in front of people and influence them. What can we do to support it?
I've pre-ordered it, but I mean, what else can I do?
Liron: 01:11:16
So obviously the first step is to pre-order it or order it. Encourage everybody to pre-order it or order it and then raise awareness.
So tweet the phrase, uh, "If anyone builds it, everyone dies," or post on social media or communicate, make sure everybody around you knows about it.
Mutual knowledge is a powerful concept. It's not enough for people to individually think, "Hmm, I'm kind of scared of AI." You need to know that they know, and they need to know that you know.
Everybody needs to know that other people know. It's like a Super Bowl commercial. It's not just about you watching the commercial. It's about you knowing that other people are all seeing this commercial and you've got mutual knowledge that this commercial is saying something.
So do that, but with the phrase "If anyone builds it, everyone dies." It is important to install that phrase in people's web of beliefs.
If people believe that, if anyone builds it, everyone dies - like you said before, if everybody were like a Louis Berman in terms of belief state, that would drastically lower P(Doom).
Louis: 01:12:13
Can I add one more comment about this and simply as an author? It turns out the single most important thing - this book will be sold throughout the world. But overwhelmingly books are sold on Amazon, maybe Goodreads, what have you.
Please review the book. It's not the sort of thing you can game - for instance, I have plenty of money; I could buy a thousand copies of my own book and review it a thousand times, but the algorithms will not let that happen.
What they want is verified purchasers to leave a review. I'm not suggesting that you should give them five stars or your best thing. Try to be truthful and say what you say, but please - I guarantee you if 10,000 people reviewed that book, it'd be on the top of the New York Times bestseller list for months.
That's one of those things that really punches above its weight.
Liron: 01:13:10
Totally. If you're watching the show - if you like the show, if you've seen more than a few episodes and you're enjoying it and you think the ideas make sense - it's pretty important, and honestly kind of ridiculous if you don't do this.
Just do some quick expected-value calculations and please buy the book. Whether you're gonna read it or gift it, buy the thing. And as Louis is saying, write a review, for God's sake. Don't neglect this task. It's quite high leverage.
Before we get to the final call to action of where users can learn more about the Louis authorial universe, let me just recap a bit.
So we talked about our P(Dooms), they're both quite high, doesn't really matter whose is higher, they're both 50% plus. We talked about prepping and why you're aiming to buy a couple months and it's giving you peace of mind - to each his own.
We talked about communication, and this is where I think we start to be in violent agreement. Besides P(Doom) being high, I think you and I are both on the forefront of saying, "Hey guys, where is the running around in fear? Where is the acting like we only have a few years left? Why is everybody smiling and being calm? Why is everybody in the UK keeping a stiff upper lip? Move that upper lip - it's time to start screaming."
I think we're both in violent agreement about that aspect of communication, which is really a core piece of the show is to try to move that forward, move the Overton window forward on style of communication.
And then we're also in agreement that activism is underserved. I'm a member of Pause AI, you're a member of Pause AI US to be exact. We both are big on protests and supporting that kind of activism, like normie activism. You don't even have to be a tech nerd to pull out a megaphone and yell at these companies. Anybody can yell. It's free.
And what you just said now about calling your congressperson, filling out the email forms, meeting with them in person - go watch Louis's episode on John Sherman's show where he talks about the kind of activism he's been doing.
We're both in violent agreement that that is also an underexplored channel with massive leverage. And if you're watching this show, go consider those two channels - communication and activism and lobbying.
Is that a good recap?
Louis: 01:15:16
I think it's a good recap. But I'm going to lean into one more thing: vote with your wallet.
I want to see Doom Debates go on, so I've contributed money - thousands of dollars. I want it to exist, and I've contributed to Pause AI too. The point is, these organizations, like any other organization, need your money.
So for instance, you use money to purchase ads so that people can find your show, not because you're hawking it, but because finding it turns out to be the hardest thing possible.
There's need for production. There is need for editing. I have no idea what there's need for, but I know these organizations need money and I'm happy to contribute and I hope you will too.
Liron: 01:16:10
Thanks so much. So it is true, viewers, that your money toward Doom Debates does go into more Doom Debates awareness, more Doom Debates production, and more Doom Debates quality - that flywheel is already spinning.
I'll have more info about this maybe in the outro or on my website, but just so you know, it does typically take like over a dozen hours to prepare for an episode and like 25 hours to do the post-production. It doesn't even look like the world's most polished show, but even just to do what we do does take a good number of hours.
Louis: 01:16:45
Well, cool. And thank you for it.
Liron: 01:16:46
Thank you as well. I think your books are really great. So where do people go to read your stuff?
Louis: 01:16:51
Oh, it's really easy. I have a website, xriskbooks.com, X-R-I-S-K-B-O-O-K-S.com.
The primer is available as a free flip book, and "Catastrophe" you can purchase as a Kindle or paperback or hardcover.
I tried to give it away, by the way, also on that same website, and Amazon yelled at me because apparently you can't offer a free download elsewhere if you have a Kindle edition. Sorry. So I made it 99 cents and $9.99, which are the minimum prices for books.
Oh yeah, by the way, I just noticed today they bumped the price of my book to $10.99 for the paperback. I'm gonna try to push it back to $9.99, which was the previous minimum, but the mechanics of it are hard.
This is why you should look at my book if you want to, but don't buy my book unless you've also bought Eliezer's book and reviewed it. That's the starting place. I'm small beans, Eliezer's the real deal.
Liron: 01:17:50
Eliezer's book co-authored with Nate Soares is a ridiculously good book, even if you don't care about supporting the cause. There's just so many great ideas and arguments in that book. Just highly recommended on all counts.
All right, Louis Berman, thanks so much for coming on. Fascinating conversation.
Louis: 01:18:10
Cool. Well, thank you.
Outro - Supporting the Mission
Liron: 01:18:14
Hey, have you guys been enjoying Doom Debates over the last year? Did you like that interview? Do you think we're making progress toward our mission?
Our mission is to raise the quality of discourse around AI existential risk and just raise awareness and move the Overton window that AI existential risk is real and imminent, and we should do something about it.
A lot of you guys are what I consider supporters of the mission. You want to help the mission happen. Well, now you can potentially do that financially, because I'm introducing a couple of tiers.
The first tier is called Super Fans. It's like any other Substack. You can donate $10 a month and I will send you a free t-shirt and a P(Doom) pin. So you get some swag. You can show off that you're a supporter of the show and I appreciate it.
Now, the next level is much more serious, I'm not gonna lie. The next level is called a Mission Partner. It literally means I treat you as my partner on this mission, and it does come with a minimum donation of $1,000.
Told you it was gonna be serious. So it's not for everybody. But I do think some of you are thinking, "You know what, I take this seriously. I do have some disposable income. I'm not worried about making my rent this month, and maybe I would like to be a Mission Partner on Doom Debates."
Well, if you decide to do so, you get to be part of my Mission Partners Brain Trust. It's a private channel we have on the show's Discord, where we're always strategizing how to make the show grow and succeed as efficiently as possible.
Mission Partners also get early access to episodes. So before I post an episode, I usually take a day or two to get feedback from the Mission Partners - and that changes the final episode that other people see.
The reason I call it a Mission Partner is because you're doing more than showing your support for the show. You're actually moving the needle on what's possible for the show. You're changing our budget.
I mean, this literally happened. I want to thank one of the show's viewers who will remain anonymous, but a couple months ago they donated over $10,000 to the show, which instantly took us up a level of what we were able to do.
I was able to hire a talented producer who's now been increasing the rate of content production, increasing the editing quality, increasing the guest outreach. I think we're on a great trajectory here, and the faster we go the better.
Now, again, if money's an issue, don't worry about it. The content is free. On the other side of the coin, if $1,000 is chump change for you? Well, there's no upper limit. You can test what PayPal will accept.
If you've been volunteering with me like in the show's Discord over the last few months, I've already made you a Mission Partner because I also consider you somebody who has moved the needle for the show's progress.
I hope some of you will take me up on the offer. We're gonna grow this Mission Partners Brain Trust, we're gonna accelerate the show's growth, and I look forward to seeing you soon with the next episode of Doom Debates.
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
Share this post