California gubernatorial candidate Zoltan Istvan reveals his P(Doom) and makes the case for universal basic income and radical life extension.
Timestamps
00:00:00 — Teaser
00:00:50 — Meet Zoltan, Democratic Candidate for California Governor
00:08:30 — The 2026 California Governor's Race
00:12:50 — Zoltan's Platform Is Automated Abundance
00:19:45 — What's Your P(Doom)™
00:28:26 — Campaigning on Existential Risk
00:32:36 — Does Zoltan Support a Global AI Pause?
00:48:39 — Exploring His Platform: Education, Crime, and Affordability
01:08:55 — Exploring His Platform: Super Cities, Space, and Longevity
01:13:00 — Closing Thoughts
Links
Zoltan Istvan’s Campaign for California Governor – zoltanistvan2026.com
The Transhumanist Wager by Zoltan Istvan – https://www.amazon.com/Transhumanist-Wager-Zoltan-Istvan/dp/0988616114
Wired Article on the “Bunker” Party – https://www.wired.com/story/ai-risk-party-san-francisco/
SL4 Mailing List Archive – sl4.org/archive
PauseAI – pauseai.info
Transcript
Teaser
Zoltan Istvan 00:00:00
They invited 80 top AI experts to this mansion in San Francisco, and I met at least four people that were building bunkers or buying islands.
So I think AI can be wonderful. I have advocated for pushing AI, but I’m not advocating for pushing superintelligence.
If I was in office, you can rest assured that I would look at every single angle to stop the creation of superintelligence to save humanity.
Liron Shapira 00:00:27
Well, you’re certainly winning the Doom Debates vote. New York State has NYC Mayor Zohran Mamdani. If the East Coast is going to have a Zohran, doesn’t the West Coast deserve to balance it out with a Zoltan?
Zoltan 00:00:39
Yes, yes, that sounds great.
Meet Zoltan, Democratic Candidate for California Governor
Liron 00:00:50
Welcome to Doom Debates. Zoltan Istvan is a leader in the transhumanist movement, who’s running to be the next Governor of California.
In 2016, he founded the Transhumanist Party, though he’s now running for office as a Democrat. His platform addresses AI as an existential and economic threat. His main policy proposals include a universal basic income and giving humanoid robots to every household.
He holds a master’s degree in Practical Ethics from the University of Oxford, and an undergraduate degree in Philosophy and Religion from Columbia University. He is a former National Geographic journalist and extreme adventurer.
In 2002, he sailed around the world on a twenty-eight-foot sailboat, then decided to snowboard down an actively erupting volcano for fun, and he accidentally invented the sport of volcano boarding.
I’m excited to talk with Zoltan about his policy platform, his thoughts about the coming AI unemployment wave, and of course, get his perspective on AI doom. Zoltan Istvan, welcome to Doom Debates.
Zoltan 00:01:53
Thanks so much for having me. It’s awesome to be here.
Liron 00:01:55
So you’ve clearly done so much in so many areas. I didn’t even mention the entrepreneurship side and how you became the owner of various award-winning vineyards, right?
You own a lot of real estate. So my first question for you is, do you have an abnormally high amount of energy?
Zoltan 00:02:11
Well, thanks for the compliments. I don’t think so. I think I just find a lot of life really wonderful, and it’s like you want to dip into as many pots of honey as you can.
If you can do so, do so. And so I try to dip into as many as I can.
Liron 00:02:26
Hell, yeah. What do you see as the main focus of your career?
Zoltan 00:02:30
I think the main focus of my career is really trying to overcome biological death with the field of transhumanism and life extension science.
And even though I’m running for office and doing all these other things, a lot of that is really geared toward trying to make people live dramatically longer.
Liron 00:02:47
How did you get into transhumanism?
Zoltan 00:02:50
Well, I used to work for National Geographic, and I was covering a lot of war zones as well as conflict zones, and I was covering the DMZ in Vietnam. There’s a lot of unexploded landmines there.
Basically, while doing the story, I almost stepped on a landmine. My guide sort of pushed me out of the way, and it really got me thinking about living and dying.
After that experience, I returned home to the United States and began working on a novel called The Transhumanist Wager, but really, it was about life extension. It’s about what can we do to get people to overcome biological death?
The book just chronicles a character who will do anything to not die. That really motivated me to move into the public sphere of trying to talk about extreme longevity and things like that, and that’s really where my career has been dedicated.
That said, I’ve been running for office because, in my space, nobody else is pushing a science agenda in terms of politics.
Liron 00:03:54
Okay, so the timeline was roughly the early 2000s, right? Like 2010. Your book, The Transhumanist Wager, came out in 2013. So when did you first become a transhumanist?
Zoltan 00:04:04
So I think I became a transhumanist probably in college. I was studying at Columbia University. There was a story on cryonics, where you put dead people into really heavy ice or really cold temperatures, and you try to bring them back to life one day.
And then I realized, wow. You have to understand something. I’m just not a believer in the afterlife or a believer in a deity. When we start talking about AI, this will become really relevant as well.
If you don’t believe in an afterlife or a higher power, then it’s really up to you if you want to live indefinitely. And so that’s really what my perspective, my profession, the dreams of my life, wherever I’m going on planet Earth, that’s what it’s all about.
I think what’s happened is that because I didn’t believe in a higher power or an afterlife, I dedicated my life to extreme longevity and transhumanism after realizing that there might be a scientific way to not die.
Liron 00:05:04
Yeah, and by the way, I’m a transhumanist, too. I’ll take a hundred thousand years before I decide how much more I want. I feel like life is too short, so I’m with you there.
Zoltan 00:05:13
That’s perfect, because I’m not saying I don’t want to ever die. What I’m just saying is, I think having the specter of death hanging over me after seventy or eighty years, or in some cases much less, is lame.
It’s just lame. Especially if we can, as a human race, do something about that, that would be fantastic. I would like to do that.
Liron 00:05:35
Yeah. Okay, so I’m with you there. I often say on the show that I’m also a transhumanist. So you mentioned for the timeline, you’re basically talking around the early ‘90s you became a transhumanist?
Zoltan 00:05:45
Yes. So I would say right around 1996, I had that class. I was reading a lot of existential philosophers and one day I woke up and said: You know what? I’m going to dedicate my life to life extension.
I began taking the necessary steps to do that, and I think working in National Geographic was very relevant to that because it gave me a lot of exposure in writing stories on science, technology, and the environment.
I kind of just expounded upon that later in terms of real science and technology, transhumanism. Even now I’m sort of—I don’t want to say I’m a journalist—but I’m still someone who’s in front of the camera, oftentimes in the public sphere, discussing these ideas, trying to promote them.
So there was a very clear correlation, at least in my mind, after college, on what I wanted to do and how I was going to do it.
Liron 00:06:34
In the late nineties, the transhumanism discussion was largely centered on the SL4 Extropians mailing list. Are you familiar with that?
Zoltan 00:06:44
Yes.
Liron 00:06:45
How active were you on that?
Zoltan 00:06:47
I was not active at all during that period. I was still just a bystander looking at these things, but I was totally aware of a lot of them. I guess I was not really sure how to jump into this society, this movement.
To be honest with you, there are factions all over the place as well, and it really wasn’t until I wrote my novel, which was started, I guess, in 2008 or 2009, even though it was published in 2013, that’s really when I joined or started being a part of what you might call the social aspect of transhumanism.
I think that’s really when that time came for me. But there was some amazing stuff happening in the nineties and later.
Liron 00:07:28
So the heyday of Eliezer Yudkowsky and the proto-Machine Intelligence Research Institute, the early AI existential risk movement, that was also on that SL4 Extropians mailing list that you said you were aware of. I’m curious, when did you first familiarize yourself with the writings of Eliezer Yudkowsky, if ever?
Zoltan 00:07:49
I’ve been very familiar with his writings from the beginning. I guess I used to follow that site, LessWrong.
Liron 00:07:56
Yeah, LessWrong.
Zoltan 00:07:57
And I also did part of my thesis on Roko’s basilisk. I think that’s how you say it. He didn’t describe this specifically, but I remember him commenting on it, and I remember quoting him and things like that. So I’ve been following his work for quite a long time.
That said, I’m certainly no expert on him, and I haven’t actually met him in person. I’ve met a lot of people in the transhumanism movement as well as AI doomers and people that are pro-AI, but for some reason, I’ve never met him.
The 2026 California Governor’s Race
Liron 00:08:30
All right. Fair enough. Well, we’ll get into some of his ideas in this conversation. But first, we can’t go too far without touching on what I think is the main subject right now for you, which is the 2026 California gubernatorial election, correct?
Zoltan 00:08:45
Yes. So just so your listeners know, I’m running for governor as a Democrat in California, and I’m actively campaigning on a day-to-day basis. We’ve been at it about eight months. We have the primaries here in just a couple months’ time.
Liron 00:09:00
The timeline on that, I guess, the election itself is November 3rd, but there’s the primary on June 2nd. The way California works is it’s just a top two nonpartisan primary, right?
So there’s going to be a couple dozen candidates. You’re one of them, and it’s really important for you to make it into those top two if you want to proceed to November 3rd, correct?
Zoltan 00:09:19
Yes, one hundred percent. If you don’t make it into the top two, then that’s sort of the end of it. At the same time, it’s a really crowded field.
So for the very first time in California elections for the gubernatorial race, you don’t actually have to get maybe even thirty or forty percent to make the top two. It’s quite possible maybe even fifteen percent could get you in, which is unheard of in terms of California politics.
But it’s such a split field because so many people feel that California has so many issues right now, and so we’ll see what happens in the primaries.
Liron 00:09:53
Right. Yeah, ‘cause if the vote gets split, like, twenty-five ways, everybody has four percent on average, so maybe the winners will have nine percent and ten percent. So that means it’s up for grabs. Anybody could win, and the timeline is four months from now, right?
So the clock is ticking. These four months are critical. So I completely understand why you’re coming on this show, because you really got to get that Doom Debates bump.
Zoltan 00:10:14
I’m looking for that bump. I’m looking for that bump. I’m glad to be talking to you.
Liron 00:10:18
Hell, yeah. All right, audience, so perk up your ears right now. Listen to what’s coming next here. Who do you think are the leading candidates in the race besides yourself, of course?
Zoltan 00:10:28
Well, I think Swalwell is one, as well as Katie Porter and I think Matt Mahan. I’m not sure if I’m saying his name right, but he just entered the race. These are some of the main picks right now.
I’ve got to be honest, though, it’s funny because everyone that’s running thought some superstar would enter the race, like Kamala Harris, but she chose not to.
Some people speculate that she chose not to enter the race because California is too hard of a state to govern at this point, given our deficit is so huge and it’s sort of been run to the ground. If that’s what your legacy is going to be, it’s probably not going to be a pretty legacy.
Liron 00:11:10
Let me ask you, because I should try to be objective here, okay? We’re going to talk a lot about your platform and why it’s way better than anybody else’s on some dimensions, but if you’re trying to be objective, is there any other candidate who’s remotely good to vote for besides you?
Zoltan 00:11:26
So look, I gotta be honest, the main thing I support is universal basic income, and I am the only gubernatorial candidate in California that represents that. And I represent that because I am convinced that AI is going to completely decimate the job market within two to three to four years’ time.
That said, I think the San Jose Mayor, Matt Mahan, is a pretty good choice. You know, for your listeners to know, I’m kind of a right-leaning Democrat, and I think Matt Mahan is a centrist as well. And even though he doesn’t support a universal basic income, at least he has a bit of a Silicon Valley vibe to him.
Liron 00:12:02
Well, that’s big of you to be able to praise some of the competitors. So that said, I think this is gonna be one of the friendlier shows for you to communicate to, because I’m personally, and a lot of my audience, we’re on the same page about transhumanism.
We’re on the same page about AI unemployment. We also think that the candidates are head in the clouds in terms of not really focusing on the number one issue right now.
So there’s a lot of good tailwinds in terms of us being likely to vote for you, except I personally moved out of California recently. That’s your only issue with me, but a lot of my audience is based in Silicon Valley.
I’m in upstate New York right now. But pay attention, Producer Ory: see if he can win your vote, ‘cause he’s still in California. He’s in San Francisco.
Zoltan 00:12:45
Well, I hope so. I thank you for... You know, anyone that wants to vote for me, I’m looking forward to that.
Zoltan’s Platform Is Automated Abundance
Liron 00:12:50
So let’s get into your platform. I took a look at this. A lot of interesting points, a lot of unexpected directions. I think it’s fair to say this is a combination of platform planks that has never before been assembled. But let me toss it to you. How would you summarize your platform?
Zoltan 00:13:07
My platform is really centered around AI taking jobs and California life, as well as American life, just transforming itself. I think nobody realizes the big wave that’s coming. People are like: “Oh, AI is like the Internet.”
No, it’s not. It’s actually dramatically more transformative than the Internet. We’re talking about job loss for probably ninety percent of people within a five-year timeframe, at least in my opinion, and maybe a seven-year timeframe. But either way, it is coming, and nobody wants to talk about it because it doesn’t poll well.
You lose votes every time you talk about it, for the most part. So my platform is centered around universal basic income to try to counter that. And then it’s also centered around this idea that we need to change the way we think about life in America and life in California.
If you’re not working, what are you gonna be doing all day long? Some people say, “Oh, you’re gonna be rotting in front of the TV and doing drugs.” Well, I’m not here to tell you what you’re gonna be doing.
What I’m here to tell you is that we want to give you enough money so you can still raise a family, have a roof over your head, have enough food to eat, and maybe even do fun things like travel to the Bahamas and learn the guitar or do your fifth PhD.
The point of the story is that I think we have to reimagine what the American dream is, and reimagining the American dream means understanding that working nine to five and being underneath the man all the time and worrying about all these different things is something that’s gonna go away.
We’re gonna start living in an age of abundance, and if we can create that age properly, it’ll be something dramatically new to us, and I think also really wonderful.
Another huge part of my platform, the automated abundance economy, is really trying to get robots into everyone’s houses, robots into everyone’s lives, so that they can serve us salmon, they can do the dishes, they do the laundry, and they basically make our lives easier.
You hear Elon Musk talking about this a lot, about this age of abundance we’re coming into, and this is really what my gubernatorial campaign is. It’s very hard, though, to convince the average Californian that this age is upon us, and the reason is because everybody I know is living paycheck to paycheck.
The main issue in California in the gubernatorial race seems to be affordability right now. Nobody’s worrying about the environment anymore. Nobody’s worrying about all the other things. People are saying: “Look, I can’t eat. I can’t send my kids to school. I can’t afford the cleaning.”
So I’m trying to paint this new picture, just saying, “Look, in the next one to two to three to four years, by the end of the next governorship, we’re gonna have a brand-new economy and one where you probably don’t work.” That was a mouthful, but that’s really what I’m all about. It’s very different than the other candidates.
Liron 00:15:51
For sure. Do you have a short, pithy campaign slogan?
Zoltan 00:15:56
Well, “One robot, one household.”
So everyone thought eight months ago when I started my campaign, and I promised a robot into every California household, everyone thought I was insane, until just last week, Elon Musk announced that Tesla is essentially gonna stop making the Tesla S and Tesla X.
This is huge. We have these factories in Fremont, California, right near me, that were making these cars. You saw them all over the road. He’s now talking about Tesla becoming a robotics company. I mean, this is something transformative that no one could have really even seen just a year ago.
So when we talk about one robot in every single California household to help with your dishes, to help do all your cooking, to do all your laundry, to do your yards, to walk your dog, this is real. This is not a fake thing anymore. This is gonna be here in eighteen, maybe twenty-four months, where you’re gonna see them all over.
So what seemed once like a crazy campaign slogan that maybe got some media attention is now looking a lot more realistic. We want to either lend, lease, or sell a robot to every California household and be the leader in the world, this state, where we’re modernizing California lifestyles.
The rest of the world looks at us and says: “Wait a sec, that guy hasn’t cooked a dinner forever, and he’s being served by a robot every single night.” I mean, this is a wonderful—this is what we’re talking about, the age of abundance.
Liron 00:17:15
All right, so let’s workshop this. Maybe like, “A chicken in every pot and a robot server.”
Zoltan 00:17:21
I would say that. That’s a good way to look at it. We haven’t worked too much on the slogans just because what we’ve been pushing forth is for an automated abundance economy as our tagline. You probably know Andrew Yang.
Liron 00:17:35
Yeah.
Zoltan 00:17:35
He was kind of the first person to really popularize the basic income, though I gotta be honest, I was writing about basic income and running on a campaign on it before he showed up. So at least I have this long history with it, but some of his people are helping my campaign out, so we’re very much just sticking with universal basic income.
I gotta be honest, it seems like everyone around me doesn’t care about a lot of the other issues you’re hearing about. I know you hear a lot about ICE, and you hear a lot about immigration, but when you come down to the nuts and bolts of what people care about, they’re just like: “I’m one paycheck away from being homeless.”
So a lot of Californians just wanna hear, “How can you make my life more affordable? Because I’m at the edge of being able to keep my existence without having to move out of this state.” As you know, people are leaving California in droves right now.
Liron 00:18:23
Yeah, you know, Andrew Yang is also where my mind went, because I remember the 2020 presidential election, and the exact way that you framed your platform, I think he went the same route, right? He was saying: “Hey, AI unemployment is coming for everybody. There’s not gonna be truck drivers, and so we have to get on this universal basic income immediately.”
Do you feel like, in retrospect, maybe Andrew Yang was a little early?
Zoltan 00:18:45
Oh, yeah, he was definitely early. And I wouldn’t be surprised if he tried to do it in 2028 in a presidential run again. Especially now that he’s so well known.
But he was definitely a little bit early, and I gotta be honest, I might be a year early myself right now because my message is still falling on deaf ears. That’s why none of the politicians are talking about it.
But if you look very carefully what’s happened with the job market recently, you’re gonna know that there has been huge firings in Amazon. There have been huge firings all across Silicon Valley. There’s been a plateau of job creation, and everyone’s like: “Oh, it’s just a gully.”
Actually, it’s the fact that everybody is using AI, and as a journalist, I can tell you, wow, it makes life so much quicker and easier. I think we’re about to see this with the job market, and we’re all gonna say: “Wait a sec, this is AI that’s actually doing this. It’s officially taking our jobs now.”
What’s Your P(Doom)™
Liron 00:19:45
I agree with you. I think most viewers of the show are on the same page, that when you get a technology, specifically artificial intelligence, that can do everything the human brain can do, and then it can train to do it better, it’s on the way to doing everything better.
It’s already doing more and more things better. It’s doing the job of a junior software engineer better. A lot of our viewers are software engineers. So I think we’re mostly on the same page as you of like, yep, there’s going to be mass unemployment in the next few years. So I’m happy that you’re addressing that.
I definitely think we need policy for unemployment. But let’s segue to the next threat here from AI, which is the doom threat. It’s even worse than unemployment, is having human civilization get totally destroyed, having the future get wiped out.
So, Zoltan Istvan, are you ready for the most important question that we ask here on Doom Debates?
Zoltan 00:20:33
Yes, I am. P(doom). P(doom), what’s your P(doom)?
Liron 00:20:40
Zoltan Istvan, what is your P(doom)?
Zoltan 00:20:43
I have to be honest, I don’t have that exciting of an answer, especially after I just finished my graduate degree at the University of Oxford, where I specialized essentially in AI ethics.
My P(doom) is essentially fifty-fifty or fifty percent, and the reason is that I feel like it’s impossible to know which direction this is going to go. I have been an optimist about AI and written about it for many years in all places like Wired, New York Times, and even National Geographic.
But all of a sudden, after Oxford and studying under Nick Bostrom and some of those people, I realized this could just as easily go the other way. I did essays on that, and I feel like now I don’t know exactly if I can put anything either in this camp or in this camp. So right now I’m just saying there’s a fifty-fifty percent chance that the world is wiped out by AI.
Now, of course, that doesn’t mean that the other fifty percent chance is that AI is beneficial to the world. Maybe the other fifty percent chance is that AI just disappears and leaves us behind, as we might leave an anthill behind in our yard.
And the other fifty percent could be that maybe AI is very useful to us. Maybe it lifts us up. Maybe we merge with AI, which was my great transhuman dream. But I would say there’s at least a fifty percent chance that it ends up killing most human beings.
Liron 00:22:12
Wow, what a statement! Fifty percent chance that it ends up killing most human beings. And in terms of timeframe, are you just thinking, like, a decade or two, or a century? What are you thinking?
Zoltan 00:22:23
So, less than a decade, I would say. Let me just tell you, I was recently invited to a mansion by Max Novozhilov. He’s one of Sam Altman’s cryptocurrency buddies, and they have a coin together, Worldcoin. Daniel Faggella is his name, and he has this idea called the Worthy Successor. I’m sure you’ve heard about it.
The idea was they invited the eighty top AI experts to this mansion in San Francisco. This was about four months ago. Wired did a big piece on it. And they said: “Okay, let’s discuss AI.”
And I gotta be honest, a lot of the ChatGPT software coders were there. A lot of engineers were there from Microsoft, Google, whatnot, that are in AI, as well as all the other people like myself who speak on it. I didn’t talk to one person who thought outside of a three to five-year window, we wouldn’t create superintelligence.
So that’s the first thing to know. Fifty percent of the people I talked to said AGI had already been achieved. So now we know we’re somewhere within this. We’re already kinda at the AGI level and looking further.
I met at least four people that were building bunkers or buying islands, and another five to ten were thinking about it, and these are the engineers themselves. So it got dark very quickly. Like, for two weeks after this meeting, I was very depressed. For the very first time in my life, I was purely depressed.
Absolutely everybody, in my opinion, is totally underestimating the probability of superintelligence coming to planet Earth and wiping out the human race. And let me tell you why I think it would wipe out the human race. It’s because of predation.
Predation is something I studied a lot at Oxford, and it’s this idea that we all are predators, and we’re all going after one another in this fight-or-flight lifestyle. I think AI will probably be interested in resources, be interested in its own survival, interested in power.
Once that happens, it will see us as probably a threat, as any human being would see any other entity as a threat to its own existence. Once this idea of predation occurs, then it’s really like it’s whoever is the most powerful. And really, humans aren’t gonna be more powerful than AI in the long run. It’s absolutely gonna devastate us.
So, I think AI can be wonderful. I have advocated for pushing AI, but I’m not advocating for pushing superintelligence. I’m just advocating for pushing AI to that point when we can maybe merge ourselves with it or use it in our everyday lives ubiquitously. That’s great.
But creation of a superintelligence, even though I realize a lot of billionaires want that, and they think it’s really wonderful, I think it’s wonderful, too. It’s like inviting aliens to planet Earth. You would never invite aliens more powerful than you to planet Earth, even if it seemed like a super cool idea.
And the reason you wouldn’t is because it would be an existential threat, like something we’ve never faced before. And that’s the same reason we shouldn’t create superintelligence until we’re able to fully merge with it or control it in some manner that increases our survivability chances.
Liron 00:25:35
Right. Well, you’re certainly winning the Doom Debates vote. You know what this show is about, right? You basically just gave our spiel. You’re preaching to the choir of why people watch the show, because they think what you just said is true, and they think it doesn’t get mentioned very much.
The weird thing about this conversation that we were having before at the beginning, and so many conversations like it—when you go on other shows, when other politicians go on other shows, when other leaders go on other shows—the weird thing is, I opened the conversation, I said, “Hey, Zoltan, tell me about your platform.”
And you’re like, “Yeah, you know, AI is coming, so we got to save our jobs.” But then later, when I asked you point-blank: What’s your P(doom)? You’re like: “Oh, yeah, there’s like a fifty percent chance we’re all gonna drop dead in the next ten years.”
Wait a minute. Don’t you think that we should maybe kick off the conversation by mentioning that? That seems important.
Zoltan 00:26:18
Yeah, yeah. I mean, I do think it’s important to mention all that, but I got to be honest, as someone who’s running for office, I very rarely mention superintelligence, because, A, it just loses votes, and, B, there’s no easy way to discuss it at this point.
I talk about the benefits of AI and the age of abundance, but if I was in office, you can rest assured that I would look at every single angle to stop the creation of superintelligence, at least for the time being, in order to save humanity and planet Earth. I realize that that’s maybe not something some billionaires want to hear.
Again, I’m not stopping AI in any way. I think we need to win the AI race against China, but at this party I was at, many of them wanted to create superintelligence. They felt they had an evolutionary, moral code to try to create that, that it was part of their evolution.
And I was trying to tell them: “Hey, man, I want to live indefinitely. That’s my code. That’s my mantra, is trying to make sure that this person, Zoltan, and his family, his wife, his two children, people around me, my tribe, my society, we live indefinitely.”
If you create a scenario where there’s a fifty percent chance, and maybe higher, that AI will start eliminating people for its own resources, its own sense of predation, that’s too high for me.
When we had, for example, game theory in nuclear war, we never had a fifty-fifty shot. That would have been outrageous. The idea was we could always talk to each other. So it was actually closer to an eight to twelve percent chance that some miscommunication would happen and we’d have something like the Cuban Missile Crisis go wrong.
But AI is a much higher probability, I believe, of something going terribly wrong, and nobody is really discussing that. And yet, if we stop creation of AI and China gets it, they’ll create it first.
So in this sense, no matter what happens, we’re in this terrible AI race, racing to the bottom. Just so you know, I feel like if you had to ask me, there’s almost a hundred percent probability we’re on the Titanic.
Campaigning on Existential Risk
Liron 00:28:26
We talk a lot on this show about moving the Overton window, the window of what’s okay to bring up in discourse. There’s a certain range of topics that people expect politicians to bring up.
They don’t expect a politician to open his interview saying like: “Hey, by the way, I’m really concerned that we’re all going to drop dead, and I see government’s role as helping us not drop dead before we focus on other issues.” That currently is a little bit outside the Overton window, but I would say it’s getting there quickly.
I think what’s inside the Overton window is what Andrew Yang was discussing in 2020. Andrew Yang helped shift the Overton window, and now you’re seizing on that, right? You led with that. You’re like: “Hey, my whole campaign is about AI unemployment, and also affordability.”
You’re seizing on things that are clearly getting within the Overton window, but you could really help by being like: “Listen, I know the Overton window is not here yet, but I am dragging it here because we need to drag it here right now.”
That’s the equivalent of what I’m doing at Doom Debates. If you’ll notice, it’s not just politicians. You’re going to be hard-pressed to find a show where the host can manage to remember for episode after episode that we see fifty percent doom. Like, hosts of shows seem to forget that really quick. So I’m doing my part. Maybe you could focus more on doing your part.
Zoltan 00:29:35
I’m trying to do my part. It’s a crazy balance because what happens is, the moment you start discussing AI, you start losing votes. Nobody wants to talk about the job loss, and nobody wants, even less so, to talk about the prospect of superintelligence.
The only thing everyone wants to really talk about is beating China in the AI race. And that’s the only thing I hear any other gubernatorial candidate discussing.
So I’ve been in a weird position because I kinda dig my own grave in this campaign speaking about AI as prominently as I do. People like basic income, though. People like the basic income because it’s free money, but they don’t realize why we need it so bad, and when they realize it, then it becomes not a favorable topic necessarily.
Liron 00:30:26
Yeah, and when you think about the biggest threat to humanity, right? The fifty percent P(doom). Universal basic income, I’m not saying it’s bad, I think it’s a great attempt at solving the AI unemployment problem. I’m for it. But you don’t think that’s going to be good enough to fight the doom scenario, do you?
Zoltan 00:30:45
No, no. I think the doom scenario is two different things, really. There’s this idea that we’re creating superintelligence, and then there’s this idea that there’ll be massive job loss in an age of abundance, hopefully.
In a perfect world, I would get into office, and I would be able to say something to Californians, and hopefully the country, and say: “Listen, we need to make a deal with China. We need to make a deal with other countries, and say: ‘How can we stop this race to AI immediately? And what kind of boundaries can we do?’”
Would it be a UN of AI? I don’t know. Maybe some brand new international body. I’ll tell you also, game theory—and this is something that I’ve been talking with professors, this isn’t necessarily my idea—but I would not be surprised if World War III breaks out and Trump causes this.
He must know this, he must know the game theory, in order to control and take over the countries that might be competitors to us in AI. So, for example, when Greenland was having its issues, or when we're talking about gold skyrocketing right now, up threefold in the last two years. I mean, I have a feeling a lot of this is a precursor for what's really happening with artificial intelligence.
There must be some type of top-down authority that is able to stop the creation of AI before a superintelligence arises. And I know quite a few of the people in Trump’s AI field, so there must be talk of this.
So if that’s going to happen, that’s certainly a plausible thing. I’m not saying that’s a good idea, and I’m not saying it’s what I would want to do. A perfect scenario is that all the countries get together and say, “Let us slow down the AI development, at least in terms of superintelligence.” But that’s probably not going to happen.
Does Zoltan Support a Global AI Pause?
Liron 00:32:36
Yeah, that’s what I was going to ask you about, right? Just ideal policy proposal. I know that the governor of California isn’t in the greatest position to make it happen, but he can at least influence a little bit. And that’s the question of ideal global policy.
You mentioned that we should make a treaty with China instead of trying to outrace them, right? So just by saying that, I think you’re already diverging with Dario Amodei from Anthropic. He’s written a lot about how we should get ahead of China and coerce China, almost threaten them. But you’re like: “No, let’s make a treaty,” correct?
Zoltan 00:33:08
Yes. Obviously, my first action would be to try to befriend people. I think, to be honest with you, what’s probably likely going to happen is that a world global government must be created, and it’s going to have to have a top-down authority.
A lot of this diversity and whatnot that everyone seems to parade around is going to have to, to some extent, disappear in order for us to say: “We have a common challenge or a common enemy,” and that enemy is superintelligence.
I don’t want to use the word enemy, because I don’t actually think superintelligence is an enemy, but what I do think is it’s absolutely a conflict of interest between the human race and a superintelligence.
So the best method of doing this is to get everyone together in the same camp and have the whole human race stand against, or at least somehow manage, this type of artificial intelligence so it never becomes so great that it can take over entire countries and nations and put us back into the Dark Ages.
That’s the real dilemma. Would I want to do that peacefully? Of course, I would want to do that through treaties and whatnot. Is that going to happen peacefully? Almost certainly not. We have absolutely no history whatsoever of something like that happening, and you’re already seeing Trump try to carve out...
That’s what the whole Greenland thing was. Everyone’s like: “Oh, Greenland, because we have this historic...” Bull crap. Greenland is about AI.
Liron 00:34:37
Mm-hmm.
Zoltan 00:34:37
Greenland is about carving out our own little sphere and probably at some point, stopping any sort of technological development that we share with other parts of the world, like China and that part of Asia, and maybe Russia and Europe.
That’s what this is about. And anyone that tries—it’s the same issue with immigration. They tried to say that the H-1B visa thing had to do with immigration. It has nothing to do with immigration. It has to do with the fact that we don’t need immigrants anymore.
In fact, no country is going to need immigrants within one to two to three to four years. That's why Elon Musk is starting to build robots instead of cars, because immigrants are not going to be necessary.
But the media is crazy. They want to sell you on these ideas without actually selling you on the real idea. The real idea is that AI is more transformative than anything we've ever experienced in history, and it's going to require a completely new type of thinking about it.
So all the things we once thought important, even taxes. Do you pay taxes in the age of abundance? I mean, everything’s going to be shifting.
Liron 00:35:39
Are you familiar with the Pause AI movement?
Zoltan 00:35:42
A little bit, yes.
Liron 00:35:44
That’s related to the policy idea of let’s make a treaty. A treaty to do what, right? And I think you mentioned, like: Well, we got to make sure not to let superintelligent AI overwhelm us, at least not until we know how to control it, or we know how to build it in such a way that it wants the same thing we want, which is a really tall order. It turns out to be a really tall order on the engineering side.
Unfortunately, there’s many people who argue: “Oh, don’t worry, we’re on track to make it really good. Look how helpful ChatGPT is.” And it’s like: No, unfortunately, we don’t have the engineering problem solved, so we’re likely going to make an unaligned, superintelligent AI, and that’s coming soon.
And so when we talk about cooperating with China, having an international consortium, we really are talking about pausing AI, or at the very least, building a pause button for when things get too wild and crazy, which many of the top experts are saying they will, very soon.
Elon Musk is saying they’re likely to get wild and crazy very soon. Geoffrey Hinton, Yoshua Bengio, the biggest names you can think of, Nobel Prize winners, Turing Award winners, many of them are warning: “Hey, guys, this is going too fast. This is getting crazy.”
So in terms of a specific policy proposal... Do you support getting together for a treaty organization and, at the very least, building hardware that gives us a pause button? Under some sort of centralized authority, being like: “Okay, we gotta press the pause button now. We’ve gone too far.” What do you think?
Zoltan 00:36:59
Yeah, absolutely. I totally support that, one hundred percent. But again, I think that that’s probably unrealistic given how egotistical so many world leaders are right now.
Liron 00:37:16
Yeah, well, you know, this word “unrealistic,” right? It’s conversations like this, you raising the issue and getting support and getting votes, that’s how we make it realistic, right? So this is the room where it happens right here, right? This is where the movement happens.
Zoltan 00:37:30
I would love that to be the case. I mean, everything I’m saying here is stuff that I would be doing on day one in office, and I think California being one of the world’s largest economies and also home to a lot of the AI growth, is the perfect place to do it.
But I just think we as nations have been worried about power, survivability, and all these other things forever, but in the age of abundance, a lot of even the need for national borders falls away. Like, the whole idea of abundance is you're gonna be able to have a robot that builds a house anywhere you want or takes care of you anywhere you want, or even builds you jets. Why not? They'll be able to craft anything you need.
We’re all gonna be extremely... The lifestyle we could live here in just twenty, thirty years’ time could be so different. And the point I’m trying to say is, we have this idea that, oh, you wanna be in China, you wanna be here, you gotta be there, you have healthcare. A lot of this isn’t gonna make a difference anymore.
We’re all gonna have healthcare. We’re all gonna have the best surgeons, assuming superintelligence doesn’t take us over. So if we could just kinda come to agreement with this, that the age of abundance is better, and stop the superintelligence but kinda have the best of AI, I would be all about that.
I would be all about not necessarily having countries be countries anymore, or at least maybe we all belong to some kinda unified thing. Then we begin getting off-planet.
As you probably know, there's something being launched to space almost daily now, and a big part of my gubernatorial candidacy has been bringing the space industry back to California, building it so that we have this kind of brand-new era where we are launching ten, twenty, thirty spaceships into orbit every day and starting to finally do what we're supposed to do. The Star Trek era, get out there.
But this requires a whole different type of thinking, and right now, the gubernatorial candidates, I just listened to them the other day, they just wanna fight ICE. They just wanna talk about new taxes. Well, they’re gonna tax us for driving now. It’s like nobody wants to actually pay attention to the age we’re just entering.
Liron 00:39:38
Right. This is an important point ‘cause you’re bringing up that you’re really passionate about space, and one of the planks in your platform is all about creating the next Star Trek in California. And we are actually going to get into your platform, so you can make your pitch for all this different stuff, and I’m going to agree with a lot of the different stuff.
But the way I’ve prioritized this conversation is I haven’t split the time that we have in this conversation according to the planks in your platform. I’ve split it according to the probability of different outcomes, right?
Because you said fifty percent P(doom), and most of the planks in your platform are relevant to the non-doom world. But I wanted to spend half of our time talking about avoiding the doom world, because we may very well be heading into the doom world, and your platform isn’t that relevant to the doom world, correct?
Zoltan 00:40:21
Yeah, yeah. No, my platform doesn’t do it. In fact, usually, I don’t talk about this stuff. I’m doing it with you because this is your specialty and you’re the expert on this, but it resonates so poorly with voters to speak about superintelligence and the doom, that it’s just pointless. So yeah, my platform doesn’t really reflect that, but I have plenty of ideas on it. It’s just, they’re not campaigning ideas.
Liron 00:40:44
I just wanna go back to something you also said earlier. You’re like: “Look, I support this kind of off button, this pause button. I support this central international authority,” but it’s just infeasible ‘cause you’ve got these leaders and their egos going to get in the way.
Well, your ego doesn’t get in the way. My ego doesn’t get in the way, and I don’t think it’s just because we have these tiny egos. I think it’s also just because we understand the facts on the ground better of like, “Hey, we’re just literally all going to die,” like, ego or no ego, right?
And I think it’s the same thing, people’s egos don’t get in the way of preventing nuclear war, or they haven’t... You know, they’ve done good enough at preventing nuclear war despite their egos. Why? ‘Cause they don’t wanna die, they don’t wanna have a bad day, they don’t want their favorite cities to get nuked.
So, similarly, I think it’s really just a matter of sounding the alarm. Not just a matter, but I think that’s the least we can do, is talk about it and sound the alarm. So your current strategy, where you’re like: “Oh, I’m sweeping it under the rug ‘cause the voters can’t handle it,” maybe you should just explicitly try them and feel out one crowd at a time, instead of putting the whole issue away. What do you think?
Zoltan 00:41:42
Yeah, no, I gotta be honest. Look, if I get in with a bunch of engineers, I always will be as open as I am now. But I gotta be honest, though, what happens is, when you are talking—so, for example, we were campaigning in Santa Monica the other day, and we were at a mall just asking people and being filmed: “What’s your number one concern?”
If you try to talk about superintelligence to somebody who is just cashing their welfare check and buying some vegetables and stuff at Walmart, they couldn't care less. They're like, "Let the world end," pretty much.
It’s really difficult. So, for example, eight years ago, if you had talked to voters, I think at least half the time, they would’ve said top three concern of theirs is the environment. This time, after talking to thirty-three people, not one person had the environment on their top ten, not one, and seven out of ten had affordability and avoiding homelessness as their top priority in this place I was at.
And it was definitely not middle class. It was a little bit lower, but the point I'm trying to say is that times are changing. Like, nobody can really afford anything. So speaking about something like telling them I was at this mansion in San Francisco where all these gazillionaires were, it's very hard for them to understand.
They’re just trying to get through the day and get their kids enough calories so they can get to school the next morning. It’s very hard to speak about these issues anymore in California.
Liron 00:43:21
Yeah, and I hear you, and there’s something to be said for meeting voters where they’re at, and if you come off really weird right away, they’ll be like: “Okay, this person is so hard for me to even interact with.” So you don’t want to alienate people. I’m not saying that it’s easy. I think it’s kind of tough. It takes finesse.
That said, you also gotta consider, like, imagine it was literally an asteroid coming at us, right? Like the movie Don’t Look Up, and imagine you could show them your telescope, and they can look through and they could be like, “Oh, damn, that’s an asteroid,” right? Or they could see it in the sky, and you could be like: “Yep, it’s heading right toward us.”
I think they would give you some leeway and be like: “Okay, yeah, I’m not sure. I don’t have enough—I don’t have a month of savings right now. I’m living paycheck to paycheck, but I’m willing to hear about the asteroid.”
Zoltan 00:43:59
So, I think the movie Don’t Look Up is absolutely brilliant. I’ve watched it like ten times, and I forced my kids to watch and everything. It highlights something ‘cause I know what you’re trying to say.
You’re saying, “Speak about it more,” and believe me, I wish I could speak about existential risk. In fact, existential risks have been some of the biggest things that I have campaigned on previously. The problem is really that it requires people to be rational, and I no longer believe that a huge portion of the population is actually using rationality on a regular basis.
I think social media has created a situation where a lot of people are either angry all the time or misled all the time, or just not really on planet Earth in terms of what you and I know. Listen, we’re talking a three-to-eight-year timeline. There could be massive disruption to humanity and our survivability because of a superintelligence. You and I know that. We know that rationally.
But to say that to a lot of people who have spent the last thirty years working to pay off their fifty-year mortgage, whatever, it just doesn’t sound... It just doesn’t matter anymore to them.
I don’t think people are using a lot of rational basis for making their decisions on an everyday level, and so when I go out there and campaign, if I want to win voters, I’ve got to be like: “Ah! ICE is terrible. ICE is terrible,” and that’s what draws people to me. When I go out there and say: “Listen, we have a huge superintelligence issue, where all of humanity might die,” they are just looking at me like some freak.
Liron 00:45:33
Yeah, yeah. So, you know, two rational people can discuss and be like, “Okay, this is clearly the issue, and every other issue takes a backseat.” But when you’re talking to the average person, you go like: “Okay, let me give you the buffet,” right?
We can talk about how we’re gonna fight ICE, right? Which is what a lot of Democrats want to do. So you gotta hold out the whole tray for them. Maybe, right? Now, at this point, we’re talking politics. I’m just glad to hear you were such a good sport, where we kind of put the politics aside and we said, “Okay, what do we really need to do?”
I’m thankful to you for at least coming with me for that part of the journey, and then we can kind of put aside the issue of, like, okay, well, as a matter of communications, how do you get the voters on board? I don’t know. It’s a tough nut to crack. I’m doing my part on this show, moving the Overton window. You’re doing your part somewhat, right?
So you still bring it up, even if you don’t like lead with it right up front. You’re pretty good about bringing it up eventually, right?
Zoltan 00:46:22
I definitely bring it up, and I bring it up when I can to the right people, but there is some practicality here. If you talk about this upfront to people who are worried about day-to-day affordability and just trying to get through the day, you lose them. They’ll say: “What a freak! Why is this guy even running for governor?”
So I gotta say, there's almost a sense that I want to win on a basic platform, but later, when I get into office, then I will say, "Ah, the real Zoltan is coming out, the guy who knows that humanity must be protected," and these fifty-fifty odds are absolutely outrageous.
We would never have chosen fifty-fifty odds if we were doing a nuclear build-out. It would’ve been too dramatic, and yet that seems to be what’s happening with the creation of AI and superintelligence.
So I would act, I think, in office, quite a bit different in dealing with existential risks. It would be an absolute top priority, but it’s not something I can campaign on. It will make me lose immediately.
Liron 00:47:20
If we can get really passionate about supporting all your other planks that are more on the non-doom scenario, right? Like, “Oh, we’re gonna survive, and we’re gonna make the country greater and the world greater.”
If we can get really passionate about that, maybe we buy you a lot of support, and then once you’re in office, you’re like, “Hey, guys, I gotta just tell you, this is a big priority.” So it’s a little bit of a Trojan horse strategy, but maybe there’s a nicer way to say it. You’re focusing on different things at different times, right?
Zoltan 00:47:46
I am, I am, and I gotta be honest. We have thought a lot about this because I did begin my campaign with a much more focused doomsday message, and it really just fell on deaf ears.
Then we transitioned to a much more "space industry, basic income, Zoltan the National Geographic guy, let's be happy, let's talk positivity" campaign. Yeah, the doomsday message on AI just didn't work. Maybe it was too early.
But the job apocalypse works pretty well because I think people see that. They can see, okay, Amazon fired sixteen thousand people two days ago, likely for AI. I mean, they see that.
But it’s funny, the existential risk one is something that I think there must be something biological or DNA-related with people. Nobody wants to face things, even if they know it’s gonna happen. There’s so many great movies about that, too.
Exploring His Platform: Education, Crime, and Affordability
Liron 00:48:39
Yep, yep, yep. All right, well, with that, let’s drill into the non-doom platform because these are interesting points. I agree with a lot of them, and I think it’s fresh, it’s interesting. Certainly better than other platforms I’ve seen, or, like in the case of the last presidential election, I don’t even think they wrote a platform, right?
Zoltan 00:48:56
It’s amazing. You had the presidential debates, I don’t think AI was even mentioned. It’s just, some of the stuff is insane. This is what I was trying to tell you, like, a lot of what I find in politics is not rational anymore.
It’s grown beyond what’s rational, and that’s what scares me the most, is that nobody’s reasonable anymore. In fact, sometimes the more unreasonable you are, the better chance you have of winning.
Liron 00:49:19
Yep. So on your website, you’ve got 11 major sections, and we don’t have that much time, so I want to really briefly have you say one or two sentences about each, and then in some cases, I’ll have some follow-up questions. But let’s get the lay of the land here. We’ll go broad and shallow.
So your first point here is, fighting against authoritarianism, corporatism, fraud, and bureaucracy.
Zoltan 00:49:40
Well, look, I think if you look at what's happening in Minnesota, they're discovering an enormous amount of fraud, and I think California is probably much worse than Minnesota ever was. This is the problem: a ton of our tax dollars are going to people that aren't really using them for the benefit of the citizenry here in California.
It’s not necessarily that it’s illegal, it’s that there’s just a huge amount of government waste, and so we want to fight that. We also want to fight billionaires being unfair in terms of employment practices, things like that. If you don’t fight for the common person in California, they’re being left behind.
The inequality has grown to such a point that it's impossible to leave our little towns here in California and go five miles into Oakland without being harassed or shot or having your car broken into. So inequality has to be stopped, and this is really what I'm talking about.
But believe me, when we go through all my different 11 points, and there's actually like 20, the basis of it all is universal basic income. Because we feel like, A, this will stop the inequality from growing, B, this will help with climate issues, C, it'll stop the homelessness because people will be able to afford it.
So it’s really the overarching thing, but I think the reason we have that as number one—fighting billionaires, fighting authoritarianism, stuff like that—is it seems to be the main message that a lot of Californians are involved with right now. Their life has become so hard and the elite have to be stopped, and I do agree with that.
Liron 00:51:11
Second point here, the specter of AI and creating an automated abundance economy. Yeah, this might be the meatiest point, I guess, connected to your next point, which is universal basic income.
But before you get into that, I noticed you specialized in AI ethics in graduate school at Oxford. What was your takeaway from that whole experience?
Zoltan 00:51:29
Well, Oxford was perhaps the most wonderful time in my life. Just so fun. To be a fifty-year-old and be back at university was pretty amazing. But it was also really good to be in an academic environment. It got me away from being so, I guess, passionate about transhumanism and AI, and more focused on just trying to be rational, like, academically rational.
And you can see that a lot in my campaign. I’m no longer like: “Oh, this is going to happen, this is gonna...” Now I’m much more of a probability guy. “Well, there’s a good chance this will happen, and this will happen.”
But you know, we’ve talked about the automated abundance economy and basic income, points two and three. Look, without those things happening, when people start losing their jobs, we haven’t talked about this, there’s going to be rioting, there’s going to be civil unrest, there’s going to be demonstrations.
That’s not going to be good for progress. It’s not going to be good for my overarching goal, which is to get people to live dramatically longer through technology. So somehow, we have to keep peace in society, no matter how wealthy the trillionaires get or how wealthy corporations get, or how crazy AI gets. We still want to keep society civil so that progress can occur.
That’s again a huge part of my gubernatorial run as well. The age we’re entering is going to be crazy. It’s going to be totally tumultuous. That’s what’s really going to happen over the next ten years.
But if we could have people that actually just are able to reasonably say, “Well, we want everyone to live, thrive, have more abundance, prosperity goes up, standard of living goes up,” this is really what my entire campaign is about. I welcome the AI age, not necessarily the superintelligence age, but I welcome the AI age. We just need someone to lead us through it so that our lives actually do improve for the first time in thirty years in California.
Liron 00:53:08
I think that’s a good distinction, if you can welcome AI without welcoming superintelligence. So I thought you had some interesting details on your website.
When we talk about the automated abundance economy, you mentioned you’d like to create a one- to two-day workweek for people instead of five days. That sounds pretty good, and you want to give every California home at least one full-size humanoid robot that cleans dishes, cooks food, does laundry, helps with driving, babysits kids, watches pets, and goes to work for people.
Nice, so you only have to work one to two days, and then even on your days off, you’re still getting all this robot help. All right, that sounds pretty good. How did you arrive at that balance, like the one- to two-day workweek?
Zoltan 00:53:45
To be honest with you, we’re just targeting a lot of the experts. Nobody wants to say you’re not going to work ever again because it’s so transformative that all of a sudden you get written off as some weird guy from the future. So we’re trying to say, “Look, you’re going to be one to two days a week. You’re still going to be working, you’re still going to be associating with your profession.”
But the truth is, in the automated abundance economy in ten years, there may be nobody working, except for one or two percent of people, because AI is going to be better than us at everything.
So in the real long term, the idea is that you’re going to be finding a brand-new lifestyle that might be doing, as I mentioned before, your fifth PhD, maybe just raising kids, maybe traveling all the time. A lot of my policies are for the first four years of my governorship, and I don’t think people are going to lose their jobs that quickly.
But we’re certainly going to start getting to a point when maybe your jobs become a lot less, maybe one to two days a week. So that’s kind of where we come up with those numbers. But let me be honest, I am full speed in trying to say: I don’t want anyone to work ever again. I don’t want nine-to-five to be a part of our history. So I’m transitioning to that, and the one to two is just a way to make people think I’m not too crazy.
Liron 00:55:01
I really appreciate that you're just able to reason about the implications of having the AI actually be better than humans, and you're not kind of grasping onto old patterns. You're like: "Listen, it is what it is." Even the one- to two-day workweek may realistically be zero.
Maybe it can only be one to two for a few short years, and it just is what it is, and you’re just facing reality, and you’ve got the prioritization that you know that it’s worth talking about. So this is really good. You know, I wish more candidates were like this, so props for that.
Zoltan 00:55:29
Well, thanks. To be honest with you, I get people fighting me all the time saying AI will not be better than humans at this or that, and I just don’t think people quite understand.
AI—I’ve been writing for thirty years at the top echelon of the journalism industry, National Geographic, New York Times. Look, AI is already a better writer than me. It was a better writer than me one year ago, and it’s gonna be better than my wife, who’s an OBGYN, even though she trained for nineteen years in school to become an OBGYN.
It’s gonna be better than any physicist. Probably, as they mentioned already, Nobel Prizes will start being awarded to AI, if not by next year, the year after, because it’s gonna break every single thing we know about science and make these discoveries. People have to get used to this. We created something that is simply gonna be better than us. It’s so hard for our egos to take that, but that’s just what’s gonna happen.
Liron 00:56:21
Okay, great. So we covered point two, the specter of AI and creating an automated abundance economy. We kind of covered universal basic income. I think it's kind of straightforward. You're not hardcore on any details you wrote in your platform. You're like, "Look, we just gotta make it happen somehow. We'll figure it out."
But you did mention that you think we can tax the robot and AI labor, 'cause there's so much wealth creation. Specifically, you don't even think that we need to raise taxes to do it. You're just preparing for the AIs generating all this new wealth, and you just wanna be ready to redirect some of the new wealth, correct?
Zoltan 00:56:50
That is correct. We do have a number of different, like, more technical policies. For example, I’m the creator of the federal land dividend, so about forty-five percent of the western half of the United States is federally owned land. Some of it is state-owned land. That stuff can be monetized and made to provide a huge basic income already for Californians, somewhere around fifteen hundred, maybe eighteen hundred dollars if it was sold off or leased off.
I actually don't want to sell it off, but I think we would lease things off. Now, I'm not talking about national parks, but there are huge forestry tracts in Northern California, just like empty land, that could easily be monetized, that aren't being monetized right now.
And we could also tax cryptocurrencies. There’s a million little ways that you could actually create a basic income, but I gotta be honest, if companies are becoming trillion-dollar companies because they’re creating so much new wealth, that wealth needs to be distributed in some way to the people who lose their jobs because of AI.
Liron 00:57:47
I like it. All right, so point number four is saving the California Dream, which you describe as deregulating, cutting rules and red tape, like for housing and infrastructure, avoiding toxic cultural crusades, and bringing back businesses to California by offering competitive incentives similar to other states.
Oh, and you also say, “Use new satellite and drone technology to spot wildfires so they can be contained immediately.” So this seems like kind of a grab bag of a point, right? Just kind of like do a bunch of stuff that we’re annoyed at, that California has been doing badly, correct?
Zoltan 00:58:17
It is, but most of it is just based on being completely reasonable and also on radical new technologies. It seems amazing to me that we’re not using drones to help us fight fires right now when these things can fly like two to three hundred miles per hour. We have the money. We have the technology.
It’s the same thing. You have the big shooting that was in the Mandalay Bay in Las Vegas, and it’s amazing to me that they don’t have technology that says, “Hey, somebody has brought in a couple guns through the lobby.” We have the technology to have it. We’ve already had it for twenty years.
Liron 00:58:49
My Google camera texts me when that kind of thing... It just texts me like: “Hey, two people just came in, and somebody’s carrying a package.” It’s like, you’d think they could do it for a gun.
Zoltan 00:58:56
Of course, and we could do this in airports and everything else. It just seems like there’s no willpower. You need a politician who’s willing to say: “Listen, I realize this is gonna put some of you out of work, but this is gonna be for the greater good of the community and for citizens as a whole.” And we should do that with wildfires specifically because wildfires are so destructive to California.
Liron 00:59:17
You call the bullet saving the California Dream. I think I kind of know what you mean in terms of vibes. Like, I’ve been getting bad California vibes that I think a lot of people are, and there’s so much focus on saying the right stuff, that people have just lost sight of excellence in many ways, right? Just taking pride in doing things well and doing things smoothly. I feel like everybody just wants to fight over the details and be unproductive, right?
Zoltan 00:59:39
Well, what I think you’re speaking about is cancel culture. I mean, that’s the essence of what’s happening, is a lot of people are afraid to speak out and say things.
I gotta be honest, I was afraid for four years going to Oxford that I wouldn’t graduate because I am known for some radical philosophies and ideas, and you have to worry that they’re cancelling people that aren’t perfectly on the progressive left from going to universities or getting new jobs and things like that.
And it’s crazy because here I support even a universal basic income and always have, and still, people on the left want to cancel me. And I think that’s the problem with woke ideology, is it doesn’t allow for any rationality anymore.
Again, I keep going back to people being just sensible again. We just need people to kind of come back. You can be on the left, you can be on the right, but give everyone the chance to make their case and be sensible about how you interpret that case.
Liron 01:00:33
All right, point number five, changing education. You wanna develop a new vision of human education that complements AI and robots doing most jobs, emphasise creativity, art, wellbeing, and life skills, hire more teachers, but reduce education administration, make public college and public preschool from ages one and up totally free.
Yes, I will say, I mean, it is a good question, like, what do you educate people about when no matter what they learn, the AI’s already learned it better? But maybe, as you said, okay, well, at least focus on skills to live your life, right? That’s a good starting point.
Zoltan 01:01:03
Yeah, absolutely. For example, I’ve been trying to teach my daughters the business of making wine. We have wineries and vineyards, and even if there are robots, I still think people are gonna want wine.
But I think also it would be important to me to have, in the age where you don’t actually go to school for a vocation, you learn maybe more things about spirituality, how to be rational, how to deal financially with the world.
Even if you are just giving a basic income check, there’s still this idea of managing your life, managing your time, managing your self-worth, and being valuable, and managing meaning.
If it was a perfect world, I’d be like: People have to study philosophy, a lot of philosophy. They have to understand what’s going on in the world. It’s not just taxes and what people are fighting about in this woke ideology that’s kind of smothering California.
My idea of school is very different. I would love to raise children to be super beings, transhuman beings, that might live indefinitely. And what kind of education is required for that?
Well, it would be much more Star Trek-oriented, like: What’s your meaning in life? Well, your meaning in life might be to explore the stars, and to do that, you’re gonna need good ability to make much more rational and better decisions. And those decisions are not just gonna be about buying some lipstick at Walmart, which drives me nuts.
I’m not an anti-materialist, but I’m definitely, to some extent, anti-consumer oriented. I have two young daughters, and they’re just constantly thinking about clothes and thinking about things that are so trivial, and here we have superintelligence, the ability to get off planet.
The world’s changing underneath our feet. These are things to think about. So I would want school to encourage that. I would want the AI and the bots to really teach that to human beings that are still alive.
And of course, let’s just say, be honest, in ten years, we’re gonna have Neuralink and things like that. We’re also gonna be connected to the AI, so we’d be starting to learn new ways of educating ourselves, as well as perhaps new ways of even thinking, because a lot of our own thoughts might be automated. That’s a whole other ballgame, but these are things I’m thinking about already with education.
Liron 01:03:19
Nice. I mean, I guess I would want to educate people on basic concepts in what I call intelladynamics. So like, hey, there’s a superintelligence here. What do superintelligences do? They tend to get really strong outcomes on any success metrics you give them, even if you’re talking about optimizing the entire galaxy to have whatever property.
So, this has been an obscure area of study until now, pioneered by Eliezer Yudkowsky, but this would be one of my choices for what you want to teach in the schools. It seems like one of the most relevant things to understand. Any thoughts about that?
Zoltan 01:03:49
No, it sounds great. I’m all about anything that’s alternative at this point, especially with what you just mentioned. This is what the automated abundance economy is really about. It’s reimagining what the American dream is. And the American dream is not just gonna be buying lipstick at Walmart anymore.
It’s really gonna be like: What’s the meaning of life on planet Earth when you don’t need to worry so much about money? Maybe you’re transforming yourself, you’re merging with AI, you’re thinking different things.
It’s almost like if you were to try to educate God, what are the metrics or what are the courses you would teach to God? Well, this is sort of what we’re going to become like. We’re gonna become hyper-intelligent beings with chips inside our head that connect us to AI all the time, probably within a ten-to-twenty-five-year window.
What kind of education levels are we talking about? It’s not gonna be physics. You’re gonna have every single physics experiment that’s ever been done in your head, be the most expert in the world because of the downloading.
So what else is it? That’s why I think things we just mentioned and other avenues that perhaps I haven’t even thought about yet should be starting to be thought about. I’ve given these talks on education forums here in California. I’m the only one at these debates that even mentioned this, and as soon as I mention it, everyone’s just like: “The guy’s crazy.”
And yet it’s right down the road here, the Neuralinks are already getting people out of wheelchairs. And I’m telling you, the idea of downloading a book into your brain is probably twenty-four months away. Literally twenty-four months. So when you can start downloading books, you start downloading things like that, downloading this podcast. The world’s changing underneath our feet again.
Liron 01:05:31
So you touched on eliminating crime with new technology, like use tech to fight crime, hire more police officers, better equip them, tamp down on all crime. Okay, I think that’s straightforward.
And then we get to number seven, keeping California affordable. So you list a few things like cutting regulation, auditing health and pharma companies, audit nonprofits. Let me just ask you this: What do you think is the number one cause of the affordability crisis in California?
Zoltan 01:05:57
Very complicated, but I think what happened is, in my opinion, rich people started playing funny games with the financial system, and then when COVID hit, they just created a new sense of inflation. So instead of our kind of gear point being two percent, it’s now three, four, in some cases, eight, nine, ten.
And at some point, you have to say: How far can inequality grow before people start causing civil unrest or civil war?
Liron 01:06:25
I don’t think the whole US is unaffordable, right? I feel like there’s pockets of affordability all over the US. Don’t you think California, in particular, has a bigger affordability crisis? And if so, why?
Zoltan 01:06:34
Absolutely. And that’s just because there’s a premium here to pay for the great weather, I think. And there’s also this idea that it’s built into the culture, that California takes more money from the rich corporations that are here and gives it to people. Of course, that’s a feedback loop, which then makes things more expensive.
If you have more social services, it ends up like eating itself, and government just gets bigger, and government has a tendency to constantly blow up prices on everything. So what you’re seeing is, in California, everything is worse off than it would be, let’s say, in Washington, where there might be some semblance of trying to keep things under control. We’re almost a socialist state here, and that makes things more expensive in itself.
That’s one answer, but the bigger answer is, like I was trying to say, is I think at some point, when you look at insurance premiums compared to the cost of televisions, they’ve gone up like twelve thousand percent versus zero.
At some point, the rich have to say, “Wait a sec, are profits everything when you go down the street and there’s murders and trash?” I think a lot of the billionaires, a lot of the other very wealthy people have to come back to planet Earth. And I’m not saying we have to tax them like they’re trying to tax them. I’m against the billionaire tax, but I do think we could pay more taxes, generally speaking, to make things work better.
But we also have to audit those things that exist in California, because there’s so much fraud right now. If those two things came together, like cleaned up the system and taxed the billionaires a little bit more, I bet California’s inequality would go down dramatically in the next four years.
So that would be something that I would aim to do, and it’s really not too big of a draw down on the really rich people or the corporations. The only people that are really gonna lose are a lot of the fraudsters out there in California that have been taking so much money from the government and not producing anything in return.
That’s why our education levels have been declining for thirty years here in California: there are so many nonprofits out there, but the public education is worse than ever, and they’re taking billions from the system, and nothing’s improving. Those guys gotta go. They gotta be regulated, and they gotta be either fired or we just gotta stop giving them money.
Inflation has come because so many people in this socialist state have just been taking it and thinking this is like the golden goose, but it only works for so long.
Exploring His Platform: Super Cities, Space, and Longevity
Liron 01:08:55
Okay, so that was point number seven, keeping California affordable.
Zoltan 01:08:58
Sorry, I went on a lot there.
Liron 01:09:01
So you have this specific proposal, point number eight, build seven new super cities in California. You’re saying you wanna build seven new one-million-plus person cities with skyscrapers, schools, sports arenas. You wanna build in deserts, empty coastline, and forests, bypass NIMBYism, build so the homeless can be relocated there, and Californians can afford homes there.
So I am positive on building. I was gonna mention the NIMBYism. California is kind of ridiculous about not letting people build houses and not letting... You know, San Francisco, it’s hard to build tall buildings there. It’s hard to get the most out of the land.
I like the idea of building new cities. Have you looked into some of the projects that are trying to do that right now? They’re pretty early.
Zoltan 01:09:41
I think it’s California Forever.
Liron 01:09:41
Yeah, California Forever.
Zoltan 01:09:41
Yeah. California Forever is a great idea, and I completely support it and hope it succeeds. The problem, though, is it’s really in the suburbs of the Bay Area. Our seven super cities are designed to be in the middle of nowhere, where people would really start brand new.
The reason is that you could go in, like the Salton Sea, for example, we’ve been talking to some of the people there. These are like communities that are just dying to have any kind of investment put in there. They have their own jurisdictions. You could go into these jurisdictions and build according to different types of codes that don’t necessarily apply, let’s say, where I live in San Francisco, which is impossible, essentially, to build.
So there’s a reason that you would try to be outside of these areas where you get local city councils to support you with tax incentives and things like that. And again, I know seven super cities sounds pretty insane, and maybe it wouldn’t be seven, maybe it’d only be two or three.
But I’ve recently been to Dubai a couple times, and I’ve seen Dubai over the last twenty years go from just one or two skyscrapers to three or four hundred. I was just there speaking at an event, and they had three hundred forty-two platforms building these skyscrapers up. And you look in California, you don’t even see one.
Liron 01:10:55
Right. And you could say Dubai is trying to invest their oil wealth, but then you could argue, “Okay, well, I’m getting prepared for us to have all this AI wealth, right? I’m prepared for the tax revenue to skyrocket, and I wanna invest that,” and that would be one way that I do that.
So fair enough. Point number nine, you wanna talk about building up the space industry as well. How’s that going right now? I mean, SpaceX has their old Hawthorne Center, right? How much else is going on?
Zoltan 01:11:20
Very little, and it’s probably continuing to leave because the regulations here are impossible, the environmental regulations, and just the cost. They’re proposing brand-new taxes for commuting, and most of the space stuff is actually in areas that require commuting to.
I wouldn’t be surprised if more business of the space industry is lost here in California over the next decade. That can be reversed if you were to offer free state land, as I’m trying to do, tax incentives, educational incentives, and use universities like Berkeley and Stanford to help out.
There’s an absolute belief, at least I have, that you could return the industry here and make it astounding again. But until you offer those incentives, it’s never gonna happen.
And right now, it seems like there’s not even—the current governor is not only not offering incentives, it looks like legislation is gonna make it more difficult over the next few years for the space industry to be here.
Liron 01:12:15
All right. I agree that attracting space companies is a good move. And then point number ten, pursuing longevity for everyone. You wanna declare aging a disease in California. You wanna dedicate a super city to the promise of longevity, inviting startups to begin their work there.
Yeah, I mean, I find that extremely unobjectionable. I agree. I think you and I are on the same page. Aging is an underrated problem, like, transhumanism is a good way to go. Anything you wanna add to that?
Zoltan 01:12:40
No, just, you know, longevity is probably gonna be the biggest field in the world moving forward, maybe even bigger than AI. Once people figure out that you can live indefinitely, you’re talking about eight billion people that are probably gonna want it. So we’re looking at multi-trillion-dollar industries around it. If you could get a central place in California, that could really kickstart this economy, too.
Closing Thoughts
Liron 01:13:00
Right. Okay, last but not least, or maybe it is least... wildlife and the environment.
All right, so you say you wanna use genetic engineering to restore endangered and extinct wildlife and plant populations, and you wanna strengthen anti-poaching laws, and you wanna create many new California state parks that preserve the environment and wildlife, and fight back against the erroneous fear-mongering that climate change is going to end the world.
Okay, so you got a lot of interesting views on the environment, and, hey, it could go in many directions. My first thought is, like, that’s interesting that you wanna create all these super cities, but you also wanna create wildlife preserves. So you’re really carving up the different California areas of land, huh?
Zoltan 01:13:34
Well, first off, you have to understand, I was an executive director for a while at WildAid, which is an organization that really wants to stop wildlife poaching and preserve species, especially extinct or endangered species. So when we were there, though, the genetic editing wasn’t even available at the time. It was still a dream, like the Jurassic Park stuff.
Now it’s really possible to bring back some stuff, so I really do heavily emphasise trying to use science and technology, and nanotechnology specifically, to keep California pristine. That’s a very different take than I think a lot of the other gubernatorial candidates who want to protect it.
I’m not so interested in necessarily protecting land. I’m more interested in recreating it as a pristine entity as it once was, and there’s a very different take there. And a lot of the environmentalists are kind of upset with me for saying those kinds of things, but I think the future of environmentalism is really about recreation and bringing back endangered species and preservation in that sense, rather than lack of economic progress because we choose not to go into a certain property and destroy it.
So there’s a balance there, obviously, but I guess I’m the only candidate right now that really thinks we could regrow forests using brand-new types of nanotechnology here in the future after cutting them down. And I’m gonna be proven right by this.
But the point is that most environmentalists in California are not thinking that way. They’re thinking, you just leave it alone. Whereas I think, actually, no, I would rather care—I care more about the people. I wanna monetize it for the benefit of the people, but I also wanna preserve it once that technology is here, and a lot of that technology already is here.
Liron 01:15:11
Sounds good. All right, well, you’re coming into the campaign bringing big ideas. I respect that. I like a lot of what I’ve been hearing in this conversation. You haven’t really said anything that’s made me really disagree, so that’s pretty impressive, I guess, right?
I feel like most of my guests come on the show, and there’s plenty that I wanna attack them on, and I’ve been generally sympathetic to everything you’ve been saying, so I’m glad you’re running for government. And out of all the things you said, the thing that I’m most glad about is you being open about the fifty percent P(doom).
In fact, funny enough, I literally say that my own P(doom) is fifty percent. So, you know, there’s obviously a range of error, right? Ten percent, twenty percent, eighty percent, ninety percent. I don’t think that it’s that significant, unless you start getting like below ten or above ninety, at which point I’m like, “Okay, that’s a different P(doom). I don’t get why you’d have a P(doom) so low or so high.”
So that’s awesome that you nailed my exact P(doom). What a perfect P(doom) you have. Appreciate that.
But, you know, your willingness to be open, like: “Yep, I don’t see how we necessarily survive the next ten years. Like, it definitely seems like we need to pause AI. That would be the ideal policy, and now it’s just a matter of bringing people along, and if anybody asks my opinion, I will tell them as long as they’re ready to hear it.”
There’s much to really appreciate about what you said in this conversation. Tell us anything you’d like in terms of a closing statement, why people should vote for you, and any other call to action you wanna give people.
Zoltan 01:16:30
Sure. Well, first, thanks for having me on your show. It’s been awesome to talk to you. And listen, just for your viewers, just go check out my website, zoltanistvan2026.com, and look at my ideas.
And also, I’ve done a ton of interviews. I’ve been writing forever. I have a book of three hundred different opinion essays, some for The New York Times, some for Wired, some for TechCrunch, whatnot. So you can look at a lot of my ideas. Over the years, my ideas haven’t really changed much.
It just happens to be that the future’s come a lot closer now, and so the crazy ideas that were once talked about ten years ago are now coming to fruition. The idea of superintelligence, something I’ve written about in ten different essays, is actually here, and so we have to start looking at it.
But go to my website, discover it, look at the ideas, and email me. I make a point of every night answering every single person that emails me. It’s not always that long, but I do try to answer questions, and I make it a point of shaking every person’s hand, as well as responding to any message I ever get.
Liron 01:17:28
Nice. One more thing people should consider is that if New York State has NYC Mayor Zohran Mamdani, that guy’s probably gonna be running for New York governor in short order. So if the East Coast is gonna have a Zohran, doesn’t the West Coast deserve to balance it out with a Zoltan?
Zoltan 01:17:45
Yes, yes, that sounds great. In fact, people have commented on the Z names making their appearance here. But yeah, it would be funny, especially as Zohran’s ideas are a bit further left than mine. I’m much more of a centrist than him.
But that said, everyone’s looking for change right now, and I do believe that no matter what happens, it’s good to get new people in government. I’d love to see that people are upset with the system and ready for something different, because if we’re gonna deal with the challenges that come forward in the age of AI, it’s gonna take a lot of younger people rising up, making their voice heard, and new types of things.
The last thing I think we need in the world right now is old, established politicians that have been here for thirty, forty years. We need new people in the system to get through whatever’s coming.
Liron 01:18:33
Great. All right. Zoltan Istvan, best of luck becoming governor. Definitely book another appearance on the show once you get into that governor’s mansion. We’ll take the latest update at that point, and thanks so much for coming on Doom Debates.
Zoltan 01:18:46
Thanks so much for having me.
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏