
Richard Hanania vs. Liron Shapira — AI Doom Debate

Let's uncover where he gets off the AI "doom train"

Richard Hanania is the President of the Center for the Study of Partisanship and Ideology. His work has been praised by Vice President JD Vance, Tyler Cowen, and Bryan Caplan among others.

In his influential newsletter, he's written about why he finds AI doom arguments unconvincing. He was gracious enough to debate me on the topic. Let's see if one of us can change the other's P(Doom)!


0:00 Intro

1:53 Richard's politics

2:24 The state of political discourse

3:30 What's your P(Doom)?™

6:38 How to stop the doom train

8:27 Statement on AI risk

9:31 Intellectual influences

11:15 Base rates for AI doom

15:43 Intelligence as optimization power

31:26 AI capabilities progress

53:46 Why isn't AI yet a top blogger?

58:02 Diving into Richard's Doom Train

58:47 Diminishing Returns on Intelligence

1:06:36 Alignment will be relatively trivial

1:15:14 Power-seeking must be programmed

1:21:27 AI will simply be benevolent

1:27:17 Superintelligent AI will negotiate with humans

1:33:00 Super AIs will check and balance each other

1:36:54 We're mistaken about the nature of intelligence

1:41:46 Summarizing Richard's AI doom position

1:43:22 Jobpocalypse and gradual disempowerment

1:49:46 Ad hominem attacks in AI discourse

Show Notes

Subscribe to Richard Hanania's Newsletter:

Richard's blogpost laying out where he gets off the AI "doom train":

Richard Hanania's Newsletter
AI Doomerism as Science Fiction
Intellectually, I’ve always found the arguments of AI doomers somewhat compelling. Yet instinctually I’ve always thought they were wrong. This could be motivated reasoning, as I find the thought of having to quit talking about what I’m interested in and focusing on this narrow technical issue extremely unappealing. But for years I have just had a naggin…

Richard's interview with Steven Pinker:

Richard Hanania's Newsletter
Pinker on Alignment and Intelligence as a "Magical Potion"
My recent article on diminishing returns to intelligence and what it means for AI alignment, along with my responses to some comments, sparked an email discussion with Steven Pinker. It helped shape my thinking on this topic, so I thought it would be a good idea to share the exchange…

Richard's interview with Robin Hanson:

Richard Hanania's Newsletter
Robin Hanson Says You're Going to Live
Like many people, I was taken aback by Eliezer Yudkowsky’s recent appearance on the Bankless podcast. I find Yudkowsky’s arrogant doomerism to be quite charming, and think it explains why he’s been so successful in spreading his ideas. Whenever I listen to him, I sense a feeling of pure exasperation, in the sense of yes, we’re all going to die, it’s goi…

My Doom Debate with Robin Hanson:

My reaction to Steven Pinker's AI doom position, and why his arguments are shallow:

"The Betterness Explosion" by Robin Hanson:

Overcoming Bias
The Betterness Explosion
We all want the things around us to be better. Yet today billions struggle year after year to make just a few things a bit better. But what if our meagre success was because we just didn’t have the r…

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!


Doom Debates' mission is to raise mainstream awareness of imminent extinction risk from AGI and to build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
