Robin Hanson vs. Liron Shapira: Is Near-Term Extinction From AGI Plausible?

Picking up where the 2008 Hanson-Yudkowsky Foom Debate left off
Robin Hanson is a legend in the rationality community and one of my biggest intellectual influences.

In 2008, he famously debated Eliezer Yudkowsky about AI doom in a sequence of dueling blog posts known as the great Hanson-Yudkowsky Foom Debate. This debate picks up where Hanson-Yudkowsky left off, revisiting the key arguments in light of recent AI advances.

My position is similar to Eliezer's: P(doom) is on the order of 50%. Robin's position is shockingly different: P(doom) is below 1%.

00:00 Announcements

03:18 Debate Begins

05:41 Discussing AI Timelines and Predictions

19:54 Economic Growth and AI Impact

31:40 Outside Views vs. Inside Views on AI

46:22 Predicting Future Economic Growth

51:10 Historical Doubling Times and Future Projections

54:11 Human Brain Size and Economic Metrics

57:20 The Next Era of Innovation

01:07:41 AI and Future Predictions

01:14:24 The Vulnerable World Hypothesis

01:16:27 AI Foom

01:28:15 Genetics and Human Brain Evolution

01:29:24 The Role of Culture in Human Intelligence

01:31:36 Brain Size and Intelligence Debate

01:33:44 AI and Goal-Completeness

01:35:10 AI Optimization and Economic Impact

01:41:50 Feasibility of AI Alignment

01:55:21 AI Liability and Regulation

02:05:26 Final Thoughts and Wrap-Up


Robin's links:

Twitter: x.com/RobinHanson

Home Page: hanson.gmu.edu

Robin’s top related essays:


PauseAI links:

Home page: PauseAI.info

Discord: discord.gg/2XXWXvErfA


Check out https://youtube.com/@ForHumanityPodcast, the other podcast raising the alarm about AI extinction!

