Andrew Critch vs. Liron Shapira: Will AI Extinction Be Fast Or Slow?

We're doomed, but what *kind* of doom will it be?

Dr. Andrew Critch is a co-founder of the Center for Applied Rationality, a former Research Fellow at the Machine Intelligence Research Institute (MIRI), a Research Scientist at the UC Berkeley Center for Human-Compatible AI, and a co-founder of a new startup, Healthcare Agents.

Dr. Critch’s P(Doom) is a whopping 85%! But his most likely doom scenario isn’t what you might expect. He thinks humanity will successfully avoid a fast takeover by self-improving superintelligence, only to go extinct anyway through the slower process of “industrial dehumanization”.


00:00 Introduction

01:43 Dr. Critch’s Perspective on LessWrong Sequences

06:45 Bayesian Epistemology

15:34 Dr. Critch's Time at MIRI

18:33 What’s Your P(Doom)™

26:35 Doom Scenarios

40:38 AI Timelines

43:09 Defining “AGI”

48:27 Superintelligence

53:04 The Speed Limit of Intelligence

01:12:03 The Obedience Problem in AI

01:21:22 Artificial Superintelligence and Human Extinction

01:24:36 Global AI Race and Geopolitics

01:34:28 Future Scenarios and Human Relevance

01:48:13 Extinction by Industrial Dehumanization

01:58:50 Automated Factories and Human Control

02:02:35 Global Coordination Challenges

02:27:00 Healthcare Agents

02:35:30 Final Thoughts


Show Notes

Dr. Critch’s LessWrong post explaining his P(Doom) and most likely doom scenarios: https://www.lesswrong.com/posts/Kobbt3nQgv3yn29pr/my-motivation-and-theory-of-change-for-working-in-ai

Dr. Critch’s Website: https://acritch.com/

Dr. Critch’s Twitter: https://twitter.com/AndrewCritchPhD


Doom Debates’ mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
