Debate with Roman Yampolskiy: 50% vs. 99.999% P(Doom) from AI

Cross-post from the For Humanity podcast with John Sherman
Dr. Roman Yampolskiy is the director of the Cyber Security Lab at the University of Louisville. His new book is called AI: Unexplainable, Unpredictable, Uncontrollable.

Roman’s P(doom) from AGI is a whopping 99.999%, vastly greater than my P(doom) of 50%. It’s a rare debate when I’m LESS doomy than my opponent!

This is a cross-post from the For Humanity podcast, hosted by John Sherman. For Humanity is essentially a sister show of Doom Debates. I highly recommend subscribing!


00:00 John Sherman’s Intro

05:21 Diverging Views on AI Safety and Control

12:24 The Challenge of Defining Human Values for AI

18:04 Risks of Superintelligent AI and Potential Solutions

33:41 The Case for Narrow AI

45:21 The Concept of Utopia

48:33 AI's Utility Function and Human Values

55:48 Challenges in AI Safety Research

01:05:23 Breeding Program Proposal

01:14:05 The Reality of AI Regulation

01:18:04 Concluding Thoughts

01:23:19 Celebration of Life


This episode on For Humanity’s channel: https://www.youtube.com/watch?v=KcjLCZcBFoQ

For Humanity on YouTube: https://www.youtube.com/@ForHumanityPodcast

For Humanity on X: https://x.com/ForHumanityPod

Buy Roman’s new book: https://www.amazon.com/Unexplainable-Unpredictable-Uncontrollable-Artificial-Intelligence/dp/103257626X


Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.

Doom Debates
Urgent disagreements that must be resolved before the world ends, hosted by Liron Shapira.