
David Shapiro Part II: Unaligned Superintelligence Is Totally Fine?

Rebutting his shockingly ad-hoc arguments

Today I’m reacting to David Shapiro’s response to my previous episode, and also to David’s latest episode with poker champion & effective altruist Igor Kurganov.

I challenge David's optimistic claim that superintelligent AI will inherently align with human values, touching on factors like instrumental convergence and resource competition. David and I continue to clash over whether we should pause AI development to mitigate potential catastrophic risks, and I respond to his critiques of AI safety advocates.

00:00 Introduction

01:08 David's Response and Engagement

03:02 The Corrigibility Problem

05:38 Nirvana Fallacy

10:57 Prophecy and Faith-Based Assertions

22:47 AI Coexistence with Humanity

35:17 Does Curiosity Make AI Value Humans?

38:56 Instrumental Convergence and AI's Goals

46:14 The Fermi Paradox and AI's Expansion

51:51 The Future of Human and AI Coexistence

01:04:56 Concluding Thoughts

Join the conversation on DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for listening.

Doom Debates
Urgent disagreements that must be resolved before the world ends, hosted by Liron Shapira.