Today I’m reacting to David Shapiro’s latest YouTube video: “Pausing AI is a spectacularly bad idea―Here's why”.
In my opinion, every plan that doesn’t involve pausing frontier AGI capabilities development now is reckless, or at least every plan that doesn’t prepare to pause AGI once we see a “warning shot” that enough people agree is terrifying.
We’ll go through David’s argument point by point, to see whether he makes any good case for why pausing AI might actually be a bad idea.
00:00 Introduction
01:16 The Pause AI Movement
03:03 Eliezer Yudkowsky’s Epistemology
12:56 Rationalist Arguments and Evidence
24:03 Public Awareness and Legislative Efforts
28:38 The Burden of Proof in AI Safety
31:02 Arguments Against the AI Pause Movement
34:20 Nuclear Proliferation vs. AI
34:48 Game Theory and AI
36:31 Opportunity Costs of an AI Pause
44:18 Axiomatic Alignment
47:34 Regulatory Capture and Corporate Interests
56:24 The Growing Mainstream Concern for AI Safety
Follow David:
Follow Doom Debates: