P(Doom) Estimates Shouldn't Inform Policy??

Liron Reacts to Sayash Kapoor

Princeton computer science Ph.D. candidate Sayash Kapoor and his professor Arvind Narayanan co-authored a blog post last week called "AI Existential Risk Probabilities Are Too Unreliable To Inform Policy".

While some non-doomers embraced its arguments, I see the post as contributing nothing to the discourse beyond demonstrating a popular failure mode: a misunderstanding of the basics of Bayesian epistemology.

I break down Sayash's recent episode of Machine Learning Street Talk point-by-point to analyze his claims from the perspective of the one true epistemology: Bayesian epistemology.
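
(For readers new to the framework: Bayesian epistemology treats a probability as a degree of belief, updated by evidence via Bayes' rule. A purely illustrative example with made-up numbers: start from a prior P(doom) = 0.10. If some piece of evidence is twice as likely in doom worlds as in non-doom worlds, say P(E | doom) = 0.8 vs. P(E | not doom) = 0.4, the posterior becomes (0.8 × 0.10) / (0.8 × 0.10 + 0.4 × 0.90) ≈ 0.18. The dispute in this episode is over whether estimates produced this way are reliable enough to inform policy.)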


00:00 Introduction

03:40 Bayesian Reasoning

04:33 Inductive vs. Deductive Probability

05:49 Frequentism vs. Bayesianism

16:14 Asteroid Impact and AI Risk Comparison

28:06 Quantification Bias

31:50 The Extinction Prediction Tournament

36:14 Pascal's Wager and AI Risk

40:50 Scaling Laws and AI Progress

45:12 Final Thoughts


My source material is Sayash's episode of Machine Learning Street Talk.

I also recommend reading Scott Alexander's related post on Astral Codex Ten: In Continued Defense Of Non-Frequentist Probabilities.

The blog post Sayash was being interviewed about: "AI Existential Risk Probabilities Are Too Unreliable To Inform Policy".

Follow Sayash: https://x.com/sayashk

Doom Debates
Urgent disagreements that must be resolved before the world ends, hosted by Liron Shapira.