This Yudkowskian Has A 99.999% P(Doom)

What's it like to grow up with a high P(Doom)?

In this episode of Doom Debates, I discuss AI existential risks with my pseudonymous guest Nethys.

Nethys shares his journey into AI risk awareness, influenced heavily by LessWrong and Eliezer Yudkowsky. We explore the vulnerability of society to emerging technologies, the challenges of AI alignment, and why he believes our current approaches are insufficient, ultimately resulting in his 99.999% P(Doom).

00:00 Nethys Introduction

04:47 The Vulnerable World Hypothesis

10:01 What’s Your P(Doom)™

14:04 Nethys’s Banger YouTube Comment

26:53 Living with High P(Doom)

31:06 Losing Access to Distant Stars

36:51 Defining AGI

39:09 The Convergence of AI Models

47:32 The Role of “Unlicensed” Thinkers

52:07 The PauseAI Movement

58:20 Lethal Intelligence Video Clip


Show Notes

Eliezer Yudkowsky’s post on “Death with Dignity”: https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy

PauseAI Website: https://pauseai.info

PauseAI Discord: https://discord.gg/2XXWXvErfA


Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.


Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.

Discussion about this podcast