In today’s episode, instead of reacting to a long-form presentation of someone’s position, I’m reporting on the various AI x-risk-related tiffs happening in my part of the world. And by “my part of the world” I mean my Twitter feed.
00:00 Introduction
01:55 Follow-up to my MLST reaction episode
03:48 Double Crux
04:53 LLMs: Finite State Automata or Turing Machines?
16:11 Amjad Masad vs. Helen Toner and Eliezer Yudkowsky
17:29 How Will AGI Literally Kill Us?
33:53 Roon
37:38 Prof. Lee Cronin
40:48 Defining AI Creativity
43:44 Naval Ravikant
46:57 Pascal's Scam
54:10 Martin Casado and SB 1047
01:12:26 Final Thoughts
Links referenced in the episode:
Eliezer Yudkowsky’s interview on the Logan Bartlett Show. Highly recommended: https://www.youtube.com/watch?v=_8q9bjNHeSo
Double Crux, the core rationalist technique I use when I’m “debating”: https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding
The problem with arguing “by definition”, a classic LessWrong post: https://www.lesswrong.com/posts/cFzC996D7Jjds3vS9/arguing-by-definition
Twitter people referenced:
Amjad Masad: https://x.com/amasad
Eliezer Yudkowsky: https://x.com/esyudkowsky
Helen Toner: https://x.com/hlntnr
Roon: https://x.com/tszzl
Lee Cronin: https://x.com/leecronin
Naval Ravikant: https://x.com/naval
Geoffrey Miller: https://x.com/primalpoly
Martin Casado: https://x.com/martin_casado
Yoshua Bengio: https://x.com/yoshua_bengio
Your boy: https://x.com/liron
Doom Debates’ mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.