The live Q&A was Friday, January 24th, 12:00–1:30pm Pacific Time (GMT-8)
Watch the recording: https://lironshapira.substack.com/p/2500-subscribers-live-q-and-a
Doom Debates is most effective at its dual mission (raising awareness of AI x-risk and building the social infrastructure for high-quality debate) when we have a huge audience. And as the proverb goes, a journey of 1 million subscribers begins with 2,500 subscribers!
The podcast has been attracting a growing audience of smart people who appreciate my “AI doomer + normie communicator” combo, as well as people with a low P(doom) who are glad someone is finally filling the strangely empty niche of high-quality debate.
The growing audience is creating a snowball effect: more listeners attract higher-quality guests, which in turn grows the audience further.
The AI industry had better get ready to deal with our exponential growth 💪
Submit Your Questions
Comment on this post to submit your questions!
I’ll prioritize live questions and questions submitted here by my Substack subscribers over questions from YouTube comments.
Click the notification button on the YouTube Live event so you don’t miss it. See you soon.
Hi Liron, I truly appreciate you creating Doom Debates and wanted to use this opportunity to extend my gratitude. Keep up the great work :)
My question:
Assuming we don't get lucky and the doom train doesn't get derailed by a low-probability outcome at one of its stations (e.g., there being some magic sauce to truly advanced general intelligence that can't be replicated on a computer, or AGI randomly deciding that it has humanity's best interests at heart), I feel like the best chance we have is some kind of disaster, clearly caused by the misuse of AI, that generates enough public and governmental support to pause AGI development before actual AGI is achieved.
Do you agree that this is the most likely road to avoiding doom, and how do you see the chances of a “minor” AI disaster happening vs. the chances of getting to AGI before that?
All the best to you and your family!
Cheers!
How does your P(doom) depend on the forecast horizon? What would your P(doom) be through 2030, 2050, 2100, and from a longtermist perspective, say, the year 4000?