Discussion about this post

Nini

Hi Liron, I truly appreciate you creating Doom Debates and want to use this opportunity to extend my gratitude to you. Keep up the great work :)

My question:

Assuming we don't get lucky and the doom train isn't derailed by a low-probability outcome at one of its stations (e.g. truly advanced general intelligence turning out to require some magic sauce that can't be replicated on a computer, or AGI randomly deciding it has humanity's best interests at heart), I feel our best chance is some kind of disaster, clearly caused by the misuse of AI, that generates enough public and governmental support to pause AGI development before actual AGI is achieved.

Do you agree that this is the most likely road to avoiding doom? And how do you rate the chances of a "minor" AI disaster happening first versus the chances of reaching AGI before any such disaster?

All the best to you and your family!

Cheers!

Jakub Growiec

How does your p(doom) depend on the forecast horizon? What would your p(doom) be up to the year 2030, 2050, or 2100, and on a longtermist horizon like, say, the year 4000?

