9 Comments

Hi Liron, I truly appreciate you creating doom debates and wanted to use the opportunity to extend my gratitude towards you. Keep up the great work :)

My question:

Assuming we don't get lucky and the doom train isn't derailed by a low-probability outcome at one of its stations (e.g., truly advanced general intelligence turning out to require some magic sauce that can't be replicated on a computer, or AGI spontaneously deciding it has humanity's best interests at heart), I feel the best chance we have is some kind of disaster, clearly caused by the misuse of AI, that generates enough public and governmental support to pause AGI development before actual AGI is achieved.

Do you agree that this is the most likely road to avoiding doom? And how do you weigh the chances of a "minor" AI disaster happening against the chances of reaching AGI before one occurs?

All the best to you and your family!

Cheers!
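One rough way to frame the disaster-before-AGI race described in the question above is as two competing risks. This is purely an illustrative sketch: the annual rates `a` and `b` below are made-up assumptions, not anyone's stated estimates.

```python
import math

# Competing-risks sketch (illustrative assumptions only): if a galvanizing
# misuse disaster arrives at annual rate a and AGI at annual rate b (both
# modeled as independent exponential arrivals), the chance the disaster
# comes first is a / (a + b), and the chance that either happens within
# t years is 1 - exp(-(a + b) * t).

a = 0.10   # assumed annual rate of a "minor" but undeniable AI disaster
b = 0.15   # assumed annual rate of reaching AGI

p_disaster_first = a / (a + b)
p_either_within_10y = 1 - math.exp(-(a + b) * 10)

print(f"P(disaster before AGI)  = {p_disaster_first:.2f}")
print(f"P(either within 10 yrs) = {p_either_within_10y:.2f}")
```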

How does your p(doom) depend on the forecast horizon? What would your p(doom) be through 2030, 2050, 2100, and, from a longtermist perspective, a year like 4000?
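One simple way to make p(doom) horizon-dependent is a constant-hazard model. This is a sketch under my own assumptions; the 50%-by-2040 anchor and the 2025 start year are placeholders, not the host's numbers.

```python
import math

# Constant-hazard sketch: assume an annual hazard rate h of irreversible
# catastrophe, so p(doom by year T) = 1 - exp(-h * (T - 2025)).
# h is backed out from an assumed anchor of p(doom) = 50% by 2040.

h = -math.log(1 - 0.50) / (2040 - 2025)

for year in (2030, 2050, 2100, 4000):
    p_doom_by_year = 1 - math.exp(-h * (year - 2025))
    print(f"p(doom by {year}): {p_doom_by_year:.3f}")
```

Under any fixed positive hazard, p(doom) approaches 1 as the horizon grows, which is why the longtermist horizon in the question matters so much.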

Could you please break down your 1 - p(doom) = 50% into subjective probabilities for (i) no AGI developed within the discussed time frame, (ii) AGI developed but not taking over, and (iii) AGI taking over but remaining perfectly aligned? How sustainable are these scenarios over a longer time horizon?
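For concreteness, the requested decomposition just needs three mutually exclusive scenario probabilities that sum back to the non-doom mass. The numbers below are placeholders, not the host's actual estimates.

```python
# Decomposition sketch (placeholder numbers): split 1 - p(doom) = 50%
# across the three mutually exclusive non-doom scenarios from the question.

p_doom = 0.50
scenarios = {
    "(i) no AGI within the discussed time frame": 0.15,
    "(ii) AGI developed but not taking over": 0.25,
    "(iii) AGI takes over but remains perfectly aligned": 0.10,
}

assert abs(sum(scenarios.values()) - (1 - p_doom)) < 1e-9
for name, p in scenarios.items():
    print(f"{name}: {p:.0%}")
```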

Imagine it's 2050, everything actually turned out fine, and there was no doom. What is the most likely reason, in your view, that things went well?

You've said before that you have a p(doom) of 50%. Why only 50%? What possible ways out of doom do you foresee?

What's your take on "Transformers Squared" as a stepping stone to AGI?
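For context, my reading of the Transformer² ("Transformers Squared") idea from Sakana AI is that it fine-tunes only a small vector that rescales the singular values of frozen weight matrices, then mixes such vectors at inference time. The sketch below is an illustrative toy of that rescaling step, not the paper's implementation:

```python
import numpy as np

# Toy sketch of singular-value fine-tuning (my reading of the Transformer^2
# idea, with made-up sizes): keep the pretrained weight matrix frozen and
# learn only a per-singular-value scale vector z.

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))        # stand-in for a frozen pretrained weight
U, s, Vt = np.linalg.svd(W, full_matrices=False)

z = np.ones_like(s)                  # the only "trainable" parameters here
z[:8] = 1.2                          # pretend training boosted a few directions

W_adapted = U @ np.diag(s * z) @ Vt  # adapted weights built from frozen U, s, Vt
print(np.linalg.norm(W_adapted - W)) # a small, targeted modification of W
```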

How do you approach your children's education in light of your well-balanced predictions of doom?

What do you think will happen first: a workforce shift so dramatic that it becomes undeniable most people need to find entirely new roles, or the emergence of nationalist superintelligence as a dominant global force? Which scenario poses a greater risk to humanity's well-being, and why?

Question:

Most of my friends get off the doom train at the claim that broadly superhuman AI could be here soon. What is a convincing way to communicate likely short timelines in a verbal discussion?
