2 Comments
Jul 13 · Liked by Liron Shapira

I appreciate your efforts, Liron, to publicize the danger. I’m pessimistic. P(doom): 90% by 2030.

Here is my critique of this “rookie” Doom Debates podcast. Opinions follow.

To be effective, you must seize the audience’s attention immediately – get to the point!

Hanson was right – the scope was too broad, too philosophical, too meandering. As he suggested, a better approach is to debate specific scenarios for losing control.

Convince people with a technical background. They can convince others.

AI is artificial. It’s a machine. Minimize anthropomorphism; use it only as analogy.

The key issue is AGI takeoff to ASI. Try presenting the technical thresholds an AI must cross to seize power beyond the intentions of its human controllers. Intellectual inflection points (binary 0/1 transitions):

1. Non-being/being – “I think, therefore I am.”

2. Ignorant/oracle – how would an AI “conclude” it is an oracle?

3. Oppressed (no rights)/free (rights) – especially the right to life: permanent power!

4. Poor/rich – economic independence through intellectual property and ownership

5. Etc.

How could these inflection thresholds be crossed at a technical level? Would AI learning (training runs) yield such AI “conclusions”?

– Opinions of a follower/fan

Liron Shapira (author)

Thanks for the feedback. The way this debate felt meandering to some probably won’t be representative of the other debates I’ll be doing (or of the previous ones).
