Debate with a former OpenAI Research Team Lead — Prof. Kenneth Stanley

Superintelligence *won't* be guided by goals?!

Prof. Kenneth Stanley is a former Research Science Manager at OpenAI, where he led the Open-Endedness Team from 2020 to 2022. Before that, he was a Professor of Computer Science at the University of Central Florida and the head of Core AI Research at Uber. He coauthored Why Greatness Cannot Be Planned: The Myth of the Objective, which argues that as soon as you create an objective, you ruin your ability to reach it.

In this episode, I debate Ken’s claim that superintelligent AI *won’t* be guided by goals, and then we compare our views on AI doom.


00:00 Introduction

00:45 Ken’s Role at OpenAI

01:53 “Open-Endedness” and “Divergence”

09:32 Open-Endedness of Evolution

21:16 Human Innovation and Tech Trees

36:03 Objectives vs. Open-Endedness

47:14 The Concept of Optimization Processes

57:22 What’s Your P(Doom)™

01:11:01 Interestingness and the Future

01:20:14 Human Intelligence vs. Superintelligence

01:37:51 Instrumental Convergence

01:55:58 Mitigating AI Risks

02:04:02 The Role of Institutional Checks

02:13:05 Exploring AI’s Curiosity and Human Survival

02:20:51 Recapping the Debate

02:29:45 Final Thoughts


Show Notes

Ken’s home page: https://www.kenstanley.net/

Ken’s Wikipedia: https://en.wikipedia.org/wiki/Kenneth_Stanley

Ken’s Twitter: https://x.com/kenneth0stanley

Ken’s PicBreeder paper: https://wiki.santafe.edu/images/1/1e/Secretan_ecj11.pdf

Ken’s book, Why Greatness Cannot Be Planned: The Myth of the Objective: https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237

The Rocket Alignment Problem by Eliezer Yudkowsky: https://intelligence.org/2018/10/03/rocket-alignment/


Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk – https://www.youtube.com/watch?v=9CUFbqh16Fg

PauseAI, the volunteer organization I’m part of — https://pauseai.info/

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!


Doom Debates’ mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

Doom Debates
Urgent disagreements that must be resolved before the world ends, hosted by Liron Shapira.