
There's a version of the story of the rooster and the eagle that begins with a breathless and panegyric-adjacent description of the rooster, from the perspective of the other barnyard animals (or maybe just the hens). I've forgotten who wrote it, and I can't find it, but I recall how it describes his flight: "up, up, up, until it seemed he would pierce the very vault of heaven, coming at last to rest upon the very pinnacle of the barn!" Certainly it's a good metaphor, and if Aesop could have been apprised of AGI, he'd probably have thought it aptly used here.

As for the rest (and the podcast), I am a layman in these matters but flatter myself that I understand Yudkowsky's arguments enough to agree with all of them. Where all of you lose me, though, is with the assumption that we are in a real sense drawing near to creating intelligence. Humans really aren't that bright, and already our brains are absurdly complex in ways we can't model and don't understand. It's obvious that we're skulking along at the bottom of the concept-space we call 'intelligence', but I don't see any evidence that creating something truly smarter than us lies within our grasp -- especially not within the timelines we have while running on all our present systems*.

* I mean this in whatever way one might choose: in the Hansonian sense that we're not producing enough smart people right now to keep up with needed innovations, or because of hard resource limits (topsoil, crude oil, easily-extracted metals), social collapse, 'the woke mind virus', good-times-create-weak-men, whatever. I'm in general a 'doomer' in the sense that I don't think we'll ever get to AGI**, but if we could, it would kill us all absolutely for sure, (p)99.5%.

** Please tell your kids I said so. If in 2080 they haven't been killed in bread riots or race riots or plagues or civil war, maybe they'll remember that once upon a time smart, serious-minded men thought we might be killed by too much progress.
