4 Comments
Kevin Flynn

I’ve come up with what I refer to as a “course” that solves the AI alignment problem from a purely philosophical standpoint. In other words, if an AI were trained on my course and strictly used it as both its governor and its sole ultimate objective, it would most likely be aligned with the long-term best interests of the human race as a species. It also just so happens that if humans themselves were able to follow the dictates of my course in the same way, it would solve many of our biggest existential problems.

Cyberneticist

That’s great, Kevin. I have the same opinion about my own version of a similar solution, but I don’t call it “the course”; I just think people who are mean to me are misaligned with objective human values, and that they’re bad people who need to be re-educated.

Dominic Ignatius

I think "AI" has the potential to be very, VERY bad for humanity: humongous socio-economic disruption that could leave things MUCH worse on net. But "literal" TOTAL human extinction from A"G"I/superintelligence? I'm still betting against that in my lifetime.

Cyberneticist

I don’t know about entropy or whatnot, but I’m in favor of human extinction regardless of how it happens. I think it will be pollution and global warming, but if the AIs get there first, then more power to them.
