It seems likely that, even if the initial AI were targeting the right thing, its values would drift into something unrecognizable while fooming. The first AI capable of begetting better AIs probably won't have literally god-like omnipotence, so it might make subtle errors in the next generation that are acceptable by themselves but compound over the course of many self-improvement cycles. It's like those videos where people keep asking ChatGPT to output the same image fed into it: it's close enough at first, but after 100 iterations it turns into something completely different. Maybe at some point during foom, one of these AIs becomes smart enough to recognize this problem and stop fooming. But of course, if we have an ecosystem of many AIs, the ones that win out will be the ones that keep fooming for just a bit longer. So we'll probably still see these systems evolving toward existentially threatening levels of might.
What’s interesting is that my course itself, in its totality, needs to be both the AI’s “governor” and its “sole ultimate objective”. The trouble, as this podcast points out, is that we are not taking the time (to at least try) to make sure that happens.
It’s my opinion that the “course” I’ve developed is the best way to try to save ourselves from ourselves and from artificial intelligence. I fashioned my course the way a good debater would present a debate: it makes an argument based on rationality, and it anticipates the counter to the argument and the counter to the counter. That’s why what I have to say is in the form of a course. I’m trying to turn my course into a documentary film so that it’s a little easier for people to relate to. It directly applies to the doomsday scenario outlined in this podcast.