Very interesting debate. I listened to it as a podcast. My one criticism is that it went on too long, particularly the middle third (roughly) where the host kept trying to corner Gary into admitting that a superintelligent AI could easily wipe out the human race. Gary wasn't going to admit it, and Liron should have moved on. I think this podcast could have been edited down to about half its length. Still, it was entertaining and informative, and I'll probably listen to other episodes.
Thanks for the feedback. I generally spend extra time drilling down into load-bearing sub-beliefs of the most important crux of disagreement between myself and the guest. IMO Gary’s other views don’t matter as much as why he’s 99% sure a superintelligent AI wouldn’t wipe out the human race.
I think my p-doom has decreased in a way. I still firmly believe unaligned A.I. will cause extinction-level chaos, but 100% success now seems less likely to me than 80% to 90%. Some people would survive an attack, whether by fluke or in secret anti-nuke-type bunkers that aren't run by computers. Major catastrophic death and destruction seems certain, but total extinction seems a bit less likely when I think about it now. That doesn't change the urgency of doing something to slow down A.I.; it's still an extremely dangerous alien being created, and it's still a coin toss as to when an attack will happen. My point isn't that important, I just think some would survive (in an extremely messed-up world with a slim chance of rebuilding within a few generations).