Solid summary. I'm not sure you quite got what Robin was saying about culture, and honestly he didn't explain it as well there as he has elsewhere in writing (in pieces not about AI, which you may not have read during prep). He said that culture is the power to watch what others are doing and figure out what works. That's true in a general sense, but not in any individual sense (which is what his grammatical construction implied). Culture in the Henrich sense he's using is a rapid evolution in which we copy successful people without understanding what they do (and without THEM understanding what they do). That's why he says apes with 'culture' in this sense would be very different. "Monkey see, monkey do" is a saying because monkeys have zero shame and will ape you without concern that they'll look stupid. Actual human children are something like an order of magnitude (OOM) better at aping than actual apes are, including chimps. We're really good at this. We're not very good at understanding what we copy.
Especially, and this is extremely important to this topic, when it comes to manipulating things in the real world. Nobody invented the arrowhead; it evolved. So what, though? Now we can just invent things, and we do all the time. Well, yes. Sorta. But we still have to A/B test in the real world. Yes, things like general relativity exist. But they're a very small portion of what makes humanity great. Mostly we're doing things like iterating jet engines and scrapping (or rather, letting self-scrap) the ones that don't work (even if there's a test pilot attached). Is it possible that an AI will soon be better at iterating a jet engine design, with (possible!) improvements that we can then build and test? Sure. Can we build such an AI system at all? Yeah, probably. Is it theoretically possible to create a machine intelligence so vastly smarter than we are that it fully understands everything about jet engines (or nanite replicators) and can design a perfect one on the first try? Yes, of course. Is such a range of intelligence within the cone of possibilities ahead of us right now, with all the problems facing us as a species? Very possibly (perhaps even very likely) not. That's where Robin is coming from with the "some time in the next century maybe [paraphrased]" talk: he thinks it more likely that we will stop producing enough really smart, innovative people to keep pushing all the various aspects of technological progress forward along the contact boundary. That's why he's far more worried about fertility rates than about AI.
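To make that selection-without-understanding point concrete, here's a minimal toy sketch (my own illustration, not anything Robin said): candidate designs are tried against a world the copiers never model, failures are scrapped, and survivors are blindly copied with small variation. Performance still climbs even though nothing in the loop "understands" the quality function. All function names and numbers here are made up for illustration.

```python
import random

def performance(design):
    # Stand-in for "the real world": a hidden quality function the
    # copiers never see or understand; they only observe outcomes.
    return -sum((x - 0.7) ** 2 for x in design)

def evolve(pop_size=50, dims=5, generations=100, mutation=0.05):
    # Start with random designs.
    population = [[random.random() for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(generations):
        # Test in the world: keep the best-performing half, scrap the rest.
        population.sort(key=performance, reverse=True)
        survivors = population[: pop_size // 2]
        # Refill by copying survivors with blind mutation --
        # no causal model of WHY they worked is involved.
        population = survivors + [
            [x + random.gauss(0, mutation) for x in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=performance)

best = evolve()
print(f"best performance: {performance(best):.4f}")
```

Run it and the best performance approaches zero (the optimum) purely through copying winners and scrapping losers, which is the dynamic being claimed for both Henrich-style culture and jet-engine iteration.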
Overall his p(doom) is probably higher than yours, though, if you combine the metrics.
Hmm ok, so humans are visibly better at "monkey see, monkey do" than monkeys, but can you really describe humans as better copiers without stretching that concept until it sneaks in all of general intelligence / goal-completeness?
When Einstein invents relativity, is that a kind of advanced copying skill? What is actually happening in Einstein's brain there? Perhaps explaining that requires more than 1/4 of Einstein's brain?
Well, absolutely yes: clearly we've got a lot of capability. But the point is that most problems aren't like relativity. Most problems are like checking all the bolts on an aircraft or fishing a broken bit out of an underwater oil well. We'll continue trying to make computers more powerful and AI software more intelligent, and Hanson is very clear that he believes our machine descendants will someday, in theory, be far more cognitively capable than we are.
To deny this would require claiming (as some appear to) that the human brain just happens to be at the upper limit of intelligence, or claiming for sure that we can't build something more intelligent in some sense than a single human being (in 'some sense' we obviously already have). But are we, as a civilization, on track to build something more intelligent than humanity inclusive of our total culture within the next few decades? It sure doesn't look like it to Robin.
“Diffusing innovations” is a big factor in what makes humans, or any agents, powerful, sure. I just don't get why he's not focused on the cognitive work that lets one create innovations in the first place. When you're sufficiently good at creating innovations, it doesn't even occur to you to factor out the part where you diffuse innovations to other minds, or to other data centers of your own distributed mind.