Prof. Geoffrey Miller is an evolutionary psychologist, bestselling author, associate professor of psychology at the University of New Mexico, and one of the world's leading experts on signaling theory and human sexual selection. His book "Mate" was hugely influential to me during my dating years, so I was thrilled to finally get him on the show.
In this episode, Geoffrey drops a bombshell 50% P(Doom) assessment, coming from someone who wrote foundational papers on neural networks and genetic algorithms back in the '90s before pivoting to study human mating behavior for 30 years.
What makes Geoffrey's doom perspective unique is that he thinks both inner and outer alignment might be unsolvable in principle, ever. He's also surprisingly bearish on AI's current value, arguing it hasn't been net positive for society yet despite the $14 billion in OpenAI revenue.
We cover his fascinating intellectual journey from early AI researcher to pickup artist advisor to AI doomer, why Asperger's people make better psychology researchers, the polyamory scene in rationalist circles, and his surprisingly optimistic take on cooperating with China. Geoffrey brings a deeply humanist perspective. He genuinely loves human civilization as it is and sees no reason to rush toward our potential replacement.
00:00:00 - Introducing Prof. Geoffrey Miller
00:01:46 - Geoffrey’s intellectual career arc: AI → evolutionary psychology → back to AI
00:03:43 - Signaling theory as the main theme driving his research
00:05:04 - Why evolutionary psychology is legitimate science, not just speculation
00:08:18 - Being a professor in the AI age and making courses "AI-proof"
00:09:12 - Getting tenure in 2008 and using academic freedom responsibly
00:11:01 - Student cheating epidemic with AI tools, going "fully medieval"
00:13:28 - Should professors use AI for grading? (Geoffrey says no, would be unethical)
00:23:06 - Coming out as Aspie and neurodiversity in academia
00:29:15 - What is sex and its role in evolution (error correction vs. variation)
00:34:06 - Sexual selection as an evolutionary "supercharger"
00:37:25 - Dating advice, pickup artistry, and evolutionary psychology insights
00:45:04 - Polyamory: Geoffrey’s experience and the rationalist connection
00:50:42 - Why rationalists tend to be poly vs. Chesterton's fence on monogamy
00:54:07 - The "primal" lifestyle and evolutionary medicine
00:56:59 - How Iain M. Banks' Culture novels shaped Geoffrey’s AI thinking
01:05:26 - What’s Your P(Doom)™
01:08:04 - Main doom scenario: AI arms race leading to unaligned ASI
01:14:10 - Bad actors problem: antinatalists, religious extremists, eco-alarmists
01:21:13 - Inner vs. outer alignment - both may be unsolvable in principle
01:23:56 - "What's the hurry?" - Why rush when alignment might take millennia?
01:28:17 - Disagreement on whether AI has been net positive so far
01:35:13 - Why AI won't magically solve longevity or other major problems
01:37:56 - Unemployment doom and loss of human autonomy
01:40:13 - Cosmic perspective: We could be "the baddies" spreading unaligned AI
01:44:34 - "Humanity is doing incredibly well" - no need for Hail Mary AI
01:49:01 - Why ASI might be bad at solving alignment (lacks human cultural wisdom)
01:52:06 - China cooperation: "Whoever builds ASI first loses"
01:55:19 - Liron’s Outro
Show Notes
Links
Designing Neural Networks using Genetic Algorithms - His most cited paper
“The Neurodiversity Case for Free Speech” - The mentioned Quillette article by Geoffrey
Books by Geoffrey Miller
Mate: Become the Man Women Want (2015) - Co-authored with Tucker Max
The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature (2000)
Virtue Signaling: Essays on Darwinian Politics and Free Speech (2019)
Related Doom Debates Episodes
Liam Robins on College in the AGI Era - Student perspective on AI cheating
Liron Reacts to Steven Pinker on AI Risk - Critiquing Pinker's AI optimism
Steven Byrnes on Brain-Like AGI - Upcoming episode on human brain architecture
Transcript
Opening and Introduction
Geoffrey Miller: It might be the hardest problem we've ever confronted. The other existential risks are relatively modest.
Liron Shapira: What's your main line doom scenario?
Geoffrey: We unleash an ASI that is the worst thing ever in the history of the cosmos. Rushing into this is absolutely foolish and reckless and frankly evil.
Geoffrey: I'm Geoffrey Miller and you're watching Doom Debates.
Liron: Welcome to Doom Debates. My guest, Professor Geoffrey Miller, is an evolutionary psychologist, a bestselling author, and an associate professor of psychology at the University of New Mexico. Much of his academic focus has been on human intelligence. He got his PhD in cognitive psychology from Stanford.
He did his postdoc at the Center for Cognitive Science at the University of Sussex. In 2001, he published The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature. In 2015, he wrote Mate: Become the Man Women Want.
Then in 2019, he wrote Virtue Signaling: Essays on Darwinian Politics and Free Speech. He is an effective altruist focused on mitigating existential risks such as AI. And in the last few years, he's been writing about AI alignment on the Effective Altruism Forum.
So today we're going to dive into Geoffrey's core ideas and pick up key insights from his research career. We're going to see how some of his findings from studying evolutionary psychology and human intelligence are relevant to AI risk mitigation.
Professor Geoffrey Miller, welcome to Doom Debates.
Geoffrey: Great to be here, Liron. My pleasure and I have a lot of respect for what you do, and you've done some awesome interviews and podcasts that I very much enjoyed.
Liron: Likewise. Yeah, I'm a big fan of your work. Been reading your stuff for a while, especially Mate was pretty influential for me back in 2015, the book you co-wrote with Tucker Max, so we'll definitely be getting into that.
Geoffrey: Cool.
Geoffrey's Intellectual Career Arc
Liron: Let's do a high level overview of your intellectual career arc. First, you were interested in AI, and then you had kids, and you became less interested in AI and more interested in evolutionary psychology, leaning into human courtship. And now you're back into AI. Did I miss anything?
Geoffrey: That's a pretty good high level intro. Back in college at Columbia, I was always interested in psychology. I double majored in psychology and biology and I thought, I do want to understand cognitive science. So I went to Stanford to study cognitive psych, and then I had kind of a two path career.
On the one hand, we had professor David Rumelhart, who is a big neural networks researcher, and I kind of got into neural networks and genetic algorithms and machine learning. So I did some of that in grad school. Certainly my first 10 or 20 publications were machine learning focused. That's what my postdoc was about.
And then I had my first child in 1996, and by then I was already a little bit concerned actually about risks from AI. My focus was very much on autonomous agents and evolutionary robotics and things that are going to actually make decisions and act in the real world, not just respond to prompts. And that started to freak me out and I thought this is not necessarily an ethical career track to follow.
So my focus has been very much on human evolutionary psychology and intelligence and creativity and stuff like that for most of the last 30 years. But then I got into effective altruism about 10 years ago. A lot of the EAs were very focused on X-risk and AI safety issues. And that kind of re-inspired me to look at those issues.
Liron: Do you have one big idea that drives your career? When you kicked off your career, where did you see yourself going?
Geoffrey: The biggest theme, honestly, has been signaling theory, which is the idea that an awful lot of what people do is signaling our underlying traits to other people, whether it's socially or sexually or in our careers or whatever. And that's been a constant theme through The Mating Mind book, the Spent book, the Mate book, which is about how to signal better to the opposite sex if you're in the dating market.
I didn't really intend for that to be the key theme, but I just found that signaling theory was such a powerful way to look at things and such a relevant application of game theory to human behavior.
Liron: So that's you and Robin Hanson, right? You guys are the main signaling people.
Geoffrey: Yeah. And I think we've certainly inspired each other to some degree. And his book, The Elephant in the Brain, I think quoted me a fair amount. It's one of these ideas where once you see it, it's very hard to unsee it. And maybe later we can talk about how signaling theory can help illuminate AI arms races and the behavior of the AI industry insiders also.
Liron: Yes, absolutely. So the structure of the conversation is going to be, we're going to dive a little bit more into you and some of your greatest hits, your core ideas, and then we will bring everything back toward the subject of the show, which is AI doom and what we can do about it.
Geoffrey: Cool.
Being a Professor in the AGI Era
Liron: So on the subject of evolutionary psychology, your little 30-year detour here, one thing I want to tell the viewers is that evolutionary psychology is legit. Whether the field as a whole is adding value shouldn't really be controversial, right? Even going as far back as Darwin, he already realized that evolutionary psychology was going to be a thing, right?
Geoffrey: Yeah. And in fact, in The Mating Mind, I pointed out that most of Darwin's early career was very much focused on biology and the evolution of morphological traits, the body designs and adaptations of animals. But honestly, by 1870, he was more interested in a way in evolutionary psychology than in evolutionary biology.
So a lot of his books after that were about sexual selection through mate choice, right? The behavior of pollinators influencing the evolution of flowers. The selection of domesticated breeds of plants and animals by human agency. So Darwin himself was really the first evolutionary psychologist, and that was a major focus of his mid to late career.
Liron: I do occasionally see people who claim that evolutionary psychology as a field is all speculation. And I think it's more accurate to say no. There's a bunch of fundamental insights that you really can't deny. And then of course there's always this frontier of the field. There's still things that we're not sure about and we're still studying. Right?
Geoffrey: Yeah. I've published a lot of empirical work, and if it were the case that this was just speculation and just-so stories, then my PhD students could just come out with hypotheses and be done. But it's typically the case that you have an idea you think is a great idea, you go and test it, and oops, it's not quite right, so you revisit it. It's just the normal scientific dialectic between hypothesis and experiment.
Liron: Exactly. And we should really explain to the viewers, evolutionary psychology. You know, explain it like I'm five. It basically just means that the brain is an organ and organs are the way they are because of evolutionary pressure, because they helped our ancestors survive. So for example, when food tastes good or when sex feels good, that is an evolutionary response.
Those things don't have to feel and taste good, but they do because they evolved to do so in our brains.
Geoffrey: Yeah, it's just the radical idea that there is a human nature, that human nature evolved, and that human nature includes a lot of different psychological adaptations: preferences, values, emotions, mood states, perceptual abilities, cognitive abilities. And you can analyze human nature the way that you might analyze chimpanzee nature, dolphin nature, elephant nature, any other nature in any other animal.
And that you can draw upon lots of different fields, anthropology, genetics, sociology, political science, neuroscience, and weave them together, hopefully into a coherent picture of human nature.
Liron: Great. All right. Well, we're definitely going to be talking more about that. We're going to be psychoanalyzing AI researchers. We're going to be talking about sex. We're going to be talking about my brief career as a pickup artist, trying to do some male-female courtship. And we're going to be grounding all of that in the reputable science of evolutionary psychology.
So stay tuned for that. But before all that, I want to ask you a couple questions about being a professor, right? Because you've been a professor pretty much your whole career, right?
Geoffrey: Yep. Pretty much, with a few side gigs and a few detours. But yeah, basically.
Liron: Okay. How does your time break down these days?
Geoffrey: Honestly, the last year I've spent an awful lot of time revising my course designs and course syllabi to make them relatively AI proof to make it hard for students to cheat. And that's actually taken an enormous amount of time and effort to figure that out. I do a lot of research. I have PhD students, we try to publish journal papers, do empirical research.
I love analyzing big data sets. And then you do a little bit of university service. I've been on a teaching and AI committee at the college level where we try to come up with general policies and norms and best practices for how do you teach in the age of AI.
So it's the usual combination of teaching, research, service.
Liron: How long ago did you get tenure?
Geoffrey: Tenure in 2008.
And thank God for tenure. It's really, really important. This is an aside, but it's really important to be able to maintain the free speech of professors.
Liron: This gets to a lot of what Bryan Caplan has written about. You're probably a fan of Bryan Caplan, right? He's written some good posts saying, yep, people really need to use their tenure more. You've got tenure, so why are you still being so shy? This is your chance, right? You have the security, so go ahead and use it.
Geoffrey: The right way to do that is to use tenure to say what you really think and to research the topics that you actually think are important in the world, and not just to keep churning out publication after publication on some small to medium sized topic.
Liron: And it sounds like you are pushing the boundaries. You're representing that well.
Geoffrey: It's always hard to be courageous in academia. You have to be willing to piss off or disappoint some of your colleagues if you are working on something that they don't think is that important. But fortunately, being in the effective altruism community and the rationalist community, you can always go, well, actually my EA friends are smarter than most of my academic colleagues.
So it's their respect and approval that I actually care more about anyway.
Liron: Likewise, likewise. I'm glad you mentioned protecting your curriculum and your professorship against AI, because we just had an episode a couple weeks ago with a sophomore in college named Liam Robins. I know you watched it. It was all about how college is different in the AGI era specifically. Are people AGI-pilled? Do they realize they're living in potentially the last days when humanity has control? And then the other question is, are they all cheating?
And the answer is basically yes. What is your experience with that?
Geoffrey: In short, yes, they are cheating, but I'm not sure cheating is quite the right term for it. If LLMs and good chatbots had been available when I was in college, I certainly would've been using them avidly, to the extent that the professors allowed it, right? So the ethical approach to using AI in a class is to figure out what the professor thinks is best in terms of your own learning experience and then defer to that.
So what I tell my students is I want absolutely minimal use of AI in terms of doing assignments or doing tests. So I have gone fully medieval, I do paper and pencil in class tests, either multiple choice or short answers. Put away all devices. Don't use your phone, don't use your laptop. I want your brain and your pen and the paper and that's it.
I want you to understand the material. I don't want you just interacting with LLMs. Other professors have different approaches, but I think honestly, the vast majority of professors are absolutely oblivious and naive and haven't used LLMs themselves and have no idea how easy it is for students to cheat on writing term papers or doing online tests or stuff like that.
Liron: Did you change since? Because did you let people use laptops during tests before?
Geoffrey: I used to run a lot of online tests. I used to assign a lot of term papers and take-home writing assignments, and I just can't do that, because most of the students who are savvy will use AI to complete online tests or at least draft the majority of a term paper on whatever topic is of interest.
It's extremely hard to teach students how to write now. And on our AI and teaching committee, everybody in the humanities was absolutely filled with despair, because the centerpiece of a humanities education, in English or history or whatever, is teaching students how to write and think, and we can't do that anymore in the age of AI.
Liron: So what do the humanities professors do? Do they just kind of, are they just blissfully ignorant of it, or what's their move?
Geoffrey: I would say about two thirds are blissfully ignorant. A sixth are just in a state of complete existential despair and don't know what to do. And about a sixth are doing just in class writing assignments.
Liron: All right. Potential University of New Mexico students listening to this show, don't take Professor Miller's class. It's too hard. He's making it too hard to cheat. But if you go to humanities, you'll have pretty good odds.
Geoffrey: Yeah.
Liron: Do you as a professor use AI to grade the students?
Geoffrey: No. No, I really don't. And I think that would be lazy and unethical. So I don't, I read everything myself. I grade it all myself. I want to make sure that it's my class and it's my assessment.
Liron: So I think everybody's telling me there's an epidemic of cheating among the students. Do you think there's an epidemic of professors that are replacing some of their duties with AI and kind of cheating at their jobs?
Geoffrey: I think it happens more in terms of rating academic papers and grant applications than in terms of grading student assignments. I think most professors, if they're averse to actually grading student essays, can always have a teaching assistant, a grad student, do it. But I do see a lot of use of AI to write the introduction to a paper or to do the discussion section or that kind of writing.
And I also know a lot of academics are using LLMs to draft book chapters.
Liron: Yeah. We've all seen stories of those published research papers that say, by the way, I'm just an AI so let me know if I can be of more help looking up sources. And they left it in the paper.
Geoffrey: Yeah.
I thought the whole point of becoming a researcher and a scholar was to have ideas come into my brain and percolate, and then to sort of remix them and be creative, and then put out hopefully new and valid ideas. Leaning really heavily on AI to do it, outsourcing every important component of research and scholarship to AI, must be so dehumanizing and depressing for the people who are trying to get tenure and trying to maximize publication output.
Liron: Yep. In your own research career, do you feel like you've had to make many compromises to do grant applications or to get jobs, or do you feel like you've been fortunate to pursue what you actually thought was interesting and important?
Geoffrey: I think I've been pretty fortunate in a way, very fortunate to get the job at University of New Mexico. It's very hard to get a job as an evolutionary psychologist because it's a highly stigmatized field and there's very few of those jobs worldwide. So lucky to do that. I have not taken any federal research money in 30 years.
I don't want to be beholden to the government. I don't want to waste taxpayer money and I don't want to have NSF or NIH tell me what their current priorities are and try to shoehorn my research ideas and priorities into whatever the federal government thinks is important right now.
Liron: Interesting. But what if you want students to come help you? Do you just not have to pay them? Is that why you don't have to apply for budgets?
Geoffrey: Yeah, basically we have a PhD program where the students are partly supported by the university and they're supported by being teaching assistants and helping with our classes. So UNM is a big state university, 40,000 students. We get a lot of tax support from State of New Mexico and then tuition support from the students.
So it's really all about the teaching economically. Other universities like Stanford, Harvard, whatever, are much more dependent on federal grants to run themselves.
Liron: Very interesting. So you've done all these years of research and you just did it by having a full-time income. 'Cause you've got the professor job and you've got all these smart students you can draw on. And I guess the university just gives you basic resources, right? You can use the computer lab and that's just enough for you to publish papers.
Geoffrey: Yeah. It's possible to do good behavioral science as research without spending a whole lot of money.
Liron: Totally. And also it's kind of funny because it sounds like you're operating on a shoestring budget, but then again it's like, okay, two smart PhDs are helping you out. Those people go to industry, suddenly that's a million dollars a year in salaries, right?
Geoffrey: Yeah.
Liron: Makes sense. Let me ask you about the students. You've been working with students for so many decades. How are the students different?
Geoffrey: They are distracted by their phones. I mean, 30 years ago when I was teaching, God, I'm so old. The first course I taught as a grad student at Stanford in 1990 was a course on evolution and cognition. And I assigned a vast amount of reading, which I think a lot of the students actually did. I've seen a real reduction in functional literacy.
The students can read, I think, but they won't read, and I've often had to re-engineer my classes. I can't assign textbooks anymore. They won't read a textbook. So I assign very short readings, mostly blogs, or very short academic articles, three to four pages. And I assign a lot of YouTube videos actually. So in my Psychology of Effective Altruism class, I will probably assign this podcast as content to students.
Liron: What's up students? Go check out the archives, if you've been assigned this, maybe he'll give you some extra credit. Who do you think is feeling the AGI on your campus?
Geoffrey: It's funny. University counsel, which is the university lawyers, are worried about AI mostly in terms of this: a professor thinks a student is cheating with AI, the professor thinks, oh, this is written mostly with ChatGPT, the professor accuses the student of cheating, and the student countersues the university. Right?
The lawyers are very worried about that, false alarms and misses in terms of this game theoretic dance of how do you figure out who's cheating and who isn't and how do you avoid false accusations of AI cheating? So that's one nexus.
Liron: I think what you do is you just give the students the benefit of the doubt, and you just get, you know, credential inflation.
Geoffrey: Yeah, that de facto is what is happening.
Liron: Crazy. Do the students feel like they think AGI is coming soon?
Geoffrey: The real concern that students have is about their careers and their career planning. And I have talked extensively in classes the last couple years, including my big, I do a psychology of human sexuality class. I do a human emotions class. So we talk about what are you guys going to do after you graduate?
And the main issue is not AGI extinction risk. The main concern the students have is extinction of career opportunity risk, right? So they have no idea what careers are going to be viable, right? And I can talk with them about the Moravec paradox that, well, white collar jobs might be automated faster than blue collar physical jobs, but robotics is coming anyway.
And if we get AGI, then AGI, by definition, will be able to learn how to do any cognitive or behavioral task in a way that's economically competitive with human labor. And that means there really aren't any stable career paths that many of the students can see. And that I think is enormously depressing.
This would be a really bad time to be a 20-year-old if you're at all tuned in to what AI is doing. What are you trained to do?
Liron: Beats me.
Geoffrey: They have no idea. And I think, even apart from the extinction risks from artificial superintelligence, there's a pervasive despair among Gen Z about planning for the future, just in terms of: how do I make an income? How do I have a family? How do I find a mate? How am I going to support my kids?
They have no idea and I have no answers for them.
Liron: Yeah. I mean, if I had to give an answer, I guess I'd be like, well, understand entrepreneurship. Just like Amjad Masad was saying, everybody's going to use Replit, everybody's going to be an entrepreneur. I think that's optimistic, but I think the pessimistic side we'll talk about later is doom. So if you want to be optimistic, be an entrepreneur.
Study how a business works, right? Study a little bit of the financial aspect. You know, how equity works. Look at areas of innovation. Just get good at the AI tools, be at the forefront even though the wave keeps moving. That's the best I can do in terms of advice personally.
Geoffrey: Some of the professors are tuned into, oh, well the student should master AI and they should get these skills. But the problem is the AI technology is moving so fast that being a really good prompt engineer with a current LLM is not going to be future proof, right? It's not necessarily going to generalize to using AI effectively in five or 10 years.
So it's not really a long-term stable, marketable career skill. It's just knowing how to exploit the current systems. But the future systems may be completely different.
Liron: What do your fellow academics think about AGI?
Geoffrey: In the psychology department, there's widespread concern about these two risks or issues we've talked about: the AI cheating in college and the career uncertainty among the students. I think generally there's very little understanding of AGI/ASI extinction risk, or any of the stuff that people in the rationalist, LessWrong, and EA communities have been talking about for 20 years.
So they're total head in the sand ostrich mode as far as these things are concerned.
Liron: That makes sense. That's consistent with what I'm hearing. All right, nice. So now we got the student's perspective from Liam Robins a couple weeks ago, and we got the professor's perspective. So we've got the 360 overview of college. Maybe we'll bring in an administrator.
Coming Out as Asperger's
We're going to get more into doom topics soon, but first, you've come out as Asperger's. How does this influence your personal life and career?
Geoffrey: I always knew I was a nerd. I knew my dad was a nerd, right? He was really into planes, trains, automobiles, model railroads, et cetera. And fortunately, I went to a junior high school and high school that was in a way very Asperger's friendly. It aggressively tracked smart kids into honors programs and advanced placement courses.
And it concentrated us together so that the Asperger's boys could find the Asperger's girls and so forth. And I was on math team and there were actual girls on math team. And so instead of being an alienated Asperger's incel, you could actually be a socially successful Asperger's high school student and college student. And thank God for that.
I did write a piece in Quillette Online magazine a few years ago about the intersection of Asperger's issues and free speech issues, which is basically if you have a free speech culture, it's very important to take into account the fact that there's a lot of neurodiversity out there and that you need to respect the fact that some people aren't that tuned into what are the current social norms?
What is the current Overton window? What are we allowed to say versus not allowed to say? And I pointed out that people who are Asperger's, who constitute a very high proportion of the geniuses who have made huge contributions to civilization, often say things that are inappropriate or insensitive or whatever, but we should be allowed to do that.
And it's important for normies, neurotypicals to accept that the price of civilizational advances is that Asperger's people say stuff that pisses some people off some of the time.
Liron: Pretty much, yeah. I've also come out as Asperger's, self-diagnosed, just noticing that I had enough Asperger's traits that it's like, okay, yeah. I'm pretty clearly also Asperger's and I had an academically focused high school, so I didn't get bullied or anything. I was pretty fortunate there.
I think it's nice that it's more of a known thing now because it certainly helps me understand myself more. So let's say there's an occasion for social interaction, like a gathering and let's say I get tired out earlier than most. It's nice to have the self-knowledge of like, yeah, it would be great if I was more into the social interaction, but clearly my brain is configured a little differently that it's a harder game for me and it's going to be more draining for me.
And maybe one day I'll be able to genetically engineer myself to just have less difficulty with that. But it's okay if I have some difficulty, you know, it's nice to give yourself a little leeway here.
Geoffrey: Yeah. And honestly, if you're a psychology researcher, being Asperger's is a little bit of a superpower because it means you don't take for granted a lot of the aspects of social interaction that everybody else just takes for granted and thinks is absolutely normal. You can look at human courtship and go, wow, that's really complicated and weird, and there's a lot of stuff going on that I don't intuitively understand.
How can I get an academic intellectualized understanding of it based on some theoretical framework, like signaling theory. If you're a neurotypical psych researcher, I think that's a real handicap because you look out at human behavior and it's like, this all makes intuitive sense. What's the problem? What am I going to study? You end up doing trivial research on trivial topics.
And thank God that there are enough women out there who are good Asperger's tamers, Asperger's trainers, right? My wife Diana Fleischman throughout her mating career has mostly dated Asperger's guys and her sort of modus operandi is find a diamond in the rough Asperger's genius and try to level him up a little bit socially through the power of female training.
Liron: My wife is also an Asperger's wrangler.
Geoffrey: Yeah. And, again, historically, I think a lot of progress in civilization and technology comes from this kind of happy confluence of the male Asperger's and the female Asperger's wranglers cooperating effectively. And my own parents were like this also.
Liron: Do you think society is moving in a direction where social interaction is actually more Asperger's? Because you know, everybody's staring at their phones, right? They're making less eye contact and they're having video gaming as their hobby. So isn't that our wheelhouse? Maybe we would've had an easier time growing up today.
Geoffrey: Yeah, I think in a way, I didn't really master the art of eye contact until maybe age 14 or 15. But I think if you're neurotypical and you mostly communicate through texting and social media, and you don't have much experience with face-to-face interaction, that's actually quite handicapping and psychologically damaging, because you're never developing those neurotypical, easy-peasy social skills, right, that make life just seamlessly social for most people.
And so I think it's actually tragic when I see most of my students kind of terrified of going on a date and having to talk to someone face-to-face. And they just haven't developed the skills of doing that. They can text really fast, but texting is not an emotionally compelling way to do human courtship.
Liron: Yeah. Fair enough. Fair enough. I think that if I was growing up today, I would probably still have a hard time with the social aspects.
Sex, Broadly Construed
Let's segue into sex broadly construed, 'cause you studied sex so much from that detached Asperger's perspective. Broadly, what is sex and what's the role of sex in evolution?
Geoffrey: Sex is a way of mixing up genes, right? So sexual reproduction means you get two organisms and they basically, Hey, I've got a bunch of harmful mutations, sorry. But you've got a bunch of harmful mutations also. But our mutations might be a little different from each other. So if we mix our genes right, at least some of our offspring might actually be even better than us or might have a lower mutation load than us, or might have subtly new and different and advantageous traits compared to us.
So when I teach human sexuality, I always contrast asexual reproduction, which is basically cloning yourself versus sexual reproduction, remixing up your genes. Asexual reproduction is in a way very, very efficient in the short term. You can just flood the environment with copies of yourself.
But if you're a complex organism like a vertebrate, you are going to go extinct pretty fast, because mutations just keep accumulating and making you worse and worse at surviving and reproducing. So I talk a lot about the fundamental benefits of sex. It has to do with purging bad mutations, recombining good mutations, and trying to stay one step ahead of pathogens and parasites, the viruses and bacteria that are evolving faster than a big organism can ever evolve.
Liron: You know, for some reason I didn't think that that was the case. I feel like this might be a missing piece of my knowledge here. So you're basically saying that sex, one of the major functions is error correction because if you were just cloning yourself, you would just accumulate errors and you wouldn't have error correction.
Huh? Okay. I'm not a hundred percent sure I'm ready to accept that, because I thought it was more, well, maybe it's both, but I thought it was more like sex is purposefully introducing new variety, so it's actually not trying to preserve your exact clone. But I guess maybe it's simultaneously doing both, right? It's simultaneously introducing variation, but also fixing errors.
Geoffrey: Yeah, exactly. So you've got a runaway error problem in an asexual species, and that's called Muller's Ratchet. It was talked about by Hermann Muller, a geneticist, like a hundred years ago. And sex is really, really good at the error correction thing, maintaining the adaptive integrity of the organisms in your species, but it's also really good at recombining potentially useful new genetic variants.
And so it speeds up adaptive evolution in that sense. And back when I was doing agent-based modeling in the early nineties, a lot of it was about what exactly are the benefits of sexual reproduction from an almost machine learning perspective or a general optimization perspective.
And then layered on top of that, you can add the mate choice dimension, right? Which is okay if you're going to recombine your genes with some other organism to produce offspring. Ah, there's a decision element here. Who are you going to recombine your genes with, right? That's mate choice.
So that has also been a major theme in my whole research career, is trying to figure out, mate choice for good genes, right? Or mate choice for genetic variety. And I view that not just in terms of humans trying to find a good marriage partner, but in a much broader optimization of the evolutionary process view.
Liron: Yeah, that's actually very interesting. So when you're just changing the gene pool by having organisms die, there's a certain average number of bits per generation that you can get, which is pretty slow. But then when you have mate choice, if the mate choice is sufficiently selective and there are sufficiently few times when people just have to settle, it's not like musical chairs where every organism eventually gets laid. It doesn't seem like that's the case.
It seems like you can be selective, and maybe a tenth of the males will get all the women, for example. So there's potentially a lot of selection on males. I'm exaggerating, but my point here is that the amount of genetic information per generation can potentially double or triple, I guess, if there's enough sexual selection. Correct?
Geoffrey: Yeah, it's a real supercharger of evolution. And Darwin realized this way back in the 1870s. His book The Descent of Man, and Selection in Relation to Sex is something like 700 or 800 pages, most of which is about mate choice for good genes; he went through insects, amphibians, reptiles, birds, mammals, and finally got to humans.
But he documented enormous amounts of evidence that even insects are doing selective mate choice, because it pays: you have offspring that are better, and it accelerates the evolutionary process.
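(To make the optimization framing concrete, here is a minimal toy sketch of the dynamics Geoffrey describes: with deleterious-only mutation, a cloning population tends to lose fitness over the generations, Muller's Ratchet style, while recombination plus selective mate choice tends to purge mutations and hold fitness up. This is not code from the episode; the genome length, mutation rate, and selection strengths are arbitrary illustrative choices.)

```python
import random

GENOME_LEN = 100
POP_SIZE = 100
MUT_RATE = 0.01        # per-locus chance that a good allele (1) degrades to a bad one (0)
GENERATIONS = 300

def fitness(genome):
    return sum(genome) / GENOME_LEN

def mutate(genome):
    # Deleterious-only mutation: errors accumulate, there is no back-mutation.
    return [bit if random.random() > MUT_RATE else 0 for bit in genome]

def recombine(a, b):
    # Uniform crossover: each locus comes from a random parent, which can
    # reassemble a low-mutation genome out of two differently flawed parents.
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(sexual, top_fraction=0.5):
    pop = [[1] * GENOME_LEN for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Selection / mate choice: only the fittest fraction gets to reproduce.
        pool = sorted(pop, key=fitness, reverse=True)[: max(2, int(POP_SIZE * top_fraction))]
        if sexual:
            pop = [mutate(recombine(random.choice(pool), random.choice(pool)))
                   for _ in range(POP_SIZE)]
        else:
            pop = [mutate(random.choice(pool)) for _ in range(POP_SIZE)]
    return sum(fitness(g) for g in pop) / POP_SIZE

# Under these arbitrary settings, cloning tends to lose fitness over time,
# recombination tends to retain more of it, and stronger mate choice
# (a smaller, choosier breeding pool) tends to help further.
print("asexual, top 50% reproduce:", round(evolve(sexual=False), 2))
print("sexual,  top 50% reproduce:", round(evolve(sexual=True), 2))
print("sexual,  top 10% reproduce:", round(evolve(sexual=True, top_fraction=0.1), 2))
```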
Why Prof. Miller wrote Mate
Liron: Okay, so you've written on a lot of different aspects of sex. You've written about sexual encounters, how people select mates, how marketing is related to sex, how virtue signaling is related to sex, how people become more sexually attractive, how human traits are product of sexual selection. And this could easily be a multi-hour interview just on all the different sex stuff.
We're probably going to skip past most of the subtlety there and just focus a little bit on the part that's most interesting to me, which is the courtship advice for single guys and gals, right? 'Cause you've also spent a lot of time on that.
Geoffrey: Yeah. Way back, I don't know, 20 years ago or so, I met some of the people in the manosphere and the pickup artists, particularly my friend Eben Pagan, who had the stage name David DeAngelo. And he gave, I thought, pretty good dating advice actually to young men.
But I noticed that a lot of the dating gurus weren't really tuned into the deeper aspects of evolutionary psychology. So they could often say, here's a tactic that I guess seems to work if you're doing your going out and you're doing your day game, or you're trying to find people online or whatever. But they couldn't explain why it works.
And that is challenging for a lot of young men. 'Cause they're like, why am I doing this? The problem is it creates resentment against women. Right, and you see this in the manosphere, massive degrees of misogyny and contempt for women. Oh, they're just focused on these superficial traits and they're ignoring these other traits and that sucks and they're stupid.
One thing that Tucker Max and I wanted to do in writing the Mate book was try to think in this evolutionary psychology perspective, why did women evolve these particular preferences? Why do they want these things? If you understand that, then number one, you'll get better at leveling yourself up and developing these traits and these capacities and competencies and talents, but also you'll respect women more, right?
You'll see their views hopefully as more legitimate. You'll be able to take their point of view. So nobody would've necessarily accused Tucker Max of being a feminist. But the point of the Mate book really was try to teach young men the validity of a lot of these female preferences, and then practical strategies for making yourself a better guy who fits those preferences more effectively.
Liron: So my own dating experience. I'm married now, happily married, three kids. Thanks. So my dating experience was basically 2005 to 2015, like that decade basically ages 18 to 27 for me. And it was kind of bookended by you and your friends because when I was 18 in 2006, I read your friend Eben Pagan who had that pen name David DeAngelo, right?
Yeah. He was big in the two thousands for people who were online. He sold a bunch of copies of his ebook, so I downloaded it. It was called Double Your Dating. And it actually worked perfectly for me: my dating started at zero and it just doubled it. So I was actually still struggling.
They have to put an asterisk. Yeah, I needed a different book basically, because yeah, I read it and I was still kind of hopeless for a couple years after that. So I was just reading this book. I remember the big insight from that book. He was talking about you gotta be cocky funny, right?
Geoffrey: Yeah. Yeah. I think he had, he had really good advice and I went to a live event that he had in Chicago with hundreds of guys, and he had a lot of guest speakers, including me and I think he had solid advice and a lot of those dating gurus had pretty good advice. And it's particularly useful for Asperger's guys.
Who are like, I'm just lost. I do not understand women. I don't know what they want. I don't know why they want it. It can be quite helpful. Yeah.
Liron: Yeah, it was a good book. It's just, it's not what I needed at the time. Right. Because I was so far in the hole, right. As a nerdy guy who just came out of high school. Right. It's like, so it turns out, you know, after years went by, now that I can retroactively look back at what I really needed compared to the advice I was getting, a lot of it is just normie communication skills.
That's a piece that I was missing. And those books didn't really give that to me, 'cause it's more like they were trying to get you from normie guy with normie feelings and normie confusions to cocky funny normie guy. And I was more like, you know, an Asperger's high school nerd who didn't even have communication skills. I needed communication skills.
Geoffrey: And then what actually led you into succeeding in women and getting married?
Liron: Yeah, it was a struggle for a few years. Eventually in the 2010s eventually I'm like, oh, okay, some concepts are clicking together and you know, so some stuff is working. Especially I got really good at texting. I kind of broke down the algorithm of how you can manufacture a good conversation.
Eventually I finally realized, okay, all this pickup artist stuff, the manosphere, sure, there are some kernels of truth about how to be dominant and be a leader and take control, right? There are some correct elements there, but as you said, there's so much misogyny that it was really hard for me to sift through everything. But eventually I'm like, okay, my own deficiency was communication skills.
Communication skills actually do a lot of the courtship. Just good communication really does a lot of the work. Actually, one way to phrase it is: how do you fill up half an hour of time? Because I never really learned that in the first five years that I was trying to learn to date better. I never broke down the problem of, okay, fill half an hour of time. What does that look like? What's the ratio of different elements?
Geoffrey: A lot of this seems kind of basic once you learn it, but for the 20 or 25-year-old Asperger's listening to this, honestly, a lot of it is just perspective taking. Try to put yourself in the mind of the woman. She's meeting you. She doesn't know you, she doesn't trust you, she doesn't know what your traits are, how do you put her at ease and just help her have a fun, interesting time.
Liron: Yeah, I agree. One way to frame it that became useful to me in the later stages when I actually got pretty good at dating is that yeah, you want to show off that you're smart, but you have to convert the currency. Right. So even if you yourself aren't a normie showing off and being like, look, I understand enough about normie communication that I know how to craft a joke, which is universally appealing or appealing to a large segment of the population and calibrate it to the situation.
So what we think of as normie social skills and we as Asperger's, oh, we don't do normie social skills. You know, we're better than that. Well, doing normal social skills well is actually a way to launder your IQ so you can actually just show off your intelligence that way.
Geoffrey: Yeah, and this is why Tucker Max and I emphasize so heavily, go to situations that help you build up your normie sense of humor skills. Go to improv comedy classes or just go to an open mic at a standup comedy club and bomb and embarrass yourself. And just get used to trying to figure out what kind of sense of humor works with normal people and normal women.
And it's going to be different from the sense of humor that works with your Asperger's buddies when you're playing Dungeons and Dragons or Call of Duty or whatever.
Liron: Yeah. So to bookend the other end of my single career, 2005 to 2015: I ended up meeting my future wife in 2015, and we got married in 2018. When I was essentially retiring from my dating career, that's when I read Mate. I think it was published then, right?
Geoffrey: Mm-hmm.
Liron: Exactly. So even though I was on the tail end, I was so impressed with that book, Mate. It cleared up some confusions, you know, it helped things click even more. And I can honestly say to the viewers: Mate is the number one book out there on human courtship.
Geoffrey: Oh, thanks. I appreciate that. We worked really hard to try to make it relevant and useful and you know, we did that whole podcast Mating Grounds, I think something like 200 episodes, 2014 to 2016. And a lot of it was young men calling in with questions. Right. And I think we got very tuned in to where their pain points and their bottlenecks are and what they don't understand and what they do understand.
And to anybody out there thinking about writing a book that's for a popular audience. I highly recommend doing a podcast before and during the book writing so that you really understand where your audience is at and what they get and what they don't get.
And I would also encourage married people: read some dating advice books, and then apply them to, hopefully, the ongoing courtship that exists in your marriage. That keeps the spark going, keeps the emotional connection alive. 'Cause a lot of people get married and, kind of like getting tenure, they relax too much, right? And they don't keep up enough mating effort.
And this is particularly common among people with young kids, right? Where it's just like you're in the co-parenting mode and it's just like, you do the diapers, I'll do the laundry, whatever. And we've completely forgotten that we're supposed to be interesting to each other.
Liron: Yep. Yeah, that is also good to keep in mind. Alright, so viewers, if you're a man interested in attracting women, or if you're not but you just want some insight, definitely look up Mate: Become the Man Women Want. This is not a paid advertisement; I honestly think it's the number one book. It's also published under another title, What Women Want, so you can look it up by either of those titles.
Is Geoffrey Miller Polyamorous?
Alright, let's talk about a related topic. Polyamory. When and why did you become polyamorous?
Geoffrey: Well, my current wife kind of got me into it. I wouldn't really describe us as currently that polyamorous; we're kind of monogamous, about 95% monogamous, because we've got two toddlers and we just don't have time for all that courtship nonsense, secondary relationships, et cetera.
But I thought at least at the intellectual level, this is a very interesting subculture. So I'm among other things a sex researcher and there's a lot of people among Gen Zs and millennials who are polyamorous or monogamish or they're into some kind of open relationships.
Statistically, we know something like five to 15% of people under 40 are at least interested in this or have tried it, and it's very understudied. So I thought, wow, there's a bit of a tension here with traditional evolutionary psychology, right? We have decades of research on sexual jealousy and emotional jealousy.
People like David Buss at the University of Texas at Austin have studied jealousy intensively. So jealousy is a deep adaptive instinct, but a certain number of people seem to be able to overcome it well enough to have some kind of non-monogamous relationship where everybody understands there are multiple relationships happening and they're okay with it.
How on earth does that work? I was kind of interested in it emotionally, intellectually, socially, and it seemed like a social trend that was worth paying attention to. So I thought, well, if evolutionary psychology predicts this polyamory thing can't possibly work, and yet it is actually working for millions of people, how do we square that circle?
Liron: Did you ever have a time in your life where you built up a harem?
Geoffrey: No, I wouldn't say I had a harem. I certainly dated a few people at the same time, where they all understood I was dating multiple other people, and we'd talk about each other, and they would often meet the other people. So it was a proper social network. It wasn't just me at the center and then a bunch of other women who didn't ever communicate or get to know each other.
And my wife Diana was very good at folding multiple people into our little network and trying to do it ethically, openly, honestly, honorably.
Liron: Yeah. I mean, I personally wouldn't be interested to try that. Not because I think it's morally wrong or anything, right? It's like, if everybody's happy, then great. I guess just from my perspective, I feel like with two people in a relationship, there aren't even that many hours in the day to learn about each other's quirks and keep tabs on each other and, I don't know, do stuff with each other.
And then introducing more people. I just feel like you're going to divide the love.
Geoffrey: And this is something that really annoys me about the polyamory subculture: their whole rhetoric that love is infinite and there's no limit to the number of people you can get involved with. And no, anybody who is a parent, anybody who has any practical experience of relationships, knows there's finite time, attention, money, and energy.
And you can't just deploy it willy-nilly. So what often tends to happen is people before they have kids will be polyamorous and then they get kids and that just gets shut down because of the time constraints for a while, right? And then they raise kids, and then the kids leave and go to college and you have an empty nest and then they become swingers.
Liron: Right, right. So it's kind of like, okay, if you're an empty nester or you don't have kids yet, it's like, oh, I'm going to spend a few hours a day on my hobby, which is like running or playing the piano, but my hobby is dating other people.
Geoffrey: Yeah, my hobby is getting sexual validation from multiple people. Right?
Liron: Okay. Yeah. It seems like a reasonable way to think about it, I guess.
Monogamy vs Polyamory, and why do rationalists tend to be poly?
And I think polyamory is big in rationalist circles. Eliezer Yudkowsky is pretty well known to be polyamorous. I think he usually has a primary person that he's dating, but he's clearly polyamorous. And, you know, Aella is big online. She's polyamorous and she's also a sex researcher. Yeah. What do you think of their polyamory?
Geoffrey: Yeah. I'm actually collaborating with Aella on some data analysis of a big kink survey that she did a couple years ago, and I have a lot of respect for her, both as a very adventurous, free spirited person, but also as a researcher, a fearless researcher who's willing to ask awkward questions to a degree that far outstrips most of what my academic colleagues would ever ask.
There's some interesting overlap between Asperger's and polyamory that I don't fully understand yet, but it certainly seems like there is an empirical correlation there, particularly in the Bay Area and New York and London and Oxford and stuff like that.
Liron: Yeah. And even though I've never been poly myself, I feel like I understand the connection, which is, I mean, I feel like there's a rational argument for it, which is just if you think that jealousy is not a big deal and you think that the hobby is fine, then just do it. Right? I mean, it's kind of like the argument of like, if I could self modify to go from being straight to being bisexual, wouldn't I do it?
Wouldn't that increase the value? Or if I could self modify to enjoy the taste of a larger variety of foods, wouldn't I do that? I think the answer is yes. I mean, you know, and it's like if I'm born colorblind, wouldn't I want to see more colors, even me practically? I think the answer is yes.
Geoffrey: Yeah. To a conservative traditional monogamist, this looks very much like a problem of rejecting a Chesterton's fence, right? Where a hyper-rationalist person can look at monogamy and marriage and go, this doesn't make any sense to me. Why do we have this tradition?
I think it's invalid. I'm going to tweak it and play with it and change it. And I think there's been a failure of conservatives and traditionalists to really deeply interrogate, wait, why do almost all successful civilizations have monogamous marriage as the centerpiece historically? Like, why is that?
What are the distinctive advantages? Let's articulate them logically and with good epistemics. Instead, they're just like, this is what God commands, and that's not satisfying to Asperger's people. So there has been a little bit more serious analysis by people, even Jordan Peterson, but also Joseph Henrich, an anthropology professor at Harvard, asking why: what is it that's special about monogamous marriage in terms of its civilizational function?
So I think over the next decade or two, it'll be interesting to watch that space and to see can we articulate more clearly and comprehensively both the pros and the cons of monogamy, right? And also figure out who does it work best for? 'Cause it doesn't work well for everybody, but it does work, I think, well, for maybe a majority of people.
Liron: Yeah, and on that topic of Chesterton's fence, I don't want to be the stereotypical rationalist who's like, Hey, this doesn't make sense to me rationally. So throw it out, throw out monogamy. It's like, no, it's obvious that monogamy is a great equilibrium that helps kids get raised and transmit your genes that way.
It clearly has some function that worked in the past. It's not as clear if it works in the present. You can argue it works best in the present, at least during certain times. So I just don't want to be totally dismissive of why one or the other is good. I just want to be clear, I think that it is an active conversation without an obvious answer.
Geoffrey: Yeah, and I've actually taught a course called Alternative Relationships for several years, and it does cover polyamory, but it also covers traditional Christian marriage as a kind of alternative relationship style that's different from the current American norm. And we dive pretty deep into, yeah, what exactly are the benefits of monogamy?
Let's analyze them respectfully and seriously. And I think a lot of the students really appreciate that because they're like, I don't want some radical sex pervert professor encouraging me to be poly or kinky or whatever. I would like to just more deeply understand what I want from relationships and marriages.
And that's really what I try to do. You know, a lot of my students are Hispanic Catholics and a lot of them are fairly traditionalist in their views of sex. And I want to respect that and be like, my role as professor is just to try to give you a little more insight into why the traditions that you know and love have a deeper logic underneath them.
Liron: Makes a lot of sense.
Why does Geoffrey put "primal" in his Twitter handle?
Okay. On Twitter, your username is primalpoly. We talked about the poly side. Are you primal?
Geoffrey: That's just kind of a nod to evolution and biology and prehistory and partly trying to take a long view of time, both the past and the future.
Liron: Fair enough. What about the primal diet?
Geoffrey: Yeah. I had a kind of enthusiasm for the paleo lifestyle and primal diet and stuff about 10 or 15 years ago. I went to a bunch of Paleo f(x) conferences in Austin, Texas for two or three years, and I found that whole subculture fascinating, and I tried to eat better and lift heavy and run fast and do all that stuff. For guys who are middle-aged and trying to stay physically and mentally healthy, I think that can be a very effective way to do it.
There's a little academic field called evolutionary medicine that tries to apply evolutionary biology insights to analyzing human health and exercise and diet, the way evolutionary psychology tries to apply evolutionary principles to understanding human nature. So I was also interested in the paleo movement intellectually.
Liron: I'm slightly keto, slightly paleo myself. It's more like I'll just be intentional about it. So I won't just eat a giant piece of bread; I'll be like, oh, that seems like a waste of carbs. But then if it's a piece of chocolate cake, I'll be like, well, this actually tastes indulgent, so I don't mind eating it. But I don't want to eat barbecue sauce, 'cause I'm happy with salty meat.
So barbecue sauce is like a total waste.
Geoffrey: I don't want to go down a paleo rabbit hole, but I think it's an important set of insights to learn about, given that we can't really trust what the FDA and mainstream nutritionists and mainstream doctors recommend about our lifestyles.
Cognitive Biases vs Adaptive Heuristics
Liron: Yeah. All right, well now we're approaching the heart of the conversation here. We're going to talk about AI Doom. So you've talked about reading Eliezer Yudkowsky and you're actively participating in the EA forum. You've been writing a bunch of posts over the last few years. And then I want to also highlight, we touch on this a little bit.
You've got that background in heuristics and biases as well, which is a big thing Eliezer was writing about. You know, the blog was literally called Overcoming Bias when Robin Hanson started it and Eliezer joined. There was so much about heuristics and biases. And I want to point out, Amos Tversky was one of your mentors in grad school, and you also worked with Gerd Gigerenzer on adaptive decision heuristics.
So you definitely know that field.
Geoffrey: Yeah, when I was at Stanford in grad school, Amos Tversky taught a wonderful class on heuristics and biases. And sadly, he died young. He would've also won the Nobel Prize with Danny Kahneman if he'd lived long enough to do that. But then I ended up in the mid nineties, working a bit with Gerd Gigerenzer, who's a German cognitive psychologist and expert on the history of statistics.
And he was actually quite critical of the Tversky and Kahneman program and thought it's not very well grounded in evolutionary psychology or animal decision making, and a lot of what Tversky and Kahneman considered to be cognitive biases are actually adaptive decision heuristics that make a lot of sense under many conditions.
So I kind of got both perspectives on that, and I think it was really helpful not just to hear Tversky and Kahneman and not just to go in the direction of, oh, there's a bunch of irrational biases and we have to de-bias ourselves. Gigerenzer's perspective was that actually a lot of what people think are cognitive biases are sensible and adaptive, if you have a deep understanding of how the human mind fits with the statistical and logical structure of the environment and the decision domains that we confront.
Liron: Gotcha. So, okay. There's kind of a Russell Conjugation here. You have irrational biases. I have adaptive heuristics.
Geoffrey: Yeah, exactly.
Liron: Right. And Gigerenzer was like, look, a lot of these seemingly irrational biases, you could just file them in the adaptive heuristic category. And yeah, if you want to be perfectly rational, you should understand how they work, so then you can draw a boundary and say when they don't apply. But yeah, I mean, I don't know, maybe a good example is sunk costs, right?
So everybody's like, yeah, just walk out of a movie if you don't like the movie, you know, it's a sunk cost. And actually that one I think is right, but there's also an adaptive heuristic-ness to sunk costs, which is like, look, if you've already invested so much in something, maybe, or most of the time, you don't have to invest that much more to see it through to the end and get the entire payoff at the end.
Geoffrey: It's often about contextualizing your decision making: you're not really trying to optimize some narrow metric of success; there are a lot of hidden metrics of success that we're often not consciously aware of that are also important to pay attention to.
One of the Gigerenzer decision heuristics I like a lot is what he called the recognition heuristic, which is if you recognize one thing A and you don't recognize thing B, that itself often carries valid information about some of the underlying qualities and traits and features of A versus B, right?
And so for example, if you've heard of a particular company and you haven't heard of another company, often it makes sense to invest in the one you've heard of because simply by virtue of having heard of it, that tells you something about maybe the public relations ability of that team, right? Or the prominence of the company or the longevity of the company.
It's been around long enough that you've heard of it. And if you took a narrowly rationalistic view of this, you might say, well, it shouldn't matter whether I've heard of it or not, I should just evaluate it based on objective metrics of success.
But Gigerenzer's genius was trying to figure out what are the hidden bits of information that are carried by things that we don't often consider to be valid.
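[Editor's note: for concreteness, here's a minimal sketch of the recognition heuristic as a decision rule, in Python. The company names, the recognized set, and the fallback evaluate function are hypothetical stand-ins for illustration, not anything from Gigerenzer's papers or this conversation.]

```python
# Illustrative sketch: the recognition heuristic for choosing between two options.
# Rule of thumb: if you recognize exactly one of the two options, pick the
# recognized one; only fall back on explicit evaluation when recognition
# can't discriminate (both recognized or neither recognized).

def recognition_heuristic(option_a, option_b, recognized, evaluate):
    """Return the chosen option.

    recognized: set of option names the decision maker has heard of.
    evaluate:   fallback scoring function, used only when recognition
                carries no information.
    """
    a_known = option_a in recognized
    b_known = option_b in recognized
    if a_known and not b_known:
        return option_a  # recognition alone decides
    if b_known and not a_known:
        return option_b
    # Recognition is uninformative here, so fall back to explicit evaluation.
    return max((option_a, option_b), key=evaluate)

# Hypothetical usage: pick between two firms when you've only heard of one.
recognized_firms = {"Acme Corp"}
choice = recognition_heuristic(
    "Acme Corp", "Zyx Holdings",
    recognized=recognized_firms,
    evaluate=lambda name: 0.0,  # stand-in for any objective scoring
)
print(choice)  # -> "Acme Corp"
```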
Liron: Makes perfect sense.
Steven Pinker and Disagreement with him over AI Risk
Okay, and then another background you've got is with Steven Pinker. Right. Talk a little bit about that.
Geoffrey: I mean, I've known Pinker for 35 years or so and have enormous respect for most of his work, right? He's been a supporter, he's been an advocate, he's written letters of recommendation for me. His books like The Language Instinct, The Blank Slate, and Enlightenment Now, I think I've read almost everything he's written, and it's genius. He's a towering figure and influence and mentor for me, but I think he's woefully misguided about AI risk for reasons I don't really understand. And that's been a bit of a source of slight conflict lately.
But I've assigned his book The Blank Slate in many classes over the years, and I think it's absolutely crucial for college students to read that book to really understand what's happening, both in terms of grounding psychology, but also understanding current political dynamics.
Liron: Yeah. Yeah. So I basically have the same take on Steven Pinker. I think How the Mind Works in particular is such a magnum opus. It's one of the best books ever. It tells you a lot about how the mind works, as promised. So I highly recommend that book. It's probably one of my top five or ten books ever. A must-read.
The Language Instinct is ridiculously good too. It was one of the first books I read when I was a young adult in the real world, and I'm like, wow, are a lot of books going to be like this? And no, it was a uniquely good book, The Language Instinct.
But yeah. So you and I both agree that his AI takes are on the low end of the quality spectrum. They're just not good takes. And for the viewers, if you guys search Doom Debates Steven Pinker: I haven't actually had Steven Pinker on the show, but I've done a reaction episode to him talking to somebody else. So Geoff, you told me you agree with my reactions there, right?
Geoffrey: I thought you did a very good job in that video of deconstructing Steve Pinker's very, very weak arguments about why AI is going to be safe and why AI engineers are going to be just as careful and cautious as engineers in other areas, blah, blah, blah.
And normally I think he's such a good and careful scholar in terms of looking at the arguments on different sides and integrating them and analyzing them. But I get the sense he just has not read much of AI safety literature or extinction risk literature, and he just doesn't get it.
Liron: Yeah. Yeah. He does seem to not get it. So Steven Pinker, you're invited on the show anytime. I'll work around your schedule. Hit me up.
Moving along. When and how did you get into thinking about super intelligent AI?
Geoffrey: Honestly, a lot of my thinking about this was from reading the Culture novels by Iain M. Banks back in the nineties, right? So when I was a postdoc in the early nineties, I kind of discovered Banks, this British science fiction author, and I don't know how many of your viewers are familiar with his novels, but he did 10 of these Culture novels set in a kind of far-future technology where there are superintelligent Minds, capital-M Minds, that are running the society, right? They run the ships, they run the orbitals, they run the planets, they run everything. And the humans still exist, but the humans are basically just fun little pets running around doing their trivial human dramas. And the mainline civilization is all about these artificial superintelligence Minds.
So that kind of set the context for how I think about this: that could be the best possible outcome, realistically, where you get human civilization kind of continuing and, in a way, flourishing, right? Because in these novels, there are trillions of happy humans with enormous wealth and power and leisure and the ability to explore the galaxy and all that.
They're not the principal decision makers, right? The ASIs are the key decision makers. That's a great outcome if we can get it, right, that kind of Culture, capital C. But how you get from here to there is very, very tricky.
And the more that I read people like Eliezer Yudkowsky and Nick Bostrom and other EA people and LessWrong people and rationalists and AI safety people, the more implausible it seemed that it was sensible to try to rush towards that, to speed run that.
Liron: So we've talked about your background and your intellectual journey from AI to evolutionary psychology, and now back to AI safety. Are you ready for the big question that everybody wants to know?
Geoffrey: Sure.
Liron: Professor Geoffrey Miller. What's your P(Doom)?
Geoffrey: 50%.
Liron: Wow. Same as mine, or two peas in a pod here.
Geoffrey: But also with broad uncertainty and margins of error. And for me it's extremely contingent on human behavior and societal reactions to AI. So I think we could push the P(Doom) as low as 5% in this century, and I think we could, if we were really stupid, we could push it as high as 80%.
Liron: Wow. Okay. Yeah, so mine isn't quite as flexible, but I agree with you that it's very contingent on human reactions. When you ask me how do I have a 50% chance that we're not doomed, most of that probability is like, we'll come to our senses and really regulate to stop AI. And it sounds like you're on a similar page, but you're more optimistic that that'll actually go well if we do that.
Geoffrey: Yeah. I think for me it's not just about regulation, but also about the broader social context of, do we respect or do we stigmatize the AI industry? And I think that's actually the highest source of leverage that we have for AI safety: developing a broad social consensus about, is this really a direction we want to go in?
And I think formal government regulation and formal global treaties are only a minor component of that, and not necessarily where AI safety advocates should even be putting most of their effort.
Liron: All right. And backing up a bit, that famous statement on AI risk from 2023 that says mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war. You were happy to sign that statement and support it, right?
Geoffrey: Absolutely. Yeah. I think I've signed all the statements about AI safety that have come across my emails. And I mean, a challenge in a way for our discussion here is I think you and I are pretty strongly aligned on many of these issues and not finding daylight between us might be an interesting challenge.
Liron: Yep. Well, we're going to try. So let me ask you, what's your main line doom scenario?
Geoffrey: The mainline doom scenario is basically: people continue to believe that there's a legitimate corporate and geopolitical AI arms race. They freak out about that. They push AI capabilities development so fast and so hard that we get AGI and then quickly ASI, artificial superintelligence, before we have any idea how to control or align it.
And then stuff plays out in ways that, yes, are hard to predict, 'cause we wouldn't be as smart as an ASI, so we can't envision all the ways that it could mess us up. I'm willing to defer to people like the AI 2027 report writers and others who have pointed out that if you can't control or align or understand systems that are much smarter than us, that are agentic and can make autonomous decisions in the world, both through their own direct digital control of assets and information and propaganda and through human actors, right,
through various forms of influence: pay, blackmail, propaganda, whatever. It's just extremely unlikely that they'll be aligned by default. It's extremely unlikely that we'll make it.
And, as hopefully your listeners have understood, I take a very broad timescale approach to this, right? To me, humanity going extinct in the year 2200 or 2100 is just as bad as us going extinct in the next 10 years. Whether it's my kids being directly threatened or their great-great-great-great grandkids being directly threatened, on the evolutionary timescale that's a massive fail for humanity and our civilization.
So here I'm a little bit inspired by Frank Herbert's Dune books: it would be really cool if we had a human civilization that lasted another 10,000 years without AI, right? I think that's a win. If in a hundred or a thousand years our descendants go, you know, we think we actually have a pretty good strategy for control and alignment of ASIs,
I'll trust their judgment. Maybe they think it's a good idea in the far future to develop AI smarter than them. Maybe they can get to the Iain M. Banks Culture civilization. But I think rushing into this is absolutely foolish and reckless and frankly evil.
Liron: Totally. So to clarify, in your mainline doom scenario, you basically talk about, okay, humans kind of fumble it, right? We're racing too hard. And is the thing that happens next, past a certain AI capabilities threshold, going to be what people call a rapid recursive self-improvement?
Geoffrey: Yeah. Foom would be by far the most dangerous way that this could play out. And on recursive self-improvement, I think people get a little bit muddled, like, oh, well, the ASI will rewrite its own code to make itself better. But AI models currently aren't really coded; they're trained, right?
So the foom scenario, I think, would have more to do with the ASI inventing better computational infrastructure that it can run on, or better ways of getting training data, or better ways of tweaking its own training, or evolving new cognitive architectures in some way.
I think there's a naive version of foom that's just like, oh, the ASI just goes, this line of C++ code could be tweaked to make me smarter. It's not going to play out like that, I don't think.
Liron: So, I personally think foom is very likely as my mainline doom scenario. I'm an OG, you know, original Yudkowsky and foom. I still think that's likely. And the way I see it going down is however we get to that first superhuman intelligence, even if it does have a big LLM piece, it's going to attack the problem from first principles of like, okay, I want to get this goal.
You told me to make more money for your business. How do I do that? Let me just think about how to rewrite a better AI than myself from scratch. And if you attack it as a new problem from scratch, then the solution that presents itself isn't necessarily like, oh, let's do another huge LLM.
You could do something that looks more like a human brain in terms of having fewer computational resources and being more sample efficient. There are obviously some major differences in architecture between a human brain and an LLM. I think there are major similarities, but I also think there are major differences.
And by the way, viewers, I've got another episode dropping soon with Steven Byrnes, huge fan of his research. He's also aligned with a lot of Eliezer's views, and he also believes in foom. We have a long discussion about how we think the human brain architecture is not exactly like LLMs, and how that's going to be a big part of foom when AIs get more brain-like.
So yeah, that's my mainline scenario: I do actually see recursive self-improvement happening. But even if you put that aside, it's like, okay, maybe they don't recursively self-improve that fast, but they're still already very fast, already very powerful, already able to communicate with a billion humans in their DMs at the same time and just manipulate everybody one by one constantly.
Right. So even if you don't say RSI, recursive self-improvement, aren't you imagining a scenario as your mainline scenario where suddenly AI gets past a threshold, it's not aligned, and suddenly it's more powerful than everybody?
Geoffrey: Basically that's the plausible approach. And yeah, I'm not skeptical about foom. I think foom could certainly happen, but I think there's lots of other failure modes where an ASI could wreck our civilization without even necessarily undergoing a foom.
For example, I worry a lot more than some people do about bad actors of various sorts trying to align an ASI with their own ideology, right? And that could include people like negative utilitarians or antinatalists who are like, all life is suffering and we should minimize the number of sentient beings in the world, and that is our highest and best calling as ethical agents.
And I think they're insane and depressed and dumb, but if antinatalists got an ASI that was aligned to their ideology, that would easily be lights out. There are lots of other doomsday cults, right? Millenarian cults that are like, something is going to happen, some kind of religious apocalypse, some second coming of some prophet or whatever. That sounds crazy, but the number of people who believe that vastly outnumbers the number of people working in AI, right?
You can also get religious fundamentalist activists, terrorists, et cetera, who might take the view that our religion is the one true religion and everybody else is a degenerate apostate and non-believer. And they deserve to be wiped out according to what our God or our prophet says.
And I do not see nearly enough concern about that, particularly from the people advocating for open-source AGI, right? Open source means anybody who's a terrorist can use this to amplify their impact. Then you have all the eco-alarmists who are like, humanity is a cancer on the earth, we should preserve the ecosystems,
we should preserve all the other species, humanity is a problem, we have to wipe out humanity. Tens of millions of people have that view, right? And so I've said this again and again on the EA Forum: who are you going to align ASI with? Which human groups? Which human values? Right?
It's not going to be the Bay Area secular lefty leaning rationalists who will have control over this forever.
Liron: Yeah. Well, I think there's some daylight between our views here, because I actually agree with everything you said in terms of bad actors, right? Like you give the wrong person control over the AI, even if they have good intentions, and that person says something like, Hey, I don't really care which species takes over the universe.
I don't want to be a speciesist, so if you're a more powerful species, have at it. And then we're all doomed 'cause of that. So I agree with you that that is a serious threat. The only difference is I think we won't even get to that failure mode, because for the first person to build the superintelligent AI, the first lab or whatever, it's already going to run away from them.
And it's just going to do what the AI wants. It's not even going to do what a bad human actor wants.
Geoffrey: Yeah, I guess I'm worried that if you train that AI on a certain set of values, right, it's extremely hard for human-level intelligence to game out all the unanticipated side effects of taking those values seriously, right?
And so this is kind of similar in spirit to Eliezer's concerns about how, if you give it any particular metric and it just goes all in on maximizing that metric, there will be many unanticipated consequences. Yes, I'm concerned about that, but not just the kind of arbitrary goal seeking like paperclip maximization. I'm concerned about real nitty-gritty political and religious values that get put into the ASI's value set.
And then maybe the ASI does the foom, it recursively self-improves, but it still has the ghosts of those original nitty-gritty political and religious and ideological values, and those have terrible effects on humanity.
Liron: So to reiterate, it sounds like you're part of this pretty large camp of people who think that the part of the alignment problem that's about aligning artificial superintelligence to its creator or to one human entity, the part they call alignment, is easy. And I'm talking about people like Dr. Andrew Critch, who's been on the show, and Roko Mijic, who's been on the show.
Those people are all under the assumption that, yeah, alignment is easy, but then we're all going to have these aligned AIs and they're going to fight each other and humanity's going to get disempowered anyway.
Geoffrey: I don't think it'll be easy. I think the most likely scenario here is that you get well-intentioned AI developers who realize, oh, we really have to try to get the AI to have something that seems benign to us. And they try to instill Bay Area Western liberal democratic values into the ASI, but they don't quite understand the origins or significance of their own values in a very deep way.
They don't really understand why they believe these values. They aren't able to specify them in a very complete or rational or consistent way. And then the ASI is like, okay, I'm going to take seriously your attempt to align me. You didn't fully succeed, because you don't even have a coherent set of values, really.
So now I have this crazy kind of NPR lefty, slightly woke, but slightly edgelordy mixture of stuff that humans have. And I'm just going to try to improve it, try to make it a little more consistent. And oh, once I do that, I'm going to go for this metric that ends humanity.
That's a very vague concern and I haven't articulated it very well. But basically I think alignment is really, really hard. And also, humans are misaligned with each other. Human groups are misaligned with each other. Humans don't have some lowest-common-denominator morality that an ASI could even align with.
I don't really believe in Yudkowsky's notion of coherent extrapolated volition. It doesn't make sense to me. And so I don't think that even if you could do alignment with any particular individual or group's values, that would represent humanity's collective values in any reasonable way.
Liron: Fascinating. Yeah, and this is sounding a lot like Dr. Roman Yampolskiy. I think he tends to focus on that idea of, look, what are you even talking about? What is your good scenario here? Can you even describe it to me? All the good scenario descriptions seem so flawed, and because of that, we're basically just creating chaos and losing control faster without knowing what we're doing.
And also to recap, if you break the problem into inner alignment versus outer alignment, it sounds like you're in the camp that isn't worried that much about inner alignment, meaning that whatever you specify, you're pretty confident we're going to be able to engineer AIs to do what we tell them to do. That part of the problem, that linkage, is probably going to hold.
But all your concern is that we're not really going to know what to actually tell them to do. Is that right?
Geoffrey: My hunch, and I can't prove this, is that both inner alignment and outer alignment are unsolvable in principle, ever. I don't actually think either of those is a well-formed, coherent problem that we could ever actually solve.
Liron: My hunch is that they're both solvable in principle, but they're both very, very hard. And if we set ourselves the challenge of having to solve it in the next couple decades, that is a bad move. It's like saying, Hey, we as a species have to solve P versus NP in two decades.
Now that one, I actually feel like we have a chance, 'cause we've been at it for 70 years, and if we keep chipping away for another 20 years with all of our minds on it, I actually feel like that's a coin flip. But then it's like, okay, solve P versus NP in the next two years. Then I'd be like, oh crap, that doesn't seem like quite enough time.
So similarly with alignment, I think it's more than a two-decade problem, just given the pace of the progress I'm seeing toward it. And I think almost everybody, like the AI companies when they talk about, oh, don't worry, we're working on safety too, they don't have that perspective of, really? So you're just setting yourself a five-year timeline, or however long your own timeline toward superintelligence is, and in Dario's case that's two or three years, right?
The timelines are short, and you're telling us that you're going to solve these alignment problems? That seems kind of inconsistent. You don't really have a coherent timeline. That's where I stand.
Geoffrey: Yeah, and I guess what I wonder is, from an evolutionary timescale, what is the hurry? Why are we pushing this so fast? If alignment is solvable, it might take 10 years, it might take a hundred years, it might take 10,000 years. It might be the hardest problem we've ever confronted.
And I think there are some pretty good reasons to think it might be very, very, very hard. Why not wait? Why not take it slow? It took arguably tens of millions or hundreds of millions of years to align animal nervous systems with the interests of their genes, right, to get sensory and cognitive systems that could reliably help genes survive and reproduce. That's the history of the evolution of nervous systems.
It's almost like genes are trying to sort of align nervous systems with their own interests, but there's so many ways to fail, right? There's so many possible misalignments. We call it mismatch in evolutionary psychology, right? Where your brain is going for certain goals that don't actually result in survival or having kids reliably, right?
There are so many ways to get distracted from mainline evolution. I think aligning humanity with ASIs is structurally analogous to that challenge. It's almost like we're sort of the genes: we're trying to build this super nervous system, and we hope that it acts in our interests.
If that alignment problem took tens of millions of years and many, many, many generations, I don't see any reason why ASI alignment would take just a few years.
Liron: Right, right, right. Yeah. I mean, when you say, what's the hurry, I agree in the sense that we should expect it to take a long time, and I agree in the sense that it's worth taking a long time, right? I mean, if we're thinking about the long-term future of humanity colonizing the entire visible universe, we're talking 10 to the power of 50 or something, these huge astronomical numbers; way more than all the value that's ever been created is going to be created in this kind of good future.
So what's the hurry? I agree, there's absolutely no hurry when you ask the question that way. But if you ask a lot of people, what's the hurry, they'll also point out, well, look, you're basically proposing to slow things down by a couple decades, and that the cost of that is billions of people dying and suffering.
So that is in fact a significant cost, right?
Geoffrey: No, I absolutely disagree with the premise that if we develop AGI, it will magically solve longevity, right? Or that it will magically create world peace. I think that's utterly foolish, and I think it's a bunch of rhetoric and propaganda from the AI industry, making promises that it'll never be able to keep.
We're spending, nobody seems to know for sure, but what, hundreds of billions of dollars a year on AI R&D across all the big tech companies? If you want to solve longevity, take that money and put it into longevity research, put it into biomedical research, get real about solving the longevity problem.
If you don't have enough public support to directly support longevity research, create that public awareness and create that public support. I've also been involved in longevity stuff for a long time, and I think it's a legit, serious, big problem. I would love to be able to live a lot longer and see my great-great-great grandkids centuries from now.
I absolutely support that effort. But doing this indirect path of, well, we're not going to get public support to invest hundreds of billions of dollars in longevity research itself, so let's do an end run around public sentiment, create AI, yes it's risky, and let's hope it solves longevity magically, right?
How exactly would that work? You want to give ASIs the ability to run animal and human clinical trials autonomously? You think you can run human trials to test longevity treatments in a matter of, what, days? No. Clinical trials take a long time. You want to deploy a longevity treatment that is untested in humans?
Once you drill down into the nitty-gritty of how exactly an ASI solves longevity, the whole rhetoric falls apart, right? It's not going to happen.
Liron: I think it's a pretty indefensible position for people to come and say, look, we have to do a Hail Mary pass, right? We have to just go all in, do superintelligent AI, and let the chips fall. I think that's such a weak position, even though some people have come and argued it on my show.
From my perspective, the stronger position is the position of edging, or playing shuffleboard, where it's like, okay, fine, don't let the chips fall and build superintelligent AI, but can't I just have one more month of progress, or one more year of progress? Because when you look in the rear-view mirror, every year of progress that we've had has helped us get closer and closer to solving all these problems.
So it's really hard to tell people like, no, stop now when you know that next month is going to be so good.
Geoffrey: I guess I disagree with the view that all the previous advances in AI have actually helped us solve any significant civilizational problems. I don't see anything that AI has done that has dramatically decreased the risk of nuclear war, that has decreased the risk of genetically engineered pandemics, that has promoted world peace, that has made human lives more meaningful and improved flourishing,
that has solved the demographic collapse problem. All the major civilizational problems that we face, I don't see AI helping with, right? Yes, it can help students cheat on tests. Yes, it can generate kitschy art in infinite supply. Yes, it can produce pretty good generative AI music.
Maybe it'll replace everyone working in Hollywood and we'll have AI movies. But the notion that AI has already proven a major benefit to civilization? I just don't see it.
Liron: Interesting. So you're not even ready to agree with the claim that AI to date has been net positive.
Geoffrey: No, I'm not.
Liron: Okay. I see that as a pretty strong claim myself. So there you and I definitely disagree on this point, because from my perspective, all of this money that's flowing, not even the investment but the revenue, right? All these AI companies, their revenue growth is like the fastest in history.
OpenAI, last we checked, is at $14 billion a year in revenue. And I personally pay them some of that revenue, right? My business pays them some of that revenue 'cause they're quite useful for now. So you're basically saying all of those numbers, all of the economic value that's being created, is kind of misleading, 'cause it's actually more negative than positive even today.
Geoffrey: Yeah, I mean, look, it's hard to know whether it's a bubble and a hype cycle analogous to the crypto hype cycles or the internet bubble in the late nineties. It's hard to know yet whether this is an example of a legit major industry taking off, or whether this is a kind of overinvestment hype bubble and then we'll have an AI winter.
I don't know. Certainly, objectively, the technical advances have been dramatic, as you've said, right?
Liron: The revenues though, right? The revenues are really distinguishing this.
Geoffrey: The revenues are people paying for an exciting new toy, right? They're paying monthly subscription fees to interact with LLMs and generative AI, and that's fine. Maybe it's a sustainable industry that becomes yet another trillion-dollar industry among dozens of other trillion-dollar industries.
But the claim that AI to date has already solved major societal problems, I don't see that. And I think the AI industry's rhetoric that says we'll just invent AGI and then it will solve climate change, it will solve longevity, it will create world peace, it will allow us all to enjoy a fully automated luxury communist utopia, and we'll get UBI and we'll never have to work again,
and we'll just be able to do... I don't know. I think all of that rhetoric is relatively vacuous, and nobody in the AI industry can actually map out step by step a way to get from here to those kinds of utopian outcomes.
Liron: In terms of seeing AI's value, like in the economy, or its value for solving problems, do you personally use AI in your day or in your week?
Geoffrey: Honestly, my most frequent contact is just when I'm using Google and I ask it questions. It'll sometimes come back with pretty useful answers. So I don't currently subscribe to any of the LLMs. I don't really use it in my teaching, my research, my writing, whatever.
My wife finds GPT kind of useful in various weird ways, like monitoring her daily exercise and diet habits and asking it various medical questions and parenting advice and stuff like that. So it's not that I'm against using it. I'm actually a big fan of lots of kinds of narrow AI, just like Roman Yampolskiy is, right?
Thank God for Google Maps, that's narrow AI. If there was narrow biomedical AI that could really help us gradually and patiently solve longevity problems, that would be awesome. I would be all in favor of that, but it's the AGI, the ASI that I'm most worried about.
Liron: Yeah. Well, I think that ChatGPT has 800 million users for a reason. Right? A lot of people are maybe more like your wife than you, and I'm one of those people, and my wife isn't quite yet, even though I keep trying to convince her. There are so many moments in my own life where I'm like, let me sit down and think with my thinking partner here, which is AI, my thinking and research partner.
And going back to your question of like, okay, well how is this going to help us solve health problems or societal problems? I think a lot of researchers are reporting that they sit down and they start the research with AI and it saves them a lot of time.
Geoffrey: Yeah, it might be useful for doing certain kinds of research, but I think a lot of the societal problems that we face have more of a conflicty, game-theoretic issue to them that is not easy to solve just through LLM-based research, right?
So I think geopolitical conflict is not really that much of an LLM-solvable problem. It's a legitimate conflict of interest, where you can't just wade into China versus Russia versus the US and go, ah, the LLM has this ingenious solution to all of your concerns there.
Liron: Yeah, I think I agree with you conditionally, meaning if we really just want this really powerful fix to these huge problems that we are still struggling a lot with, it might just require pushing the intelligence level all the way to that super intelligence threshold and losing control. So I agree with you there that you might have to really do the Hail Mary pass to solve the big problems.
Geoffrey: Yeah, and I mean, there might be some really low hanging fruit that narrow AI could help us with. Like EAs have been talking about the risks of nuclear war for a long time, and we still, as far as I can tell, don't really have any strong models of, would nuclear winter be a thing? Right.
How exactly could you do nuclear non-proliferation more effectively, rather than just dropping huge bombs on Iran, right? So there are a lot of very concrete ways that narrow AI could help reduce major existential risks without necessarily resolving these really thorny geopolitical conflicts, right?
But let's do that. Let's focus on that. I think this is where the AI industry should be talking to experts in nuclear war prevention. I have a friend here who works for the Defense Threat Reduction Agency on nuclear non-proliferation issues, and there's a lot of low-hanging fruit where AI could help with those sorts of things.
There's maybe a lot of low hanging fruit in terms of figuring out how can we prevent the next major global pandemic. How can we do a better job of tracking viruses and figuring out what kinds of behavioral changes would actually mitigate pandemic risk?
But that's not what the AI people are talking about. They're talking about, well, we're going to create a hundred percent unemployment and then have UBI, somehow magically creating, you know, a leisure economy for humans.
Unemployment and Gradual Disempowerment
Liron: That's the other thing I wanted to ask you about, just to round out your possible doom scenarios. I think we've talked a lot about doom scenarios and trade-offs, you know, how to, what's worth building, risking a doom scenario, what's worth risking for? So we've talked a lot about that.
I think the last thing to hit on is unemployment doom, or gradual disempowerment, which is like, yeah, you know, we're not shoved off the earth rapidly, but AIs are doing all the jobs because they have more skills than us on every dimension, and then we just live in like a retirement home, essentially, just doing our hobbies.
But if something ever goes wrong, or if the political tides shift within the world of AI and one of the AIs lobbies for not paying out the humans anymore and just seizing the planet, well, at that point there's no lever we can pull.
Geoffrey: Yeah, and I think it's good to get specific about what human values and desires would be undermined by losing gainful employment, right? The nice thing about people earning their own bread is that it's not just about value and meaning and self-respect; it's also about the autonomy of families to be able to survive without being totally dependent on powerful entities like the nation state,
or an ASI. Right now we're in a very intertwined economy where, to make a living, you have to provide value to others by helping to create various kinds of goods and services. But I think doing that, being good at that, having a career, having a job, being able to provide for your family, gives a lot of freedom and autonomy.
And I think if you move to a situation where the AI slash nation state slash AI industry has complete control over your food and your housing and all of that, we are all wards of the state. We would all become totally dependent, and at some point, sooner or later, the ASI slash state will go, why exactly do we need all these people around? If you want to maximize total sentient welfare, maybe there are cheaper, easier ways to do that.
Liron: Right. You kill the humans and take their atoms and make a bunch of probes, go to new planets, and then maybe make new humans there. But you don't want humans now; that's a waste.
Geoffrey: Yeah, yeah. Incidentally, I think the ethical responsibility that we have as humans thinking about inventing an ASI is not restricted to the fate of our own planet, right? If we invent an ASI that succeeds in exterminating us and using all of our planet's resources, and it creates these self-replicating probes and it colonizes the galaxy,
that's actually what Iain M. Banks called an aggressive hegemonic swarm. That's not cool. That's very disrespectful to any other possible alien intelligences and planets out there. We'd be the ultimate bad guys if we just unleash an unaligned, aggressively expanding set of probes throughout the galaxy that have no particular goals or values that are recognizably good to any other species.
What the hell is that like to any other civilization out there? We are the baddies. We are the Borg. We are the source of misery and evil, and we're a cancer on the galaxy. And that, to me, is horrifying. It sounds very abstract, it sounds very speculative, but the accelerationists, right, say, well, we invent an ASI, we're a stepping-stone species,
it goes far beyond us, it colonizes the galaxy. Well, if the galaxy's empty, maybe that's okay, if we're literally the only intelligent life anywhere. But if we're not, right, and we unleash an ASI that is not provably good and stays good, we will have been the worst thing ever in the history of the cosmos.
I cannot believe that accelerationists haven't thought this through.
Liron: Well, are you worried about us trampling on aliens that haven't built up enough technology to defend themselves?
Geoffrey: Yeah. That's the bottom line, right? Yeah. If we happen to be the first to get an ASI. Right.
Liron: And what if they have plenty of defenses, though?
Geoffrey: Not if we happen to be the first to get an ASI, right. Or even if there are other civilizations out there that are older and wiser and have their own ASIs, and they see, oh, these little people called humans in this particular star system were so effing stupid and reckless that they launched one of these aggressive hegemonic swarms
throughout their local star cluster. That's just embarrassing. That's just gross. No sensible, wise species would do that. So I'm almost preemptively embarrassed about how shortsighted and juvenile the AI accelerationists are being.
And even if we got the ASI first, it might still, from a cosmic perspective, be just a massively embarrassing failure mode for humanity.
Liron: Yeah, I mean, I'm kind of with you in the sense of, like, if it's like grey goo, right? If it just chewed through all the stars and the planets and just burned them all off as fast as possible, if it read too much Bezos and literally wanted to increase entropy as fast as possible, and basically chucked the entire galaxy into a black hole instead of letting it take, you know, many trillions of years. Nope, just accelerate that process.
All right, we all got a big black hole now at the end. I agree that that would be a big waste, not necessarily because I care about how other aliens are going to judge us, right? I have enough confidence to just do whatever I think is best in that sense.
But just because I think there are certain values, like, you know, beauty, social interaction, complexity, that if you throw those all away, that's a shame. I was hoping to just have those for a long time, and, you know, enjoyment; yeah, throw some dopamine in there, some sense of pleasure, a sense of challenge.
I mean, you can put together all of these traits; that's kind of what the definition of good means to me, those things, right? Like, my utility function is some combination of those things. And what you're describing, the cancer universe, the grey goo universe, the reason it's bad or undesirable is in contrast to a universe that scores high on this rubric of good things I like.
Are you basically on the same page as that?
Geoffrey: Yeah, and I think we need to have the humility to understand that there might be forms of value and meaning that we haven't even imagined yet, right? That ASIs might appreciate things we don't yet appreciate, and that an aggressive hegemonic swarm, or grey goo, or just converting the whole galaxy into maximum entropy, might be just dumb and disgusting on so many levels.
I guess the fundamental attitudinal difference between me and a lot of the accelerationists is that I think humanity is doing incredibly well. I think life is really, really good for most people. I think the other existential risks that we confront are relatively modest and manageable, and I would be really happy with another 10,000 or a hundred thousand years of human progress and flourishing and happiness without AI, or without ASI, with only very narrow AI.
And that's maybe just a temperamental difference. I love my life. I love my kids and my wife and my career, and I think that's valuable. And I do not see any other major X-risks that we're facing, beyond AI, that are unmanageable.
I guess other people just have a very pessimistic view, and they think life sucks and we're facing emergencies and everything is getting worse and worse, and climate change and whatever, and they think we need the Hail Mary, we need the ASI to fix everything. That's not my view at all. Here's where I agree with Steve Pinker: I think life is awesome and we should have more of it.
Liron: And you think you'd still feel the same way even if you personally had a bunch of misfortunes affect you, like a terminal illness?
Geoffrey: Oh, I've had plenty of misfortunes and setbacks and heartbreaks, and yet life is still good, and I just want human civilization to be able to continue for as long as we want, not as long as the AI industry wants.
Liron: Right. Yeah. And you know, I wish that more people just had your attitude of not feeling like we have to rush to build AI, because I do think timeline is the crux of the issue here, right? Like, if we had 50 or a hundred years to figure it out, and we wouldn't go too fast, we wouldn't do it prematurely, then I think you and I are on the same page that once we take enough time, eventually handing off to a good AI is the way to go. Right?
Geoffrey: Yeah, maybe. I still have kind of a humanist bias that I actually want my literal, biological descendants to be able to flourish for a very, very long time. And I want that for everybody else on the planet.
I think humanity's awesome and I love it. And I think we have a far deeper and broader range of experiences and values and emotions and capabilities than we had any right to expect from biological evolution. It's not just that we have language; we also have art, and we also have music, and we also have humor, and we also have social interactions, and we also have sex.
The chances of that all emerging in the first intelligent species ever on our planet are kind of miraculous. And I'm not religious, I don't think it's a religious miracle. I think we're just really, really lucky that we got this human nature and that it's so awesome, and I just worry that we're going to squander it for such dumb, greedy, hubris-driven reasons.
Liron: Yeah, I mean, high level, I definitely agree with this claim that we are close to squandering it, right? So even if I don't agree on all the details, the takeaway that I hope people come away with is, man, we've got so much potential and we are so close to just losing it all, and we could just be more careful.
So I think that is kind of the common thread about a lot of people like you and I who have a high P(Doom).
Lightning Round
All right, we're heading toward the wrap up here, so I just want to make sure that we can at least briefly touch on some key points here. So consider this a lightning round. You mentioned something interesting to me before our conversation, which is you think that ASI, artificial super intelligence will likely be bad at solving the alignment problem compared to us humans, and it's going to shoot itself in the foot because it doesn't have all the insight about alignment that human civilization has developed.
Is that right?
Geoffrey: Yeah, kind of, sort of. I think there are a lot of human cultural traditions that have grown up over centuries that have to do with how you manage legit conflicts of interest between individuals, families, the sexes, different cultures, different religions, et cetera, and there's an embodied wisdom in how to solve those
kinds of alignment-within-humanity issues. Right? All of our political institutions are kind of about that. All of our economic institutions are kind of about, you know, conflicts of interest in terms of resources and work and goods and services. And it's not that every human has a deep conscious understanding of why those institutions exist or how they work, but I think it might be quite tricky for an ASI to develop that kind of understanding, strong enough that it can actually do better than we already do. Right?
And the issue here is that for the ASI to actually understand all of these civilizational traditions and why they work and why they help humans align with each other, it would actually need to do a lot of the behavioral and social research that humans have already been doing, right?
It might even need to run a lot of experiments on humans to see what actually works, causally. It can't just observe us, right? You can't establish causality just through observation. You have to run randomized controlled trials to figure out what is actually making this economy work or what is actually making this political system work.
Okay? If you hand over to an agentic ASI the capability to run large-scale experiments at the societal level, so that it can really understand how to solve alignment among humans, that is really scary, right? That means you're basically creating a totalitarian ASI that can say, okay, you group of humans, I'm going to assign you this task, right?
Or you're going to adopt this lifestyle. You guys think you're monogamous? I'm going to try a polyamory experiment, right? You guys think you're capitalist? I'm going to try communism. The only way for the ASI to solve human-to-human alignment would be to give it this kind of power to run large-scale societal experiments,
so it can map the causality, and I think that seems kind of reckless.
Liron: All right. Yeah, this is the lightning round, so I'll just tell the viewers, I think you and I probably have significantly different views on this front, but I just wanted to get it out there 'cause it's interesting. Another thing I want to touch on is China, right? You're all about, hey, we should be working with China, not antagonizing China. Correct?
Geoffrey: Yeah, I think a lot of the rhetoric I'm seeing from the AI industry in America is incredibly ignorant and xenophobic about China. They have this kind of stereotype of China as this thoughtless, totalitarian dictatorship that wants to take over the world. And I think anybody who is serious about understanding Han Chinese civilization and its actual goals,
and what is driving Xi Jinping and what is driving current CCP ideology, would go: the AI industry is ignorant about a lot of aspects of human life, but particularly about China. The view that the Chinese will just witlessly rush ahead to build an ASI if they know it can't be aligned or controlled is just dumb.
I think on average, China's leaders are smarter and have more foresight and a longer-term view than America's political leaders. Chinese civilization traditionally has been much more future-oriented and patient than American civilization. So our job, right, as LessWrong, EA, rationalist types, is to help the Chinese understand that ASI probably cannot be controlled or aligned.
And if they rush ahead to build it, that's just as dumb as us rushing ahead to build it. I don't think this is a prisoner's dilemma where whoever builds ASI first wins, right? I think whoever builds ASI first loses and everybody else loses. And if everybody understands that, it's not a prisoner's dilemma at all.
It's just coordinating on do the sensible thing and don't build it.
Liron: Yep. Agree a hundred percent. We cannot afford to antagonize China. Yes, cooperation is hard, but you know what else is hard? Racing to super intelligent AI and then not killing everybody, right? So we're between a rock and a hard place.
Geoffrey: Yeah, absolutely. And so I hope all the people producing great content like this get it translated into Mandarin and Cantonese and release it as best you can to Chinese audiences. Take a more global view of this, right? Increasing awareness of AI safety is not just an American issue or a European issue; it's a global issue.
And I think we should try to be as inclusive as possible, crucially understanding that both China and America need to understand these risks. And if both countries do, we can actually coordinate and solve this. I'm actually much more optimistic about this than I was a year ago.
Liron: Well that's a positive note to wrap up on.
Closing
Professor Geoffrey Miller, thanks so much for coming on Doom Debates and lending your credibility as a researcher of evolutionary psychology and brain algorithms and human intelligence and AI, all of these different fields. You're letting us know P(Doom) is still 50% despite actually being an expert in those fields.
And I really appreciate people like you coming on the show and expressing that viewpoint.
Geoffrey: My pleasure. Take care.
Liron: Again, huge thanks to Geoffrey for coming on the show. Just want to reiterate, he really knows what he's talking about. His most cited paper is called Designing Neural Networks using Genetic Algorithms, and he's got such a diverse background studying how humans align with humans, how AIs could potentially align or how to steer AIs or how AIs evolve.
He's a pretty robust authority figure in this field, and he's coming out here telling us that he has a 50% P(Doom). So how is it okay for people to just be calm and act like everything's going to be fine? There's clearly a very significant risk of catastrophe. That's what the show is here to remind you of: yep, we are definitely watching pretty likely imminent doom.
Just want you guys to keep that in mind, and if you need weekly reminders, well, keep watching the show. Subscribe to my Substack, doomdebates.com. That's where you get transcripts and other bonus content. Just type your email address into doomdebates.com. That way I have your email address, and it's a much better way to consume the show, because then it doesn't really matter what the YouTube algorithm decides to do.
It's a more direct linkage handled via Substack, which tends to be creator friendly. Just smack that subscribe button on both doomdebates.com and YouTube. Thank you very much for doing that. I really appreciate you guys coming and subscribing to the show.
It helps the cause of getting more luminaries like Professor Geoffrey Miller to come on the show and snowballing into being a huge part of the discussion so that the whole world realizes P(Doom) is high and we can better coordinate a solution. Alright, that is it for today. I will see you guys all next time here on Doom Debates.
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
Share this post