These Effective Altruists Betrayed Me — Holly Elmore, PauseAI US Executive Director

Do AI safety insiders care more about status than safety?

Holly Elmore leads protests against frontier AI labs, and that work has strained some of her closest relationships in the AI-safety community.

She says AI safety insiders care more about their reputation in tech circles than actually lowering AI x-risk.

This is our full conversation from my “If Anyone Builds It, Everyone Dies” unofficial launch party livestream on Sept 16, 2025.

Timestamps

0:00 Intro

1:06 Holly’s Background and The Current Activities of PauseAI US

4:41 The Circular Firing Squad Problem of AI Safety

7:23 Why the AI Safety Community Resists Public Advocacy

11:37 Breaking with Former Allies at AI Labs

13:00 LessWrong’s Reaction to Eliezer’s Public Turn

Show Notes

PauseAI US — https://pauseai-us.org

International PauseAI — https://pauseai.info

Holly’s Twitter — https://x.com/ilex_ulmus

Holly’s Substack:

Holly’s post covering how AI isn’t another “technology”:

The “technology” bucket error, by Holly Elmore:

As AI x-risk goes mainstream, lines are being drawn in the broader AI safety debate. One through-line is the disposition toward technology in general. Some people are wary even of AI-gone-right because they are suspicious of societal change, and they fear that greater levels of convenience and artificiality will further alienate us from our humanity. …

Related Episodes

Holly and I dive into the rationalist community’s failure to rally behind a cause:

Are We A Circular Firing Squad? — with Holly Elmore, Executive Director of PauseAI US

Eliezer Yudkowsky can warn humankind that If Anyone Builds It, Everyone Dies and get on the New York Times bestseller list, but he won’t get upvoted to the top of LessWrong.

The full IABED livestream:

“If Anyone Builds It, Everyone Dies” Party — Max Tegmark, Liv Boeree, Emmett Shear, Gary Marcus, Rob Miles & more!

Eliezer Yudkowsky and Nate Soares just launched their world-changing book, If Anyone Builds It, Everyone Dies. PLEASE BUY YOUR COPY NOW!!!

Transcript

Intro

Liron Shapira 00:00:37
Alright, ladies and gentlemen, another guest coming up. You know her from PauseAI fame. If you’ve ever seen me yelling into a megaphone, that was probably from a protest that she planned. Please welcome Holly Elmore.

Holly Elmore 00:00:54
Hello. I wanted to find something festive like your hat, but I couldn’t.

Liron 00:00:59
If you bought the book, that’s already good enough. So, more about Holly: She’s the founder and executive director of PauseAI US. She holds a doctorate in evolutionary biology from Harvard University, and she’s been involved in the world of effective altruism.

Holly’s Background and The Current Activities of PauseAI US

She knows some of the key individuals working in frontier AI development and AI safety, the same people she now leads protests against. Fair to say?

Holly 00:01:21
Yeah.

Liron 00:01:23
Yeah. And I just heard your episode with Robert Wright, which I thought was really good. You guys were basically talking about social dynamics of this community, and you mentioned that you were Facebook friends with Dario. Is that right?

Holly 00:01:34
I think I still am. I don’t know if he is very active on Facebook these days.

Liron 00:01:38
Do you ever send him a poke?

Holly 00:01:41
I did accidentally invite him to early protests, including one at Anthropic. ‘Cause you have to click one by one by hand and I managed to invite him.

Liron 00:01:53
Pretty funny, yeah. The “Dario sucks” group.

Holly 00:01:56
Yeah.

Liron 00:01:56
So in general, what’s the current state of the protesting movement?

Holly 00:02:00
Well in San Francisco where I live, StopAI has been much more active and they’ve been focusing mostly on OpenAI, up until this hunger strike that Guido has had at Anthropic. That’s been more of my local scene.

Around the country, we’ve got a lot of groups starting up and doing small starter protests. We’ve got some great photos from our 15 local groups, and I’ve been working a lot on maintaining those.

But actually, our next big event in October is going to be around this book, and it’s gonna be more of a fun one. You know, it’s not gonna be yelling at AI company buildings, it’s gonna be discussing this book.

So I do wanna plug that. The one in San Francisco where we are is gonna be at Manny’s in the Mission, five to eight on October 4th. There’s gonna be events across the country, and there’s also gonna be events across Europe during the two-week period after October 4th.

Holly 00:02:57
We’re giving everybody a chance to read the book, and then we’re gonna come together and discuss it, and discuss what you do from here if you feel worried.

Liron 00:03:04
Oh yeah. And what do you think about Michael Trazzi’s hunger strike?

Holly 00:03:07
Well, PauseAI doesn’t do that. A lot of people attributed the hunger strikes to PauseAI, which was interesting. I guess it was kind of gratifying, like that’s who they think of, you know?

But yeah, we don’t do stunts. Trazzi decided to do it. He was inspired by Guido. I was honestly pretty worried. Trazzi had a very concrete demand. I’m nervous about stunts because I don’t want things to go too far.

Holly 00:03:37
I think we really have such a strong base of just moderate activism to get established in this space. It’s already quite enough to be getting on with. But I think the reaction I saw to it was pretty positive. It really impressed upon people the severity of what’s going on.

So I think it ended up having a positive reception.

Ori Nagel 00:04:00
I mean, I was blown away that some people were like, oh, this must be PauseAI. I saw someone on Twitter, you know, a notable AI safety figure, say, “Oh, I’m not sure if it was PauseAI but it’s people who are spiritually aligned with PauseAI.”

Liron 00:04:13
PauseAI is becoming like one of those generic brand terms, you know? Like, oh, I’m gonna Google it. I’m just gonna PauseAI it.

Holly 00:04:17
I mean, that was the hope. As long as you’re not claiming to officially represent our organizations, like, take the name. It’s a movement that’s meant to represent something very broad.

It’s called PauseAI ‘cause a lot fits under the umbrella of pausing AI. I think you can say they’re spiritually aligned with PauseAI, but it just so happens we have pretty strict rules about not doing that kind of stunt.

The Circular Firing Squad Problem of AI Safety

Liron 00:04:41
So Holly, I feel like I do like to compliment the guests, but you really deserve it. You’re a thought leader, at least as far as my own thinking is concerned. Everybody check out Holly’s blog because you make a bunch of points in your posts and also in your LessWrong posts that I don’t see other people saying. It’s legit original.

It’s influenced me. And in particular, you know, protesting, this idea of like, “Hey, we should protest.” I don’t think that I would’ve gotten there without your influence. So that was really good.

Liron 00:05:02
And you’ve got other posts talking about how AI isn’t another technology, riffing on that. So you’ve got a bunch of different posts. And one of the things that I think you and I are both on the same page on that I feel like other people are not is just this idea of the circular firing squad, right?

Like people just love to stay on their forum and argue instead of just agreeing enough to get out there and do the mission.

Holly 00:05:32
Yes. Yeah, it is something. It’s funny ‘cause you just mentioned Robert Wright’s podcast. I end up engaging a lot in talking about the sociology of the groups. Starting PauseAI was kind of trying to get away from them and leapfrog them and just talk to the public.

And I’ve been surprised how much I kind of still get mired, because the story of what’s going on isn’t really just the story about the technology. It’s not just about the kinds of things discussed in this book, If Anyone Builds It, Everyone Dies. That’s the theoretical case.

Holly 00:06:06
Like, the issues on the ground do have a lot to do with tribalism and who knows who and who trusts who and who’s investing in whom. It actually ends up being kind of a complicated morass that you can’t leave. But you should leave as much as you can. You can take action anywhere you are without being involved in this whole deep sociology of the problem.

And I keep trying to move out. But the further out I move, the more productive it is when I double back. So yeah, I think I’ve argued many times and I’ve said, “This is the time. I’m just gonna leave. I’m just gonna be focused only on trying to reach the public.”

Holly 00:06:46
And I do end up coming back and then delivering my take on what’s going on in the psychology of tech or something.

Liron 00:06:54
Exactly, exactly. I think I have an advantage in terms of my psychology, which is that I’m just relatively less sensitive to what other people think. That’s my superpower of cringe, right? I was dressed as the grim reaper, right?

So the cringe is like my Star Wars force. But in your case, I don’t actually think that’s you, right? I think you actually are sensitive to all this stuff.

Why the AI Safety Community Resists Public Advocacy

Holly 00:07:20
Yeah. The thing that surprised me was that I really thought the existing AI safety community, before ChatGPT was released, would be excited to move into stuff like protesting and doing advocacy.

I really believed the story I had been told for like 10 years: that they didn’t already do those things in AI safety because it just wasn’t time.

Holly 00:07:45
Like, the idea is too sci-fi, people don’t listen to you, you’ll have no impact, and people think you’re crazy. And so that will backfire if you try to talk to the public about it or talk to politicians about it. If you talk to the government about it, they will just wanna build it. They’ll see how powerful it is and that’ll be dangerous.

So they had all these excuses. And then when it’s like, okay, now we have ChatGPT, and the public’s really getting it. We’re getting these amazing polls saying that 70% of people think that there should be regulation on this. They think it could be dangerous.

Holly 00:08:16
They just didn’t wanna do it. And the first couple protests I held, a lot of people on my own side were just incredibly, viciously brutal about it.

And it took me a long time to realize, like, “Oh, they just don’t wanna do this.” Because they couched it in terms of, “This is wrong,” and they would try to make me feel bad, like, “This is bad for the cause. You’re actually doing something really harmful.” And I would feel like I had to take that really seriously even though it didn’t make any sense to me.

Holly 00:08:42
That’s really what I’m getting at. It was funny to hear people describe me as having thick skin. Like, no, I don’t. Or, in one way I do: I don’t care if somebody gets it wrong in the mainstream media. But it was the people who I really thought were my friends and allies doing this. That hurt.

And you know, they didn’t just tell me, “I don’t wanna do this,” or “I think it’s cringe.” They kind of tried to convince me that they had the moral high ground in not going out there.

Liron 00:09:06
Right, right. Yeah, it’s like gaslighting almost.

Ori 00:09:09
Can I throw out an idea? You know, we were just talking about how there’s the circular firing squad. It’s like people who are on your own team, maybe you go harder on. And I wonder to what extent that is a lot of the animosity that you see from people in tech.

Because maybe they agree with you on many things. They see you as kind of the same similarly aligned person, but on this topic they don’t. And that can bring up a lot of feelings.

Holly 00:09:37
Oh yeah. I won’t say the name of this person ‘cause I think they were just on the show. But I’ve had many people come at me and say that I’m like embarrassing, making it hard for them to do what they need to do for AI safety because I seem like a Luddite.

And that happened many times. It often just made no sense. I got a lot of glimpses into people’s deep thinking on this, because they would just say things where I had no idea where it even came from. But to them, the ideas were connected.

Holly 00:10:12
They looked at what I was doing and they’re like, “That’s embarrassing. That’s like a loser thing. You’re not gonna win. That’s leftist.”

A big comment that I got was that the protesters must be feminists. So it went against a lot of cool-boy tech energy. And because of that, they were mad. Like, “Stop embarrassing us.”

Holly 00:10:30
Even back then, and it’s gotten better already, people really believed our only chance was to get in with powerful tech guys and powerful intellectuals and stuff. That was their own path to influence. And seeming really cool was our only way to achieve this cause.

And I just think that’s... I don’t know, maybe there’s a time when that’s true, but that stopped being true. You have this huge open opportunity to talk to normal people and frankly have real power.

Liron 00:11:03
Yeah. I mean, look, I like that you’ve basically gone explicitly antagonistic, because I have too with the AI companies. Because it’s so nice. I mean, on a personal level, people who work at Anthropic, you know, if they were doing something else, take that variable out of the equation, objectively they seem like really good human specimens.

I wish there were more people like them in the world. But then if you look at their actions, it’s like, isn’t that actually a really bad choice?

Liron 00:11:25
And I think you’ve made that decision of like, well we really gotta cut off the sympathy and just be like, you know, stop making bad choices, guys.

Breaking with Former Allies at AI Labs

Holly 00:11:34
It does amaze me. After starting to do advocacy, I realized how insular and protected the world I had been in, EA really, was. They really just... I was like, they don’t know that good people who were really impressive and interesting and smart and wanted to do good stuff have done terrible things.

Like, what do they think? When an entire country, like its military, does the wrong thing, there are lots of people swept up in that who are not bad people, but you have to oppose them. It’s so strange. Like, “Oh, but ‘cause we’re good, it’s gonna be fine.”

Holly 00:12:12
And I really think that a lot of them, it’s not that they’ve taken any steps that they really think are wrong. It’s just that they’re trusting of the leadership and they never really were reasoning that well morally for themselves, even though the moral calculus that they endorse is very complicated.

And if a couple of conditions change, it should change their analysis. They’re clearly just following the leader who provides them nice stuff and cool jobs to do, and a mission where they’re saving the world and they get to think about what they’re gonna do after the singularity and all that.

Holly 00:12:44
Yeah, it’s been very upsetting, because I also loved that community. I thought they were doing the right thing. And I do think a lot of people at Anthropic really mean well. But I also think it’s absolutely their responsibility to see through this.

LessWrong’s Reaction to Eliezer’s Public Turn

Liron 00:12:58
Yeah. Well, I’ve got an anecdote to share. So I just posted the Eliezer interview on LessWrong. I’m like, “You know who’s going to like an Eliezer interview? The LessWrong community!”

Now to be fair, it did get like 58 upvotes or something, so that’s considered a lot for LessWrong. So I don’t wanna complain too much. But then I look at the comments. There’s two top level comments.

Liron 00:13:15
First comment: “Don’t you think you were super flattering to Eliezer? It’s just too much Eliezer flattery. It should be about the ideas, not the person. He even says he wants it to be about the ideas, not him.”

And I’m like, okay, well, I mean, he is a genius and the ideas do seem to come from him, but okay, alright.

Liron 00:13:34
Second comment: “I also agree about the comment being too fawning around Eliezer.”

Okay, that was the entire LessWrong comment section. Which is like, okay, that’s totally fine. I mean, it’s fine to have nitpicks and be a nuanced thinker, but it’s just like the ratios get distorted, right?

Holly 00:13:51
This is on a website where you can just cite something Eliezer’s written like it’s the Bible.

Liron: Right, exactly. He started LessWrong. Everybody’s on LessWrong because of Eliezer Yudkowsky, right?

And the only two comments about this big Eliezer Yudkowsky interview are that you’re too fawning over Eliezer. It’s like, somebody needs to do the job of fawning over Eliezer Yudkowsky. I’m doing the yeoman’s work here.

Holly 00:14:10
Are they turning on Eliezer? I’ve been wondering actually if LessWrong is gonna turn on Eliezer because of writing the popular book.

Liron: I mean, that’s the best way to get status, right? To be so original that you turn on all the others.

Ori 00:14:22
He sold out, right?

Holly 00:14:24
Yeah. That’s what they didn’t like about what I was doing. I was never fully welcome at LessWrong, I think, frankly for being a girl. They would accuse me of being, like, a high-status person who didn’t get it because I went to Harvard.

And then they didn’t like that I did EA stuff. They were often hostile to that. But Eliezer, they’ve not really known what to do with him since he wrote the Time article, first espousing this view and switching to government intervention as the hope.

Holly 00:14:45
I mean, it’s really an extremely libertarian place. And there’s no contingent of LessWrong that really embraces government solutions.

The whole community was filtered on a few things. And one of them is hating school and being traumatized there. So I think anybody but Eliezer would’ve just been booed out from the beginning, you know, with that position.

Holly 00:15:19
And people haven’t really known what to do with it. But I kind of wondered if the contingent that’s just online is gonna turn on Eliezer now, or be less associated with him, even though this is the website that hosts the Sequences.

Liron 00:15:38
Exactly. Exactly.

Liron 00:15:40
Sweet. Okay, great. So yeah, we’re coming up on time here, but I guess a good note to end on is just... I think we hit on some important topics, and I do see you as a thought leader, or perhaps a mindset leader, or just socially. I hope that you keep doing that.

Holly 00:15:55
Thank you. It means a lot.

Liron 00:15:57
Sweet. Alright.

Holly 00:15:58
Actually, coming from you... Liron has spoken at almost all of our recent protests, and he’s really good, uniquely talented at megaphone skills.

Liron 00:16:06
Megaphone skills! Alright, thanks Holly.


Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏
