
AI Won't Save Your Job — Liron Reacts to Replit CEO Amjad Masad

Amjad Masad is the founder and CEO of Replit, a full-featured AI-powered software development platform whose revenue reportedly just shot up from $10M/yr to $100M/yr+.

Last week, he went on Joe Rogan to share his vision that "everyone will become an entrepreneur" as AI automates away traditional jobs.

In this episode, I break down why Amjad's optimistic predictions rely on abstract hand-waving rather than concrete reasoning. While Replit is genuinely impressive, his claims about AI limitations—that they can only "remix" and do "statistics" but can't "generalize" or create "paradigm shifts"—fall apart when applied to specific examples.

We explore the entrepreneurial bias problem, why most people can't actually become successful entrepreneurs, and how Amjad's own success stories (like quality assurance automation) actually undermine his thesis. Plus: Roger Penrose's dubious consciousness theories, the "Duplo vs. Lego" problem in abstract thinking, and why Joe Rogan invited an AI doomer the very next day.


00:00 - Opening and introduction to Amjad Masad

03:15 - "Everyone will become an entrepreneur" - the core claim

08:45 - Entrepreneurial bias: Why successful people think everyone can do what they do

15:20 - The brainstorming challenge: Human vs. AI idea generation

22:10 - "Statistical machines" and the remixing framework

28:30 - The abstraction problem: Duplos vs. Legos in reasoning

35:50 - Quantum mechanics and paradigm shifts: Why bring up Heisenberg?

42:15 - Roger Penrose, Gödel's theorem, and consciousness theories

52:30 - Creativity definitions and the moving goalposts

58:45 - The consciousness non-sequitur and Silicon Valley "hubris"

01:07:20 - Ahmad George success story: The best case for Replit

01:12:40 - Job automation and the 50% reskilling assumption

01:18:15 - Quality assurance jobs: Accidentally undermining your own thesis

01:23:30 - Online learning and the contradiction in AI capabilities

01:29:45 - Superintelligence definitions and learning in new environments

01:35:20 - Self-play limitations and literature vs. programming

01:41:10 - Marketing creativity and the Think Different campaign

01:45:45 - Human-machine collaboration and the prompting bottleneck

01:50:30 - Final analysis: Why this reasoning fails at specificity

01:58:45 - Joe Rogan's real opinion: The Roman Yampolskiy follow-up

02:02:30 - Closing thoughts

Show Notes

Source video: Amjad Masad on Joe Rogan - July 2, 2025

Roman Yampolskiy on Joe Rogan - https://www.youtube.com/watch?v=j2i9D24KQ5k

Replit - https://replit.com

Amjad’s Twitter - https://x.com/amasad

Doom Debates episode where I react to Emmett Shear’s Softmax - https://www.youtube.com/watch?v=CBN1E1fvh2g

Doom Debates episode where I react to Roger Penrose - https://www.youtube.com/watch?v=CBN1E1fvh2g

Transcript

Introduction and Amjad's Vision

Amjad Masad: The future where AI is headed is everyone's going to become an entrepreneur. I think a lot of those people will be able to reskill.

Liron Shapira: Why do we think that they're going to become an entrepreneur? What skill are they going to draw on that is suddenly differentiated from the AI?

Amjad: We have no evidence that it can generate a fundamentally novel thing, a paradigm change. Can a machine go from Newtonian physics to quantum mechanics?

Liron: I don't get why the average person is now going to do something that has a little bit of that paradigm shifting quantum mechanical discovery embedded inside it.

Welcome to Doom Debates. Today I'm going to be reacting to Amjad Masad's recent interview on the Joe Rogan Experience. Amjad Masad is the founder and CEO of Repl.it, a very interesting company. They make a great product. A lot of people are using Replit to prototype new software products, new websites and even deploy them. It's a very impressive product. You can ideate, you can prototype, you can deploy.

I'm probably going to be using it more myself for various projects. I think they've done really great work on that product and it's been in the news because in the last year, apparently they've gone from $10 million a year to $100 million a year in revenue thanks to their new AI agent integration. So you can open up Replit, you can tell the AI what you want to do inside Replit, and it's kind of like a software engineer working for you and it'll build you the whole website and it'll check it in to git source control.

So honestly, very impressive, amazing product that they've got here on Replit. But this isn't a podcast about Replit. This is actually all about various topics that he wants to talk about with Joe Rogan. And the one that's of interest to me is AGI and what the next few years of AI progress are going to look like.

A lot of the podcast is talking about that, so I'm going to be excerpting those clips and reacting to them, point by point. Amjad's position on AGI is a pretty unique combination of factors. He grants a lot of the normal stuff that the AI companies talk about doing. He says, yeah, they're going to get better and smarter and Replit's going to be a more and more powerful tool and we're going to automate software engineering. But he's pretty skeptical about superintelligence. He doesn't think that in the next few years you're going to have an AI that's better than a human at virtually every skill.

He draws the line somewhere. It's actually not very clear to me where he draws the line. I'm going to try to be unpacking it over the course of the interview. I think this is a particularly interesting interview because it's rare for a successful visionary, product leader type CEO to also be opining confidently on the future of AGI and also have a pretty untraditional mix of opinions.

Overall, you're going to see I have a lot of disagreements with how he's talking. I don't think the messages he's putting out there about AGI are very productive. So plenty to disagree about. But I also may be wrong myself. So listen to the episode for yourself and then let me know your thoughts in the YouTube comments or on my Substack. Here we go with Amjad Masad on Joe Rogan.

Okay, to kick things off early in the interview, he comes out with a pretty grand, impressive vision for Replit.

The "Everyone Will Become an Entrepreneur" Claim

Amjad: We want to bring AI coding to literally every student, every government employee. Because the thing about it is, it's not just entrepreneurs that are going to get something from it. It's also... my view of the future, where AI is headed, is everyone's going to become an entrepreneur.

Joe Rogan: Really?

Liron: Yeah, everyone's going to be an entrepreneur. Really? Really. Does that make sense? So, first of all, I can't help noticing there's some bias here. It's not a coincidence that Amjad himself is an entrepreneur. And he's saying, you know what? Everybody's going to be in the future an entrepreneur. That is a job where humans definitely add value.

The job security of people like me is so strong, it's a bias. Because if you remember, Richard Hanania came on my show a couple weeks ago and he was saying how AI is never going to replace Scott Alexander at blogging because Richard himself is a blogger. And then a few weeks before that, Marc Andreessen was saying AI could never be a venture capitalist. That's probably the last job AI will replace because identifying founder talent is such a unique human skill.

So everybody seems to have this bias where they're used to navigating the waves of their own career. So anytime something gets shaken up, they look at their memory and they're like, ah, yes, I just adapted to the shakeup. I moved to higher ground and I was able to restore my career. And so when the AI revolution comes, people like me are still going to roll with the punches, because we always have.

Analysis of Entrepreneurial Skills and Bias

Liron: That seems to be the bias at play here. But let me put that aside. Forget about the bias. And by the way, I'm an entrepreneur too. I'm not a multi-billion-dollar entrepreneur like Amjad, but I'm a small-scale entrepreneur, and I do think I empathize with what Amjad is saying. I do think I'll just come up with ideas of stuff that I can sell to people. I've done it before, so I feel you, Amjad.

But let's just think objectively here. I think there are two parts to Amjad's claim. One is that the top entrepreneurs, like himself, will still have jobs for a very long time. That has some plausibility to it, because I do think that an entrepreneur is somebody with a lot of steering power over the future.

The domain of what the entrepreneur does is very broad because a lot of different things can happen in their business. They just have to be open to anything that could happen. Lawsuits or your suppliers screw with you, or your customers screw with you, or the government screws with you. And you just have to roll with the punches. And there could be all kinds of different punches. Your employees have an issue, you're managing so many people, so many departments. Everything bubbles up to you. Every problem bubbles up to you.

So let's talk in terms of broad-domain outcome steering power, which I think is the core of the kind of superintelligence that's scary: outcome steering power is what's going to be powerful and scary about superintelligent AI. I agree that an entrepreneur or a CEO probably has the largest dose of that among the human population. Elon Musk probably has his hand as firmly on the steering wheel of humanity's future as anyone.

So Amjad says that entrepreneurs are going to be the last ones to be replaced. I mean, that's not exactly what he said. But if that's his claim, that actually makes a lot of sense to me. Entrepreneurs are the ones who are able to run to the highest ground when the flood is coming. That makes sense.

But what Amjad is actually saying here is he's talking about the average entrepreneur. His exact quote is the future where:

Amjad: AI is headed is everyone's going to become an entrepreneur.

Liron: And that's when things really stop making sense to me. Because think about who everyone is. Everyone is average people who were working in customer service and then they got fired because the company downsized the team of 100 customer service people down to 10. And the AI is essentially doing the work of the other 90.

Or they were doing outbound sales, making a lot of phone calls, and now AI has automated, let's say, 90% of the job and is getting to 100%. Everyone is people working in education, the faculty at a high school. That's everyone. Their job is narrow in scope. It's not like they're these big outcome steerers dealing with all kinds of different assaults, problem-solving hard problems. No, they have a pretty well-defined scope to their job, and they've spent many years just getting used to doing a job with that scope.

So in this new economy, when you have this new AI that can do all these different things, why do we think that they're going to become an entrepreneur? What skill are they going to draw on that is suddenly differentiated from the AI?

The Brainstorming Challenge

Liron: Well, they now have the ability to issue a prompt to a tool like Replit. And they'll kind of be this project manager with a really good team of engineers. And the engineers will deliver, let's say, good software. I mean, there's some kinks that need to be worked out right now. But let's say a couple years pass and the software gets really, really good and anybody can essentially have a contract team of engineers.

Does that let the average person become successful? You might say, well, yeah, because now the average person could be like, hey, I'm going to invent my own e-commerce business. I'm going to figure out something to sell and the AI will manufacture it for me and it'll build the software for me and I'll just collect the profits.

But now you have to zoom in and you ask, well, wait a minute, what was the key part that the human did? The idea of what to sell. And so now it comes down to brainstorming. The brainstorming challenge. Let's go head to head. Human versus AI, who can sit down and brainstorm better ideas?

Because the moment that the AI can brainstorm better than the human, why would the human be the one who's making more profit in their e-commerce business? You're going to have an AI powered e-commerce business that was dreamed of by the AI that's outselling the human and it's going to have thinner margins because it doesn't need to pay the human overhead salary.

In fact, what you're going to have is a human who owns a business of a million AIs and all of those million AIs are playing the role of entrepreneur brainstorming a business. And so you have this giant pyramid where even the entrepreneurship layer is reporting up to some human mega owner, let's say.

And now the mom and pop, the "Hey, I had this idea to 3D print figurines that are based off this obscure old novel that I found that's out of copyright." Right. That's the human innovation or whatever that the average person can do. But the AI is also doing that at scale. Where exactly is the human differentiation?

This is what I wish Amjad would be more specific about because I think he's kind of left it as an exercise to the viewer to figure out what he's talking about when he says everybody can be an entrepreneur. Or maybe it hasn't occurred to him that the thing that the human is adding, the brainstorming or the life experience or whatever, the AI can still do that too. Head to head.

Let's see if Joe manages to ask Amjad the kind of obvious follow-up questions that need to be asked about his claim, or if Joe Rogan just lets it slide. I wonder which one he's going to do.

Joe Rogan's Response to the Entrepreneur Claim

Amjad: My view of the future, where AI is headed is everyone's going to become an entrepreneur.

Joe: Really?

Amjad: Yeah.

Joe: And so this is the best case scenario future, as opposed to everyone goes on universal basic income and the state controls everything and it's all... that's right, everything is done through automation.

Amjad: I don't believe in that at all.

Joe: You don't? I don't. Okay, good. Help me out, man. Yeah, so give me the positive rose colored glasses view of what AI is going to do for us.

Amjad: Yeah. So AI is good at automating things. I think there's a premise to human beings still.

The "Statistical Machines" Framework

Liron: The way Amjad is using this word automation, it's a more constrained sense of automation than how I would use the word. I would just use the word automation to be like, oh yeah, it's taking things that humans are doing and then it's doing them. So if a human entrepreneur is starting a business, well, the AI could automate starting a business.

But apparently Amjad wants to use the word automation to only refer to limited scope automation. So if it's a factory assembly line, that's automation. And apparently if it's writing blocks of code, that counts as automation. But if it's project managing a large software project, I think in Amjad's terminology he wants you to not call that automation.

It stops being automation when it's really large in scope or when you have to inject true creativity, under the assumption that you don't need true creativity in order to write blocks of code; you only need true creativity to manage a large software project. I'm not even trying to make a point right now. I'm just trying to set your expectations for how Amjad is going to be using this word automation. So let's continue.

Amjad: Fundamentally, the technology that we have, large language models today, are statistical machines that are trained on large amounts of data and they can do amazing things.

Liron: Don't worry guys, they're just statistical machines. So the only kind of jobs that they can do are statistical machine jobs. Graphic designer, turns out that that was a statistical machine job. Junior software engineer, turns out that that was a statistical machine job. But don't worry, somewhere along the line when you get to entrepreneur, that's not going to be a statistical machine job, but it is going to be a job that the average human can do.

These are all the inferences that you need to make if you want to get into Amjad's worldview.

Amjad: I'm so bullish on AI. I think it's going to change the world, but at the same time I don't think it's replacing humans because it's not generalizing. Right. AI is like a massive remixing machine. It can remix all the information it learned and you can generate a lot of really interesting ideas and really interesting things and you can have a lot of skills by remixing all these things.

The Abstraction Problem

Liron: Okay, so according to Amjad, AIs don't generalize. And that's why it's not going to replace humans because humans do generalize. And all that the AI does when it impresses you, it turns out it's only doing it using statistics and remixing. And humans can do more than statistics and remixing. There's some sort of separator there between AIs and humans.

Of course, if you want to test that kind of abstract claim, the way you test it is by working through specific examples. Here's a specific example: managing an e-commerce business 300 days out of 365, delegating the other 65 days to a human part-time entrepreneur to deal with the tricky stuff, but you manage it 300 days out of the year. Is that a task that an AI can do? Can I remix and statistically non-generalize my way to managing an e-commerce business that's doing $20 million a year in revenue, and only delegate to the human the last 10 or 20% of the time? Does that fall under Amjad's carving of the world into these concepts? It's quite unclear.

And I want you guys to be looking out for this kind of trick. This is a rhetorical trick that actually most of the time the person using this rhetorical trick doesn't even understand it themselves. So I actually think that this is how Amjad thinks to himself. I don't think that his caliber of thinking is rising beyond the rhetoric that he's using with Joe Rogan.

I think that he is one of the many people who think in abstractions without unpacking their abstractions. When you watch Doom Debates, sometimes you learn lessons about how to think productively. So in this case, the lesson is: think specifically.

It's the same lesson that you'll learn if you watch my episode from a couple weeks ago where I react to the episode with Emmett Shear. I had a really big beef with a lot of the concepts Emmett was trying to teach us. I had a beef with those concepts being too confusing, potentially self-contradictory, potentially not useful. I wasn't sold on any of his concepts because throughout the whole talk, he didn't undertake the exercise of mapping his abstract concepts to specific examples.

And so the whole reaction episode was me trying to basically do his job for him of being like, okay, here's a specific example. What would Emmett Shear's distinction say about this? It's unclear. Maybe he means this, maybe he means that. Either way, I don't see how this is a useful distinction.

And so if you want an hour or two of me doing that, go watch the episode with Emmett Shear. Now I find myself doing the same thing with Amjad, because when he's hand-waving with these abstractions, it sounds plausible. On the abstract level, it sounds plausible when he says humans generalize, AIs don't generalize. Humans do some kind of magic. AIs only do statistics and remixing. It sounds plausible.

But then you bust out a specific example. What if you have an existing business that's humming along like an e-commerce business? It's already humming along. Does the fact that the business already has a track record of operating mean that you'd no longer need an entrepreneur to keep running the business?

What about Replit? Well, Replit is Amjad's own business. Replit is a kind of business that's constantly innovating. So it's easy to say, okay, Replit is probably one of the businesses where a human needs to be at the helm the longest, because Amjad keeps thinking of new directions, new strategies. Fine. I mean, the guy's a talented entrepreneur, right? So I'm willing to hand him that. I'm willing to say you're clearly doing more than statistics. You're going to be the last one that the AI is replacing.

But if you look at a typical business, a small business, I've run small businesses before, and I don't think that they're anything special. If you look at a typical e-commerce business, okay, it's a known playbook, yeah, they're kind of working at it every day. They're finding little hacks, little improvements. But the whole business has what you might call a remixed playbook, right? Amjad says remixing is what the AIs do.

So the entrepreneur that was doing the e-commerce, is that entrepreneur now out of a job, or are existing types of entrepreneurs still going to have a job? That would be one of the most obvious specific questions to be asking Amjad. When you talk about humans going and being entrepreneurs, is there a constraint that the type of entrepreneurship has to be inventing a new line of business, inventing the next Google, the next Replit, or can you just have a restaurant?

Most of the world's entrepreneurs, they're just opening a restaurant. They're just opening a bookstore or a laser tag business. That's what most of the world's entrepreneurs are doing. They're running vending machines, all of those kinds of entrepreneurs. Are they going to get replaced by AIs, and then they have to become a new type of entrepreneur?

So what I'm doing here is I'm not even arguing against Amjad. I'm actually just pointing out that he's choosing to make his distinctions only at the level of abstract hand-waving. He thinks it's sufficient to say: AIs don't generalize. AIs are statistical. Humans do generalize. Humans aren't statistical. Because that's all he's telling us. He's not giving us more to go on than that.

We start to run into problems when we even just ask ourselves what Amjad means. Take entrepreneurs of existing, known types of businesses, types of businesses that one could do statistics on, types of businesses that one could remix. It's not clear if Amjad is even claiming that those businesses still require human entrepreneurship.

And that's going to be the pattern with Amjad's AI claims is he's going to stick to the abstract level. He's not going to subject his own claims to basic, specific tests of what he means. And so I'm just going to argue against him by poking at his basic definitions. That's going to be the flavor of this episode.

The Paradigm Shift Discussion

Amjad: But we have no evidence that it can generate a fundamentally novel thing or a paradigm change.

Liron: I think on some level Amjad knows that he's not being clear. He's not drawing a clear boundary between what AI can do and what it can't. And instead of helping us out and doing something that would actually make the boundary clear, which is to work through some specific examples. Like the specific example of somebody who is running an e-commerce business. Is that person out of a job when AI gets a little better, or no?

Can that person still be an entrepreneur even if their entrepreneurship is now following a remixed playbook? Or do they have to find a new playbook and have a business that's as innovative as Replit? Again, these are the kind of specific questions that Amjad could be answering if he wants to add clarity.

I think on some level he knows that more clarity is required of him in a conversation like this. But the way he's trying to add more clarity isn't by going down the ladder of abstraction and mapping his abstract claims to specific examples. What he's doing instead is he's staying on the abstract level and he's trying to just layer in more abstractions.

He thinks that if he gives you enough synonymous abstractions, then that will have added clarity. He started off by telling us that AIs are statistical, and then he layered in the claim that they don't generalize and that all they do is remix. And now he's layering on another synonymous abstraction. He's telling us that they can't change paradigms and he's telling us that they can't generate things that are fundamentally novel.

So I hope you've kept track of that pile of synonymous abstractions. In his mind now that he's layered in so many different abstractions, it's your job as the listener to get it. He thinks that he's made a clear point, whereas from my perspective, testing it using specific examples, it's totally unclear where the boundary that he's drawn even is.

It's totally unclear which types of businesses you might run as an entrepreneur count as being remixes versus businesses that count as being fundamentally generalized, novel, paradigm-shifting businesses. Take, you know, Zappos. Was that a paradigm shift? They did a lot of things differently, but they were still e-commerce, right? They sold shoes, but they had really good customer service and they let you send the shoes back. They had a better refund policy.

So was Zappos just a remix of an e-commerce brand, or was it a paradigm shift? It's not clear which side of his own distinction Zappos falls on. And instead of clarifying these kinds of very basic meaning-clarification questions, what he's going to do is just layer in more synonymous abstractions.

And in his mind, I think he thinks that that's all he can do. Because most humans, they have ladder of abstraction blindness. They don't realize that there is such a thing as the ladder of abstraction. He thinks that if he said they can't do a paradigm shift, he's been as clear as possible. He's like, I made myself clear. I said no paradigm shift. Don't you know what a paradigm shift is?

Whereas the actual way to add clarity, and I keep repeating this in many of my episodes, is to check a bunch of specific examples, to work through specific examples. By the time you've done that, you will see that the clarity of what you're trying to say is much increased.

Now, what is he really trying to say? Here's the thing. He's not trying to say more than what he's already said. So it's not like there's this secret specific meaning that Amjad is trying to communicate to us that he's just failing because he's not being more specific. No. In his own head, all he has is abstractions.

So if you want to get inside Amjad's head, an analogy is a child playing with Duplo bricks. You know, LEGO bricks versus Duplo: Duplos are like Legos that are about four times larger, and they're for kids. And I think they're still made by the LEGO Corporation.

And this is how it works in most people's heads; I'm not even singling out Amjad specifically, by the way. Most thought leadership is actually done by putting Duplos together. So Amjad is saying, look, I've made myself totally clear. I have a big Duplo called generalization, a big Duplo called paradigm shift, another called remixing. Can't you see I've connected these Duplos together? I have now told you what AIs can do and what AIs can't do.

And I'm over here with my Legos and I'm like, the actual shape that I'm seeing looks kind of scraggly, right? It looks like a lot of Legos fit together in a complicated shape. And I see that you have Duplos over here. I don't know how your Duplos correspond to my Legos. Can you help me out with some examples?

And the answer I get is: nope, I can repeat myself with the Duplos. I will give you another synonym for the Duplos I'm using, and I will ignore your Legos. Right? An example of a Lego is, like, I gave you a fleshed-out description of a company. Or, you know, you could talk about another job function, like being a lawyer helping companies navigate complex legal challenges.

Like, hey, a competitor just fired a complex legal challenge at you. How are you going to handle that challenge? Does that require entrepreneurship? Does that require paradigm-shifting, entrepreneurial problem solving that isn't a remix? Or can I just remix other legal cases that have been tried? Like, what's a remix and what's not? I don't know. I'm getting no specific clarity.

Okay, this is not just a random tangent that I'm going on. This is going to come up again and again because it's a major limitation of human thought that we just fail to get specific when specificity is actually what would add clarity. And instead we have a tendency to actually repeat and get stuck in our Duplo level abstract claims.

The Quantum Mechanics Example

Amjad: We have no evidence that it can generate a fundamentally novel thing or a paradigm change. Can a machine go from Newtonian physics to quantum mechanics? Really have a fundamental disruption in how we understand things or how we do things.

Liron: I think it's funny that we're already talking about humans inventing quantum theory because remember, Amjad's exact claim was that the future of work is everybody becoming an entrepreneur. Everybody, meaning the average human is doing something with their brain that AI isn't getting very close to replacing. And now he's jumping to quantum mechanics.

So he's saying, you know how Planck and Heisenberg and Schrödinger invented quantum mechanics? Well, Gus from the used car shop is also going to be doing something in his job that is kind of like that. Okay, can we maybe get a little bit more specific?

Because I get what it means to invent quantum mechanics. I get what it means to use Replit to order it to build applications. I don't get why the average person is now going to do something that has a little bit of that paradigm shifting quantum mechanical discovery embedded inside it.

Because talk about a specific person. Think about your own friends who have jobs that are kind of average jobs. Think about the average person who works for Walmart, anywhere inside Walmart corporate or in a Walmart store. That average person, what are they doing that has enough of that quantum mechanical invention, that's a microcosm of what Werner Heisenberg did?

This average Walmart employee who is now transferring into the career of entrepreneurship, what do they do in a typical day or week or month, which is a microcosm of the Schrödinger equation? What is that little microcosm?

If I think back to my own life, what are the most Schrödinger equation-like things that I've done in the last month of my life? I think it's slim pickings. It's literally just check Slack. Try to address people's problems. But they're kind of similar to problems that I've seen before. Look at my accounting dashboard. Work on payroll, deal with random stuff that comes up, open my mail, comply with different things that governments want. Tax obligations, workers comp. Random stuff.

I can tell you my job description is entrepreneur. And I'm not doing any of that Heisenberg stuff. I'm really just kind of plodding along, taking it one day at a time, just remixing my life. I'm not shifting any paradigms at my day job.

So how did we get here? How is Amjad making the claim that the average person is now going to be an entrepreneur and then jumping to talking about quantum theory? Don't you think there's some burden of proof on Amjad here to just draw a connection between what the inventors of quantum theory did and what the average person who's now going to become an entrepreneur is doing at their job?

How do you connect these two things? Because, yes, I get that the top humans have made a paradigm shift in physics. That's hard for most humans, that's hard for today's AIs. Fine, but how do you reconcile that with your claim that the average human is going to become an entrepreneur and not have the AI be a better entrepreneur than that? He's leaving the logic kind of unfinished. He's never going to just close this logical gap.

And again, the way to close the logical gap is to just be more specific. Just tell me a story. Tell me a single plausible story. It's always a red flag when somebody leaves a conversation having made an abstract claim and never let you visualize a single substantiating hypothetical story of what they meant.

It's a very effective, sneaky rhetorical move, because simultaneously he's failing to give you kind of the minimum of what he should be giving you in order to make a coherent point. But he doesn't sound like he's being dumb when he does that. He actually sounds like he's being smart. Because humans actually tend to view speaking in abstractions as a smart thing to do.

So here's a guy who's dropped so many of these abstractions onto me. He's dropped statistical machine and paradigm shift and generalizing and remixing. He's dropped all of these sky-high abstractions on me and then he's exited like Elvis has left the building. He didn't give me any specifics, but that's fine. Specifics are just dirty work. No need to get our hands dirty talking about specific examples of how people who are currently employed in a normal job are going to change their employment to entrepreneur and do something defensible.

There's no need to get into the weeds of those specifics. Look at this genius founder who's talking on this really high level. Everybody can be an entrepreneur. Boom. Great point, Amjad. Keep throwing us more abstract synonyms.

It's pretty frustrating for me as somebody who understands that clear communication requires specific substantiating examples of abstract points, to not get those specific examples, to suspect that the speaker doesn't have a coherent point, and then to watch an interviewer like Joe Rogan, who has no idea that this is a thing, wrap up the interview like, oh, thanks for enlightening us, Amjad.

It's frustrating for me and that's why I have the show. I get to vent. I get to raise the level of discourse by calling people out like, this is actually a discourse level failure. Discourse about abstract claims should include substantiating specific examples. This is a drum I'm going to beat quite often.

The Roger Penrose Connection

Amjad: Can a machine go from Newtonian physics to quantum mechanics, really have a fundamental disruption in how we understand things or how we do things?

Joe: Do you think that takes creativity?

Amjad: I think that's creativity for sure.

Joe: And that's a uniquely human characteristic. For now.

Amjad: For now. Definitely for now.

Liron: So the pile of abstractions that they're lobbing around synonymously just keeps getting larger and larger. It wasn't enough to tell you that AI can't invent a new paradigm or do something non-statistical or truly generalize. Now they also have to tell you that an AI can't be truly creative. For now, true creativity is only a human thing.

Which is interesting, because take that thing you called remixing. If I could rewind the clock five years and say, hey, here's a really good essay, here's a really good analogy written to explain a concept to a five-year-old, the kind of thing that you can ask AI to explain to you in one prompt and get a pretty satisfying answer back out. If I had just shown you the input and output five years ago and asked, what do you think of this work? Do you think that somebody could generate this work without being creative? Or do you think the act of producing this work is an act of creativity?

I don't know about you, I would have very confidently told you the act of generating this work feels like an act of creativity. It even gives you an emotional reaction to read it. It's never been written before. It's not that similar to other things that have been written, because you can give it 10 different constraints in the prompt to really get something quite novel.

I know you can call it remixing, but it's quite different from anything that's ever appeared before. That kind of prompt and the associated output to me really is a brilliant output, a creative output. But I'm just here to interpret their ontology, to interpret their choice of terminology.

Okay, so now we are labeling these kind of input outputs, these kind of productions, we are now labeling them as just non-creative. So whatever realm of true creativity you have to get into, we haven't got into it yet by doing these kinds of "oh wow, you drew a really good picture that is good enough for me to just use in production."

Take my YouTube thumbnails, an example I often cite: they're pretty much generated by the AI, but nope, they're not truly creative. They're just never-before-seen YouTube thumbnails that are doing the job of getting humans to click on my videos. But no true creativity here.

Which, fine, okay, I'm not here to play semantics with the terminology. I am here to point out that when you set the standard of creativity that high, I have to go back to Gus from the used car lot. What is Gus from the used car lot going to do that's drawing on that much creativity? What creative act has Gus done in the last year of his life that represents a level of creativity higher than that of GPT writing essays and drawing new designs?

Where is this creativity? Or maybe Amjad will admit, okay, yeah, Gus doesn't exhibit that kind of creativity. But then maybe Amjad has to withdraw his claim that Gus is going to waltz into a job as an entrepreneur drawing on this kind of Heisenbergian creativity. What is Amjad expecting us to believe here? It just feels like you can open the trapdoor of his abstract claims and see a real mess. He hasn't really thought through basic specifics of the worldview he's claiming.

Amjad: Actually, one of my favorite Rogan episodes was Roger Penrose. Do you remember?

Joe: Yes.

Liron: Oh, boy, we're getting into Roger Penrose. I highly recommend searching Doom Debates Roger Penrose. A couple months ago, I did a pretty elaborate reaction to Roger Penrose's claims. In a nutshell, totally wrong and nonsensical, which is shocking because the guy is very smart. He almost got the Nobel Prize. That ended up going to him. Or wait, is that right? No, he got the Nobel Prize and I didn't.

And yet his ideas have been fringe for decades and he hasn't changed his mind at all. I don't want to get into it, but if you watch my episode, it's just a real head scratcher how he's making the kind of claims that he's making about Gödel's theorem and quantum microtubules and uncomputable consciousness, and he's just stubbornly sticking with them despite the vast majority of computer scientists and philosophers telling him that he's smoking crack.

So of course Amjad's going to argue that Roger Penrose totally makes sense, and that's why his theory of entrepreneurship is going to be an accurate depiction of the future, because Roger Penrose is correct. Great. Let's listen.

Gödel's Incompleteness Theorem

Amjad: So do you remember the argument that he made about why humans are special? He said something like, he believes there are things that are true that only humans can know it's true, but machines cannot prove it's true. It's based on Gödel's incompleteness theorem.

And the idea is that you can construct a mathematical system where it has a paradoxical statement. So, for example, you can have a statement G that says: this statement is not provable by the machine, or the machine cannot prove this statement. If the machine proves the statement, then that statement is false. So you have a paradox, and therefore the statement is sort of true from the perspective of an observer, like a human, but it is not provable in this system.

So Roger Penrose says these paradoxes that are not really resolved in mathematics and machines are no problem for humans. And therefore his sort of a bit of a leap is that therefore there's something special about humans and we're not fundamentally a computer.

Joe: Right. That makes sense.

Liron: Actually, it doesn't make sense. I think Amjad made a good effort at trying to explain to viewers what Penrose is talking about when he says that Gödel's incompleteness theorem is evidence that humans can do something that machines can't. He did a pretty good job for somebody who's kind of digging up the knowledge on the fly. But the actual description that he gave is highly questionable.

When he says machines can't see that the statement is true, well, wait a minute. Machines just don't get to assume that the axioms of set theory are true. And then humans can just be like, you know what? They feel true to me. I'm going to assume that they're true. And because I get to assume that, and I'm not letting my machine assume that, that's how I know that this Gödel statement G is true. Therefore I'm better than the machine.

It's basically like the human gets to waltz in and create their own axioms that they're letting themselves accept, while purposely preventing their machine from accepting them. I'm not expecting you to follow what I just said if you haven't already been exposed to Gödel's theorem and Penrose's argument. The reason I said it is just to show you that it's pretty easy to push back on the substance of what Amjad is saying and what Penrose is saying.
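For readers who want the formal version, here's a minimal sketch of the construction being discussed, in my own notation rather than anything Amjad or Penrose stated on air, assuming a consistent, sufficiently strong formal system F such as Peano arithmetic:

\[ G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner) \quad \text{(the Gödel sentence, via the diagonal lemma)} \]

\[ \mathrm{Con}(F) \rightarrow G_F, \quad \text{and in fact} \quad F \vdash \mathrm{Con}(F) \rightarrow G_F \]

So the human only "sees" that G_F is true by assuming Con(F), that is, by assuming F's axioms are consistent, which is exactly the assumption F itself is never granted. Hand the machine that same assumption as an extra axiom and it proves G_F too, which is the standard reply to the Penrose argument.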

But that's actually not what I want you to focus on in this particular reaction episode. That's what I want you to focus on if you go listen to my Penrose episode. Here's what I want you to focus on right now. Everything Amjad is saying right now. How does that relate to Gus working at the used car lot and Gus becoming an entrepreneur?

When did Gus go and look at the truth of which machine is going to prove which statement in formal set theory? I don't think Gus from the used car lot has ever done that. So Amjad, instead of just leaving us hanging with this abstract argument about formal systems, why not connect that? Be like, hey, do you remember the time when Gus had to shake the soda fountain at his office in order to make the soda come out? That was kind of like a new paradigm that a formal system could never prove, and only Gus could do it because he was a human.

I'm taking some liberty with this example, but why is Amjad way out in the clouds talking about Gödel's theorem and Penrose's extremely questionable argument? Why doesn't Amjad just give us a little bit more specifics about what the human brain is doing in the last year of time? An actual story?

Isn't this a red flag, that he keeps having to refer to Heisenberg, keeps having to refer to Gödel, keeps having to refer to Penrose? Can't you just talk to us on a more concrete level?

Ironically, as an entrepreneur, it's actually one of Amjad's core skills to keep mapping strategy down to specific tactics. So this kind of multi-level reasoning is actually a key skill for a successful entrepreneur. Amjad is probably amazing at that skill. He will probably go and read research papers, read the news, see where the wind is going in terms of the industry, come up with a 10 year strategy and then he'll go reorganize his company, he'll meet with his executive staff, meet with his managers and get the whole company rowing in the same direction toward a long term strategy.

But he'll micromanage or he'll go founder mode and get individual engineers implementing individual features that are consistent with his strategy. So this idea of thinking high level and then mapping it to lower level, he's actually presumably genius level at this point.

And yet when it comes to his concepts about what future humans are going to do as entrepreneurs, this same thinking that he's using to micromanage his own team, to get each member on his own team operating at full productivity, it seems like he's willfully not using it to micromanage the average person.

Because he's pitching to the average person, hey, my tools or the AI revolution is going to help you, the average person, it's going to slot you into a really useful entrepreneurial blanket. And he's actually an expert at assigning tasks to people to maximize their productivity within his own company.

So it's kind of interesting that he has a blind spot where he won't talk in specific terms. He won't use a little bit of imagination to be like, okay, this Penrose stuff that I'm saying about formal systems and how the human mind can for a long time or forever do things that computers can't. He won't map that down to the level of, okay, so this guy who used to work at a used car lot and now he's an entrepreneur, an example of what he could now do is blank.

It's a very interesting disconnect in my mind. It kind of invalidates this whole chain of reasoning that he's using right now. It just seems to me like he's regurgitating stuff that he's heard because he likes to believe it. It has the flavor to me of wishful thinking. He's lived his whole life with this mental model that there's a separation between humans and computers.

Penrose has legitimized this because there's a guy with a Nobel Prize who a lot of people respect because he is legitimately smart. And so Amjad has just kind of been living with this assumption and he's not questioning it. Even as the water level rises and rises, even as more and more jobs are falling.

It just seems like he wants to protect this worldview where there's some kind of separator. And he's not going to get specific about it, he's not going to actually help us reason through it, but he just wants to defend it.

Creativity and Human Uniqueness

Joe: I mean, whatever creativity is, whatever allows you to make poetry or jazz or literature, whatever allows you to imagine something and then put it together and edit it and figure out how it resonates correctly with both you and whoever you're trying to distribute it to, there's something to us that's different, okay?

Liron: But AIs are writing literature today. AIs are composing music today. If you look at the share of listener hours that go to AI compositions, or if you go to Instagram and you look at the share of influencers that are just AI influencers, I think those shares are growing rapidly and significantly.

If you look at human conversation, the amount of conversation that's taking place between humans compared to human with chatbot, you don't think having a conversation and sharing your feelings and having an appropriate response for those feelings and sharing back those feelings, you don't think that that back and forth exchange also represents the heart of what it means to be human or the heart of creativity?

I mean, I don't know. You're the one with definitions here. I think you don't want to fall into my trap. So you'll say, no, no, no, that doesn't represent creativity. That's pure remixing. That's pure statistics. Right? It's easy to retroactively say, well, AI can do it today, so it must not be that impressive.

But this seems like cope. These abstractions that you're throwing around to try to draw a separator between what AI is already doing and what you think AI can never do. And you think you're separating it by using words like creativity and generalization. It sounds like cope. It sounds like the water level is just rising. And both Amjad and Joe are refusing to deal with this using productive logic.

It seems like they really want to come away from the conversation saying, AI totally can't do this. It totally can't write literature. It totally can't paint the most beautiful paintings. It totally can't compose the most beautiful jazz. But it's like, are you watching these trends? You think that this exponential curve is just going to slam into a wall?

And actually Amjad might be like, yep, I think it's going to slam into a wall. Because he has tweeted before. When AI 2027 came out, he criticized their methodology. He just had a snarky tweet saying, ah, the extrapolator. Like they're just extrapolating a trend. Look at those idiots.

Where from my perspective, it's like, okay, you don't have to extrapolate a trend. You can go ahead and present your model and tell us why the trend is going to slow down. But it sounds like his model is just based on throwing around a few abstract words and calling it a day.

Because once again, the red flag that I'm seeing is the way he's not mapping it to specifics. He purposely just wants to conduct the conversation by lobbing a few abstractions at you and then calling it a day and go back to work.

Where in his work as an entrepreneur, again, ironically, he is subject to the standard of needing to map his abstractions, map his abstract high level strategies. He is subject to the constraint to the feedback loop where his abstractions do need to map to ground level reality. He does need to go talk to the line managers and be at the coal face.

But when he goes and lobs these abstractions to you, the viewers, and talks about the future of the economy, he's not subject to that same feedback loop. There's no social pressure besides Doom Debates. Besides my own reaction. There's no social pressure in terms of the Joe Rogan audience holding him to account on mapping his abstractions to coherent specifics.

The Consciousness Discussion

Amjad: I mean, we don't really have a theory of consciousness and I think it's sort of hubris to think that consciousness just emerges. It's plausible. I'm not totally against this idea that you built a sufficiently intelligent thing and suddenly it is conscious.

Liron: On one hand, I see talking about consciousness as a non sequitur to the other points being made about creativity and about everybody becoming an entrepreneur. I don't see why consciousness is logically connected to that. I mean, if you just go day by day: okay, this guy who used to work in sales, now he's going to go work as an entrepreneur, and he's going to be issuing prompts to these AIs that are building websites, and the prompts are going to be better than the prompts an AI can write. What does that have to do with consciousness? Because the AI won't be able to come up with prompts, because the AI isn't conscious, or... I'm not exactly sure.

It sounds like a non sequitur, but I think I can steel man where he's coming from at a very high level. He's saying, look, yes, the waterline keeps rising and rising, but there may be some barrier because there may be some mysterious thing that we don't understand. Consciousness is a mysterious thing we don't understand.

So when I throw around these other abstractions like generalization and something being non-statistical and something starting a new paradigm, sure, I don't really understand what I'm saying, but maybe that's okay because nobody really understands what consciousness is.

Now, from my perspective, I actually feel like there is an asymmetry here because yeah, consciousness is poorly understood. Consciousness is mysterious. But if you look at terms like creativity, I don't feel like they're poorly understood. I think I can give you a pretty specific account of what creativity is and how to recognize it.

In Amjad's case, it seems like the two are connected somehow. That's why, in his mind, it's not a non sequitur to skip from "AIs lack the essence of creativity the way humans have it" to "AIs lack consciousness." In his mind, the two are connected.

So I see the abstract-level Duplo move that Amjad's doing. He's like, look at my Duplo set. I've connected creativity to consciousness. Whereas in my Lego set it's like, look, you can measure creativity. It has to do with searching an exponentially sized space, like in the complexity class NP, and somehow finding a solution that seemed like it would have required brute-force search, or a type of search that a priori seemed unlikely to you, and yet you made it likely.

You somehow steered the probabilities from improbable to probable in a way that I, epistemically, I, as an observer with less intelligence than you, I didn't see a high probability path. And yet you got there anyway, and it's a high value solution.

So you steered a path that seemed low probability to me, but it was high probability to you and it got to a solution. So I'm just talking about the kind of terminology that I would use to analyze creativity. I think that terminology is foreign to Amjad.

I think Amjad's Duplo-level analysis is that creativity is wonderfully mysterious and humans have it, AIs don't have it. Consciousness is wonderfully mysterious and humans have it, AIs don't have it. So these are Amjad's two Duplos that he's snapped together.

If Amjad wants to go that route, and in his mind it's not a non sequitur, fine. I would still ask that he meet the standard of, okay, you want to talk about consciousness? Tell me where Gus is going to invoke his consciousness to do a specific task that the AI taking over Gus's job can't do.

Amjad: It's like a religious belief that a lot of Silicon Valley have is that consciousness is just a side effect of intelligence, or that consciousness is not needed for intelligence somehow. It's this superfluous thing. And they try not to think or talk about consciousness because actually consciousness is hard.

Joe: Hard to define.

Amjad: Hard to define, hard to understand scientifically. It's what I think Chalmers calls the hard problem of consciousness.

Liron: So first of all, nobody I know claims that consciousness is a side effect of intelligence. People claim consciousness is separable from intelligence. So you can imagine a system that's pretty dumb and yet highly conscious. It just feels things and it can't do much about them. Think about a mentally disabled human, they're potentially very, very conscious. They potentially feel powerful emotions, feel pleasure and pain, feel qualia, and yet just aren't very intelligent. That seems like a probably accurate description of a mentally disabled human.

So I would claim something like, hey, you could have an agent that's very, very intelligent and still have very little consciousness. I do think it's possible to have a system where it is a better outcome steerer than any human and yet has less consciousness, whatever that means.

Just imagine a very robotic AI that just crunches through problems, but the problems can be incredibly complex and it can just spit out an answer, and the answer is brilliant, and yet it just didn't experience much qualia in the process of doing that. Think about AlphaGo with that brilliant move 37. I don't think most people have a strong sense that AlphaGo probably experienced a lot of qualia in the course of churning out move 37.

I think a lot of people still think, well, AlphaGo is still mostly just a robot. It doesn't have that much consciousness. Maybe not much consciousness at all. If Amjad feels strongly that no, no, no, AlphaGo is totally conscious, he's welcome to put forward that argument.

It's just that his criticism is of people like me, whom he's labeling Silicon Valley. I guess I'm a representative of Silicon Valley here. He's criticizing us for kind of separating out the mystery of consciousness from our claim that AI is getting more and more intelligent in the outcome-steering sense.

But the reason that I'm making the claim isn't because I have this motivation, because I'm an atheist who doesn't want to talk about religion. I mean, I am an atheist, but I'm still happy to consider religious hypotheses. The reason I'm making the claims I'm making is just because I used to have a lot of tests and benchmarks, and now those tests and benchmarks have been surpassed.

So if you want to say, hey, they've been surpassed because the AI is growing more conscious, fine. They've been surpassed because the AI is growing more conscious. Just because I didn't say anything about consciousness doesn't mean that I'm opposed to saying something about consciousness. Say whatever you want to say about consciousness.

I'm just telling you that when you throw around abstractions like creativity and remixing versus generalizing, those abstractions aren't cutting it in terms of predictively telling you where the AI's capabilities are going to stop growing. Those abstractions aren't going to cut it in terms of claiming that an average human who's currently working in sales or customer service or education or whatever is now transferable to an entrepreneurship job that's safe for the long term.

Just because you think consciousness might be a mysterious X factor doesn't mean that you're reasoning productively about how the economy is currently being transformed. I'm about talking. I think maybe you can introspect a little bit and see that there's something missing here in your own thought process, something that could be solved if you would only just get specific about what you mean.

As for people like Gus from the used car lot, maybe Amjad would say: you know what, I don't want to get specific about Gus. It's not my obligation to get specific about people like Gus. I'm just hand-waving. I'm just telling you that you don't know what you don't know. It's kind of like when religious people who believe in miracles say, you don't know when a miracle happens, okay? You don't know if you're going to wake up one day and your disease is going to be healed. You don't know what you don't know. Okay? Consciousness can potentially do anything.

And it's like, fine, yeah, I might be surprised. Maybe every average human will still have a job somehow. But let's play out an example of a hypothetical story where I get surprised. Okay? What does the person actually do that would surprise me?

Maybe you can just tell me. Well, the AI would just actually keep them doing sales because humans will still walk into the used car lot and they'll still want to talk to a human. Okay, fine, so make that claim. But that's not an example of me rejecting consciousness. That's just saying for whatever reason, humans really physically want to go and talk to a human body. And even if robotics get really good and you have a robot face, nope, they really want to talk to a human.

I mean, that's not me. That's not how I buy a used car. But maybe 20% of the population wants to walk onto a physical used car lot. Fine, you can make a claim like that. It's just weird to me that Amjad is kind of leaving the conversation content to just say: you know what, Silicon Valley types, you guys are underestimating the power of consciousness. I am somebody who believes in the power of consciousness, and you know what? I'm not going to give you any hypothetical examples of how the power of consciousness could possibly change the default future where everybody just loses their jobs. I'm not going to give you an example. I'm just going to tell you that you guys are underestimating the power of consciousness. Good day. I said good day.

General Intelligence and Consciousness

Amjad: But I think it is something we need to grapple with. We have one example of general intelligence, which is human beings. And human beings have a very important property that we can all feel, which is consciousness.

Liron: He's saying, what does it mean that the one general intelligence out there, humanity, also has consciousness? Okay, fair question. I would also ask, what does it mean that all of these other animals still seem like they're kind of conscious and yet they're way less intelligent? Probably the most powerful example is that of a mentally disabled human whose intelligence is just very limited. But a lot of these humans, as far as most of us would acknowledge, are very conscious persons despite lacking intelligence.

Does that mean anything to you about intelligence and consciousness potentially not being interlinked? Is it possible that a lot of what the brain's neurons are doing is feeling stuff, and a lot of what the brain's neurons are doing is computing stuff? And a lot of times you can just get the output of a computation even without drenching it in feeling. Or you can drench something in feeling and have the computation be very simple.

So that is already a little bit of evidence of orthogonality. Amjad would have to claim: oh yeah, sure, you can have consciousness even while you have low intelligence, but the only way to reach the upper Heisenbergian, quantum-theory level of intelligence is if you can drench those moves, those leaps of insight, with the appropriate amount of qualia. That's the only way to reach those higher levels.

That's a fine claim to make. It's just if you're going to bust out this observation that the only general intelligence we know is conscious, I would also just bust out the observation that there's plenty of lower intelligent systems that really do seem like they're also quite conscious. But let's continue.

Amjad: We don't know how it happens, how it emerges. People like Roger Penrose are like, they have these theories about quantum mechanics in microtubules. I don't know if you got into that with him, but I think he has a collaborator, neuroscientist Hameroff I think, or something like that. But people have so many. I'm not saying Penrose has the answers, but it's something that philosophers have grappled with forever.

Liron: It sounds like Amjad is dignifying the whole Penrose lore, like he thinks it all might be correct. So let me help him out and connect the three prongs. Because you kind of have to accept all three prongs at once if you really want to go full Penrose.

So remember Amjad covered Gödel's theorem. He's like machines can't handle this paradox of formal systems about having unprovable statements in them, even though there's like a quick fix from my perspective. But in Amjad's mind, in Penrose's mind, it's like so disastrous that formal logic has these holes.

And now he's bringing up this idea of quantum microtubules. If you want to be a conscious being, you can't just use the kind of physics that would implement a classical computer. You have to do uncomputable computations. But wait, aren't you just made out of cells? Okay, but these cells have these microtubules. And there's laws of physics that don't really affect anything that we notice, except they do affect the microtubules, which then help us think better.

This is all the Penrose cinematic universe. He makes all these claims that are all highly questionable, but they all stick together in the Penrose ball of tape. And Amjad is basically rolling with it. He's saying, I'm not 100% Penrose devotee, but let's give Penrose a chance, guys. Let's give Penrose a chance.

So I'm just bringing all this up because Amjad kind of didn't make the final connection, so I'm helping him out. The final connection is about why humans can supposedly generalize better than AIs and solve these paradoxes that a formal system can never solve. According to Penrose, and I disagree, the magic that lets humans do it is laws of physics that make the brain's leaps from one state to the next disobey all possible computable models.

Which is a crazy, crazy claim that has zero evidence, as far as I can tell. But this is Penrose's connection. So he wants you to embrace Gödel being a big problem, which I disagree with. He wants you to embrace the brain using special laws of physics that are poorly understood. Penrose wants you to accept all of this in order to then say, aha. So consciousness is the key to doing all of these abilities.

Those are the three prongs. Humans can do more things than machines can do. They can generalize better than machines because they aren't bounded by the usual laws of computation, because they aren't bounded by the usual laws of physics, because microtubules use different physics. And Gödel's incompleteness theorem is related to that assertion.

So these are all the pieces of the puzzle. Amjad did a pretty good job pointing to most of those pieces. I'm trying to kind of complete the picture for you. It's a crazy picture, in my opinion.

And the thing that makes it really crazy is that if you'd interviewed Penrose a decade ago and you're like, hey, Penrose. So, because humans are so awesome, do you think that human graphic designers are probably safe for a while? Because the way that humans use their creativity to make images to spec is a really creative conscious ability. I think there's a very good chance Penrose would have been like, yeah, that seems like the kind of creative conscious ability that only humans have.

But of course, now that we know the answer, hey, look, it turns out AIs can do it without apparently being conscious, without having quantum microtubules, it's now very easy for the Amjads of the world to look back and be like, oh, that's just remixing. Don't worry about that. That's not something like being an entrepreneur. Being an entrepreneur is the holy grail.

Or in Marc Andreessen's case, being a VC, that's the holy grail of venture capitalists, right? Evaluating founders. Don't get the impression that just because AIs keep doing all this other stuff that they could be a venture capitalist.

So, I mean, it's crazy stuff. Amjad accuses people like me of willfully rejecting connections to consciousness. But then when you look at the argument that he cites, when you look at the Penrose argument, it's a totally crazy argument that's been rejected by scientific consensus. Don't even blame Silicon Valley for that. There's very, very few Penrose supporters in the neuroscientist community.

Human Limitations and Hubris

Joe: You know, we're kind of primitive in terms of what we are as a species. Our senses have been adapted to the wild world in order for us to be able to survive and to be able to evade predators and find food. That's it. That's what we're here for.

Amjad: The kind of thing that I think is sort of the Silicon Valley AGI cult is like, there's a lot of hubris there that we know everything. Of course, we're at the end of the world. We, yeah, AI is just gonna. It's the end of knowledge. It's gonna be able to do everything for us. And I just feel it's so early.

Liron: So Amjad is saying that people like Liron Shapira, who are giving a 50% P(Doom) because they're alarmed that AI's ability to steer the future is potentially running away from human controllability, are in the AGI cult and have a lot of hubris. They don't appreciate what we don't know; they're not humble enough to entertain the possibility that maybe the brain that evolution has designed into us, the one struggling to understand the universe, is nevertheless tapping into a kind of consciousness that's going to let average people come get a job as an entrepreneur using Amjad's tool.

It is Liron Shapira's arrogance as an AI cult member to not appreciate the overwhelming likelihood that Amjad's mainline future is just everybody becoming an entrepreneur. Amjad is humble. Amjad doesn't claim to know anything. He's just not concerned about doom at all and is claiming that he somehow knows that unemployment won't be a big issue because everybody's going to become an entrepreneur using his tools.

Amjad is the not arrogant one here. Got that?

Amjad: I think the negative angle of technology and AI gets a lot more views and clicks. And if we want to go viral right now, I'll tell you, these are the 10 jobs that you can lose tomorrow. And that's the easiest way to kind of go viral on the Internet.

Liron: Well, try to put yourself in the shoes of people who have regular jobs. They started a career path, they worked in a certain industry for a few years, they got some amount of seniority in their position. And if they lost their job and if they hit the job market, they're not like you, Amjad. They're not like a top entrepreneur who can navigate their way around anything, who we all know is going to be one of the last ones to be replaced by AI.

They're just regular people who know that their area of the job market is somewhat limited, and they hope that it stays an area of the job market that they can make a living on. And they see AI coming in and decimating other areas of the job market: lower-end copywriters, lower-end engineers. I mean, these people are climbing for safety. Will they find safety? Maybe. But you have to empathize with the idea that maybe they won't, right? Instead of just having the optimism and the positivity of, look, everybody's going to find safety.

It's like, you got to empathize with their plight here because they know that jobs tend to get taken by people who have more skills. Right. They've seen themselves lapped. I mean, it's easy for me and you people who have generally been on the upper end of their classes. Right. On the upper end of intellectual capability, the upper end of that bell curve. Well, most people aren't. Most people are more like average on the bell curve or even low on the bell curve of these kinds of marketable skills.

And so when they see this new kind of agent, this new kind of system that has marketable skills, right, that's getting paid billions of dollars, like OpenAI now making over $10 billion, $14 billion in annual revenue and growing exponentially. So when they see this new system that has marketable skills, that's new, now challenging their own, you got to empathize with why they're reading these viral articles saying, Top 10 jobs that are going away now.

That's not to say that you can't also see a glass half full and try to speculate about how there's going to be upsides and how once again, we're going to navigate the transition and there's going to be an even better service sector. The service sector is how we navigated the previous economic disruption. We had more services. Maybe something analogous to that is going to come.

But when you hand wave and you just say everybody's going to be an entrepreneur on Replit, maybe it would behoove you to spend a little bit more time thinking concretely about how that would work, because you don't want to peddle false hope to people, right?

Just like it's your job to strategically navigate your company, people also look to you to give them a little bit more insight into what the economic future is going to look like. And when it's so clear that you haven't thought things through that much, you're largely going on vibes and speculation and your distaste for doomerism, apparently, then it's understandable why some people might not be on the same page as you.

Now, to be fair, I do think you're seeing some signals that are meaningful. I mean, Replit's own company growth is a big indication that people are finding the tool useful. And if that trend were to continue, like what Amjad says about extrapolating trends, if everybody was using Replit, instead of just going straight to OpenAI, if Replit itself still had these exponentially growing revenues the way they've amazingly had in the last year or two, that would actually be a really good sign.

That would kind of be like, hey, we're all living in Amjad's mainline future. We're all entrepreneurs on Replit making a living. It's kind of like if Shopify's revenue kept increasing, that would be a good sign that it's not like Amazon taking everything, manufacturing every product. It's like a bunch of independent entrepreneurs somehow eking out their own businesses on Shopify.

So Replit is kind of a business like that. And so I get what Amjad's saying of like, look, if my growth trends continue, this is a great economy. But the trend that would counteract that is just the trend of AI's intelligence and capabilities surpassing that of the user of Shopify, of the user of Replit. Right. That would make Amjad's own trend be discontinuous.

So just because you're seeing a temporary flight to safety on Replit of people being like, hey, look, I don't need my traditional job role. I can be an entrepreneur on Replit. Just because you're seeing that right now, when you also factor in the growth of AI capabilities, you might see a break in your trend. It might be a temporary trend.

The same way that, oh, there's a flight to CD ROMs. CD ROMs are a great way to store software. Oh, wait, we don't need that. We can just have flash drives and we can just have software over the Internet. Right. There are transitional periods. Yeah, you're seeing a major transition toward Replit being really powerful and useful to entrepreneurs. That doesn't necessarily mean that the future of AI is going to play out that way.

And so instead of just criticizing pessimism, I think you should realize that there's a big open problem here that needs a more satisfying solution than look at Replit's current growth graph. And let's not be pessimistic. Somebody really needs to be thinking about, wait, what if we actually are doomed? Let's try to play out the scenarios in more specific detail, taking seriously what the AI companies are telling us is coming, but trying to think through what are the actual implications and what is true about human nature that really doesn't change and really is timeless.

Human Nature and Creative Desires

Liron: Being timeless is a privilege. Right? The Neanderthals didn't get to be timeless. The dodo bird didn't get to be timeless. So hopefully we have something timeless about us that's going to let us keep being timeless.

Amjad: And I think that people want to create and people want to make things and people have ideas. Again, everyone that I talk to has one idea or another, whether it's for their job or for a business they want to build or somewhere in the middle.

Liron: Yeah. Everybody that I talk to wants to play professional sports. Everybody that I talk to wants to write the next great American novel. Everybody that I talk to wants to make millions of dollars in passive income. Humans have always wanted all of these ambitions, right?

I think the more relevant question is what market opportunities are available and are going to remain available. The market opportunity of writing novels or running businesses is currently only open to people who are at a certain threshold of overall competence.

I know a lot of people who are currently doing fine in some parts of the economy that they're somewhat adapted for. And they wouldn't do fine if they had to compete in the business world. Their profit margins wouldn't be high enough to sustain the amount of inefficiency in their business relative to that of their competitors. Right? I mean, that's the nature of the free market: it's hard to run a successful business.

It's like saying, you know what everybody's going to do? Open their own restaurant. Because everybody's always said, how great would it be to have my restaurant? People will come to my restaurant. And then it's like, yeah, that is something that a lot of people want to do. But in terms of sustaining the economy, that's a terrible idea, right? Most people get really sunk. They really screw their finances trying to open a restaurant.

And so to me, that's always been the salient question with AI getting smarter and smarter. I'd love to go back to the discussion of: wait, what were you saying about generalizing that means humans are still needed to use your tool, because the AI can't generalize well enough to use your tool itself? Can we maybe finish that line of argument?

I feel like it's really important how that argument ends up playing out, because if it doesn't play out the way Amjad hopes, if it turns out that it's just the AI using the AI and Replit just gets cut out because it's an unnecessary middleman. You can just tell the AI what you want, or the AI can decide what it wants, and then you just get all the code written. You don't need the Replit tool.

Seems like a very likely outcome to me. That seems like where we're headed. No offense to Replit. It's a great tool, like I said. But if that's the future we're heading toward, the future that a lot of people are anticipating, then the interesting question isn't what humans want. It's: can humans get what they want? Or should we start asking, what does the AI want? And how do we align the AI to make sure that it wants the right thing, instead of having it spiral out of control wanting anything but the right thing?

Isn't this a worthwhile topic of discussion? Why are you kind of optimistically whistling past the graveyard here? Why not address this? This is the wide-ranging conversation that you're having here with Joe Rogan. It's a three-hour conversation. Why not address all of these red flags that are piling up adjacent to your company?

The Ahmad George Success Story

Amjad: Just yesterday I was watching a video of an entrepreneur using the Replit platform. His name is Ahmad George and he works for this skincare company. He's an operations manager, and a big part of his job is managing inventory and doing all of this stuff in a very manual, very tedious way. And he always had this idea of: let's automate a big part of it in the ERP.

So they went to their software provider NetSuite and told them: we need these modifications to the ERP system so that it makes our job easier. We think we can automate hundreds of hours a month, or something like that. And they quoted them $150,000.

And he had just seen a video about our platform, so he went on Replit and built something in a couple of weeks. It cost him $400, and then he deployed it in his office. Everyone in the office started using it. They all got more productive. They started saving time and money.

He went to the CEO and showed him the impact. Look at how much money we're saving. Look at the fact that we built this piece of software that is cheaper than what the consultants quoted us. And I want to sell the software to the company. And so he sold it for $32,000 to the company and next year he's going to be getting more maintenance subscription revenue from it.

Liron: Yeah, this is a legit story. This is a great example of the state of the art of how AI is empowering people today. It's similar to a story of like, man, I really wanted to just make images for my YouTube show. I really wanted to just make transcripts. And then I just typed into the AI that that's what I wanted and then I got it right.

It's like these new stories of people automating so much work. In this case it wasn't like a one and done automation. It sounds like there was back and forth, like a product manager working with an engineer. But yeah, I mean, to Amjad's credit, this is really the ultimate Replit success story, right? I mean Replit is an amazing product. There's a reason the company is valued at billions of dollars. They've created major value here.

So if we take this example of this particular engineer automating what NetSuite was doing for his company, reducing software-as-a-service costs, and getting paid for it, the question then becomes: how much secret sauce did we need coming out of the human? And how long will the human brain keep its advantage in being able to inject that secret sauce, instead of the AI just also having that secret sauce?

I mean, that is the trillion dollar question, right? That's the question of whether everybody can become an entrepreneur or whether at least half of us can become entrepreneurs and the other half can go on universal basic income. That is the major question here that I'd like to see Amjad show some more curiosity in.

I mean, if he's giving us this case study which makes him look great, why not also turn the card over and be like, okay, how long will this advantage be defensible? Which specific prompts did this particular manager need to give the AI in order to get what he wanted? And how easy is it to just train an AI to be like, oh, interesting. These kind of prompts help these managers at companies build interesting tools.

What if we just remix the ultimate things that AIs can do? What if they remix? What if they statistically interpolate into new contexts at other companies that also need similar kinds of projects? How long is the human defensibly going to stay in that position?

It's a very interesting question for Amjad. Is this an example of something that's remixable or no? Let us know. Address the key questions here.

Job Automation and Reskilling

Amjad: So this idea of people becoming entrepreneurs, it doesn't mean everyone has to quit their job and build a business, but within your job, everyone has an opportunity to get promoted. Everyone has an opportunity to remove the tedious work. There was a Stanford study just recently asking people what percentage of your job is automatable, and everyone said about half: 50% of what I do is routine and tedious, and I don't want to do it. I'd rather... I have ideas on how to make the business better, how to make my job better. And I think we can use AI to do it.

Liron: You know, there's this new company called Mechanize founded by smart dudes with top tier investors like Daniel Gross, Patrick Collison, Jeff Dean, Sholto Douglas. I mean this is a pretty high quality company and they're explicitly trying to automate work tasks in order to automate the whole economy. That's their mission. We know people are trying that. They're not the only ones trying that kind of thing.

So when you're telling a success story of somebody using Replit to streamline their job, that's great, but why not extrapolate, right? Why act like this is the end game? You're saying, okay, there's a survey where people said that 50% of their job is automatable. Okay, But I know a friend who said that 90% of his customer service turned out to be automatable.

If you had talked to those customer service people, would they have told you that 90% of their job is automatable? They probably would have been like, yeah, I work pretty hard. I do a lot of tasks. Maybe I can automate half my job. Nope. Turns out you could automate 90%, going on 100%, of your job. And by the way, it's not like the automation has stopped, so why not do this kind of extrapolation? Why act like 50% is the end game?

It does seem like Amjad kind of has a blind spot, maybe purposefully. I mean, it obviously doesn't help his marketing to be like, yeah, my tool could be a stopgap solution. But there's every reason to think that the part where the human needs to input commands and prompts in my tool is getting more and more automated itself and the AI is actually learning and training on the data.

So when humans are coming in and using my tools to prompt, that's actually creating enough data that the next prompt won't even be a new paradigm; it'll instead be a remix, right? I mean, why not address this? Why not address the possibility that his own tool is a stopgap solution? Because that seems very likely. Not a hundred percent likely, but it's the obvious elephant in the room if you do what he himself was saying to do and extrapolate trends.

Amjad: There's this hunger in the workforce to use AI for people to reclaim their seat as the creative driver. Because the thing that happened with the emergence of computers is that in many ways people became a little more drone-like and NPC-like; they're doing the same thing every day.

Liron: Wait, you think the trend is that computers cause people to be more drone like and do the same thing every day? Because before computers, before we had Microsoft Excel, there were people sitting in a room where their whole job all day long was to just make spreadsheets and manipulate the numbers in spreadsheets and get all the formulas right. And there were people whose job title was literally computer just doing big math for you. That was before calculators.

So what is he talking about when he says that having computers made people more drone-like? Is he just talking about the physical movement of files? Like, hey, you would have a mail room and you'd have people who would have to move their muscles and deal with interesting ways that the mail was moving around before it just moved around electronically? No, that doesn't sound right.

What is a job that used to be creative and then became drone-like thanks to the computer? Maybe a travel agent. You used to use interpersonal skills to talk to somebody on the phone and sell them on a flight, and now that person just goes on Google Flights. Is that what Amjad is talking about? Is he talking about managers at companies? The role of a manager at a company used to be dynamic, and then the computer made it more NPC-like? This is a tough one for me.

I asked ChatGPT to help me understand what Amjad's talking about, and it was like, well, look at gig workers. They're taking orders from an app, kind of getting micromanaged by the software, and they don't have much creative control. But ChatGPT actually agreed with my overall assessment that describing humans as being more NPC-like thanks to having a computer as a tool is kind of the opposite of the overall trend that I see, which is more like, hey, we got a bicycle for the mind. And the mind has in fact been bicycling farther and wider than it ever has before.

So I think Amjad's probably wrong on that one. But anyway, just a minor point. Let's keep moving on.

The Promise of AI and Automation

Amjad: I think the real promise of AI and technology has always been automation, so that we have more time either for leisure or for creativity or for ways in which we can advance our lives, change our lives or our careers. And yeah, this is what gets me excited. And I don't think it's predominantly a rose-colored-glasses thing, because I'm seeing it every day and that's what gets me fired up.

Liron: Yeah, he's absolutely right that the trend from AI is continuing the trend since the Industrial Revolution where productivity keeps increasing, technology keeps applying a leverage on people and we do actually have more leisure time. I know some people feel like we don't have more leisure time, but we totally do. Our vacations are definitely getting much more luxurious. We're definitely spending more hours of the day doing things that are more leisure.

So I agree with Amjad that we are seeing a continuation of that trend toward new heights. Of course there's a fly in the ointment, which is that humans are going to lose their economic value, they're going to lose their economic leverage. And so we're going to get to the point where if we want to have a share of this productivity increase, it's entirely going to have to be through governance, through control, right?

Through the ability to press the nuclear button and blow up AIs. And they're scared of us, so they have to keep giving us a piece of the resources. They have to keep letting us be their owners. Or their programming is just so good and robust that they never want to do anything other than pass the resources to us. But this is a much more fragile connection.

But in Amjad's worldview, it's like, no, don't worry about that. Because at the end of the day, the AIs kind of grind to a halt without that human using their generalist consciousness power in order to use Replit in order to tell them what to do. Right? In Amjad's mainline scenario, everything is totally fine because the economy still chugs along with everybody being their own entrepreneur.

I mean, I haven't heard him amend that mainline scenario. It's sounding kind of ridiculous to me. But that's, I guess the crux of our disagreement. He is very happily kind of assuming that his tool is a good representation of the future. Whereas from my perspective, it's more like this intermediate stage, which to be fair is really cool.

It's really cool that this is increasing people's productivity the same way that the graphical user interface and the high-level programming language increased people's productivity before. His tool is kind of like the next step in that, which I really respect. I just unfortunately think that the step beyond his tool involves AI also using his tool on behalf of humans.

Which actually, funny enough, I mean, his tool started out not being AI powered, right? He just started out making this really good programming tool that just lets you write code easily, deploy code easily. And AI actually came in and added a layer above his tool, just like his own tool was a layer above a bunch of other stuff. AI came in and added another layer.

Well, guess what? The next step is for AI to come in and add another layer above the AI. What if the AI could prompt the AI? Doesn't that blow your mind? Well, again, extrapolate. That's the obvious next step. A lot of people, I'm not the only one saying this. This is kind of the obvious next step that a lot of us are seeing.

And it's kind of funny that Amjad, who's clearly a visionary, clearly a talented guy, is not really grappling with the likely possibility that this is what's going to happen next.

More on Reskilling and Job Loss

Amjad: If you're someone whose job is sort of a desk job, you already are in the computer, there's a lot of opportunity for you to reskill and start using AI to automate a big part of your job. And yes, there's going to be job loss, but I think a lot of those people will be able to reskill.

Liron: I know we're beating a dead horse at this point, but it's kind of striking to hear him explicitly say it. He thinks somebody who just has a desk job, who's not particularly talented at coding or even product management, is now going to be in this position where a company doesn't just use AI tools to build its own products. No, no, no. They hire this person who's suddenly reskilled. Reskilled how?

Reskilled in order to prompt an AI to go build software, and to do that faster than you can get an AI itself to learn the skill of prompting an AI to build software. I mean, look, it might happen. We might even get an AI winter where this kind of reskilling, this kind of using Replit as the dominant tool, could last for 10 years, 15 years, right? That's kind of the ultimate scenario for Replit.

And frankly, I think Replit is a good investment, right? I think this is a plausible outcome. But to be so optimistic that this represents the long-term future instead of a little rest stop? And I said 15 years; it could easily be one year, right? There's no guarantee of this. So I'm just kind of only giving the good outcome.

And I keep saying it, but the weird thing for me is he's not being like, yep. Or it could also go totally the other way where AI just gets better. And the trend does in fact extrapolate. Right. It's starting to sound very intentional that he's just kind of refusing to play it out the other way.

Amjad: Our world is going to be primarily run by AI and robots and all of that. And more and more people need to be able to make software even if it is prompting and not really, but a lot more people just need to be able to make it. There's going to be a need for more products and services and all of that stuff. And I think there's enough jobs to go around if we have this mindset of let's actually think about the future of the economy as opposed to let's bring back certain manufacturing jobs which I don't think Americans would want to do anyways.

Liron: Yeah, I think he's totally right if you grant him the assumption that AI is not just going to get more intelligent and generalize and be able to prompt Replit. If you aren't bullish that AI can just be a drop in replacement for a human employee who's prompting Replit. If you really think that you need to go reskill somebody from another industry or educate somebody when they're a child or a teenager and have them come into the workforce as a human and beat AI at using AI, if that's really how you think the future is going to go, then what Amjad is saying makes total sense.

He's giving a great pitch for it. And there is, in fact a possible future where the brain does beat AIs for a long time. I don't really see it but I admit that it's possible. I'll give it more than 10% chance that we get stuck at this AI winter where you do in fact need humans to come and build software by prompting Replit.

So I've got to hand it to Amjad. He is giving a great argument for that particular future. Ignoring that it might not actually be the case, but conditioning on it being the case, he's doing a good job.

Quality Assurance Jobs

Amjad: I think the jobs to be worried about, especially in the next months to a year, maybe a little more, are the routine computer jobs where it's formulaic. You know, you have a task, like quality assurance jobs, right? Software quality assurance. You have to constantly test the same feature at some large software company, Microsoft or whatever. You're sitting there and you're performing the same thing again and again and again every day. And if there's a bug you kind of report it back to the software engineers. And that is, I think, really in the bullseye of what AI is going to be able to do over the...

Joe: Next months and do it much more efficiently.

Amjad: Much more efficiently, much faster. Yeah.

Liron: So on one hand, yeah, he makes a lot of sense. These kinds of white collar jobs that are somewhat repetitive, like doing quality assurance on software, yeah, they're probably going to be automatable. But the question that I feel like he's willfully not asking himself is like, oh, wait a minute, wait a minute, there's a human whose job it is to check if software is performing the way it should. And they're using their human judgment to tell you, yep, the software is performing the way it should and you're saying that that's a kind of job that's about to get automated.

Okay, well consider this. The CEO of a company writes out a couple pages of spec, the kind of spec that he would write in communication to their VP or their line manager, whatever it is down the organization, telling him what he wants. And then that manager just hands it off to this job that you said is automated, this quality assurance job, but it's connected to a Replit agent. It's connected to your own software that's actually building software.

And now, by your assumption, there's an automated system that doesn't need a human in the loop, because you said those jobs are getting replaced. So the quality assurance part of the feedback loop is now telling the Replit agent: hey, Replit agent, your software isn't working right, this and this is broken. Because you said that that's automatable.

So if you hook up a Replit agent to one of these automated quality assurance roles, aren't you getting really, really close to just having no human in the loop for building software? I'm not trying to make a definite prediction. I'm not saying, hey, we're definitely going to have this in the next two years.

I just keep pointing out it's like a dramatic irony where we kind of know that Amjad is intentionally not asking himself this question, but he's accepting the premise, hey, your tool is going to get more powerful, right? Replit agent is going to get more powerful at writing code and these quality assurance functions are going to get more powerful and automated.

So what happens when you combine the two, Amjad? Are there still going to be reskilling jobs where somebody who used to work in sales or education or whatever comes in as an entrepreneur using these tools, and their brain is better than the combination of this engineer and this quality assurance person, who have both been automated?

It's just getting a little bit hard to imagine this being a robust long-term scenario. It's plausible, but again, it's funny that he's not asking himself the question.

Online Learning and AI Systems

Amjad: I think these systems will start to have more online learning. Instead of just training them in these large data centers on these large datasets and then giving you this thing that doesn't know anything about you and is totally stateless, as you use these devices, they will learn your patterns, your behavior and all of that.

Liron: That sounds right, but I think you just gave us a new Duplo, Amjad. So in addition to AIs being really good at remixing, really good at statistics and pattern matching, he's now telling us that they're going to be good at learning. So you've got these systems that are learning, remixing, pattern matching. You're telling me that combination still doesn't add up to generalizing or doing anything novel?

Can't you just learn enough hints so that when you remix them you get something novel? It's getting to the point where Amjad's abstract Duplos are now crowding each other out. They're overlapping each other, because he had a Duplo saying that the AI can't create anything new, but now he has a Duplo saying that the AI can remix and a Duplo saying that the AI can learn.

It seems like learning and then remixing could potentially add up to creating something new. Because when you're learning, you're finding yourself in a new context. That's why you need to learn. So it's a new context and then you're remixing old things into the new context. So doesn't that add up to creating something new?

This is why you really got to zoom into your Duplos. When you make Duplo level claims, when you make abstract claims, this is why you got to play out specific stories and you got to say, okay, in this specific scenario, the human is still needed because this particular issue might come up and Amjad's not really doing that. He's only telling success stories, right?

Oh, this person built this today. But he's not really extrapolating outward, saying, this particular time when the human broke through this problem, I can't imagine prompting an AI to get through the same problem that the human got through.

I think the most likely future we're going to see is that the thrust of Amjad's claims is more likely wrong than right. We're more likely to see AIs quote-unquote invent new things, quote-unquote generalize. Whatever firewalls he thinks he sees that make his own tool robust are probably going to get broken through.

Now, to his credit, he's the kind of entrepreneur who's probably going to keep improving his tool so that his tool hangs on as long as possible. When AIs came onto the scene, he improved his tool to help you use AI agents via his tool. He's probably going to make his tool useful for as long as possible.

But at the end of the day, the human brain is just going to get surpassed on every dimension by AIs. And maybe a few of us like him will hang on and still have value in the economy, but fewer and fewer of us will have any contribution to make whatsoever because our brains are just not as powerful as our AIs.

At the end of the day, we're just evolved to survive on the savannah with a few adaptations. Like Joe Rogan said, we're not evolved to be part of powering this exponentially growing economy of the future, competing against these super machines and data centers that have amazing algorithms, amazing speed, amazing power in terms of megawatts, amazing data.

I mean, these are a lot of advantages being arrayed against the human species. You gotta pay some respect to those advantages instead of being like, well, you know what? Nobody understands consciousness, so I think we're probably good. It's like, hello, look around, okay? There are a lot of forces being arrayed against us.

You gotta be humble about that potential threat instead of just dismissing the Doomers because you don't like the Doomers' attitude. Okay, let's show a little bit of respect to the Doomers. Let's acknowledge that the Doomers are just telling you something highly plausible.

Superintelligence Definition and Evidence

Amjad: Like I said, my philosophy tends to be different than, I think, the mainstream in Silicon Valley. I think that AI is going to be extremely good at doing labor, extremely good at ChatGPT and being a personal assistant, extremely good at, like, you know, like Replit, being an automated programmer. But the definition of superintelligence is that it is better than every other human collectively at any task.

And I am not sure there's evidence that we're headed there. Again, I think that one important aspect of superintelligence, or AGI, is that you drop this entity into an environment where it has no idea about that environment, it's never seen it before, and it's able to efficiently learn to achieve goals within that environment.

Liron: Okay, did you catch that? New Duplo: efficiently learn to achieve goals within a new environment. Amjad is bearish on that, even though he also said:

Amjad: I think these systems will start to have more online learning.

Liron: So I guess his position is just that online learning is harder and it won't be as robust. And so that'll be the ultimate advantage to humans, is that if the environment keeps changing, the humans will just kind of be racing ahead because we'll learn faster with less data. I think that's kind of what Amjad is waving at. Yep, the environment is fast paced. The AI can't keep up with humans, which is totally plausible if you look at how AIs are today.

That is kind of a limitation of AIs today. I just don't see the same kind of fundamental consciousness barrier that Amjad sees. So I think extrapolating five years out, that's where his worldview and my worldview would probably really diverge.

Amjad: Right now there's a bunch of studies showing like GPT-4 or any of the latest models. If you give them an exam or quiz that is even slightly different than their training data, they tank. They do really badly on it. I think the way that AI will continue to get better is via data.

Liron: Okay, so it's an AI's first day on the job. It doesn't have much data yet for online learning, so it shadows the human. The data comes in for a little while, but then at some point, isn't it bye bye for the human? Or should we assume that more data keeps coming in?

I mean, this all comes down to definitions, right? At some point, okay, the AI has been on the job for a month. Is it now remixing or is it still new learning? Right, all of the complexity here goes onto the definition of what's remixing. What's a new paradigm?

What Amjad hasn't done is he hasn't grounded these Duplo terms into lower level concepts. What is the definition of new paradigm? At what point does the remixing blur into a new paradigm? How much more new data do you need in order to cross this chasm?

And from my perspective, physical reality isn't actually made out of these Duplos. There is no fundamental distinction between remixing and a novel paradigm. If you do enough remixing, and the remixing is like a chain of thought that inspires you to remix the latest thought further, well, there's no limit to how many new things you can create.

It's all actually a search process. The concept of newness isn't that fundamental. The concept of searching efficiently and then noticing when you found something good, that kind of thing is fundamental. Every problem in an abstract sense is a search problem. And you can quantify how good and how broad different searches are.

That's a more useful framework, in my opinion, than what Amjad is saying about novelty and consciousness and sticking to your data. Ultimately, it doesn't seem like Amjad is being very clear about what he sees the limit as, besides hand-waving that maybe there is some kind of limit.
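To make that concrete, here's a toy sketch of the remix-plus-evaluation point. Everything in it, the target phrase, the seed "ideas," the scoring rule, is invented purely for illustration; the point is just that repeatedly remixing existing material and keeping whatever scores best is an ordinary search process, and it ends up producing a string that appears in none of the seeds.

```python
# A toy search sketch: start from nothing but existing "ideas" (seed strings),
# repeatedly remix them, and keep whatever an evaluation function scores best.
# The end result appears in none of the seeds, which is the sense in which
# enough remixing plus selection adds up to something new.
# Target, seeds, and scoring are all made up for this illustration.

import random
import string

TARGET = "remix until it looks like a new paradigm"
ALPHABET = string.ascii_lowercase + " "
SEEDS = ["statistics and pattern matching", "stochastic parrot noises",
         "interpolate the training data", "chain of thought remixing"]

def pad(s: str) -> str:
    """Trim or pad a seed so every candidate has the target's length."""
    return (s + " " * len(TARGET))[:len(TARGET)]

def score(s: str) -> int:
    # Evaluation function: count of characters matching the target
    # (a stand-in for "noticing when you found something good").
    return sum(a == b for a, b in zip(s, TARGET))

def remix(a: str, b: str) -> str:
    # Crossover: splice two existing candidates, then mutate one character.
    cut = random.randrange(len(TARGET))
    child = list(a[:cut] + b[cut:])
    child[random.randrange(len(TARGET))] = random.choice(ALPHABET)
    return "".join(child)

if __name__ == "__main__":
    population = [pad(s) for s in SEEDS]
    for generation in range(5000):
        parents = sorted(population, key=score, reverse=True)[:4]
        population = parents + [remix(random.choice(parents),
                                      random.choice(parents))
                                for _ in range(20)]
        best = max(population, key=score)
        if best == TARGET:
            print(f"reached target at generation {generation}: {best!r}")
            break
    else:
        print("best after 5000 generations:", max(population, key=score))
```

The evaluation function is doing the "noticing when you found something good" part; swap in a fuzzier judge and the same loop still runs, it just searches more noisily.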

So let's see what else he's willing to grant about possibilities for the future.

Self-Play and Game Environments

Amjad: Now at some point, and maybe this is the point of takeoff, they can train themselves. And the way we know how AI could train itself is through a method called self-play. So the way self-play works is, take for example AlphaGo. The way AlphaGo is trained is that part of it is a neural network that's trained on existing data. But the way it achieves superhuman performance in that one domain is by playing itself millions, billions, perhaps trillions of times.

We know how to make this work in game environments because game environments are closed environments. But we don't know how to do self-play, for example, on literature, because you need objective truth. In literature there's no objective truth. Taste is different.

Liron: Right? I'm not clear what Amjad's claim is about literature. Does he not think that AI is getting pretty close to being able to write award winning literature? Because that would be a very interesting prediction and it would be a nice supplement to his earlier prediction that everybody's going to be an entrepreneur.

It seems like it's worth carving out, oh well, some of us are also going to be famous authors whose job is also not going to be replaced by AI. I think it would be worth Amjad pointing out all the areas that he thinks are robust to AI takeover. I mean, given his position in the tech industry, people are looking for his guidance to tell us which jobs are going to be robust.

So it sounds like he's thinking: okay, entrepreneur, literary genius, maybe musical genius. It's an interesting prediction from my perspective. Sure, you can't do self-play for literature, because when you write a new type of book there's no objective evaluation of how good it is. But it also seems like award-winning books are kind of in the wheelhouse of LLMs, the wheelhouse of unsupervised learning.

Because the next token could be the token about whether or not the book wins an award. Isn't this what Amjad would call statistical pattern matching, where an LLM could suck in a book and give you a score of how good it is? Aren't LLMs getting better and better at doing that?

So is Amjad claiming that LLMs are going to top out, never able to evaluate whether a book is really a winner? Because from my experience talking to LLMs, they do actually seem pretty insightful at evaluating writing. So does Amjad want to claim, oh no, no, no, there's actually something about human consciousness such that when the writing quality goes beyond a certain point, you really just need a genuine human to tell you whether it's award-worthy or not?

I mean, these are interesting claims that I'd love to see where Amjad stands on. I'm not really clear from just what he's saying to Rogan.
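For readers who want the self-play mechanism Amjad describes pinned down, here's a minimal toy sketch: two copies of the same value table play tic-tac-toe against each other, and the game's built-in win/loss signal updates the value of every position visited. It's not AlphaGo, and every detail here is made up for illustration; it just shows why a closed environment with objective outcomes is enough to bootstrap improvement, and why the literature example lacks an equivalent reward signal.

```python
# A minimal self-play sketch (toy illustration, not AlphaGo): two copies of
# the same value table play tic-tac-toe against each other, and the objective
# win/loss signal at the end of each game updates the value of every position
# visited. The key point is that the game supplies its own ground truth.

import random
from collections import defaultdict

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

values = defaultdict(float)   # value of a board state, from X's perspective
EPSILON, ALPHA = 0.2, 0.1     # exploration rate and learning rate

def choose_move(board, player):
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < EPSILON:
        return random.choice(moves)
    # Greedy: X maximizes the learned value, O minimizes it.
    def value_after(m):
        return values[board[:m] + player + board[m+1:]]
    return max(moves, key=value_after) if player == "X" else min(moves, key=value_after)

def self_play_game():
    board, player, visited = " " * 9, "X", []
    while winner(board) is None:
        m = choose_move(board, player)
        board = board[:m] + player + board[m+1:]
        visited.append(board)
        player = "O" if player == "X" else "X"
    outcome = {"X": 1.0, "O": -1.0, "draw": 0.0}[winner(board)]
    for state in visited:                      # objective reward -> update
        values[state] += ALPHA * (outcome - values[state])
    return winner(board)

if __name__ == "__main__":
    results = [self_play_game() for _ in range(20000)]
    last = results[-2000:]
    print("X wins:", last.count("X"), "O wins:", last.count("O"),
          "draws:", last.count("draw"), "(of last 2000 games)")
```

Run it and the learned values push play toward stronger moves purely from self-generated games; the contested question is whether anything like that reward signal exists for taste-driven domains.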

Intangible vs. Tangible Domains

Amjad: Conjecture, philosophy, there's a lot of things. And again, this is why I go back to why there's still a primacy of humans: there are a lot of things that are intangible, and we don't know how to generate objective truth in order to train machines in the self-play fashion.

Liron: This distinction where AI is currently succeeding at things that are tangible, but then there's going to be things that are intangible that it's going to struggle with. It seems like there's a hint of confirmation bias here because from my perspective, the kind of essays that AI is writing today or the kinds of short stories, if I told you five or ten years ago, yep, this is going to be the output of AI. So tell me, is this tangible or intangible?

I think a lot of us would have looked at it and said it looks like there's some good intangible qualities here that the AI is hitting. I feel like it's really copying a lot of the magic that the human brain is doing at this point. But of course, confirmation bias, it already exists. So we can look back and be like, no, no, no, it's all been tangible so far. There's no intangible greatness yet. That's all somewhere in the far future. Don't worry about that.

Amjad: But like programming has objective truth, coding has objective truth. The machine can like, you can construct an environment that has a computer and has a problem. There's a ton of problems and even an AI can generate sample problems and then there's like a test to validate whether the program works or not. And then you can generate all these programs, test them, and if they succeed, that's a reward that trains your system to get better at that. If it doesn't succeed, that's also feedback and they run them all the time. And it gets better at programming. So I'm confident programming is going to get a lot better.

Liron: Wait a minute, this seems like a bait and switch. You're saying that for literature, whether it's good or not is intangible, and you can't use self-play to get good at literature, but you can use self-play to get good at programming? I mean, yeah, of course, if you formally define a program's input-output relationship, then you can run the program and check whether the output is provably correct.

But that's not what people are doing on Replit, right? You've successfully built a system at Replit where people are using it to have this automated engineer that's just meeting a product spec, but it's not formally verifying that it meets the product spec. It's intangible whether something meets a product spec. That's why humans have had to keep doing it. Even today you still have careers for human software engineers and product managers.

So wait a minute. The programming that you're doing at Replit is able to be done tangibly and with self play. I mean, these duplos that you're using are just very fuzzy. I'm not really clear on why you think software engineering is like a solved problem when the real world specs that managers are giving their employees are quite fuzzy and quite subject to intangible human judgment. I think you really got to revisit the distinctions you think you understand.
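To pin down the distinction being argued about here, here's a toy sketch of the "objective truth" loop Amjad describes for code: candidate programs are rewarded purely for passing automated tests. The candidates, the tests, and the reward rule are all invented for illustration; a real system would have a model proposing the code, but the shape of the loop is the same.

```python
# A toy sketch of the test-as-reward loop described above: candidate programs
# are scored purely by whether they pass automated tests, so the reward needs
# no human judgment. The candidate generator here is a stand-in (a hand-written
# list) for whatever model would actually propose code.

from typing import Callable, List, Tuple

# The problem spec, expressed as executable tests (the "objective truth").
TESTS: List[Tuple[int, int]] = [(0, 0), (1, 1), (2, 4), (5, 25), (-3, 9)]

def passes_tests(candidate: Callable[[int], int]) -> bool:
    try:
        return all(candidate(x) == y for x, y in TESTS)
    except Exception:
        return False   # a crashing program simply earns zero reward

# Stand-in candidate programs a model might propose for "square a number".
CANDIDATES = {
    "double":   lambda x: 2 * x,
    "square":   lambda x: x * x,
    "identity": lambda x: x,
    "buggy":    lambda x: x * x if x >= 0 else None,
}

def reward(name: str) -> float:
    """Binary reward: 1.0 if the candidate satisfies every test, else 0.0."""
    return 1.0 if passes_tests(CANDIDATES[name]) else 0.0

if __name__ == "__main__":
    for name in CANDIDATES:
        print(f"{name:>8}: reward = {reward(name)}")
    # In a real training loop, these rewards would update the generator so it
    # proposes passing programs more often; for prose or marketing copy there
    # is no equivalent of TESTS to compute the reward from.
```

The pushback above lands on the gap between this and what Replit users actually do: a spec like "build an inventory tool the office finds useful" doesn't compile down to a list of executable tests, so the reward is exactly the kind of fuzzy judgment the "tangible" framing assumes away.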

AI's Creative Abilities

Joe: If AI analyzes all the past creativity, all the different works of literature, all the different music, all the different things that humans have created completely without AI, do you think it could understand the mechanisms involved in creativity and make a reasonable facsimile?

Amjad: I think it will be able to imitate very well how humans come up with new ideas in a way that it remixes all the existing ideas from its training data. But by the way, again, this is super powerful.

Liron: He's going out on a limb trying to claim that the way AI creates things is remixing. I had this experience today where I asked the AI to try to recognize a song that I had in mind just based on the scale degree notes that I remember from hearing the song. And the AI was like, you're saying it's using these scale degrees. Let me search the web. You said the song was released after 2013 and it's a pop song and you said you think it uses a four chord. Let me search the web and let me brainstorm some ideas of the kind of songs that would use that. Let me scan through the list.

I pulled up these guitar tabs and it said that it's using these chords. I think that counts as a four chord. So it just reasoned through it the exact same way that an intelligent human would. In fact, I myself, as a searcher of Google, looking over what the AI did, I'm like, oh, I could have searched Google like this. I could have done these steps, but those steps didn't even occur to me.

So is this remixing or is the AI just reasoning? Can't we just give it the prize being like, okay, okay, it's learned how to reason. It's learned how to be an agent in the world, how to just learn things and then apply its learning. Why are we so hesitant to give the AI this credit? It keeps giving us new solutions to things and then we go and look at how it did it and it's like, oh yeah, that is how one would go about doing that.

Do you really have to bring in this big old Duplo saying remix. You just saw a remix happen, guys? No, I don't think that we saw a remix. Okay, let's drop remix.

Amjad: This is not like a dig at AI. The ability to remix all the available data into new, potentially new ideas, or newish ideas because they're remixes, they're derivative, is still very, very powerful. But, you know, the best marketers, the best... think of one of my favorite marketing videos, Think Different from Apple. It's awesome. I don't think machines are really at a point where they... I try to talk to ChatGPT a lot about marketing or naming. It's so bad at that. It's midwit bad at that. And, for now. But that's the thing.

It's like I just don't see. And look, I'm not an AI researcher and maybe they're working, they have ideas there. But in the current landscape of the technology that we have today, it's hard to imagine how these AIs are going to get better at, say, literature or the softer things that we as humans find really compelling.

Liron: Okay. I disagree. But in the interest of your own self-consistency, don't you want to go back to the statement you made at the beginning of the podcast where you said:

Amjad: Everyone's going to become an entrepreneur.

Liron: Don't you want to make another carve out? Don't you want to say, hey, if you're a marketer, I don't see how AI is going to ever come up with something as good as Apple's Think Different campaign. Because you can't do that by remixing. And AI can only remix. So if you're a human, I encourage you to go into the field of marketing. You don't have to be an entrepreneur, you don't have to use Replit because there's this whole greenfield space that AI is not going to touch. It's called high end marketing. That's not a remix, that's a great career field. Everybody should study that, right?

Why not? Notice there's a little bit of inconsistency in you saying everybody's going to be an entrepreneur. And now you're saying, wow, marketing is so beyond the reach of AI because it's not a remix. Get your story straight.

Human-Machine Collaboration

Amjad: Human plus machine will be able to create amazing things. So what people are making with Veo is not because the machine is really good at painting it, generating it and.

Joe: Making it, but it can't make it without the prompts.

Liron: So from Amjad's perspective, all these cool videos are mostly humans doing the creativity of prompting Veo. And then, yeah, Veo is rendering the scene: it's deciding exactly which pixels to draw, which characters go where, what they look like, how they move, how they talk. But the people prompted it; they knew how to prompt it.

So the obvious way to extrapolate this is: okay, a year of this happens. Google has all this data of what people are posting on YouTube and what people are generating with Veo, and then it just feeds in the prompts. It trains the AI: this is how people prompted Veo, this is what Veo made, this is what went viral on YouTube.

Don't you think that a year from now it's just going to be an end-to-end process, where a bunch of popular videos are both prompted and generated automatically? Why not address this obvious extrapolation? Do you really think that a human is going to come up with the majority of prompts for the things going into people's feeds a year or two from now?

I mean, maybe, but why would you think that's the most likely scenario? It seems like we're giving it plenty of data. Wouldn't you just remix, right? Your favorite word. Wouldn't you just remix? Wouldn't you just statistically analyze the last year of prompts?
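To make that extrapolation concrete, here's a minimal sketch of the feedback loop I'm describing. It's illustrative Python with made-up field names and toy numbers, not anything Google has said it does:

```python
# A minimal sketch of the hypothetical loop: log (prompt, video, engagement)
# triples, keep the ones that performed well, and treat them as training pairs
# so a model could learn to produce both the prompt and the video itself.
# All names and numbers here are invented for illustration.

from dataclasses import dataclass


@dataclass
class GenerationRecord:
    prompt: str       # what the human typed into the video model
    video_id: str     # the clip the model produced
    views: int        # how the clip performed once posted

# Hypothetical logs: a year of prompt/video/engagement data.
logs = [
    GenerationRecord("cat astronaut vlogs from the ISS", "vid_001", 4_200_000),
    GenerationRecord("corporate training video, but medieval", "vid_002", 950_000),
    GenerationRecord("my uncle reviews imaginary gadgets", "vid_003", 12_000),
]


def build_training_set(records, min_views=100_000):
    """Keep only the prompt/video pairs that actually got traction.

    An end-to-end system would train on these pairs so the model learns
    which prompts lead to popular videos, removing the human from the loop.
    """
    return [(r.prompt, r.video_id) for r in records if r.views >= min_views]


if __name__ == "__main__":
    for prompt, video in build_training_set(logs):
        print(f"train: prompt={prompt!r} -> target={video}")
```

Nothing in that loop requires a human in the middle once the logs exist, which is the whole point of the extrapolation.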

I just don't get why he's confident that the human-machine symbiosis is this permanent state, where this piece of meat goes, "I understand humanity better than the machine's ever going to understand humanity." Really? I think if you just do a lot of deep statistics on humanity, you'll be well on your way.

Final Analysis and Conclusion

Liron: Okay, that's it. That's everything Amjad said on the Joe Rogan podcast related to AI. So overall, I get the sense that he's just not interested in entertaining the idea that, yep, AI is imminently going to surpass human intelligence in every way. It's going to be more agentic, have higher capabilities, higher outcome steering power than the human brain. It's going to make the human brain obsolete. This is going to happen in a matter of years, maybe a decade or two at most. It's probably not going to take very long.

It seems like Amjad is just not interested in entertaining that as a hypothetical. I wonder what probability I should give that. Should I give it at least 10%, at least 20%? From my perspective, it's like, yeah, duh, you got to give it 20%. This is a very realistic possibility if you just extrapolate trends and you don't try to be overly confident that you know what consciousness is.

Right? I mean, he's the one who said, hey, have some humility about consciousness, okay? If you're humble about consciousness, why not be humble enough to realize that maybe it's just this thing that can attach to organisms regardless of their level of intelligence: emotional qualia that can be associated with all kinds of stimuli, and that doesn't necessarily correspond to intelligence.

I feel like he's just not interested in being like: let's talk about the doom scenario. Let's talk about machine intelligence becoming more powerful than human intelligence, and us having kind of a last moment where we need to get the handoff right. He's not interested in talking about that stuff.

And so he did weave a lot of arguments together that made varying degrees of sense. He busted out a lot of what I call Duplos, high-level concepts about the limitations of AI: it has to pattern match, it has to be statistical, it can only remix, it can't make a new paradigm. But it can learn and it can self-play. But only in some domains, not in fuzzy domains. But I have a tool called Replit where you do have a software engineer in the fuzzy domain.

Oh, and quality assurance, that domain is not too fuzzy; you're definitely going to get AI replacing humans in quality assurance. Oh, and everybody's going to be an entrepreneur, but some people are still going to be needed to do soft sciences, and some people are going to be needed to be authors, and some people are going to be needed to do marketing.

So he's throwing a lot of Duplos around, and to some degree it makes sense. He has good intuitions, basically. I feel like he's staying on top of the news, pulling insights, incorporating them into his own company and his own product. I mean, he's a very effective guy. His mind works really well.

It's just that when you ask him to lay down his prediction of the future, that's when you start seeing the gaps between his Duplos. The way he's reasoning is great for staying on top of the tech industry and managing his own company a couple of years at a time. But it's not so great for answering: hey, are we about to get overtaken by intelligence?

You need a higher level of rigor for thinking about that. You need to go down from Duplos to Legos. You need to be able to answer questions about specific scenarios: okay, you said that AI can only do statistics. Give me an example of a problem that a human entrepreneur can solve sustainably, and tell me why the AI can't do that specific problem. Like, write an eval.

You know how AI companies use evals to test the power of their AI? Where's your eval about entrepreneurship? If you're thinking a few years ahead for Replit: an entrepreneur is using Replit to build a company, an entrepreneur who was formerly working sales at a car dealership and has now reskilled. Okay, so what is an eval that this entrepreneur can pass? They can pass the eval of "run this company" and the AI can't, because you believe there's a big skill difference, because the human is conscious and the human is generalizing. Right. Let's think a little more rigorously.
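And to make "write an eval" concrete, here's a minimal sketch of what an entrepreneurship eval could look like. The tasks and the pass/fail outcomes are invented for illustration; the point is just the shape of the test I'm asking for:

```python
# A toy entrepreneurship eval: a list of concrete tasks, a scoring function,
# and a human-vs-AI comparison on the same tasks. Tasks and outcomes are
# hypothetical, chosen only to show what a testable claim would look like.

ENTREPRENEUR_EVAL = [
    "Pick a niche and write a one-paragraph value proposition",
    "Get 10 paying customers within 30 days on a $500 budget",
    "Handle a refund demand from an angry customer without losing them",
    "Decide whether to pivot after three flat months of revenue",
]


def score(results):
    """results maps task -> pass/fail; return the fraction of tasks passed."""
    return sum(results.values()) / len(results)


# Invented outcomes, just to show the comparison the argument calls for.
human_results = {task: True for task in ENTREPRENEUR_EVAL}
ai_results = {task: (i % 2 == 0) for i, task in enumerate(ENTREPRENEUR_EVAL)}

print(f"human: {score(human_results):.0%}  ai: {score(ai_results):.0%}")
```

If his story is right, the gap between those two scores should be large and durable; that's the kind of concrete, checkable prediction I'd like to see him commit to.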

That's what I would have liked to see. Rather than the one future he imagines, Replit everywhere and everybody an entrepreneur, I would have liked to see a more humble probability distribution over other possible outcomes. I would have liked to see some self-awareness that his own reasoning wasn't very rigorous, that there were gaps in it, that he was telling a hand-wavy story. I would have liked to see his own awareness that that's what he was doing.

But yeah, the good news for him and his company is that I am bullish on Replit for as long as we're in this intermediate stage where AIs can't do everything. And I admit that this stage could last 5, 10, even 20 years. I hope it lasts 20 years. That would be great. That would mean we have more time than I expect.

So I think he's in a good position. I think it's probably strategic for him to keep doing podcasts like this and keep talking the way he's talking. It's just that if you're watching Doom Debates and trying to ascertain what P(Doom) is and what we should expect for the future of our civilization, I don't think he's a very good source on that front.

Now you might be wondering: how did Liron come away unconvinced by all of Amjad's points when Joe Rogan, who was sitting right there listening to the same points, seemed to be nodding along? So how did Amjad manage to convince Joe?

Well, as an interesting addendum, I wanted to show you what Joe Rogan said the next day on his show, because the day after he had Amjad on, he invited another guy with a very different opinion: Dr. Roman Yampolskiy, who's very much an AI doomer. He actually has a 99.9999% P(Doom), and he did an episode with me; you can find it if you search "Doom Debates Roman Yampolskiy." We had a very, very interesting back-and-forth where I tried to convince him that his P(Doom) should actually be lower, which is a rare thing for me to find myself doing.

Yeah, so Joe Rogan has a lot of respect for Roman Yampolskiy because Roman Yampolskiy has actually been on the show before. Joe Rogan invited him back because he wanted basically a sanity check. He wanted to make sure that he wasn't too sucked in by Amjad's reality distortion field.

But this is very interesting. Listen to the very first thing that Joe said to Dr. Yampolskiy right when he was starting the episode.

Joe Rogan's Follow-up with Roman Yampolskiy

Joe: Thank you for doing this.

Roman Yampolskiy: My pleasure. Thank you for inviting me.

Joe: This subject of the dangers of AI, it's very interesting because I get two very different responses from people dependent upon how invested they are in AI financially. The people that have AI companies or are part of some sort of AI group, all are like, it's going to be a net positive for humanity. I think, overall, we're going to have much better lives. It's going to be easier. Things will be cheaper. It'll be easier to get along. And then I hear people like you, and I'm like, why do I believe him?

Liron: Yeah. The Joe Rogan Experience is a very different type of podcast from Doom Debates. I don't hesitate to bluntly push back against my guests if they're saying something I disagree with. Joe's MO, like that of most podcasters, is to just roll with the guest. If the guest is going in some direction, don't harsh their vibe. Just kind of egg them on, and it makes for a good episode.

I mean, we really got to see where Amjad was going. We really got to hear all the points he wanted to make, and he got to make them his way without a lot of pushback. But the real Joe Rogan seems to have more sympathy for the Roman Yampolskiy, high-P(Doom) side of the spectrum. At least that's my read.

Let's give Joe Rogan the last word. This is how he ended the episode with Roman.

Joe: More people need to listen to you, and I urge people to listen to this podcast and also the one that you did with Lex, which I thought was fascinating, which scared the shit out of me, which is why we have this one. Thank you, Roman. Appreciate you.

Roman: Thank you so much.

Joe: I appreciate you sounding the alarm, and I really hope it helps.

Closing Thoughts

Liron: All right. Hope you enjoyed that episode. If you want to watch more of me reacting to other people on other podcasts, go to doomdebates.com and scroll through the archives. There's plenty more where that came from. A couple weeks ago, I reacted to Emmett Shear talking about his new AI alignment company, Softmax.

When you're at doomdebates.com, subscribe to my Substack. You're going to get bonus content. You're going to get transcripts and notifications when new episodes drop. You can also go to YouTube.com/doomdebates. That's where most of the conversations about the show are taking place.

You can also subscribe to Doom Debates in your podcast player, however you do it. Thanks for listening and I'll see you next time here on Doom Debates.


Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
