
Mark Zuckerberg – Meta's AGI Plan

Apr 29, 2025 | 1 hr 15 min

Summary

Mark Zuckerberg discusses Meta's plans for AGI, including Llama 4, benchmark gaming, intelligence explosion, and business models. He touches upon DeepSeek/China, export controls, and potential relationships with AI. The conversation also explores AI's role in productivity, creativity, and the future of work, along with the importance of open source and security in AI development.

Episode description

Zuck on:

* Llama 4, benchmark gaming

* Intelligence explosion, business models for AGI

* DeepSeek/China, export controls, & Trump

* Orion glasses, AI relationships, and preventing reward-hacking from our tech

Watch on YouTube; listen on Apple Podcasts and Spotify.

----------

SPONSORS

* Scale is building the infrastructure for safer, smarter AI. Scale’s Data Foundry gives major AI labs access to high-quality data to fuel post-training, while their public leaderboards help assess model capabilities. They also just released Scale Evaluation, a new tool that diagnoses model limitations. If you’re an AI researcher or engineer, learn how Scale can help you push the frontier at scale.com/dwarkesh.

* WorkOS Radar protects your product against bots, fraud, and abuse. Radar uses 80+ signals to identify and block common threats and harmful behavior. Join companies like Cursor, Perplexity, and OpenAI that have eliminated costly free-tier abuse by visiting workos.com/radar.

* Lambda is THE cloud for AI developers, with over 50,000 NVIDIA GPUs ready to go for startups, enterprises, and hyperscalers. By focusing exclusively on AI, Lambda provides cost-effective compute supported by true experts, including a serverless API serving top open-source models like Llama 4 or DeepSeek V3-0324 without rate limits, and available for a free trial at lambda.ai/dwarkesh.

To sponsor a future episode, visit dwarkesh.com/p/advertise.

----------

TIMESTAMPS

(00:00:00) – How Llama 4 compares to other models

(00:11:34) – Intelligence explosion

(00:26:36) – AI friends, therapists & girlfriends

(00:35:10) – DeepSeek & China

(00:39:49) – Open source AI

(00:54:15) – Monetizing AGI

(00:58:32) – The role of a CEO

(01:02:04) – Is big tech aligning with Trump?

(01:07:10) – 100x productivity



Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

Transcript

All right, Mark, thanks for coming on the podcast again. Yeah, happy to do it. Good to see you. You too. Last time you were here, you had launched Llama 3. Yeah. Now you've launched Llama 4. Well, the first version. That's right. What's new? What's exciting? What's changed? Oh, well, the whole field is so dynamic. I feel like a ton has changed since the last time we talked.

Meta AI has almost a billion people using it monthly now, so that's pretty wild. And I think this is going to be a really big year on all of this, especially once you start getting the personalization loop going, which we're just starting to build in now, from both the context that all the algorithms have about what you're interested in from feeds, all your profile information, all the social graph information, but also just what you're interacting with the AI about. I think that's going to be the next thing that's super exciting, so I'm really big on that. The modeling stuff continues to make really impressive advances too, as you know.

On the Llama 4 stuff, I'm pretty happy with the first set of releases. We announced four models and released the first two, the Scout and Maverick ones, which are the mid-sized models, mid-sized to small. Actually, the most popular Llama 3 model was the 8-billion-parameter one, so we've got one of those coming in the Llama 4 series too.

Our internal codename for it is Little Llama, and that's probably coming over the next few months. But the Scout and Maverick ones are good. They're some of the highest intelligence per cost you can get of any model out there: natively multimodal, they run on one host, and they're

designed to be very efficient and low latency for a lot of the use cases that we're building for internally. And that's our whole thing: we basically build what we want and then we open source it so other people can use it too. So I'm excited about that. I'm also excited about the Behemoth model, which is coming up. That's going to be our first model that is sort of at the frontier; it's more than two trillion parameters, so it is,

as the name says, quite big. So we're trying to figure out how we make that useful for people. It's so big that we've had to build a bunch of infrastructure just to be able to post-train it ourselves, and we're trying to wrap our heads around how the average developer out there is going to be able to use something like this, and how we make it useful for distilling into models of a reasonable size to run, because you're obviously not going to want to run something like that in a consumer model.

But yeah, there's a lot to go. As you saw with the Llama 3 stuff last year, the initial Llama 3 launch was exciting, and then we just built on that over the year: 3.1 was when we released the 405-billion-parameter model, and 3.2 was when we got all the multimodal stuff in.

So we basically have a roadmap like that for this year too. A lot going on. I'm interested to hear more about it. There's this impression that the gap between the best closed source and the best open source models has increased over the last year. I know the full family of Llama 4 models is not out yet, but Llama 4 Maverick is around 35th on Chatbot Arena, and on a bunch of major benchmarks it seems like

o4-mini or Gemini 2.5 Flash are beating Maverick, which is in the same class. What do you make of that impression? Yeah, well, okay, there are a few things. I actually think this has been a very good year for open source overall, if you go back to where we were last year.

What we were doing with Llama was the only real super innovative open source model. Now you have a bunch of them in the field. And I think in general the prediction that this would be the year where open source generally overtakes closed source as the most used models out there is on track to be true. I think the thing that's been an interesting surprise, positive in some ways, negative in others, but overall good, is that it's not just Llama. There are a lot of good ones out there, so I think that's quite good. Then there's the reasoning phenomenon, which you're basically alluding to in talking about o3 and o4 and some of the other models, where if you want a model

that is the best at math problems or coding or things like that, I do think these reasoning models, with the ability to consume more test-time or inference-time compute in order to provide more intelligence, are a really compelling paradigm. We're going to do that too; we're building a Llama 4 reasoning model and that'll come out at some point.
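One common recipe behind this test-time-compute trade-off is self-consistency: sample several independent answers and take a majority vote, buying accuracy with extra inference. A minimal sketch in Python; the `sample_answer` stub is purely hypothetical and stands in for a real temperature-sampled model call.

```python
import collections
import random

def sample_answer(prompt: str) -> str:
    # Hypothetical stand-in for one sampled chain-of-thought answer;
    # a real system would call an LLM with temperature > 0.
    return random.choice(["42", "42", "41"])

def answer(prompt: str, n_samples: int = 1) -> str:
    # n_samples=1 is the fast, cheap consumer path; a large n_samples
    # spends more inference compute for a more reliable answer.
    votes = collections.Counter(sample_answer(prompt) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

print(answer("What is 6 x 7?", n_samples=1))   # fast, noisier
print(answer("What is 6 x 7?", n_samples=25))  # slower, usually "42"
```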

For a lot of the things that we care about, latency and good intelligence per cost are actually much more important product attributes. If you're primarily designing for a consumer product, people don't necessarily want to wait half a minute for it to think through the answer. If you can provide an answer that's generally quite good in half a second, that's great and a good trade-off. So I think both of these are going to end up mattering. I am optimistic about integrating the reasoning models with the core language models over time. I think that's the direction Google has gone in with some of the more recent Gemini models, and I think that's really promising. But there's just going to be a bunch of different stuff that goes on. You also mentioned the whole Chatbot Arena thing, which I think is interesting, and it goes to this challenge around how you do benchmarking: basically, how do you know which models are good for which things? One of the things we've generally tried to do over the last year

is anchor more of our models in the North Star use cases for our Meta AI product. The issue with both open source benchmarks and any given thing like the LM Arena stuff is that they're often skewed toward a very specific set of use cases, which are often not what any normal person does in your product.

The portfolio of things they're trying to measure is often weighted differently from what people care about in any given product. Because of that, we've found that trying to optimize too much for that stuff has often led us astray, and not toward the highest-quality products, the most usage, and the best feedback within Meta AI as people use our stuff. So we're trying to anchor our North Star in the product value that people report to us, what they say they want, what their revealed preferences are, and the experience we have using it ourselves.

Sometimes these things don't quite line up, and a lot of them are quite easily gameable. On the Arena you'll see stuff like Sonnet 3.7, which is a great model, and it's not near the top. And it was relatively easy for our team to tune a version of Llama 4 Maverick that was way at the top; the pure model, which has no tuning for that at all, is further down. So I think you just need to be careful with some of the benchmarks, and we're going to index primarily on the products. Do you feel like there is some benchmark which captures

what you see as a North Star of value to the user, which can be objectively measured between the different models, where you say, I need Llama 4 to come out on top on this? Well, our benchmark is basically user value in Meta AI.

But you can't compare other models on that. Well, we might be able to, because we might be able to run other models in it and be able to tell. I think that's one of the advantages of open source: you have a good community of folks who can poke holes, like, okay, where is your model not good and where is it good? But I think the reality at this point is that all these models are optimized for slightly different mixes of things. I think all the leading labs are trying to create general intelligence, superintelligence, whatever you call it: AI that can lead toward a world of abundance where everyone has these superhuman tools to create whatever they want, which dramatically empowers people and creates all these economic benefits. However you define that, I think

that's what a lot of the labs are going for. But there's no doubt that different folks have optimized toward different things. The Anthropic folks have really focused on coding and agents around that. The OpenAI folks have gone a little more toward reasoning recently. I think there is a space, which if I had to guess will end up being the most used one, that is quick, very natural to interact with, and natively multimodal, and that fits into the ways you want to interact with it throughout your day. You've had a chance to play around with the new Meta AI app that we're releasing, and one of the fun things we put in there is the demo for full-duplex voice.

It's early; there's a reason why we haven't made that the default voice model in the app. But there's something about how naturally conversational it is that I think is really fun and compelling, and being able to mix that with the right personalization

is going to lead toward a product experience where, I would guess, a few years forward we're just going to be talking to AI throughout the day about the different things we're wondering.

You'll have your phone. You'll talk to it on your phone. You'll talk to it while you're browsing your feed apps. It'll give you context about different stuff. It'll answer your questions. It'll help you as you're interacting with people in messaging apps. Eventually, I think, we'll walk through our daily lives with glasses or other kinds of AI devices and just seamlessly interact with it all day long. So that is the North Star, and whatever benchmarks lead toward people feeling like that's the quality they want to interact with, that is ultimately going to matter most to us. I got a chance to play around with both Orion and also the Meta AI app, and the voice mode was super smooth. It was quite impressive. On the point of what the different labs are optimizing for:

To steelman their view, I think a lot of them think that once you fully automate software engineering and AI research, you can kick off an intelligence explosion where you have millions of copies of these software engineers replicating the research that happened between Llama 1 and Llama 4, that scale of improvement again, in a matter of weeks or months rather than years.

And so it really matters to close the loop on the software engineer first, and then you can be the first to ASI. What do you make of that? Well, I personally think that's pretty compelling, and that's why we have a big coding effort too. We're working on a number of coding agents inside Meta.

Because we're not really an enterprise software company, we're primarily building them for ourselves. So again, we go for the specific goal: we're not trying to build a general developer tool, we're trying to build a coding agent and an AI research agent that advances Llama research specifically,

and is fully plugged into our toolchain and all of that. So I think that's important and is going to end up being an important part of how this stuff gets done. I would guess that sometime in the next 12 to 18 months,

we'll reach the point where most of the code that's going toward these efforts is written by AI. And I don't mean autocomplete. Today you have good autocomplete: you start writing something and it can complete the section of code. I'm talking more about giving it a goal; it can run tests,

it can improve things, it can find issues, and it writes higher-quality code than the average very good person on the team already. I think that's going to be a really important part of this for sure. But I don't know if that's the whole game. It's going to be a big industry, and an important part of how AI gets developed.

But look, one way to think about this is that this is a massive space. I don't think there's just going to be one company with one optimization function that serves everyone as well as possible. I think there are a bunch of different labs

that are going to be doing leading work in different domains. Some are going to be more enterprise or coding focused. Some are going to be more productivity focused. Some are going to be more social or entertainment focused. Within the assistant space, some are going to be more informational or productivity oriented, and some more companion focused.

And a lot of the stuff is just going to be fun and entertaining and show up in your feed. So I think there's just a huge amount of space, and part of what's fun about this is that,

going toward this AGI future, there are a bunch of common threads for what needs to get invented, but there are a lot of things at the end of the day that need to get created. So I think you'll start to see a little more specialization between the groups, if I had to guess.

It's really interesting to me that you basically agree with the premise that there will be an intelligence explosion and something like superintelligence on the other end. But if that's the case, and tell me if I'm misunderstanding you, why even bother with personal assistants and the rest? Why not just get to superhuman intelligence first and then deal with everything else later? Well, I think that's just one aspect of the flywheel.

Part of what I generally disagree with on the fast-takeoff picture is that it takes time to build out physical infrastructure. If you want to build a gigawatt cluster of compute, that just takes time. It takes NVIDIA a bunch of time to stabilize their new generation of systems. Then you need to figure out the networking around it. Then you need to build the building, you need to get permitting, and

you need to get the energy, whether that's gas turbines or green energy; there's a whole supply chain for that stuff. We talked about this a bunch the last time I was on the podcast with you. Some of these are just physical-world, human-time things, and as you start getting more intelligence in one part of the stack,

you'll basically just run into a different set of bottlenecks. That's the way engineering always works: you solve one bottleneck, you get another bottleneck. Yeah. Another bottleneck, or another ingredient that's going to make this work well, is people getting used to the system, learning it, and having a feedback loop with it. These systems don't tend to be the type of thing where

something just shows up fully formed and people magically know how to use it, and that's the end. There's a co-evolution that happens: people are learning how to best use these AI assistants, the AI assistants are learning what those people care about, and the developers of the AI assistants are able to make them better. And then you're also building up this base of context. So now you wake up

a year or two into it, and the AI assistant can reference things you talked about a couple of years ago, which is pretty cool. But you couldn't get that if you just launched the perfect thing on day one. There's no way it could reference what you talked about two years ago if it didn't exist two years ago.

I guess my view is: there's this huge intelligence growth; there's a very rapid curve on the uptake of people interacting with AI assistants, and the learning feedback and data flywheel around that; and then there's also the build-out of the supply chains, infrastructure, and regulatory frameworks to enable scaling a lot of the physical infrastructure. At some level, all of those are going to be necessary, not just the coding piece.

I guess one specific example of this that I think is interesting: even if you go back a few years, we had a project, I think on our ads team, to automate ranking experiments. That's a pretty constrained environment. It's not writing open-ended code; it's basically looking at the whole history of the company, every experiment any engineer has ever done in the ads system,

looking at what worked, what didn't, and what the results were, and formulating new hypotheses for different tests we should run that could improve the performance of the ads. What we basically found was that we were bottlenecked on compute to run the tests,

given the number of hypotheses. It turns out that even with just the humans we have right now on the ads team, we already have more good ideas to test than we have either compute or cohorts of people to test them with. Even if you have

three and a half billion people using your products, each test needs to be statistically significant, so it needs some number of people, whatever it is, hundreds of thousands or millions. And there's only so much throughput you can get on testing through that. So we're already at the point, even with just the people we have,

that we can't really test everything we want. So just being able to generate more things to test is not necessarily going to be additive. We need to get to the point where the average quality of the hypotheses the AI is generating is better than everything above the line that we're actually able to test, what the best humans on the team have been able to do, before it will even be marginally useful.

We'll get there, I think pretty quickly. But it's not like, okay, cool, the thing can write code, and all of a sudden everything is improving massively. There are these real-world constraints: first it needs to be able to do a reasonable job, then you need to have the compute and the people to test with, and then over time the quality creeps up.
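To make the statistical-significance constraint concrete, here is a rough sketch of the standard two-proportion sample-size calculation. The numbers are illustrative, not Meta's, but they show why each test burns through hundreds of thousands or millions of users.

```python
from statistics import NormalDist

def users_per_arm(base_rate: float, relative_lift: float,
                  alpha: float = 0.05, power: float = 0.8) -> int:
    # Users needed in each arm of an A/B test to detect a relative lift
    # in a conversion rate, using the usual two-proportion z-test formula.
    p1, p2 = base_rate, base_rate * (1 + relative_lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    n = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p2 - p1) ** 2)
    return int(n) + 1

# Detecting a 1% relative lift on a 2% base rate takes ~7.7M users per arm,
# which caps how many hypotheses you can test, no matter who wrote them.
print(users_per_arm(base_rate=0.02, relative_lift=0.01))
```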

Are we here in five or ten years, where no set of people can generate a hypothesis as good as the AI system? I don't know, maybe. In that world, obviously, that's going to be how all the value gets created. But that's not the first step.

Publicly available data is running out, so major AI labs like Meta, Google DeepMind, and OpenAI all partner with Scale to push the boundaries of what's possible.

Through Scale's Data Foundry, major labs get access to high-quality data to fuel post-training, including advanced reasoning capabilities. Scale's research team, SEAL, is creating the foundations for integrating advanced AI into society through practical AI safety frameworks and public leaderboards around safety and alignment. Their latest leaderboards include Humanity's Last Exam, EnigmaEval, MultiChallenge, and VISTA,

which test a range of capabilities, from expert-level reasoning to multimodal puzzle solving to performance on multi-turn conversations. Scale also just released Scale Evaluation, which helps diagnose model limitations. Leading frontier model developers rely on Scale Evaluation to improve the reasoning capabilities of their models. If you're an AI researcher or engineer and you want to learn more about how Scale's Data Foundry and research lab can help you go beyond the current frontier of capabilities,

go to scale.com/dwarkesh. So if you buy this view that this is where intelligence is headed, the reason to be bullish on Meta is obviously that you have all this distribution, which you can also use to learn things that are useful for training. You mentioned Meta AI has almost a billion active users. Not the app. The app is a

standalone thing that we're just launching now. I think it's fun for people who want to use it. It's a cool experience, and we're experimenting with some new ideas in there that I think are novel and worth talking through. But I'm talking mostly about our apps. Meta AI is actually most used in WhatsApp. Got it. So it's in WhatsApp.

WhatsApp is mostly used outside of the US. We just passed 100 million people in the US, but it's not the primary messaging system there; iMessage is. So I think people in the US probably tend to underestimate Meta AI use. That's also part of the reason the standalone app is going to be so important: the US is, for a lot of reasons, one of the most important countries, and the fact that WhatsApp, the main way people are using Meta AI, is

not the main messaging system in the US means we need another way to build a first-class experience that's in front of people. And I guess, to finish the question, the bearish case would be that if the future of AI is less about just answering your questions and more about being a virtual coworker, it's not clear how Meta AI inside of WhatsApp gives you the relevant training data to make a fully autonomous programmer or remote worker. So,

yeah, in that case, does it not matter that much who has more distribution right now with LLMs? Well, again, I just think there are going to be different things. If you were sitting at the beginning of the development of the internet and asking, what's going to be the main internet thing, is it going to be knowledge work or massive consumer apps,

the answer is, I don't know, you get both. You don't have to choose one. The world is big and complicated. Does one company build all that stuff? Normally the answer is no. But to your question, people do not code in WhatsApp for the most part, and I don't foresee people starting to write code in WhatsApp becoming

a major use case. Although I do think people are going to ask the AI to do a lot of things that result in the AI coding without them necessarily knowing it. But that's a separate thing. We do have a lot of people writing code at Meta, and they use Meta AI. We have this internal thing that we call MetaMate, and a number of different coding and AI research agents that we're building around it.

That has its own feedback loop, and I think it can get good at accelerating those efforts. But again, there are going to be a bunch of things. AI is almost certainly going to unlock this massive revolution in knowledge work and code. I also think it's going to be the next generation of search and how people get information and do more complex information tasks. And I also think it's going to be fun; people are going to use it to be entertained.

A lot of the internet is memes and humor. We have this amazing technology at our fingertips, and it's sort of amazing, and kind of funny when you think about it, how much human energy goes toward entertaining ourselves, pushing culture forward, and finding humorous ways to explain the cultural phenomena we observe.

I think that's almost certainly going to be the case in the future too. Look at the evolution of things like Instagram and Facebook. If you go back 10, 15, 20 years, it was text. Then we all got phones with cameras, and most of the content became photos. Then the mobile networks got good enough that if you wanted to watch a video on your phone, it wasn't just buffering, so that got good, and over the last 10 years most of the content has moved

toward video. At this point, most of the time spent in Facebook and Instagram is video. But do you think in five years we're just going to be sitting in our feeds consuming video? No, it's going to be interactive. You'll be scrolling through your feed and there will be content that

maybe looks like a Reel to start, but then you talk to it, or you interact with it and it talks back, or it changes what it's doing, or you can jump into it like a game and interact with it. And that's all going to be AI. So my point is there are all these different things, and we're ambitious, so we're working on a bunch of them. But I don't think any one company is going to do all of it.

Okay, so on this point of AI-generated content and AI interactions: already people have meaningful relationships with AI therapists, AI friends, maybe more, and this is just going to get more intense as these AIs become more unique, more personable, more intelligent, more spontaneous, more funny, and so forth. People are going to have relationships with AIs. How do we make sure these are healthy relationships?

There are a lot of questions you can only really answer as you start seeing the behaviors. So probably the most important thing upfront is just to ask that question and care about it at each step along the way. But being too prescriptive upfront, and saying we think these things are not good, often cuts off value. Because people use stuff that's valuable for them, and one of my core guiding principles in designing products is that

people are smart. They know what's valuable in their lives. Every once in a while something bad happens in a product, and you want to design your product well to minimize that. But if you think something someone is doing is bad, most of the time, in my experience, they're right and you're wrong, and you just haven't come up with the framework yet for understanding why the thing they're doing is valuable and helpful in their life.

Yeah, so that's the main way I think about it. I do think people are going to use AI for a lot of these social tasks. Already, one of the main things we see people using Meta AI for is talking through difficult conversations they need to have with

people in their lives. It's like, okay, I'm having this issue with my girlfriend, help me have this conversation; or, I need to have this hard conversation with my boss at work, how do I have that conversation? That's pretty helpful. And as the personalization loop kicks in and the AI starts to get to know you better and better, I think that will be really compelling. One thing from working on social media for a long time

is there's this stat I always think is crazy: the average American, I think, has three people they'd consider friends, and the average person has demand for meaningfully more; I think it's something like 15 friends. I guess there's probably some point where you're like, all right, I'm just too busy, I can't deal with more people.

But the average person wants more connection than they have. So there are a lot of questions people ask, like, okay, is this going to replace in-person connections or real-life connections? My default is that the answer is probably no. There are all these things that are better about physical connections, when you can

have them. But the reality is that people just don't have as much connection as they'd like, and they feel more alone a lot of the time than they would like. So for a lot of these things that today might carry a bit of a stigma, I would guess that over time we will find the vocabulary, as a society,

to articulate why they are valuable, why the people doing them are rational for doing it, and how it is actually adding value to their lives. But the field is also very early. There are a handful of companies doing virtual therapists, and there's virtual girlfriend type stuff, but

it's very early. The embodiment in those things is pretty weak. A lot of them, you open it up and it's just an image of the therapist or the person you're talking to, sometimes with some very rough animation, but it's not a real embodiment. You've seen the stuff we're working on in Reality Labs, where

you have the Codec Avatars and it feels like a real person. I think that's where this is going. You'll basically be able to have an always-on video chat.

The AI will be able to gesture too, and gestures are important: more than half of communication, when you're actually having a conversation, is not the words you speak, it's all the nonverbal stuff. Yeah. I did get a chance to check out Orion the other day, and I thought it was super impressive. And I'm mostly optimistic about the technology, just because generally, as you mentioned, I'm libertarian about it: if people are doing something, I

probably think it's good for them. Although I actually don't know if it's the case that somebody using TikTok would say they're happy with how much time they're spending on it. So I'm mostly optimistic about it, also in the sense that, if we're going to be living in this future world of AGI, then in order to keep up with it, humans need to be upgrading our capabilities as well, with tools like this.

And just generally, there can be more beauty in the world if you can see Studio Ghibli everywhere or something. But I was worried that one of the flagship use cases your team showed me was: I'm sitting at the breakfast table, and on the periphery of my vision is a bunch of Reels scrolling by. Maybe in the future my girlfriend is on the other side of the screen or something.

So I am worried that we're just removing all the friction that stands between us and getting totally reward-hacked by our technology. How do we make sure that's not what ends up happening in five years? I mean, again, I think people have a good sense of what they want. That experience you saw was a demo just to show multitasking and holograms. And I agree: I don't think the future is that you have

stuff trying to compete for your attention in the corner of your vision all the time. I don't think people would like that very much. One of the things we're really mindful of as we're designing these glasses is that probably the number one thing glasses need to do is get out of the way and be good glasses. As an aside, I think that's part of the reason the Ray-Ban Meta product has done

so well: it's great for listening to music, taking phone calls, and taking photos and videos, and the AI is there when you want it. But when you don't, it's a great, good-looking pair of glasses that people like, and it gets out of the way well. I would guess that's going to be a very important design principle for the augmented reality future. The main thing I see here is, I think it's kind of crazy that,

for how important the digital world is in all of our lives, the only way we can access it is through these physical digital screens. You have a phone, you have your computer, you can put up a big TV; it's this huge physical thing. It seems like we're at the point with technology where the physical and digital worlds should really be fully blended, and that's what the holographic overlays allow you to do.

I agree. I think a big part of the design principles around that are going to be that you'll be interacting with people and you'll be able to bring digital artifacts into those interactions and do cool things very seamlessly. If I want to show you something, here's a screen, here it is, I can show you, you can interact with it, it can be 3D, we can play with it.

You want to play a card game or whatever? All right, here's a deck of cards we can play with. The two of us are here physically, and you have a third friend who's just hologramming in, and they can participate too. But in that world, just as you don't want your physical space to be cluttered, because it wears on you psychologically,

I don't think people are going to want their digital-physical space to feel that way either. That's more of an aesthetic, and one of these norms that will have to get worked out. But I think we'll figure that out. Going back to the AI conversation: you mentioned how big a bottleneck the physical infrastructure can be, and that relates to other open source models like DeepSeek and so forth.

DeepSeq right now has less compute than a lab like meta, and you could argue that it's competitive with the Lama models. If China is better at... you know, physical infrastructure, industrial scale-ups, getting more power and more data centers online. How worried are you that they might beat us here? I mean, I think it's like a real competition. I mean, I think that you're seeing the...

the industrial policies really play out um where yeah i mean i think china's bringing online more power and because of that I think that the U.S. really needs to focus on streamlining the ability to build data centers and build and produce energy, or I think we will be at a significant disadvantage.

At the same time, on some of the export controls on things like chips, you can see how they're clearly working, in a way. There was all this conversation about DeepSeek doing these very impressive low-level optimizations, and

they did, and it is impressive. But then you ask: why did they have to do that when none of the American labs did? And the answer is, because they're using partially nerfed chips, the only ones NVIDIA is allowed to sell in China because of the export controls. So DeepSeek basically had to go spend a bunch of their calories and time

doing low-level infrastructure optimizations that the American labs didn't have to do. Now, they produced a good result on text; DeepSeek is text-only. The infrastructure is impressive, and the text result is impressive. But every new major model that comes out now is multimodal: image, voice. And theirs isn't. So the question is, why is that the case? I don't think it's because they're not

capable of doing it. I think they had to spend their calories on those infrastructure optimizations to overcome the fact that there were these export controls. And when you compare Llama 4 with DeepSeek, our reasoning model isn't out yet, so the R1 comparison isn't clear yet, but we're basically

in the same ballpark on all the text stuff as what DeepSeek is doing, with a smaller model, so it's much more efficient: the cost per intelligence is lower with what we're doing for Llama on text.

And then on all the multimodal stuff we're effectively leading, and it just doesn't exist in their stuff. So I think the Llama 4 models, when you compare them to what they're doing, are good, and I think generally people are going to prefer to use the Llama 4 models. But there is this interesting contour where

it's clearly a good team doing stuff over there, and I think you're right to ask about the accessibility of power, compute, and chips, because I think the kind of work you're seeing the different labs do is somewhat downstream of that.

Freemium products attract a ton of fake account signups, bot traffic, and free-tier abuse.

And AI is so good now that it's basically useless to just have a captcha of six squiggly numbers on your signup page. Take Cursor: people were going to insane lengths to take advantage of Cursor's free credits, creating and deleting thousands of accounts, sharing logins, even coordinating through Reddit. And all this was costing Cursor a ton of money in terms of inference compute and LLM API calls. Then they plugged in WorkOS Radar.

Radar distinguishes humans from bots. It looks at over 80 different signals, from your IP address to your browser to even the fonts installed on your computer, to ensure that only real users can get through. Radar currently runs millions of checks per week, and when you plug Radar into your own product, you immediately benefit from the millions of training examples Radar has already seen through other top companies.

Previously, building this level of advanced protection in-house was only possible for huge companies. But now, with WorkOS Radar, advanced security is just an API call away. Learn more at workos.com/radar. All right, back to Zuck. So, Sam Altman recently tweeted that OpenAI is going to release an open source SOTA reasoning model, and I think part of the tweet was that they will not do anything silly like saying you can only use it if you have fewer than 700 million users.

DeepSeek has the MIT license, whereas a couple of the conditions in the Llama license require you to say "Built with Llama" on applications using it, and any model you train using Llama has to begin its name with "Llama." What do you think about the license? Should it be less onerous for developers? I mean, look, we basically pioneered the open source

LLM thing, so I don't consider the license to be onerous. When we were starting to push on open source, there was this big debate in the industry: is this even a reasonable thing to do? Can you do something safe and trustworthy with open source? Will open source ever be competitive enough that anyone will even care?

We were basically answering those questions, and a lot of the hard work came from the teams at Meta, although there were other folks in the industry too; but really the Llama models were the ones that I think broke this whole open source AI thing open in a huge way. We were very focused on: okay, if we're going to put all this energy into it,

then at a minimum, if you're going to have these large cloud companies like Microsoft and Amazon and Google turn around and sell our model, we should at least be able to have a conversation with them before they do that, around what kind of business arrangement we should have.

But our goal with the license isn't to stop people from using the model. We just think, okay, if you're one of those companies, or if you're Apple, come talk to us about what you want to do, and let's find a productive way to do it together. So I think that's generally been fine.

If the whole open source part of the industry evolves in a direction where there are a lot of other great options, and the license ends up being a reason people don't want to use Llama, then we'll have to reevaluate whether the strategy makes sense at that point. But

I just don't think we're there. In practice, we haven't seen companies coming to us saying, we don't want to use this because your license says that if you reach 700 million people you have to come talk to us.

At least so far, it's a little more something we've heard from open source purists: is this as clean an open source model as you'd like it to be? And look, that debate has existed since the beginning of open source, with

all the GPL license stuff versus other things: does anything that touches open source have to be open source, or can people take it and use it in different ways? I'm sure there will continue to be debates around this. But if you're spending many billions of dollars training these models, then asking the other companies that

are also huge, similar in size, and can easily afford to have a relationship with us, to talk to us before they use it, seems like a pretty reasonable thing. Now, there are a bunch of good open source models from others, so that part of your mission is fulfilled, and maybe other models are better at coding. Is there a world where you just say, look,

the open source ecosystem is healthy, there's plenty of competition, and we're happy to just use some other model, whether for internal software engineering at Meta or for deploying in our apps; we don't necessarily need to build with Llama? Well, again, we do a lot of things, so it's possible. But let's take a step back.

The reason we're building our own big models is that we want to be able to build exactly what we want, and none of the other models in the world are exactly that. If they're open source, you can take them

and fine-tune them in different ways, but you still have to deal with the model architectures, and they make different size trade-offs that affect the latency and inference cost. At the scale we operate at, that stuff really matters. We made the Llama Scout and Maverick models certain sizes for a specific reason: they fit on a host, and we wanted certain latency, especially for the voice models

that we want to pervade everything we're doing, from the glasses to all of our apps to the Meta AI app. So there's a level of control over your own destiny that you only get when you build the stuff yourself. That said, AI is going to be used in every single thing that every company does,

and when we build a big model, we also need to choose which use cases we're going to optimize for internally. So for certain things, if we think Claude is better for building some specific development tool that a team is using, then cool, use that. Fine, great. We don't want to fight with one hand tied behind our back. We're doing a lot of different stuff.

You also asked, would it not matter, because other people are doing open source? On this I'm a little more worried, because I think you have to ask: for anyone who shows up doing open source now that we have done it, would they still be doing open source if we weren't? I think there are a handful of folks who see the trend that more and more development is going

toward open source, and they think, oh crap, we need to be on this train or we're going to lose; we have some closed model API, and increasingly that's not what a lot of developers want. So you're seeing a bunch of the other players start to do some work in open source, but

it's unclear whether it's dabbling or fundamental for them in the way it has been for us. A good example is what's going on with Android. Android started off as the open source thing, and there's not really any open source alternative, and over time Android has just been getting more and more closed. So if you're us, you kind of need to worry

that if we stopped pushing the industry in this direction, all these other people, who maybe are only doing it because they're trying to compete with us in the direction we're pushing things, would stop too. They already have their revealed preference for what they would build if open source didn't exist, and it wasn't open source. So I just think we need to be careful about

relying on that continued behavior for the future of the technology we're going to build at the company. Another thing I've heard you mention is that it's important that the standard gets built around American models like Llama. I wanted to understand your logic there, because with certain kinds of networks, like the Apple App Store, there is a big contingency around what gets built on top of them.

But it doesn't seem like, if you build some sort of scaffold for DeepSeek, you couldn't easily switch it over to Llama 4, especially since things change between generations of models too: Llama 3 wasn't MoE, Llama 4 is. What's the reason for thinking things will get built out in this contingent way on a specific standard? I'm not sure. What do you mean by contingent?

As in, it's important that people are building for Llama rather than for LLMs in general, because that will determine what the standard is in the future. Sure. Well, look, I think these models encode values and ways of thinking about the world. We had this interesting experience early on:

we took an early version of Llama and translated it, I think into French, and the feedback we got from French people was, this sounds like an American who learned to speak French; it doesn't sound like a French person. And we asked, does it not speak French well? No, it speaks French fine. It's just that the way it thinks about the world seems slightly American.

So there are these subtle things that get built into the models. Over time, as the models get more sophisticated, they should be able to embody different value sets across the world. So maybe that's not a particularly sophisticated example, but I think it illustrates the point. And some of the stuff we've seen in testing, in some of the models coming out of China especially, is that they have certain values encoded in them,

and it's not just a light fine-tune to get them to feel the way you want. Now, different kinds of models are different. Language models, or anything that has a world model embedded in it, carry more values. With reasoning,

I guess there are values in ways of thinking about reasoning too, but one of the things that's nice about reasoning models is that they're trained on verifiable problems. So do you need to be worried about cultural bias if your model is doing math?

Probably not. The chance that some reasoning model built elsewhere is going to incept you by solving a math problem in a devious way seems low. There's a whole set of different issues around coding, though, which is the other verifiable domain. There you do need to be worried about waking up

one day and finding that a model with some tie to another government can embed all kinds of vulnerabilities in code that the intelligence organizations associated with that government can then go exploit. So now imagine

some future version where you have a model from some other country that you're using to secure or build out a lot of your systems, and then all of a sudden you wake up and everything is vulnerable in a way that that country knows about but you don't, or it turns on a vulnerability at some point. Those are real issues.

What we've basically found, and I'm very interested in studying this, because I think one of the main things that's interesting about open source is the ability to distill models, is that the primary value isn't just taking a model off the shelf and saying, okay, Meta built this version of Llama, I'm going to run it exactly as-is in my application. Your application

isn't doing anything different if you're just running our thing. You're at least going to fine-tune it, or try to distill it into a different model. And when we get to stuff like the Behemoth model,

the whole value is being able to take this very high amount of intelligence and distill it down into a smaller model that you're actually going to want to run. That's the beauty of distillation, and it's one of the things that has really emerged as a powerful technique in the year since the last time we sat down. It has worked better than most people would have predicted: you can take a model that is much bigger and

keep probably 90 or 95% of its intelligence while running it at 10% of the size. Do you get 100% of the intelligence? No, but 95% of the intelligence at 10% of the cost is pretty good for a lot of things. The other interesting thing is that with this more varied open source community, where it's not just Llama and you have other models, you have the ability to distill from multiple sources.
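For readers who want the mechanics: the classic form of what's being described here is soft-label distillation (Hinton et al., 2015), where a small student is trained to match a big teacher's output distribution. A rough PyTorch sketch, with hyperparameters chosen purely for illustration; the multi-teacher averaging at the end is one simple way to mix several source models, not Meta's stated method.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soften both distributions, match the student to the teacher via KL,
    # and blend in the ordinary hard-label cross-entropy loss.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

def multi_teacher_soft_targets(teacher_logits_list, temperature=2.0):
    # One crude way to distill from several open models at once:
    # average their softened output distributions.
    probs = [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    return torch.stack(probs).mean(dim=0)
```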

So now you can basically say: Llama is really good at this, and maybe the architecture is really good because it's fundamentally multimodal, more inference-friendly, and more efficient; but let's say this other model is better at coding. Well, you can distill from both of them and build something that's better than either of them for your own use case. So that's cool. But you do need to solve the security problem of knowing that you can distill

in a way that is safe and secure. This is something we've been researching and have put a lot of time into, and where we've basically come out is: look, anything that's language-like

is quite fraught, because there are a lot of values embedded in it. So unless you don't care about inheriting the values of whatever model you got, you probably don't want to distill the straight language world model. On reasoning, I think you can get a lot of the way there by limiting it to verifiable domains and running code cleanliness and security filters,

whether that's the Llama Guard or Code Shield open source things we've done, which basically let you screen what goes into your models and make sure that both the input and the output are secure.

And then a lot of red teaming, so you have people, experts, looking at it and asking: is this model doing anything I don't want after distilling from something? I think with a combination of those techniques you can probably distill on the reasoning side, for verifiable domains, quite securely. That's something I'm pretty confident about, and it's something that we've done a lot of research around.
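The shape of that pipeline, screening the input, screening the output, and keeping humans red-teaming on top, can be sketched in a few lines. The `is_flagged` heuristic below is a toy stand-in, not the actual Llama Guard or Code Shield interface, and `model_generate` is any text-generation callable you supply.

```python
def is_flagged(text: str) -> bool:
    # Toy stand-in for a trained safety classifier (e.g. a Llama Guard-style
    # model); a real filter would score the text, not match substrings.
    suspicious = ("os.system(", "eval(", "curl http")
    return any(s in text for s in suspicious)

def guarded_generate(model_generate, prompt: str) -> str:
    # Screen both sides of the model call: refuse unsafe input,
    # and withhold unsafe output before it reaches the caller.
    if is_flagged(prompt):
        return "[input rejected by safety filter]"
    output = model_generate(prompt)
    if is_flagged(output):
        return "[output withheld by safety filter]"
    return output

# Example with a trivial stand-in "model":
print(guarded_generate(lambda p: "print('hello')", "write a greeting"))
```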

I think this is a very big question: how do you do good distillation? There's just so much value to be unlocked. But at the same time, I do think there is some fundamental bias in the different models.

Speaking of value to be unlocked: what do you think the right way to monetize AI will be? Obviously digital ads are quite lucrative, but as a fraction of total GDP they're small compared to all remote work. Even if AI only increases the productivity of that work rather than replacing it, that's still worth tens of trillions of dollars. So is it possible that ads might not be it? How do you think about this?

Like we were talking about before, there are going to be all these different applications, and different applications tend toward different things.

Ads are great when you want to offer people a free service. Because it's free, you need to cover the cost somehow, and ads solve that problem: a person doesn't need to pay for something, and they can get something amazing for free.

And by the way, with modern ad systems, if you do it well, a lot of the time the ads actually add value to the thing. You need to be good at ranking, and you need enough liquidity of advertising inventory. If you only have five advertisers in the system, then no matter how good you are at ranking, you may not be able to show someone something they're interested in. If you have a million advertisers in the system, you're probably going to be able to find something pretty compelling, as long as you're good at picking out the needles in the haystack that person is going to be interested in. So I think ads definitely have their place.
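To make the liquidity point concrete, here is a toy simulation, purely illustrative rather than any real ad system: each advertiser gets a random relevance score for a user, the ranker picks the best one, and the quality of that best match rises sharply with the size of the pool even though the ranker itself never changes.

```python
# Toy model of ad-inventory "liquidity": more candidate advertisers means
# a better best match for each user, with the ranking rule held fixed.
import random

random.seed(0)

def avg_best_match(num_advertisers: int, users: int = 200) -> float:
    """Average relevance of the top-ranked ad across simulated users."""
    total = 0.0
    for _ in range(users):
        # Each advertiser's relevance to this user, uniform in [0, 1].
        scores = (random.random() for _ in range(num_advertisers))
        total += max(scores)  # the ranker surfaces the best candidate it has
    return total / users

print(f"5 advertisers:     {avg_best_match(5):.3f}")      # roughly 0.83
print(f"5,000 advertisers: {avg_best_match(5_000):.3f}")  # roughly 0.9998
```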

There are also clearly going to be other business models, including ones that just have higher costs, so it doesn't even make sense to offer them for free. And by the way, there have always been business models like this. There's a reason social media is free and ad-supported while something like ESPN you need to pay for: the content going into it is very expensive to produce, and they probably couldn't run enough ads in the service to make up for the cost of producing it. So you just have to pay to access it. The trade-off is that fewer people use it: you're talking about hundreds of millions of people using those services instead of billions. So there's a value switch there.

Not everyone is going to want a software engineer or a thousand software engineering agents or whatever it is. But if you do, that's something that you are probably going to be willing to pay thousands or tens of thousands or hundreds of thousands of dollars for.

So I think this just speaks to the diversity of things that need to get created: there are going to be business models at each point along the spectrum. At Meta, for the consumer piece, we definitely want to have a free thing, and I'm sure that will end up being ad-supported.

But I also think we're going to want a business model that supports people using arbitrary amounts of compute to do even more amazing things than what it would make sense to offer in the free service, and for that, I'm sure we'll end up having a premium service. But our basic value here is that we want to serve as many people in the world as we can.

Lambda is the cloud for AI developers.

They have over 50,000 NVIDIA GPUs ready to go for startups, enterprises, and hyperscalers. Compute seems like a commodity though, so why use Lambda over anybody else? Well, unlike other cloud providers, Lambda's only focus is AI. This means their GPU instances and on-demand clusters have all the tools AI developers need pre-installed. There's no need to manually install CUDA or drivers, or to manage Kubernetes. And if you only need GPU compute, you can save a ton of money by not paying for the overhead of general-purpose cloud architecture. Lambda even has contracts that let enterprises use any type of GPU in their portfolio and easily upgrade to the next generation.

For all of you wanting to build with Llama 4, Lambda has a serverless API without rate limits. It's built with rapid scaling in mind: users can 1,000x their inference consumption without ever having to apply for a quota or even speak to a human. Head to lambda.ai for a free trial of their inference API, featuring the best open source models like DeepSeek and Llama 4 at the lowest prices in the industry. All right, back to Zuck.

How do you keep track of all these different projects? Some of them we've talked about today, and I'm sure there are many I don't even know about.

As the CEO overseeing everything, there's a big spectrum between going to the Llama team and saying, here are the hyperparameters you should use, and just giving a mandate like, go make the AI better. How do you think about where you can best deliver your value-add across all these projects?

Well, a lot of what I spend my time on is trying to get awesome people onto the teams.

So there's that. Then there's stuff that cuts across teams: you build Meta AI and you want to get it into WhatsApp or Instagram, so now I need to get those teams talking to each other. And then there are questions like: do you want the Meta AI thread in WhatsApp to feel like other WhatsApp threads, or do you want it to feel like other AI chat experiences? There are different idioms for those.

So there are all these interesting questions that need to get answered around how this stuff fits into everything we're doing. Then there's a whole other part, which is pushing on the infrastructure. If you want to stand up a gigawatt cluster, that has a lot of implications for the way we're doing infrastructure build-outs. It has political implications for how you engage with the different states where you're building. It has financial implications for the company: there's a lot of economic uncertainty in the world, so do we go double down on infrastructure right now, and if so, what other trade-offs do we want to make around the company? Those are decisions that are tough for other people to really make.

And then there's this question around taste and quality: when is something good enough that we want to ship it? In general, I feel like I'm the steward of that for the company, although we have a lot of other people with good taste as well who are also filters for different things. So I think those are basically the areas.

But AI is interesting because, more than some of the other stuff we do, it's research- and model-led rather than really product-led. You can't just design the product you want and then try to build the model to fit into it. You really need to design the model and the capabilities you want first, and then you get some emergent properties and realize you can build different stuff because things turned out a certain way. At the end of the day, people want to use the best model. That's partially why, when we talk about building the most personal AI, with the best voice and the best personalization experience at very low latency, those are the things we need to design the whole system around. It's why we're working on full-duplex voice; it's why we're working on personalization, both to have good memory extraction from your interactions with the AI and to plug into all the other Meta systems; and it's why we designed the specific models we did, with the size and latency parameters they have.
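As a purely hypothetical illustration of the memory-extraction idea, here is a minimal loop where a model is asked to pull durable facts out of each message and bank them for later personalization. The `llm` callable, the prompt, and `update_memory` are invented stand-ins, not Meta's actual system.

```python
# Sketch of a memory-extraction loop: after each user message, ask a model
# for durable facts and add them to a persistent memory store.
from typing import Callable

EXTRACT_PROMPT = (
    "List any durable facts about the user in this message, one per line. "
    "Reply with NONE if there are none.\n\nMessage: {message}"
)

def update_memory(message: str, memory: set[str],
                  llm: Callable[[str], str]) -> set[str]:
    reply = llm(EXTRACT_PROMPT.format(message=message))
    if reply.strip() != "NONE":
        memory |= {line.strip() for line in reply.splitlines() if line.strip()}
    return memory

# Toy stand-in model so the sketch runs end to end.
fake_llm = lambda prompt: "likes hiking" if "hiking" in prompt else "NONE"
memory: set[str] = set()
memory = update_memory("I went hiking this weekend!", memory, fake_llm)
print(memory)  # {'likes hiking'}
```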

Speaking of politics, there's been this perception that some tech leaders have been aligning with Trump. You and others donated to his inaugural event and were on stage with him, and I think you settled a lawsuit that resulted in him getting $25 million. I wonder what's going on here. Does it feel like the cost of doing business with an administration? What's the best way to think about this?

My view is that our default as an American company should be to try to have a productive relationship with whoever is running the government, and I'd take that approach with any administration. We tried to offer support to previous administrations as well. I've been pretty public with some of my frustrations with the previous administration, how they basically did not engage with us or the business community more broadly, which,

frankly, I think is going to be necessary to make progress on some of these things. We're not going to be able to build the level of energy we need if there's no dialogue and they're not prioritizing those things. But fundamentally, I think a lot of people want to write this story about what direction people are going. I just think we're trying to build great stuff, and we want to have a productive relationship with people. That's how I see it, and it's also how I would guess most others see it, although obviously I can't speak for them.

You've spoken about how you've rethought some of the ways you engage with and defer to the government on moderation in the past.

How are you thinking about AI governance? Because if AI is as powerful as we think it might be, the government will want to get involved. What's the most productive approach to take there, and what should the government be thinking about?

In the past, most of the comments I made were in the context of content moderation. It's been an interesting journey over the last 10 years, and obviously an interesting time in history. Novel questions have been raised about online content moderation. Some of those led to productive new systems getting built, like our AI systems for detecting nation-states trying to interfere in each other's elections. We'll continue building that out, and I think it's been net positive. Other stuff took us down some bad paths. I just think the fact-checking thing was not as effective as Community Notes.

It's not an internet-scale solution: there weren't enough fact-checkers, and people didn't trust the specific fact-checkers. You want a more robust system, and I think what we got with Community Notes is the right one.

But my point was more that historically, I probably deferred a little too much to either the media and their critiques or the government on things they did not really have authority over. I think we tried to build systems where maybe we wouldn't have to make all of the content moderation decisions ourselves. Part of the growth process is recognizing that we're a meaningful company and we need to own the decisions we make. We should listen to feedback from people, but we shouldn't defer too much to people who don't actually have authority over this, because at the end of the day we're in the seat and we need to own the decisions. It's been a maturation process, in some ways painful, but I think we're probably a better company for it.

Will tariffs increase the cost of building data centers in the U.S. and shift build-outs to Europe and Asia?

It's really hard to know how that plays out. I think we're probably in the early innings on that. It's very hard to know.

Got it. What is your single highest-leverage hour in a week? What are you doing in that hour?

I don't know. I mean, every week is a little bit different. And it's probably got to be the case that the most leveraged thing that you do in a week is not the same thing each week, or else by definition, you should probably spend more than one hour doing that thing every week.

Yeah, I don't know. Part of the fun of this job, and of the industry being so dynamic, is that things really move around. The world is very different now than it was at the beginning of the year, or six months ago, or in the middle of last year. A lot has advanced meaningfully, and a lot of cards have been turned over since the last time we sat down. I think that was about a year ago, right?

Yeah. I guess you were saying earlier that recruiting people is a super high-leverage thing you do.

It's very high leverage.

You talked about these models being mid-level software engineers by the end of the year.

What would be possible if, say, software productivity increased 100x in two years? What kinds of things could we build that we can't build right now?

Well, that's an interesting question.

I think one theme of this conversation is that the amount of creativity that's going to be unlocked is going to be massive. If you look at the overall arc of human society and the economy over the last 100 or 150 years, it's basically people going from being primarily agrarian, with most of human energy going toward just feeding ourselves, to that becoming a smaller and smaller percent. The things that take care of our basic physical needs consume a smaller and smaller share of human energy, which has had two impacts: one, more people are doing creative and cultural pursuits; and two, people in general spend less time working and more time on entertainment and culture.

That is almost certainly going to continue as this goes on. This isn't the one-to-two-year question of what happens when you have a super powerful software engineer, but over time, everyone is going to have these superhuman tools to create a ton of different stuff, and you're going to get incredible diversity. Part of it is going to be solving the things we hold up as hard problems, like curing diseases, advancing science, or building technology that makes our lives better. But I would guess a lot of it is going to end up being cultural and social pursuits and entertainment. I'd guess the world is going to get a lot funnier, weirder, and quirkier, the way memes on the internet have over the last 10 years. I think that adds a certain kind of richness and depth.

In funny ways, I think it actually helps you connect better with people. All day long I find interesting stuff on the internet and send it in group chats to the people I care about who I think will find it funny. The media that people can produce today to express very nuanced, specific cultural ideas is cool, and I think that will continue to get built out. It does advance society in a bunch of ways, even if it's not the hard-science way of curing a disease.

I guess this is, if you think about it, the Meta social-media view of the world: people are going to spend a lot more time doing that stuff in the future, and it's going to be a lot better, and it's going to help you connect because it helps express different ideas. The world is going to get more complicated, but our cultural technology for expressing very complicated things in a funny little clip is going to get so much better. So I think that's all great.

One other thought that I think is interesting to cover:

I tend to think that for at least the foreseeable future, this is going to lead to more demand for people doing work, not less. Now, people have a choice of how much time they want to spend working, but I'll give you one interesting example of something we were talking about recently. We have almost 3.5 billion people using our services every day, and one question we've struggled with forever is how to provide customer support. Today, you can write an email.

But we've never seriously been able to contemplate having voice support where someone can just call in. That's maybe one of the artifacts of having a free service: the revenue per person isn't high enough to support an economic model where people can call in.

Also, with three and a half billion people using your service every day, you'd need a massive, massive number of people, the biggest call center in the world type of thing. It would cost something ridiculous, maybe $10 or $20 billion a year, to staff that.

So we never really thought too seriously about it, because it was always clear there was no way it could make sense. But now you're going to get to a place where the AI can handle a bunch of people's issues. Not all of them; maybe 10 years from now it can handle all of them, but on a three-to-five-year horizon it'll be able to handle a bunch, kind of like self-driving cars can handle a bunch of terrain but generally aren't doing the whole route by themselves yet in most cases. People thought truck-driving jobs were going to go away, and there are actually more truck-driving jobs now than there were when we started talking about self-driving cars almost 20 years ago. So, going back to the customer support thing:

It wouldn't make sense for us to staff up phone support for everyone. But let's say the AI can handle 90% of those calls, and whatever it can't handle, it kicks over to a person. If that gets the cost of providing the service down to one-tenth of what it would otherwise have been, then maybe it actually makes sense to do, and that would be kind of cool. So the net result is that I actually think we're probably going to go hire more customer support people.
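The back-of-envelope math is simple. The $15 billion figure below is a placeholder inside the $10-20 billion range mentioned above; the 90% deflection rate comes straight from the example.

```python
# Rough cost model for AI-assisted voice support: humans only handle the
# share of calls the AI kicks over to them.
all_human_cost = 15e9   # placeholder annual cost to staff voice support fully
ai_deflection = 0.90    # share of calls the AI resolves on its own

human_cost_with_ai = all_human_cost * (1 - ai_deflection)
print(f"Residual human staffing: ${human_cost_with_ai / 1e9:.1f}B/year")
# -> $1.5B/year: a tenth of the original cost. That flips the service from
#    "no way this makes sense" to "maybe worth doing", and doing it at all
#    means hiring support staff who were never hired before.
```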

The common belief is that this is clearly just going to automate jobs and all these jobs are going to go away. But that has not really been how the history of technology has worked. You create things that take away 90% of the work, and that leads you to want more people, not fewer.

Yeah. To close off the interview:

I've been playing devil's advocate on a bunch of points, and I really appreciate you being a good sport about it. But I do think there's no upper bound to how much beauty there can be in the world, especially if there are billions of AIs optimizing the amount of beauty you can see and the amount of connection you can have.

Yeah, I'm pretty optimistic about it.

Final question: who is the one person in the world today you most seek out for advice?

Oh, man. Part of my style is that I like having a breadth of advisors, so it's not just one person. We've got a great team: there are people at the company, people on our board, and a lot of people in the industry who are doing new stuff. There isn't a single person. But, I don't know, it's fun; when the world is dynamic, having a reason to work with people you like on cool stuff, to me, that's what life is about.

All right, great note to close on. Awesome. Thanks for doing this.

Yeah, thank you.

I hope you enjoyed this episode. If you did, the most helpful thing you can do is share it with other people who you think might enjoy it. Send it to your friends, your group chats, Twitter, wherever else; just get the word out. Other than that, it's super helpful if you subscribe on YouTube and leave a five-star review on Apple Podcasts and Spotify.

Check out the sponsors in the description below. If you want to sponsor a future episode, go to dwarkesh.com slash advertise. Thank you for tuning in. I'll see you on the next one.
