Today, this is going to be an Ask Me Anything episode. I'm joined by my friends Trenton Bricken and Sholto Douglas. You guys do some AI stuff, right? Yeah, dabble. They're researchers at Anthropic. In other news, I have a book launching today. It's called The Scaling Era. I hope one of the questions ends up being why you should buy this book. But we can kill two birds with one stone. Okay, let's just get to it. What's the first question that we've got to answer? Take us away.
So I want to ask the softball question that I heard before: why should ordinary people care about this book? Like, why should my mom buy and read the book? Yeah. First, let me tell you about the book, what it is. So, you know, these last few years I've been interviewing AI lab CEOs, researchers, people like you guys, obviously, but also scholars from all kinds of different fields: economists, philosophers, and...
They've been addressing what are basically the gnarliest, most interesting, most important questions we've ever had to ask ourselves. Like, what is the fundamental nature of intelligence? What will happen when we have billions of extra workers? How do we model out the economics of that? How do we think about an intelligence that is greater than the rest of humanity combined? Is that even a coherent concept? And so...
What I'm super delighted with is that with Stripe Press, we made this book where we compiled and curated the best, most insightful snippets across all these interviews. And you can read Dario addressing why scaling works. And then on the next page is Demis explaining DeepMind's plans for whether they're going to go the RL route and how much the AlphaZero stuff will play into the next generation of...
And on the next page is, of course, you guys going through the technical details of how these models work. And then there are so many different fields that are implicated. I mean, I feel like AI is one of the most multidisciplinary fields that one can imagine, because there's no field, no domain of human knowledge, that is not relevant to understanding what a future society of different kinds of beings will look like.
You can have Carl Shulman talk about how the scaling hypothesis shows up in primate brain scaling from chimpanzees to humans. On the next page might be an economist like Tyler Cowen explaining why he doesn't expect explosive economic growth and why the bottlenecks will eat all that up. Anyways, so that's why your mom should buy this book. It is the distillation of all these different fields of human knowledge applied to the most important questions
that humanity is facing right now. I do like how the book is sliced up by different topics and across interviews. Yeah, yeah. So it does seem like a nice way to listen to all of the interviews in one digestible way. There's two interviews I've done that haven't been released publicly before that are in the book. One was with Jared Kaplan, who's one of your co-founders.
And this is another example where he's a physicist explaining scaling from this very mathematical perspective about data manifolds. And then on the next page, you have a totally different perspective. It's like Gwern talking about, you know, why can't we just have narrow, distilled intelligence? Why did general intelligence actually evolve in the first place? What is the actual evolutionary purpose of it? And it's like,
page by page, you just get these different perspectives. Even for me, the person who's been on the other end of these conversations, it was actually really cool to read it and just be like, oh, now I realize how these insights connect to each other. Yeah, the only other thing that stood out to me as well is the introduction section. The only thing that stood out to you? Yeah, yeah, that was really the only thing that was noteworthy.
I just meant in terms of accessibility, what stood out is the introduction section and the diagrams for all the different inputs that enable you to train a machine learning model. Stripe Press books are also just beautiful and have these nice side captions explaining what parameters are, what a model is, these sorts of things. Actually, when we did our episode together, a bunch of people, I don't know if you saw this, independently made these
blog posts and Anki cards and shit where they're explaining the terms, because we just kind of passed over some things. And hopefully we've given a similar treatment to every single interview I've done, where you can read a very technical interview with a lab CEO or an engineer or a researcher, and then on the side it's like, here's more context, here's more definitions, here's more commentary.
And yeah, I feel like it elevated the conversations. So in other words, my parents will finally understand what I do for a job. Do they not? They get it very well. Maybe my parents will. Maybe your parents will. All mine need to know is that my name's in a book. You're a co-author. They're like, cool. Should we get into the AMA questions? Let's do it. All right.
So Brian Krav asks: the issue you raised with Dario and occasionally tweet about, relating to models not making connections across disparate topics, some sort of combinatorial attention challenge. What are your thoughts on that now? Do you solve it with scale, thinking models, or something else? By the way, the issue is, one of the questions I asked Dario was: look, these models have all of human knowledge memorized. And you would think that if a human had this much stuff memorized
and they were moderately intelligent, they would be making all these connections between different fields. And there are examples of humans doing this, by the way. There's Donald Swan or something like this. This guy noticed that what happens to a brain during magnesium deficiency is exactly the kind of,
I don't know, structure you see during a migraine. So he's like, okay, you take magnesium supplements and we're going to cure a bunch of migraines. And it worked. And there are many other examples of things like this, where you just notice connections between two different pieces of knowledge. Why, if these LLMs are intelligent, are they not able to use this unique advantage they have to make these kinds of discoveries? I feel a little shy giving answers on AI shit with you guys here.
Actually, Scott Alexander addressed this question in one of his AMA threads, and he's like, look, humans also don't have this kind of logical omniscience, right? He used the example of language: if you really thought about why two words are connected, it's like, oh, I understand why rhyme has the same etymology as this other word. But you just don't think about it, right? There's this combinatorial explosion.
I don't know if that addresses the fact that we know humans can do this, right? Humans have in fact done this. And I don't know of a single example of LLMs ever having done it. Actually, yeah, what is your answer to this? I think my answer at the moment is that
the pre-training objective imbues the model with this nice, flexible, general knowledge about the world, but doesn't necessarily imbue the skill of making novel connections, or doing research, the kinds of things that people are trained to do through PhD programs and through the process of exploring and interacting with the world. And so
I think at a minimum, you need significant RL on at least similar things to be able to approach making novel discoveries. And I would like to see some early evidence of this as we start to build models that are interacting with the world, trying to make scientific discoveries, and modeling the behaviors that we expect of people in these positions. Because I don't actually think we've done that
in a meaningful or scaled way as a field, so to speak. Yeah, riffing off that with respect to RL, I wonder if models currently just aren't good at knowing what memories they should be storing. Most of their training is just predicting the next word on the internet and remembering very specific facts from that. But if you were to teach me something new right now, I'm very aware of my own memory limitations.
And so I would try to construct some summary that would stick. And models currently don't have the opportunity to do that. Memory scaffolding in general is just very primitive right now. Right, like Claude Plays Pokemon. Exactly, yeah. Someone worked on it, it was awesome, it got far, but another excited Anthropic employee then iterated on the memory scaffold and was able to very quickly improve on it.
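To make that concrete, here is a minimal sketch of the summarize-before-storing memory scaffold being described. It is illustrative only: the summarize helper is a hypothetical stand-in for a real LLM call, and this is not Anthropic's actual Claude Plays Pokemon scaffold.

```python
# Minimal sketch of a "summarize before storing" memory scaffold.
# All names here are hypothetical; summarize() stands in for an LLM call.
from dataclasses import dataclass, field


@dataclass
class MemoryScaffold:
    max_entries: int = 50                  # keep memory small and curated
    entries: list[str] = field(default_factory=list)

    def store(self, observation: str) -> None:
        """Compress an observation into a short durable note before saving."""
        self.entries.append(summarize(observation))
        if len(self.entries) > self.max_entries:
            # Consolidate the oldest notes instead of silently dropping them.
            merged = summarize(" ".join(self.entries[:10]))
            self.entries = [merged] + self.entries[10:]

    def context(self) -> str:
        """Render memory as a bullet list to prepend to the agent's next prompt."""
        return "\n".join(f"- {e}" for e in self.entries)


def summarize(text: str) -> str:
    # Stand-in for an LLM summarization call; trivially truncates here.
    return text[:200]
```

The design point is the one being made above: the agent decides at write time what is worth remembering, rather than hoarding raw context and attending over everything later.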
Interesting. So, yeah, that's one. I do also just wonder if models are idiot savants. The best analogy might be to Kim Peek. Kim Peek was born without a corpus callosum, if I recall correctly. Each hemisphere of his brain operated quite independently. He could read a page of a book... so he'd open a book, there'd be two pages visible, and each eye would read one of the pages.
And he had a perfect encyclopedic memory of everything he'd ever read. But at the same time, he had other debilitations: functioning socially, these sorts of things. And it's just kind of amazing how good LLMs are at very niche topics but can still totally fail at other ones. I really want to double-click on this thing of why there's this trade-off with memorization. Apparently it's connected to this debilitation.
But why can't we? Wikitext is not that big; it's like five megabytes of information. The human brain can store much more. So does the human brain just not want us to memorize these kinds of things, and is actively pruning them?
And yeah, I don't know. But we don't know how to do it right now. We'll do a separate episode. The one thing I'll say on that is there was another case of someone with a perfect memory. They never forgot anything, but their memory was too debilitating. It'd be like your context window for the Transformer being trillions of tokens: you spend all your time attending to past things and are too trapped in the details
to extract any meaningful, generalizable insights from it. Terence Deacon, whose book he recommended, had this interesting insight about how we learn best when we're children, but we forget literally everything that happened to us when we were children, right? We have total amnesia. Adults are in between: we don't remember exact details, but we can still learn in a pretty decent way. And then LLMs are on the opposite end of this gradient, where
they'll get the exact phrasing of Wikitext down, but they won't be able to generalize in these very obvious ways. A little bit like Gwern's theory, the optimizer theory, no? Yeah, yeah, yeah. I think I probably got it from that. Gwern has definitely had a big influence on this for me. I mean, I feel like what's underappreciated about the podcast is that we have this group chat, and we also just meet up a lot in person.
And all the alpha from the podcast just comes from you and a couple other people feeding me ideas and nudges and whatever. And then I can just use that as an intuition pump during the conversation. Yeah, you're not the only one. Oh, I benefit immensely from just hearing what everyone else has to say. Yeah. It's all regurgitation. Another question? Yes. Maybe Rabid... Maybe Rabid Monkey?
Um, asks: imagine you have a 17-year-old brother slash nephew just starting college. What would you recommend he study given your AGI timelines? I don't know, become a podcaster? I feel like that job's still going to be around. It's funny because I studied computer science. At the time, you could have become a software engineer or something. Instead, you became a podcaster. It's kind of an irresponsible career move. But in retrospect, it's like...
I get asked this question all the time. Yes. And one answer that I like to give is that you should think about the next couple of years as increasing your individual leverage by a huge factor
every year. So, you know, already software engineers will come up and say, I'm two times faster, or in new languages I'm five times faster than I was last year. I expect that trend line to continue, basically, as you go from this model of "I'm working with some model that's assisting me on my computer, and it's basically a pairing session," to
"I'm managing a small team," through to "I'm managing a division or a company, basically, that is targeting a task." And so I think that deep technical knowledge in fields will still matter in four years. It absolutely will. Because you will be in the position of managing dozens, or, sort of, your individual management bandwidth
will be maxed out trying to manage teams of AIs. Yeah. And this kind of thing. And maybe we end up in a truly singularity world where you have AIs managing AIs and this kind of stuff. But I think in a very wide
part of the possibility spectrum, yes, you are managing enormous, vastly more resources than an individual could command today. Yeah, yeah. And you should be able to solve so many more things. That's right. And the thing I would emphasize is that
this is not just cope. It genuinely is the case that these models lack the kind of long-term coherence which is absolutely necessary for making a successful company. I mean, even getting a fucking office is kind of complicated, right? You have to deal with all these things.
And so you can just imagine that for sector after sector. The economy is really big. And really complex. Exactly. And especially if it's, I mean, I don't know the details, but I assume if it's a data-sparse thing, where you've just got to know what the context of what's happening in the sector actually is, I feel like you'd be in a good position. Maybe the other thought I have is that it's really hard to plan your career in general.
And I don't know what advice that implies, because I remember being super frustrated. I mean, I was in college, and the reason I was doing the podcast was to figure out what it is I want to do. It wasn't the podcast itself. And I would go on 80,000 Hours or whatever career advice, and in retrospect it was all kind of mostly useless. Just try doing things. I mean, especially with AI, we just don't know. It's so hard to forecast
what kind of transformations there will be. So try things, do things. I mean, it's such banal, vague advice, but I am quite skeptical of career advice in general. Well, maybe the piece of career advice I'm not skeptical of is: put yourself close to the frontier, because you have a much better vantage point. That's right. Right? You can study deep technical things, whether it's computer science or biology or whatever, and get
to the point where you can see what the issues are. Because it's actually remarkably obvious at the frontier what the problems are, and it's very difficult to see from anywhere else. But actually, do you think there is an opportunity? Because one of the things people bring up is: maybe the people who are advanced in their career and have all this tacit knowledge will be in a position to be accelerated by AI. But you guys, four years ago or two years ago,
when you were getting discovered or something, that kind of thing where you find an open GitHub issue and try to solve it, is that just done, and so the onboarding is much harder? It's still what we look for in hiring. I'm in favor of learning fundamentals and gaining useful mental models. But it feels like everything should be done in an AI-native way, top down instead of pure bottom-up learning.
First of all, learn things more efficiently by using the AI models, and then just know where their capabilities are and aren't. And I would be worried and skeptical about any subject which prioritizes rote memorization of lots of facts or information instead of ways of thinking. But if you're always using the AI tools to help you, then you'll naturally just have a good sense for
the things that they are and aren't good at. That's right. Next one. What is your strategy, method, or criteria for choosing guests? The most important thing is: do I want to spend one to two weeks reading every single thing you have ever written, every single interview you've ever recorded, talking to a bunch of other people about your research? Because I get asked often by people who are quite influential, would you have me on your podcast?
And more often than not, I say no, for two reasons. One is just: okay, you're influential or something, but it's not fundamentally that interesting as an interview prospect. I don't think about the hour that I'll spend with you. I think about the two weeks. Because this is my life, right? The research is my life. And I want to have fun while doing it.
So: is this going to be an interesting two weeks to spend? Is it going to help me with my future research or something? And the other reason is that big guests don't really matter that much, if you just look at what the most popular episodes are, or what in the long run helps the podcast grow. By far, my most popular guest is Sarah Paine. And she, before I interviewed her, was just
a scholar who was not publicly well known at all. And I just found her books quite interesting. So my most popular guests are Sarah Paine, and then Sarah Paine, Sarah Paine, Sarah Paine. It's awesome. By the way, on a viewer-minute-adjusted basis, I host a Sarah Paine podcast where I occasionally talk about AI. Same goes for the next one, David Reich, who is a geneticist of ancient DNA. I mean, he's somewhat well-known;
he had a bestselling book, but he's not Satya Nadella or Mark Zuckerberg, who are the next people on the list. And then, pretty soon after that, it's you guys, or Leopold, or something. And then you get to the lab CEOs or something. So
big names just don't matter that much for what I'm actually trying to do. And it's also really hard to predict who's going to be the next David Reich or Sarah Paine. So just have fun. Talk to whoever you want to spend time researching. It's a pretty good proxy for what will actually be popular. What was the specific moment, if there was one, when you realized that producing your podcast was a viable long-term strategy?
I think when I was shopping around ad spots for the Mark Zuckerberg episode, which, when I look back on it now, weren't that mind-blowing. But at the time I was like, oh, I could actually hire an editor full time, or maybe more editors than one, and from there turn this into a real business.
That's when I realized. Because before, people would tell me, oh, these other podcasts are making whatever amount of money, and I'd be like, how? You know, I have this running joke with one of my friends. I don't know if you've seen me do this, but every time I encounter a young person who's like, what should I do with my life,
I'm like, you've got to start a blog. You've got to be the Matt Levine of AI. You can do this. It's a totally empty niche. And their joke is that I'm like a country bumpkin who's won the lottery and goes around saying, guys, it's that easy. I do want to press on that a bit more, because your immediate answer to the 17-year-old was to start a podcast. Yeah. So what niches are there? What sort of things would you be excited to see
in new blogs, podcasts? I mean, I wonder if you guys think this too, but I think this Matt Levine of AI thing, yeah, absolutely, is a totally open niche as far as I can tell. Apologies to those who are trying to fill it; I'm aware of at least one who's trying to do this. The other thing I'd really emphasize is that it is really hard
to do this based on other people's advice, or to say, here's a niche. At least, I'm trying not to fill a specific niche. And if you think about any sort of successful new media thing out there, two things are true of it. One, it's often not geared towards just one particular topic or interest. And two, the most important thing is that it is propelled by a single person's vision. It's not a collective or whatever.
And so, sorry, the thing I really want to emphasize is: it can be done, and you can make a lot of money at it, which is probably not the most important thing for the kind of person who would succeed at it, but it's still worth knowing that it's a viable career. Three, basically, you're going to feel like shit in the beginning. All your early stuff is going to kind of suck. Maybe some of it will get appreciated, but
it seems like bad advice to say, still stick through it, in case you actually are terrible, because some people are terrible. But in case you are not, just do it, right? What is three months of blogging on the side really going to cost you? And people just don't actually seriously
do the thing for long enough to actually get evidence, or get the sort of RL feedback of: oh, this is how you do it. This is how you frame an argument. This is how you make a compelling thing that people will want to read or watch. Blogging is definitely underrated. I think most of us have blogged. You both had blogs, which were relevant. I don't know if they were actually relevant to getting the job. They were somewhat relevant. Yeah. But I think more so that
we have all read almost all the blogs that do in-depth treatises on AI. That's right. If you write something that is high quality, it's almost invariably going to be shared around Twitter and read. Oh, this is so underappreciated. Yeah. Two pieces of evidence. I was talking to a very famous blogger you would know, and I was asking him, how often do you discover a new undiscovered blogger? And he was like, it happens very rarely, maybe once a year.
And I asked him, how long after you discover him or her does the rest of the world discover them? And he's like, maybe a week. Interesting. And what that suggests is that it's actually really efficient. Oh, so I have some more takes. I believe that slow compounding growth in media is kind of fake. Take Leopold's Situational Awareness. It's not like he was building an audience for a long time, for years or something. It was just really good, disagree or agree with it. And
if it's good enough, literally everybody who matters, and I mean that literally, will read it. I mean, I think hits are kind of zero-shot, something like that. But the fundamental thing to emphasize is
the compounding growth, at least for me, has been that I feel like I've gotten better. It's not so much that the three years of having 1,000 followers somehow compounded, you know. I don't think it was that. I think it just took a while to get better. Yeah, certainly when Leopold posted that, the next day, it's almost like...
You could picture it being almost stapled to walls. It wasn't literally, but it was stapled to walls, so to speak, on Twitter. Everyone was talking about it. You went to any event for the following week, and every single person in the entire city was talking about that essay. Yeah, yeah. It's like Renaissance Florence. That's right, that's right.
That's right. Yeah. The world is small. Yeah. What would you say was your first big success? I'm trying to think back to when I first found your podcast. I distinctly remember you had your blog post on the Annus Mirabilis, and Jeff Bezos retweeted it, I think. Yeah.
I'm trying to remember if it was even before that or not, but yeah, I'm curious. I feel like that was it. Okay. I mean, it wasn't something where it was some big insight that deserved to blow up like that. It was just taking some shots on goal. They were all, whatever, insight-porny, and then one of them
I guess caught the right guy's attention. And yeah, I think that was it. Yeah, that's something else which is underappreciated: a piece of writing doesn't need to have a fundamentally new insight so much as give people a way to cleanly express a set of ideas that they are already
aware of in a sort of broader way. And if it's really crisp and articulate, even then that's very valuable. And sorry, the one thing I should emphasize, which I think is maybe the most important thing in the feedback loop: it's not the compounding growth of the audience. I don't even think it's me getting more shots on goal in terms of doing the podcast. I actually don't think you improve that much by just doing the same thing again and again.
If there's no reward signal, you'll just keep doing whatever you were doing before. I genuinely think the most important thing has been that the podcast is good enough that it merits me getting to meet people like you guys. Then I become friends with people like you. You guys teach me stuff. I produce more podcasts, hopefully slightly better ones. That helps me meet people in other fields. They teach me more things.
Like with the China thing recently: I wrote this blog post about a couple of stories about things that happened in China, and that alone has netted me an amazing China network, in the matter of one blog post, right? And so
hopefully, if I do an episode on China, it will be better as a result. And hopefully that happens across field after field. And so just getting to meet people like you is actually the main flywheel. Interesting. So, move to San Francisco? Yes. If you're trying to do AI. Yeah. Next questions. Shall we do... Can we do a very important question? From a jacked Pajit: how much can you bench?
You can't lie, because we looked up the answer. At one point, I did bench 225 for four. Now I think I'm probably 20 pounds lighter than that or something. The reason you guys are asking me this is because I've lifted with both of you. And I remember Trenton and I were doing pull-ups and bench, and we'd bench and he'd throw on another plate or something, and then instead of pull-ups he'd be cranking out these muscle-ups. It's all technique.
Yeah, so they both bench more than me, but I'm trying my best. Ask again in six months. Yeah, exactly. What's your favorite history book? There's a wall of them behind you. Oh, I mean, obviously the Caro LBJ biographies. Oh, okay. Yeah. Sorry, the main thing I took away from those books is this quote that LBJ would
tell his debate students in his early 20s. He taught debate to these poor Mexican students in Texas, and he used to tell them: if you do everything, you'll win. I think it's an underrated quote. So that's the main thing I took away. And you see it through his entire career. There's a reasonable amount of effort, which, you know, goes by the 80/20 rule: you do the 20% to get 80% of the effect.
And then there's going beyond that: oh no, I'm not just going to do the 20%, I'm going to do the whole thing. And there's a level even beyond that, which is: this is just an unreasonable use of time, this is going to have no ultimate impact, and you still try doing it. Yeah. You've shared on Twitter using Anki and even a Claude integration. Yeah. Do you do book clubs? Do you use Goodreads? And what are you reading right now?
I don't have book clubs. But the spaced repetition has just genuinely been a huge uplift in my ability to learn. It's not even the long-term impact over years, though I think that is part of it. And I do regret all the episodes I did without using spaced repetition cards, because all those insights have just sort of faded away. The main thing is, if you're studying a complicated subject,
at least for me, it's been super helpful for consolidation. If you don't do it, you feel like a general who's like, I'm going to wage a campaign against this country, and then you climb one hill, and then the next day you have to retreat, and then you climb the same hill again. There might be a sort of more kosher analogy.
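For reference, the consolidation idea behind spaced repetition can be sketched in a few lines. The doubling schedule below is an illustrative simplification, not Anki's actual SM-2 algorithm.

```python
# Toy spaced-repetition scheduler: each successful review pushes the next
# one further out, so you keep the hill instead of retreating overnight.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Card:
    front: str
    back: str
    interval_days: int = 1                 # gap until the next review

    def review(self, remembered: bool) -> date:
        if remembered:
            self.interval_days *= 2        # back off exponentially
        else:
            self.interval_days = 1         # forgot: start the climb again
        return date.today() + timedelta(days=self.interval_days)


card = Card(front="Who taught debate to poor students in Texas?", back="LBJ")
print(card.review(remembered=True))        # due in 2 days
print(card.review(remembered=True))        # then 4 days, and so on
```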
Sorry, and the other question was, what am I reading right now? Yeah. Oh, my friend Alvaro de Menard, author of Fantastic Anachronism. Can I just pull it up? Actually, it's right here. Yeah. I hope he's okay with me sharing this, but he made like 100 copies of this translation he did of the Greek poet Cavafy. Hopefully I didn't mispronounce it. That one has an inscription for Gwern, because that's his copy. But it's super delightful, and that's what I've been reading recently.
Any insights from it so far? Poets will hate this framing, but I feel like poetry is like TikTok. You get this quick vibe of a certain thing, and then you swipe, and then you get the next vibe. Swipe. How do you go about learning new things or preparing for an episode? You mentioned the one-to-two-week period where you're deep-diving on the person. What does that actually look like?
It's very much the obvious thing. You read their books, you read their papers. If they have colleagues, you try to talk to them to better understand the field. I will also mention that all I have to do is ask questions. And I do think it's much harder to learn a field well enough to be a practitioner than to just learn enough to ask interesting questions. But yeah, for that, it's very much the obvious thing you'd expect.
Based Carl Sagan asks: what are your long-term goals and ambitions? Yeah, AGI kind of just makes the prospect of long-term plans harder to articulate, right? You know the Peter Thiel quote: what is your 10-year plan, and why can't you do it in six months? It's especially salient given timelines. For the foreseeable future: grow the podcast and do more episodes, maybe more writing. But yeah, we'll see what happens after
10 years or something; the world might be different enough. Yeah, so basically, podcast for now. Something you've spoken to me about, particularly when you were trying to hire people for the podcast, was
what you wanted to achieve with the podcast. In what way do you want the podcast to shape the world, so to speak? Do you have any thoughts on that? Because I remember you talking like: I really want people to actually understand AI, and how this might change their lives, or what we could be doing now to shape the world such that it ends up better. Yeah. I don't know. I have contradictory views on this. On the one hand,
I do know that important decisions are being made right now in AI. And I do think, riffing on what we were saying about Situational Awareness, if you do something really good, it has a very high probability of one-shotting the relevant person.
You know, people are just generally reasonable. If you make a good argument, it'll go places. On the other hand, I just think it's very hard to know what should be done. You've got to have the correct world model, and then you've got to know how, in that world model, the action you're taking is going to have the effect you anticipate. And even in the last week, I've changed my mind on some pretty fundamental things about
what I think about the possibility of an intelligence explosion or transformative AI, as a result of talking to the Epoch folks. Yep. Basically, the TLDR is, I want the podcast to just be an epistemic tool for now, because I think it's just very easy to be wrong, and so having a background level of understanding of the relevant arguments is the highest priority. Makes sense. Yeah. What's your sense? What should I be doing? I mean, I think the podcast is awesome,
and a lot more people should listen to it. And there are a lot more guests I'd be excited for you to interview. So it seems like a pretty good answer for now. Yeah, I think making sure that there is a great debate of ideas, not just on AI but on other fields and everything, is incredibly high-leverage and valuable. Yeah. How do you groom your beard? It's majestic.
I don't know what to say, just genetics. I do trim it, but... No beard oil? Sometimes I do beard oil. How often? Once every couple of days. Okay, that's not sometimes, that's pretty often. Do you have different shampoo for your head and your beard? No. What kind of shampoo do you use? Anti-dandruff. Do you condition it? Yeah. How often do you shave? We're giving people the answers that they want. Big beard oil. Yeah, you can sell some ad slots to different shampoo companies and we can edit them in.
We sold an ad slot. Sorry, you had this idea for merch. Do you want to explain this t-shirt idea? Yes, so people should react to this. Someone should make it happen. Dwarkesh wants merch, but he doesn't want to admit that he wants it. Or he doesn't want to make it himself, because that seems tacky. So I really want a plain white tee with just Dwarkesh's beard in the center of it. That's it. Nothing else.
But you were saying it should, like, have a different texture than the rest of the shirt. Oh, so then, just riffing off it: maybe a limited-edition set can have some of your beard hair actually sewn into the shirt. That'd be pretty cool. I would pay. I would pay for that.
Okay. How much? I've got patches all over my beard. Depends on how much hair. If it's like one hair in there somewhere versus the whole thing. Do I have to dry-clean it? Can I wash it on the delicate setting? But really, I think you should get merch. If you want to grow the podcast, which apparently you do, then this is one way to do it. Oh, yeah.
Which historical figure would be best suited to run a frontier AI lab? This is definitely a question for you guys. No, I mean, I'm curious what your take is first. You've spoken to more of the heads of AI labs than I have. I was going to say... LBJ? Sorry, is the question who would be best at running an AI lab, or who would be best for the world, or? What outcome do you want? Yeah, what outcome do you want? Because I imagine...
It seems like what the best AI lab CEOs succeed at is raising money, building hype, setting a coherent vision. I don't know how much it matters for the CEO themselves to have good research taste or something. It seems like their role is more as a sort of emissary to the rest of the world. And I feel like LBJ would be pretty good at this. Just getting the right concessions, making projects move along,
coordinating among different groups. Or maybe Robert Moses. Yeah. Again, not necessarily best for the world, but just in terms of making shit happen. Yeah. I mean, I think best for the world is a pretty important precondition. Oh, right. There's a Lord Acton quote:
great people are very rarely good people. So it's hard to think of a great person in history where I feel like they'd really move the ball forward and I also trust their moral judgment. We're lucky in many senses with the set today. That's right.
The set of people today both try and care a lot about the moral side and drive the labs forward. This is also why I'm skeptical of big grand schemes like nationalization, or some public-private partnership, or just generally shaking up the landscape too much: because I do think we're in one of the better timelines.
I mean, the difficulty of alignment, or of some kind of deployment safety risk, that's just the nature of the universe; it's going to be some level of difficulty regardless. But on the human factors:
in a lot of the counterfactual universes, I feel like we don't end up with people like this. We could even be in a universe where they don't even pay lip service to this, where it's not an idea that anybody had. There could have been an ASI takeover. I think we live in a pretty good counterfactual universe. Yeah, we got a good set of players on the board.
That's right. How are you preparing for fast timelines? If there are fast timelines, then there will be this six-month period in which the most important decisions in human history are being made. And I feel like having an AI podcast during that time might be useful. That's basically the plan. Have you made any shorter-term decisions with regards to spending or health or anything else?
After I interviewed Zuckerberg, my business bank balance was negative 23 cents. When the ad money hit, I immediately reinvested it in NVIDIA. So that is the... Sorry, but you were asking from a sort of altruistic perspective. No, no, just in general. Like, have you changed the way you live at all because of your AGI timelines? I never looked into getting a Roth IRA. He brought us Fiji water before. Which is in plastic bottles. Have you guys changed your lifestyle as a result? Not really. No.
I just work all the time. Would you be doing that anyways? Or would you not? I would probably be going very intensely at whatever thing I'd picked to devote myself to. Yeah. How about you? I canceled my 401k contributions. Oh, really? Yeah, that felt like a more serious one. It's hard for me to imagine a world in which I have all this money that's just sitting in this account, waiting till I'm 60,
and things look so different then. You could be a trillionaire with your marginal 401k contribution. I guess, but you also can't invest it in specific things. And I don't know. I might change my mind in the future and restart it. And I've been contributing for a few years now. On a more serious note, one thing I have been thinking about is how you could use this money to an altruistic end. And basically, if there's somebody who's up-and-coming
in the field that I know, which is making content, could I use money to support them? And I'm of two minds on this. One, there are people who did this for me, and it was actually kind of responsible for me continuing to do the podcast when it just did not make sense, when there were a couple hundred people listening or something. I want to shout out Anil Varanasi for doing this.
And also Leopold, actually, for the foundation he was recently running. On the other hand, it's the thing that blogger was saying: the good ones you actually do notice. It's like...
it's hard to find hidden talent. Maybe I'm totally wrong about this. But I feel like if I put up a sort of grant application, I'll give you money if you're trying to make a blog, I'm actually not sure how well that would work. There are different things you could do, though. Like,
I'll give you money to move to San Francisco for two months, and, you know, meet people and get more context and taste and feedback on what you're doing. It's not so much about the money or time. It's putting them in an environment where they can
more rapidly grow. That's something that one could do. I mean, I think you also do that quite proactively, in terms of deliberately introducing people that you think will be interesting to each other and this kind of stuff. Yeah, no, I mean, that's very fair. And
obviously, I've benefited a ton from moving to San Francisco. It's unlikely that I would be doing this podcast, at least on AI, to the degree I am, if I wasn't here. So maybe it's a mistake to judge people based on the quality of their content as it exists now. Instead,
throw money at them. Not throw money, but give them enough money to move to SF, get caught up in this intellectual milieu, and then maybe do something interesting as a result. Yeah. The thing that most readily comes to mind is the MATS program for AI research. And this seems like it's just been incredibly successful at giving people the time, the funding, and the social-status justification to do AI-safety-relevant research with mentors.
Oh, and you have a similar program. We have the Anthropic Fellows Program. That's right, yeah. And what is your... I know you're probably selecting for a slightly different thing, but I assume it's going to be power-law dominated. Have you noticed a pattern, even with the MATS fellows or your fellows, of someone
who is just like, this made the whole thing worth it? Yeah, I mean, there have been multiple people who Anthropic and other labs have hired out of this program. Yeah, yeah. So I think the return on investment for it has been massive. And yeah, apparently the fellows, I think there are 20 of them, are really good. What is the trick to making it work well, or finding that one person? I think it's gotten much better with time. The early fellows, some of them did good work
and got good jobs. And so now, with later fellows, the quality bar has just risen and risen and risen. And there are even better mentors now than before. So it's this really cool flywheel effect. But originally it was just people who didn't have the funding or time to make a name for themselves or do ambitious work. So it was kind of giving them that niche to do it. Right, right. Seems really key. Yeah.
You can do other things. It doesn't have to be money. You know, you could put out ideas for things you'd be really interested in reading. That's right. Or promoting them. Yeah, yeah, yeah. There's something coming there. Okay, there we go. So this episode hopefully will launch Tuesday,
at the same time as the book, which, by the way, you can get at stripe.press slash scaling. But on Wednesday, which is the day after, hopefully there's something useful for you here. Okay, exciting. Yeah. Any other questions we want to ask? The thing I have takes on, which I rarely get asked about, is distribution. Distribution of AI? No, no, sorry. Like MrBeast-style distribution. Oh, yeah, yeah, yeah. Where people...
I think rightly focus on the content. And if that's not up to snuff, I think you won't succeed. But to the extent that somebody is trying to do similar things, the thing they consistently underrate is putting the time into getting distribution right. I just have random takes about it.
For example, the most successful thing for my podcast in terms of growth has been YouTube Shorts. It's a thing you would never have predicted beforehand. And, you know, they're responsible for basically at least half the growth of the podcast or something. I mean, I buy that. Yeah. Why wouldn't you predict it? I guess there's the contrast between the long-form, deep content and YouTube Shorts and stuff. But I definitely think they're good hooks.
That's right. Yeah. And I have takes on how to write tweets and stuff. The main intuition being: write like you're writing to a group chat. Yeah. To a group chat of your friends, rather than in this formal register or whatever. I don't know, just these sorts of things. Yeah. What else comes to mind here? Maybe it's interesting, the difference between TikTok and YouTube Shorts. Oh, yeah. We've never cracked TikTok. Yeah. Why not? Like, you've tried. Yeah.
I mean, have you done everything? Have you read these poems? Maybe you're in a bubble bath with some beard shampoo on, reading poems? That'd be an incredible thing. I bet you that would go viral. You have to do that now. Reading a poem, uncross your legs. Last episode, it was the interpretability challenge. Now it's Dwarkesh in a bubble bath.
You gotta sell the book somehow, you know? We literally do it like Margot Robbie. Yeah, exactly. Explaining CDOs and stuff. So what is... scaling? And that's how you crack distribution. And that's how you crack distribution. But yeah, when we did our episode,
it launched, and you were sharing interesting tidbits about how it was doing, and the thumbnail you wanted to use, and the title. And I think I even asked you to share more details, because it seemed interesting and cool, these subtle things. But it seemed like you also kind of just hated it, playing this game of really having to optimize all these knobs. What I realized is, I mean, talent is everything. I'm really lucky to have
three to four editors who I'm incredibly proud to work with. I don't know how to hire more of them. They're just so good and self-directed. So honestly, I don't have tips on how to replicate that. Here's how I hired those guys: one of them was a farmer in Argentina. One of them was a freshman math student in Sri Lanka. One of them was a former editor for one of Mr. Beast's channels. The other is a director in Czechoslovakia who makes these AI animations that you've seen in
the notes on China, and he's working on more essays like that. So, I don't know how to replicate that catch again. God, that's a pretty widely cast net, I'm going to be honest. Damn. But they're all like... And this was just through your challenges and just tweeting about it. That's right, yeah. So I had a competition to make clips of my podcast. I recruited a couple of them this way.
Yeah, it's hard to replicate, because I've tried. Yeah. Why do you think this worked so well with the video editors? Because you tried a similar approach with your chief of staff. Yeah. The difference is, with the video editors, I think there is this arbitrage opportunity. It is fundamentally a question of: are you willing to work hard and obsess about getting better over time? Which all of them
go above and beyond on. But you can just find people in other countries who are like that. And it's not even about the wages; I've 10x'd their salaries or something like that. It's just about getting somebody who is really detail-oriented, and there is this global arbitrage there. Whereas with the general manager, by the way, the person I ended up hiring, and who I'm super excited to work with, is
your childhood best friend, Max Hearns. Max is so great. And he would have plenty of other opportunities. There's not this weird arbitrage where, you know, you find some farmer in Argentina. But, you know, it is striking that you were looking for a while. That's right. And then I just kind of mentioned offhand that Max was looking for something new. I genuinely... this is going to be like a total
12-year-old-learns-about-the-world kind of question. But I genuinely don't know how big companies hire, because I was trying to find this person for a year. And I'm really glad about the person I ended up hiring, but if I needed to hire a hundred people for a company, let alone a thousand people, I just do not know how to find people like this at scale. Yeah. I mean, I think this is like the number one issue that startup CEOs have:
hiring. It's just relentlessly the number one. And the thing I was stunned by is how it didn't seem like my platform helped that much. I got close to a thousand applications across the different rounds of publicizing it that I did. And a lot of, I think, really cool people applied. But the person that I ended up hiring was somebody who was just a reference, you know, a mutual friend kind of thing. And a couple of other top contenders were also this way.
It's weird. The best people in the world don't want to apply, at least to things like this, and you just have to seek them out, even if you think you have a public platform or something. Yeah. Yeah, I mean, the job might just be so out of distribution from anything else that people would do. That's right. So Aditya Ray asks: how do you make it on Substack as a newbie writer? I think,
if you're starting from scratch, there are two useful hacks. One is podcasting, because you don't need to have some super original new take. You can just interview people who do, and you can leverage their platform. And two is writing book reviews. Again, because you have something to react to, rather than having to come up with a unique worldview of your own. There are probably other things, and it's really hard to give advice in advance. Just try things. But those I think are just good
cold starts. The book reviews is a good suggestion. I actually use Gwern's book reviews as a way to recommend books to people. By the way, this is a totally undersupplied thing. Jason Furman is this economist who has like a thousand
Goodreads reviews, you know. And I've probably visited his Goodreads on a hundred independent visits. Wow. Same with the Gwern book reviews or something, right? So book reviews are a very undersupplied thing, if you're looking to get started making some kind of content. I like that. Cool. Thank you guys so much for doing this. Yeah, this was fun. We'll turn the tables on you again pretty soon. How does it feel being in the hot seat?
That's nice. Nobody ever asks me a question. Nobody ever asks how Dwarkesh is doing. Yeah, super excited for the book launch. Thank you. The website's awesome, by the way. I appreciate it. Oh, yeah. Yeah, yeah. stripe.press slash scaling. Yeah. Cool. Cool. Thanks, guys. Thanks. Okay, I hope you enjoyed that episode. As we talked about, my new book is out.
It's released with Stripe Press. It's called The Scaling Era, and it compiles the main insights across these last few years of doing these AI interviews. I'm super pleased with how it turned out. It really elevates the conversations, adds necessary context, and just seeing them all together even reminded me of many of the most interesting segments and insights that I had myself forgotten. So I hope you check it out. Go to the link in the description below to buy it.
Separately, I have a clips channel now on YouTube. People keep complaining about the fact that I put clips and the main video on the same channel. So request granted. There is a new clips channel, but please do subscribe to it so we can get it kickstarted. And while you're at it, also make sure to subscribe to the main channel.
Other than that, just honestly, the most helpful thing you can do is share the podcast. If you enjoyed it, just send it on Twitter, put it in your group chats, share it with whoever else you think might enjoy it. That's the most helpful thing. If you want to learn more about advertising on future episodes, go to dwarkesh.com slash advertise. Okay, see you on the next one.