Okay, today I'm chatting with my friend Leopold Aschenbrenner. He grew up in Germany, graduated valedictorian of Columbia when he was 19, then he had a very interesting gap year, which we'll talk about, and then he was on the OpenAI superalignment team, may it rest in peace. And now, with some anchor investments from Patrick and John Collison and Daniel Gross and Nat Friedman, he's launching an investment firm. So Leopold, I know you're off to a slow start, but life is long, and I wouldn't worry about it too much. You'll make up for it in due time. But thanks for coming on the podcast. Thank you. You know, I first discovered your podcast when your best episode had a couple hundred views, so it's been amazing to follow your trajectory, and it's such a delight to be on. Yeah, yeah. Well, I think in the Sholto and Trenton episode I mentioned that a lot of the things I've learned about AI, I've learned from talking with them, and the third part of that triumvirate, probably the most significant in terms of the things I've learned about AI, has been you. We'll get all of this stuff on the record now. Great. Okay, first thing I'd like to get on the record: tell me about the trillion-dollar cluster.
But by the way, I should mention the context of this podcast: today you're releasing a series called Situational Awareness, which we're going to get into. First question about that: tell me about the trillion-dollar cluster. Yeah. So, unlike basically most things that have come out of Silicon Valley recently, AI is kind of this industrial process. The next model doesn't just require some code — it's building a giant new cluster, now it's building giant new power plants, pretty soon it's going to be building giant new fabs. And since ChatGPT, this kind of extraordinary techno-capital acceleration has been set into motion. Basically exactly a year ago today, Nvidia had their first blockbuster earnings call, right, where the stock went up something like 25% after hours, and it was like, oh my god, AI, it's a thing. Within a year, Nvidia data center revenue has gone from a few billion a quarter to 20, 25 billion a quarter now, and it keeps going up; big tech capex is skyrocketing. And it's funny, because there's this kind of crazy scramble going on, but in some sense it's just the continuation of straight lines on a graph. There's this long-run trend — basically almost a decade of training compute for the largest AI systems growing by about half an order of magnitude, 0.5 OOMs, a year. And you can just play that forward.
So GPT-4 was rumored or reported to have finished pre-training in 2022, and the cluster there was rumored to be about 25,000 A100s, per SemiAnalysis. If you do the math on that, it's maybe a $500 million cluster, very roughly 10 megawatts. Now just play that forward at half an OOM a year. So 2024, say, that's a cluster that's 100 megawatts — that's like 100,000 H100-equivalents, costing in the billions. Play it forward two more years: 2026, that's a cluster that's a gigawatt — the size of a large nuclear reactor, like the power of the Hoover Dam — costing tens of billions of dollars, like a million H100-equivalents. 2028, that's a cluster that's 10 gigawatts — more power than most US states use — like 10 million H100-equivalents, costing hundreds of billions of dollars. And then 2030: the trillion-dollar cluster, 100 gigawatts, over 20 percent of US electricity production, 100 million H100-equivalents. And that's just the training cluster, the one largest training cluster — then there are more inference GPUs on top of that; once there are products, most GPUs are going to be inference GPUs. US power production has barely grown for decades, and now we're really in for a ride.
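To make that extrapolation concrete, here is a rough sketch using the figures just cited — a GPT-4-era baseline of roughly $500 million, ~10 MW, and ~25,000 A100s (call it ~10,000 H100-equivalents), growing half an OOM per year. The baseline numbers are the approximations from the conversation, not official figures.

```python
# Rough sketch of the trend: frontier training clusters growing ~0.5 orders of
# magnitude (OOMs) per year from an approximate GPT-4-era baseline.
base_year = 2022
base_cost_usd = 5e8      # ~$500M cluster
base_power_gw = 0.01     # ~10 MW
base_h100_equiv = 1e4    # ~25k A100s ~= roughly 10k H100-equivalents
ooms_per_year = 0.5

for year in (2024, 2026, 2028, 2030):
    scale = 10 ** (ooms_per_year * (year - base_year))
    print(f"{year}: ~${base_cost_usd * scale / 1e9:,.0f}B, "
          f"~{base_power_gw * scale:g} GW, "
          f"~{base_h100_equiv * scale / 1e6:g}M H100-equivalents")

# 2024: ~$5B, ~0.1 GW, ~0.1M     2026: ~$50B, ~1 GW, ~1M
# 2028: ~$500B, ~10 GW, ~10M     2030: ~$5,000B, ~100 GW, ~100M
# (In practice the dollar figure grows a bit slower than raw compute because cost
# per FLOP keeps falling -- hence "trillion-dollar" rather than multi-trillion by 2030.)
```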
So when I had Zuck on the podcast, he was claiming — not a plateau per se, but that AI progress would be bottlenecked specifically by this constraint on energy. And specifically, gigawatt data centers mean you're going to have to build another Three Gorges Dam or something. I know there are companies, according to public reports, planning things on the scale of a gigawatt data center. But a 10-gigawatt data center — who's going to be able to build that? And the 100-gigawatt one is like a whole state. You're going to pump that into one physical data center? How is that going to be possible? Yeah, I mean, I don't know. Just six months ago, 10 gigawatts was the talk of the town; I feel like now people have moved on — 10 gigawatts is happening. There's The Information report on OpenAI and Microsoft planning a $100 billion cluster. Is that the 10-gigawatt one? I don't know, but if you try to map out how expensive the 10-gigawatt cluster would be, it's maybe a couple hundred billion, so it's on that scale. And they're planning it, they're working on it. And it's not just my crazy take: AMD, I think, forecasted a $400 billion AI accelerator market by '27, and AI accelerators are only part of the expenditures. Something like a trillion dollars of total AI investment by 2027 is very much in the cards. The trillion-dollar cluster itself is going to take a bit more acceleration than that. But we saw how much ChatGPT unleashed, right? And every generation, the models are going to be kind of crazy, and it's going to shift the Overton window. And then, obviously, the revenue comes in. These are forward-looking investments, and the question is: do they pay off?
And so — we estimated the GPT-4 cluster at around $500 million. By the way, that's a common mistake people make: they say a hundred million dollars for GPT-4, but that's just the rental price — like, ah, you rented the cluster for three months. If you're building the biggest cluster, you've got to build the whole cluster and pay for the whole cluster; you can't just rent it for three months. Anyway, once you're trying to get into the hundreds of billions, eventually you've got to get to something like a hundred billion a year of revenue. I think this is where it gets really interesting for the big tech companies, because their revenues are on the order of hundreds of billions. So $10 billion is fine — it'll pay off the 2024-size training cluster. But where it really goes gangbusters for big tech is at a hundred billion a year. So the question is how feasible $100 billion a year of AI revenue is. It's a lot more than right now, but if you believe in the trajectory of the AI systems as I do — which we'll probably talk about — it's not that crazy. There are something like 300 million Microsoft Office subscribers, and they have Copilot now. I don't know what they're selling it for, but suppose you sold some AI add-on for a hundred bucks a month, and a third of Office subscribers subscribed to it — that'd be a hundred billion right there. A hundred dollars a month is a lot, though. Yeah, it's a lot. For a third of Office subscribers. But for the average knowledge worker, that's just a few hours of productivity a month — you'd have to be expecting pretty lame AI progress for it not to be worth a few hours of productivity a month.
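The back-of-envelope behind that figure, for what it's worth — the $100-a-month add-on price and the one-third uptake are the hypothetical numbers from the conversation, not real pricing:

```python
# Rough check of the "$100B/year from AI" scenario sketched above.
office_subscribers = 300e6     # ~300M Microsoft Office subscribers (approximate)
uptake = 1 / 3                 # suppose a third of them buy the AI add-on
price_per_month = 100          # hypothetical $100/month add-on

annual_revenue = office_subscribers * uptake * price_per_month * 12
print(f"~${annual_revenue / 1e9:.0f}B per year")   # ~$120B/year, i.e. order-$100B
```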
Okay, sure. So let's assume all this. What happens in the next few years? What can the AI that's trained on the one-gigawatt data center do? The one on the 10-gigawatt data center? Just map out the next few years of AI progress for me. Yeah, I think the 10-gigawatt-ish range is my best guess for when you get the true AGI. And again, I actually think raw compute is a bit overrated here, and we'll talk about that, but we're talking about compute right now. So I think in 2025/26 we're going to get models that are basically smarter than most college graduates. A lot of the economic usefulness, though, really depends on unhobbling. Basically, the models are smart, but they're limited, right? You've got this chatbot, and it's missing things like being able to use a computer, things like being able to do agentic, long-horizon tasks.
And then I think by 2027/28, if you extrapolate the trends — and we'll talk about that more later, and I talk about it in the series — we hit models that are basically as smart as the smartest experts. And the unhobbling trajectory points to something that looks much more like an agent than a chatbot, almost like a drop-in remote worker. I think this is the key question on the economic returns: a lot of the intermediate AI systems could be really useful, but it actually takes a lot of schlep to integrate them. Like GPT-4, or 4.5, whatever — there's probably a lot you can do with them in a business use case, but you really have to change your workflows to make them useful. It's a very Tyler Cowen-esque take: this stuff takes a long time to diffuse. And we're in SF, so we miss that. But I think in some sense the way a lot of these systems will actually get integrated is that you get this kind of sonic boom: the intermediate systems could have done it, but it would have taken schlep, and before you do the schlep to integrate them, you get much more powerful systems — much more powerful systems that are unhobbled. So then you have this agent, this drop-in remote worker, and you're interacting with it like a coworker. You do Zoom calls with it, you Slack it, you're like, ah, can you do this project? And it goes off and goes away for a week, writes a first draft, gets feedback, runs tests on its code, and then it comes back and you look at it and tell it a few more things. That'll be much easier to integrate. So it might be that you actually need a bit of overkill to make the transition easy and really harvest the gains. Overkill on the model capabilities? Yeah, exactly. The intermediate models could do it, but it would take a lot of schlep, and so what actually ends up automating cognitive tasks is the drop-in-remote-worker kind of AGI. The intermediate models would have made the software engineer more productive — but would the software engineer have adopted them? And then the 2027 model is, well, you just don't need the software engineer. You can literally interact with it like a software engineer, and it'll do the work of a software engineer. So, the last episode I did was with John Schulman, and I was asking about basically this. One of the questions I asked was: we have these models that have been coming out over the last year, and none of them seem to have significantly surpassed GPT-4 — and certainly not in the agentic way where they'd be interacting with us as a coworker.
They brag that they got a few extra points on MMLU or something. And even GPT-4o — it's cool that it can talk like Scarlett Johansson, and honestly I was going to use that. I guess not anymore. Not anymore. Okay. But the whole coworker thing — this is going to be a long question, and you can address it in any order. It makes sense to me why they'd be good at answering questions: they have a bunch of data about how to complete Wikipedia text or whatever. Where is the equivalent training data that enables a model to understand what's going on in the Zoom call, how it connects with what people were talking about in Slack, what the cohesive project is that they're going after given all this context? Where is that training data coming from?
Yeah. So I think a really key question for AI progress in the next few years is how hard it is to unlock the test-time compute overhang. Right now, GPT-4 answers a question and it can do maybe a few hundred tokens of chain of thought, and that's already a huge improvement — that's a big unhobbling. Before, it answered math questions by just shotgunning the first thing that came to mind, and if you tried to answer math questions by saying the first thing that came to mind, you wouldn't be very good either. So GPT-4 thinks for a few hundred tokens. If I think at something like a hundred tokens a minute — you probably think in a lot more than a hundred tokens, but say a hundred tokens a minute — then what GPT-4 does is maybe equivalent to me thinking for three minutes. Now suppose GPT-4 could think for millions of tokens: that's plus four OOMs, four orders of magnitude, on test-time compute, on just one problem. It can't do that right now. It kind of gets stuck — it writes some code, it can do a little bit of iterative debugging, but eventually it just gets stuck on something, it can't correct its errors, and so on. So in some sense there's this big overhang. And in other areas of ML — there's this great paper on AlphaGo — you can trade off train-time and test-time compute; if you can use four OOMs more test-time compute, that's almost like a three-and-a-half-OOM-bigger model. Because, again, at a hundred tokens a minute, a few million tokens is a few months of working time, and there's a lot more you can do in a few months of working time than in a few minutes right now.
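Spelling out that arithmetic in the units used here — the ~100 tokens per minute is the illustrative stand-in for human thinking speed from the conversation, and "+4 OOMs" means roughly 10,000x more tokens spent on one problem:

```python
# How much human working time a chain of thought of a given length is "worth".
tokens_per_minute = 100      # illustrative stand-in for human thinking speed
work_hours_per_week = 40

for tokens in (300, 300 * 10_000):          # a few hundred tokens vs. +4 OOMs
    hours = tokens / tokens_per_minute / 60
    weeks = hours / work_hours_per_week
    print(f"{tokens:>9,} tokens ~ {hours:8.2f} working hours ~ {weeks:5.1f} weeks")

# 300 tokens       ~ a few minutes of "thinking"
# 3,000,000 tokens ~ 500 working hours ~ 12.5 weeks, i.e. a few months of work
```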
So the question is: how hard is it to unlock that? The short-timelines-AI world is the one where it's not that hard. And the reason it might not be that hard is that there are only really a few extra kinds of tokens you need to learn. You need to learn the error-correction tokens — the tokens where you're like, ah, I think I made a mistake, let me think about that again. You need to learn the planning tokens — I'm going to start by making a plan, here's my plan of attack, I'm going to write a draft, now I'm going to critique my draft and think about it. These aren't things models can do right now, but the question is how hard that is. And in some sense there are two paths to agents. When Sholto was on your podcast, he talked about scaling leading to more nines of reliability — that's one path. The other path is this unhobbling path, where the model needs to learn this kind of System 2 process. If it can learn that System 2 process, it can use millions of tokens, think through them, and stay coherent. One analogy: when you drive, most of the time you're on autopilot — you're just driving and doing fine. But sometimes you hit a weird construction zone or a weird intersection, and then — sometimes with my girlfriend in the passenger seat — I'm like, be quiet for a moment, I need to think about this. That's when you go from autopilot to System 2 jumping in, and you're deliberately thinking about how to do it. Scaling is improving that System 1 autopilot — that's the brute-force way to get to agents, just improving System 1. But if you can get that System 2 process working, then I think you could quite quickly jump to this more agentified regime where the test-time compute overhang is unlocked.
What's the reason to think that this is an easy win — that there's some loss function that easily enables you to train the model to do System 2 thinking? There aren't a lot of animals that have System 2 thinking; it took evolution a long time to give it to us. The pre-training part, I get it: you've got trillions of tokens of internet text, you match that, and you get all these pre-training capabilities. What's the reason to think this is an easy unhobbling? Yeah, so, okay, a bunch of things. First of all, pre-training is magical, and it gave us this huge advantage for models of general intelligence: you just predict the next token, but predicting the next token lets the model learn these incredibly rich representations. These representation-learning properties are the magic of deep learning. You have these models and, instead of learning just statistical artifacts or whatever, they learn models of the world — that's also why they can generalize, because they learned the right representations. So you train these models and you have this raw bundle of capabilities, this almost unformed raw mass, and that's really useful. The unhobbling we did from GPT-2 to GPT-4 was taking that raw mass and RLHF-ing it into a really good chatbot. And that was a huge win: in the original InstructGPT paper, I think, the RLHF'd versus non-RLHF'd model is like a hundred-X model-size win on human preference ratings. It started being able to do simple chain of thought and so on. But you still have this reservoir of raw capabilities, and I think there's still a huge amount you're not doing with them. By the way, I think this pre-training advantage is also the difference from robotics: people used to say robotics was a hardware problem, and I think the hardware stuff is getting solved, but the thing you don't have there is this huge advantage of being able to bootstrap yourself with pre-training. You don't have all this unsupervised learning you can do; you have to start right away with the RL, the self-play, and so on. All right. So now the question is why some of this unhobbling and RL and so on might work.
Again, there's this advantage of bootstrapping. Your Twitter bio says "being pre-trained," right? But you're actually not being pre-trained anymore. You were pre-trained in grade school and high school, and at some point you transitioned to being able to learn by yourself. You weren't able to do that in elementary school — middle school, probably not; in high school it starts, but you need some guidance; by college, if you're smart, you can teach yourself. And models are just starting to enter that regime. So it's probably a little bit more scaling, and then you've got to figure out what goes on top. And it won't be trivial. A lot of deep learning is like this: it seems very obvious in retrospect, there's some obvious cluster of ideas, some thing that seems a little dumb but kind of works — but there are a lot of details you have to get right. So I'm not saying we're going to get this next month; I think it's going to take a while to really figure out the details. A while for you is, like, half a year or something? I don't know — somewhere between six months and three years, you know? But I think it's possible. And this is also very related to the issue of the data wall. One intuition on learning by yourself: in pre-training, the words are just flying by. It's like the teacher lecturing at you — the words fly by, and the model only gets a little bit from each of them. But that's not what you do when you learn by yourself. When you're reading a dense math textbook, you're not just skimming through it once — you wouldn't learn much from that. I mean, some wordcels do just get through by reading and rereading the math textbook and memorizing it — like a model that memorizes once you repeat the data enough. But what you actually do is read a page, think about it, have some internal monologue going, have a conversation with a study buddy, try a practice problem, fail a bunch of times.
At some point it clicks, you're like, this made sense, and you read a few more pages. We've kind of bootstrapped our way to models just starting to be able to do that — read it, think about it, try problems. And the question is whether all the self-play and synthetic-data work can make that actually function. Yeah — so basically you try to translate what's now in-context learning into the weights. Right now there's in-context learning, which is super sample-efficient — in the Gemini paper, the model just learns a new language in context. And then there's pre-training, which is not at all sample-efficient. What humans do is a kind of in-context learning: you read a book, you think about it until eventually it clicks — but then you somehow distill that back into the weights. In some sense, that's what RL is trying to do. And when RL works, it's kind of magical, because it's the best possible data for the model. When you try a practice problem and fail, and at some point you figure it out in a way that makes sense to you, that's the best possible data for you, because it's the way you would have solved the problem — rather than just reading how somebody else solved it.
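A schematic sketch of the kind of self-study loop being described — attempt problems, keep only the reasoning traces that actually work, and train on them, "the best possible data for the model, because it's the way it would have solved it." The model and verifier functions here are hypothetical placeholders, not any lab's actual API:

```python
# Schematic self-study loop: generate, verify, distill your own successes into the weights.
def self_study(model, problems, verifier, attempts_per_problem=8):
    new_training_data = []
    for problem in problems:
        for _ in range(attempts_per_problem):
            trace, answer = model.generate_with_reasoning(problem)  # think out loud
            if verifier(problem, answer):                           # e.g. unit tests or a checker
                new_training_data.append((problem, trace))          # keep its own working solution
                break
    model.train_on(new_training_data)   # "distill that back into the weights"
    return model
```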
By the way, if that take sounds familiar, it's because it was part of a question I asked John Schulman — which illustrates the thing I said in the intro, that a bunch of what I've learned about AI comes from the dinners we do before these interviews, where we also cover, "hey, what should I ask John Schulman?" Okay, so suppose this is the way things go and we get these unhobblings. Yeah — and on top of that there's the scaling, right? You have this baseline, enormous force of scaling. GPT-2 to GPT-4: GPT-2 was amazing, it could string together plausible sentences, but it could barely do anything — it was kind of like a preschooler. And then GPT-4 is writing code, it can do hard math — it's like a smart high schooler. So there's this big jump. In the essay series I go through and count the orders of magnitude of compute scale-up plus algorithmic progress, and scaling alone, by 2027/28, is going to do another preschool-to-high-school-sized jump on top of GPT-4. So that will already be, at a per-token level, incredibly smart, and that gets you some more reliability. And then you add these unhobblings that make it look much less like a chatbot and much more like this agent, like a drop-in remote worker. And that's when things really get going.
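As a rough illustration of that OOM-counting, in the spirit of the essay series — the two per-year rates below are the half-OOM-a-year figures cited elsewhere in this conversation for compute and for algorithmic progress; treating them as constant from GPT-4 out to 2027/28, and splitting them evenly, is an assumption for illustration:

```python
# Illustrative effective-compute count, GPT-4 (pre-trained ~2022) to 2027/28.
compute_ooms_per_year = 0.5   # physical training-compute trend cited above
algo_ooms_per_year = 0.5      # algorithmic-progress trend cited later in the conversation
years = 2027.5 - 2022

effective_ooms = years * (compute_ooms_per_year + algo_ooms_per_year)
print(f"~{effective_ooms:.1f} OOMs of effective compute over GPT-4 by 2027/28")
# ~5.5 OOMs -- roughly another GPT-2-to-GPT-4-sized jump, before counting any unhobbling.
```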
Okay. I want to ask more questions about this, but let's zoom out. So suppose you're right about all this. And I guess, because of the clusters — 2026/27 is the gigawatt cluster, and 2028 is the 10-gigawatt one? Something like that, yeah. So that's like a 5.5-level model by 2027, or whatever it ends up being called. What does the world look like at that point? You have these remote workers who can replace people — what is the reaction to that, in terms of the economy, politics, geopolitics? Yeah. So I think 2023 was a really interesting year to experience as somebody who was really following the AI stuff. What were you doing in 2023? I mean, OpenAI. Yeah. And I'd been thinking about this and talking to a lot of people about it in the years before, and it was this kind of weird thing — you almost didn't want to talk about AGI, it was kind of a dirty word. And then in 2023 people saw ChatGPT for the first time, they saw GPT-4, and it just exploded. It triggered this huge wave of capital expenditures from all these firms, the explosion of revenue at Nvidia, and so on. Things have been quieter since then, but the next thing has been in the oven, and I expect these forces to intensify with every generation. People see the models, and because they haven't priced them in, they're going to be surprised, and the models are going to seem kind of crazy. And then revenue accelerates: suppose you do hit $10 billion, I don't know, this year, and you just continue on this doubling trajectory — revenue doubling every six months or so — you're actually not that far from $100 billion; maybe that's 2026. And at some point, what happened to Nvidia is going to happen to big tech — their stocks are going to explode. And a lot more people are going to feel it. For me, 2023 was the moment where AGI went from a theoretical, abstract thing — you make the models — to: I see it, I feel it. I see the path, I can see the cluster it'd be trained on, the rough combination of algorithms, the people, how it's happening. Most of the world is not there; most of the people who feel it are, like, right here. But a lot more of the world is going to start feeling it, and I think that's going to start getting intense.
Okay. So right now, who feels it? You go on Twitter and there are these GPT-wrapper companies — whoa, GPT-4 is going to change our business. So I'm very bearish on the wrapper companies, because they're betting on stagnation. They're betting that you have these intermediate models and it takes schlep to integrate them, and I'm like — we're just going to sonic-boom you. We're going to get the unhobbled models, the drop-in remote worker, and then your stuff isn't going to matter. Okay, sure. So that's done. Now — SF is paying attention, this crowd here is paying attention. Who is going to be paying attention in 2026, 2027?
And presumably these are the years in which hundreds of billions of capex are being spent on AI. I mean, I think the national security state is going to be starting to pay a lot of attention, and I hope we get to talk about that. Okay, let's talk about it now. What's happening — what is the political reaction, immediately, and even internationally? What do people see? Right now, I don't know if Xi Jinping reads the news and thinks, what are you doing about this, comrade? So what happens when he sees a remote-worker replacement with $100 billion in revenue behind it? There are a lot of businesses that have $100 billion in revenue.
And people aren't staying up all night talking about them. Right — the question is when the CCP, and when the American national security establishment, realize that superintelligence is going to be absolutely decisive for national power. And this is where the intelligence explosion stuff comes in, which we should also talk about later. You have AGI, you have the drop-in remote worker that can replace you or me — at least the remote jobs, the cognitive jobs. And then, fairly quickly — by default, you turn the crank one or two more times and you get things smarter than humans. But even beyond just turning the crank a few more times, I think one of the first jobs to be automated is going to be that of the AI research engineer. And if you can automate AI research, things can start going very fast.
Right now there's already this trend of half an order of magnitude a year of algorithmic progress. Suppose that at that point you have GPU fleets in the tens of millions for inference, or more, and you can run something like 100 million human-equivalents of these automated AI researchers. If you can do that, you can maybe do decades' worth of ML research progress in a year — you get some sort of 10x speedup. And if you can do that, I think you can make the jump to AI that is vastly smarter than humans within a year, a couple of years. And then it broadens: you have this initial acceleration of AI research, and it broadens as you apply the R&D to a bunch of other fields of technology. At the extreme, at this point you have, say, a billion superintelligent researchers, engineers, technicians — perfectly competent at all of this — and they're going to figure out robotics.
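A very rough sketch of the "100 million automated AI researchers" figure mentioned a moment ago — only the shape of the calculation comes from the conversation; the per-GPU serving throughput is a placeholder assumption for a very large future model, and the tokens-per-minute figure is the same stand-in for human thinking speed used earlier:

```python
# Back-of-envelope: how many "human-equivalents" an inference fleet could run.
inference_gpus = 10_000_000        # "GPU fleets in the tens of millions"
tokens_per_gpu_per_sec = 20        # placeholder throughput per H100-equivalent (assumption)
human_tokens_per_minute = 100      # illustrative stand-in for human thinking speed

fleet_tokens_per_sec = inference_gpus * tokens_per_gpu_per_sec
human_tokens_per_sec = human_tokens_per_minute / 60
researcher_equivalents = fleet_tokens_per_sec / human_tokens_per_sec
print(f"~{researcher_equivalents / 1e6:.0f} million human-equivalents, running around the clock")
# ~120 million with these placeholder numbers -- the ballpark of the figure above,
# and unlike human researchers they never sleep.
```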
Weren't we talking about robotics being a software problem? Well — you have a billion smarter-than-the-smartest-human AI researchers on your cluster, and at some point during the intelligence explosion they're going to figure out robotics. And then again, it expands. And if you play this picture forward, I think it's fairly unlike any other technology, in that a couple of years of lead could be utterly decisive in, say, military competition. Look at Gulf War I: the Western coalition forces had something like a hundred-to-one kill ratio. They had better sensors on their tanks, they had more precision — precision missiles, GPS — they had stealth, they had maybe 20 or 30 years of technological lead, and they just completely crushed the other side. Superintelligence applied to broad fields of R&D — and then the industrial explosion as well, you have the robots, you're making huge amounts of materiel — could compress something like a century's worth of technological progress into less than a decade. And that means a couple of years could mean a Gulf War I-style advantage in military affairs — including, maybe, a decisive advantage that even preempts nuclear deterrence.
How do you find the stealthy nuclear submarines? Right now that's a problem of sensors and software — but suppose you can detect where they are, you can find them, and you have millions or billions of mosquito-sized drones: they take out the nuclear submarines, they take out the mobile missile launchers, they take out the other nukes. Anyway — enormously destabilizing, enormously important for national power. And at some point people are going to realize that. Not yet, but they will. And when they do, I don't think it'll just be the AI researchers in charge. The CCP is going to mount an all-out effort to infiltrate American AI labs — billions of dollars, thousands of people, the full force of the Ministry of State Security. The CCP is going to try to outbuild us: power in China, the electric grid — they added as much power in the last decade as the entire US electric grid. So the hundred-gigawatt cluster, at least, is going to be a lot easier for them to get.
And so by that point, I think it's going to be an extremely intense international competition. Okay. So in this picture, one thing I'm uncertain about is whether it's more like what you say: you develop an AGI, you make it into an AI researcher, and for a while — a year or something — you're only using that ability to make hundreds of millions of other AI researchers; then the thing that comes out of this really frenetic process is a superintelligence, and only then does it go out into the world, developing robotics and helping you take over other countries and whatever. I think it's a little bit more gradual than that — it's not on-and-off, but it is an explosion that starts narrowly. It can do cognitive jobs, and the highest-ROI cognitive job is making the AI better — and solving robotics. And as you solve robotics, now you can do R&D in biology and other technologies. Initially you start with the factory workers wearing the glasses and the AirPods, with the AI instructing them — because you can make any worker into a skilled technician — and then you have the robots come in. So the process expands as it goes. So the Meta Ray-Bans turn out to matter. Right — like, whatever: the fabs in the US are constrained by skilled workers. Even before you have the robots, you have the cognitive superintelligence, and it can make everyone into a skilled worker immediately. But that's a brief period; the robots will come soon.
Sure. Okay. So suppose this is actually how the tech progresses in the United States — maybe because these companies are already experiencing hundreds of billions of dollars of AI revenue, and at this point they're borrowing hundreds of billions more in the corporate debt markets. But why does some CCP bureaucrat, some 60-year-old guy, look at this and see more than "Copilot has gotten better"? I mean, it's much more than Copilot has gotten better. Because to shift the production of an entire country, to dislocate energy that's otherwise being used for consumer goods and feed it all into data centers — part of this whole story is that you realize superintelligence is coming soon, right? And you realize it, and maybe I realize it — I'm not sure how much I realize it — but will the national security apparatus in the United States, and will the CCP, realize it? Yeah. Look, I think in some sense this is a really key question. I think we have a few more years of midgame, basically, where you get a few more 2023-type moments, and that just keeps updating more and more people. The trend lines will become clear.
I think you will see some amount of the COVID dynamic. February of 2020 honestly felt a lot like today: it felt like this utterly crazy thing was impending, you could see the exponential, and yet most of the world just didn't realize it. The mayor of New York was saying, go out to the shows, this is just anti-Asian racism or whatever. But at some point people saw the exponential, and then these pretty crazy, radical reactions came. Right. Okay, so — by the way, what were you doing during COVID? We're talking February 2020 — freshman, sophomore, what? Junior. But still, you were like a 17-year-old junior or something. And then — did you short the market? Yeah. Did you sell at the right time? Yeah. Okay.
So there will be a March 2020 moment, like there was with COVID. And then, to make the analogy you make in the series, this will cause the reaction of: we've got to do the Manhattan Project for America here. But I wonder what the politics of this will be, because the difference here is that it's not just "we need the bomb to beat the Nazis." It's: we're building this thing that's making all our energy prices rise, it's automating a bunch of our jobs, and on the climate side people are going to be like, oh my god, it's making climate change worse — and it's helping big tech. Politically, this doesn't seem like a dynamic where the national security apparatus or the president says, we have to step on the gas here and make sure America wins. Yeah. I mean, again, I think a lot of this really depends on how much people are feeling it, how much people are seeing it.
I think there's a thing where basically our generation is so used to peace — in America, in Germany, nothing much happens. But this kind of extreme intensity, extraordinary things happening in the world, intense international competition — that's very much the historical norm. In some sense we've been living through a very unusual, quiet period. The history of the world looks more like: in World War II, something like 50% of GDP went to war production; the US borrowed over 60% of GDP, and I think Germany and Japan over 100%. In World War I, the UK, France, and Germany all borrowed over 100% of GDP. And much more was on the line. People talk about World War II being uniquely destructive — 20 million Soviet soldiers dying, 20% of Poland — but that sort of thing happened all the time: in the Seven Years' War, whatever it was, 20 or 30% of Prussia died; in the Thirty Years' War, up to 50% of large swaths of Germany. So the question is whether people will see that the stakes here are really, really high — that, basically, history is back. I think the American national security state thinks very seriously about stuff like this; they think very seriously about competition with China. And China very much thinks of itself as on this historical mission — the great rejuvenation of the Chinese nation. It thinks a lot about national power, a lot about the world order. And then there's a real question of timing: do they start taking this seriously only when the intelligence explosion is already happening — which is quite late — or do they start taking it seriously two years earlier? That matters a lot for how things play out. But at some point they will. And at some point they will realize that this will be utterly decisive for — not just some proxy war somewhere, but whether liberal democracy can continue to thrive, whether the CCP will continue existing. And I think that will activate forces we haven't seen in a long time.
The great power conflict thing definitely seems compelling. All kinds of different outcomes seem much more likely when you think from a historical perspective — when you zoom out beyond the liberal democracy we've had the pleasure of living in, in America, for the last, let's say, 80 years, to the dictatorships and, obviously, the war and famine and everything else. I was reading The Gulag Archipelago, and one of the chapters begins with Solzhenitsyn saying: if you had told Russian citizens under the Tsars that all these new technologies would bring not some great Russian revival — the country becoming a great power, the citizens made wealthy — but instead tens of millions of Soviet citizens tortured in the worst possible ways by millions of beasts, and that this is what the twentieth century would bring, they wouldn't have believed you. They'd have called you a slanderer.
Yeah. And the possibilities for dictatorship with superintelligence are even crazier. Imagine you have a perfectly loyal military and security force: no more rebellions, no more popular uprisings. You have perfect lie detection, you have surveillance of everybody — you can perfectly figure out who the dissenters are and weed them out. No Gorbachev, someone with doubts about the system, would ever have risen to power; no military coup would ever have happened. And I think part of why things have worked out historically is that ideas can evolve — there's a sense in which time heals a lot of wounds and settles a lot of debates. A lot of people had really strong convictions that were overturned by time, because there's been this continued pluralism and evolution. If you take a CCP-like approach to truth — truth is what the party says — and you supercharge that with superintelligence, there's a way in which that could just be locked in and endure for a very long time. The possibilities there are pretty terrifying.
Your point about history, about living in America for the past 80 years — I think this is one of the things I took away from growing up in Germany: a lot of this stuff feels more visceral. My mother grew up in the former East, my father in the former West; they met shortly after the wall fell. The end of the Cold War was this extremely pivotal moment for me because, well, it's the reason I exist. And then growing up in Berlin, near where the wall had been — my great-grandmother, who is still alive and was very important in my life, was born in '34. She grew up during the Nazi era, through all of that, then World War II — she saw the firebombing of Dresden from the country cottage or wherever the kids had been sent. Then she spent most of her life in the East German communist dictatorship. She'd tell me about '53, the popular uprising, the Soviet tanks coming in, her husband telling her to get home quickly, get off the streets. She had a son who tried to ride a motorcycle across the Iron Curtain and was put in a Stasi prison for a while. And then finally, when she's almost 60, it's the first time she lives in a free country, a wealthy country. When I was a kid, the thing she really didn't want me to do was get involved in politics, because joining a political party had very bad connotations for her. Anyway, she helped raise me when I was young. So it doesn't feel that long ago to me. It feels very close.
Yeah. So I wonder — when we're talking today about the CCP: the people in China who'd be doing their version of the project will be AI researchers who are somewhat Westernized, who either got educated in the West or have colleagues in the West. Are they going to sign up for a CCP project that hands control over to Xi Jinping? What's your sense? I mean, fundamentally they're just people. Can't you convince them about the dangers of superintelligence? Will they be in charge, though? And this is also the case in the US, by the way — there's this rapidly depreciating influence of the lab employees. Right now AI lab employees have so much power over this — you saw it in the November events, so much power. But they're going to get automated, and they're going to lose that power, and it'll just be a few people in charge with their armies of automated AIs. And also the politicians and the generals and the national security state come in — it's like those classic scenes from the Oppenheimer movie: the scientists built it, and then the bomb was shipped away and it was out of their hands. I actually think it's good for lab employees to be aware of this: you have a lot of power now, but maybe not for that long, so use it wisely. And I do think they would benefit from some more organs of representative democracy. What do you mean by that? Oh — in the OpenAI board events, employee power was exercised in a very direct-democracy way, and I feel like how that went really highlighted the benefits of representative democracy and having some deliberative organs. Interesting.
Yeah. Let's go back to the hundred-billion-dollar cluster, the trillion-dollar cluster. These companies are trying to build clusters that are this big — where are they building them? Because if it takes the amount of energy a small or medium-sized US state uses, is it that Colorado just gets no power? Is it happening in the United States, or somewhere else? I mean, in some sense this is the thing I find kind of funny. You say Colorado gets no power — the easy way to get the power would be to displace less economically useful stuff: whatever, buy up the aluminum smelting plant, that has to go, and we'll replace it with the data center, because that's more important. That's not actually what's happening, because a lot of these power contracts are locked in long-term, and obviously people don't like things like that. So it seems like in practice what it requires, at least right now, is building new power. That might change, and I think that's when things get really interesting — when it's, no, we're just dedicating all of the power to the AGI. But okay, right now it's building new power. Ten gigawatts I think is quite doable — it's like a few percent of US natural gas production. Of course, once you have the 10-gigawatt training cluster you have a lot more inference on top, so it starts adding up. A hundred gigawatts — that starts getting pretty wild; again, that's over 20% of US electricity production. I still think it's doable, especially if you're willing to go for natural gas.
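Rough checks on those two power claims — the US grid and gas-production figures below are approximate public ballpark numbers, and the heat rate is a typical combined-cycle value, so treat this as a sanity check rather than a precise estimate:

```python
# Sanity-check the "few percent of gas production" and "over 20% of generation" claims.
US_GENERATION_TWH_PER_YEAR = 4200   # ~total US electricity generation (approximate)
US_DRY_GAS_BCF_PER_DAY = 103        # ~US dry natural gas production (approximate)
HEAT_RATE_BTU_PER_KWH = 7000        # typical combined-cycle gas plant
BTU_PER_BCF = 1.035e12

def gas_share(gw):
    kwh_per_day = gw * 1e6 * 24                                    # GW -> kWh/day
    bcf_per_day = kwh_per_day * HEAT_RATE_BTU_PER_KWH / BTU_PER_BCF
    return bcf_per_day / US_DRY_GAS_BCF_PER_DAY

def grid_share(gw):
    avg_us_gw = US_GENERATION_TWH_PER_YEAR * 1e3 / 8760            # TWh/yr -> average GW
    return gw / avg_us_gw

print(f"10 GW run on gas : ~{gas_share(10):.1%} of US gas production")   # ~1.6%
print(f"100 GW cluster   : ~{grid_share(100):.0%} of US generation")     # ~21%
```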
But I do think it is incredibly important — incredibly important — that these clusters are in the United States. And why does it matter that they're in the US? Look, there are some people trying to build clusters elsewhere — there's a lot of free-flowing Middle Eastern money trying to build clusters elsewhere. And this comes back to the national security question we talked about earlier. Would you do the Manhattan Project in the UAE? I think you can put the clusters in the US, you can put them in allied democracies. But once you put them in authoritarian dictatorships, you create this irreversible security risk. Once the cluster is there, it's much easier for them to steal the weights — they can literally steal the AGI, the superintelligence. It's like they got a direct replica of the atomic bomb. And with their ties to China, they can ship that on to China. So that's a big, huge risk.
Another thing is they can just seize the compute. Maybe right now people are thinking of these as ChatGPT, big-tech-product clusters, but the clusters being planned now, three to five years out — those will be the AGI, superintelligence clusters. So when things get hot, they might just seize the compute. Suppose you've put 25% of the compute capacity in these Middle Eastern dictatorships and they seize it: now the ratio of compute is three to one. We still have more, but even with only 25% of the compute there, it starts getting pretty hairy — three to one is not that great of a ratio, and you can do a lot with that amount of compute. And look, even if they don't actually do this — even if they don't seize the compute, even if they don't steal the weights — there's just a lot of implicit leverage you're handing them: they get the seat at the AGI table. And I don't know why we're giving authoritarian dictatorships a seat at the AGI table.
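For completeness, the ratio arithmetic behind that "three to one":

```python
# If 25% of frontier compute sits in (and can be seized by) a hostile state,
# the remaining advantage is only 75:25.
hostile_share = 0.25
ratio = (1 - hostile_share) / hostile_share
print(f"{ratio:.0f}:1")   # 3:1 -- "not that great of a ratio"
```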
Okay. So there's going to be a lot of compute in the Middle East if these deals go through. First of all, who is doing this — is every single big tech company trying to build there? I guess there are reports — Microsoft, yeah — which we'll get into. So the UAE gets a bunch of compute because we're building the clusters there. And why — say they end up with 25% — why does the compute ratio matter? If it's about them being able to kick off the intelligence explosion, isn't it just some threshold: either you have the hundred million AI researchers or you don't? I mean, you can do a lot with 33 million extremely smart scientists. And again — first of all, that might be enough to build the crazy bioweapons. And then you're in a situation where, wow, they stole the weights, they seized the compute, and now they can build these crazy new WMDs that superintelligence will make possible. You've just proliferated this stuff, and it'll be really powerful.
you know, three three acts on compute isn't actually that much. And so the, um, you know, the, um, you know, I think I think I worry a lot about is, I think everything, I think that risk is situation is if we're in some sort of like really tight neck feverish international
struggle, where we're really close with the CCP, only months apart. The situation we want to be in, and could be in if we play our cards right, looks a bit more like the US building the atomic bomb versus the German project that was way behind, years behind. If we have that, we have so much more wiggle room to get safety right. There are going to be these crazy new WMDs, things that completely undermine nuclear deterrence, intense competition. That's so much easier to deal with if you don't have somebody right on your tail, where you've got to go at maximum speed with no wiggle room, worried that at any time they can overtake you. They can also just try to outbuild you. China might literally win, if they can steal the weights, because they can outbuild you, and maybe they have less caution, both good and bad caution, like whatever unreasonable regulations we have. Or you're just in this really
tight race. If you're in this really tight race, this feverish struggle, I think that's when there's the greatest peril of self-destruction. So then presumably the companies trying to build clusters in the Middle East realize this. Is it just that it's impossible to do this in America, so if you want American companies to do this at all, it's the Middle East or not at all? Like, "I'm trying to build the Three Gorges Dam cluster." I mean, there are a few reasons. One of them is just that people aren't thinking about this as the AGI, superintelligence cluster. They're just like, yeah, cool clusters for my chatbot. But the ones being planned right now, if you're doing ones for inference, presumably you can spread them out across the country or something, but for the ones they're building, do they realize we're going to do one training run in this thing?
I just think it's harder to distinguish between inference and training compute. People can claim it's training compute, or sorry, they might say it's inference compute when actually it's useful for training compute too. Okay, so these are inference data centers and things like that. Yeah, and the future of training, like RL, looks a lot like inference, for example. Or you just end up connecting them in time. It's a lot of raw material; it's like placing your uranium enrichment facilities there. Sure. Okay, so a few reasons. One is just that they don't think about it as the AGI cluster. Another is just easy money from the Middle East.
Another one is that some people think you can't do it in the US. And I think we actually face a real systems competition here, because some people think it's only autocracies that can do this, that can top-down mobilize the industrial capacity, the power, and get the stuff done fast. This is the sort of thing we haven't faced in a while. But during the Cold War there was this intense systems competition. East versus West Germany was this, right? West Germany's liberal democratic capitalism versus communist state planning. Now it's obvious that the free world would win, but even as late as '61, Paul Samuelson was predicting the Soviet Union would outgrow the United States because they were able to mobilize industry better. And so, yeah, there are some people who shitpost about loving America by day, but then they're betting against America, betting against the liberal order. I just think it's a bad bet, and the reason I think it's a bad bet is that this stuff is really possible in the US. There's some amount that we have to get our act
together. So I think there are basically two paths to doing it in the US. One is you've just got to be willing to do natural gas, and there's ample natural gas. You put your cluster in West Texas, or in southwest Pennsylvania by the Marcellus Shale: a 10 gigawatt cluster, super easy; a 100 gigawatt cluster, also pretty doable. Natural gas production in the United States has almost doubled in a decade. You do that one more time within the next seven years or whatever, and you could power multiple trillion-dollar data centers.
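To give a rough sense of scale for that claim, here is a hedged back-of-the-envelope sketch. The heat rate, gas energy content, and US production figures below are ballpark assumptions added for illustration; they are not numbers from the conversation.

```python
# Back-of-the-envelope: natural gas needed to run a 100 GW cluster
# around the clock. All constants are rough, assumed ballpark figures.

cluster_power_gw = 100.0
hours_per_year = 8760
electricity_twh = cluster_power_gw * hours_per_year / 1000    # ~876 TWh per year

heat_rate_mmbtu_per_mwh = 7.0      # assumed combined-cycle heat rate (~50% efficient)
mmbtu_per_bcf = 1.037e6            # ~1.037 million MMBtu per billion cubic feet of gas

gas_mmbtu = electricity_twh * 1e6 * heat_rate_mmbtu_per_mwh   # MWh times MMBtu/MWh
gas_bcf_per_day = gas_mmbtu / mmbtu_per_bcf / 365

assumed_us_production_bcf_per_day = 100.0   # rough assumed US dry gas production

print(f"electricity: ~{electricity_twh:.0f} TWh/yr")
print(f"gas burn: ~{gas_bcf_per_day:.0f} Bcf/day, "
      f"~{100 * gas_bcf_per_day / assumed_us_production_bcf_per_day:.0f}% of assumed US output")
```

On these assumptions, a single 100 GW cluster burns on the order of 15 to 20 percent of current US gas output, which is why one more production doubling would comfortably cover several such clusters.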
But the issue there is that a lot of people have made these climate commitments, and not just governments; it's the private companies themselves, the Microsofts, the Amazons, and so on. They have these climate commitments, so they won't do natural gas. I admire the climate commitments, but I think at some point the national interest and national security are more important. The other path is the green energy megaprojects: the solar and the batteries and the SMRs and geothermal. But if we want to do that, there needs to be a broad regulatory push. You can't have permitting take a decade, so you've got to reform permitting, you've got to have blanket NEPA exemptions for this stuff. There are state-level regulations where, yeah, you can build the solar panels and batteries next to your data center, but it still takes years because you actually have to hook it up to the state electrical grid. And you have to use governmental powers to create rights of way, to have multiple clusters and connect them with thick cables, basically. So look, ideally we do both, natural gas and the broad regulatory agenda. I think we have to do at least one. And then I think this stuff
is just possible in the United States. Yeah. I think a good analogy for this, by the way: before the conversation I was reading a good book about World War II industrial mobilization called Freedom's Forge. When we think back on that period, especially if you read the Patrick Collison, progress studies kind of stuff, it's like, they had such capacity back then, and people just got it done, but now it's a mess. It was really interesting. You had people from the Detroit auto industry, like Knudsen, running mobilization for the United States, and they were extremely competent. But at the same time, you had labor organization and agitation, which is actually very analogous to the climate pledges and climate change concerns we have today. They would have these strikes, literally into 1941, that would cost millions of man-hours when we were trying to make tens of thousands of planes a month or something, and they would just debilitate factories, over trivial, pennies-on-the-dollar kinds of concessions from capital. And there were concerns that, oh, the auto companies are trying to use the pretext of a potential war to avoid paying labor the money it deserves. That's what climate change is today. You think, ah, fuck, America's fucked, we're not going to be able to build this shit, if you look at NEPA or something, but I didn't realize how debilitating labor was back then. Right. Before that, around '39 or whatever, the American military was in total shambles. You read about it and it
reads a little bit like the German military today. Military expenditures were, I think, less than 2% of GDP, while the European countries had gone, even in peacetime, to above 10% of GDP in this rapid mobilization. We were making basically no planes, there were no military contracts, everything had been starved during the Great Depression. But there was this latent capacity, and at some point the United States got its act together. The thing I'll say is that this applies the other way around too, to China. Sometimes people kind of count them out, like, the export controls and so on. But they're able to make some 7-nanometer chips now. There's a question of how many they could make, but there's at least a possibility that they're going to mature that ability and make a lot of 7-nanometer chips. And there's a lot of latent industrial capacity in China; they're able to build a lot of power fast. Maybe that hasn't been activated for AI yet, but the same way a lot of people in the US and the United States government are going to wake up, at some point the CCP is going to wake up. Yeah. Okay. Going back to the question about the companies: are they blind to the fact that there's going to be some sort of... well, okay, they realize scaling is a thing, right?
Obviously their whole plans are contingent on scaling. So they understand that in a few years we're going to be building the ten-gigawatt data centers, and at that point the people who can keep up are big tech, potentially at the edge of their capabilities, then sovereign wealth funds and things like that, and also major countries: America, China, whatever. So what's their plan? If you look at these AI labs, what's their plan given this landscape? Do they not want the leverage of having this be in the United States? I mean, I don't know. One thing the Middle East does offer is capital, but America has plenty of capital. We have trillion-dollar companies. What are these Middle Eastern states? They're kind of like trillion-dollar oil companies. We have trillion-dollar companies and very deep financial markets. Microsoft could issue hundreds of billions of dollars of bonds to pay for these clusters. Look, another argument being made, and I think it's worth taking seriously, is that if we don't work with the UAE or these Middle Eastern countries, they're just going to go to China. They're going to build data centers, they're going to pour money into AI regardless, and if we don't work with them, they'll just support China.
And look, I think there's some merit to the argument, in the sense that I think we should basically be doing benefit sharing with them. We should talk about this later, but on the road to AGI, there should be two tiers of coalitions. There should be the narrow coalition of democracies: that's the coalition that's developing AGI. And then there should be a broader coalition where we go to other countries, including dictatorships, and we're willing to offer them some of the benefits of the AI, some of the sharing. So look, if the UAE wants to use AI products, if they want to run Meta recommendation engines, if they want to run the last-generation models, that's fine. But by default they just wouldn't have had this seat at the AGI table. Yeah, they have some money, but a lot of people have money. The only reason they're getting this seat at the AGI table, the only reason these dictators will have this enormous amount of leverage over this extremely national-security-relevant technology, is because we're getting them excited and offering it to them. Yeah, who specifically is doing this? Is it just the companies who are going there to fundraise, like, "AGI is happening, and you can fund it or not"? It's been reported that Sam is trying to raise seven trillion or whatever for a chip project, and it's unclear how many of the clusters will be there and so on. But definitely stuff is happening.
Look, another reason I'm at least a little suspicious of this argument that if the US doesn't work with them they'll go to China: I've heard from multiple people, and this wasn't from my time at OpenAI, and I haven't seen the memo, but I have heard from multiple people that at some point several years ago, OpenAI leadership had laid out a plan to fund and sell AGI by starting a bidding war between the governments of the United States, China, and Russia. It's kind of surprising to me that they'd be willing to sell AGI to the Chinese and Russian governments, but there's also something that feels eerily familiar about starting a bidding war and playing them off each other: if you don't do this, China will. So anyway, interesting. Okay, so that's pretty fucked up. But given that, suppose you're right about how we ended up in this place. The way one of our friends put it is that the Middle East has, like no other place in the world, billions or trillions of dollars up for persuasion. And it's true, and we went to them, like, you know, the Microsoft deal. Yeah. Okay. But so let's say you're right that you shouldn't have
gotten them excited about AGI in the first place. But now we're in a place where they are excited about AGI, and they're like, fuck, we don't want to just have GPT-5 while you're off building superintelligence; this Atoms for Peace thing doesn't work for us. And if you're in that place, don't they already have the leverage, and you might as well work with them? I don't think so. The UAE on its own is not competitive, right? They're already export-controlled: you're not actually supposed to ship Nvidia chips over there. It's not like they have any of the leading AI labs. They have money, but it's actually hard to just translate money into this. But the other thing you've been saying, in laying out your vision, is that this is very much almost an industrial process: you put in the compute, you put in the algorithms, you add that up, and you get AGI out the other end. If it's something more like that, then the case for somebody being able to catch up rapidly seems more compelling than if it depends on something harder to copy. Well, if they can steal the algorithms and if they can steal the weights, that's really where... I mean, we should talk about this. This is really important.
So how easy would it be for an actor to steal these things, not the stuff that gets released, the Scarlett Johansson's-voice stuff, but the RL things you're talking about, the unhobblings? I mean, extremely easy, right? DeepMind doesn't even claim it's hard. DeepMind put out their frontier safety framework where they lay out security levels, 0 to 4, with 4 being the one resistant to state actors, and they say we're at level 0. And just recently there was an indictment of a guy who stole a bunch of really important AI code and went to China with it. All he had to do to steal the code was copy it into Apple Notes and export it as a PDF, and that got past their monitoring. And Google probably has the best security of any of the AI labs, because they have the Google infrastructure. Roughly, I would think of this as the security of a startup. And what does the security of a startup look like?
It's not that good. It's easy to steal. So, even if that's the case: a lot of your posts make the argument that we're going to get the intelligence explosion because if we have somebody with the intuition of an Alec Radford, able to come up with all these ideas, that intuition is extremely valuable and you can scale it up. But if it's the intuition that matters, that's not going to be just in the code, right? And also, because of export controls, these countries are going to have slightly different hardware; you're going to have to make different trade-offs and probably rewrite things to be compatible with that. Is it just a matter of getting the right pen drive, plugging it into the gigawatt data center next to the Three Gorges Dam, and then you're off to the races?
There are a few different things. One threat model is just stealing the weights themselves. The weights one is particularly insane, because they can just steal the literal end product, like making a replica of the atomic bomb, and then they're ready to go. And that one is extremely important around the time we have AGI and superintelligence, because China can build a big cluster. By default we'd have a big lead, because we have the better scientists, but if we make the superintelligence and they just steal it, they're off to the races. Weights are a little less important right now, because who cares if they steal the GPT-4 weights. But we still have to get started on weight security now, because if we think AGI by 2027, this stuff is going to take a while. It's not just going to be, oh, we do some access control. If you actually want to be resistant to Chinese espionage, it needs to be much more intense. The thing, though, that I think
people aren't paying enough attention to is the secrets, as you say. The compute stuff is sexy and we talk about it, but I think people underrate the secrets. Algorithmic progress is something like half an order of magnitude a year just by default, so the algorithms keep improving, and that's huge. If we have a few years' lead on algorithms by default, that's effectively like having a 10x, 30x, 100x bigger cluster, if we protect them.
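A quick gloss on that arithmetic: an order of magnitude (OOM) is a factor of 10, so a lead of N years at the stated default rate of roughly 0.5 OOM per year of algorithmic progress is worth about 10^(0.5 * N) in effective compute. A minimal sketch:

```python
# Effective-compute multiplier from an algorithmic lead of N years,
# assuming ~0.5 orders of magnitude (OOMs) of progress per year,
# the default rate cited in the conversation.

ooms_per_year = 0.5

for lead_years in (1, 2, 3, 4):
    multiplier = 10 ** (ooms_per_year * lead_years)
    print(f"{lead_years}-year lead ~ {multiplier:,.0f}x effective compute")

# 1 yr ~ 3x, 2 yr ~ 10x, 3 yr ~ 32x, 4 yr ~ 100x: roughly the
# "10x, 30x, 100x bigger cluster" figures above.
```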
And then there's this additional layer of the data wall. We have to get through the data wall, which means we actually have to figure out some basic new paradigm, the AlphaGo step two. AlphaGo step one learns from human imitation; AlphaGo step two is the self-play RL. Everyone's working on that right now, and maybe we're going to crack it. If China can't steal that, then they're stuck; if they can steal it, they're off to the races. But whatever that thing is, is it literally something I can write down on the back of a napkin? Because if it's that easy, then why is it so hard for them to figure it out themselves? And if it's more about the intuitions, then you just have to hire Alec Radford; what are you copying down? Well, there are a few layers to this. At the top is
the fundamental approach. On pre-training it might be: unsupervised learning, next-token prediction, train on the entire internet. You actually get a lot of juice out of that already, and that one's very quick to communicate. Then there are a lot of details that matter. You were talking about this earlier: probably the thing people are going to figure out will be somewhat obvious, some clear, not-that-complicated thing that'll work, but there will be a lot of details to getting it right. But if that's true, then again, why do we think that getting state-level security on the secrets will prevent China from catching up? If it's just, oh, we know some sort of self-play RL will be required to get past the data wall, and if it's as easy as you say, I mean, it's going to be solved by 2027, you say, then it's not that hard. I just think the US, and all the leading AI labs are in the United States, has this huge lead. By default, China actually has some good LLMs, but why do they have good LLMs? They're just using the open-source code, Llama or whatever. I think people really underrate both the divergence on algorithmic progress and the lead the US would have by default, because all this stuff was published until recently. The Chinchilla scaling laws were published. There were a bunch of MoE papers. There was the transformer. All that stuff was published, and that's why open source is good, that's why China can make some good models. At least they're not publishing it anymore, and if we actually kept it secret, it would be this huge edge. To your point about tacit knowledge and Radford: there's another layer at the bottom, which is the large-scale engineering work to make these big training runs work. I think that is a little more tacit knowledge.
But I think China will be able to figure that out; that's more of an engineering slog. So they'll figure that part out, but not how to get the RL thing working? I mean, look, Germany during World War II went down the wrong path: they did heavy water, and that was wrong. There's actually an amazing anecdote in The Making of the Atomic Bomb about this. Secrecy was one of the most contentious issues early on as well. Part of it was that Szilard really thought the nuclear chain reaction was possible, and so the atomic bomb was possible, and he went around saying this was going to be of enormous strategic and military importance. A lot of people didn't believe it, or they were like, well, maybe it's possible, but I'm going to act as if it's not, and science should be open, and all these things. Anyway, in those early days there had been some incorrect measurements made of graphite as a moderator, and those were the measurements Germany had. So they thought graphite was not going to work, we have to do heavy water. But then Fermi made some new measurements on graphite, and they indicated that graphite would work. This was really important. And then Szilard accosted Fermi with another secrecy appeal, and Fermi was pissed off, had a temper tantrum; he thought it was absurd, like, come on, this is crazy. But Szilard persisted, I think through another guy, Pegram, and Fermi didn't publish it. And that was just in time, because Fermi not publishing meant the Nazis didn't figure out that graphite would work. They went down the path of heavy water, and that was the wrong path; it's a key reason why the German project didn't work out, why they were way behind. I think we face a similar situation: are we just going to instantly leak how we get past the data wall, what the next paradigm is, or are we not? And the reason this would matter is that being one year ahead would be a huge advantage.
As opposed to the world where you deploy AI over time and, ah, they're going to catch up anyway. I mean, I interviewed Richard Rhodes, the guy who wrote The Making of the Atomic Bomb, and one of the anecdotes he had was from when they realized America had the bomb; obviously, we dropped it on Japan. And Beria, the guy who ran the NKVD, a famously ruthless guy, just evil, goes to, I forget the name, the Soviet scientist running their version of the Manhattan Project, and says, Comrade, you will get us the American bomb. And the guy says, well, listen, their implosion device is actually not optimal, we should make it a different way. And Beria says, no, you will get us the American bomb, or your family will be camp dust. The thing that's relevant about that anecdote is that the Soviets would actually have had a better bomb if they hadn't copied the American design, at least initially. And this would suggest that often in history, not just with the Manhattan Project, there's this pattern of parallel invention where, because the tech tree implies that a certain thing is next, in this case self-play RL or whatever, people are just working on that and they're going to figure it out around the same time. There's not going to be that much gap in who gets it first. Wasn't it famously the case that a bunch of people invented something like the light bulb around the same time, and so forth? So is it just that, yeah, that might be true, but the one year or the six months or whatever, the two years, makes all the difference? I don't know if it'll be two years though. I actually think if we locked down the labs, we have much better scientists,
we're way ahead, it could be two years. But I think even six months to a year would make a huge difference. And this gets back to the intelligence explosion. A year might be the difference between a system that's roughly human-level and a system that is vastly superhuman; it might be like five OOMs. Even at the current pace: on the MATH benchmark, three years ago, these are really difficult high school competition math problems, we were at a few percent, couldn't solve anything, and now it's basically solved. And that was at the normal pace of progress; you didn't have a billion superintelligent automated researchers. So a year is a huge difference. And then particularly after superintelligence, once this is applied to lots of areas of R&D, once you get the industrial explosion with robots and so on, a year or a couple of years might be like decades' worth of technological progress. And again, it's like Gulf War I: 20 or 30 years of technological lead, totally decisive. So I think it really matters. The other reason it really matters is,
totally decisive. You know, I think it really matters. The other reason it really matters is, you know, suppose, suppose they steal the weight, suppose they steal the algorithms and, you know, they're close on our tails. Suppose we still pull out a head, right? We just kind of, we were a little bit faster, you know, we're three months ahead. I think the sort of like world in which we're really neck and neck, you know, you only have a three months lead are incredibly dangerous,
right? And we're in this like fever struggle. We're like, if they get ahead, they get to dominate, you know, sort of maybe they get a decisive advantage. They're about in clusters like crazy. They're willing to throw all caution to the wind. We have to keep up. There's some crazy new WMDs popping up. And then we're going to be in the situation where it's like, you know, crazy new military technology, crazy new WMDs, you know, like deterrence and mutually disturbed attraction,
like keeps changing, you know, every few weeks. And it's like, you know, completely unstable volatile situation. That is incredibly dangerous. So it's I think, I think, you know, both both from just the technologies are dangerous from the alignment point of view. You know, I think it might be really important during the intelligence explosion to have the sort of six month, you know, wiggle room to be like, look, we're going to like dedicate more compute to alignment during this period because we
have to get it right. We're feeling uneasy about how it's going. And so I think in some sense that like one of the most important inputs to whether we will kind of destroy ourselves or whether we will get through this just incredibly crazy period is whether we have that buffer. Why? So before we go further object level in this, I think it's very much worth noting that almost nobody, at least nobody I talk to, thinks about the geopolitical implications of AI. And I think I have some
object-level disagreements that we'll get into, or at least things I want to iron out; I may not disagree in the end. But the basic premise: obviously, if you keep scaling, and obviously if people realize this is where intelligence is headed, it's not just going to be the same old world of what model are we deploying tomorrow and what's the latest, like people on Twitter asking whether GPT-4 is going to change your expectations or whatever. You know, COVID is really interesting, because within a week or something of March 2020 hitting, it became clear to the world, presidents, CEOs, media, the average person, that there are other things happening in the world right now, but the main thing we as a world are dealing with right now is COVID. Soon it'll be the same with AGI. Yeah. And so this is the quiet period. You want to go on vacation, you want to, yeah, maybe now is the last time you can have some kids. My girlfriend sometimes complains, when I'm off doing work or whatever, that I'm not spending time with her; she's threatened to replace me with GPT-6 or whatever. And I'm like, GPT-6 will also be too busy with work. Yeah. Okay. Anyway, so what's the answer to the question: why aren't people talking about
the national security implications? I made this mistake with COVID. In February of 2020 I thought it was just going to sweep the world, all the hospitals would collapse, it would be crazy, and then it'd be over. A lot of people thought this at the beginning of COVID; they shut down their offices for a month or whatever. The thing I just really didn't price in was the size of the reaction: within weeks, Congress spent over 10% of GDP on COVID measures, the entire country was shut down. It was crazy. So I didn't price it in sufficiently with COVID. Why do people underrate it here? I think there's a way in which being in the trenches actually gives you a less clear picture of the trend lines. You don't have to zoom out that much, only a few years, but when you're in the trenches trying to get the next model to work, there's always something that's hard. For example, you might underrate algorithmic progress because, ah, things are hard right now, or the data wall, or whatever. But zoom out just a few years and actually try to count up how much algorithmic progress was made in the last few years, and it's enormous. But I also just don't think people think about this stuff. I think smart people really underrate espionage. Part of the security issue is that people don't realize how intense state-level espionage can be. There's this Israeli company that had software that could zero-click hack any iPhone: they just put in your number and it's a straight download of everything. The United States infiltrated an air-gapped atomic weapons program.
Wild. Are you able to talk about this stuff? Yeah, yeah. Intelligence agencies have stockpiles of zero-days. When things get really hot, they're able to send special forces to the data center or something. I mean, China does this: they threaten people's families. They're like, look, if you don't cooperate, if you don't give us the intel... There's a good book, along the lines of The Gulag Archipelago, called Inside the Aquarium, by a Soviet GRU defector; the GRU was military intelligence. Ilya recommended this book to me. And reading it, I was just shocked at the intensity of state-level espionage. The whole book is about how they go to these European countries and try to get all the technology, recruiting all these people to get the technology. Maybe one anecdote: so the spy, this eventual defector, is being trained, he goes to the GRU spy academy. To graduate from the spy academy, before you're sent abroad, you had to pass a test to show you could do this. And the test was that you had to recruit a Soviet scientist, in Moscow, and get them to give you information, like you would in a foreign country. But of course, for whomever you recruited, the penalty for giving away secret information was death. So to graduate from this GRU spy academy, you had to condemn a countryman to death. States do this stuff. I started reading the book because it's cited in the series. And I was actually wondering, the fact that you use this anecdote, and that it's a book recommended by Ilya, is this some sort of Easter egg? We'll leave that as an exercise for the reader. Okay, so the beatings will continue until morale improves. So suppose we live in the world in which these secrets are locked down.
But China still realizes that this progress is happening in America. In that world, especially if they realize... and I guess it's a very interesting question whether it will be locked down. It probably won't be. Okay, so we're probably going to live in the bad world. Yeah, and it's going to be really bad. Why are you so confident it won't be locked down? I'm not confident it won't be locked down, but right now it's just not happening. So suppose tomorrow the lab leaders get the message: how hard is it, what would they have to do? Do they get more security guards, do they air-gap everything, what do they do? So again, I think there are basically two reactions: one is "we're already secure," and the other is fatalism, "it's impossible." And I think the thing you need to do is stay ahead of the curve of how AGI-pilled the CCP is. So right now, you've got to be resistant to normal economic espionage. They're not. I probably wouldn't be talking about this stuff if the labs were, because I wouldn't want to wake the CCP up more, but they're not. This stuff is really trivial for them to do right now. So they're not resistant to that, but I think it would be possible for a private company to be resistant to it. Both of us have friends in the quantitative trading world, and those secrets are shaped kind of similarly. They've said, yeah, if I got on a call for an hour with somebody from a competitor firm, most of our alpha would be gone; it's that list of details of how you actually make it good. We won't worry about that for now.
Yeah, anyway. So all the alpha could be gone, but in fact their alpha persists, often for many years, even decades. So this doesn't seem to happen. And I think there's a lot you could do if you went from current startup security, where you basically just have to look through the window and you can see the slides, to something like good private-sector security: hedge funds, or the way Google treats customer data or whatever. That'd be good for right now. The issue is that the CCP will also get more AGI-pilled, and at some point we're going to face the full force of the Ministry of State Security. And again, we're talking about smart people underrating espionage and the insane capabilities of states. This stuff is wild. There are papers about how you can find out where you are on a video game map just from sounds. States can do a lot with electromagnetic emanations. At some point, you've got to be working from a SCIF, your cluster needs to be air-gapped and basically be a military base, you need intense security clearance procedures for employees, everything is monitored, you have security guards, you can't use outside dependencies unless they're intensely vetted, and all your hardware has to be intensely vetted. And I think if they actually face the full force of state-level espionage, I don't really think this is something private companies can do. Partly empirically: Microsoft recently had executives' emails hacked by Russian hackers and government emails they hosted hacked by state actors. But also, there's just a lot of stuff that only the people behind security clearances know, and only they deal with. So to actually resist the full force of espionage, you're going to need the government. Anyway, I think we could do it by always being ahead of the curve; I think we're just going to always be behind the curve, maybe unless we get the government project. Okay, so going back to the naive perspective: we're very much
coming at this from "there's going to be a race with the CCP, we must win." Listen, I understand bad people are in charge of the Chinese government, the CCP is bad, and everything. But just stepping back to a sort of galactic perspective: humanity is developing AGI. Do we want to come at this from the perspective of "we need to beat China"? To our superintelligent, Jupiter-brain descendants, China will be some distant memory, and so will America. So shouldn't the initial approach be to just go to them, like, listen, this is superintelligence, let's come at it from a cooperative perspective? Why immediately rush into it from a hawkish, competitive perspective? I mean, look, one thing I want to say is that a lot of the stuff I talk about in the series
is primarily descriptive. On the China stuff, yeah, in some ideal world it's all harmony and cooperation. But again, I think people wake up to AGI. The issue in particular on "can we make a deal, can we make an international treaty" really relates to the stability of international arms control regimes. We did very successful arms control on nuclear weapons in the '80s, and the reason it was successful is that the new equilibrium was stable. You go down from, whatever, 60,000 nukes to 10,000 nukes. When you have 10,000 nukes, breakout doesn't matter that much. Suppose the other guy now tries to make 20,000 nukes: who cares, it's still mutually assured destruction. Suppose a rogue state went from zero nukes to one nuke: who cares, we still have way more nukes than you. It's still not ideal, it's still destabilizing, but it would be very different if the arms control agreement had been zero nukes. Because if it had been zero nukes, then one rogue state makes one nuke and the whole thing is destabilized: breakout is very easy, your adversary state starts making nukes. So basically, when you're going to very low levels of arms, or when you're in a very dynamic technological situation, arms control is really tough because breakout is easy. There are some other stories about this from the 1920s and 1930s. All the European states had done this disarmament, and Germany did this crash program to build the Luftwaffe. That was able to massively destabilize things, because they were able to pretty easily build a modern air force, since the others didn't really have one. And that really destabilized things. So I think the issue with AGI and superintelligence is the explosiveness of it.
If you have an intelligence explosion, if you're able to go from AGI to superintelligence, and that superintelligence is decisive, say a year later, because you developed some crazy WMD, or because you have some super-hacking ability that lets you completely deactivate the enemy arsenal, then suppose you're trying to put in a brake, like, we're both going to cooperate and go slower on the cusp of AGI or whatever. There's just going to be such an enormous incentive to race ahead, to break out: we'll just do the intelligence explosion, and if we can get three months ahead, we win. I think that makes basically any arms control agreement, in a situation where it's close, very unstable. That's really interesting. This is very analogous to a debate I had with
Rhodes on the podcast, where he argued for nuclear disarmament: if some country tries to break out and starts developing nuclear weapons, the six months or whatever that you would get is enough to build international consensus and invade the country and prevent them from getting nukes. And I thought that was, let's say, a bit of a stretch. But on this one, maybe it's a bit easier, because you have AGI and you can monitor the other side's cluster or something. Data centers, you can see them from space, actually; you can see the energy draw they're getting. As you were saying, there are a lot of ways to get information from an environment if you're really dedicated. And also, unlike nukes, where you have the submarines, planes, bunkers, mountains, whatever, so many different places, a data center, your 100 gigawatt data center, we can blow that shit up if we're concerned, with some cruise missile or something. Yeah, it's very vulnerable to sabotage. I mean, that gets to the insane vulnerability of this period post-
superintelligence. Because suppose you've done the intelligence explosion: you have these vastly superhuman things on your cluster, but you haven't done the industrial explosion yet, you don't have your robots yet, you haven't covered the desert in robot factories yet. That is the crazy moment where, say, the United States is ahead and the CCP is somewhat behind, and there's actually an enormous incentive for a strike. Because if they can take out your data center, and they know you're about to have this commanding, decisive lead, they know if we can just take out this data center, we can stop it, they might get desperate. So I think we're going to get into a position that's actually pretty hard to defend early on. I think we're basically going to be in a position of protecting data centers with the threat of nuclear retaliation. Maybe that sounds kind of crazy. Nuclear deterrence for data centers. I mean, Berlin in the late '50s and early '60s: both Eisenhower and Kennedy multiple times made the threat of full-on nuclear war against the Soviets if they tried to encroach on West Berlin. It's sort of insane that that went well, but I think that's basically going to be the only option for the data centers. It's a terrible option. This whole scheme is terrible; being in this neck-and-neck race at that point is terrible. And also, I have some uncertainty about
how big the advantage will be and how easily it comes. I'm pretty confident that if you have superintelligence and you have two years, you have the robots, you're able to get that 30-year lead, then you're in this Gulf War I situation: you have your millions or billions of mosquito-sized drones that can just take it out. I think there's even a possibility you can get a decisive advantage earlier. There are these stories about colonization in the 1500s, where a few hundred Spaniards were able to topple the Aztec empire, and I think a couple of other empires as well, each of which had a few million people. And it was not a godlike technological advantage. It was some technological advantage, it was some amount of disease, and then it was cunning, strategic play. So I think there's a possibility that even early on, when you haven't gone through the full industrial explosion yet but you have superintelligence, you're able to manipulate the opposing generals, claim you're allying with them, and you have some crazy new bioweapons. Maybe there's even some way to pretty easily get a paradigm that deactivates enemy nukes. Anyway, I think this stuff could get pretty wild. Here's what I think we should do. I really don't want this volatile period, and so a deal with China would be nice, but it's going to be really tough if you're in this unstable equilibrium. I think basically we want to get in a position where it is clear that the United States, that a coalition of democratic allies, will
win; clear to the United States and clear to China. That will require having locked down the secrets; that will require having built the 100 gigawatt cluster in the United States, having done the natural gas, doing what's necessary. And then, when it is clear that the democratic coalition is well ahead, you go to China and you offer them a deal. China will know they're not going to win, and they're going to be very scared of what's going to happen. We're going to know we're going to win, but we're also very scared of what's going to happen, because we really want to avoid this kind of breakneck race right at the end, where things could really go awry. And so then we offer them a deal. I think there's an incentive to come to the table, and there's a more stable arrangement you can do, a sort of Atoms for Peace arrangement. We say: look, we're going to respect you. We're not going to use superintelligence against you. You can do what you want. You're going to get your slice of the galaxy. We're going to benefit-share with you. We're going to have some compute agreement where there's some ratio of compute you're allowed to have, and that's enforced with, like, opposing AIs or whatever. And we're just not going to do this volatile WMD arms race to the death. And so it's a new world order that's US-led, democratic-led, but that respects China and lets them do what they want. Okay, there's so much there. First, on the galaxies thing. I think it's just a funny anecdote, and I want to tell it. We were at an event.
We're respecting Chatham House rules here, so I'm not revealing anything about it. But we were talking to somebody, or Leopold was talking to somebody influential. Afterwards, that person asked the group: Leopold told me he's not going to spend any money on consumption until he's ready to buy galaxies. And the guy goes, I honestly don't know if he meant galaxies as in the Galaxy private plane, or the physical galaxies. And there was an actual debate. You'd gone away to the restroom, and there was an actual debate among very influential people about what you could have meant. And the people who knew you better were like, no, he means galaxies. The actual galaxies. I mean, there are two ways to buy the galaxies. One is, at some point post-superintelligence... But wait, I love this, so here's what happens: he's out there, I'm laughing at this, I'm not even joking, people were having this debate. And then Leopold comes back, and somebody's like, oh Leopold, we're having this debate about whether you meant you want to buy the galaxy or you want to buy the other thing. And Leopold assumes they must mean not the private plane but the actual galaxy, and asks: do you mean buy the property rights to the galaxy, or actually just send out the probes right now? Exactly.
Oh my god. All right. Back to China. There's a whole bunch of things I could ask about that plan: whether you can credibly promise they'll get some part of the galaxies, whether they even care about that. I mean, you have your AIs help you enforce stuff. Okay, sure, we'll leave that aside; that's a different rabbit hole. The thing I want to ask is this: the only way your plan is possible is if we lock it down. If we don't lock it down, we're in the feverish struggle, the greatest peril mankind will have ever seen. But given that, during this period, instead of just taking their chances, and they don't really understand how this AI governance scheme is going to work, are they just supposed to trust that they'll get their galaxies? The data centers can't be built underground, they have to be built above ground. Taiwan is right off their coast, and we need the chips from there. Why wouldn't they just invade? Listen, worst case scenario is the US wins superintelligence, which it's on track to do anyway. Wouldn't this instigate them to either invade Taiwan or blow up the data center in Arizona or something like that? Yeah. I mean, look, we talked about the data center one: you probably have to threaten nuclear retaliation to protect it. They might also just blow it up, and there are maybe ways they can do it without attribution.
Right? Like Stuxnet. Yeah. Look, this is part of, we'll talk about this later, but I think we need to be working on the Stuxnet for the Chinese project. But by the way, on Taiwan: I talk about AGI by 2027 or whatever, about the terrible twenties. In Taiwan-watcher circles, people often talk about the late 2020s as a particular period of risk for Taiwan, because of military modernization cycles, and basically extreme fiscal tightening on the military budget in the United States over the last decade or two has meant we're in this trough in the late '20s in our naval capacity, and that's around when China says they want to be ready. So there's already a sort of parallel timeline there. Yeah, look, it looks appealing to invade Taiwan. I mean, maybe not, because the chips get remotely cut off; it doesn't mean they get the chips, it just means the machines are deactivated. But look, imagine if during the Cold War all of the world's uranium deposits had been in Berlin. Berlin already almost caused nuclear war multiple times, so God help us all. Well, Groves had a plan after the war: the plan was that America would go around the world getting the rights to every single uranium deposit, because he didn't realize how much uranium there was in the world, and because this was the thing that seemed feasible; not realizing, of course, that there were huge deposits in the Soviet Union itself. Right. Okay. There were also a lot of East German
workers who kind of got screwed. Oh interesting. Got cancer. Okay so the framing we've been talking about that we've been assuming. Yeah and I'm not sure I buy yet is that United States. Yeah this is our leverage. This is our data center to try and as a competitor. Right now obviously that's not the way things are progressing. Private companies control these AI's they're deploying them. It's a market based thing. Why will it be the case that the it's like the United States it has
this leverage or is doing this thing versus China is doing this thing. Yeah I mean look on the on the project you know I mean there's sort of descriptive and prescriptive claims or normative positive claims. I think the main thing I'm trying to say it's you know you know look we're at these SF parties or whatever and I think people talk about AGI and they're always just talking about the private AI labs and I think I just really want to challenge that assumption it just seems
like seems pretty likely to me you know as we've talked about for reasons we've talked about that look like the national security state is going to get involved and you know I think there's a lot of ways this could look like right is it like nationalization is it a public private
partnership, is it a kind of defense-contractor-like relationship, is it a sort of government project that scoops up all the people? So there's a spectrum there, but I think people are just vastly underrating the chances of this more or less looking like a government project. And look, I
mean, look, if, you know, do you think, when we have, like, literal superintelligence on our cluster, right, and it's like, you know, you have a hundred billion, sorry, a billion, like, superintelligent
scientists, they can, like, hack everything, they can, like, Stuxnet the Chinese data centers, you know, they're starting to build the robo-armies, you know, you really think that'll be, like, a private company and the government will be like, oh my god, what is going on? You know, like, yeah. So
suppose there's no China suppose there's people like Iran North Korea who theoretically at some point we go to do superintelligence but they're not on our heels and they don't have the ability to be on our heels in that world are you advocating for the national project or do you prefer the private
path forward? Yeah, so, I mean, two responses to this. One is, I mean, you still have, like, Russia, you still have these other countries, you know, you've got to have Russia-proof security, right? You can't just have Russia steal all your stuff, and maybe their clusters aren't
going to be as big, but they're still going to be able to make the crazy bio weapons and, you know, the mosquito-sized drone swarms, and so on. And so I think the security component is just actually a pretty large component of the project, in the sense of, like,
I currently do not see another way where we don't kind of like instantly proliferate this to everybody and so yeah so I think it's sort of like you still have to deal with Russia you know Iran North Korea and you know like you know Saudi and Iran are going to be trying to get it
because they want to screw each other and you know Pakistan and India because they want to screw each other there's like this enormous destabilization still that set look I agree with you if you know if you know if you know by some somehow things are checking out differently I'm like you
know, if AGI had been in 2005, you know, the sort of unparalleled, you know, American hegemony, I think there would have been more scope for less government involvement. But again, as we were talking about earlier, I think that would have been sort of this very unique moment
in history and I think basically you know almost all other moments in history there would have been this sort of great power competitor so okay so let's get into this debate so I my position here is if you look at the people who are involved in the Manhattan project itself yeah many of them
regretted their participation, as you said. Now, we can infer from that that we should sort of start off with a cautious approach to the nationalized ASI project. Then you might say, well, listen: did they regret their participation because of the project or because of the technology
itself? I think people will regret it, but I think it's about the nature of the technology and it's not about the project. I think they also probably had a sense that different decisions would have been made if it wasn't some concerted effort that everybody had agreed to participate in,
that if it wasn't in the context of "we need to race to beat Germany and Japan" you might not develop it. So that's the technology part, but also, like, you might not actually have hit them with it. You know, it's like, the sort of destructive potential, the sort of, you know, military potential,
it's not because of the project, it is because of the technology, and that will unfold regardless. You know, I think this underrates the power of it: imagine you go through, like, the 20th century in, like, you know, a decade. Yes, great,
actually, so yeah, let's actually run that example. Suppose that for some reason the 20th century would be run through in one decade. Do you think, because of that, the technologies that happened through the 20th century shouldn't
have been privatized that it should have been a more sort of concerted government led project you know look there is a history of just dual use technologies right and so I think AI in some sense is going to be dual use in the same way and so there's going to be lots of civilian uses of it
right like nuclear energy it's like itself right there's like you know there's the government project to develop the military angle of it and then you know it was like you know then the government worked with private companies there's a sort of like real like flourishing of nuclear energy
until, you know, the environmentalists stopped it. Or planes, right, like Boeing. Actually, you know, the Manhattan Project wasn't the biggest defense R&D project during World War Two, it was the B-29 bomber, right, because they needed the bomber that had long enough range to
reach Japan, to destroy their cities. And then, you know, Boeing made that, Boeing made the B-47, made the B-52, you know, the plane the US military uses today, and then they used that technology later on to, you know, build the 707. And this sort of, but what does "later on"
mean in this context because in the other like I get what it means after a war to privatize but if you have the government has ASI let me just let me back up an explain my concern yeah so you have the only institution in our society which has a monopoly on violence yes um and then
we're going to give the give it some uh in a way that's not broadly deployed access to the ASI yeah the counterfactual and this maybe sound silly but yeah listen we're going to go through hired higher levels intelligence yeah private companies yeah we'll be required by regulation
to increase their security, yeah, but they'll still be private companies, and they'll deploy this, and they're going to release the AGI. Now, like, McDonald's and JP Morgan and some random startup are more effective organizations because they have a bunch of AGI workers, and it'll be sort of like
the industrial revolution in the sense that the benefits were widely diffused if you don't end up in a situation like that then the I mean even backing up like what is it retrying to why do we want to win against China we want to win against China because we don't want a top down authoritarian
system, yeah, to win. Yeah. Now, if the way to beat that is that the most important technology that humanity will have has to be controlled by a top-down government, like, what was the point? Why not, maybe, take our chances with privatization? That's the way we get to this
classic liberal market-based system we want for the ASI yeah all right so a lot to talk about here yeah um I think yeah maybe I'll start a bit about like actually looking at what the private world would look like and I think this is part of where the sort of there's no alternative comes from and then let's look like look at like what the government project looks like what checks and balances look like and so on all right private world I mean first of all okay so right like a lot of people
right now talk about open source, and I think there's this sort of misconception that AGI development is going to be, like, oh, it's going to be some beautiful decentralized thing, you know, some GitHub community of coders who get to collaborate on it. That's not how it's going to look, right? You know, it's the $100 billion, the trillion-dollar cluster; it's not going to be that many people that have it, or the algorithms. You know, right now open
source is kind of good because people just use the stuff that was published, and so basically, you know, the algorithms were published, or, you know, like Mistral, they just kind of leave DeepMind and, you
know, take all the secrets out there and they just kind of replicate it. That's not going to continue being the case, and so, you know, the sort of open source alternative... Also, people say stuff like, you know, 10^26 FLOPS will be in my phone, and no, it won't, you know,
it's like, Moore's Law is really slow. I mean, AI chips are getting better, but, you know, the $100 billion computer will not cost, like, $1,000 within your lifetime or whatever. So it's going to be, like, two or three, you know, big players in the private world.
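(For reference, a rough back-of-the-envelope sketch of the arithmetic behind that claim, in Python; the $100 billion cluster cost, $1,000 consumer budget, and the 1.5-3 year doubling times for price-performance are illustrative assumptions, not figures from the conversation.)

# Back-of-the-envelope: years until ~$100B of frontier compute costs ~$1,000,
# under assumed doubling times for price-performance.
import math

cluster_cost_usd = 100e9     # assumed frontier cluster cost
consumer_budget_usd = 1e3    # assumed "in my phone" budget
doublings_needed = math.log2(cluster_cost_usd / consumer_budget_usd)  # ~26.6

for years_per_doubling in (1.5, 2.0, 3.0):
    years = doublings_needed * years_per_doubling
    print(f"{years_per_doubling} years/doubling -> ~{years:.0f} years")
# ~40, ~53, and ~80 years respectively -- decades even under fast improvement.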
And so, look, a few things. First of all, you know, you talk about the sort of enormous power that superintelligence will have and the government will have. I think it's pretty plausible that the alternative world is that, like,
one AI company has that power, right? And basically, we're talking about lead, you know, it's like, what if, I don't know, OpenAI has a six-month lead? So then you're not talking, you're talking about basically, you know, the most powerful weapon ever, and, you know, you're
kind of making this like radical bet on like a private company CEO is the benevolent dictator no no no this is not necessarily like any other thing that's privatized we don't count on that being benevolent we just look to think of for example somebody who manufactures industrial fertilizer
yes, right. The person with this factory, if they went back to an ancient civilization, they could, like, blow up Rome. Yeah, they could probably blow up Washington DC. And I think in your series you talk about Tyler Cowen's phrase of muddling through. Yeah. And I think even with privatization,
people sort of underrate that there are actually a lot of private actors who have the ability to like there's a lot of people who control the water supply or whatever and we can count on cooperation and market-based incentives to basically keep a balance of power sure I get the things are
proceeding really fast yes but we like we have a lot of historic governance this is the thing that works best so look I mean I mean what do we do with nukes right the the way we keep the sort of nukes and check is not like you know a sort of beefed-up second amendment where like each state
has their own like little nuclear arsenal and like you know Dario and Sam of their own little nuclear arsenal no no it's like it's institutions it's constitutions it's laws it's it's courts and so so I don't actually I'm not sure that this you know I'm not sure that the sort of balance
of power analogy holds. In fact, you know, the government having the biggest guns, this was sort of an enormous civilizational achievement, right? Like the Landfrieden in the Holy Roman Empire, right? You know, if somebody from the town over committed a crime against you, you know,
you didn't kind of start a sort of a you know a big battle between the two towns no you take it to a court of the holy Roman Empire and they would decide and it's it's a big achievement now the the thing about you know the industrial fertilizer I think the key differences kind of speed and
offense-defense balance issues, right? So it's like the 20th century in, you know, ten years, a few years. That is an incredibly scary period, and it is incredibly scary, you know, because you're going through just this sort of enormous array of destructive technology and this sort of
enormous amount of, like, you know, basically military advancement. I mean, you would have gone from, you know, bayonets and horses to, like, tank armies and fighter jets in a couple of years, and then to, you know, nukes and ICBMs and more, just in a matter of years. And so it is sort of that speed that creates, I think, basically, the way I think about it is there's going to be this initial
just incredibly volatile and incredibly dangerous period and somehow we have to make it through that that's going to be incredibly challenging um that's where you need the kind of government project if you can make it through that then you kind of go to like you know now we can now you know the
situation has been stabilized, you know, we don't face this imminent national security threat. You know, it's like, yes, there were kind of WMDs that came along the way, but either we've managed to have a sort of stable offense-defense balance, right? Like, I think bio weapons initially
are a huge issue, right? Like, an attacker can just create, like, a thousand different synthetic, you know, viruses and spread them, and it's going to be really hard for you to mount a defense against each. But maybe at some point you figure out the kind of, you know, universal
defense against every possible virus and then you're in a stable situation again on the offense defense balance or you do the thing you know you do with planes where it's there's like you know the certain capabilities that the private sector isn't allowed to have and you've like figured out
what's going on restrict those and then you can kind of like let let you know you can you let this sort of civilian civilian uses so I'm skeptical of this because well there's no sorry I mean the other important thing is so I talked about this sort of you know maybe it's like it's it's a you
know it's you know it's one company with all this power and I think it's like I think it is unprecedented because it's like the industrial fertilizer guy cannot overthrow the US government I think he's quite plausible that like the AI company super intelligence can overthrow the
but the multiple AI companies right and I buy the one of them could be ahead so it's not obvious that'll be multiple I think it's again if there's like a six monthly maybe maybe there's two or three I agree but if there's two or three then what you have is just like the crazy race between these two
or three companies you know it's like you know whatever Demis and Sam they're just like I don't want to let the other one win and and they're both developing their nuclear arsenals and they're just like also like come on the government is not going to let these people you know are they going
to let, like, you know, is Dario going to be the one ordering the development of the, you know, super-hacking Stuxnet and deploying it against the Chinese data centers? The other issue, though, is it won't just be two or three. There'll be two
or three, and there'll be China and Russia and North Korea, because in the private lab world there's no way they'll have security that is good enough. I think we're also assuming that somehow, if you nationalize it, the security, especially in a world where
this stuff is priced in by the CCP, that now you've, like, got it nailed down, and I'm not sure why we would expect that to be the case. But on this, the one who does the stuff: if it's not Sam or Dario, who we don't want to trust to be benevolent dictators or whatever, now
we're just talking about whoever we're counting on there. If it's because you can cause a coup, the same capabilities are going to be true of the government project, right? And so the modal president in 2025 might be Donald Trump. He'll be the person having the capabilities that you don't trust
Sam or Dario to have, and why? Okay, I agree that, like, I'm worried if Sam or Dario have a one-year lead on ASI. In that world I'm, like, concerned about this being privatized. But in that exact same world I'm very concerned about Donald Trump having the
capability. And potentially, if we're living in a world where the takeoff is slower than you anticipate, in that world I very much want the private companies. So, like, in no part of this matrix is it obviously true that the government-led project is better than the private project.
Let's talk about the government project a little bit, and checks and balances. In some sense, I think my argument is a sort of Burkean argument, which is, like, American checks and balances have held for, you know, over 200 years and through crazy technological revolutions. You know, the US military could kill, like, every civilian in the United States. You could make that argument about the private-public
balance of power itself, it's held for hundreds of years, and then, like... But why has it held? Because the government has the biggest guns, and never before has a single CEO or a random nonprofit board had the ability to launch nukes. And so again, it's like, what is the track record of the government checks and balances versus the track record of the private company checks and balances? Well, the AI labs, you know, the first stress test went really badly, you know, that didn't
really work, you know. I mean, it's even worse in the sort of private company world, so it's both: it is not just the two private companies, it's also the CCP, and they just, like, instantly have all this stuff, and then, you know, they probably won't have good enough internal controls. So it's not just, like, the random CEO, but it's, you know, rogue employees that can use these superintelligences to do whatever they want. And this won't be true of
the government? Like, the rogue employees won't exist on the project? Well, the government actually, you know, has decades of experience and actually really cares about this stuff. I mean, they deal with nukes, they deal with really powerful technology, and, you know, this is the stuff that the national security state cares about. You know, again, let's talk about the government checks and balances a little bit. So, you know, what are
checks and balances in the government world? First of all, I think it's actually quite important that you have some amount of international coalition, and I talked about these sort of two tiers before. Basically, I think the inner tier is sort of modeled on the Quebec Agreement, right? This was Churchill and Roosevelt: they kind of agreed secretly, we're going to pool our efforts on nukes, but we're not going to use them against each other, and we're not going to use them against
anyone else without the other's consent. And I think, basically, look: bring in the UK, they have DeepMind; bring in the East Asian states who have the chip supply chain; bring in some more kind of NATO, close democratic allies, for, you know, talent and industrial resources. And so you have those checks and balances in terms
of, like, more international countries at the table. Somewhat separately, you then have the sort of second tier of coalitions, which is the sort of Atoms for Peace thing, where you go to a bunch of countries, including, like, the UAE, and you're like, look, we're going to basically, you
know there's a deal similar to like the NPT stuff where it's like you're not allowed to like do the crazy military stuff but we're going to share the civilian applications we're in fact going to help you and share the benefits and you know sort of kind of like this new sort of post
super intelligence world order. All right US checks and balances right so obviously Congress is going to have to be involved right appropriately and trillions of dollars and probably ideally you have Congress needs to kind of like confirm whoever's running this so you have Congress you have
like, different factions of the government, you have the courts. I expect the First Amendment to continue being really important, and maybe I think that sounds kind of crazy to people, but I actually think, again, these are institutions that will stand the test of time
in a really powerful way. You know, eventually, this is honestly why alignment is important: you program the AIs to follow the Constitution. It's like, why does the military work? Generals, you know, are not allowed to follow unlawful orders, are not allowed to follow unconstitutional orders. You have the same thing for the AIs. So what's wrong with this argument? Well, you say, listen, maybe you have a point in the world
where we have extremely fast takeoff, it's like one year from AGI to ASI. Yeah, and then you have the, like, years after ASI where you have this, like, extraordinary... Sure. I think maybe you have a point. Yeah, we don't know. You have these arguments, we'll get into the weeds on them, about why that's a more likely world, but, like, maybe that's not the world we live in. Yeah, and in the other world I'm, like, very much on the side of making sure that these things are
privately held. Now why? I mean, nationalization, when you nationalize, yeah, that's a one-way function, you can't go back. Why not wait until we have more evidence on which of those worlds we live in? Why I think, like, rushing on the nationalization might be a bad idea while we're not sure. Yeah, okay, let me address that first. I mean, I don't expect this to be nationalized tomorrow.
If anything, I expect it to be kind of like COVID, where it happens kind of too late. Like, ideally you nationalize it early enough to actually lock stuff down, but it'll probably be kind of chaotic, and you're going to be trying to do this crash program to lock stuff down, and it'll be kind of late. And it'll be kind of clear what's happening; we're not going to nationalize when it's not clear what's happening. On the whole "historically, institutions have held up well" point: first of all, they've
almost broken a bunch of times. This is like, some people say that we shouldn't be that concerned about nuclear war, where it's like, listen, we've had nukes for 80 years and we've been fine so far, so the risk must be low. And the answer to that is: no, actually, it is a really high risk, and the reason we've avoided it is, like, people have gone through a lot of effort to
make sure that this thing doesn't happen. I don't think that giving government ASI without knowing what that implies is going through the lot of effort and I think the base rate like you you can
talk about America. I think America is very exceptional not just in terms of dictatorship but in terms of every other country in history has had a complete drawdown of wealth because of war revolution something America is very unique in not having that and the historical base rate we're talking about great power competition I think that has a really big that's something we
haven't been thinking about the last 80 years but it's really big. Yeah dictatorship is also something that is just the default state of mankind and I think relying on institutions which in an ASI world like there's what's fundamentally right now if the government tried to overthrow there's a it's much harder if you don't have the ASI right like there's people who have a
an AR-15, and there are things that make it hard. They can crush the AR-15s. No, I think it'd actually be pretty hard. I mean, Afghanistan is pretty hard. It's a whole country. Yeah, I agree, I agree. But with the ASI? Yeah, I think it's just, like, easier if you have what you're talking about with institutions, with the constitution, there are legal restraints, there are courts, there are checks and
balances. The crazy bet is the bet on, like, the private company CEOs. The same thing, by the way, isn't the same thing true of nukes, where we have these institutional agreements about non-proliferation and whatever, and we're still very concerned about that being broken and
somebody getting nukes, and, like, you should stay up at night worrying about that? That's a very precarious situation, but ASI is going to be a really precarious situation as well, and given how precarious nukes are, we've done pretty well. And so what does privatization in this world even mean? I mean, I think the other thing is, like, what would happen after. I mean, the other thing, you know, because we're talking about whether the government project is good or not, and it's like, I
have very mixed feelings about this as well. Again I think my primary argument is like you know
if you're at the point where this thing has, like, vastly superhuman hacking capabilities; if you're at the point where this thing can develop, you know, bio weapons, increasingly bio weapons that are, like, targeted, you know, that can kill everybody but the Han Chinese, or that would wipe out entire countries; where you're talking about building robo-armies, you're talking about drone swarms that are, you know, again, the mosquito-
sized drones that could take out, you know, the United States — the national security state is going to be intimately involved with this. And, you know, the labs, whether, you know... And I think, again, a lot of what I think the government project looks like is basically a joint
venture between like you know the cloud providers between some of the labs and the government and so I think there is no world in which the government isn't intimately involved in this like crazy period the very least basically you know like the intelligence agencies need to be running
security for these labs, so they're already kind of, like, controlling everything, they're controlling access to everything. Then, probably, again, if we're in this really volatile international situation, a lot of the initial applications, and it sucks, it's
not what I want to use ASI for, will be, like, trying to somehow stabilize this crazy situation. Somehow we need to prevent, like, proliferation of some crazy new WMDs and, like, the undermining of mutually assured destruction, with, you know, North Korea and Russia and China. And so
I think you know I basically think your world you know I think there's much more spectrum than your acknowledging here and I think basically the world in which it's private labs is like extremely heavy government involvement and really what we're debating is like you know what form of government project but it is going to look much more like you know the national security state than anything it does look like like a startup as it is right now and I think the yeah look I think
something like that makes sense, yeah. If it's like the Manhattan Project, then I'm very worried, where it's like, this is part of the US military. Whereas if it's more like, listen, you've got to talk to Jake Sullivan before you, like, run the next training run... Like, Lockheed Martin Skunk Works is
part of the US military, it's like, they call the shots. Yeah, I don't think that's great, I think that's bad. I think it would be bad if that happened with ASI. And, like, what is the scenario, what is the alternative? What is the alternative? Okay, so it's, yeah, closer to my end
of the spectrum, where, yeah, you do have to talk to Jake Sullivan before you can launch the next training cluster, yeah, but there's many companies who are still going for it, yeah, and the government will be intimately involved in the security. Yeah, but then, like, three different companies are trying
to do the Stuxnet attack? Yeah, who is launching it? Okay, so who is deactivating the Chinese data centers? I think this is similar to the story you can tell about a lot of, like, literally the big tech right now. Yeah, I think Satya, if he wanted to, he probably, like,
could get his engineers, like, what are the zero days in Windows, and, like, well, how do we infiltrate the president's computer so that, like, we can shut it down. No, no, no. Like, right now, I'm saying Satya could do that, right? Because he knows, he knows.
They'd shut that down. What do you mean, the government wouldn't let them do that? Yeah, I think there's a story you can tell where, like, they could pull off a coup or whatever, but, like, I think there's, like, multiple companies. Okay, okay, I'm just saying, like, something closer to... So what's wrong with
the scenario where you the government is there's like multiple companies going for it yeah but the AI is still broadly deployed and alignment works in the sense that you can make sure that it's not you the system level prompt is like you can't help people make bio weapons or something but these
are still broadly deployed? So, I mean, I expect the AI to be broadly deployed, I mean, even if there's a government project. Yeah, I mean, look, I think, first of all, I think Meta is, you know, open sourcing their AIs, you know, that are two years behind or whatever. Yeah, super
valuable role they're gonna like you know and so there's gonna be some question of like either the offense defense balance is fine and so like even if they open sourced two year old AI is it's fine or it's like there's some restrictions on the most extreme dual use capabilities like you know
you don't let private company sell kind of crazy weapons and that's great and that will help with the diffusion and you know you know after the government project you know there's gonna be this initial tense period hopefully that's stabilized and then look yeah like Boeing they're gonna go out and they're gonna like make do all the flourishing civilian applications and you know like nuclear energy you know like all the civilian applications will have their day I think part of my argument here is
that and how does that proceed right because in the other world there's existing stocks of capital that are worth a lot of the clusters they'll be still be Google clusters and so Google because they got the contract from the government there'll be the ones that control the AI but like why why are
they trading with anybody else why is there a startup get like they'll be the same it'll be the same companies that would be doing it anyway but in this in this world they're just contracting with the government or like their DPA for all their compute goes to the government
but it's not like it's very natural, after you get the ASI and we're building the robot armies and building fusion reactors or whatever, that that's what we get from it, that we'll get to build robot armies, yeah, or, like, the fusion reactors and stuff. Because
it's not the situation we have today, because if you already have the robot armies and everything, the rest of society doesn't have some leverage where it makes sense for the government to, yeah, in the sense that there's, like, a lot of capital that the government wants and there's
other things. Like, why was Boeing privatized afterwards? The government has the biggest guns, and there's the way we regulate it: institutions, constitutions, legal restraints. Okay, so tell me what privatization should look like in the ASI world afterwards. Afterwards, like the Boeing example, right? It's like,
you have this government, or whoever gets it, like, what, Google, Microsoft, and they're selling it to us? Like, they already have a robot factory, it's like, why are they selling it to us? They already have theirs, they don't need, like, ours. This is some change in the ASI world, because we didn't get, like,
the ASI broadly deployed throughout the takeoff, so we don't have the robots, we don't have, like, the fusion reactors and whatever decades of advanced science that you're talking about. So, like, what are they trading with us for? Trading with whom? For everybody who
was not part of the project, they've got the technology that's decades ahead. Yeah, I mean, like, that's a whole other issue, like, how does economic distribution work or whatever, I don't know, that'll be broad. Yeah, I don't, basically, I'm kind of like, I don't see the alternative. The
alternative is you're, like, overturning a 500-year civilizational achievement of the Landfrieden, and you basically instantly leak the stuff to the CCP, and either you, like, barely scrape out ahead, but you're in this feverish struggle, you're, like, proliferating crazy WMDs, it's just, like,
enormously dangerous situation enormously dangerous on alignment because you're in this kind of like crazy race at the end and you don't have the ability to like take six months to get alignment right you know alternative is you know I'll try to it is like you aren't actually bundling your efforts
to kind of like win the race against the authoritarian powers um you know yeah and so you know I don't like it you know I wish I wish the thing we use the ASI for is to like you know cure the diseases and do all the good in the world but it is my prediction that sort of like by the in the end game what will be at stake will not just be kind of cool products but what will be at stake is like whether the liberal democracy survives like whether the CCP survives like what the
world order for the next century will be and when that is at stake forces will be activated that are sort of way beyond what we're talking about now and like you know in in this sort of like crazy race at the end like the sort of national security implications will be the most important you know
and sort of, like, you know, it's like, yeah, you know, nuclear energy had its day, but in the initial kind of period, when this technology was first discovered, you had to stabilize the situation, you had to get nukes, because you had to do it right, and then the civilian
applications had their day. I think of a closer analogy to what this is, because, nuclear, I agree nuclear energy is a thing that happens in Iran and it's, like, dual use in that way, but it's something that happened, like, literally a decade after nuclear weapons were developed. Yeah, because
everything with AI is, like, immediate, all the applications are unlocked at once, and it's closer to the literal... I mean, this is an analogy people have cited to me in the context of AI: assume your society had a hundred million more John von Neumanns. Yeah, yeah. And I don't think, like, if that was
literally what happened, yeah, tomorrow you just have a hundred million more of them, that the approach should have been, well, some of them will convert to ISIS and we need to, like, be really careful about that, and then, like, oh, you know, what if a bunch of them are born in China, and then we,
like, need to nationalize the John von Neumanns. So I'm, like, though, I think it'll be generally a good thing, and I'd be concerned about one power getting, like, all the John von Neumanns. I mean, I think the issue is this sort of, like, bottling this up in the sort of intensely short period of time, like this
enormous sort of like you know unfolding of technological progress of an industrial explosion and I think we do worry about the 100 million John winnigment it's like rise of China why are we worried about the rise of China because it's like a hundred billion people and they're able to do a lot of industry and do a lot of technology and but it's just like you know the rise of China times like you know 100 because not just 100 one billion people it's like a billion super intelligent crazy you know
crazy things so and and in like in a very short period let's talk practically yeah because if the goal is we need to beat China part of that is protecting I mean that's one of the goals right yeah I agree I agree well one of the goals is to be China and also like that manage this incredibly crazy
scary period yeah right so part of that is making sure we're not leaking our own next secrets to them part of that is a lot of cluster uh huh I mean building the trillion dollar cluster that's right yeah but like
your whole point, can Microsoft release corporate bonds that are... I think Microsoft can do the, like, hundreds of billions of dollars; I think the trillion-dollar cluster is closer to a national effort. I thought a year earlier, wait, wasn't it that American capital markets are deep and they're
pretty good? I mean, I think the trillion, I think it's possible it's private, it's possible. Basically, you know, by this point we have AGI that's dramatically accelerating productivity. I think the trillion-dollar cluster is going to be planned before the...
I think it's sort of like, you get the AGI on the, like, 10-gigawatt cluster, then maybe you have, like, one more year where you're kind of doing some final unhobbling to fully unlock it, then, you know, the intelligence explosion, and meanwhile the, like,
trillion dollar clusters almost finished and then you like and then you do your super intelligence and your trillion dollar cluster or you run it on your trillion dollar cluster and by the way you have not just your trillion dollar cluster but like you know hundreds of millions of GPUs and inference
clusters everywhere. And as a result, like, I think in this world private companies have the capital and can raise the capital. Do you then need government force to do it fast? Yeah, I was just about to ask, like, wouldn't it be, like, we know companies are on track to be
able to do this and will be trying to, if they're unhindered by, yeah, um, regulation or whatever. Well, that's part of what I'm saying. So if that's okay, and if it really matters that we beat China, yeah, there's going to be all kinds of practical difficulties, of, like, will the AI researchers
actually join the AI effort. If they do, yeah, there's going to be three different teams, at least, that are currently doing private pre-training at different companies. Yeah, now who decides, at some point you're going to have to, like, you know, the hyperparameters of the trillion-dollar
cluster run, and, like, who decides that? Just, like, merging extremely complicated research and development processes across very different organizations, yeah, and this is somehow supposed to speed up America against the Chinese? Like, remember when Brain and DeepMind merged, and it
was, like, a little messy? Yeah, it was pretty messy, and it was also the same company, and also much earlier on in the process. Pretty similar, right? Same company but different code bases, and, like, lots of different infrastructure and different teams, and, you know, it wasn't
the smoothest of all processes, but, you know, DeepMind is doing, I think, very well. I mean, look, you give the example of COVID, and the COVID example is like, listen, we woke up to it, maybe it was late, but then we deployed all this money. And the COVID response of the government was a cluster-
fuck overall, and, like, the only part of it that worked, I agree, Warp Speed, was, like, enabled by the government, it was literally just getting the permission that you can actually do it, and also making, like, the big advance market commitments or whatever, but, I agree, it was, like, fundamentally
like a private sector like effort yeah that was the only part of covid that worked i mean i think i think again i think the project will look closer to operation warp speed and it's not even i mean i think i think you'll have all the companies involved in the government project i'm not that
sold that merging is that difficult. You know, you select one code base, and, you know, you run pre-training on, like, GPUs with one code base, and then you do the sort of second RL step on, you know, the other code base with TPUs. Like, I think it's fine.
i mean to the topic of like well people sign up for it it went sign up for it today i think this would be kind of crazy to people um but also you know i mean this is part of the like secrets thing you know people gather at parties or whatever you know you know this um you know i don't think anyone
has really gotten up in front of these people and been like look you know the thing you're building is the most important thing for like the national security of the united states for like whether you know like you know the free world will have another century ahead of it like this is the thing
you're doing is really important like for your country for democracy um and um you know don't talk about the secrets and it's not just about you know a deep mind or whatever it's about it's about you know these really important things um and so you know i don't know like again we're talking
about the Manhattan Project, right? This stuff was really contentious initially, but, you know, at some point it was clear that the stuff was coming, it was clear that there was, like, a real sort of exigency on the military and national security front, and, you know, I think a
lot of people come around on the like whether it'll be competent i i agree i mean this is again where it's like a lot of the stuff is more like predictive in the sense i think this is like reasonably likely and i think not enough people are thinking about it you know like a lot of people
think about, like, AI lab politics or whatever, but, like, nobody has a plan for the project. You know, it's like, sure, they're pessimistic about it, and, like, we don't have a plan for it, and we need one very soon because AGI is upon us. Yeah, then, fuck, the only capable,
competent technical institutions capable of making AI right now are private companies, so they've got to play that leading role. It'll be a sort of partnership, basically. But the other thing is, you know, again, we talked about World War Two and, you know, American unpreparedness: the
run-up to World War Two was complete, you know, complete shambles, right? And so there is a sort of... you know, I think America has a very deep bench of just, like, incredibly competent managerial talent, you know, I think there's a lot of really dedicated people, and
you know, I think basically a sort of Operation Warp Speed public-private partnership, something like that, you know, is sort of what I imagine it would look like. Yeah, I mean, the recruiting-the-talent thing is an interesting question, because it's the same sort of thing where initially, for the Manhattan
Project, you had to convince people we've got to beat the Nazis and you've got to get on board. I think a lot of them maybe regretted how much they accelerated the bomb, and I think this is generally a thing of war, where... I mean, I think they're also wrong to regret it, but
yeah and why it was what's the reason for regretting it i think there's a world in which you don't have the the way in which nuclear weapons were developed after the war was pretty explosive because there was a precedent that you actually can use nuclear weapons then because of the race that was
set up you immediately go to the h bomb um i mean my view is again this is this is related to the view on a i and maybe some of our disagreement is like that was inevitable like of course like you know there's this you know world war and then obviously there was the you know cold war right
after of course like you know the military and the whole angle of this would be like you know pursued with ferocious intensity and i don't really think there's a world in which that doesn't happen which like ah we're all not going to build nukes and also just like nukes went really well
i think that could have gone terribly right you know like in you know again i mean this sort of i think this is like not physically possible with nukes this sort of pocket nukes for everybody but i think sort of like w md's that are sort of proliferated democratized and like all the countries
have it like the us leading on nukes and then sort of like building this new world order that was kind of us led or at least sort of like a few great powers and a non proliferation regime for nukes a partnership and a deal that's like look no military sort of application of nuclear technology
but we're going to help you with the civilian technology, we're going to enforce safety norms on this. The world where that worked, it worked, and it could have gone so much worse. Okay, so I'm assuming, I don't know if we can work with it, you know, the way they were... I mean, I say this a bit
in the piece, but it's like, actually, the A-bomb, you know, the A-bomb and Hiroshima, the shock of it, it was just like, you know, the firebombing. Yeah, firebombing. I think the thing that really changed the game was, like, the Super, you know, the H-bomb, and ICBMs,
and then I think that's really when it took it to, like, a whole new level. I think part of me thinks, when you say they will tell the people that, for the free world to survive, we need to pursue this project, it sounds similar to World War Two. Wait, let me lay this out.
So World War Two is a sad story, obviously, in the path that happened, but also, like, the victory is sad, in the sense that Britain goes in to protect Poland, yeah, and at the end the USSR, which is, yeah, you know, as your family knows, incredibly brutal, ends up occupying half of Europe,
right? And, like, part of the point of protecting the free world, that's why we've got to rush the AI, and, like, if we end up with the American AI Leviathan, then I think there's a world where we look back on this where it has the same sort of twisted irony that Britain going into World War Two had
about trying to protect Poland look i mean i think there's going to be a lot of unfortunate things that happen i'm just like i'm just hoping we make it through i mean to the to the point of it's like i really don't think the pitch will only be the sort of like you know the race i think the race will
be sort of a backdrop to it. I think the sort of general, like, look, it's important that democracies shape this technology, we can't just, like, leak this stuff to, you know, North Korea, that is going to be important. I think also for just safety, including alignment, including the sort of creation of new WMDs. I'm not currently sold there's another path, right? So it's like, if you just have the breakneck race, both internationally, because you're just instantly leaking all the stuff, including the
weights, and just, you know, the commercial race, you know, Demis and Dario and Sam, you know, they all want to be first, and it's incredibly rough for safety. And then you say, okay, safety regulation, but, you know, the safety regulation that people talk
about, it's like, oh, NIST, and they take years and they figure out what the expert consensus is, and then they... That's not what's going to happen. On the project as well, I think, I mean, I think the sort of alignment angle during the intelligence explosion, it's going to, you know, it's not a process of, like, years of bureaucracy where you can kind of write some standards. I think it looks much more like basically a war, and, like, you have a fog of war. It's like, look, is it safe to do
the next OOM, you know, and it's like, ah, you know, we're, like, three OOMs into the intelligence explosion, we don't really understand what's going on anymore, you know, like, a
bunch of our, like, generalization scaling curves are kind of looking not great, you know, some of our, like, automated AI researchers that are doing alignment are saying it's fine, but we don't quite trust them. In this test, you know, the AIs started doing naughty things, but then
we, like, hammered it out and then it was fine, and, like, ah, should we go ahead, should we take, you know, another six months? Also, by the way, you know, China just stole all the weights, and they're about to, like, deploy their own, where are we, like, what do we do?
I think it is this crazy situation, and, you know, basically you're relying much more on kind of, like, a sane chain of command than you are on sort of some, you know, deliberative regulatory scheme. I wish we were able to do the deliberative regulatory scheme. And this is the
thing about the private companies too. I don't think, you know, they'll claim they're going to do safety, but I think it's really rough when you're in the commercial race, and there's startups, you know, startups, startups, startups. You know, I think they're not fit to handle WMDs.
yeah i'm coming closer to your position uh but part of me also so with responsible scaling policies i was told that people who are advancing that that the way to think about this because they know i'm like a libertarian type of person yeah yeah and the way they approach me about it was
that fundamentally this is a way to protect market uh based development of a g i in the sense that if you didn't have this at all then you would have this sort of misuse and then you would have to be nationalized yeah and the rsp's are a way to make sure that through this deployment you can
still have a market based order but then there's these safeguards that make sure that things don't go off the rails yeah and i wonder if the if it seems like your story seems self-consistent but it does feel i know this was never your position so i'm not like i'm not looping you into this
but there's a sort of motte-and-bailey almost, in the sense of, well, look... Here's what I think about RSP-type stuff, or the sort of safety regulation that's happening now: I think they're important for helping us figure out what world we're in and, like, flashing the warning signs when we're
close, right? And so the story we've been telling is sort of, you know, what I think the modal version of this decade is, but, like, I think there's lots of ways it could be wrong. I really, you know, we should talk about the data wall more. I think there's, like, again, I think
there's a world where the stuff stagnates right there's a world where you don't have a g i um and so i basically you know the rsp thing is like preserving the optionality let's see how the stuff goes but like we need to be prepared like if if the red lights start flashing if we're getting the
automated AI researcher, then it's crunch time and then it's time to go. I think, okay, I can be on the same page on that, that we should have a very, very strong prior on pursuing this in a market-based way unless you're right about what the explosion looks like, the
intelligence explosion. And so, like, I don't move yet, but in that world where it, like, really does seem like AI research can be automated, and, yeah, that is the only bottleneck to getting to ASI... Okay, I think we can leave it about there. I can, yeah, I am somewhat of the way there. Okay, okay. Yeah, I
hope it goes well. It's going to be very stressful, and again, right now is the chill time. Enjoy your vacation while it lasts. It's funny to look out over, just, like, this is San Francisco, yeah, yeah, and OpenAI is right there, you know, the office is right there. I mean, again, this is kind of
like you know it's like you guys have this enormous power over how it's how it's going to go for the next couple years and that power is depreciating yeah um who's you guys like you know people at labs yeah yeah um but it is this sort of crazy world and you're talking about like you know i feel like
you talk about like oh maybe the nationalized is soon it's like you know almost nobody like really like feels it sees what's happening and it's it's i think this is the thing that i find stressful about all the stuff is like look maybe i'm wrong like if i'm right we're in this crazy situation
where there's like you know like a few hundred guys like paying attention um and um it's it's daunting i went to Washington a few months ago yeah and i was talking to some people who are doing AI policy stuff there yeah and i was asking them how likely the thing nationalization is yeah
and they said, oh, you know, it's really hard to nationalize stuff, it's been a long time since we've done it, there are these very specific procedural constraints on what kinds of things can be nationalized. And then I asked, well, what about ASI? Does that mean, because there are these
constraints, the Defense Production Act or whatever, that it won't be nationalized, that the Supreme Court would overturn it? And they're like, yeah, I guess that would be nationalized. That's the short summary of my take, or my view, on the project.
okay so uh but before we go further on the ASI stuff let's just back up okay you uh we begin the conversation i think people we confuse you graduate valedictorian of Columbia when you're in 19 uh huh so you got to college when you were 15 right and you're and you're you're in
Germany then you got to college at 15 yeah uh how the fuck did that happen i i really wanted out of Germany um i you know i went to kind of a you know German public school it was a it's not a good environment for me um and uh you know i mean it in what sense it's just like no peers
the other yeah look i mean it wasn't yeah it wasn't you know there's i mean there's also just a sense in which um sort of like there's this particular sort of German cultural sense i think in the u.s you know there's all these like amazing high schools and like sort of an appreciation of
excellence, and in Germany there's really this sort of tall poppy syndrome, right, where, you know, you're the curious kid in class and you want to learn more, and instead of the teacher being like, ah, that's great, they kind of resent you for it and they're, like, trying to
crush you and um i mean there's also like there's no kind of like elite universities for undergraduate which is kind of crazy um um so you know the sort of you know there's sort of like come basically like the meritocracy was kind of crushed in Germany at some point um also i mean there's a sort of
incredible sense of you know complacency um you know across the board i mean one of the things that always puzzles me is like you know even just going to a u.s. college was just kind of like radical act and like you know it doesn't seem radical to anyone here because it's like ah this is obviously the thing you do and you can go to Columbia you go to Columbia but it's you know it's very unusual and it's it's it's wild to me because it's like you know this is where stuff is happening you can get
so much of a better education and you know like america's were you know it's where where where where all the stuff is and um people don't do it and and so um yeah anyway so i you know i know i skipped a few grades and and uh you know i think um at the time it seemed very normal to me to kind of like
go to college in 15th kind of america i think um you know now one of my sisters is now like turning 15 you know and so then i you know and i look at her and i'm like now i understand how my mother and as you get to call you're like presumably the only 15 year old yeah yeah as it was just
like normal for you to be a 15 year old like what was the initial years like normal at the time you know i did yeah so yeah it's like now i understand why my mother is worried and you know i think you know i worked i worked on my parents for a while you know eventually i was you know
I persuaded them. But yeah, it felt very normal at the time, and it was great. It was also great because I actually really liked college. In some sense it came at the right time for me. For example, I really appreciated the liberal arts education, the core curriculum, reading core works of political philosophy and literature. And you did what, econ? My majors were math-statistics and economics. But Columbia has a pretty heavy core curriculum, a liberal arts education, and honestly, I shouldn't have done all the majors. The best courses were the ones where there's some amazing professor and it's some history class, and that's
honestly the thing I would recommend people spend their time on in college. Was there one professor or class that stood out that way? There's a class by Richard Betts on war, peace, and strategy. Adam Tooze, obviously, is fantastic, and has written very riveting books. Yeah, you should have him on the podcast, by the way. I'll try, man, but you've got to get him on the pod. Yeah, it'd be so good. Okay, so then, a couple of years in. We were talking to Tyler Cowen recently, and he said that the way he first encountered you was that you wrote this paper on economic growth and existential risk, and he said, when I read it, I couldn't believe a 17-year-old had written it. If this had been an MIT dissertation, I'd have been impressed. So how did you go from... you were, what, a junior then, and you're writing pretty novel economics papers. Why did you get interested in this kind of thing, and what was the process? I don't know. I just get interested in things. In some sense,
it feels very natural to me. I get excited about a thing, I read about it, I immerse myself. I think I can absorb information very quickly and understand it. On the paper, one thing, at least for the way I work: I feel like moments of peak productivity matter much more than average productivity. There are some jobs, you know, like CEO or something, where average productivity really matters. But I often feel like I have periods, a couple of months, where everything is really clicking, and the other times I'm sort of computing stuff in the background. And at some point, you know, writing the series was also kind of like that. You write it and it's really flowing, and that's what ends up mattering. I think even for CEOs it might be the case that peak productivity is very important. One of our friends in a group chat has pointed out how many famous CEOs and founders have been bipolar or manic, which is very much the peak thing: the call option on your productivity is the most important thing, and you get it by just increasing the volatility through being bipolar. Okay, so that's interesting.
And so you got interested in economics. First of all, why economics? You could have read about anything at that point. If you'd wanted, you kind of got an early start, and, all right, you wasted all these years on econ. There's an alternate world where you're on the superalignment team at 17 instead of 21 or whatever. I mean, in some sense, I'm still doing economics, right? What is it but straight lines on a graph, figuring out what the trends are, thinking about the feedback loops, the equilibria, arms control dynamics? It's a way of thinking that I find very useful. And, you know, Dario and Ilya seeing scaling early, in some sense that is a very economic way of thinking. Also the sort of empirical-physics way of thinking; a lot of them
are physicists. I think economists usually can't code well enough, and that's their issue. But it's that sort of way of thinking. The other thing is, I thought a lot of the core ideas of economics were just beautiful. And in some sense, I feel like I was a little duped, because actually econ academia is kind of decadent now. For example, the paper I wrote: it's a long paper, 100 pages of math or whatever, but the core takeaway, I can give the core intuition for it in like 30 seconds and it makes sense. You don't actually need the math. I think the best pieces of economics are like that. You do the work, but you do the work to uncover insights that weren't obvious to you before. Once you've done the work, some mechanism falls out of it that makes crisp, intuitive sense and explains some facts about the world that you can then use in arguments. And I think
a lot of Econ 101 is like this, and it's great. A lot of econ in the 50s and 60s was like this. And Chad Jones papers are often like this; I really like Chad Jones papers for that reason. Now, why did I ultimately not pursue econ academia? A number of reasons. One of them was Tyler Cowen. He kind of took me aside and was like, look, I think you're one of the top young economists I've ever met, but you should probably not go to grad school. Oh, interesting. Yeah, I didn't realize that. Well, yeah. And it was good, because he kind of introduced me to, I don't know, the Twitter weirdos, and I think the takeaway from that was, you know, go west one more time. Wait, Tyler introduced you to the Twitter weirdos? A little bit, yeah. Or just kind of brought you out of the world of the 60-year-old economists and onto Twitter. Yeah. I had gone from Germany, completely on the periphery, to a U.S. elite institution, and gotten some vibe of the sort of meritocratic U.S. society. And then there was a trajectory of being like, look, if I want to find the true American spirit, I've got to come out here. But the other reason I didn't become an economist was that econ academia has become a bit decadent. Maybe it's just ideas getting harder to find, and the beautiful, simple things have been discovered.
But what are econ papers these days? It's like 200 pages of empirical analysis on what happened to educational outcomes when, you know, Wisconsin bought 100,000 more textbooks. And I'm really happy that work happened, I think it's important work, but it is not uncovering these sort of fundamental insights and mechanisms in society. Or even the theory work is kind of like: here's a really complicated model, and the model spits out, if the Fed does X, then Y happens. You have no idea why that happened, because it's a gazillion parameters, they're all calibrated in some way, it's some computer simulation, and you have no idea about the validity. So I think the most important insights are the ones where you have to do a lot of work to get them, but then there's this crisp intuition at the end. Yeah, the P versus NP of it, sure. That's really interesting. So just going back to your time in college. You say that peak productivity kind of explains this paper and things like it. But being valedictorian, getting straight A's or whatever, is very much an average-productivity phenomenon. Right, so there's one award for the highest GPA, which I won. But the valedictorian is, like, among the people with the highest GPA, and then selected by faculty. Okay, so it's not just peak productivity. It's just that I generally love this stuff. I was curious, I thought it was really interesting, I loved learning about it, and it made sense to me. And, you know,
it was very natural. And so, you know, I think one of my faults is I'm not that good at eating glass or whatever; there are some people who are very good at that. The moments of peak productivity come when I'm just really excited and engaged and love it. And if you take the courses you love, that's what you get in college. Yeah, it's the Bruce Banner line in The Avengers: "That's my secret, I'm always angry." I'm always excited, I'm always curious. So it's interesting, by the way: when you were in college, I was also in college. Despite being a year younger than me, I think you were ahead of me in college, maybe two years ahead. And we met around this time.
Yeah, we also met, I think, through the Tyler Cowen universe. And it's insane how small the world is. Did I reach out to you? I must have. Yeah, when I had a couple of videos that had a couple hundred views or something. It's a small world. I mean, this is the crazy thing about the AI world, right? It's the same few people at the same parties, and they're the ones running the models at DeepMind and OpenAI and Anthropic. And some other friends of ours, who are now later in their careers and very successful, have mentioned that they had actually met all the people who are also very successful in Silicon Valley now back when they were in their 20s, or early 20s. I mean, look, why is it a small world? I think one of the things is some amount of, you know, some sort
of agency. And in a funny way, this is a thing I took away from the Germany experience, where, look, it was crushing, I really didn't like it. And it was such an unusual move to skip grades, and such an unusual move to come to the United States. A lot of the things I did were kind of unusual moves. And there's some amount of just trying to do it, and then it was fine, and it worked. That kind of reinforced that you don't just have to conform to whatever the expected path is. You can just try to do the thing that seems right to you, and, you know, most people can be wrong about things like that. And I think that was a valuable early experience, sort of formative.
Okay. So after college, what did you do? I did econ research for a little bit, you know, Oxford and stuff, and then I worked at Future Fund. Okay, so tell me about it. Yeah. Future Fund was a foundation funded by Sam Bankman-Fried. I mean, we were our own thing; we were based in the Bay. At the time, this was in early 2022, it was this incredibly exciting opportunity, right? It was basically a startup foundation, which doesn't come along that often, that we thought would be able to give away billions of dollars, thought would be able to remake how philanthropy is done from first principles, thought would be able to have this great impact. The cause areas we focused on were biosecurity, AI, and finding exceptional talent and putting it to work on hard problems. And a lot of the stuff we did I was really excited about. You know, academics whose grants would usually take six months would send us emails like, ah, this is great, this is so quick and straightforward. In general, I've often found that with a little bit of encouragement, a little bit of empowerment, removing excuses, making the process easy, people do great things. I think for context, for people who might not realize: not only were you guys planning on deploying billions of dollars, it was a team of four people. Yeah. So you, at 18, are on a team of four people
that is in charge of deploying billions of dollars. Yeah. I mean, that was sort of the heyday, right? And then obviously, in November of '22, it was revealed that Sam was this giant fraud, and from one day to the next the whole thing collapsed. It was just really tough. I mean, obviously it was devastating. Devastating for the people who had their money in FTX, and, closer to home, for all these grantees. We wanted to help them, and we thought they were doing amazing projects, and instead of helping them we ended up saddling them with a giant problem. Personally, it was a startup, right? I'd worked 70-hour weeks every week for basically a year to build this up. We were a tiny team. And then from one day to the next, it was all gone, and not just gone, it was associated with this giant fraud. So that was incredibly tough. Were there any signs early on about SBF? Obviously, I didn't know he was a fraud; if I had,
you know, I would have never worked there. And we were a separate thing; we weren't working with the business. But I do think there were some takeaways for me. One takeaway was, I had this tendency, and I think people in general have this tendency, to give successful CEOs a pass on their behavior because, well, they're successful CEOs, that's how they are, that's just successful-CEO stuff. And I didn't know Sam Bankman-Fried was a fraud, but I knew SBF and I knew he was extremely risk-taking. I knew he was narcissistic, that he didn't tolerate disagreement well. By the end, he and I just didn't get along well, and I think the reason was that there were some biosecurity grants he really liked because they were kind of cool and flashy, and at some point I'd run the numbers and they didn't really seem that cost-effective, and I pointed that out, and he was pretty unhappy about it. So I knew his character. And one takeaway for me was that it's really worth paying attention to people's character, including people you work for and successful CEOs. That can save you a lot of pain down the line. Okay. So after that, FTX implodes and you're out. And then you went to OpenAI; the superalignment team had just started, and I think you were part of the initial team. So what was the original idea? What was compelling about it for you to join? Yeah, totally. So what was the goal of the superalignment
team? The alignment teams at OpenAI and other labs had, several years ago, done the basic research and developed RLHF, reinforcement learning from human feedback. And that ended up being a really successful technique for controlling the current generation of AI models. What we were trying to do was basically be the basic research bet to figure out what the successor to RLHF is. And the reason we need that is that RLHF probably won't scale to superhuman systems. RLHF relies on human raters who give a thumbs up or thumbs down: the model said something, looks fine, looks good to me. At some point, the superhuman model, the superintelligence, is writing a million lines of crazy complex code, and you don't know at all what's going on anymore. So how do you steer and control these systems? How do you add side constraints? The reason I joined was that I thought this was an important problem, and I thought it was a really solvable problem. I still think, even more so now, that there's a lot of really promising ML research on alignment, on aligning superhuman systems.
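As a toy illustration of the RLHF idea described here (this is my own sketch, not anything from OpenAI's stack: the three-number "response features" and the Bradley-Terry-style fit below stand in for a real neural reward model trained on rater comparisons, which would then be used to steer the policy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: each candidate response is summarized by 3 numeric features.
responses = rng.normal(size=(40, 3))
hidden_pref = np.array([1.0, -0.5, 0.2])   # what raters "actually" like

# Simulated raters: thumbs-up for whichever response in a pair scores higher.
pairs = [(i, j) for i in range(40) for j in range(i + 1, 40)]
labels = np.array([float(responses[i] @ hidden_pref > responses[j] @ hidden_pref)
                   for i, j in pairs])

# Fit a Bradley-Terry-style reward model (logistic regression on feature diffs).
w = np.zeros(3)
lr = 0.5
for _ in range(500):
    grad = np.zeros(3)
    for (i, j), y in zip(pairs, labels):
        diff = responses[i] - responses[j]
        p = 1.0 / (1.0 + np.exp(-(w @ diff)))   # P(rater prefers i over j)
        grad += (y - p) * diff
    w += lr * grad / len(pairs)

# The learned reward direction recovers the raters' preferences (up to scale);
# a policy would then be trained to produce responses this reward scores highly.
print(w / np.linalg.norm(w))
```

The limitation Leopold points to is exactly the labels: once outputs are too complex for a human rater to judge, this thumbs-up/thumbs-down signal stops being meaningful.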
And maybe we should talk about that a bit more later. But it was so solvable, you solved it in a year? Oh yeah, it's over. Look, OpenAI wanted to do this really ambitious effort on alignment and was backing it, and I liked a lot of the people there, so I was really excited. There are a lot of people on alignment who kind of make hay about it, and I appreciate people highlighting the importance of the problem, but I was really into: let's just try to solve it, let's do the ambitious effort, let's do the Operation Warp Speed for solving alignment. And it seemed like an amazing opportunity to do so. Mm-hmm. Okay. And now basically the team doesn't exist. I think the head of it has left.
Yeah, both of the people who headed it up have left. And Ilya, that's been the news of the last week. Mm-hmm. What happened? Why did the thing break down? I think OpenAI sort of decided to take things in a somewhat different direction. Meaning what? That superalignment isn't the best way to frame the thing? No, I mean, look, obviously after the November board events there were personnel changes. I think Ilya leaving was just incredibly tragic for OpenAI. And I think some amount of reprioritization. I mean, there's been some reporting on the superalignment compute commitment. There was this 20% compute commitment that was part of how a lot of people were recruited: we're going to do this ambitious effort on alignment. And, you know, some amount of not keeping that and deciding to go in a different direction.
Mm-hmm. Okay. So now Jan and Ilya have left, so the team itself has dissolved. But you were sort of the first person who left, or was forced to leave. The Information reported that you were fired for leaking. What happened? Is that accurate? Yeah, look, why don't I tell you what they claim I leaked, and you can tell me what you think. So OpenAI did claim to employees that I was fired for leaking, and I and others have pushed them to say what the leak was. Here's their response in full: sometime last year, I had written a brainstorming document on preparedness, on the safety and security measures we'd need in the future on the path to AGI, and I shared that with three external researchers for feedback. That's it. That's the leak. For context, it was totally normal at OpenAI at the time to share safety ideas with external researchers for feedback; it happened all the time. The doc was sort of my ideas. Before I shared it, I reviewed it for anything sensitive. The internal version had a reference to a future cluster, but I redacted that for the external copy. There's a link in there to some internal slides of mine; that was a dead link for the external people I shared it with, so the slides weren't shared with them. Obviously I pressed them to tell me what the confidential information in this document was, and what they came back with was a line in the doc about planning for AGI by 2027/28 and setting timelines for preparedness. You know, I wrote the doc a couple of months after the superalignment announcement, which put out this sort of four-year planning horizon. I didn't think that planning horizon was sensitive; it's the sort of thing Sam says publicly all the time, and I think Jan said something similar just a couple of weeks ago. Anyway, so that's it.
If the cause was leaking, that seems pretty thin. Was there anything else to it? Yeah. I mean, that was the leaking claim. Let me say a bit more about what happened. One thing was, last year I had written an internal memo about OpenAI security, which I thought was egregiously insufficient; I thought it wasn't sufficient to protect against the theft of model weights or key algorithmic secrets by foreign actors. So I wrote this memo and shared it with a few colleagues and a couple of members of leadership, who mostly said it was helpful. But then a couple of weeks later a major security incident occurred, and that prompted me to share the memo with a couple of members of the board. After I did that, days later, it was made very clear to me that leadership was very unhappy that I had shared this memo with the board; apparently the board had hassled leadership about security. And then I got an official HR warning for the memo, for sharing it with the board. The HR person told me it was racist to worry about CCP espionage, and they said it was unconstructive. And look, I think I probably wasn't at my most diplomatic; I definitely could have been more politically savvy. But I thought it was a really, really important issue, and the security incident had me really worried. Anyway, the reason I bring this up is that when I was fired, it was made very explicit that the security memo was a major reason for my being fired. I think it was something like: the reason this is a firing and not a warning
is because of the security memo, the sharing of it with the board, the warning I'd gotten for it. Anyway, what might also be helpful context is the sort of questions they asked me when they fired me. This was a bit over a month ago. I was pulled aside for a chat with a lawyer that quickly turned very adversarial. The questions were all about my views on AI progress, on AGI, on the level of security appropriate for AGI, on whether government should be involved in AGI, on whether I and superalignment were loyal to the company, on what I was up to during the OpenAI board events, things like that. Then they chatted to a couple of my colleagues, and then they came back and told me I was fired. They'd gone through all of my digital artifacts from my time at OpenAI, messages, docs, and that's when they found the "leak." So anyway, the main claim they made was this leaking allegation; that's what they told employees. Then the security memo. And there were a couple of other
allegations they threw in. One thing they said was that I was unforthcoming during the investigation, because I didn't initially remember who I'd shared the doc with, the preparedness brainstorming doc, only that I had spoken to some external researchers about these ideas. And look, the doc was over six months old, I'd spent a day on it, it was a Google Doc I shared from my OpenAI email. It wasn't a screenshot or anything I was trying to hide. It simply didn't stick in my memory because it was such a non-issue. Then they also claimed that I was engaging on policy in a way they didn't like. What they cited there was that I had spoken to a couple of external researchers, somebody at a think tank, about my view that AGI would eventually become a government project, as we discussed. In fact, I was speaking to lots of people in the field about that at the time; I thought it was a really important thing to think about. Anyway, they found a DM I'd written to a friendly colleague five or six months earlier where I relayed this, and they cited that. I had thought it was well within OpenAI norms to talk about high-level issues on the future of AGI with external people in the field. So that's what they alleged; that's what happened. I've spoken to a few dozen former colleagues about this since, and the universal reaction is kind of like, that's insane. I was surprised as well, you know,
I had been promoted just a few months before. I think Ilya's comment for the promotion case at the time was something like, Leopold's amazing, we're lucky to have him. But look, the thing I understand, and that in some sense is reasonable, is that I ruffled some feathers, and I was probably kind of annoying at times. Like the security stuff: I kind of repeatedly raised that, and maybe not always in the most diplomatic way. You know, I didn't sign the employee letter during the board events, despite pressure to do so. You were one of like eight people or something. Yeah, I guess the two most senior people who didn't sign were Andrej and, yeah. And on the letter, by the way: by Monday morning, when that letter was going around, I think it probably was appropriate for the board to resign, because they'd lost too much credibility and trust with the employees. But I thought the letter had a bunch of issues. One of them was that it just didn't call for an independent board, which is sort of the basics of corporate governance. Anyway, there were other things. In other discussions, I'd pressed leadership for OpenAI
to abide by its public commitments. I raised a bunch of tough questions about whether it was consistent with the OpenAI mission, and consistent with the national interest, to partner with authoritarian dictatorships to build the core infrastructure for AGI. So look, it's a free country, right? That's what I love about this country; we talked about it. They have no obligation to keep me on staff. And in some sense I think it would have been perfectly reasonable for them to come to me and say, look, we're taking the company in a different direction, we disagree with your point of view, we don't trust you enough to toe the company line anymore, thank you so much for your work at OpenAI, but I think it's time to part ways. I think that would have made sense. We did start materially diverging on views on important issues. I'd come in very excited and aligned with OpenAI, but that changed over time. And look, I think there would have been a very amicable way to part ways, and I think it's a bit of a shame that this is the way it went down. All that being said, I really want to emphasize: there are just a lot of really incredible people at OpenAI, and it was an incredible privilege to work with them. Overall, I'm just extremely grateful for my time there. When you left — there's now been reporting about an NDA that former employees have to sign in order to keep their vested equity. Did you sign such an NDA? No. Well, my situation was a little
different, in that I was basically right before my cliff. But then they still offered me the equity, and I didn't want to sign a non-disparagement agreement. You know, freedom is priceless. And how much was the equity? It's like close to a million dollars. Mm hmm. So it was definitely a thing you and others were aware of, that this is a choice OpenAI is explicitly offering you. Yeah. And presumably the people on OpenAI staff knew they were being offered equity, but they had signed this NDA that has these conditions that you can't, for example, give the kinds of statements about your thoughts on AGI and OpenAI that you're giving on this podcast right now. Like, I don't know what the whole situation is. I certainly think conditioning vested equity on the NDA is pretty rough. It might be a somewhat different situation if it's a separate agreement. Right. But an OpenAI employee who had signed it presumably could not give the podcast interview you're giving today. Quite possibly not. Yeah, I don't know. Okay. So analyzing the situation here: the board thing is really tough, because if you were trying to defend them, you would
say, well, listen, you were just kind of going outside the regular chain of command. And maybe there's a point there. Although the way the person from HR framed it, that you're supposed to have an adversarial relationship with the board, where giving the board information relevant to whether OpenAI is fulfilling its mission is treated as part of the leak, as if the board isn't the body that's supposed to ensure OpenAI is following its mission, that seems pretty rough. I mean, to be clear, the leak allegation was just that document I shared for feedback. The security memo is a separate thing, and they said I wouldn't have been fired if not for it. They said you wouldn't have been fired, that the reason this is a firing and not a warning is because of the warning you had gotten for the security memo. Okay. Before you left, the incidents with the board
happened, where Sam was fired and then rehired as CEO, and now he's on the board. Now Ilya and Jan, who were the heads of the superalignment team, and Ilya, who is a co-founder of OpenAI and obviously the most significant member in terms of research stature, have left. It seems like, especially with regard to the superalignment stuff, and just generally at OpenAI, a lot of this personnel drama has happened over the last few months. What's going on? Yeah, there's a lot of drama. So why is there so much drama? You know, I think there would be a lot less drama if all OpenAI claimed to be doing was building ChatGPT, or building business software. I think where a lot of the drama comes from is that OpenAI really believes they're building AGI, right? It's not just a claim they make for marketing purposes. There's this report that Sam is raising $7 trillion for chips, and that stuff only makes sense if you really believe in AGI. And I think what gets people sometimes is the cognitive dissonance between really believing in AGI and then not taking some of the other implications seriously. It's going to be an incredibly powerful technology, both for good and for bad, and that implicates really important issues, like the national security
issues we spoke about. Like, are you protecting the secrets from the CCP? Does America control the core AI infrastructure, or does a Middle Eastern dictator control the core AI infrastructure? And then, I think the thing that really gets people is the tendency to make commitments, to say they take these issues really seriously and make big commitments on them, but then frequently not follow through. So, again, as mentioned, there's this commitment around superalignment compute, 20% of compute for this long-term safety research effort. You and I could have a totally reasonable debate about what the appropriate level of compute for superalignment is, but that's not really the issue. The issue is that this commitment was made, it was used to recruit people, it was very public, and it was made precisely because of a recognition that there would always be something more urgent than a long-term safety research effort, like some new product or whatever. And then, in fact, they just really didn't keep the commitment: there was always something more urgent than long-term safety research. I think another example of this is that when I raised these issues about security, they would tell me security is our number one priority. But then, invariably, when it came time to invest serious resources or make trade-offs, to take some pretty basic measures, security would not be prioritized. So yeah, I think it's the cognitive dissonance and the unreliability that cause a bunch of the drama. So let's zoom out and talk about a big part of the story, and also a big motivation for the way you think things must proceed with regard to geopolitics and everything, which is that once you have AGI, pretty soon after you proceed to ASI, superintelligence,
because you have these AGIs which can function as researchers into further AI progress. And within a matter of years, maybe less, you go to something that is like superintelligence. And from there, according to your story, you can do all this research and development in robotics, and pocket nukes and whatever other crazy shit. But at a high level, I'm skeptical of this story for many reasons. At a high level, it's not clear to me that this input-output model of research is how things actually happen in research. We can look economy-wide, right? Patrick Collison and others have made this point that, compared to 100 years ago, we have 100x more researchers in the world, and it's not like progress is happening 100x faster. So it's clearly not the case that you can just pump more population into research and get proportionally more research output on the other end. I don't know why it would be different when the researchers
themselves are AIs. Okay, great. So this is getting into some good stuff. This is a class of disagreement I have with Patrick and others. So, obviously, inputs matter. The United States produces a lot more scientific and technological progress than, you know, Liechtenstein or Switzerland. And even if I made Patrick Collison dictator of Liechtenstein or Switzerland, and Patrick Collison were able to implement his utopia of ideal institutions, keeping the talent pool fixed — he's not able to do some crazy high-skilled immigration thing or some crazy genetic breeding scheme or whatever he wants to do — keeping the talent pool fixed, but with amazing institutions, I claim that still, maybe you get some factor, but Switzerland is not going to be able to outcompete the United States in scientific and technological progress. Obviously, magnitudes matter. Okay, I'm not sure I agree with this. There have been many examples in history where you have small groups of people, Bell Labs or Skunk Works or something, a couple hundred researchers. OpenAI, right? A couple hundred researchers, and look what they do. Highly selected, though, right? That's like the part of the analogy where Patrick Collison as dictator gets to do the selecting. Well, yes, if you can highly select all the best AI researchers in the world, you might only need a few hundred. But that's the talent pool: you have the 300 best AI researchers in the world. But from 100 years ago to now, population has increased massively, and in fact you would expect the density of talent to have increased, in the sense that malnutrition and other kinds of debilitating poverty that held back past talent are no longer operating at the same level. Yeah. So, to the 100x point: I don't know if it's 100x, I think it's easy to inflate these things, but probably at least 10x. And so people are sometimes like, ah,
come on, ideas haven't gotten that much harder to find, why would you have needed this 10x increase in research effort? Whereas to me, this is an extremely natural story. And why is it a natural story? It's a straight line on a log-log plot. This is sort of a deep learning researcher's dream, right? What is this log-log plot? On the x-axis, you have log cumulative research effort. On the y-axis, you have some log GDP, or OOMs of algorithmic progress, or log transistors per square inch, or, in the experience curve for solar, the log of the price per watt of solar. And it's extremely natural for that to be a straight line; it's a classic. Basically, the first things are easy to find, and then you need constant multiplicative increments of cumulative research effort to find the next thing. So in some sense, I think this is a natural story. Now, one objection people then make is, isn't it suspicious that we increased research effort 10x and ideas also just got 10x harder to find, so it perfectly equilibrates? And there I say, it's an equilibrium; it's an endogenous equilibrium, right? It's like saying, isn't it a coincidence that supply equals demand and the market clears? Same thing here: how much ideas have gotten harder to find is a function of how much progress you've made, and what the overall growth rate has been is a function of how much ideas have gotten harder to find relative to how much you've been able to increase research effort, i.e., the growth of log cumulative research effort. So in some sense, I think the story is fairly natural. And you see this not just economy-wide; you see it in the experience curves for all sorts of individual technologies. So I think there's some process like this. I think it's totally possible that institutions have gotten worse by some factor. And obviously there's some exponent of diminishing returns on more people; serial time is better than just parallelizing.
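A minimal sketch of that "straight line on a log-log plot" story, with toy numbers of my own (the 0.3 slope and the 10%-a-year growth in research effort are illustrative assumptions, not figures from the conversation): if each order of magnitude of cumulative research effort buys a fixed increment of progress, then steady exponential growth in effort yields a steady rate of progress, and "ideas getting harder to find" is just the other side of the same line.

```python
import numpy as np

# Toy version of the log-log story: progress is a fixed slope times
# log10(cumulative research effort), so each 10x of effort buys the same
# number of OOMs of progress (ideas get harder to find as you go).
def progress_ooms(cumulative_effort, slope=0.3):
    return slope * np.log10(cumulative_effort)

effort = 1.0
annual_effort_growth = 1.10   # assume research effort grows ~10% per year
history = []
for year in range(100):
    effort *= annual_effort_growth
    history.append(progress_ooms(effort))

# Annual progress settles to a constant: slope * log10(1.10) ~= 0.012 OOMs/year.
# Grow effort faster (or steepen the slope) and the trend line steepens.
print(round(history[-1] - history[-2], 4))
```

The point of the sketch is just the equilibrium claim in the text: the observed growth rate is the ratio between how fast effort compounds and how fast ideas get harder to find, so the two matching is not a coincidence.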
But still, clearly inputs matter. Yeah, I agree. But if the coefficient on how fast returns diminish as you grow the input is high enough, then the abstract fact that inputs matter isn't that relevant. Okay, so we're talking at a very high level, but take it down to the actual concrete thing here: OpenAI has a staff of at most low hundreds who are directly involved in the algorithmic progress of future models. If it were really the case that you could just arbitrarily scale this number and get much faster algorithmic progress, and that would result in much better AIs for OpenAI, then it's not clear why OpenAI doesn't just go out and hire every single person with a 150 IQ, of which there are hundreds of thousands in the world. And my story there is that there are transaction costs to managing all these people, costs that don't just go away if you have a bunch of AIs, because these tasks aren't easy to parallelize. I'm not sure how you would explain why OpenAI doesn't go on a recruiting binge of every single genius in the world. Okay, great. So let's
talk about the OpenAI example, and let's talk about the automated AI researchers. I mean, in the OpenAI case, just look at the inflation of AI researcher salaries over the last year. I don't know what it is, 4x, 5x, it's kind of crazy. So they're clearly really trying to recruit the best AI researchers in the world, and they do find the best AI researchers in the world. I think my response to your point is that almost all of these 150 IQ people, if you just hired them tomorrow, wouldn't be good AI researchers. They wouldn't be an Alec Radford. But they're willing to make investments that take years to pay off; the data centers they're buying right now will come online in 2026 or something. Why wouldn't they make that bet on every 150 IQ person? Some of them won't work out, some of them won't have the traits they want, but some of them by 2026 will be amazing AI researchers. Why aren't they making that bet? Yeah, and sometimes this happens, right? Smart physicists have been really good at AI research, like a lot of the Anthropic co-founders. But if you talk to, I had Dario on the podcast, they have this very careful policy of, we're not going to just hire arbitrarily, we're going to be extremely selective. Well, training is not as easily scalable, right? Training is very hard. If you just hired 100,000 people, you couldn't train them all; it'd be really hard to train them all, and you wouldn't be getting any AI research done. There are huge costs to bringing on a new person and training them. This is very different with AIs, right? And I think it's really important to talk about the advantages the
AIs will have. So take training, right? What does it take to be an Alec Radford? You need to be a really good engineer; the AIs are going to be amazing engineers, amazing at coding, and you can just train them to do that. You need to have not just good engineering, but really good research intuitions and a real understanding of how to plan research. And this is stuff that Alec Radford, or somebody like him, has acquired over years of research, over being deeply immersed in deep learning, having tried lots of things himself and failed. The AIs are going to be able to read every research paper ever written, every experiment ever run at the lab, gaining the intuitions from all of it. They're going to be able to learn in parallel from all of each other's experiences. What else? What does it take to be an Alec Radford? Well, there's a cultural acclimatization aspect of it, right? If you hire somebody new, there's politicking, maybe they don't fit in; in the AI case, you just make replicas. There's a motivation aspect to it, right? Suppose I could duplicate Alec Radford and, before I run every experiment, have him spend a decade's worth of human time double-checking the code and thinking really carefully about it. First of all, we don't have that many Alec Radfords, and, you know, he wouldn't care, he would not be motivated. But with the AIs, I can just be like, look, I have 100 million of you guys, I'm just going to put you on making sure this code is correct, there are no bugs, this experiment
is fully thought through, every hyperparameter is right. The final thing I'll say is that the 100 million human-equivalent AI researchers figure is just a way to visualize it; it doesn't mean you're going to have literally 100 million copies. There are trade-offs you can make between serial speed and parallelism. You might make the trade-off: look, we're going to run them at 10x or 100x serial speed, and it's going to result in fewer tokens overall because of inherent trade-offs, but then we have, I don't know what the numbers would be, 100,000 of them running at 100x human speed. And there are other things you can do on coordination: they can share latent space, attend to each other's contexts. There's basically this huge range of possibilities. The 100 million figure is more of an illustration: if you run the math in my series, basically by 2027/28 you have this automated AI researcher, and you're going to be able to generate an entire internet's worth of tokens every single day. So it's
clearly a huge amount of intellectual work you can do. I think the analogous thing there is: today we generate more patents in a year than were generated across, like, half a century during the actual physics revolution of the early 20th century. And are we making more physics progress in a year today than we were then? So yeah, we're going to generate all these tokens, but are you generating as much codified knowledge as humanity was able to generate in the original creation of the internet? Internet tokens are mostly final output, right? A lot of these tokens, if we're talking about the unhobbling, I think of a GPT token as sort of one token of my internal monologue. And that's how you do the math on human equivalents: it's something like 100 tokens a minute, and then humans working for X hours, and what's the equivalent there?
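A rough back-of-the-envelope version of that human-equivalent math; the constants below (100 tokens a minute, an eight-hour day, a ~30-trillion-token "internet") are loose illustrative assumptions, roughly echoing figures mentioned elsewhere in the conversation:

```python
# Back-of-the-envelope "human-equivalent" math. All constants here are my
# illustrative assumptions, not figures from the conversation.
TOKENS_PER_MINUTE = 100            # rough pace of a human internal monologue
HOURS_PER_DAY = 8                  # a human research working day
INTERNET_TOKENS = 30e12            # ballpark size of a web-scale text corpus

tokens_per_human_day = TOKENS_PER_MINUTE * 60 * HOURS_PER_DAY   # 48,000
human_days_per_internet = INTERNET_TOKENS / tokens_per_human_day

# If the inference fleet really could emit an internet's worth of tokens per
# day, that is this many human research-days of "thinking" every single day:
print(f"{human_days_per_internet:.2e}")    # ~6e8, i.e. hundreds of millions
```

Under those assumptions the answer lands in the hundreds of millions of human-days per day, which is the order of magnitude behind the "100 million human-equivalent researchers" framing.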
I think this goes back to something we were talking about earlier, like, why haven't we seen the huge revenues? People often ask this question: if you took GPT-4 back 10 years and showed people, they'd think all this stuff is going to be automated, is already automated. And so there's a sort of modus ponens / modus tollens here, where part of the explanation is, oh, it's just on the verge, you need to do these unhobblings. And part of that is probably true. But there's another lesson to learn there, which is that when you look at a set of abilities at face value, there are probably more hobblings than you realize hidden behind the scenes. I think the same will be true of the AGIs that you have running as AI researchers. Uh-huh. I think I basically agree with a lot of that. I think my story here is, you know,
I do talk about this; I think there's going to be some long tail, right? So maybe in 2026, 2027, you have the proto-automated engineer, and it's really good at engineering, but it doesn't have the research intuition yet, and you don't quite know how to put it to work. But the underlying pace of AI progress is already so fast, right? In three years, models have gone from not being able to do any kind of math at all to crushing these math competitions. So you have the initial thing in 2026, 2027, maybe this automated research engineer. It speeds you up by 2x. You get through a lot more progress in that year. By the end of the year, you've figured out the remaining unhobblings, you've got a smarter model, or maybe it takes two years, and then that thing really can do the job 100% autonomously. And again, you know, they
don't need to be doing everything. They don't need to be making coffee; maybe there's a bunch of tacit knowledge in a bunch of other fields that they lack. But AI researchers at AI labs really know the job of an AI researcher, and in some sense there are lots of clear metrics, it's all virtual, it's code, it's things you can develop and train for. So another thing is: how do you actually manage a million AI researchers? Humans, the comparative ability we have, the one we've been especially trained for, is working in teams. And despite the fact that we've had thousands of years to learn how to work together in groups, management is a clusterfuck, right? Most companies are badly managed; it's really hard to do this stuff. For AIs, we talk about AGI, but it'll be some bespoke set of abilities, some of which will be above human, a bunch of which will be at human level. So it'll be some bundle, and we'll need to figure out how to put these bundles together with their human overseers, with the equipment and everything. And the idea that as soon as you get the bundle, you'll figure out how to just shove millions of them together and manage them, I'm just very skeptical of. Every other technological revolution in history has been very piecemeal, much more piecemeal than you would expect on paper. If you just thought about what the Industrial Revolution is: well, we dig up coal, that powers a steam engine, you use the steam engine to run these railroads, and that helps us get more coal out. And there's a sort of Factorio story you can tell where in, like, six hours you're pumping out thousands of times more coal. But in real life it often takes centuries, right? In fact, with electrification, there's this famous study about electrifying factories: it took decades after electricity arrived to change from the pulley-and-belt system we had for steam engines to one that works with more spread-out electrical motors and everything. I think this will be the same kind of thing. It might take, like, decades to actually get millions of AI researchers to work together.
Okay, great. This is great. So, a few responses to that. First of all, I totally agree with the real-world-bottlenecks point. I think it's easy to underrate. Basically what we're doing is removing the labor constraint: we automate labor and we exploit technology, but there are still lots of other bottlenecks in the world. And that's part of why the story starts pretty narrow, at the thing where you don't have these bottlenecks, and only over time expands into broader areas. This is part of why I think it's initially an AI research explosion, right? AI research doesn't run into these real-world bottlenecks. It doesn't require plowing fields or digging up coal; you're just doing AI research. Right, it's not complicated like flipping a burger, it's just AI research. I mean, people make these arguments like, oh, AGI won't do anything because it can't flip a burger. Yeah, it won't be able to flip a burger, but it is going to be able to do algorithmic progress, and then when it's done all the algorithmic progress, it'll figure out how to flip a burger.
Look, the other thing is, again, these quantities are a lower bound, right? We can definitely run a hundred million of these. Probably one of the first things we're going to try to figure out is how to translate quantity into quality. Even at the baseline rate of progress, you're quickly getting smarter and smarter systems, right? We said it was something like four years between the preschooler and the high schooler. So pretty quickly, there are probably some simple algorithmic changes you find, and instead of one Alec Radford you have a hundred; you don't even need a hundred million. And then you get even smarter systems, and now these systems are capable of creative, complicated behavior you don't understand. Maybe there's some way to use all this test-time compute in a more unified way rather than as all these parallel copies. So they won't just be quantitatively superhuman; they'll pretty quickly become qualitatively superhuman. It's sort of like, you're a high school student trying to wrap your mind around standard physics, and then there's some super smart professor for whom quantum physics all just makes sense, and you're like, what is going on? I think pretty quickly you enter that regime, just given the underlying pace of AI progress, and even more quickly than that, because you have the accelerant of this automated AI research. I agree that
over time you would get there; I'm not denying that ASI is eventually possible. I'm just asking, how is this happening in a year? Okay, first of all, I think the story is basically a bit more continuous, right? I talked about how in 2025, 2026 you're basically going to have models as good as a college graduate, and I don't know where the unhobbling will be, but I think it's possible that even then you have kind of the proto-automated engineer. So I think there is a bit of a smear, kind of an AGI smear or whatever, where there are unhobblings you're missing, there are ways of connecting the abilities you're missing, there's some level of intelligence you're missing. But then at some point you are going to get the thing that is a 100% automated Alec Radford. And once you have that, things really take off, I think.
Yeah, okay, so let's go back to the unhobbling. We're going to get a bunch of models by the end of the year. Suppose we didn't get some capability by the end of the year: is there some such capability whose absence would suggest that AGI progress is going to take longer than you're projecting? Yeah, I think there are two key things: the unhobbling and the data wall. Let me just talk about the data wall for a moment. Even though all of this stuff has been about crazy AI progress, I think the data wall is actually sort of underrated. I think there's a real scenario where we just stagnate, because we've been running with this tailwind of: it's really easy to bootstrap, you just do unsupervised learning, next-token prediction, it learns these amazing world models, and bam, great model. And you just have to buy some more compute and do some simple efficiency changes. Again, like so much of deep learning, a lot of the big gains in efficiency have been pretty dumb things, right? You add a normalization layer, you fix the scaling laws, and these have already been huge things,
let alone the kind of obvious ways in which these models aren't good yet. Anyway, data wall, big deal. To put some numbers on it: Common Crawl, the usable stuff online, is something like 30 trillion tokens; Llama 3 was trained on 15 trillion tokens. So you're basically already using all the data. Yeah. And then you can get somewhat further by repeating it. There's an academic paper by Boaz Barak and some others that does scaling laws for this, and they're basically like, yeah, you can repeat it some; after 16 repetitions the returns basically go to zero and you're just completely screwed. So maybe you can get another 10x on data from repetition — Llama 3 is already kind of at the limit of all the data, maybe you can get 10x more by repeating data. And, I don't know, maybe that's at most a 100x better model than GPT-4. And 100x effective compute past GPT-4 is not that much: if you do half an order of magnitude a year of compute and half an order of magnitude a year of algorithmic progress, that's kind of like two years past GPT-4 — and GPT-4 finished pre-training in 2022, so that's 2024. So I think one thing that really matters — I think we won't quite know by the end of the year, but in '25, '26 — is: are we cracking the data wall?
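To make that back-of-envelope concrete, here is a minimal sketch using only the rough figures quoted in this exchange — the token counts, the ~16-epoch repetition limit, the Chinchilla-style data-compute relation, and the half-OOM-per-year trends are all conversational estimates, not measurements:

```python
import math

# Rough figures quoted in the conversation (conversational estimates, not measurements):
# ~30T usable tokens online, Llama 3 reportedly trained on ~15T, and repeating data
# buys maybe ~10x more before returns vanish (around 16 epochs).
repeat_gain = 10

# Chinchilla-style compute-optimal training: parameters and tokens each grow roughly
# with the square root of compute, so effective compute scales roughly with data squared.
effective_compute_gain = repeat_gain ** 2          # ~100x past a GPT-4-level run

# Trend assumed here: ~0.5 OOM/year more physical compute plus ~0.5 OOM/year of
# algorithmic progress, i.e. ~1 OOM of effective compute per year.
years_of_trend = math.log10(effective_compute_gain) / 1.0   # ~2 years past GPT-4

# The same relation run in reverse is the point made just below: ~3 OOMs less data
# would mean roughly ~6 OOMs less effective compute.
print(f"~{effective_compute_gain}x effective compute, ~{years_of_trend:.0f} years of trend")
```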
Okay, so suppose we had three orders of magnitude less data in Common Crawl and on the internet than we just happen to have now. For decades, with the internet and other things, we've been rapidly increasing the stock of data that humanity has. Is it your view that, for contingent reasons, we just happened to have enough data to train models that are just powerful enough — at the 4.5 level — that they can kick off the self-play RL loop? Or is it that, if the data wall had been three OOMs higher, things would have just gone slightly faster, and in that world we would be looking back saying, oh, how hard it would have been to kick off the RL explosion with just 4.5, but we would have figured it out — and in this world, we would have had to kick off some sort of RL explosion at the GPT-3 level, but we would have still figured it out. We didn't just luck out on the amount of data we happen to have in the world.
I mean, three OOMs is pretty rough, right? Three OOMs less data means something like a six-OOMs-less-compute model on the scaling laws, and that's basically capping out at, like, GPT-2 — really bad. So I think that would be really rough. I think you do make an interesting point about the contingency. I guess earlier we were talking about where in the sort of human trajectory you're able to learn from yourself. And if you go with that analogy, again: if you've only got the preschooler model, it can't learn from itself. If you've only got the elementary schooler model, it can't learn from itself. And maybe GPT-4, the smart high schooler, is really where it starts. Ideally you have a somewhat better model, but then it really is able to learn from itself, or learn by itself. So I think — I mean, maybe with one OOM less data I would be more iffy, but it's maybe still doable. Yeah, I think I would feel chiller if we had, you know, one or two more. It would be an interesting exercise to get probability distributions of AGI contingent on the amount of data. Yeah. Okay. Here's the thing that makes me skeptical of this story. It totally makes sense why pre-training works so well.
For these other things, there are stories of why in principle they ought to work — like, humans can learn this way, and so on. Yes. And maybe they're true. But I worry that a lot of this case is based on a sort of first-principles evaluation of how learning happens, when fundamentally we don't understand how humans learn. And maybe there's some key thing we're missing. Yeah. On the sample efficiency — maybe you say, well, the fact that these things are way less sample-efficient at learning than humans suggests there's a lot of room for improvement. Yeah. Another perspective is that we are just on the wrong path altogether, right? That's why they're so sample-inefficient when it comes to pre-training. Yeah. So there's a lot of these first-principles arguments stacked on top of each other, where you get these unhobblings and then you get to AGI, and then, because of these reasons, where you can stack all these things on top of each other, you get to ASI. And I'm worried that there are too many steps of these sort of first-principles arguments. I mean, we'll see, right? On the sample efficiency thing — again, sort of first principles, but I think there's this clear missing middle. And people hadn't been trying; now people are really trying. And I think often in deep learning, something like the obvious thing works.
And there are a lot of details to get right, so it might take some time, but now people are really trying. So I think we get a lot of signal in the next couple of years. On unhobbling — what is the signal on unhobbling that I think would be interesting? I think the question is basically: are you making progress on this test-time compute thing, right? Is this thing able to think coherently over longer horizons than just a couple hundred tokens? Right — that was unlocked by chain of thought. And on that point in particular, many of the people with longer timelines who have come on the podcast have made the point that what's hard is training this long-horizon RL. Earlier we were talking about how they can think for five minutes but not for longer — and it's not because they can't physically output an hour's worth of tokens, at least from what I understand of what they say. Right. Like, even Gemini has a million tokens of context, and the million of context is actually great — for consumption. And it solves one important unhobbling, which is the onboarding problem, right? A new coworker in their first five minutes — like a new smart high school intern, first five minutes, not useful at all. A month in, much more useful, right? Because they've looked at the monorepo, they understand how the code works, and they've read your internal docs. Being able to put that in context — great, solves this onboarding problem. Yeah, but they're not good at the production of a million tokens yet. Yeah.
Right. But on the production of a million tokens, there's no public evidence that there's some easy loss function where you can — well, GPT-4 has gotten a lot better since launch. Actually, the GPT-4 gains since launch I think are a huge indicator here. You talked about this with John on the podcast; John said this was mostly post-training gains. Right. If you look at the LMSYS scores, it's like 100 Elo or something — a bigger gap than between Claude 3 Opus and Claude 3 Haiku, and the price difference between those is 60x.
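For a sense of scale on that figure: the Elo-to-win-rate conversion is standard, so a roughly 100-point LMSYS gap (the rough number quoted above) corresponds to about a 64% head-to-head preference rate for the stronger model. A minimal sketch:

```python
# Standard Elo expected-score formula, applied to the ~100-point gap quoted above.
def elo_win_probability(elo_gap: float) -> float:
    """Probability that the higher-rated model is preferred in a pairwise comparison."""
    return 1.0 / (1.0 + 10 ** (-elo_gap / 400.0))

print(f"{elo_win_probability(100):.0%}")  # ~64% preference rate for the stronger model
```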
But it's not really a new model — it's, like, the same model getting better in the same chatbot. Right. I think it indicates that clearly there's stuff to be done on unhobbling. I think the interesting question is: this time a year from now, is there a model that is able to think for a few thousand tokens coherently, cogently, agentically? And I think probably there is — again, this is why I'd feel better if we had an OOM or two more data, because the scaling just gives you this sort of tailwind. Like, for example, tools, right? Talking to people who try to make things work with tools — GPT-4 is really when tools start to work. You can kind of make them work with GPT-3.5, but it's just really tough. Having GPT-4, you can help it learn tools in a much easier way. So there's a bit more tailwind from scaling. And then, yeah — I don't know if it'll work, but it's a key question.
Oh, yeah. I think that was a good place to close that part — we know what the crux is and what evidence of progress would look like. On the AGI-to-superintelligence step: maybe it's the case that the gains are really easy right now, and you can just let loose an Alec Radford, give him a compute budget, and he comes out the other end with some change to part of the code that's a compute multiplier. What about other parts of the world? Okay, maybe there's an interesting way to ask this. How many other domains in the world are like this, where you think you could get the equivalent of — in one year, you just throw enough intelligence across multiple instances at it, and you come out the other end with something that is remarkably, decades or centuries, ahead? Like, you start off with no flight, and then the Wright brothers are a million instances of GPT-6, and you come out the other end with Starship. Yeah. Is that your model of how things work? I think you're exaggerating the timelines a little bit, but, you know, decades' worth of progress in a year or something — I think that's reasonable. So I think this is where basically the sort of automated AI research comes in, because it gives you this enormous
tailwind on all the other stuff, right? You automate AI research with your sort of automated Alec Radfords, you come out the other end, you've done another five OOMs, and you have a thing that is vastly smarter. And not only is it vastly smarter — you've been able to make it good at everything else too, right? You're solving robotics. The robots are important, right, because for a lot of other things you do actually need to try things in the physical world. I mean, I don't know, maybe you can do a lot in simulation — those are the worlds where it goes really quickly. I don't know if you saw the last Nvidia GTC; it's all about digital twins and having your manufacturing processes in simulation. I don't know. Again, if you have these superintelligent cognitive workers, can they just make simulations of everything, kind of AlphaFold-style, and then make a lot of progress in simulation possible? But I also just think you're going to get the robots.
Again, I agree there are a lot of real-world bottlenecks, right? So, I don't know, it's quite possible that we're going to have crazy drone swarms, but also lawyers and doctors still need to be humans because of regulation. But I think you kind of start narrowly, you broaden, and then in the worlds in which you let them loose — which, again, because of these competitive pressures, we will have to do to some degree on various national security applications — I think quite rapid progress is possible. The other thing, though, is that in the sort of explosion afterwards there are two components. There's the A in the production function, the growth of technology, and that's massively accelerated — you have a billion superintelligent scientists and engineers and technicians, superbly competent at everything. But you've also just automated labor, right? So even without the whole technological-explosion thing, you have this industrial explosion, at least if you let them loose: now you can just build, you can cover Nevada — you start with one robot factory that's producing more robots, and it's basically this cumulative process, because you've taken labor out of the
equation. Yeah, that's super interesting. Although when you increase the L without increasing the A — you can look at the Soviet Union or China, where they rapidly increased inputs. Yeah. And that does have the effect of being geopolitically game-changing; it is remarkable, like, you go to Shanghai or Shenzhen, I mean, these crazy cities built in a decade. Right, right. And that's, I mean, the closest thing to it — people talk about 30% growth rates or whatever per year; 10% a year is totally possible. Yeah. But without productivity gains, it's not like the industrial revolution, where, from the perspective of somebody looking at the system from the outside, your goods have gotten cheaper, they can manufacture more things, but it's not like the next century is coming at you. Yeah, it's both — both of those are important. The other thing I'll say is, with all this stuff,
I think the magnitudes are really, really important, right? We talked about a 10x of research effort, or maybe 10x, 30x over a decade. Even without any kind of self-improvement-type loop — even in the sort of run-up-to-AGI story — we're talking about an order of magnitude of effective compute increase a year, right? Half an order of magnitude of compute, half an order of magnitude of algorithmic progress, which translates into effective compute. So you're doing a 10x a year, basically, on your labor force, right? It's a radically different world if you're doing a 10x or 30x in a century versus a 10x a year on your labor force. So the magnitudes really matter. They also really matter on the intelligence explosion — just the automated AI research part. One story you could tell there is, well, ideas get harder to find, algorithmic progress is going to get harder. Right now you have the easy wins, but in four or five years there'll be fewer easy ones. And so the automated AI researchers are just going to be what's necessary to keep it going, right, because it's gotten harder. But that's sort of,
it's like a really weird knife-edge assumption in economics, where you assume it's just enough. But isn't that the equilibrium story you were just telling for why the economy as a whole has 2% economic growth — because you're just poised on the equilibrium? I guess you're saying that by the time you get to the equilibrium here, it's way faster. At least — and it depends on the sort of exponents — but basically, suppose you need to 10x effective research effort in AI research every four or five years to keep the pace of progress. You're not just getting a 10x, you're getting a hundred thousand x or a million x. The magnitudes really matter. And one way to think about the magnitudes is that you have two exponentials. You have your normal economy that's growing at 2% a year, and you have your AI economy that's growing at, like, 10x a year. It's starting out really small, but it's way faster, and eventually it's going to overtake. You can almost just do the simple revenue extrapolation, right? If you think your AI economy has some growth rate — I mean, it's a very simplistic way of looking at it and so on — but there's this 10x-a-year process, and as it broadens, it will eventually transition the whole economy from the 2%-a-year process to the much faster-growing process.
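A toy illustration of that "two exponentials" picture — the starting sizes below are made up purely to show how quickly a 10x-a-year process overtakes a 2%-a-year one; they are not figures from the conversation:

```python
# Toy "two exponentials" crossover: a small, fast-growing AI economy vs. a large,
# slow-growing normal economy. Starting values are hypothetical illustrations.
normal_economy = 100e12   # ~$100T-scale world economy (hypothetical round number)
ai_economy = 100e9        # ~$100B-scale AI revenue (hypothetical round number)

years = 0
while ai_economy < normal_economy:
    normal_economy *= 1.02   # ~2% a year
    ai_economy *= 10         # ~10x a year
    years += 1

print(f"The fast exponential overtakes after ~{years} years in this toy setup.")  # ~4
```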
And I don't know, I think that's very consistent with historical stories of changes in growth mode, right? There's this sort of long-run hyperbolic trend — it manifested in the change of growth mode at the industrial revolution, but there's just this long-run hyperbolic trend — and now you have another sort of change in growth mode. Yeah, yeah. I mean, that was one of the questions I asked Tyler when I had him on the podcast: the fact that after 1776 you go from a regime of negligible economic growth to 2%. It's really interesting — from the perspective of somebody in the Middle Ages or before, 2% is the equivalent of the sort of 10%. And I guess you're projecting even higher for the AI economy. But I mean, I think again, with all this stuff, I have a lot of uncertainty, right? A lot of the time I'm trying to tell the modal story — I think it's important to be concrete and visceral about it. And I have a lot of uncertainty basically over how the 2030s play out. Basically the thing I know is it's going to be pretty crazy, but exactly where the bottlenecks are and so on — I think that will be kind of — So let's talk through the numbers here. You say
hundreds of millions of AI researchers. So right now, GPT-4 Turbo is like 15 bucks for a million tokens of output, and a human thinks at 150 tokens a minute or something. If you do the math on that, I think for an hour's worth of human output it's like 10 cents or something. Way cheaper than a human worker. Huh? Cheaper than a human worker. Oh yeah — it just can't do the job. Yeah. That's right. That's right. But by the time you're talking about models that are trained on the 10-gigawatt cluster, you have something that is four orders of magnitude more expensive to train — and inference, three orders of magnitude, something like that. So that's like $100 an hour of labor, and now you're having hundreds of millions of such laborers. Is there enough compute to do this kind of labor with a model that is a thousand times bigger?
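The arithmetic behind this question, under its own stated assumptions — the $15-per-million-token price, the 150-tokens-a-minute figure, and the 3-OOM markup are the round numbers used in the question, not measured values:

```python
# Back-of-envelope cost of an "hour of human-equivalent output", using the round
# numbers in the question above (assumptions, not measured figures).
price_per_million_output_tokens = 15.0   # dollars, roughly the GPT-4-Turbo-class price quoted
human_tokens_per_minute = 150            # rough rate of human thought/writing quoted
tokens_per_hour = human_tokens_per_minute * 60            # ~9,000 tokens

cost_per_hour_today = tokens_per_hour / 1e6 * price_per_million_output_tokens
print(f"~${cost_per_hour_today:.2f}/hour today")          # ~$0.14, on the order of 10 cents

# If per-token inference were ~3 OOMs (1000x) pricier for a far bigger model:
print(f"~${cost_per_hour_today * 1_000:.0f}/hour")        # ~$135, i.e. order $100/hour
```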
Great question. So I actually don't think inference costs for frontier models are necessarily going to go up that much. So one historical data point is — Isn't the test-time compute thing going to make it go up even higher? I mean, we're just doing it per token, right? I'm just saying, suppose each model token were the same as a human token, thinking at 100 tokens a minute. So yeah, it'll use more tokens, but the per-token calculation is already pricing that in. The question is the per-token pricing, right? Right. And so GPT-3, when it launched, was actually more expensive than GPT-4 is now. So over vast increases in capability, inference costs have remained constant. That's sort of wild, and I think it's worth appreciating; I think it gestures at the underlying pace of algorithmic progress. I think there's a more theoretically grounded reason why inference costs would stay constant, and it's the following story. On Chinchilla scaling laws, half of the additional compute you allocate to bigger models and half of it you allocate to more data, right? But also, if we go with the basic story of half an order of magnitude a year more compute and half an order of magnitude a year of algorithmic progress, you're also saving half an order of magnitude a year, and that would roughly exactly compensate for making the model bigger. The caveat is that obviously not all training efficiencies are also inference efficiencies, though a bunch of the time they are, and separately you can find inference efficiencies. So given this historical trend, and given that sort of baseline theoretical reason, I think it's not a crazy baseline assumption that the frontier models are not necessarily going to get more expensive per token. Oh really? Yeah. Okay, that's wild. We'll see, we'll see. I mean, the other thing is, even if they get, like, 10x more expensive, then you have 10 million instead of 100 million, so it's not really — it's not that bad.
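A sketch of that constant-per-token-cost argument as stated here — an approximation that leans on the assumption that the same ~0.5 OOM/year of algorithmic progress also shows up as inference efficiency, which is exactly the caveat flagged above:

```python
# Per-token inference cost trend, in OOMs per year, under the stated assumptions.
ooms_effective_compute_per_year = 1.0                      # 0.5 physical + 0.5 algorithmic
ooms_params_per_year = ooms_effective_compute_per_year / 2 # Chinchilla: params ~ sqrt(compute)
ooms_inference_efficiency_per_year = 0.5                   # assumed to carry over to serving

net_ooms_per_token_cost = ooms_params_per_year - ooms_inference_efficiency_per_year
print(net_ooms_per_token_cost)  # ~0.0: per-token cost roughly flat, on these assumptions
```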
Okay, so part of the intelligence explosion is that each of them has to run experiments, maybe GPT-4-sized. Uh-huh. And so that takes up a bunch of compute. Yes, and they need to consolidate the results of the experiments and synthesize them. I mean, you have a much bigger inference fleet anyway than your training fleet. Sure. Okay, but I think the experiment compute is a constraint. Yeah. Okay. Going back to maybe a bigger, more fundamental thing we're
talking about here. In your series you say we should denominate the probability of getting to AGI in terms of orders of magnitude of effective compute — effective here accounting for the fact that there's a quote-unquote compute multiplier if you have better algorithms. Yes. And I'm not sure it makes sense to be confident that this is a sensible way to project progress. It might be, but I just have a lot of uncertainty about it. Uh-huh. It seems similar to somebody trying to project when we're going to get to the moon by looking at the Apollo program in the '40s or something and saying: we have some amount of effective jet fuel, and if we get more efficient engines, then we have more effective jet fuel, and so we're going to assign a probability of getting to the moon based on the amount of effective jet fuel we have. And I don't deny that jet fuel is important to launching rockets, but that seems like an odd way to denominate when you're going to get to the moon. Yeah. So I think these cases are pretty different. I don't know how rocket science works, but I didn't get the impression that there's some clear scaling behavior with the amount of jet fuel. I think in AI — first of all,
the scaling laws have just held, right? A friend of mine pointed this out and I think it's a great point: if you concatenate the original Kaplan scaling laws paper — which I think went from 10 to the negative 9 up to 10 petaflop-days — and then concatenate the additional compute from there up to kind of GPT-4, and you assume some algorithmic progress, the scaling laws have held over probably 15 OOMs on this rough calculation, maybe even more. They've held for a lot of OOMs.
They've held for the specific loss function they're training on, which is predicting the next token. Whereas the progress you are forecasting — Yes. — will be required for further progress in capability. Yeah. Specifically, we know that scaling alone can't keep working because of the data wall, so there's some new thing that has to happen. And I'm not sure whether you can extrapolate that same scaling curve to tell us whether these unhobblings will also — like, are these not even on the same graph? The unhobblings are just a separate thing. Yeah. Exactly. So, a few things here, right? On the effective compute scaling: in some sense, I think people center the scaling laws because they're usually how you explain why scaling matters. But the scaling laws came way after people — or at least after
Dario and Ilya realized that scaling mattered. And I think almost more important than the loss curve is just, in general — there's this great quote from Dario on your podcast: Ilya was like, the models, they just want to learn; you make them bigger, they learn more. And that just applied across domains generally, all the capabilities. And you can look at this in benchmarks. Again, like you say, there's the headwind of the data wall, and I'm bracketing that and talking about it separately. The other thing is unhobblings, right? If you just put them on the effective compute graph, these unhobblings would be kind of huge. What does that even mean — what is on the y-axis here? Like, say, MMLU or whatever benchmark, right? And so, you know, we mentioned the LMSYS differences; RLHF, again, as good as a hundred x; chain of thought, right? Just going from simple prompting to chain of thought can be like a 10x effective compute increase on math benchmarks. I think this is useful to illustrate that unhobblings are large.
But I kind of think of them as slightly separate things. The way I think about it is that at a per-token level, I think GPT-4 is not that far away from a token of my internal monologue, right? Even 3.5 to 4 took us from the bottom of the human range to the top of the human range on a lot of, you know, high school tests. So it's like: a few more 3.5-to-4 jumps on a per-token basis, per-token intelligence, and then you've got to unlock the test-time compute, you've got to solve the onboarding problem, make it use a computer — and then you're getting real close. Again, the story might be wrong. Right. It is strikingly plausible, I agree. And actually, the other thing I'll say is, you know, I say this 2027 timeline; I think it's unlikely, but I do think there are worlds that are, like, AGI next year. And that's basically if the test-time compute overhang is really easy to crack. If it's really easy to crack, then you do, like, four OOMs of test-time compute — from a few hundred tokens to a few million tokens — quickly. And then maybe it only takes one or two of those 3.5-to-4 jumps per token, plus using the test-time compute, and you basically have the proto-automated engineer.
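Sizing that test-time compute overhang with the rough endpoints quoted in the conversation (a few hundred coherent tokens today, a few million as the target):

```python
import math

# "A few hundred" to "a few million" tokens of coherent thinking: roughly 4 OOMs.
current_coherent_tokens = 300        # rough figure for coherent chain of thought today
target_coherent_tokens = 3_000_000   # rough target figure quoted above
print(f"~{math.log10(target_coherent_tokens / current_coherent_tokens):.0f} OOMs of test-time compute")
```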
So I'm reminded of when Stephen Pinker released his book on — what is it? The Better Angels of Our Nature — a couple of years ago or something, and he describes the secular decline in violence and war and everything, and you can just plot the line from the end of World War II, and the things before World War II are just aberrations or whatever. And basically as soon as it happens: Ukraine, Gaza, everything. So basically the ASI and crazy global conflict — Right, ASI and crazy global conflict. I think this is the sort of thing that happens in history: you see the historical line and you're like, oh my gosh, and then just as soon as you make that prediction — who was that famous author? So yeah — again, people have been predicting that deep learning hits a wall every year. Maybe one year they're right. But it's gone a long way and hasn't hit a wall, and it doesn't have that much more to go. And so, yeah.
I guess I think this is a plausible story, so let's just run with it and see what it implies. Yeah. So in your series, you talk about alignment from the perspective of: this is not about some Doomer scheme about the tail of your personal probability distribution where things go off the rails. It's more about just controlling the systems, making sure they do what we intend them to do. If that's the case, and we're going to be in this sort of geopolitical conflict with China — and part of what we're worried about is them making the CCP bots that go out and plant the red flag of Mao across the galaxies or something — then shouldn't we be worried about alignment as something that, in the wrong hands, is the thing that enables brainwashing, sort of dictatorial control? That seems like a worrying thing. Shouldn't this be part of the sort of algorithmic secrets we keep hidden — how to align these models? Because that's also something the CCP can use to control their models. I mean, I think in the world where you get the democratic coalition — yeah. I mean, also, alignment is often dual use, right? Like RLHF: the alignment team developed it, it was great, it was a big win for alignment, but it also obviously makes these models useful. But yeah — so alignment enables the CCP bots; alignment is also what you need to get the, whatever, US AIs to follow the constitution and disobey unlawful orders and respect separation of powers and checks and balances. So yeah, you need alignment for whatever you want to do. It's just the sort of underlying technique. Tell me what you make of this take — I've been toying with it a little bit. So fundamentally, there are many different ways
the future could go. There's one path in which the Eliezer-type crazy AIs with the nanobots take the future and turn everything into gray goo or paperclips. And the more you solve alignment, the more that path of the decision tree is circumscribed, and the more it is just different humans and the visions they have. And of course, we know from history that things don't turn out the way you expect, so it's not like you can really decide the future — but, you want the error correction — from the perspective of anybody who's looking at the system, it will be like: I can control where this thing is going to end up. And so the more you solve alignment, and the more you circumscribe the different futures that are the result of the AI's own will, the more that accentuates the conflict between humans and their visions of the future. And so the world in which alignment is solved is the world in which you have the most sort of human conflict over where to take AI. Yeah — I mean, by removing the worlds in which the AIs take over, the remaining worlds are the ones where the humans decide what happens. And then, as we talked about, there are a whole lot of worlds for how that could go. And I worry — so when you think about
alignment as just controlling these things — just think a little further forward. There are worlds in which, hopefully, human descendants or some version of things in the future merge with superintelligences, and they have rules of their own, but they're in some sort of law- and market-based order. I worry about: if you have things that are conscious and should be treated with rights — if you read about what alignment schemes actually are, and then you read these books about what actually happened during the Cultural Revolution, or what happened when Stalin took over Russia — you have very strong monitoring from different instances where everybody's tacitly watching each other; you have brainwashing; you have red-teaming, where you have the spy stuff you were talking about, where you try to convince somebody you're a defector and see if they defect with you, and if they do, you realize they're an enemy and you take them out. And listen, maybe I'm stretching the analogy too far. But the ease with which these alignment techniques actually map onto something you could have read about during, like, Mao's Cultural Revolution is a little bit troubling. Yeah — I mean, look, I think sentient AI is a whole other topic; I don't know if we want to talk about it. I agree that it's going to be very important how we treat them. In terms of what you're actually programming these
systems to do — again, alignment is just a technical problem with a technical solution; it's what enables the CCP bots. In some sense, the model I also think about when talking about checks and balances is, you know, something like the Federal Reserve or Supreme Court justices. There's a funny way in which they're kind of this very dedicated order. And it's amazing — they're actually quite high quality. Yeah. Right. They're really smart people, they really believe in the Constitution, they love the Constitution, they believe in their principles. And yeah, they have different persuasions, but they have, I think, very sincere debates about what is the meaning of the Constitution, what is the best actuation of these principles. I guess — by the way, our conversation has sort of skipped over arguments like this; this podcast, you know, wants a lot of high-quality content. Anyway, I think there's going to be a process of figuring out what the Constitution should be. This Constitution has worked for a long time; you start with that. Maybe eventually things change enough that
you want edits to that. But anyway, you want them — for example, for the checks and balances — to really love the Constitution and believe in it and take it really seriously. And look, at some point, yeah, you are going to have AI police and AI military, but being able to ensure that they believe in it in the way that a Supreme Court justice does, or in the way that a Federal Reserve official takes their job really seriously — Yeah. And I guess a big open question is whether, if you do the project or something like the project — Sorry, the other important thing is that a bunch of different factions need their own AIs, right? It's really important that, like, each political party gets to have their own — whatever, crazy, you might totally disagree with their values, but it's really important that they get to have their own kind of superintelligence. And again, I think it's that these sort of classical liberal processes play out, including different people of different persuasions and so on. And I mean, the AI advisors might not make them wise, they might not follow the advice or whatever, but I think it's important. Okay. So speaking of alignment, you seem pretty optimistic. So let's
run through the sources of the optimism. Yeah. You laid out different worlds in which we could get AGI. There's one you think is low probability, for next year, where GPT-4 plus scaffolding plus unhobblings gets you to AGI — Not GPT-4, you know — sorry, sorry. So GPT-5. Yeah. Yeah. And there are ones where it takes much longer, and ones where it's a couple of years. Yeah. So GPT-4 seems pretty aligned in the sense that I don't expect it to go off the rails. Yeah. Maybe with scaffolding, things might change. Yeah. Exactly. So maybe you keep turning the cranks, and one of the cranks gets you to ASI. Yeah. Is there any point at which the sharp left turn happens? Is it the case that, plausibly, when they act more like agents, this is the thing to worry about? Is there anything qualitative that you expect to change from an alignment perspective? Yeah. So I don't know if I believe in this concept of a sharp left turn. But I do think there are important qualitative changes that happen between now and somewhat superhuman systems, kind of early on in the intelligence explosion, and then important qualitative changes that happen from the early intelligence explosion to true superintelligence in all its power
and might. And let's talk about both of those. Okay. So the first part of the problem is one we're going to have to solve ourselves, right? We have to align the initial AIs in the intelligence explosion, the sort of automated Alec Radfords. I think there are two important things that change from GPT-4. One of them is: if you believe the story on synthetic data or RL or self-play to get past the data wall, and if you believe this unhobbling story, then at the end you're going to have things that are agents, right? Including, they make long-term plans; they're somehow able to act over long horizons. And you need that — that's the prerequisite to be able to do the automated AI research. And so I basically think pre-training is sort of alignment-neutral, in the sense that it has all these representations — it has good representations, it has representations of doing bad things — but it's not, like, scheming against you or whatever. I think this sort of
misalignment can arise once you're doing more of this long-horizon training, right? So — again, a simplified example, but to illustrate — you're training an AI to make money. If you're just doing that with reinforcement learning, it might learn to commit fraud, or lie, or deceive you, or seek power, simply because those are successful strategies in the world, right? RL basically explores; maybe it figures out, oh, it tries to hack something and that made more money, and if that's successful, if that gets reward, it's just reinforced. So basically, I think there are more serious misalignments — misaligned long-term goals — that could arise between now and then, or that at least necessarily could arise if you're able to get long-horizon systems. That's one. What you want to do in that situation is add side constraints, right? You want to add: don't lie, don't deceive, don't commit fraud. And so how do you add those side constraints? The basic idea you might have is RLHF, right? Yeah, it has this goal of, like, make money or whatever, but you're watching what it's
doing. If it starts trying to lie or deceive or commit fraud or break the law, you just kind of give it a thumbs down — don't do that — and that gets reinforced against. The critical issue that comes in is that these AI systems are getting superhuman, and they're going to be able to do things that are too complex for humans to evaluate, right? Even early on in the intelligence explosion, the automated AI researchers and engineers might write millions, billions, trillions of lines of complicated code; they might be doing all sorts of stuff you just don't understand anymore. And so, in the million lines of code, is it somewhere, you know, hacking something, or exfiltrating itself, or trying to go for the nukes or whatever? You don't know anymore, right? And so the thumbs-up, thumbs-down, pure-RLHF thing doesn't fully work anymore. That's the first part of the picture — and we should maybe talk more about this — I think there's a hard technical problem of what you do post-RLHF, but I think it's a solvable problem, and there are various things I'm bullish on; I think there are ways in which deep learning has shaken out favorably. The second part of the problem is that you're going from your initial systems in the intelligence explosion to, like, superintelligence.
However many OOMs it ends up being, by the end of it you have a thing that's vastly smarter than humans. I think the intelligence explosion is really scary from an alignment point of view, because if you have this rapid intelligence explosion — less than a year, two years, whatever — you're going, say, in the period of a year, from systems where failure would be bad but not catastrophic — it says a bad word, something goes awry — to systems where failure is, you know, it exfiltrated itself, it starts hacking the military, it can do really bad things. You're going in less than a year from a world in which it's some descendant of current systems, and you kind of understand it and it has good properties, to something that potentially has a very alien and different architecture, right, after having gone through another decade's worth of ML advances. I think one example there that's
very salient to me is legible and faithful chain of thought, right? A lot of the time when we're talking about these things, we're talking about: it has tokens of thinking, and then it uses many tokens of thinking, and maybe we bootstrap ourselves by — it's pre-trained, it learns to think in English, we do something else on top so it can do the longer chains of thought. And so it's very plausible to me that for the initial automated alignment researchers, we don't need to do any complicated mechanistic interpretability — you can just literally read what they're thinking, which is great, a huge advantage, right? However, that's very likely not the most efficient way to do it; there's probably some way to have a recurrent architecture with all internal states that's much more efficient. That's what you might get by the end of that year. You're going in this year from, like, RLHF plus some extensions of it that work, to something that's vastly superhuman — it's to us what an expert in the field might be to an elementary schooler or middle schooler. And so I think it's this incredibly hairy period for alignment. The thing you do have is you have the
automated AI researchers, right? So you can use the automated AI researchers to also do alignment. And so in this world — yeah, why are we optimistic that the project is going to be run by people who are thinking about this? Here's something to think about. At OpenAI, you start off with people who are very explicitly thinking about exactly these kinds of things. Yes, right? But are they still there? No, but here's the thing — even the people who are there, even the current leadership, you're going to find them in interviews and in their blog posts talking about exactly these things. And what happens is, as you were talking about — and it's not just you, we all talked about that tweet thread — when there is some trade-off that has to be made, like, we need to do this flashy release this week and not next week because, whatever, Google I/O is the next week so we've got to get it out — the trade-off is made in favor of the more careless decision. When we have the government, or the national security advisor, or the military, or whatever — which is much less familiar with this kind of discourse, and isn't natively thinking in this way about, huh, whether the chain of thought is unfaithful,
and how do we think about the features that are represented here — why should we be optimistic that a project run by people like that will be thoughtful about these kinds of considerations? I mean, they might not be, I agree. A few thoughts, right? First of all, I think the private world, even if they nominally care, is extremely tough for alignment, for a couple of reasons. One, you just have the race between the commercial labs, and you don't have any headroom there to be like, ah, actually, we're going to hold back for three months to get this right, and we're going to dedicate 90% of our compute to automated alignment research instead of just pushing the next OOM. The other thing, though, is that in the private world, China has stolen your weights, China has your secrets, they're right on your tail, you're in this feverish struggle with no room at all for maneuver. And the way — look, it's absolutely essential to get alignment right during this intelligence explosion — the way you get it right is that you need to have that room to maneuver and you need to have that clear lead. And again, maybe you've made the deal or whatever, but I think you're in an incredibly tough spot if you don't have that clear lead. So I think the private world is kind of rough
there. On whether people take it seriously — you know, I have some faith in the normal mechanisms of a liberal society. If alignment is an issue — which we don't fully know yet, but the science will develop, we're going to get better measurements of alignment — the case will be clear and obvious. I worry about worlds where the evidence is ambiguous. And I think a lot of the scariest intelligence explosion scenarios are worlds in which the evidence is ambiguous. But again, if the evidence is ambiguous, then that's the world in which you really want the safety margins. And that's also the world in which running the intelligence explosion is sort of like running a war, right? The evidence is ambiguous, you have to make these really tough trade-offs, and you'd better have a really good chain of command for that. It's not just like, yeah, wow, let's go, it's cool. Yeah. Let's talk a little bit about Germany. Uh-huh. We were making the analogy to World War II, and you made a really significant point
many hours ago: that throughout history World War II is not unique, at least when you think in proportion to the size of the population. There have been these other sorts of catastrophes where a substantial portion of the population has been killed off, and after that the nation recovers and gets back to its heights. Uh-huh. So what's interesting after World War II is that Germany especially, and maybe Europe as a whole, obviously experienced fast economic growth in the direct aftermath because of catch-up growth. But subsequently, we just don't think of Germany as — we're not talking about Germany potentially launching an intelligence explosion and getting a seat at the AI table. We talk about Iran and North Korea and Russia; we're not going to talk about Germany, right? Well, look, they're allies. Yeah. But so what happened? I mean, between World War II and now, it didn't come back the way it did after the Thirty Years' War, right? Yeah. Yeah. I mean, look, I'm generally very bearish on Germany. In this context, though, I think you're underrating it a little bit. I think it's probably still one of the top five most important countries in the world. I mean, Europe overall still has a GDP that's close to the size of the United States',
and there are things Germany is actually kind of good at, right? Like state capacity: the roads are good, they're clean, they're all maintained. In some sense, a lot of this is the flip side of the things I think are bad about Germany. Yeah. Right? In the US, there's a bit more of a Wild West feeling, and that includes the crazy bursts of creativity; it includes political candidates across a much broader spectrum — both an Obama and a Trump are somebody you just wouldn't see in the much more confined German political debate. I wrote a blog post at some point about Europe's political stupor on this. But anyway, there's this sort of punctilious rule-following that is good in terms of keeping your state capacity functioning, but there's also this very constrained view of the world in some sense. And that includes — I think after World War II there was a real backlash against anything elite: again, no elite high schools, no elite colleges, and sort of, by and large, excellence isn't cherished. Yeah — why is that the logical thing to rebel against if you're trying to overcorrect from the Nazis?
Yeah. Is it because the Nazis were very much into elitism? I don't understand why that's the logical sort of reaction. I don't know — maybe it was a kind of reaction against the whole Aryan-race sort of thing. I mean, I also just think — look at the end of World War I versus the end of World War II for Germany, right? A common narrative is that the Peace of Versailles was too strict on Germany. Well, the peace imposed after World War II was much more strict, right? The whole country was destroyed — most of the major cities, over half of the housing stock had been destroyed; in some birth cohorts, like 40% of the men had died. Half of the population displaced. Oh yeah, I mean, almost 20 million people were displaced, right? Huge, crazy. And the borders were way smaller than the Versailles borders. Yeah, exactly. And a complete imposition of a new political system, right, on both sides. And yet, in some sense, that worked out better than the post-World-War-I peace, where there was this resurgence of German nationalism — which, in some sense, had been the pattern. So it's unclear whether you want to wake the sleeping beast. I do think that at this point it's gotten a bit too sleepy. Yeah. I do think it's an
interesting point about how we underrate the American political system. Yeah. And I've been making the same correction myself. There was this book by a Chinese economist called China's World View. Overall I was a big fan, but he made a really interesting point in there, which was about the way candidates rise up through the Chinese hierarchy, for politics, for administration. Uh-huh. In some sense it selects: you're not going to get some Marjorie Taylor Greene or somebody running. Right. Right. You don't get that in Germany either. Right. Yeah. But he explicitly made the point in the book that that also means we're never going to get a Henry Kissinger or a Barack Obama. Right. We're going to get — by the time they end up in charge of the Politburo, they'll be like some sixty-year-old bureaucrat who's never ruffled any feathers. Yeah. Yeah. I mean, I think there's something really important about the sort of very raucous public debate, and in general, there's a sense in which in America, you
know, lots of people live in their own world — I mean, we live in this kind of bizarre little bubble in San Francisco — but I think that's important for this sort of evolution of ideas, for error correction and that sort of thing. There are other ways in which the German system is more functional. Yeah. But it's interesting that this has led to major mistakes, right? Like defense spending, right? And then Russia invades Ukraine and you're like, wow, what do we do? Right. No, that's a really good point, right? On the main issues, everybody agrees — exactly. Yeah. So you have a consensus-blob kind of thing. Right.
And on the China point: just having this experience of reading German newspapers — and thinking about how much more poorly I would understand the German debate and state of mind from just kind of afar — I worry a lot about, and I think it is interesting, just how impenetrable China is to me. It's a billion people, right? And almost everything else is really global. We have a globalized internet, and I kind of have a sense of what's happening in the UK. Even if I didn't read German newspapers, I'd sort of have a sense of what's happening in Germany. But I really don't feel like I have a sense of what the state of mind or the state of political debate is for a sort of average Chinese person, or an average Chinese lady. And yeah, I find that distance kind of worrying. There are some people who do this, and they do really great work, where they go through the party documents and the party speeches — and it seems to require a lot of interpretive ability, where there are very specific words in Mandarin that will have one connotation and not the other.
But yeah, I think it's sort of interesting given how globalized everything is — I mean, now we have basically perfect translation machines and it's still so, so impenetrable. That's really interesting. I'm almost ashamed that I haven't done this yet: many months ago, when Alexey interviewed me on his YouTube channel, I said I'm meaning to go to China to actually see for myself what's going on. And actually I should. So by the way, if anybody listening has a lot of context on China, or could introduce me to people if I went to China, please email me. You've got to do some pods, and you've got to find some of the Chinese AI researchers, man. I know. I was thinking at some point — again, I don't know if they can speak freely — but they have these papers, and on the papers they'll say who's a co-author. It's funny, because I was thinking of just cold-emailing everybody, like, here's my Calendly, let's just talk. I just want to see what the vibe is. Even if they don't tell me anything, I'm just like: what kind of person is this? How Westernized are they? Yeah.
But I was saying this, I just remembered that, in fact, ByteDance, according to mutual friends we have at Google, cold emailed every single person on the Gemini paper and said, if you come work for ByteDance, we'll make you an L8 engineer, you'll report directly to the CTO. And, in fact, that's how the secrets go over, right? Now, I'm actually curious, I meant to ask this earlier, but, yeah, suppose they hired one of them. If there's only a
hundred or so people, maybe less, who are working on the key algorithmic secrets — yeah, yeah, yeah — if they hired one such person, yeah, is all the alpha gone that these labs have? Yeah. If this person was intentional about it, they could get a lot. I mean, actually, you could probably just also take the code. They could get a lot of the key ideas. Again, like, you know, up until recently stuff was published, but, you know,
they could get a lot of the key ideas if they tried. You know, I think there are a lot of people who don't actually poke around to see what the other teams are doing, but, you know, I think you kind of can. But yeah, I mean, they could. It's scary. Right. It's kind of like how you could just recruit a Manhattan Project engineer and then just get it. And these are secrets that can be used for probably every training
run in the future. They'll be like, maybe they're the key to getting past the data wall, without which you can't go on, and they're going to be worth, you know, given the sort of multipliers on compute, hundreds of billions, trillions of dollars. And all it takes is, you know, China offering a hundred million dollars to somebody. Right. Yeah, come work for us. Right. And then, yeah, I mean, I'm really uncertain on how
seriously China is taking AGI right now. One anecdote that was related to me, on the topic of anecdotes, by another sort of, you know, researcher in the field: at some point they were at a conference with some Chinese AI researcher. And he was talking to him and he was like, I think it's really good that you're here. And like, you know, we've got to
have the international coordination and stuff. And apparently this guy said, I'm kind of the most senior person that they're going to let leave the country to come to things like this. Mm-hmm. Wait, what's the takeaway? They're not letting really senior researchers leave the country. Interesting. Kind of classic, you know, Eastern Bloc move. Yeah.
I don't know if this is true, but what I hear is interesting. So I thought the point you made earlier about being exposed to German newspapers, and also, because earlier you mentioned economics and law and national security, the variety in your intellectual diet there has exposed you to thinking about the geopolitical clash here in ways others haven't.
Talking about AI, I mean, this is the first episode I've done where we talked about things like this, which, now that I think about it, is an obvious thing in retrospect that I should have been thinking about. Anyways, so that's one thing I've been missing. What are you missing? You're already thinking about national security, so you can't say national security. What, like, perspective are you probably underexposed to as a result?
China, I guess, as you mentioned. Yeah, so I think the China one is an important one. I mean, I think another one would be a sort of very Tyler Cowen take, which is like, you're not exposed to how a normal person in America will actually use AI, you know, probably not that much, and whether that ends up being kind of a bottleneck to the diffusion of these things. Am I overrating the revenue because I'm kind of assuming, ah, you know, everyone will just adopt
it? But, you know, will the Joe Schmo engineer at a company be able to integrate it? And then also the reaction to it, right? You know, I mean, I think this was a question a few hours ago, about, like, you know, won't people kind of rebel against this? Yeah. And won't they not want to do the project? I don't know, maybe they will. Yeah. Here's a political reaction that I didn't anticipate. Yeah. So Tucker Carlson was
recently on the Joe Rogan episode. I already told you about this part; I'm just going to tell the story again. So Tucker Carlson and Rogan, yeah, they start talking about World War Two. Ah, and Tucker says, well, listen, I'm going to say something that my fellow conservatives won't like, but I think nuclear weapons are immoral. I think it was obviously immoral that we used them on Nagasaki and Hiroshima. And then he says, in fact, nuclear weapons are always immoral,
except when we would use them on data centers. In fact, it would be immoral not to use them on data centers, because look, these people in Silicon Valley, these fucking nerds, are making AGI and superintelligence. And they say that it could enslave humanity. We made machines to serve humanity, not to enslave humanity. And they're just going on and making these machines. And so we should of course be nuking the data centers. And that is definitely not a political reaction in 2024
I was expecting. Who knows? Yeah, it's going to be crazy. It's going to be crazy. The thing we learned with COVID is that the left-right reactions you would anticipate just based on hunches completely flipped. It's so contingent. Initially it was kind of the right that was like, you know, take this seriously, and the left was like, this is racist. And then it flipped, you know, and the left was really into the COVID restrictions. Yeah.
Yeah. And the whole thing also is just so blunt and crude. And so yeah, I think probably in general, you know, people really underrate this. People like to make sort of complicated technocratic AI policy proposals. And I think especially if things go fairly rapidly on the path to AGI, there might not actually be that much space for kind of complicated, you know, clever proposals. It might just be kind of a bunch
of cruder reactions. Yeah. Look, and then also, you mentioned the spies and the national security getting involved and everything. You can talk about that in the abstract. But now that we're living in San Francisco and we know many of the people who are doing the top AI research, it's also a little scary to think about people I personally know and am friends with. It's not unfeasible, if they have secrets in their head that are worth $100 million or something.
Uh-huh. Kidnapping, assassination, sabotage. Or their families. Yeah, it's really bad. Yeah. And to the point on security, you know, right now it's just really lax. But at some point, as it becomes really serious, you know, you're going to want the security guards. Yeah. Yeah. Yeah. So presumably you have thought about the fact that people in China will be listening to this and reading your series. Yeah. And somehow you made
the trade-off that it's better to let the whole world know, yeah, including China, yeah, and wake them up to AGI, which is part of the thing you're worried about, China waking up to AGI, yeah, than to stay silent. Yeah. I'm curious, walk me through how you've thought about that trade-off. Yeah, look, I think this is a tough trade-off. I thought about this a bunch. You know, I think people in the PRC will read this. I think
there's some extent to which the cat is out of the bag. You know, AGI being a thing people are thinking about very seriously is not new anymore. A lot of these takes are kind of old, or, you know, I had some of these views a year ago but might not have written them up a year ago, in part because I think
the cat wasn't out of the bag enough. You know, I think the other thing is, to be able to manage this challenge, a much broader swath of society will need to wake up. Right. And if we're going to get the project, you know, we actually need sort of, you know, a broad bipartisan understanding of the challenges facing us. And so, you know, I think it's a tough trade-off, but I think the need to wake up people in the United States and the sort of
Western world and the democratic coalition is ultimately imperative. And, you know, my hope is more people here will read it than in the PRC. You know, and I think people sometimes underrate the importance of just kind of writing it up, laying out the whole picture. And, you know, I think you have actually done a great service to mankind in some sense,
by, you know, with your podcast. And, you know, I think it's overall been good. Okay. So by the way, you know, on the topic of Germany, we were talking at some point about your immigration story. Right. You have a kind of interesting story you haven't told, and I think you should tell it. So a couple of years ago, I was in college and I was 20, yeah, about to turn 21, I think it was. So you came from India when you were really young. Right. Yeah.
So, I don't know, I was eight or nine. I lived in India, and then we moved around all over the place. Right. Because of the backlog for Indians. Yeah, the green card backlog. Yeah. We've been in the queue for like decades. Even though you came at eight years old, on the, you know, H-1B. Yeah. And when you're 21, you get kicked off the queue, and you have to restart the process. My dad's a doctor and I'm on his H-1B as a
dependent. But when you're 21, you get kicked off. Yeah. And so I'm 20, and it just kind of dawns on me, yeah, this is my situation, and it's completely screwed. Right. And I also had the experience that, with my dad, we've moved all around the country, because they have to prove that, for him as a doctor, they can't get native talent. Yeah. And you can't do a startup. Yeah. Yeah. Yeah. They go wherever they can't get native talent. And even getting the H-1B for me would have been like
a 20% lottery. Right. If you're lucky, you get in. And he had to prove that they can't get native talent, which meant, for him, we lived in North Dakota for three years, Virginia for three years, Maryland, West Texas. Yeah. And so it kind of dawns on me, this is my situation: when I turn 21, I'll be in this lottery. Even if I win the lottery, yeah, I'll be a fucking code monkey for the rest of my life. Because this thing isn't
going to let up. Yeah. Can't do a startup. Exactly. And at the same time, for the last year I had been super obsessed with Paul Graham essays. My plan at the time was to do a startup or something; I was super excited about that. Yeah. And it just occurred to me that I couldn't do this. Yeah. That this is just not in the cards for me. Yeah. And so I was kind of depressed about it. I remember I kind of just blew it, I was in a daze through finals, because
it had just occurred to me, and I was really anxious about it. Yeah. And I remember thinking to myself at the time that if somehow I end up getting my green card before I turn 21, there's no fucking way I'm turning into a code monkey. Yeah. Because the thing behind this feeling of dread that I had was the realization that I'm just going to have to be
a code monkey, and that that's my default path. Yeah. If I hadn't made a proactive effort not to do that, I would have graduated college as a computer science student and I would have just done that. And that's the thing I was super scared about. Yeah. So that was an important sort of realization for me. Anyway, then COVID happened, and because of that, since there weren't foreigners coming, the backlog cleared faster. And by the skin of my teeth, like a few months before I turned 21,
for extremely contingent reasons, I ended up getting a green card. And because I got a green card, I could do the whole podcast. Exactly. After college I was kind of bumming around; I had graduated just a semester early, and I figured, I'm going to do this podcast, see what happens. And none of it would have happened without the green card. And that was the best-case scenario. Yeah. Yeah. It's actually, I think it's hard to measure, you know,
what is the impact of immigration reform, right? What is the impact of clearing, you know, whatever, 50,000 green cards in the backlog? And you're such an amazing example of, like, you know, all of this that is only possible because of it. And it's, yeah, I mean, it's just tragic, right? This is so dysfunctional. Yeah. Yeah. No, it's insane. I'm glad you kind of, you know, tried the unusual path. Well, yeah. But I could only do it
because, obviously, I was extremely fortunate that I got the green card. I had a little bit of saved-up money, and I got a small grant out of college, thanks to the Future Fund, to do this for basically the equivalent of six months. And so it turned out really well. And then at each point where I was like, oh, okay, podcast, come on, I've wasted a few months on this, let's now go do something real, yeah, something big would happen. Yeah. Jeff Bezos would, what,
tweet about it? Yeah. Yeah. But so there would always be, just at the moment I'm about to quit the podcast, something like Jeff Bezos saying something nice about me on Twitter, or the Ilya episode getting like half a million views. You know, and now this is my career. But yeah, looking back on it, it was sort of incredibly contingent. Yeah. That things worked out
the right way. Yeah. I mean, look, if the AGI stuff goes down, you know, it will be maybe the most important kind of, you know, source; it'll be how maybe most of the people who kind of end up feeling the AGI were inspired by it. Yeah. Yeah. Also, you're very much linked with the story in many ways. First, I got like a $20,000 grant from the Future Fund right out of college, and that sustained me for six months or however long it was. Yeah. And without that, I wouldn't have.
It's kind of crazy. Yeah. And it was, you know, tiny, but yeah, it goes to show kind of how far small grants can go. Yeah. Sort of like Emergent Ventures too. Yeah. Exactly, Emergent Ventures. Yeah. And then, um, well, the last year I've been in San Francisco, we've just been in close contact the entire time, just bouncing ideas back and forth. That's basically the only social life I have. I think people would be
surprised by how much I got from you, Sholto, Trenton, a couple of others. I mean, it's been an absolute pleasure. Yeah. I like it. It's been super fun. Yeah. Um, okay. So some random questions for you. Yeah. If you could convert to Mormonism, yeah, and you could really believe it, yeah, would you do it? Would you push the button? Well, okay. Okay. Uh, before I answer that question, one sort of observation about the Mormons.
So actually, there's an article that made a big impact on me. Yeah. It was by McKay Coppins at some point, you know, in the Atlantic or whatever, about the Mormons. And I think he even interviewed
Romney and so on. And the thing I thought was really interesting in this article was he talked about how the experience of growing up different, you know, growing up very unusual, especially if you grow up Mormon outside of Utah, you know, you're like the only person who doesn't drink caffeine, you don't drink alcohol, you're kind of weird, how that kind
of got people prepared for being willing to be outside of the norm later on. And, like, you know, Romney was willing to take stands alone in his party, because he believed what he believed was true. And, I mean, probably not to the same degree, but I feel a little bit like this from having grown up in Germany, you know, not really having liked the German system, and having been kind of an outsider or something.
I think there's a certain amount to which, yeah, growing up an outsider gives you kind of an unusual strength later on to be, yeah, willing to say what you think. Yeah. And, um, yeah. So that is one thing I really appreciate about the Mormons, at least the ones that, you know, grew up outside of Utah. And, you know, the fertility rates, they're good, they're important. But they're going down as well, right? Yeah. Right. This is the thing that
really clinched the fertility decline story. Yeah. Even the Mormons, yeah, even the Mormons, right? You used to be able to take some comfort, like, the Mormons will replace everybody. I don't know if that's good, but it's like, at least, come on, you know, at least some people will maintain high fertility. But no, even the Mormons. And basically, once these sort of religious subgroups that
have high fertility rates grow big enough, they come too close in contact with normal society and become normalized, and fertility rates drop from, I don't remember the exact numbers, maybe like four to two over the course of 10, 20 years. Anyway, so it's like, you know, now people point to the Amish or whatever,
but I'm just like, it's probably just not scalable. If you grow big enough, then this sort of overwhelming force of modernity kind of gets you. Yeah. Now, if I could convert to Mormonism, look, I don't believe it, right? If I believed it, I obviously would convert to Mormonism, right? Because then you've got to convert. But you could choose the world in which you do believe it.
I think there's something really valuable in kind of believing in something greater than yourself, and having a certain amount of faith. You do, right? That's what this is. Yeah. And, you know, feeling some sort of duty to the thing greater than yourself. Yeah. And, you know, maybe my version of this is somewhat different. You know, I feel like there's some sort of historical weight on
like, how this might play out. And I feel some sort of duty to like make that go well. I feel some sort of duty to, you know, our country, to the national security of the United States. And,
you know, I think I can be a force for a lot of good. Going back to the opening, the thing that's especially impressive about that is, look, there are people at the company who, through years and decades of building up savings from working in tech, probably have
tens of millions liquid, more than that in terms of their equity. And very many people were concerned about the clusters in the Middle East and the secrets leaking to China and all these things. But the person who actually made a hassle about it, and I think hassling people
is so underrated, the one person who made a hassle about it is the 22-year-old who has less than a year at the company, who doesn't have savings built up, who isn't some established senior member of the org. Maybe it's me being naive and, you know, not knowing how big companies work, but, you know, I think I'm sometimes a bit of a speech deontologist, you know,
I kind of believe in saying what you think. Yeah. Sometimes friends tell me I should be more of a speech consequentialist. No, I really think few people, when they have the opportunity to talk to the person, will actually just bring up the thing. I've been with you in multiple contexts, and I guess I shouldn't reveal who the person is or what the context was.
But I've just been very impressed that the dinner begins, and by the end, somebody who has a major voice in how things go is seriously thinking about a worldview they would have found incredibly alien before the dinner or something. Um, and I've been impressed with your willingness to just give them the spiel and hassle them. Um, I mean, look, I just,
I think I feel this stuff pretty viscerally now. Yeah. I think there was a time when I thought about this stuff a lot, but it was kind of econ models and these sort of theoretical abstractions, you know, you talk about human brain size or whatever. Right. And, you know, since at least last year, I feel like I can see it. Yeah. And I just, I feel it. Um, and I think I
can, like, you know, sort of see the cluster that AGI will be trained on. Yeah. I can see the kind of rough combination of algorithms and the people that will be involved and how this is going to play out. Yeah. And, um, you know, look, we'll see how it plays out. There are many ways this could be wrong, many ways it could go. But I think this could get very real. Yeah. Should we talk about what you're up to next? Sure. Yeah. Okay. So you're starting an investment
firm. Yep. Anchor investments from Nat Friedman, Daniel Gross, Patrick Collison, John Collison. First of all, why is this the thing to do? You believe AGI is coming in a few years? Yes. Well, then why the investment firm? A good question, fair question.
Okay, so I mean, a couple of things. One is just, you know, I think we talked about this earlier, but the screen doesn't go blank, you know, when sort of AGI or superintelligence happens. I think people really underrate basically the decade after. You have the intelligence explosion, that's maybe the most wild period, but I think the decade after is also going to be wild. And, you know, you have this combination of human institutions
and superintelligence, you have crazy kind of geopolitical things going on, you have the sort of broadening of this explosive growth. And basically, yeah, I think it's going to be a really important period, and I think capital will really matter. You know, eventually we're going to go to the stars, going to go to the galaxies. So anyway, part of the answer is just, look, I think, done right, there's a lot of money to be made. You know,
I think if AGI were priced in tomorrow, you could maybe make a hundred x. Probably you can make even way more than that because of the sequencing. And, you know, capital matters. I think the other reason is just, you know, some amount of freedom and independence. And, you know, I think there are some people who are very smart about the AI stuff and who kind of see it coming. But I think almost all of them, you know, are kind of, you know, constrained
in various ways, right? They're in the labs, you know, they're in some other position where they can't really talk about this stuff. And, you know, in some sense I've really admired the thing you have done, which is, I think it's really important that there are voices of reason on this stuff publicly, and people who are in positions to advise important actors and so on. And so, you know, basically the thing this investment
firm will be is kind of, you know, a brain trust on AI. It's going to be all about that situational awareness. We're going to have the best situational awareness in the business. You know, we're going to have way more situational awareness than any of the people who manage money in, you know, New York. Yeah. And so, you know, we're going to do great on investing.
But it's the same sort of situational awareness that I think is going to be important for understanding what's happening, being a voice of reason publicly, and being in a position to advise. Yeah. The book about Peter Thiel, yeah, had an interesting quote about his hedge fund. I think it got terrible returns. So this is an example. Right. That's the bear case, right. It's too theoretical. Sure. Yeah.
But they had an interesting quote, that it's basically a think tank inside of a hedge fund. Yeah. And that's what I'm trying to build. Right. Yeah. So, presumably, you've thought about the ways in which these kinds of things can blow up. There are a lot of interesting business history books about people who got the thesis right, yeah, but timed it wrong. Yeah. Where they bet that the internet is going to be a big deal, yeah, but they sell at the wrong time and buy at the
wrong time during the dot-com boom. Yeah. And so they miss out on the gains, even though they were right about the thesis. Yeah. Yeah. Well, what is the trick to preventing that kind of thing? Yeah. I mean, look, obviously, you know, not blowing up is sort of, you know, task number one and two or whatever. I mean, you know, this investment firm is going to just be betting on AGI, betting on AGI and superintelligence before
the decade is out, taking that seriously, making the bets you would make, yeah, if you took that seriously. So, you know, if that's wrong, the firm is not going to do that well. The thing you have to be resistant to is, you know, getting one or a couple or a few kind of individual calls wrong, right? You know, it's like AI stagnates for a year because of the data wall, or, you know, you got the call wrong on when revenue
would go up. And so that's pretty critical. You have to get the timing right. I do think in general that the sort of sequence of bets on the way to AGI is actually pretty critical, and a thing people underrate. So, all right, where does the story start? So, obviously, the sort of only bet over the last year was Nvidia. And, you know, it's obvious now; very few people did it. This is also, you know, a classic debate a friend and I had with
another colleague of ours, where this colleague was really into TSM, you know, TSMC. And he was just kind of like, well, you know, these fabs are going to be so valuable, and also, you know, Nvidia has competitive risks, right? It's like, maybe somebody else makes better GPUs. That was basically right. But only Nvidia had the AI beta, right? Because only Nvidia was a large fraction AI, so the next few doublings would meaningfully
explode the revenue, whereas TSMC was, you know, a couple percent AI. So even though there's going to be a few doublings of AI, it's not going to make that big of an impact. All right. So the only place to find the AI beta basically was Nvidia for a while. You know, now it's broadening, right? So now TSM is like, you know, 20 percent AI by like '27 or something is what they're saying.
One more doubling and it'll be kind of a large fraction of what they're doing. And, you know, there's a whole stack: there are people making memory and CoWoS and power. You know, utility companies are starting to get excited about AI, and they're like, oh, power production in the United States will grow, you know, not 2.5 percent but 5 percent over the next five years. And I'm like, no, it'll grow more.
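To make the "AI beta" point concrete, here is a toy back-of-the-envelope sketch. The revenue shares and doubling counts are illustrative assumptions, not any company's actual financials; the point is just that a doubling of AI demand barely moves a company that is a few percent AI and explodes one that is mostly AI.

```python
# Toy illustration of the "AI beta" idea discussed above.
# All numbers are made up for illustration; they are not actual financials.

def total_revenue_growth(ai_share: float, ai_doublings: int) -> float:
    """Fractional growth in total revenue if the AI slice doubles
    `ai_doublings` times while the non-AI slice stays flat."""
    non_ai = 1.0 - ai_share
    new_total = non_ai + ai_share * (2 ** ai_doublings)
    return new_total - 1.0

# A company that is ~90% AI vs. a supplier that is ~5% AI,
# each experiencing the same two doublings of AI demand.
for name, share in [("mostly-AI chipmaker", 0.90), ("foundry at ~5% AI", 0.05)]:
    growth = total_revenue_growth(share, ai_doublings=2)
    print(f"{name}: total revenue up {growth:.0%}")
# mostly-AI chipmaker: total revenue up 270%
# foundry at ~5% AI: total revenue up 15%
```

Same AI tailwind, wildly different sensitivity, which is why the beta only "broadens" out to the rest of the stack as each company's AI share grows.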
You know, at some point, like a Google or something becomes interesting. You know, people are excited about them with AI because it's like, oh, AI revenue will be, you know, 10 billion or tens of billions. I'm kind of like, ah, I don't really care about them before then. I care about it once you get the AI beta, right? And so at some point, you know, Google will get, you know, $100 billion of AI revenue, and,
mm-hmm, probably their stock will explode. You know, they're going to become, you know, a $5 trillion, $10 trillion company. Anyway, so the timing there is very important. You have to get the timing right, you have to get the sequence right. You know, at some point, actually, I think there are going to be real headwinds to equities from real interest rates. Right? So, basically, in these sort of explosive growth worlds, you would expect real interest rates to go up a lot,
on both sides of the equation, right? On the demand for capital side, because, you know, people are going to be making these crazy investments, initially in clusters and then in the robot factories or whatever, right? And so they're going to be borrowing like crazy. They want all this capital, high ROI.
And then on the consumer saving side, right? To get consumers to give up all this capital: you know, the Euler equation, the standard intertemporal trade-off of consumption. Standard stuff. Some of our friends have a paper on this: basically, if consumers expect real growth rates to be higher, interest rates are going to be higher, because they're less willing to give up consumption.
Right. You know, they're less willing to give up consumption today for consumption in the future. Anyway, so at some point real interest rates will go up, and if the sort of theta parameter is greater than one, that actually means higher growth rate expectations make equities go down, because the interest rate effect outweighs the growth rate effect. And so, you know, at some point there's the big bond short; you've got to get that right.
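For readers who want the mechanism spelled out, here is a minimal sketch of the textbook consumption Euler equation logic being gestured at; the notation is my own shorthand (theta for the curvature, i.e. the inverse elasticity of intertemporal substitution), not taken from the paper mentioned above:

$$ r \;\approx\; \rho + \theta g $$

where $r$ is the real interest rate, $\rho$ the rate of pure time preference, and $g$ expected consumption growth. Against a simple growing-perpetuity valuation $P = D/(r-g)$, this gives $r - g = \rho + (\theta - 1)\,g$, so when $\theta > 1$ a rise in expected growth pushes the discount rate up by more than the growth rate, and valuations fall. That is the "interest rate effect outweighs the growth rate effect" claim.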
You know, you've got to get the call right on, you know, nationalization. Anyway, there's this whole sequence of things, and you've got to get it right. And, look, you've got to be really, really careful about your overall exposure and positioning, right? Because, you know, if you expect these kinds of crazy events to play out, there are going to be crazy things
you didn't see coming. You know, you do also want to make the kind of bets that are tailored to your scenarios, in the sense that you want to find bets that are bets on the tails, right? You know, I don't think anyone is expecting, you know, real interest rates to go above 10%. But I think there's at least a serious chance of that before the decade is out. And so, you know, maybe there's some cheap insurance
you can buy on that, you know, that pays off. Very silly question. Yeah. In these worlds, are the financial markets where you make these kinds of bets going to be respected? Like, you know, is my Fidelity account going to mean anything when we have 50% economic growth? Like, who's going to say, we've got to respect his property rights? The bond short for the sort of 50% economic growth world, that's pretty deep into it. I mean, again, there's this whole sequence of
things. But yeah, no, I think property rights will be respected, again, in the sort of modal world of the project. Yeah. At some point, there's going to be figuring out the property rights for the galaxies, you know, and that'll be interesting. That will be interesting. So there's an interesting question, going back to your strategy, about, well, the 2030s will really matter a lot for how the rest of the future goes. Yeah. And you want to be in a position
of influence by that point because of capital. It's worth considering (as far as I know, there's probably a whole bunch of literature on this, I'm just riffing), but the landed gentry before the beginning of the Industrial Revolution: I'm not sure they were able to leverage their position, in a sort of Georgist or Piketty-type sense, to accrue the returns that were realized through the Industrial Revolution. Yeah. And I don't know
what happened; at some point they just weren't the landed gentry anymore. But I'd be concerned that even if you make great investment calls, you'll be like the guy who owned a lot of farmland before the Industrial Revolution. And even the guy who actually makes a bunch of the machines, the one with the steam engine, even he doesn't make that much money. Most of the benefits are widely diffused and so forth. I mean, I think the analog is, you sell your land
and you put it all into the people who are building the new industry. I think the real depreciating asset for me is human capital. Yeah. No, look, I'm serious. It's like, there's something about, I don't know, being the valedictorian of Columbia: the thing that made you special is you're smart. But actually, that might not matter in like four years, because it's actually automatable. And so anyway, a friend joked that the sort of investment
firm is the perfect hedge for me. It's like, either AGI this decade, and yeah, your human capital has depreciated, but you've turned it into financial capital; or no AGI this decade, in which case maybe the firm doesn't do that well, but, you know, you're still in your 20s. Excellent. And what's your story for why AGI hasn't been priced in? The story is,
financial markets are supposed to be very efficient and it's very hard to get an edge. Yeah. Here, naively, you just say, well, I've looked at the scaling curves and they imply that we're going to be buying much more compute and energy than the analysts realize. Yeah. Shouldn't those analysts be broke by now? What's going on?
Yeah. I mean, I used to be a true EMH guy. Yeah. I was an economist. Yeah. You know, the thing I changed my mind on is that I think there can be groups of people, smart people, you know, say they're in San Francisco, who do just have alpha over the rest of society in kind of seeing the future. And so like COVID, right? I think there was honestly kind of a similar group of people
who just saw that and called it completely correctly. And, you know, they shorted the market, they did really well. You know, a bunch of other sort of things like that. So, you know, why is AGI not priced in? It's sort of, you know, why hasn't the government nationalized the labs yet? Society hasn't priced it in yet; it sort of hasn't completely diffused. And, you know, again, I might be wrong, right? But
I just think, you know, not that many people take these ideas seriously. Yeah. Yeah. A couple of other sort of ideas I was playing around with that we didn't get a chance to talk about. The systems competition. One of my favorite books about World War Two is the Victor Davis Hanson summary of everything. Uh-huh. And he explains why the Allies made better decisions than the Axis.
Why did they? And so obviously there were some decisions the Axis made that were pretty good, like Blitzkrieg or whatever. Those were sort of by accident though. In what sense? That they just had the infrastructure left over? Well, no, I mean, my read of it is that Blitzkrieg wasn't some kind of genius strategy; it was more that their hand was forced. I mean,
this is the very Adam Tooze-ian story of World War II, right? There's this long war versus short war distinction, and I think it's actually kind of an important concept. Germany realized that if they were in a long war, including against the United States, they would not be able to compete industrially. So their only path to victory was, make it a short war, right? And that sort of worked much more spectacularly than they thought,
right? They take over France and take over much of Europe. And so then, you know, the decision to invade the Soviet Union, it was, look, it was about the Western front in some sense, because it was like, we've got to get the resources. We don't actually have a bunch of the stuff we need, like, you know, oil and so on. You know, Auschwitz was actually just this giant chemical plant to
make kind of synthetic oil and a bunch of these things; it was the largest such project in Nazi Germany. And so they thought, well, you know, we completely crushed them in World War One, they'll be easy, we'll invade them, we'll get the resources.
And then we can fight on the Western front. And even during the whole invasion of the Soviet Union, even though a large amount of the, you know, the deaths happened there, a large fraction of German industrial production, you know, planes and naval and so on, was actually directed towards the
Western front and towards the Western Allies. Well, and then the point that Hanson was making was, well, by the way, I think this concept of long war versus short war is kind of interesting with respect to thinking about the China competition. Yeah. Which is, you know, I worry a lot about, you know, latent American industrial capacity. You know, I think China builds like 200 times more ships
than we do right now, or some crazy number like that. And so it's like, maybe we have a sort of superiority, say, in the non-AI world, we have the superiority in military materiel to kind of win a short war, at least, you know, kind of defend Taiwan in some sense. But if it actually goes on, you know, maybe China is much better able to mobilize industrial resources in a way that we just don't have the same ability to anymore.
I think this is also relevant to the AI thing, in the sense that if it comes down to sort of a game about building, right: maybe AGI takes the trillion-dollar cluster, not the hundred-billion-dollar cluster, or maybe AGI happens on the hundred-billion-dollar cluster, but it really matters if you can run 10x, if you can do one more order of magnitude of compute for your superintelligence or whatever.
Then, you know, maybe right now they're behind, but they just have this sort of raw industrial capacity to outbuild us. And that matters both in the run-up to AGI and after, right, where it's like, you have the superintelligence on your cluster, now it's time to kind of expand into the explosive growth. And, you know, will we let the robo-factories run wild? Maybe not, but maybe China will. And, you know,
how many of the drones will we produce? And so, yeah, there's some sort of outbuilding and industrial explosion dimension that I worry about. You've got to be one of the few people in the world who is both concerned about alignment but also wants to make sure we'll let the robo-factories proceed once you get the ASI, to beat China. It's all part of the picture.
Yeah, yeah. Yeah. And by the way, speaking of the ASIs and the robot factories, one of the interesting things is this question of what you do with industrial-scale intelligence. And obviously it's not chatbots. Yeah. But I think it's very hard to predict. Yeah. The history of oil is very interesting here. I think it's in the 1860s that some geologist figures out how to refine oil, and so then Standard Oil, I guess,
gets started. There's a huge boom. It changes American politics. Yeah. Entire legislatures are getting bought out by oil interests. Yeah. And presidents are getting elected based on their positions about oil and breaking it up and everything. And all of this has happened, the world has been revolutionized, before the car has been invented. Uh-huh. And when the light bulb was invented, I think it was like 50 years after oil refining had been discovered.
The majority of Standard Oil's history is before the car is invented. The kerosene lamp. Exactly. So it's just used for lighting. And then they thought oil would just no longer be relevant. Yeah. So there was a concern that Standard Oil would go bankrupt when the light bulb was invented. Yeah. But then, you know, you realize that there's an immense amount of compressed energy here. Yeah. You're going to have billions of gallons of this stuff
a year. Yeah. Yeah. And it's hard to sort of predict in advance what you can do with that. Yep. Yep. And then later on, in terms of transportation, cars, yeah, that's what it would be used for. And anyways, with intelligence, maybe one answer is the intelligence explosion. Right. But even after that, yeah, you have all these ASIs and you have enough compute, especially the compute they'll build, to run hundreds of millions of GPUs.
Hmm. Yeah. But what are we doing with all that? Yeah. It's very hard to predict in advance. It will be very interesting to figure out what the Jupiter brain will be doing. So look, there's situational awareness, yeah, where things stand now, uh-huh, and we've gotten a good dose of that. But obviously a lot of the things we're talking about now, you couldn't have predicted many years back in the past. Right. And part of your worldview implies that things
will accelerate because of AI speeding up the process. Yeah. But many other things are fundamentally unpredictable, yeah, basically how people will react, how the political system will react, how foreign adversaries will react. Yep. Those things will become evident over time. Yep. So situational awareness is not just knowing where the picture stands now, but being in a position to react appropriately to new information, to change a worldview as a
result, to change your recommendations as a result. Yep. What is the appropriate way to think about situational awareness as a continuous process, rather than as a one-time thing you realize? Yep. No, I think this is right. Look, I think there's a sort of mental flexibility
and willingness to change your mind that's really important. I actually think this is sort of how a lot of brains have been broken in the AGI debate. There are the doomers, who actually, you know, I think were really prescient on AGI and thinking about this stuff, like, a decade ago, but they haven't actually updated on the empirical realities of deep learning, so their proposals are kind of unworkable, they don't really make sense. You know,
there are people who come in with sort of a predefined ideology, the e/accs a little bit, you know, they like to shitpost about technology, but they're not actually thinking it through: I mean, either they're sort of stagnationists who think this stuff is only going to be, you know, a chatbot, and so of course it isn't risky, or they're just not thinking through the kind of actually immense national security implications and how that's
going to go. And, you know, I actually think there's kind of a risk in having written this stuff down and put it online. I think this sometimes happens to people: a sort of calcification of the worldview, because now they've publicly articulated this position. And, you know, maybe there's some evidence against it, but they're
clinging to it. And so I actually, you know, I want to give the big disclaimer: I think it's really valuable to paint this sort of very concrete and visceral picture. I think this is currently my best guess for how this decade will go. I think if it goes anywhere like this, it will be wild. But, you know, given the mad pace of progress, we're going to keep getting a lot more information. And, you know, I think it's important to sort of keep your head
on straight about that. You know, I feel like the most important thing here, and this relates to some of the stuff we've talked about, you know, the world being surprisingly small and so on, is that I used to have this worldview of, look, there are important things happening in the world, but there are people who are taking care of it, you know, there are the people in government, and there are, again, even the AI labs I had idealized,
and people are on it, you know, surely they must be on it, right? And I think just from some of this personal experience, even seeing how COVID went, you know, it's not the case that somebody else is just kind of on it and making sure this goes well,
however it goes. You know, the thing that I think will really matter is that there are sort of good people who take this stuff as seriously as it deserves, who are willing to take the implications seriously, who are willing to, you know, have situational awareness, are willing to change their minds, are willing to sort of stare the picture in the face. And, you know, I'm counting on those good people. All right, that's a great place to close, Leopold. Thanks so much, Dwarkesh.
Yeah, this was excellent. Hey, everybody, I hope you enjoyed that episode with Leopold. There's actually one more riff about German history that he had after a break, and it was pretty interesting, so I didn't want to cut it out. So I've just included it after this outro. You can advertise on the show now, so if you're interested, you can reach out at the form in the description below. Other than that, the most helpful thing you can do is just share the episode if you enjoyed it.
Send it to group chats, Twitter, wherever else you think people who might like this episode might congregate. And other than that, I guess here's this riff on Frederick the Great. See you on the next one. I mean, I think the actually funny thing is, you know, a lot of this sort of German history stuff we've talked about is not actually stuff I learned in Germany. It's stuff that I
learned after. And there's actually a funny thing where I would go back to Germany over Christmas or whatever, and suddenly I understand the street names. It's like, you know, Gneisenau and Scharnhorst and all these Prussian military reformers. And you finally understand, you know, Sanssouci. And Frederick the Great is this really interesting figure, where he's in some sense this kind of gay lover of the arts, right? Where he
hates speaking German, he only wants to speak French, he plays the flute, he composes, he has all the sort of great, you know, artists of his day over at Sanssouci. And he actually had this really tough upbringing, where his father was this really stern sort of Prussian military man. And Frederick the Great, as a sort of 17-year-old or whatever, basically had a male lover. And what his father did was
imprison his son, and then, I think, hang his male lover in front of him. And again, his father was this kind of very strong Prussian guy, and he sees this kind of gay, you know, lover of the arts. But then later on, Frederick the Great turns out to be, you know, one of the most successful Prussian conquerors, right? He gets Silesia, he wins the Seven Years' War. You know, he's also an amazing military strategist. You know, amazing military
strategy at the time consisted of, like, he was able to flank the army, and that was crazy, you know, that was brilliant. And then they almost lose the Seven Years' War. At the very end, you know, the Russian czar changes, and the new one is like, I'm actually kind of a Prussia stan, you know, I'm into this stuff. And then he lets,
you know, lets Frederick the Great loose and lets the army go. And, um, anyway, yeah, kind of a bizarre, interesting figure in German history.