The Government Knows AGI is Coming

Mar 04, 2025 · 1 hr 6 min

Summary

Ezra Klein interviews Ben Buchanan, former AI advisor in the Biden White House, about the rapid approach of AGI and its implications. They discuss the US-China competition, national security risks, labor market disruptions, and the need for both safety measures and proactive preparation. Buchanan emphasizes the importance of government readiness and international cooperation in navigating this transformative technology.

Episode description

Artificial general intelligence — an A.I. system that can beat humans at almost any cognitive task — is arriving in just a couple of years. That’s what people tell me — people who work in A.I. labs, researchers who follow their work, former White House officials. A lot of these people have been calling me over the last couple of months trying to convey the urgency. This is coming during President Trump’s term, they tell me. We’re not ready.

One of the people who reached out to me was Ben Buchanan, the top adviser on A.I. in the Biden White House. And I thought it would be interesting to have him on the show for a couple reasons: He’s not connected to an A.I. lab, and he was at the nerve center of policymaking on A.I. for years. So what does he see coming? What keeps him up at night? And what does he think the Trump administration needs to do to get ready for the AGI — or something like AGI — he believes is right on the horizon?

This episode contains strong language.

Mentioned:

“Machines of Loving Grace” by Dario Amodei

“Ninety-five theses on AI” by Samuel Hammond

“What It Means to be Kind in a Cruel World” by The Ezra Klein Show with George Saunders

Book recommendations:

The Structure of Scientific Revolutions by Thomas Kuhn

Rise of the Machines by Thomas Rid

A Swim in a Pond in the Rain by George Saunders

Thoughts? Guest suggestions? Email us at [email protected].

You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs.

This episode of “The Ezra Klein Show” was produced by Rollin Hu. Fact-checking by Michelle Harris, with Kate Sinclair and Mary Marge Locker. Mixing by Isaac Jones, with Efim Shapiro and Aman Sahota. Our supervising editor is Claire Gordon. The show’s production team also includes Elias Isquith, Kristin Lin and Jack McCordick. Original music by Pat McCusker. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. Special thanks to Switch and Board Podcast Studio.

Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Transcript

New York Times games make me feel like I'm amazing. Wordle makes me feel things that I don't feel from anyone else. I absolutely love Spelling Bee. The Times crossword puzzle is a companion that I've had longer than anyone outside of my immediate family. When I can finish a hard puzzle without hints, I feel like the smartest person in the world. It gives me joy every single day. Join us and play all New York Times games at nytimes.com slash games. Subscribe by March 16th.

From New York Times Opinion, this is The Ezra Klein Show. For the past couple of months, I've been having this strange experience where person after person, independent of each other, from AI labs, from government, has been coming to me and saying: it's really about to happen.

We're really about to get to artificial general intelligence. And what they mean is that they have believed for a long time that we are on a path to creating transformational artificial intelligence, artificial intelligence capable of doing basically anything a human being can do behind a computer, but better than most human beings can do it. And before, they thought, you know, maybe it'd take five or 10 years, 10 or 15 years. But now they believe it's coming inside of two to three years,

inside Donald Trump's second term. And they believe it because of the products they're releasing right now. They believe it because of what they're seeing inside the places they work. And I think they're right. If you've been telling yourself this isn't coming, I really think you need to question that. It's not Web3. It's not vaporware. A lot of what we're talking about is already here right now. And I think we're on the cusp of an era

in human history that is unlike any of the eras we have had before. And we're not prepared, in part because it's not clear what it would mean to prepare. We don't know what this will look like, what it will feel like. We don't know how labor markets will respond. We don't know which country is going to get there first. We don't know what it will mean for war. We don't know what it will mean for peace.

And as much as there is so much else going on in the world to cover, I do think there's a good chance that when we look back on this era in human history, this will have been the thing that matters. This will have been the event horizon, the thing where the world before it and the world after it

are just different worlds. One of the people who reached out to me was Ben Buchanan, the former special advisor for artificial intelligence in the Biden White House. And I thought Buchanan would be interesting to bring on for a couple of reasons. One is that this is not a guy working for an AI lab. So he's not being paid by the big AI labs to tell you this technology is coming. The second...

is that he was at the nerve center of what policy we have been making in recent years. And we have been doing things. And in particular, we've been doing things to try to stay ahead of China. And he's been at the center of that, working on the national security side.

And three, because there's now been a profound changeover in administrations. And the new administration, between Elon Musk and Marc Andreessen and David Sacks and J.D. Vance, has a lot of people with very, very, very strong views on AI. It's not something that they're going in without having thought about. So we're at this moment of a big transition in the policymakers. And they are probably going to be in power when AGI, or something like it, hits the world.

So what are they going to do? What kinds of decisions are going to need to be made? And what kinds of thinking do we need to start doing now to be prepared for something that virtually everybody who works in this area is trying to tell us, as loudly as they possibly can, is coming? As always, my email.

Ben Buchanan, welcome to the show. Thanks for having me. So you gave me a call after the end of the Biden administration. And I got a call from a lot of people in the Biden administration who wanted to tell me about all the great work they did.

You sort of seemed to want to warn people about what you now thought was coming. What's coming? I think we are going to see extraordinarily capable AI systems. I don't love the term artificial general intelligence, but I think that will fit in the next couple of years, quite likely during Donald Trump's presidency. And I think...

There's a view that this has always been something of corporate hype or speculation. And I think one of the things I saw in the White House when I was decidedly not in a corporate position was trend lines that looked very clear. And what we tried to do under the president's leadership was... get the U.S. government and our society ready for these systems. Before we get into what it would mean to get ready, what does it mean? Yeah. When you say extraordinarily capable systems, capable of what?

The sort of canonical definition of AGI, which again is a term I don't love, is a system... It'll be good if every time you say AGI, you caveat that you dislike the term. It'll sink in, right? Yeah, people really enjoy that. I'm trying to get it in the training data, Ezra. A canonical definition of AGI is a system capable of doing almost any cognitive task a human can do.

I don't know that we'll quite see that in the next four years or so, but I do think we'll see something like that, where the breadth of the system is remarkable, but also its depth, its capacity to, in some cases, exceed human capabilities, kind of regardless of the cognitive discipline. Systems that can replace human beings in cognitively demanding jobs. Yeah, or key parts of cognitively demanding jobs, yeah. I will say, I am also pretty convinced we're on the cusp of this. So I'm not...

I'm not coming at this as a skeptic, but I still find it hard to mentally live in the world of it. So do I. So I used Deep Research recently, which is a new OpenAI product. It's sort of on their more pricey tiers. Most people, I think, have not used it. But it can build out something that's more like a scientific analytical brief in a matter of minutes.

I work with producers on the show. I hire incredibly talented people to do very demanding research work. And I asked it to do this report on the tensions between the Madisonian constitutional system and the sort of highly polarized, nationalized parties we now have. And what it produced in a matter of minutes was, I would at least say, the median of what any of the teams I've worked with on this could produce within days.

I've talked to a number of people at firms that do high amounts of coding, and they tell me that by the end of the year, by the end of next year, they expect most code will not be written by human beings. I don't really see how this cannot have labor market impact.

I think that's right. I'm not a labor market economist, but I think that the systems are extraordinarily capable. In some ways, I'm very fond of the quote: the future is already here, it's just unevenly distributed. And I think unless you are engaging with this technology,

you probably don't appreciate how good it is today. And then it's important to recognize today is the worst it's ever going to be. It's only going to get better. And I think that is the dynamic that in the White House, we were tracking and that... the next White House and our country as a whole is going to have to track and adapt to in really short order. And what's fascinating to me is that this is the first revolutionary technology that is not funded by the Department of Defense, basically.

And if you go back historically, the last hundred years or so: nukes, space, early days of the internet, early days of the microprocessor, early days of large-scale aviation, radar, GPS, the list is very, very long. All of that tech fundamentally comes from DoD money.

And it's the private sector inventing it, to be sure. But the central government role gave the Department of Defense and the U.S. government an understanding of the technology that, by default, it does not have in AI, and also gave the U.S. government a capacity to shape where that technology goes that, by default, we don't have in AI.

There are a lot of arguments in America about AI. The one thing that seems not to get argued over, that seems almost universally agreed upon and is the dominant, in my view, controlling priority in policy, is that we get to AGI, a term I've heard you don't like, before China does. Why? I do think there are profound economic and military...

and intelligence capabilities that would be downstream of getting to AGI or transformative AI. And I do think it is fundamental for U.S. national security that we continue to lead in AI. The quote that certainly I thought about a fair amount was actually from Kennedy in his famous Rice speech in '62, the going to the moon speech: “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard.”

Everyone remembers it because he's saying we're going to the moon. But actually, I think he gives the better line when he talks about the importance of space: “For space science, like nuclear science and all technology, has no conscience of its own. Whether it will become a force for good or ill depends on man. And only if the United States occupies a position of preeminence can we help decide whether this new ocean will be a sea of peace or a new terrifying theater of war.”

And I think that is true in AI, that there's a lot of tremendous uncertainty about this technology. I am not an AI evangelist. I think there's huge risks to this technology. But I do think there is a fundamental role for the United States in being able to shape where it goes.

Which is not to say we don't want to work internationally, which is not to say we don't want to work with the Chinese. It's worth noting that in the president's executive order on AI, there's a line in there saying we are willing to work even with our competitors on AI safety and the like. But it is worth saying that I think...

Pretty deeply, there is a fundamental role for America here that we cannot abdicate. Paint the picture for me. You say there'd be great economic, national security, military risks if China got there first. Help me help the audience here imagine a world where... China gets there first. So I think let's look at just a narrow case of AI for intelligence analysis and cyber operations.

It's pretty out in the open that if you had a much more powerful AI capability, that would probably enable you to do better cyber operations on offense and on defense. What is a cyber operation? Breaking into an adversary's network to collect information, which, if you're collecting in a large enough volume, AI systems can help you analyze. And we actually did a whole big thing through DARPA, the Defense Advanced Research Projects Agency,

called the AI Cyber Challenge, to test out AI's capabilities to do this. And I would not want to live in a world in which China has that capability on offense and defense in cyber, and the United States does not. And I think that is true in a bunch of different domains that are core to... national security competition. My sense already has been that most people, most institutions, are pretty hackable to a capable state actor. Not everything, but a lot of them.

And now both the state actors are going to get better at hacking and they're going to have much more capacity to do it in the sense that you can have many more AI hackers than you can human hackers. Are we just about to enter into a world where...

We are just much more digitally vulnerable as normal people. And I'm not just talking about people who the states might want to spy on. But, you know, you will get versions of these systems that just all kinds of bad actors will have. Do you worry it's about to get truly dystopic? What we mean canonically when we speak of hacking is finding a vulnerability in software and exploiting that vulnerability to get illicit access.

I think it is right that more powerful AI systems will make it easier to find vulnerabilities and exploit them and gain access. And that will yield an advantage to the offensive side of the ball. I think it is also the case that more powerful AI systems on the defensive side will make it easier to write more secure code in the first place, reduce the number of vulnerabilities that can be found, and to better detect the hackers that are coming in.

We tried as much as possible to shift the balance towards the defensive side of this. But I think it is right that in the coming years here, this sort of transition period we've been talking about, there will be a period in which sort of older legacy systems that don't have the advantage of the newest AI defensive techniques or software development techniques will, on balance, be more vulnerable to a more capable offensive actor. Which is what most people use.

I don't know if that's right, actually. I mean, you have an iPhone in your pocket, or a Google Pixel. People are often not that quick about updating. Yeah. I mean, the less tech literate you are, the more... Sure, I'm thinking more about, like... legacy power systems and server mainframes and the like that could be two decades old, that haven't been turned on for a long time. So that I think is where I feel the risk most acutely. I think...

For all of the risks that come with the monoculture of most people's personal tech platforms these days, one of the upsides is they do push security updates pretty regularly. They push them with new emojis that get people to download the updates. And on balance, I think people are probably better at patching their personal software now than they were.

15 years ago. It gets very annoying if you don't. The flip of that is the question, which I know a lot of people worry about, which is the security of the AI labs themselves. It is very, very, very valuable for another state to get the latest OpenAI system. And, you know, the people at these companies, and I've talked to them about this, on the one hand know this is a problem, and on the other hand, it's really annoying to work in a truly secure way.

I've worked in a SCIF for the last four years, a secure room where you can't bring your phone and all of that. That is annoying. There's no doubt about it. How do you feel about the vulnerability right now? Of AI labs? Yeah. I worry about it. I think there's a hacking risk here. I also, you know, if you hang out at the right San Francisco house party, they're not sharing the model, but they are talking to some degree about the techniques they use and the like, which have tremendous value.

I do think it is a case, to come back to this kind of intellectual through line, of this being national security relevant technology, maybe world-changing technology, that's not coming from under the auspices of the government and doesn't have the kind of government imprimatur of security requirements. That shows up in this way as well. We, in the national security memorandum the president signed, tried to signal this to the labs and tried to say to them: we, as the U.S. government,

want to help you in this mission. This was signed in October of 2024, so there wasn't a ton of time for us to build on that, but I think it's a priority for the Trump administration. I can't imagine anything that is more nonpartisan than protecting American companies that are inventing the future. There's a dimension of this that I find people bring up to me a lot, and it's interesting,

which is the processing of information. So compared to, you know, spy games between the Soviet Union and the United States, we all just have a lot more data now. We have all the satellite data. We, I mean, obviously we will not eavesdrop on each other, but obviously we eavesdrop on each other and have all these kinds of things coming in. And I'm...

told by people who know this better than I do that there's just a huge choke point of human beings and their currently fairly rudimentary programs analyzing that data. And that there's a view that what it would mean to have these truly intelligent systems that are able to inhale that and do pattern recognition is a much more significant change in the balance of power than people outside this understand.

Yeah, I think we were pretty public about this. And the president signed a national security memorandum, which is basically the national security equivalent of an executive order that says this is a fundamental area.

of importance for the United States. I don't even know the amount of satellite images that the United States collects every single day, but it's a huge amount. And we have been public about the fact that we simply do not have enough humans to go through all of this satellite imagery and it would be a terrible job if we did.

And there is a role for AI in going through these images of hotspots around the world, of shipping lines and all that, and analyzing them in an automated way and surfacing the most interesting and important ones for human review. I think at one level, you can look at this and say, well, doesn't software just do that? And I think that at some level, of course, is true.

At another level, you could say the more capable that software, the more capable the automation of that analysis, the more intelligence advantage you extract from that data. And that ultimately leads to a better position for the United States. I think the first and second order consequences of that are also striking. One thing it implies is that in a world where you have strong AI, the incentive for spying goes up.

Because if right now we are choked at the point of we are collecting more data than we can analyze, well, then each marginal piece of data we're collecting isn't that valuable. I think that's basically true. I think you need to have it. I firmly believe you need to have...

rights and protections that hopefully are pushing back and saying, no, there's key kinds of data here, including data on your own citizens, and in some cases citizens of allied nations, that you should not collect, even if there's an incentive to collect it. And for all of the flaws of the United States'

intelligence oversight process and all the debates we could have about this, we do have those kinds of structures. And that, I think, is fundamentally more important for the reason you suggest in the era of tremendous AI systems. How frightened are you by the national security implications of all this? Which is to say that...

the possibilities for surveillance states. So Sam Hammond, who's an economist at the Foundation for American Innovation, he had this piece months back called 95 Theses on AI. And one of them that I think about a lot is he makes this point that... A lot of laws right now, if we had the capacity for perfect enforcement, would be constricting, like extraordinarily constricting, right? Laws are written knowing that...

Human labor is scarce. And there's this question of what happens when the surveillance state gets really good? What happens when AI makes the police state a very different kind of thing than it is now? What happens when we have, like... warfare of endless drones, right? I mean, the company Anduril has become, like, a big... you know, you hear about them a lot now. They have a relationship, I believe, with OpenAI. Palantir is in a relationship with Anthropic. We're about to see a real...

change in this in a way that I think is, from the national security side, frightening. And there I very much get why we don't want China way ahead of us. I get that entirely. But just in terms of the capacities it gives our own government.

How do you think about that? Yeah, I would decompose essentially this question about AI and autocracy, or the surveillance state, however you want to define it, into two parts. The first is the China piece of this. How does this play out in a state that is truly, in its bones,

an autocracy and doesn't even make any pretense towards democracy and the like. And I think we can probably agree pretty quickly here: this makes very tangible something that, you know, is probably core to the aspirations of their society, a level of control that only an AI system could help bring about, that I just find terrifying.

As an aside, I think there's a saying in both Russian and Chinese, something like, heaven is high and the emperor is far away, which is like, historically, even in those autocracies, there was some kind of space where... the state couldn't intrude because of the scale and the breadth of the nation. And it is the case that in those autocracies, I think AI

could make the force of government power worse. Then there's a more interesting question in the United States. Basically, what is the relationship between AI and democracy? And I think I... share some of the discomfort here. There have been thinkers historically who've said, you know, part of the ways in which we revise our laws is people break the laws. And there's a space for that. And I think there is a humanness to our justice system that I wouldn't want to lose.

And we tasked the Department of Justice with running a process and thinking about this and coming up with principles for the use of AI in criminal justice. I think there's, in some cases, advantages to it: like cases are treated alike with the machine. But also, I think there's tremendous risk of bias and discrimination and so forth, because the systems are flawed, and in some cases because the systems are ubiquitous. And I do think there is a risk of a fundamental...

encroachment on rights from the widespread unchecked use of AI in the law enforcement system that we should be very alert to and that I, as a citizen, have grave concerns about. I find this all makes me incredibly uncomfortable. One of the reasons is that there is a... it's like we are summoning an ally, right? We are trying to build an alliance with another, like an almost interplanetary ally. And we are in a competition with China

to make that alliance, but we don't understand the ally. And we don't understand what it will mean to let that ally into all of our systems and all of our planning. As best I understand it, every company really working on this, every government really working on this.

believes that in the not-too-distant future, you're going to have much better and faster and more dominant decision-making loops by being able to make much more of this autonomous to the AI, right? Once you get to what we're talking about as AGI, you want to... turn over a fair amount of your decision-making to it. So we are rushing towards that because we don't want the other guys to get there first without really understanding what that is or what that means.

It seems like a potentially historically dangerous thing that AI reached maturation at the exact moment that the U.S. and China... are in this, like, Thucydides trap-style race for superpower dominance. That's a pretty dangerous set of incentives in which to be creating the next turn in

intelligence on this planet. Yeah, there's a lot to unpack here. So let's just go in order. But basically, bottom line, I think I, in the White House and now post-White House, greatly share a lot of this discomfort. And I think part of the appeal... for something like the export controls is it identifies a choke point that can differentially slow the Chinese down, create space for the United States to have a lead, ideally, in my view, to spend that lead on safety and...

coordination and not rushing ahead, including, again, potential coordination with the Chinese, while not exacerbating this arms race dynamic. I would not say that we... tried to race ahead in applications to national security. So part of the national security memorandum is a pretty lengthy kind of description of what we're not going to do.

Yeah, but you're not in power anymore. Well, that's a fair question. Now, they haven't repealed this. The Trump administration has not repealed this. But I do think it's fair to say that... for the period while we had power, the foundation we were trying to build with AI, we were very cognizant of the dynamic you were talking about, a race to the bottom on safety. And we were trying to guard against it, even as we tried to ensure a position of U.S. preeminence. Is there anything to the...

concern that by treating China as such an antagonistic competitor on this, who we will do everything, including export controls on advanced technologies, to hold back, we have made them into a more... intense competitor? I mean, I do not want to be naive about the Chinese system or the ideology of the CCP. Like, they want strength and dominance and to see the next era be a Chinese era. So maybe there's nothing you can do about this, but...

It is pretty damn antagonistic to try to choke off the chips for the central technology of the next era to the other biggest country. I don't know that it's pretty antagonistic to say we are not going to sell you the most advanced technology in the world. That does not in itself, that's not a declaration of war.

That is not even itself a declaration of a Cold War. I think it is just saying this technology is incredibly important. Do you think that's how they understood it? This is more academic than you want, but my, you know, academic research when I started as a professor was basically on the Thucydides trap, or what in academia we call a security dilemma, of how nations misunderstand each other. So I'm sure the Chinese and the United States misunderstand each other at some level in this area. But I think...

But I don't think they do. The plain reading of the facts is that not selling chips to them, I don't think is a declaration. But I don't think they do misunderstand us. I mean, maybe they see it differently, but I think you're being a little... Look, I'm aware of how politics in Washington works. I've talked to many people doing this. I've seen that

turn towards a much more confrontational posture with China. I know that Jake Sullivan and President Biden wanted to call this strategic competition and not a new Cold War. And I get all that. I think it's true. And also... We have just talked about, and you did not argue the point, that our dominant view is we need to get to this technology before they do.

I don't think they look at this like, oh, you know, like nobody would ever sell us the top technology. I think they understand what we're doing here. To some degree. I don't want to trigger this. I'm sure they do see it that way. On the other hand, we...

set up an AI dialogue with them. And, you know, I flew to Geneva and met them, and we tried to talk to them about AI safety and the like. So I do think in an area as complex as AI, you can have multiple things be true at the same time. I don't regret for a second the export controls. And I think, frankly, we are proud to have done them when we did them, because it has helped ensure that here we are, a couple years later, and we retain the edge in AI, for as good and talented as DeepSeek is.

Well, you say that. What made DeepSeek such a shock, I think, to the American system was here is a system that appeared to be trained on much less compute for much less money that was competitive at a high level with our frontier systems. How did you understand what DeepSeek was and what assumptions it required that we rethink or don't? Yeah, let's just take one step back. So we're tracking the history of DeepSeek here.

We'd been watching DeepSeek in the White House since November of '23 or thereabouts, when they put out their first coding system. And there's no doubt that the DeepSeek engineers are extremely talented. And their systems got better and better throughout 2024.

We were heartened when their CEO said, I think, that the biggest impediment to what DeepSeek was doing was not their inability to get money or talent, but their inability to get advanced chips. Clearly, they still did get some chips, some they bought legally, some they smuggled, so it seems. And then in...

December of '24, they came out with a system called version three, DeepSeek version three, which actually I think is the one that should have gotten the attention. It didn't get a ton of attention, but it did show they were making strong

algorithmic progress in basically making systems more efficient. And then in January of '25, they came out with a system called R1. R1 is actually not that unusual; no one would expect that to take a lot of computing power. It's just a reasoning system that extends the underlying

V3 system. That's a lot of nerd speak. The key thing here is, when you look at what DeepSeek has done, I don't think the media hype around it was warranted, and I don't think it changes the fundamental analysis of what we are doing. They still are constrained by computing power. We should tighten the screws and continue to constrain them. They're smart. Their algorithms are getting better.

But so are the algorithms of U.S. companies. And this, I think, should be a reminder that the chip controls are important. China is a worthy competitor here, and we shouldn't take anything for granted. But I don't think this is a time to say the sky is falling or the fundamental scaling laws have broken.

Where do you think they got their performance increases from? They have smart people. There's no doubt about that. We read their papers. They're smart people who are doing exactly the same kind of algorithmic efficiency work that companies like Google, Anthropic, and OpenAI are doing. One common argument I heard on the left, Lina Khan made this point actually in our pages, was that this proved our whole paradigm of AI development was wrong.

That it was saying we did not need all this compute, we did not need these giant mega companies. That this was showing a way towards, like, a decentralized, almost solarpunk version of AI development. And that, in a sense, the American system and imagination had been captured by, like, these three big companies. But what we're seeing from China was that that wasn't necessarily needed. We could do this on less energy,

fewer chips, less footprint. Do you buy that? I think two things are true here. The first is there will always be a frontier, or at least for the foreseeable future, there'll be a frontier that is computationally and energy intensive. And our companies, we want to be at that frontier. Those companies have very strong incentive to look for efficiencies, and they all do. They all want to get every single last

juice of insight from each squeeze of computation. But they will continue to need to push the frontier. And then, in addition to that, there'll be a kind of slower diffusion that lags the frontier, where algorithms get more efficient, fewer computer chips are required, less energy is required. We need, as America, to win both those competitions. One thing that you see around the export controls: the AI firms want the export controls.

The semiconductor firms don't. When DeepSeek rocked the U.S. stock market, it rocked it by making people question NVIDIA's long-term worth. And NVIDIA very much doesn't want these export controls. So you at the White House were, I'm sure, at the center of a bunch of this lobbying back and forth. How do you think about this? Every AI chip, every advanced AI chip that gets made will get sold.

The market for these chips is extraordinary right now and, I think, for the foreseeable future. So I think our view was we put the export controls on. But NVIDIA didn't think that. The stock market didn't think that. We put the export controls on, the first ones, in October 2022.

Nvidia's stock has 10x'd since then. I'm not saying we shouldn't do the export controls, but I want you to take the strong version of the argument, not the weak one. I don't think Nvidia's CEO is wrong. That if we say Nvidia cannot export its top chips to China... that in some mechanical way in the long run reduces the market for NVIDIA's chips.

Sure. I think the dynamic is right. I'm not suggesting otherwise; if they had a bigger market, they could charge on the margins more. That's obviously the supply and demand here. I think our analysis was, considering the importance of these chips and the AI systems they make to U.S. national security, it's a

trade-off that's worth it. And NVIDIA, again, has done very well since we put the export controls out. NVIDIA is currently trading, even post-DeepSeek, at something like 50 times earnings. So the market's continuing to expect they will grow. And I agree with that. This is Somini Sengupta. I'm a reporter for The New York Times. I've covered nine conflicts, written about earthquakes, terror attacks, droughts, floods, many humanitarian crises. My job is to bear witness.

Right now, I'm writing about climate change. And I'm trying to answer some really big and urgent questions about life on a hotter planet. Like, who is most vulnerable to climate change? Should we redesign our cities? Should we be eating differently? What happens to the millions of people who live by the coast as the oceans rise? To make sense of this, I talk to climate scientists, inventors, activists. Mostly, I document the impact of global warming.

And that impact is highly, highly unequal. My colleagues and I are doing our best to answer complicated questions like these, but we can't do that without our subscribers. If you'd like to subscribe, go to nytimes.com slash subscribe, and thank you. The Biden administration was also generally concerned with AI safety. I think it was influenced by people who care about AI safety. And that's created a kind of backlash from the accelerationists,

or what gets called the accelerationist side of this debate. So I want to play a clip for you from Marc Andreessen, who is obviously a very significant venture capitalist, a top Trump advisor, describing the conversations he had with the Biden administration on AI and how they sort of radicalized him in the other direction. Ben and I went to Washington in May of '24. And, you know, we couldn't meet with Biden because, as it turns out, at the time, nobody could meet with Biden.

But we were able to meet with senior staff. And so we met with very senior people in the White House, you know, in the inner core. And we basically relayed our concerns about AI. And their response to us was: yes, the national agenda on AI, as we will implement it in the Biden administration's second term, is we are going to make sure that AI is going to be only a function of two or three large companies. We will directly regulate and control those companies. There will be no startups.

This whole thing where you guys think you can just start companies and write code and release code on the internet, those days are over. That's not happening. The conversation he's describing there, were you part of that conversation? I met with him once. I don't know exactly, but I met with him once. Would that characterize the conversation he had with you?

He talked about concerns related to startups and competitiveness and the like. I think my view on this is you look at our record on competitiveness. It's pretty clear that we wanted a dynamic ecosystem. The AI executive order, which President Trump just repealed, had a pretty lengthy section on competitiveness.

The Office of Management and Budget memo, which governs how the U.S. government buys AI, had a whole carve-out in it, or a call-out in it, saying we want to buy from a wide variety of vendors. The CHIPS and Science Act has a bunch of things in there about competition.

I think our view on competition is pretty clear. Now, I do think there are structural dynamics related to scaling laws and the like that will force things towards big companies, that in many respects we were pushing against. I think our track record on competition is pretty clear.

I think the view that I understand him as arguing with, which is a view I've heard from people in the safety community, but it's not a view I'd necessarily heard from the Biden administration, was that you will need to regulate the frontier models of the biggest labs when they get sufficiently powerful. And in order to do that, you will need there to be controls on those models.

You just can't have the model weights and everything floating around so everybody can run this on, you know, their home laptop. I think that's the tension he's getting at. It gets at a bigger tension we'll talk about in a minute, which is how much to regulate this incredibly powerful and fast-changing technology such that, on the one hand, you're keeping it safe, but on the other hand, you're not overly slowing it down or making it impossible for smaller companies to comply

with these new regulations as they're using more and more powerful systems. Yeah, so in the president's executive order, we actually tried to wrestle with this question, and we didn't have an answer when that order was signed in October of '23.

And what we did on the open source question in particular, and I think we should just be precise here at the risk of being academic again, what we're talking about are open-weight systems. Can you just say what weights are in this context and then what open weights are? So when you have the training process for an AI system, you run this algorithm through this huge amount of computational power that processes the data. The output at the end of that training process is the weights.

Loosely speaking, and I stress this is the loosest possible analogy, they are roughly akin to the strength of connections between the neurons in your brain. And in some sense, you could think of this as the raw AI system. And when you have these weights, one thing that some companies like Meta and DeepSeek choose to do is they publish them out on the internet, which makes them, we call them open-weight systems. And the crucial thing about an open-weight system on the good side is that...

It's much easier to innovate with that system, to use it as a basis for future systems, because you've got access to the raw thing. On maybe the riskier side, any safeguards that were built into that system, to refuse when a user asks it to help develop a biological weapon, are pretty easy to remove. I'm a huge believer in the open source ecosystem. Many of the companies that publish the weights for their systems

do not make them open source. They don't publish the code and the like. So I don't think they should get the credit of being called open source systems, at the risk of being pedantic. But open-weight systems is something we thought a lot about in '23 and '24. And we... sent out a pretty wide-ranging request for comment to a lot of folks. We got a lot of comments back. And what we came to in the report that was published in July or so of '24 was basically that there was not evidence yet to constrain

the open-weight ecosystem, that the open-weight ecosystem does a lot for innovation and the like, which I think is manifestly true, but that we should continue to monitor this as the technology gets better basically exactly in the way that you described. So we're talking here a bit about the sort of race dynamic and the safety dynamic.

When you were getting those comments, not just on the open-weight models, but also when you were talking to the heads of these labs and people were coming to you, what did they want? What would you say was, like, the consensus, to the extent there was one, from AI world of what they needed to get there quickly? And also, because I know that many people in these labs are worried about what it would mean if these systems were unsafe.

What would you describe as their consensus on safety? I think I mentioned before this core intellectual insight: this technology, maybe for the first time in a long time, is a revolutionary one not funded by the government in its early incubator days. That was the theme from the labs, which is sort of a, like: don't you know, we're inventing something very, very powerful.

Ultimately, it's going to have implications for the kind of work you do in national security, the way we organize our society. And more than any kind of individual policy request, they were basically saying, get ready for this. The one thing that we did that could be the closest thing we did to any kind of regulation was one action, which was after the labs made voluntary commitments to do safety testing, we said, you have to share those safety test results with us.

And you have to help us understand where the technology is going. And that only applied really to the top couple labs. The labs never knew that was coming and weren't all thrilled about it when it came out. So the notion that this was kind of a regulatory capture, that we were asked to do this, is simply not true. But I, in my experience, never got...

discrete individual policy lobbying from the labs. I got much more: this is coming. It's coming much sooner than you think. Make sure you're ready. To the degree that they were asking for something in particular, it was maybe a corollary of that: we're going to need a lot of energy, and we want to do that here in the United States, and it's really hard to get the power here in the United States. But that has become a pretty big question, if this is all as potent as we think it will be

and you end up having a bunch of the data centers containing all the model weights and everything else in a bunch of Middle Eastern petrostates. Speaking hypothetically. Speaking hypothetically, because they will give you huge amounts of energy access in return for just at least having some purchase

on this AI world, which they don't have the internal engineering talent to be competitive in, but maybe can get some of it located there. And then there's some technology, right? There is something to this question. Yeah, and this is... actually, I think, an area of bipartisan agreement, which we can get to. But this is something that we really started to pay a lot of attention to in the later part of '23 and most of '24, when it was clear this was going to be a bottleneck.

And in the last week or so in office, President Biden signed an AI infrastructure executive order, which has not been repealed, which basically tries to accelerate the power development and the permitting of... power and data centers here in the United States, basically for the reason that you mentioned.

As someone who truly believes in climate change and environmentalism and clean power, I thought there was a double benefit to this, which is that if we did it here in the United States, it could catalyze the... clean energy transition. And these companies, for a variety of reasons, in general are willing to pay more for clean energy, on things like geothermal and the like.

Our hope was we could catalyze that development and bend the cost curve and have these companies be the early adopters of that technology so we'd see a win on the climate side as well. There are warring cultures around how to prepare for AI. And I sort of mentioned AI safety and AI accelerationism. And J.D. Vance just went to the sort of big AI summit in Paris. And I'll play a clip of what he said.

When conferences like this convene to discuss a cutting-edge technology, oftentimes I think our response is to be too self-conscious, too risk-averse. But never have I encountered a breakthrough in tech that so clearly calls us to do precisely the opposite. What do you make of that? So I think he is setting up a dichotomy there that I don't quite agree with.

The irony of that is, if you look at the rest of his speech, which I did watch, there's actually a lot that I do agree with. So he talks, for example... I think he's got four pillars in the speech. One's about centering the importance of workers. One's about American preeminence. And, like, those are

entirely consistent with the actions that we took and the philosophy that I think the administration, which I was a part of, espoused and that I certainly believe. Insofar as what he is saying is that safety and opportunity are in fundamental tension, then I disagree. And I think if you look at the history of technology and technology adaptation, the evidence is pretty clear that the right amount of safety action

unleashes opportunity and in fact unleashes speed. So one of the examples that we studied a lot and talked to the president about was the early days of railroads. And in the early days of railroads, there were tons of accidents and crashes and deaths. People were not inclined to use railroads as a result.

And then what started happening was safety standards and safety technology. Block signaling, so that trains could know when they were in the same area. Air brakes, so that trains could brake more efficiently. Standardization of train track widths and gauges and the like. This was not always popular at the time, but with the benefit of hindsight, it is very clear that that kind of technology, and to some degree policy development of safety standards,

made the American railroad system in the late 1800s. And I think this is a pattern that shows up a bunch throughout the history of technology. To be very clear, it is not the case that every safety regulation in every technology is good.

And there certainly are cases where you can overreach and you can slow things down and choke things off. But I don't think it's true that there's a fundamental tension between safety and opportunity. That's interesting, because I don't know how to get this point of regulation right. I think the counterargument to Vice President

Vance is nuclear. So nuclear power is a technology that both held extraordinary promise, maybe still does, and also you can really imagine every country wanting to be in the lead on. The series of accidents, most of which did not even have a particularly significant body count, were so frightening to people that the technology got...

regulated to the point that certainly all of nuclear's advocates believe it has been largely strangled in the crib from what it could be. The question then is, when you look at the actions we have taken on AI,

are we strangling it in the crib? Have we taken actions that are akin to that? I'm not saying that we've already done it. I'm saying that, look, if these systems are going to get more powerful and they're going to be in charge of more things, things are both going to go wrong and they're going to go weird. It's not possible for it to be otherwise.

To roll out something this new in a system as complex as human society. And so I think there's going to be this question of: what are the regimes that make people feel comfortable moving forward from those kinds of moments? Yeah, I think that's a profound question. I think what we tried to do in the Biden administration was set up the kind of institutions in the government to do that in as clear-eyed, tech-savvy a way as possible.

Again, with the one exception of the safety test results sharing, which some of the CEOs estimate cost them one day of employee work, we did not put anything close to regulation in place. We created something called the AI Safety Institute, purely national security focused: cyber risks, bio risks, AI accident risks, purely voluntary. And that has relationships, memoranda of understanding, with Anthropic, with OpenAI, even with xAI, Elon's company.

And basically, I think we saw that as an opportunity to bring AI expertise into the government, to build relationships between public and private sector in a voluntary way. And then, as the technology develops, it will be up to... now, the Trump administration to decide what they want to do with it. I think you are quite diplomatically understating, though, what's a genuine disagreement here. And what I would say Vance's speech was signaling was the arrival of a different culture

in the government around AI. There has been an AI safety culture where, and he's making this point explicitly, we have all these conferences about what could go wrong. And he is saying: stop it.

Yes, maybe things could go wrong, but instead we should be focused on what could go right. And I would say, frankly, this is, like, the Trump-Musk view, which I think is in some ways the right way to think about the administration. Their generalized view is: if something goes wrong, we'll deal with the thing that went wrong afterwards.

But what you don't want to do is move too slowly because you're worried about things going wrong. Better to break things and fix them than have moved too slowly in order not to break them. I think it's fair to say that there is a cultural difference between the Trump administration and us on some of these things. But we held conferences on what you could do with AI and the benefits of AI. We talked all the time about how you...

need to mitigate these risks, but you're doing so so you can capture the benefits. And I'm someone who... reads an essay like “Machines of Loving Grace,” by Dario Amodei, the CEO of Anthropic, which is basically about the upside of AI, and says there's a lot in here we can agree with. And the president's executive order said we should be using AI more in the executive branch.

You want a cultural difference? I get that. But I think when the rubber meets the road, we were comfortable with the notion that you could both realize the opportunity of AI while doing it safely. And now that they are in power, they will have to decide how they translate

Vice President Vance's rhetoric into a governing policy. And my understanding of their executive order is they've given themselves six months to figure out what they're going to do. And I think we should judge them on what they do. Let me ask about the other side of this, because what I liked about Vance's speech...

is I think he's right that we don't talk enough about opportunities. But more than that, we are not preparing for opportunities. So if you imagine that AI will have the effects and possibilities that its... backers and advocates hope, one thing that implies is that we're going to start having a much faster pace of the discovery or proposal of novel drug molecules, an area of very high promise.

The idea here, from people I've spoken to, is that it should be able to ingest an amount of information and build sort of modeling of diseases in the human body that could get us a much, much, much better drug discovery pipeline. If that were true, then you can ask this question:

well, what's the choke point going to be? And our drug testing pipeline is incredibly cumbersome. It's very hard to get the animals you need for trials, very hard to get the human beings you need for trials, right? You could do a lot to prepare it for a lot more coming in. You could think about human challenge trials, right? There are all kinds of things like this. And this is true in a lot of different domains, right? Education, et cetera.

I think it's pretty clear that the choke points will become the difficulty of doing things in the real world. And I don't see society also preparing for that, right? We're not doing that much on the safety side, maybe because we don't know what we should do. Also on the opportunity side, this question of how could you actually make it possible to translate the benefits of this stuff very fast seems like a much richer conversation than I've seen anybody seriously having.

Yeah, I think I basically agree with all of that. I think the conversation, when we were in the government, especially in '23 and '24, was starting to happen. We looked at the clinical trials thing. You've read about healthcare for however long; I don't claim expertise on healthcare. But it does seem to me that we want to get to a world where we can

take the breakthroughs, including breakthroughs from AI systems, and translate them to market much faster. This is not a hypothetical thing. It's worth noting, I think, quite recently, Google came out with, I think they called it, a co-scientist. NVIDIA and the Arc Institute, which does great work, had the most impressive biodesign model ever, one that has a much more detailed understanding of biological molecules. A group called Future House has done similarly great work.

So I don't think this is a hypothetical. I think this is happening right now. And I agree with you that there's a lot that can be done institutionally and organizationally to get the federal government ready for this.

I've been wandering around Washington DC this week and talking to a lot of people involved in different ways in the Trump administration or advising the Trump administration, different people from different factions of what I think is the modern right. I've been surprised how many people...

understand either what Trump and Musk and Doge are doing or at least what it will end up allowing as related to AI, including people I would not really expect to hear that from, not tech right people. But what they basically say is... There is no way in which the federal government as constituted six months ago moves at the speed needed to take advantage of this technology, either to integrate it into the way the government works or for the government to take advantage of what it can do.

That we are too cumbersome, endless interagency processes, too many rules, too many regulations. You have to go through too many people. That if the whole point of AI is that it is this unfathomable acceleration of cognitive work, the government needs to be stripped down and rebuilt to take advantage of it. And, like them or hate them, what they're doing is stripping the government down and rebuilding it.

Maybe they don't even know what they're doing it for, but one thing it will allow is a kind of creative destruction that you can then begin to insert AI into at a more ground level. Do you buy that? It feels kind of orthogonal to what I've observed from Doge. I mean, I think...

Elon is someone who does understand what AI can do, but I don't know how starting with USAID, for example, prepares the U.S. government to make better AI policy. So I guess I don't buy it that that is the motivation for Doge. Is there something to the broader argument? And I will say I do buy not the argument about Doge. I would sort of make the same point you just made. What I do buy is that I know how the federal government works pretty well. And it is too slow.

To modernize technology. It is too slow to work across agencies. It is too slow to radically change the way things are done and take advantage of things that can be productivity enhancing. I couldn't agree more. I mean, the existence of my job in the White House, the White House Special Advisor for AI, which David Sachs now is, and I...

had this job in 2023, existed because President Biden said very clearly, publicly and privately, we cannot move at the typical government pace. We have to move faster here. I think we probably need to be careful. I'm not here for stripping it all down, but I agree with you. We have to move much faster. So another major part of Vice President Vance's speech was signaling to the Europeans that we are not going to sign on to.

So another major part of Vice President Vance's speech was signaling to the Europeans that we are not going to sign on to complex multilateral negotiations and regulations that could slow us down, and that if they passed such regulations anyway, in a way that we believe penalizes our AI companies, we would retaliate. How do you think about the differing position the new administration is moving into vis-à-vis Europe and its broad approach to tech regulation?

Yeah, I think the honest answer here is that we had conversations with Europe as they were drafting the EU AI Act, but at the time that I was in government, the EU AI Act was still kind of nascent. The act had passed, but a lot of the actual details of it had been kicked to a process that, my sense is, is still unfolding. Speaking of slow-moving bureaucracies. Exactly, exactly.

I guess, and maybe this is a failing on my part, I did not have particularly detailed conversations with the Europeans beyond a general kind of articulation of our views. They were respectful. We were respectful. But I think it's fair to say we were taking a different approach than they were taking. And, insofar as safety and opportunity are a dichotomy, which I don't think they are a pure dichotomy, we were probably readier to move very fast in the development of AI.

One of the other things Vance talked about, and that you said you agreed with, is making AI pro-worker. What does that mean?

It's a vital question. I think we instantiate that in a couple of different principles. The first is that AI in the workplace needs to be implemented in a way that is respectful of workers and the like. And one of the things I know the president thought a lot about was that it is possible for AI to make workplaces worse, in a way that is dehumanizing, degrading and ultimately destructive for workers. So that is a first distinct piece of it that I don't want to neglect.

The second is that I think we want to have AI deployed across our economy in a way that increases workers' agency and capabilities. And I think we should be honest that there's going to be a lot of transition in the economy as a result of AI. I don't know what that will look like. You can find Nobel Prize-winning economists who will say it won't be much. You can find other folks who will say it'll be a ton.

I tend to lean towards the it's-going-to-be-a-lot side, but I'm not a labor economist. And the line that Vice President Vance used is the exact same phrase that President Biden used, which is: give workers a seat at the table in that transition. And I think that is a fundamental part of what we were trying to do here, and I presume what they're trying to do here.

So I've sort of heard you beg off on this question a little bit by saying you're not a labor economist. I am not a labor economist. I will promise you the labor economists do not know what to do about AI. You were the top advisor for AI. You were at the nerve center of the government's information about what is coming. If this is half as big as you seem to think it is, it's going to be the single most disruptive thing to hit labor markets ever, given how compressed the time period in which it will arrive is. It took a long time to lay down electricity. It took a long time to build railroads.

I think that is basically true, but I want to push back a little bit. I do think we are going to see a dynamic in which it will hit parts of the economy first. It will hit certain firms first.

It will be an uneven distribution across society. Well, I think it will be uneven. And that's, I think, part of what will be destabilizing about it. If it were just even, then you might just come up with an even policy to do something about it. Sure. But precisely because it's not even, it's not going to put, I don't think, 42% of the labor force out of work overnight. Let me give you an example of the kind of thing I'm worried about, and that I've heard other people worry about.

There are a lot of 19-year-olds in college right now studying marketing. There are a lot of marketing jobs that AI, frankly, can do perfectly well right now, as we get better at knowing how to direct it. I mean, one of the things that will slow this down is simply firm adaptation. Yes. But the thing that will happen very quickly is you have firms that are built around AI. It's going to be harder for the big firms to integrate it, but what you're going to have is new entrants who are built from the ground up, whose organization is built around one person overseeing these seven systems. So you might just begin to see triple the unemployment among marketing graduates. I'm not convinced you'll see that among software engineers, because I think AI is going to both, you know, take a lot of those jobs and also create a lot of those jobs, because there's going to be so much more demand for software. But you could see it happening somewhere there. There's just a lot of jobs that involve doing work behind a computer. And as companies absorb machines that can do work behind a computer for you, that will change their hiring.

You must have heard somebody think about this. You guys must have talked about this.

We did talk to economists and try to texture this debate in '23 and '24. I think the trend line is even clearer now than it was then. But we knew this was not going to be a '23 and '24 question. Frankly, to do anything robust about this is going to require Congress, and that was just not in the cards at all. So it was more of an intellectual exercise than it was a policy exercise. Policies begin as intellectual exercises. Yeah, I think that's fair.

I think the advantage of AI, which is in some ways a countervailing force here, though I hear you and I mostly agree with your argument about the size of this, is that it will increase the amount of agency individual people have. So I do think we will be in a world in which the 19-year-old or the 25-year-old will be able to use a system to do things they were not able to do before. And insofar as the thesis we're batting around here is that intelligence will become a little bit more commoditized, what will stand out more in that world is agency, the capacity to do things, initiative and the like. And I think that could, in the aggregate, lead to a pretty dynamic economy. The economy you're talking about, of small firms and a dynamic ecosystem and robust competition, is, I think, on balance, at the scale of the whole economy, not in itself a bad thing. Where I imagine you and I agree, and maybe Vice President Vance agrees as well, is that we need to make sure that individual workers and classes of workers are protected in that transition. And I think we should be honest: that's going to be very hard. We have never done that well.

Couldn't agree with you more. Like, in a big way, Donald Trump is president today because we did a shitty job on this with China. That's kind of the way I'm pushing on this: we have been talking about this, seeing this coming, for a while.

And I will say that as I look around, I do not see a lot of useful thinking here. And I grant that we don't know the shape of it. At the very least, I would like to see some ideas on the shelf for what we should think about doing if the disruptions are severe. We are so addicted in this country to an economically useful tale that our success is in our own hands that it makes it very hard for us to react with, I think, either compassion or realism when workers are displaced for reasons that are not in their own hands: because of global recessions or depressions, because of globalization. There are always some people with, like, the agency, the creativity, and they become hyperproductive. And, you know, look at them. Why aren't you them? I'm definitely not doing that. I know you're not saying that. But that way of looking at the economy is so ingrained in America that we have a lot of trouble doing it. You know, we should do some retraining, right? Are all these people going to become nurses? Right? I mean, there are things that AI can't do. Like, how many plumbers do we need? I mean, more than we have, actually.

But does everybody move into the trades? What were the intellectual thought exercises of all these smart people at the White House who believed this was coming? You know, what were you saying? So I think, yes, we were thinking about this question.

I think we knew it was not going to be a question we were going to confront in the president's term, and we knew it was a question you would need Congress to do anything about. And insofar as what you're expressing here seems to me to be a deep dissatisfaction with the available answers, I share that. I think a lot of us shared that. You can get the usual stock answers, like a lot of retraining, and I share your doubts that that is the answer.

You probably talk to some Silicon Valley libertarians or tech folks, and they'll say, well, universal basic income. I believe, and I think the president believes, there's a kind of dignity that work brings. It doesn't have to be paid work, but there needs to be something that people do each day that gives them meaning. Insofar as what you were saying is that you have a discomfort with where this is going on the labor side:

Speaking for myself, I share that, even though I don't know the shape of it.

I guess I would say, more than that, I have a discomfort with the quality of thinking right now, sort of across the board, but I will say on the Democratic side, right? Because I have you here as a representative of the past administration. I have a lot of disagreements with the Trump administration, to say the least. But I do understand the people who say, look, at the very highest levels of that administration are people, Elon Musk, David Sacks, Marc Andreessen, J.D. Vance, who spend a lot of time thinking about AI and have considered very unusual thoughts about it. And I think sometimes Democrats are a little bit institutionally constrained against thinking unusually. So I'm hearing a lot here. I take your point on the export controls. I take your point on the executive orders and the AI Safety Institute. But to the extent Democrats are, want to be, imagine themselves to be the party of the working class, and to the extent we've been talking for years about the possibility of AI-driven displacements, yeah, when things happen, you need Congress, but you also need thinking that becomes policy in Congress. So I guess I'm trying to push: was this not being talked about? There were no meetings? You guys didn't have Claude write up a brief of options?

Well, you know, we definitely didn't have Claude write up a brief, because we had rules about government use of AI. See, but that's itself slightly damning.

Yeah, I mean, Ezra, I agree that the government has to be more forward-leaning on basically all of these dimensions. It was my job to push the government to do that. And I think on things like government use of AI, we made some progress.

I don't think anyone from the Biden administration, least of all me, is coming out and saying we solved it. What we're saying is that we were building a foundation for something that is coming, something that was not going to arrive during our time in office, and that the next team is going to have to address as a matter of American national security, and in this case, American economic strength and prosperity.

I will say this gets to something I find frustrating in the policy conversation about AI. You sit down with somebody and you start the conversation, and they're like, the most transformative technology, perhaps in human history, is landing in human civilization in a two- to three-year time frame. And you say, wow, that seems like a really big deal. What should we do? And then things get a little hazy. Now, maybe we just don't know. But what I've heard you kind of say a bunch of times is, look, we have done very little to hold this technology back. Everything is voluntary. You know, the only thing we asked was a sharing of safety data. Now in come the accelerationists; you know, Marc Andreessen has criticized you guys extremely straightforwardly.

Is this policy debate about anything? Is it just the sentiment of the rhetoric? Like, if it's so fucking big, why can nobody quite explain what it is we need to do or talk about, except for maybe chip export controls? Are we just not thinking creatively enough? Is it just not time? Match the kind of calm, measured tone of the second half of this conversation with where we started, for me.

I think there should be an intellectual humility here. Before you take a policy action, you have to have some understanding of what it is you're doing and why. So I think it is entirely intellectually consistent to look at a transformative technology, draw the lines on the graph and say this is coming pretty soon, without having the 14-point plan for what we need to do in 2027 and 2028. I think chip controls are unique in that they are a robustly good thing we could do early to buy the space I talked about before.

But I also think we tried to build institutions, like the AI Safety Institute, that would set the new team up, whether it was us or someone else, for success in managing the technology. Now that it's them, they will have to decide, as this technology comes on board, how they want to calibrate regulation.

What are the kinds of decisions you think they will have to make in the next two years?

You mentioned the open-source one. I have a guess where they're going to land on that, but I think there's a rich intellectual debate there. We resolved it one way, by not doing anything. They'll have to decide if they want to keep doing that. Ultimately, they'll have to answer a question about the relationship between the public sector and the private sector. Is it the case, for example, that the kinds of things that are voluntary now with the AI Safety Institute will someday become mandatory? Another key decision: we tried to get the ball rolling on the use of AI for national defense in a way that is consistent with American values.

They will have to decide what that continues to look like, and whether they want to take away some of the safeguards we put in place in order to go faster. So I think there really is a bunch of decisions they are teed up to make over the next couple of years, and we can appreciate that those are coming on the horizon without me sitting here saying I know with certainty what the answer is going to be in 2027.

And then I'll ask our final question: What are three books you'd recommend to the audience?

One of the books is The Structure of Scientific Revolutions by Thomas Kuhn. This is the book that coined the term paradigm shift, which is basically what we've been talking about throughout this whole conversation: a shift in technology and scientific understanding, and its implications for society. And I like how Kuhn, in this book, which was written in the 1960s, gives a series of historical examples and theoretical frameworks for how to think about a paradigm shift.

And then another book that has been very valuable for me is Rise of the Machines by Thomas Rid. That book tells the story of how machines that were once the playthings of dorks like me became, in the '60s and '70s and '80s, things of national security importance. We've talked about some of the revolutionary technologies here, the internet, microprocessors and the like, that emerged out of this intersection between national security and tech development. And I think that history should inform the work we do today.

And the last book is definitely an unusual one, but I think it's vital: A Swim in a Pond in the Rain by George Saunders. He's this great essayist and short story writer and novelist, and he teaches Russian literature. In this book, he takes seven Russian short stories and gives a literary interpretation of them. What strikes me about this book is that it's fundamentally the most human endeavor I can think of. He's taking great human short stories and giving a modern interpretation of what those stories mean.

And I think when we talk about the kinds of cognitive tasks that are a long way off for machines, I kind of, at some level, hope this is one of them, that there is something fundamentally human that we alone can do. I'm not sure if that's true, but I hope it's true.

I'll say I had him on the show for that book. It's one of my favorite episodes ever. People should check it out. Ben Buchanan, thank you very much. Thanks for having me. Bye.

This episode of The Ezra Klein Show was produced by Rollin Hu. Fact-checking by Michelle Harris, with Mary Marge Locker and Kate Sinclair. Mixing by Isaac Jones, with Efim Shapiro and Aman Sahota. Our supervising editor is Claire Gordon.

The show's production team also includes Elias Isquith, Kristin Lin and Jack McCordick. Original music by Pat McCusker. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser.
