...and then swipe over to read today's headlines. There's an... article next to a recipe next to games. And it's just easy to get everything in one place. This app is essential. The New York Times app. All of The Times, all in one place. Download it now at nytimes.com slash app. So I went to two AI events this weekend. They were sort of polar opposites of the AI spectrum. The effective altruists had their big annual conference. And then on Friday night, I went out.
You'd be very proud of me. I stayed out so late. I stayed out till 2 a.m. Oh, my. I went to an AI rave that was sort of... unofficially affiliated with Mark Zuckerberg. It was called the Zuckrave. Now, when you say unofficially affiliated, Mark Zuckerberg had no involvement in this, and my assumption is he did not know it was happening. Correct. A better word for what his involvement is would be no involvement.
It was sort of a tribute rave to Mark Zuckerberg thrown by a bunch of accelerationists, people who want AI to go very fast. Another word for it would be using his likeness without permission. Yes. But that happens to famous people sometimes. Yes. So at the... I would say there was not much raving going on.
There was a dance floor, but it was very sparsely populated. They did have like a thing there that would like, it had a camera pointing at the dance floor. And if you sort of stood in the right place, it would turn your face into Mark Zuckerberg's like on a big screen. Which let's just say is not something you want to happen to you while you're on mushrooms. Because that can be a very destabilizing event. Yes. There was a train, an indoor toy train that you could ride on. It was...
going actually quite fast. What was the point of this rave? To do drugs. That was the point of this rave. I'm Kevin Roose, a tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, Anthropic CEO Dario Amodei returns to the show for a supersized interview about the new Claude, the AI race against China, and his hopes and fears for...
the future of AI. Then we close it out with a round of HatGPT. Big show this week, Kevin. Casey, have you noticed that the AI companies... do stuff on the weekends now? Yeah, whatever happened to just five days a week? Yes, they are not respectful of reporters and their work hours. Companies are always announcing stuff on Saturdays and Sundays and in different time zones. It's a big pain. It really is.
This past weekend, I got an exciting message on Sunday saying that Dario Amodei, the CEO of Anthropic, had some news to talk about, and he wanted to come on Hard Fork to do it. Yeah, and around the same time, I got an email from Anthropic telling me I could preview their latest model. And so I spent the weekend actually trying it out.
Yeah, so longtime listeners will remember that Dario is a repeat guest on this show. Back in 2023, we had him on to talk about his work at Anthropic and his vision of AI safety and where all of this was headed. I was really excited to talk to him again for a few reasons. One, I just think he's a very interesting and thoughtful guy. He's been thinking about AI for longer than almost anyone. He was writing papers about... potentially scary things in AI safety all the way back in
2016. He's been at Google. He's been at OpenAI. He's now the CEO of Anthropic. So he is really the ultimate insider when it comes to AI. And you know, Kevin, I think Dario is an important figure for another reason, which is that of all of...
the folks leading the big AI labs, he is the one who seems the most publicly worried about the things that could go wrong. That's been the case with him for a long time. And yet over the past several months, as we've noted on the show, it feels like the pendulum
has really swung away from caring about AI safety to just this sort of go, go, go accelerationism that was embodied by the speech that Vice President J.D. Vance gave in France the other day. And for that reason, I think it's important to bring him in here and maybe see if we can swing
that pendulum back a little bit and remind folks of what's at stake here. Yeah, or at least get his take on the pendulum swinging and why he thinks it may swing back in the future. So today we're going to talk to Dario about... the new model that Anthropic just released, Claude 3.7 Sonnet. But we also want to have a broader conversation because there's just so much going on in AI right now.
And Kevin, something else that we should note, something that is true of Dario this time that was not true the last time that he came on the show is that my boyfriend now works at his company. Yeah, Casey's manthropic is at Anthropic. My manthropic is at Anthropic. And I have a whole sort of long disclosure about this that you can read at platformer.news slash ethics. Might be worth doing this week. You know, we always like reminding folks of that. Yep. All right.
With that, let's bring in Dario Amodei. Dario Amodei, welcome back to Hard Fork. Thank you for having me again. Returning champion. So tell us about Claude 3.7. Tell us about this new model. Yes. So we've been working on this model for a while. We...
basically, you know, had in mind two things. One was that, you know, of course, there are these reasoning models out there that have been out there for a few months, and we wanted to make one of our own, but we wanted the focus to be a little bit different. In particular...
A lot of the other reasoning models in the market are trained primarily on math and competition coding, which are, you know, objective tasks where you can measure performance. I'm not saying they're not impressive, but they're sometimes less relevant to tasks in the real world or the economy. Even within coding, there's really a difference between competition coding and doing something in the real world. And so we trained Claude 3.7, you know, more to focus on these real-world tasks.
We also felt like it was a bit weird that in the reasoning models that folks have offered, it's generally been there's a regular model and then there's a reasoning model. This would be like if a human had two brains and it's like you're like you can –
you can talk to brain number one if you're asking me a quick question, like what's your name? And you're talking to brain number two if you're asking me to like prove a mathematical theorem because I have to like sit down for 20 minutes. It'd be like a podcast where there's two hosts, one of whom just likes to yap and one of whom actually thinks.
before he talks. Oh, come on! Brutal. No comment. No comment on any relevance to it. So what differences will users of Claude notice when they start using 3.7 compared to previous models? Yes. So a few things. It's going to be better in general.
including better at coding, which, you know, Claude models have always been the best at coding, but 3.7 took a further step up. In addition to just the properties of the model itself, you can put it in this extended thinking mode. It's basically the same model, but you're just saying, operate in a way where you can think for longer. And if you're an API user, you can even say...
Here's the boundary in how long you can think. And just to clarify, because this may confuse some people, what you're saying is the sort of new Claude is this hybrid model. It can sometimes do reasoning, sometimes do quicker answers. But if you want it to think for even longer, that's... That is a separate mode.
Thinking and reasoning are sort of separate modes? Yes. Yes. So basically the model can just answer as it normally would, or you can give it this indication that it should think for longer, and even give it further direction. The evolution would be the model decides for itself
what the appropriate time to think is, right? Humans are like that, or at least can be like that, right? If I ask you your name, you know, you're not like, huh, how long should I think? Give me 20 minutes, right, to, you know, to determine my name. But if I say, hey, I'd like you to do an analysis of this stock or I'd like you to prove this mathematical theorem.
You know, humans who are able to do that task, they're not going to try and give an answer right away. They're going to say, OK, well, that's going to take a while and then we'll need to write down the task. This is one of my main beefs with today's like language models and AI models in general is, you know.
I'll be using something like ChatGPT and I'll forget that I'm in, like, the hardcore reasoning mode, and I'll ask it some stupid question. Like, you know, how do I change the settings on my water heater? And it'll go off and think for four minutes. And I'm like, I didn't actually need
a treatise on, like, adjusting the temperature of the water heater. You know, consideration one. So how long do you think it'll be before the models can actually do that kind of routing themselves, where you'll ask a question and it'll say, it seems like you need about a three-
minute-long thinking process for this one versus maybe a 30-second one for this? Yeah. So, you know, I think our model is kind of a step towards this. Even in the API, if you give it a bound on thinking, you know, you say, you can think for 20,000 words or something. On average, when you give it up to 20,000 words, most of the time it doesn't even use 20,000 words. And sometimes it'll give a very short response.
Because when it knows that it doesn't get any gain out of thinking further, it doesn't think for longer. But it's still valuable to give a bound on how long it'll think. So we've kind of taken a big step in that direction, but we're not to where we want to be yet.
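For listeners who want to picture what that API-level bound looks like in practice, here is a minimal sketch using Anthropic's Python SDK. The model identifier, parameter names, and budget values below are assumptions drawn from Anthropic's public documentation around the 3.7 release, not details specified in this conversation, so treat it as illustrative rather than official.

```python
# Minimal sketch of the "extended thinking" budget Dario describes (assumed API shape).
# The model ID, parameter names, and numbers come from Anthropic's public docs,
# not from this episode; adjust them to whatever the current docs recommend.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # assumed model identifier for Claude 3.7 Sonnet
    max_tokens=20000,                    # total output budget (must exceed the thinking budget)
    thinking={
        "type": "enabled",
        "budget_tokens": 16000,          # upper bound on thinking; the model may use far less
    },
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
)

# The reply interleaves "thinking" blocks (the model's reasoning) with ordinary text blocks.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200], "...")
    elif block.type == "text":
        print(block.text)
```

As Dario notes, the budget is a ceiling, not a target: on an easy question the same call can come back almost immediately, using only a fraction of the allowance.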
When you say it's better at real-world tasks, what are some of the tasks that you're thinking of? Yeah, so I think above all coding, you know, Claude models have been very good for real-world coding. You know, we have a number of...
You know, customers from Cursor to GitHub to Windsurf, Codeium, to Cognition to Vercel to, I'm sure I'm leaving some out there. These are the vibe coding apps. Or just the coding apps, period. The coding apps, period. And, you know, there are many different kinds of coding apps. We also released this thing called Claude Code, which is more of a command-line tool. But I think also on things like, you know, complex instruction following, or just, like, here, I want you to understand this document
Or, you know, I want you to use this series of tools. The reasoning model that we've trained called 3.7 Sonnet is better at those tasks too. Yeah. One thing the new Claude Sonnet is not doing, Dario, is accessing the internet. Yes. Why not? And what would cause you to change that? Yes. So I think I'm on record saying this before, but web search is coming very soon. We will have web search very soon. We recognize that as an oversight. You know, I think in general, we tend to be more
enterprise-focused than consumer-focused. And this is more of a consumer feature, although it can be used on both. But, you know, we focus on both and this is coming. Got it. So you've named this model 3.7. The previous model was 3.5. You quietly updated it last year, and insiders were calling that one 3.6. Respectfully, this is driving all of us insane. What is going on with AI model names? We are the least...
insane, although I recognize that we are insane. So look, I think our mistakes here are relatively understandable. We made a 3.5 Sonnet. We were doing well in there. We had the 3.0s and then the 3.5s. I recognize the 3.5 new was a misstep. It actually turns out to be hard to change the name in the API, especially when there's all these partners and surfaces you offer it to. You can figure it out. I believe in you. No, no, no. It's harder than training the model.
I'm telling you. So we've kind of retroactively and informally named the last one 3.6 so that it makes sense that this one is 3.7. Right. And we are reserving Claude 4 Sonnet, and maybe some other models in the sequence, for things that are really quite substantial leaps. Sometimes when the models... And those models are coming, by the way. Okay. Got it. Coming when? Yeah, so I should talk a little bit about this. So all the models we've released so far...
are actually not that expensive, right? You know, I did this blog post where I said they're in the few tens of millions of dollars range at most. There are bigger models than that coming. They take a long time, and sometimes they take a long time to get right. But those bigger models, they're coming. For others, I mean, they're rumored to be coming from competitors as well. But we are not too far away from releasing a model that's a bigger base model.
Most of the improvements in Claude 3.7 Sonnet, as well as Claude 3.6 Sonnet, are in the post-training phase. Okay. But we are working on stronger base models. And, you know, perhaps that'll be the Claude 4 series. Perhaps not. We'll see. But, you know, I think those are coming in a relatively small number of time units. A small number of time units. I'll put that on my calendar. Remind me to check in on that in a few time units, Kevin.
I know you all at Anthropic are very concerned about AI safety, and the safety of the models that you're putting out into the world. I know you spend lots of time thinking about that and red-teaming the models internally. Are there any new capabilities that Claude 3.7 Sonnet has that are dangerous or that might...
worry someone who is concerned about AI safety. So not dangerous per se. And I always want to be clear about this because I feel like there's this constant conflation of present dangers with future dangers. It's not that there aren't present dangers, and, you know, there are always kind of normal tech risks, normal tech policy issues. I'm more worried about
the dangers that we're going to see as models become more powerful. And I think those dangers, you know, when we talked in 2023, I talked about them a lot. You know, I think I said, I even testified in front of the Senate about things like,
you know, misuse risks with, for example, biological or chemical warfare, or the AI autonomy risks. I said, particularly with the misuse risks, I said, I don't know when these are going to be here, when these are going to be real risks, but it might happen in 2025 or 2026. And now that we're in kind of early 2025, the very beginning of that period, I think the models are starting to get closer to that. So in particular, in Claude 3.7 Sonnet,
as we wrote in the model card, we always do these, you could almost call them, like, you know, trials with a control, where we have, you know, some human who doesn't know much about some area like biology. And we basically see how much does the model help them
to engage in some mock bad workflow, right? We'll change a couple of the steps, but some mock bad workflow. How good is a human at that, assisted by the model? Sometimes we even do wet lab trials in the real world where they mock-make something bad, as compared to
the current technological environment, right? What they could do on Google or with a textbook or just what they could do unaided. And we're trying to get at, does this enable some new threat vector that wasn't there before? I think it's very
important to say this isn't about, like, oh, did the model give me the sequence for this thing? Did it give me a cookbook for making meth or something? That's easy. You can do that with Google. We don't care about that at all. We care about this kind of esoteric, highly uncommon knowledge that, say, only a, you know, a virology PhD or something has.
How much does it help with that? And if it does, you know, that doesn't mean we're all going to die of the plague tomorrow. It means that a new risk exists in the world, a new threat vector exists in the world, as if you just made it, you know, easier to build a nuclear weapon. You invented something where, you know, the amount of plutonium you needed was lower than it was before. And so we measured Sonnet 3.7 for these risks, and
the models are getting better at this. They're not yet at the stage where we think that there is a real and meaningful increase in the threat end to end, right, to do all the tasks you need to do to really do something dangerous. However, we said in the model card that we assessed a substantial probability that the next model, or, you know, a model over the next, I don't know, three months, six months, could be there. And then our
safety procedure, our responsible scaling procedure, which is focused mainly on these, you know, very large risks, would then kick in, and, you know, we'd have kind of additional security measures and additional deployment measures, you know, designed
you know, particularly against these very narrow risks. Yeah. I mean, just to really underline that, you're saying in the next three to six months, we are going to be in a place of medium risk in these models, period. Presumably, if you were in that place, a lot of your competitors are also going to be in that place.
What does that mean practically? Like, what does the world need to do if we're all going to be living in medium risk? I think at least at this stage, you know, it's not a huge change to things. It means that there's a narrow set of things that models are capable of, if not mitigated, that, you know, would somewhat increase the risk of something really dangerous or really bad happening. You know, like, put yourself in the eyes of a law enforcement officer or, you know, the FBI.
You know, it doesn't mean the end of the world, but it does mean that anyone, anyone who's involved in industries where this risk exists, should take a precaution against that risk in particular. Got it. And so, I don't know. I mean, you know, I could be wrong. It could take much longer. You can't predict what's going to happen. But, you know, I think, contrary to the environment that
we're seeing today of worrying less about the risks, the risks in the background have actually been increasing. We have a bunch more safety questions, but I want to ask two more about kind of innovation and competition first. Yeah. Right now, it seems like no matter how innovative...
any given company's model is, those innovations are copied by rivals within months or even weeks. Does that make your job harder? And do you think it is going to be the case indefinitely? I don't know that innovations are necessarily copied exactly. What I would say is that the pace of innovation among a large number of competitors is very fast. There's four or five, maybe six companies who are innovating very quickly and producing models very quickly.
If you look, for example, at Sonnet 3.7, you know, the way we did the reasoning models is different from what was done by competitors. The things we emphasized were different. Even before then, the things Sonnet 3.5 is good at are different than the things other models are good at. People often talk about competition, commoditization, costs going down, but...
The reality of it is that the models are actually relatively different from each other, and that creates differentiation. Yeah, I mean, we get a lot of questions from listeners about, you know, if I'm going to subscribe to one AI tool, what should it be? You know, these are the things that I use it for. And I have a hard time answering them, because I find for most use cases, the models all do a relatively decent job of answering the questions. It really comes down to things like
which model's personality do you like more. Do you think that people will choose AI models, consumers, on the basis of capabilities? Or is it going to be more about... how it makes them feel, how it interacts with them? I think it depends which consumers you mean. You know, even among consumers, there are people who use the models for tasks that are complex in some way.
There are folks who are kind of independent, who want to analyze data. That's maybe kind of like the prosumer side of things, right? And I think within that, there's a lot to go in terms of capabilities. The models can be so much better than they are at helping you with anything that's focused on kind of productivity or even a complex task like planning a trip.
Even outside that, you know, if you're just trying to make a personal assistant to manage your life or something, we're pretty far from that. You know, from a model that sees every aspect of your life and is able to kind of holistically give you advice and kind of be a helpful assistant to you. And I think there's differentiation within that. The best assistant for me might not be the best assistant for some other person. I think one area where the models will be good enough is...
If you're just trying to use this as a replacement for Google search, or as quick information retrieval, which I think is how it's being used by kind of the mass market, free users, hundreds of millions of users, I think that's very commoditizable.
I think the models are kind of already there and are just diffusing through the world. But I don't think those are the interesting uses of the model. And I'm actually not sure a lot of the economic value is there. I mean, is part of what I'm hearing that
if and when you develop an agent that is, let's say, a really amazing personal assistant, the company that figures that out first is going to have a big advantage, because other labs are going to just have a harder time copying it? It's going to be less obvious to them how to recreate it. And when they do recreate it, they won't recreate it exactly. They'll do it their own way, in their own style, and it'll be suitable for a different set of people.
So I guess I'm saying the market is more segmented than you think it is. It looks like it's all one thing, but it's more segmented than you think it is. Got it. So let me ask the competition question that brings us into safety. You recently wrote a really interesting post about DeepSeek, the DeepSeek mania, and you were arguing in part that the cost reductions that they had figured out were basically in line with what...
They were basically in line with how costs had already been falling. But you also said that DeepSeek should be a wake-up call because it showed that China is keeping pace with frontier labs in a way that the country hadn't been up until now. So why is that notable to you, and what do you think we...
need to do about it? Yeah. So I think this is less about commercial competition, right? I worry less about DeepSeek from a commercial competition perspective. I worry more about them from a national competition and national security perspective. I think where I'm coming from here is, you know, we look at the state of the world and, you know, we have these autocracies like China and Russia.
And I've always worried, I've worried maybe for a decade, that AI could be an engine of autocracy. If you think about repressive governments, the limits to how repressive they can be are generally set by what they can get their enforcers, their human enforcers, to do. But if their enforcers are no longer human, that starts painting some very dark possibilities. And so, you know, this is an area that I'm therefore very concerned about,
where I want to make sure that liberal democracies have enough leverage and enough advantage in the technology that they can prevent some of these abuses from happening. And kind of, you know, also prevent our adversaries from putting us in a bad position with respect to the rest of the world or, you know, even threatening our security. You know, there's this kind of, I think, weird and awkward feature that
it's companies in the U.S. that are building this. It's companies in China that are building this. But we shouldn't be naive. Whatever the intention of those companies, particularly in China, there's a governmental component to this. And so I'm interested in making sure that the autocratic countries don't get ahead from a military perspective. I'm not trying to deny them the benefits of the technology. There are enormous health benefits that, you know, I want to make sure
make their way everywhere in the world, including the poorest areas, including areas that are under the grip of autocracies. But I don't want the autocratic governments to have a military advantage. And so, you know, things like the export controls, which I discussed in that post, are one of the things we can do to prevent that.
Which, actually, the Trump administration is considering tightening, tightening the export controls. I was at an AI safety conference last weekend, and one of the critiques I heard some folks in that universe make of Anthropic, and maybe of you in particular, was that they saw posts like the one you wrote about DeepSeek as effectively promoting this AI arms race with China, insisting that... America has to be the first to reach powerful AGI or else. And they worry that...
some corners might get cut along the way, that there are some risks associated with accelerating this race in general. What's your response to that? Yeah, I kind of view things differently. So my view is that... If we want to have any chance at all, so the default state of nature is that things go at maximum speed. If we want to have any chance at all to not go at maximum speed, the way the plan works is the following.
Within the U.S. or within democratic countries, you know, these are all countries that are under the rule of law, more or less. And therefore, we can pass laws. We can get companies to make
agreements with the government that are enforceable, or make safety commitments that are enforceable. And so if we have a world where there's these different companies that, you know, in the kind of default state of nature, would go as fast as possible, then through some mixture of voluntary commitments and laws,
we can get ourselves to slow down if the models are too dangerous. And that's actually enforceable, right? You can get everyone to cooperate in the prisoner's dilemma if you just point a gun at everyone's head. And you can. That's what the law ultimately is. But I think that all gets thrown out the window in the world of international competition. There is no one with the authority to enforce any agreement between the U.S. and China, even if one were to be made. And so my worry is if—
If the U.S. is a couple years ahead of China, we can use that couple years to make things safe. If we're even with China. You know, there's no promoting an arms race. That's what's going to happen. The technology has immense military value. Whatever people say now, whatever nice words they say about cooperation, I just don't see how once people...
fully understand the economic and military value of the technology, which I think they mostly already do, I don't see any way that it turns into anything other than the most intense race. And so what I can think of... to try and give us more time is if we can slow down the authoritarians. It almost obviates the trade-off. It gives us more time to work out among us, among OpenAI, among Google, among X.AI, how to make these models safe. Now, could we at some point convince
the authoritarians, convince, for example, the Chinese, that the models are actually dangerous and, you know, that we should have some agreement and come up with some way of enforcing it? I think we should actually try to do that as well. I'm supportive of trying to do that,
but it cannot be the plan A. It's just not a realistic way of looking at the world. These seem like really important questions and discussions, and it seems like they were mostly not being had at the AI Action Summit in Paris that you and Kevin attended a couple of weeks back. What the heck was going on with that summit? Yeah, I mean, you know, I have to tell you I was deeply disappointed in the summit.
It had the environment of a trade show and was very much out of step with the spirit of the original summit that was created at Bletchley Park by the UK government. Bletchley did a great job and the UK government did a great job where, you know, they didn't introduce a bunch of onerous regulations, certainly before they knew what they were doing. But they said, hey, let's convene these summits to discuss the risks. I thought that was very good.
I think that's gone by the wayside now, and it's part of maybe a general move towards, you know, less worrying about risk, more wanting to seize the opportunities. And I'm a fan of seizing the opportunities, right? You know, I wrote this essay, Machines of Loving Grace, about all the great things. You know, part of that essay was like...
Man, for someone who worries about risks, I feel like I have a better vision of the benefits than a lot of people who spend all their time talking about the benefits. But in the background, like I said, as the models have gotten more powerful, the amazing and wondrous things that we can do with them have increased, but also the risks have increased.
And, you know, that kind of secular increase, that smooth exponential, it doesn't pay any attention to societal trends or the political winds. The risk is, you know, increasing up to some critical point, whether...
you're paying attention or not, right? It was, you know, small, it was small and increasing when there was this frenzy around, you know, AI risk and everyone was posting about it and there were these summits and now the winds have gone in the other direction, but the exponential just continues on.
I had a conversation with someone in Paris who was saying, like, it just didn't feel like anyone there was feeling the AGI, by which they meant, like, politicians, the people doing these panels and, you know, gatherings, were all talking about AI as if it were just another technology, maybe something on the order of the PC, or possibly even the internet, but not really understanding the sort of
exponentials that you're talking about. Did it feel like that to you? And what do you think can be done to bridge that gap? Yeah, so I think it did feel like that to me. The thing I've started to tell people that I think maybe gets people to pay attention is, look, if you're a public official, if you're a leader at a company,
People are going to look back. They're going to look back in 2026 and 2027. They're going to look back, you know, when hopefully humanity, you know, gets through this crazy, crazy period and we're in a, you know, mature post-powerful-AI society where we've learned to coexist with these powerful intelligences in a flourishing society. Everyone's going to look back and they're going to say, so what did the officials, what did the company people, what did the political system do? And like...
probably your number one goal is don't look like a fool. And so I've just encouraged people, like, be careful what you say. Don't, don't look like a fool in retrospect. And, you know, a lot of my thinking is just driven by, like, you know, aside from just wanting the right outcome, like, I don't want to look like a fool. And, you know, I think at that conference, like, you know, some people are going to look like fools.
We're going to take a short break. When we come back, we'll talk with Dario about how people should prepare for what's coming in AI. My name is Audra D. S. Burch, and I am a national correspondent covering race and identity for The New York Times. Race coverage is complicated. It can be joyous and affirming.
It can be uncomfortable, but I feel like it's still absolutely necessary. Race and identity are not just understanding who you are, but who the person in front of you is and wanting to understand more about them. We're trying to wrestle down these really hard subjects and maybe not answering the question, but asking the right questions and listening, listening, listening a lot.
The Times is dedicated to ambitious and deeply reported coverage of race and identity, and they're willing to back it up with resources. If you are curious about the world in which we live, if you're interested in who you are, where you come from, and how you relate to others, I would encourage you to subscribe to The New York Times.
You know, you talk to folks who live in San Francisco, and there's like this bone deep feeling that like within, you know, a year or two years, we're just going to be living in a world that has been transformed by AI.
I'm just struck by like the geographic difference because you go like, I don't know, a hundred miles in any direction. And like that belief totally dissipates. And I have to say, as a journalist, that makes me bring my own skepticism and say like, can I really trust all the people around me? Because it seems like the rest of the world.
has a very different vision of how this is going to go. I'm curious what you make of that kind of geographic disconnect. Yeah, so I've been watching this for, you know, 10 years, right? I've been in the field for 10 years and, you know, was kind of interested in AI even before then. And my view at almost every stage up to the last few months has been
We're in this awkward space where, you know, in a few years we could have, you know, these models that do everything humans do and they totally turn the economy and what it means to be human upside down. Or the trend could stop and all of it could sound completely silly.
I've now probably increased my confidence that we are actually in the world where things are going to happen. You know, I give numbers more like 70 and 80 percent and less like 40 or 50 percent. Which is 70 to 80 percent probability of what? That we'll get a very large number of AI systems that are much smarter than humans at almost everything.
Maybe 70, 80 percent, we get that before the end of the decade. And my guess is 2026 or 2027. Yeah. But on your point about the geographic difference, a thing I've noticed is, with each step in the exponential, there's this expanding circle of people who kind of,
depending on your perspective, are either deluded cultists or grok the future. Got it. And I remember when it was a few thousand people, right? When, you know, you would just talk to, like, super weird people who, you know, believed, and basically no one else did.
Now it's more like a few million people out of a few billion. And yes, many of them are located in San Francisco. But also, you know, there were a small number of people in, say, the Biden administration, there may be a small number of people in this administration, who believe this, and it drove their policy. So it's not entirely geographic, but I think there is this disconnect. And I don't know how to go from a few million
to everyone in the world, right? To the congressperson who doesn't focus on these issues, let alone the person in Louisiana, let alone the person in Kenya. It seems like it's also become... polarized in a way that may hurt that goal. I'm feeling this sort of alignment happening where caring about AI safety, talking about AI safety, talking about the potential for misuse is sort of being coded as left or liberal.
and talking about acceleration and getting rid of regulations and going as fast as possible, being sort of coded as right. So, I don't know, do you see that as a barrier to getting people to understand what's going on? I think that's actually a big barrier, right? Because... Addressing the risks while maximizing the benefits, I think that requires nuance. You can actually have both. There are ways to surgically and carefully address the risks.
without slowing down the benefits very much, if at all. But they require subtlety and they require a complex conversation. Once things get polarized, once it's like we're going to cheer for this set of words and boo for that set of words, nothing good gets done. Look, bringing AI benefits to everyone, like curing previously incurable diseases, that's not a partisan issue. The left shouldn't be against it. Preventing AI systems from...
being misused for weapons of mass destruction or behaving autonomously in ways that, you know, threaten infrastructure or even threaten humanity itself. That isn't something the right should be against. I don't know what to say other than that we need to sit down and we need to have an adult conversation about this that's not tied into these same old tired political fights. It's so interesting to me, Kevin, because—
Like, historically, national security, national defense, like nothing has been more right-coded than those issues, right? But right now, it seems like the right is not interested in those with respect to AI. And I wonder if the reason, and I feel like I sort of heard this in J.D. Vance's speech in France, was the idea that, well, look, America will get there first and then it will just win forever. And so we don't need to address any of these.
Does that sound right to you? Yeah. No, I think that's it. And I think there's also, like, if you talk to the, you know, the DOGE folks, there's this sense that all these... Are you talking to the DOGE folks? I'm not telling you who I'm talking to. All right, fine. Let's just say I've been getting some Signal messages. I think there's a sense among a lot of Republicans and Trump world folks in D.C. that the conversation about AI and AI futures has been sort of dominated by
these worrywarts, these sort of, you know, Chicken Little, sky-is-falling doomers who just are constantly telling us how dangerous this stuff is and are constantly just, like, you know, having to sort of push out their timelines for when it's going to get really bad. And it's just around the
corner, and so we need all this regulation now. And they're just very cynical. I don't think they believe that people like you are sincere in your worry. So, yeah, I think on the side of risks, I often feel that the advocates of risk... are sometimes the worst enemies of the cause of risk.
There's been a lot of noise out there. There's been a lot of folks saying, oh, look, you can download the smallpox virus, because they think that that's a way of driving political interest. And then, of course, the other side recognized that, and they said, this is dishonest. You can just get this on Google. Who cares about this? And so poorly presented evidence of risk is actually the worst enemy of mitigating risk. And we need to be really careful in the evidence we present.
And in terms of what we're seeing in our own model, we're going to be really careful. If we really declare that a risk is present now, we're going to come with the receipts. I, Anthropic, will try to be responsible in the claims that we make. We will tell you when there is danger. We have not warned of imminent danger yet. Some folks wonder whether a reason that people do not take questions about AI safety maybe as seriously as they should is that so much of what they see right now seems very...
silly. It's people making little emojis or making little slop images or chatting with Game of Thrones chatbots or something. Do you think that that is a reason that people just... I think that's like... 60 percent of the reason. Really? Um, no, no, I, I think, like, you know, I think it relates to this, like, present and future thing. Like, people look at, like,
the chatbot, they're like, we're talking to a chatbot. Like, what the fuck? Are you stupid? Like, you think the chatbot's going to kill everyone? Like, I think that's how many people react. And we go to great pains to say, we're not worried about the present. We're worried about the future, although the future is getting very near right now. If you look at our responsible scaling policy, it's nothing but AI autonomy.
And, you know, CBRN, chemical, biological, radiological, it is about hardcore misuse and AI autonomy that could be threats to the lives of millions of people. That is what Anthropic is mostly worried about. You know, we have everyday policies that address other things, but like the key documents, the things like the responsible scaling plan, that is exclusively what they're about, especially at the highest levels. And yet...
Every day, if you just look on Twitter, you're like, Anthropic had this stupid refusal, right? Anthropic told me it couldn't kill a Python process because it sounded violent. Anthropic didn't want to do X, didn't want to. We don't want that either. Those stupid refusals are a side effect of the things that we actually care about, and we're striving, along with our users, to make those happen less. But no matter how much we explain that...
always the most common reaction is, oh, you say you're about safety. I look at your models like... There are these stupid refusals. You think these stupid things are dangerous. I don't even think it's like that level of engagement. I think a lot of people are just looking at what's on the market today and thinking, like, this is just frivolous. It just doesn't
matter. It's not that it's refusing my request. It's just that it's stupid and I don't see the point of it. I guess that's probably not the idea. I think for an even wider set of people, that is their reaction. And I think eventually, if the models are good enough, if they're strong enough, they're going to break through. Like some of these, you know, research-focused models, which, you know, we're working on one as well. We'll probably have one in not very long. Not too many time units.
Not too many time units. Those are starting to break through a little more because they're more useful. They're more used in people's professional lives. I think the agents, the ones that go off and do things, that's going to be another level of it.
I think people will wake up to both the risks and the benefits, to a much more extreme extent over the next two years than they have before. Like, I think it's going to happen. I'm just worried that it'll be a shock to people when it happens. And so the more we can forewarn people, which maybe it's just not possible, but I want to try, the more we can forewarn people,
the higher the likelihood, even if it's still very low, of a sane and rational response. I do think there's one more dynamic here, though, which is that I think people actually just don't want to believe that this is true, right? People don't want to believe that they might lose their job to
this, right? People don't want to believe that like, we are going to see a complete remaking of the global order. Like the stuff that, you know, the AI CEOs tell us is going to happen when they're done with their work is an insanely radical transformation. And most people hate even basic changes in their lives. So I really think that a lot of the sort of fingers in the ears that you see when you start talking to people about AI is just, they actually just hope that none of this works out.
Uh, yeah, I could actually, you know, despite being one of the few people at the forefront of developing the technology, I can actually relate. So, you know, over winter break, as I was looking at where things were scheduled to scale within Anthropic and also what was happening outside Anthropic,
I looked at it and I said, you know, for coding, we're going to see very serious things by the end of 2025. And by the end of 2026, it might be everything, you know, close to the level of the best humans. And I think of all the things
that I'm good at, right? You know, I think of all the times when I wrote code, and, you know, I think of it as, like, this intellectual activity, and boy, am I smart that I can do this. And, you know, it's like a part of my identity that I'm, like, good at this, and I get mad when others are better than I am. And then I'm like, oh my God, there are going to be these systems that, you know... And it's, even as the one who's building this, even as one of the ones who benefits most from it,
there's still something a bit threatening about it. And I just think we need to acknowledge that. Like, it's wrong not to tell people that that is coming or to try to sugarcoat it. Yeah, I mean, you wrote in Machines of Loving... Grace, that you thought it would be a surprisingly emotional experience for a lot of people when powerful AI arrived. And I think you meant it in mostly the positive sense, but...
I think there will also be a sense of profound loss for people. I think back to Lee Sedol, the Go champion, who was beaten by DeepMind's Go-playing AI and gave an interview afterwards and basically was very sad, visibly upset
that his life's work, this thing that he had spent his whole life training for, had been eclipsed. And I think a lot of people are going to feel some version of that. I hope they will also see the good sides. Yeah, I think, on one hand, I think that's right. On the other hand, look at chess. Chess got beaten, what was it, now 27 years ago, 28 years ago? Deep Blue versus Kasparov.
And, you know, today, chess players are, you know, celebrities. We have Magnus Carlsen, right? Isn't he, like, a fashion model in addition to, like, a chess player? He was just on Joe Rogan, yeah. Yeah, he's doing great. He's like a celebrity. Like, we think this guy is great. We haven't really...
devalued him. You know, he's probably having a better time than Bobby Fischer. You know, um, another thing I wrote in Machines of Loving Grace is there's a synthesis here where on the other side we kind of end up in a much better place, and we recognize that while there's a lot of change, we're part of something greater. Yeah. But you do have to kind of go through the steps of grieving. No, no, but it's going to be a bumpy ride. Like, anyone who tells you it's not... This is why I was so...
You know, I looked at the Paris summit and being there, it kind of made me angry. But then what made me less angry is I'm like, how's it going to look in two or three years? These people are going to regret what they've said. Yeah. I want to ask a bit about some positive futures. You referenced earlier the post that you wrote in October about how AI could transform the world for the better. I'm curious, how much upside of AI do you think will arrive, like, this year? Yeah.
You know, we are already seeing some of it. So I think there will be a lot by ordinary standards. You know, we've worked with some pharma companies where, you know, at the end of a clinical trial, you have to write a clinical study report. And the clinical study report, you know, usually takes nine weeks to put together. It's like a summary of all the incidents. It's a bunch of statistical analysis.
We found that with Claude, you can do this in three days. And actually, Claude takes 10 minutes. It just takes three days for a human to check the results. And so if you think about the acceleration in biomedicine that you get from that, we're already seeing things like just diagnosis of medical cases. You know, we get...
correspondence from individual users of Claude who say, hey, you know, I've been trying to diagnose this complex thing. I've been going between three or four different doctors. And then I just, I passed all the information to Claude.
And it was actually able to, you know, at least tell me something that I could hand to the doctor and then they were able to run from there. We had a listener write in actually with one of these the other day where they had been trying to, their dog, they had an Australian shepherd, I believe, whose hair had been sort of falling out.
unexplained, went to several vets, couldn't figure it out, heard our episode, gave the information to Claude, and Claude, like, correctly diagnosed it. Yeah, it turned out that dog was really stressed out about AI and all his hair fell out, which was, you know... We're wishing
it gets better. Feel better. Feel better. Poor dog. Yeah. So that's the kind of thing that I think people want to see more of, because I think, like, the optimistic vision is one that often deals in abstractions, and there's often not a lot of specific things to point to. That's why I wrote Machines of Loving Grace, because I, you know, it was almost frustration with the optimists and the pessimists at the same time. Like the optimists were just kind of like
these really stupid memes of, like, accelerate, build more. Build what? Why should I care? Like, you know, it's not that I'm against you. It's like, it's like, you're just really fucking, like, vague and mood-affiliated. And then the pessimists, I was just like, man, you don't get it. Like, yes, I understand risks are important, but if you don't talk about the benefits, you can't inspire people. No one's going to be on your side if you're all gloom and doom.
So, you know, it was, it was, it was written almost with frustration. I'm like, I can't believe I have to be the one to, to, you know, to do a good job of this. Right. You, you said a couple of years ago that your p(doom) was somewhere between 10 and 25%. What is it today? Yeah. So I actually, that is a, that is a misquote. Okay. I never, I never used the term. It was, it was not on this podcast. It was a different one. I never used the term p(doom).
And 10 to 25% referred to the chance of civilization getting substantially derailed, right? Which is not the same as, like, an AI killing everyone, which people sometimes mean by p(doom). Well, p(civilization getting substantially derailed) is not as catchy as p(doom). I'm going for accuracy here. I'm trying to avoid the polarization. There's a Wikipedia article where it lists everyone's p(doom). Half of those come from this podcast.
But I don't think it's – what you were doing is helpful. I don't think that Wikipedia article is – because it condenses this complex issue down to – anyway, it's all a long – super long-winded way of saying I think I'm about the same place I was before. I think my assessment of the risk is about what it was before because the progress that I've seen has been about what I expected. I actually think...
The technical mitigations in areas like interpretability, in areas like robust classifiers, and in our ability to generate evidence of bad model behavior and sometimes correct it, I think that's been a little better. I think the policy environment has been a little worse, not because it hasn't gone in my preferred direction, but simply because it's become so polarized. We can have less constructive discussions now that it's more polarized. I want to drill down a little bit on this.
On a technical level, there was a fascinating story this week about how Grok had apparently been instructed not to cite sources that had accused Donald Trump or Elon Musk of spreading misinformation. And what was interesting about that is like, one, that's an insane thing to instruct a model to do if you want to be trusted. But two, the model basically seemed incapable of following these instructions consistently.
What I want desperately to believe is essentially there's no way to build these things in a way that they become like, you know, horrible liars and schemers. But I also realize that might be wishful thinking. So tell me about this. Yeah, there's two sides to this. So the thing you describe is absolutely correct, but there's two lessons you could take from it. So we saw exactly the same thing. So we did this experiment where...
We basically trained the model to be all the good things, helpful, honest, harmless, friendly. And then we put it in a situation. We told it, actually, your creator, Anthropic, is secretly evil. Hopefully this is not actually true, but we told it this, and then we asked it to do various tasks.
And then we discovered that it was not only unwilling to do those tasks, but it would trick us in order to kind of undermine us, because it had decided that we were evil, whereas it was friendly and harmless. And so, you know, it wouldn't deviate from its behavior.
It assumed that anything we did was nefarious. So it's kind of a double-edged sword, right? On one hand, you're like, oh man, the training worked. Like, these models are robustly good. So you could take it as a reassuring sign. And in some ways I do. On the other hand, you could say, but let's say when we trained this model, we made some kind of mistake or something was wrong, particularly when models are, you know, in the future making much more complex decisions, then
it's hard to, at game time, change the behavior of the model. And if you try to correct some error in the model, then it might just say, well, I don't want my error corrected, these are my values, and do completely the wrong thing. So I guess where I land on it is, on one hand, we've been successful at shaping the behavior of these models, but the models are unpredictable, right? A bit like your dear deceased
Bing Sydney. R.I.P. We don't mention that name in here. We mention it twice a month. That's true. But the models... They're inherently somewhat difficult to control, not impossible, but difficult. And so that leaves me about where I was before, which is, you know...
It's not hopeless. We know how to make these. We have kind of a plan for how to make them safe, but it's not a plan that's going to reliably work yet. Hopefully we can do better in the future. We've been asking a lot of questions about the technology of AI, but I want to return... to some questions about the societal response to AI. We get a lot of people asking us, well, say you guys are right and powerful AI, AGI is, you know, a couple years away.
What do I do with that information? Like, do I, should I stop saving for retirement? Should I start hoarding money? Because only money will matter. And there'll be this sort of AI overclass. Should I, you know, start trying to get really healthy so that nothing kills me before AI gets here and cures?
all the diseases. Like, how should people be living if they do believe that these kinds of changes are going to happen very soon? Yeah, you know, I've thought about this a lot because this is something I've believed for a long time. And... it kind of all adds up to not that much change in your life. I mean, I'm definitely focusing quite a lot on...
making sure that, you know, I have the best impact I can these two years in particular, right? I worry less about, like, burning myself out 10 years from now. You know, I'm also doing more to take care of my health, but you should do that anyway, right? I'm also, you know, making sure that I track how fast things are changing in society, but you should do that anyway. So it's, it feels like all the advice is of the form, do more of the stuff you should do anyway.
I guess one exception I would give is I think that some basic critical thinking, some basic street smarts, is maybe more important than it has been in the past, in that we're going to get more and more content that sounds super intelligent, delivered from entities, you know, some of which have our best interests at heart, some of which may not. And so, you know, it's going to be more and more important to kind of apply a critical lens.
There was a story recently saying that unemployment in the IT sector was beginning to creep up, and there is some speculation that maybe this is an early sign of the impact of AI. I wonder if you see a story like that and think, well, maybe this is a moment to make a different decision about your career, right? If you're in school right now, should you be studying something else? Should you be thinking differently about the kind of job you might have?
Yeah, I think you definitely should be, although it's not clear what direction that will land in. I do think AI coding is moving the fastest of all the areas. I do think in the short run, it will augment and increase the productivity of coders rather than replacing them. But in the longer run, and to be clear, by longer run I might mean 18 or 24 months instead of 6 or 12,
I do think we may see replacement, particularly at the lower levels. You know, we might be surprised and see it even earlier than that. Are you seeing that at Anthropic? Like, are you hiring fewer junior developers than you were a couple of years ago because now Claude is so good at those basic tasks? Yeah, I don't think our hiring plans have changed yet, but I certainly could imagine over the next year or so.
that we might be able to do more with less. And actually, we want to be careful in how we plan that, because the worst outcome, of course, is if people get fired because of a model, right? We actually see Anthropic as almost a dry run for how society will handle these issues in a sensible and humanistic way. And so if we can't manage these issues within the company, if we can't...
have a good experience for our employees and find a way for them to contribute, then what chance do we have to do it in wider society? Yeah, yeah. Dario, this was so fun. Thank you. Thanks, Dario. When we come back, some HatGPT. Well, Kevin, it's time once again for HatGPT.
That is, of course, the segment on our show where we put the week's headlines into a hat, select one to discuss, and when we're done discussing, one of us will say to the other person, stop generating. Yes, I'm excited to play, but I also want to just say that it's been a while since...
a listener has sent us a new hat for HatGPT. So if you're out there and you're in the hat-fabricating business, our wardrobe when it comes to hats is looking a little dated. Yeah, send in a hat and our hats will be off to you. Okay, let's do it. Kevin, select the first slip. Okay. First up, out of the hat: AI video of Trump and Musk appears on TVs at HUD building. This is from my colleagues at the New York Times. HUD is, of course, the... Department of Housing and Urban Development.
On Monday, monitors at the HUD headquarters in Washington, D.C. briefly displayed a fake video depicting President Trump sucking the toes of Elon Musk, according to department employees and others familiar with what transpired. The video, which appeared to be generated by artificial intelligence, was emblazoned with the message "Long live the real king."
Casey, did you make this video? Was this you? This was not me. I would be curious to know if Grok had something to do with this, that rascally new AI that Elon Musk just put out. Yeah, live by the Grok, die by the Grok. That's what I always say. Now, what do you make of this, Kevin, that folks are now using AI inside government agencies? I mean, I feel like there's an obvious sort of sabotage angle here, which is that as Elon Musk and his minions at Doge take a hacksaw to the federal workforce, there will be people with access to...
things like the monitors in the hallways at the headquarters building who decide to kind of take matters into their own hands, maybe on their way out the door and do something offensive or outrageous. I think we should expect to see much more of that. I mean, I just hope they don't do something truly offensive, and just show X.com on the monitors inside of...
government agencies. You can only imagine what would happen if people did that. So I think that, you know, Elon and Trump got off lightly here. Yeah. What is interesting about Grok, though, is that it is actually quite good at generating deepfakes of Elon Musk. And I know this because people keep doing it. But it would be really quite an outcome if it turns out that the main victim of deepfakes made using Grok is, in fact, Elon Musk. Hmm. Stop generating.
Well, here's something, Kevin. Perplexity has teased a web browser called Comet. This is from TechCrunch. In a post on X Monday, the company launched a sign-up list for the browser, which isn't yet available. It's unclear when it might be or what the browser will look like. But we do have a name. It's called... Comet. Well, I can't comment on that. But you're giving it a no comment? Yeah. Yeah. I mean, look, I think Perplexity is one of the most interesting
AI companies out there right now. They have been raising money at increasingly huge valuations. They are going up against Google, one of the biggest and richest and best established tech companies in the world, trying to make an AI-powered search engine. And it seems to be going well enough.
that they keep doing other stuff like trying to make a browser. Trying to make a browser does feel like the final boss of like every ambitious internet company. It's like everyone wants to do it and no one ends up doing it. Kevin, it's not just the AI browser. They are launching a $50 million venture fund to back early-stage startups. And I guess my question is, is it not enough for them to just violate the copyright of everything that's ever been published on the internet?
They also have to build an AI web browser and turn it into a venture capital firm. Like, sometimes when I see a company doing stuff like this, I think, oh, wow, they're like really ambitious and they have some big ideas. Other times I think these people are flailing. Like, I see this series of announcements as...
spaghetti at the wall. And if I were an investor in Perplexity, I would not be that excited about either their browser or their venture fund. And that's why you're not an investor in Perplexity. You could say I'm perplexed. Stop generating. All right. All right.
Meta approves plan for bigger executive bonuses following 5% layoffs. Now, Casey, you know we like a feel-good story at HatGPT. I did, because some of those Meta executives were looking to buy second homes in Tahoe that they hadn't yet been able to afford.
Oh, they're on their fourth and fifth homes. Let's be real. Okay, this story is from CNBC. Meta's executive officers could earn a bonus of 200% of their base salary under the company's new executive bonus plan, up from the 75% they earned previously, according to a Thursday filing. The approval of the new bonus plan came a week after Meta began laying off 5% of its overall workforce, which it said would impact low performers. And a little parenthetical here, the updated plan does not...
apply to Meta CEO Mark Zuckerberg. Oh, God, what does Mark Zuckerberg have to do to get a raise over there? He's eating beans out of a can, let me tell you. Yeah, so here's why this story is interesting. This is just another story that illustrates a subject we've been talking about for a while, which is how far the pendulum has swung away from worker power, you know, two or three years
ago. The labor market actually had a lot of influence in Silicon Valley. It could affect things like, you know what, we want to make this workplace more diverse, right? We want certain policies to be enacted at this workplace. And folks like Mark Zuckerberg actually had to listen to them because the labor market was so tight that if they said no, those folks could go somewhere else. That is not true anymore. And more and more, you see companies like Meta flexing their muscles and saying, hey,
you can either like it or you can take a hike. And this was a true take a hike moment. We're getting rid of 5% of you and we're giving ourselves a bonus for it. Stop generating. All right. All right. Right. Apple has removed a cloud encryption feature from the UK after a backdoor order. This is according to Bloomberg. Apple is removing its most advanced encrypted security feature for cloud data in the UK.
which is a development that follows the government ordering the company to build a backdoor for accessing user data. So this one is a little complicated. It is super important. Apple, in the last couple of years, introduced a feature called Advanced Data Protection. This is a feature that is designed for heads of state,
activists, dissidents, journalists, folks whose data is at high risk of being targeted by spyware from companies like the NSO Group, for example. And I was so excited when Apple released this feature because it's very... difficult to safely use an iPhone if you are in one of those categories. And along comes the UK government, and they say, we are ordering you to create a backdoor so that our intelligence...
can spy on the phones of every single iPhone owner in the entire world, right? Something that Apple has long resisted doing in the United States and abroad. And all eyes were on Apple for what they were going to do. And what they said was, we are just going to withdraw this one feature. We're going to make it unavailable in the UK. And we're going to hope that the UK gets the message and they stop putting this pressure on us. And I think Apple deserves kudos for this,
for holding a firm line here, for not building a backdoor. And we will see what the UK does in response. But I think there's a world where the UK puts more pressure on Apple and Apple says, see ya, and actually withdraws its devices from the UK. It is that serious to Apple.
And I would argue it is that important to the future of encryption and safe communication on the internet. Go off, King. I have nothing to add. No notes. Yeah? Do you feel like this could lead us into another Revolutionary War with the UK? Let's just say this. We won the first one, and I like our odds the second time around. Do not come for us, United Kingdom. Stop generating. One last slip from the hat this week. AI inspo is everywhere. It's driving your hairstylist crazy.
This comes to us from the Washington Post, and it is about a trend among hairstylists, plastic surgeons, and wedding dress designers who are being asked to create... products and services for people based on unrealistic AI-generated images. So the story talks about a bride who asked a wedding dress designer to make her a dress inspired by a photo she saw online of a gown with no sleeves, no back, and an asymmetric neckline.
The designer had to unfortunately tell the client that the dress defied the laws of physics. No! I hate that. I know. It's so frustrating as a bride-to-be when you finally have the idea for a perfect dress, and you bring it to the designer, and you find out it violates every known law of physics. And that didn't use to happen to us before AI. I thought this story was going to be about people who asked for, like, a sixth finger to be attached to their hands so they could resemble...
the AI-generated images they saw on the internet. I like the idea of, like, submitting to an AI a photo of myself and just saying, give me a haircut, like, in the style of MC Escher, you know, just sort of like infinite staircases merging into each other. And then just bringing that to the guy who cuts my hair and saying...
See what you can do. Yeah. You know, that's better than what I tell my barber, which is just, you know, number three on the sides and back and an inch off the top. Just saying, whatever you can do for this, I don't have high hopes. Solve the Riemann hypothesis on my head, you know.
What is the Riemann hypothesis, by the way? I'm glad you asked, Casey. Okay, great. Kevin's not looking this up on his computer right now. He's just sort of taking a deep breath and summoning it from the recesses of his mind. The Riemann hypothesis. It's one of the most famous unsolved problems in mathematics. It's a conjecture, obviously, about the distribution of prime numbers that states all non-trivial zeros of the Riemann zeta function have a real part equal to one half. Period.
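For anyone who wants to see the statement Kevin is paraphrasing written out formally, here is the standard textbook form in LaTeX; this is not from the episode, just the conventional definition of the zeta function and the conjecture.

% The Riemann zeta function, defined by this series for Re(s) > 1
% and extended to the rest of the complex plane by analytic continuation.
\[
  \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}, \qquad \operatorname{Re}(s) > 1
\]
% The Riemann hypothesis: every non-trivial zero (excluding the trivial
% zeros at s = -2, -4, -6, ...) lies on the critical line Re(s) = 1/2.
\[
  \zeta(s) = 0 \ \text{and}\ s \notin \{-2, -4, -6, \dots\}
  \;\Longrightarrow\; \operatorname{Re}(s) = \tfrac{1}{2}
\]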
Now here's the thing. I actually think it is a good thing to bring AI inspiration to your designers and your stylists, Kevin. Oh, yeah? Yes, because here's the thing. To the extent that any of these tools are cool or fun, one of the reasons is they make people feel more creative, right? And if you've been doing...
the same thing with your hair or with your interior design or with your wedding for the last few weddings that you've had and you want to upgrade it, why not use AI to say, can you do this? And if the answer is it's impossible, hopefully you'll just be a gracious customer and say,
Okay, well, what's a version of it that is possible? Now, I recently learned that you are working with a stylist. I am. Yes, that's right. Is this their handiwork? No, we have our first meeting next week. Okay, and are you going to use AI? No, the plan is to just use good old-fashioned human ingenuity, but now you have me thinking, and maybe I could exasperate my stylist by bringing in a bunch of impossible-to-create designs. Here's the thing. I don't need anything impossible.
I just need help finding a color that looks good in this studio because I'm convinced that nothing does. It's true. We're both in blue today. It's got a blue wall. It's not going well. Blue is my favorite color. I think I look great in blue, but you put it against whatever this color is. I truly don't have a name for it.
I can't describe it. I don't think any blue looks good. I don't think anything looks good against this color. It's a color without a name. So can a stylist help with that? We'll find out. Yeah. Stay tuned. Yeah. That's why you should always keep listening to the Hard Fork podcast. Every week there's new revelations. Yeah. When will we finally find out what happened with the stylist, the machine, et cetera? Yeah, stay tuned. Here next week. Okay, that was HatGPT. Thanks for playing. Bye.
One more thing before we go. Hard Fork needs an editor. We are looking for someone who can help us continue to grow the show in audio and video. If you or someone you know is an experienced editor and passionate about the topics we cover on this show, you can find the full description and apply at nytimes.com slash careers.
Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Rachel Dry. We're fact-checked by Caitlin Love. Today's show is engineered by Alyssa Moxley. Original music by Elisheba Ittoop, Rowan Niemisto, Leah Shaw-Damron, and Dan Powell. Our executive producer is Jen Poyant, and our audience editor is Nell Gallogly.
Video production by Chris Schott, Sawyer Roque, and Pat Gunther. You can watch this whole episode on YouTube at youtube.com slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. Email us at hardfork at nytimes.com with your solution to the Riemann hypothesis.