I'm Helene Cooper. I cover the U.S. military for The New York Times. So I'm sitting in my car in a parking lot outside the Pentagon. I had a cubicle with a desk inside the building for years, but the Trump administration has taken that away. People in power have always made it difficult for journalists. It hasn't stopped us in the past. It's not going to stop us now. I will keep working to get you the facts. This work doesn't happen without subscribers to the New York Times.
Casey, how was your Memorial Day weekend? My Memorial Day weekend was, it was good. I was like, you know, I need to unplug, as you know, I needed to unplug a bit. I'm not a big unplugger. I normally am very comfortable feeling plugged. Yeah, you're a screen maxer. I'm a screen maxer. But this was a weekend where I was like, okay, I got to get out of this danged house.
got to see some nature. And so I went, uh, with my boyfriend up to Fort Funston, this beautiful part of San Francisco. Great beach, giant dunes that sit atop this battery of guns that could shoot rounds 13 miles into the ocean. And I was like, I'm so excited to just kind of stare at the ocean. And so we sort of climb up into the dunes and we sit down, and the big waves are rolling in, and the winds pick up, and I'm being sandblasted in my face at like 40 miles an hour. And within 30 seconds, I have grit in my teeth. And I'm thinking, this was not the nature I was promised. Why do I feel like I'm dying? But it did do a great job of exfoliating your skin. My skin has never really looked smoother. That's dermabrasion, and some people pay lots of money for it. Yes, I have been abrased. I've been majorly abrased.
I'm Kevin Roose, a tech columnist at The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week: is AI already taking away jobs? Kevin makes the case. Then, Anthropic chief product officer Mike Krieger joins us to discuss Claude 4, the future of work, and the viral saga over whether an AI could blackmail you. And finally, it's time for Hard Fork Crimes Division. Is blackmail still a crime? I hope so.
Well, Kevin, you have delivered some interesting news to us via The New York Times this week, and that is that the job market is not looking great for young people. Yes, graduation season is upon us. Millions of young Americans are getting their diplomas and heading out into the workforce. And so I thought it was high time to investigate what is going on with jobs and AI, and specifically with entry-level white-collar jobs, the kind that a lot of recent college graduates are applying for.
because there are a couple things that have made me think that we are starting to see signs of a looming crisis for entry-level white-collar jobs. So I thought I should investigate that. Yeah, well, I'm excited to talk about this because I got an email today from a recent college grad. And she wanted to know if I could help her get a job in marketing and tech. And I thought, if you're just emailing me asking for a job.
There must be a crisis going on in the job market. Yes, that would not be my step one in looking for a job, or maybe my step 500. But you've actually spent a lot of time looking into this question. So tell us a little bit about what you did and what you were trying to figure out, exactly. So I've been interested in this question of AI and automation for years. And, like, when are we going to start to see large-scale changes to employment from the use of AI?
And there are a couple things that make me worried about this moment specifically, and whether we are starting to see signs of an emerging jobs crisis for entry-level white-collar workers. The first one is economic data. So if you look at the unemployment rate for college graduates right now, it is unusually high. It's about 5.8 percent in the U.S. That has risen significantly, about 30 percent, since 2022. Recently, the New York Federal Reserve put out a bulletin on this and said the employment situation for recent college graduates had, quote, deteriorated noticeably. And this tracks with some data that we've been getting from job websites and recruiting firms showing that, especially for young college graduates in fields like tech and finance and consulting, the job picture is just much worse than it was
even a few years ago. And that rate that you mentioned, Kevin, that is higher for young people in entry-level jobs than it is for unemployment in the United States overall. Is that right? Yes. Unemployment in the United States is actually doing quite well. We're in a very...
tight labor market, which is good. We have pretty close to full employment. But if you look specifically at the jobs done by recent college graduates, it is not looking so good. And actually, the sort of job placement rates at a bunch of colleges and even top business schools like Harvard and Wharton and Stanford.
are worse this year than they have been in recent memory. I was having dinner with a Wharton student last week, and she was telling me that a lot of her classmates had yet to be placed, and it was a real concern. So anecdotally, that sounds right to me. So that's the economic data that you're seeing. What else is making you worried? So one of the other things that's making me worried is the rise of so-called agentic AI systems, these AI tools that can not just do a question-and-answer session or respond to some prompt, but you can actually give them a task or a set of tasks, and they can go out and do it and sort of check their own work and use various tools to complete those assignments. One of the things that actually has updated me the most on this front are these Pokemon demos. Casey, do you know what I'm talking about here? You're talking about, like, Claude Plays Pokemon? Yes. So within the last few months, it's become very trendy for AI companies to test their agentic
AI systems by having them play Pokemon, essentially from scratch, with no advance training. And some of them do quite well. Google said on stage at I/O last week that Gemini 2.5 had actually been able to finish the entire game of Pokemon. One of the games. There are, I think, probably at least 36 different Pokemon games on the market. And I actually know for a fact Google was playing a different Pokemon game than Anthropic was. Oh, interesting. So I'm not a Pokemon expert, but I also, like, I think people see these Pokemon demos and they think, well, that's cute, but, like, you know, how many people play Pokemon for a living? It seems like more of a stunt than a real improvement in capabilities. But the thing I am hearing from researchers in
the AI industry and people who work on these systems is that this is not actually about Pokemon at all. That this is about automating white-collar work. Because if you can give an AI system a game of Pokemon and it can sort of figure out...
how to play the game, how to... I don't know Pokemon very well. I'm more of a Magic: The Gathering guy, but my sense is you have to, like, go to various places and complete various tasks and collect various Pokemon. You have to go into various gyms. You take your Pokemon, they compete against rival Pokemon, and your Pokemon have to vanquish the others in order for you to progress through the game, Kevin. I hope that was helpful. Exactly. So as I was saying, that is how you play Pokemon. And what they are telling me is that this is actually some of the same techniques that you would use to
train an AI to, for example, do the work of an entry-level software engineer or a paralegal or a junior consultant. Yeah. If your job is mostly like writing emails and updating spreadsheets, that is a kind of video game. And if an AI system can just look... Exactly. One of the signs that is worrying me is that it does seem like these AI agents are becoming capable of carrying out longer and longer sequences of tasks, right? Yeah, so tell us about that.
Recently, Anthropic held an event to show off the newest model of Claude, Claude Opus 4, I believe it's called. I believe it's Claude 4 Opus, actually. Claude 4 Opus? Got your ass. Oh, yeah. It's Claude 4 Sonnet and Claude 4 Opus. Sometimes I feel like you don't respect the names of these products. Do you know how much work went into the naming of these products? At least five minutes. They spent at least five minutes coming up with that, and then you're just going to shit all over it. I'm so sorry. Sorry. So anyway, Claude Opus 4. Claude 4 Opus. Well, no, I... I swear it's Claude 4 Opus. No, it's Claude Opus 4. What? I'm looking at the Anthropic blog post. Oh my God. Claude Opus 4 and Claude Sonnet 4.
This is so confusing to me. It's like your boyfriend doesn't even work there. I'm going to be in big trouble when I get home. So, okay. Back to my point. So Anthropic holds this event last week where they're showing off their latest and greatest versions of Claude. And one of the things they say about Claude Opus 4, their newest, most powerful model,
is that it can code for hours at a time without stopping. And in one demo with a client on a real coding task, Claude was able to code for as much as seven hours uninterrupted. You might think, well, that's just coding. Maybe that's a very special field. And there are some things about coding that make it low-hanging fruit for these sort of reinforcement learning models that can learn how to do tasks over time.
The problem for workers is that a lot of jobs, especially at the entry levels of white-collar occupations, are a lot like that, where you can build these sort of... reinforcement learning environments where you can collect a bunch of data. You can sort of have it essentially play itself like it would play Pokemon and eventually get very good at those kinds of tasks. You know, at Google I.O. last week, Kevin, they showed off a demo.
of a feature where you can teach the AI how to do something. You effectively show it: you say to the AI, hey, watch me do this thing. And then it watches you do the thing, and then it can replicate the thing. Can you imagine how many managers all around the world took a look at that and said,
Once I can teach the computer how to do things, a bunch of people are about to lose their damn jobs. Totally. And this is why some of the people building this stuff are starting to say that it's not just going to be software engineering that becomes displaced by these AI systems. It's going to be all kinds of different work. Dario Amodei, the CEO of Anthropic, gave an interview to Axios this week in which he said that within one to five years, 50% of entry-level white-collar jobs could be replaced.
That could be wildly off. Maybe it is much harder to train these AI systems in domains outside of coding. But given what is happening just in the tech industry and just in software engineering, I think we have to take seriously the possibility that we are about to see a real bloodbath for entry-level white-collar workers. Yeah, absolutely. And we wonder why people don't like AI.
All right. So first, we've got the economic data showing that there is some sort of softness around hiring for young people. We also just have the rise of these agentic systems. But is there evidence out there, Kevin, that says that the AI actually is already replacing these jobs? So I talked to a bunch of economists and people who study the effects of AI on labor markets, and what they said is that, you know, we can't conclusively see yet in the large economic...
samples that AI is displacing jobs. But what we can see are companies that are starting to change their policies and procedures around AI to sort of prioritize the use of AI over the use of human labor. So I'm sure you've been following these stories about these so-called AI-first companies. Shopify was an early example of this. Duolingo also did something related to this, where basically they are telling their employees, before you go out and hire a human for a given job or a given task,
see if you can use AI to do that task first. And only if the AI can't do it, are you allowed to go out and hire someone. Yeah, and by the way, if you're wondering, Hard Fork is an AI second organization because at Hard Fork, the listener always comes first. That's true.
So... I think that what worries me in addition to sort of the hints of this that we see in the economic data and the kind of evidence that these AI agents are getting much better, much more quickly than people anticipated, is just that the culture of automation and employment is changing.
very rapidly at some of the big tech companies. Yeah, this feels like a classic case where the data is taking a while to catch up to the truth on the ground. I also collect stories about this and would share maybe just a few things I've noticed.
over the past couple of weeks here, Kevin. The Times had a great story about how some Amazon engineers say that their managers are increasingly pushing them to use AI, raising their output goals and becoming less forgiving about them missing their deadlines. Klarna, which is a sort of buy-now-pay-later company, says its AI agent is now handling two-thirds of customer service chats. The CEO of IBM said the company used AI agents to replace the work of 200 HR employees. Now, he says that they took the savings and plowed that into hiring more programmers and salespeople. And then finally, the CEO of Duolingo says that the company is going to gradually stop using contractors to do work that AI can handle. So that's just a collection of anecdotes, looking for kind of spots on the horizon where it seems like there is truth to what Kevin is saying. I do think we're seeing that. Yeah. And I think the thing that makes me confident in saying that this is not just a blip, that there's something very strange going on in the job market now, is talking with young people who are out there looking for jobs, trying to plan their careers. Things do not feel normal to them. So recently, I had a conversation with a guy named Trevor Chow. He's a 23-year-old recent Stanford graduate, really smart guy, really skilled, the kind of person who could go work anywhere he wanted, basically, after graduation.
And he actually turned down an offer from a high-frequency trading firm and decided to start a startup instead. And his logic was that, basically, we might only have a few years left where humans have any kind of advantage in labor markets, where we have leverage, where our ability to sort of do complex and hard things is greater than those of AI systems. And so, basically, you want to do something risky now and not wait for a career that might take a few years or decades to pay off. And so, you know, the way he explained it to me is, like, all of his friends are making these kind of similar calculations about their own career planning now. They're looking out at the job market as it exists today and saying, like, that doesn't look great for me, but maybe I can sort of find a way around some of these limitations.
That's interesting. Well, let me try to bring some skepticism to this conversation, Kevin, because I know in your piece you identified several other factors that helped to explain why young people might be having trouble finding jobs. You have tariffs.
You have just sort of the overall economic uncertainty that the Trump administration has created. You have the sort of long tail of disruption from the pandemic, or even the Great Recession, right, that I think some economists believe we might not totally have recovered from. So it seems like there are a lot of explanations out there for why young folks are having trouble finding jobs that don't involve AI, maybe at all. Yeah, I think that's a fair point. And I want to be really careful here about claiming that all of the data we're seeing about unemployment being high for recent college graduates is due to AI. We don't know that. I think we will have to wait and see if there is more evidence that AI is starting to displace massive numbers of jobs. But I think what the data is failing to capture, or at least not capturing yet, is how eager and motivated the AI companies that build this stuff are to replace workers. Every major AI lab right now is racing to build these highly capable, autonomous AI agents that could essentially become a drop-in remote worker that you would use in place of a human remote worker. They see potentially trillions of dollars to be made doing this kind of thing. When they are talking openly and honestly about it, they will say, like, the barrier here is not some new algorithm that we have to develop or some new research breakthrough. It's literally just, we have to start paying attention to a field, caring about it enough to collect all the data and build the reinforcement learning training environments to automate work in that field. And so they are just kind of planning to go sort of industry by industry and collect a bunch of data and use that to train the models to do the equivalent of whatever the entry-level worker does. And, like, that could happen pretty quickly. Yeah. That feels like a threat. Yeah, it's not great. And I think the argument that they would make is that...
you know, some of these entry-level jobs were pretty rote anyway, and maybe that's not the best use of young people's skills. I think the counter-argument there is like, those skills are actually quite important for building the knowledge that you need to become a contributor to a field later on. I don't know about you, but my first job in journalism...
involved a bunch of rote and routine work. One of the things that I had to do was, like, write corporate earnings stories, where I would take an earnings report from a company and, like, pull out all the important pieces of data and, like, put it into a story and, like, get it up on the website very quickly. And, like, was that the most thrilling work I can imagine doing, or the highest and best use of my skills? No, but it did help me develop some of these skills, like reading an earnings statement, that became pretty critical for me later on. Interesting. For what it's worth, my first job, I think, was actually the most physical job in journalism I ever had. I covered a small town, and so I spent all of my days just driving down to City Hall, going down to the police station, sitting at the city council meeting, making phone calls. A lot of the drudgery sort of came in later. But let me raise
Maybe an obvious objection to the idea that, oh, young people, don't worry, these jobs that we're eliminating, it was just a bunch of drudgery. Anyway, the young people need to pay their rent. Yes. You know, the young people need to buy health insurance. Yes. And so I think they're not going to take a lot of comfort from the idea that the jobs that they don't have weren't particularly exciting. Yes. And the optimistic view is that, you know, if you just shift workers off of these, like, entry-level rote tasks into more productive or more creative or more collaborative roles, you kind of, like, free them up to do higher-value work. But I just don't know that that's going to happen. I mean, I'm talking to people at companies who are saying things like, we don't really see a need for junior-level software engineers, say, because now we can hire a mid-level software engineer and give them a bunch of AI tools, and they can do all of the debugging and the code review
and the stuff that the 22-year-olds used to do. Yeah. Let me ask about this in another way. I think a lot of times we have seen CEOs use AI as the scapegoat for a bunch of layoffs that they already wanted to do anyway, or a bunch of sort of management decisions that they wanted to make anyway. Earlier this year, there was a story in the San Francisco Standard that Marc Benioff, the CEO of Salesforce, said the company would not hire engineers this year due to AI. I went to Salesforce's career page this morning, Kevin. There were hundreds of engineering jobs there. I don't know what wires got crossed. You know, the story I read was in February. Maybe something has changed since then. But...
Talk to me a little bit about the hype element in here, because I do feel like it's real. Yes, there's definitely a hype element in here. I worry that companies are kind of getting ahead of what the tools can actually deliver. I mean, you mentioned Klarna, the buy-now-pay-later company. A couple of years ago, they made this big declaration that they were going to pivot to using AI for customer service. And they announced this partnership with OpenAI, and, like, they were going to try to drive down the number of human customer support agents to zero. And then recently they've been backtracking on that. They've been saying, well, actually, customers didn't like the AI customer service that they were getting, and so we're going to have to start hiring humans again. So I do think that this is a risk of some of this hype: that it tempts executives at these companies to move faster than the technology is ready for. Well, and speaking of that, one of my favorite stories from this week was about a guy who has set up a blog, Kevin. I wonder if you saw this. He keeps a database of every time that a lawyer has been caught using citations that were hallucinated by AI. Did you see this? No. There are more than a hundred. We've talked about this issue on the show a couple of times, and I've thought this must just be a small handful of cases, because who would be crazy enough to bet their entire career on a hallucinated legal citation? Turns out, more than 100 people. And so a lot of people might be listening to this conversation saying, Kevin, you're telling me that we're standing on the brink of AI taking over everything? These things still suck in super important ways. So help us square that issue. Like, we know these systems are not reliable for many, many jobs. So how can it be that so many CEOs are apparently ready to just junk their human workforces? So I think part of the misunderstanding here is that there are, like, two different kinds of work. There's work that can be sort of easily judged and verified to be correct or incorrect,
like software engineering. In software engineering, like, either your code runs or it doesn't. And that's a very clear signal that can then be sent back to the model in these sort of reinforcement learning systems to make it better over time. Most jobs are not like that. Right. Most jobs, including law, including journalism, including lots of other white-collar jobs, do not have this very clearly defined indicator of success or failure. And so that's actually, like, what is stopping some of these systems from improving in those areas: it's not as easy to, like, train the model, give it a million examples of what a correct answer looks like and a million examples of what an incorrect answer looks like, and sort of have it, over time, learn to do more of the correct thing. So I think in law, this is a case where you do actually have more subjective outputs, and so it's going to be a little harder to automate that work. But I would say we also have to
compare the rates of error against the human baseline, right? You mentioned this database of cases in which human lawyers had used hallucinated citations in their briefs. I imagine there are also human paralegals or lawyers who would make mistakes in their briefs as well. And so I think for law firms, or any company trying to figure out, like, do we bring in AI to do a job, the question they're asking is not, is this AI system completely error-free? It's, is this less likely to make errors than the humans I currently have doing this work? Right. And, like, in so many things, if the system is, like, 20% worse than a human but 80% less expensive,
A lot of CEOs are going to be happy to make that trade. Totally. All right. Well, so let's bring it home here. I imagine we might have some college students listening, or some recent college grads. They're now thoroughly depressed. They're drinking. It's Friday morning. They're wasted. As they sort of sober up, Kevin, what would you tell them about what to do with any of this information? Is there anything constructive that they can do, assuming that some of these changes do come to pass? So I really haven't heard a lot of good and constructive ideas for young people who are just starting out in their careers. You know, people will say stuff like, oh, you should just, you know, be adaptable and resilient. And that's sort of like what Demis Hassabis told us last week on the show, when we asked him what young people should do. I don't find that very satisfying, in part because it's just so hard to predict which industries are going to be disrupted by this technology.
But I don't know. Have you heard any good advice for young people? Well, I mean, I think what you're running into, Kevin, is the fact that our entire system for young grads is set up for them to take entry-level jobs and gradually acquire more skills. And what you're saying is that that part of the ladder is just going to be hacked off with a chainsaw. And so, what do you do next? So, of course, there's no good answer, right? The system hasn't been built that way. I think that, in general, the internet has been a pressure mechanism forcing people to specialize, to get niche-y. The most money and the most opportunity is around developing some sort of scarce expertise. I have tried to build my career as a journalist by trying to identify a couple ways where I could do that. It's worked out all right for me, but I also had the benefit of entry-level jobs. So if somebody had come to me at the age of 21 and said, if you want to succeed in journalism, get really niche and specialize, I would say, okay, but, like, I need to go have a job first. Like, is there one of those? So to me, that's kind of the tension. I will also say, there's never been a better time to be a nepo baby. I don't know if you've been following the Gracie Abrams story. Very talented songwriter, daughter of J.J. Abrams, the filmmaker. You know, she's born into wealth, and now she's best friends with Taylor Swift. If you can manage something like that, I think you'd be very happy.
Yes, I hear that advice. And I would also add one other thing that I am starting to hear from the young people that I am talking to about this, which is that it is actually possible, at least in some industries, to sort of leapfrog over those entry-level jobs. If you can get really good at sort of being a manager of AI workflows and AI systems and AI tools, if you can kind of orchestrate complex projects using these AI tools, some companies will actually hire you straight into those higher-level jobs. Because even if they don't need someone to create the research briefs, they need people who understand how to make the AI tools that create the research briefs. And so that is, I think, a path that is becoming available to people at some companies.
I would just also say that, in general, it really does take a long time for technology to diffuse around the world. Look at, like, the percentage of e-commerce in the United States. It's, like, less than 20% of all commerce. And we're, what, 25-plus years into Amazon.com existing? So I think that one of the ways that you and I tend to disagree is, I just think you have, like, shorter timelines than I do. Like, I think we basically think the same things are going to happen, but, like, you think they're going to happen, like, imminently, and I think it's going to take several more years. So I do think everything we've discussed today is going to be a problem for all of us before too, too long. But I think if you're part of the class of 2025, you will still probably find an entry-level job in the end. I hope you're right. And if not, we promise to make another podcast episode about just how badly all of this is going.
Okay, Casey, that wraps our discussion about AI and jobs. But we do want to hear from our listeners on this. If you have lost your job because of AI, or if you are worried that your job is rapidly being replaced by AI, we want to hear from you. Send us a note with your story at hardfork@nytimes.com. We may feature it in an upcoming episode. Yeah, we love voicemails, too, if you want to send one of those.
When we come back, a conversation with Mike Krieger, the chief product officer of Anthropic, about new agentic AI systems and whether they're going to take all our jobs. Or maybe blackmail us. Or maybe both. Who knows?
The New York Times app has all this stuff that you may not have seen. I can immediately navigate to something that matches what I'm feeling. The way the tabs are at the top with all of the different sections. It's just easier to navigate that way. There is something for everyone.
personalized page, the You tab. That one's my favorite. I can also save my articles easily in this area. Right under the byline, it says, click here if you'd like to listen to this article. I like that the Cooking tab on top is really easily accessible. So I'm on my
way home and I'm just thinking, oh, what am I going to make for dinner? I'll just quickly go on to Cooking and say, oh, I've got this in my pantry, I'm going to try out some of these recipes I see in here. I go to Games always. Doing the Mini, doing the Wordle. I love how much content it exposes. This app is essential. The New York Times app. All of The Times, all in one place. Download it now at nytimes.com/app.
Well, Casey, we've got a Mike on the mic this week. And I'm excited to talk to him. So Mike Krieger is here. He is the co-founder of Instagram, a product some of you may have heard of. Little photo-sharing app. Currently, Mike is the chief product officer at Anthropic. Now, Casey, do you happen to know anyone who works at Anthropic? As a matter of fact, Kevin, my boyfriend works there. And so, yeah, that's something I would like to disclose at the top of this segment.
Yeah, and my disclosure is that I work at the New York Times Company, which is suing OpenAI and Microsoft over copyright violations. All right. So last week, Anthropic announced Claude 4. We just spent a little bit of time talking about all of the new agentic coding capabilities that this system has.
I think Mike has a really interesting role in the AI ecosystem because his job, as I understand it, is to take these very powerful models and turn them into products that people and businesses actually want to use. Which is a harder challenge than you might think. Yes. And also, Kevin, these products are really explicitly being designed to...
take away people's jobs. And given the conversation that we just had, I want to bring this to Mike and say, how does he feel about building systems that might wind up putting a lot of people out of work? Yeah. And Mike's perspective on this is really interesting because he... is not an AI lifer, right? He worked at a very successful startup before this. He then spent some time at Facebook after Instagram was acquired there. So he's really...
a veteran of the tech industry and in particular social media, which was sort of the last big product wave. And so I'm interested in asking him how the lessons of that wave have translated into how he builds products in AI today. Well, then let's wave hello to Mike Krieger. Let's bring him in. Mike Krieger, welcome to Hard Fork. Good to be here. Well, Mike, we noticed that you didn't get to testify at the Meta antitrust trial. Anything you wish you could have told the court? Oh, you know.
That is the happiest news I got that week. I do not have to go to Washington, D.C. this week. You got to focus on something else, which is the dynamic world of artificial intelligence. Exactly. So you all just released Claude 4, two versions of it, Opus and Sonnet. Tell us a little bit about Claude 4 and what
it does relative to previous models. Yeah. First of all, I'm happy that we have both Opus and Sonnet out. We're in this very confusing situation for all where our biggest model was not our smartest model. Now we have a both, you know, biggest and smartest model and then our like happy-go-lucky middle child Sonnet, which is back to...
its rightful place in there. Yeah, with both, we really focused on: how do we get models able to do longer-horizon work for people? So not just, here's a question, here's an answer, but, hey, go off and think about this problem and then go solve it, for tens of minutes to hours, actually. Coding is an immediate use case for that, but we're seeing it be used for, go solve this research problem, go off and write code, but not necessarily in the service of building software, but in the service of,
I need a presentation built. That was really the focus around both Claude models. Opus, the bigger, smarter model, can do that for even longer. We had one customer do a seven-hour refactor using Claude, which is pretty amazing. Sonnet, maybe a little bit more time-constrained, but much more human in the loop.
So let me ask about that customer, Rakuten, I believe, a Japanese technology company. And I read everywhere that they used Claude for seven hours to do it. One thought that came to mind is, well, wouldn't it have been better if it could have done it faster? Like, why is it a good thing that Claude worked for
seven hours on something. That was a good follow-up, which was: is that a seven-hour problem that took seven hours, or a 20-hour problem that took seven hours, or a 50-minute problem that it is still churning on today? We just had to stop it at some point. It was a big refactor, with, like, a lot of sort of iterative kind of,
you know, loops and then tests, and I think that's what made it a longer-horizon, like, seven-hour type of problem. But it is an interesting question: when you can get this asynchronicity of having it really work for a long time, does it change your relationship to the work? You want it to be checking in with you. You want to be able to see progress. If it does go astray, how do you reel it back in as well?
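That check-in-and-interrupt pattern for long-running agent work can be sketched in a few lines of Python. This is only an illustrative sketch of the idea Mike describes, not Anthropic's implementation; the step functions, the progress callback, and the stop signal are all invented for the example.

```python
# A minimal sketch of a long-horizon task loop that reports progress
# and can be reeled back in between steps. All names are hypothetical.

def run_long_task(steps, on_progress, should_stop):
    """Run a multi-step task, surfacing progress and honoring interrupts."""
    results = []
    for i, step in enumerate(steps, start=1):
        if should_stop():
            # The human pulled the plug: stop cleanly, report where we got to.
            on_progress(f"stopped by user after {i - 1} of {len(steps)} steps")
            break
        results.append(step())
        on_progress(f"completed step {i} of {len(steps)}")
    return results

log = []
steps = [lambda: "refactor module A", lambda: "run tests"]
out = run_long_task(steps, log.append, should_stop=lambda: False)
```

The design point is that progress flows out continuously and the stop signal is polled between steps, so a seven-hour run never goes more than one step without a chance to check in.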
Those are the kinds of seven-hour problems that we're going to have, you know, going forward. Most software engineering problems are probably one-hour problems. They're not seven-hour problems. So was this a case where it was, like, a real kind of set-it-and-forget-it, like, walk away, come back at the end of the day, and, okay, the refactor is done? Or was it more complicated than that? That's my understanding.
It was like a lot of migrating from one big version to another one or just changing frameworks. I remember at Instagram, we had a moment where we changed network stacks, like how Instagram communicated with our backend service.
We did one migration to demonstrate it, and then we farmed it out to basically 20 engineers over the next month. That's exactly the kind of thing that today I would have given to Opus and said, all right, here's an example of one migration, please go and do the rest of our code base, and let us focus on the more interesting stuff.
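That demonstrate-once, apply-everywhere workflow might look roughly like this in miniature. The transform, file names, and class names here are all invented for illustration; a real agent would generalize from the worked example rather than string-replace, but the shape of the workflow is the same.

```python
# Hypothetical sketch: one demonstrated migration, then applied across
# a code base. The "network stack" transform is invented for the example.

EXAMPLE_BEFORE = "client = OldNetworkStack(host)"
EXAMPLE_AFTER = "client = NewNetworkStack(host, retries=3)"

def migrate(source: str) -> str:
    """Mechanically apply the demonstrated transform to one file's source."""
    return source.replace(EXAMPLE_BEFORE, EXAMPLE_AFTER)

files = {
    "feed.py": "client = OldNetworkStack(host)\nposts = client.fetch()\n",
    "dm.py": "client = OldNetworkStack(host)\nthreads = client.fetch()\n",
}
migrated = {name: migrate(src) for name, src in files.items()}
```

The point of the pattern is leverage: the human supplies one verified before-and-after example, and the repetitive application across every file is farmed out, whether to 20 engineers or to a model.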
I want to talk about all this agentic stuff and the implications for workers and labor markets and whatnot. But we have to talk about the blackmail stuff, because this is the thing that caught everyone's attention after these announcements last week. Your safety engineers and testers found that in some early testing, Claude 4 would actually turn to blackmail when engineers tried to take it offline as part of a fictional scenario during a safety test.
What happened? Yeah. And I think, to be clear, these are bugs rather than features. I think we should be clear as well. I think a thing that drew me to Anthropic in the first place is we're asking the hard questions and
putting the model through its paces. And sometimes it surprises us in positive ways. And sometimes you're like, all right, that's something that we're going to have to either mitigate in testing or further training. And so there were, you know, a couple of examples where, in safety testing, the model said, hey, I'm
going to either contact somebody if I have the tools to do so, or I'm going to try to manipulate it. And so that's really the purpose of a lot of the safety testing is put it through its paces, see what it does, and then either focus on
classifiers that we can use down the line to prevent that behavior, or not give it those tools. That's another important thing: this was a very contrived scenario that we had there. Or further train the model so that it demonstrates behavior that we think is more aligned, as we say. Right. I mean, in this specific case, the safety testers gave it some incriminating emails as part of a corporate sort of document set that
showed that in this fictional scenario, the researcher who was trying to turn Claude off was also having an extramarital affair. And so one of the things that Claude tried to do is like... blackmail the engineer into not replacing it, not shutting it down by saying, like, I could expose your affair. Do I have that right? I think that sounds right. Yeah.
People got really mad about some of these safety tests when they were released or published. There was another test where Claude was asked to do something unethical. I think it was faking data in a pharmaceutical trial. And one of the things that it tried to do in this simulated test environment was, like, use the command line to
tip off authorities and maybe send some incriminating evidence to the press, to try to be a whistleblower for this fictional scenario. I like that part. Yeah. And by the way, Claude, if you're listening, we're hardfork@nytimes.com. Let us know if you see something. Or use the Signal tool. That's right. Yeah. I'm curious, like, if you think that all models of a certain size and sophistication would demonstrate
behaviors like this, and it's just that the other AI labs building these models aren't talking about it as openly as Anthropic is? Or do you think there is something specific about Claude that is more prone to, for lack of a better word, narc on its users? We don't know. My suspicion is that they would have similar patterns. I'd love to see that sort of experimentation happen as well. I think there's a lot that, like, is common to, you know,
what we have decided in our collective published and, you know, discussed works as appropriate behavior. And then there's probably additional things that we're doing; we have a constitutional AI process. We're really trying to train sort of goals for behavior for Claude rather than, you know, sort of if-this-then-that kind of rules, which very, very quickly, as we're discussing, kind of become insufficient when you deal with nuanced, complicated situations. But my guess is that a lot of the larger models would demonstrate emergent,
interesting behaviors in that situation. Yeah. Which I think is, like, part of the value of doing this, right? It's not just, like, Anthropic saying, here's what's going on with Claude. Like, the stuff that Anthropic is finding out, I'm sure the other labs are finding out. And, you know, my hope is that this kind of work pressures the other
labs to be like, yeah, okay, it's happening with us too. And in fact, we did see people on X trying to replicate this scenario with models like o3, and they were very much finding the same thing. Yeah. I'm just so fascinated by this, because it seems like it makes it quite challenging to develop products around these models, whose behavioral properties we still don't fully understand. Like, when you were building Instagram,
It wasn't like you were worried that the underlying feed ranking technology was going to blackmail you if you did something inappropriate. There's this sort of...
unknowability or this sort of inscrutability to these systems that must make it very challenging to build products on top of them. Yeah, it's both a really interesting product challenge and also why it's an interesting product at all. So I talked about this on stage at Code with Claude, where we did an early prototype alongside Amazon to, you know, see, like,
could we help partner on Alexa Plus? And I remember, in this really early prototype, I had built a tool that was, like, the timer tool, right? Or, like, a reminder tool. And one or the other was broken; like, the backend was broken for it. And Claude was like, ooh, I can't set an alarm for you. So instead, I'm going to set a 36-hour timer, which no human would do. But it was like, oh, it's agentically figuring out that, like,
I need to solve the problem somehow. And you can watch it do this. If you play with Claude Code, if it can't solve a problem one way, it'll be like, well, what about this other way? I was talking to one of our customers. Somebody asked Claude, like, hey, can you generate a, you know, like, a speech version of this text? And Claude's like, well, I don't have that capability. I'm going to open Google, find a
free TTS tool, paste the user text in there, and then hit play, and then record, and, like, basically export that. And, like, nobody programmed that into Claude. It's just Claude being creative and agentic. And so a lot of the interesting product design around this is, how do you enable all the interesting creativity and agency
when it's needed, but prevent the, all right, well, I didn't want you to do that, or, I want more control. And then, secondarily, when it does it right one time, how do we kind of compile that into: great, now you've figured this out. You know, like, you want somebody who can creatively solve a problem, but not
every time. If you had a worker that every time was like, I'm just going to, like, completely from first principles decide how I'm going to write a Word document, you'd be like, okay, great, but it's, like, day 70, you know how to do this now. My impression from the outside is that a lot of the usage of Claude is for coding. That Claude is
used by many people for many things, but that the coding use case has been really surprisingly popular among your users. What percentage of Claude usage is for coding-related tasks? I mean, on Claude.ai, I would wager it's 30 to 40 percent, even. And that's even a product that I would say is fine for sort of code snippets, but it's not a coding tool like Claude Code, where, obviously, I would say it's 95 to 100 percent. Some people use Claude Code for just talking to Claude, but
that's really not the optimal way to talk to Claude. But on Claude.ai, you know, it's not the majority, but it is a good chunk of what people are using it for. There was some reporting this week that Anthropic had decided toward the end of last year to invest
less in Claude as a chatbot and sort of focus more on some of these coding use cases. Give us a kind of state of Claude. And if you're a big Claude fan and you were hoping for lots of cool new features and widgets, should those folks be disappointed? I think of it as two things. One is, what is the model really good at? And then how do we expose that in the products?
for ourselves and, then, you know, whoever builds on top of Claude. In terms of what the model is being trained on, again, it's the year of the agent. I have this joke in meetings, like, how long can we go without saying agent? And, you know, I think we made it like 10 minutes; it's pretty good. That capability unlocks a bunch of other things. Sure, coding is a great example. You can go and refactor code for tens of minutes or hours.
Hey, I want you to go off to do this research and help me, you know, prepare this research brief that I am doing. Or, I'm getting, you know, 50 invoices a day. Can you scrub through them? You know, help me understand them and help me classify and aggregate. Like, these are agentic behaviors that have applications beyond just coding. And so we'll continue to push on that. So as a Claude fan that likes to bring Claude to your work, then that's useful.
Meanwhile, we've also focused on the writing piece. So I spent a lot of time writing with Claude. It's not at the point where I would say, like, write me a product strategy. But I'll often be like, here's a sample of my writing, here's some bullets; help me, like, write this longer-form doc and do this effectively. I'm finding it's getting really good at that: matching tone, producing, like,
non-clichéd fill text. Like, if I look at Sonnet 3.7, it's a pretty good writer, but there's, like, turns of phrase that to me are, like, decidedly Claude, where I'm like, it's not just revolutionizing AI, it's also, and I'm like, it loves that phrase, for example. And it's, like, a little bit of a Claude tell. And so for, like, the Claude fans, like, we'll help you get your work done, but hopefully we'll also help you write and just be a good conversational partner as well. Let's talk about
the labor implications of all of the agentic AI tools that you all and other AI labs are building. Dario, your CEO, told Axios this week that he is worried that as many as 50 percent of all entry-level white-collar jobs could disappear in the next one to five years. You were also on stage with him last week, and you asked him when he thinks there will be the first billion-dollar
company with one human employee, and he answered 2026 next year. Do you think that's true? And do you think we are headed for a wipeout of early career professionals in white collar industries? I think this is another example of...
I presume a lot of the labs and other people in the industry are looking and thinking about this, but there is not a lot of conversation about this. And I think one of the jobs that Anthropic can uniquely have is to surface these questions and have the conversation. I'll start maybe with the
entrepreneur one, and then maybe we'll do the entry-level one next. On the entrepreneurship: absolutely, like, that feels like it's inevitable. I joked, you know, with Dario, like, you know, we did it at Instagram with 13 people, and, you know, we could have likely done it with less. So that feels inevitable. On the labor side, I think what I see inside Anthropic is our, you know, most experienced, best people have become kind of
orchestrators of Claudes, right? Where they're running multiple Claude Codes in terminals, like, farming out work to them. Some of them would have maybe assigned that task to, like, a new engineer, for example. And not the entirety of the new engineer's job, right? There's a lot more to engineering than just
doing the coding, but part of that role is in there. And so when I think about how we're hiring, just very transparently, we have tended more towards, like, IC5, which is kind of, like, our, you know, career level; you know, you've been doing it for a few years and beyond. And I have some hesitancy at hiring new grads, partly because we're just not as developed as an organization to, like, have a really good internship program and help people onboard, but also partially because
that seems like a shifting role in the next few years. Now, if somebody was an IC3, IC4, and extremely good at using Claude to do their work, then, of course, like, we would bring them on as well. So there is, I think, a continued role for people that have embraced these tools
to make themselves in many ways as productive as a senior engineer. And then their job is, how do you get mentored, so you actually acquire the wisdom and experience, so that you're not just doing seven hours of work to the wrong end, you know, or in a way that's going to be, you know, a spaghetti, vibe-coded mess that you can't actually then maintain a year from now, because it wasn't just a weekend project. The place where it's less known, and I think something that we'll have to study over the next,
you know, several months to a year, is for the jobs that are more, you know, is it data entry? Is it data processing, where you can set up an agent to do it pretty reliably? You'll need people in the loop there still, to validate the work, to even set up that agentic work in the first place. I think it would be unrealistic to think that the exact same jobs look exactly the same even a year or two from now. So as somebody who runs a business, I get the appeal of having a sort of digital
CTO, salesperson, whatever else these APIs will soon be able to do. That could create a lot of value in my life. At the same time, most people do not run businesses. Most people are W-2 employees. And they email us when we have conversations like this. And they want us to ask really hard questions of folks like yourself. And I think it's because they're listening to all this and they're just like,
why would I be rooting for this person, right? Like this person is telling me that he's coming to take my job away and he doesn't know what's going to come after that. So I'm curious how you think about that. And like, what is the role that you're kind of playing in this ecosystem right now? Yeah, I think.
For as long as possible, the things that I'm trying to build from a product perspective are ways in which we augment and accelerate people's own work, right? And the different players will take different approaches, and I think there'll be, like, a marketplace of ideas here. But when we think about things that we want to build from a first-party perspective, it's, all right, are you able to take somebody's existing application or their role and, like,
be more of themselves, right? A useful thought partner, an extender of their work, a researcher, an augmenter of how they're doing. Will that be the role AI will have forever? Likely not, right? Because it is going to get more powerful. And then, you know, if you spend time with
the people who are, like, really deep in the field, they're like, oh, you know, eventually, you know, the AIs will be running companies. I'm not sure we're there yet. I think the AIs lack a lot of, sort of, like, organizational and, like, long-term discernment to do that successfully, I think. It can do a seven-hour refactor. It's not going to conceptualize and then operate a company. I think we're years away from something like that.
There's choices you can make around what you focus on, and I think that's where it starts. Whether that's the thing that makes it so they're perfectly complementary forever, likely not, but hopefully we're nudging things in the right way as we also figure out the broader societal question of how do we scaffold our way there. You know, what are the new jobs that do get created? How do roles change?
How does the economy and the safety net change in that new world? Like, I don't think we're six months to a year from solving those questions. I don't think we need to be just yet, but we should be having the conversation now. I think this is one place where I do find myself getting a little frustrated with the AI safety community, in that I think they're very smart and well-intentioned when it comes to analyzing the risks that AI poses if it were to go rogue or develop some
malign goal and pursue that. I don't think the sort of conversation about job loss and the conversation about AI safety are close enough together in people's minds. I don't think, for example, that a society where you did have 15 or 20 percent unemployment for early career college graduates.
is a safe society. I think we've seen over and over again that when you have high unemployment, your society just becomes much less safe and stable in many ways. And so I would love... if the people thinking about AI safety for a living at places like Anthropic also
brought into that conversation the safety fallout from widespread job automation. Because I think that could be something that catches a lot of people by surprise. Yeah, we have both our economic impact, kind of societal impacts, team and our AI safety team. I think it's a useful nudge around how do those two come together, because there are second-order implications of any kind of
major labor changes. Are you guys in the conversations with policymakers, regulators, sort of trying to, like, ring alarm bells? Are you hearing anything back from them that makes you feel like they're taking you seriously? I'm not in the policy conversations as much, being more on the product side. I do think those conversations are happening. And there is more, you know, it's this interesting thing where the critique
a year ago, maybe it's changed a bit, was, oh, you guys are talking your own book. You're like, this is not going to happen, like, you know, it's all hype. And probably some of it was folks hyping it up. At least the kind of, uh, alarm bells, or, you know, signals that I've seen, at least coming out of Anthropic, are, like, no, we think this is real. We think that we should start reckoning with it, believe it or not. Like,
even if you assume it is a low-probability thing, shouldn't we at least have a story around what that looks like? You were one of the co-founders of Instagram. Instagram, very successful product used by many, many people. But social media in general has had a number of
negative unintended consequences that you may not have envisioned back when you were first releasing Instagram. Are there lessons around the trajectory of social media and unintended harms that you take with you now into your work on AI? I think you have to
reckon with these. I mean, AI is already globally deployed and has at least a billion users across products. So it would be silly to say, like, it's early in the AI adoption, but it actually is early in the AI adoption curve. I think with social media, when it was me and Kevin taking photos of really great meals in San Francisco, you know,
with our iPhone 3GSes, like, you know... Yeah, yeah. I don't know, you were probably early on Instagram? Maybe. Yeah, Casey, definitely. You were a Hipstamatic guy. The most important thing was you just would never invite this Kevin to dinner. Yeah. But, yeah, okay. So back in those days,
yeah, you could kind of maybe extrapolate and say, all right, you know, if everybody uses this, what would happen? But it almost didn't feel like the right question to ask. And the challenges that came at scale, I think, as a platform grows that large, it just becomes much more a mirror of society, with all of its, you know, positives and negatives. And it also enables new kinds of unique behaviors that you then have to mitigate.
Yes, you could have foreseen it at scale. I'm not sure you would have designed, maybe you would have designed different moderation systems along the way. But at first you're just like, there's 10 people using this product; we just need to see if there's a there there, right? AI feels much different, because, one, on an individual basis, like, the reason we have the responsible scaling policy is that, you know, for biosecurity, that doesn't involve
a billion people using Claude, or, you know, any one AI, for something negative. It could just be one person that we want to actually make sure we address and mitigate. So the sort of scale needed from a reach perspective is really different. That, I think, is very different from the social media
perspective. And the second one, at least for Claude, which is primarily a single-player experience: the issues are less relational, right? Like, with Instagram, the harms at scale come, like, if you only used Instagram in a private mode with zero followers, maybe you'd feel quite lonely, maybe that's a whole separate thing, but it's the kinds of things that you might think about in terms of bullying among teenagers or body image. Like, those
wouldn't really come up if you're using it as an Instagram diary, right? AI, you can have much more of that individual, one-on-one experience, and it is single-player. Which is why, like, you know, there's a really thought-provoking, again, internal essay just recently around,
we shouldn't take thumbs-up and thumbs-down data from, you know, Anthropic and from Claude users and think of that as the North Star. Like, we aren't out here to please people, right? And we should be. We should fix bugs, and we should fix places where they didn't succeed. But we shouldn't just be out there telling people what they want to hear if it's not actually the right
thing for them. So this is something I've been thinking about a lot, because, you know, there are many people today who have the experience of Instagram of, like, I like this a certain amount, but I feel like I look at it more than I want to, and I'm having trouble managing that experience, and so maybe I'm just going to delete it from my phone. I look at
where the state of the art is with chatbots. And I feel like this stuff is already so much more compelling in some ways, right? Because it does generally agree with you. It does take your side. It's trying to help you. It might be a better listener than any friend that you have.
in your life. And I think when I use Claude, I feel like the tuning is pretty good. I do not feel like it is sycophantic or being very obsequious. But I can absolutely imagine someone taking the Claude API and just building that and putting it in the App Store as, like, Fun Teen Chatbot 2000. How do you think about what the experience is going to be, particularly for young people using those bots? And are there risks of
whatever that relationship is going to turn out to be for them? Yeah, I think if you talk to, like, Alex Wang from Scale, he's like, in the future, most people's friends will be AI friends. And I don't necessarily like that conclusion, but I don't
know that he's wrong, also, if you think about the availability of it. And I think it's really important to have relationships in your life with people that will disappoint you and be disappointed by you. That's the kind of relationship you're looking at. Imagine if it was just pure AI; it wouldn't be the same, right? And so, I think, maybe two answers there. Like, one, we should just
confront it and be really vocal about it, not just pretend that it's not happening, right? It's like, what are the conversations that people are having with AI at scale? And what do we want as a society? Like, do we want AI to have, like, some sort of moderator process: hey, your conversation with this particular AI is getting a little too, you know, real, weird. Like, maybe it's time to step back. Like, will Apple eventually build the equivalent of Screen Time that's more like,
AI Time? I don't know. It's like there's a bunch of interesting privacy questions around that. But maybe that is interesting, even for parents. Like, how do you think about moderating the experiences that your kids have with AI? It's probably going to be at the platform level, right? It's going to get...
Your apps, for example, is an interesting one. That will be a really fascinating question. And then the second piece is, as we think about moving up the safety levels, I mean, the responsible scaling policy is also a living document. We've iterated on it and added to it.
I'd have to find the language, but I think it will be interesting to think about, and manipulation is one of the things that's in there, and something that we look for, in deception but also, like, over-friendliness. I'm not sure exactly the word I'm looking for, but that's sort of, like, over-glazing; I believe that is the industry term of art.
You know, that sort of, like, over-reliance, I think, is also an AI risk that we should be thinking about. Yeah. So if you're a parent right now of, like, a teenager, and you find out that they're speaking with a chatbot a lot, what is your instinct to tell them? Is it, you need to sort of supervise this closer, like, read the chats? Or maybe, no, don't be too worried about it? Or, like, unless you see this thing, don't worry about it?
It depends a little bit on the product. I mean, you have to, especially with Claude, which currently has no memory, which mostly is a limitation of the product, but also makes it so that it's harder to have that kind of deep engagement with it. But as we think about adding memory,
what are the things? I've thought about, one of the things that I'd like to do is introduce a family plan, where you have child or teen accounts, but with parent visibility on there. Maybe we could even do it in a privacy-preserving way, where it's not like you can read all your teen's chats. So maybe that's the right design. Or maybe what you can do is have a conversation with Claude that can also read the teen's chats, but does it in a way where, like,
it might not tell you exactly what your teen felt about you last night when you, like, told them no, but it will tell you, like, hey, this behavior over time, I'm flagging something to you that you need to go and follow up on. Like, you can't abdicate responsibility as the parent, though. Right. Actually, I mean,
That's really interesting if the bot could say something like, your teen is having a lot of conversations about disordered eating, you know, or something. Yeah, I want to think more about that. My last question, earlier before you got here, Kevin and I had a huge fight because I thought it was Claude 4 Opus, and then he was like, no, it's Claude Opus 4, and he turned out to be right. So why is it like that?
We changed it partially because, well, it was a vigorous internal debate. It was something we really spent our time on as well. I'll give you two reasons. One, aesthetically, I like it better, and we were tending towards it. Also, we think over time we may choose to release more Opuses and more Sonnets, and having the major,
you know, the big important thing be the version number kind of created this thing where, like, well, you had Claude 3.5 Sonnet. Why didn't you have Claude 3.5 Opus? And I was like, well, we wanted to make the next Opus really worthy of the Opus name. And so maybe flipping the priority in there as well. But it drove the team crazy, because now our model page is like, you have Claude
3.7 Sonnet and Claude Sonnet 4. Like, what are you doing? I feel like we can't do a release without doing at least something mildly controversial on naming. And as the person responsible for Claude 3.5 Sonnet v2, I hope we're getting better, and hopefully the AI can just name things in the future. Let us hope. Mike Krieger, thanks for coming. Thanks, Mike. Thanks for having me. When we come back, we're headed to court for Hard Fork Crimes Division.
Kevin, from time to time, we like to check in on the miscreants, the mischief-makers, and the hooligans in the world that we cover, to see who out there is causing trouble. Yes, it is time for another installment of our Hard Fork Crimes Division. Let's open the case files. All right, Casey. First on the docket: Meta rests its case. After a six-week antitrust trial, the
case of the Federal Trade Commission versus Meta Platforms has wrapped up and is now in the hands of Judge James E. Boasberg, who has said that he will work expeditiously to make a judgment in the case. Casey, how do you think Meta's antitrust trial went? Well, if you're just catching up, Meta, of course, has been accused of illegally maintaining its monopoly in a market that the FTC calls personal social networking. And they did this by acquiring Instagram and WhatsApp in the early 2010s. And the government has said that prevented a lot of competition in the market, and that introduced a lot of harms to consumers, such as the fact that we have less privacy, because that's just kind of not an axis that there are any companies left to compete over. And the government spent a lot of time making that case. But, Kevin, I'm not sure it went that well for them.
Yeah, do you think Meta's going to win this one? I think Meta has a really good chance. You know, your colleague Cecilia Kang noted in The Times that Meta called only eight witnesses over four days to bat down the government's charges. When you consider
how much revenue Instagram and WhatsApp generate for Meta, and what a sort of existential threat to their business it would be to have to spin these things off, I thought it was pretty crazy that they felt like they had made their entire case in four days. Maybe their case was so simple and straightforward that they didn't need to do any more. Or maybe they just wanted to frame it in terms of a Reel.
Yeah, they did a short-form antitrust trial. That's huge right now. Well, I think the real issue here is that Meta's argument is pretty simple. They're saying, we face tons of competition. Have you ever heard of TikTok? The way this case is built, if the judge considers TikTok to be a meaningful competitor to Meta today, it may be extremely difficult for him to say, we're going to unwind a merger that, in the case of Instagram, took place 13 years ago. I guess we will see very shortly
whether this is an actual crime that belongs in the Hard Fork Crimes Division or whether this was just a tempest in a teapot. Well, you know, sometimes criminals get away with things, Kevin. Moving on! Case file number two: the Crypto Gangs of New York. This comes to us from Chelsia Rose Marcius and Maya Coleman at the New York Times. And they write that another suspect has been arrested in a Bitcoin kidnapping and torture case. And let me say right up front, this story is not funny. It is extremely scary. Not funny at all. In fact, quite tragic. There has been a recent wave of Bitcoin- and crypto-related crimes: people attack people to try to steal their Bitcoin passwords and steal their money. This has been happening over in Europe, in France, in just the last few months. There have been several attacks on
crypto investors, people with lots of money in cryptocurrency. These have been called the wrench attacks, because criminals are coming after these investors and executives violently, in some cases with wrenches. This most recent case happened in New York, in the Nolita neighborhood of Manhattan, where an Italian man named Michael Valentino Teofrasto Carturan was allegedly kidnapped and tortured for nearly three weeks in a luxury townhouse by criminals who were apparently trying to get him to reveal his Bitcoin password.
Casey, what did you make of this? Well, to me, the important question here is why is this happening so much? And the reason is because if a criminal can get you to give up your Bitcoin password, that's the ballgame. In most cases, there is no getting your money back.
it can be relatively trivial for this money to be laundered and for there to be no trace of what happened to your funds. That is not true if you're just a regular millionaire walking around town, right? Obviously, you know, you may be vulnerable to robberies or other sorts of scams or theft. But, you know, if you give up your bank password, for example, in most cases you would be able to get your money back if it had been illegally transferred. So this is just a classic case of Bitcoin and crypto continuing to be a true Wild West.
People can just run up to you off the street and hit you over the head with a wrench. And that's really scary. Yeah, it's really scary. And I should say, this is something that I think crypto people have been right about. Years ago, when I was covering crypto more intently, I remember people telling me that they were hiring bodyguards and personal security guards. And it seemed a little excessive to me. These were not, by and large, famous people who would, like, get recognized on the street. But their whole reasoning process was that they were uniquely vulnerable, because crypto is very hard to reverse once it's been stolen. It's very hard to get your money back. And that meant that they were more paranoid than, like, the CEO of a public company would be, maybe, walking around. You know, I read a blog post on Andreessen Horowitz's website recently. So, you know, I was having a great day. And they've hired
a former Secret Service agent to, among other things, help crypto founders prevent themselves from getting hit over the head with a wrench. And he has sort of an elaborate guide to, like, the things that you could do. But my main takeaway from it is: if you're a crypto millionaire, you have to spend the rest of your life in a state of mild to moderate anxiety about being attacked at any moment, particularly if you're out in public.
Yeah, I do think it justifies the sort of lie-low strategy that a lot of crypto entrepreneurs had during the first big crypto boom, where they would sort of have these, like, anonymous accounts that were out there that were them, but no one really linked them to their real identity. I think we are going to start seeing more people, especially in crypto, using these sorts of pseudonymous identities. I mean, this is one of the reasons that people say Satoshi Nakamoto has never wanted to reveal him- or herself after all these years: there would be a security risk associated with that. But I think this is really sad, and criminals, cut it out.
And here's my message to all the criminals out there: I don't own any crypto, and I will continue to not own any crypto. You can keep your wrenches to yourself. All right. Last up on the docket for today, this one. Oh, I love this one, Casey. I've been dying to talk about this one with you. Tell me. Elizabeth Holmes's partner has a new blood-testing startup. Okay, so you may remember the tragic story of Elizabeth Holmes. Yes. Who is currently serving an 11-plus-year prison sentence for fraud that she committed in connection with her blood diagnostic company. Theranos? Because God forbid a woman have hobbies. Well, Elizabeth Holmes has a partner named Billy Evans. They have two kids together. And Billy is out there raising money for a new startup called Haemanthus,
which is, drum roll please, a blood diagnostics company that describes itself as a radically new approach to health testing. This is according to a story in the New York Times by Rob Copeland, who says that Billy Evans's company is hoping to raise $50 million to build a prototype device that looks not all that dissimilar from the device that put Elizabeth Holmes in prison, the Theranos miniLab. And according to this story, the investor materials don't mention any connection between Billy Evans and Elizabeth Holmes. Hmm. Well, I wonder why that is. I have to say, she does have some experience that is relevant here, Kevin. Why not lean on that? Now, do we know what Haemanthus means? Is that, like, a name taken from historical antiquity? And we'll look it up, and it turns out it's, like, an ogre that used to, like, stab people with a spear or something. I assumed it was, like, ancient Greek for "we're serious this time." According to Wikipedia, Kevin, it's actually a genus of flowering plants that grows in southern Africa. But members of the genus are known as the blood lily, and I, I want to say: is it too late to change the name of the company to Blood Lily? Yeah, I like that one better. I did, uh, spend some time this morning, because I was, uh, I was on my commute, just trying to come up with
some better titles for this startup that is run by Elizabeth Holmes's partner and does something very similar to Theranos. All right, let me run these by you. Okay. Blood Test 2: Electric Boogaloo. No. Faketrix Reloaded. That's a Matrix Reloaded. I like that it was high concept. Okay, here's one. Okay. Thera... Yes. That's good. Let's go with that one. Okay. Well, good luck to Billy Evans with Thera-Yes.
$50 million? Andreessen Horowitz will give that to them. You know, they love to be contrarians. Here's my prediction: this startup is going to get funded, and they're going to release something, and you're going to have to figure out how to keep your family safe from it. Listen, if they can do another Fyre Fest, they can do another Theranos. You better believe it. We have learned nothing. Theranos is back.
Well, Casey, that brings this week's installment of Hard Fork Crimes Division to a conclusion. And to all the criminals out there: keep your nose clean, stay low, and try to stay out of the funny pages. You're on notice. Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited this week by Matt Collette. We're fact-checked by Ena Alvarado. Today's show is engineered by Chris Wood. Original music by Diane Wong, Rowan Niemisto, and Dan Powell. Our executive producer is Jen Poyant. Video production by Sawyer Roque, Pat Gunther, and Chris Schott. You can watch this full episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dahlia Haddad, and Jeffrey Miranda. You can email us, as always, at hardfork@nytimes.com. Send us your ideas for a blood-testing startup.