
Meta Bets on Scale + Apple’s A.I. Struggles + Listeners on Job Automation

Jun 13, 2025 · 1 hr 7 min · Ep. 140

Summary

This week, Hard Fork dives deep into Meta's significant investment in Scale AI and its latest reorg aimed at achieving 'superintelligence', analyzing the company's history and challenges in the AI race. They then break down Apple's recent developer conference, highlighting delayed AI features, internal skepticism, and minor announcements, contrasting Apple's struggles with its past innovation. Finally, the hosts open the listener mailbag to hear firsthand accounts and executive perspectives on how AI is already affecting the job market, from junior engineers to CFOs.

Episode description

This week, Meta hits the reset button on A.I. But will a new research lab and a multibillion-dollar investment in Scale AI bring the company any closer to its stated goal of “superintelligence”? Then we break down Apple’s big developer conference, WWDC: What was announced, what was noticeably absent, and why Apple seems a little stuck in the past. Finally, a couple of weeks ago we asked if your job is being automated away — it’s time to open up the listener mail bag and hear what you said.


Additional Reading:


We want to hear from you. Email us at [email protected]. Find “Hard Fork” on YouTube and TikTok.

Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Transcript

Hi, my name is Sandra E. Garcia, and I'm a reporter at the New York Times. I write for the Styles Desk, where we try to understand our complicated world by keeping up with culture. We want to take you to the forefront of cultural shifts and let you know why things are trending. Our subscribers make this kind of coverage possible so The New York Times can continue to highlight the stories that go beyond breaking news. Help us keep a pulse on culture by subscribing at nytimes.com slash subscribe.

Let me ask you about this. There's this startup called The Browser Company, and they have a new browser called Dia, which is sort of, you know, based around AI. And so you have like... an AI chat. And I was reading David Pierce's story about this in The Verge, and he was like, there was a point in using Dia where I came to understand that it knows what my social security number is.

because I had entered it onto a website. And when I think about all of the things that I put into a web browser, some of it very sensitive information, Kevin, I don't know that I want a cloud service to have total knowledge and memory of what I've been browsing. Yeah, that sounds to me like a bad idea. Perfect. Cut! Print! We're moving on.

I'm Kevin Roose, a tech columnist at The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, Meta hits the reset button on AI. But does it actually believe in superintelligence? Then: Apple's big developer conference was this week, and it still seems a little stuck in the past. And finally, we asked you if your jobs are being automated away. It's time to hear what you said.

Well, Casey, we have a live show coming up. Boy, do we. My God. So on June 24th, we are going to be at SF Jazz in San Francisco for the first ever Hard Fork Live and... Boy, do we have some special guests to announce. Now, do I have to do anything for the show? I would like you to do the following things. One, show up. Okay.

Two, stand on stage with me. And three, help me interview some of our amazing special guests. All right, you drive a hard bargain, but I'll do it. So Casey, tell the people who's coming to Hard Fork Live. Let me tell you about the show, Kevin. If you're coming to Hard Fork Live, you're going to be hearing from the co-founder and CEO of Stripe, the big payments platform. That's right, Patrick Collison will be at the show. You will be hearing from, and seeing the work of,

the founder of Skip, a mobility company that makes exoskeleton pants. Kathryn Zealand will be on the show, Kevin. And finally, to cap it off: we have from OpenAI the CEO, Sam Altman, returning to Hard Fork, and he's bringing along Brad Lightcap, his chief operating officer. We're going to have a big conversation about AI. And so that's the stuff we can tell you about.

But if you can believe it, there's actually other stuff that we're working on that we're not ready to tell you about yet. But suffice it to say, this show is packed. Yes, our cup runneth over. When we set up the show, we sort of booked like a medium-sized venue,

sort of expecting that, you know, some people would want to come out. The demand was overwhelming. We sold out very quickly. And so you cannot buy tickets to the show unless you're scalping them on StubHub or whatever. Don't do that, by the way. Yeah, that's right. But here's what: if you can't come to the show but you just want to stand

outside the building, I'm going to come out during intermission and just tell you what happened. Casey, I don't know how to break it to you. There's no intermission. There's no intermission? What if I have to pee? So if you did not get a ticket to the show, don't worry. We will be bringing you the interviews from Hard Fork Live on this very podcast feed with not too much delay. That's right. You'll be able to take in part of the show even if you were not there physically. Exactly.

Yeah, but we're super excited for all of those of you who did get tickets to come say hi. Yeah, it's going to be incredible. See you there. All right, Kevin, let's dive into the story that I think you and I are both most excited about this week, which is what is happening over at Meta's AI division. Yes, they're having a big reorg and they are making big moves to try to catch up in the race to powerful AI. So Casey, what has been happening?

So the big headline news is that as of this recording, Kevin, multiple sources, including myself, have reported that Meta is about to make a huge investment in Scale AI, which is a startup here in San Francisco. They're going to take 49% of the company for somewhere between $14 and $15 billion. A lot of money. That's kind of thing one. Thing two is, as part

of that investment, the co-founder and CEO of Scale, Alexandr Wang, is going to come to Meta. He's going to leave Scale, come to work at Meta, and lead a new AI team that is devoted to creating superintelligence. Yes. And what caught my eye about this announcement was not only the dollar figure and the new superintelligence team, but the fact that Meta is also going out and trying to aggressively recruit a bunch of top AI talent to come sort of turn their ship around.

Yeah. And so recently on the show, you and I had a conversation about the somewhat botched rollout of Llama 4, the company's latest AI model, and what it told us about the state of AI over there. Today, I want to go through what happened over the past year that led Meta to this place, and ask what we make of this new plan. Do we think

this will put them back into the conversation with some of the real frontier AI labs? So before we get into that, is there anything we want to disclose to our dear listeners? Yes, I work at The New York Times, which is suing OpenAI and Microsoft over copyright infringement related to the training of large language models. And my boyfriend works at Anthropic. So let's dive into this story, Kevin. And I think the first thing to do

is kind of lay out the state of play. When you think of Meta's place in the AI ecosystem, where are they right now compared to some of the other big players? So right now, I would say Meta is considered a laggard. They've had a bunch of internal turmoil and disorganized, sort of messy, strategy decisions over the past couple of years. And so I think a lot of people feel like they have kind of fallen off in AI. And if you're Mark Zuckerberg, why is that a big problem?

Well, because AI is increasingly the thing that people in the tech industry are pinning their hopes on, not just as the future of large language models, but as really the future of social media, the future of lots of other things that Meta is interested in doing. And Meta has spent tons and tons of money trying to build these powerful AI systems and buying up a bunch of GPUs. They sit on one of the largest stashes of GPUs of any company in Silicon Valley.

And I think the feeling is that they have just not been doing a lot with that. That's right. And you compare that to some of their peers: look at OpenAI and the incredibly rapid growth of ChatGPT. Look at what Google is doing and how those products are gaining tons and tons of users. Anthropic is building a huge enterprise business. Meta is not yet part of

that conversation. So let's talk a little bit about how we got here, because Meta has been working on AI basically as long as any of these companies. What is the history of AI development at that company? It's a really strange and interesting story, because I think people who are just coming to this story may not know that Meta was once considered one of, if not the, leading AI companies in the world. So here's the capsule history. Back around 2012...

Facebook tried to acquire DeepMind. Mark Zuckerberg thought Demis Hassabis and his co-founders were doing cool and interesting things, thought this could be strategically important for Facebook, and so he made them an offer. Now, they did not sell to Facebook, obviously. They decided to sell themselves to Google instead. But Facebook, around this time, set up its own research division,

FAIR, which was led by Yann LeCun. And tell us about Yann LeCun. So Yann LeCun is a big deal in AI research. He is one of the people who is considered a godfather of deep learning. He won the Turing Award several years ago. A big deal in the world of AI. And he was able to recruit a bunch of other really good, well-respected AI engineers and researchers to come work at Facebook.

I would say during the 2010s, Facebook did a bunch of really solid AI research. They were pretty instrumental in building PyTorch, which is still used by most of the big AI companies to this day. They did sort of foundational work that led to the models that we have today. But then in 2017,

something happened, which is that Google published this transformer paper that outlined the framework for building the so-called large language models that we see today. And would you call that a transformative paper? Yes, yes, it did end up being transformative, because for basically the next five years, OpenAI and, to a lesser extent, Google and DeepMind were just building these bigger and bigger large language models and finding that they were actually

getting better with scale. And as that happened, Facebook and Yann LeCun did not really head down that same path, right? Facebook had a bunch of other priorities. This was right after Donald Trump's election. They were still worried about misinformation on Facebook. They were making bets on things like crypto and later the metaverse.

They were competing with TikTok. So there was just a lot going on at Facebook. And I think people that I've talked to say that the AI research division just didn't really get a lot of attention from the top. Yeah, well, and to the extent that they were shipping AI features, it was machine learning that

would help them identify bad content that needed to be removed, or improve a recommendation algorithm. So stuff that was useful to them, but was not the sort of large language model like ChatGPT that wound up, I think, being a lot more interesting to people. Yeah, and one of the reasons that they pursued that direction is because Yann LeCun, the guy leading their AI research division...

didn't believe in large language models, and still doesn't to this day. He is one of the sort of foremost critics and skeptics of the scaling era of large language models. Yeah, if you want to know why ChatGPT didn't come out of Meta, Yann LeCun is sort of the reason. They were never going to build that kind of product under him. Yes. So in 2022, after ChatGPT came out, Meta, like every other company in Silicon Valley, started to freak out. Mark Zuckerberg says...

Oh my goodness, we may be behind. We don't have our own sort of version of this that is ready to go. And so they kind of go into panic mode. They start buying up a bunch of GPUs and start working on what becomes Llama, which is their... version of an AI language model.

Yeah, and the first version of Llama actually winds up, I think, being more successful than some people might have guessed. Yeah, and at this time, Meta still has a lot of really good AI researchers, and, you know, Yann LeCun doesn't believe in large language models, but a

bunch of other people there do. And so they start building Llama, and they make this decision to open source Llama. And so it does actually get widely used, because unlike ChatGPT, which you have to pay for, if you're a developer, you can just sort of build on top of Llama for free. And this, by the way, was a hugely important decision, Kevin, because it was

meant to be a strategic move that would blunt the momentum of OpenAI, right? The idea was: we will take this product that you are selling for $20 a month, we will give it away for free, it will put cost pressure

on you, it will make it harder for you to innovate. So that was the idea behind Llama. And I think it's important to remember, because whenever you hear Meta talking about open source, it's always like, well, open source will save the world. It was like, no, open source was meant to slow down OpenAI and Google.

Right. And so I think during the last few years, this sort of post-ChatGPT era of AI research and development, a lot of Meta's top AI researchers have left. And everyone's got their reasons for leaving. But one of the things that I've been hearing from people who left Meta during this time is that the company just did not believe in AI the way that some of the other big AI labs did. Yeah. And we should talk about why that is, right? I think if you are a researcher at a lab like OpenAI,

from the very start, you have been trying to build the absolute most powerful AI that you can, essentially almost without regard for how much that changes society, right? You believe that this thing is inevitable. You're going to build it. You're going to try to steer it in a positive direction. But you think this thing is going to be hugely

transformative. If you work at a giant tech incumbent with a trillion-dollar valuation, there is no obvious reason why you want to disrupt all of society, right? Because if all of society is disrupted, that might not necessarily be good for you. So I could understand

why, if you're running a company like Meta, you're incentivized to think a little bit smaller. You're thinking not, how do we build superintelligence? You're thinking, how can we create a slightly better advertising recommendation algorithm? Totally, and that's fine as far as strategies go, but if you are an

ambitious AI researcher who's really committed to this idea that this is a transformative technology, you want to do that at a place that actually believes what you do, that believes that what you are working on is not just a better way to sell shoes to people.

or make chatbots that go inside Instagram. You want to be building superintelligence. And so a lot of their top AI talent did leave and go to other places. Yes. And around that time, Kevin, the company's playbook stopped working. And that playbook, which we've seen so many other

times across so many different products, is essentially the fast-follower model. You let somebody else figure out something interesting, then you reverse engineer it, put it in your own products, and take over. This is what Meta did, for example, with Snapchat Stories. It put Stories everywhere, and it was hugely

successful for them. They start to think they can do the same thing with AI. We will let the frontier labs go spend all the money and figure out all the innovations. We'll read all of the research they publish. We'll build our own version of that. We'll give it away for free.

It'll be a little bit behind the state of the art, but it won't matter because we'll be basically there. That's good enough for our purposes. And this works up until about Llama 3, but then they start building Llama 4. And an interesting thing happens, which is that the latest frontier models turn out not to be as easy to copy as the ones that came before.

Yes. I think a lot of people who were impressed by the first couple versions of Llama saw Llama 4 come out recently and thought: this is a company that has lost its way, and they are no longer considered a frontier AI lab. Yeah. And so the last thing that I want to say, as part of this capsule history, before we move into the present, is that while Meta is making some big moves now, it's important to remember they also tried to make some big moves in January 2024, when they also did a big

reorganization of their AI teams in recognition of the fact that they weren't getting the results that they wanted. They didn't go out and make a huge investment or try to bring in a bunch of new talent. It was sort of more on the order of reshuffling a few teams.

Mark Zuckerberg went out and did an interview about it. He started talking for the first time about trying to reach AGI, so artificial general intelligence, one notch down from superintelligence. And he said explicitly that he had to do that because he knew it was going to attract more researchers. And then...

A year went by, and that reorganization did not get the job done. And so that is what finally brings us to today: this investment in Scale, and this once again hitting the reset button, trying to find a path forward for them in AI. Yeah, so I want to ask you about two possible ways to interpret this week's news out of Meta. One way is that this is basically a sign that Meta has

kind of come to its senses after many years of betting on these directions for AI research that did not pan out, that it is, you know, sending Yann LeCun to sort of research Siberia, and that it is essentially trying to buy its way

back into the race to AGI by bringing on Alexandr Wang and Scale AI, and that it is going to spend whatever it takes to actually get back to the frontier of AI research and development. The other way is that Meta is basically pretending here. It has realized that if it says it believes in AGI or even in superintelligence, that might allow it to recruit these engineers who would otherwise be going to work for OpenAI or Google or Anthropic or somewhere else. But Meta

still wants to do what it has always wanted to do, which is to use AI to, I don't know, build companions into Instagram or develop sort of things for the metaverse. It has essentially changed its posture toward AGI as a recruiting strategy, and it is not actually trying to build superintelligence. Which of those two explanations do you think is closer to the truth?

Hmm. I think I'm going to cop out and say that the answer is somewhere in between. Yesterday, as part of my reporting, I was going through the evolution of the way that Zuckerberg has talked about powerful AI. And it is true that his desire to build more powerful AI has scaled along with...

what some might call a desperation to get back into this race, right? I think back when he thought that he could use AI as a very practical tool to enhance a bunch of his current business objectives, he felt no need to talk about superintelligence whatsoever.

But once he noticed that all of the best talent in the world did not want to come work at his company, that's when he said, okay, I am going to have to change my tune on this front. Where I think your first explanation resonates with me the most is...

It's still not really clear to me how superintelligence benefits Mark Zuckerberg and Meta in particular, right? I think that if you talk to the researchers at the frontier labs about why they want to build superintelligence, it's like, well, they want to usher in a world of abundance.

They want to cure disease. They want to solve poverty. And a lot of people think that those claims are sort of too grandiose. But I've talked to the real believers there. I think they really believe that. That's not what Mark Zuckerberg wants to do. Mark Zuckerberg wants to run

Meta and have Meta be among, if not the, most powerful companies in the world. And in a world where superintelligence exists, I'm not sure Meta will have much of a role to play. Yeah, I want to ask you about one other angle here that I saw people discussing, which was actually about Scale AI more than Meta. So Scale AI, for people who are not familiar: they are not sort of an AI R&D lab, right? They are essentially a data provider

to the big AI labs. So Casey, how would you explain what Scale AI does, and how might that fit into Meta's strategy here? Sure. So the bulk of their business works like this. They have a couple of subsidiaries. Those subsidiaries hire people for pretty cheap. And then they show them a bunch of

content. For example, they might show them content that might violate Meta's standards because it has violence or nudity. And the content moderator will go in, they will say, okay, yeah, this violates the standard, and I'm going to categorize it, and I'm going to feed that back to Scale AI.

And then Scale AI is going to label that data and clean it up and send it back to Meta so that Meta can then build a machine learning classifier to sort of create automated content moderation systems. So it's that kind of service that has been really important for them. Now, it's not just content moderation.

Some of the other big labs, like OpenAI or Google DeepMind, are customers of theirs. And they will have people out in the world, you know, labeling, let's say, a picture of a car or something, and sending that back. And that helps to train a large language model. So we know that to make large language models more powerful, you need a lot of not just data, but clean, structured, labeled data. And Scale AI has been one of the biggest providers on that front. Right. So one

hypothesis that I saw floating around this week online is that by acquiring a stake in Scale AI, Meta was essentially trying to lock up that valuable data for itself and keep it out of the hands of its rivals. I think that there are probably some multi-year contracts in place. I don't think it's actually going to be the case that Meta can just sort of unilaterally decide to shut down Scale AI's business with all these other AI companies. But I do think it will give them

privileged access to a pretty important ingredient in training these large language models. Yes, which is one reason why a person I spoke to yesterday who's sort of close to this deal said that they fully expect that the biggest customers of Scale AI are going to stop working with them. There are also

regulatory concerns, because even though Meta isn't trying to buy all of Scale AI, it may effectively be removing a very important player from the market at a time when Meta is already under a lot of antitrust scrutiny. We just wrapped up an

antitrust trial that is trying to force them to divest WhatsApp and Instagram. Yep. So let's talk a bit about what is going to happen now. Assuming that this does go through, here is what I've been able to piece together about what this new team is going to be doing, Kevin.

The first thing to say is these people are going to be sitting next to Mark Zuckerberg. This is something that Zuckerberg does from time to time: he will just clear out everyone who sat next to him during the last crisis, and he brings in people to work with him during the current

crisis. So, for example, during the Cambridge Analytica crisis, he brought in a lot of his communications team to sit around him to tell him about all of the breaking news. Now, presumably those people shuffled off long ago, Cambridge Analytica was like in 2017, but now they're

bringing in the AI team. And so, you know, if you've always wanted to bounce ideas off Mark Zuckerberg, that's maybe something you could do. We should also say the people sitting around him are going to be really rich. Not Mark Zuckerberg rich, but, you know, The Times reported that the pay packages they're offering

are stretching into nine figures. That's $100 million. I heard one credible report of an engineer being offered $75 million to go work for Meta. Which, we should just say, is a lot of money, right? That's like what a star pro athlete would make. Yeah, and by the way, if you ever ask somebody, how much would it take for you to come work with me, and the person says $75 million, reflect on yourself. What choices did you make, right?

So they're going to have that team. Now, I've also been trying to figure out: what is this team going to do? Because, look, the way that Meta has rolled out this announcement has basically felt like a help-wanted ad, right? Now, officially, they're declining to comment on

these stories, but I'm getting strong hints that someone inside Meta very much wants the world to know that there's a hundred million dollars on the table for the right person, right? It is basically a help-wanted ad saying, come work here. Okay. Well, so what happens when people

actually take that deal? This is what I've been trying to figure out. It's like, okay, let's say you take $100 million and now you go get your desk across from Mark Zuckerberg. What does day one of your work look like? Is there a plan? There actually isn't, really. The plan is to somehow get back in this game. Alexandr Wang is going to be leading that effort.

Wang is a capable leader. Scale is a very successful company. The way that they've been successful is by always kind of pivoting to where the money is. They've been very good at that Silicon Valley startup thing of just staying alive by being very resilient and resourceful. I want to say, though, that building superintelligence is a very different prospect than building Scale AI, right? Because when you look at what Scale AI actually does, they help you scale AI. They do not build

the AI, right? They're sort of like a classic picks-and-shovels company that is making money by building the inputs to AI, but not actually training their own frontier models. Yeah. And so, you know, Wang is 28 years old. He's going to be now leading a team of supposedly around

50 people, some of whom might be making as much as $100 million a year. I think that's just going to be a very difficult management challenge. Think about some of the big teams you may have worked on at your job. What is the fastest it ever gelled? Was it less than six months?

And if you're somebody who believes that we are on the precipice of superintelligence already arriving, or maybe just AGI already arriving, you're talking about, what, six months to a year and a half before this team has actually been able to maybe ship their first major project.

You know, I am sympathetic to Meta here in the sense that they don't have another choice. They had to do something significant if they were going to get back in this race. But we should not understate the challenge of what they are attempting to do because they just lost the last year. Yeah. I'm skeptical that this plan of Meta's is going to work. And there are a couple of reasons for that. One is that...

While there are many people working on AI and many talented researchers and engineers, the universe of people who have actually built and trained the biggest language models on the biggest supercomputers is... still quite small. It might be a couple hundred people worldwide. Unfortunately for Meta, all of those people are already rich. They can work anywhere they want. They can make whatever they want. These people are writing their own checks.

And so I'm not sure that there is a sufficient amount of money you could pay some of these people to give up their jobs and come work for Mark Zuckerberg. The second reason I'm skeptical is that I think that even if Meta does manage to sort of assemble this Avengers super team of AI researchers, I still don't think they have... an attractive or coherent AI strategy that is going to motivate these people to work hard there. If you actually...

look at what Meta has said so far about what it is doing with all of the AI stuff that it has built, it has basically said two things. One, it wants to make AI companions. The second thing it has announced is that it is going to build weapons for the military, right? This came out of a recent story where Meta is going to partner with Anduril, the sort of military technology company, and they are going to build something like an augmented reality headset for

soldiers on the battlefield. That might be a worthy project. It might even be a profitable project. But that is not the kind of thing that top AI researchers want to spend their time working on, at least the ones that I'm talking to. And I will close my analysis of this situation by reading you a text that I got from a leading AI researcher whom I texted this weekend to ask if they were going to work for the Meta AI superintelligence lab. All right, let's hear it. LOL.

LMAO. So, Casey, I think that tells you about how successful this new recruiting push by Meta is going to be. Yeah, I would be more optimistic about this if it were the first big reorg that Meta was doing in its AI division, but it's not. The big reorg they did in January 2024 was also not the first reorg they had done in this division. You mentioned a couple of the key ways that Meta has been using AI, and to your point, this is just not really

inspiring stuff for a lot of those researchers. But more importantly, I don't see a way to get from here to the there that they are envisioning, which is superintelligence. So look, this is one of the most interesting stories in tech to me right now for this reason: Mark Zuckerberg is, on many days, the most competitive person in the entire industry,

and he's now legitimately behind in a race that he might not be able to afford to lose. So for that reason, Kevin, I think we just want to keep our eyes on this story, because I suspect this will not be the last big move that Meta makes as it tries to get back in this game. All right. When we come back: there's another big tech company that is struggling to find its AI future. We'll talk about Apple and what it announced this week at its annual developer conference.

I gave my brother a New York Times subscription. She sent me a year-long subscription, so I have access to all the games. We'll do Wordle, the Mini, Spelling Bee. It has given us a personal connection. We exchange articles. Having read the same article, we can discuss it. The coverage, the options, not just news. Such a diversified gift. I was really excited to give him a New York Times cooking subscription so that we could share recipes. And we even just shared a recipe.

The New York Times contributes to our quality time together. You have all of that information at your fingertips. It enriches our relationship, broadening our horizons. It was such a cool and thoughtful gift. We're reading the same stuff. We're making the same food. We're on the same page. Connect even more with someone you care about. Learn more about giving a New York Times subscription as a gift at nytimes.com slash gift. Get a special rate if you act before June 15th.

Well, Casey, let's talk about the other big tech news this week, which is also about a large technology company that is on the AI struggle bus. This week was Apple's annual developer conference, WWDC. And unlike last year, when the two of us... were invited to Cupertino to take part in the festivities.

We were not invited this year. We were not. And whenever I get uninvited to something, I think this company's in trouble. Yeah, I don't think it is because we were rude or ate too much food at lunch or smelled bad. I think what's going on is that last year they announced a bunch of AI features, and then many of those features did not actually ship. Yeah. Last year, they had a story about AI that they were really excited to tell. This year, that was not the case. Yes. So...

The big thing that people were excited about at last year's WWDC was this new and improved Siri that would not only be able to respond to more complicated questions on your iPhone, but would be able to kind of pull things from all of your apps and your data. Yeah, the classic example was like, hey, you know, send an Uber to go pick up my mom at the airport when her flight gets in. Right, which is like a very complicated multi-part request that involves communicating with many apps. And we saw that, we're like, oh yeah, that'd be really cool if that worked. Yes, and that did not work, apparently, because Apple still, a year later, has not shipped that version of Siri. And I still have to pick up my mom from the airport in a regular car. Like an animal. It's a disaster. So...

We were not there. We were not able to grill Apple executives about what the heck was happening with Siri and why it has been so delayed in its new and improved form. But friend of the pod, Joanna Stern from the Wall Street Journal was invited and she did. And I want to just play a clip from that because I think it really shows you how defensive they are. In this clip, Joanna is talking to Craig Federighi, who is Apple's Senior Vice President.

of software engineering. Let's hear it. So many people associate Apple and AI with Siri, since... what, 10-plus years ago now. Sure. And so there is a real expectation that Siri should be as good, if not better, than the competition. I think ultimately it should be.

But it's not right now. That's certainly our mission. Yeah, but that's our mission. You know, we set out to tell people last year where we were going. I think people were very excited about Apple's values there: an experience that integrated into everything you do, not a bolt-on chatbot on the side, something that is personal, something that is private. We started building some of those and delivering some of those capabilities. I, in a way, appreciate the fact that people really wanted the next version of Siri, and we really want to deliver it for them, but we want to do it the right way. When's the right way going to come along? Well, in this case, we really want to make sure that we have it very much in hand before we start talking about dates, for obvious reasons. So Casey, they have a mission, they have a vision, they have values.

What they do not have is a date when any of this will be available. Yeah. So bad news for anybody whose mom is still stuck at the airport. I shouldn't keep coming back to that joke. But no, that's, you know, look, on some level, it's like, what can they say? They tried to build it. It didn't work. It's better not to ship it and to delay it than to ship something, you know, that doesn't work. There has been some great reporting over the past couple of months about

what happened inside of Apple that led us to this point. Mark Gurman at Bloomberg has done a ton of amazing reporting on this. And the gist is, like, there just were not a lot of AI true believers inside of this company. It really kind of rhymes with the story that we just told about Meta. Apple is working on its own thing. They have an incredible business. The last thing that they want is to be disrupted by some coming wave of AI. And so they just kind of gave it short shrift. And these AI systems

don't work like the systems they know how to build. They know how to build these rigid, deterministic, if-this-then-that type of systems. Very polished, very predictable. And they do an incredible job at it. But AI isn't like that. It's chaotic. It's messy. It's probabilistic. It doesn't work the same way every time. They've had a lot of trouble wrapping their arms around that.

I want to diagnose more about what is going on with Apple when it comes to AI. But first, let's talk about what they actually did announce at WWDC. Casey... What were your top highlights from their announcements? Well, Kevin, obviously we have to talk about Liquid Glass.

Now, I don't know if you've seen the YouTube video of WWDC where they promoted Liquid Glass, but the YouTube play button sort of appeared over a couple of the letters, so it looked like Apple had announced Liquid Ass. So if you're still thinking that that's what they...

announced, I want to correct that: it's actually called Liquid Glass. Now, what is Liquid Glass? Liquid Glass is a redesign of the operating system. And on one hand, I don't want to underrate the significance of a redesign. These devices are used by

hundreds of millions, if not more than a billion, people. And when you give something a new look, it is kind of a big deal, right? You might have to relearn how certain things work. On the other hand, when that's your marquee announcement after a year of development, when last year you were like, the AI future

is here, and this year you're like, Control Center is a different color, it really speaks to the kind of difference between the two presentations, Kevin. Yes, it was such a small-ball presentation. I did watch the event from afar, and I gotta say, it was like very strange to watch these Apple executives get on stage and, like, express delirious enthusiasm over adding polls to iMessage. You can now start a poll with your friends

in the group chat, which, you know, I gotta say, cool feature. I'll probably use it a bunch, mostly as a joke, but that is not the sort of marquee futuristic vision that I was expecting out of Apple this year. No, and, you know, because Apple makes these new features available to developers basically right away, we've started to get some early feedback about how they work. And a fair number of people are complaining that this Liquid Glass look in particular kind of just makes everything harder to read, right? The basic idea here is that all of the operating system elements are like literal glass, and they'll sort of, you know, slide over each other. And of course, you know, the presentations were like very beautiful, but then you put it onto your phone.

And it's like you find yourself squinting a lot. And, you know, I found myself thinking, Kevin, about this old Steve Jobs quote that I like. And, you know, I want to acknowledge it's very hacky and cliche to quote Steve Jobs. But he has this quote, and it's actually from the New York Times, in this interview he did in 2003 about the iPod. And the thing that he said was, essentially, design is not how it looks, design is how it works. And as I found myself looking at...

Liquid Glass, I thought, this is a design that is about how it looks. It is not about how it works. I don't know what this design is supposed to do that it didn't before. All Apple really said was, like, everything is more beautiful than ever, you know, but it's still very familiar, but it's more beautiful. And, you know, I don't want to tell people, don't make things that are beautiful for their own sake. I appreciate beauty as much as the next fella. But on the other hand, I thought,

This doesn't actually really seem in keeping with the Apple design spirit of the past. Yeah, well, Casey, I want to bring some light to this discussion by quoting another Steve Jobs quote that was sort of lost in the archives where he said, what if we made a phone where everything was transparent and you couldn't see anything?

Wow, I missed that one. And so I think the Apple design team really found that and ran with it. So that's Liquid Glass. Let's talk about some of the other stuff that came out of this. Yeah, what caught your eye?

The place where it seemed like they'd put the most engineering into a feature that might help people just... get things done a little bit more efficiently was Spotlight. Spotlight is the feature, if you press Command-Space on your MacBook, that brings up a search bar. It's great for finding files. It hasn't evolved much over the years; it's been around a long time. This year, they were like, well,

We're going to start to convert this into a little bit more of what they call a launcher app. We talked about launcher apps on the show before. I love and use one called Raycast. And the basic idea is this could be kind of the command center for your Mac. So instead of just...

searching for a file or, like, you know, opening Keynote, it's now going to be about actually using it to take some actions, run some shortcuts, that sort of thing. Like, what could you do with the new Spotlight that you couldn't do with the old one? What's an example of something that you might type in? So, for example, you could, like, trigger a shortcut.

Shortcuts are these like automated routines that you can set up on your Apple devices. So maybe you have one that's like, OK, I'm like, you know, going to bed for the night, like turn off all the lights in my house and you can just open up Spotlight, run that shortcut and do that without.

you know, having to do it some other way. The main benefit of doing it this way is that it just becomes second nature to hit command space and then do something as opposed to grabbing your mouse, looking for the icon somewhere on a desktop, double clicking, opening it up, right? It's just, you're just trying to take... a few steps out of it to get things done slightly faster. Now, I'm very conscious as I describe this of like, this does not sound that interesting. And, you know, I didn't say it.

Yeah, and I say that as somebody who loves little productivity hacks and getting stuff done faster on my computer. But that said, it was at least in the spirit of the Apple I love, which is help me get more stuff done. Make me a more creative and effective person. Okay, so new spotlight. What else caught your eye? There are a couple of lightly interesting new features. There's live translation, although we're not exactly sure which languages that's going to be available in.

Something I'm excited about is there's apparently a phone app that's coming to the desktop, so you can start calls from your Mac, which I think is probably something that I will do a lot. They are also, yet again... rethinking how the iPad works, right? Like, how the iPad should operate has been a kind of longstanding unresolved question, where it's like, it looks a lot like a Mac, but it doesn't work quite like a Mac. This year, it's starting to feel ever more like a Mac, because, Kevin,

you can resize the windows on an iPad now. Thank God. Every day for the past 10 years, I have woken up in a cold sweat thinking... When can I resize the windows on my iPad? One feature I'm not particularly excited about is you will now be able to change the backgrounds in your iMessage chats. And, you know, I am in some group chats with some real jokers, and I feel like this could potentially wreak havoc in my group chats.

I also saw they're introducing a typing indicator for group chats, so you can now see the little bubbles that say someone's typing. Yeah, you can already see that in a one-on-one chat. For some reason, you couldn't see that in a group chat. So by now, I feel like most of our listeners... have been like, one, I can't believe they're still talking about this. And two, how is that everything that Apple announced this year? But I think it's important just to mention for this reason.

For the past, call it a decade, I feel like Apple's main priority has been trying to figure out what is a seventh subscription we can sell you on this iPhone, right? And while that was happening, the future was being born across town. And they were not paying attention. And they haven't really started to pay the price for it. But you come to the end of this presentation.

And you can kind of start to see the cracks in the armor of a company that has looked pretty invincible for a long time. Yeah, I watched this presentation and I thought, this is a company that has not yet... admitted that it made a bad bet when it came to AI. This is a company that is still not bought into the idea that language models are important or powerful or useful or that they might unlock new ways of interacting with computers.

I think you're right that it rhymes with our last segment on Meta, because Apple had its own version of Yann LeCun, a sort of senior AI researcher who was brought in to lead the strategy of AI at Apple. This guy named John Giannandrea, or JG as he's called, was brought in from Google years ago to kind of oversee all of Apple's AI research.

And according to Mark Gurman at Bloomberg, JG did not believe in large language models either. He thought they were sort of a distraction. He was convinced that consumers were turned off by chatbots. He didn't think that Apple should be putting a lot of effort and investment into developing its own language models.

And I think we're really now seeing the fruits of that decision coming out, or not coming out in Apple's case, on stage at WWDC. Yes. Now, here is what I will say in Apple's defense, Kevin. For everything that we have just said, it is also true that if you were to pick up a Pixel phone, I still don't think there is one feature on that Pixel phone that would make the average person say, oh, wow, I got to ditch my iPhone for this. Google has not figured out AI in a way that makes me say,

I am so excited to ditch iMessage and become a green bubble over in this other ecosystem. And I think that speaks to the fact that for as advanced as these systems are getting, there has been a surprisingly long lag in turning them... You know, just this week...

Amazon said that its new version of Alexa, which is sort of souped up and AI-powered, had finally reached one million customers. Now, Amazon has a lot more customers than that. They had been rolling this thing out at a glacial pace, because they're still so uncertain about the reliability, and they're trying to make sure that it doesn't blow up in their face. So while we're being hard on Apple here, I just want to point out that really it's...

all of the tech giants that are having this problem. Folks like you and I are having a pretty good time figuring out how to slot AI into our lives, and it mostly just involves using chatbots. The other big companies, though, have not figured out how to graft this onto what we're doing in a way that is going to make people really excited.

There's one more Apple-related story from the past week that we should talk about, and it is not something that was discussed at WWDC, but it is something that a lot of people have been emailing us and that a lot of people I know have been talking about. And this is this... research paper that came out of Apple's machine learning research division. And this paper was called...

The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity, which, I'll say... could have used an Apple iOS rewrite on that. All right. Well, so try to describe, Kevin, concisely: what did this paper say? So this paper was basically an attempt to... pour some water on the hype around these

so-called reasoning models, which are kind of like large language models with an additional step performed at inference time to sort of improve the outputs. So we've talked about this before: OpenAI's o1, the latest versions of Gemini and Claude, they all have these reasoning features built into them. And what this research paper said is that this is not actually reasoning, that these systems are not actually doing anything like thinking, that there are some

big limits to how much this approach to improving language model performance can scale. And basically, they released this and it was immediately seized on by a bunch of people who said, aha! There is proof that the AI companies are on the wrong track, that all this is hitting a wall, and that these models are not actually getting us closer to general intelligence. Yes, this paper was beloved by what I have come to think of as the AI cope bubble. So people...

who are looking for reasons not to worry about AI, oh, this paper was manna from heaven. Yes, so... Casey, why is this paper so controversial and so beloved by what you call the cope bubble? Well, I think one issue here is essentially semantic, which is the paper is trying to make the case that, as you put it, this is not actual reasoning,

which is to say that large language models are not reasoning in the way that human beings are doing. I think everyone involved would stipulate like, yes, that is the case, that large language models do not work in the exact manner that the human brain does, even if there are...

maybe some interesting parallels. So it's presented as this gotcha: aha, these things are not reasoning like human beings. When in fact, again, anyone who's paying attention could have told you that from the start. The second problem with this paper does... relate to just the limitations of the way that these models are constructed, which is that they can only output a certain number of tokens. And so, in order to reason through the most difficult problems given to them by the researchers, they simply

did not have enough room. Now, if you want to say that is a reason why large language models are bad, okay, fine. Yeah. There are, like, some problems that they can't solve. But that is not how this paper has been received within the AI cope bubble. Within the AI cope bubble, it is,

oh, well, this proves that LLMs can't reason like human beings and therefore we should just junk it because it is essentially not real and it is not going to have any meaningful impact on my life. Yeah. So I would say this paper did not change my...

sort of view of large language models or the kind of reasoning models that have become popular recently. It did, however, help me understand what is going on inside Apple, where you simultaneously have a company that is trying to be seen as being on or close to the AI frontier, but where a lot of the intellectual firepower and research is still being directed at trying to prove that all of this is just...

hype and fake, and it doesn't actually work, and we should maybe stop investing in it. Yeah. I think we should say this is probably Apple's highest profile AI paper, at least in the last year, maybe ever. And I think it had a lot of problems. So let's tie that back to WWDC, Kevin. What does it all mean?

I think what it means is that Apple is still undergoing this kind of identity crisis about what it wants to be. Is it a hardware company that wants to make phones? Is it a software company that wants to sell subscriptions to put on those phones?

Both of those business models are being challenged right now. Apple's iPhone sales have been sort of flat to declining over the last few years. The phones really haven't gotten that much different from model to model. We may kind of be reaching the pinnacle of what a smartphone can be.

And its services business is being challenged by all these antitrust actions and these court decisions that say things like, you can't stop people from paying for things outside of the Apple App Store anymore. And so I think they are...

still struggling to find the next gusher of cash that could replace declines in some of these other areas. And I don't think they have sort of... come up with a solution yet, but it sounds like they are still trying to make up their mind about AI and how big a deal it is.

I agree with all that. Fortunately, Kevin, as you know, on this podcast, we always try to be problem solvers. We like to come up with solutions for the companies that we talk about. And I think I know what Apple could do to turn the ship around here. What's that? They have to hire Alexander Wang.

I don't care how much it costs. I think they go to him right now. They say, 49% stake? We'll take it all. How much money do you want? We can afford it. Just name your price, Alex. And, you know, not only would that turn around their fortunes in AI, Kevin, think about how mad it would make Mark Zuckerberg. Oh, boy. He would blow a gasket over that one. Siri, throw to commercial.

Didn't even work. Siri, pick Casey's mom up from the airport. She's been there for a year. Actually, can I tell you what happened on my computer when I said just now, Siri, throw to commercial? It opened up a map to something called the Commercial Coverage Insurance Agency. No. When we come back, it's time to pass the mic. We'll hear from you, our listeners, about how your jobs are changing as a result of AI. Okay, Casey, in the past few weeks, we have been talking a lot about a different

topic related to AI, which is what is happening with AI and jobs? Yes. You recently wrote an article saying that we were starting to see the early signs of AI job loss. And so we threw it out to our listeners to say... What have you been experiencing? Yeah, so today we're going to go through some of the many, many responses we got to our call out for stories about AI and whether it's taking your jobs.

I think we should start with a question that I think captures a common frustration that we hear from listeners. Oh, that we say "like" and "and" and "um" too much? That we're too handsome? No, here is listener Christian Danielson. Hey, Casey and Kevin, this is Christian from Hood River, Oregon. I've noticed in a lot of interviews, yours and others, with tech executives that...

Almost all of them seem to think there's going to be a categorically different level of job displacement due to this technology rolling out. And yet almost all of them also don't seem like they have any real concrete plans, or are putting nearly the amount of energy they put into their products, around how to mitigate that. It just seems like they don't feel like it's really their responsibility, or it's someone else's problem to manage that side of things.

So I'm hoping you might pose the question of why the government shouldn't really, frankly, just tax the shit out of their technology, both as a way to potentially compensate people for all this wealth that's going to be concentrated into the hands of a very small number of people, and also to slow the technology down a bit until our aging policy process can kind of catch up.

Thanks. Yeah, so why is there no sort of plan from these executives, Kevin? And what do you think about the idea of taxes? Yeah, I think it's a really... useful and important point. I think many of the executives and the companies building this technology

their goal is just to automate the jobs away, right? They are not thinking or talking much about what will happen on the other side of that to all the people whose jobs are displaced if they are successful. And, you know, some of them have done some studies or made... some suggestions. Sam Altman actually funded a big research project where they gave people these unconditional cash payments and sort of studied what UBI, or something like UBI, would do.

And Dario Amodei from Anthropic has actually proposed something like our listener is suggesting. He called it the token tax. And basically, the idea is, if you have all these AI models out there generating billions of dollars of revenue by automating people's jobs, some portion of that should go back to fund the sort of welfare programs and social safety net for the people who are displaced.

But I will say that most people I've talked to about this issue inside the AI industry are not even getting that far. They are not even proposing solutions or they're just kind of doing hand-waving about how the government will have to step in and take care of people who lose their jobs this way.

I would like to see a lot more people not only coming up with ideas, but actually advocating for those ideas with policymakers. Yeah, I mean, the main thing I would say is that it's not up to the corporations to run our society. That is the job of... our elected officials who should absolutely have plans in place. They should be developing them right now for a world where we do experience significant job loss through automation. I think most

lawmakers are probably getting on board at this point with the idea that this is, if nothing else, a real threat. And so it's unfortunate that there has just been so little movement in this direction, because I do think a lot of this is going to come true, and we're going to wish we had better plans in place. Yeah. Now for some listener stories. This first one is from the perspective of a young person navigating a tighter labor market. Listener Sarah writes...

Hey, Hard Fork! I'm one of the junior software engineers who was thoroughly depressed by the latest episode on the AI job apocalypse, mostly because it was exactly in line with my current experience. I graduated in 2022 and felt very lucky to get an amazing job straight out of college where I felt very supported and valued by my team. That entire team was laid off last year to be replaced with cheaper human labor, not AI. And after a grueling job search, I ended up at a very large company.

Obviously, we all say that most of our code is written by AI now. It's been thoroughly depressing working here, and I've been looking to move jobs since about my second week, but there are almost no openings for someone with only two years of experience. I think my only real chance is to stick around for a year and hope that my career still exists by then. With some luck,

maybe I can make it into a mid-level position before the ladder is pulled up behind me. I feel terrible for the people just now graduating. Wow, does this one break my heart. Yeah. Like, can I just say, this is what we've been talking about the whole time. Yes. It's people like Sarah having this exact experience. Yes. And what makes this particularly bleak is that this is something I actually do think is going to become a major problem for these companies:

that they are just going to lose their pipeline of future leaders, right? If you are replacing your junior workers with AI, or just forcing everyone to use AI, you are really neglecting your own future, because you are not doing the kinds of skill building and training and mentorship that is going to allow people like Sarah, who may be your next executive, to build the skills and the experience that she needs to come in and do that job.

Let her cook. Yeah, but here's the problem. I think it's so silly that companies like this are creating incentives for their workers to lie to them about how they are using AI. You're just going to get a very distorted sense of what AI is doing in your company. And then if you lay off those people because you're thinking, oh, AI is already doing 80% of everything, then you're going to find...

yourself in a lot of trouble. So this just seems like a classic self-defeating corporate thing, and these people need to get a better sense of what's really happening. But in any case, Sarah, thank you for writing in, and, you know, here's hoping that your next job is better than this one. All right, here's a story we got from an executive. This is from listener Joseph Esparaguera. He writes,

I'm the CFO of a $150 million plus home remodeling business. Wow. Okay, brag. I'm in the wrong business. I'm reaching out because I think I'm living in the awkward middle of the AI transformation story. Not at a tech startup, not at a Fortune 500, but in the trenches of a mid-sized company where AI could and should have massive impact, especially in accounting and HR.

He continues, They'll use AI to clean up an email or write a job posting, but they don't seem to grasp or want to grasp the bigger opportunity. I believe AI should let us do more with fewer people and the ones who adapt will stay. But if my current team doesn't evolve, I'll be forced to hire different people who will. Casey, what do you make of this email? So I suspect that this is playing out at a lot of companies where you have managers who are more excited about...

AI than their workers are. I think this is true of lots of different kinds of software, by the way. I remember I used to get really excited about project management software like Asana, and I would try to get my old company to adopt it.

It actually happened. The company adopted it, and no one wanted to use it, because it was like, you know, why do I want to go fill out a new form every day saying what my tasks are? So it's like, a lot of times software has more obvious value to the manager than it does to the worker, who, you know, in many cases, is just trying to get to 5 p.m. so they can get home to their family. So I think this is kind of a durable tension in workplaces. At the same time... I think that this is going to be part of, like, the rough part of this transition: more and more managers being like, no, really, you actually have to use this thing, because if you're doing it another way, it is going to make you slower and worse at your job.

And so I expect that there are going to be a lot of clashes. By the way, I think this opens up a lot of opportunity for listeners like Sarah who can show up at the front door and say, yes, I know how to use AI and you're not going to have to twist my arm into doing it. But I think there's going to be a lot of pain. Yeah, I think this is a really important moment for a lot of companies that are starting to think about how to use AI.

And my intuition on this is that the companies that are having the most success with AI right now are the companies that are doing this in a very bottoms-up way, right? They are soliciting ideas from workers about how they could use AI to maybe improve the parts of their job that they don't love doing, or maybe eliminate them altogether. They're holding sort of hackathons, or having sort of days set aside to just get together in a room and figure out how to...

use this stuff, they are not sort of imposing it from the top down, right? They are not the ones sending memos out saying everyone must use AI and we're going to be tracking how much you're using AI. And if you don't use AI, we're going to replace you with someone who will.

I think that is a short-term solution. And that's the direction, unfortunately, that I think a lot of companies have chosen to go. But I don't think that's a strategy for durable transformation. You really need to get people excited about this and thinking. Well, so what does Joseph do here? Because, you know, it sounds like if he doesn't act, there isn't going to be any bottoms-up enthusiasm for AI at his company. I think what you do is you basically start a competition among your employees.

You say, we're going to set aside a day or a half a day, or we're going to do an offsite sometime in the next few months. We're going to give everyone access to all of the tools. We're going to buy them subscriptions to all the tools they might possibly need to do their jobs using AI. And the person who comes up with the best idea, or the team that comes up with the best idea, gets to live. We'll call it the Hunger Games. No, they get a bonus. They get a, you know, a reward of some kind.

and you kind of make it a thing where people are excited to contribute because it is in their best interest to do so. That's what I would do if I were the CFO of a company, which let's say it, we're all glad I'm not. Well, but the day is young. Who knows what might happen to you later, Kevin? All right. Now let's hear from a listener who feels critical of the approach that some executives are taking to AI. So this person writes...

Hey guys, while my job isn't being replaced by AI yet, my boss is completely obsessed with it without actually doing anything meaningful with it himself. He's effectively put a hiring freeze on all process jobs because he believes that AI can do them better and more importantly, cheaper.

I'm in charge of the sales and marketing teams, and my very meager headcount ask as we grow rapidly is challenged or ignored because there's an AI tool he heard of somewhere. I get messages at all hours from him with links to hacky LinkedIn posts full of emoji bullet points about how...

Excel, Word, PowerPoint, or [insert program here] will soon be obsolete thanks to these new AI tools, or here are 20 AI miracles to revolutionize your workload. You know, our listener says, I'm far from being an AI skeptic. I make use of it daily.

But honestly, maybe I will lose my job by my own hand soon, because his attitude is exhausting. And right now, I just need a few more human people without spending all my time going down rabbit holes of half solutions or privacy nightmares. I think the time spent on reading up on AI and testing bad AI right now isn't considered enough when looking at the cost-benefit analysis. So, Kevin, what do you make of this?

I think this is really interesting. It does seem like there's a new kind of boss emerging in the halls of corporate America, which is the AI-addict boss. We've heard a lot of stories along these lines, of like, my boss is...

completely obsessed with AI. And I think it's tough, right? I think this is a very good point, that businesses have immediate short-term needs that AI cannot meet yet, and maybe, by thinking so much about where this stuff is all heading, you are actually not listening to your employees, who are telling you, just give me three people so that

I can solve this problem. And I don't know what to do about that, because a manager's job, an executive's job, is to think about and plan for the future. But you also do have these very short-term needs that need to be addressed. Yeah, I mean, my question

to the big boss here is: what is the actual objective that we're trying to hit, right? It seems like maybe there's too much discussion about tools in this workplace and not enough discussion about goals, and what is the best way to get to those goals. You know, it sounds like this person

has a pretty informed perspective that AI is not going to be the thing that gets them to the goals that they have. And the manager needs to listen to that. Yeah. Have a conversation. Or post on LinkedIn; they'll probably read it there. All right, finally, let's hear a voice memo from listener George Dilsey, who is trying to find some short-term solutions to keep the staff he trains employable in this changing market.

Hey guys, my name is George Dilsey. I live in Stamford, Connecticut, and I work at a high-growth B2B SaaS startup called Clay. And in my role, I actually head up the support team. So one of the things that I've leaned into is trying to hire...

really, really good people for our support team, but also turning those folks into kind of expert generalists. So the idea being that they're rotating through different parts of the company, learning about product, or learning about engineering, or learning about marketing,

with the hope that they've gained a number of different skills across the company and can generalize into any other department. So just wanted to share. Thought it was pretty interesting. Love the show. Thanks so much. Kevin, what do you make of this one? I like this one. I think that support and customer service are always talked about as being the first jobs to go under the new AI regime. And we've talked about some companies

that are trying to develop these AI customer service chatbots. But I think if you are working in customer service, you don't want to just be reading off a script on a computer as you try to help people solve their problems. You really want to offer a more bespoke, personalized, high-touch kind of service. And actually, one of my long-term

complaints about tech companies is that they just do not take customer service seriously. Like for many years, people have said, you know, there's no way to get someone on the phone if something happens to your Facebook account or your Instagram account or your YouTube account.

And I think people at the senior levels of these companies should be doing a rotation through customer service, just to get a sense of what their customers and users are actually experiencing. And maybe that would lead them to invest more in these areas. So I think this is a good idea. I think that the experience of doing customer service, if you are good at it and are not just reading off a script on a computer, is useful in many, many jobs. And I think that, in the future,

that will become very important, especially as the more rote and routine parts of the job get automated. What do you think? Yeah, I think that people who work in customer support roles often have a much better sense of what's happening in the business at the ground level than executives do.

And so I love the idea that we're creating new opportunities for those people. I think that those folks can often just bring experiences to the roles that you're just truly not going to get with an AI system. All right, Casey. Have we said enough on AI and jobs this week?

I think we have. We thank all of the listeners who wrote in to share their stories. I imagine this will not be the last time we return to this subject. But it's very clear, Kevin, that already we're starting to see the effects of AI on the job market. And I imagine that's only going to accelerate from here.

Yeah. And I think we're going to have some more conversations on this topic coming up soon. We won't spoil them now, but let's just say this is an area where I think we are going to spend a lot of time, because this is something that many, many people out there are starting to experience.

Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited this week by Jen Poyant. We're fact-checked by Ena Alvarado. Today's show was engineered by Alyssa Moxley. Original music by Marion Lozano, Alyssa Moxley, and Dan Powell. Video production by Sawyer Roque, Pat Gunther, and Chris Schott. You can watch this full episode on YouTube at youtube.com slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.

If you liked this episode and you found any of it useful or interesting, or maybe a little funny, you can share it with a friend or leave us a review on your favorite podcast app. You can email us, as always, at hardfork@nytimes.com. Send us your job offers with nine-figure salaries. I'd settle for eight. Such a humanitarian.

This transcript was generated by Metacast using AI and may contain inaccuracies.