Meta Goes MAGA Mode + A Big Month in A.I. + HatGPT

Jan 10, 2025 · 1 hr 11 min · Ep. 117

Episode description

This week, Meta announced a series of content moderation changes that will transform the way the social media company’s platforms deal with misinformation and hate speech. We break down what these changes will mean for users and why the company seems to be caving to the right’s arguments on censorship. Then, we’ll explain why 2025 is already shaping up to be a huge year in A.I. — with models like OpenAI’s o3, Google’s Gemini 2.0 and DeepSeek, from China, stirring discussion that superintelligence is near. And finally, we play a round of HatGPT.

We want to hear from you. Email us at [email protected]. Find “Hard Fork” on YouTube and TikTok.

Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Transcript

I use New York Times Cooking at least three to four times a week. I love sheet pan bibimbap. It said 35 minutes. It was 35 minutes. The cucumber salad with soy, ginger, and garlic? Oh my God, that is just to die for. This turkey chili has over 17,000 five-star ratings. So easy, so delicious. The instructions are so clear, so simple, and it just works. Hey, it's Eric Kim from New York Times Cooking. Come cook with us. Go to NYTCooking.com. Casey!

We're back. We're back in the studio, Kevin. So a dirty secret is that we recorded our predictions episode that ran last week back in 2024, before we left for the holiday break. We are just now coming back from a multi-week break.

How are you doing? How was your break? I'm doing great. We recorded that episode so long ago that when I listened to it, all the predictions were fresh to me. I was so excited to hear what we were going to say. But I'm doing good. I had a really nice break. And of course, I'm excited to be back. But what about you, Kevin? Well...

I had kind of a disaster happen to me over this break, which was that I got robbed on Christmas. Wait, was it the Grinch? You know, the citizens of Whoville are still looking for the suspect. Who robbed you? How'd you get robbed? Well, I wasn't home, luckily, but someone broke into my house. Wait, like, what did they take? So, still sort of sorting through. We just got back, but it...

appears that a thief or thieves took some jewelry, some electronics. But weirdly, and this is sort of the tech angle here, they did not take the Apple Vision Pro. Not even a robber wants one of those. It makes sense, because robbers typically only want to take what is valuable, Kevin. And it's not clear what they would actually do with a Vision Pro.

You know, also keep in mind, if you're a robber, you're out there, you're moving through the world, you're breaking into homes. You can't have that giant thing on your face. You know, you sort of need to maintain.

Clear vision, so to speak. Yes. Let me ask you this. Even though all your items were stolen, did you look at your family and your dogs and think, you know what? At the end of the day, I got my family, and that's all that really matters. I did. And I don't know why you're saying it was such a... I was looking for a nice sentimental ending. Honestly, the moral of this robbery was much the same as the moral of The Grinch Who Stole Christmas, which is that the real...

Christmas, the real household items, are families. Exactly. And so, you know, if you get robbed again, maybe don't worry about it. Was it you? I'm changing the subject. We're moving on. Okay. Where were you on Christmas? I'm Kevin Roose, a tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week, Meta goes MAGA. We break down the company's surrender to the right on speech issues. Then, why 2025 is shaping up to be a huge year.

in AI. And finally, some HatGPT. Call that a HatGPT. Well, Casey? I think we better talk about Meta. We better do it, Kevin, because I never met a bigger story for this podcast. Yes. So the big news this week in the world of social media is that Meta is making a, I would say, pretty calculated and transparent, craven is another word people have used, attempt

to ingratiate itself with the incoming Trump administration by sort of surrendering to the demands of right-wing speech critics and changing a bunch of things about the way its platform works. I think this is a very big story, not just because... of what it represents about Meta, but because it is the biggest and most prominent example of a Silicon Valley tech company sort of positioning itself for the second Trump term. And I think it's going to have very big implications for

speech on the internet, for the rise of misinformation online, and potentially for the future of Meta itself. Yeah, absolutely. I think that while we have talked about speech policies on Meta basically as long as we've been doing this podcast... I think this set of changes that the company announced this week is the most important series of policy changes that they have made in the past five years, easily. Yeah. So let's run down what's actually been happening over at Meta.

Over the past week, there have been three main things that people are pointing to as being all part of this effort to kind of curry favor with the incoming Trump administration. The first was that last week, Meta's global policy chief Nick Clegg, a former British deputy prime minister who had served in that role for a number of years,

stepped down and was replaced by Joel Kaplan. Joel Kaplan is a longtime Republican operative, going back to the George W. Bush administration, who's been working at Meta in their policy division for a while now and has sort of become the... unofficial liaison between Mark Zuckerberg and the Washington right. That's right. And then this week on Monday, Meta announced that it was appointing three new board members, including Dana White, who is the CEO of UFC.

Ultimate Fighting Championship. Dana White, not known as a particular expert on social media governance, but definitely a close friend and ally of Donald Trump, and someone who can presumably act as a liaison between Meta and the Trump administration. Yeah, so just sort of staffing that bench up with more Trump friends. And then the big one came on Tuesday, when Meta announced that it was...

ending its fact-checking program and replacing it with an X-style community notes feature. The company also said it was redoing its rules to allow more speech and less censorship. It's going to dial up the amount of, quote, civic content, that's sort of Meta's term for political content and current events content, in their feeds. And it said that it was moving its content review operations from California to Texas to avoid the appearance of political bias.

There were some other details in there that we can talk about, including some changes to the way that its content moderation automated services will work. But basically, this was a laundry list of things that right-wing critics of social media platforms had been asking for for years. and Meta sort of stood up and said, we're going to do all of it. Yeah, or another way of putting it, Kevin, is just that they...

accepted wholesale the Republican critique of Facebook's speech policies, right? And actually used the same words that Republicans would use. You know, in a previous time, we only used the word censorship to apply to state action, to actually prohibit speech. Some people would say,

censorship doesn't actually apply to private companies just sort of policing online forums. But Mark Zuckerberg said, no, effectively, you're right. We do do a bunch of censorship. We're doing too much censorship, and we're going to stop doing censorship. Yeah. So the reason that...

Mark Zuckerberg and Joel Kaplan gave for these changes was that Meta had been doing some soul searching and basically had discovered that its former policies created too much censorship and that they were going to return to the company's roots as a platform for free expression. I was really struck by just the way that they completely backed down here. They accepted the critique.

And they seemingly are terrified of what the Trump administration could mean for them and for Mark Zuckerberg personally if they do not comply in advance with everything that Republicans have said about them for years.

None of these critiques are new. They were made throughout the first Trump administration, and Facebook stood up against them. And they said, we're actually going to try to find a middle path here. We are going to try to do what we can to preserve free expression while also trying to make this a really safe...

and inclusive space for as many people as we can. And in 2025, at the start of the year, Mark Zuckerberg came forward and he said, no, not anymore. We're done with that. Everything that the Republicans have been saying about us is true. And so we are going to lean into their version.

of what a social network should be. And so I'd like to play just some of what Zuckerberg said in the reel he posted on Instagram announcing these changes. Governments and legacy media have pushed to censor more and more. A lot of this is clearly political, but there's also a lot of legitimately bad stuff out there. Drugs, terrorism, child exploitation. These are things that we take very seriously, and I want to make sure that we handle responsibly.

So we built a lot of complex systems to moderate content. But the problem with complex systems is they make mistakes. Even if they accidentally censor just 1% of posts, that's millions of people. And we've reached a point where it's just too many mistakes and too much censorship. The recent elections also feel like a cultural tipping point towards once again prioritizing speech. So we're going to get back to our roots and focus on reducing mistakes.

simplifying our policies, and restoring free expression on our platforms. I was just struck by how craven and cynical it felt like Mark Zuckerberg in particular was being about this. I mean, he sounded like Elon Musk, to be totally honest. He used phrases like legacy media.

with this kind of like dripping disdain, which is a phrase that Elon Musk and his friends love to use in describing, you know, the mainstream media. He also... did use this word censorship that he has avoided studiously for years in describing the content moderation work that every social network, including all of Meta's social networks, do as a matter of business.

It just sounded like a total capitulation, a total giving in to the demands of his most ardent right-wing critics. More than that, Kevin, he also threw his own contractors under the bus. And let's hear that clip. After Trump first got elected in 2016, the legacy media wrote nonstop about how misinformation was a threat to democracy. We tried in good faith to address those concerns without becoming the arbiters of truth.

But the fact-checkers have just been too politically biased and have destroyed more trust than they've created, especially in the U.S. He says that the fact-checkers had just proven to be too biased, gives no evidence for that, no example. These fact-checkers, all of whom follow a very rigorous code for how they do their work. He just sort of asserts, oh, they've been super biased, so who knows what that meant.

He also, as you pointed out, says that they're going to move their moderation teams to Texas to avoid bias. Well, first of all, I can tell you they have had moderators in Texas for many years, basically for as long as they've had moderators.

They have put moderators in red states for years. In 2019, I visited Facebook moderation sites in Arizona and Florida, right? So there's absolutely nothing new about this, but he is throwing his moderators under the bus. And the worst part about it to me is that he is suggesting that the moderators...

were the ones making decisions about policy, when in fact that person was Mark Zuckerberg. So if Mark Zuckerberg wants to talk about the perception of bias around Facebook policy, he should reckon with the fact that he is the policymaker-in-chief over there. Right. So what do you think the most impactful part of these changes is? Because, you know, for all of the talk about the end of the fact-checking program over at Meta...

My sense is that the fact-checking program, for all the good people who worked very hard on it, really only ever touched a very tiny fraction of the content shared on Meta's platforms. It was a pretty ragtag effort that never really had as much of an impact as I think the fact-checking community would have liked, in part because of the way that Meta restricted it. So I don't know that the average user of Facebook or Instagram is actually going to notice the fact that their fact-checking

has disappeared. But what do you think that the biggest impact on users will be? Well, so let me speak to the fact checking first, because in some ways I agree with you. I don't know about you. I rarely encountered one of these fact checks on Facebook. On the other hand, I am someone who believes in harm reduction. And fact checkers did look at millions of pieces of content that were getting...

presumably hundreds of millions or billions of views. And there were empirical studies done that showed that overall people came to have fewer false beliefs if they saw those fact checks. So to the extent that people saw them, they were effective.

I think that there was a case to continue doing them, particularly if you want to be a good steward of a network that you have built that billions of people are using every day. And it's important to you that they have a good experience on that platform and don't come away from it stupider than when they started.

But I don't actually think that that's the most important thing that they announced. I think it's something else. And I'm going to point to something that Mark Zuckerberg said in his reel. Let's hear that clip. We used to have filters that scanned for any policy violation. Now, we're going to focus those filters on tackling illegal and high severity violations. And for lower severity violations, we're going to rely on someone reporting an issue before we take action.

So what does that mean? What it means is, whereas before, Meta used to rely on automated systems to catch all sorts of things, not just illegal things, but also just stuff that was annoying or hurtful, stuff that was a little bit bullying, harassment.

I called you a name. I called you a slur. Meta would catch that stuff in advance and maybe not show it to you, maybe take some sort of disciplinary action against the person who sent it. What Zuckerberg is saying here is, we are not the content moderators anymore. You are. Facebook user, Instagram user, we are now enlisting you in the fight. And we're going to leave it to you. If you see a slur on our platform, you go ahead, report that, and then maybe we'll take a look.

And I think that this is a really big deal. So yesterday, I wound up talking to a bunch of people who either work at Meta or used to work there. And I talked to one person who just said that they were extremely worried about what this meant because they had...

seen in so many countries around the world, where Meta has traditionally done much worse moderation than it does in the United States, that by not taking action against these lower-severity violations, right, stuff that was not obviously illegal, they had just seen violence fomented again and again. They had seen harassment against women. They had seen abuse against LGBTQ people. And Zuckerberg, in his reel, said...

look, we're going to have more bad stuff on the platform. But he doesn't go the second step to, what does that actually mean? Well, what it actually means is people could get hurt. People could die. So I want to be very clear about that. This is not, you know, two, like, pointy-headed intellectuals, you know, sitting in their podcast

studio saying, oh, no, you know, Facebook isn't a safe space anymore for the college students. What I'm saying is that violence has been fomented on Facebook before, and it will be fomented on Facebook again. And as a result of these changes, more people are going to be hurt. So that, to me, is the consequence of these actions. Yeah, I think this reporting thing that you bring up is so interesting, because-

As we know, a lot of the worst stuff on Facebook happens in groups, happens in sort of semi-private spaces with hundreds or thousands of members. And so now I think... Meta is essentially saying that it will be up to the members of those groups to report any violative content that they want to be moderated rather than having these sort of proactive scanners going around. And you might say, what's the big deal about that?

Well, if you're in a Stop the Steal group, or a QAnon conspiracy group, or a group that's plotting an insurrection at the Capitol, which members of that group are going to be reporting each other for violating Facebook's rules? I don't think that's a thing that's going to happen. And so I think what we're going to end up with is just a much more sort of unmoderated mess over at Facebook and Instagram and all the other Meta platforms.

When I was talking to those current and former Meta folks this week, one of them pointed out to me what a sort of strange step backwards this is in this respect. For so many years, Mark Zuckerberg bragged about how automation was the future of content moderation, and he boasted about the systems that they were building

that were getting better every single quarter at detecting the hate speech, detecting the bullying, and making this a sort of better place for his community. And now, instead of saying, we're going to lean into this even more, we're going to make these filters better, he said, we're

going to stop using them, and we're going to go back to human beings who don't even work for us or have any training or expertise, right? This is an abandonment of his technological project in favor of something that is obviously inferior. So to me, that is one of the big twists here: Mark Zuckerberg walking away from the very good technology that he built. Yeah, that's a really good point. So what else in these changes caught your eye? Yeah. So, you know, some of our...

listeners, Kevin, may use Facebook or Instagram and just wonder, you know, what's it going to be like now, now that these changes have been made? So I thought maybe it would be good to go through some of the offensive things that you can now say on Facebook and Instagram, if you want, and not get in trouble.

For example: I'm gay. You can now tell me that I have a mental illness, Kevin. You can go right onto Facebook and tell me that I'm mentally ill for being gay. You can say that I don't belong in the military. You can tell trans people they don't belong in the military. I mean, you don't belong in the military. For other reasons. For other reasons. And that's important.

Yes, nothing to do with your sexuality. No, I'm a terrible shot. Okay, go on. There's some other changes. Yes. So look, if you want to say offensive things about trans people, like they can't use the bathroom of their choice, if you want to blame COVID-19.

on Chinese people or some other ethnic group. You can just do that on Facebook and Instagram now. And Mark Zuckerberg says, well, that's sort of more in keeping with the mainstream discourse. Those are the words he uses. That is in keeping with the mainstream discourse.

And I look at that and I think, oh, the standard on Facebook now is that it's just going to feel like a middle school playground, right? All of this stuff is stuff that I used to hear when I was 12 years old at Washington Middle School. Maybe not the bathroom stuff; that was sort of still yet to come.

Everything else I heard in seventh grade, and that is the new standard that Mark Zuckerberg has set for his property. Yes, he's saying, I would like the discourse on my platforms to more closely resemble the dialogue in a Borat movie. Yeah, which...

is satirical in the Borat case, but is, you know, very serious. Yes. And look, it's easy for me to joke about it. Look, if you want to tell me I'm mentally ill for being gay, like, I can handle that. But, you know, if you're 14 years old and queer and it's people in your high school that are calling you that on

Instagram, we've seen over and over again that these kids harm themselves. And one of the things I find so crazy about this series of decisions, Kevin, is that right now, 41 states and D.C. are suing Meta over the terrible child safety record it has on its platform. And my understanding is that these changes apply to younger users just as they apply to everyone else. And so these classifiers that once used to try to find bullying and abuse and harassment against young people

They're no longer going to be automatically enforced. And it is going to be up to, I guess, the other kids in school to say, hey, looks like my friend is being bullied over here on Instagram. So that just seems like they're opening up a huge amount of liability for themselves. Right. And I think we should say, like, it is not just

right-wing culture warriors who have complained about excessive moderation on Meta's platforms, right? People on the left complain that their pro-Palestinian speech is being targeted for takedowns or that— And that's true, by the way.

These are not just, like, phony complaints. It is absolutely true that Meta has over-enforced in some cases, right? But what's so interesting, as I'm hearing you explain the details of some of these changes and how they are revising their rules, is that they all seem to be pointed in one direction. It's like, let's let people on the right mock people on the left in more ways.

Absolutely. And again, you know, I sort of wrote in my newsletter that a younger and more capable version of Mark Zuckerberg truly did handle this differently. And the way he handled it was like, oh, we're over-enforcing in this way. Let's improve the classifier, right? Yeah. So that is a lot about the what of these changes. I want to talk now about the why of these changes.

I think there is an obvious explanation. The one that has been popular among the critics that I've been reading and talking to over the past couple of days is the political opportunism angle, which is, you know, this is Mark Zuckerberg's attempt to kind of... So I think that there is a lot of truth to it. I think

Another factor that is in there, and we've talked about this on the show a bit, is that trying to be a good Democrat just didn't really get Mark Zuckerberg anything. You know, after the 2016 US presidential election and the huge backlash against Meta in particular that it created, Zuckerberg tried to say, whoa, whoa, whoa, okay, I hear that you're super mad. I'm going to try to fix this. And so they went out and they built all these fancy machine learning classifiers to try to improve the service.

And at the end of the day, I don't think Democrats really liked him 1% better than they did before he did any of that. You have to remember that politics is transactional, and people vote for people who they think they can get things out of. By the end of 2024, I think it was very clear to Mark Zuckerberg he was truly not going to get one thing out of the Democrats. But then along comes Donald Trump. And Donald Trump has this

really interesting relationship with Elon Musk where, you know, Elon Musk used to be kind of a liberal guy too, had a bunch of sort of bog standard liberal positions. But, you know, then he, you know, changed his views for whatever reason, gave a bunch of money to Trump. Trump said, hey, I like this guy. I'm going to give him every political...

advantage that he wants. And Mark Zuckerberg is a pretty smart guy. And he thought, oh, well, you know what? Maybe I could do the same thing. Right. Right. I mean, I think the one thing that we know about the values of Mark Zuckerberg and Meta is that they are extremely efficient at self-preservation, right? They will do anything to stay relevant and stay ahead. They will copy features. They will change the name of the damn company. We know that Mark Zuckerberg's own views on speech

are very flexible. They tend to sort of shift as the political winds shift. But I also think there's another potential why here, which is about Mark Zuckerberg personally and his own shifting political allegiances. I've been... talking recently with some folks who know Mark Zuckerberg or who have worked with him in the past.

And what they have said to me is that this is a man who is following a very conventional sort of former-Democrat-turned-Republican arc, right? He is a man, he's 40 years old, he's sort of approaching middle age. He's very into these kind of male-coded hobbies, like mixed martial arts. He spends a lot of time, you know, talking with Joe Rogan and, you know, hanging out with Dana White. And he's just sort of enmeshed in this kind of manosphere outside of work.

And he's also been the target of a lot of criticism from especially the left. And one thing that we know about successful men who get targeted by left-wing opprobrium is that they often respond to that by becoming sort of...

disaffected former liberals who embrace the right because there they feel like they're getting a more fair treatment. So I just want to put that out there. I can't prove this theory, but some people who know Mark Zuckerberg have floated it to me that he has actually become... personally quite red-pilled or conservative over the last few years.

Now, obviously, he's not Elon Musk. He's not broadcasting his political opinions on social media dozens of times a day. He's been more careful about sort of signaling which team he's on. But I just offer this as a theory because I think we're...

starting to see more evidence that his own views may have shifted quite a bit independent of what's good for meta. Yeah, I mean, I think that there was a version of all of this that was less extreme and that if Zuckerberg himself were more truly liberal or progressive in his...

heart, we would not have seen these changes. So I do think that the changes that they announced this week offer some evidence for what you just said. Also, my colleagues, Mike Isaac and Teddy Schleifer, reported last year that Mark Zuckerberg has begun referring to himself as a classical liberal.

which, if you've ever watched a right-wing YouTube video, is what every former liberal who has now become a Republican says. They call themselves classical liberals. So I'll just put that out there. That is a code word. So, okay. Last question about the implications of these changes. Do you think that we are going to see an exodus of liberal and progressive users from Meta platforms the way that we did from X after Elon Musk took it over? Well, it depends on how all of these changes

play out. And we're just not going to know for a while. My assumption is that Meta will continue to do a significantly better job at moderation than X does. It's a much bigger company. It has more infrastructure in place. I don't think you're going to get the sort of overnight transformation you got with Elon Musk. Also, you know, Facebook and Instagram, they're just, like, structured very differently than X is. Like, Zuckerberg, I don't think, can really take over those

like, in terms of the actual posts that you're seeing in the feed, the same way that Elon does. So, you know, I would be somewhat surprised by that. On the other hand, if Facebook and Instagram do truly come to feel like seventh-grade playgrounds at recess, and the sort of discourse just gets much rougher and coarser, I do think you're going to see people walking away from it. Because while we...

almost only ever discuss content moderation in terms of the politics of it. The truth is there's a huge commercial demand for it. People do not want to spend time on networks that are full of violence and harassment and abuse. And that is the main reason why all of these companies build systems to remove those things or suppress them. So the real question, I think, Kevin, is how far ultimately does Zuckerberg go in this direction?

because whatever the politics might be, the vast majority of his users just want a safe and friendly place to hang out online. Yeah. Okay, so that is where we are with Meta today and what some of the implications will be. Do you have any more predictions about where...

this will all head. I have a really fun one for you, Kevin. Yes. So Meta has told its partners in this fact-checking partnership that it has been funding for the past several years that their contracts will end in March. So in March, the fact-checks on Meta... So that means, Kevin, that you and I can look forward to...

a fact-free spring on Facebook. Let's go. We can truly say the craziest things, and not one person is going to be able to stop us. And let me just say, I'm cooking up some whoppers. The things I'm about to say on Facebook and Instagram... let's just say you're going to want to follow me. Yeah. So follow Casey over at Threads. Yeah. And, uh, let's just say, start piling up the drafts now. Yeah. Because the purge is coming. And you're ready? I'm ready for the purge.

When we come back: can OpenAI's o3 forge a new path forward for AGI? Okay, we'll go with that. Hi, this is Lori Leibovich, editor of Well at the New York Times. Everything that our readers get when they dig into a Well article has been vetted. Our reporters are consulting experts and doing the research so that you can make great decisions about your physical health and your mental health.

We take our reporting extra seriously because we know New York Times subscribers are counting on us. If you already subscribe, thank you. If you'd like to subscribe, go to nytimes.com/subscribe. Well, Casey, we have more news from over the break about one of our favorite topics: AI. Boy, do we. It was a huge couple of weeks for AI, Kevin, during a time of year when normally the news cycle gets pretty slow.

I was wondering about that because usually in December, people are sort of getting ready to go on holiday break. The news kind of trails off, but not this year. The AI labs were sort of trampling all over each other to try to get their big news out before.

the end of the year. Yeah, and I think it was led by OpenAI, which, of course, announced their 12 Days of Shipmas, where they tried to announce something big, something small every day for 12 days. And, you know, they did wind up ending on something pretty important, I think.

Yes, so this is all moving very fast. There's a lot to catch up on today, and I want to take some time to really dig into what happened and what we can expect for the first few months of the new year. But before we get into all that, Casey? You have something to tell us. I do. So, Kevin, of course, our listeners' trust is of paramount importance to us. And so I wanted to let folks know about something that happened in my life that I just think I want to be upfront about, which is...

that at the end of 2023, I met a man who had many wonderful qualities. One of those qualities that I loved was that he worked for a company I'd never heard of, which meant, fine, I can keep doing my job as normal. But as of this week, Kevin... My wonderful boyfriend started a job at a company we talk about sometimes on the show. He is a software engineer at Anthropic. Is his name Claude?

You know, many people have written to me asking me if I fell in love with Claude. And while I do find it to be very useful for some things, no, this was a human man that I am currently in love with. I've met him. He's real. Can confirm. He's wonderful. But yes, you are disclosing that you have this new, let's call it an entanglement, because this is a company that you and I talk about that you also cover in Platformer.

And so we just wanted our listeners to know that this is happening out in the world and in your life. Is there anything more you want to say about this? Yeah, I mean, people have some questions about this. Like, you know, I did not play any role in my boyfriend getting this.

job. Anthropic didn't know about our relationship before this happened. Of course, you know, we have since told them about this. I do plan to continue writing, reporting about Anthropic because I think it's a really important company. But whenever I do that, I'm going to remind you that

this relationship exists. A couple other things that I would say: you know, my boyfriend and I do not have any financial entanglements. We do not currently live together. But, you know, I'm also going to commit to updating folks as that changes. Basically, I'm going to try to do the same job that I always do, try to bring the same skeptical, critical eye that I do to everything, but I'm also just going to remind you that I have this relationship.

But, you know, if you have questions about that, email the show: hardfork@nytimes.com. I will try to, you know, answer any respectful questions that I can about this. Now, Casey, I will just editorialize and add a little bit here to your disclosure, which I think is, you know, laudable, and I'm glad you're doing it. I'm glad you did it in your newsletter. I'm glad you're doing it on the podcast. I have known you for a long time.

I have known how hard you have tried to avoid dating men who work in the technology industry. I truly have. I mean, for more than 10 years, Kevin, I would be on apps like Tinder and I would see that somebody cute worked at a Google, a Meta, a Twitter, you name it. And I would just always swipe left, because I thought, I don't need that drama in my life. You know, I don't need that complication, which is

tough in San Francisco because everyone works in tech. It is a very small town, and the number of sort of eligible bachelors out there who do not work at one of the companies you cover limits your dating pool considerably. It really did. And it sort of explains why I was

mostly single for the last 10 years. And I thought, well, I finally found something that sort of gets me out of it. But, you know, sometimes life just has other plans for you and you kind of have to roll with the punches. Yeah. So here I am.

Well, anyway, thank you, Casey, for that disclosure. I think transparency is very important. We are obviously going to keep talking about developments in AI at Anthropic and elsewhere, but we will also put this disclosure in sort of the way we do when we talk about OpenAI and the fact that the New York Times company is...

suing OpenAI and Microsoft, alleging copyright violations. Yeah, and you know, when I disclosed this in my newsletter this week, Kevin, one reader actually replied that they thought it was cute that I would now have a disclosure to go along with your disclosure that you do every week. So we're sort of now one for one. Peace.

Well, let's proceed to the real meat of this segment, which is about AI news. Because so many things happened. Truly. So let's start by talking about OpenAI. We've already made the disclosure. Don't have to do that one again. This was a big month for OpenAI. On December 20th, they announced a new model called O3. This was a successor to O1.

Funnily, they skipped O2 in the naming process because of a lawsuit threat from O2, the telecom company. I'm not sure if it was a threat. They said they did it out of respect. But yes, presumably there would have been some sort of legal problems. Yes. So they skipped right over O2 to O3. This model is not yet available for users, but they did give a kind of preview of it to some researchers, and they also talked about how it had performed on some benchmark

evaluations. Casey, tell us about O3. What is O3? So O3 is a large language model, Kevin, like you would already find in ChatGPT. But it is built in a different way, and it's known as a reasoning model. And the reasoning models are a little bit... different. A main way that they are different is how they are trained. So they are trained to try to be

better at handling logical operations and structured data. The second big way that they are different is that when you make a query, you type into the little box whatever you want it to do, the reasoning model takes longer to go over it. It uses more computing power, it will take multiple passes through the data, and it will really try to bring true reasoning to what it is looking at. And so, as a result of taking more time, what OpenAI found with O3 is that they were actually able

to get way further on some of the hardest benchmarks ever designed for LLMs to pass than anything that has come before. Yeah, so we talked a little bit about this idea of test-time inference, or test-time compute, back when we covered O1, their previous reasoning model. But this is basically a different step than the classic

pre-training step of building a large language model. This is something that happens when the user makes the query. Instead of just spitting out an answer right away, it goes through this secondary test-time step. And that is something that researchers were very excited about when O1 came out. They thought, okay, maybe if we are tapping out the limits of the pre-training step, maybe there is a kind of new scaling law developing around this test-time or inference compute.
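For listeners who want a concrete picture of the test-time compute idea, here is a toy sketch, not how O1 or O3 actually work: spend more compute per query by sampling many candidate answers from a stand-in "model" and keeping the one a verifier scores best. Every name and number here is made up for illustration.

```python
import random

def sample_candidate(rng):
    # Stand-in for one model sample: a guess at the square root of 2.
    return rng.uniform(1.0, 2.0)

def verify(answer):
    # Stand-in verifier: answers closer to satisfying x * x == 2 score higher.
    return -abs(answer * answer - 2.0)

def best_of_n(n, seed=0):
    # The "test-time compute" knob: larger n means more work per query,
    # with the underlying sampler left completely unchanged.
    rng = random.Random(seed)
    candidates = [sample_candidate(rng) for _ in range(n)]
    return max(candidates, key=verify)

# Spending more compute at query time tends to yield a better answer,
# a different axis of improvement from making the base model bigger.
cheap_answer = best_of_n(4)
expensive_answer = best_of_n(4096)
```

The only point of the sketch is that the answer can improve as the inference-time budget `n` grows, even though nothing about the "model" itself changes.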

And maybe if we pour more resources into that step, the models will get better along a different axis. And so what people were very excited about when O3 came out was that it looks like that actually... worked. Yes, and now this stuff is not yet in the hands of everyday users, but OpenAI did

enter this O3 model in this really fascinating public competition known as the ARC Prize. You know the ARC Prize, Kevin? Yes. So the basic idea with the ARC Prize is they try to come up with problems that would be insanely difficult for an

LLM to solve. And one of the ways that they're difficult, by the way, is that they are original problems. So these problems are known to not be in the training data of any of these models because, of course, one of the criticisms of the LLMs is essentially, oh, well, you already have

all that data stored, right? You just essentially did a quick search. So this prize says, no, no, no, we're not going to let you search. You actually are going to have to show that you can reason your way through something really difficult. So this ARC-AGI-1 public training set has been around since at least 2020. And at that time, Kevin, GPT-3, a previous OpenAI model, got a 0%, okay? So just four or five years ago, we were at 0%. In 2024, last year,

GPT-4o got to 5%, okay? With O3, it gets to 75.7% in one evaluation where the limit was you could only spend $10,000 on computing power. In a second test where they let OpenAI spend as much money as they wanted, which we actually think was more than a million dollars, O3 hit 87.5% on this benchmark. So something that was essentially impossible before...

Almost instantly, we have now hit 87.5% on that benchmark. And that is essentially the only public data we have about how good this thing is. But man, did that get people's attention. Yeah, it got people's attention. I also saw a lot of people paying attention to O3's performance on something called Codeforces. This is a programming competition benchmark, and this is sort of one way that...

these AI companies try to assess how good their models are at coding. OpenAI's O3 received a rating on Codeforces of 2727. That is roughly equivalent to about the 179th best human competitive coder on the planet. And just for context, Sam Altman, in presenting this result, mentioned that only one

programmer at OpenAI has a rating higher than 3,000 on Codeforces. So why does this matter? Well, you think about some of the discussion that was happening at the end of 2024, Kevin, and you started to hear people saying, we are hitting a scaling wall.

This was the phrase, right? And the idea was the techniques that we used to build the previous LLMs were just sort of running out of the low-hanging fruit, and it was going to require some sort of conceptual breakthrough in order for them to continue improving. And O3 comes along and effectively delivers just that. And what I think is so important about these benchmarks, and why we want to take some time today going through them, is there are a lot of questions and criticisms right now that are justified.

How much are these things being hyped up, right? You know, we know that the companies love to hype up their products and tell us, you know, how incredible they are. But the benchmarks are something objective that you can actually use to measure their performance. And so when you have one of those benchmarks...

saying that there is now a model that is better than all but 179 people on Earth. Well, it seems like we might be getting pretty close to superintelligence, because what is superintelligence if not a system that is better than every human at something? Yeah, and I would just add to that a little bit of a caveat, which is that these so-called reasoning models, they seem, from what we know about them so far, to be very good at

the kinds of tasks that you can design what are called reward functions for, which are things that have sort of a definite right answer, right? Coding: either the code runs or it doesn't. Math has a definite right and wrong answer. So in these domains where you can kind of give the reinforcement learning model a goal, and an indicator of whether it is right or wrong in pursuing that goal, it tends to do very well. But if you asked it

what is the meaning of true love? It would never know. It wouldn't know the first thing about it, and I think that's beautiful. Right. So I think for the short term, like the next year or two, we're going to have these early reasoning models that are... very good and potentially even superhuman at some tasks, the kinds of tasks that have sort of definite right and wrong answers. But for other things like, you know, fiction writing or life coaching or sort of these vaguer tasks that...

don't necessarily have one right and one wrong answer, they may not advance much beyond what we see today. Yeah. And, you know, some people will use that as an excuse to say, well, then this doesn't...

matter that much. And I would just point out that, you know, at some point in your life, you're probably going to go see a surgeon and that surgeon might be not that great of a painter. And it's not actually going to change the fact that the surgery that you got was very valuable. Right. So I think it's important to think more in terms of what these things.

are capable of in the moment than what they are not capable of. Yes. The other thing from OpenAI that we should talk about quickly is that Sam Altman wrote a new blog post on January 5th called Reflections, basically talking about some of his

thoughts about the two years since ChatGPT was released. And the big... headline from this blog post is that Sam Altman is claiming now that OpenAI knows how to build AGI, that the artificial general intelligence that people have been speculating about for years now, that OpenAI has been sort of hinting at.

that they are within sight of that goal, and that he believes it could happen very quickly. And they are already starting to look past AGI to ASI, to artificial superintelligence. So, Casey, what did you make of this blog post? Well, so I, you know, spent basically a day trying to figure out what exactly Sam means when he says that they know how to build AGI. And another thing that happened this week, Kevin, is that Sam did an interview with Josh

Tyrangiel at Bloomberg. And one of the things that he tells Josh is, I'm going to quote: I don't have deep, precise answers there yet, but if you could hire an AI as a remote employee to be a great software engineer, I think a lot of people would say, okay, that's AGI-ish. My interpretation, based on conversations that I had this week, is this actually is the destination that everyone has in mind for 2025. This is where the race is going. You are going to see all the big AI

labs race to try to release a virtual AI coworker. And if they can do that, and if the coworker is pretty good, then they're going to say, this is actually what AGI is. Because the moment you can hire a sort of virtual coworker

to do some task or series of tasks in your company that you no longer need a person for, that is where this entire thing has been driving the whole time. Yeah, I agree. And I think it is not necessarily something that we need to accept uncritically, right? Sam Altman is a person with his own goals and motives, and so is OpenAI. And reward functions. And reward functions. And we should, you know...

maybe apply some discount to what he says about his projections for AI, because he does have a vested stake in the outcome. But I think we should also just use this as sort of a, you know, sticking our finger in the wind of what conversations are happening in the AI scene in San Francisco. People here, I cannot emphasize this enough, are very sincere and very genuine about the fact that they believe that we are going to get AGI, or something like it, very, very soon,

possibly this year. Yeah, and when you look at the improvement in these models that we saw in December alone, I think you have to take them seriously. Yes. Okay, moving on from OpenAI. Another thing that happened in December is that Google released Gemini 2.0, the new version of its flagship AI model. And Casey, have you tried it yet? What do you make of it? You know, I have not tried it yet, Kevin, because it is not in the sort of consumer-brand Gemini that I pay for.

The exception is this new feature they have called Deep Research, where you can ask Gemini to sort of go and read the web and prepare a little report for you about something. I think I've only used it one time. It seemed okay. To be candid with you, I have not followed the 2.0 stuff closely, mostly because it just hasn't seemed as shocking or impressive as the OpenAI stuff. Have you? I played around a little bit with Gemini 2.0, mostly in a series of demos that I got at Google before it came out.

Some of what has been in there is sort of catching up with other models. Google also released a Gemini 2.0 Flash Thinking mode, which was their first kind of attempt at an inference-time compute reasoning model, similar to O1 and O3 from OpenAI. I have not played around with Gemini's Deep Research mode yet, but I've heard people talking about how cool it is. So I'm excited to try that out. But people I trust, whose judgment I trust about this stuff, say that this is basically Google sort of

announcing that it is on the same trajectory as OpenAI and all the other companies that are its peers and rivals, that it is going to be scaling up very quickly in 2025, and that we should look forward to more there. Yes, although there was a post on X that went viral this week where someone asked Google, does corn get digested? And all of the image results are AI slop that appear to be diagrams of corn and just make no sense whatsoever. And it's extremely funny. So maybe it'll be fixed

by the time this comes out. But if not, just go ahead and do an image search for does corn get digested and you'll get a sense of where Google's AI search skills are at. Got it. In conclusion, Google is cooking in the AI department, but not much of this has gotten out into consumers' hands yet. And so I think that will be the question for 2025 is, is this stuff actually as good as Google says it is? Yeah.

All right. The third and final story that we're going to catch up on today from over the break is something out of a Chinese company called DeepSeek. DeepSeek is a Chinese AI company. It's actually run by a Chinese hedge fund called High-Flyer. And right around Christmas, as my house was getting robbed, they released a new model called DeepSeek V3 that ranks up there with some of the world's leading chatbots and

caught a lot of people's attention. Yeah. And look, I have not used this one yet, but there's a few things to know about this one. One is that it's really big. It has about 671 billion parameters, which makes it significantly bigger than the largest model in Meta's Llama series, which I would say up to this point has been sort of the gold standard for open models. That one has 405 billion parameters.

But the really, really important thing about DeepSeek is that it apparently was trained at a cost of $5.5 million. And so what that means is you now have an LLM that is about as good as the state of the art that was trained for a tiny fraction of what something like a Llama or a GPT was trained for. I saw some speculation from this great blogger, Simon Willison, who said it seems like

the export controls that the U.S. is placing on chips are actually inspiring these Chinese developers to get much better at optimizing. And indeed, you now have this state-of-the-art model for $5.5 million, which is a huge step toward the proliferation of LLMs everywhere. Yeah, let me just back up and go a little more slowly through what you just described, because I think it's really important. I was trying to go really slowly.

I need a slower... I need the Deep Research mode here. Okay. So one of the big questions over the past five or so years is about the Chinese AI industry and where they are relative to the leading frontier AI labs in the U.S., and whether we need to be doing more to kind of slow them down, and if we even can slow them down, or if this stuff is just kind of

common knowledge that as soon as someone invents a new way of doing AI, it spreads throughout the world and there's not much you can do to stop it. One of the things that we've done... in the United States, was to pass something called the CHIPS Act, along with a set of controls that basically limited which AI chips you could export to China.

And we put a lot of faith in the ability of these restrictions to effectively constrain the Chinese AI industry. If they couldn't get the latest chips out of NVIDIA and other companies, they wouldn't be able to build models that were competitive with the state-of-the-art U.S. models. And that was one way that we were going to sort of try to keep our national advantage. What DeepSeek, I think, has shown, or at least what they have hinted at,

is the possibility that China is actually not that far behind. Because this model, whatever you think about it (I have not tried it myself), according to its benchmarks, is up there in many respects with the latest and greatest models from companies like OpenAI and Google and Anthropic. It is, according to some measures, the highest-ranking open-source or open-weights model that we have. And it does not appear to have needed the latest and greatest hardware to be trained on.

According to the report that DeepSeek put out, they trained this new model, V3, at an estimated cost of about $5.5 million, and they did it not on the leading-edge NVIDIA H100 or A100 chips that all the big AI labs use, but on a different version of NVIDIA chips known as the H800, which is basically just a less capable version of the state-of-the-art chips from NVIDIA. And so I think what this all boils down to is the conclusion that regulating AI by limiting access to hardware

is just going to be much more complicated than we thought. One interpretation would be that you actually can't stop China from building state-of-the-art foundation models, and that our regulatory regime just isn't going to cut it when it comes to keeping the U.S. ahead of China. What do you make of that?

I mean, the first thing I would say is I do get a little bit nervous when people frame the debate this way, because I think a lot of the people who try to frame the AI story as a race between the United States and China are, like, sort of very hawkish and, like, leading us to a potential conflict that I would rather avoid. And it also presupposes that all of the American companies have to race as fast as they can, and they have to build AGI as fast as they can, even if it means cutting

corners on safety, because otherwise, you know, there's this looming specter of China and everything that could happen. So I just would sort of say we don't necessarily have to do that. We can choose to still, you know, move somewhat deliberately and with caution here. Do I think that this shows that it is going to be harder to prevent China from developing extremely high-end models, and that regulating this is going to be more complicated? Yes, absolutely. All right, Casey, that is...

a small fraction of what happened in AI while we were gone. But probably the most important things. I think we covered most of what really mattered. And if there's one thing that we can be sure of in 2025, it's that we are going to be very busy talking about more AI changes and progress. You know, somebody was telling me that if, like, 2023 was a year that made everybody say, oh my gosh, AI is going so fast, and 2024 was a year that felt very business as usual, 2025 is a year where we could be

going back to, oh my gosh, AI is going so fast. And then maybe it'll just feel like that all the time forever. Isn't that a pleasant thought? Yeah. So anyway, happy new year. AI vertigo forever. Forever. When we come back: 2025's first game of HatGPT. Well, Kevin, from time to time, we like to check in on some of the wilder headlines from the world of tech in a segment we call HatGPT. Yes!

In HatGPT, of course, we take headlines, we put them into a hat, we fish headlines out, discuss them for a bit, and when one or the other of us gets bored, we simply say, stop generating. We have not done a HatGPT in a while, and there's been so much that I'm excited to see what's in the hat. Me too. Well, why don't you go ahead and get us started? Okay, I'll pick first. Okay. All right, this one is called Meta Kills AI-Generated People Like Proud Black

Queer Mama. This is from Futurism. So this was sparked by an interview that was given by a Meta executive to the Financial Times at the end of 2024, basically talking about their plans to let users create a bunch of AI profiles and sort of fake people, and get them to share generated content on Meta platforms.

And then people began discovering the existence of these older AI-generated profiles that Meta had started up back in 2023. And Washington Post columnist Karen Attiah posted on Bluesky about one AI profile in particular that was described as a proud Black queer mama of two and truth teller named Liv. And Karen started chatting with this chatbot. She then posted her chat on Bluesky.

And Meta summarily killed Liv and many of its other older AI personas. You know, this whole thing was so silly. And I think there's been a lot of just backlash against Facebook for this one, because the way this kind of thing has succeeded before... Yes. ...was that they let you pretend like you were talking to Luke Skywalker or Spider-Man or characters that were very personally meaningful to you. Meta just made up a bunch of

essentially generic humans and said, go nuts, and had them say generic things. And it just felt incredibly creepy to people, I think. Yeah, I think this is a case of... an idea that needs to be taken out back and dispensed with. But Meta is not giving up on the idea of AI-generated personas. In fact, they have signaled that they intend to put more AI-generated personas inside all of their apps.

And I'm just fascinated to see what fresh horrors emerge when that happens. Here's what I hope. I hope that at some point Meta will be able to detect when you're harassing or abusing someone, which is, of course, now allowed under their new rules. And they just actually route you to an AI so that the AI... can sort of absorb all of your prejudice and bigotry. It might be a nice solution. I like that, like an AI punching bag. Exactly. Yeah, okay, stop generating.

All right. I feel like normally when it's my turn to pick, I get to shake the hat. But for some reason this week, you've decided you want to shake the hat. Okay. I'm just going to shake the hat. As is my right. All right. Here's one. Apple agrees to pay a $95 million settlement in a Siri privacy lawsuit. Kevin, this is from Chris Velazco at The Washington Post. Apple has agreed to end a five-year legal battle over user privacy related to its virtual assistant Siri

with a $95 million payout to affected customers, according to a preliminary settlement. Apparently, Kevin, Siri was a bit overzealous in listening for wake words like, hey, Siri. So when it thought it was being called into action, it would start recording, including, according to the suit, some couples having sex. So if a judge signs off on the settlement, anyone who qualifies can submit a claim for up to five Siri-enabled devices for a max payout of $20 per device. So I guess my question to you is...

Would you be willing to let Apple listen to you have sex for $100? Let me just say... I'd go for it. No, I think my price is a little higher than that. No. But Casey, I saw this one making the rounds because people said, oh, finally, they're admitting that they listened to you through the microphone in your iPhone, which has been, of course, a favorite conspiracy theory of people, including

critics of Meta for years now. There's no proof that that is true. What this essentially seems to be saying is it's not that this was sort of an omnipresent listening Siri that was listening when it shouldn't be. It's that... you know, obviously Siri... needs to be listening sort of ambiently in order to tell when a user says, hey, Siri. That's right. And I'm sorry if we just woke up your Siri on your iPhone and you're no longer listening to this podcast because I just said that.

But this is essentially saying it sounds like it was a little miscalibrated, to where it was listening more than it needed to in order to catch that wake word, or that it was recording more audio than it needed to. Yeah, and I don't care about the actual incident, Kevin, and here's the reason. In the 14 years that Siri has existed, I think it's correctly understood me about four times.
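To make that calibration point concrete, here is a toy sketch, with entirely made-up scoring, of how an always-on assistant is supposed to buffer audio: discard everything until a wake-word detector fires, and only then keep the recording. Set the detection threshold too low and it keeps audio it should have thrown away.

```python
from collections import deque

def detect_wake_word(chunk, threshold):
    # Stand-in for an on-device acoustic model: compares a fake
    # per-chunk confidence score against the trigger threshold.
    return chunk.get("wake_score", 0.0) >= threshold

def listen(chunks, threshold):
    # Keep only a tiny rolling buffer while idle; start recording
    # in earnest only after the wake word is detected.
    rolling = deque(maxlen=2)
    recorded = []
    triggered = False
    for chunk in chunks:
        if triggered:
            recorded.append(chunk["audio"])
        elif detect_wake_word(chunk, threshold):
            triggered = True
        else:
            rolling.append(chunk["audio"])  # discarded as it scrolls off
    return recorded

stream = [
    {"audio": "chatter", "wake_score": 0.2},
    {"audio": "hey siri", "wake_score": 0.9},
    {"audio": "set a timer", "wake_score": 0.1},
]
# With a well-calibrated threshold, only speech after the wake word
# is kept; set the threshold too low and background chatter triggers it.
```

The miscalibration described in the suit maps, in this sketch, to a threshold low enough that ordinary conversation trips the detector.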

This is not a technology that ever knows what I'm talking about for any reason. Siri could take an hour-long recording of me and have no idea what to do with it, so I don't care about that aspect. What I do care about is this is just going to fuel the most annoying conspiracy theory in tech.

that all the tech companies are secretly listening to you. So yeah, we're just going to see a lot more conspiracies around this. And it is super unfortunate because again, this is only Siri we're talking about. It doesn't know anything. Yeah, it's not that serious. Stop generating. Okay. This one is from The Athletic. Netflix's WWE investment and the future of live events on the platform. Quote, we're learning as we go. Starting January 6th, the story says...

The WWE's popular weekly wrestling show, Raw, will stream exclusively on Netflix in the United States. This is part of a decade-long agreement worth a reported $5 billion. And Casey, as Hard Fork's resident WWE fan and expert, why don't you take this one on? Well, Kevin, I mean, did you watch? No, I did not. Well, you missed something huge, which is that Roman Reigns beat his cousin, Solo Sikoa,

in a tribal combat match, winning back the Ula Fala and becoming the one Tribal Chief of World Wrestling Entertainment. Is that true? That is all true. It was a great match. It was a really fun show. And I think it looked great. You know, WWE positioned this as a really

huge thing for them, and it is. It's also huge for Netflix. From WWE's perspective, now they can be in something like 280 million homes around the globe. For Netflix, they get to experiment with some of this live programming, which they've been

dipping their toes into. Of course, there's a lot of speculation about whether they might soon go after more traditional sports. So maybe they want to get a big football deal, a big baseball deal. And so I'm very interested to see how these two things work together. And I'm very interested to see who Cody Rhodes will be fighting at WrestleMania this year. So yeah.

I did see the, I mean, obviously they did the big Jake Paul, Mike Tyson fight that was on Netflix. I also saw on Christmas Day, they had some live football on Netflix. That's right. Do you think this is hastening the death of... cable TV, or do you think it's just that was sort of already happening, and this is just Netflix trying to pick up the pieces? I absolutely do. You know, I watch, in addition to WWE, another wrestling promotion, AEW, and the reason that...

I had my YouTube TV account, which cost me something like $80 a month, was so that I could watch AEW programming, because that was only available on cable. Well, guess what, Kevin? AEW started streaming on Max. I was able to cut the cord once again, and now I am

fully streaming again. So yes, as these sorts of live events that have these, you know, intense, weird fandoms move from traditional cable to streaming, it absolutely becomes a moment where more people cut the cord. Now, this is a little bit of a tangent, but I did have an interesting moment over the break, where we were stuck in a motel in Lake Tahoe. And our iPad that we use to sometimes entertain our child had run out of battery. And so I was forced to

turn on the hotel TV and try to explain to my two-year-old son the concept of linear TV. And Casey... It blew his freaking mind. I was like, so on this screen, you can watch Bluey sometimes, but not all the time. And you can't pick a specific episode. And then about...

twice an episode, they're going to interrupt the episode to try to sell you toys. And he was just so confused by the concept of linear TV that I thought, you know, this industry probably does not have a long time left. No, it doesn't. Your child knows. Yeah. Yeah. All right. We'll stop generating.

Now, oh, this was a fun one. So the YouTuber MegaLag posted a video on December 21st titled Exposing the Honey Influencer Scam. And ever since, Kevin, YouTube has been overtaken by discussion of what Honey did. Yeah, this, in the world of YouTube creators, was probably the biggest news story of the year. And I don't think I've heard much about it outside of YouTube because of the sort of way that

insular platform works. But essentially, this was a massive scandal among major YouTubers over the holidays. Maybe we should just sort of explain what happened for people who are not glued to YouTube 24-7. I think we should. So Honey is a company that was acquired by PayPal a while back, and they are a browser extension. And the idea is before you go to checkout online, before you make an online purchase, you click the Honey button and Honey will...

scan the landscape for the best coupon. Because, you know, often if you have a coupon code, you can get a little discount. And so Honey went out to a bunch of YouTubers and signed these deals, and they said, hey, please go ahead and promote Honey. And the reason that this is important is that... These sort of coupon codes are a big part of the creator economy. We've talked on this show in the past about affiliate links. A lot of the internet is built on

companies that sell things giving a little kickback to people who talk about their things. Right. And I think before we say what the allegations against Honey are, we should just, like, set the scene for people who are not YouTube heads. Honey was maybe the most prominent advertiser on major mainstream YouTube channels. I mean, I would say that Honey sponsorships propped up YouTubers and YouTube content creation

In a similar way that online mattresses propped up the podcast industry for a couple of years. Major, major YouTube influencers, you know, David Dobrik, Emma Chamberlain, the Paul brothers, Marques Brownlee, these people, you know, many of them had major deals with Honey to sort of underwrite.

their channels. So they were basically ubiquitous. It was hard to watch a lot of YouTube a couple of years ago without running into Honey ad after Honey ad. Right. So what are the allegations that MegaLag published? Well, there are two things. The first is that, and this is just sort of hiding in plain sight on Honey's website,

Honey will actually go to online retailers and charge those retailers money to keep their best codes out of the Honey database. So let's say you have your online store and you have like a crazy 80% coupon that you gave out. Honey will say, oh, we'll make...

sure that no Honey user actually ever sees that coupon code. So Honey is straightforward about that, but it's obviously a terrible user experience, right? Right, because the way Honey works, like in a nutshell, is there are these coupon codes, you know, there are sites where you can go look

up coupon codes before you buy something, try to find, you know, a 10% or 20% off coupon. Honey will basically go out and scour the internet for these codes for you and then automatically apply them to

your purchase in your browser for basically any e-commerce website that has these codes. So it saves you a little money while you're out shopping. That's right. And if that had been all that Honey was doing, this wouldn't have been a scandal. But then there was the second allegation from MegaLag, Kevin.

And that was that when people would see products in these influencer videos and they would go to buy them, those shopping carts would often get the creator's affiliate link inserted. So the creator would then get a kickback, which is, of course, the whole point: creators like to work with these companies that share affiliate links so they can get a little bit of money.
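To make the mechanics concrete, here is a minimal Python sketch of how last-click affiliate attribution can be hijacked by rewriting a URL parameter. The `aff_id` parameter name, the store URL, and the tag values are hypothetical; real affiliate programs typically track credit through cookies set by redirect chains rather than a bare query string, but the attribution logic is the same idea.

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def swap_affiliate_tag(url: str, new_tag: str, param: str = "aff_id") -> str:
    """Return the URL with its affiliate parameter replaced by new_tag.

    Under last-click attribution, whoever's tag is present at checkout
    is the one credited with (and paid for) the sale.
    """
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query[param] = [new_tag]  # overwrite whoever was credited before
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

# A creator's link credits "hardfork"; swapping the tag redirects the kickback.
creator_link = "https://mattress.example/checkout?item=42&aff_id=hardfork"
print(swap_affiliate_tag(creator_link, "honey"))
```

The point of the sketch is just that nothing about the purchase changes for the shopper; only the identifier that decides who gets the commission does.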

And the allegation is that Honey was going in at the end of this process and replacing the creator's affiliate link with Honey's affiliate link. So Honey got to keep all of the affiliate revenue and cut the creators out of the process. So let's just walk through this step by step. Okay, so I am watching a

major YouTuber's video. You're watching the Hard Fork channel. I'm watching the Hard Fork channel. We don't actually have affiliate links in our videos, but say we did. Say we've got an online mattress company that we have a promo deal with. And every time you go and buy a mattress and enter the code hard fork at checkout, you get 10% off.

The allegation was that Honey, in the instances where a user went to go buy a mattress from our affiliate link, if they used Honey in their browser, Honey would... find that affiliate link and replace it with the Honey affiliate link. And so instead of getting a kickback on that sale ourselves, that money would instead go to Honey. That is exactly right. And so people are quite mad about this. There's a channel called Legal Eagle that is suing them, which I...

I know nothing about Legal Eagle, but I have to say that sounds exactly like what a YouTube channel named Legal Eagle would do: sue one of the advertisers. When The Verge asked PayPal, by the way, about all of this, PayPal said, quote, Honey follows industry rules and practices. And what I take that to mean is that the industry rules and practices are horrible, and Honey is not doing one thing to try to improve on them in any way. So, you know, this was really...

a case where creators took a look at the situation and they said, I don't think so, honey. And that's a Las Culturistas reference. And I would just say that I think this is a case of, like, people just really being naive about how the internet works. You know, Honey was so profitable and popular that PayPal, you know, acquired it. And people just...

Really, YouTubers just thought they were out there providing these coupon codes to people out of the goodness of their hearts. And I just want to say, bless your heart, if you thought that's what Honey was about. YouTubers are telling Honey to mind their own beeswax. I'll stop generating. Okay. Last one.

LA tech entrepreneur nearly misses flight after getting trapped in robo-taxi. Passenger Mike Johns was reportedly riding in an autonomous Waymo car on the way to the Phoenix airport when the vehicle began driving around a parking

lot repeatedly, circling eight times as he was on the phone seeking help from the company. Did you see this video? I did see this. This was so wild. So he initially... believed it was a prank, he told the Guardian, and then he sort of gets on the phone with the support person at Waymo as he's inside this car that is just circling the parking lot, and it won't let him out, and as a result, he almost missed his flight.

You know, I think it is every Waymo support person's fantasy that one day you just pick a random Waymo and you just start driving it around in circles in the parking lot with no explanation. Maybe you're, like, teaching your kid how to drive or something like that. No, this would obviously be somewhat disconcerting, but it is also hilarious. And I have to say, if I made a list of, like, the 10 worst things that ever happened to me in an Uber, for example, driving around

in a circle eight times would not make the top 10. Yeah, I've almost missed my flight several times because of Uber drivers just thinking they know a better way to the airport. So yes, I would say we shouldn't make light of this. People are placing their lives in Waymo's hands when they get into one of

these autonomous cars. And I did see some people saying, see, this is why I would never trust a self-driving taxi. And I do think it's worth taking these incidents seriously. At the same time, no one was hurt. This was clearly a case of some, like, little software

glitch or something, some issue. I don't think they ever got to the bottom of what happened here. Look, here's another way of thinking about it. Maybe this is a Final Destination situation where if, you know, the Waymo had gotten immediately on the freeway, maybe there would have been a terrible accident, but something in the training set figured

out, we need to stay in this parking lot, we're going to drive around in eight circles, and that will sort of reset the timeline and ensure that Mike makes it safely to the airport. Something that they can think about. Do you know how like airport Wi-Fi sometimes makes you watch an ad before you can get the free Wi-Fi? This is giving me like an evil business idea, which is like, oh, you want to get out of your Waymo and make your flight?

Time to click over to Honey. Complete your purchase with Honey if you want us to stop circling this parking lot. God, someone out there is taking notes. I'm so sorry. Stop generating. That is HatGPT. Casey, it is so good to be back with you in the studio doing one of our favorite games. Hats off to you, Kevin. And hats off to all of our listeners.

Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited this week by Rachel Dry. We're fact-checked by Caitlin Love. Today's show was engineered by Chris Wood. Original music by Elisheba Ittoop, Rowan Niemisto, and Dan Powell. Our executive producer is Jen Poyant. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Chris Schott. You can watch this whole episode on YouTube at youtube.com slash hard fork. Special thanks to Paula Szuchman,

Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at hardfork at nytimes.com with something really mean that you can say on Facebook now.

This transcript was generated by Metacast using AI and may contain inaccuracies.