
Bad Apple + The Rise of the AI Empire + Italian Brain Rot

May 09, 2025 · 1 hr 12 min · Ep. 135

Summary

This episode of Hard Fork covers Apple's antitrust battle over App Store policies, featuring a discussion on developer restrictions, malicious compliance, and the implications of the ruling. It also includes an interview with Karen Hao about her book on OpenAI, exploring AI's impact on society, resource exploitation, and the debate between AI safety and accountability. Finally, the hosts dive into the bizarre world of Italian brain rot, an AI-enabled viral phenomenon.

Episode description

This week, iPhone users started to feel the impact of a stern court order against Apple that requires the company to stop collecting a commission on some app sales. We break down what this means for apps like Kindle and Spotify and why the judge suggested that Apple and a top executive should be investigated for criminal contempt. Then, Karen Hao joins us to discuss her new book about OpenAI and explain why she believes the benefits of using the company’s tools do not outweigh the moral costs. And finally, Casey introduces Kevin to a strange new universe of A.I. slop that’s racking up millions of likes on TikTok.



We want to hear from you. Email us at [email protected]. Find “Hard Fork” on YouTube and TikTok.

Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Transcript

Hey, I'm Robert Van Leeuwen. I'm from New York Times Games, and I'm here talking to people about Wordle and the Wordle Archive. Do you all play Wordle? I play it every day. All right, I have something exciting to show you. It's the Wordle Archive. What? Okay, that's awesome. So now you can play every Wordle that has ever existed. There's like a thousand puzzles. What? An archive!

Oh, cool! Now you can do yesterday's Wordle, if you missed it. New York Times game subscribers can now access the entire Wordle archive. Find out more at nytimes.com slash games. Subscribe by May 11th to get a special offer. Well Casey, have you heard the exciting news this week? Which news, Kevin? The Golden Globes.

are adding a podcast category. I did not hear that. Yeah, that just came out. So, yet another award that we're not going to win. Well, I don't know about that, because if I know one thing about the Golden Globes, it's that until very recently, it seems like you could just bribe them directly to win.

I don't know if that's still true, but we should look into it. Yeah, what does it cost to win a Golden Globe these days? I don't know, a few hundred dollars? Check's in the mail. Wait, unless there are tariffs. Topical. Okay, now we're definitely not winning. Now we're not winning? Because I accused them of corruption? Yes. Oh, listen. We speak the truth on this podcast, okay? We do. I don't care what it costs me.

I'm Kevin Roose, a tech columnist at The New York Times. This week, the scathing ruling that forced Apple to give up some control of its App Store and could send an executive to jail. Then, author Karen Hao joins us to discuss her new book on the history of OpenAI and the hidden costs of building it. And finalini! It's time for me to teach Kevin about the joys of Italian brain rot.

Casey, have you noticed a smell in the air over San Francisco this week? Many smells in the air, Kevin. Well, the smell I'm talking about, Casey, is the smell of freedom. Because in the last week, Apple has lost its iron grip on the iOS App Store, thanks to a ruling by a judge. Commerce is legal in America again, Kevin. Yes, so we're going to talk about this today. Apple has been forced to make some big changes to...

its App Store by a lawsuit that was brought by Epic Games, the maker of Fortnite. A judge ruled last week that Apple had not complied with an earlier injunction, and we will get into all of that. But first, I just want you to make the case that this matters to normal people: why should the average person with an iPhone care about what Apple's rules for its App Store are? Well, to me, it actually starts with the Kindle app, Kevin. Lots of people love to read on their phones and tablets.

And I think most people I know in my life have had the experience of opening up the Kindle app or the Amazon app thinking, I want to buy that book. And then there's just kind of a big blank spot where you're expecting to see the buy button. And Apple is the reason for that blank spot. They charge such a high commission on ebooks that Amazon and other companies cannot profitably sell them.

And so since the dawn of the App Store, buying a book on your phone, something that should be very easy, has required you to open up a browser, log into an Amazon account, and navigate that whole system. Amazon is not alone. Many, many developers have had to go through similar contortions just to be able to sell their products and still make any kind of profit.

Yeah, this is the so-called Apple tax of up to 30% that developers have to pay when they want to charge for apps or purchases within their app. And for many years, Apple has not only levied this tax but they have also made it impossible for those developers to direct users off of Apple's platforms to say, hey,

If you want a better deal on this Spotify subscription or this Netflix subscription or this purchase of an iPhone game, you can actually go on the web and get a better deal there because there we don't have to pay Apple's 30% fee. That has not been allowed. And so Epic Games, which makes Fortnite, brought a lawsuit years ago to try to get those policies changed. And in 2021, a judge in California named Yvonne Gonzalez Rogers

ruled that Apple had violated California's law against unfair competition. She ordered Apple to allow apps to provide users with links to pay developers directly for their services, and that way they could avoid paying Apple's 30% commission. And after that ruling, Apple did go and make some changes, but apparently they didn't do a good enough job.

No, and I would say this has been apparent to most people who've been following this. I think we've talked about this on the show. Apple did what is often called malicious compliance: doing the absolute least while dragging its feet, kicking and screaming the whole time.

Yeah, so we're going to talk about some of that malicious compliance, but let's just say straight up: this was a scathing opinion. I have rarely read a judge who is so obviously angry at a tech company for doing what they did. No, this was the kind of speech that you typically only see on a Bravo reality show. So Judge Gonzalez Rogers not only accused Apple of doing this kind of malicious compliance,

but she also accused them of outright lying to the court under oath. She referred both Apple and its vice president of finance, Alex Roman, to prosecutors for a potential criminal contempt investigation. And we should just read the last paragraph of the order from Judge Gonzalez Rogers, which is truly the mic drop moment. She writes, quote,

Apple willfully chose not to comply with this court's injunction. It did so with the express intent to create new anti-competitive barriers which would, by design and effect, maintain a valued revenue stream, a revenue stream previously found to be anti-competitive. That it thought this court would tolerate such insubordination was a gross miscalculation. As always, the cover-up made it worse. For this court, there is no second bite at the apple. Period.

But you know what? It kind of was a second bite at the apple, because she bit them the first time, and then they didn't do it, so she had to bite them again. So let's just talk for a second about some of the details that were revealed in this judge's opinion about how Apple tried to skirt compliance with this earlier 2021 injunction.

Yeah, and this was well known to all of the developers: if you wanted to use an external sales system in the App Store, you still had to pay Apple a commission. And that commission was 27%, only 3% less than the standard 30% they would otherwise be paying Apple. And on top of that, these companies have to pay the payment provider.

So basically, Apple created a system where you were actively disadvantaged in multiple ways from trying to operate outside of the App Store. Yes, so I knew that Apple was charging a commission for... apps that would send people, like if you're Spotify and you want people to be able to subscribe to your app on the internet, pay a lower price, pay you directly rather than going through Apple, you could do that.

under Apple's revised rules, but Apple would actually charge you a 27% commission, which, by the time you added credit card fees on top of that, would probably be more than the 30% that they would charge you. So this was clearly a case of Apple trying to say, well, go ahead and use this other system, but it's not actually going to save you any money. No.

And what I did not realize until I read Judge Gonzalez Rogers' opinion here was that Apple would not just collect those commissions if you went directly from an iOS app to the web to buy a subscription or a service.

But if you went a week later, they would be able to track that you had gone to the web from the iOS app, and they would still charge the developer that commission. Yeah, it was absolutely outrageous. It was insane. And it was also not the only thing that Apple did to try to dissuade iOS users from going to external links to buy goods and services outside of their payment system, Casey.

What is a scare screen and how did Apple use this? The scare screen was a pop-up that you would see when a user did actually try to click out of the app store to make a purchase using an external system. And while these were not the exact words, Kevin, here was the vibe.

hey, loser, looks like you're trying to do something stupid. You're probably going to die. Do you want to try it anyway? And believe it or not, Kevin, when people saw a message that had that vibe, most of them just chose not to click it. Yeah. And what was so amazing about this was that Apple I guess had tried to protect some of its private company communications.

from being seen by the judge in this case by claiming some sort of attorney-client privilege. But the judge said, no, no, no, out with it, let's see those emails. So we have in this opinion lots of emails between Apple executives, including Tim Cook, the CEO, talking about the very specific language to put on this scare screen and how to make it even scarier, so that users would be less inclined to go outside of Apple's ecosystem and make a purchase.

Yes, and these internal documents showed that the company would lose minimal revenue or no revenue at all from this. They built a system that was maximally designed to protect their revenue, which was contrary to the judge's order, which she wrote in the spirit of increasing competition and other companies' revenue. Yeah, so to put it mildly, Judge Gonzalez Rogers did not find any of this charming in the least. And she also directly accused at least one Apple executive of lying outright

under oath about what the company had done. Casey, explain the perjury charge here. Yeah, so this perjury charge was leveled against Alex Roman, the vice president of finance at Apple. And among other things, she focuses on this moment where he testified that until January 16th, 2024, which is when Apple's revised system went into effect, Apple had no idea what fee it would impose on purchases that linked out of the App Store.

He testified that the decision to impose a 27% fee was made that day, which is just so obviously untrue. And of course, during the legal proceedings, business documents revealed that the main components of the plan were determined in July of 2023. So basically, this guy got caught red-handed, and the judge is going to punish him for it. Yeah, and so, effective immediately, according to Judge Gonzalez Rogers' order, Apple has to drop these commissions, these 27% fees on these external links.

And Apple, as of last week, had officially updated its App Store guidelines to allow those links out of the app in the US. But Casey, what are the implications of this and how are other developers that put stuff on iPhones reacting? So developers are reacting by implementing the links that they've always wanted to have. In the Kindle app, for example, now, you will see a Get Book button. You'll tap it and it'll kick you out immediately into a browser where you can complete a purchase.

Spotify and Patreon are also doing something like this. This is not a perfect solution. You can't actually just buy a book in the Kindle app yet, for reasons that actually aren't entirely clear to me. Maybe we'll get there. On the whole, we are essentially removing the restrictions that prevent outside businesses from communicating with their customers, telling them about deals, telling them about their websites.

Just these sort of like very onerous restrictions on the speech of these other companies have been wiped out. Yes, and I think that gets to why these arcane and somewhat small-seeming changes to the rules governing Apple's App Store really are important. Apple has been for many years this sort of godlike gatekeeper on any company that wants to make things for the billion-plus iPhones out there. They have made extremely strict and specific rules about

how developers can and can't build their apps and sell products and services to customers. They have effectively been a landlord over the entire digital services economy. And I think... Judging from this opinion, they have really abused that power, and now they are getting slapped on the wrist for it. Yeah, and I think it has been to their own detriment, Kevin. Apple's view is that these developers should feel lucky that they got to sell in the App Store at all.

When in reality, a big reason that we buy iPhones is because of the apps that are there. If you took the Amazon app and the Spotify app and the Patreon app and, you know, all these other apps off of the iPhone, people would start considering alternatives, right? And so I think that the balance between the developers and Apple had just gotten completely skewed, and Apple has not been recognizing the value of what those developers are bringing to iOS. Yeah, so you think this ruling is a good thing?

I think it is absolutely a good thing. I think it has been long overdue and I hope it is upheld after Apple appeals, which it is going to do. But what do you think? Yeah, I mean, I... I think it's an open question. So Apple's defense of these App Store rules has always been some version of like, We're protecting our customers, right? If we let people, you know, sideload apps onto the iPhone in a way other than through the App Store.

People will put all kinds of dangerous malware and stuff on the iPhone and you'll be sorry. If we let people pay for things on external websites, then people will run all kinds of scams and people will be taken advantage of. And so by implementing these rules, we're really protecting our customers. It's for your own benefit, essentially.

And I think it'll be really interesting to see if when these restrictions are gone, people actually do say, we wish that Apple were taking a more active role here. We want some of these restrictions back. or if the net result is just going to be that people have more choice,

and they pay a little less for stuff because the developers making that stuff are not having to pay 30% of the revenue to Apple. Well, I think that's going to be the case. This whole argument that Apple maintains this pristine, vigilant control over the App Store I think has always been... mostly a fantasy, you know.

Think about the early days of ChatGPT, before there was an app. You would go onto the App Store and search for ChatGPT, and you would see a dozen-plus apps that were all just clearly misrepresenting themselves as OpenAI, apps that were some of the most revenue-generating apps in the entire App Store. Apple could have stepped in to prevent that.

They didn't. I'll give you a more recent example. One of the best video games of the year is called Blue Prince, P-R-I-N-C-E. All of the gaming bloggers love it. I've been playing it. I've been loving it myself. The day it came out,

somebody just ripped it off and uploaded it onto the App Store and was selling it for, I don't know, 10 bucks or something. Why didn't Apple catch that? They are not paying the attention to the App Store that they are telling you they are. Yeah. I mean, to me, the most interesting part of this, as with a lot of these antitrust trials that are going on right now, was just seeing the internal communications at these companies.

And in this ruling, there are all these fascinating excerpts from these emails and messages between Apple executives who are talking about the various plans that they had to circumvent this injunction and charge this 27% fee. They had all these code names, like Project Michigan or Project Wisconsin, so that they could talk about this stuff in a way where it would not be obvious that they were doing some sort of price fixing.

And it just makes you realize these giant tech monopolies did not end up that way by accident. They have had to work very hard for a very long time to prevent competition, to keep their market power and their dominance. And I don't know, man, there's just something really depressing about that. Like these are companies that used to succeed by making good things that people love.

And in some respects, they still do that, but they also spend just a ton of time. Their top executives are in these meetings talking about whether the fee should be 27% or some other number. And it just makes you realize like they have really lost the plot here. Absolutely. Well, let me try to cheer you up a little bit then, Kevin, because I think there actually is a negative consequence for these folks of just growing their profits.

so big on the basis of this extremely easy money, where they just make every developer pay this very high rent to them. And that is that Apple has been missing the boat on next-generation technology. We know that they invested billions of dollars into a car project that they could never figure out and had to abandon, right? We know that they are struggling to figure out how to do anything with AI and have had to walk back a bunch of claims recently in a really embarrassing way. We know that the Vision Pro, their most recent hardware initiative, is not taking off, in part because developers do not want to make apps for it, because they have not been able to get rich making apps for it, right?

So all of this stuff is just adding up in a way where Apple's decisions really are coming back to haunt it. And while it remains a giant, and I'm sure will for a very long time, we are starting to see some little cracks in its armor. Yes, and yet Apple just reported its earnings for the last quarter: it made $95.4 billion in revenue, up 5% year over year.

Despite the fact that they are missing all of these new innovations and trends, that they're late on generative AI, that they haven't succeeded with the Vision Pro in the way that they had hoped, they are still doing quite well as a company. So I don't know that this is actually coming back to bite them in the way that we might hope it was. Well, I mean, let's see what happens. You know, the idea behind these rules was never to

make Apple a tiny company that was struggling to get by. It was just to get them to share a very small portion of the wealth with a large number of developers. Apple has done a ton of incredible, innovative things. They deserve to be rewarded for that. They deserve to take some sort of commission from the apps in the App Store, right? But this has been about trying to create a more level playing field for other developers out there. If the end result of this is that Apple is still

pretty rich and profitable, I think that will actually make the point that the judge is making, which is that there is no need for Apple to engage in the sort of shenanigans it's been up to. Yeah, I think the best outcome possible here is that all the big developers that can afford to develop their own payment systems for their apps, or to send people to external websites to buy things, that they do that and they start charging way, way less than 27% for that,

and that Apple is ultimately forced to improve its own payment system, to maybe reduce its fees, in other words, to compete. Like, that is what all of this is about: forcing Apple, a company that has not had to compete for the affections of iOS developers in a long time, to finally step up and do something different. Keep in mind, even Microsoft, which was sued for anti-competitive behavior back in the early 2000s,

they never said, we want to take a 30% cut of every software program sold on Windows. They actually left a lot of money on the table, and it helped that ecosystem to thrive, right? I would like to believe something similar could happen here. When we come back, we'll talk to author Karen Hao about her new book on OpenAI and the costs of building such big models.

Are we living in interesting times, a turning point in history? Are we entering a dark authoritarian era, or are we on the brink of a technological golden age? Will AI bring a golden age or the apocalypse? No one really knows, but I'm trying to find out. From New York Times Opinion, I'm Ross Douthat, and on my show, Interesting Times, I explore the new world order with the thinkers and leaders shaping it. Follow it wherever you get your podcasts. Well, Casey, it's a day ending in Y, so there's some OpenAI drama making the rounds this week.

Yeah, although I don't know if this is so much drama as the company is trying to retreat from drama, Kevin. Yes, so OpenAI announced on Monday of this week that it was no longer trying to... get out from under the control of its non-profit board. That was something that a lot of people, including Elon Musk, had objected to. A lot of former OpenAI employees and others in the AI field had said, hey, wait a minute.

You can't do that. You've still got to have this nonprofit board controlling you. And OpenAI, after hearing from some attorneys general that they were not happy about this plan, has retreated. So what is the new plan, Casey, and how is it different than the old plan? So the old...

The old plan was basically the non-profit is going to no longer have any control over the for-profit enterprise. It's going to go be a separate thing. It's going to invest in various AI-related causes and philanthropies. Under the new plan, the nonprofit is going to retain control over the for-profit. So basically, the status quo is going to be in effect, Kevin, except for a couple of key changes.

One is that what is now a limited liability company is going to become what they call a public benefit corporation. And a PBC, as they are called, has a responsibility not just to think about shareholders like Microsoft and SoftBank and everybody else who owns a chunk of OpenAI, but also to think about the general public, right? So that's sort of one important idea that's there.

The other big idea is that the nonprofit is currently set to get some unlimited amount of profits if, you know, OpenAI does eventually become a trillion-dollar company. That's not going to be the case anymore. Under this new model, the for-profit is going to give some stake to the non-profit.

But after that, it's going to be a very normal tech company. Everybody who owns shares, all of the employees, they can get unlimited upside. And the more money that OpenAI makes, the more money that they can make too. Right. So these profit caps that OpenAI had previously had in place where...

investors like Microsoft were sort of limited to earning some multiple of the amount that they put in and no more, those caps are now going away. Yeah, they put on their thinking caps and they said, we're getting rid of the profit cap. Well, it just goes to your point that you've been making on this show for years now, which is that OpenAI is a very weird company. Yes, and I have to say, when Sam Altman wrote a letter to employees this week, the first sentence of the letter was, quote,

OpenAI is not a normal company and never will be. And I felt so seen. Somebody's been listening to Hard Fork. And in other OpenAI corporate news, the company announced late Wednesday that its board member Fidji Simo would leave her job as CEO of Instacart to come be the company's new CEO of Applications, overseeing its business and product divisions.

So we are not going to do a whole segment about the OpenAI corporate conversion story this week. Because we love you too much. We love our listeners too much. We would not subject you to that. But we are going to talk about it and many other things related to OpenAI with Karen Hao. Karen Hao is a reporter who has been covering OpenAI and the AI industry for years now.

And she has a book coming out later this month called Empire of AI, where she writes about Sam Altman and OpenAI and what she calls the dreams and nightmares of this very strange company. Yeah, and you know, by the way, I think she should already start working on a sequel and call it...

The Empire Strikes Back. Something to think about. Yes, and this is a very buzzy book. People in Silicon Valley and at the AI companies have been sort of nervously waiting for it. Karen is very unsparing in her descriptions of

AI companies and the AI industry. I would not say it is a book that the AI industry will think is flattering, but it's an important conversation to have, because I think it's got a lot of people talking. Absolutely. And before we do that, Kevin, do we have anything we want to disclose? Well, let me make mine first. My boyfriend works at Anthropic.

Kevin, you're coming out? I'm so happy for you. No, I work at the New York Times company, which is suing OpenAI and Microsoft for alleged copyright violations. Interesting. And my boyfriend works at Anthropic. Yours too? Yes! Anyways, let's bring in Karen. Karen Hao, welcome to Hard Fork.

Thanks so much for having me. So I imagine your book is sitting there behind you on the shelf. It's all printed up. It's ready to go. And then this very week, OpenAI puts out a story. Hey, maybe we're going to change our structure around again. Why the heck not? So what's it like trying to write a self-contained book about a company that just never stops making news? Tiring.

Yeah, but you know, like, honestly, people have been asking me this question a lot. Like, how do you even write about this at book scale? Because usually it's like months on end before it goes to print. And I think sometimes the news is actually a little bit distracting, in that,

Yes, there are a lot of changes happening. Yes, things are evolving really fast, but there are some fundamentals that are kind of ever-present. And so I try to keep the book focused on the things that don't change so much. Among other things, this book is a history of OpenAI. Maybe let's go back all the way to the beginning. What was this company like when you started writing about it?

So I started writing about OpenAI in 2019, and I went to the office to embed with them for three days, as the first journalist to profile what had just become a newly minted company. So right before I started covering it, it was still a nonprofit, as it had been founded, and it had this explicit goal that it should be a counterbalance to for-profit companies.

And it sort of became clear to me during my time at the company that the idea that this was a bastion of idealism and transparency, and was going to be totally open and share all of its technologies with the world, and not at all be beholden to any kind of commercialization,

was already going away. There were a lot of early signs of that that I picked up on while I was there: there was a lot of secrecy for a company that purported to be incredibly transparent, and there was a lot of competitiveness, which to me suggested that if you're going to be competitive and you want to specifically reach AGI first, you are going to have some really hard trade-offs with this transparency mission and this open-up-everything-to-the-public mission.

So I've talked to some people at OpenAI who have said that they felt quite burned by some of your early coverage of them, like they were expecting something different than they got. And you write in the book that after you published your story on them, they stopped talking to you for three years. I'm just curious what you think surprised them about your coverage, or if they should have been surprised, given some of the questions you were asking.

I think they were surprised because they gave me a lot of access, and they thought that I would sort of adopt a lot of the narrative that they were giving me. And to be honest, I kind of came in without really a lot of expectations. It was actually my first ever company profile. And I was going in with an open mind of, okay, this

company presents itself as this ethical lighthouse. Let's try to understand a little bit how they organize themselves and how they try to achieve the goals that they've set out to do. And I just found that they couldn't quite articulate what their vision was, what their plan was, what AGI was. And I think the prioritization of the problems that they were saying that they were focusing on just didn't quite feel

right to me. Like, I pointed out to them that there were environmental issues that were starting to become more and more of a concern as AI models were scaling larger and larger. And, you know, Ilya said to me, he was like, yes, of course, that's a concern, but when we get to AGI, climate change will be solved. And that was just like, okay, that's kind of a cop-out card, to just be like, well, when we get to the thing that we don't know how to define,

all the problems that we might have created along the way will just magically disappear. And so that's when I started being like, I think we need to scrutinize this company more and just be more cautious about taking all the things that they say at face value. Right. I mean, it sort of sounds like a microcosm of the arguments that have taken place for the last few years between the AI safety crowd and the AI ethics crowd.

The AI safety people are worried about existential risk and bioweapons and malicious use of these systems. And the AI ethics crowd are much more worried about issues like bias and environmental concerns and things like that. So I want to make sure I'm characterizing it fairly. You yourself are coming from more of the perspective of the AI ethics crowd, in that you think we should be paying more attention to the immediate harms of these models rather than trying to avert some future harms.

Yeah, so I would call it the AI accountability crowd. And the reason why I use the term accountability instead of ethics is because I think accountability acknowledges that there's a huge power dynamic happening here, where the developers of these technologies have an extraordinary amount of power that they've accrued and amassed, and are continuing to accrue and amass, based on this narrative that they need all of these resources to build so-called AGI, right?

I definitely come from that perspective. And I think that if we take seriously the present-day harms of what is happening now, that will help us not get to future harms, because we will be more thoughtful about how we develop AI systems today, so that they don't end up having wildly detrimental effects in the future. And I think this idea that we don't really know how bad AGI might be or what the catastrophic scenarios are

is not quite right, in that we already have so much evidence right now of how AI is affecting people in society, and also AI is harming people literally right now. So we need to address that. We need to document that. We need to change that. One of the central arguments of your book is that OpenAI and the sort of

AI industry in general has become an empire. It's the title of your book, Empire of AI. And that they have done so by exploiting people and resources around the world for their own benefit. Sketch that argument for us. Yeah, so if we think about empires of old, the long, centuries-long history of European colonialism: they effectively went around the world, laid claim to resources that were not their own, but they designed rules that suggested that they suddenly were.

They exploited a lot of labor, as in they didn't pay the laborers, or they paid extremely little, for the labor that ultimately helped to fortify the empire. And all of that resource extraction and labor exploitation went and accrued benefits to the empire. And they did this all under the justification of a civilizing mission: they're ultimately doing this to bring progress and modernity to the rest of the world.

And we're literally seeing empires of AI effectively do the same thing. And what I say in the book is, they are not as overtly violent as empires of old. We've had 150 years of social mores and progress, so there isn't that kind of overt violence

today. But they are doing the same thing of laying claim to resources that are not their own. That includes the labor of a lot of artists and a lot of writers. That includes all the data that people have put online, which they've just scraped into these internet-scale datasets. That includes exploiting the labor of the people who they contract to help clean their models and annotate the data that goes into their models.

That also includes labor exploitation in the sense that they are building technologies that are ultimately, like, OpenAI literally says their definition of AGI is to create AI systems that will be able to outperform most humans in economically valuable work. That is a labor automation machine. So they're also exploiting labor in the sense that they're creating these AI systems that will make it dramatically more difficult for workers to demand rights.

And they're doing it under this civilizing mission where they're saying, like, ultimately, this is for the benefit of all of humanity. But what we're seeing is that...

You know, that's not true. When you go far away from Silicon Valley, when you go to places like the Global South, when you go to rural communities, impoverished communities, marginalized communities, they really feel the brunt of this AI development, this extraction and this exploitation, and they're not at all receiving any of the supposed benefits of this accelerating AI quote-unquote progress.

Let's talk about some of that extraction of natural resources. This is one of the things that your book gets into that I think doesn't get discussed a lot in the context of AI. Tell us about some of your reporting and what you saw. Yeah, so I ended up spending a lot of time in Latin America, and also in Arizona, to kind of understand the sheer amount of computational infrastructure that is now being built to support the generative AI paradigm and the quest for AGI.

And these are massive data centers and supercomputers that are being plopped into communities that initially accept this kind of infrastructure, either because they don't know about it, since companies enter these communities through shell companies and aren't transparent about actually putting this infrastructure there, or because they're sort of persuaded into it, because there seems to be a really positive economic case

where a company comes in and says, we're going to give you hundreds of millions of dollars to build this data center here, and it's going to create a bunch of jobs. And what they don't say is that the jobs are not permanent. They're talking about construction jobs, and once the construction jobs are over, there are actually not that many jobs for running the data center. And these data centers consume an enormous amount of power, and they consume an enormous amount of water,

because they need to be cooled while they're training these models 24-7. And once this infrastructure gets put there, even if a city doesn't have that kind of energy anymore, or the water to provide to these data centers, they can't really roll it back. And in Chile, I was with activists who had been fighting tooth and nail to try and keep these data centers from literally taking all of their drinking water.

These companies were also entering communities in Uruguay, where I was spending time as well, during a drought, where people literally were drinking bottled water if they could afford it, or were drinking contaminated water if they could not, because there was not enough fresh drinking water to go around. And that was when Google decided to build a data center there. So that's kind of what I mean when I say that

The current AI development paradigm is creating a lot of harms at a mass scale. That's the kind of stuff that I'm referring to. Yeah. I mean, part of empire building is about exerting political power, right? I'm curious why the governments in Chile and Uruguay are okay with this. What is the mechanism through which they're deciding to grant all of this power to these AI companies?

A lot of governments learn that they have to serve the Global North if they want to get more investment and more jobs and more opportunity into their country. And in the AI case, it ends up not being a good bargain, but a lot of them don't know that up front. And so they think that if they can open up their land, their water, their energy to these companies, that somehow they will get more investment, more high-quality, white-collar jobs in the future.

Like, I was talking with politicians who said that they hoped that if they allowed a data center, then eventually, you know, Microsoft would bring in an office with software engineering jobs near their data center.

And so that's kind of the reason why they end up doing this. And Chile has a really interesting history in particular, in that they have dealt with just centuries of extraction. Most recently, they have become a huge provider of lithium for the lithium boom. And so they have sort of developed this mentality over time that

this is what they do: they open up their natural resources to these multinationals, and somehow this will convert into economic growth, broad-based economic growth, for people. But unfortunately, it doesn't really. Well, I want to push back on that a little bit, because I think, if I'm sort of trying to be sympathetic to the people, the politicians, the communities that are accepting this stuff,

I think there's a case to be made that it is actually helping them. Maybe not in terms of direct GDP or economic growth, but the World Bank recently did a randomized controlled trial with students in Nigeria who were given access to GPT-4 for AI-assisted tutoring, and found that it boosted their test scores significantly, and that the gains were especially big among girls who were behind in their classes.

So as I'm hearing you talk about the exploitation taking place, I'm thinking, well, maybe there is something that they're getting in return. Maybe there is something worth it to them. Maybe this technology

can in some instances help level the playing field between poor countries in the Global South and places like America. And maybe there's a deal to be had where it's like, okay, you want to extract our lithium, you want to build a data center in our country? Sure, but you have to give all of our students free access to ChatGPT Pro, or something like that. Is there any sort of fair exchange that you can imagine that would help these people?

So I think this question is kind of premised on the idea that we have to make these trade-offs in order to get that kind of gain. Like, we have to give you our lithium in order to have some kind of educational boost from ChatGPT. And that's a premise that I just don't agree with. I think that there are ways to develop AI that give you the gains without this kind of extraction.

So the reason why I call it Empire of AI in the book is in part to point out that this is not the only pathway to AI development. These companies have chosen a very particular pathway of AI development that is predicated on absolutely massive amounts of scale, massive amounts of resources.

mass amounts of data. Well, that's how you get the models to be general and good, and to be able to work in all kinds of different languages. Is there another path? You're suggesting there's another path. Like, what is the path other than through scale? So we don't necessarily know what it is yet, but it isn't being explored at all. And there are already signs that there can be other ways to get to these more general capabilities without that scale.

DeepSeek is a really interesting example of this. I think there are also a lot of problems with DeepSeek, but DeepSeek demonstrated that, even in a resource-constrained environment, you can actually develop models that have more generality. And so, I mean, this is what science is like: you have to discover the frontiers of what we don't know yet. And the industry has fallen into this very specific scaling paradigm that they know works,

but it has so many externalities with it that it's ultimately not actually achieving what OpenAI says its mission is: benefit all of humanity. And so if we constrain the problem to think, how can we get more positives out of this technology without having all of that negative harm?

I think there would actually be more innovation that would come out, like true innovation that would be more beneficial. Karen, one thing that is very clear in your book is that you are not a fan of the big general-purpose AI models. You call them monstrosities.

built from consuming previously unfathomable amounts of data, labor, computing power, and natural resources. Is there any way for people to engage ethically with these models, in your view, or is it all fruit from a poisoned tree? I think, the way that they're being developed right now, me personally, I do think that it's fruit from a poisoned tree. Do you use ChatGPT at all? Not really, no. Have you ever?

Yes, I have. I'm just curious, because writing a book, I'm doing it now, and I'm finding a lot of uses for AI. I'm just curious, this is a very thoroughly researched book. Were any AI tools used in the creation of this book? So no generative AI tools, but I did use predictive AI tools.

I used Google reverse image search to try and figure out the price of OpenAI's furniture, because they had some really nice chairs, and I was trying to explain the level of upgrade that happened when they went from a nonprofit in one office to this new Microsoft-backed capped-profit entity in this other office. And when I ran the

reverse image search, it came up: they were Brazilian designer chairs that were like $10,000 each. Yeah, so I mean, I do use predictive AI, but I did not use generative AI for this book, other than to just

understand how the tool works and test its new features. But I never used it for getting research or organizing thoughts or anything like that, because at the end of the day, I'm writing a book about OpenAI, and I'm not going to willingly hand a bunch of my data about what I'm thinking about and what I'm researching to OpenAI in the process. And that's where you and Kevin are different. So I want you guys to interact about this a little bit, because

Karen, let me tell you, if Kevin can use generative AI to do something, he's doing it, okay? Like, there's gonna be a lot of generative AI that's going into the making of this book you're writing, right? Well, in the research phase, because I found that it's not that good at, like, composing. Right.

but it is super, super useful for doing: give me a history of the term AGI, where it originated, who the first people to use it were, how it evolved over the years, and how every lab has defined it in all of their various publications. That kind of thing would have taken me weeks before, and now it's, like, minutes. Right. So Karen, make your case that Kevin should stop doing that.

So I'm not going to make that case. But what I'm going to say is, this is like the perfect use case for these tools, because these companies are constantly stress-testing their tools on AI topics. And so if there were any topic in the world that these chatbots would be particularly good at talking about, it would be AI and AGI. And so, Kevin,

Move forward. Fire away. Here's another thing that I wanted to ask you, Karen, because I think this is another place where we sort of disagree. Yeah. You are very skeptical about the claims that the AI labs are making about AI safety or the concept of AGI. And I guess I'm trying to understand that argument. My view on these folks is that they are sincere, that they sincerely worry about AI posing risks

to humanity. I think that's why they're investing tons of money into AI safety, and trying to work on things like interpretability, figuring out how these language models work. Is your view that they are sincere but just wrong about AI possibly being an existential threat? Or that they don't believe it at all, and that they're just kind of using AI safety as a smokescreen, or an excuse for, you know, raising money and continuing to build their models?

I think it totally depends on who you're talking about. So in general, I think there are a lot of people that are incredibly sincere about believing in these problems. I don't have any doubt about that. I talked with a lot of them for my book. And, you know, I talked to people whose voices were quivering while they were telling me about being really, really scared about the demise of humanity. That's a sincere belief and a sincere reaction. I think there are other people who

pretend that they believe in this as, you know, the smokescreen. But I think, by and large, a lot of these people do truly believe, and their heart is where their mouth is, and they are trying to do good by the world. My critique is that that particular worldview is just really narrow. It's just really, really narrow, and a product of being in Silicon Valley, which is one of the wealthiest epicenters of one of the wealthiest countries in the world.

Of course you are going to have the luxury to think about these really far-off problems that don't have to do with things that are literally harming and affecting people all around the world today. It's not that I don't think we should devote any research to these problems, that's not what I'm saying. But I think the sheer amount of resources that is going to prioritizing these problems over present-day problems is just not at all proportional to what the

problem landscape literally is in reality. Yeah, so when people like Sam Altman or Dario Amodei or Demis Hassabis say that we are a couple years away from something like AGI, or even superintelligence, your view is that that just has no reflection on reality? Or that we should cross that bridge when we come to it, and pay attention to the stuff that we can actually observe in the world now? So I think

It also depends on how they define AGI. When OpenAI says that they are two years away from potentially automating away most labor, I could believe that they're on a path to systems that would appear to do so in two years, and then lead to a lot of company executives deciding to hire the AI instead of hiring workers. If we're talking about another definition of AGI, then, I mean, it would have to be case by case: how are they defining AGI, and what is their time scale?

Do I think that OpenAI has high conviction to try and create a labor-automating machine, and that they have the resources to start making a dent in labor opportunities for people? Like, yes, I do. Well, maybe let's have the how-do-you-define-AGI conversation. It's come up a few times during this conversation. I know there are a lot of folks who regularly remark that the definition of AGI seems really sort of amorphous and slippery to them.

I have to say, it doesn't feel that amorphous to me. I work with an assistant. My assistant does customer service stuff, scheduling stuff, a little bit of sales. If there was a tool that I could use and pay a subscription to that did those things on my behalf, I think I would say, yeah, that feels like AGI.

So that's kind of how I conceive of it in my mind. But I know there are so many folks out there who say, no, no, no, no, no, the definition is always changing and slippery, and this is a really big problem. So Karen, how do you feel about it?

I mean, what you were describing, yeah, if you want to define that as AGI, that's really fine. But I don't think that's how the companies are necessarily defining AGI, right? They are not defining it well. But when they need to raise capital, when they need to kind of rally public support, when they need to get in front of Congress and try and ward off regulation, the things that they say are: one day AGI will solve climate change, one day it will cure cancer.

Like, I think the AGI system that you're describing is not exactly the AGI system that they are sketching out in that kind of broad, sweeping vision that they're trying to use as justification to continue doing what they're doing. Right. There's a lot of hand-waving that goes on when somebody says that some future AI technology is going to cure cancer. It's leaving out many, many steps.

Well, but yeah, in partial defense of the labs here, I think we have seen things like AlphaFold, which was Google DeepMind's system that essentially solved the protein folding problem. And that was not something that they thought was going to be the end of their progress toward scientific cures for disease; that was sort of the beginning stages. And actually, if you talk to biomedical researchers, they say that was a huge deal and really did make it possible to do all kinds of

new drug discoveries. And I guess that part feels a little separate to me from the AGI discussion. But it does feel like, in the quest for AGI, the sort of scaling up of these models, the attempt to make them more general, there have just been good things that fall out of that process, and also some externalities that you mentioned, Karen. But I'm just curious if you see any positive applications of the scaling hypothesis and the sort of dominant paradigm.

I don't think I've come across a positive application that I think justifies the amount of cost going into it. And to return to DeepMind's AlphaFold: that was not a general intelligence system. That was a task-specific system, right? Which I advocate for. Like, I think we need more task-specific AI systems, where we give them a well-scoped problem, we curate the data,

we then, you know, train the model, and then it does remarkable things. Like, I totally agree that AlphaFold was a remarkable achievement. And I don't think that that has much correlation with what AGI labs are now doing with the scaling paradigm. Those are like two perpendicular tracks to me. Yeah, I think it's clear that the hype is far ahead of the results right now. We have heard a lot more about AGI curing cancer than we've actually seen progress toward curing cancer,

at the moment of this recording. Now, some people believe that's going to change very soon, but I can understand why, if you read a lot of headlines and you don't see cancer being cured yet, you would have some questions. Yeah, and I think that the other thing here is,

I mean, these companies are continuing to say that they're pursuing AGI, but they've dramatically shifted. And now they're really just focused on building products and services that they can charge lots of money for. All of the maneuvering that they've tried to do to make it seem like that is on exactly the same path as what they're saying AGI is... come on. That's probably not

what's happening here. And ultimately, these companies are building these... I mean, you know, in the last episode, you guys were talking about AI flattery and the debacle around that, and how they're turning to maximizing for engagement, because this is the thing that they've realized gets them a lot of users, gets them more cash flow.

And that is ultimately what they're now building. So I think what they're saying they're building and what they're building is also starting to diverge in this kind of new era, I guess, where they need to be able to justify, like, a $40 billion... Yeah. Well, let's sort of bring it home here by talking about one thing that I think all three of us agree on.

You write that the most urgent question of our generation is how do we govern artificial intelligence? I agree with you on that front, Karen. And so let me ask, how do we govern artificial intelligence? Please help us. Democratically. Yes. So what does a more democratic way of governing AI look like? So...

To me, it's like: you consider the supply chain of AI development. You have data, you have compute, you have models, you have applications. I think at every single stage of that supply chain, there should be input from people, not just the companies. Like, when companies decide that they're going to

curate a data set, there should be people that can opt in and opt out of that data set. And not just for their own data; maybe there are consortiums that are debating what kind of publicly accessible data should or should not go into these tools. There should be

debates about content moderation of the data, because, as I write in the book, there were a lot of moments in OpenAI's history where they kind of just debated internally, like, should we keep pornographic images in the dataset or not? And then they just decided it on the fly. That, to me, is not democratic governance. We should be having open public discourse about those types of decisions.

When it comes to compute, there should be an ability for communities to even know that data centers are coming into their communities. And they should then be able to go to a city council meeting and actually talk with their city council, talk with the companies, about whether or not they want the data center, and have good, solid information about what the long-term trajectory of hosting a data center

would actually look like. And when it comes to the labor, the contract workers that are working for AI companies, they should follow, you know, international human rights norms. Because a lot of the conditions in which these workers are working do not follow international human rights norms.

So I think that's the way that I think about it: all of these different stages all need to be democratic. And when OpenAI says, we're going to develop democratic AI simply because we're an American company, that's not how it works. Everyone actually has to participate, have agency, have a say to shape and change what is and isn't developed, and how. Well, Karen, this has been a fascinating conversation. Really appreciate your time. And thanks. Thank you so much for having me.

Turn your brain off. It's time to talk about Italian brain rot. Sounds fancy. Kevin, if I were to start referring to you as Cavanini, Rossellini, what would that mean to you? I would think it was some sort of mockery of my Italian heritage. I would never. I would never. What about Tralalero Tralala? You know him? No, I think you're having a stroke. What about Bombardino Crocadillo? Okay, now this is just getting ridiculous. Ballerina Cappuccino? Nope.

All right, listen, if you or someone you love recognizes any of these terms, Kevin, you may be suffering from a case of Italian brain rot. I'm almost afraid to ask. I have not been following this story, although I know you were very excited to tell me about it today. What is going on with Italian brain rot? Do not be afraid of Italian brain rot, Kevin. If you have been on TikTok or Instagram or YouTube over the past many weeks, you may have encountered this unique form of AI-enabled insanity.

Now typically, I know that brain rot refers to this kind of feeling of, I don't know, cognitive decline related to excessive use of social media or something like that. People on TikTok are always complaining about their brain rot. But what is Italian brain rot? Well, if you want to catch up on this, I highly recommend a story in The Times by Alisha Haridasani Gupta, who kind of catches you up.

This stuff started to emerge in January, and it really is an AI phenomenon. You know, recently, Kevin, we've seen advances in some of these text-to-video generators. So you might be able to, for example, create a short clip of a little coffee cup that is also a ballerina. Well, congratulations, you just invented Ballerina Cappuccino. I mean, to me, this is sort of the difference between this age of viral content and previous generations of viral content.

I spend a lot of time on TikTok, but I have never, literally never, seen anything about Italian brain rot. And it's such a contrast to, like, everyone knew that the Ice Bucket Challenge was happening, right? Because you could see it everywhere. But things have become so siloed and atomized that you could tell me literally anything was happening on TikTok, and that millions of people were into it, that it was the trend sweeping the youth, and I would have no idea.

Either that means I'm old or something has changed about social media. Well, this is why you have to have your younger colleagues like myself come in and tell you what's happening in middle school. You are not younger than me. Well, spiritually, I think there's a case for it.

So listen, there's no way to talk about Italian brain rot that improves on the experience of actually watching it. So let's watch a couple of clips of brain rot. And I believe we have one queued up. I hope I get hazard pay for this. Ballerina Mimimi, szimpanzo. Uau, uau, uau frutto drillo. So if you are not watching these, let me just describe what I just saw. This is sort of a compilation of these Italian brain rot memes,

which were all kind of like AI-generated weird characters. Like one of them looked like a sort of hamster poking out from a half of a coconut. That's right. And they're just saying these like... Italian phrases. So this is Italian brain rot? This is Italian brain rot. You know, you're probably grasping the Italian part because they're sort of being voiced in this over-the-top Italian accent.

And all of these sort of strange phrases that you're hearing are the names of the characters. So I know you're probably wondering, who is Trippy Trappy Trappa Trepa? And that's a shrimp with a cat head. So I love this one, because, you know, with a lot of meme explainers, there's a lot of excavating to do of where did this come from and what is this about. Here, it really is just what it says on the tin. It is

an Italian accent over a series of images that make you feel like you're going insane. Yes, and was this made by an Italian? No. In fact, in The Times, one of the main creators, this was the person who created Ballerina Cappuccino, was Susanu Sava-Tudor, who is a 24-year-old from Romania, and who told The Times that this is just a form of absurd humor that really has very little to do with Italy.

But this creator just sort of created the name Ballerina Cappuccino, and they've gotten more than 45 million views on TikTok and 3.8 million likes. Oh, my God. At the risk of explaining a joke and thereby killing it,

Is there any point to Italian brain rot? Is it making some sort of social commentary? Is it trying to say, like, Italians are big users of social media and therefore getting brain rot? Well, so I actually do have a theory about this. Like, I think here is what makes this feel new: whatever this is actually does feel fresh. And we live in a time where everything that Hollywood is giving us feels like a recycled version of something else. We are on phase six of the Marvel Cinematic Universe.

And in that world, where it's like, oh, and here's Ant-Man's cousin, people are saying, F that, give me Ballerina Cappuccino. It does just feel like there is some organic hunger out there for just really stupid shit, just really random... Like, I was thinking about this recently. You know the Minecraft movie is like a big hit, right? It's like one of the biggest movies of the year.

And there's this moment in the movie, apparently (I've not seen it), where someone says the words "chicken jockey." Jack Black does, I think. And at that moment, teens and other young people have decided that this is the moment in the movie to stand up and cause a ruckus.

They start throwing popcorn. Someone, actually, I saw, brought a live chicken to the theater and held it up. This feels like it's of a piece with chicken jockey from the Minecraft movie, in the sense that it is just absurdist. Trying to explain it actually makes you dumber

in some way, and so there's a kind of appealing randomness to it. And by the way, I think that is actually part of being a young person: building a language that is inaccessible to people older than you. That is sort of how the identity formation process works. Older people have no idea who Trippy-Troppy-Troppy-Trippy is, and that is

something that you can talk about with your friends, that belongs to us. What are some of the other ones? Okay, well, I'm glad you asked, because we haven't actually watched enough of these videos yet. So, Kevin, I would now like to direct your attention to one Salamino Pinguino. Salamino pinguino. Mezzo salame. Mezzo pinguino. Tutto problema. Non scivola si affetta. This is like a penguin covered in salami, wearing almost like a sort of headdress made out of salami.

Salamino Pinguino. La leggenda della salumeria. Now, let's take a look at Glorbo. Glorbo. This is a crocodile or alligator with a watermelon for a body. Yeah. This is a still image with 578,000 likes. Is this even real Italian? Are we sure it's real Italian? I'm pretty sure it's not real Italian. Let's stop that one there.

And then let's sort of... Now, I know what you're saying. You're saying, Casey, these characters are just standing around. That seems super boring. What if I were to tell you that other creators are now incorporating them into dramas, Kevin? Oh, boy. Let's take a look at one of those. And this one stars Tralalero Tralala, who is a shark wearing sneakers. And is that Ballerina Cappuccino I see? That is Ballerina Cappuccino, and she's with Tung Tung Tung Sahur.

So he leaves for the day, and, oh, there comes Tralalero Tralala, the shark, and now they're kissing in bed. Oh, no. She's great. Oh, no! Now Tung Tung Tung Sahur is chasing after him. The shark. And that's Bombardino Crocadillo. And he sends in an airstrike. So that was... Let's just review. That was, I don't know, 10 or 15 seconds, in which you see two of these characters. One of them gets into an affair, has a love child. Her partner finds out, and then sends in an airstrike to attack

the sort of cheater. So they're doing a lot in 15 seconds. Wow. That was not a Pixar film. That was really something. I feel like I'm on a very powerful psychedelic right now. Well, you know, you mentioned earlier that, in the old days, we would do things like the Ice Bucket Challenge. Kevin, what if I told you that some of these Italian brain rot characters are actually doing the Ice Bucket Challenge? No! Yeah, let's watch that one. My name is Chimpanzini Bananini.

This is a chimpanzee who is also a banana, and he's nominating the other characters, Trippi Troppi and Boneca. This is so dumb. Yeah. It's very funny, though. I am genuinely laughing at this. But I could not explain to you why this is funny if you paid me.

Well, listen, I have done a little bit of comedy in my life, and one thing that I learned in improv is that everyone goes nuts for an over-the-top Italian accent. It's extremely funny. All I have to do is say, make a bottle of spaghetti. You're already laughing. See? I don't even do anything. Italian brain rot functions in much the same way.

But they are taking advantage of this AI thing. We talked earlier on this show about how these systems are being trained on other people's art without their consent. There are some people who feel like you can never make anything truly creative or truly artistic with AI. And yet, here you have this bona fide viral phenomenon of people making extremely silly stuff using AI, and it is resonating with us. And I think this has been one of the more counterintuitive lessons of AI slop.

A year or so ago, we were looking at images of Shrimp Jesus all over Facebook, and we were saying, that seems silly. I'm sure the company is going to get rid of this. No, no, no, my friend. They're going to lean into it, because there are riches that lie down this path. And Italian brain rot is the first example I can think of of that happening. God, it just... I mean. So I have a couple of reactions. One of them is, yes, I absolutely think that

AI has utility and that there are good things that have come out of it, but seeing Italian brain rot makes me want to nuke the data center. So I'm like, shut it all down! We've gone too far. But seriously, I do think there is something here, not just in the sort of absurdist humor of this thing. I do think there are going to be new kinds of entertainment that are birthed out of these tools, because if you wanted to make something like a ballerina with a cappuccino for a head,

you know, 10 years ago, you needed to be an animator to do that, or at least have some facility with animation software. Now you just go into an AI tool and type, give me a ballerina cappuccino, and out comes this pretty perfect animation.

Yeah, and that has always been the case for this sort of tool, by the way: it takes people who do not have those kinds of artistic skills and lets them express themselves creatively. If they can think it, they can visualize it, they can make it available to other people. Here is my case for why this is actually a good thing, Kevin. I was thinking this morning about a few years back, during the height of the crypto boom, when people started talking about

how crypto could be used to fund these alternative worlds of entertainment, right? Like, the Bored Ape Yacht Club was going to become this mega franchise, but what made it cool was that anybody could buy in. Anyone could get a slurp juice. Anyone could get a slurp juice, put it on a mutant ape, transform your mutant ape, et cetera. And people didn't really get into this, because I think nobody wanted to be involved in what was essentially a homeowners association for creating entertainment.

But I look at Italian brain rot and I see something similar happening, where, as far as I can tell, no one has a trademark on Ballerina Cappuccina or Chimpanzini Bananini. You can just make your own version of it and put it up there, and nobody's going to issue a copyright strike. You can have these characters do whatever you want to.

So it feels like there is actually a freedom in making this that people are really responding to, and so maybe we do actually get the next version of crowdsourced entertainment, and it all comes out of these bizarre text-to-video makers.

I've got to say, I believe you when you say that that is a possible outcome, but my brain just goes immediately to some office at Disney headquarters where they're watching these Italian brain rot memes and furiously trying to license the IP to make a series of seven movies about Chimpanzini Bananini. Yeah. And I do think there's a possibility that this becomes just like any other entertainment franchise.

It could go that way, but, you know, maybe that robs it of the fun that makes it go viral today to begin with. And they're making movies out of Minecraft.

They can make movies out of anything. They're really running out of things to make movies out of, as far as I can tell. So, do I lean optimistic about this? Yes. At the same time, do I think that if China had just come up with this idea independently, as a way of bringing down American civilization, it would have been a great idea? If they were like, what if we just did weird characters with Italian accents, could that

distract all of America's middle schoolers for a year? Probably. How hard could it be? This is all a CCP plot to undermine American sovereignty. That's kind of always been the thing with TikTok. It's like, I don't think it's a Chinese plot to destroy America, but it is working. Well, if Ballerina Cappuccina starts singing the praises of Xi Jinping, we'll know that something grave has gone wrong. Yeah, we'll keep our eyes on...

Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited this week by Matt Collette and fact-checked by Ena Alvarado. Today's show is engineered by Chris Wood. Original music by Elisheba Ittoop, Diane Wong, and Dan Powell. Our executive producer is Jen Poyant. Video production by Sawyer Roque, Pat Gunther,

and Chris Schott. You can watch this whole episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at hardfork@nytimes.com. Or should I... Don't actually send a message to that email address.
