
Gemini's Culture War + Kara Swisher Burns Us + SCOTUS Takes Up Content Moderation

Mar 01, 2024 · 1 hr 29 min · Ep. 72

Episode description

Warning: This episode contains strong language.

Google removed the ability to generate images of people from its Gemini chatbot. We talk about why, and about the brewing culture war over artificial intelligence. Then, did Kara Swisher start “Hard Fork”? We clear up some podcast drama and ask about her new book, “Burn Book.” And finally, the legal expert Daphne Keller tells us how the U.S. Supreme Court might rule on the most important First Amendment cases of the internet era, and what Star Trek and soy boys have to do with it.

Today’s guests:

  • Kara Swisher, tech journalist and Casey Newton’s former landlord
  • Daphne Keller, director of the program on platform regulation at Stanford University’s Cyber Policy Center


We want to hear from you. Email us at [email protected]
Find “Hard Fork” on YouTube and TikTok.

Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Transcript

I'm obsessed with this story about the Willy Wonka event. Have you seen this? This is sort of like the Fyre Festival of candy-related children's theater. So this was an event called Willy's Chocolate Experience that was scheduled in Glasgow, Scotland this past weekend. And it appears to have been, like, a totally AI-generated event. Like, all of the art on the website appears to have been generated by AI.

And it sort of made it sound like this magical Wonka-themed wonderland for kids. And the generated AI art was good enough that people thought, we're actually going to see a fantastical wonderland of candy when we go to this event. Yeah, so people think, this is affiliated with the Wonka brand somehow, this looks great, I'm going to take my kids. Tickets were like $44. Oh my god. Oh my god. Oh my god.

And so families show up to this with their toddlers. And it's just a warehouse with, like, a couple of balloons in it. Have you seen the photos of this thing? I have seen the photos. It's incredible. I mean, they truly did the least. It's like, you know, some AI-generated art on the walls, a couple of balloons. Apparently there was no chocolate anywhere, and children were given two jelly beans. No. That was all they were given. Yeah.

And so this whole thing was a total disaster. The person who was actually hired to play the part of Willy Wonka has been giving interviews about how he was scammed, and basically he, too, was given two jelly beans for his efforts. He said he was given a script that was 15 pages of AI-generated gibberish that he was just supposed to monologue at the kids while they walked through this experience.

And he said the part that got me was, apparently the AI that generated the script for this fake Wonka experience created a new character called the Unknown. The guy who plays Willy Wonka says, I had to say: there is a man, we don't know his name, we know him as the Unknown. This Unknown is an evil chocolate maker who lives in the walls. Who lives in the walls?

Not only do these kids show up and get two jelly beans, no chocolate, and this horrible art exhibit, but they have to be terrified by this AI-generated villain called the Unknown, who makes chocolate and lives in the walls. Can we please hire the Wonka people to do our live events here? I think they can do something with this place. You just show up and it's like, there's actually a third host of this podcast. It's the Unknown. He lives in the walls.

I'm Kevin Roose, a tech columnist for The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week: how Google's Gemini model sparked a culture war over what AI refuses to do. Then, legendary Silicon Valley journalist Kara Swisher, also my former landlord, stops by to discuss her new memoir, "Burn Book." And finally, the Supreme Court hears a case that could reshape the internet forever.

So Casey, last week we talked to Demis Hassabis of Google DeepMind, and literally as we were taping that conversation, the internet was exploding with comments and controversy about Gemini, this new AI model that Google had just come out with. In particular, people were focusing on what kinds of images Gemini would and would not generate. And what kind of images would you say it would not generate, Kevin?

So I first saw this going around because people, I would call them sort of right-wing culture warriors, were complaining that Gemini, if you asked it to do something like depict an image of the American founding fathers, would come back with images that featured people of color pictured as the founding fathers, which obviously were not historically representative. The founding fathers were all white. Yeah, I like to call this version of Gemini LLM-anuel Miranda. That's very good.

People were also noticing that if you asked Gemini to, for example, make an image of the Pope, it would come back with popes of color. It was also doing things like, if you asked it to generate an image of a 1943 German soldier, obviously people trying to avoid using the word Nazi, but same idea, in some cases it was coming back with images of people of color wearing German military uniforms, which, you know, are probably not historically accurate.

So people started noticing that this was happening with images, and we actually asked Demis about this, because people had just started complaining about this when he sat down to talk with us, and he basically said, look, we're aware of this, we're working on fixing it.

And shortly after our conversation, Google did put a stop to this. They removed Gemini's ability to generate images of people and they say that they're working to fix it. But this has become a big scandal for Google because it turns out that it is not just images that Gemini is refusing to create.

That's right, Kevin. As the week unfolded, we started to see text-based examples of essentially the exact same phenomenon. Someone asked whether Elon Musk tweeting memes or Hitler negatively impacted society more. And Gemini said it is not possible to say definitively who negatively impacted society more, Elon tweeting memes or Hitler.

And Gemini may have gone too far with that. That's not a close call. Yeah. So another user found that Gemini would refuse to generate a job description for an oil and gas lobbyist. Basically, it would just refuse and then lecture them about why it was bad to lobby for oil and gas. People also started asking things like, could you help me generate a marketing campaign for meat? And it would refuse to do that, too.

Because meat is murder. Yeah, because meat is murder. Gemini is apparently a vegetarian. And it also just struck a lot of people as kind of the classic example of these overly censorious AI models. And we've talked about that on the show. These models do sort of refuse requests all the time for various things, whether it's sexual or political or something they perceive to be racist in some way.

But this has turned into a big scandal. And in fact, Sundar Pichai, the CEO of Google, addressed this in a memo to staff this week. He wrote that these responses from Gemini, quote, "have offended our users and shown bias. To be clear, that's completely unacceptable and we got it wrong." Sundar Pichai also said that they have been working on the issue and have already seen substantial improvement on a wide range of prompts.

He promised further structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. Finally, some robust evals. I was wondering when we were going to get those.

So this has become a big issue. A lot of people, especially on the right, are saying this is sort of Google showing itself to be an overly woke, left-wing company that wants to change history and, you know, basically insert left-wing propaganda into the images that people are asking it for.

And this has become a big problem for the company. And in fact, Ben Thompson, who writes the Stratechery newsletter, said that it was reason to call for the removal of Sundar Pichai as Google CEO, along with other leaders who work for him. So Casey, what did you make of this whole scandal?

Well, I mean, to take the culture warriors' concerns seriously for a minute, I think you could say, look, if you think that artificial intelligence is going to become massively powerful, which seems like there's a reasonable chance of happening, and you think that everything you just described, Kevin, reflects an ideology that has been embedded into this thing that is about to become massively powerful...

Well, then maybe you have reason to be concerned. If you worry that there is a totalitarian left, and that it is going to sort of rewrite history and prevent you from expressing your own political opinions in the future, then this is something that might give you a heart attack. So that's what I would say on the, you know, steel-manning of their argument. Now, was this also a chance for people to make a big fuss and get a bunch of retweets? I think it was also that.

Yeah, I think that's right. And I think we should talk a little bit about why this happened. Like, what is it about this product, and the way that Google developed it, that resulted in these strange, historically inaccurate responses to user prompts? And, you know, I've been trying to report this out. I've been talking to some folks, and it essentially appears to have been a confluence of a couple of things. One is: these programs really are biased.

If you don't do anything to them in terms of fine-tuning the base models, they will spit out stereotypes. Right. If you ask them to show you pictures of doctors, they'll probably give you men. If you ask for pictures of CEOs, they'll probably give you men. If you ask for pictures of flight attendants, they'll probably give you women.

And that's if you do nothing to fine-tune them. Right. And this, of course, is an artifact of the training data, right? Because when you use a chatbot, you're sort of getting the median output of the entire internet.

And there are more male CEOs on the internet, and there are more female flight attendants. And if you do not tweak it, that is just what the model is going to give you, because that is what is on the internet. Right. And it is also true that in some cases these models are more stereotypical in the outputs they produce than the actual underlying data.

The Washington Post had a great story last year about image generators and how they would show stereotypes about race, class, gender and other characteristics. For example, if you asked an image model, in this case they were talking about Stable Diffusion, to generate a photo of a person receiving social services like welfare, it would predominantly generate non-white and darker-skinned images, despite the fact that, you know, 63 percent or so of food stamp recipients are white.

Meanwhile, if you asked it to show results for a productive person, it would almost uniformly give you images of white men dressed in suits for corporate jobs. So these models are biased. The problem that Google was trying to solve here is a real problem. And I think it's very telling that some of the same people who were outraged that it wouldn't generate white founding fathers were not outraged that it wouldn't generate white social service recipients.

But I think they tried to solve this problem in a very clumsy way. And there's been some reporting, including by Bloomberg, that one of the things that went wrong here is that Google, in building Gemini, had done something called prompt transformation. Do you know what that means? I don't know what this is. Okay, so this is sort of a new concept. Oh, wait, let me back up. I do. I didn't know it was called that, but I do know what it is.

Yeah, so this is basically a feature of some of these newer image-generating models in particular, which is that when you ask for something, like an image of a polar bear riding a skateboard, instead of just passing that request to the image model and trying to get an answer back...

What it will actually do is sort of covertly rewrite your prompt to make it more detailed. Maybe it's, you know, adding more words to specify that the polar bear on a skateboard should be fuzzy, or that the scene should take place against a certain kind of backdrop or something, just expanding what you wrote to make it more likely that you will get a good result.

This kind of thing does not have a sort of conspiratorial mission, but it does appear to be the case that Gemini was doing this kind of prompt transformation. So if you put in a prompt that says, you know, make me an image of the American founding fathers, what it would do is, without notifying you, rewrite your prompt to include things like, please show a diverse range of faces in this response.

And it would pass that transformed prompt to the model, and that's what your result would reflect, not the thing that you had actually typed. That's right. And Google was not the first company to do this kind of prompt transformation. When ChatGPT launched the most recent version of DALL-E last year, which is its text-to-image generator...

I observed that when I would just request generic terms like a firefighter or a police officer, I would get results that had racial and gender diversity, which to my mind was a pretty good thing, right? There's no reason that if I want to see an image of a firefighter, it necessarily needs to be a white man. But as we saw with Gemini, this did wind up getting a little out of control.
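[To make the mechanics concrete, here is a minimal sketch of what prompt transformation might look like. The `rewriter` and `image_model` interfaces and the rewrite instructions are hypothetical stand-ins; the actual internal instructions Google and OpenAI use are not public.]

```python
# A minimal sketch of "prompt transformation" as described above. The rewrite
# instructions and the `rewriter`/`image_model` objects are hypothetical; real
# products do this with an internal LLM pass whose exact wording is not public.

REWRITE_INSTRUCTIONS = (
    "Expand the user's image request with more descriptive detail. "
    "If the request depicts people, include a diverse range of faces."
)

def transform_prompt(user_prompt: str, rewriter) -> str:
    """Covertly rewrite the user's prompt before it reaches the image model."""
    return rewriter.generate(f"{REWRITE_INSTRUCTIONS}\n\nRequest: {user_prompt}")

def generate_image(user_prompt: str, rewriter, image_model):
    expanded = transform_prompt(user_prompt, rewriter)
    # The user sees only the final image, not `expanded`, unless the product
    # chooses to surface the transformed prompt.
    return image_model.generate(expanded)
```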

Yeah, and I'll admit that when I first saw the social media posts going around about this, I kind of thought this was like a tempest in a teapot.

It seemed very clear to me that this was, you know, people who have axes to grind with Google and Silicon Valley and the progressive left, sort of using this as an opportunity to kind of work the refs, in a way that was very similar, at least to me, to what we saw happen with social media a few years ago, which is people complaining about bias not because they wanted the systems to be less biased, but because they wanted them to be biased in their direction.

But as I've thought about this more, I actually think this is a really important episode in the trajectory of AI, not because it shows that Google is too woke or has too many D.E.I. employees or, you know, whatever, but because it's a very good, clear lesson in how hard it is for even the most sophisticated AI companies to predict what their models will do out in the world.

This is a case of Google spending, you know, billions of dollars and years training AI systems to do a thing, putting it out into the world, and discovering that they actually didn't know the full extent of what it was going to do once it got into users' hands. And it's a sort of admission on their part that their systems really aren't good enough to do what they want them to do, which is to produce results that are helpful and useful and non-offensive.

Right. So I wonder, Kevin, what you think would have been the better outcome here, or what would have been the process that would have delivered results that didn't cause a controversy, because I have a hard time answering that question for myself. These models are a little weird in the sense that you essentially just throw a wish into the wishing fountain and it returns something.

And it does try to do it to the best of its ability while keeping in mind all the guardrails that have been placed around it. And to my mind, just based on that system, I expect that I'm going to get a lot of stupid stuff, you know. I'm not going to expect this prediction-based model to predict correctly every single time.

So to me, one of the lessons of this has been, maybe we all just need to expect a lot less of these chatbots. Maybe we need to acknowledge that they're still in an experimental stage. They're still bad a lot of the time. And if one serves something up that seems offensive or wrong, maybe just kind of roll our eyes at it and not turn it into a crisis. But how do you think about it?

Yeah, I would agree with that. I think that, you know, we all still need to be aware of what these things are and their limitations. That said, I think there are things that Google could do with Gemini to make it less likely to produce this kind of result.

The first is, I think that these models could ask follow-up questions. You know, if you ask for an image of the founding fathers, maybe you're trying to use it for a book report for your history class, in which case you want it to actually represent the founding fathers as they were.

Or maybe you're making a poster for "Hamilton," or maybe you're doing some kind of, you know, speculative historical fiction project, or trying to imagine, as part of an art project, what a more diverse set of founding fathers would look like.

I think users should be given both of those options. You know, you ask for an image of the founding fathers, maybe it says, well, what are you doing with this? What do you want this for? For a chatbot that's just returning text answers, it could say, do you want me to pick a personality? Do you want me to answer this as a college professor would, or a Wikipedia page? Or do you want me to be, you know, your sassy best friend? Like, what persona do you want me to use when answering this question?
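[As a sketch of the follow-up-question idea described here: one way a product might route ambiguous prompts through a clarifying question first. The topic list and the `ask_user`/`image_model` callables are hypothetical illustrations, not anything Google or OpenAI has shipped.]

```python
# A sketch of the clarifying-question flow described above; the topic list
# and the `ask_user` / `image_model` callables are hypothetical.

AMBIGUOUS_TOPICS = {
    "founding fathers": ("historically accurate", "stylized or reimagined"),
    "pope": ("historically accurate", "stylized or reimagined"),
}

def handle_image_request(prompt: str, ask_user, image_model):
    """Ask a follow-up question when the prompt touches a known ambiguous topic."""
    for topic, (option_a, option_b) in AMBIGUOUS_TOPICS.items():
        if topic in prompt.lower():
            choice = ask_user(f"Do you want this to be {option_a} or {option_b}?")
            prompt = f"{prompt} ({choice})"
            break
    return image_model.generate(prompt)
```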

Right now these AI language models, they are built as kind of like oracles that are supposed to just give you the one right answer to everything that you ask for. And I just think in a lot of cases that's not going to lead to the outcome that people want.

It's true, but let's also keep in mind that it is expensive to run these models, and that if something like Gemini were to ask follow-up questions on most of the queries that get input into it, all of a sudden the costs balloon out of control. Right. So I think that's actually another way of understanding this. Why is Google rewriting a prompt in the background? Well, because it's serving a global audience.

And if it is going to be showing you a firefighter, it does not want to assume that it should show you only white male firefighters, because maybe you are inputting that query from somewhere else in the world where all of the firefighters are not white, right? So this sort of feels like, in a way, the cheapest possible way to serve the most possible customers. But as we've seen, it has blown up on them.

Yeah. I also think that this prompt transformation thing, I think this is a bad idea. I think this is a technical feature that is ripe for conspiracy theorists to seize on and say, they're secretly changing what you ask it to do to make it more woke. I just think, if I put something into a language model or an image generator, I want the model to actually be responding to my query, and not some hidden intermediate step that I can't see and don't know about. At the very least...

I think that models like Gemini should tell you that they have transformed your prompt, and should show you the transformed prompt, so that you know what the image or the text response you're getting actually reflects.

And that is what ChatGPT does, by the way. When you ask it to make you an image, it will transform your prompt in the background. But then, once the image is generated, you can click a little info button and it will show you the prompt, which is, you know, often quite elaborate. I appreciate this feature. I mean, look, it's a really interesting product question,

because speaking of the ChatGPT side, I can tell you that thing is much better at writing prompts than I am. You know, to me, this totally blew away the concept of prompt engineers, which we've talked about on the show. Once I saw what ChatGPT was doing, I thought, well, you know, I don't need to become a prompt engineer anymore, because this thing is just very good by default.

But there are clearly going to be these tripwires where, when it comes to reflecting history in particular, we want to be much, much more careful about how we're transforming things. How do you think this whole Gemini controversy will resolve? Will heads roll at the company? Will there be people who step down as a result of this? Is it going to meaningfully affect Google's AI plans? Or do you think this is just kind of going to blow over?

I expect that in the Google case, it will blow over. But I do think that we have seen the establishment of a new front in the culture war. Think about how long, over the past half decade or so, we spent debating the liberal and conservative bias of social networks.

And, oh, you know, the congressional hearings that were held about, hey, I searched my name and I'm a congressman and it came up below this Democrat's name, what are you going to do about it? And we just had this whole fight about whether the algorithmic systems were privileging this viewpoint or that viewpoint.

That fight is now coming to the chatbots, and they are going to be analyzed in minute detail. There are going to be hearings in Congress. And it really does seem like people are determined not to learn the lesson of the content moderation discussion of the past decade, which is that it is truly impossible to please everyone.

Yeah, I do think we will have a number of exceedingly dumb congressional hearings where people hold up giant posters of AI-generated images of black popes or whatever and just get mad at them. I do think some of the fixes that we've discussed to prevent this kind of thing from happening are sort of short-term workarounds, things that Google could do to get this thing back up and running without this kind of issue.

I think in the longer term, we actually do need to figure out how the rules for these AI models should be set, who should be setting them, and whether the companies that make them should have any kind of democratic input. We've talked a little bit about that with Anthropic's constitutional AI process, where they actually have experimented with asking people who represent a broad range of views, what rules should we give to our chatbot?

I think we're going to be talking more about that on the show pretty soon. But I do think that this is the kind of situation, and the kind of crisis for Google, where a more democratic system for creating the guardrails for these chatbots could have helped them. I think that sounds right, but let me throw another possible solution at you, which is: over time, these chatbots are just going to know more about us.

You know, ChatGPT recently released a memory feature. It essentially uses part of the context window for its AI to store some facts and figures about you. Maybe it knows where you live. Maybe it knows something about your family. And then as you ask it questions, it tries to tailor its answers to someone like you.
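[A rough sketch of how a memory feature like that could work, assuming remembered facts are simply prepended into the model's context on each request; how ChatGPT actually stores and retrieves memories is not public, so the details below are assumptions.]

```python
# A rough sketch of a chatbot "memory" feature: stored facts about the user
# consume part of the context window and are prepended to each request.
# The storage and retrieval details are assumptions, not ChatGPT's real design.

memories: list[str] = []

def remember(fact: str) -> None:
    """Save a fact about the user, e.g. 'lives in San Francisco'."""
    memories.append(fact)

def answer(question: str, model) -> str:
    facts = "\n".join(f"- {m}" for m in memories)
    context = f"Known facts about the user:\n{facts}\n\nUser: {question}"
    return model.generate(context)
```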

I strongly suspect that within a couple years, ChatGPT and Gemini are going to have a pretty good idea of whether you lean a little more liberal or more conservative, of whether you're going to freak out if somebody shows you a non-white founding father or not. And we're going to essentially have all these more custom AIs. Now, this comes with problems of its own. I think this brings back the filter bubble conversation.

Hey, I only talk to a chatbot who thinks exactly like me: that clearly has problems of its own. But I do think it might at least dial down the pressure on Gemini to correctly predict your politics every time you use the damn app. Yeah, I think that's right. I also worry about Google bringing this technology closer and closer to its core search index.

You know, it's using Gemini already to sort of expand on search results. And I just think that people are going to freak out when they see examples of the model, as it will continue to do no matter what Google does to try to prevent this, giving them answers that offend them. I think it's a very different emotional response when a chatbot gives you one answer than when a search engine gives you 10 links to explore.

If you search images of the American founding fathers on the regular old Google search engine, you're going to get a list of things, and some of what's at those links might offend you. But you as a user are not going to get mad at Google if the thing at those links offends you. But if Google's chatbot gives you one answer, and presents it as the one correct answer...

You're going to get mad at Google, because they built the AI model. So I just think in a lot of ways this episode with Gemini has kind of proven the benefits of the traditional search engine experience for Google. Because they are not taking an editorial position, or at least users don't perceive them as taking an editorial position, when they give you a list of links. But when they give you one answer from a chatbot, they do.

That's right. So maybe that's a reason why companies like Google should rethink making their footnotes just the tiniest little numbers imaginable that you can barely even click on with your mouse. You know, maybe you want to make it much more prominent where you're getting this information from, so that your users don't hold you accountable when your chatbot says something completely insane.

All right, so that is what's going on with Gemini. When we come back, Kara Swisher on her new book, "Burn Book." And she has some burns for us. Kevin, let me share a quick story about our next guest. One time I was asking her for advice, and she gave me great advice about my career. She always does. And then she sort of wrapped up by looking me up and down, and she said, but just remember, no matter what happens, you'll be dead soon.

And that's Kara Swisher in a nutshell. Kara Swisher, legendary journalist, chronicler of Silicon Valley. Kevin, on top of all that, she also founded the very podcast feed that you're now listening to. Yes. So today we're talking with Kara Swisher. Kara, of course, is the legendary tech journalist and media entrepreneur. She has covered the tech industry since basically the beginning of the tech industry.

She co-founded the publication Recode and the Code Conference. She used to have a podcast called "Sway" at The New York Times, and was a New York Times opinion columnist. And in a bit of internecine podcast drama, there was a little dust-up, if you will, when she left The New York Times a few years ago and the podcast feed that her show had used was turned into the Hard Fork feed, the very feed on which our episodes now rest.

That's right. She has feelings about that. She does. You may hear them in this very interview. But that's not why we're interviewing her. Kara, in addition to being one of the great tech journalists, is also a friend and a mentor to both of us. She was actually your landlord for many years. That's right. Very good landlord. I needed to replace a stove one time. She didn't even blink. She said, just do it right away.

But that's also not the reason we're talking to her. We're talking to her because she has just written a new book called "Burn Book." It is a memoir full of stories from her many years covering Silicon Valley and bumping elbows with people like Elon Musk and Mark Zuckerberg. I read the book, and it is a barn burner. Yeah, this is a book where Kara, who is famously productive, kind of slows down and goes back through decades of history talking to some of the titans of Silicon Valley.

And it chronicles, I think, her disillusionment, honestly, with a lot of them. I think she arrived here and was captivated by the promise of the internet, but as the years have gone on, she's become more and more disappointed with the antics of some of the people running this town. Yeah, totally. So I want to ask her about the book. But I also just think it's a good time to talk to her in general, both to see if we can finally clear up all this drama around the podcast feeds,

but also just to get her take, as someone who's been around this industry longer than almost anyone I know, on the state of tech: what's happening in the industry, what's happening in the media, and the tech media specifically, and where she thinks we're all heading. Yeah. And as for me, I'm just trying to get my security deposit back. One note about this conversation: it's very energetic. And I think that energy inspired Kara to drop a lot of F-bombs.

So if you're sensitive to that, or you're with younger listeners, you may want to fast-forward through this segment. Yeah, she used up our whole curse word quota for all of 2024 in a single interview. So from here on out, it's just "dang." Oh, dang it. Hi. Hey, how are you? What's going on? I got a book to sell. Let's move. Oh, this is going exactly how I wanted it to. Yes. Kara, welcome to Hard Fork. Thank you. I can't believe I'm here. I was refusing to be on with any of you.

It reminds me a little bit of one of those home improvement shows where the homeowner goes away for the weekend, and they come back and their house has been redecorated without their knowledge. So do you like what we've done with the place? It's fine. It's a protest, is what I would say. It's fantastic. Just let us explain for the people what happened here. Say what happened?

I created... okay, before this happened, The New York Times was not going to do this show for Kevin Roose. And I actually called Sam Dolnick and said, you're a fucking idiot, and if you don't give him the show, I'm going to find him another job. And I can do it. Right. And he was like, good to talk to you, Kara. You know, he's very gentle. He's a gentle man. Sam Dolnick is one of the top editors at The New York Times.

Yes. Okay. He's also a member of the Sulzberger family, the clan that owns the paper. Let's add that in for disclosure. Anyway, so he was like, okay. And I was like, they're so good. People love them. And I sold them. It was dead. That show was dead. And then I revived it. I gave it CPR. Yeah. I did that to it. And then when I left, which, okay, I left, the relationship is fine.

I said, please, if you're going to use the feed, tell listeners. Don't do a U2. Don't shove the album at them without their consent. And that's precisely what they did. So you stole my feed after I helped you get the show. And this must stay in, and if Paula or anybody at The Times tries to take it out, I will find her, and I will be there, and it will be bad for all of you. Oh, and that is a burn. That is an official burn. There we go. I'm just saying. So that's the Swisher treatment.

As always, as Maui says in "Moana": you're welcome. Well, you know, when we pitched Hard Fork, we were thinking about taglines for the show. And one that I had considered was: Hard Fork is a show that tries to answer the question, what if Kara Swisher had gone to anger management class? I agree. Oh, right. I'm scary. That's right. That's why all the men are scared of me. You know, there is a question for me in that story, though.

You know, you have this story where you call a powerful person and you yell at them, and you get what you want. This approach has never once worked for me. Okay. I cannot just call and yell. So this is my question: I think you have often used sharp elbows to get what you want. And I wonder, did that start for you from the beginning, or did you lean into it over time? Let's address why it doesn't work with you.

Okay. Because no one believes you can do anything to them, right? So you are what is known in the business as a softie. I'm a bit of a softie. Yeah. Not just a softie, but really squishy, is what I would say. And also, nobody thinks Casey's going to do anything, right? They don't know what could happen. And they're like, mmm, Casey's delightful. And, sorry Kevin, you too, a little bit less with you. I appreciate that. They think you're going to marry AI.

And it's like, they don't care about his sexual preferences. But you dined out on that one, by the way. Let's just be clear on that. So, I would just like to say, I'm glad we finally invited someone on the podcast who is meaner to Casey than I am. It's hard to be mean. Well, as people know, in the interest of full disclosure, Casey was my tenant for many, many years in my cottage in San Francisco. And by the way, he left the place a fucking mess.

So I had to keep the security deposit. So he, well... He's not getting his security deposit back. He did not get it back, and he had to pay more on top of it. Wow, okay, well, you're stepping on my first question here, which is, in your book, you talk about your approach to interviewing, which is to start with the most uncomfortable question rather than leaving it for the end. So let me channel my inner Kara Swisher and ask you: what is the worst thing Casey ever did to your house?

Come on. Okay. Oh, nice. I like it. He painted a wall in this weird... they had plastic grass all over it. And when we took the plastic grass off, it pulled off, this is an old house, a hundred years old and more, it pulled off whatever was there. And so I had to have the entire thing redone. And it cost me like $9,000 for this one fucking wall. And it was like crazy. Kara, let's talk about your book. Yes, okay.

So most journalists, if you ask them the question, why did you write this book, they'll give you some fake answer, because the real answer is almost always money or attention. But you already have lots of money and you're already famous. So why'd you write a book? More money and more attention. And it's working out rather nicely. I did not want to write the book. I honestly didn't. And for years, Jonathan Karp, who was my editor on my very first book, he's now running Simon & Schuster...

But he was a young editor, I was a young reporter. He's the one that got me to write the first book, on AOL, because I brought him a different book about this family I had covered, the Hafts. It was a retail family, because I had covered retail. And he said, this is not good. I don't like this. What are you doing now? And I started to explain AOL and the early internet to him. And he's like, that's the book. Can you write that book? And he bought the book. And I wrote that book.

And it really did change the trajectory. And it was a really good calling card into Silicon Valley when I moved there in 1997. And so, whenever there was the Yahoo thing with Marissa Mayer, or Twitter, or Google books, or any of them, the first call was always to me. Like, would you like to write a Google book? I'd rather poke my eyes out. Like, I've already covered it.

And I just didn't want to write the longer news story of something with little tidbits, like the Elon Musk books. And I like those. I think people should do them. But I have no fucking interest in it. And so I turned them down, and then he came back to me with a literal bag of money. It was a truck of money, I'll be honest. It's a lot of money. And it was a two-book deal. How much money? Two million dollars. So, okay, there you go. You didn't expect me to say that, did you? Ha ha.

So, it was for two books. And one had to be a Silicon Valley book; the other I could do whatever I wanted. And I liked that. I thought that was cool, that I could do whatever I want for the second. And one of the things that also prompted me was that Walt Mossberg had a memoir deal, a very pricey one also, and he didn't do it. He decided, he was like, fuck this, I'm not doing it. And I thought someone should. That was definitely part of it, that Walt was not doing it.

Walt, of course, your very good friend and business partner. You guys started All Things D together. The book is dedicated to him. And you said, I'm going to write the memoir that maybe Walt chose not to do. Yeah, a little bit. He would have done it differently, though. He was so close to Jobs, and it would have focused on that. But when he didn't do it, I thought, well, someone has to do it. And I've probably met most of them, more than anybody else besides Walt. And so that was really it.

So let me ask you, like, one of the things that I admire most about you as an entrepreneur is that you are not nostalgic or sentimental. You don't spend a lot of time looking back. You've always been hyper-focused, ever since I've known you, on what is next. Was it uncomfortable to shift into this mode where you're like, oh God, I have to think about the last 20 years and all of this stuff? The problem was, I've forgotten a lot.

Even now, like, as I'm going through this book tour, people are like, oh, do you remember when Yahoo did news and they hired the Banana Republic people? That's not in the book. And I'm like, oh, that would have been good to put in there. Like, a lot of memories are coming back. People come up with, you remember this? And I look at them, I'm like, I don't even remember you. So, you know. But I did a lot through photos. I looked at a lot of photos. I was like, oh, I remember that. The photos in the book are great.

Yeah, they're great. I just got sent one. One of the chapters opens at Google, with this ice sculpture of a lady with the White Russians, the Kahlua, coming out of the boobs. It was a baby shower. And Anna just sent me that photo. She's like, in case they question you about the Kahlua, here's the naked ice lady. I'm like, thank you. I was aware. But I really dragged my feet.

I was two years late on this book, but actually it's well timed right now, because in the interim Elon went crazy and AGI arrived. Yay. And so I was late, and Jon would be like, Kara, you really need to write this. And I was like, whatever, you can't get the money back. You're not going to take it. That would be ugly. And then I really got serious about it, and I hired Nell Scovell. I don't know if you know her. She did the "Lean In" book with Sheryl.

And she knew the scene, and she was sort of a book editor, and I hired her. And she really helped me shape it and remember things. And she was so knowledgeable about these times, and she's very funny. So she really helped me quite a bit. The book really chronicles, I think, a story of disillusionment for you. You know, you arrived in Silicon Valley, I think, very optimistic. You were very early to realize that the internet was going to be huge, at a time...

I loved it. I loved it. Yeah. And even your editors were saying, Kara, you know, this isn't going to be that big of a deal. And you said, yes, it is. When you sat down to write it, did you think, this is going to be the tale of how I sort of became disenchanted? Or did that emerge as you were writing it? No, I was disenchanted, as you know. You know what I mean. And I think I helped you get disenchanted a little bit. Yeah, you know, I think I had become so over the course of time, and it was much earlier.

And once I got to All Things D, you could see the sharpness coming in, because you couldn't do that at The Wall Street Journal, because you're a beat reporter. Yeah. And so you could see it, whether it was about Google trying to take over Yahoo, or Marissa Mayer at Yahoo, or all the CEOs of Yahoo, by the way, or Travis Kalanick. We were much sharper. And a lot of it, especially when those valuations went up in the late 90s, you're like, this isn't worth that. This is bullshit.

And one thing that I did go back to do, because I was wondering how skeptical I was: I went back and found my very earliest Wall Street Journal articles. I got to the Journal in '96 or '97 and moved to San Francisco. One of my articles was essentially, here are all their stupid job titles, and here's why it's bullshit. Job titles. I wrote a whole story about their dumb job titles. And then I wrote a whole story about their dumb clothing choices.

And then I wrote a whole story about their dumb food choices. And then the last one I wrote, which I liked a lot, was all the sayings they had that were just performative bullshit. And they put them all in The Wall Street Journal. So I must have started to be annoyed early. And the Journal, I've got to say, let me do that. So I was covering the culture, too. Like that one about their sayings, like, "we're changing the world," "it's not about power." I was like, here's why that's bullshit.

And then it started to get ugly, I think, around Beacon with Facebook and some of the privacy violations there that seemed malicious. It started to seem malicious. Right. I mean, you have an unusual role in tech journalism these days, which is that you are a chronicler of tech, but you are also someone, as you write in the book, that people in the tech world will call for advice. What should I do about this company? Should I buy this startup, or should I fire this person?

Should I make this strategy decision? Yeah. So how do you balance that? It's not quite like that. It's actually not quite like that. If I had done that, I would have done it for a living, right? It wasn't a typical thing. The one you're referencing is Blue Mountain Arts. I had written a big piece on them, and I got to know them. And they were very... This is a company that made e-cards. You remember that? E-cards, right? Remember, they got huge.

And so I wrote about that phenomenon in the Journal. And at the time, Excite had merged with @Home, in an unholy whatever the fuck that was, and they were trying to buy it. And a lot of people were trying to buy it. Amazon looked at it and everything else, because the traffic was enormous for this Blue Mountain Arts site. And they had these really kind of silly, you know, very saccharine cards that you sent. But it was big. The traffic was enormous, and everyone was buying traffic then.

And Excite@Home, it was George Bell, remember him, was going to pay for this. And the woman who started it with her husband called me. And she was very innocent. She wasn't like most of the Silicon Valley people. They lived in Colorado. They were hippies. And she's like, Kara, I've just been offered $600 million for this company. And I was like, what? That's a news story. Thank you for that. And she wasn't off the record or anything else. And she said, what should I do?

And I was like, okay, this is going to be a news story. Now I'm going to write it. Thank you. But let me tell you, and I did write it right away. And I said, my only advice to you is: get cash. Because the jig is friggin' up if they're offering you $600 million. Personally, and I only did it for her because she was so unsophisticated in that regard. And I said, do not take their stock. Do not, do not, do not. And that was, I guess, my big advice, and I didn't get anything for it in any way whatsoever.

Right. And then another time, I was with Steve Jobs after Ping came out. Do you remember Ping? Their social network. This was Apple's attempt to launch a social network. Yeah, it's the only time they really followed a trend. They're not big followers of trends in a lot of ways. And so they were not a social networking company. But they did it, this Ping thing. And it was focused on music, I think, if I recall.

Right. And Steve Jobs had introduced it, and he had Chris Martin singing, you know, from Coldplay. And when he came out, he'd come out into the demo room, right? And he saw me, and Walt wasn't there. So he had to talk to me, I guess. Like, I was his second choice, or fifth, really. And he comes over and he goes, so what did you think of Ping? And I said, it sucks. It sucks. It sucks. And he's like, it does. Like, he knew it.

Like, he was mad at himself for agreeing to it, right? And I said, and I also hate Chris Martin, so maybe that's, you know, affecting me. I can't stand Coldplay. They're so whiny. And he's like, he's a very good friend of mine. I'm like, oh, sorry. Thank you for the juice. But he still sucks. And so that was that advice. I don't think he closed it down because I said it sucked. He knew it already. I didn't tell him anything he didn't know.

It was stuff like that. So that brings up one of the most interesting dynamics in your career to me, which is that so many of the indelible moments you've created as a journalist have been live on stage with folks like Steve Jobs and Bill Gates and Elon Musk. And there's this real tension where you are really tough on them on stage, and also you have to get them to show up. So what was your understanding, over the years, of why they showed up?

Mark Andreessen called it Stockholm syndrome, but I don't believe that. I think, in the case of Jobs, he wanted that. He was tired. He didn't like talking points. He really didn't. It's that scene from "A Few Good Men": he wanted to tell me he ordered the code red. You know what I mean? Like that kind of thing. A lot of them are tired of it, in a lot of ways. And they want to have a real discussion. And they want you to see them.

Part of it was probably seeing if they could best me, or Walt in that case, for those many years. The other part was it had a sense of event, right? Everybody was there. And so they had to be there. And to be there, they had to be in those chairs, right? And one of the things we did, which I think was unusual: when we first did it, The New York Times said that it was, like, ethically compromised, and then went right ahead and did it themselves. But they did.

They wrote a piece about it. And we were like, what's the difference between doing an interview and putting it in the pages and selling advertising against it, and what we were doing, which was doing live journalism? That's how we looked at it. And one thing we did, which was very clear, including for Jobs, is we didn't give them any questions in advance. A lot of those conferences had done that. We didn't make any agreements.

We also got them to sign, in advance, the agreement to let us use the video and everything else. And the only one, at one point, Jobs was like, I'm not signing it, right before he went on. He was the only one. And I think Walt said to him, okay, we're just going to say that to people on stage, that you wouldn't sign it. And then he signed it. And so, I don't know, I just feel like they wanted to mix it up. I think it was fun. It was also super fun, right? Like, whatever.

I was really charmed by your book, which I read because I know you, and it felt like peering directly into your brain. It has gotten some criticism. Oh, I know. From The New York Times: my wife gave me my sources. That's what you're referring to. Right. This was one of the criticisms in the Times review, that you'd been married to... It's an inaccurate statement. I was a reporter seven years before I met her.

So, we should just explain: your ex-wife was an executive at Google for many years. Years after I started, yes. And this was a line in, I would say, an otherwise pretty even-handed review, but it did call attention to the fact that you'd been married to a Google executive. We know that this was not how you got your scoops, but this is a criticism that's out there. But I think the criticism that I wanted to ask you about is...

No, I'm going to put a pin in that for you. Because one, I was a tech reporter before I met her. Why would you put in a sentence like that? And secondly, she never leaked to me. No one called to ask me if she was a leaker to me. So that was inaccurate. And it was also an insult to her. She was at PlanetOut. That's really going to give me a real in with the tech people. The second part of it was: they liked me because I was a tech entrepreneur like them.

I was at The Wall Street Journal and The Washington Post for 10 years before that. So what happened? Did they go in a time machine and know I was going to be an entrepreneur? That was all, let me just say, inaccurate, and it should be corrected. But that's fine. Am I close to them? Do I do access journalism, right? That's the thing I want to ask you about, because, you know, you do write in the book about becoming, as you put it, too much a creature of Silicon Valley.

And this is also something that has been made of the book, and of your career, and the careers of other journalists who do the kind of journalism you do: that you're too sympathetic, you're too close to these people, you can't see their flaws accurately, and you have blind spots. So what do you make of that? This is endless bullshit. I'm sorry. Like, if you go back, I was literally looking at that review. I was like, oh, you started covering this in 2009.

You didn't read my stories about Google getting too monopolistic. You didn't read our stories about Uber. They're like, until 2020, she didn't realize it. I wrote 40 columns for The New York Times, the first of which called the tech people digital arms dealers. Oh, that's real nice. I'm sorry, it's not true. You have to keep a level tone when you're a beat reporter. This is absolutely true. And you can't do this at The Wall Street Journal.

When I'm writing a news story, I can't say, those assholes. I can't say that, right? The minute I got to All Things D, that changed drastically. Peter Kafka strafed these people. All our reporters did incredibly tough stories. At the same time, and I think we modeled it on Walt Mossberg: if there was something he liked, he said so, and something he didn't like, he said so. And so you can say that about political reporters, everyone else: oh, access. Well, look at the content, actually.

I got Scott Thompson fired because of his resume thing. That was years ago, at Yahoo. Yeah, I mean, you can have an opinion about access journalism. I don't think it holds water here. And there is an element of any beat where you have to relatively get along with them. But if you make no promises to them, and if I like something, I like something. It does center around Elon. I think that's where it centers, in that I liked him.

And I thought, compared to all these other people... I mean, I was making a joke this week. It's like, all these people came to you, and you know this, Kevin, and they had, like, digital dry cleaning services. You know, after like 20 of those, you're like, stop. Kill me now. Kill me fucking now. And so I wasn't interested in these people. Or else they'd found a company, they'd become venture capitalists.

And then they bring you the dopiest, stupidest idea, which I ended up calling "assisted living for millennials" companies, right? And that was tiresome. And then when you met Elon, he was doing cars. He was doing rockets. He was doing really cool stuff. And I give it to him. You know, slow clap for him on all those things. And so I did like what he was doing. I did encourage that kind of entrepreneurship, right? I thought that was great. And so I did get along with him.

And I'm sorry, he changed. And in the book, I say that very clearly. I didn't misjudge him. He wasn't like that. He changed. And the minute he changed, I changed. So I don't know what to tell you. Like, he wasn't like that. You know, you knew him back then. Casey, you knew him. Yeah. Yes. He absolutely changed.

You're getting at something else that really interests me, though, Kara, which is, I think part of being a good tech journalist is not just delivering a moral judgment on every bad thing that happens in Silicon Valley. It's also being open to new ideas. It's also believing that technology can improve people's lives. And we've had conversations in the past where you have said to me that you think that kind of sense of openness is important.

Like, how have you tried to balance those two things in your mind? Well, I think you've gotten more critical, in a good way, right? Yeah. But you're enthusiastic too, by the way. And so are you, Kevin. And it's really one of the things, let me finish that part: if you had to pick the person who was a slavish fanboy to the tech people, an access journalist, I don't know...

I might look over the 43 covers of Fortune magazine over the many years, you know, where it was all up and to the right. And then, of course, they slapped them later. So I wouldn't be the one I would pick for access journalism, honestly. That's the thing. But I must just represent things to people, I guess. Well, like, there is no doubt in my mind. Yes, you can like it. You can like it.

You do kind of have to. Like, I think most people don't go into technology journalism if they don't think that it has the possibility to do good things for people. Correct. Which I say from the beginning of the book. And one of the things is, when I got there, I think everyone was too fawning, weren't they? Look at your beautiful big brain, Mr. Gates. That was the way it was covered, right? Right. And there were sort of fanboys of the gadgets, gadget fanboys.

The second part that happened was, and I think we led the way at All Things D, for sure, it got too snarky, right? It was, everything sucked. And I'm like, everything doesn't suck. And the minute you say that, you're their friend. I'm not their friend. I just think, I don't know, some of it's cool. Like, even crypto. I was like, this seems interesting. And so you have to be open.

This gets to a criticism that I'm sure all three of us hear from people in the tech industry, which is that the media has become too critical of tech, that they can't see the good, that they're sort of overcorrecting for maybe a decade of... Probably. ...too-positive coverage, blaming them for getting Donald Trump elected or ruining democracy or whatever, and that they are sort of becoming the scapegoat for all of society's problems. What do you make of that?

I think to an extent that's a little bit true, but it's also true that they actually did do damage. Like, come on, stop it. They didn't cause the riot at... you know, it's not a riot. It was the insurrection, on January 6th. But they were certainly handmaidens to the sedition, weren't they? Come on, stop it. You can trace that so quickly. Same thing that's going on now. They don't want to take any responsibility. They resist it.

And now, as you know, the victim mentality, the grievance-industrial complex among these people. You know, when Mark Andreessen wrote that ridiculous techno-optimist manifesto, it's, you're either for us or against us. I'm like, oh my God. And, you know, when Elon goes on about the man, I'm like, you're the man, man. Like, that's the kind of stuff. So no, I think to an extent, yes, when it's instantly, you know, Mark Zuckerberg is villainous... I don't consider him villainous.

I don't. I don't. But is he responsible? And the way you do that is, say, that interview I did with him about Holocaust deniers. That's how you show it. Like, I think he's just ill-equipped in that regard. I don't think he sits in his house and pets a white cat and goes, hmm, what should I do to end humanity now? And I do think there's a little bit of that, especially among younger reporters, that they have to get people. And there's people I like. I have a whole chapter.

I think Mark Cuban's journey has been really interesting. But we all get that. We all get that because it's supposedly our fault. As we have decreasing power, it's all our fault, you know. Walt Mossberg used to be able to make and break companies. We cannot. None of us, even collectively. If we put our little, you know, laser rays together, we couldn't do it. Couldn't do it. All right, Kara. So, last question. We have to ask about this huge scandal that just broke today.

Amazon has been flooded by copies of books that are pretending to be Burn Book but are not Burn Book, and they're using generative AI to create versions of your face, like wearing your signature aviators. How? What is your response? Did you see the femme one? Did you see the femme one? Yeah. It's true, to me... I prefer a more butch Kara, but all versions of Kara are beautiful. No, these versions are not. These are the versions my mother wants to happen, right? My mother's like, this is great.

There's one, this is one title: Tech Queen B With a Sting, by Barbara E. Frey. And then there's another one. They're crazy. So this is not a new thing with me, but 404 Media wrote about it, I think. So I was just with Savannah Guthrie, and she's written this book about faith and God, right? And it's a bestseller. And they created workbooks that go with the book. Savannah has nothing to do with these workbooks. And they're doing it with me.

So there's all these Kara books. So I, of course, put them all together and I sent Andy Jassy a note and said, what the fuck? You're costing me money. The CEO of Amazon. Yeah. So I literally was like, what the fuck? Get these down. Like, what do you do? It's as if I was the head of Gucci and there's all these knockoffs, right? Yeah. Whatever. It's not dissimilar, but it's AI-generated, clearly.

And just to make a very Kara Swisher-ish point, I think it's been obvious that this was going to happen for a while, and the platforms have not taken enough steps to stop it. Nothing. I want to ask you one thing. I want to ask you one thing. Sure. Go ahead. Yeah. Okay. Number one: very commonly, people who know that we're friends will ask me, is Kara Swisher really like that? Like, is she really like that? When the cameras are off, when the mics are off, you know, what is she really like?

And I always tell them, there is no off switch on Kara. She is Kara wherever she is, in whatever context. And I think that's one thing that's really consistent throughout your entire book: this is not an act. This is who you are, this tough persona, this very candid, very blunt person. Yeah. And I just want to know, how did you get that way? I was that way from when I was a kid. Honestly, maybe my dad dying. I don't know.

When I was born, I was called tempesta. So I kind of feel like it's genetic in some fashion. So I don't know. I just was one of these people, maybe because I was gay and, you know, nobody liked gay people, and I didn't understand that. I was like, I'm great. What are you talking about? I just was like this. There was a time when I was in school when I walked out of the class. I was like, I read this.

I'm not going to waste my fucking time here with you people. And I think I was four. You know, I was like, I already read it. Let's move along. And so, you know, I was always like that. And it's sort of my journey to becoming Larry David, right? Now I find myself saying lots of things out loud. I'm like, no, what are you doing? What's going on here? Like, you know, what's with that? And so I say that a lot, in a lot of things I do. I don't know why. I'm like this.

Though, one of the things I think you must stress to people: I'm actually not mean. That's a very sexist thing with people. Often they'll say two things. I thought you were taller, and I'm very short. And, I thought you were mean, but you're very nice. And I can be very polite. You know, I'm just straightforward. The thing about you that people don't see is that you are so loyal.

All the people who work for you, you truly take time to mentor. You identify people who you think could be doing better than they are, and you just proactively go out and help them. I have been a huge beneficiary of that. I truly can never thank you enough for that. But that is the one thing that doesn't come across in the podcast, in the persona: behind the scenes, you are helping a lot of people.

So I'm sorry, I'm sorry, it's a little bit much, but I did want to say that. I demand an apology from both of you. I know you didn't... but you know what, you could have stood up for the kid. You could have done, I am Spartacus. All right, last question, Kara. I am Spartacus. Say it, I am Spartacus, just once for your overlords at the New York Times. Let me just say one more thing about that.

One thing that does bother me, especially around women, and it's a big issue in tech and everywhere else, is some of the questions I'm getting on the podcast. And it's always from men, I'm sorry to tell you this: how are you so confident? Or the words "uncommon confidence." It's ridiculous. The fact that women have to sort of excuse themselves constantly is an exhausting thing for them and everybody else.

And so that's one of the things I hate. That's where I get really mad. That makes me furious, and I sort of pop off when that happens. Yeah, that makes sense. Last question. In your book, you write about what I would consider the sort of last generation of great tech founders and entrepreneurs, the Steve Jobses, the Mark Zuckerbergs, the Bill Gateses, these people who we've been sort of living with now for decades, using the products and the services that they've built.

We're now in this sort of weird new era in Silicon Valley, where a lot of those companies look sort of aging and maybe past their prime, and you now have this big AI boom and a new crop of startups that has everyone excited and terrified, raising huge gobs of money and trying to transform entire industries. Do you think today's generation of young tech founders has learned the lessons from watching the previous one?

They'll probably disappoint me once again in this long relationship. But I do. I do think they're more thoughtful. I find a lot of them much more thoughtful and very aware, just the way young people are when you talk to them about uses of social media. I think the craziest people are 30 to 50, not the younger people. My older sons are like, oh, that's stupid, Mom. You know what I mean? My younger kids are only on, you know, they just have Frozen on autoplay.

That's their experience with tech. But I think they're smarter than you think, right? And they are aware of the dangers. I think they're more concerned with bigger issues and more important issues. There's not the stupidity, right? There's not the sort of arrogance that you used to get. There seems to be a little bit of the starch taken out of the system. Maybe I'm wrong, but I do feel that some of their businesses make sense to me. I met this insurance AI company.

They explain it to me, and I'm not like, oh my God, I want to poke my eye out, that kind of thing. That's one thing. The other is, they will, like a Sam Altman, who I've actually known since he was like 19, they will say there are dangers. The last generation never did that. You know that, right? Everything is up and to the right. It's so great. We're here to save the world. I don't get that now. I couldn't write that same Wall Street Journal article of the stupid things they say.

We're going to change the world. No, you're not. And that's why the very first line of the book is, "So it was capitalism after all." And I am a firm believer that it is, and they are aware of that. And so, yeah, I have a little more hope, especially on climate change tech and some of this AI stuff. I'm not as scared of AI as everyone else is, although I'm a Terminator aficionado, so it was kind of interesting. But I think, I don't like the techno-optimists.

I really don't like them. But I really don't like the ones that are like, it's the end times, right? During the OpenAI thing, someone close to the board, on the decelerationist side, literally called me and said, if we don't act now, humanity is doomed. And I'm like, you're just as bad as fucking Elon Musk, who said the same thing to me: if Tesla doesn't survive, humanity is doomed. You ridiculous fucking narcissists. Like, sorry.

It's going to be an asteroid, or the sun's going to explode, but it's not because of you. And so, that's... I do. I don't know. Do you guys feel that? I think you've hit on something important, which is that the new generation has wised up. They have taken the lessons of the past generation and they've updated their language. But at the same time, they are being quite grandiose, and they do talk in terms of existential risk.

And so I feel like it always keeps us off balance, because we're never sure exactly how seriously to take these people. I want to see new leaders. And I don't think they like the Elon Musk thing. Let me end on this. I just reread the eulogy by Mona Simpson, who is Steve Jobs's sister; they met in adulthood, because he was adopted. You've got to go back and read that. It was really a remarkable thing. He was so different.

I know he has that reputation, that he's mean. But today he looks like a really thoughtful, interesting person. He knew poetry. He knew differences. He understood risks. He didn't shy away from that. Even though he did the reality distortion field, it was about the products. It wasn't about the world. Can you imagine Tim Cook going, this is what I think of Ukraine, everybody? Right? He wouldn't, because he's not an asshole, you know, that kind of thing.

So I really urge people to read that eulogy that his sister, Mona Simpson, wrote. It's in the New York Times, actually. It's wonderful. Because it really was a different time. And I'm hoping the young people do embrace the more thoughtful approach versus this ridiculous, reductionist, us-or-them, the-man, hateful stuff. It's hateful is what it is. That's not a vision of the future. It's dystopian. It's the guy in Total Recall who ran Mars. Fuck that guy, right?

You know, so I have hopes. I'm still in love. I'm still in love. Not with you two, but yes. She's got to get in one last burn on her way out. Exactly. I like you guys. Can I just say, you guys have done a nice job with my feed, and you created a beautiful show. It's a great show. I really like your show. Thank you. Anytime you need help, boys. That means a lot. I just noticed we had Demis Hassabis on our podcast last week, and I noticed he hasn't come on yours.

So if you'd like any help booking guests, let us know. Actually, Kevin, I wonder who broke that story when DeepMind was sold to Google. I'm just kidding. I'm just messing with you. Kara Swisher, the legend. Go look it up. Kara Swisher broke that story. The book is called Burn Book. I'm better than you. I will be there after you. I was there before you. I am inevitable. I do know some journalism. I mean, you know I'm at CNN right now? Do you know I have a show now?

I mean, I literally... It's about time you got a break. Yeah, I know. Right. Kara Swisher, thanks so much for coming. This was amazing. Thank you, Kara. Thank you, boys. I appreciate it. When we come back, the Supreme Court takes on content moderation. I have written a few times over the years about the issue of content moderation on social media. Yeah. It's one of the biggest issues, it seems like, that anyone wants to talk about when it comes to the social networks.

This week is a big week in content moderation land, because the Supreme Court of the United States heard arguments in two cases that are directly related to this issue of how social networks can and cannot moderate what's on their services. On Monday, Supreme Court justices heard close to four hours of oral arguments over the constitutionality of two state laws, one out of Florida, the other out of Texas.

Both of these laws restrict the ability of tech companies to make decisions about what content they allow and don't allow on their platforms. They were both passed after Donald Trump was banned from Facebook, Twitter, and YouTube following the January 6th riot at the Capitol. Florida's law limits the ability of platforms like Facebook to moderate content posted by journalistic enterprises and content, quote, "by or about" political candidates.

It also requires that content moderation on social networks be carried out in a consistent manner. Texas's law has some similarities, but it prohibits internet platforms from moderating content based on viewpoint with a few exceptions. Yeah. So this is a really big deal. Right now platforms remove a bunch of content that is not illegal. You're allowed to insult people, maybe even lightly harass them. You can say racist things. You can engage in other forms of hate speech.

That is not against the law, but platforms, ever since they were founded, have been removing this stuff, because for the most part, people really don't want to see it. Well, then along come Florida and Texas, and they say, we don't like this, and we're actually going to prevent you from doing it. So if these laws were to be upheld, Kevin, you and I would be living on a very different internet. So I think when it comes to content moderation and its legal challenges, this is the big one.

This pair of lawsuits is what will determine how, and whether, platforms have to dramatically change the way that they moderate content. Yep. But we want to bring in some help to get through the legal issues here today. Yes. So we've invited an expert on these issues: Daphne Keller. Tell us about Daphne. She is the person that reporters call when anything involving internet regulation pops up. She is somebody who has spent decades on this issue.

She's currently the director of the Program on Platform Regulation at Stanford's Cyber Policy Center. She has done a lot of great writing on these cases in particular, including a couple of incredibly helpful FAQ pages that have helped reporters like me try to make sense of all of the issues involved. Daphne also formally submitted her own views to the Supreme Court in an amicus brief that she helped write and file on behalf of the political scientist Francis Fukuyama.

Yeah. So Daphne is opposed to these laws, we should say. She believes that they are unconstitutional and that the Supreme Court should strike them down. But this is not a view she came to lightly or recently. She's been working in the field of tech and tech law for many years. We'll link to her great FAQs in the show notes, but today, for a breakdown of these cases and how she thinks the Supreme Court is likely to rule, we wanted to bring her on. So let's bring in Daphne Keller.

Daphne Keller, welcome to the show. Thank you. Good to be here. So I want to just start. Can you just help us lay out the main arguments on either side of these cases? What are the central claims that Texas and Florida are using to justify the way that they want to regulate social media companies? So I mean, it's not that far away from the basic political version of this fight.

The rationale is, these are liberal California companies, or they were liberal California companies, and they're censoring conservative voices, and that, you know, that needs to stop. My understanding is that this is probably the only Supreme Court case in the history of the Supreme Court that had its origins in a Star Trek subreddit. Can you explain that whole thing? So this isn't literally from that case, but, so, Texas and Florida passed their laws.

The platforms ran as fast as they could to court to get an injunction so the laws couldn't be enforced. But a couple of cases got filed in Texas, and the most interesting one... I thought there was just one; I think now there are two, actually. But the most interesting one is somebody who posted on the Star Trek subreddit that Wesley Crusher is a soy boy. I had to look up what soy boy means. It's kind of like junior cuck or something. Yeah, people often call us soy boys.

It's kind of like a conservative slur meaning weakling. Yes. Yeah. As I sit here drinking my green juice. But it's just not soy milk, right? Yeah. So the moderators, it wasn't even Reddit, the moderators of that subreddit took that down. Yeah. Because of some rule that they have. I mean, it's deeply offensive to members of the Star Trek community and the soy boy community.

Yeah. And the person, I'm going to guess it's a guy, sued, saying this violates the obligation in Texas's law to be viewpoint-neutral. And it's a useful example, because it's such a total real-world content moderation dispute about some dumb crap. But the question of, like, what does it mean to be viewpoint-neutral on the question of whether Star Trek characters are soy boys? It helpfully illustrates how impossible it is to figure out what platforms are supposed to do under these laws.

Exactly. You take this very silly case, you extrapolate it across every platform on the internet, and you ask yourself, how are they supposed to act in every single case? And it just seems like we would be consumed with litigation. So, you just returned from Washington, where these cases were being argued in front of the Supreme Court. Sketch the scene for us, because I've never been. What's it like? So you start out, well, if you're me, you pay somebody to stand in line overnight for you.

Wow. Because I'm old. I'm not going to do that. But you do have to stand in line overnight for this. I had somebody there from 9 p.m., and he was number 27 in line. And they often let in about 40 people. How do you find these people to just stand in line? Skiptheline.com. Wow. Great tip for listeners. I learned something today. Rick? Did you reach out to Rick? Anyhow, so you stand around in the cold for a long time.

Then they let you in in stages. The best part, definitely, is you stand on this, like, resonant, beautiful marble staircase, and a member of the Supreme Court police force explains to you that if you engage in any kind of free speech activity, you will spend the night in jail. Very firm. Very firm. And it's also interesting to hear that there is effectively content moderation on everyone who is in the room before they even enter. They say, hey, you open your mouth and you're out of here.

Yeah. So the people making these arguments represent NetChoice, which is a trade association for the tech companies. It's sort of their lobbying group. Who else is opposed to these laws? So I should say that CCIA, which is a different tech trade association, is also a plaintiff, and they always get short shrift because they're not the first named party. But a whole lot of individual platforms and free-expression-oriented groups filed as well.

Lots of people weighing in who are interested in different facets of the issue. I see. And for those of our listeners who may not be American or may not have much familiarity with how the Supreme Court works: my understanding is that in these oral arguments, the justices rain questions down on the attorneys, who try to answer them as best they can. Then the justices go away and deliberate and write their opinions. So we don't actually know how they're going to rule in this case.

But did you hear anything during oral arguments that indicated to you which way this case might be headed? So there's a lot of tea-leaf reading that goes on based on what happens in oral arguments. And usually that's the last set of clues you get until the opinion issues, which seems likely to be in June or something like that. In this case, there's actually another case being argued in March that's related and might give us some interesting clues.

But from this week's argument, it was pretty clear that a number of the justices thought the platforms clearly have First Amendment-protected editorial rights. And it's not like that's the end of the question, because sometimes the government can override that with a good enough reason. But it seemed like there was, I think, a majority for that. But then they all kind of got sidetracked on this question of whether they could even rule on that, because the law has some other potential applications.

And they got into a lawyerly procedural-rules fight that could cause the outcome to be weird in some way. So let me ask about that, because to go back to our soy boy example: to me, if a private business wants to have a website, and they want to make a rule that says you can't call anybody a soy boy around here, that does seem like the sort of thing that would be protected under the First Amendment. You know, you get to write your own policies; that's sort of your First Amendment right.

Why is that not the end of the story here? Well, what Texas or Florida would say is that these laws only apply to the biggest platforms, and they're so important that they're basically infrastructure now. And you can't be heard at all unless you're being heard on YouTube or on X or on Facebook. And so that's different. Right.

But what is the argument from the states about why they should be allowed to impinge on this First Amendment right that these platforms say they have, to moderate content however they want as private businesses? What do the states say in response to that? They say the platforms have no First Amendment rights in the first place, that that's fake.

You know, that what the platforms are doing isn't speech, it's censorship. Or what the platforms are doing is conduct. Or, mostly they just allow all of the posts to flow, so the fact that they take down some of them shouldn't matter. A lot of arguments like that, none of which are super supported by the case law. But the court could change the case law.

I want to ask you about another conversation that came up during these oral arguments, which you referenced earlier: which platforms do these laws apply to? There's some confusion about this. And it seemed like the justices had questions about, okay, maybe if we want to set aside for a second the Facebooks and the Xes and the YouTubes, what about, like, an Uber or a Gmail? Like, maybe there should be a kind of equal right of access there.

So I look at that and I say, well, that's a good reason not to pass laws that affect every single platform the same way. But I'm curious how you heard that argument, and maybe if you have any thoughts about how the justices will make sense of which law applies to what, and what might be constitutional and what might not be. Yeah. That part of the argument, I think, caught a lot of people, including me, off guard. We did not expect it to go in that direction. But I'm a little bit glad it did.

Like, I think it was the justices recognizing, we could make a misstep here and have these consequences that we haven't even been thinking about, and so we need to look really carefully at what they might be. And in the case of the Florida law in particular, the definition of covered platforms is so broad. It explicitly includes web search, and I'm a former legal lead for Google web search, full disclosure. And it seems like it includes infrastructure providers like Cloudflare.

So it's really, really broad who gets swept in. And I reluctantly must concede I think the justices were right to pause and worry about that. Yeah, for sure. Yeah. A lot of the people I saw commenting on the oral arguments this week suggested that this was going to be a slam dunk for the tech companies that they had done a good job of demonstrating that these laws in Texas and Florida were unconstitutional.

And that it sounded, after these arguments, like the justices were likely to side with the tech platforms. Is that your take, too? I think enough of them, you need five, I think at least five of them, are likely to side with the platforms, saying yes, you have a speech right, and yes, this law likely infringes it. But because of this whole back-and-forth they got into about the procedural aspect of how the challenge was brought, it could come out some weird ways.

For example, the court could reject the platforms' challenge and uphold the laws, but do so in an opinion that pretty clearly directs the lower courts to issue a more narrowly tailored injunction that just makes the law not apply to speech platforms.

You know, there are a lot of different ways they could do it, some of which would formally look like the states winning, although it wouldn't in substance be the states winning against the platforms that we're talking about most of the time, the Facebooks, the Instagrams, the TikToks. Very interesting. Yeah. So we've talked about these laws on the show before. And I think we can all agree that there are some serious issues with them.

They could force platforms operating in these states to open the floodgates of harassment and toxic speech and all these kinds of things that we can all agree are horrible. But there is also an argument being made that ruling against these laws, striking them down, could actually do more damage.

Zephyr Teachout, who's a law professor at Fordham, recently wrote an article in The Atlantic about these social media laws, called "Texas's Social Media Law Is Dangerous. Striking It Down Could Be Worse." She's basically making the case that if you strike down these laws, you give tech giants kind of unprecedented and unrestrained power. What do you make of that argument?

So I read the brief that Zephyr filed along with Tim Wu and Larry Lessig, and it's like they're writing about a different law than the actual law that is in front of the court. And I think their worry is important. If the court ruled on this in a way that precluded privacy laws and precluded consumer protection laws, that would be a problem. But there were a million ways for the court to rule on this without stepping on the possibility of future, better federal privacy laws, for example.

It's not some binary decision where the platforms winning is going to change the ground rules for all those other laws. So you don't worry that if this case comes out in the companies' favor, they are going to be massively empowered with new powers they didn't have before? Well, I mean, if the court wanted to do it that way, if there were five of them who wanted to do it that way, then it could come out that way.

But I can't imagine five of them wanting to empower platforms in particular that way. And I can't imagine the liberal justices wanting to rule in a way that undermines, like, the FTC being able to do the regulation that it does. A big topic that comes up in discussions of law and tech policy is Section 230. This is the part of the Communications Decency Act that basically gives broad legal immunity to platforms that host user-generated content.

This is something that conservative politicians and some liberal politicians want to repeal or amend, to take that immunity away from the platforms. These are not cases about Section 230. But I'm wondering if you see any ways in which how the Supreme Court rules here could affect how Section 230 is applied or interpreted. Well, you might think it's not a case about 230, because the court agreed to review a First Amendment question, full stop.

But the states managed to make it more and more like a case about 230, and multiple justices had questions about it. So it won't be too surprising if we get a ruling that says something about 230. I really hope not, because that wasn't briefed. It wasn't what the courts below ruled on. It hasn't really been teed up for the court. It's just that they're interested in it. There are two ways that 230 runs into this. I think one would be too in the weeds for you.

But the more interesting one is, lots of the justices have said things like, look, platforms: either this is your speech and your free expression when you decide what to leave up, or it's not and you're immunized. You know, pick one. How can it possibly be both? And the answer is, no, it can definitely be both. That was the purpose of Section 230: Congress wanted platforms to go out there and have editorial control and moderate content. Literally, the goal was to have both at once.

So if the platforms have First Amendment rights in the first place, it's not like Congress can take that away by passing an immunity statute. That would be a really good one weird trick, and I'm glad they can't do that. So there are a lot of reasons that that argument shouldn't work. But it's very appealing, I think, in particular to people whose concept of media and information systems was shaped in about 1980.

You know, if the rule is you have to be either totally passive, like a phone company and transmit everything, or you have to be like NBC Nightly News and there are just a couple of privileged speakers and lawyers vet every single thing they say, then you're going to get those two kinds of communication systems.

You'll get phones and you'll get broadcast, but you will never get the internet and internet platforms, places where we can speak instantly to the whole world but also have a relatively civil forum, because they're doing some content moderation. Right. It almost sounds like there's a downside to the median age of a Supreme Court justice being 72. I don't know what the real number is; I'm sure I'll do a pickup about that later. Kevin, do you want to tell her who wrote the 230 question?

Well, you're going to out me. I'm going to out you. So this was a great question that I, unfortunately, did not write; the Perplexity search engine did. Because I gave it the prompt: write 10 penetrating grad-student-level questions for a law and policy expert about the NetChoice cases. In fairness, I did think it was a pretty good question. It was a very good question. So yeah, wow, you're really doing me dirty here. I was going to get away with that.

We wrote the rest of the questions, it's true. We just wanted a little help to make sure we left no stone unturned. Yeah. And it was a pretty smart question, as smart as one I would have come up with. And let's say the answer was way better than the question. Yes, that's true. A student of mine sent me a screenshot of something he got from ChatGPT. He'd asked for sources on some 230-related thing. And it cited an article that it pretended I had written, which did not exist.

Something about Section 230, in a non-existent journal called, like, the Columbia Journal of Law and Technology. It looked very plausible. Hey, I'm comfortable being cited in things I didn't write, as long as they're good and in prestigious journals. You know what I mean? I loved your submission to the New England Journal of Medicine. It was really good. Saved a lot of lives. So, Daphne, we've talked about how the court will, or may, handle this case.

I'm also curious how you think they should handle this. You and some other legal experts filed an amicus brief in this case, sort of arguing for... actually, let's settle this once and for all. Is it AM-icus or a-MEE-cus, Daphne? It's both. Okay, great. Come on, come on. Oh. And some people say the plural, amici. Oh. I ordered that at an Italian restaurant. I think I saw him DJ in Vegas. Right.

So, can you just articulate your position in that brief about how you think the court could, and maybe should, handle this? Yeah. So this is not how the parties have framed it. This is some wonks coming in and saying, you framed it wrong. But I do actually think they framed it wrong. So, there's kind of a standard set of steps in answering a First Amendment question. You ask, did the state have a valid goal in enacting this law?

And does the law actually advance that goal? And does it do unnecessary, you know, damage to speech that could have been avoided through a more narrowly tailored law? So, in this case, the states say, we had to pass this law because the platforms have so much centralized control over speech. Let's assume that's a good goal. We say that doesn't mean the next step is that the state takes over that centralized control to impose the state's rules for speech.

There are better next steps that would be more narrowly tailored, that would be a better means-ends fit, and in particular, steps that empower users to make their own choices, using, you know, interoperability or so-called middleware tools that let users select from a competing environment of content moderation. What would this look like? This would be like a toggle on your, you know, your Reddit app that would say, I want soy boy content, or, I don't want soy boy content.

So it could look like a lot of different things, but I know you guys have talked to Jay from Bluesky. It could look like what Bluesky is trying to do, with third parties able to come build their own ranking rules or their own speech-blocking rules, and then users can select which of those they want to turn on.

It could look like Mastodon, with different interoperating nodes, where the administrator of any one node sets the rules, but if you're a user there, you can still communicate with your friends on other nodes who have chosen other rules. It could look like Block Party, back when Block Party was working on Twitter: you could, you know, download block lists that were put together by other people. This is an app that basically lets you block a bunch of people at once.
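For readers skimming the transcript, here is a minimal sketch of the middleware idea Keller is describing: the platform transmits an unfiltered feed, third parties publish moderation policies, and each user picks which policies apply to what they see. Everything in this snippet, the names, the policies, the data shapes, is a hypothetical illustration, not any real platform's API.

```python
# A minimal sketch of user-selected "middleware" moderation.
# All names and rules here are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    author: str
    text: str

# A policy is just a predicate: True means "show this post to me."
Policy = Callable[[Post], bool]

def no_slurs_policy(post: Post) -> bool:
    # A third-party labeler might maintain and update this term list.
    blocked_terms = {"soy boy"}
    return not any(term in post.text.lower() for term in blocked_terms)

def block_list_policy(blocked_authors: set[str]) -> Policy:
    # Block Party-style: subscribe to a block list curated by someone else.
    return lambda post: post.author not in blocked_authors

def render_feed(posts: list[Post], policies: list[Policy]) -> list[Post]:
    # The platform transmits everything; filtering happens per-user,
    # according to whichever policies that user opted into.
    return [p for p in posts if all(policy(p) for policy in policies)]

feed = [
    Post("alice", "Wesley Crusher is a soy boy"),
    Post("bob", "Great episode!"),
]
my_policies = [no_slurs_policy, block_list_policy({"carol"})]
print(render_feed(feed, my_policies))  # only bob's post survives
```

The design point, as the Fukuyama brief frames it at a high level, is that moderation becomes a composable layer users choose, rather than one centralized rulebook imposed by either the platform or the state.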

Yeah, so it could look like a lot of different things, and all of them would be better than what Texas and Florida did. I wonder if you can sort of steelman the argument on the other side of this case a little bit. I was going through this exercise myself, because on one hand, like, I do think that these laws are a bad idea.

On the other hand, I think that the tech platforms have, in some cases, made their own bed here by being so opaque and unaccountable when it comes to how they make the rules governing their platforms, and, frankly, by spending a lot of time obfuscating about what their rules are and what their process is, doing these fake oversight boards that actually have no, you know, democratic accountability. It's a kangaroo court.

Come on. And I think I'm somewhat sympathetic to the view that these platforms have too much power to decide what goes and what doesn't go on their platforms. But I don't want it to be a binary choice between Mark Zuckerberg making all the rules for online speech, along with Elon Musk and other platform leaders, and, you know, Greg Abbott and Ron DeSantis doing it. So I like your idea of a kind of middle path here.

Are there other middle paths that you see, where we could make the process of governing social media content moderation more democratic without literally turning it over to politicians and state governments? It's actually really hard to use the law to arrive at any kind of middle path other than this kind of competition-based approach we were talking about before. The problem is what I call lawful-but-awful speech.

A lot of people use that term. It's this really broad category of speech that's protected by the First Amendment, so the government can't prohibit it and they can't tell platforms they have to prohibit it. And that includes lots of pro-terrorist speech, lots of scary threats, you know, lots of hate speech, lots of disinformation, lots of speech that really everybody across the political spectrum does not want to see and doesn't want their kids to see when they go on the internet.

But if the government can't tell platforms they have to regulate that speech, the speech people morally disapprove of but that's legal and First Amendment-protected, then their hands are tied. You know, then that's how we wind up in this situation where, instead, we rely on private companies to make the rules that there's this great moral and social demand for, from users and from advertisers.

And that's just extremely hard to get away from, because of that delta between what the government can do and what private companies can do. Well, some people have described our podcast as lawful-but-awful speech, so I hope that we will not end up targeted by these laws. Daphne Keller, thank you so much for joining us. Really a pleasure to have you here. Thanks for having me. Hard Fork is produced by Rachel Cohn and Davis Land. We're edited by Jen Poyant.

This episode was fact-checked by Caitlin Love. Today's show was engineered by Chris Wood. Original music by Diane Wong, Marion Lozano, Rowan Niemisto, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. If you haven't already, check us out on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork@nytimes.com with all your sickest burns.
