I had a big moment over the weekend. What was that? I wore my Apple Vision Pro on an airplane for the first time. Oh my God. Well, you do not appear to have any visible injuries on your head. It doesn't look like you were assaulted. So how did it go? Well, it went great. So I pulled it out, I put it on, I connected to the Wi-Fi, just like Joanna Stern told us last week it was possible to do. And I watched Suits in, like, IMAX, you know, full size. Like, the window that I'm watching Suits in is as big as the freaking plane. But Meghan Markle has never been larger. Exactly. But I ran into a dilemma, which was, you know how you can turn the dial for immersion? Yeah. So you can either turn it so that you can see everything around you. Yeah. And you just have this kind of floating window with Suits playing, or you can turn the dial all the way to the other side, in which case you don't see anything around you and you can pick your background. I was on the surface of Mars. So, uh, what's the case for wanting to see what's around you? So here we go. So I immerse myself. I'm on the surface of Mars. I'm watching Suits. It works great, except I missed the drink cart. Oh, no. I don't see the drink cart coming by, and I missed my drink. That's so great, because the flight attendant could have just tapped you on the shoulder and said, do you want a drink? But instead they made the right choice and said, screw this guy. If he wants to look like that, he can get his own damn drink. So this is the dilemma of flying in the Vision Pro, I've learned. Now, this is not actually a dilemma. If you want a drink, you need to engage with reality, my friends. The choice is yours. I'm Kevin Roose, a tech columnist at The New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week: a bill that could ban TikTok passes the House of Representatives. What's next? Then, we've got to ask, does Kate Middleton actually know how to Photoshop? And finally, The New York Times' Kashmir Hill joins us with an investigation into how your car may be spying on you. Rev it up. Personally, I pull over. All right, Casey, this week we have to talk about what is happening with TikTok, because it has
been a very big week for that app, and I would say for social media in general. Yeah, there have been a lot of moves over the years to maybe ban TikTok, but what we have seen this week is the most serious of those moves so far. So on Wednesday of this week, the US House of Representatives voted to pass something called the Protecting Americans from Foreign Adversary Controlled Applications Act, or PAFACA. This bill passed the House of Representatives on Wednesday with a vote of 352 to 65, so pretty overwhelming bipartisan support for this bill. It's a bill that would essentially require ByteDance, the Chinese conglomerate that owns TikTok, to sell it. So Casey, let's talk about what this bill means, how we got here, and what the implications are. But first, let's just say what's in the bill. So, two things. One, if it gets signed into law, it requires ByteDance to sell TikTok within 180 days. And two, if ByteDance chooses not to do that, the bill creates a process that would ultimately let the president decide whether the app should be banned on national security grounds. So basically, if you have TikTok on your phone and this bill passes, then 180 days from when it passes, your app will not be able to get updates anymore and it won't be available in the app stores. So let's just remind each other how we got here,
because this is not a new topic. As you remember, Donald Trump, when he was president, tried to ban TikTok or force a sale of TikTok. That came close to happening, but then sort of fell apart in the late stages of that process. There have been other attempts to ban TikTok. Montana actually passed a law banning TikTok within the state, which was blocked by a court. So this has been a long process, and a lot of different organizations and lobbying groups have been pushing for a TikTok ban for years. But why do you think this is coming to a head now? Well, in a way, there have been a series of events that brought us to this moment over the past year. TikTok was banned on federal devices. A number of states moved to say, hey, if you are a state employee, we're going to take this thing off of your phone. Behind the scenes, ByteDance was having a bunch of conversations. They tried to implement this program called Project Texas, which would silo Americans' data and create a bunch of assurances, essentially, that TikTok could not be used for evil here in the United States. And all of those things fell on deaf ears. And I think as we've begun to approach our election here in the United States, Kevin, lawmakers are increasingly concerned about what's happening. And one thing we learned is that the Biden administration, which really wants to ban TikTok, gave a classified briefing to members of Congress recently in which they made the case. We don't know exactly what was said at this briefing, but whatever it was, it seems to have really motivated a lot of members of Congress to get this thing out the door. Yeah, I think that's a good point. The Project Texas thing that we've talked about on the show before was not successful in terms of convincing Americans and American lawmakers that TikTok was no longer a potential threat to national security. I also remember you went down to a TikTok transparency center, which they were giving tours of to various reporters and
lawmakers and skeptics to try to say, look, we are being transparent. We are letting people see our algorithms so they can see there's no sort of nefarious Chinese plot to seed propaganda in there. That ultimately did not appear to be convincing to that many people either. Yeah, I mean, it put TikTok in this position of having to continuously prove a negative, which is that it had never been used to do anything bad and never would. And it's just really hard to do that. Yeah. And I think for a lot of people, including me, the assumption was that there would be this tension over TikTok, but that ultimately nothing would happen. But now it appears there is this real bipartisan effort that may actually succeed. And the question of why this is happening now is really interesting. And I think it has a lot to do, frankly, with something that we haven't talked about much on this show, which is the conflict in Israel and Gaza, which has brought new attention to TikTok, in part because there's a coalition of people in Washington who believe that TikTok is being used to turn American public opinion against Israel. And there has been some viral analysis that showed that pro-Palestinian videos on TikTok were dramatically outperforming pro-Israel videos. And The Wall Street Journal reports that this was a big issue that caught the attention of lawmakers on both sides of the aisle, who said this app is a problem, it is basically helping to brainwash American youth into not supporting Israel, which I think is dubious for all kinds of reasons. But that does appear to have been a factor here. Yeah. Around this time, there was a tool that advertisers could use to track the performance of various hashtags. And some researchers used it to see that videos with pro-Palestinian hashtags appeared at times to be getting more than 60 times more views than videos with pro-Israel hashtags. We don't know why, and there's not really any evidence that TikTok was putting its thumb on the scale. But that research really seems to have scandalized Congress, and once again drew attention to the fact that if someone at TikTok or ByteDance or the Chinese Communist Party did want to put their thumb on the
scale, there was absolutely nothing to stop them. And this has been the core problem that TikTok has had from the start, which is that even if it does nothing wrong, there is always the potential that the Chinese Communist Party could force them to. Right. So let's talk about the process from here. So the first thing that is needed to turn this bill, PAFACA, into law is that it needs to pass the House, and that has happened. Now the next step is that it needs to be introduced to and passed by the Senate. Do you think that's likely to happen? Well, actually, maybe not. There's been some reporting in The Washington Post this week that Senator Rand Paul has come out and said that he does not intend to support this bill. He thinks that Americans should be free to use whatever social media apps they want to, and he just does not see the need for this app to be banned. I would also say that Chuck Schumer, who is the Senate majority leader and a Democrat, seems pretty wobbly on this one as well. He is not committed to bringing this thing to a floor vote. And I believe, you know, talk to lobbyists on Capitol Hill and they'll tell you that Chuck Schumer is a big reason that a lot of tech bills don't get passed, because he just sort of doesn't believe that they need to be regulated much at all. So because of those two reasons, and the fact that there is still no companion bill in the Senate, yeah, Kevin, I think this one does have long odds ahead of it. But what do you think? I think it could pass the Senate. I think, you know, I've been surprised at the sort of bipartisan support. We've seen a few lawmakers come out vocally in defense of TikTok, or at least in opposition to forcing it to be sold. But the majority of Congresspeople have signaled that they would support this. If it is passed by the Senate, it would then move to President Biden, who would need to sign it. The White House has indicated that he would. And from there, the bill would need to survive legal challenges, which ByteDance has signaled it will mount if this bill is passed. They will try to stop this in court. So the bill would need to overcome that challenge. But say all of that happens, and this bill is passed and holds up in court. It would give ByteDance about six months to undertake a sale of
a massive tech product that it really doesn't want to sell. Yeah. And when you talk to the folks at ByteDance, they will say, make no mistake: this is not about a sale. This is a de facto ban of TikTok in the United States. And I believe the reason that they are saying that is that the Chinese government has given indications that it will not allow ByteDance to divest TikTok. And so ByteDance will effectively have no choice but to stop offering the app in the United States. Yeah. So TikTok has obviously not taken the news of this bill lying down. They have mounted an aggressive lobbying campaign in Washington. They have a huge lobbying team there that is presumably, you know, fanning out over Capitol Hill trying to convince lawmakers to drop their support for this bill. They also have started mobilizing users. So this week, TikTok sent a push notification to many, many of its United States users that urged them to call their representatives, and it called this law a, quote, total ban on TikTok, which is not totally true. It doesn't ban TikTok. It just forces ByteDance to sell it. But the company wrote in this notification that this bill would, quote, damage millions of businesses, destroy the livelihoods of creators across the country, and deny artists an audience. Did you get this notification? I did not, because I do not enable TikTok to send push notifications, because like every other app it sends too many. Right. So this was shown to users who opened their apps. And it did apparently result in a flood of calls to congressional representatives and their offices. One congressman, Florida's Neal Dunn, told the BBC that his office had received more than 900 calls from TikTokers, many of whom were school-aged children, and some of whose extreme rhetoric had to be flagged for security reasons. Which, I don't understand, were they, like, threatening the congressperson if
TikTok were banned? I'm going to assume that yes, they were absolutely threatening the congressperson. So, a flood of kids contacting their representatives to complain about this bill. What do you think about this strategy? Well, we have seen it used effectively before. Uber would do this in cities that were threatening to ban Uber. They would show information in the app saying, hey, you know, why don't you call your representative here? We've seen Facebook and Google put banners in the app talking about issues of concern to them, net neutrality and other things. And this has always been pretty non-controversial, actually. And it has demonstrated that these apps do have really dedicated user bases that want to keep using these apps. And so I was laughing this week, Kevin, when Congress was just so outraged at the fact that some of their constituents called them to express an opinion about a bill that was before them. Right. I do think it was an interesting strategy, in part because one of the charges that TikTok and ByteDance are trying to dodge is this idea that they could be used as, like, you know, secret tools of political influence. And one of the ways that they are responding to this is by becoming a non-secret tool of political influence, like literally trying to influence the political process in the United States with these push notifications. But that aside, I do think this is a playbook we have seen before from other companies that are being challenged by new regulation. But like, I just want to say again, is the message from Congress that they don't want to hear from their constituents about this? Is it, like, only call us if you're a registered lobbyist? Like, is that what they're telling us? Because that kind of sucks. Yeah, it kind of sucks. But it also is the case that these congressional offices are not set up to handle the volume of incoming calls that they got. I don't know. You try getting 900 calls from angry teens. What are you going to do? Well, why else do the phones exist in the offices of Congress, if not to listen to constituent feedback? Is it like, oh, your DoorDash order is at the front door? Is that the message that wasn't getting through? I don't understand. I don't know. I think you should be able to text your congressperson, because, you know, they text us so often when they're fundraising or trying to get elected. I think it would be turnabout is fair play. I agree with that. So another wrinkle in this TikTok story is what has been happening with Donald Trump, because Donald Trump, obviously no fan of TikTok, under his administration tried to force the app to be sold, much like the Biden administration has done this time around. But he's flip-flopped. He did a TikTok flip-flop. And he now says that he does not support banning TikTok in the US. He told CNBC on Monday that, quote, there's a lot of good and a
lot of bad with TikTok. And he also said that if TikTok were banned, it would make Facebook bigger, and that he considers Facebook to be an enemy of the people. So Casey, why is Donald Trump now changing his tune on TikTok? Well, look, you know, you're always on shaky ground when you try to project yourself into the mind of Donald Trump. But we know a couple of things. One is that there is this billionaire with the incredible name of Jeff Yass. Yass, queen. Jeff Yass is a very rich person and big donor who recently befriended Trump, and Yass's company owns a 15% stake in ByteDance. And he, we believe, has been lobbying Trump behind the scenes, and the thought there, the suspicion there, is that there is some sort of quid pro quo. It's like, hey, you leave TikTok alone, I'm going to be a big backer of your campaign at a time when you desperately need money. Yes. So Donald Trump has officially gotten Yassified. This is a story about the Yassification of Donald Trump. Another factor here is that a lot of people in Donald Trump's camp, including Kellyanne Conway, his former advisor, have been lobbying him in TikTok's defense. The Washington Post also reported recently that part of Donald Trump's antipathy toward Facebook in particular has been fueled by watching a documentary about how Mark Zuckerberg's donations to election integrity causes in 2020 helped fuel his defeat. According to The Washington Post, Donald Trump watched this documentary about Mark Zuckerberg's political donations, got very mad about it, and has ever since been opposed to Facebook on any grounds. Obviously, if TikTok is banned in the US, one of the main beneficiaries of that would be Facebook and Meta, because they have competing short-form video apps like Instagram Reels. Yeah. And we should say the money Zuckerberg donated was to support basic election infrastructure so that people could vote. These were not partisan donations. These were donations to local election offices to make sure that the election could just run smoothly. And because the Republicans lost, it has infuriated them ever since. So this is a huge talking point on the right, that Mark Zuckerberg is an enemy of the people because he supported people being able to vote. So I just want to say that real quick. Now, what I will also say, though, Kevin, is that Trump is right that one of the two primary beneficiaries of such a ban is Meta. And we've spent a long time now in this country worrying that Meta is too big and too powerful. And this would absolutely make Meta bigger and more powerful. And the other one, presumably, is YouTube, right? It's Google and YouTube. YouTube is already the most used app among young people in
the United States. And if you take away TikTok, you better believe that the average time they spend on YouTube is about to go up. So one question that I have for you that I don't know the answer to is: do we know if Meta and Google, which owns YouTube, are doing any kind of lobbying around this bill? I remember several years ago, there were stories about how Meta had hired a GOP lobbying firm called Targeted Victory, basically to try to convince lawmakers that TikTok was a unique danger to American teens. What are TikTok's rivals doing this time around? So I don't have any specific knowledge of what they're doing in this case. But for the exact reason that you just mentioned, I do assume that their lobbyists are in the ears of lawmakers saying, hey, this is the time to get rid of this thing, this thing is dangerous. Meta is always scheming to eliminate its rivals whenever it can. This is a really juicy opportunity. Why else would you pay the lobbyists that they pay if you weren't telling them to go hard after this? Right. So let's talk about the core argument here, that TikTok needs to be banned or sold because it is a threat to national security. And maybe a good way to do this would be for us just to outline, like, what is the best possible case for banning TikTok? And then we'll talk about the case against it. But let's really try to steelman the worries that people have here. What is the best possible argument you can imagine for banning TikTok? I would say a few things. One is essentially a fairness argument. China does not allow US social networks to operate there, even though we allow their social networks to operate here. And I think that there is a question of essentially fair play, right? China gets this playground where, if they wanted, they could push pro-China narratives using these big apps that they have built in the United States. The United States does not have the same opportunity inside China. So that's one thing I would say. The second thing is that I think the data privacy argument is real. We have had Emily Baker-White on this show. ByteDance used data about her TikTok account to surveil her and other journalists because they were worried about what she was reporting about their company. So this question of, could ByteDance use Americans' data against them? It's not abstract. It's already happened. The company's hands are not clean. How many
Americans do you want that to happen to before you take action? So those are two things that I would say. What do you think? What are some reasons why you might want to ban ByteDance? Well, one argument is just that we already have laws in this country that restrict the foreign ownership of important media properties. Like, a Chinese company would not automatically be allowed to buy CNN or Fox News tomorrow if it wanted to. It would have to basically go through an approval process with the FCC, because our laws limit foreign ownership of those kinds of broadcast networks. Rupert Murdoch, in fact, basically had to become an American citizen before he could buy Fox News, because that was the law on the books then and it's the law on the books now. So in some sense, it is strange that we would allow a Chinese company, a company owned by an adversary of the United States, to own a very important broadcast medium in the United States. We don't allow it on TV. Why would we allow it on smartphones? So that's one argument there. Another argument for banning TikTok is essentially that the problem with an app this popular with Americans being controlled by an adversary of the US is not that it already has engaged in sneaky attempts to sway American public opinion, but just that it could. We've now seen just this week that when TikTok wants to, it can try to get a bunch of American young people to call their congresspeople. That is a political influence campaign, and it's one that TikTok itself was behind. And you have to think, like, what could TikTok do in the upcoming election? What would it do in the case of a war between China and the US, if it can mobilize American citizens to oppose a TikTok ban? Consider what it could do if, for example, China invaded Taiwan. What it could do if there was a war between a Chinese-backed state and the United States or its allies. There are so many ways that an app this powerful in the hands of an adversary could be a danger to US interests. And so while some of the more extreme arguments for banning TikTok on national security grounds don't really register with me, for me it's more about what could happen in the
future? How could this thing be used in a way that works against American interests? Well, at the same time, Kevin, there are reasons why I think it would be bad to ban TikTok, and we should talk about those. Yeah, so what are the most convincing reasons not to ban TikTok, or to oppose this bill? So one big reason is that you're not addressing the root of the problem here. We don't have data privacy laws in this country. If you're worried that your data might be misused by TikTok, I guarantee you there are a lot of other companies that are actively misusing your data and profiting from it. In fact, we're going to talk about that later in this very show, right? So this issue goes far beyond TikTok, and I'm continually surprised that Americans aren't more upset about all the ways that their data is being misused today. And my worry is that if we ban TikTok, Congress will essentially wash its hands of the issue, even though Americans are going to continue to be actively harmed by things that, at least when it comes to TikTok, are still mostly theoretical. Yeah. The other argument against this bill that I've found compelling is one that organizations like the ACLU and the Electronic Frontier Foundation have been making. Both of those groups oppose this bill in part because they say that what's happening on TikTok is First Amendment-protected speech, and that essentially, by banning this app because you don't like what's being shown to people on it, you are not just punishing a foreign government. You are also punishing millions of Americans who are engaging in constitutionally protected speech on this app. And moreover, these organizations say this just gives a blueprint and a playbook and a vote of support to any authoritarian government around the world that wants to censor its own citizens' speech on social media. If you are a dictator in some country and you don't like what people are sharing on an app, you can now point to this bill and say, look, the US is banning social media apps because it deems them a threat to national security, so we are going to ban an app that we don't like as well. Yeah, and I think that concern is particularly pointed given that it really does seem like a big factor motivating Congress here is that the content on TikTok is too pro-Palestinian
for them. That really does seem to be one of the big reasons why this bill gained so much momentum so quickly: something about specific political speech. So I do think the courts will weigh in there. I think there's one other thing that is worth saying about why I think banning TikTok could be bad, which is that it takes the other biggest platforms in this country and makes them bigger and richer and more powerful. Right. So Meta and Google and YouTube are the other platforms where all sorts of video is being uploaded every day. That is where more video is being consumed. Instagram had more downloads than TikTok last year. YouTube is the most used app among young people in the United States. And when you get rid of TikTok, an app that has 170 million users a month, they are all just going to go spend more time on YouTube, more time on Instagram and other Meta properties. So it's going to be hugely beneficial to those companies. And before the TikTok controversy came along, Kevin, people like you and me spent most of our time worrying that Meta and Google were too rich and too powerful. So this is just something that worsens that problem even more. Totally. And we know that this is not hypothetical, because TikTok has actually been banned before, in India. It was banned in India in 2020. And we saw what happened after that ban took effect: there were some little homegrown Indian apps that popped up to capture some of the audience, but the vast, vast majority of users just started using Instagram Reels and YouTube instead. Those companies got bigger in India because TikTok was banned. And I think it's fair to assume that the same thing would happen here. And for all kinds of reasons, you might not want that to happen if you're a regulator. Yeah. So look, we'll see what happens here. I do think that this bill still has a long road ahead of it. You know, again, we have not passed a major tech regulation in this country since the big techlash began in 2017. So if this happens, it would truly be unprecedented in the modern era. But at the same time, that House bill moved faster than basically anything we've seen during that time when it came to regulating
big tech. And so this is something we should keep our eyes on. Right. So Casey, weighing all of these arguments for and against banning TikTok, where do you come out on this? What is your preferred outcome here? I have to say, and it makes me uncomfortable to say it, but I do sort of lean toward the side of banning it. Really? Yeah. You know, again, that fairness thing bothers me: the fact that we can't have US social networks in China, but they can have social networks here. There's just kind of an imbalance there. We have rules in this country around media ownership by foreign entities, which you just described for us. I don't understand why you would have those rules for broadcast networks and newspapers, which arguably don't even matter anymore, and not have them for the internet, where maybe the majority of political discourse takes place now. So this just feels like a moment where we need to update our threat models, update our understanding of how the media works, and say, hey, it doesn't actually make sense for there to be something like this in the United States. And I say that knowing that if Congress follows through, we are going to get rid of a lot of protected political speech. We are going to make Meta and YouTube bigger and more powerful in ways that make me totally uncomfortable. So I hate the options that I have here. But if you were to make me pick one, that's probably the one I would pick. How about you? Yeah, I think my preferred outcome here would be that ByteDance sells TikTok to an American company, to, you know, Microsoft, or, remember when Oracle and Walmart were going to team up to bid on TikTok back during the Trump days? Something like that, I think, would actually assuage a lot of my fears about TikTok as a sort of covert propaganda app for the Chinese government, while at the same time allowing it to continue to exist. If that doesn't happen, I think I'm with you. I think I am more and more persuaded that banning TikTok would be a good idea, in part because of the reaction that we've seen from ByteDance and TikTok just over the past few weeks as this bill has made its way through Congress. We have not seen them, you know, engaging in good faith. We have seen them exaggerating, calling this a total ban. We've also seen pushback from ByteDance, and presumably from the Chinese government too, which indicates to me that they do view TikTok as a strategic asset in the United States and that they do not want to give it up. So for all those reasons, I was skeptical of a TikTok ban and now I think I could get behind it.
Well, it sounds like in the meantime, then, if there are any TikToks you love, you might want to go ahead and save those to your camera roll. When we come back, palace intrigue finds its way to Hard Fork, from a literal palace. So today we have to talk about the biggest story on the internet this week, which is what is happening with Kate Middleton, also known as the Princess of Wales, and specifically what is happening with a photograph that she posted and the many questions it has raised about the fate of our shared reality. Yes. So if you've not been keeping up with this story, let's just give a basic timeline of what's been happening. And truly, everyone has been keeping up with this story. So make it snappy, Roose. Okay. So basically, about two months ago, on January 17th, Kensington Palace released a statement notifying the public for the first time that Princess Catherine had gone into the hospital for planned abdominal surgery. And Princess Catherine is Kate Middleton, because once you're a princess, you get a bunch of new names. Right. Technically, the statement said: Her Royal Highness the Princess of Wales was admitted to hospital yesterday for planned abdominal surgery. This statement comes as a surprise to people who watch the royal family. No one had said anything about her having abdominal surgery. She had great abs. Yes. And the royal family is pretty withholding about personal details, so people sort of roll with it. Then, a couple of days later, we get the start of the conspiratorial talk. A Spanish journalist named Concha Calleja, who has written a lot about the royal family over the years, is also something of a conspiracy theorist herself. She wrote a book suggesting that Michael Jackson had been murdered, to give you a sense of where this person falls on the truth-versus-fiction spectrum. Exactly. So, not exactly Walter Cronkite, but this report is widely talked about. She reports that Kate Middleton was actually admitted to the hospital several weeks before Kensington Palace said she was, and that she wasn't doing very well. About a week later, the same Spanish journalist suggests that actually the Princess of Wales is in a medically induced coma. Following this report, a spokesperson for Kensington Palace responds, basically saying this is all total nonsense, from what I understand.
It is quite rare that a royal family spokesperson will comment on what are essentially internet rumors. So the fact that even they denied it then maybe raised some suspicions. Right. So then, following this denial from Kensington Palace, there are a bunch of seemingly small things that just kind of tip people further into the land of conspiracy theories. Prince William pulls out of a planned memorial service that he was going to attend at the last minute, citing a personal matter. Then, a few weeks later, on March 4th, paparazzi take some grainy photos of the Princess of Wales with her mom, driving in a car. And people immediately start to think, this isn't Kate Middleton, this is a body double. People come up with all kinds of theories about why this is not actually the Princess of Wales, this is someone pretending to be the Princess of Wales, what has happened to the Princess of Wales? So the suggestion is this was essentially staged for the benefit of the paparazzi. Exactly. And then, just a couple of days ago, we got the biggest turn of events in this saga so far, which was that on Sunday, which was Mother's Day in the UK. Side note: I didn't know that they had a different Mother's Day than we have here. Well, it's because they have a different word: they call them mums. That's true. So on Mums' Day, Kensington Palace released a photo of Princess Catherine with her kids. And it was signed with a C, which is what Princess Catherine does with all of her social media posts. And this photo was presumably intended to dispel these rumors and say, look, here she is looking happy with all of her kids surrounding her. Instead, this totally backfires, because people start pointing out that this photo has been pretty obviously manipulated. I mean, the forensic analysis that was immediately applied to this photo, I truly do not remember anything like it. And on one hand, yes, it's obvious that people are going to be poring over this photo for any signs of strange things. But man, did people do this in a hurry. Yeah. This photo got the full Reddit treatment. People noticed that the kids' hands were oddly positioned. There was clearly some editing done on one of the daughter's sleeves. Princess Catherine was not wearing any of her wedding rings. And there was one window pane that looked kind of blurred. There was a zipper that was misaligned. And following the uproar about this photo, the major photo wires that distribute photos from Kensington Palace to the news media issued what is known as a kill order. They killed it. They killed it. This is like the equivalent of, in the old newspaper days, when you realize you're about to make a mistake, and so you'd run down to the printing presses and you would say, stop the presses, right? This just does not happen all that often, Kevin, that we see one of these kill orders. Yeah. So basically, a kill order is something that, you know, Getty or the AP or another news agency can issue to people who might use their photos, saying, do not use this photo anymore. In this case, these agencies said, it appears that this photo has been manipulated, and so we do not think you should use it anymore. And this rarely happens.
Mia Sato, who's a reporter for The Verge, looked into this and reported that someone at a wire service told her they could count on one hand the number of kill orders they issue in an entire year. So this is a big deal. So shortly after this kill order came out, Kensington Palace released another statement, this one supposedly from Kate as well, also signed with a C: quote, Like many amateur photographers, I do occasionally experiment with editing. I wanted to express my apologies for any confusion the family photograph we shared yesterday caused. I hope everyone celebrating had a very happy Mother's Day. C. So they also didn't release any other photos or put out the unedited version of the manipulated photo. And this statement did not do a good job of placating the critics who believe that something more is going on. No, this is a real raise-more-questions-than-it-answers moment, because, you know, if she wanted to, she might have said at least one or two things about how she edited the photo, right? Like, if there was some particular thing, she could have said, oh, you know, my daughter's sweater didn't look quite right, and so I wanted to see if I could fix that, and obviously that would make us think about it differently. That was not what happened here. Right. This was not a simple case of taking out some red eye, or maybe using the blur tool to cover up a zit on your face or something like that. Right. Or trying to smooth out your skin or make you look younger, like, you know, you would edit one of your photos. So we should say, to sort of close the loop on the saga of the Princess of Wales, there are a lot of theories going around out there on social media about what has happened
to the Princess of Wales. And what's the most irresponsible one? Well, the one that I've seen going around that I think is the funniest was someone actually compared the timeline of her disappearance with the production schedule for The Masked Singer, and speculated that she's been hiding because she's on The Masked Singer. I don't think that is probably the real answer here. But, you know, it's none of our business where the Princess of Wales is. Well, you know, it is sort of a taxpayer-funded position over there, right? So, like, arguably there is some public interest in, you know, how is the Princess of Wales doing? Sure, but I would say it's none of our business as the hosts of this show. Exactly. Exactly. And we fought a war to not have to care about the whereabouts of the royal family. I would say we fought a war to only care about them when it was interesting. Okay. You know what I mean? So you may be wondering, why are we talking about this? This is just some spurious gossip about the royal family. Is this really a tech story? And Casey, what is our answer to that? Well, look, you're right, Kevin, that generally speaking, when was the last time a member of the royal family was seen in public is not typically something that we're interested in. But there were so many weird things about this photo that it actually did wind up squarely in our zone, because what do we often talk about here? We talk about media being manipulated. We talk about our shared sense of reality. How do we separate truth from fiction? And all of a sudden, a very frivolous story had raised what I would say are actually some pretty important questions. Yeah. So the first thing that people surmised from this was that this may have been AI manipulation in some way, because it is 2024 and a lot of AI image manipulation is going on. And it's admittedly very funny to think that the palace was like, gosh, we need to put out a photo of Kate, and so they went into ChatGPT and were like, show us the Princess of Wales and her family smiling for a Mother's Day photo. Right. So, it does not actually appear that this was due to AI. Obviously, AI image generators have well-documented problems. Sometimes they put extra fingers on your hand. Sometimes they make your eyes look weird. Sometimes they put
extra hands on your fingers. But it seems pretty clear at this point that this was not AI. In fact, people have been examining the metadata of this image and have concluded that it was shot on a Canon 5D Mark IV camera and then edited in Photoshop for Mac. So this is not a generative AI scandal, it appears.
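(To make concrete how that kind of sleuthing works: camera files typically carry EXIF metadata, including the camera make and model and a Software tag that editing tools like Photoshop write into the file. Below is a minimal, illustrative Python sketch using the Pillow library; the file name is a placeholder, and this is not the specific tooling anyone used on the palace photo.)

```python
# Illustrative sketch: inspecting a JPEG's EXIF metadata with Pillow.
# The path below is a placeholder; real forensic work involves much more than this.
from PIL import Image
from PIL.ExifTags import TAGS

def provenance_tags(path: str) -> dict:
    """Return the EXIF tags most relevant to where a photo came from."""
    exif = Image.open(path).getexif()
    named = {TAGS.get(tag_id, str(tag_id)): value for tag_id, value in exif.items()}
    # "Make"/"Model" identify the camera; "Software" often records the editor,
    # e.g. a string like "Adobe Photoshop ... (Macintosh)".
    return {key: named.get(key) for key in ("Make", "Model", "Software", "DateTime")}

if __name__ == "__main__":
    print(provenance_tags("family_photo.jpg"))
```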
But this actually is a really important piece of metadata, Kevin, because something that has happened over the past several years is that the question "what is a photograph?" has gotten very complicated. Our friends over at The Vergecast talk about this a lot, because when you take a photo with your smartphone, it's taking many, many images at once and then creating a composite out of them. And so for any image that you're seeing in your phone's camera roll these days, there's a good chance that it's not actually what the camera saw. It is a bit of a generative AI experience that you're getting now with every single photo. So if the metadata had come back on the Kate Middleton photo saying it was shot on an iPhone 15, in some ways this would be a more complicated question. Yes. It's not just that people can now easily edit photos on their smartphones. It's that the actual cameras built into smartphones these days often have AI manipulation built into them. So one example: the new Google Pixel phone has a feature called Best Take, where basically, say you're posing for a photo with your family, and in the millisecond when one photo is taken someone is blinking, and in the next millisecond someone else is blinking or not smiling. You can essentially have it take a bunch of photos, pick out the best version of each person's face, and kind of smush that all into one composite image. And that all happens without the user having to do anything proactive. That's just what the basic camera on the phone does. We also know that there's this whole field of what's called computational photography, which is basically building algorithms and AI into the way that cameras actually capture images. So for example, on the iPhone, if you use portrait mode, that portrait mode is using AI to do things like segmentation: to say, this is part of the background, this should be blurry; this is the subject of the photo, that should be crisp and clear. And that is essentially a form of AI manipulation that is taking place inside the iPhone camera itself. Yeah, all of this is just to say that there actually is a lot of AI manipulation going on these days in every photo that you're taking with your iPhone. And of course, we think of this as generally benign, because this is not inventing children that you don't have.
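(For a rough sense of the segmentation-and-blur idea described above, here is a toy Python sketch using Pillow. It is not Apple's or Google's actual pipeline: real phone cameras use learned segmentation models and depth data, while this sketch just assumes a subject mask already exists, and the file names are placeholders.)

```python
# Toy illustration of portrait-mode-style compositing with Pillow.
# Assumes a grayscale mask image where white marks the subject to keep sharp.
from PIL import Image, ImageFilter

def fake_portrait_mode(photo_path: str, mask_path: str, out_path: str) -> None:
    photo = Image.open(photo_path).convert("RGB")
    mask = Image.open(mask_path).convert("L").resize(photo.size)
    blurred = photo.filter(ImageFilter.GaussianBlur(radius=8))  # stand-in "background"
    # Where the mask is bright, keep the sharp photo; elsewhere, use the blurred copy.
    Image.composite(photo, blurred, mask).save(out_path)

if __name__ == "__main__":
    fake_portrait_mode("photo.jpg", "subject_mask.png", "portrait.jpg")
```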
It's not usually putting a smile on your face if there wasn't one there. Although, if your eyes were closed, it will open your eyes for you. Right. So I just think that's good to keep in mind as we move into this new era: the images that we're seeing are not the Polaroids that we were taking in elementary school, my friend. Yeah. So I would say the biggest angle that got me interested in this story is just what it means for what people are
calling the post-truth landscape, right? We've had lots of people writing their takes on this this week, talking about how this is sort of the canary in the coal mine for this new era of post-truth reality-making that we have entered into. Charlie Warzel had a good piece in The Atlantic this week where he writes, quote, for years, researchers and journalists have warned that deepfakes and generative AI tools may destroy any remaining shreds of shared reality. The royal portrait debacle illustrates that this era isn't forthcoming. We're living in it. So Casey, do you think this portends anything different about our social media landscape, or the way that we make or determine what's true in this new era? I think it's definitely a step down that road. But at the same time, I think that if the sort of worst comes to pass, we'll actually look back and be nostalgic for this moment, Kevin, because this was a case where we could just look at the photo with our own eyes and know with total certainty that the image had been doctored, to the point that the palace had to come out relatively soon afterwards and say, yeah,
you caught us. Our expectation, I think, is that within a couple of years, the palace might be able to come up with a totally convincing image of the princess with her children. And, you know, people who study AI maybe will be able to determine, okay, yeah, this was created with generative AI tools, but maybe they'll say we actually can't tell one way or another. That is the truly scary moment. But is this a step on the road to getting there? Absolutely. Yeah. I mean, for me, the one thing that surprised me is just how quickly people jumped to skepticism when this photo was released. It feels like in the span of about 10 years we've gone from "pics or it didn't happen" to "pics, and I'm going to study the pics to tell you why it didn't happen." It's like the mere existence of photographic evidence is not enough to assuage people's concerns about whether something is real or fake. In fact, in this case, putting this photo out just fueled the
speculation more. Absolutely. Now, one, I think, funny subplot here, Kevin, and I wonder if you have an opinion on this, is: does the Princess of Wales use Photoshop? Like, some people saw the statement and said, that's absolutely ridiculous. If you're the Princess of Wales, there's no way you're going to sit down and learn how to use Photoshop. I can sort of see it from the reverse, though. You're cooped up in that palace all day. You have your ladies-in-waiting taking care of most of the household affairs. Maybe you've shot a few cute pictures of the kids, and you say, oh, I don't like the way that my daughter's sweater looks, I'm going to see if I can clean that up. So in a way, I find it totally plausible that the princess would learn how to use Photoshop for fun. What do you think? Yeah, I think if you told me that, like, you know, the king, who's elderly, yeah, King Charles, was using Photoshop, I would have said, I'm going to need to see some more proof of that. But, you know, Kate Middleton, she's in her 40s. She's a mom. Moms like to edit photos of their kids. You know, have I ever edited a photo of my kid to, like, you know, remove some crud from his shirt?
Yeah, you know, I'm guilty. All right. So this is sort of a coin flip, we think, whether Kate Middleton knows how to use Photoshop. Yeah, I can see it. I can also see reasons for skepticism. Another argument that I thought was interesting, that I wanted to talk to you about today, is something that Ryan Broderick wrote about in his newsletter Garbage Day, in a post that was titled "Misinformation is fun." He's basically saying, look, this happens all the time now. Something comes out. People get upset or nervous about it. They accuse it of being fake. We get all these expert researchers and reporters coming out to fact-check it and say, actually, this is fake, this isn't true. But his basic point is, look, people are missing that this stuff is fun. It's fun to speculate. It's fun to spread rumors. It's fun to try to connect the dots on some complicated conspiracy theory. This is a piece that people miss when they write about conspiracy theories, as both you and I have done over the years. Yeah. And it's important, because all of these platforms that set out, often for good reasons, I think, to eliminate misinformation are fighting an uphill battle. And the uphill battle is that their users love this stuff. Their users want to spend time on their platforms arguing incessantly about the fate of Kate Middleton. Right. And do you think that the platforms have a responsibility here? I mean,
in this case, this was not really a platform story. This photo was disseminated, I guess. It was put on Instagram and maybe other social media networks. But it was really the photo wires and the photo agencies stepping in and issuing this kill order that turned the volume up on this story considerably. So what do you think this says about who is responsible for gatekeeping here and telling us whether an image is fake or not? Well, the photo wires here are a great example of an institution that does still have some authority and does still have some trust, and those are becoming fewer and further between in this current world. So I'm very grateful that we have folks like that who can come in and say, oh yeah, this is obviously doctored, get it the heck out of there. There will probably be examples, in our election for example, where that just is not the case, and there is no authority that can come in and say definitively, one way or another, whether this was doctored or not. Yeah. And I just think this whole discussion about doctored imagery is going to get so much harder as more and more cameras just come, by default, with AI tools installed in them. So, like, you know, five years from now, is it even going to be possible to take a quote-unquote real photo, or is every camera and smartphone on the
market going to have some kind of AI image processing or improvement built into it? I've actually hired an oil painter just to create my likeness, because it's the only way that I can trust that I'm seeing my own face, Kevin. I like that. Yeah. So Casey, I remember a few years ago, when I was doing a lot of reporting on crypto and blockchain projects, one of the things that people would pitch to me periodically in that space was, here's a way to use the blockchain to keep a kind of uneditable record of the metadata that tells the provenance of an image, so that you can have a record on the blockchain that says this image is real, it was not doctored or manipulated in any way, and here's how anyone can go prove it. So does this scandal and the associated drama make you think that something like that is actually necessary? So look, I don't like solutions that are on the blockchain. I'm not going to say that no one could ever come up with a way to do that that would be fast and efficient and worthwhile. I just don't think it's possible to do that today. But there are initiatives trying to verify the authenticity of images on the internet. So there's something called the Coalition for Content Provenance and Authenticity. This is a consortium of a bunch of tech companies, including Adobe, Google, Intel, and Microsoft. And they are trying to come up with some kind of standard so that you can embed in your photo the idea that this image was taken with a camera and was not just spat out by an AI generator. Now, even in a world where that exists, people are still going to share these photos on social media. They're still going to have endless debates. But it does empower gatekeepers, right? There's some image that, for whatever reason, is playing a role in an election. And a Meta or a YouTube, or maybe even a TikTok if that still exists, if they can look at the metadata and say, oh yeah, this was just obviously created with generative AI, maybe then they're able to attach a warning label to it. Maybe then they're able to fact-check it. And that's really useful, right? Newspapers,
other journalistic outlets will be able to do the same thing. Now, it's still a little bit tricky. Can you actually come up with a metadata standard that isn't easily removed from the image? There's stuff to be figured out. But if you want to know how I think we will solve this problem, it's going to look something like that.
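(For intuition about what such a standard involves, here is a bare-bones Python illustration of the underlying idea, not the actual C2PA specification: hash the image bytes and cryptographically sign a small manifest over that hash, so that any later edit, or a stripped or forged manifest, becomes detectable. The key handling below is deliberately simplistic and hypothetical.)

```python
# Minimal sketch of signed provenance metadata (not the real C2PA format).
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # stand-in for a real private key and certificate chain

def make_manifest(image_bytes: bytes, creator: str) -> dict:
    claim = {"creator": creator, "sha256": hashlib.sha256(image_bytes).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim,
            "signature": hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()}

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    signature_ok = hmac.compare_digest(expected, manifest["signature"])
    image_ok = manifest["claim"]["sha256"] == hashlib.sha256(image_bytes).hexdigest()
    return signature_ok and image_ok  # False if the pixels or the claim were altered
```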
Yeah. So a lot of people are saying that this incident has told them, or informed them, that we are sort of headed into this post-truth dystopia. I actually took a different lesson from it. This whole thing has sort of made me more optimistic, because it has shown that people actually do care what's true. People actually do care whether the stuff they are relying on in their social media feeds is realistic, whether it represents reality. And they are willing to go to extravagant lengths, including picking apart, pixel by pixel, photographs of the royal family, to determine whether what
they are looking at is real or not. I think that's a really smart point, because I do think that there is a kind of defeatism that creeps into these discussions. It's like, oh, we're going to have an info-apocalypse and we'll never know what's real anymore. But I think what you said is exactly right, that we have a profound need to know what is true and false. And people are clearly ready to volunteer a significant part of every day to figuring out what is true or false if the story is important enough. Yeah. I think we should have a sort of coalition of amateur sleuths. Like, instead of picking apart these photos, well, you know, maybe that's a good use of their time for one week. But these people clearly have time on their hands. They clearly have expertise in digital sleuthing. Let's put them to work doing something more socially beneficial. Absolutely. Have them solve some cold cases. Yeah. Take a lesson from Encyclopedia Brown, Nancy Drew, the Hardy Boys. Basically everything I read when I was nine. Those kids were onto something. When we come back, why your car is snitching on you. It's driving me crazy. I'll allow it. Kevin, this podcast needs an infusion of cold hard cash. Cash? Kashmir Hill. Yes. Today we're talking with my colleague Kashmir Hill, who writes about technology and privacy for The New York Times. She's got a new story out, and it's a banger. This is one that really caught people's attention, and for good reason, because the more of this story that you read, the higher your blood pressure goes. It's true. This is a story about cars and all the data that cars collect about their users and drivers, and how cars have become kind of a privacy nightmare. It's a really good story. It's about some of these new programs that car companies have installed in their cars that allow them to remotely collect data and then not just keep that data for themselves, but actually sell it to places like insurance companies, which can use it to say, well, Casey's a very bad driver, he braked 72 times yesterday for some reason, so we're going to raise his premiums. That famous sign of being a bad driver: braking.
The broader point is that cars are now basically smartphones on wheels. Snitches on wheels is what they are, and they are increasingly being used to keep tabs on the people who drive them, with all kinds of consequences for consumers. Yeah. You truly may have been roped into one of these programs without even knowing it. And so if you have your car connected to the internet in any way and you haven't yet read Kash's story, I promise you, this is one you're going to want to listen to. And when Kash started looking into this, she learned about this whole hidden world of shady data brokers and companies that are selling data from your car to insurance companies. And today we wanted to talk to Kash about what she found out in her reporting, what she thinks is going to happen next, and whether there's any hope for us in this new world of connected cars, or if we're all just destined to be surveilled and snooped on by these things that we drive around. So today we're turning Hard Fork into Card Fork. That doesn't work at all. It's got a little something to it. Kash, welcome back to Hard Fork. Thank you. It's wonderful to be on this award-winning podcast. Thank you. Thank you. We did win an award. Did you guys win the Oscar for Best Technology Podcast? The early edition. Is the iHeart Podcast Award the Oscar of podcast awards? Many, many people are saying it is. In that case. Yeah, it was us versus Oppenheimer. Oh, congratulations. Kash, let's talk about this story. When did you decide to write about data collection in cars, and why? I was spending a lot of time lurking on online car forums, forums for people who drive Corvettes and Camaros and Chevy Bolts, which I drive. I started to see people saying that their insurance had gone up. When they asked why, they were told to pull their LexisNexis consumer disclosure file. LexisNexis is this big data broker, and they have a division called Risk Solutions that profiles people's risk. When they did that, they would get these files from LexisNexis that had hundreds of pages, including every trip that these people had taken in their cars over the previous six months: how many miles they drove, when the trip started, when it ended, how many times they hit the brakes too hard, accelerated rapidly, and sped.
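(Neither LexisNexis nor Verisk publishes its scoring formula, so the following is a purely hypothetical Python sketch, just to make concrete how trip events like the ones in these disclosure files could, in principle, be rolled up into a single "driver score." The weights and the sample numbers are invented.)

```python
# Hypothetical illustration only: a made-up roll-up of trip telemetry into a score.
from dataclasses import dataclass

@dataclass
class Trip:
    miles: float
    hard_brakes: int
    rapid_accels: int
    speeding_events: int

def driver_score(trips: list[Trip]) -> float:
    """Return an invented 0-100 score; higher means 'safer' under these arbitrary weights."""
    total_miles = sum(t.miles for t in trips) or 1.0
    events = sum(3 * t.hard_brakes + 2 * t.rapid_accels + 4 * t.speeding_events
                 for t in trips)
    penalty = 100.0 * events / total_miles  # weighted events per 100 miles driven
    return max(0.0, 100.0 - penalty)

print(driver_score([Trip(12.4, 2, 1, 0), Trip(30.0, 0, 0, 1)]))  # roughly 71.7
```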
When they looked at how Lexus Nexus had gotten the data, it said the provider was General Motors, who, you know, the company that manufactures their cars. Right, so your story starts with this anecdote about this man named Ken Doll, which, by the way, great name. He is a 65-year-old Chevy Bolt driver, and like you, he owns a Chevy Bolt, or I guess he drives the least Chevy Bolt. His car insurance went up by 21% in 2022, and he was sort of like,
what the heck, why are my premiums going up? I've never been responsible for a car accident. Yeah. He goes looking and he asks for his Lexus Nexus report and gets back a 258-page document detailing basically his entire driving history. So does he then make the conclusion that this is why his premiums have gone up because he's a bad driver? Well, he says he's a very safe driver. He said his wife is a little bit more aggressive than him.
Ah, so Ken is blaming Barbie. And she also drives his car. And yeah, he said that the trips that she took, which were during the weekdays, he said, when he doesn't usually use the car, had a few more hard accelerations, hard brakes. And yeah, it looked to him like this is why his insurance went up. And we should say, it's like, just because you accelerate hard in a car doesn't necessarily mean that you did anything wrong. And if you had to brake really hard, that also might not have been
your fault. And so I think one of the things that's infuriating, Kash, reading your story, is that this data, which lacks a lot of really important context, is being hoovered up, often without the knowledge of the people involved, and then being used to gouge them on price. But I really was just struck by the way that these insurance companies were so eager to use data that might not actually be incriminating. Right. And LexisNexis said they don't actually give
the trip data to insurance companies. They give a driver score that LexisNexis assigns based on that data, and that's what they're sharing. But interestingly, they didn't give Kenn Dahl his score. So he doesn't actually know what his score is. But that's like such a corporate thing to say: like, oh, don't worry, we're not giving them the individual data. We've created a mysterious, impenetrable black box and handed that to the insurance companies. But you know,
just trust us. It's actually really well done. Well, the other company, Verisk, did say, we just give all the trip data and a score. So let's talk about, like, how widespread is this? Which cars are sending data about their drivers to insurance companies? Which companies are involved in this? Like, is this sort of an industry-wide practice? Or is this just, like, GM and LexisNexis? Is it
pretty contained? So if your car has connected services, like if you have a GM car and you have OnStar, or a Subaru and you have Starlink, your car is sending data about how you use it back to the auto manufacturers. At this point, the only ones that I know of who are providing it to insurance companies are GM, Kia, Honda, Hyundai, Mitsubishi. Those are the companies, with Subaru being an exception: Subaru says they only give odometer data to LexisNexis. But the other companies
all have in their apps now this driver scoring, driver feedback feature. With GM, it's called Smart Driver. And if you turn it on, they give you feedback about your driving. Like, drive slower, be gentle with the accelerator, buckle your seatbelt. Drive faster, take more risks. Change the music. Get out of the carpool lane. I'm trying to pass you. Stop looking at your phone, which is giving
you all this feedback. But for people that turn this on, they may not have realized it, but they were saying yes; like, a lot of these programs are actually kind of run by Verisk or by LexisNexis. They're the ones giving you the feedback, not the automaker. And so you're kind of just sharing it with them. This was not well disclosed. In the case of GM, it was not evident at all from any of the language. And a lot of people said that Smart Driver was turned on for their cars.
And they didn't turn it on. They didn't even know what it was. And, it turns out, GM gives bonuses to salespeople at dealerships who get people to turn on OnStar, including Smart Driver. So they may have been enrolled by the salesman when they bought their car. But for other people, if you turn this on, you're sharing your data. And when you go out shopping for car insurance, and you're trying to get quotes, a lot of the insurance companies will say, can we have permission to get third party
reports on you? Like your credit file. And when you say yes to that, that releases all of that data to go over to the insurance company. And people just did not realize this was happening. And, you know, that detail about salespeople being incentivized to enroll people, often without even fully informing them of what they're enrolling in, I think, is really important, because at the end of the day, this product exists because it is essentially
free money for GM and these other car manufacturers, right? Like, I think in your story, you say they're making millions of dollars a year by selling this data. And is this really anything more than a cash grab? I mean, the car companies say that this is about safety, that they're trying to help people be safer drivers with this, like, you know, driving coach. But, you know, for some of these people
that had Smart Driver turned on, they didn't even know it was on. They're not getting the feedback. And, you know, as you say, with General Motors, my understanding is they make in the low millions of dollars per year with this program, which is how they described it to Senator Edward Markey. He asked them, are you selling data? And they said, you know, the data that we sell that we commercially benefit from is de minimis relative to our 2022 revenue. So for them, it's nothing. You know, this is, this is,
right. It's a drop in the bucket. It's not moving their overall finances. But it's not de minimis for the people who have to pay 20% more on their car insurance, all of a sudden, for reasons that they don't even understand. Right. So this is, I think what we're seeing is, you know, the surveillance capitalism model, you know, the Google, the Facebook model, where you get something for free and you're paying with your data. It's really spreading to all of these other companies. And the automakers are like,
well, we're getting all this data. We can monetize this too. Right now, they're not actually making that much money. Like, low millions is small. And some of the automakers told me, we don't get paid for sharing the data. We only get paid when an insurance company buys it. So they don't even have a good business deal on this. Like, they should be getting money for all the data. Like, there is something really damning about saying to the senator, it's like, hey, we don't even make all that
much money off this. Okay, well, then why are you violating the privacy of your entire user base if you can't even get a good price for it? Yeah. One question I have for you, Kash, is, like, how many other uses are car companies finding for this data? Are they just selling it to insurance companies to raise people's premiums? Or, I mean, I can imagine a situation where, you know, a company might like to know, you know, which drivers are driving past their store every day.
So they can show them, you know, targeted ads on social media. How many buyers are there for this kind of car data? I mean, look, there's a lot of information that's flowing out of your car and a lot of potential buyers. At this point, what I mainly have been focusing on is this insurance thing. And when it comes to the insurance data, the one thing that all the automakers pointed out is that they're not providing location details. That's just when you start the trip,
when it ended, how far you drove and it doesn't actually include location data. But that's not to say the companies don't have it. They're just in this case not selling that. Well, so, you know, this is in some ways like not a new phenomenon. I mean, insurance premiums are, you know, they vary based on things like where you live and how new your car is. And like, are you a, you know, a young man who is statistically more likely than someone who's older to be in an accident? Those kinds of
things are used to change the prices of insurance premiums all the time. And I guess from the insurance company's perspective, this is just one more piece of data that they can use to make decisions about how much of a risk someone is out on the road. Did you hear any sort of principled defenses of this while you were reporting the story, from the insurance companies or the companies that sell data to them? What the automakers really focus on is that they set up these programs to help people
get discounts. God. So, like, some of the programs, like Honda's, for example, if you turn on driver feedback and then you have a good score, the actual app will kind of offer to connect you with some company who's going to give you a 20% discount. So they're really focusing on, we're trying to help our customers and get them discounts. What they're not talking about is when that data is flowing out and it's hurting their customers. Like, I talked to this one Cadillac driver who lives
in Palm Beach, Florida. And in December, it was time for him to get new insurance. And he got rejected by seven different companies. And he was like, what is going on? Like, they just wouldn't sell him insurance for any price. They would not cover him. And his auto insurance was about to expire. And he said, like, what is going on? He orders his LexisNexis report. And he has six months of driving data in there. And he says, like, look, I don't consider myself an aggressive driver.
I'm safe. But he's like, yeah, I like to have fun in my car. And I brake a lot. And I accelerate. It's not like my passenger's head is hitting the dashboard or anything like that. Yeah, tell me whether you're doing donuts in the parking lot, because I actually would like to go. Look, I've never been in an accident. And I couldn't get insurance. He had to go to a private broker and ended up paying double what he was paying before for insurance. So, you know, it really,
in that case, hurt him a lot. So, you know, here's where I guess I can, you know, maybe feign some sort of sympathy for the idea of doing this, which is, like, I do want worse drivers to have higher insurance premiums, right? Like, I think that is how we want the insurance market to work. I think if you're a good driver, your insurance should be lower. And the best way to know who is a
good driver and who is a bad driver is to monitor them obsessively. But what you have revealed here, Kash, is that once we implemented this sort of surveillance system, it seemed to do what all surveillance systems do, which is needlessly penalize innocent people, right? So, like, we have all of the downsides of a surveillance system with really none of the upside. Yeah, I too want safer roads, Casey. I get annoyed at aggressive drivers. And I talked to this one law professor from the University
of Chicago. And he said, you know, usage-based insurance, that's what you call this when you tell an insurance company they can watch you, they can see your driving. He said it works. He said the impact on safety is enormous, and that people drive better when they know that they're being monitored and that they're going to pay more, you know, if they drive aggressively or unsafely. But that's not what was happening here. People were being secretly monitored and then they're paying
more and they don't know why. And that is not going to make the roads any safer. Yeah, the sticking point of this to me is the disclosure piece. Like, you know, I've had experiences in the past couple of years where, you know, I'll go rent a car if I'm on, like, a work trip or something. And part of what I know when I'm renting the car is that the rental car
company is tracking that car. And I know this because they tell you when you sign up and it's, you know, very clearly disclosed, like, you know, we will track this car, you know, if it gets stolen or something, we can help you track it down, that kind of thing. So I know that I'm being monitored while I'm driving a rental car. And so I, you know, I do tend to drive a little bit more
conservatively in a rental car. I can imagine that expanding to lots of other cars. But people have to know that they're being monitored in order to drive more safely as a result of being monitored. Absolutely. So, Kash, talk about your reporting a little bit on this. So you started looking through these car forums. You started seeing evidence that people were having their premiums raised as a result of this surveillance by their cars. When you approached the car companies,
the data brokers, the insurance companies, did they try to deny what was going on? Were they pretty open about it? How did they react? You know, I was expecting denials. I was expecting that, yeah, they would say this wasn't happening, because it just seemed so shocking to me that they would be doing this. But they ended up kind of confirming it. But there was some evasive language about how it worked. One of the big things I was asking different
companies is, where do you disclose this is happening? And with GM, the spokeswoman said, it's in the OnStar privacy policy, in the section called... Which everyone reads before they click accept, in its entirety. In this section about sharing data with third parties. And so I go and read that section. And the section doesn't say anything about LexisNexis, or Verisk, or telematics,
which is what you call this driving data. It says, like, you know, if they have a business deal with somebody like Sirius XM, which is the company they name there, Sirius XM is going to get some data from your car. And I just was very shocked that there was nothing more explicit anywhere. And I actually, I told you I have a Chevy Bolt. So I went to the MyChevrolet app, I connected my car to the MyChevrolet app, and went through the Smart Driver enrollment. And all it says is,
like, get digital badges. You can get, like, Brake Genius and Limit Hero. I'm putting that on my LinkedIn profile, certified Brake Genius. Get driving tips. And there's just absolutely nothing that would make you realize that as soon as you turn Smart Driver on, General Motors is going to start sharing everything about how I drive my Bolt with LexisNexis and Verisk. And whoever else I didn't find out about in my
reporting. It should actually show you, like, the splash screen of a panopticon. And it should say, is this the future you want? Just tap yes to continue. Well, I really do think, like, every company wants this model now. They're just thinking about, how can I get an extra revenue stream by monetizing the data of my customers? And this is not just automakers. This is just anything we're buying now that's internet connected.
I mean, what it made me think of when I read your story was TVs. Because a very similar scenario has been happening with smart TVs, which collect all kinds of data about what people watch on them. And then they can sell that data to advertisers. So, and I realized this when I bought a new TV a few years ago and went through this process,
it is actually cheaper to buy a smart TV than a non-smart TV in many cases. Because part of how the smart TV makers are making money is not through selling you the hardware.
It's actually through capturing the data and selling the data. So we do sort of have this phenomenon where, as hardware, any hardware, whether it's a car or a TV or a refrigerator or a smart toaster or something, becomes more connected and more like a device in its own right, the data in some cases actually becomes more valuable than the actual piece of hardware. Well, I mean, Kash, don't we just see this all the time, that privacy is just increasingly
a luxury good for rich people to pay for? Yeah, I mean, I guess so. But even rich people, I mean, they're buying expensive cars and their cars are still sending data back about them. I mean, that's one objection I saw from drivers, like people mad at General Motors. They said, hey, I paid a ton of money for this car. If you're going to sell my data, I want a cut of it.
Hey, yeah. Here's how I would solve this problem. I think that, you know, each manufacturer should be allowed to make one car that just sort of sends all your data to everywhere and there's nothing you can do about it. You can just sort of choose. So if you buy a GM Snitch, you know that that's what's going to happen. Right. And it should cost $100. It's going to be the cheapest car on the market. It should cost $100. So my little GM Snitch, it has a direct line to the police whenever I
like, sort of, cross over the center divider. And other than that, knock it off. Yeah, Kash, what has the response been to your story? Are lawmakers outraged? Are drivers? I'm outraged. Are drivers sending you stories about being spied on? What is the reaction? So I'm definitely hearing from lots of other drivers who are discovering that they had some of these features turned on. They didn't know it. And they're turning it off. I did a kind of
news-you-can-use box at the bottom of the story. And I said, here's how to figure out if this is what your car is doing. And one of those was, there's this website called Vehicle Privacy Report that you can go to; you put in your VIN and it tells you what your car is capable of collecting. So the person who runs that site said, like, I've had tens of thousands of people come and do it off of your story. I included the link to LexisNexis to go
request your consumer disclosure file. And not just for auto data, that file is crazy. It had me associated, I mean, it had tons of pages for me, had me associated with my sister's email address from middle school, back in, like, the 1990s. It's like, why? Why? So I think everyone should request that, you know, request their Verisk file. And you know, I talked to Senator Edward Markey for the story. And he's been very interested in what data is being collected by cars
and what automakers are doing with it. And he said, when I described to him what GM had done, he said, this sounds like a violation of the law that protects us from unfair and deceptive business practices. So I'm sure there's going to be more to come from the story. Yeah. And what can drivers do if they are worried that their car is snooping on them and sending data to a data broker or to their insurance company to raise their premiums? What should they actually do to prevent that? Or are
there certain carmakers who are not collecting this kind of data? What can the average driver do? I mean, I can tell you from my time in the car forums, there are some people that don't want their data going out from their cars. So they hack it, basically; they, like, turn off the connected services. They make sure that data can't leave their car. I mean, if you sign up for connected services, you are, you know, connecting your car back to the auto manufacturer's, you know,
cloud servers, or wherever it's sending the data. So just turning that on means that data is getting sent back. And that's why a lot of these companies, you know, when you buy their car, they're like, oh, you get this for 30 days for free. And so most people turn it on. And then even if you don't pay, you're still connected after that. So, wow, wait. So they get you to connect it, and then your free trial runs out, but they still keep collecting the data about you, is that your understanding?
That's my understanding. And that's what you agreed to when you, you know, read the 50,000-word privacy policy. Good Lord. Wow. See, I would at least like my car's surveillance data to be, like, helpful to me in some way. I would like it to pop up a little notification and say, this is the third time you've driven through McDonald's in the past week. Are you okay? Is something going on in your life? Do you need therapy? Last question, Kash, do you think
this will create a bull market for used cars that don't have any of this stuff in them? Like, are we going to see people, you know, running out to the car lots to buy, like, the 1985 Ford Bronco that doesn't have any technology in it? I mean, this was the basic premise of the Battlestar Galactica reboot, by the way, that the only spaceship that survived was the one that was not connected to the space internet. And so when the, sort of, AI Cylons attacked, only Battlestar Galactica was
safe. Well, that's true. And I have seen a lot of people commenting in that way. They're like, oh, I'm so glad I still have a car from 2009. If you've got a CD player in your car, it is privacy protective. Yeah, I am going to go back to the Flintstones car that you have to pedal with your feet. I don't think that was collecting much data on its drivers. All right, Kash Hill. Thanks so much for joining us. Thanks, guys. My pleasure. I'm all worked up now. But you have, you have a car, Casey?
No, I don't even have a car. I just have strong feelings for no apparent reason. Hard Fork is produced by Davis Land and Rachel Cohn. We're edited by Jen Poyant. Today's show was engineered by Alyssa Moxley. Original music by Elisheba Ittoop, Marion Lozano and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. Go check out what we're doing on YouTube. You can find us at youtube.com
slash hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork@nytimes.com. Especially if you know where that princess is. Yeah, please toss.