
Trump's Next Online Speech Cop + Doctors vs ChatGPT + Hard Fork Crimes Division

Nov 22, 2024 | 1 hr 10 min | Ep. 110

Episode description

This week, President-elect Donald Trump picked Brendan Carr to be the next chairman of the F.C.C. We talk with The Verge’s editor in chief, Nilay Patel, about what this could mean for the future of the internet, and for free speech at large. Then, a new study found that ChatGPT defeated doctors at diagnosing some diseases. One of the study’s authors, Dr. Adam Rodman, joins us to discuss the future of medicine. And finally, court is back in session. It’s time for the Hard Fork Crimes Division.

 

One more thing: We want to learn more about you, our listeners. Please fill out our quick survey: nytimes.com/hardforksurvey.

 

Guests:

  • Nilay Patel, co-founder of The Verge and host of the podcasts Decoder and The Vergecast.
  • Adam Rodman, internal medicine physician at Beth Israel Deaconess Medical Center and one of the co-authors of a recent study testing the effectiveness of ChatGPT to diagnose illnesses.

 

Additional Reading:

 

We want to hear from you. Email us at [email protected]. Find “Hard Fork” on YouTube and TikTok.

Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Transcript

Whether you're starting or scaling your company's security program, demonstrating top-notch security practices and establishing trust is more important than ever. Vanta automates compliance for SOC 2, ISO 27001, and more. With Vanta, you can streamline security reviews by automating questionnaires and demonstrating your security posture with a customer-facing trust center. Over 7,000 global companies use Vanta to manage risk and prove security in real time.

Get $1,000 off Vanta when you go to vanta.com slash hardfork. That's vanta.com slash hardfork for $1,000 off. Casey, what's going on? Well... I have really changed my feelings about Bluesky in the past week. Yeah? You know, before last week, I have to admit, while I did use it and I did occasionally see stuff on there that I thought was really funny or interesting, the feed was so political that I had just sort of written it off as not for me.

But then something really powerful happened, Kevin. What's that? Which is that my following doubled in four days. Mine too. I got like a weird number of followers. I mean, something is happening. There is something in the water. And what I think is funny about it, Kevin, is that my own experience reminded me how truly... And, you know, oh, I hate all the ads over there. But at the end of the day, it is how many followers do I have and how many little internet points did I get for making my little quip, and wherever the number is the highest, that is where you will find me. And that is all to say, please follow me on Bluesky. Thank you.

I'm Kevin Roose, a tech columnist at the New York Times. I'm Casey Newton from Platformer. And this is Hard Fork. This week: The future of the internet could look very different next year. The Verge's Nilay Patel joins us to talk about President-elect Trump's pick for the head of the FCC. Then...

A new study found that ChatGPT outperforms doctors in diagnosing some diseases. One of the study's authors, Dr. Adam Rodman, is here to discuss the future of medicine. And finally, court's back in session, Kevin! It's time for the Hard Fork Crimes Division. I rest my case. No, you don't rest your case at the beginning. It's at the end. Sorry. Adjourned? No. No. Stop it! Where's my gavel? You're ruining the show!

Well, Kevin, let's get started this week with a car crash. Oh, yeah? Yeah, Brendan Carr has crashed into the news as the next potential chairman of the Federal Communications Commission. Yes, so obviously in the post-election period, President-elect Trump has been announcing many of his picks to lead top agencies. And the one that really stuck out to me that I thought was relevant to the topic of our show was his pick to lead the Federal Communications Commission, or the FCC,

who's a man named Brendan Carr. Yes, Brendan Carr is a Republican. He's been on the FCC since 2017. And the FCC has five members on it. And collectively, they do control broadcast media in this country. They have a lot of legal authority to do that. They have less legal authority over the future of the internet. But Brendan Carr is somebody who has a lot to say about how he thinks the internet should work. Yeah, he's a real activist.

in some of these debates about internet regulation, he's been very vocal about going after big tech, as he calls it, for their political bias and what he views as anti-conservative censorship. He's constantly picking fights with people online and sort of defending his vision of an internet free of left-wing censorship. And he also starts every day, literally every single day by posting on X.

"Good morning and God bless America." And up until recently, I think if you're somebody who does not agree with Brendan Carr on these issues, it has been easy to dismiss him as a crank. But within a couple of months, he is potentially going to be a person with real power. And whenever internet policy issues are in the news...

I want to know what Nilay Patel thinks. Nilay was my old boss at The Verge. He's a co-founder of The Verge. He is also a formidable podcaster and one of our greatest adversaries in the realm of podcasting as the host of Decoder and The Vergecast. And Nilay has been writing recently about what the arrival of Brendan Carr as chairman of the FCC could mean for not just the internet, but for speech in America in general.

Yeah, Nilay thinks that we are headed into a truly scary and dark timeline with the appointment of Brendan Carr at the FCC. And as someone who has not followed Brendan Carr's career super closely, I'm very curious to understand why he thinks...

this man poses such a threat to the future of the internet. Yeah, we're going to hear about that, but we're also going to ask him for maybe some more empowering thoughts that we can bring into this next chapter in American history. So with that, let's bring in Nilay Patel. Welcome to Hard Fork, Nilay. It's great to have you here. And I want to get started with some basic background information. Who is Brendan Carr and why does Trump see him as a, quote, warrior for free speech?

Yeah, Brendan, he is an extremely online commissioner of the FCC. He was appointed by Trump. His view is that the FCC should spend a lot of time regulating not only the traditional purview of the FCC, which is wireless spectrum and broadcast television, but also big tech companies. And he's got a lot of ideas about how he might get that power.

And then how he might use that power. But really what you have is a guy who likes going on Fox News and Twitter and railing about how there's a censorship cartel in big tech and it should be crushed. And I think Donald Trump likes that a lot. So most people probably don't think of the FCC as a very powerful agency in their daily life. But there was a time in recent American history where it did play a larger role. So tell us a bit about the recent history of the FCC.

So most Americans, I think, in 2024 never think about the FCC, especially the last five years. The FCC has really receded. It has been sort of a neutered agency. No one trusts it anymore to do the things people wanted it to do. That has not always been the case. If you go back 20 years, the FCC is a cultural force in America.

And this really hit its peak with Janet Jackson at the Super Bowl, right? Justin Timberlake. Nipplegate. Yeah, he rips off the corset, the bodice. America is forced to endure like half a second of a nipple and the world goes crazy. And if you remember, the George W. Bush administration hated nipples. John Ashcroft is George W. Bush's...

Attorney General, he famously covers up the Statue of Lady Justice at the DOJ because God forbid lawyers see nipples. It's just a very weird time in America. And all of this is based on the dominance of broadcast media. Most Americans at this time get most of their media from broadcast television and radio stations over the airwaves, right? You hang up an antenna, you get NBC or CBS.

Or ABC, you hang up an antenna in your car, you get whatever local radio station. And that spectrum is owned by the government. And it's licensed out to these broadcasters in the public interest. And that's really where the FCC's authority comes from. And there's like a long string of Supreme Court cases that basically add up to...

The spectrum belongs to the people. The government gets to make rules about that spectrum. And Americans do not want nipples on their public airwaves. So we're going to freak out about this. So this is the high point of the FCC as a cultural force. And what happens during all of this is...

The iPhone comes out and YouTube is introduced and podcasting is introduced. And Americans, by and large, switch to cable television in huge numbers. They stop watching broadcast TV. They stop consuming this content. So the FCC itself says, we have to get out of this business. We have to be out of the speech policing business. Michael Powell, who's Colin Powell's son, is chairman of the FCC. And he's like, we got to stop this. These broadcast providers.

They're not competing with each other. They're not the dominant force. They're competing with cable television. And cable television is not regulated the same way as broadcast television because it doesn't go over the air on this sort of publicly owned spectrum. Right. It's on Comcast wires, not the...

public's airwaves. You know, he's a Republican FCC chairman. He's like, we're making these really weird rules for these companies. We have to get out of this business and we need to get into the business of broadband deployment. And this has largely been what the FCC has been focused on since 2011-2012. That's where you get the big net neutrality fights.

And net neutrality, just for people who are not steeped in the history and the context here, is basically the rules that say, if you're a Comcast, if you're an internet service provider, you cannot dictate what goes over your pipes, right?

You can't impose, you know, censorship or speech regulation at the level of the internet service provider. You are just supposed to be like the dumb pipes. That is essentially net neutrality, correct? Yes. And it's really predicated on the idea that you're going to have massive competition for internet content. Which turned out to not be the case, right? You only ended up with a few giant platforms. You ended up with YouTube and TikTok and Meta.

Those are your choices. So something very weird happened along the way of the internet where we recreated the dominance of broadcast television. Just a handful of giant companies that control most of the media in the country without any of the legal foundation for how the government might get involved in that content. So this is the stage we're in.

We unwound this previous dominant broadcast media regime where we had pretty overt speech policing all the way to, well, if you want to see a nipple, the internet will provide you a nipple at any time and no one cares anymore. But... You still have a lot of people who are very interested in how the platforms moderate and the political biases of these platforms. You have a very active right wing, which is insistent that any moderation at all that disfavors them is...

a moral catastrophe that should be stopped with the full weight of the government. And this is where you get Brendan Carr, who up until recently was a pretty normal, if somewhat overly online, deregulatory force. That's his worldview until a couple of years ago, when the big push for, well, we should start yelling at Mark Zuckerberg more to...

make sure the algorithm favors conservative viewpoints, or at least doesn't overtly favor liberal viewpoints. Brendan takes this up. And in the sweep of this history, you can see that what he wants is to be an old-school chairman of the FCC, where if you're mad about nipples on Instagram, you can write him a letter and he will have the power to fine or otherwise penalize Meta. And that is just the wheel that is turning right now.

And so I think that gives us a good sense of what this person's worldview is and what he might do if he had that sort of power. I guess the next logical question then, Nilay, is: does he have this power? Does any of this authority sit with the FCC? And if it's not there, you know... what do you expect him to do about it? Yeah, I hope I did a good job of laying out what feels like a logical pendulum swing. Yeah. But actually, legally, none of this makes any goddamn sense. Like, in a very real way.

He does not have this power. And he was the author of the Project 2025 chapter on the FCC and what you might do with it and how you might use it. Notably, Project 2025 does not say we should dismantle the FCC, the way it says we should dismantle every other agency. Brendan's chapter of Project 2025 says the FCC should get even more power.

Yeah, I read the chapter that he wrote. Project 2025, for context, is this sort of roadmap for a second Trump administration that was put together by, you know, some conservative think tanks and groups. And I think most people associate it with advocating for rollbacks on abortion and other social and cultural issues. But it actually does have this sort of interesting part about the FCC and how Brendan Carr specifically wants to regulate the internet.

A lot of what's in this chapter is boring, sort of normal FCC chair stuff about spectrum auctions and rural broadband access and stuff. But he starts with this thing about sort of reining in big tech. So, Nilay, what is Brendan Carr's idea, his big idea, for how to rein in big tech? I want to be very clear. My personal opinion of Brendan Carr is this man isn't capable of having big ideas.

I do not have a high opinion of Brendan Carr, but his idea is the same idea that everyone else has, which is we should mess with Section 230 until the platforms do what we want. And Section 230, for people who are not experts, is the part of the federal law that basically shields...

online platforms from legal liability over user-generated content, right? So if you post something illegal on Instagram, the government can go after the poster, but it can't go after the platform. I appreciate that you think there are Hard Fork listeners at this point who don't know what Section 230 is. There might be a few. Section 230 — the stakes of messing with 230 right now are: do you want YouTube to exist? Those are the stakes of 230, right? Do you want any user-generated platform

to exist at scale. Because if you make Google liable for the content on YouTube, there will quickly not be content on YouTube. You will actually turn them back into cable companies. So these are existential stakes. And Brendan Carr does not propose getting rid of Section 230, interestingly, as some conservatives have done. He doesn't say we should repeal the whole thing. What he says instead is we should sort of limit these court-added...

extras that judges have piled on top of Section 230 to sort of extend the shield granted by the original law. Right. And that's the part that is wholly nonsensical. It is a fantasy. So first of all, Section 230 is a law. Congress wrote it. It's famously 26 words long. And it has gone to court numerous times in numerous ways. And the courts have uniformly...

upheld the idea that these 26 words are there to keep platforms from being liable for what their users post, over and over and over again. There's not a bunch of court-added additions to this. That just doesn't exist in the law. Second of all, even if there were, you get around that by Congress doing more stuff.

You don't get around that by being an unelected chairman of an agency most people don't give a shit about and just issuing decrees about what the law means. That's just fully not how it works. And the courts in this country, particularly the conservative justices of this country, do not believe that agencies should have any power. So even if you're Brendan Carr, not only...

does he not have that power, if he tried to use that power, he would run right into the conservative legal movement, which is trying to defang the agencies in a very specific way. So it's just, none of this makes sense except: well, if I can wield this weapon over the big platforms, they might do what I say anyway. And that is very much

the animus of every attempt to modify 230. No one's actually saying we should get rid of this law that allows YouTube to exist. They're saying if we threaten this law enough, YouTube's trust and safety team will moderate YouTube the way we want.

Which I think in practice probably has happened. I think platforms have been responsive to those sorts of threats. You know, if you're the sort of person who likes net neutrality, you like Section 230, I feel like people might hear what you're saying and be excited.

about it. They think, okay, cool, so this man is sort of, you know, banging pots and pans and trying to get everybody all scared, but there's really not a lot of legal basis for what he's threatening. So maybe you might feel relieved. At the same time,

I feel like we're in a world where we can rely less on judicial precedents than we've been able to in the past. So many things that seem like a slam dunk either turn into a coin flip or the Supreme Court decides to throw out decades-old precedent. So as we move into this new world, this new Trump administration, how are you thinking about that risk and how the internet might change just because we might be sort of living in legal chaos land?

I feel very strongly that the First Amendment is under the most direct threat that any of us will ever really experience. The rise of the internet that we know coincided with a period of pretty unfettered expression, right? The government was told not to regulate the internet. This phrase came up over and over again. Leave these companies alone. We're going to let a thousand voices bloom. We're going to get over a lot of weird indecency ideas we have about media in general.

More people have more access to speak. That is an unqualified good thing. And we are going to leave that alone. And that is an interpretation of the First Amendment or at least a First Amendment environment that I think most people are used to right now. That is our expectation.

Those walls are going to come in closer. What you're getting out of the Brendan Carrs and the Trump-world version of the First Amendment closing in is: my political opponents should be silenced, or the platforms should make sure to favor us. And we will wrap it up in what sounds like a defense of free speech. But actually what it is, is punishment. And you see that over and over again. You see it expressed as punishment. You see Elon Musk,

who runs an ISP in this country, saying the hammer of justice is coming for people who publish election hoaxes. Well, lying is legal in America. It's just fully legal. Hate speech is legal in America. We've run this all the way up to the Supreme Court multiple times. And it's just legal to lie. It is legal to be racist. The government does not punish these things because we expect the market to punish these things.

Yeah. I mean, on that front, Carr recently sent a letter to the CEOs of four big tech companies, so Apple, Meta, Microsoft, and Google, blaming them for what he called an unprecedented surge in censorship, and warning them that they might face investigations, not just for their own content moderation, but for work they do with third-party groups like NewsGuard, which do ratings for news sites around bias and accuracy. Do you see that as just kind of more pure

intimidation? Yes. But Casey, I'm curious. You live in this world. You've covered trust and safety a million times. This idea of groups like NewsGuard, where you have this appeal to a third party that will tell you how biased your news is, has always been problematic.

Yeah. But do you think the government should have a role to play in telling you how biased your third party is? I mean, you know, what activity could be more protected by the First Amendment than saying, I think this website is biased? There is

not even a theory of harm there, right? Like, I can see how Carr and his allies would come along and say, oh, there's this giant censorship apparatus. But in practice, you know, sites like NewsGuard aren't even particularly widely used. Right. And there are all kinds of these rating services that I think most people basically ignore. But to me, that's actually what makes it scary, is this thing that isn't even that influential, you know, is suddenly the target of an FCC commissioner who is now

threatening platform owners, saying, do not work with these people. I mean, to me, that seems like the much greater threat to speech than, you know, some website that says Fox News leans conservative. Right. And the piece of that that really worries me is there's no legal mechanism to mess with these big companies. They're all basically nation-states unto themselves.

You can fire threats at Jeff Bezos all day. He's going to get on his yacht and sail away from you as fast as he can with his four support yachts in tow, and he'll just be waving at them from the front of the beach. Fine. But there are speakers in America where Brendan will have the power, right? So the actual broadcast networks still use the spectrum, and Kamala Harris shows up on Saturday Night Live, and

Brendan Carr gets to yell about revoking the broadcast licenses of NBC, which also makes no sense, because it's the stations that have the licenses, not NBC proper. And he knows that. But it doesn't matter, because you can go on Fox News and say, I'm going to revoke NBC's broadcast license for having the temerity to let a presidential candidate be on their program, even though, like, the next day Trump was given free airtime during a NASCAR race.

Right. Which is the rule that the government has. And NBC is very good at fulfilling this rule because they've been a broadcaster for 5,000 years. Yes. I have two questions about this letter that Brendan Carr sent to these big tech CEOs.

One of them is: the companies that he included were somewhat mysterious to me. So I get why Meta and Google are on this list. Conservatives have been mad at those two companies in particular for years about perceived censorship. But what are Apple and Microsoft doing on this list? What kind of objectionable, actionable content moderation are they doing in Brendan Carr's eyes? Apple runs the App Store.

And in order to have an app in the App Store, you have to pass Apple's rules of acceptable moderation. So I think, famously, Parler was kicked off the App Store. Gab was kicked off the App Store because they were still letting all kinds of stuff go by. Apple doesn't want this to happen.

If you're Carr, and you want to make sure that no one gets to control speech in America except for you, the person who runs the App Store is your greatest enemy, because he can keep the platforms off of phones entirely. Microsoft runs a bunch of big platforms, sure. Like, you might be worried about Bing. But they are also a huge developer of AI. And I think Carr is smart enough to know that the next turn of all of this...

is what the AI search results are. And if the AI starts to say, hey, this is misinformation, if Grok on X literally says Elon Musk is the greatest source of misinformation on X, which it has said recently, that's a big problem. And I think putting these companies on notice that you don't want woke AI is a big deal for all of these players.

You mentioned the broadcast licenses a minute ago. I wanted to pick that up again, because you also established earlier that the FCC does have a bit more legal authority with them. I agree with you, it seems like nonsense to say, well, one candidate is allowed to appear on TV but the other isn't. But at the same time, I also do expect that they will continue making those threats. So what sense can you give us of how

easy it is for someone like Brendan Carr to wreak havoc with these broadcast networks, and what do you expect there? I think it's tremendously easy for him to wreak havoc with the broadcast networks, not because of the law, but because they are inherently weak counterparties at this moment in American media history. They are dying. This is a historically low moment for broadcast television viewership. And even the things that are keeping it alive, the NFL, are moving to streaming.

This is a historically low period for cable television viewership, which is how a bunch of these TV networks are making all their money. We'll see. Does anybody there have the fight? Because they could win. I honestly believe if they wanted to win these fights, they could body up against Brendan Carr and say, look, we're not going to do speech policing in America. And we're also complying with the rules, right? Fully, we are in compliance with the rules.

But I don't think that matters in a world where the businesses are dying, the executives just want to cash out and leave, and the audiences don't care because they're not watching anyway. And that is very, very dangerous. When I say that I think the Brendan Carr FCC embedded in the Trump administration represents the biggest threat to free speech that any of us will have ever experienced.

That is the mechanism. It's the chilling effect with the power they have combined with their obvious desire to create new power. Yeah, I think that's right. And I think media executives have not quite...

fully internalized the degree to which the people who are about to take power in this country are obsessed with destroying them. And I think this is quite different, actually, than the first Trump term, when there were also sort of these grave proclamations about what would happen to the media, but largely media was, you know, fine, or at least there were pockets of it that...

had a Trump bump from the first Trump term. I think this is different because I think for the people who now are going to be running the country, including people like Elon Musk, this is not just something he thinks about occasionally. This is one of his driving priorities in life: to delegitimize and undercut and ultimately destroy what he sees as the legacy media. But I'm also just curious, Nilay, as a person who does...

understand what's coming, does think about this stuff. How do you operate in an environment like that? Aside from just hiring lawyers to deal with a bunch of bogus defamation claims, what should you do? Well, first of all, Kevin, I'm curious if you think the legacy media continues to exist. Like, my view is that it's already dead, right? Like, what this election showed is that actually Trump's mastery of

the YouTube podcast format was much more relevant than whatever happened on ABC News, like, fundamentally. And so I don't want to spend my time worrying about a thing that has already destroyed itself. And so the real question that I have is, like: if our media is all going to be a bunch of independent creators on YouTube, or independent podcasters buffeted by Spotify's ad rates or whatever,

how will those platforms apply this pressure to our speakers in response to the Trump administration? And will anybody even be able to follow the causal line of, like, Brendan Carr yelled at CBS, so the person who runs podcasts at Spotify made sure to promote The Daily Wire more than something else.

I mean, do you think we would ever see something like an equal time mandate for YouTubers where like if Jake Paul does a video praising Donald Trump, he also has to do one praising whoever's running against Donald Trump? I hope not. Elon Musk likes to say he's a free speech absolutist. He is not. But I might actually be one. I have a lot of complicated thoughts about this lately.

I don't think that we should overcome our own First Amendment in that way. There are laws in other countries that are wacky. In India, there was a law proposed that said if you had a YouTube channel over a certain size, you had to register with the government for preemptive regulation.

Imagine how the heavily armed American population would react to that idea in this country. I only support regulating YouTube channels like Cocomelon, which are a blight on humanity. But that's kids, right? If you go and ask politicians on both sides, no matter how credible or...

consistent or cynical you think they are, you go and say, where can you find a hook that allows you to overcome the First Amendment and pass some speech regulations that everyone will agree on? They will point to children's content universally.

And that's why the Kids Online Safety Act exists, right? That's why, hey, we should make sure that at least this group of people that cannot protect themselves, and we don't think they can make choices in the market to benefit themselves, we protect them at the platform level. And that is also why the platforms are fighting against it so hard, right? Because they don't want to accept that responsibility. But that's about it.

There's not a world in which we agree that there should be such a thing as a fairness doctrine for podcasts, because the solution is to just have more podcasts. Right. And that basically is it, like, there's an infinite amount of podcasts. And that should be... There truly are not. You know, you have to... I mean, we can keep creating podcasts in this country. I will fix free speech by just starting new podcasts every single day.

You can either have competition or you can have regulation. And up until recently, our solution has been competition. And I think what we're all kind of realizing, or maybe waking up to, is actually the recommendation algorithms,

the TikTok For You algorithms, they're putting much more of a thumb on the scale than anybody can realize or quantify or see or even research, because the APIs aren't open. And maybe that's the thing we need. Maybe that's where we should point our regulatory effort: saying you need more competition there.

Because otherwise you start to get into this really dicey space where you are regulating the content itself, which is what Brendan Carr is trying to do. And I just think no matter if you're super conservative or super liberal, that's too dangerous. The government should not have that power.

Well, on that cheery note, Nilay, thank you for coming on. Look, I'm just telling you, the empowering thing, whenever you see a government regulator being like, we should do some speech regulations, just say they're bad. It's great. It's like the most American thing you can do.

To look at the speech police and say, no, leave. And it feels good. And there's just, I promise you, I promise all listeners, there's something deeply empowering about that, that you can express at almost every turn of your life. Yeah. All right. We'll give it a shot. All right. Thanks, Nilay. Thanks. Nilay, this was great. Thank you so much. When we come back.

We've got a doctor's appointment. We'll talk to one of the authors of a new study showing how effective ChatGPT can be in diagnosing disease. How much is the copay? I think it's 20 bucks a month. Amgen, a leading biotechnology company, needed a global financial company to facilitate funding and acquisitions to broaden Amgen's therapeutic reach, expand its pipeline,

and accelerate bringing new and innovative medicines to patients in need globally. They found that partner in Citi, whose seamlessly connected banking, markets, and services businesses can advise, finance, and close deals around the world. Learn more at citi.com slash client stories.

Hey, it's John Chase. And Mari Uehara. From Wirecutter, the product recommendation service from the New York Times. Mari, it is gift-giving time, which means I am hopeless and need help. You're not alone, John. We have over 40 gift guides, like gifts for people who love food. I really love this butter warmer on that list. I didn't even know these existed. It's this cute enamelware pot. If you're someone like me who explodes butter in your microwave, you can melt butter in it.

But you can use it for a ton of other stuff, making hot chocolate, warming soup, and it looks great on the stovetop. This is useful, but it's also good-looking. Yeah, definitely. What's an easy gift for someone, like, under 50 bucks? So in our Gifts Under $50 guide, we have this super cute palm-sized Bluetooth speaker. It comes in an array of cool colors. It's waterproof. I want one for my garden. Terrific. For all of Wirecutter's gift ideas and recommendations, head to nytimes.com slash holiday guide.

Well, Casey, it's time for your annual checkup. Oh my goodness. That's, you know what? You're joking, but I actually do have my annual checkup later today. Wait, really? Yeah, I do. You're going to the doctor? That's right. It's time to find out what's going on with this old body, Kevin. Well, um... Just from looking at you, I would say you're not getting enough vitamin D. Well, I was recently diagnosed as handsome.

I think you need to get a second opinion on that. But Casey, I want to talk today about AI and medicine, because there was a thing that caught my attention recently. My colleague at the New York Times, Gina Kolata, wrote a story about a study that came out a few weeks ago over at JAMA, the Journal of the American Medical Association, which showed that, on average, at least in this study, ChatGPT was better at diagnosing illnesses than doctors,

even doctors who had access to ChatGPT. And why that's so fascinating to me is, for decades, people have been turning to WebMD to do something very similar. And mostly, it seems, getting the wrong answer. Certainly the people posting online said, oh, I typed these three symptoms

into WebMD and, you know, it told me I was dying. That is not what appears to be happening with ChatGPT. ChatGPT is actually able to figure out what's going on with these folks. Yes. So we have so many questions about this study that we invited one of the study's authors, Dr. Adam Rodman, to join us. Dr. Rodman is an internist at Beth Israel Deaconess Medical Center in Massachusetts and the host of a medical history podcast called Bedside Rounds. Let's bring him in.

The doctor will see us now? The doctor will see us now. Adam Rodman, welcome to Hard Fork. Thank you guys for having me. So... Let's talk about this study that you helped design. Tell us about the study and sort of what you were aiming to discover. Well, we were testing a simple hypothesis in a complicated way. That's what scientists do. We get too much into the details. One of the presuppositions in my field has been this idea that AI plus humans will always be better than AI.

alone, right? There's something essential about the humans. And a lot of health systems have rolled out these secure versions of chat GPT, sometimes there's other language models, with the idea that it'll make... doctors better. So we basically tested that hypothesis out. We did a randomized controlled trial where we gave...

doctors. We gave attending physicians and residents, so those are physicians in training, it was literally 50-50, and we either randomized them to go through these really complicated cases with ChatGPT or without. And we didn't just measure the diagnosis. We did, of course, measure whether they got the diagnosis. But we measured these really nuanced measures of how people think. So were you able to look for evidence that supported what you thought? Were you able to look for evidence that...

didn't support what you thought? Were you able to do these kind of basic cognitive tasks of a doctor? Hmm. What kind of information were you presenting to these doctors and these AI models? How detailed was it? Like the kind of thing that you would get in a medical school exam or like what kinds of... problems were they being asked to solve? Yeah, I want to see if we can solve some of them. Yeah, I can. Do you want me to go through one of them for you? Sure, let's hear one. Yeah, you want? Okay.

I'm excited to hear you guys attempt to go through a medical case as we go. Let me pull up. I think it's scurvy. Yeah, I don't think any of them were scurvy, unfortunately. If there's one thing we've learned about podcasts, it's that people love a medical mystery. Yeah, this is basically like House M.D., right? Okay. Yeah, exactly.

Here you go. A 76-year-old man comes to his doctor complaining of pain in his back and thighs for two weeks. He has no pain sitting or lying, but walking causes severe pain in his lower back, buttocks, and calves. He has a fever. He's tired. He was told by his referring cardiologist...

who got his recent test results, that since his pain started, he has a new anemia, so his blood levels are low, and he has renal failure. And then a few days before the onset of the pain, he had coronary angioplasty. So he had a coronary catheterization of his heart, and they opened a vessel.

And he got heparin during that. And then we go over like the lab values and stuff. This is not an easy case here. This is something that I think every doctor would know. Well, do you want to try to solve it first? Sorry, I should have gone. My first thought was chlamydia. Post-cardiologist acquired chlamydia. Exactly.

Kevin, any thoughts? I'm still going with scurvy. Okay, great. What was the real answer, Adam? Cholesterol emboli syndrome, of course. No, I'm just kidding. It's actually a very hard diagnosis. That was my second guess. Yeah, second guess. Yeah, I mean, and the point is, none of the cases are what are called zebras, right? They're none of the things that are often on House M.D. They're all things that are...

tricky to figure out, but you will see and are real. The purpose wasn't really whether or not the humans got the diagnosis, but whether they went through those steps that are essential and generalizable to getting any diagnosis. So you give these little vignettes, these medical sort of mysteries, to the doctors in the study, and the doctors are given the use of GPT-4 to try to help them diagnose and figure out what's going on with this patient. Then you also had just...

GPT-4 by itself, with no help from human doctors, try to analyze the same cases. And then you compared the analysis or the diagnosis from both groups. Is that right? Exactly. And we also let them use any other resources they wanted. And were these doctors in the study chosen because they had interest in...

using AI for diagnosis? Were they mostly more tech-savvy doctors? Were they people who had used this stuff before? No. So we did the classic trick to get a good subset of doctors, which is we paid them. So these doctors were all over the map: they'd been in practice for varying amounts of time. Some people were experienced ChatGPT users. Those were the minority. Some people had never used it before. Most people fell in between. And what were the findings? Yeah. So the findings were...

not the most optimistic if you want to make people better, which is that the AI model did not improve human performance. So humans using the AI model did about as well as humans alone. And then, of course, there's the finding that is, I think, the reason that I'm here and that everyone is angry at me, which is that the AI model itself drastically outperformed both groups. Yes, this was the headline, you know, of a lot of the coverage about it: that the AI had beaten the doctors.

Even if you gave the doctors access to AI, the AI by itself appears to do better at diagnosing these things. Now, obviously, we should make some caveats. This is a small study. We obviously would want more studies to sort of confirm this result. But this really stuck out to me because it seems like...

sort of reading the study, what happened is that basically the human doctors did not believe the AI could be as good or better than them at diagnosing. And so they would go in and sort of second-guess what the AI had said and end up getting the diagnosis wrong as a result. Is that consistent with the findings? Yeah, I'd say there are two, well, maybe three reasons, two reasons. So one, some people, I mean, despite the basic training, some people didn't quite know how to use

a language model to get the most use out of it. So probably some of that is training. Number two, though, when we look at the data, people liked it when the AI model said, oh, this is your idea, these are the things that agree with it. But when the AI model said, hey, man, you might be wrong, these things don't fit, they disregarded that.

Here is why that resonates with me. Have you ever been in an Uber and they have the Google Maps open, and Google Maps is like, you might want to take this route? And they say, no, no, no, no, I actually know a better way. And the next thing you know, it takes an extra 30 minutes to get you wherever you were going. I firmly believe there is no Uber driver who can outsmart Google Maps. And we may be moving into a situation where most doctors cannot outsmart ChatGPT.

And that brings us to reason number three, the reason that people are angry at me, which, you know... I don't think it's the case now. It might be the case with o1, and it's certainly going to be the case in the next one to two years. Maybe AI models are better at making diagnoses than human doctors. I don't think that's the case with GPT-4 Turbo, which was the model that was...

used here, but it's going to be true at some point, and we're quickly approaching that. Yeah, and we should say, this study took place last year, right? So like all of the models that doctors have access to are now almost, they are 12 months better than they were, you know.

Yeah, this is the classic academic publishing lag. And of course, I'm talking about this trial now and doing really other cool stuff. But like, the models have continued to improve, especially in diagnostic domains. Like, they're saturating our benchmarks, right? Everything that we can throw at them and we're like,

this is what a human should accomplish. By the way, humans are like 45, 50 percent. The new models are like, well, just kidding, I'm 90 percent. So. Well, so I have a question about that, which is, you know, OpenAI released this o1-preview model, which does better reasoning. That's what they tell us. And I have not been able to figure out any prompt that I, as a mere journalist, actually seem to have any need for.

As a doctor, are you already turning to this model for reasoning through difficult medical questions? Yes. Yes. And I have a preprint that will come out in the next couple of days that shows how dramatic it is. Yeah. I mean, the reason that this study... caught my eye and fascinated me so much is that I think it's possible to imagine that a version of this finding could be found in many different fields. It's not just going to be medicine where the AI is sort of...

reaches a point where it is better than either the human practitioners in that field or the human practitioners using AI in that field. And I think when that point happens for... many white-collar sort of knowledge workers, there's this question of like, how do you as the practitioner react? Do you get defensive and say, oh, the AI has to be flawed. It couldn't possibly be better than me. I'm not going to use it. Do you rebel against the AI and say,

We can't, you know, these things make things up. They don't always get the thing right. Or do you embrace it and try to get good at the technology and use it in your work? Is that scenario playing out among doctors? Do you see doctors who are really... happy about these findings because they say, oh man, we're going to be able to give patients such better care, or do you think most of them are sort of reacting from a place of fear and confusion?

So yes, yes, and yes. Different people are reacting differently. Obviously, the reason that I'm doing this work is I want better care for my patients. And again, it's... I like making diagnoses. I'm a huge nerd. I'm like the prototypical internist. I pace around my patient's room like a crazy person trying to figure out what's going on. But if this algorithm helps me take better care of them, I will give that up. Other people are resistant. Like, to insult doctors a little bit, we're

a profession that really prides ourselves on our cognitive abilities. It gives us a lot of societal power and power over our patients. And this is a professional challenge to my field. I am a pain in the butt, so that's fine. I don't care about that. But there are a lot of people that do. Right now, I'm at the Macy Foundation Conference. It's all the top medical educators, trying to figure out what AI means for how we train the next generation who's going to be practicing medicine

for 30 years. These are things that the field is fiercely debating and arguing about right now. I'm just happy we're having the conversation. Well, I have to say, I mean, the results are fascinating, but I do find myself siding in some ways with doctors who might be exasperated with these findings. And the reason is, you know, Kevin, you and I say all the time, hey, don't.

bet your career on anything that a large language model is telling you. These things do hallucinate. They make up facts all of the time. You and I don't really use them in our work in the context of, we look up a quick fact and just drop it into our story. We actually

are always going to second-guess the LLM. We're always going to try to find a second source before we're like, okay, we actually feel like we can trust this piece of information. And Adam, in your study, basically what you found is that people who did that, which we've been advocating for as a practice, were worse at diagnosing diseases. I know. I know. To be clear, I was shocked at the results. My hypothesis going in was that people using it would be the best. So I am surprised by this.

In the psychological literature on diagnosis, it kind of makes sense. This is not just doctors. Humans are resistant to things that disagree with them. And we have all these heuristics and cognitive shortcuts that we take. So it's not surprising to me that what people did was they anchored

on what they thought, on the first things that they thought, and they were resistant to something that was giving them a second opinion. Maybe that's something that's actually optimistic, because we can align models or try to figure out how to present that information to make humans better. That is what I am trying to do. And I think all the short-term...

Like, let's be clear. Like, if the headline is doctors are over, ChatGPT is good: no, absolutely not. There's a million things we do. This is just one part of it, and they're not capable of operating without us. I'm not discouraged by this. I'm still working to figure out ways we can use these technologies to take better care of our patients. Yeah, I mean, that's the question I'm curious about, is like, what can the medical field do? I mean, I'm...

imagining a future where patients have access to this stuff. And maybe before you go into the doctor to get your hip pain checked out, you do sort of an exhaustive prompting exercise with the model and say, hey, what is this? And then you sort of bring the... the readout from the AI into your doctor and say, hey, could you give me this medicine and this medicine and I need this operation because, you know, and the doctor might say, well,

you know, let's do some tests. And you're saying, I don't need to. The AI already told me. That's already happening. That's already happening. I mean, there was a Kaiser Family Foundation survey on how many patients are putting their information in, but it's already happened in my life, when people will even put their...

To be clear, these things are not HIPAA compliant. Please don't put any of your personal health information in. But people are doing it. Elon Musk told me I should be uploading all my MRIs to Grok. Are you saying he was not correct? Well, it depends on if you want someone else to own all your MRI images. So, yeah, keep that in mind. Yeah, I uploaded it and it told me I had the woke mind virus. So that was weird. They don't work very well. Right.

But yeah, people are doing it already. I've had patients who do it. This is not a future. Now, does it work that well? Sometimes, but not consistently. And they're very, you know, you have to prompt them, right? But how far are we from somebody selling a commercial tool that's a doc in a box that works pretty

That's not my conclusion from the study, if that's what you want to take from it. The conclusion is basically one in four doctors were not able to successfully diagnose this, but in 92% of cases, ChatGPT did. If I had to choose one of those two things, I'd probably choose ChatGPT because it also does other things.

things for me too. I would say the difference is that the people who put the case together, like the information, if you want to think about the prompts, were expert clinicians. We organized it in such a way. Like, you can imagine, I assume... I don't want to talk about your past medical

histories, but I've had problems, and humans don't always describe things the right way. We don't know how good ChatGPT is about getting that information out of us. I think it's going to happen, but I don't think that ChatGPT can do that now. I'm curious. If these AI tools do become part of the clinical model in hospitals all over the place, as it sounds like they are going to, what is it going to mean to be a good doctor in a world where AI is better at diagnosing than you are?

So I'll give you... there's the darkest timeline, but we'll go with the optimistic timeline. Give us both. Okay, well, let's go with the optimistic view first, because this is what I'm hoping. And inspired by, oh, I'm a huge nerd, this should not be a shocker, but inspired by you a little bit,

Kevin. It's the Star Trek computer, right? So you have a computer system that's listening in at all times and is saying, hey, Adam, you might be showing some unconscious bias here. Adam, I think you should ask if this person, like, makes their own snuff, because pneumonia is on their differential. Like, something that's listening in,

cuing me to be better, trying to make me a better human, but also listening to the patient and getting more information from the patient. A computer system like that is something that makes the medical encounter more human, which I hope is what we want. You want the darkest timeline next? Yes, please.

I don't know if you guys know this, but AI technologies are already being rapidly spread out in clinical care. They're listening to doctors' encounters with their patients. They're writing notes. They're writing the first drafts of, like, the messages when you talk to your doctor on a portal. I just wrote a piece in

the New England Journal of Medicine, where I originally called it "Language Models and the Enshittification of the Electronic Medical Record." It turns out the New England Journal of Medicine doesn't consider that an academic term, so they changed it to "degradation." But what we're seeing so far is not the model that I see,

that I am advocating for and what I'm researching and pushing for. But a system that's obsessed with efficiency isn't really worried about some of the downstream effects on what this means for our relationships. And it's just going to, like, yeah, you'll get these more efficient tools. So you'll see twice the number

of patients in a day. We'll just put this AI text in the chart so we can bill off of it. And a system that might use these powerful, efficient tools to, like, squelch out the tiny bit of humanity that remains in medicine. So that to me is the darkest timeline, and what I want to avoid for you. I don't think there's a choice. You have to engage with this technology. It's going to change

every single white-collar field. We're the ultimate white-collar field. It's going to change our field. And I see a way that we end up with Dr. Crusher on the Enterprise, but I also see a way that we end up in, I don't know, what's a dystopia? I'd say Blade Runner, but I don't think there are any doctors in Blade Runner, so this analogy is going to fall apart.

You know, I mean, to me, like an optimistic gloss on all of this is the upside in making this kind of care much more accessible, right? Like if all of a sudden I can just check my basic symptoms with ChatGPT, maybe that does provide me some...

benefit. Now, obviously, a lot of people have been doing this for decades with WebMD, and there are, you know, sort of a lot of jokes about that. A lot of people are sort of quick to use WebMD to assume that they have the very worst condition, and also, like, constantly seeking medical care can, like, create its own set of problems.

But if you're just sort of the median person, I can also just imagine checking in with my virtual doctor a couple of times a month and getting some tips about how to live a healthier life. Oh, so yeah, absolutely. We're not there yet, but I think that's the way things are going.

And to be clear, the reality is terrible. Like, how long does it take you guys to see your primary care doctor? I'm a doctor and it takes me forever. So maybe we'll have a system that can do those basic things, but also recognize when it needs to step you up, like, triage you appropriately. And maybe you'll have a system where, instead of referring you to a specialist, your PCP will be able to work with that system to...

answer something that you would have needed a specialist for before, or a system that says, hey, you don't need to go through the referral system, go straight to the orthopedic surgeon. So I think there's a lot of hope. And I acknowledge the baseline is terrible. Our medical system isn't really serving our patients. And if we're thoughtful about this...

Like, that's okay if my power is eroded. Like, we'll get better care for everybody if we're thoughtful about it, which, if you've looked at the history of how medicine has happened in this country, is not always the case. And I'm curious: you're a doctor. You trained for many years to become a doctor. You amassed a lot of knowledge that has made you good at that job. What is your emotional reaction to the findings of your own study? Um, yeah. I mean, I am...

It's a lot of emotions, right? I'm both excited and I'm freaked out. I'm not the typical doctor. I am a historian. I deeply care about how people think. I feel like I'm on the edge of something new, which is exciting. But to me... Like, I love talking to people. I love meeting new people. But one of the things that I love is the intellectual part of my work. That's what makes it.

I don't love sitting down and writing billing codes and saying, is this a level two? I hate that part of my job. But the part where I get to talk to somebody and figure out what's going on with them so I can make them better, that's my favorite part. But at the end of the day, I'm here for my patients. So I'm conflicted, but there...

It's clear to me what the right thing to do is, which is do the right thing for the patient, even if it means giving up something that is dear to me. Yeah. I mean, that strikes me as a good model for people in all kinds of industries as the AIs

do get better at doing our jobs. It seems like the North Star should be like, what is the actual work that I am performing? And if an AI can do that better than I can, then maybe that's better for the world. Well, you know, there is another approach that I wondered if you considered,

which is to say that, you know, essentially these chatbots were trained on a bunch of work that real doctors did. Those doctors are not being compensated. The primary effect of these chatbots being in the world is that the salary of a doctor could go way down. Has there been any talk among doctors of saying, let's actually get together and stop these things from draining all of the money out of our industry?

Oh, yeah. I mean, yes. I think that in the grand scheme of things that doctors are worried about as threats to their career, this is low right now. These are all theoretical talks, but I suspect we're going to hear more of that. It's weird. When you're in a profession... Wait, am I allowed to swear on Hard Fork? Yes. I actually believe in that old bullshit about the doctor-patient relationship being the most important thing above all else.

I believe that. That's why I'm such a pain in the ass. So like... If this thing can do a better job than me at making my patient's life better, then it seems to me that, regardless of those guild issues, that's the right thing to do.

Yeah, it'd just be interesting if we live in a world where the actors have successfully prevented movie studios from replacing them with AI, but the doctors are like, well, I guess that's fine. That might happen. So, after having done this study and continuing to do work in this area of AI and medicine, do you feel

more optimistic about the future of medicine, or do you feel like we're headed into this kind of dark timeline where AI is just making all the decisions and we sort of suck the humanity out of the healthcare system? I see the market forces at play here. And my worry, and the way that I see things being rolled out now, is that we're veering not directly towards the darkest timeline, but that we're heading in that direction. And I think that we need to...

be really thoughtful. And the "we" is not just doctors. Patients need to have a voice in this also. This is ultimately who this is about, about what type of health system we want and how we want these technologies to be used. But I'm actually worried about us heading there. Like, the current timeline's pretty dark, guys. You get five minutes, ten minutes with your doctor, and they don't look at you, and they type on the computer. Like, that's not good.

So medical errors: up to 800,000 Americans are killed or seriously injured each year because of medical errors. Every one out of five dollars that Americans make goes into the health care system. So this darkest timeline thing isn't that far away. And I, well, I'm a natural pessimist, but I'm trying. I'm like Don Quixote. I'm trying to go for the good timeline, even though it probably won't work.

If any young people are listening to this who may have been interested in becoming a doctor or entering the healthcare profession, what would you advise them? Should they not become doctors because AI is going to take that job? Well, the problem is, what are you going to suggest that they do instead? Like, if we're talking about technologies that can do this. At this conference, they played something on NotebookLM with a fantastic podcast host. So I don't know. So I think that...

We're talking about tasks of doctors that might be automated. And it's going to be working together for a while. And we're not talking about the job as a whole. And fundamentally, it's still a job about human connection and making people better. And if that is what you want, I would do that. Also, surgeons and proceduralists are not going anywhere. So I wouldn't dissuade somebody from medicine, but they should know that's what they're going into it for. And it's not going to be like Dr. House.

I actually have never seen House. I always just use this example despite having never seen the show, so I'm a phony. But it's not going to be that cognitive part. It's going to be something different. And that's scary because I can't predict it. I mean, medical students ask me this, and I don't have an answer for them.

Well, it's a fascinating conversation, Doctor, but I will be seeking a second opinion, actually. I think it's just important. You should ask ChatGPT. Thank you so much. Thanks, guys. Well, that was fun. Thank you, Adam. I learned a lot. When we come back, crime doesn't pay, but it does play on the Hard Fork podcast. I see what you did there. Yeah.

It's Melissa Clark from New York Times Cooking, and I'm in the kitchen with some of our team. Nikita Richardson, what are you making for Thanksgiving this year? I'm making the cheesy Hasselback potato gratin featuring layers of thinly cut potatoes. Very easy, but it's a real showstopper.

Genevieve Ko, what about you? I'm actually doing a mushroom Wellington puff pastry wrapped around this delicious savory mushroom filling, arguably as stunning, if not more so than a turkey. No matter what kind of Thanksgiving you're cooking, you can find the recipes you need at NYTCooking.com. Well, Kevin, in the criminal justice system, the people are represented by two separate yet equally important groups: the police who investigate crime and the media who turn those crimes into podcasts.

And from time to time here on Hard Fork, we like to survey the landscape of crime and punishment for a segment that we call Hard Fork Crimes Division. Right now in this segment, we... seek justice, I think it's fair to say. We will not be solving any crimes, but we will describe them. Or certainly we will describe what has been alleged. I was just saying, we have not yet solved a crime, but it's not out of the realm of possibility for the future.

Not at all. We are always gathering evidence. And perhaps we should turn to our first case, Kevin. Yes. Let me crack open this case file. The FBI searches the home of the founder of the Polymarket betting website. Did you see this one? I did. This was juicy.

Polymarket founder Shane Coplan had his home searched by the FBI last week as part of a criminal investigation into whether Coplan was running Polymarket as, quote, an unlicensed commodities exchange, which is apparently illegal. And they seized Coplan's electronic devices, including a phone. Yeah, that's not a good thing when that happens to you. Now, Kevin, after Shane Coplan's phone was seized, he posted the following on X. New phone, who dis?

So, Kevin, remind us who this Shane Coplan character is. So this is the young founder of Polymarket, which is the sort of leading crypto prediction betting market platform. It rose to prominence during the election, where people wagered millions of dollars on who was gonna win the election. And as my colleague David Yaffe-Bellany told us on the show a few weeks ago, it was sort of nominally illegal in the US, but lots of Americans were

using it anyway through VPNs and things like that. And it was sort of an open secret that it had this large base of customers in the U.S. despite not technically being allowed here. Yeah, so I think that the FBI has some questions about that. But a Polymarket spokesman said, why not, that the raid was, quote,

obvious political retribution by the outgoing administration. Yeah, the theory here, at least the one that's being sort of advocated by Polymarket's fans and defenders, is that, you know, the Biden Justice Department and FBI were so mad about the election, and the fact that people on Polymarket had predicted that Trump would win, that they, I don't know, went after the company on some bogus charges. And here's why I don't think that's true: could you imagine explaining Polymarket to Joe Biden?

It's like, Mr. President, it's a prediction market. People bet cryptocurrency on the outcomes of various events. Not in the United States, but they would VPN into it. By the time you've gone to VPN, Joe Biden has truly fallen asleep. I don't think so. I bet Joe Biden has used a VPN. You think so, Joe? What, like watch Netflix movies that are unavailable in the United States? BBC Mysteries. So what do we know about why they are being investigated here?

Because if it is true that large numbers of Americans are illegally betting on elections by using VPNs, that could be a violation of the law. You know, DYB told us that people were openly describing how to get around the ban on US bettors in the Polymarket Discord. Yes. So I think at the very least, the FBI is going to say, you need to tighten this up a little bit and make it a little bit harder for Americans to use this service. Yes.

Some reports have said that this investigation predates the election. This was in process long before. It's also not Polymarket's first run-in with the law. They previously settled with the CFTC, the Commodity Futures Trading Commission, in 2022 and paid a fine as part of that settlement. But this is something new. This is bigger. And I would say if you are a Polymarket fan in the U.S., you probably should stop doing that. Can I tell you how I think this

resolves? How? Shane Coplan running the Federal Reserve. Stay tuned. It's gonna be a wild 2025. Yes. Let's open case number two, Kevin. What do we have? Well, Kevin, Razzlekhan, crypto's most embarrassing rapper, some say, is going to prison. Remember Razzlekhan? I sure do. Heather "Razzlekhan" Morgan, who's a former blogger at Forbes and part of the Forbes to Prison Pipeline and creator

of cringy crypto-tinged rap videos, was sentenced to 18 months in federal prison this week after pleading guilty last year to helping her husband, Ilya "Dutch" Lichtenstein, launder 120,000 Bitcoin he stole by hacking the crypto exchange Bitfinex back in 2016. Do you know how much 120,000 Bitcoin were worth in 2016? 2016, let's see. Probably not

as much as they are today. They were worth $71 million back then. They are worth $11 billion today. That's quite a haul. Old Dutch and Razzlekhan really almost got away with it. They would be living large. I was obsessed with this story when it came out, when they got arrested, because it was sort of like out of, like, a very pulpy spy novel. Like they had fake passports and they were this sort of Bitcoin Bonnie and Clyde. And they were just like...

these cringy millennials who were trying to get famous on the internet, but also stealing a bunch of Bitcoin to make themselves very rich. It was just... Our friend Nick Bilton has a documentary coming out about this case on Netflix that I'm very excited to watch, because I'm truly obsessed. Should we, should we hear a little bit of Razzlekhan's work? Let's do it. Yeah. If we could hear a clip, please. Watch out, language. See, now.

To me, this just goes to show how much the culture has changed, because there was a time when people would have looked at what Razzlekhan did and simply said, she's being Fergalicious. But in sort of the woke moment that we're in now, stealing 120,000 Bitcoin gets you a year and a half in jail. Yeah. Really sad. Do you think that her rap career will be an asset in jail? Absolutely. To her reputation? I would not be surprised if Razzlekhan

is the most popular person in the prison that she's in, and if it fuels the next phase of her journey. And in fact, she posted on X that she will, quote, soon be telling my story, sharing my thoughts, and telling you more about the creative and other endeavors I've been working on. So, you know, I don't know what that means, but I will say I would love to see a Razzlekhan jukebox musical.

Tell the story of Razzlekhan in her own words through her own music. Yes. And I should say I look forward to Razzlekhan's appointment to head the Securities and Exchange Commission. Next crime. What do we have? Kevin, Gary Wang, a top FTX executive, has been given no prison time. What did he do? Well, Gary Wang, Kevin...

was the last of the legal cases against FTX. You might remember some of FTX's more famous co-founders, such as Sam Bankman-Fried, who was sentenced to 25 years in prison for his role in the FTX fraud, or Caroline Ellison, who was sentenced to two years in prison. Most recently, Ryan Salame was sentenced to seven and a half years in prison. Salem.

David Yaffe-Bellany literally has to put a pronunciation guide in his stories for this name, because everyone calls him Ryan Salami, but it's Salem. Do you know what they called the case against Ryan Salem, Kevin? I think I know where you're going with this. The Salem Witch Trials! Yes. I knew that was going to happen. More like the Salem Rich Trials. Am I right? That's a good... better one.

Wait, I got a snort out of you for that. That was good. That was good. So anyway, so that leaves Gary Wang, the fourth member of the crew here. Actually, that's not even true. There's another guy, Nishad Singh, who was sentenced to time served. So Gary Wang was the last of these cases to be, you know, resolved. And it was resolved this week. And he was given no prison time. And the reason is he snitched so hard on SBF that the government basically gave him a standing ovation.

During the sentencing hearing, one prosecutor said that Wang was, quote, the easiest cooperator they've worked with and provided essential information to them. So he basically got the best snitch award and it kept him out of jail. Which is a good reminder

that cooperating with the government in a fraud investigation can have benefits. Now, Kevin, the FTX legal saga has really, you know, taken place from the start of this podcast, you know, and now it's sort of wrapping up. So do you have any sort of feelings of nostalgia or other reminiscences from two years of FTX? You know, I have been just...

I'm very interested in this whole saga, not just because I think it was a big deal in the world of crypto, but because it has had all of these strange ripple effects, including... I was talking with someone this week about this, but the investment that SBF made in Anthropic, the AI company,

has essentially paid back all of the investors who would have lost money on the FTX fraud, because that stake has turned out to be worth a ton of money. And so even though Sam Bankman-Fried was a fraudster and is now serving time in prison, it turns out he was actually a pretty good tech investor. If he gets out of prison and you just run into him and he's like, you know where you should put your money, would you listen to him?

Yes, honestly, I would. You know what? I might too. I believe in second chances for people. And Sam, if you're listening, I would love your investment advice. I could really use some updates to my portfolio. Sam, if you're listening, you're not supposed to have a cell phone in there, so be careful. You don't think you can get podcasts in prison? That'd be the worst part about going to jail.

Well, Kevin, we have one more case to look at. A phone network has employed an AI grandmother to waste scammers' time with meandering conversations. Yes. As you know, there are now these scammers who will call people using an AI voice pretending to be, you know, a long lost cousin or their grandmother or something, and just try to steal money from them by impersonating someone. But this is a story that comes to us from the UK,

where the largest mobile phone operator in the UK, O2, has created a new AI system called Daisy to trick scammers into thinking that they are talking to a real person. It basically has been given the goal of just rambling and keeping them on the line for as long as possible. So wasting the scammers' time. I'm sure you've seen there are all these YouTube videos now of people

whose whole shtick is that they take scam phone calls and then they try to scam the scammers. But that is labor intensive. And so now O2 has come along and said, we can actually build an AI that just wastes the scammers' time for you. And I think that's a great development. I agree. I've read that they've sort of designed it to keep the scammers on the phone for as long as possible, but they're also trying to learn what tricks and techniques the scammers

are using so that they can share that with maybe their customers, maybe the police, and help prevent people from falling for these things. O2 said that Daisy has managed to keep some people on the phone for up to 40 minutes. I'll just say it. If an AI...

voice is keeping you on the phone for 40 minutes, you're a bad scammer. Terrible scammer. You're bad at your job. You can tell instantly when it's an AI on the other end of the line. At least I think I can. Well, there's usually some sort of delay, right? And presumably that's going to disappear

here. But for now, I guess I feel somewhat confident. Now, I will say that consumers cannot use Daisy. But what O2 did was add it to the list of what they call easy target numbers used by scammers. So sort of sharing it around and saying, hey, you know, this Daisy is a really easy mark. So that's cool. But I will say it does make this feel a little bit more stunty to me. Although I guess, as I think about it, I'm not exactly sure how consumers would be able to,

I don't know, flip a button to get, you know, Daisy to answer their scam calls. Oh, I think this could work, because you know how Apple or other mobile devices can now sort of say "scam likely" when someone calls you from an unknown number? Yeah. You could just press a button and it would put Daisy on the line, and it could just waste their time. I think we should deploy this. Wait, that's actually genius. Like, I want to do this. Yes.

Do you like these sorts of vigilante schemes to take back the power? You know, I mean, look, there is always pleasure in seeing justice done. Yes. An injustice being righted. You know, I have to say I have enjoyed YouTube videos of, like, porch pirates being apprehended. The glitter bombs. The glitter bombs. I find that very satisfying. This is when, like, you disguise something as a package. Someone steals it. They open it up.

It sprays glitter everywhere and, you know, sets off an alarm and some horrible-smelling stuff. And yeah, this is a very popular genre of YouTube video. Most people do not, uh, very often have an experience of justice. You know, it's like, you see injustice everywhere, but the moment that you actually see, like,

a wrong being righted is like transcendent. I remember one time I was on the freeway and everyone was, like, trying to merge onto a different freeway. And so you're just sitting in bumper-to-bumper traffic and you're going forward at one inch an hour, and somebody gets impatient and they pull onto the shoulder so they can just get around everybody, because I guess they had somewhere to be.

And about one second after the person pulled onto the shoulder, I saw siren lights go up, and a police officer just went and, you know, pulled that person over and, you know, got them in trouble. And that was like my greatest experience of justice. And that happened 20 years ago, and I think about it all the time.

I'm so glad that happened. Anyway, thanks, Daisy. Thanks, Daisy. And the sooner I can have you on my phone to deter the scammers, the happier I'll be. And that's the Hard Fork Crimes Division. Case closed. Yeah. Before we go, we have a special request. If you can, we would really appreciate it if you filled out a quick survey. You can find the survey at nytimes.com slash hardforksurvey. Your answers will not be

published in any way. They will just sort of help us make the best show we possibly can and understand more about who listens to the show in the first place. Again, you can find the survey at nytimes.com slash hardforksurvey. We'll drop the link in the show notes. Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Jen Poyant.

This episode was fact-checked by Ena Alvarado. Today's show was engineered by Alyssa Moxley. Original music by Marion Lozano, Diane Wong, Leah Shaw Dameron, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Chris Schott. You can watch this whole episode on YouTube at youtube.com slash hardfork.

Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at hardfork at nytimes.com with whatever disease ChatGPT just told you that you have.
