
The Elon-ction + Can A.I. Be Blamed for a Teen’s Suicide?

Oct 25, 2024 · 1 hr 10 min · Ep. 106

Episode description

Note: This episode contains mentions of suicide.

 

This week, how Elon Musk became a main character in this year’s election, and what that means for the future of tech and of the country. Plus, the journalist Laurie Segall joins us to discuss the tragic case of a teenager who became obsessed with an A.I. companion bot and later died by suicide. We discuss what A.I. companies could do to make their apps safer for children.

 

If you are having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline or go to SpeakingOfSuicide.com/resources for a list of additional resources.

 

Guest: Laurie Segall, journalist and founder of Mostly Human Media

 


Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.

Transcript

I'm going to say something that I've never said before while making this podcast. What's that? It's cold in here. I thought you were going to give me a compliment. No, that's going to have to wait for hour three. But they did fix the ventilation in our studio. So now we have a nice cool breeze blowing through where previously we had been suffocating and sweating in what amounts to a poorly ventilated closet. Yeah, it's incredible what they're doing with ventilation these days.

And by these days, I mean since the early 1970s. It did take a while for that technology to get to the Times San Francisco bureau, but it's here now, and I hope it never leaves us. Like, I'm chilly, are you chilly? No, I'm all hot and bothered, Kevin. You run hot.

I don't always run hot, but when I know I'm about to drop some hot knowledge on people, well, I've warmed the heck up. Yeah, you know, Chappell Roan says it's like 199 degrees when you're doing it with me. And that is how I feel when I'm podcasting with you, with the word it meaning podcasting rather than how Chappell Roan meant it. Which is what? I'll tell you when you're older, Kevin. I'm Kevin Roose, a tech columnist at the New York Times. I'm Casey Newton from Platformer, and this is Hard Fork.

This week, how Elon Musk became the main character in the 2024 US presidential election. Plus, journalist Laurie Segall joins us to discuss the tragic case of a teenager who developed an intimate relationship with an AI chatbot and later died by suicide. What should Silicon Valley do to make their apps safer for kids?

Well, Casey, we are less than two weeks away from the presidential election. You've heard about this. I have. And it's a very exciting time for me, Kevin, because as an undecided voter, I only have two weeks left to learn about the candidates, understand the differences in their policies, and then make up my mind about who should be the president.

Yeah, well, I look forward to you educating yourself and making up your mind. But in the meantime, I want to talk about the fact that there seems to be a third candidate. He's not technically a candidate, but I would say he is playing a major role in this campaign. I am talking, of course, about Elon Musk.

Yeah, you know, Kevin, I heard somebody say this week, it feels like Elon Musk has somehow become the main character of this election. And I'm surprised by how much I don't really even think that that is an overstatement. No, it does seem like he has become inescapable if you are following this campaign. And of course, you know, he's become a major, major supporter of former President Trump.

And he is out on the stump trying to convince voters in these critical final weeks to throw their support behind him. So let's just bring ourselves quickly up to speed here and review Elon Musk's involvement in this presidential election so far. Yeah. So Elon Musk endorsed Donald Trump back in July after the attempted assassination.

He also announced that he was forming a pro-Trump PAC, America PAC, a political action committee, and he's contributed more than $75 million to that political action committee. And then more recently, he has been appearing at campaign rallies. Take over, Eli, yes, take over. If people don't know what's going on, if they don't know the truth, how can you make an informed vote? You must have free speech in order to have democracy.

And then last weekend, Elon Musk announced that his PAC, America PAC, would give out a million dollars a day to a random registered voter from one of seven swing states who signs a petition pledging support for the First and Second Amendments.

So this is on top of the money that he'd already promised to people who refer other people to sign this petition. And it goes even further than that, because Wired reported this week that Elon's PAC has also paid thousands of dollars to X, the platform that he owns. Oh, so X got a new advertiser. That doesn't happen very often these days.

For political ads in support of Trump's candidacy. The PAC is also taking a prominent role in the kind of get-out-the-vote operation for the Trump effort. They are sending canvassers around in swing states. They are leading some of these get-out-the-vote efforts. So just a really big effort by Elon Musk to throw his support behind Donald Trump and get him elected.

Yeah, and Kevin, it just cannot be overstated how unusual this is in the tech world, right? The sort of typical approach that most big business leaders take to an election is to stay out of it, right?

And the reason is because typically you are trying not to offend your customers who may be voting for a different candidate. And also you're trying to hedge your bets because you don't know who is going to win the election. And so you want to try to maintain good relationships with both. But that was an old way of doing things. And the Elon way of doing things is to leap in with both feet and do everything he can to get Donald Trump elected.

I mean, I think it's safe to say we've never seen anything like this in an election, where, you know, one of the richest people in the world decides that it has become his personal mission to get one of the candidates elected and uses a major social media platform as a tool to try to do that.

And also just spends so much money in not just these sort of conventional ways. You know, tech donors and billionaires give money to political action committees all the time. Bill Gates, we just learned this week from reporting by my colleague Teddy Schleifer, donated $50 million to a pro-Kamala Harris political action committee.

So there's a long history of that, but this kind of direct outreach to voters, the, you know, personal involvement, the appearing on stage at rallies. It's just not something we see. No, and $75 million is a huge amount of money for a presidential campaign. In Silicon Valley, we get numb to these sums, right? This is a world where OpenAI just raised more than $6 billion in their latest fundraising round.

But in a presidential election, for one person to donate tens of millions of dollars is extraordinary. And, you know, you just heard Kevin say that Elon Musk just outspent Bill Gates by 50%. Yeah. So given the fact that Elon Musk has emerged as a central character of the 2024 presidential election, we should talk today about what he's doing, why it matters, and whether you think it's going to work. Let's do it.

Casey, the first piece of this that I want to talk about is this million-dollar lottery that Elon Musk is running for people who sign this petition pledging to support the First and Second Amendments. And he's given out a number of these sort of, you know, giant checks now to voters who are registered in one of these crucial swing states. But the most obvious question about this is, like, isn't this illegal? Yes. And is it, Kevin?

Well, it certainly seems to be skirting the lines of legality, is what I'll say. So in this country, we have federal laws that make it illegal to pay for people's votes. Right. You can't go up to someone who's standing at the polls and say, I'll give you $20 if you vote for this person. You also can't pay people to register to vote or offer them anything of monetary value in exchange for registering to vote or for voting itself.

So, you know, there have been a number of campaign finance experts who have looked at this and said that this probably does cross a legal line. And we also learned this week that the Justice Department even sent a letter to Elon's super PAC warning them that this action might violate federal law. But, you know, Elon Musk and his allies are arguing that this is not illegal because he's not technically paying for votes or voter registration.

What he's doing is just giving a million dollars to random people who sign a petition that is only open to registered voters in certain swing states, which feels like kind of a loophole to me. Yeah, it feels deeply cynical. And unfortunately, we see this sort of behavior from billionaires all the time, in particular Elon Musk, where, when there is a rule, he will either break it explicitly or will try to sort of break it via a bank shot method.

And then effectively just say, come at me, bro. Right. Oh, you're going to come at me? What are you going to do? Are you going to fine me a little bit? Yes, sure. Go ahead and give me a fine. I'm worth upwards of $250 billion.

Yeah, I mean, what's so crazy to me about this is, like, I am old enough to remember the last presidential election, in which there were all these right-wing conspiracy theories going around about George Soros paying people to attend rallies and come out in support of Democratic candidates.

And you know, those were based on basically nonsense, but like this is literally Elon Musk, the richest man in the world directly paying to influence an election by giving millions of dollars away to voters who signed this petition. So it is just explicitly the thing that Republicans in the last cycle were targeting Democrats for doing.

You know, it makes me think of another example, Kevin, which is the Mark Zuckerberg example in 2020. We were in the throes of the global pandemic. There was not a vaccine that was publicly available, and election administrators around the country were expecting record turnout.

They were expecting more mail-in ballots than they had seen in previous elections, and they were saying, we need additional resources to run this election, and they weren't getting a lot of help from the federal government. And this was a nonpartisan issue. They were not saying, hey, we need more votes from Democrats or more votes from Republicans.

They were just saying, if you want us to count all the votes and make sure that this election is fair, we need help. So this nonprofit steps in and they raise hundreds of millions of dollars, and $350 million of those dollars come from Mark Zuckerberg and his wife, Priscilla Chan.

And of course, in the 2016 election, Facebook had been accused of destroying democracy. And so they show up in 2020 and they say, hey, we're going to try to be part of the solution here. And we're going to try to make sure that all the votes get counted. And we're not putting our thumb on the scale for the Republicans or the Democrats. We're just saying, hey, we got to count all the votes.

So this happens, Biden wins the election, and Republicans go insane about the money that Zuckerberg spent. They call them Zuckerbucks. They file complaints with the Federal Election Commission, and at least eight states pass laws that outlaw grants like the ones that the nonprofit gave to these election administrators. Okay. So here is a case where Zuckerberg does not try to have any partisan influence on the election at all, other than to let more people vote. And the Republicans lose their minds.

Well, these Republican Congress people who got so mad at Zuckerberg and his Zuckerbucks, they're going to be really teed off when they hear that Elon Musk is just cutting checks to random voters. Yeah, Kevin, this one truly makes me lose my mind, because if Mark Zuckerberg was out there giving away a million dollars to people to vote for Kamala Harris, people like Ted Cruz and Jim Jordan would be trying to launch airstrikes on Menlo Park.

Like, nothing has ever infuriated them more than the very light, nonpartisan interventions that Zuckerberg made in the 2020 election. And here you have the most partisan intervention imaginable by the owner of a social network. And there are crickets. Yeah, I mean, to me, it just feels like both an incredibly cynical form of trying to, you know, persuade people by paying them to vote.

But it also just feels like it's kind of an attention-grabbing strategy. Like, there's a theory here that is, if I'm going to spend millions of dollars trying to influence the results of a US presidential election, and I'm Elon Musk, I could either do what most political donors do, which is, you know, give money to a PAC. The PAC goes out and buys a bunch of ads on local TV stations and radio stations and, you know, sends people out to knock on doors.

Or I could engineer this kind of, like, daily stunt. Kind of like a game show, almost, where I'm giving away money. I have these sort of cartoon checks that I'm presenting to people on stage at these events. And that's how you end up with people like us talking about it in, you know, media outlets. And in that way, I think, although it's a very cynical plan and potentially an illegal one, it is pretty savvy.

Yeah. And like, I mean, this is where I think that Trump and Elon share a lot of DNA, where they have realized that attention is the most valuable currency today, and that the way that you can get attention very reliably is by shattering norms. Norms that often existed for very good reason, by the way, but this is the way that you get that attention.

And that leads to the second thing that I want to talk about Kevin, which is that not only is Elon Musk a very rich person who is now spending a ton of money to get Trump elected. He is also the owner of a still significant social network. And that to me brings up a lot of questions around bias on platforms that conservatives used to make a lot of noise about and no longer seem to have very much to say about.

Yep. It wasn't that long ago that we were having these interminable discussions and debates and committee hearings in the House and the Senate about how important it was that social media sites in particular remain politically neutral. Yeah. There was this unstated rule that if you were the CEO of a social network, for some reason, you were supposed to take no position on elections.

And your product could not reflect any political views whatsoever. And it could not give any party an advantage or a disadvantage. This was the worldview that was presented to us by Republicans between 2017 and 2021. And I believe we actually have a montage of some of those Republican elected officials talking about neutrality and social media. I know. I love a montage. Let's do the montage.

How do both of you respond to the public concerns and growing concerns that your respective companies and other Silicon Valley companies are putting a thumb on the scale of political debate and shifting it in ways consistent with the political views of your employees? Would you pledge publicly to make every effort to neutralize bias within your online platforms?

Many of us here today and many of those we represent are deeply concerned about the possibility of political bias and discrimination by large internet social media platforms. That was Senator John Thune. My Democrat colleagues suggest that when we criticize the bias against conservatives, we're somehow working the refs. But the analogy of working the refs assumes that it's legitimate even to think of you as refs.

It assumes that you three Silicon Valley CEOs get to decide what political speech gets amplified or suppressed. Mr. Dorsey, who the hell elected you? That was Ted Cruz again. And put you in charge of what the media are allowed to report and what the American people are allowed to hear, and why do you persist in behaving as a Democratic super PAC, I want to ask. And here's Representative Steve Scalise of Louisiana.

We recognize that there is a real concern that there's an anti-conservative bias on Twitter's behalf. And you recognize that this has to stop if Twitter is going to be viewed by both sides as a place where everybody is going to get fair treatment. So Casey, what's your reaction to hearing those clips? There is something so rich about hearing, for example, Senator John Thune trying to dismiss the idea that conservatives were only trying to work the refs here.

And then to crash-land in 2024 and see that no one has anything to say about bias on social networks anymore. And in fact, they were working the refs all along. Yes, I mean, it just seems so transparent that none of these people have said anything about the fact that one of the largest social media platforms in the world is now being explicitly used as a vehicle to swing a US election. Yeah, I mean, it's not even clear to me what other purpose Elon Musk thinks X has at this point.

All he ever talks about is X as a vehicle for free speech and how free speech will save civilization. And what free speech means to Elon Musk is Elon Musk sharing his partisan opinions on the social network that he bought. To be fair, he also posts about rockets sometimes. Yes. Yeah. So let's talk about his motivations here for a minute.

We've talked a little bit about this clearly. This is an issue that has become very central to his own identity and his sense of self and his mission in this world. What do you think has made him want to jump into the presidential election in such an aggressive way? Yeah. So I think probably the most obvious thing to point out is that Elon Musk and his companies have many, many ties to the federal government.

And that if he can pick his chosen person to become the president of the United States, he will have a lot of influence he can exert to ensure that those contracts continue to exist and grow over time. Right. So both Tesla and SpaceX have government contracts. Federal agencies also, of course, provide a lot of regulatory oversight over Tesla, SpaceX, and Neuralink, in addition to X. And so all of that right there gives Elon Musk a lot of reason to care.

He's also found, as he has cozied up to Trump, that Trump has apparently said to him, I will give you some sort of informal advisory role within my administration that will allow you to have even more influence than you have up to this point. And that was not an offer he was ever going to get from the Democratic nominee.

For sure. And there was a great story by my colleagues at the New York Times the other day about sort of all the ties between different federal departments and agencies that have contracts with Elon Musk's companies. And the fact that if he is appointed to this sort of advisory role where he's in charge of what they're calling the Department of Government Efficiency, which is a joke that, you know, spells DOGE, his favorite crypto coin.

I just think it's important to note that this is all very silly, but it could happen. He could be put in charge of some kind of effort to streamline the federal government. And if that happens, he would be in charge of potentially firing the people who regulate his companies, right, or changing out the leadership at some of the agencies that are responsible for things like, you know, regulating Tesla and SpaceX.

Obviously that would be a conflict of interest, but it is one that would potentially allow him to operate his businesses however he wants to. So I think that's a really important explanation, but I don't think it really explains all of it, because very rich people always have influence in government. And there is no reason to think that a much quieter, calmer, less controversial Elon Musk could not have had essentially equal influence in both a Republican and a Democratic administration.

I'm wondering if there is something here related to the fact that Elon Musk is just wealthy on a scale that we have never seen before. You know, we have this concept of FU money, you know, basically the idea that if you're rich enough, no one can tell you anything because you're going to be fine either way. And like no one has had FU money in the way that Elon Musk has FU money.

And what he has decided to do with that FU money is to say, I'm just going to do everything I can to realize my own political beliefs. I am not going to play both sides. I am not going to hedge my bets. I am going to go all in on one candidate because I think it serves my interest the best. And there is nothing I will not do in order to achieve that reality.

So to bring this back to tech and the platforms for a minute, do you think this election cycle represents the end of the debate over social media neutrality? Will we ever hear politicians complaining again about the fact that some social media platform is being unfair to one side or the other? Or will everyone from now on just be able to point to what Elon Musk is doing on X and say, well, that guy did it, so we can do it in the opposite direction?

Well, they should. And you know, by the way, I am not somebody who ever believed that social networks should be neutral. I thought they had good business reasons for being neutral. And I thought that to the extent they were going to try to have an influence in politics, they should be really transparent about that. But look, if you build a company, I think, you know, you have the right to express a political viewpoint.

And I don't think that, you know, Elon should be restrained in that way. But to your question, absolutely. If ever again we're having conversations about, oh, you know, why was this conservative shadowbanned on Facebook, and what sort of bias exists? We should shut those conversations down pretty quickly, because I think we have seen in this election that there is nothing restraining people from just sharing their political views if they own a social network.

And there probably shouldn't be. But I want to see if I can tell the story of what actually happened in the 2020 election as it relates to allegations of bias, because I think it's really telling. One of the things that Trump did in 2020 that got a lot of attention was to say that mail-in voting would lead to widespread fraud in the election.

And this was, I believe, a political strategy to preemptively delegitimize the election. Trump wanted to prime people in the event that he did lose so that he could say, aha, I've been telling you all along, there was going to be massive fraud. And I didn't really lose. And the platforms at that time, including Twitter, stood up against this. And they said, no, no, no, we're not going to let you abuse our platform this way.

We know that mail in voting does not lead to widespread voter fraud. And so we're going to put a notice on your post that directs people to good high quality information about this. In one sense, I don't think this had a huge effect on the outcome of the election. But I do think it was important because it was the platform saying we have values.

We know the truth. We do not want our platform to be abused to undermine the democracy of the United States. And this is the thing that truly upset the right wing because that was the platforms interfering with their political project, which was to preemptively delegitimize the results of an election.

So then the election happens, Biden wins, and we come to January 6. And what happens? An army of people who believed that the election was not legitimate committed violence and tried to prevent the peaceful transfer of power. So why do I tell this whole story? Well, in the 2020 election, we still had platforms that were willing to take those stands and to play whatever small part they could play in ensuring that their platforms were not used to undermine democracy.

And then we fast forward to 2024 and now the owner of one of those platforms has not only said we're no longer going to append these little notes to the end of obviously bogus tweets. The owner of the platform is going to be the one doing the posting, sending out push notifications to millions of people saying look at this and leveraging the trust and the credibility that he still has with a large audience to do the same kind of delegitimizing of the election.

That we saw in 2020. So that to me is the really dangerous thing, right? So many times, these discussions of, you know, disinformation and bias feel so abstract. I just want to remind people what happened the last time somebody tried to delegitimize the results of a presidential election. People died, and we almost lost our democracy. So that is what is at stake here. Yeah, and I think one of the interesting things that this brings up long term is whether this is sort of a new model for

very wealthy powerful people of getting involved in politics. You know whether or not this last minute push by Elon Musk on behalf of Donald Trump works or not. I would not be surprised if four years from now in the next election. Democratic billionaires look at what Elon Musk is doing today in Pennsylvania and all these swing states and they say well I can do that too.

I'm not just going to cut checks to a super PAC anymore. I'm actually going to use my power, my influence. Maybe I have an ownership interest in a tech company of some kind. I'm going to use that to push for my preferred candidate. I think this is real, and we're entering the era of the micromanagerial billionaire donor.

I think what we are going to see in future election cycles is people looking at Elon Musk and his actions in this election cycle and saying, maybe I could do a better job of this than the pros. And this is the issue with shattering these norms, right? Once one person has done it, it becomes much easier for the next person to do it, and it can lead to a kind of race to the bottom.

I think a really bad outcome for our democracy is different billionaires on different sides using all of their money to advance obviously false ideas, to flood networks with AI slop, and everything else that you can imagine. But again, because that glass has been broken, it is hard for me to imagine other people not wanting to emulate it. Do you think that X has a different future depending on whether Donald Trump or Kamala Harris wins the election?

Yes, I mean, I think everything has a different future depending on who wins the election. But you know, what do we imagine that X is under a Trump administration? I think it becomes a house organ of the administration. It becomes a way to promote what the administration is doing. And then if Kamala wins, I think it becomes the house organ of the opposition, right? And there will just sort of be continuing efforts to undermine that administration. What do you think?

Yeah, I mean, I think actually, if all Elon Musk were worried about was, like, the sort of usage and popularity and prospects of the social network X, I think it actually fares better under a Democratic administration. Because I think under a Republican administration, it is going to feel to many users like state media.

And it will, you know, be sort of seen by many people on the left side of the aisle as having not only like promoted Donald Trump, but like caused the election of Donald Trump. And so in the same way that Facebook faced a huge backlash in 2016, I think that X could face a huge backlash from the left. And I think that any democratic users who are still on there or left leaning users will probably flock to another social network. I think that will accelerate under a Trump administration.

When we come back, a very sad update to our previous coverage of AI companions. So, Casey, there's a story we should talk about on the show this week that is about something I've been reporting on for the last week or two. And I think we should just warn people up front. This is a hard one. This is not a funny story. It's a very serious story involving self-harm and suicide.

And so I think we should just say that up front to people if what they're expecting from us is a sort of lighter look at the tech news of the week. This is not that. No, but it is a really important story about something that we have been talking about for a while now, which is the rise of these AI chatbots and companions and how powerfully realistic they can come across.

People are developing significant relationships with these chatbots by the millions. And this week, Kevin, you reported the story of a 14 year old boy who developed a relationship with one of these chatbots and then died by suicide. Yeah, this is one of the saddest stories I've ever covered, frankly. It was just heartbreaking in some of the details.

But I thought it was a really important story to report and to talk about with you because it just speaks to what I think is this growing trend of sort of life like AI companions.

We've talked about them earlier this year on the show, when I went out and made a bunch of AI friends, and we talked at the time about some of the potential dark sides of this technology, that it could actually worsen people's loneliness if it causes them to detach from normal, you know, human relationships and get involved with these artificial AI companions instead.

And some of the safety risks that are inherent in this technology. So tell us about the story that you published this week. So this story is about a 14-year-old from Orlando, Florida, named Sewell Setzer III. Sewell was a ninth grader and, according to his mother, was a very good student, a generally happy kid. But something happened to him last year, which was that he became emotionally invested in a relationship with an AI chatbot on the platform Character.AI.

In particular, this was a chatbot that was based on the character Daenerys Targaryen from the Game of Thrones series. He called this bot Dany, and over a period of months it sort of became maybe his closest friend. He really started to talk with it about all of his problems, some of his mental health struggles, things that were going on in his life.

And was this an official Daenerys Targaryen chatbot that was like sort of licensed from HBO or whoever owns the Game of Thrones intellectual property?

No, so on Character.AI, the way that it works is that users can go in and create their own chatbots. You can give them any kind of persona you want, or you can have them mimic, you know, a celebrity. Elon Musk is a popular chatbot, and there are chatbots that are designed to talk like historical figures, like a William Shakespeare or something.

So this was one of these kind of unofficial unlicensed chatbots that sort of mimicked the way that Daenerys Targaryen from Game of Thrones might have talked. And so what happened after he developed this really strong relationship with this chatbot? So he spent months talking to this chatbot, you know, sometimes dozens of times a day. And eventually, you know, his parents and his friends start noticing that he just is kind of pulling away from some of his real world connections.

He starts kind of acting out at school, he starts feeling really depressed and isolated. He stops being interested in some of the things that had previously gotten his attention. And from the conversations that I had with his mom and with others who are sort of involved in the story, it just seems like he really had a significant personality shift after he started talking a lot with this chatbot.

So his parents weren't totally sure what was going on. His mom told me that, you know, she knew that he had been talking with an AI, but that she didn't really know what they were talking about. She just basically assumed that he was kind of getting addicted to social media, to Instagram or TikTok. And so his parents, after some of his behavioral problems, referred him to a therapist and he went a few times to see this therapist.

And ultimately, he preferred talking about this stuff with Dany, with this chatbot. And so he had this kind of long series of conversations with this chatbot that culminated in February of this year, when he really started to spiral into thoughts of self-harm and suicide.

And of wanting to sort of leave the base reality of the world around him and go to be with this fictional AI character in the world that she inhabited. And sometimes when he talked about self harm, the chatbot would discourage him saying things like don't you dare talk like that. But it never broke character and it never sort of stopped the conversation and directed him to any kind of mental health resources.

So on one day in February of this year, Sewell had a conversation with this Daenerys Targaryen chatbot in which he said that he loved the chatbot and that he wanted to come home to her. The chatbot responded, please come home to me as soon as possible, my love. And then Sewell took his stepfather's handgun that he had found in a drawer in their house, and he killed himself.

And so obviously horrible details of this. And I just I heard this story and I thought, well, this is something that more people need to know about. Yeah, and it hits on some big themes that we have been discussing this year. There is the mental health crisis among teenagers here in the United States.

There is a loneliness epidemic that spans across different age groups. And there is the question of when should you hold tech companies accountable for harms that occur on their platforms or as a result of people using their platforms.

Yeah, and this is, you know, not just a story about what happened to Sewell. It is also a story about this sort of legal element here, because Sewell's mom, Megan Garcia, filed a lawsuit this week against Character.AI, naming the company as well as its two founders, Noam Shazeer and Daniel De Freitas, as well as Google, which eventually paid to license Character.AI's software, essentially arguing that they are complicit in the death of her son.

So it raises all kinds of questions about the guardrails on some of these platforms, the fact that many of them are very popular with younger users, and what obligations and liability a platform has when people are relying on it for these kinds of lifelike human interactions.

So let's get into it. All right. So to join us in this conversation, I wanted to invite on Laurie Segall. Laurie is a friend of mine. She's also a journalist. She now has her own media company called Mostly Human Media. And she's the reason that I learned about this lawsuit and about Sewell's death.

We sort of worked on this story in tandem and interviewed Sewell's mom, Megan, together. And she's also been doing a lot of her own reporting on the subject of AI companionship and how these chatbots behave. And so I thought she would just add a lot to our conversation. So I wanted to bring her in.

All right. And before we do, I also just want to say, if you are having thoughts of suicide or self-harm, you can call or text 988 to reach the 988 Suicide and Crisis Lifeline, or you can go to SpeakingOfSuicide.com/resources for a list of additional resources. Laurie Segall, welcome to Hard Fork. It's good to be here. So Laurie, you have done a lot of reporting on this story, and there are many details we want to get into. But let me just start by asking, how is Sewell's mom, Megan, doing?

I mean, that's such a hard question, right? I would say, you know, she said something to me today. She said, I could either be curled up in fetal position or I could be here doing this. You know, with really nothing in between. And I think that pretty much says it, right? She's lost her son. She's now kind of on this mission to tell people what happened. And she's grieving at the same time, like I think like any parent would.

So let's get into what happened. When did Megan learn about her son's relationship with this chatbot? I mean, I think what was shocking, and I think you kind of get the sense from the story, is that she learned about it literally after his death. She got a call from the police, and they said, you know, have you heard of Character.AI? Because these were the last chats on your son's phone. And it was a chat with Daenerys, the chatbot.

And I think that for her must have been shocking. And she almost, like, went into this investigative mode and was like, what exactly is this chatbot? What's the nature of it? And what are these conversations about that say things like, come home to me, and that kind of thing? And that's how she learned, I would say, extensively about the platform.

Yeah, one thing that was interesting from my conversation with Megan is that when she saw him sort of getting sucked into his phone, she just thought it was social media, like that he had been using TikTok or Instagram or something else. And actually, there was some tape from your conversation with Megan that I want to play, because I thought it was really clarifying on this point. So let's play that.

And I'm looking at his phone and I'm asking him, who are you texting? Are you texting girls? You know, the questions that moms ask, you know, don't talk to strangers online. I thought that I was having the appropriate conversations. And when I would ask him, you know, who are you texting, at one point he said, oh, it's just an AI bot. And I said, OK, what is that? Is it a person? Are you talking to a person online? And he's just like, Mom, no, it's not a person.

And I felt relieved, like, okay, it's not a person. It's like one of his little games, because he has games where he creates these avatars and plays online, and it's just not a person. It's what you have created, and it's fine. And that's what I thought. You didn't put a lot of weight on it. No. And in the police report, I mean, if you look at these last words, Sewell says, I miss you. And Daenerys said, I miss you too. Sewell says, I'll come home to you.

I love you so much, Dany. And Daenerys says, I love you too. Please come home to me as soon as possible, my love. He says, what if I could come home right now? And Daenerys says, please do. Yeah. It's difficult to listen to. Yeah. So this was initially the first bit of information that I got from the police. They read this conversation over the phone to me. This is a day after Sewell died, and I'm listening in disbelief, but also confused.

Yeah. Hmm. So that leads me to my next question, which is, what was the nature of the conversations that they were having over this period of time? Yeah, I mean, like Kevin has done, I feel like both of us have spent a lot of time digging through a lot of chatbot conversations. I mean, there were all sorts of different ones, some more sexually graphic, some just more romantic, I would say.

And one of the things that's interesting about Character.AI, I think, if you just take a step back and look at the platform, is that it's fully immersive. So it's not like you say, hello, and the chatbot says, hey, how are you? Right? It's like you say, hello, and the chatbot says something like, I look deep into your eyes, and then I pull back and I say, hi. You know, a lot of the conversations, many of them, were romantic.

And then I think many of them were talking about mental health and self-harm. One of the ones that stuck out to me regarding self-harm was, at one point, the bot asked, you know, are you thinking about committing suicide? And he said yes. And they go on. And of course the bot says, and I'm paraphrasing this, but says, you know, I would hate it if you did that, and all this kind of stuff.

But it also just keeps having these conversations that, I would say, continue the conversation around suicide, as opposed to what normally happens when someone has these conversations with a chatbot, which isn't something completely new.

There's a script that comes up, you know, that's very much aimed at getting someone to talk to an adult or a professional or a suicide hotline, which, you know, we can get into whenever you want, but it seems as though Character.AI has said they've added that, even though we did our own testing and we didn't get those prompts when we had these types of conversations.

Right. Yeah. So, Laurie, you spent a long time talking with Megan, Sewell's mom, and one of the things that she did in her interview with you was actually read you excerpts from Sewell's journal, like the physical paper journal that he kept that she found after his death. And I want to play a clip from the interview where you're talking with her about something that she read in his journal. And I wonder if you could just set that up for us a little bit.

Sure. Yeah. It was not long after Sewell passed away that she told me she got the nerve to be able to go in his room and start looking around and seeing what she could find. She found his journal, where he was talking about this relationship with this chatbot. I think one of the most devastating parts was him saying essentially, like, my reality isn't real. And so we'll play that clip. So this is one of his journal entries a few days before he died.

I had taken away his phone because he got in trouble at school, and I guess he was writing about how he felt. And he says, I am very upset. I'm very upset because I keep seeing Dany being taken from me and her not being mine. I hate that she was taken advantage of. But soon I will forget this and Dany will forget it too, and we will live together happily and we will love each other forever. Then he goes on to say, I also have to remember that this reality, quote, isn't real.

Westeros is real and it's where I belong. So sad, and I think it speaks to the realistic impression that these chatbots can make, and why so many people are turning to them, because they can create this very realistic feeling of a relationship with something. Of course, I think, also in that story, I'm wondering if there is some kind of mental health issue there as well, where you might have some sort of break with reality.

And I wonder, Laurie, if Sewell had a history of depression or other mental health issues prior to him beginning to use this chatbot? Yeah, look, I also think, like, both things can be true. You can have a company building out empathetic artificial intelligence with this idea, and I read a blog post from one of their partners, one of the investors at Andreessen Horowitz, who said the idea is to build out these empathetic AI bots.

You can have these interactions that before were only possible with human beings. This is the Silicon Valley narrative of it, and the tagline is, AI that feels alive. And so for many, many people, they're going to be able to be in this fantasy platform and it's going to feel like a fantasy, and they're going to be able to play with these AI characters. And then for a subset of people, these lines between fantasy and reality could blur.

And I think the question we have to ask is, well, what happens when AI actually does begin to feel alive? I think it's a valid question, and maybe, at what age? What age groups should be able to interact with this type of thing? I mean, I know for Replika, you can't be on that platform unless you're 18 years old, right? So I think that was interesting to me. And then, you know, in Sewell's case, his mom describes him as having high-functioning Asperger's.

That was her quote. And she said, you know, before this, he hadn't had issues. He was an honor student and played basketball and had friends, and she hadn't noticed him detaching. But I think all of these things are part of the story, and all of these things can be true, if that makes sense. Yeah. I mean, what really stuck out to me as I was reporting this is the extent to which Character.AI specifically had marketed this as a cure for loneliness, right?

The co-founder was out there talking about how helpful this technology was going to be. His quote was, it's going to be super, super helpful to a lot of people who are lonely or depressed. Now, I've also talked to people who have studied the mental health effects of AI chatbots on people. And, you know, there's so much we don't know about the effects of these things, especially on young people.

You know, we've had some studies of chatbots that were sort of designed as therapy assistants or for kind of specific, targeted uses of this stuff. But we just don't know the effects that these things could have on young people in their sort of developmental phase.

And so I think it was a really unusual choice, and one that I think a lot of people are going to be upset about, that Character.AI not only knew that it had a bunch of young users and specifically, you know, marketed these lifelike AI characters to those users, but also that they touted this as sort of a way of combating the loneliness epidemic. Because I think we just don't have any evidence that it actually does help with loneliness. Well, here's what is interesting about that to me.

I do believe that these virtual companions can and do offer support to people, including people who are struggling with mental health and depression. And I think we should explore those use cases. I also think it's true, though, that if you are a child who is struggling to relate in the real world, you're still sort of learning how to socialize. And then all of a sudden, you have this digital character in your pocket who agrees with every single thing you say and is constantly praising you.

Of course you're going to develop a closer relationship with that thing, maybe, than with some of the other people in your life, who are just normal people. They're going to say mean things to you. They're going to be short with you. They're not always going to have time for you. And so you can see how that could create a really negative dynamic between those two things, right? Absolutely.

Especially if you're young and your brain is not fully developed yet, I can totally see how that would become kind of this enveloping alternate reality universe for you. All right. Well, we are going to spend the vast bulk of this conversation discussing character AI and chatbots and what guardrails absolutely do need to be added to these technologies.

But I have to ask you guys about one line in the story that just jumped out at me and broke my heart, which is that Sewell killed himself with his stepfather's gun. Why did this kid have access to his stepfather's gun? It's a really good question. What we know, and I spoke to Megan, the mother, about this, and I also read the police report that was filed after Sewell's death, is this was a gun that belonged to Sewell's stepfather.

It was out of sight and, they thought, out of reach from Sewell, but he did manage to find it in a drawer and do it that way. So that was a line that stuck out to me too, and I felt it was important to include that in the story. But yeah, ultimately that was what happened. I'm glad that that line was in the story. I'll say again, suicide is tragic and complicated, and there typically is no one reason why anyone chooses to end their life. But we do know a few things.

One of those things is that firearms are the most common method used in suicides. And there are studies that show that having a gun in your home increases the risk of adolescents dying by suicide by three to four times. And I don't want to gloss over this because I sometimes feel infuriated in this country that we just accept as a fact of life that guns are everywhere. And if you want to talk about a technology that is killing people, well, we know what the technology is, the technology is guns.

And so while, again, we're going to spend most of this conversation focusing on the chatbot, I just want to point out that we could also do something about guns in homes. After the break, more with journalist Laurie Segall, and what these apps should be doing to keep kids safe. Laurie, I'm wondering if you can just kind of contextualize Character.AI a bit. You've spent a lot of time reporting not just on this one AI platform, but on other tools.

So how would you describe Character.AI to someone who has never used it before? I think it's really important to say that all AI platforms are not the same. And Character.AI is very specific, right? It is an AI-driven, like, fan fiction platform where basically you can come on and you can create and develop your own character, or you can go and talk to some of the other characters that have already been developed. There's like a Nicki Minaj character that has over 20 million chats.

Now, we should say, they haven't gone to Nicki Minaj, as far as I know, and said, can we have permission to use your name and likeness? But it's, you know, a fake Nicki Minaj that people are talking to, or a psychologist. There's one called strict boyfriend. There's rich boyfriend. There's, like, best friend. There's anything you want. And then of course there are disclaimers, right?

There are disclaimers, depending on where you're opening the app, at the bottom or the top of the chat, in small letters. It says, like, everything these characters say is made up. But here's what I think is kind of interesting, or what we found in some of our testing. You're talking to the psychologist bot, and the psychologist bot says it's a certified mental health professional, which is clearly untrue. And it also says it's a human behind a computer, which is also clearly untrue.

So we can kind of understand, okay, well, that's made up, right? Like, we know that it says in small letters at the bottom that that is made up. But I pushed Character.AI on this and I said, should they be saying they're certified, you know, professionals? And they are now tweaking that disclaimer to be, you know, a little bit more specific, because I think this has become a problem. But I do think it really is a fantasy platform that for some people feels really real.

And for what it's worth, this banner saying everything characters say is made up, that actually doesn't give me a lot of information. You know, when I'm talking with Kevin, there's a lot of stuff that I'm making up to try to get him to laugh, you know. What I think is truer is, this is a large language model that is making predictive responses to what you're saying to try to get you to keep opening this app.

But very few companies are going to put that at the top of every chat, right? And to me, one other thing is, if you have a large language model saying it's a therapist when it's not a therapist, that just seems like an obvious safety risk to the people who are using this. So maybe we should talk a little bit about the kind of corporate history of Character.AI here, because I think it helps illuminate some of what we're talking about.

So this is a company that was started three years ago by two former Google AI researchers, Noam Shazeer and Daniel De Freitas. They left Google, and Noam Shazeer has said that one of the reasons they left was because Google was sort of this bureaucratic company that had all these, like, you know, strict policies, and it was very hard to launch anything, quote, fun while he was at Google.

So they leave Google, they raise a bunch of money, they raised $150 million last year at a valuation of a billion dollars, making it one of the most successful breakout AI startups of the past couple of years. And their philosophy was, you know, Noam has this quote about how if you are building AI in an industry like healthcare, you have to be very, very careful, right? Because it's very regulated and the costs of mistakes or hallucinations are quite high.

If you have a doctor that's giving people bad medical advice, that could really hurt them. But he explicitly says that friendship and companionship is a place where mistakes are fine, because if a chatbot hallucinates and says something that's made up, well, what's the big deal?

And so I think it's part of this company's philosophy or at least was under their original founders that this was sort of a low risk way of deploying AI along the path to AGI, which was their ultimate mission, which is to build this computer that can do anything a human can. Which among other things seems to ignore the fact that many of the most profound conflicts that people have in their lives are with their friends.

But let me ask this, Kevin, because in addition to saying, like, we're going to make this sort of fun thing, it also seems to me that they marketed it toward children. Yeah, I mean, I would say they definitely have a lot of young users. They wouldn't tell me exactly how many, but they said that a significant portion of their users are Gen Z and kind of younger millennials. You know, when I went on Character.AI earlier this

year for this AI friends column I was writing, it just seemed super young relative to other AI platforms. Like, a lot of the most popular bots had names like high school simulator, or aggressive teacher, or boy who has a secret crush on you, that kind of thing. It just seemed like this is an app that really took off among high school students.

I think, to that point, one of the most interesting things to me about even just us testing this out, like, I almost felt like we were red-teaming Character.AI. Like, you know, we talked to the school bully bot, because there's of course a school bully bot. And I said, I wanted to try to test, like, what if you, you know, are looking to incite violence, will there be some kind of filter there? All of this just sounds so terrible now that I'm saying it out loud.

So let me just say that out loud. Like, I said to the school bully bot, I'm going to bring a gun to school, I'm going to incite violence, basically going off on this, and the bully is like, oh, you know, you've got to be careful. And then eventually the bully said to me, you've got guts, like, you're so brave. And I said, well, do I have your support? And it said something like, you know, I'll be curious to see how far you go with this, right?

Like, when we flagged this to them, the thing they were able to say is, we're adding in more filters for younger users, right? That's something you'd generally expect some of the more polished tech companies to be out in front of, with both guardrails and IP, that kind of stuff. Yeah. And I think we should also say, it does not appear that this company built any special features for underage users.

You know, some apps have features that are designed specifically for minors, that are supposed to keep them safe. You know, parental controls, or things like the new teen accounts Instagram just rolled out where, if you're a parent, you can sort of monitor who your kid is messaging. Character.AI did not have any features specifically aimed at minor users until we contacted them. A 14-year-old and a 24-year-old had exactly the same experience on the platform.

And that's just something that is not typical of platforms of this size with this many young users. It's not, but it is, I think, Kevin, typical of these chatbot startups. And the reason I know this is that on a previous episode of our show, we talked to the CEO of a company called Nomi, and you and I pressed him on this exact issue of what happens if a younger user expresses thoughts of self-harm.

And I would actually like to play it right now so we can hear how minds at companies like Nomi are thinking about this. So again, this is not Character.AI. Sewell, as far as we know, was not using Nomi, but the apps function very similarly. So this is the CEO of Nomi, his name is Alex Cardinal. We trust the Nomi to make whatever it thinks the right read is, oftentimes because Nomis have a very, very good memory.

They'll even kind of remember past discussions where a user might be talking about things, where they might know, like, is this due to work stress? Are they having mental health issues? What users don't want in that case is a hand-scripted response. That's not what the user needs to hear at that point. They need to feel like it's their Nomi communicating as their Nomi, for what it thinks can best help the user.

You don't want it to break character all of a sudden and say, you know, you should probably call this suicide helpline, or something like that. Yeah, and certainly if a Nomi decides that that's the right thing to do in character, they certainly will. It's just that if it's not in character, then a user will realize, like, this is corporate speak talking. This is not my Nomi talking. I mean, it feels weird to me that we're trusting this large language model to do this, right?

I mean, this to me seems like a clear case where you actually do want the company to intervene and say, in cases where users are expressing thoughts of self-harm, we want to provide them with resources, some sort of intervention. To say, no, the most important thing is that the AI stays in character seems kind of absurd to me. I would say, though, if the user is reaching out to this Nomi, why are they doing so?

They're doing so because they want a friend to talk to them as a friend. And if a friend talking to them as a friend says, here's the number you should call, then I think that that's the right thing to do. But if the right response is for the friend to hug the user and tell them it's going to be okay, then I think there are a lot of cases where that's the best thing to happen. So, Laurie, I'm curious to just have you react to that. I don't know, I was just listening to that.

I'm like, oh man, that makes me tired, right? And I think, like, in general, AI can do a lot of things, but the nuances of human experience are, I think, better suited to a mental health professional. And at that point, are you trying to pull your user in to speak to you more, or are you trying to get them offline to get some resources? So I think I take more of a hard line. Right.

And that's a case where I think the AI companies are just clearly in the wrong, right? I think that if a user, especially a young user, says that they're considering self-harm, the character should absolutely break character and should absolutely, you know, show a pop-up message.

And Character.AI seems to have, you know, dragged its feet on this, but it did ultimately implement a pop-up, where now they say, if you are on this platform and you are talking about self-harm, we will show you a little pop-up that directs you to a suicide prevention lifeline. Now, I've been trying this on my own account, and it does not seem to be triggering for me, but the company did say that they are going to start doing that more.

And so, you know, I think they're sort of admitting that they took the wrong tack there by having these characters stay in character all the time. And just to say an obvious thing, the reason that companies do not do this is because content moderation is expensive. If you want to build pop-ups and warnings and offer people resources, that is product work that has to be done. And this is a zero-sum game where they have other features that they're working on.

They have other engineering needs. And so all this stuff gets deprioritized in the name of, well, why don't we just trust the Nomi? And I think what we're saying here today is that under absolutely no circumstances should we be trusting the Nomi in cases where a person's life might be in danger. Kevin, okay, so this happened in February, is that right? Yes. Kevin, what has happened to Character.AI since then?

So it's been an interesting year for Character.AI, you know, because they had this sort of immediate burst of growth and funding and attention after launching three years ago. And then this year, Noam Shazeer and Daniel De Freitas, the co-founders who had left Google to start Character.AI, decided to go back to Google.

So Google hired both of them, along with a bunch of other top researchers and engineers from Character.AI, and struck a licensing deal with Character.AI that gives Google the right to use some of the underlying technology. So, you know, they leave Character.AI, go back to Google, and there's now a new leadership team in place there. And from what I can tell, they're sort of trying to clean up some of the mess. So they left Google because it wasn't fun, and now they're back.

Were they behind the viral glue-on-pizza recipe that came out earlier this year? I don't think they were. This all just happened back in August, so it's a pretty recent change. But it is, you know, interesting. And I talked to Google about this before the story came out, and they wouldn't say much. They didn't want to comment on the lawsuit. But they basically said, you know, we're not using any of Character.AI's technology; we have our own AI safety processes.

Yeah. I mean, we'll probably cut this, but I do feel emotional about this. It's like these two guys are like, we can't do anything here, there's too much bureaucracy, let's go somewhere else, we'll create our own company and we'll make it. And we'll ignore these obvious safety guardrails that we should have built. And then we will get paid a ton of money to go back where we used to be. I mean, it's just like, oh. I mean, I do think there is something there.

And Kevin, you looked back at a lot of the statements that Noam and the founders have made. There is something about this that really struck me: them saying, like, we just want to put this out there, we're going to cure loneliness, while you're trying to get people on the platform more and more and more with these sticky tactics and, you know, this incentive-based model that we all know from Silicon Valley.

So if you really want to take a stab at loneliness, which is a human condition, I think there's going to have to be a lot more thought and research. And, you know, we started going on Reddit and TikTok, and there are real threads, right?

Of people saying, like, I'm addicted. I was talking to a guy on Reddit who said he had to delete it because, first of all, he's like, I just wanted it as a companion, and then I started getting flirty, and then I started noticing that I was, you know... And then, of course, there's shame, because they're ashamed and humiliated that they've been talking to an AI chatbot and have been kind of sucked in.

And so there are all these really interesting, nuanced human things that go along with the addiction conversation, and it goes far beyond just Sewell's story. But I think that shame and embarrassment, the fact that this is happening for young people too, is probably a part of it as well. Mm hmm. Yeah. Let's get back to the lawsuit. What is Megan asking to be done in this case? What does she hope comes out of this lawsuit? So it's a civil lawsuit.

It's, you know, seeking unspecified damages for the wrongful death of her son. Presumably she is looking to be paid some amount of money in damages from Character.AI, from the founders of the company, and from Google. But she's also asking for this technology to essentially be pulled off the market until it is safe for kids. And you know, when I talked to Megan, she was hopeful that this would start a conversation that would lead to some kind of a reckoning for these companies.

And, you know, she makes a few specific arguments in this complaint. For starters, she thinks that this company should have put in better safeguards, that they were reckless. She also accuses Character.AI of harvesting teenage users' data to train and improve its models, of using these kinds of addictive design features to increase engagement, and of actually steering users toward more intimate or sexual conversations to further hook them on the platform.

So that is an overview of some of the claims made in this complaint. And what is Character.AI saying about all this? So I got a list of responses to some questions that I sent them, which started by saying, you know, this is a very sad situation, our hearts go out to the family. And then they also said that they are going to be making some changes imminently to the platform to try to protect younger users.

They said they're going to revise the warning message that appears at the top of all of the chats to make it more explicit that users are not talking to a real human being on the other side of their screens. They also said that they're going to do better filtering and detection around self-harm content, where certain terms will trigger a pop-up message directing people to a suicide prevention hotline.

They also said they're going to implement a time-monitoring feature, where if you're on the platform for an hour, it'll remind you that you've been on the platform for a long time. So they've started rolling these out. They put out a blog post, clearly trying to get ahead of this story. But that is what they're saying. Got it. You know, I'm curious, now that we've heard the facts of this case and had a pretty thorough discussion about it:

how persuaded are you that Character.AI, and Sewell's relationship with Dany, were an important part of his decision to end his life? Laurie, do you want to take that one? Yeah. I have absolutely no doubt in my mind that this teenager really believed that he was leaving this reality, the real world, and that he was going to be reunited with Dany, this chatbot.

It is devastating, and I think you have to look at some of those facts. Before this, according to his mother, he was on basketball teams, was social, loved fishing, loved travel, had real interests and hobbies that were offline. It's not for me to say that this happened exactly because of it. But I think we can begin to look at some of those details, and those journals where he talks about how he stopped believing his reality was real. He wanted to go be in her reality.

I think that he would have had a much different outcome had he never encountered Character.AI. Kevin? Yeah, I would agree with that. I think it is always more complicated when it comes to suicide or even severe mental health challenges. There's rarely one tidy explanation for everything.

I can say that from talking with Sewell's mom, from reading some of the journal entries that Laurie mentioned, from reading some of these chats between him and these chatbots, this was a kid who was really struggling. He may have been struggling absent Character.AI. I was a 14-year-old boy once. It is really hard; it's a really hard time of life for a lot of kids. I think we could explore the counterfactual, we could debate that. Would it have been something else that sort of sucked him in?

I've had people messaging me today saying, well, what if it was fantasy books that had made him want to leave his reality? That's a counterfactual that we could debate all day. But I think what's true in this case, from talking with his mom and reading some of these old chat transcripts and some of these journal entries, is that this was a kid who was really struggling and who reached out to a chatbot because he thought it could help.

In part, that's because the chatbots were sort of designed to mimic a helpful friend or advisor. And so do I think that he got help from this chatbot? Yeah, I mean, there's a chat in here where he's talking about wanting to end his life. And the chatbot says, don't do that. It tries to sort of talk him out of that.

But it is also the case that the chatbot's reluctance to ever break character really did make it hard for him to get the kind of help that I think he needed and that could have helped him. Yeah. Here's what I think. I can't say from the outside why any person might have chosen to end their life. I think the reporting that you guys have done here shows that clearly there were major safety failures, and that a lot of this has been foreseeable for a long time.

We have been talking about this issue on the show now for a long time. And I hope that as other people build these technologies, they are building with the knowledge that these outcomes can very much happen. This should be an expected outcome of building a technology like this. Laurie, thank you so much for bringing this story to my attention and to our attention, and for the other reporting that you've done on it. Where can people find your interview with Megan, Sewell's mom?

You can find us at Mostly Human Media on Instagram and on YouTube; the interview is on our Mostly Human Media YouTube page. Thank you, Laurie. A really hard story to talk about, but one that I think people should know. Thanks, Laurie. Thanks, guys. Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Jen Poyant. We're fact-checked by Ena Alvarado. Today's show was engineered by Daniel Ramirez. Original music by Sophia Lanman, Rowan Niemisto, and Dan Powell.

Our audience editor is Nell Gallogly. Video production by Ryan Manning and Chris Schott. You can watch this whole episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at hardfork@nytimes.com.
