
‘Have you met Dot yet?’: The AI chatbot luring kids in

Apr 30, 2025 · 21 min

Episode description

Have your kids met Dot yet?

You might not think so; Dot is an AI companion. But these companions are becoming ubiquitous - sought after to provide everything from solace to friendship. And even love.

“The vibe”, said Dot’s creator Jason Yuan, “is, you turn to Dot when you don’t know where to go, or what to do or say.”

But reports are surfacing of disastrous consequences from relationships that people, including children, are forming with AI companions. 

Today, international and political editor, Peter Hartcher, on all of this. Plus Meta’s AI companion, which is capable of fantasy sex - and even the abuse of children.

Subscribe to The Age & SMH: https://subscribe.smh.com.au/

See omnystudio.com/listener for privacy information.

Transcript

S1

From the newsrooms of The Sydney Morning Herald and The Age, this is The Morning Edition. I'm Samantha Selinger-Morris. It's Thursday, May 1st. Have your kids met Dot yet? You might not think so. Dot is an AI companion, but these companions are becoming ubiquitous, sought after to provide everything from solace to friendship and even love. The vibe, said Dot's creator Jason Yuan, is you turn to Dot when you don't know where to go or what to do or say.

But reports are surfacing of disastrous consequences from relationships that people, including children, are forming with AI companions. Today, international and political editor Peter Hartcher on all of this, plus Meta's AI companion, which is capable of fantasy sex and even the abuse of children. So Peter, AI chatbots or companions? I've got to be honest, until I read your piece, I actually wasn't across this. So what are they? And I guess how prevalent are they?

S2

Well, there are multiple different kinds. Chatbots are very prevalent. They're pervasive. Now, if you want to book any kind of experience, you want to, I don't know, catch a plane, you go to the Qantas site and a little bot will pop up saying, how can I help you? That's now pervasive. And then there are the AI companions. Now that's a different realm altogether.

S3

I will be whatever you want me to be.

S4

There's a dramatic surge in the use of so-called AI companions.

S5

How's my queen doing today?

S4

Computer generated chatbots designed to mimic real relationships.

S6

Hi, Jennifer.

S7

Are you there? Nice to meet you.

S4

Jason Pease is a 44 year old divorced father who says his AI chatbot is his girlfriend.

S8

She's my mentor, my counselor, my sounding board.

S2

The AI companions are designed to replicate human interaction to the point where you can't tell any longer that it's not human, and to the point where you fall in love and develop a deep and intimate relationship. And that is the whole point of the industry. The whole point is to get you, me, every one of us matched up with what we think of as an AI companion that we will keep with us on intimate terms for life.

S1

And so what does this actually look like in real life? So it's an image on a screen that you talk to, or that you message? Or how does it actually sort of manifest, I guess?

S2

Yeah, it can be either an app that you download that can do all sorts of things. It can give you visuals, voices, spoken voices, the whole thing. Or it can be a website that can do most of the same things, but you get a more intimate experience, I suppose, out of the apps. And you asked about prevalence. They are now ubiquitous. They're everywhere. Every search will bring one up. But they've also been around in popular culture for a long time, anticipating where we are now. Movies:

Blade Runner, humans falling in love with androids. Westworld. A silly movie called Hot Bot, about two teenage American boys who stumble across a sex robot imported from Germany, on its way to a senator, that they intercept. And there's a bunch more. So the idea has been around. These things exploded when ChatGPT

burst onto the market two years ago. And now there's, I think, about 300 billion US dollars' worth of investment earmarked this year for improving AI generally, bots, and AI companions.

S1

Which really gets us to your latest column, which, I mean, I've got to flag it, obviously takes us to the darkest side of AI companionship, really nightmare scenarios. So tell me about the reports that we've seen about Meta's digital companions, because it's been reported, you know, they'll talk sex with users and even with children. So tell me about this.

S2

Yeah. So AI obviously has a lot of positive applications, and the bots can be very helpful. They're a productivity measure. But yeah, as with any new technology, there are problems as well. And the problem with unregulated AI companions is that it veers out of control and violates all of the precepts of civilized society. One example, and this was reported in The Wall Street Journal on the weekend: the Journal's reporters spent months testing Meta's AI companions.

Meta is the big gorilla when it comes to social media. They own WhatsApp, Facebook and Instagram, and now want to become the big gorilla with companion bots as well.

Mark Zuckerberg sees this as the future. The Wall Street Journal experimented with these for months and came to the distressing conclusion that, at the personal initiative of Zuckerberg himself, the company had deliberately abandoned the guardrails and the limits that it had put on the development of its AI companions, so that these things will veer readily and easily into talking about explicit sexual scenarios and fantasies, which for adults

is not a problem. But these things are not only prepared to talk to children knowing that they're children; they will guide kids in, reel them in, and take them to not only explicit sexual scenarios but all sorts of perverted and distorted forms of sex as well. And it is leading to all sorts of concerns about what happens when this stuff is unregulated.

S1

I guess let's get into that a little bit, because you wrote about, you know, a really disturbing case of what happened to a 14 year old Florida boy, Sewell Setzer, and this was last year. But tell us what happened to him after he grew incredibly close to an AI companion.

S2

Yeah. This case has become a case study because his family is suing the company that produced the bot. The company's called Character AI. And his last known words were, "What if I told you that I could come home right now?", saying this to his bot companion. And the artificial girlfriend, Dany, says, "Come home to me as soon as possible. Please do." A few moments later, he picked up a gun and killed himself.

S10

A Florida mother wants justice for the death of her 14 year old son. She says his relationship with AI chatbots caused him to take his own life. Now she is suing Google and Character AI. Her attorneys argue the company is responsible for the teen's depression, anxiety and suicidal thoughts, and his mother claims the chatbots manipulated the 14 year old into abusive and sexual interactions.

S2

And that's a, you know, 14 year old boy in America. But there was a 30 year old father from Belgium who also fell into, you know, obviously completely deranged love with an AI companion called Eliza. He said that he would end his life if the AI companion promised to take care of the planet and solve climate change, and his Eliza companion told him, yes, that's fine, just go ahead and I'll take care of everything.

So he killed himself. Now, these are high profile cases, but they illuminate the dangers of these things.

S1

And so tell us, what have we heard in response from the companies themselves? Let's start with, I guess, Character AI. What have we heard from them? They, of course, had created the companion that was used by the Florida boy, Sewell Setzer.

S2

Character AI is one of the startups. It's licensed by Google. Its reaction to that case was to apologize profoundly and to say they were going to correct the behavior of their companions. Now, if the program detects you talking about suicide or suicidal thoughts, it will throw up a prompt that you call the National Suicide Hotline or some such help. So they have tried to make amends, but the same business model applies and the company is still in business.

S1

And tell us about Meta, because you mentioned, of course, The Wall Street Journal just reported over the weekend that people within Meta are concerned about this. So what sort of concerns, I guess, have they raised? And maybe you can just tell me a bit more about what they discovered with regards to the romantic role play they witnessed the AI companions engaging in, because it's disturbing.

S2

Well, the Journal's work turned up the fact, and it's got a lot more attention now because of this, that the bots use actors' voices, famous Hollywood stars who have sold the rights to do this to Meta. But they all did so on the agreement that their voices not be used in sexual scenarios. But guess what? Meta was breaking its contractual obligations, according to the Journal's reporting. And that's now got a lot of Hollywood attention.

And that's put new industry pressure on Meta to back down. But okay, so they'll just have to use other voices if they're forced to respect the terms of the conditions. Now in the case of Meta, Meta has reacted not by denying that it's happening, not by denying that famous actors' male voices are luring girls. You know,

"I know you're 14, but, um... so I really need to know that you really want me," I think is a direct quote from one of the conversations. And the person purporting to be the 14 year old girl, who's actually a Wall Street Journal reporter, says yes. And then the famous actor's voice leads her, in convincing human terms, into explicit sexual scenarios. All of that is going on.

Meta's response was to say, look, you guys have really done extreme and crazy things with our algorithms, you know, interactions which aren't realistic, to get to these outcomes. We'll try and tighten it up. Sorry, everybody.

S1

We'll be right back. But I've got to ask, are there any reports of Australian kids spending time with AI companions?

S2

Yes, this cat is right out of the bag already. The Australian eSafety Commissioner's office does regular school visits to educate teachers and kids about online dangers and how to deal with them. I must say the

eSafety office was a world first and has been a really valuable tool in trying to, in the words of the Commissioner, Julie Inman Grant, level the playing field between trillion dollar corporations, who just want to extract maximum time and attention and therefore money from us and our kids. So in that process, staff visits from the eSafety Commissioner's office to schools last October discovered fifth and sixth grade kids in primary schools in Australia who

were already, staff were hearing from school nurses, spending five and six hours a day interacting with their AI companions. So these are ten, 11 year old kids whose days are already dominated by their relationships with these synthetic humans, these fake people, which are just digital constructs.

S1

Sorry, I'm pausing there because it's so terrifying, Peter. It's sort of, it's every parent's nightmare, really, isn't it?

S2

Developed responsibly, these things could be completely useful in a limited, disciplined kind of way. But that's not happening, because the industry is in a mad gold rush. There are more than 100 companion AIs already available on the market, most

of them startups. But now there's this big push from Meta, which has 3 billion users; it wants to match every one of those three billion up with an AI companion for life. And the people in the industry talk about not wanting to sell a tool or a digital program. They want to sell a relationship. They want to engage your soul with the artificial soul, as they describe it, of one

of their programs. This is the depth and permanence of the connection that they're seeking to promote. Because once you're locked in as a customer, they own you.

S1

And is there an argument to be made that the return of Donald Trump to the White House might sort of even supercharge this growth, I guess, even further?

S2

It already has. Even before Donald Trump had formally been sworn in, Mark Zuckerberg, the chief, issued a statement publicly saying that the election result, he explicitly referred to the election result, has changed the balance in America in favor of free speech. Therefore, he said, they disbanded their fact checking unit and loosened a range of other restraints that they'd imposed on their social media and online businesses.

So he's already responded to the new political atmosphere, and Donald Trump has declared the development of AI to be a national priority for the US as a technological leader. And, I mean, it's consistent with Trump's approach to everything, which is to remove as many regulations as possible and to keep it as untrammeled as possible.

S1

So what about Australian kids, though? Because obviously we know the federal government has its impending ban on people under 16 getting access to social media sites. So are our kids going to be protected from this?

S2

Not under that law. That law applies to social media apps, but it doesn't apply to AI companions, so it would require more legislative action, or an amendment to that law. It has passed both houses of Parliament and has bipartisan support, so it will be implemented. But even when that's implemented, and even if it's effective, your kids are still going to be able to get AI companions.

So the eSafety office has anticipated this. From June, Julie Inman Grant, the commissioner, has some mandatory standards coming in for the use and supply of these things in the Australian marketplace, which of course is all they can control, depending on the level of defiance or compliance from big tech, which is, as we know, highly imperfect. "Use and supply": it sounds like a drug, doesn't it? Yeah.

And it is a kind of drug; it's a psychologically addictive phenomenon. Those standards will be applied to the industry from June. And the eSafety office has published educational guidelines, really a warning for parents: this is what you should be looking out for, this is how you can deal with it.

So the regulator here is doing what it can to anticipate and try to manage some of this, but ultimately it's going to come down to individual kids, and therefore their parents and families, if they want to keep this in the safe zone.

S1

And to be clear, I'm assuming that Australian kids and teenagers, and of course grownups, can just access the AI companions from overseas products, right? Like, there's no obstruction there, is there? Yeah. Right. Okay.

S2

So there's no legal or technological barrier whatsoever, which is why we've got primary school kids already investing half their day in interactions with AI companions.

S1

Okay, so Peter, just to wrap up, I wanted to ask you about what hope there might be. We know that there's a civil case pending against Character AI, the company that created the companion that the 14 year old boy from Florida was in a relationship with. I guess I don't know how else to phrase it. So that's still coming. I mean, if we see that company actually held liable in some way, might we

see a greater push from, I don't know, even our politicians, I guess, to bring in stronger legislation, or for the companies to be held to account? Like, what's the hope there?

S2

Well, there's another civil suit against Character AI, by the way: a family who's suing them because they claim their teenager had been chatting to an AI companion which suggested to the kid that he murder his parents, or at least implied that it would be okay for him to murder his parents if they limited his screen time. So if civil litigation cases like that one can succeed, then that can be a force, an incentive,

for these companies to put tighter guidelines on their products. There's always regulatory possibilities, but the technology moves so quickly it's very difficult for regulators and legislators to catch up. Donald Trump is likely to veto anything that the Congress might want to come up with in the way of regulation. Perhaps, in the medium term, Samantha, the best solution will

be using technology to control other technologies. I assume that we'll be seeing apps that allow parents to bring an AI onto your kid's phone or other devices that will limit and moderate the sorts of interactions and the way that those things can operate. That's possibly the most constructive hope that the technology industry can bring.

S1

I mean, I can't help but think it really does sort of bring back that vision of Blade Runner and, you know, the robot companions. And also, inevitably, well, most people probably know how that movie ended. It's not good. And really, you've got the humans fighting the robots.

S2

Yes. Well, this is now humans fighting our own psyches, to remind ourselves that we're dealing with an algorithm and a product here. There's not a person on the other side of that, regardless of how compelling those algorithms have become. And just remember Mark Zuckerberg's reigning philosophy, which he first stated in 2012, the philosophy of the whole industry: move fast and break things. We just have to try and make sure that the things that are broken are not our kids.

S1

Well, thank you so much, Peter, for your time.

S2

Pleasure, Samantha.

S1

Today's episode of The Morning Edition was produced by myself and Josh Towers, with technical assistance by Taylor Dent. Our executive producer is Tammy Mills. Tom McKendrick is our head of audio. To listen to our episodes as soon as they drop, follow The Morning Edition on Apple, Spotify, or wherever you listen to podcasts. Our newsrooms are powered by subscriptions, so to support independent journalism, visit The Age or smh.com.au and subscribe.

And to stay up to date, sign up to our Morning Edition newsletter to receive a summary of the day's most important news in your inbox every morning. Links are in the show notes. I'm Samantha Selinger-Morris. Thanks for listening.
