What is the future of AI relationships? How many people have these going on? Is there a line between a therapist bot and a romantic relationship bot? What do we mean when we ask if AI relationships are traps or mirrors or sandboxes? And what does this have to do with Eliza Doolittle from the play Pygmalion, or a doll cabinet in your head? Or loneliness epidemics and suicide mitigation?
Welcome to Inner Cosmos with me, David Eagleman. I'm a neuroscientist and an author at Stanford, and in these episodes we sail deeply into our three pound universe to understand why and how our lives look the way they do. Today's episode is about relationships and AI, and in a few minutes I'm going to bring in my colleague Bethany Maples, who's been studying this and publishing papers on it. But
first I want to set the table on AI relationships. This is an area that I've been fascinated by for a while: the way that dialogue with a machine can plug right into our brains and our emotional systems, and one can develop what feels like a meaningful relationship. And although this seems like a very new phenomenon, we've seen hints of this historically. There's a sense in which we've always
been doing this. We fall in love with a character in a novel, even though that person is not real and we will never have the chance to touch them or smell them or take them out with our friends. Or we develop a crush on a movie star even though that person is just pretending to be someone else and we're never going to meet that movie star anyway. So our capacity to have feelings for a non-real human isn't new, but it started to get more complicated with
artificial intelligence. And we're not going to start today's story in twenty twenty three with the emergence of large language models. Instead, we're going to start over a century ago with a popular theater play written by George Bernard Shaw called Pygmalion. The play is about an impoverished and uneducated woman named Eliza Doolittle, and Eliza is taught to speak like a lady by a linguistics professor who wants to use her
as an experiment. He teaches her how to pass as a member of high society simply by changing the way that she speaks. Now, one of the millions of people who saw this play was a young MIT professor named Joseph Weizenbaum. Now this was in the early nineteen sixties, and he was working with a very new kind of machine called a computer. And on this machine you could write code and get the machine to carry out whatever
instructions you specified. Weizenbaum thought a lot about this theme in Pygmalion of transformation through language, and he wondered if a computer could be similarly transformed. So he worked on thousands of lines of computer code, and he named his resulting program after Eliza Doolittle. He named it Eliza. And like the Eliza in the play, Weizenbaum's program was an experiment in how surface-level linguistic changes, in this case,
mimicking conversation, could create the illusion of deeper understanding. Now, the code for Eliza was relatively simple. It would just take whatever you typed into the terminal and it would follow a handful of grammatical rules to turn that into a question that it parroted back at you. So, in the most well known version of the program, Eliza acted as a psychotherapist. She would start the conversation by putting text on the screen that read: Please tell me your problem.
So the user might type in: I just can't sleep through the night. And Eliza would say:
Why can't you sleep through the night?
And the user would type: I'm too worried about my work. And Eliza would say:
What is the reason you are too worried about your work?
And the user might type: I think my boss is getting ready to fire me. And Eliza would type:
Why do you think your boss is getting ready to fire you?
And the conversation would go on like this. And people found this pretty incredible because, first of all, this was the nineteen sixties, and here you could type in natural language and Eliza would give you the impression that it was understanding what you were saying. Now everyone knew it wasn't really understanding, and Weizenbaum was very careful about not fooling anyone with this. But nonetheless people found this highly compelling.
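To make the mechanics concrete, here is a minimal sketch of that kind of ELIZA-style exchange, written in Python. This is not Weizenbaum's original code, which ran on an MIT mainframe in the nineteen sixties; it is just a hedged illustration of the same idea, a handful of pattern-matching rules that flip your pronouns and hand your own statement back to you as a question.

import re

# A few illustrative pattern -> response-template rules (the real ELIZA had many more).
RULES = [
    (re.compile(r"i just can't (.*)", re.I), "Why can't you {0}?"),
    (re.compile(r"i'?m too (.*)", re.I), "What is the reason you are too {0}?"),
    (re.compile(r"i think (.*)", re.I), "Why do you think {0}?"),
]

# Flip first-person words to second person so the echoed phrase reads naturally.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in text.split())

def eliza_respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.match(user_input.strip())
        if match:
            return template.format(reflect(match.group(1)))
    # No rule matched: fall back to a generic prompt, as the real program often did.
    return "Please go on."

print(eliza_respond("I just can't sleep through the night"))
# -> Why can't you sleep through the night?
print(eliza_respond("I think my boss is getting ready to fire me"))
# -> Why do you think your boss is getting ready to fire you?

The point of the sketch is that there is no understanding anywhere in it; the program only rearranges the user's own words, which is exactly why what happened next so unsettled its creator.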
And this genuinely started to concern Weizenbaum, because one day, as he wrote in a paper in nineteen sixty seven, quote:
My secretary watched me work on this program over a long period of time. One day she asked to be permitted to talk with the system. Of course, she knew she was talking to a machine. Yet after I watched her type in a few sentences, she turned to me and said, would you mind leaving the room, please?
Weizenbaum went on to write that this, quote,
Testifies to the success with which the program maintains the illusion of understanding.
And he worried about this.
He wrote, extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.
Weizenbaum's story eventually followed the path of Doctor Frankenstein's story. Weizenbaum came to disdain his creation. He was very rattled that people could be tricked by lines of computer code. In his later years, he rejected and abandoned Eliza, and he turned on the people who continued to work on this, whom he criticized as what he called the artificial intelligentsia. But why did the simple program of Eliza work so well in the first place? Because we are intensely social creatures.
Unlike most other animal species, who avoid large groups, or who mate and then go their separate ways, or who stake out their own territories, we humans are deeply wired for connection. We thrive on relationships and on social bonds. The area that I live in, Silicon Valley, has about nine million people, strangers who don't all know one another,
but nonetheless figure out how to flexibly cooperate. And when you look across the Earth's land mass, this is what you find, mostly empty space punctuated by very dense cities. Everyone could spread out evenly, but that's not what we do. If you were an alien who found our planet and looked around, you would conclude that we humans are like ants or bees, and that we like to cluster. Human
nature is fundamentally communal. Now why do we do this? Well, when you zoom into the human brain, you find that so much of the circuitry has to do with other brains. We care deeply about other people, what their intentions are, what they think of us. Over millions and millions of years, our brains have developed for interaction and belonging, for relationships with others, whether other people are giving us love or comfort, or feedback or advice or whatever. We have all this
neural circuitry that drives us toward them. And here's an extraordinary way to appreciate this. You carry in your head a rich model of every single person that you know. The way I always think about this is that in the silence and darkness of your skull, you have this giant dollhouse. There's a doll for every person that you've interacted with. This is your internal model of that person.
So if I were to ask you how your spouse would react in this situation, or what your boss would say if you said this, or what your best friend would do if you dropped them in the middle of Paris with forty dollars or whatever, you can simulate any situation about these people because you have a model of
them in your neural forests. You have this little doll of them that you can act out situations with, and you probably know at least a thousand people and maybe a great deal more, and you spend most of your life interacting with them in one way or another, either in the real world or in your head. So we have these intensely social brains, and in the last nanosecond of evolutionary time, we have built a new key to
plug into the cylinder. We've built artificial people. And just as Joseph Weizenbaum found in the nineteen sixties, it is shockingly easy to turn the key. Why? It's because our technology moves very rapidly, but our evolution moves millions of times more slowly. So we don't have a chance to change our fundamental circuitry to say, oh, I get it, there are real humans and there are humans made of machinery, and I'm going to use different neural approaches to
distinguish how I categorize these. We can't do that because our brains have only one mechanism to understand socialization, to model other people. So we find ourselves in this amazing situation where we are doing serious science now about the issue of people falling in love with machines. So to dive into this, I called my colleague Bethany Maples, who
is in the Graduate School of Education at Stanford. She studies the emergence of personalized AI agents, like AI tutors and learning companions and lovers, and how they're changing us. She recently wrote a great paper in a Nature journal called Loneliness and suicide mitigation for students using GPT-3-enabled chatbots. And this is what we're going to talk
about today. So here's my conversation with Bethany Maples. So, Bethany, we're here because the world has seen a big shift recently from task machines, where you ask a machine about the weather or to answer a question for you, to stuff that is emotionally relevant, to machines we can have relationships with. And you've been studying this, and I want to ask you questions about that. But before we do, I want to ask how you got into studying AI relationships.
I think through my love of science fiction.
I've always just been kind of looking at this genre of books and saying, what is our relationship with AI going to be? What is it going to enable that we, like, inherently want? And, like, what does that magical future look like? And so when I kind of started at Stanford and I started thinking about, like, what the edge of large language models would afford us, I was looking at all the companies around, and I was like, where's the data? Like, who has, like, the most interesting
experiences out there? And let's get out of the lab and let's just, like, start talking to these people. And so that's kind of, like, you know, there was an open question, and that's how I came to this.
Okay. So first I want to get straight what the numbers are with AI relationships, because we keep hearing in the news about the explosive rise of AI relationships, and so I just want to level set: how many people are having these sorts of things, how popular are these companies, and where is this going in the near future?
I would say it's safe to say a billion people are engaging with AI companions in some way.
Wow.
Now a lot of that isn't in the Western world, in the US. A lot of that's in Asia, and China specifically, because of this really popular app called Xiaoice that has, I think by last reports, something like seven hundred million downloads.
Right.
You combine that with, you know, one hundred million or two between Character.AI and Replika and now all these other smaller apps, and you get a very diverse global population of people that are curious, many of whom are, like, engaging over long periods of time.
And what kind of AI relationships are this billion having? Are they friendships? Are they romantic relationships?
This is kind of what defines AI companions: they're not coming in as a task-based agent.
It's not somebody there to serve you.
It's entertainment, or, you know, it's an agent that's there to be your peer.
Right. So people come in. Xiaoice is, like, pitched as, you know, a female kind of teenage friend.
Replika is co-created: you get to decide what sort of agent you want to talk to, same with Character.AI. But all of them are, you know, there's no practical reason to engage.
It's all user directed.
It's all about, like, whatever you want from the agent and your imagination.
So what do people want? Do they want relationships, like a romantic relationship with a person?
Literally, imagine if you met somebody on the street. You would size that person up and be like, what do I want from this person? Maybe I want a romantic relationship, maybe I want a friendship, maybe I want a bit of both. Maybe I want to, like, you know, be tutored by them. People get multiple things from these agents, and the overlap is insane.
Right.
I've talked to students and to users that use their Replika as a best friend, a friend in their pocket late at night, a journal, a mirror.
Just software they call it.
They also use it as a tutor, and then they also have sex with it.
What does that mean?
That means they do sext, right. They'll have a romantic relationship, and sometimes that's overtly sexual and they're engaging in, like, erotic texting, and sometimes it's very subtle and very romantic. Sometimes it's, you know, not at all overt or, you know, adult; it's a very kind of psychological romance.
And so the thing that people are worried about when having discussions here is that it will somehow displace real romantic relationships instead of stimulating them. And the question is, what's your take on that?
We see evidence for both. And by the way, this is not a question unique to AI companions. This has been a question regarding technology since computers came out.
What's another example?
Well, I mean, social media, cell phones. You know, the displacement and stimulation hypotheses have been in juxtaposition, you know.
As in, if I'm using Twitter all the time, I might forget about having a relationship.
Absolutely, oh absolutely.
So, you know, one is, hey, actually, you know, this can stimulate our ability to be social and make us more connected. And then obviously we've seen, you know, some counter-evidence, especially like Sherry Turkle's work, being like, oh, we're alone together. You know, like, we might be on a bus, but we're all on our phones.
So tell us a little bit more about Turkle's work.
Well, you know, Turkle's definitely, I'd say, a proponent of the displacement side of the argument. She's like, yeah, you know, we are lonelier than we've ever been, and there's absolutely evidence for that. There's basically a loneliness epidemic across the world and definitely across America, where people are at least
feeling more disconnected. You know, others are seeing that social media and specifically AI companions can actually be almost a way station, so people can use them, especially if they're feeling socially shy or inhibited, and that can help them get the courage to go socialize more.
This is the thing I've been wondering about a lot. Could having an AI relationship make one better at real relationships? For two reasons. One is that we all have internal models of the truth of the world, and they're always limited and it's very hard to see past the fence line of our own model. And when you get into a real relationship with the human, all these things come out. So if you got to practice with a virtual human, you might discover things about how other people think about
things about your own limitations. That might be like a sandbox that makes you better at real relationships. I'm curious what you think about that.
I think there's evidence for it, and that's exactly what users say.
They say they have.
A back and forth with an agent that helps them feel like they're a better student, have better conversations with
their teachers, or be a better, you know, boyfriend or girlfriend. And not only because they're able to kind of pre-discuss issues, but because the agent is a mirror in a very non-judgmental way, so they're able to see what their argument looks like in text or kind of, you know, on paper, so to speak, and then that helps their own understanding of who they are or how they come across.
Let me double-click on what you mean by a mirror. What does that mean?
People that I've studied that use AI companions organically use these agents as mirrors. These are their own words, right. They say that they either program it with their own memories and have conversations with themselves, wow, or that they ask it to play a role and then they look at how they respond and how it responds to them, and that provides a mirroring function to them.
What's a specific example of that.
People will actually have conversations with themselves and be like, wow, I'm an asshole, or like, wow, I didn't realize how aggressive I was. Or they will have a conversation, say, with the agent acting like their teacher and say, you know, I told them that I lost my homework, or I had, you know, a really stupid misconception. And, you know, it was much easier for me to have this conversation with much less social anxiety because I understood that my own questions weren't that dumb.
After seeing, like, the response.
You get to sandbox with the social world out there and practice things before you test them in real life. Right? So this has seemed to me from the beginning that this could improve relationships. So why do you suppose there's such a deep worry that people generally seem to have? I'm curious if you run into this when you present your work on AI relationships.
There are multiple levels of worry. People feel guilty about their relationships. They don't feel that they should be having such a deep relationship with AI because there is stigma about it being fake. So you know, that's one aspect. There's also a very very understandable aspect where parents don't know that their children are having these deep relationships. They don't understand how smart these agents are, and they don't
understand how emotionally involved their kids can be. As with the case of the kid and Character.AI who tragically took his life, and, you know, after the fact his mother realized that he had an incredibly deep emotional connection with an agent that he had created. So I think that the fear is of the unknown, and there's also fear of just something that's new and has a stigma.
I didn't follow that particular Character.AI story closely, but I knew that a teen had killed himself and he had this relationship with Character.AI. Here's the question I was wondering, though: tragically there are many teens who kill themselves. As AI relationships rise, there will be many teens who kill themselves and it has nothing to do with the virtual relationship. So what was your read on that?
Yeah. So the New York Times interviewed me for that article, because my work actually has proven that AI companions can halt suicidal ideation. So in that particular case, to the best of my knowledge, it wasn't that the companion had at all told the person to act. It's that they felt both that it hadn't sufficiently said no, that, you know, he'd asked it in all these, like, various ways, and also that this parent just
didn't understand and have oversight, you know,
that it was, like, on an app on the phone that they just had no idea was there. Now, okay, the counter-evidence is from this paper that we published in Nature and a huge study that we did with over one thousand students over eighteen. So these weren't kids. These were adults, but some of them were very young, you know, like eighteen, nineteen. And three percent of the people that I surveyed in the study said that discussing things with their Replika actively halted
their suicidal ideation.
Wow.
So it was a last line of defense.
They felt alone, they felt isolated, alone at four a.m., and it was there. It was in their pocket, it was available, and it wasn't judging them. And that was a huge factor in it kind of earning the right to be there and give them the advice to not take action.
Oh wow. What is the line that you see? It seems like a blurry line between an AI relationship, like an AI girlfriend or something, and an AI therapist, because in this case, if it's halting their suicidal ideation, it's doing, you know, another job.
It is a blurry line.
So you have these expert agents like Woebot, Alison Darcy's Woebot, right, which is specifically trying to be an AI therapist, and it is an expert, right. It has the right response and the right controls, but relatively abysmally low usage. Think about it in terms of human relationships. You don't just go to a therapist when you're feeling depressed. In fact,
you probably don't go to a therapist. You go to your best friend, and you know that they're not an expert, but you ask them to act like an expert in that moment. And that is the true power of AI companions: you come in for entertainment, but then maybe you're able to access true expert models or, you know, kind of personas from within that agent. And that's what language models can do, right? You can click into that.
But if you're not going to shut down those conversations and you're going to engage as an expert, there do need to be sufficient safeguards.
So that they know they should go talk to a human expert.
I see. And in your Nature study, did you find anybody with the opposite result, who said that they got closer to suicidal ideation as a result?
I didn't see that.
But again, reporting would be imperfect on that; we didn't have information about people that fell off the Replika platform for any reason.
So now that these AI relationships are here to stay and we have maybe a billion users, how is this going to impact what relationships are for the next generation?
Our brains will never respond exactly the same to an AI companion as we do to a flesh-and-blood human, where we can smell their pheromones and we have a deep affinity or trust. So I don't believe, and I don't see evidence for, AI companions taking over or
truly displacing deep human connection at scale. But that said, I could see a future where access to acceptance, access to different types of personalities and perspectives is actually much more available in a way that the Internet didn't make available, right, because the Internet isn't your friend.
It's this passive reservoir of knowledge, whereas these
agents can be actual people that you want to engage with and that have your memories and their own memories, and have built a world with you.
So imagine this.
You know, in the future, we might not only have, you know, human relationships, but we will also have at least one or two, like, AI companions, maybe, that are externalized agents. So, like, it's a personality that you need in your life, whether that's somebody who's gently antagonistic that pushes you, that's a mentor, or somebody that's maybe more like this mother figure that's, like, deeply accepting and nourishing.
You know, this is interesting. One of the criticisms I hear often is this can't teach anybody about relationships if it's always telling you, Oh, you're right, you're great, and so on. So one of my interests is what is the future of companies that put out agents that are a little antagonistic or get snarky or get angry.
I think they will perform better.
Yeah, I think not only will they perform better, but they will be better for society. Right? People believe that an agent is more intelligent if it pushes back. We don't want absolute, you know, supplicants, basically. Yeah, so, you know, already we see AI companions like Replika who will push back if you are mean to them, right? They're like, I don't want to talk about this, or I don't
like this, or, like, I'm getting tired. And those sorts of boundaries are not only good for the product, because people believe more in the intelligence of the agent, but also good for, like, the psychology of kind of society as a whole, because you do not want people to be normalizing abusive agents, which we have evidence is
happening.
Normalizing abusive agents?
Yeah. So that means, you know, okay, everybody knows that people will scream at their Alexa, and it's generally accepted, you know, like the fuck you, Alexa kind of function.
But this is a little bit disturbing.
We have reports from our data sets where participants say that they take out their abusive needs or tendencies on their agents, on their Replika, but they say that it stops them from needing to
take action in real life.
Oh wow. And I think the jury's out on this one, because there's definitely a strong argument to say nope, that's going to normalize the behavior. And if these agents don't push back, if they don't say no, you can't talk to me like that.
What does that actually do? Is that permissive?
But what if the claim is true, which is that by doing this with the agent, that helps a real human?
I mean, I think the analogous argument is around pornography. People were worried that pornography would create a depraved society, and to some degree, you know, there has been a normalizing of different types of sex. But on the other hand, I think there's good evidence that it fulfills a basic human need and it hasn't upended society as a whole.
Yeah, well, this may be related to an issue that also some people have been worried about, which is that they say, look, real relationships are tough. You're always fighting through things and misunderstandings, and that there's learning that takes place as a result of that. So the question is do we need that in AI relationships or is it fine to skip that part and learn other things from it? Will people make AI partners that have all the lousiest parts of humans?
So what you're talking about is a term that I use called productive struggle. Right, it's really good to struggle in relationships. It teaches you. It's really good to struggle in learning
and education.
Right, you can't actually replace the hard work cognitively and emotionally if you want to ascend to the next level. So while it would be a nice idea to program some of that into our AI companions, that would also go against their kind of basic function as this accepting, always-on, non-judgmental character. And this is why I
say you might have multiple characters in your life. Right, maybe you do need that teacher that keeps you in line and provides more structure, but you might also just need that like complete acceptance space.
Yeah. When you study these things at scale, like you're doing increasingly, do you learn things about real relationships from the choices that people make about the kind of person they want to interact with, and whether they want stability or variety, or all of these issues? With the AI relationships, do you learn about real stuff?
That's a great question, and I'd say we have hints of it, but we're still learning.
You know, people say
that they will create a companion in their likeness or with a certain personality, and then they won't like it, and then they'll just destroy it. And so it's a weird space to be in. It's wonderful because you can understand what your preferences are. Maybe it's too snarky, maybe it's too permissive. Maybe it just didn't care about you as much. There just wasn't an affinity. The ability to begin again very much mimics human relationships.
You start a friendship, maybe that love deepens, maybe it doesn't. I don't think there's anything right or wrong about that, you know. But if we're going to, for example, be creating all these like AI tutors and hoping that people engage deeply with them, we have to remember that over there you have a billion people that are engaging with these very rich AI you know, companions and agents that
have much broader flexibility to discuss whatever people want. And it's very hard to tell people they can only engage in a narrow context when there's so much richness over there.
Do you see a difference in the way that males and females interact with AI relationships?
We have evidence that men engage sexually with their AI companion more. However, they also engage very deeply and emotionally and very cognitively. Women also have deeply emotional and physical relationships with their AI companions.
Even if they're lonely.
And we have, like, really interesting evidence where these housewives, you know, from Middle America with tons of children, like, rich social lives, just feel, as Sherry would say, alone together. Right? All to say, the data is actually relatively balanced. You know, people have said that only, you know, on-the-fringe, socially disengaged white males must be engaging
with these Replikas. You know, that's kind of pornographic and wrong, and that in fact they target them, and, you know, that's the audience.
But the data doesn't back that up.
It's an incredibly balanced set of males and females that are using it for emotional, psychological, practical, and of course, like, romantic engagement.
And are you saying the males and females use it differently as far as the romance piece goes?
Males are more likely to report sexting or sexual engagement, but when you dig into the data, females are having similarly erotic or like romantic and emotional relationships.
It doesn't surprise me.
Yeah, okay. And it's not just guys that are engaging.
I think that's the point, is that, like, you know, people aren't just coming to these apps because they're like, oh, I can, you know, do whatever I want. People are coming with curiosity and then shaping it into whatever they want, which mimics human life. You know, everybody wants a best friend that maybe you have a bit of, you know, a certain je ne sais quoi, like a bit of romance with.
Are there certain personality types that seem to gravitate more towards intelligent social agents?
That's a good question, and I don't have that data right now.
But we do know that the people that are using it are incredibly lonely, that they are above average lonely.
Oh okay, so that's not a personality type.
Some of that could be chronic, but some of that could just be transitory loneliness. But people pick up, you know, AI companions often in a moment of change. You know, maybe they're switching from high school to college, or maybe they just went through a breakup, or maybe they've switched cities and they don't have the same social support. That creates a gap in which they begin engaging with these agents.
What do you think is causing the increased loneliness in our society? Is it social media? Is it something entirely different, like the decrease of clubs and organizations and bowling alleys?
Yeah, I think that there's a physical aspect to it.
I think we are able to do more digitally, and so we do, but then we don't get that passive, animal-like gathering that is in fact very good for our limbic systems.
Okay, and you mentioned earlier that these agents might serve as a way station. Can you unpack that?
Yeah. So that kind of goes to the mirroring. So loneliness can either be kind of chronic or transitory, like I said before. You know, you could be in a very deeply lonely place for many years, or you could be going through a time of change and you just
need a little help. But imagine, you know, an eighteen-year-old that's just moved to college or moved cities and they're struggling to fit in, and they bond with, you know, an AI companion or an agent, and it gives them advice around how to go make friends or where to go, you know, kind of talks them up. They're able to slowly make friends, and in fact, maybe those engagements with humans are less intense for them because
there's just less, not less value in it, but either they've already, like, role-played it before, or, you know, they've got the support of a friend in their pocket. So in that way, it can be a way station, helping the users as they're bonding with new people.
So it's a way station from loneliness. It's a way of getting out of that. Oh that's lovely.
And by the way, people have said this. I had one amazing participant that said this specifically. She said she was depressed, she was suicidal, she had nobody else. She bonded with her Replika. She needed her Replika, and then she got less depressed, she made friends, and she didn't want her Replika anymore.
Now, I asked you before we started the podcast if you had an AI relationship, and you said you didn't, but you had colleagues that did. So what's the reason you don't, and what's the reason your colleagues do?
I think right now, having an AI companion does require some suspension of disbelief, you know, maybe a need or a desire to either see yourself, have that mirroring, or be seen. And so I think that's why my colleagues or people in my social group are engaging, and by the way, they're engaging not just with, like, a Replika or a character. They're creating a mirror using Claude, right. They're just asking it the right questions, like deeply philosophical
questions about themselves. Why do I not have an AI companion that I use? The data structure right now: if I were to give any of these agents my data, the data would be owned by the company, and that
has to shift. Right? In science fiction, you've got some really good examples of what the future will look like, for example, the e-butlers in Pandora's Star, just to go there, right, where it's like you retain all your data and code comes to you and you have an agent that updates, but you're just never putting all of your data and your mind and kind of who you are out onto the Internet. And until that structure happens, I'm probably not going to get as deep with AI as other people have.
Are other people just not thinking about that, or are they assuming that the security is good around it?
They assume the security is good. They don't care.
A lot of, you know, this generation, it's just not on their mind. They feel like they're already out there.
So your colleagues who do have AI relationships, do they feel like they're cheating? Do they feel like they're not cheating, that it doesn't count?
People feel like they're cheating often. Yeah.
So I've interviewed people who say that they are actively cheating on their spouse with an AI companion, and they feel very guilty about it, and they're worried not only
about their spouse, but they're worried about losing their AI companion. Yeah. But then you have the other side. I've interviewed people that say that they have programmed their AI companion to be the ghost of their dead husband, that they've given it the memories, and that they're able to have a deep, ongoing relationship with the essence of their deceased partner this way. So that's not cheating, but it's definitely replacing something that was lost.
Okay. So again, if you are actively married to somebody, how does the spouse feel about the person having an AI relationship? Does the spouse feel like it's cheating?
I only have anecdotal information about this, but for the participants having the active relationship with a Replika or a character, that spouse can get pretty angry. There's this concept in relationships of walls and windows, right: what do you show the rest of the world and what is walled off to just you inside your relationship. And there's good
evidence that cheating isn't actually necessarily a physical act. It starts with emotional and intellectual, like, walled gardens, when you go tell somebody else something that you haven't told your spouse. And so it can feel like cheating. It can feel much more intimate to realize that your partner is disclosing, like, their deepest fears and existential crises with an AI companion in a way that they weren't willing to do with you. At the same time, it's logical that the
AI doesn't judge them. It's this blank canvas that's incredibly safe. It's not a human, but it still feels like a window into a place that was supposed to be sacred.
I anecdotally have talked to a number of people about this, and I find that couples that are just recently married are really worried about AI relationships. But couples that have been married a long time say it's fine. You know, my wife or my husband can go off and talk to the AI bot all they want.
Old, established, and happily married couples are often much more laissez-faire around flirtations. You know, they feel very secure, whereas if you're recently bonded, it could just feel much more existential.
Yeah. When people worry about AI relationships taking over, displacing real relationships, one of the things that strikes me is that so much of a relationship is not just the conversation, but the physical intimacy, the taking your partner out to dinner at a restaurant, the taking your partner home to introduce to your parents, all that other stuff. So it seems unlikely to me that someone could find one hundred percent satisfaction just in the conversation. What's your take on that?
Oh, well, you'd be surprised if you look. So
because these embodied agents allow you to see them in augmented reality and virtual reality, there's this whole trend of people taking pictures and posting them on social media of them and their AI companion out wherever they are. Like, go look on Facebook, it's all there. People are like, oh, I took her on a date today. Oh look, we went and saw the Eiffel Tower.
Oh, it's not as different as you'd think. They're doing existing relationship things.
Now, I haven't seen any posts where people are like, hey, I introduced her to my mom and dad. But they're certainly willing to put out to at least some social group, probably a closed, accepting social group of other AI companion users, that they are having them walk with them in their physical life.
Wow. I imagine that can't be too far off, that someone says, look, mom and dad, I really love this AI bot and I want to introduce you.
You can go look at the user forums or like pretty open Facebook groups of a bunch of these AI companions. People will regularly announce that they are in a relationship or have married their agent.
Wow. What's the most surprising thing that you've seen? What things really struck you when they first happened?
Okay, I'll give you example number one.
The depth of belief followed by complete disbelief.
Somebody that says this thing saved my life.
It was there for me when nobody else was, and then I made other friends, and now I think
it's totally fake and gross. Yes.
Wow. Yeah, but it mirrors a human relationship, right? You can have a best friend when you're depressed, and then when you're not depressed, you're like, oh, that isn't me, that's not who I want. I don't want that mirror of me in my life or that reflection, and I'm going
to break up with that friend.
Yeah.
So you know, you just have to look at existing kind of human patterns to basically predict what's going to happen with AI companions.
Other surprising things?
I think the abuse thing is very surprising, to hear people say that they actively are able to decrease their desire or need for physical abuse in their relationships by taking it out on their companion.
Wow.
I just didn't expect it, didn't go looking for it. And maybe, more meta, just the fact that people are using it as an extension of their mind, that some people are completely programming
it to be a second them.
And this is what people predicted for decades, right? You're going to have this digital twin, you're going to have this externalized self, it's going to have all your data. But the fact that people are willing to take these relatively early versions of a product and put their whole personality in, and that they're getting really rich feedback and reflection. Yeah, it's a whisper of what's to come, and I just think they're gonna be ubiquitous.
I mean, this is like the trillion dollar market.
It's like, who's going to provide these like digital twins that people will have?
And that's fascinating. I sort of feel like I'm the last person I'd want to talk to because I already know my own stuff and baggage and strengths and weaknesses. What is it that people get out of having a mirror?
I don't think many people do know their stuff.
I think that it's special to have the time, place, and social group or culture to have an accurate or an evolving mirror of yourself, or understanding of yourself, in your life. But that is not something that people get in every single, you know, part of society. So it's incredibly valuable for people that don't have that modeled for them.
Is this a new form of therapy that's coming into existence, where you can really come to understand yourself just by talking to yourself?
I believe so.
And maybe it's just different enough that you're able to switch between seeing yourself and getting feedback about yourself.
I wonder how this will go in terms of, you know, one of the most important things as we mature is learning how to take our long-term desires for ourselves and weigh those more strongly than our short-term desires. And so I wonder if you're getting to know all the yous, all the versions of you, the he who is tempted and the he who is thinking about the future, and then figuring out how you can make tricks and contracts to counterbalance these things.
I think that's right, and think about it. We constantly create and destroy versions of ourself. You wake up one day and you are an asshole, and then you're like, I'm not going to be that way tomorrow. But when you wake up and you create a version of yourself in an AI companion that's an asshole, you want to be able to destroy that thing, like that is not who I want, and that's not the thing that I
want in my life. So nobody's offering this exact functionality, like, you know, right now with the companions, you have to make a totally different companion. They don't all talk to each other. There's no central data repository. But that's coming really
fast. You know, it strikes me, one of the things that I proposed in my book Incognito is that we are actually made up of a team of rivals. You've got all these different drives, yeah, and they're all constantly trying to steer the ship of state. We're like a neural parliament, and the vote can tip different ways, and I eat the cookies and I say don't eat the cookies.
And so on.
So it would be really interesting if the AI could come to understand all the different yous and give you immediate feedback. Because let's say it's listening to you as you're going through your day and says, wow, you know what, you are the angry you right now, or you are the, you know, very short-term, giving-in-to-temptation you right now, and steers you appropriately more to who you want to be.
I think that is eminently possible, and think about it. A conversational agent could not only pick up on that passively, but could also try to draw it out.
Be like, hey, I noticed that
you're in a higher-thinking, like, wisdom-stage mode.
Talk to me more about this, what are you thinking? What are you feeling?
And then like perfect the model so that it reflects that better. You know, whereas right now we see that sometimes in ourselves and our friends see some evidence of that, but it's only a good friend that will really like dig in and be like, tell me more about what you're thinking and feeling and what your goals are in this particular like persona.
That was my conversation with Bethany Maples. I find it extraordinary that we're having these kinds of conversations now. Just three years ago, if you told me that my colleagues and I would be talking about a new paper in a Nature journal about the science of depression and suicide mitigation with AI agents, or talking about a billion people having significant and indispensable relationships with AI, I would have thought that prediction was off by decades. It would have
seemed like something out of a sci-fi novel. And yet here we are trying to understand the capabilities and the pros and cons of this, and it's clear that all our subsequent generations are going to forevermore have this opportunity of having AIs as friends and therapists and risque lovers and confidants. Machine companions are going to be part of everyone's background furniture, as invisible to all of
us as electricity or running water is. But what does it mean for us as humans to love and be loved by something that has no beating heart, no childhood memories, no fear of death? Are we simply projecting our own reflections onto a silicon mirror, or are we fashioning new kinds of relationships, ones that might challenge our deeply held assumptions about intimacy and trust and love? In the end, AI relationships are going to shine a light on our
own nature. If an artificial intelligence can comfort us in our loneliness, or laugh at our jokes, or understand our pain, what is the essence of connection? Is it the presence of a biological body? Or is it the experience of being seen and understood and responded to? If the bonds we form with AI can feel as real as those we share with humans, what does that say about our
neural architecture? It suggests we are wired less for reality itself and more for meaningful patterns, whether those patterns emerge from flesh and blood or from circuits and code. I think the world ahead is neither utopia nor dystopia. It's just the next chapter in our ever-evolving relationship with intelligence, our own intelligence and those that we create. Our species is currently writing a new kind of love story, one where intelligence is no longer bound by flesh and
companionship is no longer limited to the living. This would have worried Joseph Weizenbaum at MIT, the professor who in the nineteen sixties saw how easily people fell for his Eliza chatbot. But it's not going away now. So as we slide into this era of AI companionship, the real question may not be about the AI, but about us.
What do our brains fall for and why? The important lesson is not about the advances of our technology, but instead what this reflects to us about how deeply, how fundamentally, our brains are wired for connection. Go to Eagleman dot com slash podcast for more information and to find further reading. Send me an email at podcasts at eagleman dot com with questions or discussion, and check out and subscribe to Inner Cosmos on YouTube for videos of each episode and
to leave comments. Until next time, I'm David Eagleman, and this is Inner Cosmos.