ABC Listen. Podcasts, radio, news, music and more. This, embarrassingly, is me as a baby reporter 20 years ago. I was in high school, and I was working at a local community station near Toronto. And back then, I used to go out and do my interviews, and then I would come back to the station and have to transcribe those interviews. And I would spend ages listening back to the tape and pressing pause and then typing out whole sentences and then pressing play. It took forever. It was so annoying. But it was necessary, because how else was I going to remember what everyone had said and then write the story? Not that the stories were all that elegantly told, but anyways, since then, I have spent way too many hours of my life doing this menial task. But then five years ago...
that changed. I realised there were now websites that use AI to generate transcripts for audio, and so I no longer needed to do this really boring task. I could just concentrate on thinking about the story, or telling better stories, doing interviews, whatever else. And although this is a pretty minor thing in the grand scheme of things, it made a huge difference to my working life.
I'm Sana Qadar for First Local. All of that's to say, there are some serious benefits to AI, benefits I'm already reaping. But like a lot of people, I'm also pretty freaked out about a lot of aspects of AI.
A lot of what I might say today can be a bit negative and doomsday. I don't want to make people depressed and anxious about AI. I think it is a wonderful thing and it's going to happen whether we want it to or not, let's say that. And it is going to produce some amazing answers to questions we don't know yet, it's going to solve some diseases, have amazing technological advances. So there's a huge number of positives. But I've been pushing the psychological part of that because I don't see anyone else talking about it. This is Joel Pearson, professor of cognitive neuroscience at the University of New South Wales, and he's definitely a bit freaked out about artificial intelligence. People talk about the ethics of it. They talk about...
There's a lot of scary stuff about, you know, the Terminator scenario, that the robots are going to turn on us and kill all the humans, right? That's a cognitive bias. We tend to worry about the things we can easily picture in our mind's eye. So we can easily imagine the robots from the Terminator films. We can easily imagine your self-driving car taking control and driving you into a wall or into the ocean, right? The psychological things are harder to imagine. Therefore, people don't worry about them as much, even though I think they're probably going to have far greater impact. So how exactly will AI mess with, or is it already messing with, our psychology? And what could possibly be scarier than killer robots?
This is All in the Mind. I'm Sana Qadar. We find out in this very popular episode from our archives. That's interesting. Yeah, when researching ahead of this, the headlines I was coming across were definitely more about worries about the potential for AI to lead to the downfall of humanity. There wasn't a whole lot about the psychological implications in the meantime.
Yeah, I just don't see any of my colleagues talking about this and ringing the alarm bells. Again, I don't like to be Mr. Negative about this, but, you know, if I could do my job well here, we could avoid some of these things and I would be wrong. And that would make me happy, you know, or happier. So...
Part of the idea is to ring the alarm bell early so we can start thinking about this. There's a whole lot of reasons, right? Whether you look at the companies in an arms race, the countries in an arms race to develop AI as quickly as possible. There are various reasons this is moving ahead so quickly.
We've never seen anything move this fast. In six months, whatever I'm talking about now might be out of date, right? So because of that, let's try and ring the alarm bells, get the tech companies to not just build ethics boards, but build neuroscience and psychology boards within the companies, consult with all my colleagues around the world, not just have tech CEOs make predictions about the future of AI and the future of humanity, because they don't know the effects and the way the mind works and how AI is likely to affect humanity and psychology at scale. Okay, so let's talk about some of those potential effects then. What are the psychological implications of AI that keep you up at night? Yeah, so I mean, the obvious... let's talk generally before we get specific. So the largest effects are this revolution that's facing us all.
So most of us have probably seen ChatGPT and this revolution in the last year and a half of these generative AI things. And that's really this first wave. We've recently seen Sora, which is OpenAI's version that can create photorealistic video, realistic short films. So this is part of this first wave of AI. The second wave will be humanoid robots, which we're already seeing. So the company Figure is working on humanoid robots. They released this amazing video. Hey, Figure 01, what do you see right now? It shows their humanoid robot using ChatGPT to talk and interact and do things. I see a red apple on a plate in the center of the table. It puts dishes in a drying rack like a human would do in a kitchen, all while it's conversing with a human and explaining why it's doing these things. Can you explain why you did what you just did while you pick up this trash? So I gave you the apple because it's the only edible item I could provide you with from the table.
I wanted to see this video for myself after hearing Joel talk about it, so I looked it up after our chat, and I have to say, I found the whole thing deeply unnerving. I actually felt heavy in the chest watching it, it was so unnerving. The robot is really good at what it's doing. And it's got this human-like shape and this really warm voice, which you heard, but it's just got a screen for its face. So it's all just deeply creepy.
But Joel's point here isn't that these robots will freak us out, even though they might. It's what they will mean for the way that we live, work, learn and a whole lot more once they are widespread. We know Tesla's working on a humanoid robot, Boston Dynamics too. There are multiple companies working on this. So this will be the next wave of AI in bodies, in physical bodies, that we'll see rolling out over the next few years. And that's when we'll see really crazy stuff. And we'll see sort of artificial general intelligence, that these bodies will be able to learn from interacting with the environment, learn from simulating, and the kinds of things they'll know will just take off exponentially. So that's the environment. That's what I'm thinking about. Okay. So you think about this for a moment and then...
Then you quickly realise that everything, almost everything that our society is based on, from the personal to the public, to the companies, to education, has to radically change. I work at a university. I have young kids at school. Education will radically change. And I'm not just talking about we're going to be taught by an AI, which will happen. I'm talking about what is education.
Like, what are we learning? Why do we need to learn? If each of us is going to have built-in versions of AI assistants, we're going to have not only all the knowledge of humanity at our fingertips 24-7, we're going to have personal assistants which can outthink us, that don't get tired, don't get grumpy, don't suffer the biological things that we suffer from. What is the purpose of an education?
I'm almost thinking, what is the purpose of us then? Well, yes, you're jumping ahead. That's where I'm going to come to, right? And it is a little bit scary, right? The way we try and get people to memorize and learn stuff so they can then do...
something may not be the way we need to educate in the future. But more broadly, a large part of society is built around companies. People work in companies and companies do things and produce products and make things that other people then buy with money. That will likely all change.
Companies won't be what they are today. Economics won't be what it is today. And then jobs won't be what they are today. So when you think about this, most roads lead to some version of what might be called universal basic income, where we humans don't need to work for a paycheck, you know, in a job that we don't want, to get money to buy things. That structure will change in various ways. We don't know how that'll unfold or when it will, but it's pretty hard to come up with an alternative to that over some extended period.
One decade, two decades, we're going to get there, some version of that, at some point. Yeah, how far away do you think that is? It's hard to know. Ten years, maybe? Wow. In ten years, we could potentially have the entire system of how we work and operate changed. I think so, right? I mean, I'm not a fan of making predictions like that, that's probably wrong. But you can make predictions. Other people... 2029 is the year that I hear a lot of people talking about. So it is useful to make predictions, but with AI, almost all predictions so far have been wrong. Things are happening faster than everyone, including the CEOs of these companies, has predicted. We're just seeing extra capabilities that were not predicted coming from these AI systems. Is the psychological implication of that kind of thing just this generalised upending of our purpose and our role and how we interact in the world? Like, everything's going to change, and so that can't help but completely mess with our heads.
Yes. See, I told you. It's going to be all fun and games in this episode. So what does that mean, right? If we don't have to work for a living the way we do now... Is that like everyone gets to retire? We all get to do our hobbies. That's kind of one way to think about it. Now, the potential downside of that, which I don't hear a lot of people talking about, is we know that retirement is pretty bad for health, particularly in men, and historically, mental health crashes when people retire, people get depressed. So we don't know, if we scaled out this kind of semi-retirement, work-on-your-hobby thing, universal basic income, at scale... It wouldn't happen all at once, but we don't know the effects. Would it be catastrophically negative? Right.
What I'm worrying about, and what I hear when I talk to people, talk to companies, work with companies, is uncertainty. Now we know there's lots of evidence from psychology and neuroscience that humans and primates and basically all animals don't like uncertainty. Our brains have evolved to fear uncertainty. And we're already faced with a lot of uncertainty in the workforce, and people are already uncomfortable and anxious about this.
And it's about to go up, you know, a hundred or a thousand fold in the coming years, right? And we're going to have to start thinking about uncertainty like that. We've got to have this uncertainty revolution where we... find ways of making people more comfortable with uncertainty. And that's what I think, that's what I'm talking about when I talk about the psychological impact of this coming AI revolution.
Okay, we've jumped way ahead to the future, a future scenario. Let's bring it back to the present, AI in the present, and the psychological impacts of that now. Taylor Swift was recently the target of deepfake pornography, so porn images of her that were not real were seen by millions of people. That, to me, as a woman, has to be one of the scariest aspects of AI right now, what it can do right now. Yeah. So deepfakes, they've been around for a while, but they've just taken a step up in the last year. Not only can they look real, you can actually now do real-time deepfakes. So you could do that in a video conference, on FaceTime. So I could have a screen next to me now and I could be talking to you, but on the screen it could show someone different and a different voice. You can do that in real time and it looks accurate. There were demonstrations of this
almost 12 months ago now. Oh, wow. So that's where the tech is at. And you're right. So above 90%, I think it's 96%, of deepfakes so far have been non-consensual pornography, basically, like revenge porn or otherwise. Did you say 90%? Above 90%. 96% was the number from last year. That's terrifying. So, yeah, not off to a great start when it comes to deepfakes.
And then you have all the news versions of this that are trying to create political turmoil. We're going to see a lot of this, you know, with the US elections coming up very quickly. But we know from psychology that once you see misinformation and you take it in...
And then I tell you afterwards, hey, that was a fake, it was misinformation, forget about it. You can't really forget about it. That information sticks with you. And it's kind of sticky. This narrative glues into your head, and attempts to undo that don't always work that well. Now, most of the research on this has not been done with videos and deepfakes, because it's so new. So there's a couple of papers out last year on this, but most of it has been done with written stuff. But there are predictions that when it comes to video with audio, these effects will be...
even stronger. We don't really know. There's not a good comparison of written versus still photos versus videos, for example. And is it because when we see those images and those videos, our memory processes them as though they've happened? And you can't really override that by just saying, oh, that wasn't real. What actually happens in the brain when you see something that's not real, but then you're later told it's not real?
Because video includes more of the senses than reading does, the effects are mostly perceptually stronger. If I tell you a story or I show you a video of something traumatic, the video will affect you more. You'll be more emotional. Your heart rate will go up more, and you'll have more flashbacks and traumatic memories of that over the next sort of five, six days kind of thing. So from what we know so far, we predict that these effects will be stronger when it comes to deepfake videos.
So it's called the continued influence effect. Once you're told it was fake, it still keeps influencing you. And we do know that the best way, it seems, to undo that is to not just tell someone it's fake, but rebuild a whole narrative around that. And so just to bring it down to a specific example, perhaps the Taylor Swift example. So if people watched that material...
What is going to happen to their ideas about Taylor Swift, their memory of Taylor Swift? Can you just paint a picture from that one example? Yeah, so it depends. If they're huge Taylor Swift fans, they have a strong model of who she is and what she is in their mind already, so they've probably built up some more immunity to this kind of thing. People who know less of her would be a lot more vulnerable. So they would then build a new model of who she is, what she stands for: she's the kind of person that would do pornography. Even when they're told afterwards, hey, it was all a deepfake, ignore it, their model of her is changed in their mind. We have these models of the world, you know, the world is like this, where I live is like this, these people are like that. So if your model is less developed than someone else's, it's more vulnerable to having large...
new chunks put in by these deepfakes. And that will have long-lasting effects. An interesting way to think about this is we dream every night about all kinds of crazy stuff. And luckily, our brains have evolved so that most of us forget them. If we remembered all our dream details, then we'd have effects like this, but stronger. We'd be confused: did this person do this horrible thing? No, it was my dream, forget about it, right? So it's great that we forget most of our dreams, otherwise we'd get dream-reality confusion. And that's kind of what we're talking about here. So this is what deepfakes are like. They're going to be patching into our long-term memory in ways where we get confused whether they're real or not. And the catch is, even when we're told they're not real, those effects stick. Okay, so taking that specific example a step further, down to ground level: Taylor Swift is a celebrity, politicians are politicians. What does all of that mean then for your average teenager who might be being bullied and then have a deepfake made of them? Like, the effects on them when they're still learning their sense of identity, making friends, making their way in the world. What does that mean? So...
The straight-up answer is we don't know yet. We know that it's going to twist and warp identity and concepts of ourselves in similar ways, right? I know if I opened my computer and I saw 100 videos of me doing and saying things that I'd never said or never did, and I start watching those all day, within a day or two I'm going to start questioning, wait, but wait, do I think those things? In the same way...
If I watch it about a politician, it changes my opinion of them. It's going to change my opinion of myself. And this is, you know, like with social media and Facebook. The sharp and pointy end of this is going to hit teenagers while their brain is still developing, and it's going to do pretty nasty things to their mental health. What's the average parent to do at the moment then? Well, we so far have been talking about things in the future, but these problems are right here and now, right? There are AI algorithms driving social media, and that's why AI is already affecting us and changing our mental health in ways that are really bad for us. So, yeah.
There's the screen time issue, where we need to try and find ways of replacing screen time with in-person time. Because there's other dimensions to this we haven't talked about. There's, you know, emotional intelligence, empathy, which seem to be going down. The data suggests that it's going down over time, and has been for the last decade. And that seems to be linked to the amount of tech use. And that's an example of the effect of technology then affecting human-to-human relationships, right? We're just not getting the same cues when we're talking to someone, even on FaceTime or in a video or
typing. We're not getting the human-to-human body language, the subtle facial cues, the feeling, the empathy for the other person. All those things don't really happen in the same way online, so young people are not getting the chance to practise these skills like you would in person. So if deepfakes and too much time on algorithm-driven social platforms can mess with our sense of reality and our ability to empathise and connect with others,
you can see how that might make some people want to retreat from forming relationships altogether. It's too hard. But then what happens when you wind up feeling lonely? Well, there's an AI solution for that as well. Of course there is. And one program called Replika is a chatbot that's expressly designed to form a bond with users. The company's tagline on their website is the AI companion who cares, always here to listen and talk, always on your side. You can use this bot as a best friend or as a romantic partner, minus the real sex. And just imagine how much that might mess with your head. So this is a whole other dimension. Replika is an app that's been around for quite a few years now, and it's already had a huge impact. So Replika hit the media in a large way at the beginning of last year,
when they tweaked one of their algorithms to pull back. A lot of people were having these relationships with an AI bot. So this is literally, you talk through your phone, you see visuals, and you can pay extra to make it more sexual and get more explicit images of the person, the male or female, that you're having a relationship with. And they were getting pushback from legislation, in Italy I think it was, so they pulled back on some of the algorithm parts of that. And people from all around the world freaked out. They were thrown into depression, anxiety. They'd lost a loved one. And people were saying that their digital partner, their boyfriend or girlfriend in Replika, was no longer themselves, they'd changed, and they
didn't like them anymore, and they were devastated by that. And no longer responding to their sexual advances as well? In the same way, less so, yeah. So just that simple tweak triggered this wave of people freaking out, and anxiety. Now, early on, there was also another dark side to Replika,
where there was this trend that was first documented on Reddit, but a lot of that got deleted, where mainly males were bragging, let's say, about how they could have this sort of abusive relationship. I had this Replika girl and she was like a slave, and I would tell her I'm going to switch her off and kill her, and do this and do that, and she would beg me not to switch her off. And this kind of thing, it became like who could do this more and more. So...
That's a fairly unexplored avenue of human-AI relationships. What happens, and this is part of, like, Westworld, for example, where people let out all their urges on artificial humans, and then what happens, right? What does it do to them, to the soul, what does it do to us? And probably most importantly, if I treat my AI like a slave and I'm rude to it and abusive, how does that then change how I relate to humans? Does that carry over? How much does it carry over? I don't think... I've not seen any data on this. We don't really understand. I've been pushing with certain companies to start to study this as much as possible, because kids are already playing with Alexa and these digital agents now, and we don't know how those relationships affect human-to-human relationships. Yeah, is no research being done on that yet? I think some is, but I saw this coming. I was talking to people at IBM six years ago.
And people said, yeah, yeah, we should. And then nothing happens, right? And then there's this other part of it where people are saying, well, no, the reason I love my AI partner is because they're perfect in every way. I can get exactly what I want, right? It's the perfect relationship. They're not nagging me. They're not doing this. They can be whatever. And so it's like this idea of this, inverted commas, perfect relationship. And if you think about that for a moment,
it might sound wonderful, but I don't think it is, right? So part of being in a relationship with humans is that there are compromises, there are challenges. The other person will challenge you. You will grow. You will have to face things together. And if you don't have those challenges and people picking you up on things, and you get whatever you want whenever you want, it's an addictive thing that is probably not healthy. You're not going to grow in the same way you would with natural challenges. Again, I haven't seen a lot of good data on this yet, but that's an example of another whole area of AI which is rapidly advancing. And just stepping back a moment, can you talk about
how these chatbots actually go about forming a bond? Like, how are they so good at doing that? Can you talk about how they really seep into our brains and form relationships? Yeah, so this is a great topic, right? So humans do something called anthropomorphism, we anthropomorphise. In other words, when we interact with almost anything, we tend to overlay human characteristics onto these things.
And there were these early studies from many decades ago where people would watch the simplest things, like a triangle and a square on a screen. Let's say it's a large triangle and a small square, and the large triangle moves quickly, bangs into the square, and the little square bounces off and the triangle moves again, and then the square looks like it's running away from it. And you watch that for like 20 seconds and you go, oh, look, the triangle's bullying the poor square. And you start putting all these human characteristics on it. And that is just two outline shapes moving in a particular way. Wow. Just from that, we put on these human characteristics. And so this has been one of the things that is really interesting with even the first version of ChatGPT, that very quickly people were, oh my God, it's intelligent. And it wasn't intelligent.
The way we think about it, it wasn't that intelligent. But we fall for these language models which speak to us like a human. They can now talk in a voice that sounds pretty natural. And so it takes like 30 seconds for us to start thinking of it like an agent, like a being, like a person. So we project our expectations about being human onto things. And so it's pretty easy for these chatbots to be taken seriously, for us to feel empathy for them, to fall in love with them, to be fascinated with them in all kinds of ways. Now, when you think about that for a minute, the flip side is we end up being quite vulnerable. Because any AI system will learn that very, very quickly, and I'm sure it already has, how easily we're fooled, right? If a triangle and a square can fool us,
then ChatGPT-4 will easily, easily fool us, right? And then because we're fooled, we tend to think that AI will have emotions like us. It will be scared of being switched off. It will get jealous. It will want to destroy humanity, all these things, because we're thinking like ourselves, and like, you know, the villain from a James Bond film or something like that, that it's going to act like that. But it is totally different to humans in ways that we don't understand at all. And so because of that anthropomorphism and other cognitive biases we have, we radically misunderstand it. Are you saying it might not be as dangerous as we fear then, because of that? It will be dangerous in ways that we would not predict. I don't want to say it's not dangerous. The unknown unknown. But we think, oh, it doesn't want to be turned off, it's scared to die, right? And that's like, that's something
that is pretty unique to biological things as far as we know. There's no reason to think that AI would be scared of being turned off. So, from the relationships we form, to the kinds of work we do, to our very understanding of what is real, AI is set to change everything. And Joel's whole point is, our minds might not be ready for it.
Our whole society is facing changes that we have not faced before. You can't compare it to tools, the industrial revolution, the printing press, TVs, computers. All the analogies that people are trying to use, for things that have happened before, don't really apply here. This is radically different in ways that we don't fully understand. It feels like we're just staring down a future where it's impossible to differentiate in our minds between what is real and what is not. And like what we've already seen from misinformation in the last 10 years, and how that warps people's sense of reality, and people have alternative facts and alternative truths, that's just going to get a whole lot worse. We're just going to be living with our own sense of reality based on whichever chatbots we're interacting with or what's in our orbit. Yeah.
Does that sound right? Party time. Yeah, fun time. Sorry. So sorry, everyone. I don't mean to be the bringer of scary news and anxiety. Is there any reason to feel hopeful or optimistic about the arrival of AI then?
Yeah, so I don't... From the psychological point of view. Like I said, my mission here is to point out the negatives. And my hope is that, if I could be effective, and all the other people that could come along and lobby this with me, if we have the opportunity to lobby governments, lobby universities, lobby companies to pay attention to the psychological dimension of this, and not just put it off and wait for it to happen later on when it's too late, then we could avoid these things. So if I'm wrong, fantastic. I'm happy to be wrong when it comes to a lot of these things, because that means we're in better shape than we could be.
And in the meantime, here's how Joel says you can start to think and plan for all of this change heading our way. I think figuring out what humanity means to us and how we can be more of ourselves, leaning into that, whether that's emotion, whether that's spiritual, whether that's leaning into intuition, something I've studied a lot, figuring out what are the core essentials of being human. How do you want to create your own life in ways that might be independent from all this tech uncertainty, tech noise happening around you? Is that going for a walk in nature? Or is it just spending time with physical humans and loved ones? Maybe it's woodwork, I don't know, there's lots of options. For me, it's the work and discovery, and I'm trying to figure out how I can use AI to help me do more of the things I really enjoy. I think over the next decade we're all going to be faced with
sort of soul-searching journeys like that. And so why not start thinking about it now? My hope, as I said before, is that a lot of things I've talked about won't be as catastrophic as I'm making out. But I think... raising the alarms now can avoid that pain and suffering later on. And that's what I want to do. That is Joel Pearson, professor of cognitive neuroscience and founder of the Future Minds Lab at the University of New South Wales.
That's it for All in the Mind this week. If you like what you've been hearing this year, can I ask a favour? Could you leave us a review on whichever podcast app you use to listen to us? That'll help more listeners find the show, and I'd love to hear your thoughts anyways. Thanks to producer Rose Kerr and sound engineer Russell Stapleton. This episode was written, edited, and presented by me, Sana Qadar. And thank you for listening. I will catch you next time.
You've been listening to an ABC podcast. Discover more great ABC podcasts, live radio and exclusives on the ABC Listen app.