
Episode 11, Part 2 - Fears & Foresight of AI Adoption with Athena Peppes

Dec 16, 2024 · 16 min

Episode description

Athena delves into the ethical challenges surrounding AI adoption, outlining 5 crucial areas for consideration: accountability, bias, data privacy, deep fakes, and job impacts. She emphasises the importance of critical thinking when using AI tools and discusses the potential effects on employment, citing examples like Klarna's significant job cuts due to AI efficiencies. Peppes also explores the broader economic implications of AI adoption, highlighting the need for balanced decision-making that considers both growth opportunities and societal impacts, while stressing the importance of understanding and adapting to these technologies at individual and organisational levels.

Transcript

Matt Best

Welcome back. We're here to continue our conversation with Athena Peppes, founder of Athena Peppes Consulting and Beacon Thought Leadership. Ethics is such a huge topic, and you touched on it earlier: is AI going to take away people's jobs, and various other questions as well. You mentioned the new regulation around AI, but there are also the ethics that sit behind that, you know, putting out videos with people's faces when they weren't actually there, representing something they wouldn't necessarily believe in or support. What's your perspective and take on that? Is that an area you're seeing become a heightened problem? In short, is it going to slow us down a little bit? What are you seeing in the market around that ethics challenge?

Athena Peppes

Yeah, I'd say it's a huge issue, as you say, and there's so much to think about. It can feel overwhelming, and my thing has always been to simplify stuff, right? Because it can feel like, where do I start? So I always have in my mind five things to structure a conversation around this and to help think through the different issues. The first is around accountability. Who is accountable for the information that your AI-enabled bot gives? Now, this might seem like a straightforward question, but there was an interesting case earlier in the year with Air Canada. Their bot had given mistaken information to a customer who was trying to get information about bereavement fares, and they argued that the bot was responsible and they had no liability to pay the money back. Now, that did not go down well, but that was their argument. And there are all sorts of legal implications around that.

Jonny Adams

Do you know the outcome of the case? Did they get laughed out of court?

Athena Peppes

Yes, they did.

Jonny Adams

Yeah, yeah. A bit more human.

Athena Peppes

Yeah, exactly, they did. But then you can get into more detailed questions, like, well, is it the executives who approved it? Is there any responsibility with the team that designed it? What about if it was supplied by a third party? So there's so much complexity there for companies to figure out. The second one is around bias. I use these tools quite a lot because I personally think they help my productivity immensely. They save me so much time, and I just love experimenting with different things. But this is where the importance of critical thinking comes in, always. I can see that they're biased, and I'm sure they're getting better, or at least I hope so. But I was writing a piece around the economics of AI, and I wanted an image of an economist pondering the future of productivity.

Jonny Adams

So where did you go to find the image? That's the question.

Athena Peppes

DALL-E. I used DALL-E, and straight away it gave me a man with white hair, obviously, you know, a middle-aged white man. I was like, okay. And then, because I've had this experience before, I thought, I wonder what would happen if I swapped the profession. You can prompt it to try and get different things and say, oh, give me something more diverse. But I just swapped the word economist for nurse, and straight away it gave me a woman. Of course they're biased, because they're trained on information that we've created, and we all come with our own biases. But do we have a responsibility, as an organization, not to perpetuate those biases if we're using those tools? Then there are the data privacy issues that we've touched on as well: how do you get around those? Deep fakes are the fourth one, and that's something I'm particularly concerned about, because there's so much synthetic media at the moment and the quality of it is amazing. One that might interest you: have you seen the Google NotebookLM tool that now creates podcasts from articles?

Jonny Adams

No, but there we go.

Athena Peppes

That's not to say it all will be like that, but it does make you think about your own job. How can you use it to do your job differently, perhaps? Or perhaps, as humans, we value the fact that we all got together here and had this conversation in person; we put greater value on this than on something that was just artificially generated. And then the fifth one is around jobs. I think that's a huge one, because it sometimes doesn't get mentioned very much in light of the productivity benefits, of which there are many. But I think it's better not to hide away from the conversation and to think about what that would mean for the impact on your people. Will it mean job losses? And if it does, how do you handle that? What's your responsibility to upskill your people and help them understand the technology better?

Jonny Adams

On that point, I was curious about that last point, and I suppose it's a really hot topic for anyone involved in a growth role, whether that's marketing, sales, consultancy, customer success, whatever that may be. When you think about that jobs piece, not naming names, if we think about CEOs over the last ten years in software as a service, it's been growth at all costs. They would do pretty much anything to get to where they need to get to, especially if they're VC-backed; it's quite aggressive. Do you think those CEOs and C-suites will continue to think about growth at all costs, even at the cost of job losses at, may I say, the middle to lower tiers? Or do you think there's going to be some ethical input, a sense of, come on guys, you've got to not worry so much about growth and actually think about society and the people within your function? Do you have a theme or a trend that people are talking about, or maybe it's a bit too early, I'm not too sure.

Athena Peppes

Yeah, it's quite diverse, I would say. I think you're right that if we looked at the past, say, for instance, when smartphones and social media became commonplace, there were a lot of issues, maybe not necessarily around jobs, but parallel issues that we could learn from. And what we saw is that we were very slow; our institutions were very slow to adapt to those kinds of issues and help people. Automation would be one example of that: loads of jobs got automated. I think now the kinds of jobs that will be affected are not just manual, routine jobs, but also knowledge workers. And perhaps that's why we are seeing much more of a discussion, because there's a feeling that, oh, this is becoming a lot bigger in terms of the impact on jobs. I don't know if you can generalize about how CEOs are seeing this. I think the CEO of Klarna, the payments company, said that they got rid of 1,200 jobs because generative AI was helping their marketing and sales teams do things so much faster. And he said they'll be able to function with only 2,000 people, I think, as opposed to perhaps nearly double that now, or something like that. But there's always the economic incentive behind it; there's an IPO coming up. So there are issues like that.

Jonny Adams

Is it the language that creates fear in the current job market around, you know, job cuts? Or are people being redeployed well enough? On the one hand, I get the efficiency model, because if I sit there as a CEO and think, I've got the pressure of the board, I've got the pressure of investors, and do you know what, I'm five years away from exiting and going to retire, would I go for it? Would I cut headcount and use some kind of AI? Because that's what everyone's telling me, that AI is going to solve my efficiency problems. I don't know. I've been noodling on that one, and I don't know which way people would turn.

Matt Best

I sort of gear towards this view that there's a point at which AI, and I'm picking on AI there, obviously there are other technologies, is already helping find efficiencies that make people more productive, which can result in more growth. But to your point about growth at all costs, where does that start to become a problem? Or are we just going to find a natural equilibrium, where it becomes, hey look, there's going to be a sort of reassignment? I think my biggest concern is the pace. With other transformational changes in other industries in the past, automation in manufacturing for example, it was probably slower and maybe more expensive than AI could be, where everything is in the palm of your hand much, much faster. That means an organization could say tomorrow, like the example that you shared there, Athena, well, we can cut 2,000 jobs almost overnight. And I think it's the pace of that that might be the thing that hurts us. Because behind all of that, you've got the knock-on effect on education and on the journey that the next generations are going on, and the question of whether we're training people in the right things. You know, how prevalent is AI in education, in schools, at the moment, or is it just being left to kids to learn it on their own? I think that's probably the bit that concerns me: the pace is different to, say, previous transformations.

Athena Peppes

Yeah, and I think the things you touched on there are about the broader perspective, right? It helps to think about this from a macroeconomic perspective. CEOs, I think, would generally think about it in the context of their organization, but the issues that come up, like you mentioned, around education, the future, how we plan for our economy, those are much bigger issues than any one company, and it's not as commonplace to find CEOs who have that vision. Arguably, some of that might come from us, right? What expectations do we have of these organizations about their responsibility to actually create jobs? In the economics field there's a huge debate around the impact of this on jobs, and I'm not sure there's a conclusion yet. Part of the argument is that most of the ways they estimate what the impact would be are based on previous waves of change, and perhaps that data isn't a good enough predictor of what's coming in the future. So maybe you need an AI model just to work that out, but you see the kind of challenge with doing that. There might be new jobs being created, right? There definitely will be. There are loads of philosophers now being hired by big companies to help them think through these kinds of questions that might come up. So yeah, it's a huge topic. I just feel that, as individuals, the more we understand the technology, the more we use it and learn about it, the more prepared we are to influence that change as well, whether as a consumer, as an employee, or as a citizen.

Matt Best

It's connecting those things, isn't it? And I think that's going to be the challenge. We're in danger of diving into something quite political here, but if I look at corporations' response to, you know, climate action planning and that kind of thing, it's not been all that proactive, right? It's very much, okay, I'm forced to now do this. And my personal opinion is that if we take that same approach to some of this technology, and it's not supported, and that's why I asked the question around policymakers and their role in this, I think there's a really important part to play there.

Athena Peppes

Absolutely. And again, they face the same trade-off that C-suite executives have, right? Because arguably there's a huge case to be made for how you use AI in the public sector, which definitely needs improvements in terms of efficiency and the services that need to be provided to people. So the opportunity there is huge. But then how do you balance that against what it means for jobs? Arguably they have a double role to play, in the sense of both balancing that within the public sector itself, and also, as the policymakers within that, thinking about it for the whole economy.

Jonny Adams

I'm really curious. We've talked about AI, I'd say in general terms, not just today, but in every space that I can even think of. What's beyond AI? I know you think about that and talk about that a lot, but even past all of this, what's the next thing? Do you have an indication or a hypothesis around that? Is that a fair question to ask?

Athena Peppes

Well, for one thing, I think the whole topic of artificial intelligence definitely has more room. There's a lot of discussion around this being hype and so on, and of course there's always a little bit of hype when something new comes to the forefront, right? So there is, but the opportunity is definitely there; it's huge. You see some companies appointing chief AI officers to the C-suite. In the same way, if we look at the topic of sustainability, it used to be one person doing what was called CSR back then, which no one used to take seriously, and now there are whole teams focusing on how organizations can deliver on those goals. Something similar is happening with AI. I think what worries individuals a lot is this idea of artificial general intelligence, this technology being able to completely replicate what we are as humans. But there's just so much that needs to happen for us to get there that I think we possibly won't be around ourselves to discuss that question.

Jonny Adams

So I don't need to fear anything. I can feel quite confident that we're going to be okay.

Athena Peppes

Yeah, but I think it's about finding that balance, as an individual, as a leader, as a team member, whoever you might be, whatever role or hat you might be wearing. What's next is finding that balance between making the most of the growth opportunity that is definitely there, the stuff these technologies can do, creating new content in seconds, we know this, and making sure you do that in a way that's forward-thinking enough that you don't get caught out by some of the risks, challenges and other issues around it.

Matt Best

I think that's a fantastic kind of final thought, Athena, thank you so much for sharing that. We've had a couple of other conversations recently on the podcast about the importance of patience, and it feels like we actually need to be a bit patient. We might even need to slow down that decision-making process and be a bit more considered, perhaps, but with that North Star goal of, right, what could this do to help maximize growth for us personally and also for our businesses?

Jonny Adams

Yeah. I mean, I take your last point. I'm going to capture the opportunity, you know. Thank you.

Matt Best

Seize the day?

Jonny Adams

Yeah, I feel a bit more calm in the circumstances. So thank you so much for sharing some great insights today.

Athena Peppes

Yeah, it's been a pleasure to be here. Thank you.

Transcript source: Provided by creator in RSS feed.