Smart Talks with IBM: Building Trustworthy AI: A Holistic Approach

Jun 28, 2022 · 28 min

Episode description

Advocating for artificial intelligence to be built and deployed ethically is no longer just a compliance issue but a business imperative. In this episode of Smart Talks with IBM, Malcolm Gladwell takes on this topic with Dr. Laurie Santos, host of The Happiness Lab, and Phaedra Boinodiris, Trust in AI Practice Leader within IBM Consulting. Phaedra’s team at IBM is creatively tackling the global need to build trustworthy AI by approaching the challenge holistically, implementing design thinking to address problems before they arise.

This is a paid advertisement from IBM.

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Welcome to TechStuff, a production from iHeartRadio. This season of Smart Talks with IBM is all about new creators: the developers, data scientists, CTOs, and other visionaries creatively applying technology in business to drive change. They use their knowledge and creativity to develop better ways

of working, no matter the industry. Join hosts from your favorite Pushkin Industries podcasts as they use their expertise to deepen these conversations, and of course Malcolm Gladwell will guide you through the season as your host and provide his thoughts and analysis along the way. Look out for new episodes of Smart Talks with IBM on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts,

and learn more at ibm.com/smarttalks. Hello, hello. Welcome to Smart Talks with IBM, a podcast from Pushkin Industries, iHeartRadio, and IBM. I'm Malcolm Gladwell. This season, we're talking to new creators: the developers, data scientists, CTOs, and other visionaries who are creatively applying technology in business to drive change. Channeling their knowledge and expertise, they're developing more creative and effective solutions, no matter the industry.

Our guest today is Phaedra Boinodiris, Trust in AI Practice Leader within IBM Consulting. Advocating for artificial intelligence built and deployed responsibly is no longer just a compliance issue, but a business imperative. Part of Phaedra's job is to help companies identify potential risks and pitfalls way before any code is written. In today's show, you'll hear how Phaedra's team

at IBM is approaching this challenge holistically and creatively. Phaedra spoke with Dr. Laurie Santos, host of the Pushkin podcast The Happiness Lab. Laurie is a professor of psychology at Yale University and an expert on human cognition and the cognitive biases that impede better choices. Now, let's get to the interview. Phaedra, I'm so excited that we get a

chance to chat today. You know, just to start off, I'm wondering how you got started in this role at IBM. Like, what's the story of how you got where you are today? Oh goodness. My background is actually from the world of video games for entertainment, so AI has always been very interesting to me, especially when you intersect AI and play. But several years ago, I began to get very frustrated by what I was reading in the news with respect to malicious intent in the use of AI.

And the more that I learned and the more that I studied this space of AI and ethics, the more I recognized that even organizations with the very, very best of intentions could inadvertently cause potential harm. That's super cool. I love that your interest in more responsible AI came from the gaming world. You have to talk a little bit about your history with gaming

and how that informed your interest in trustworthy AI. Well, it wasn't so much the ethical components of AI when I was working in games. It was more things like: look at what non-player characters can do, you know. I mean, if you've got an AI acting as a character within the game, how is it that you can use

AI in order to make a game a more interesting experience? Actually, I ended up joining IBM to be our first global lead for something called serious games, which is when you use video games to do something other than just entertain, and the idea of integrating real data and real processes within sophisticated games powered by AI to solve complex problems. It wasn't until later, as I mentioned, when all of us started to hear more and more news

about problems: what could happen with respect to putting out models that are inaccurate or unfair. I know from other interviews that you've done that one of your inspirations is sci-fi. I'm also a sci-fi nerd, and I know sci-fi has talked a lot about, you know, the trustworthiness issues that come up when we're dealing with AI and so on. So talk a little bit about how you bring that to your work in developing AI that's a little bit more ethical. A

lovely question. So, my parents were major technophiles. They both were immigrants to the United States, came here to study engineering, and they met in college. Growing up, my sister and I had Star Trek playing every night. My parents were both big fans of Gene Roddenberry's vision of how technology could really be used to help better humankind, and that was the ethos that, of course, we grew

up in. The wonderful thing about science fiction isn't that it predicts cars, for example, but that it predicts traffic jams, you know. And I think there's just so much we can learn from science fiction, or, in fact, like I said, play, as a mechanism to be able to teach. Science fiction predicting traffic jams. I love it. But when we think about AI and science fiction, we need to be careful. We need to remember that AI is not something that's going to enter our lives at some point in the

distant future. AI is something that's all around us today. If you have a virtual assistant in your house, that's AI. Your phone app that predicts traffic? AI. When a streaming service recommends a movie? You've guessed it: AI. Phaedra says AI may be behind the scenes determining the interest rate on your loan, or even whether or not you're the right candidate for that job you applied for. AI is both ubiquitous and invisible, which is why it is so crucial

that companies learn how to build trustworthy AI. How do we do that? When thinking about what it takes to earn trust in something like AI, there are fundamentally human-centric questions to be asked, right? Like: what is the intent of this particular AI model? How accurate is that model? How fair is it? Is it explainable? If it makes a decision that could directly affect my livelihood, can I inquire what data it used about me to make that decision? Is it protecting my data? Is

it robust? Is it protected against people who could trick it to disadvantage me over others? I mean, there are so many questions to be asked. Earning trust in something like AI is fundamentally not a technological challenge but a socio-technological challenge. It can't just be solved with a tool alone.
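To make two of those questions concrete, how accurate is the model, and is it robust, here is a minimal sketch in Python. Everything in it, the synthetic dataset, the logistic regression model, the noise scale, is an illustrative assumption rather than IBM's actual tooling: it trains a classifier, measures held-out accuracy, then re-measures accuracy after perturbing the inputs as a crude robustness probe.

```python
# Minimal sketch of two trust questions: accuracy and robustness.
# Synthetic data, model choice, and noise scale are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# How accurate is the model on data it has never seen?
clean_acc = accuracy_score(y_test, model.predict(X_test))

# Is it robust? A crude probe: does accuracy collapse when inputs are
# nudged, whether by noise or by someone trying to trick the model?
rng = np.random.default_rng(0)
noisy_acc = accuracy_score(
    y_test, model.predict(X_test + rng.normal(0.0, 0.3, X_test.shape))
)

print(f"accuracy on clean inputs:    {clean_acc:.2f}")
print(f"accuracy under perturbation: {noisy_acc:.2f}")
```

A real robustness evaluation would use targeted adversarial attacks rather than random noise; the point of the sketch is only that each trust question can be turned into a measurement.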

What are the kinds of risks that companies have to think through as they're developing these technologies, to make sure they're as trustworthy as possible? Well, you know, they may be putting a lot of money into investing in AI that gets stuck in proof-of-concept land, like, gets stuck in pilot. We've done some research where we have found about 80 percent of investments in AI get stuck. And sometimes it's because the investment isn't tied directly to a business strategy, or, more often than not, people simply don't trust the results of the AI model. As a company

that is of course thinking about this so deeply, what do businesses need to consider when they're trying to figure out, you know, how to solve this big puzzle of AI ethics? It has to be approached holistically. So you've got to be thinking about, for example, what culture is required within your organization in order to really be able to responsibly create AI, what processes are in place to make sure that you're being compliant and that your practitioners know

what to do, and then, of course, AI engineering frameworks and tooling that can assist you on this journey. There is so much, fundamentally, to do. We found that who actually leads responsible, trustworthy AI initiatives within organizations has switched in the last three years.

It used to be technical leaders, for example, a chief data officer or someone with a PhD in machine learning, and now it's switched: 80 percent of those leaders are now non-technical business leaders, maybe, you know,

a chief compliance officer, diversity and inclusivity officers, a chief legal officer. So we're seeing a shift, and I firmly believe it's a recognition from organizations that in order to really pull this off well, there has to be an investment and a focus in culture, in people, and in getting people to understand why they should care about this space. And so I see two challenges with doing that, right?

One is, you know, a lot of these technology companies are really built to be tech companies, not necessarily, you know, social tech companies with this sort of training in ethics and beyond. Another issue seems to be that you're really proposing a switch that's truly holistic, right? That's, like, rethinking the way the company thinks about its bottom line. And so as you think about working through these kinds of challenges at IBM, how have you tackled this? Like,

how have you brought new talent in? How have you thought really carefully about this big holistic switch that needs to come to make AI more trustworthy? Data is an artifact of the human experience. And if you start with that as your definition and then think about, well, data is curated by data scientists, all data is biased. And so if you're not recognizing bias with eyes fully open, then ultimately you're calcifying systemic bias into systems like AI.
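As one illustration of what "recognizing bias with eyes fully open" can look like in practice, here is a hedged sketch of a simple group-fairness check. The records are made up, and the 0.8 cutoff is the informal "four-fifths rule" often used as a screening threshold; a real audit would go much deeper than one ratio.

```python
# Sketch of a simple bias check: compare favorable-outcome rates across
# groups in a model's decisions. The records below are made up, and the
# 0.8 cutoff is the informal "four-fifths rule" screening threshold.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],  # 1 = favorable
})

rates = decisions.groupby("group")["approved"].mean()
disparate_impact = rates.min() / rates.max()

print(rates)
print(f"disparate impact ratio: {disparate_impact:.2f}")
if disparate_impact < 0.8:
    print("warning: favorable outcomes are skewed across groups -- investigate")
```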

So some of the things that we've done at IBM, again recognizing this important need for culture, is a big, big, big focus on diversity: not only looking at teams of data scientists and asking how many women are on this team, how many minorities are on this team, but also insisting on recognizing that we need to bring in people with different worldviews too. For example, what's your definition of fairness?

Is your definition equality, or is it equity? Also bringing in people with a wider variety of skill sets and roles, including social scientists, anthropologists, sociologists, psychologists like yourself, right, behavioral scientists, designers. I mean, we have one of the

leading AI design practices in the world. I mean the effort, the investments we've been making in design thinking as a mechanism to create frameworks for systemic empathy well before any code is written, so people can think through how you would design in order to mitigate any potential harm, given not only the values of your organization, but what

are the rights of individuals. Asking oneself these kinds of questions reinforces the idea that ethics doesn't come at the end, like it's some kind of quality assurance, like, check, I passed the audit, I'm good to go, you know. But instead, really, as soon as you're thinking about using AI for a particular use case, you're thinking about, you know, what is the intent of this model? What's the relationship we ultimately want to have with AI?

And again, these are non-technology questions. This is where social scientists come in. Having a social scientist on your team helping think through these kinds of questions is critical. Let's pause here for a second, because this is a really profound idea. Building responsible AI does not mean that you create a system, then check in at the end and say, is this okay? Is this ethical? If you don't ask those questions until the end of the process, you've

already failed. You have to think about ethics from the jump: from the makeup of the team, to the data you're using to train the model, to the most basic question of all: is this even the right use case for artificial intelligence? The big lesson from IBM is this: responsible AI is something you build at every step of the process. So this season of Smart Talks is all focused on creativity in business. My guess is that thinking

about trustworthy AI involves a lot of creativity. But talk to me about some of the spots where you see this work as being most creative. Oh goodness, I would say incorporating design, design thinking in particular, as well as straight-up design, in order to craft AI responsibly. You've used this word design thinking, and so I'm wondering exactly what you mean here. How do you define this idea of design thinking? Design thinking is a practice that we

established here at IBM many years ago. In essence, what it is, it's a way of working with groups of people to co-create a vision for something, for a product or a service or an outcome. And typically it starts with things like, for example, empathy maps. If you're thinking about an end user, thinking through: what is this person thinking, seeing, hearing, feeling? What are they experiencing? In order to ultimately craft an experience for them that

is targeted specifically for them. So we use it in a really wide variety of different ways with respect to trustworthy AI, even rendering an AI model explainable to a subject. And I'll give you an example. So we've got this wonderful program at IBM called our Academy of Technology, where we take on initiatives that steer the company in

innovative new directions. So we had an initiative titled What the Titanic Taught Us About Explainable AI, and the project was imagining if there was an AI model that could predict the likelihood of a passenger getting a life raft on the Titanic. And we broke up into two work streams. One was the work stream full of the data scientists, who were using all the different explainers to come up with the predictions, and they would crank

out the numbers. And the other team, here's where the social scientists and the designers lived, right, where we were thinking through: how do we empower people? How do we explain this algorithm, this predictor, and the accuracy behind this prediction in such a way as to ultimately empower end users? So they could decide: I'm not getting on that boat, or, I want to get a second opinion, please, or, I want to contest the outputs of this model because I upgraded to first class just yesterday.
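The transcript doesn't include the Academy of Technology team's actual code, but a toy version of the data scientists' work stream might look like the following: fit a model on hypothetical Titanic-style records, then ask which features it leans on, the raw material the designers' work stream would turn into an explanation a passenger can act on. The feature names and synthetic labels are assumptions for illustration.

```python
# Toy version of the Titanic exercise (hypothetical data, not the
# Academy of Technology's actual model). Fit a predictor, then surface
# which features drive it -- the raw material for an explanation.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
n = 500
passengers = pd.DataFrame({
    "passenger_class": rng.integers(1, 4, n),   # 1 = first class
    "age":             rng.uniform(1.0, 80.0, n),
    "is_female":       rng.integers(0, 2, n),
})
# Synthetic ground truth: class and sex dominate who gets a life raft.
passengers["got_life_raft"] = (
    (passengers["is_female"] == 1) | (passengers["passenger_class"] == 1)
).astype(int)

X = passengers.drop(columns="got_life_raft")
y = passengers["got_life_raft"]
model = RandomForestClassifier(random_state=0).fit(X, y)

# Which inputs does the model actually lean on?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(X.columns, result.importances_mean):
    print(f"{name:16s} importance: {score:.3f}")
```

A passenger who sees that passenger_class dominates the prediction has concrete grounds to contest the output after upgrading to first class, which is exactly the kind of empowerment described above.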

See what I'm saying? And that takes a lot of creativity. How do you design an experience for someone in order to ultimately empower them? So design, design thinking, is critically, critically important. And that's why I mentioned, you know, we've got to open up the aperture with respect to who we invite to the table in these kinds of conversations. Taking the time to really understand other people's perspectives is so important when you're doing anything creative, and it is

fundamental to the way the new creators work. The core question you should always be asking is: where will the user be meeting this product? As Phaedra said, what will they be thinking, seeing, hearing, feeling? If you can answer those questions the way IBM does in its design thinking practice, you will be in great shape to create almost anything. Now,

let's hear how it works in practice. And so we've been mostly talking kind of at the meta level about, you know, how to think about AI ethics generally, but of course the way this probably occurs in the trenches is a client approaches IBM and they want help with a specific problem in AI. And so I'm wondering, from a client-based perspective, where do you start having

some of these tough conversations? It has varied, to tell you the truth. We had one client that approached us to expand the use of an AI model to infer the skill sets of their employees, not just to infer their technical skills, but also their soft, foundational skills, meaning: let me use an AI to determine what kind of communicator you might be, Laurie, right? Others might come to us with: okay, we recognize we need help setting up an AI ethics board. Is this something you can assist

us with? Or: we have these values, and we need to establish AI ethics principles and processes to help us ensure that we're compliant, given regulations coming down the pike. Or we've had clients come to us saying: please train our people how to assess for unexpected patterns in an AI model, but then also how to holistically mitigate to prevent any potential harm. And those have been phenomenal engagements.

They're huge learning moments. And so it seems like the real additional value that IBM is bringing through this process isn't necessarily just providing an AI algorithm or consulting on some AI algorithm. It seems like the real value added is explaining how this design thinking works. You're almost like this therapist, or like a really good bartender, who talks to people, who talks whole companies, through some of their problems to try to figure out where they're going astray

before they start implementing these things. Can I put Chief Bartender Officer on my... I like the metaphor. I'll tell you, some of our most valuable people on the team for that engagement: we had an industrial-organizational psychologist, we had an anthropologist. That's why I'm saying it's important that we bring in the social scientists, because you're exactly right, it's more than just scrutinizing the algorithm in its state. You have to be thinking about how it is being

used holistically? And so if I was a business that was trying to think about how a company like IBM could come in and help out with more trustworthy AI,

what would this process really look like? Well, what we're finding more often than not is that there'll be smaller teams within broader organizations that either have the responsibility of compliance and see the writing on the wall, or they've been the ones investing in AI and are trying to figure out how to get the rest of the organization on board with respect to things like setting up an

ethics board or establishing principles or things like that. So one of the things that we've done to help companies do this is we kick off engagements with what we call our AI for Leaders workshops. On the one hand, it's teaching why you should care, but on the other hand, it's meant to get people so excited across the organization that they want to raise their hand and say: I want to represent this part. Like, for example, I want to be part of the ethics board as it is

being stood up. The hard part's not the tech. The hard part is human behavior. And I know I'm preaching to the choir given your background. It's so nice as a psychologist to hear this. I'm, like, snapping my fingers, like, preach! Exactly. The hard part is human behavior. So it's been like drinking from a fire hose, I mean, in terms of the kinds of things that we've all been learning, and there's still so much to learn.

It really bugs me that those who are lucky enough to be able to take classes in things like data ethics or AI ethics are those who self-categorize as coders, machine learning scientists, or data scientists. If we're living in a world where AI is fundamentally being used to make decisions that could

directly affect our livelihoods, we need to know more. We need to have more literacy, and also make sure that there is a consistent message of accessibility, such that we are saying: you don't just have to be interested in coding. If you're interested in social justice or psychology or anthropology, there's a seat at the table for you here, because we desperately need you. We desperately need that kind of

skill set. Just getting people to think about how you design something, given an empathy lens, to protect people? That, I think, is such a crucial skill to learn. You know, one thing I love about your approach is that when you're talking to clients, you're almost doing what I do as a professor, where you're kind of instructing students, getting them to think in different ways. But I know from my field that I wind up learning as much from students as I think sometimes they learn

from me. And so I'm wondering what you've learned in the process of helping so many businesses approach AI a little bit more ethically. Like, have there been insights that you've gotten through your interactions with clients and the challenges they've been facing? I'm learning with every single interaction. For example, in my mind, given the experiences that IBM has had with respect to setting up our principles, our

pillars, our AI Ethics Board, there's a process to follow, right? If you're thinking about it like a book, these are the chapters, in order, to optimize the approach, let's say. But sometimes we work with clients that say: I'm going to install this tool, and I want to jump to chapter seven. And it's like, okay, you know, how do we help navigate clients that want to skip over steps that

we think are important? Another one is, again, the social scientists, and bringing them in to really push hard on: what is the right context? Where is this data from? Tell me the

origin story. Again, like, really pushing us to think hard and with their perspective. You know, it's just constant, constant learning, which is why one of the things we did at IBM is we established something called our Center of Excellence, where we said: you know what, IBMer, we don't care what your background is, we don't care who you are. If you're interested in this space, you can become a member.

The Center of Excellence is a way in which we have not only projects people can join in order to get real-life experience, but then also share back: here's what we learned, we did this with this particular project, and here was our epiphany. Because if we're not sharing back and we're not constantly educating, then we're missing the opportunity to establish the right culture. Establishing the right culture to

share what we're learning is so important. So, going back to where we started, you with your technophile family watching Star Trek: I think a couple of decades ago, we probably couldn't have imagined that we'd be in the place with AI generally where we are now, and especially as we

think through more trustworthy AI. And so, you know, with such change happening right now, with the fact that it's a fire hose that's gonna just get even more powerful over time, what do you think is next in this world of thinking through more trustworthy AI? I would say next is far more education, far more understanding. And we're starting to see that shift, far more CEOs saying, yeah, ethics has to be core to our business. There's that, but

there's a shift. Barely half of CEOs were saying that AI ethics was key or important to their business, and now you're seeing the great majority. So: education, education, education. And again, I would underscore making it far more accessible to far more people, which means it's not just our classes in higher-ed institutions, it's our conferences, it's anytime we write white papers, anytime we publish articles, anytime we

do podcasts like this, right? The way we talk about this space has to be far more accessible and open and inviting to people with different roles, different skill sets, different worldviews, because otherwise, again, we're just codifying our own bias. Well, Phaedra, I want to express my gratitude today for making AI a little bit more accessible to everyone. This has been such a delightful conversation. Thank you so much for joining me for it. The pleasure was mine, Laurie. Thank you

for being the consummate host. Thank you. I want to close by going back to that moment when Laurie suggested that Phaedra was actually IBM's Chief Bartender Officer, not just because that's the best C-suite title ever, but because it gets at what I think is the biggest, most important idea in today's episode. Phaedra boiled it down into a single line when she said: the hard part is not the tech. The hard part is human behavior. Why is building AI

so complicated? Because people are complicated. IBM believes that building trust into AI from the start can lead to better outcomes, and that to build trustworthy AI, you don't just need to think like a computer scientist. You need to think like a psychologist, like an anthropologist. You need to understand people. Smart Talks with IBM is produced by Molly Sosha, Alexandra Garraton, Royston Reserve and Edith Russolo with Jacob Goldstein. We're edited

by Jan Guerra. Our engineers are Jason Gambrel, Sarah Bruguiere, and Ben Holiday. Theme song by Gramoscope. Special thanks to Carly Migliori, Andy Kelly, Kathy Callaghan, and the 8 Bar and IBM teams, as well as the Pushkin marketing team. Smart Talks with IBM is a production of Pushkin Industries and iHeartMedia. To find more Pushkin podcasts, listen on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts. I'm Malcolm Gladwell. This is a paid advertisement from IBM.

Transcript source: Provided by creator in RSS feed.