Justice looks like, how do we use these tools in a joyful, uplifting manner versus just being reactive to the next time.
There Are No Girls on the Internet, as a production of iHeartRadio and Unbossed Creative. I'm Bridget Todd, and this is There Are No Girls on the Internet. This week, President Biden signed an executive order to create some safeguards around
the use of AI. This comes after Black women like Doctor Joy Buolamwini, founder of the Algorithmic Justice League and author of the new book Unmasking AI, have been speaking up about the ways that technology like AI has already harmed marginalized communities and what needs to be done to stop it now. That last part is key, because even though her groundbreaking research has been critical to understanding technology harms, Doctor Buolamwini's vision for the future of technology is optimistic, blending poetry and technology. She asks, what is our collective, just, and joyful vision for the future?
Hello, my name is Doctor Joy Buolamwini. I'm the founder of the Algorithmic Justice League and the author of Unmasking AI. My pronouns are she and hers.
So I've heard you call yourself a poet of code which is awesome. What do you mean by that?
So I am the daughter of an artist and a scientist, so I do feel I've grown up with the arts and science literally together, and so when I use the term Poet of Code, it's really to reflect those two sensibilities which inform my work. So there's a major part of it which is storytelling and humanizing what's going on with evocative audits, you know, and portrayals. And then there's another aspect of it that is getting into the analytical, technical pieces of what it means to evaluate a machine
learning system or other types of AI applications. So the Poet of Code is very much indicating my origins as the daughter of an artist and a scientist.
Do you think that the way that you approach the work has really helped bring more folks into it? Because I've been interested and invested in conversations about tech for a very long time, but I did not care about, slash maybe even fully understand, the implications around bias in things like AI until you. And so you had this way of really making it visible, really making it poetic, really making me understand what was at stake.
Do you think that part of why folks feel so drawn to your work is because you make it so poetic, so, you know, story-based? It really helps people understand, like, where they fit into it.
I think there is that element. So as I was doing my research at MIT, that involved publishing research papers, and as fun as those are, you know, that's a very small community that will likely read those types of papers. So I wanted to say, how do I go from the performance metrics of evaluating an AI system to something like performance art? Why does this even matter? How do we get to the heart of all of these numbers? So if we see bias in a system and we
quantify it, that's only part of the story. The other part of the story is what does that mean for someone who could experience algorithmic discrimination, algorithmic erasure, or exploitation. And that's where the storytelling has to come in, and it did for me. When I was a student at MIT, I had an opportunity to do research that showed large skin type gaps and gender gaps in the accuracy of
different gender classification systems. So these are AI systems that look at a photo of your face and try to guess your gender. Where could that go wrong? Well, so we decided to do a bit of an evaluation. And after we ran the numbers and we showed there were large gaps and biases documented, I wanted to show people why it mattered.
It does matter to all of us, whether you spend a lot of time thinking about it or not. This kind of technology is becoming more and more commonplace, despite the fact that it doesn't work so well on women or people with darker complexions, setting us up to disproportionately experience harm from its use. In Gender Shades, Doctor Buolamwini's groundbreaking research, she was among the first to uncover the gender and racial biases that plague facial recognition technology. But ever the poet, Doctor Buolamwini's spoken-word poem AI, Ain't I a Woman? really brings the problem to life, where AI misgenders and misidentifies famous Black women in history like Michelle Obama, whom facial recognition reads as a young man wearing a toupee.
And so from that Gender Shades research project came the art piece that is AI, Ain't I a Woman? Michelle Obama, unabashed and unafraid to wear her crown of history, yet her crown seems a mystery to systems unsure of her hair. A wig, a bouffant, a toupee? Maybe not. Are there no words for our braids and our locks?
Where I show tech companies you've probably heard of failing on the iconic faces of women like Oprah Winfrey, Serena Williams, and Michelle Obama, and historic figures like Sojourner Truth. Hence the title, AI, Ain't I a Woman? And I saw that when I shared that poem, it's a video poem, in all kinds of spaces, right, EU defense ministers, you know, kids in middle school, it touched people's humanity in a way that the research couldn't. And that for me was really an important moment, because for a long time I felt that I couldn't bring my art into my research because it might not be taken as seriously or it might lessen its impact. And I found just the opposite. When you humanize what's going on, it extends the reach of the people who feel they have a place in the conversation about AI, or even like, oh, this is how it could matter to me, not some abstract, oh, there's discrimination or tech can be harmful.
These harms aren't abstract or theoretical. They're very real, and they're already happening. We've talked about Porcha Woodruff on the podcast before. She was heavily pregnant when she was falsely arrested, held for hours, and needed to be hospitalized after police facial recognition misidentified her as a suspect in a carjacking she had nothing to do with. And she's not the only one. Back in twenty twenty, Robert Williams, a Black man, became the first documented case of a person being falsely arrested thanks to the use of faulty facial recognition technology. Robert was arrested in front of his daughters after facial recognition mismatched his driver's license photo to someone who stole watches from a Shinola store in Detroit, but Robert had nothing to do with it. Tools like Turnitin, which are used to detect students cheating by turning in AI-generated assignments, routinely falsely accuse students of plagiarizing. According to The Markup, the technology is much more likely to generate a false positive for international students and students who are non-native English speakers. A group of Stanford computer scientists found that seven different AI detectors flagged writing by non-native English speakers as AI-generated sixty-one percent of the time. On about twenty percent of the papers, that incorrect assessment was unanimous. Meanwhile, those same detectors almost never made such mistakes when assessing the writing of native English speakers. Obviously, these kinds of accusations could throw vulnerable students' academic careers into turmoil. People like Porcha and the international students speak up when they've experienced harms because of faulty technology. So are the powers that be listening? Do their experiences matter as much as the companies trying to make money from rolling out this technology do?
But an example like Porcha Woodruff, falsely arrested due to AI-powered facial recognition. She was eight months pregnant, sitting in a jail cell, having contractions, being held for a crime she didn't commit. And so when you hear those stories, the stories of who I like to call the excoded, you start to pay attention, right? Or maybe it's your kids and they got flagged for cheating. Turns out they didn't actually cheat. English is their second language, but some ChatGPT detection system, right, is flagging them as cheating. And so I do think those stories are what helps people see that this is a conversation that requires their voice. And it's so easy to think, it's like, I have a PhD from MIT, but I was doing all this before, right. You don't have to have this type of in-depth technical background to have a voice and to have an important perspective, because if you know you're being harmed, that's enough. Yeah.
A big part of what we aim to do here is to help people understand that you might not be an engineer, you might not have a doctorate, but you are the expert of your experience, and you use this technology every day or it's being used on you, and so you innately have a perspective that is valuable and worth sharing and worth hearing and worth centering about how that technology has impacted you.
Let's take a quick break.
And we're back. People who are traditionally marginalized, like women and Black women, are made invisible by technology. Every single day we're faced with silencing, erasure, and hostility. So is that one of the reasons why the technology that these spaces build also can't really see us? Do you ever feel, I mean, stay with me here, I think that there is like a general hostility toward marginalized people, like toward Black women, in technology. And I sometimes feel that the technology that is being made in turn mirrors that same hostility, mirrors that same erasure. And so because this facial recognition is not being tested or trained on us, you know, on diverse data sets or whatever, it in turn erases us. Do you feel that that is kind of because of this underlying hostility toward people who are traditionally marginalized in the space?
I actually don't think it's an intentional underlying hostility, which makes it even more dangerous. So well-intentioned people, right, collecting data, doing what they think is good science, good machine learning, can still create harmful systems. And this is what I learned after we did different audits and I would go talk to the teams behind some of these systems, right. They are nice people, you know, trying to send their
kids to college, right. And it was actually interesting to me because as much as I was wanting to humanize the people who are excoded, who are harmed by AI systems, part of what I try to do in the book as well is also humanize the people who are creating the systems and where things go wrong. But to your greater point, it's not an intentional hostility. Sometimes it is a profound and harmful ignorance to not even think to ask certain questions or to test the system in particular ways.
That's part of what the research was about. We asked what happens when we put an intersectional lens on the way in which we analyze the performance of AI systems, and just doing that right opened up new areas of conversations where before people would just look at the overall score, and that gave us a false sense of progress within
the space. Because we were testing these systems on benchmark data sets to see how well they do, and then you would look at the benchmarks, some of the benchmarks would be over eighty percent lighter-skinned individuals, over seventy percent, you know, people identified as men. And if that is your benchmark of success, you're already not going to see how you're failing. And so when we created a more inclusive data set, et cetera, it allowed us to see that the promises of potentially well-intentioned people weren't even panning out. But there's even more to that, because I think with some of these conversations it can seem like it's a very technical problem with a technical solution. The data, you know, the system didn't detect the face? Make it more inclusive, whatever else it is. But the problem
is accurate systems can be abused. We've seen facial recognition systems deployed at protests, right, which we know can lead to chilling effects if you know you're under surveillance for daring to exercise your First Amendment rights to say this is not correct. Accurate systems create tools for state surveillance. So yes, you can say, well, my phone tracks me. On the other hand, you can leave your phone at home. Your face is a little bit harder. You know, some people do put on a face, but you know what I mean. You know what I mean, right? So I think it's important to understand that even when we have conversations about the accuracy of certain systems, and we should have those conversations, accuracy is not enough to assure accountability or equity.
Now, when we're talking about accountability, especially from tech companies, it is so easy to get caught in a cycle of name-and-shame, where you point out all the bad things that a specific company has done. And if I'm being honest, I might have done that a time or two on this very podcast. But Doctor Buolamwini describes her approach as less name-and-shame and more name-and-change. She wants to show companies what they're doing wrong so they can change for the better. But that doesn't mean those companies never lash out when her team points out the harm that they've caused. I'm curious, how have companies, I won't say any names, but companies who you have called out in your research, or, you know, said like, hey, this is what's going on, how have they responded to your findings?
So overall, I take a name-and-change approach. So the point of pointing out what's wrong isn't to shame a company, it's to say we can do better, right. And sometimes we have companies that are reactive, we have companies that are proactive, and we have companies that are combative. With the first set of research results that we released, we saw more of the reactive stance, which is, oh, now that there's a headline, right, we're gonna go, we are on the problem, or we were already working on the problem. You know, there are different ways, but now it's a priority because it's making headlines. So I saw that, and the reactive approach tended to be a technical approach, which is, okay, there were these disparities, so let's close them. We now have more accurate XYZ. Again, accurate systems can be abused. Then we did experience
some combative responses, right. So here we had a huge tech company coming out and saying your research is misleading, attempting to discredit the research. At that time, I was a graduate student, and I was so fortunate, you know, that I had senior scholars and people well respected in the AI industry who came to our defense, you know, a Turing Prize winner, somebody who was literally the chief AI scientist at that company, saying what the research shows warrants our attention, and this is research we should be elevating, not dismissing, because it makes the field better as a whole if we can acknowledge our limitations and understand what's going wrong, so we can build more robust systems. Because this doesn't just deal with faces, right. If you want to use computer vision to help, let's say, with medical diagnoses, you want to make sure you understand where things can go wrong so we can course correct for things to go right.
So we had the combative approach, the reactive approach, but the approach I appreciate most is the proactive approach. Okay, we've heard there are some issues. Instead of waiting for someone to drop the paper or the headline, what can we be doing as a company now? And I've had the opportunity to work with Procter and Gamble, with Olay, on
the Decode the Bias campaign. And when they came to me and they asked for an algorithmic audit, I said, given what you've described your tool does, and this was a tool that would analyze your skin and give you product recommendations, and how you trained the tool on a set of data, I suspect if we dig in there, we're gonna find some bias. They're like, that's okay. If you find bias, we'll do what we can to correct it, and if we can't correct it, we'll shut the system down. I was like, can I get this in writing? I never hear this. I never hear this. And then my other question was, if we did an audit, could we publish the audit results? Because that adds another level of transparency, so it's not just, oh, we got checked, but no one knows what happened, right. They agreed to all of those things. We did in fact find bias, as we thought would be there. And on the proactive side, not only did they seek to be audited,
they also agreed to a consented data promise. And this was inspired by their skin promise. So when I first started working with Olay, I was excited, and then they told me, you know, when we do the campaigns, there won't be any post-production airbrushing. What we capture is what will show, right, you know, truth in advertising. You want to think about people's body image and all of that. And I'll be honest, I was a little disappointed, because, like, you just want to know you could be saved by the airbrush.
Right.
But because of that promise, which is a good promise, I get where they're coming from. As the person on the other side of the 4K camera in your face, you're asking, can we consider XYZ? But what I appreciated about that is it made me even more disciplined with my actual skincare regimen, and I also drank water, and I did all the right things, for vanity reasons, I won't lie, I did the right things for vanity reasons. But I think about that skincare promise, right, that was part of the inspiration for the consented data promise. And just like when you make a promise and it's a public commitment, that's the important part, now there's a little bit of accountability, right. So now you are going to bed early, now you are drinking more water, now you are exercising five days a week, which might not have been the case before you made that public. So that's an example of more proactive. So from the reactive to the combative to the proactive, we've seen it all.
I think what I learned most from the combative response was how much of a risk I was taking as a young researcher to not only do the research, but to name the companies, and I'll name them now, right, that I tested IBM, that I tested Microsoft, that I tested Amazon. And because of their power, that meant I was risking future opportunities, and also other researchers were watching how I was treated, how my co-authors were treated, people like Doctor Gebru. They were also getting a sense of what is possible. When I look at research papers now, where people openly talk about algorithmic bias and algorithmic harms, and people openly name the AI models or the tech companies, that wasn't always the case.
Right.
A price was paid for this more robust conversation to happen.
Doctor Buolamwini is right, this all comes at a cost. Doctor Timnit Gebru, who she mentioned earlier, was a co-author on Doctor Buolamwini's Gender Shades research. Doctor Gebru was once the technical co-lead of the Ethical Artificial Intelligence team at Google. While in that role, she worked on a paper about the risks of large language models, from environmental impact to bias. Google demanded she withdraw the paper. It got contentious, the conversation was hostile, and the whole thing was highly gendered and racialized. Doctor Gebru was belittled, discredited, and harassed online, and it ended with her termination.
It cost us something. For Doctor Gebru, you know, it cost her her job to speak up when she saw some of the issues that we see in what they call large language models, the type of AI systems that power ChatGPT, right. And so I do think the timing of the different types of company responses also made a difference in my own trajectory. The first response I had when the Gender Shades paper was published was IBM invited me to their headquarters. You know, I spoke
to their team members. They actually had released a new model by the time I was presenting that research, and I could share what their results had been. And then later we did our own study. So that was a very different reception. That reception gave me hope. I was like, okay, all right, let's work together. The Amazon situation, I don't know, I don't know how I feel about corporate anymore, sort of thing. But I'm, you know, putting these as the more extreme cases.
But the point being, we can't really just wait on whether a company's going to choose to be reactive, proactive, or combative. What we really need are laws and regulations that don't rely on the goodwill of companies. Yeah.
I mean, I have to ask, when you were this young researcher naming these companies in your findings, did you know that you were taking on such a personal risk, or were you like, oh wow, glad it worked out, glad people had my back? Did you know that that was a risk you were incurring and did it anyway, or do you sort of look back and think, like, wow, I'm really glad that worked out?
I knew that once the research was published it would be questioned, so before it was published, I actually sat with a law clinic.
Right.
We went through what could be said, right, what might actually put you in legal jeopardy, and so forth. So I didn't go into the situation not thinking there might be blowback. I was actually surprised with the first round; we prepared, and they were like, oh yeah, okay, these are issues, come to the headquarters, XYZ, we released new models, et cetera. The blowback that I got with the second paper is what I had thought I might experience, but just the magnitude of it I wasn't ready for. I remember with the film Coded Bias, available on Netflix, it shares part of the story of a graduate student starting the Algorithmic Justice League, and examples of people experiencing real-world AI harms. The people who provide the insurance for that film were nervous because we critiqued Amazon. It wasn't that Amazon had said anything. It was just an acknowledgment of Amazon's power, right. And so it didn't dawn on me just how powerful some
of these tech companies are. I remember being at an international summit in Switzerland, and it was as if the heads of the tech companies were heads of state, you know, and so observing that closer made me realize, I was like, oh, okay, I'm poking a dragon. I'm like, oh, it's a fire-breathing dragon. Oh, it's like a dragon dragon.
And like when you go to kill a bug and you're like, oh, it's got wings, it flies, right.
So I knew, like, you know, it's not going to be the best situation. But I don't think I was fully prepared, though I thought I had prepared.
More after a quick break.
Let's get right back into it. Even though Amazon tried publicly discrediting Doctor Buolamwini's work calling out the harms of their facial recognition technology, in the end, they conceded the technology wasn't exactly safe. In twenty twenty, they announced a pause on allowing police to use the technology, and eventually extended that pause indefinitely. And correct me if
I'm wrong, but your work ended up with Amazon rolling back some of the uses of their faulty facial recognition technology. So ultimately, not only were you obviously vindicated, but that work went on to create a somewhat, like, safer landscape for everyone, because Amazon had to step back and be like, okay, wait a minute, this technology maybe isn't really working that well.
They would not put it in those terms, but they did take other steps. So I will say, before Amazon, IBM actually said we are no longer going to sell to police departments. And this was in twenty twenty, right, so we also had the murder of George Floyd happening at that time. And Microsoft said we will not sell this until regulations are in place. And then Amazon came third, you know, and they said we'll halt it for a year, and then they extended that halt.
Right.
But this is to say there was an acknowledgment, right, of the risks and the harms. Was it just risks and harms to people, or risks and harms to the companies' reputations? It could be a combination of both. I'm excited to share that this research led to numerous cities, you know, incorporating some of the findings in their analysis and in their statements for why they chose to enact certain laws that restrict police use of facial recognition. It also changed the conversation at the national and international level. The EU AI Act actually has a provision that would prevent the use of live facial recognition in public spaces. When I spoke with President Biden at an AI roundtable some time ago, this was top of mind. I shared the story of Robert Williams being wrongfully arrested in front of his two daughters. We talked about racial bias in AI systems and other types of harms that can impact many people, because no one, trust me, no one is immune. This isn't just other people's problem. And so to see the reach of that work certainly made all of the combative, you know, responses and things like that somehow worth it.
And it really goes back to what you were saying earlier about how systems don't have to be biased to be misused, and like, I don't know, I want to believe in a tech landscape where companies with so much power don't have to wait until something goes wrong, don't have to wait until somebody is wrongfully arrested, don't have to wait until they're called out to make things a little bit safer and more equitable. Like, do you believe that
is possible where companies aren't just reacting, they're actually you know, being proactive at wanting the technology that they deploy on all of us to be more equitable.
I think that, again, you do see companies taking on the mantle of responsible AI. You'll have other companies like Credo AI that will have services that are meant to help companies adopting AI systems do it in a responsible way. You'll see companies hiring responsible AI leads, right. So I definitely think there is an intention there. Where I still push back a bit is, self-regulation is always self-interested, not surprising. So I do think real accountability requires external accountability. And the other part that I don't really see companies focused on so much is redress. So there's a lot of conversation about being responsible in terms of preventing future harms, but what about those who have been harmed already? And I do think algorithmic redress is oftentimes missing from this conversation of responsible AI. So when I see a company stepping out to say, and we're doing redress, I might be convinced. I haven't seen it yet, though. Prove me wrong. Prove me wrong. I want to be wrong.
Well as somebody at the helm of an organization fighting for algorithmic justice, what does justice look like?
Justice looks like you live in a world where data is not destiny, where your hue is not a cue to dismiss your humanity, where you actually have data rights and you can consent to how your information is used. Justice looks like, how do we use these tools in a joyful, uplifting manner versus just being reactive to the next time. So when I think of social justice, you can't have social justice without algorithmic justice. Because if you're saying we're pushing for gender equality, yet you have an AI system that cuts out women's resumes, we didn't quite think it through, right. You can't necessarily say, oh, we have racial equality, and then you're adopting biased facial recognition that's putting folks (so far, the ones I've seen have all been dark-skinned like us, you know) into prison due to misidentification. And so for me, right, algorithmic justice is truly being in that place where we can be our full selves and not be targeted, right, or algorithmically placed as other, algorithmically erased, algorithmically exploited. And so that's the world we fight for, right. We say free the excoded, and so this is algorithmic justice.
The book is Unmasking AI. Your namesake, Joy, I have to tell you, you are such a joyful person. Speaking to you about this work, it just comes through how much you care, and, I don't know, I have a lot of conversations about tech that are hard and dark and grim, and your work is just so the opposite. It asks, what if? What's possible? What can we do? How can things be better? Like, it's just really nice to see somebody leading the way with such a joyful but justice-rooted perspective. It's so refreshing.
Thank you, and it's so wonderful to be in conversation with you. I love supporting badass (you said I could cuss) women, so this is great.
So how can folks learn more about the Algorithmic Justice League.
I'm so glad you asked. We do have www dot AJL dot org, and so we invite people to be agents of change and join the Algorithmic Justice League. We have a library there as well, so if you're new to this area and you are curious, you know, what even is AI, we've created resources for you so you can be part of the conversation. And we also have an Excoded Experiences platform. So, like you were saying, you are the expert of your own lived experience, and we value that expertise, so we do campaigns where people can tell their stories of being excoded. For example, we just launched a campaign about facial recognition in airports, so people are sharing if they saw signage, if they knew they could opt out, and all of that actually builds a database of stories that shows if the TSA or others are actually doing what they say. They said it was optional? I didn't even know I could opt out. We have a disconnect, but we also have the data, right. So I do think, as people are encountering various AI systems and they have questions or stories to share, AJL is that place they can go to.
So please check out AJL. You're doing such incredible work. Thank you so much for being here, and just thank you for being you. Thank you for being in the space. We need more people like you.
Thank you for having me.
Black women like Doctor Buolamwini have been speaking truth to power when it comes to AI, and it's critical that the people who hold that power are listening. Her new book, Unmasking AI, is poised to be one of the most important books about technology of the year, and it could not have come at a better time. It's available now, and I hope that you'll join me in reading it. If you're looking for ways to support the show, check out our merch store at tangoti dot com slash store.
Got a story about an interesting thing in tech, or just want to say hi? You can reach us at hello at tangoti dot com. You can also find transcripts for today's episode at tangoti dot com. There Are No Girls on the Internet was created by me, Bridget Todd. It's a production of iHeartRadio and Unbossed Creative, edited by Joey Pat. Jonathan Strickland is our executive producer. Tari Harrison is our producer and sound engineer. Michael Amato is our contributing producer. I'm your host, Bridget Todd. If you want to help us grow, rate and review us on Apple Podcasts. For more podcasts from iHeartRadio, check out the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.