Welcome to TechStuff, a production from I Heart Radio. Hey there, I'm Lauren Vogelbaum, sitting in for Jonathan Strickland today for I Heart Radio's International Women's Day podcast takeover. Some of y'all might remember me from way back when I was Jonathan's co-host here for a minute, when I was just a tiny little baby podcaster, or from Forward Thinking, another show that we worked on together, or from other podcasts that I work on myself, or you
might find my voice new and strange. But in any case, hello, thank you for existing and for being here today. In honor of International Women's Day, I wanted to do this episode about ethics in technology, and specifically in artificial intelligence. You know, about the ways that tech can hurt or help the quest to make the world more equitable. And you might be going, aren't computers
essentially, or even quintessentially, unbiased? You know, a program doesn't have feelings, it only has code that it executes. And of course that's true, but the humans who write the code do have biases, some conscious, some unconscious, and so the ways that we tell programs to work can carry those biases. One example that I always think of, and you might have seen headlines about this back in like two thousand nine or ten, is how digital cameras can behave in biased ways.
There was this whole thing where some webcams weren't tracking and focusing on the faces of black users, and some other cameras were flagging photos of Asian subjects because the software insisted that their eyes were closed. In both of these cases, it was clear that the programs had been trained on photos of a vast majority of white faces. The programs didn't know what to do with skin that reflected light differently or with eyes that were a different shape.
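To make that concrete: one rough, hypothetical sketch of the kind of check that can catch this is to evaluate a face detector separately on each group in a labeled test set. The detect_faces function and the data here are stand-ins, not any real camera vendor's code.

```python
# Sketch: measure a face detector's hit rate separately per demographic group.
# `detect_faces` and the test data are hypothetical stand-ins, not a vendor's API.
from collections import defaultdict

def detection_rate_by_group(test_images, detect_faces):
    """test_images: list of (image, group_label) pairs, each image containing one face."""
    found = defaultdict(int)
    total = defaultdict(int)
    for image, group in test_images:
        total[group] += 1
        if detect_faces(image):          # assumed to return a list of face bounding boxes
            found[group] += 1
    return {group: found[group] / total[group] for group in total}

# If the rates differ wildly between groups, the training data (or the model) has a problem,
# e.g. {'lighter-skinned': 0.98, 'darker-skinned': 0.71}
```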
And I will say, like, this is not a purely digital issue. This sort of thing has been an issue in photography for as long as photography has existed. Film stocks were originally created with only white subjects in mind, and it wasn't until, like, furniture and chocolate companies started lodging complaints with Kodak in the nineteen sixties and seventies that Kodak started to adjust their films to
better capture different shades of brown. But on this, like, small example scale, you know, that sucks for the users of these cameras. But these problems are really compounded when you start getting into big data and machine learning and artificial intelligence. Artificial intelligence has the capacity to totally
change our world for the better. Everything from making our energy grid more efficient and more adaptable, preventing tragic outages like we saw in Texas recently, to helping farmers make the most of their resources and getting more fresh foods to people who currently don't have good access to that, to making autonomous vehicles possible, to letting your doctor just, like, real quick consult every case of a disease ever while making a decision about how to proceed with your treatment.
To, I don't know, stopping the thing where you always get served ads for the thing you just bought. Yes, I like that T-shirt, that's why I just bought it. Okay. Anyway, it's not as simple as Asimov's laws of robotics when you start getting into the wider consequences of AI. And you know, a robot may not injure a human being or, through inaction, allow a human being to come to harm, which of course didn't even work in the
fictional world that Asimov set up. And robot ethics is totally a thing that is also not simple. And of course it is ideal if your Roomba or your autonomous car does not kill you, but we're talking about designing these AI systems that will change the way of life for whole societies. It is a big deal, and it could lead to some big problems. So we need to talk about how to train your algorithm. These big systems start small, with
designers training algorithms on data sets. So right from the start, you have the issue of what data is going in, and, within that data, what's being paid attention to and what's being ignored.
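Just to make the "what data is going in" question concrete, here's a minimal sketch of an audit you could run before any training happens. The file name and column names are hypothetical stand-ins; the point is just to count who is and isn't represented.

```python
# Sketch: audit the composition of a training set before fitting anything.
# The CSV path and column names are hypothetical.
import csv
from collections import Counter

def audit_training_set(path, group_column):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    counts = Counter(row[group_column] for row in rows)
    n = len(rows)
    for group, count in counts.most_common():
        print(f"{group}: {count} examples ({count / n:.1%} of the data)")

# audit_training_set("faces_train.csv", "skin_tone")
# A training set that is 90% one group will usually give you a model that works
# well for that group and is unreliable for everyone else.
```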
Now, I'm not saying that these designers are all mustache-twirling villains out there to do evil, but they are human. We're each moving through the world with our own set of experiences, and there are so many other experiences out there that we're bound to fail to take some of them into consideration, or to misunderstand some of those circumstances. Which is why it's so important to have people from a variety of backgrounds on these projects. And right now, diversity in tech is, uh, not great. Back in twenty fourteen, a whole bunch of big tech companies got together and pledged to increase diversity in their workforces and make their results public.
And every year these reports come out, and the numbers haven't changed that much in some of these categories in six years. Women are better represented now, having gone from around fifteen percent of the workforce to around a quarter at places like Google and Facebook. But the only company that showed a comparable jump in black employees was Amazon, and they're including their distribution
center employees. And other categories of underrepresented people, like people with disabilities, aren't even being reported in all of these public results, but studies show that they're dramatically underrepresented in the workforce, which just isn't great, you know, when we're designing technology like self-driving cars that will need
to take into consideration the movement of wheelchair users. And there are unfortunately instances of programs being made specifically with bias, like in twenty sixteen when the Boston Police Department used a social media surveillance system to flag posts made by regular citizens who used certain terms, for example colloquial Arabic Muslim words, or words like Ferguson or protest. And we are all in this, whether we like it
or not. Again, just as one example: everything that we do online, from, you know, what we type into search engines and social media sites, to our location data, to how we move our mice or, like, tap at our smartphones, all of that has the potential to be recorded and collected and sold and referenced and cross-referenced and used to track us in any number of ways.
And AI systems are a huge industry worldwide. Business spending on artificial intelligence hit about fifty billion dollars in twenty twenty, and it's expected to more than double that by twenty twenty four. Retail, banking, media, governments, all kinds of industries are investing in this. And all of this is fairly new, but of course the field of ethics, and even computer ethics, is not new at all.
So ethics and technology have always been tied together, because every time that we humans create some new technology that changes our world and how we interact with it and each other, we have to reconsider our world and our interactions. You could even argue that, in that way, like, philosophically, ethics is itself a type of technology. But I'm not going to go that deep today; I'm
backing away from that precipice. So let's skip ahead from, you know, the beginning of human consciousness to the nineteen forties, because that's when digital computers were being invented and the field of cybernetics got started. That's
the science of information feedback systems, right? Cybernetics was pioneered by MIT mathematician Norbert Wiener and some of his colleagues as they were working during World War Two to develop an anti-aircraft cannon that could (a) detect and track a fast-moving airplane, then (b) extrapolate the airplane's probable location in the immediate future and aim, and then (c) signal the firing mechanism to fire. And that internal communication that the machine was
doing really got Wiener thinking. In nineteen forty-eight he published a book called Cybernetics, or Control and Communication in the Animal and the Machine, and in it he mused that these new computing machines could very easily become central nervous systems for processing all kinds of data from all kinds of instruments, and that that potential was huge. He compared it to
nuclear weapons. He wrote: long before Nagasaki and the public awareness of the atomic bomb, it had occurred to me that we were here in the presence of another social potentiality, of unheard-of importance for good and for evil.
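As a tiny aside for the technically curious: the "extrapolate the airplane's probable location" step can be sketched, very roughly, as constant-velocity prediction. Wiener's actual statistical predictor was far more sophisticated; this is just a toy to show the idea.

```python
# Toy version of the "extrapolate where the plane will be" step.
# Assumes roughly constant velocity over a short horizon; Wiener's real
# predictor was a statistical filter, not this simple.
def predict_position(p_now, p_prev, dt, lead_time):
    """p_now, p_prev: (x, y) positions observed dt seconds apart."""
    vx = (p_now[0] - p_prev[0]) / dt
    vy = (p_now[1] - p_prev[1]) / dt
    return (p_now[0] + vx * lead_time, p_now[1] + vy * lead_time)

# Aim where the target will be when the shell arrives, not where it is now:
# predict_position((1200.0, 950.0), (1180.0, 960.0), dt=0.5, lead_time=2.0)
```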
Wiener expounded on that social potentiality in a book he published in nineteen fifty called The Human Use of Human Beings, which basically predicted that integrating computer technology into society was going to be another revolution, just as sweeping and messy as the Industrial Revolution, and he tried to lay out a bit of groundwork for how to not, like, totally bork it up. Spoiler alert: people borked it up anyway.
There wasn't a whole lot more work done in the field of computer ethics until the nineteen sixties and seventies, when computer-based crime and information security and privacy concerns had already become a problem. So by the mid sixties, corporations and the government had both begun collecting just giant amounts of personal data about US citizens in literally massive computers. I guess all computers are literally massive in
that they have mass, but okay, you know what I mean. Everything from, you know, medical records to military records to legal documents to shopping habits. And this journalist by the name of Vance Packard wrote a book called The Naked Society, published in nineteen sixty four, about the inherent privacy issue of having that information collected and available for instantaneous reference.
Something of an uproar ensued, focusing, as Packard's book did, on the US government's use of citizens' information, and it led to just a whole bunch of data transparency legislation over the next decade: the Freedom of Information Act, the Fair Credit Reporting Act, all designed to make sure that citizens are able to know what data is being collected about them by the government and how it's being used. And to be fair, at that time, most computational power
was in the hands of the government. But this legislation and conversation really ignored the activities of private corporations, and it never really questioned the ethics of collecting all of that data in the first place. And this is apparently, like, a very American thing, the concept that information is inherently good and, like, more is better. But that is a rabbit hole for another day. But
it's not that people weren't thinking about the ethics. One important thing that happened around the same time was that a researcher by the name of Donn Parker was looking into crime being committed via computers. Parker proposed to the leading industry professional organization, the Association for Computing Machinery, that they develop a code of ethics for their members,
and they were like, yeah, cool, you do that. So he headed up a committee, and the ACM adopted their first code of ethics in nineteen seventy three. They've updated it, I think, about once a decade since then, with the most recent update being in twenty eighteen. It's really thoughtful. One of my favorite bits from the intro specifies that it's, quote, not an algorithm for solving ethical problems, rather it serves as a basis for
ethical decision making. You can read it, if you're into that sort of thing, by going to ACM dot org. And then, back to our timeline: in nineteen seventy six, Joseph Weizenbaum, who had created the psychotherapy-mimicking chatbot ELIZA a decade earlier, published his book Computer Power and Human Reason: From Judgment to Calculation. This book was a response to the response that he had gotten from people to his chatbot ELIZA, and I know that Jonathan has talked about this chatbot
on the show before. It comes up a lot in discussions about the Turing test and how convincingly computers can approximate human communication, because it was one of the first that was effective. The thing is, though, that Weizenbaum designed ELIZA as a demonstration of how bad computers inherently are at this type of communication, but he got the opposite response from people who tried the program out in talking to, I mean, you know, typing with,
ELIZA about their psychological problems. People felt like ELIZA understood them, even when they knew it was a robot. So Weizenbaum was kind of like, whoa, wait. And so he wrote this book to really explain the differences between computation and human intelligence, and to assert that ethics are imperative in the design of artificial intelligence, because people will forget that computers do not have the understanding, the wisdom, the moral and emotional consideration of human beings.
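For a sense of how simple ELIZA's machinery actually was, here's a tiny sketch in its spirit: keyword rules plus reflecting the user's own words back. Weizenbaum's original, written in MAD-SLIP, used a much larger script of ranked rules, so treat this as an illustration, not a reconstruction.

```python
# A tiny ELIZA-flavored sketch: keyword rules plus pronoun reflection.
# Weizenbaum's original used a far richer script; this only shows the pattern.
import re

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "you": "I", "your": "my"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

def reflect(text):
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(statement):
    for pattern, template in RULES:
        match = re.match(pattern, statement.lower().strip(".!?"))
        if match:
            return template.format(*(reflect(group) for group in match.groups()))

# eliza_reply("I feel ignored by my family")  ->  "Why do you feel ignored by your family?"
```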
And this is when things really started picking up in the field of computer ethics. Also in the mid nineteen seventies, this professor who was teaching medical ethics at the time, Walter Maner, noticed how computers were complicating that field, and he became the first person in academia to really, like, decide that computer ethics should be its own field of research and application. He wrote that computer technology was creating whole new ethical
concerns that needed to be taken into consideration, so he popularized the term computer ethics, did speeches and workshops and everything. And, jumping ahead to nineteen eighty five, the first major textbook on the subject was published, called Computer Ethics, by one Deborah Johnson. And Johnson disagreed with Maner: she argued that computers weren't creating new problems, but rather exacerbating old
problems around things like privacy, ownership, power, and responsibility. We will get into what the future may hold after we get back from a quick break. Welcome back. As the computer industry grew, with consumer adoption of home computing, with internet access spreading, and with both the capacity and the ubiquity of computers just absolutely snowballing, the field of computer ethics exploded, and more and more groups formed to help everyone make sense of all of this.
For example, the Electronic Frontier Foundation was founded in nineteen ninety. That's the advocacy and activism group that's dedicated to defending civil liberties as technology advances. And okay, this is kind of a side quest, but I did not know this: they actually formed up in response to the federal seizure of a bunch of computer equipment belonging to Steve Jackson Games, yep, the company that brings us Apples to Apples and Munchkin
and lots of other good stuff. So what happened here was that there was this BellSouth digital document that explained how the emergency telephone system worked, and it leaked, and the U.S. Secret Service was concerned that having that info out there was a security risk, you know, that hackers might overwhelm the system, something like that. So the Secret Service was conducting raids,
tracking this document's digital distribution. Steve Jackson was innocent, but they got this warrant, conducted this raid, took all this stuff, and wound up accessing and deleting a bunch of personal bulletin board messages from the company's bulletin board system in the process. Now, the Secret Service, you know, didn't find anything, so they gave the equipment back and didn't press charges, but Steve Jackson was like, no, no, no, y'all almost tanked my business, you violated the privacy of
my bulletin board users, we are pressing charges against you. But there was really no civil rights organization that was prepared to take on the case, due to the technological complexity of the issue. So the EFF formed in order to bring that suit to court, and that was the first time that a court recognized that email should have equal protection to phone calls or any other kind of communication. So thanks, Steve Jackson, for everything. The EFF has
done a whole lot of important stuff. They got encryption technology taken off the list of nationally regulated weapons. That's a separate episode, though. Anyway, side quest over, back to the main quest. All of this computer stuff exploded, and the field of computer ethics specialized, so now you've got internet ethics, information systems ethics, robot ethics. And I do want to say, all of this does coincide with work being done in other
fields, in engineering and information science. I don't want to imply that computer technology was the only field that had been working with and contributing to these ethical theories and the practical application of them. But like I was saying at the top of this episode, one of the really interesting specialties to me is AI ethics, because it does have such potentially sweeping effects, and also because it keeps coming up in the news for
less than rad reasons. So you have probably seen headlines over the past few years about algorithms gone wrong. There's the story from twenty sixteen where ProPublica looked at the software used by some courts to determine the risk of criminal defendants committing further offenses, and therefore to determine whether to detain those defendants until trial, or what
kind of bail to set for them. At least one of those algorithms, called COMPAS, regularly found black people riskier and white people less risky, even when everything else about the defendants' cases was comparable. And this kind of issue crops up in discussions about hiring software as well: because of a lack of care in their design, these programs that automatically sort through resumes have ranked applicants who have woman-sounding or black-sounding names lower in consideration.
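The heart of ProPublica's COMPAS analysis can be sketched as comparing error rates across groups. The field names and numbers below are illustrative stand-ins, not their actual data or code.

```python
# Sketch of the kind of check ProPublica ran on COMPAS scores: compare
# false positive rates (labeled "high risk" but did not reoffend) across groups.
# The record structure and numbers here are illustrative, not real data.
def false_positive_rate(records):
    """records: list of dicts with 'predicted_high_risk' and 'reoffended' booleans."""
    negatives = [r for r in records if not r["reoffended"]]
    if not negatives:
        return 0.0
    return sum(r["predicted_high_risk"] for r in negatives) / len(negatives)

def fpr_by_group(records):
    groups = {r["group"] for r in records}
    return {g: false_positive_rate([r for r in records if r["group"] == g]) for g in groups}

# ProPublica reported roughly this shape of disparity:
# {'black': 0.45, 'white': 0.23}  -- similar cases, very different error rates.
```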
And then there's Google's search algorithms. From twenty fifteen through twenty eighteen there were all of these headlines: reverse image searches using photos of black people were returning images of gorillas, because no one had taught the system how to consider dark skin tones in people. Or, if you searched the seemingly innocuous term black girls, the first page of results, out
of trillions of web-indexed pages, included porn. Or there was research into the search history of the white guy who killed nine black people in a church in Charleston, South Carolina. It's highly likely that he was radicalized at least in part due to the way that Google search works, taking into account location and demographics about him and other searchers in his area, with Google search returning white supremacist propaganda when he searched the term black on
white crime. By the way, related to this, you can make Google give you less personalized search results, and I am looking into that and doing it as soon as I finish recording this, because it is continually infuriating to me, just when I'm doing my reading for podcast episodes like this and trying to find stuff that isn't, like, a restaurant in my area. If I'm
talking about a larger concern, anyway. Also, remember how I was talking about the flaws in the programming of digital cameras and how they sometimes have trouble discerning specific features of people of color? You know, extrapolate that out to how facial recognition software is used with surveillance footage
by police departments. The software is more likely to make a mistake in identifying a black person's face, because the software just isn't as good at seeing that face, because of how it was programmed, which can lead to false identifications and thus wrongful harassments and arrests. Georgetown University put out a whole report on this in twenty sixteen, called
The Perpetual Line-Up. That report found that half of Americans have photos in police facial recognition databases, by the way, which includes just lots of people with no criminal backgrounds. And of course, even if we fix those algorithms, that isn't going to fix the fact that communities of color are subject to more surveillance in the first place. More recently, there have been headlines and a whole discussion about this new rule that was issued in September of twenty twenty by
the U.S. Department of Housing and Urban Development. This rule essentially makes it super difficult for banks or landlords or homeowners insurance companies to be sued for denying housing to people of color if an algorithm was used to make that determination, based on the concept that algorithms cannot be racist. The rule was immediately challenged, and in January, President Biden issued an executive order directing the Department of Housing and Urban Development to examine the
rule's effects. But, hoo. And you know, it's not easy. None of this is easy. In order to build a non-biased artificial intelligence system, we, I mean like humans, not like, you know, you and me specifically, dear listener, we need to change the
systems that lead to the building of artificial intelligence. We need to examine how design and programming are taught, how companies conduct their business, how policy is written, and who has access to seats at all of those tables. I mean, it's also not technically easy. Like, when you build and train these systems, just adding more diverse data isn't magically going to make the system create better, less biased rules; it might create conflicting rules.
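Here's one toy illustration of how "conflicting rules" can happen: when two groups have different base rates, even a perfectly accurate classifier can't satisfy every common fairness definition at once. The numbers are made up purely for illustration.

```python
# Toy illustration: with different base rates, even a perfect classifier
# cannot satisfy every fairness definition simultaneously. Numbers are made up.
group_a = {"size": 100, "actually_qualified": 50}
group_b = {"size": 100, "actually_qualified": 20}

# A perfectly accurate classifier selects exactly the qualified people:
selection_rate_a = group_a["actually_qualified"] / group_a["size"]   # 0.50
selection_rate_b = group_b["actually_qualified"] / group_b["size"]   # 0.20

# Error rates are equal (zero for both groups), but selection rates are not,
# so "equal error rates" and "equal selection rates" (demographic parity)
# cannot both hold here without deliberately introducing errors somewhere.
print(selection_rate_a, selection_rate_b)
```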
You know, it is expensive in terms of time and effort and just pure physical energy to do this work. The pitfalls of not doing this work are tremendous. You know, it can cause measurable hurt in people's lives. And as one Dr. Deb Chachra, materials science engineer, has said, any sufficiently advanced neglect is indistinguishable from malice. But the benefits
to doing this work are equally measurable and tremendous. One consideration moving into the future is how to square the very concept of ethics with an increasingly multicultural digital world. You know, not everyone on the planet grew up with European philosophy, going back to the ancient Greeks, as the basis of their ethical conception. And we also have to acknowledge that, you know, whatever a culture's philosophical basis,
it's probably rooted in some biases of its own. Just for example, just throwing it out there: if your society has coded emotions as feminine and feminine as bad, then you're probably not giving emotional harm as much weight,
if any weight, as physical harm in your considerations of justice. And you can see the effects of this in things like the care that we give our veterans with physical injuries versus veterans with PTSD, or just the general ways that our society handles mental health versus physical health,
or any kind of neurodivergence. All of this work in artificial intelligence, and the ethics thereof, is really requiring us to redefine intelligence, to fully consider what we mean by human intelligence, what logic and emotion and experience go into that, and the ways in which machine intelligence might differ. The Stanford Encyclopedia of Philosophy references Minsky's book The Society of Mind, saying we do not wish to restrict intelligence
to what would require intelligence if done by humans. And of course that's true. AI can do stuff that we can't; that's arguably the whole point. But it all has to be done with the best of what human intelligence can be in mind. And I'm just now realizing that, like, I might have written a Forward Thinking episode instead of a TechStuff episode.
But that's what I've got for you today. So if you have enjoyed this episode and would like to hear more from me, you can find me on podcasts like BrainStuff, which is a daily short-form general science and culture show, or Savor, which is a food science and history show, or American Shadows, which is produced with Aaron Mahnke's company Grim & Mild. It's a show about some of the darker bits of American history and ways in which even those dire
situations had light brought to them. I would like to give a quick shout-out to my friend Damien Patrick Williams. He works in this field and has made me more familiar with a lot of the concepts that I talked about today. You can find lots more from him at afutureworththinkingabout.com. This podcast is produced by Tari Harrison, as ever. Thanks to her for being so kind and accommodating and
helping me with this episode. The executive producer is Jonathan Strickland, and thanks to him for trusting me with his podcast for a day. Thanks to you for listening, and he'll talk to you again really soon. Yeah. TechStuff is an I Heart Radio production. For more podcasts from I Heart Radio, visit the I Heart Radio app, Apple Podcasts, or wherever you listen to your favorite shows.