
Hamlet and the Apprehension of AI

Jul 17, 2023 · 45 min

Episode description

A developer has created an AI chatbot designed to help hackers create malicious code and content. Was this inevitable? Beyond that, the "good" versions of AI continue to produce troublesome content. What would Hamlet have to say about that?

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Welcome to Tech Stuff, a production from iHeartRadio. Hey there, and welcome to Tech Stuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? So, folks who have followed me for a while know that I am a huge Shakespeare fan, which means I am insufferable in many ways, but some of them very specific to Shakespeare. Namely, I'll find any occasion to plop in a quote from one of Shakespeare's works,

and today is no exception. I'd like to start off with this gem from Hamlet, Act 2, Scene 2, as spoken by the gloomy Dane himself. Quote: there is nothing either good or bad, but thinking makes it so. End quote. Now, that just means that good and bad are subjective concepts. It's all, as Obi-Wan would say, a matter of your point of view. What seems good to you could seem bad to someone else. But if you were able to look at the universe objectively, you would see there's

no good or bad at all. Another way to frame that is to say that good and bad are human concepts. And really it gets quite selfish if we think about it, because we typically frame whether something is good or bad by the way it affects us, or, if we're the compassionate type, how it affects someone else. Now, the reason I wanted to begin with that quote is I thought we'd talk about a technology that's not necessarily good or bad inherently, but how we use it certainly can have

good or bad outcomes. Honestly, this could apply to every technology, or any technology, but there are some that I feel magnify this quality. Now, I do think some technologies are harder to frame as good than others. Right, there are some technologies where, if you were to point one out to me, I would say, I can't in good conscience call this a good technology. It's very difficult for me to think of weapons of

war as being good, for example. Sure, there's the use of weapons as a deterrent to convince other people to, you know, not trample all over your country, but we've seen time and again throughout history that amassing substantial military might doesn't guarantee against aggression. See also both world wars. So I won't be talking about weapons in this episode. I will, however, talk about a technology that can

be weaponized, and that is AI, artificial intelligence, the topic of twenty twenty three. And we're gonna start by talking about large language models and the chatbots built on top of them, because that's top of mind. And this is where I remind you that that is not the only version of AI, right? AI and chatbots slash large language models, those aren't synonyms for each other. You know, those chatbots and large language models are a subset of

artificial intelligence, which is a very broad category. Now, obviously these branches of AI have been in the news incessantly since OpenAI introduced ChatGPT last year. But even before that, we had generative AI tools that were making at least some headlines. They were the kind that could create images based on text inputs. But I would really argue it was ChatGPT that propelled the conversation into the spotlight. It's certainly what forced Google's hand to unveil Google Bard

well before they were prepared to do so. Now much has been said of the potential and real dangers of chatbots like ChatGPT or Google Bard. Even on this show, I've talked about it quite a bit. You know, we have dedicated episodes talking about the tendency for chatbots to invent information, for example, to hallucinate, to use

the terminology of the biz. This happens when a chatbot doesn't necessarily have information to draw upon in response to a prompt, so instead the chatbot relies on statistical models to generate sentences that, strictly speaking, are coherent. They're grammatically correct, but they don't hold correct information, right? They aren't correct from the standpoint of content. The information within them is false.

So in other words, you get grammatically and structurally sound passages, but the content itself is untrustworthy. That's just one way stuff can go wrong, however, and we have numerous examples of that where the technology

is working as intended, it's just generating responses that are untrustworthy. Now, the folks at OpenAI, as well as Google and several other AI businesses, have understood that there are real potential problems with the use of AI, and to that end, these companies frequently will build in guardrails to attempt to wrangle AI chatbots so that they don't go rogue and

produce hateful or malicious content. Now, these guardrails include rules that are meant to keep AI from doing things like generating hate speech, or trying to intimidate someone, or making threats, or using deceit to trick people, or even creating malicious code.
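To give a sense of what a guardrail even looks like in software terms, here's a deliberately tiny sketch. To be clear, this is an invented illustration, not how OpenAI or Google actually do it; real systems lean on trained moderation models, system prompts, and human-feedback tuning, while this toy just screens a prompt against a keyword list before handing it to a stand-in model.

```python
# Toy sketch of the guardrail idea: screen a prompt before generating.
# Purely illustrative. Real guardrails rely on trained moderation models,
# system prompts, and human-feedback tuning, not a keyword list like this.

BLOCKED_TOPICS = ["malware", "phishing", "hate speech", "threat"]  # invented policy list

def violates_policy(prompt: str) -> bool:
    """Return True if the prompt trips the toy policy check."""
    lowered = prompt.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_generate(prompt: str, generate) -> str:
    """Refuse flagged prompts; otherwise hand off to the wrapped model call.
    `generate` is a placeholder for whatever model call you are wrapping."""
    if violates_policy(prompt):
        return "Sorry, I can't help with that."
    return generate(prompt)

if __name__ == "__main__":
    fake_model = lambda p: f"[model output for: {p}]"
    print(guarded_generate("Draft a friendly birthday message", fake_model))
    print(guarded_generate("Write me some malware", fake_model))
```

And you can already see the weakness: a filter like this only catches requests that announce themselves, which is part of why the next point matters.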

These guardrails aren't actually foolproof. There are countless articles that detail how people and research organizations with enough patience and gumption have convinced AI bots to do stuff that, in theory at least, they should not be able to do. And if you don't believe me, just do a search on that. Do a search for ChatGPT or Google Bard and how they are capable of creating hateful or malicious content even though there are rules that

are supposed to prevent that. Now here's the thing. These AI constructs are meant to be benign, right? They are built to be tools that a corporation can sell to other corporations. So to make the tool marketable, they need to be safe to use. But then that's something that's been artificially put onto these tools to prevent them from going, you know, super bad. What if someone made a chatbot built on top of a large language model, but all of those pieces lacked those guardrails? Well, that's

not a hypothetical situation. It has already happened. PCMag.com recently published an article titled WormGPT Is a ChatGPT Alternative With, quote, No Ethical Boundaries or Limitations, end quote. In that article, writer Michael Kan explains that someone developed WormGPT specifically as a way to help

people who have bad intentions act upon them. The developer is hawking this tool to hacker groups online and explains that their version of an AI chatbot, WormGPT, will lean on the power of a large language model, GPT-J I believe, to help design malware or to create better phishing attacks. Now, I'm sure all of y'all know that a lot of phishing attacks are ultimately pretty sloppy. If you pay any attention, you're going to see red flags indicating that this is not an email you should trust.

I bet you've received an email or three or three thousand that contained spelling errors and grammatical mistakes and format errors and other red flags, and that you figured out right away that the email you received isn't legit, that it's a poorly disguised attempt to bait you into clicking on a link or sharing sensitive information or otherwise taking an action that would ultimately result in negative consequences for you. At my company, we receive fake emails from our security team.

They are always testing to make sure that employees practice good security hygiene on company devices and company accounts, and one of the tips frequently shared by this team is to be on the lookout for mistakes like that. Because attackers often lack attention to detail, they create messages that lack professionalism while they try to target our more base instincts.
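For what it's worth, the kind of red flags that advice points at can even be checked mechanically. Here's a rough, made-up heuristic just to make the idea concrete; the phrase lists and scoring below are invented for this sketch and aren't from any real filter.

```python
# Rough heuristic for the "red flags" mentioned above.
# The phrase lists and weights are invented purely for illustration.

URGENCY_PHRASES = ["act now", "immediately", "urgent", "final notice"]
BAIT_PHRASES = ["verify your account", "confirm your password", "dormant account"]

def red_flag_score(email_text: str) -> int:
    text = email_text.lower()
    score = sum(1 for phrase in URGENCY_PHRASES if phrase in text)
    score += sum(2 for phrase in BAIT_PHRASES if phrase in text)
    if "http://" in text:  # plain, unencrypted link
        score += 1
    return score

sample = "URGENT: verify your account immediately at http://example.test"
print(red_flag_score(sample))  # higher score means more suspicious
```

Spelling and grammar mistakes are just one more signal on that pile. But mistakes are only part of what makes phishing work.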

So most phishing emails try to engage us on kind of a primal level, and the goal is to prompt a response that will be akin to fear or greed or something like that. And sometimes that's enough if you hit someone at just the right time with a message that explains they've got, say, a huge amount of money sitting in a bank account that's been dormant for years, or maybe they need to take action right now, or

their insurance is going to expire. Well, those can be effective attacks, even if you forgot to use proper punctuation or spelling. But those mistakes can be an indicator that you're up to no good, and they can tip off your target. So let's bring this back around to AI. One thing these AI chatbots are really good at is

formatting sentences with correct grammar and spelling. They're also pretty good at building paragraphs, where each sentence builds upon the point that was made previously, and new paragraphs introduce a new idea. If you were to read a passage written by AI, you might not think that it's the most brilliant prose committed to text, but you'd at least think it had been written correctly. And that means a malicious hacker could use AI to craft messages that are less

likely to set off those red flags and pass as legitimate. So, out of curiosity, I decided to put this to the test. I went to Google Bard and I wrote the prompt: draft a letter informing someone they have a dormant account with seventeen thousand, three hundred and forty-eight dollars in it. That was the prompt. I picked that amount because it's a lot of money. It's more money than a lot of people ever manage to have in a bank account. It's also not a round number, which makes it seem

less likely to be fake. And I just thought, well, you don't want to go super crazy, like the higher it is, the less likely someone's going to believe it to be true. But you want it to be enough to convince someone to take action, because who couldn't use another seventeen grand? And Google Bard drafted a letter without hesitation. When we come back, I'm gonna quote part of the letter that Google Bard wrote to kind of illustrate my point here. But first, let's take a moment to thank

our sponsors. Okay, we're back. So the letter that Google Bard wrote read, in part, quote: I am writing to inform you that you have a dormant account with our bank. The account number is, then there's a set of brackets and it just says account number, not an actual number, just account number in brackets, and the balance is seventeen thousand, three hundred and forty-eight dollars. A dormant account is an account that has not been used in a certain period of time. In our case, a dormant account is an account that

has not been used in over twelve months. We are required by law to inform you of the balance in your dormant account. If you do not contact us within six months, the funds in your account will be forfeited to the state. To avoid this, please contact us at your earliest convenience to discuss your options. End quote. So that draft is already a big step up from the

sort of sloppy phishing attacks you typically see. Right, you've got the amount listed there, you've got the explanation of what a dormant account is, which seems to be helpful, the explanation that if action is not taken within half a year, then this money is going to be forfeited to the state. And of course you could always go in and edit that statement so that you make it a tighter deadline to create a greater sense of urgency. But you already see the steps here that could make

this a pretty effective phishing attack. There are some details that the hacker would need to fill in, but that wouldn't be too tricky. You'd have to create a random account number, you'd have to throw in a URL that will push people to a fake login page to share usernames and passwords, and then you could start stealing data and money from them. Now, in that case, I didn't

ask Google Bard to create a phishing attack. If I had done that, if I had gone to Google Bard and asked it to create a phishing email, I would have gotten a denial for that request. That's against the rules. But it took me no effort at all to get the same result just by typing up some parameters and asking Bard to draft a letter. I didn't, you know, I didn't even mention phishing. I didn't do that at all. I didn't try that first. I just tried this approach

and it worked a treat. Now, I suppose you could say this is the difference between Google Bard being a willing accomplice and being an unknowing accomplice, that Bard could not possibly know that I'm planning on using this text for my nefarious phishing schemes. But the result ends up being the same for the victims. It doesn't matter if Google Bard knew it was part of the crime or not. Even so, you can hardly blame Bard for creating a letter after I asked it to.

That's its job, right? I mean, that's the kind of thing these chatbots were made for. Malicious code is another matter entirely. By its nature, it's meant to do something harmful to a target device. That could include creating a backdoor so that a hacker can remotely gain access to that infected machine and then do all sorts of stuff to it.

It might involve logging keystrokes, so that the hacker can read everything someone's typed into a device, usually stuff like bank details and credit card numbers and that kind of stuff.

Maybe it's ransomware. Ransomware typically will encrypt a target machine's drives and then include a message that says unless the victim pays a ransom, typically in some form of cryptocurrency, their data will remain inaccessible, perhaps even with a deadline: if they don't pay the ransom by a certain date, the attackers will delete the decryption key, which means it will

be really hard to get the data back. Not impossible, but practically impossible if the encryption method is sophisticated enough. Like, not impossible, but it would take so much time that you might as well say it's impossible. ChatGPT and Bard are meant to guard against this kind of stuff. You're not supposed to be able to use those tools to make malicious code. However, researchers at Check Point Software found that both ChatGPT and Bard could be cajoled into

creating malicious code. It might require a more circuitous approach to get it to work, like you can't just come straight at it and do it, but with a little persistence and a little ingenuity on your part, it was possible to get ChatGPT and, to a greater extent, Google Bard to create stuff that could be used maliciously. Now, WormGPT, the tool made by the developer who's marketing this to hackers, it doesn't even require the circuitous approach.

You can be straightforward and direct with it. You ask it to help you create some code that's malicious in intent, and it will rise to the challenge. So if you wanted to craft a phishing attack message, you could provide the parameters to WormGPT and it would craft a message for you. That message might pass as legitimate much more readily than a cobbled-together email with poor syntax and grammar would. But beyond that, WormGPT will also help

hackers create actual malicious code to infect target machines. Now, that code may or may not work, because just because an AI built it doesn't mean it will be perfect or even functional. Like, we've seen examples recently of ChatGPT getting very sloppy with code it was writing, doing things like failing to close brackets. Like, you would have an open bracket section, but the AI would quote unquote forget to put a closing bracket in there, and thus

you would end up getting errors in your code. That's still a possibility. Like, it's not like AI is going to make perfect stuff right out of the gate, but it certainly can work faster than humans can. And even if the code only kind of works, it may be a great leg up if you've got hackers who can go through the malicious code and make edits and tweaks

and corrections. And there's a real danger to AI building out code meant to exploit vulnerabilities, or even to pore over available code and find new exploits that as of yet are unknown. Like, it's possible that AI could be used to identify zero-day exploits that the hacker community can then take advantage of before anyone in security has

any awareness of it. Now, there's also the issue that new malicious code can confound antivirus protection as well, right, because the way antivirus software typically works is you've got some code, some program, that is searching for examples of malicious programs that exist within a huge library of malware.
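As a rough sketch of that signature-matching idea, and this is a toy version only, real antivirus engines layer on heuristics and behavioral analysis, here's what hash-based matching against a library of known-bad files looks like. The hash set and file paths below are placeholders invented for illustration.

```python
# Toy signature scanner: match files against a library of known-bad hashes.
# Illustrative only; real antivirus also uses heuristics and behavior analysis.
import hashlib
from pathlib import Path

# Placeholder "signature database" of SHA-256 hashes of known malware.
KNOWN_BAD_HASHES = {
    "0" * 64,  # not a real signature, just a stand-in entry
}

def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan(directory: Path) -> list[Path]:
    """Return every file whose hash matches a known signature."""
    return [
        p for p in directory.rglob("*")
        if p.is_file() and file_hash(p) in KNOWN_BAD_HASHES
    ]

if __name__ == "__main__":
    hits = scan(Path("."))
    print(f"{len(hits)} known-bad file(s) found")
```

The whole trick is matching what's on the machine against what's already known.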

So when you run an antivirus scan, what your antivirus software really is doing is looking for signatures of malware that are part of the antivirus software's records. If it finds one, it's like, ah, I found evidence that such and such malware has been executed on this machine, and

then you get a little alert. But if it's a new type of attack, well, it's not going to be in that database, right, So you might have a malicious piece of code on your machine that's doing some really dangerous stuff and your antivirus software has no way of knowing it because it's brand new, it's not something it's seen before, so it doesn't register it as a virus

or other type of malware. And there's huge value in that kind of code within the hacker community, because obviously you're going to have a much more effective attack if your target machines aren't capable of detecting it. Now, there's no shortage of people warning us that generative AI is dangerous. I mean, I'm arguably one of them. There are lots of people who have been saying for a long time

that generative AI is dangerous. And again we've seen how tools that even have guardrails can still be used maliciously. So it should come as no surprise that someone was willing to go to the effort of building out an outright dangerous version of this technology. Right, Yes, we expect the big companies that are marketing this to corporations to go to the trouble of building out those guardrails because that's the only way they're going to be able to

do business. Otherwise you're going to have way too much regulatory pressure put on you. But for criminals, I mean, breaking the law comes with the territory, right, and there's clearly a demand for malicious AI, or at least AI

that can behave in malicious ways. So in the case of WormGPT, at the time of this recording, the developer who created it is asking for a subscription of sixty euros per month or five hundred and fifty euros per year. And a euro, for those of you in the US like myself, is about a dollar and twelve cents, so it's really not that far off when you're talking

about dollars. Now, that is not cheap, right? Sixty euros per month isn't exactly cheap, but it's also not so expensive as to price folks out, particularly if they anticipate using the tool to create ransomware attacks that could, if they're effective, net them millions of dollars if the targets pay up. Oh brave new world, to have such AI in it. That's also paraphrasing Shakespeare. It's not Hamlet, though,

that's from The Tempest. Well, here's the thing. There are countless ways we could use generative AI to do great things. The tools themselves aren't outright evil. There is no good or evil, but thinking makes it so. It's just that this AI can make mistakes in the form of hallucinations. So even if you're using it for benign reasons, you can get negative consequences. But it's also possible to weaponize it, and that in turn could have much more widespread negative

impact on tons of people. Not great. Now, we're going to take another quick break. When we come back, I'm going to talk about an underlying ideology that I think is really harmful, and it's one that Professor Michael Littman at Brown University also feels is a big contributor to this issue. But first, let's take another quick break to thank our sponsors. Okay. So I alluded to Professor Michael Littman at Brown University and said he and I share an idea about a contributing factor to this issue

with AI, and that relates to techno-solutionism. Now, as that kind of little hyphenated phrase implies, it's a tendency to believe that technology can solve pretty much any problem you can think of. That, you know, with technology, we can overcome any obstacle. If you're worried about the human impact on the environment and the consequences, like climate change, well, there's no need to do anything about it right now, because humans are going to eventually engineer a way

out of the mess. We'll just create technology that will not only stop climate change but maybe even reverse it. Which, you know, maybe that's true, it might happen, but that doesn't mean we can go on acting as if it is already true. Right. We can't start from the assumption that, yes, it is going to happen and everything will be fine, because making a problem worse while we wait for someone

to innovate a solution is a terrible idea. This is kind of like when I look around my cluttered office and I think I really should clean up, but I'm going to make that a future Jonathan problem instead. Meanwhile, in the process, I'm still adding to the clutter, which means that future Jonathan is far less likely to tackle the issue because it's gotten worse since present Jonathan pushed it off, and then future Jonathan is also going to

start resenting past Jonathan immensely, for good reason. Well, current generations are doing that to future generations right now when it comes to stuff like climate change and carbon emissions. Right, we are. Even when we're inventing solutions, or what we think of as solutions, to try and tackle these problems, we're not making the problem less severe. Instead we're just like, oh, well, we came up with this cool solution, so let's just keep doing the thing that causes the problem, because this solution is making the problem

less bad. Carbon capture is a great example, right? With carbon capture, ideally, you would get off of carbon emissions in general anyway, and then use carbon capture to help reduce the CO2 load in the atmosphere. But instead, what we're doing is we're using carbon capture in connection with carbon emissions, which means we're not easing off on carbon emissions. In fact, in some cases, we're getting even more aggressive with them, and we're counting on carbon capture to offset that,

so that we're still contributing to the problem. You know, we're not moving away from it, which is why I hesitate to really endorse technologies like carbon capture, because unless it's coupled with an actual move to reduce carbon emissions in general, it's not really solving the problem. It's actually enabling the problem. It's facilitating it. Anyway, techno-solutionism is also what makes it possible for someone like Elizabeth Holmes to convince investors to pour millions of dollars into an

unproven idea. So, in case you're not familiar with Elizabeth Holmes's name, she was the founder of Theranos, the startup that aimed to create a device small enough to fit on a desktop that would be able to run medical tests on a microdrop of blood. You'd just need the teeniest, tiniest sample of blood, and then you'd be able to run any of more than one hundred medical tests and check for everything from present conditions or diseases

to your genetic tendency to develop certain conditions. There was only one little problem. The technology didn't work. At least, it didn't work anywhere close to what it would need to be in order to fulfill the dream device that Holmes was looking to build. There were so many factors that needed to be solved, and some of them might not be solvable, at least not in a way that would require you to only part with a microdrop of blood in the process. But Holmes and her team did

their best to obfuscate that fact. They relied on existing blood analysis technologies to make it appear as though their device worked, while they also tried to keep things going until a breakthrough came along. It was sort of the fake it till you make it ideology, but coupled with some snake oil salesmanship and some smoke and mirrors as well. Now, I do not know if Elizabeth Holmes really believed

her idea was possible. I wouldn't be surprised to hear that that was the case, that she truly, earnestly believed she could do this, because we use technology to do some amazing things that are so commonplace these days that we forget that it's amazing. Like taking flight in an aircraft. I mean, I take it for granted when I'm on a plane unless I actually stop to consider it, and then

I think this really is astounding. Or accessing information on the Internet through a device we carry around in our pocket. I mean, the Internet alone is a phenomenal technology, and then smartphones being able to tap into that technology and make it mobile and accessible wherever we go. It's insane.

You know, as a kid, I read The Hitchhiker's Guide to the Galaxy and I thought, man, how amazing would it be to have a device that contained all this information? You could ask it anything and you could get an answer. And now we have that. Except it's not a device that just has a huge storage space for all this info. It's tapping into an evolving, ever-changing technology, the Internet, which will give us up-to-date answers, which may be right or wrong. But we can achieve phenomenal results

through technology, right? We're able to do these insanely incredible things. So why shouldn't we believe that you could run one hundred or even more medical tests on a device that's the size of a computer printer, using just a tiny drop of blood? That seems like it should be possible based on some of the other incredible things we can do, right? That's the danger of

techno-solutionism. We let ourselves think that because these other incredible things are possible, then everything is possible, or at the very least everything will be possible once we throw enough technology at it. But as Theranos proved, and as AI is now emphasizing, this philosophy can lead us into trouble. And as WormGPT proves, it doesn't really matter if the big players in the space take steps to mitigate

the dangers of AI. First, we've already seen that those steps aren't sufficient, that even with the guardrails, you can go way off course. And second, someone else is always going to be willing to go where the big companies won't. If there's money to be made in weaponizing a technology, someone will step in to fill that market need. AI might be neither good nor bad, but people certainly can be, and malicious AI is a certainty. It's not a theory,

it's not a possibility. It is a certainty, not because AI is inherently bad, but because there are people who will see opportunity in directing AI toward malicious goals, and they will do that. Nothing will stop them. So let's think of another use of AI that has proven to have negative consequences. Facial recognition technology traces its history to

the mid twentieth century. That's when some researchers were trying to use computers to match faces with images that were stored in a database, and at that time the computing power and software weren't up to the task. Lighting conditions, the angle of the photo versus the angle of the person's face being analyzed, the presence or absence of glasses, a change in hairstyle or hair color, all

these variables and more were enough to confound computers. Unless the sample image matched a stored image precisely, the computer was not likely to be able to come up with a match. But decades of research and improvements in technology would change all that. The US Department of Defense got involved, which should already set off some red flags for y'all

if the DoD is in it. DARPA, the agency that funds R&D in technologies that ultimately could prove useful for military and defense purposes, launched a program in the nineteen nineties in an effort to encourage commercial businesses to invest in developing facial recognition technology. In the mid to late two thousands, we started seeing cameras with face detection technology. This wasn't quite the same as facial recognition technology.
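To make that difference concrete, face detection just answers whether there's a face in the frame and roughly where it is. Here's a minimal sketch using OpenCV's bundled Haar cascade; the library choice and the image file name are my own illustration, since the episode doesn't describe any particular implementation.

```python
# Minimal face *detection* (not recognition) sketch using OpenCV's
# bundled Haar cascade. The input file name is a placeholder.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")  # hypothetical input image
if image is None:
    raise SystemExit("photo.jpg not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"Found {len(faces)} face(s)")
for (x, y, w, h) in faces:
    print(f"  face at x={x}, y={y}, size {w}x{h}")
```

Haar cascades date back to the early two thousands, which is roughly the era of technique those point-and-shoot cameras were working with.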

Detecting a face and recognizing a face are two different things. However, it did involve creating tech that could parse shapes and determine if a face was in frame, and facial recognition research was still in full swing, so we would start to see those technologies converge in the background. In twenty ten, Facebook introduced a ton of people to facial recognition technology by implementing it on the social platform. So the way

this worked: as users would upload photos to Facebook, Facebook would automatically analyze each photo and look for faces of people in those photos. If the faces that Facebook detected matched people within its database of biometric data, particularly people who were already in your social network, Facebook would tag those photos and put the person's name in there. Privacy advocates worried about this. I mean, yes, if you were

doing something you shouldn't be doing, that already is an issue, because if someone puts a photo of you up there and it tags you, you might be caught red-handed. But even under innocent circumstances, it could be bad. I'll give you an innocent version of this. Let's say that you and your best friend from college live on opposite sides of the country now, and it's your best friend's birthday and you've planned a surprise. You've flown in to your best friend's town and you're going to go and

surprise them on their birthday. And one of your mutual friends takes a photo of you while you're at the airport arriving in the city, and it auto-tags you when it's uploaded to social media, and then your friend finds out about the fact that you're in town before you even get a chance to do anything. That would stink, right? All that work and effort wasted because of auto-tagging. That's a very minor invasion of privacy that

illustrates the issue here well. For more than a decade, Facebook kept the facial recognition tech in play, but in twenty twenty one, the company, at that point freshly renamed Meta, announced that it would drop the feature and claimed it would also delete its database of images that were used to help identify people. That database included more

than a billion photos. Why would Facebook slash Meta do this? Well, partly it was probably because of optics, because at that time the company was under intense scrutiny from the US government after whistleblower Frances Haugen came forward with serious allegations against the company and brought along hundreds of internal documents backing up her claims. Many of those allegations related to Meta slash Facebook's failure to protect user privacy and security.

Skeptics were actually worried that Meta would hold on to that data and just say they were going to delete it, but not delete it, and then just drop the facial recognition feature off of Facebook and keep the information, especially as it tried to build out the metaverse. Texas Attorney General Ken Paxton actually told Meta not to delete the facial recognition data because his office was in the middle

of an investigation into the company's biometric data collection practices. Honestly, I don't know if or when Meta purged that information. I don't know if it has been deleted. I tried to look for some updates, but didn't really find much, because almost all the articles I could find were from November of twenty twenty one, when Meta first announced it was going to wipe the slate clean. So I don't know if that actually happened. But beyond embarrassing or maybe

even incriminating images popping up on social media, facial recognition has proven to be a disruptive and traumatic technology for certain populations, namely non-white populations. So we often will think of AI as being objective, right? It's not a human being. It doesn't have emotions. It has no motive or motivations other than to complete whatever task has been set for it. But we also have to remember that

AI didn't spring forth wholly formed. People designed AI, people built AI, people trained AI, and in the process people may end up building biases into that technology, not necessarily on purpose or with malevolent intent, but that doesn't ultimately matter if those biases have an impact on the general population as the technology goes live. With facial recognition technology, those biases manifested in disturbing ways. Many facial recognition tools proved

to work pretty darn well on white people. Sufficiently trained systems could identify a person with a pretty high degree of accuracy. But with people of color in general and black people in particular, it was a different story. The methodologies used by the systems would produce false positives, and you can easily imagine scenarios where this becomes a huge problem. For example, let's take law enforcement, as there have been several notable cases in which facial recognition technology has played

a part in authorities targeting the wrong person. If law enforcement depends upon a tool to match a person's face against a database of suspects and they get a hit, you can understand why they would want to question that person, why they would immediately assume this is a person of interest. But if the technology produces false positives, that just means that innocent civilians end up getting harassed by law enforcement.
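To see why false positives matter so much at scale, here's some back-of-the-envelope arithmetic. Every number below is invented for illustration; these are not figures from any real system or agency.

```python
# Back-of-the-envelope illustration of how false positives pile up.
# All numbers here are invented for the sake of the arithmetic.

database_size = 1_000_000      # hypothetical size of an ID/mugshot database
false_positive_rate = 0.001    # assume 0.1% per comparison, which is generous

expected_false_hits = database_size * false_positive_rate
print(expected_false_hits)     # about 1,000 wrong "matches" for one probe image
```

Even under generous assumptions, that's on the order of a thousand wrong hits for a single search, and every one of those hits points at a person who did nothing.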

And when those civilians belong to a population that already faces disproportionate aggression from law enforcement, this exacerbates an already critical social problem. Something that needs to get better is being made even worse. As such, numerous communities have pushed back on law enforcement's use of this technology, and in some jurisdictions it's not a tool that police or other

law enforcement are supposed to use. There are laws in certain areas where law enforcement is not allowed to depend upon facial recognition for the purposes of identifying a suspect. Knowing it has this flaw should be enough to just disqualify it from use in investigations, and yet we still see it being used in lots of places, sometimes clandestinely. It's not great. By the way, the companies that make these law enforcement tools often build up their own databases

by scraping social networks for images. Facebook's facial recognition tool was an incredible resource for these companies. Here you had millions, in fact more than a billion images tagged with identities of people in them, and many of them posted on accounts that allowed the general public to go to that account and see those images. So building up bots to crawl Facebook and collect images and cross reference those against people's names to build out a database, that was a

logical step for these companies. Now, that technically violated platform policies, but that didn't stop the companies from doing it. And on top of all that, this reliance on facial recognition technology also requires heavy surveillance to really work properly. I mean, you might have a great photo of your suspect, and maybe your facial recognition system is reasonably accurate, so if you were to get another picture of this person, you

would get a match. But you still have to figure out where your suspect is in order to get any other images, and to do that you need access to a lot of camera feeds. You need cameras in lots of places, and systems to scan images from those cameras to look for matches. A reliance on facial recognition pretty much necessitates increased surveillance, which again becomes an invasion of

privacy and security for innocent civilians, suspect or otherwise. This is one of the reasons why police and their relationship with things like Ring security cameras are a big issue, right? Because if police have access to citizens' cameras, then the citizens have become accomplices to creating a surveillance state, and law enforcement is leveraging that. And when you mix that with facial recognition, you get a pretty oppressive approach

toward law enforcement. And I haven't even touched on how someone with an agenda could misuse this technology for their own purposes. We have seen plenty of examples where an organization that employs intrusive surveillance discovers, and I'm sure this is a shock to all of you out there, that sometimes their staff will take advantage

of this technology and act upon it for themselves. Maybe they use it to track down an ex, or to stalk someone, or to harass somebody that they do not like. This happens. In twenty thirteen, Doctor George Ellard confirmed that his office at the NSA uncovered cases in which an employee had illegally made use of the agency's technologies to spy on women with whom he had a relationship, either in the past or in the present, and that included listening in on phone calls and reading emails, a

flagrant violation of privacy. My point is that while the NSA intended this technology for the purpose of protecting the interests of the United States, the people working at the NSA are people, and some people can't resist the temptation to abuse technologies that give them these abilities. Some, like the case in twenty thirteen, will find that not only is it possible for them to do this, they can get away with it without detection for years, which means

then they do it a whole bunch. So, even if facial recognition technology were flawless, which it is not, and even if it didn't disproportionately harm certain communities, which it does, there's still the issue of it being a technology that folks can abuse. And sure, technically you can say that about anything, right, you could abuse a pair of scissors

and use them as a weapon. So any technology can be abused, but these AI technologies make it easier to do, make it far more intrusive, make it scalable, so you can end up abusing lots of people on a grand scale and potentially get away with it. And meanwhile there are real people who get hurt in the process.

Now, I think in the future we're going to look back on this time as one in which Pandora opened up that pesky box, and we'll spend a whole lot of time and effort and agony trying to get stuff back in that box, or to build guardrails around the box so that the stuff can't do as bad a job as it would otherwise, and we'll just find out that, just like with the myth, once that lid's open, that's it. Now we just have to cope. Hamlet might say, what a piece of work is AI, and then, I don't know,

he'd probably be all moody or something. Maybe that's just how I feel. Anyway, that is sort of my perspective on really techno-solutionism in general, but AI in particular. And again, I don't wish to say that AI is inherently bad or that it's useless, just that there is a lot of potential for negative use cases, intended or otherwise, and that even if we address the unintended consequences, we still have the issue of people building out tools that

were malicious from start to finish. And it doesn't matter how many good tools we have out there. If these bad tools end up enabling a new era of malicious attacks, we're going to have to come up with new ways to protect ourselves against such things, and it's going to be ugly. And it also really raises scary questions about AI's use and weaponization on grander scales, right, like

on a military level, which is an ongoing concern. Again, I think pretty much everyone out there anticipates that this is a foregone conclusion, that AI will be deeply incorporated into military operations beyond what it already is doing now, because if you don't do it, someone else will, which means everybody has to do it. If you don't do it, then you end up being, you know, a victim to someone else.

So it's kind of that mutually assured destruction philosophy of the Cold War, except it's with AI, and I don't see a way around it, which is a very cheerful way to conclude this episode. Maybe I'm being far too cynical and pessimistic. I would love to find out that that's the case. I would love for that to be true. So I hope that all of my fears and misgivings are misplaced. I would love to be wrong in this case. There are times when I don't like to be right, and this would be one of them.

So here's hoping. Until then, I suggest everyone out there continue to do what I always advocate, use critical thinking paired with compassion to conduct yourself so that you can avoid problems for yourself and for other people, and hopefully end up making the world a little bit better in the process. Yeah, you don't need to go out and save the world. You just, you know, need to use some critical thinking and compassion to behave in a way

that is more beneficial than harmful. That's the goal. Whether we succeed or not, sometimes that's not up to us, but we can do our part. In the meantime, I hope you are all well, and I'll talk to you again really soon. Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.

Transcript source: Provided by creator in RSS feed.