Tech News: Commanding Chatbots to Make Malware

Jan 20, 2023 · 30 min

Episode description

Cybersecurity researchers discover that, with a little bullying, they could convince ChatGPT to make some nasty malware. Plus workers in Kenya suffered to make ChatGPT less toxic. Meta faces fines in the EU for WhatsApp. And Microsoft is gearing up for layoffs. Plus more.

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? It's time for the tech news for Thursday, January nineteenth, twenty twenty-three. And last year it seemed like the front half of every tech news episode was dominated by Elon Musk's on-again, off-again, and then totally on-again quest to buy Twitter.

But so far this year, the honor really goes to ChatGPT and OpenAI, which is the startup that's behind the controversial chatbot. So, first up today, Sam Altman, the CEO of OpenAI, gave an interview with StrictlyVC in which he addressed some rumors about the upcoming next generation of the GPT language model, a.k.a.

GPT-4. We're currently on GPT-3.5. Altman warned that people are bound to be disappointed in the new language model because the hype around it has grown to a point that's just impossible to live up to. And I think Altman in general has been pretty good about acknowledging this kind of stuff. A lot of folks, including myself, have pointed out how ChatGPT falls short in lots of ways, and I don't think Altman would disagree.

He has been pretty forthcoming about this sort of stuff, and he's not making claims that ChatGPT is flawless or even skilled at every kind of writing. So I guess it's unfair for folks like myself to say it can't even write a Shakespearean sonnet, even though it can't. I mean, maybe if I kept at it, maybe if I kept tweaking prompts, I could massage ChatGPT to a point where it could produce an actual Shakespearean-style sonnet. But anyway, Altman wouldn't commit to when

this next model will launch. He said it will launch when it's ready, which I also think is a pretty responsible take. And he admitted that there were concerns about how ChatGPT might be used by students to try and cheat on schoolwork. But he also pointed out that when calculators became a thing, we didn't just abandon teaching students how to do math. Instead, we kind of took calculators into account, and the way we taught it started to change. That's a kind

of interesting take. Now, honestly, I think ChatGPT could be useful for, say, English teachers, to show how a generated essay might fall short of actual real work done by a student who's taking the subject seriously. It could be useful to explain to students the difference between real

critical thinking and surface-level observation. And this way you wouldn't have to take an actual student's essay and then embarrass that student in front of the rest of the class and say, see, little Timmy over there totally doesn't understand the role that Falstaff plays in Henry IV, Part 2, just look at this terrible work, shame on you, little Timmy. Instead they could say, look,

here's this essay written by ChatGPT. Here are the places where the observations being made are pure surface level and they don't really say anything. This is the sort of stuff you need to avoid and think about when you make your own work. And I get that I'm putting a lot on teachers there, and that they're already overworked and undervalued, and that should change too, just throwing that out there. Now, one thing about OpenAI that has undeniably caused harm is the process by which

the company has tried to filter out objectionable material. So in the past, even with GPT, we've seen examples of how chatbots can spit out truly offensive material, stuff that ranges from racism and sexism and hate speech and calls for violence to descriptions of truly horrific acts that I can't even begin to describe on this show. And obviously, any company making a chatbot wants to avoid the chatbot creating these kinds of situations, even

if it's just because it's bad optics. Right, even if the reason is, oh, we don't want to do that because it'll hurt our investment, at least they still don't want to do it. Now, as you may know, ChatGPT generates responses to queries by referencing information from a massive database of scanned material from across the Internet, and there are complex rules at play that guide ChatGPT's responses, but

ultimately those responses depend heavily on the repository of scanned information. Now, the problem is, and I'm sure you've realized this, the web is home to some really terrible communities, ones where truly awful material is shared and sometimes celebrated, and ChatGPT cannot magically tell the difference between what is acceptable

and what is unacceptable. So it needs people to essentially tell the model what is and isn't right, and it has to be able to identify certain stuff as falling into categories of content that are forbidden and then filter anything

like that out of its responses.
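To make that concrete, here is a minimal sketch of the general idea, my own illustration rather than OpenAI's actual system: human annotators label passages, a small classifier learns the forbidden categories, and candidate responses get screened against it before they go out.

```python
# Toy moderation filter trained on human-labeled passages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-ins for the human-labeled training data; the real passages
# were the graphic material the annotators had to read and tag.
passages = [
    "i will find you and hurt you",        # annotator tagged: forbidden
    "what a lovely day for a walk",        # annotator tagged: ok
    "you people deserve to suffer",        # annotator tagged: forbidden
    "here is a recipe for banana bread",   # annotator tagged: ok
]
labels = ["forbidden", "ok", "forbidden", "ok"]

moderation_filter = make_pipeline(TfidfVectorizer(), LogisticRegression())
moderation_filter.fit(passages, labels)

def screen(candidate_response: str) -> str:
    # Only release the response if the filter says it is acceptable.
    if moderation_filter.predict([candidate_response])[0] == "forbidden":
        return "I can't help with that."
    return candidate_response
```

The point is simply that a filter like this is only as good as its labels, and every one of those labels came from a person who had to read the passage.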

Anyway, OpenAI's approach was to outsource this work of tagging offensive material, identifying it, meta-tagging it, so that ChatGPT could avoid such stuff, and they went with a company called Sama, S-A-M-A. This company had also worked with Facebook in the area of content moderation, so it would hire people to go through Facebook posts and flag any that violated Facebook's policies. Now, the way Sama handles this is to employ people in Kenya to do the work at a salary that breaks down to between a dollar thirty-two and two dollars per hour. By the way, Sama was paid around twelve dollars and fifty cents per hour per case, and the people actually doing the work are getting a dollar thirty-two to

two dollars of that. Time magazine indicates that a receptionist in Nairobi makes around a dollar fifty-two per hour, so these are pretty low-paying jobs. And in return for this hourly rate, employees were divided into three groups to read passage after passage of just truly the worst stuff you can imagine. Each group focused on a specific kind of offensive material: there was a hate speech group, a sexual abuse group, and a violence group, and I'm

sure there was plenty of overlap between these three. Employees would read and tag each passage for eight or nine hours a day, and as you can imagine, being plunged into that kind of work and being expected to meet certain deliverables, right, to tag a certain number of passages

each day, really took its toll. It was psychologically taxing, to say the least, and many employees reported being traumatized by the work, and a few of them argued that Sama, despite what it claimed, was not providing adequate counseling services. And this whole story reminds us that behind the surface of these AI programs, there's actually this huge human contribution

to make them work. Like, yes, it's remarkable that this program is composing these texts in response to our queries, but to make that possible, a lot of people had to put in countless hours of work, some of them doing work that was truly traumatic. The story gets even

more complicated. Sama recently ended its contracts with OpenAI ahead of schedule, probably because the company was already being taken to task for how it relied on poor people in developing countries to moderate content on Facebook, which included the content moderators being subjected to truly horrific content, including videos and pictures, and we're talking about violence and sexual abuse and child endangerment and worse. In other words, I don't think Sama made this choice out

of concern for its workers. The workers also found themselves either out of a job entirely or shifted to other lower-paying work, so they've actually come out poorer for this as well. And some of them were saying, like, yeah, the work is terrible, it takes a toll, but I need to provide for my family and now I'm not

able to. So it's a very grim story. If you would like to read about it further, I recommend the article in Time. It is titled "OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic." And for our third story about ChatGPT, let's talk about how some cybersecurity researchers were able to get the chatbot to create a new strain of malware, and not just any malware, but polymorphic malware, meaning it can take many forms.

You can have a core structure that can then be tweaked so that you get different generations of malware based on the same core, but they can be different enough that an antivirus program would have trouble detecting the new variants, thus extending the useful life of the malware. Like, you can keep using this core malware over and over by making relatively small changes, and keep sending out waves of the stuff.
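Here's a benign way to see why that defeats naive signature matching; this is my own illustration, not the researchers' code, and there's nothing malicious in it: two snippets that do the exact same thing, but differ byte for byte, produce completely different fingerprints.

```python
# Two functionally identical (and harmless) snippets: doubling a number.
import hashlib

variant_a = "def f(x):\n    return x * 2\n"
variant_b = "def g(value):\n    result = value + value\n    return result\n"

for name, source in [("variant_a", variant_a), ("variant_b", variant_b)]:
    digest = hashlib.sha256(source.encode()).hexdigest()
    print(name, digest[:16])  # same behavior, totally different fingerprint
```

A scanner that only matches known fingerprints misses the second variant, which is exactly the property that makes polymorphic malware hard to catch.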

So this is bad news for the security side of things, but it's also important research, because if the good guys don't know about it, then the bad guys can make greater use of it. The researchers said that the web-based version of ChatGPT is a little more challenging to use because it's meant to filter out anything that would result in the creation of malicious tools. You're not supposed to use ChatGPT to do this, and it tries to prevent you from doing it.

However, the team said that if they just kept restating their requests and if they made them more authoritative, they could eventually work around this barrier and convince ChatGPT to do their bidding, which is kind of concerning, this idea that persistence and a change in tone will allow a user to sidestep safety parameters. That's not great. But what's arguably worse is the team found that the API, the application programming interface version of

ChatGPT. This is the version that's meant to let you incorporate ChatGPT functionality into other applications. That version appeared to have no such restrictions at play in the first place. So, in other words, it was essentially raring to go when the time came to make malware.
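For context, the API version just means calling the model programmatically instead of going through the chat website. A minimal sketch using the OpenAI Python client as it worked in early 2023; the model name and prompt here are illustrative, and obviously not the researchers' actual requests:

```python
# Calling a GPT-3.5-era model through the OpenAI API (openai<1.0 client).
import openai

openai.api_key = "YOUR_API_KEY"  # key from your own OpenAI account

response = openai.Completion.create(
    model="text-davinci-003",             # a GPT-3.5-series completion model
    prompt="Write a short poem about rain.",
    max_tokens=100,
)
print(response["choices"][0]["text"])
```

Any filtering on top of a raw endpoint like this has to be applied deliberately, which is presumably why the API route was so much easier for the researchers to abuse.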

The security firm CyberArk says that it will continue to do research on this new development, and that it plans to release some of the source code of the malware that was created to the cybersecurity community for the purposes of education. And yeah, the bots are learning how to create cyber attacks now. All right, we're gonna take a quick break. When we come back, we'll have some more news. We're back. Now, this past Tuesday, I talked about how crypto is currently in a bit of a recovery phase, possibly because the crypto community thinks

that the macroeconomic situation is improving. But there's no telling if that recovery phase is going to be sustainable just yet, and it appears that the big companies are still very much in belt-tightening mode, which suggests that at least these companies expect things to be tough for a while yet. This week we heard Microsoft is laying off around ten thousand employees between now and the end

of March. A company spokesperson indicated that marketing and sales will feel the impact more than engineering, and gamers will be sad to learn that the Xbox and Bethesda

divisions are among those that will be affected. Satya Nadella, the CEO of Microsoft, said that consumers are starting to get more frugal with their digital spending, and that while the early stages of the pandemic saw an increase in digital spending, we're now seeing kind of a reversal of that trend as people start to ask, do I really need this digital subscription, and start to make choices in a way to scale down their own expenditures. And on a related note, Amazon continues to cut costs,

this time by planning to sunset a charitable donation program that has been in place since two thousand thirteen. It's called AmazonSmile, and it would let customers designate a charity, one that Amazon had verified, and that charity would then receive a percentage of every eligible purchase that the customer made on the platform. So instead of going to www.amazon.com, you would go to smile.amazon.com, and otherwise everything else would be the same as your

typical Amazon experience. I actually made use of this program myself. I selected a local theater, as in a stage theater, here in Atlanta to be the recipient, because it's a nonprofit organization. Anyway, now Amazon is saying that this program was not as effective as the company had hoped, and that was because these donations were, you know, fractions, like a percentage of each eligible purchase, spread across thousands of charities. No single charity received very much money.

The company said that the average amount donated to a charity amounted to around two hundred thirty dollars. I just checked mine, and over the years that I've been using this, and I have been using it for several years now, my purchases have amounted to about one thousand, seven hundred fifty dollars in donations to this theater. But that's stretched across years, right? So it's not like this theater got a check for almost two grand and was like, wow,

what a huge donation. It's more like, you know, in a month they might get a check for a couple of bucks. So Amazon kind of has a point here, in the sense that this was probably not the most effective charitable platform. It says that it will still support various causes, but it will make more focused decisions on things that, quote unquote, make meaningful change.
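For a sense of scale, AmazonSmile's donation rate was half a percent of eligible purchases (a figure from Amazon's own documentation, not from this episode), so the back-of-the-envelope math looks like this; the spending number is a made-up example:

```python
# AmazonSmile donated 0.5% of eligible purchases to the chosen charity.
donation_rate = 0.005
monthly_spend = 400.00  # hypothetical customer's eligible purchases
print(f"${monthly_spend * donation_rate:.2f} per month")  # prints $2.00 per month
```

At that rate it takes two hundred dollars of purchases to generate a single dollar for the charity, which is how you end up with averages in the low hundreds of dollars per charity.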

The Data Privacy Commissioner, or DPC, in Ireland took aim at Meta again, this time fining WhatsApp five point five million euros, which is just a smidge under six million US dollars, for a breach of the EU's strict privacy laws. At issue is how WhatsApp has been leveraging the personal information of users in the EU while trying to improve the company's services. So this isn't about targeted advertising, for once.

It's about using this personal information in a way to beef up the service's own features and capabilities. Apparently WhatsApp's methods just didn't go far enough to protect the personal information of EU citizens, though the Reuters report I read didn't go into really any detail about the nature of

the violations themselves. Meta has been the target of the DPC several times now and has had to pay fines on multiple occasions for how it has handled, and has continued to handle, or, depending on your point of view, mishandle, data pertaining to EU citizens. I don't know at what point things would change, because the fines, while they're substantial, rarely amount to anything in the big picture of Meta's financial situation that would raise an eyebrow on a massive corporate spreadsheet.

So, yeah, it's yet another example of Meta doing what Meta does best, which is handle the personal information of its users in a way that's ultimately irresponsible. Meanwhile, Global Witness, a human rights watchdog agency that we've talked about several times on this show, revealed that Meta's Facebook allowed ads calling for violence in Brazil following an already violent series of protests in that country. All right, so

first let's get some background. So in elections late last year, the leftist politician Luiz Inácio Lula da Silva defeated his right-wing opponent, former President Bolsonaro, in a runoff, but Bolsonaro took a page from the far-right playbook and refused to concede the election. Da Silva took office at the beginning of this year, and then some of Bolsonaro's supporters stormed government buildings in these violent riots

in an attempt to have the election overturned. The violence lasted several hours, and this prompted Meta to declare Brazil a high-risk region, at least temporarily. Now, that's supposed to mean that Meta's properties, including Facebook, take a much more restrictive approach with regard to content moderation and ad approval, in an effort to mitigate things like hate speech and

calls for violence. But Global Witness found that just days after the riots, Facebook accepted fourteen of sixteen fake ads that Global Witness created, and these ads called for violence against da Silva and his supporters. That is definitely not a good look. Facebook should not have accepted those ads, but it was totally prepared to run fourteen of sixteen

of them. Global Witness reps say that this shows how Meta fails to live up to its obligations in these kinds of cases, and it's something we've seen play out

around the world, particularly in non-English-speaking countries. There's long been this accusation against Meta that the company pays very little attention to content moderation outside of major English-speaking markets like the United States and the EU, and yet by doing this, by ignoring these other regions, the company is facilitating serious harm to populations by not cracking down on things like hate speech and calls

for violence. Global Witness says that YouTube performed substantially better than Meta did and denied ads that had similar language and messaging in them. And just to set minds at ease, Global Witness did cancel these ads before they could actually go live, so no users encountered these ads that violated Facebook's policies and called for violence, because Global Witness pulled

them before they could go live on the site. Okay, I've got a couple more stories to get to, but before I get to those, let's take another quick break. We're back. So I've got a story about video games. It's not necessarily a positive one. So one of the ongoing issues in the world of video games involves what comes along with leaning on the games-as-a-service business model, that is, releasing a game that has elements in it that allow a company to continue to generate

revenue from players over time. So in the old, old days, the way you made money if you were in the video game business was you sold as many copies of a video game as you possibly could. But once you sold a copy of a video game, that was kind of it. That was the end of the transaction. Then once you started getting into games that had a subscription model, like MMORPGs, there was this new way to

make money. First you'd sell the game, then you would continue to make money from the game by collecting subscription fees. And this started to open up possibilities where, you know, you look at this and you think, all right, we can continue to make a profit from these games. We do have to also put effort in to continue to support the game, so there is an ongoing cost as well. It's not like, you know, we work one day and then we just make passive income for the rest of

our lives. That's a myth. But it did open up these opportunities, and that's when we started to see other ways to make money on ongoing titles, and that involved stuff like being able to pay for upgrades, cosmetic upgrades for your character so that they could look

a specific way and have a specific style. And most gamers, I think, don't object to this, like the idea of, yeah, you know, if you want, you can pay a dollar and you get this new skin for your character and it looks really cool. I think most gamers are like, you know, I don't care about that, so I'm not going to spend the money, or, yeah, it's a dollar, I'll spend a dollar and help support this game, plus get this cool skin. Most people don't

have objections to that. But then you have the darker side of this, where, you know, they offer up items or outfits or other elements that give players an advantage in return for some cold, hard cash; you buy something that boosts your character's performance, either temporarily or permanently. A lot of players refer to this as the pay-to-win style, right, where you can get an advantage over better players just by spending money, and it has a really bad rep in

the video game world. Generally speaking, it's heavily frowned upon, because people who put in the time and who genuinely love the game will find themselves frustrated with folks who just have more discretionary income and who are spending more on the title in order to get the advantage. And that's very frustrating, because we deal with that enough

in real life, y'all. But anyway, along with these monetization strategies come concerns that the games, you know, gamify stuff so that players are kind of lured into spending money that they otherwise wouldn't, or there might be elements of gambling at play, such as the purchase of a loot box. A loot box doesn't guarantee you a specific item within

the game. It gives you a chance of getting different items, and the more common items tend to have a larger chance, and the rarer items tend to have a smaller chance, so it becomes kind of like a gamble. And a lot of countries have looked into this and even gone so far as to call it gambling and outlaw loot boxes. Well, now the EU is kind of looking in on this too, as well as other elements of games as a service that could have a negative impact on gamers' mental health.
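As a toy illustration of how those odds work; the items and weights here are made up, not any real game's drop table:

```python
# A weighted random draw: common items dominate, rare ones almost never hit.
import random

drop_table = {
    "common skin": 70.0,
    "uncommon emote": 25.0,
    "rare mount": 4.5,
    "legendary weapon": 0.5,   # roughly 1 box in 200
}

def open_loot_box() -> str:
    items = list(drop_table)
    weights = list(drop_table.values())
    return random.choices(items, weights=weights, k=1)[0]

print([open_loot_box() for _ in range(5)])
```

Each purchase is a fresh draw with no guarantee, which is precisely the mechanic regulators keep comparing to a slot machine.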

So this doesn't mean that the EU is going to vote against loot boxes, or make those, you know, illegal, or regulate them in any way. We haven't gotten to that step yet, but they have adopted a report that is looking into these kinds of things, which could be

the first step toward such an outcome. So I imagine there are some video game companies out there that are watching this with some anticipation, because if the EU does come to the conclusion that stuff like loot boxes amounts to gambling, and that this has to be regulated, and maybe that also creates restrictions on who can buy the games,

that's going to have a massive impact on the industry. Yeah, so it's again an ongoing issue that is still kind of shaking out in different places around the world. Maybe in five or ten years we're going to see dramatically different strategies for monetization to avoid these kinds of, I guess, what Obi-Wan Kenobi would call imperial entanglements, although he meant that in a bad way, whereas I think, like, maybe we do need to rein this in a bit,

because I think it is getting a bit predatory. And finally, a group of academic researchers have proposed that social networks should employ a new approach to their recommendation algorithms. These are the sets of rules that determine what content to

promote to each user at any given time. Now, it's hardly controversial to say that, as of right now, recommendation algorithms tend to promote harmful material, or at least material that is more likely to create larger divisions between different populations of people, such as people who have different political leanings. The purpose of the algorithm is not to sow discord. It's not to watch the world burn, it's not to

create chaos. The purpose of the algorithm is simply to keep users engaged, and, more importantly, to keep them on the respective platform for as long as possible. Keep them stuck to your product, because that's how you make money. You want them to stay on Facebook or Twitter or YouTube or whatever for as long as you possibly can. It's unfortunate that the stuff that tends to sow discord is particularly good at keeping people glued to these platforms. So I guess what you can say is that the

algorithms themselves are not immoral. They are amoral; there's no moral judgment given to the kinds of content promoted. The algorithm is just looking for a result and doesn't care how it gets it; the ends justify the means. Well, the researchers suggest that purposefully designing algorithms that promote content to build bridges between people, rather than foment animosity, could still keep engagement levels high while simultaneously mending fences

between different populations. The idea is that if the content that's promoted supported stuff like actual discussion and good old debate, rather than accusations and dehumanizing portrayals of the other side, we might see less divisiveness outside of the online world as well, because a lot of the outside world is taking its cues from what's happening in the online world. And the argument is that if we could make some conscious steps to improve things, we could see the benefits well beyond

the native platforms. This is a nice thought. I would

love to see it happen. The cynic in me worries that it wouldn't work, but the optimist part of me would love to see some real effort put into something like this, because the worst-case scenario is we could try it and it doesn't work, right? Like, that's the worst thing that could come out of it. Whereas the best thing that could come out of it is that we could actually see online communities become less polarizing, and perhaps this could also extend to other areas of our lives,

and maybe this would mean we would start to recognize where we are in agreement, as opposed to only pointing out where we have massive differences and then just escalating that to the point where people start to be harmed in the process. So I'm skimping on some of the details here, but you can read the whole paper. The whole white paper is available online. In fact, it has its own, you know, kind of vanity URL, and it's bridging.systems.
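As a toy sketch of the bridging idea, my own illustration rather than anything lifted from the paper: instead of ranking purely by total engagement, you boost posts whose approval comes from both sides of a divide.

```python
# Engagement ranking versus a crude bridging score.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes_group_a: int  # approvals from one cluster of users
    likes_group_b: int  # approvals from the opposing cluster

def engagement_score(p: Post) -> int:
    return p.likes_group_a + p.likes_group_b

def bridging_score(p: Post) -> int:
    # Reward cross-group approval: a post liked by only one side
    # scores zero, no matter how viral it is.
    return min(p.likes_group_a, p.likes_group_b)

posts = [
    Post("rage bait dunking on the other side", 900, 5),
    Post("thoughtful piece both camps appreciated", 300, 280),
]
print(max(posts, key=engagement_score).text)  # the rage bait wins
print(max(posts, key=bridging_score).text)    # the bridging post wins
```

The real proposal is far more sophisticated than taking the minimum of two like counts, but the shift in objective, from raw engagement to cross-group approval, is the core of it.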

And that's it for the news today, Thursday, January nineteenth, twenty twenty-three. I hope you are all well. If you have any suggestions for topics I should cover in future episodes of TechStuff, please reach out to me and let me know what those are. You can do that in a couple of ways. You can go to Twitter and send me a tweet; the handle for the show is TechStuff HSW. Or, if you prefer to leave me a voice message, you

can download the iHeartRadio app. It's free to download, it's free to use, and you just go over to the little search feature there and type in TechStuff. That'll take you to the TechStuff page in the app, and there you're gonna see a little microphone icon. If you click on that, then you can leave a voice message up to thirty seconds in length and let me know what you would like to hear in future episodes, and I will talk to you again really soon. Yeah.

TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.

Transcript source: Provided by creator in RSS feed