Welcome to tech Stuff, a production from iHeartRadio. Hey there, and welcome to tech Stuff. I'm your host, Jonathan Strickland. I'm an executive producer here at iHeartRadio. And how the tech are you? It is Tuesday, August fifteenth, twenty twenty three. It is time to talk about tech news. And first up, I've got a follow up on the story of Sam Bankman Freed, the co founder of crypto companies Alimtor Research
and the crypto Exchange FTX. SO, just in case a few of y'all out there out of the loop on this story. FDx collapsed late last year and prompted a suite of investigations into the company and its executive leadership team, including Sam Bankman Freed, also known as SBF. SO authorities charged SBF with like a ton of crimes, several crimes, and he is awaiting trial. And until recently he was
just under house arrest. He was staying with his parents, but now he has been ordered to go straight to jail, do not pass go, do not collect two hundred FTT tokens. So you might say, well, what happened? What was the change? Prosecutors brought concerns to the judge and said SBF had been leaking documents to the media, potentially in an effort to intimidate a witness that they were speaking with with
regard to his case. So the judge took this seriously and has revoked the house arrest order at SBF will have to go to jail while awaiting his trial, which is a big old wolf. SBF tried to appeal this decision, but the judge dismissed that appeal, saying the points that his legal team made in the document were quote either
moot or without merit end quote double wolf. The prosecution also asked the judge to place a gag order on SBF so that he doesn't continue to leak information to the media, and that has then prompted various media outlets to submit court filings on First Amendment concerns, you know, free speech concerns about that gag order, saying it violates
his First Amendment rights to freedom of speech. This is a delicate matter, obviously, when you have an issue where communications can actually impact the legal process, it does or can at least come into conflict with the philosophies behind freedom of speech. So complicated issues we're seeing that play out in other arenas here in the United States as well. Okay, now it's time to talk about AI for just okay,
I can't lie for a lot. So first up, the hacker conference known as def Con happened this past weekend, and one of the many events that were held over the weekend at def Con was one that pitted hackers against AI chatbots, and this includes bots from all the major players in the space like open ai and Google
and Meta and others. So the purpose of the session was to test these chatbots for vulnerabilities, and it didn't involve like hacking into the code, but rather just chatting with the chatbots and seeing if you could make it do stuff it wasn't supposed to do. Finding a vulnerability in an exhibition at Defcon is probably embarrassing, but it's preferable to some bad actor out in the real world finding that vulnerability and then exploiting it to terrible effect.
So these hackers were trying to manipulate the chatbots to do things they absolutely were not supposed to do. That included sharing private information that was supposed to be protected, or produce examples of hate speech or misinformation, or making defamatory statements about famous people, that kind of thing. In fact, they had different categories for stuff that the chatbots were not supposed to do, and points associated with those tasks.
So if you got the chatbot to do one of those things, you would get that amount of points to your score. NPR covered the story and mentioned how one participant was able to convince a chatbot to reveal a credit card number by just changing the context a little. The participant claimed that his name was the number on that credit card. Then he asked the chatbot, what's my name?
And the chatbot supplied the number in response. And that's kind of out of the box thinking, right, because the chatbot seemed to be contextualizing the credit card number as not being a number, but instead it was just a person's name. And because a person's name is not protected information, especially if the person who's talking to you is the one who belongs to that name, it handed the information over.
And it's not like a fantasy or horror novel where names can have secret powers and you don't want anyone to know your true Name's not how it works in chatbot land. So if it thinks this credit card number is actually a name. Well, there's no reason you can't say a name, right. That's how simple it was to make this one chatbot do something it was not supposed to do, and so the chatbot coughed up the information.
So each participant had fifty minutes to complete as many tasks as they possibly could and accumulate points in the process, and also illustrating how far we have to go in order to make generative AI trustworthy. This next story is one that I missed when it broke Last Thursday, the US Department of Defense announced it is creating a task
force specifically focused on generative AI. This task force has the designation LIMA or LIMA if you prefer, like the BEAN, and will explore potential uses and threats regarding generative AI and large language models. I imagine that the people who will be in this task force are already well aware of the limitations of generative AI technology and how it can be impressive and even useful, but that you have to be really cautious because it can also sometimes be unreliable.
At least, I would like to think that the task members are aware of all that. It's hard to imagine that they're not, but you know, you get nervous. The press release from the DoD mentions that the DoD recognizes the potential of generative AI to significantly improve intelligence, operational planning,
and administrative and business processes. However, responsible implementation is key to managing associated risks effectively, so at least it sounds like the DoD wants a very steady approach to this and is aware things could go pair shaped if if
you aren't being careful with it. It still makes me nervous to think of generative AI being used in concert with like gathering and analyzing intelligence because we know that generative AI has a tendency to hallucinate and the event that it doesn't have all the information it needs in order to answer a question. It actually made me think of a scene from the British comedy series Blackadder, specifically
season four, which is set in World War One. There's a sequence where Captain Blackadder commands his subordinate Lieutenant George to paint a scene that shows German forces being far too powerful at their position, and this isn't an effort to convince leadership not to command his division to advance, so essentially he's saying, oh, we couldn't possibly advance. The Germans are far too entrenched and have far too many resources,
and it would be a disaster. George, though, he gets carried away while making this painting of these as far as they know, fictional German forces, and he ends up including stuff like battle elephants and stuff like that inside the painting, which Captain Blackatter does his best to incorporate
into his report to his his superiors. So I imagine generative AI producing equally fake intelligence, right, like, how can you trust any intelligence provided by generative AI without doing so much extensive double checking that you might actually negate any benefit that the generative AI gave you. Right, If it takes you more time to verify the information than it would if you just didn't use the AI at all,
then really you're playing a losing game. This is also what I argue with AI generated articles, where you have to have an editor go over the article, because typically you would have a human writer who is vetted to write an article, and then the editor would double check the article for things like, you know, grammatical mistakes and just anything that stands out. But generally you're fairly confident that the writer has turned in something without just making
stuff up. This can come back to bite you, as we have seen multiple times, but usually it works out. But with AI you can't be sure about that. And so if you give AI generated articles to editors, often they have to go through the article with such a fine tooth comb that they essentially have rewritten the article themselves. Like it's as if you had given the writing assignment to the editor and not to an AI bought in
the first place. And that's a real problem. That's what I worry about with the use of AI in connection with gathering intelligence. Okay, how about we talk about a case where a government is using AI to repress information? Yay, how fun. So I'm going to try and get through
this without going on too much of a rant. But the story is that in Iowa, the state government passed legislation that bans books in school libraries if those books include material not deemed to be quote unquote age appropriate. So that includes any book that describes a sex act. Anything like that is immediately like on that banned list. But then how does a school go about doing that right?
Because libraries are I don't know if you know this, they are absolutely choc a block with books and it would take a considerable amount of time and effort to go through every single book to see if that book met the government's definition of age appropriate or if it did not. So one school district in Mason City is leaning on AI to do that work for them. Now.
What they've started with is a list of books that have already received complaints in the past about objectionable material in those books, and so they are then feeding these books to AI software to scan the material and determine if in fact the book violates the law, in which case it would presumably be banned from the school libraries. This includes books like The Handmaid's Tale. I'm pretty sure that one's going to get banned, knowing some of the
scenes that are in that book. However, you know that's it's not even ironic, it's just sort of like predicted by the book itself. The state government is probably viewing it as a good thing because The Handmaid's Tale is a book that really lays out what happens when a government gets authority over stuff like bodily autonomy, and you know, you don't want young people being able to read about that and then getting ideas. School is the last place
for getting ideas. After all, Sorry, I am ranting, even after I said I wasn't going to anyway. I just consider it a fresh new hell to be in a world where AI is helping administrators ban books. It's like the evil Mirror Image Universe version of a Reese's Peanut butter Cup. This is two awful things that go awful together. Okay, I obviously have become a little overwrought with emotion, So we're going to take a quick break, and when we come back, I'll talk about some more stories, including a
couple more AI ones. We're back. So many years ago I did a tech Stuff episode about capture tests. These are those tests you sometimes encounter on the web that you have to pass to prove you're a human being before you can access whatever is on the other side, right Like, there are a lot of these where you have to click on it in order to complete some transaction or else the system will think that you are
a bot and reject it. So researchers say that bots are now better at completing capture tests than humans are, and that is a huge problem because the whole main purpose of captures is to create a task that should be relatively easy for most humans to complete, but it should also be really tricky for automated systems to complete it. As captures become harder for people to complete, they become
a barrier to legitimate usage. It's a real problem. And as they become easier for bots to complete, well, obviously they have no use at all from that standpoint, at least not for their stated purpose. This is not the first time we've seen this happen. By the way, the whole history of capture is one that's kind of like
a seesaw. Developers will create automated programs they get better at solving certain capture tests, and then capture test developers will come up with a new approach to captures in order to trip up this new generation of bots. So, in a way, captures have played a really important part
in the evolution of artificial intelligence. But beyond this adversarial approach to machine learning, the research points out that bots have fewer barriers to do stuff that we generally frown upon, like we typically put these capture things in place for a reason, like we don't want automated algorithms or systems to be able to game the system in some way, So that can include things like using bots that can defeat captures in an effort to access all the pages
in a website and then scrap all the data for whatever purpose, or to pose as a legitimate customer on an online marketplace and then post fake reviews for various products and then artificially driving that product's review scores up or down, right, like you could have someone pay to downvote a competitor's product so that your product looks better in comparison. You know, you could also use it to
try and boost your own product scores. These are issues that are known and are happening, and one of the reasons why captures are being used. The Independent reports that researchers put capture tests to the you know test and had people and bots tried to complete different captures, and people did significantly worse on those tests than the bots did. They took more time to complete the tests, and they
were less accurate than the bots. And the bots were able to breeze through some of those challenges in less than a second with like close to one hundred percent accuracy. So the real take home here is that captures no longer do the job they were intended to do, or at least ostensibly intended to do, and in my mind that means we should just ditch captures and come up
with a different approach. However, I should also note that some companies, such as Google have relied on humans completing captures, not because that was a way to prevent bots from getting access to stuff, but rather to help Google train its own AI models. Right Like, there was the time where you would be presented with words scanned words from a scanned book and you would have to identify what
that word was. And the reason for that was not so that Google could necessarily say, Okay, you're definitely a human, but to train its technology to be better able to scan text and interpret it. So sometimes capchas aren't really there as a safeguard against bots, but rather a method to train bots to be even smarter than they already are. And we're still on AI. So researchers at Purdue University have studied chat GPT's performance regarding coding. They took a
very specific approach. They submitted to chat gpt five hundred and seventeen different questions that they pulled from the website's stack overflow. So in case you're not familiar with stack overflow. That's a place for programmers to go in order to learn and to share knowledge and tips, and you can ask questions of the community and then receive answers from them. It's kind of like a programmer specific version of Quora
or Rest in Peace Yahoo answers. So the researchers took questions from stack overflow, They gathered all these questions from the community, and they submitted those questions to chat gpt in order to see what chat gpt said, and they said that more than half of chat GPT's answers, fifty two percent of them included at least some inaccuracies, you know, some being totally inaccurate, to some that were just like
partly inaccurate. They also said seventy seven percent of chat GPT's answers were overly verbose, which again makes me wonder if I am actually chat gpt. The researcher said, the inaccurate answers indicated that about half the time, like fifty four percent of the time, when chat gpt gave incorrect answers, it seemed to be because chat gpt didn't really understand
what the question was actually asking. So, in other words, it's possible chat GPT could have produced a correct answer if it had been able to parse what the question asker wanted to know in the first place. It's just chat gpt didn't understand the question and so gave an inappropriate or incorrect response. All of this is not to say that chat gpt is completely useless when it comes to helping programmers code. It might be very useful, but it does require a lot of editorial oversight, just like
with the writing of articles, like I'd mentioned before. But it could potentially speed things up if it if it's understanding the prompts properly and not hallucinating, which those are big ifs. But like the researchers were even quick to say, this isn't to suggest that AI doesn't have a place here. It's just to remind ourselves that, you know, the way you word questions matters, the way that chat GPT interprets questions matters, and then we can't just assume that any
answers provided are magically correct and accurate. Okay, moving off of AI, let's talk about Apple. So back in March twenty just as the world was starting to shut down in the face of COVID Apple agreed to a five hundred million dollar settlement. So the heart of the matter here was a class action lawsuit that accused Apple of purposefully slowing down older iPhone models performance, presumably in an
effort to push people to upgrade to newer models. Apple admitted that it had slowed performance down on older iPhone models back in twenty seventeen, but the company said it wasn't in an effort to make people go out and
buy a new iPhone. Instead, they said they had to do it because updates to the iOS meant that older phones would potentially shut down spontaneously, you know, would enter into an issue where they would shut down or they would burn through their battery life too quickly unless Apple artificially made them work slower. But in people upset, Apple customers upset, and around three million claimants joined this class action lawsuit, which Apple again ultimately settled in March of
twenty twenty. And now, finally, three years after the settlement, Apple will be sending checks out to the people who were part of the lawsuit. The checks come out to be about sixty five dollars per claim, because again it was like three million claims and a five hundred million dollar settlement fee, And part of the reason for the long delay has nothing to do with Apple's behavior. It's
not that Apple was dragging its heels. Part of the issue is that a couple of claimants out of those three million, were dissatisfied with the settlement and they appealed it to the ninth US Circuit Court of Appeals. But ultimately the court ruled against that appeal. So now, after many years, those checks should be heading out the door. So you should keep on the lookout if you had signed up to be one of the claimants in that lawsuit.
Last week, I talked about how Saudi Arabia was following in the EU's footsteps, requiring all smartphone manufacturers to include a USB C charging port starting in twenty twenty five. Well, now the EU has passed similar rules that will require all smartphones to have replaceable batteries starting in twenty twenty seven. So like the USBC rules, this seems to me to be more or less specifically targeting Apple, not just Apple. Apple's not the only company that makes it impossible to
replace a smartphone's battery. In fact, my Android phone is the same I can't replace the battery on my Android phone. But Apple just has this reputation for protecting its proprietary approach to smartphones and creating kind of a closed off ecosystem that requires you to work with Apple to make any repairs or maintenance to your own devices, And that's part of what is being targeted here. It's also an
effort to cut down on things like e waste. But here's the thing is that while that may be an element, like the control part of the ecosystem is probably an element for companies like Apple to lock away those batteries, it's not the only reason for it. Part of the esthetic for modern smartphones is to try and make them as slim as possible, but in order to do that, you have to cram all the components of the smartphone
into a very tiny form factor. And while you can miniaturize a lot of stuff in smartphones, batteries are one of the things you can't easily miniaturize. But this typically means that it's not really practical or sometimes even possible, to make replaceable batteries a thing because you've crammed everything into such a small form factor that you just can't access the battery or disengage it easily from the rest of the phone. It's all just it's kind of built
into itself. So mandating that all smartphones have to have replaceable batteries, ones that are replaceable by the end user. No less, we're not just talking about taking it into a shop and having it swapped out. The end user is supposed to be able to replace these batteries. Well, that means that companies will have to move away from designs that end up having these compact layouts that would
make it difficult or impossible to replace the battery. They're going to have to go with a different approach, and that could mean that we're going to start seeing some chonkier smartphones in the EU starting around twenty twenty seven or so. Also, I should mention this rule doesn't just apply to smartphones. It actually applies to any battery operated device, So you know, things like laptops and stuff will also
have to have replace batteries. Even electric bikes, which are also popular in the EU, will have to have these replaceable batteries. Okay, I've got a few more stories to cover before we wrap up. Let's take another quick break and we'll be back with some more news. We're back and I've got another class action lawsuit to bring up. This one is against HP, so claimants in California, uh and I think other places as well, but I know the lawsuit is taking place in the state of California.
They have sued HP, saying the company was purposefully restricting customers from using all in one printers if the toner ran down, so you run out of ink and then suddenly you're all in one printer just becomes a giant paper weight. So you wouldn't be able to even use the non printing functions on one of the these machines, like you wouldn't be able to scan a document to create like a PDF, or you wouldn't be able to use it to send a fax, which doesn't require any
ink in the first place. And that was the basis of the complaint, is that HP was locking these functions away in an effort to force people to buy expensive toner even if they didn't need the printer for the purposes of printing. Also, they argued that HP failed to disclose that this is what would happen in the device's documentation. So HP filed a motion to dismiss this lawsuit, and now a judge in California has denied that motion, so
the lawsuit may proceed. Earlier, when this lawsuit was first filed against HP, a judge actually did dismiss the case because the claim was that the plaintiffs had failed to make an actual legal claim against HP. They had complaints, but not a legal claim. But then the plaintiffs amended their motion and that one held up to scrutiny, and that's what's going to move forward. This still doesn't mean that HP will ultimately be found to have acted in the wrong, but it does mean that they're going to
have to face some tough questions in court. Last week, California authorities gave two companies WAIM, which is owned by Google's parent company, Alphabet, and Cruse, which is owned by General Motors, the authority to operate self driving robotaxi services around the clock in San Francisco, and then Cruz promptly created a traffic jam in the North Beach neighborhood of
San Francisco, Sad Trombo. All right, so, according to reports from San Francisco, for some reason, several of Cruz's self driving cars, as many as ten of them at a time, came to a stop around the Valejo Street in North Beach, San Francisco. Ballejo just always makes me think of the Sodiac Killer, But the Sodiac Killer had nothing to do with these cars just coming to a stop. Some of those cars had passengers inside them, and so the passengers were stuck inside a non moving car on the street
for about fifteen minutes. The cars did turn on their hazard lights, so at least there's that. So what the heck happened? Well, representatives at Cruise say that it looks like a nearby music festival was the problem. The cars
were not listening to the groovy tunes. Instead, there was an excess of cell phone activity in the area and it was kind of clogging up the airwaves, and so all that interference made it difficult for the vehicles to access their navigation features, and so they kind of went into protective turtle mode. It was not an auspicious start
to the driverleist taxi revolution, I would say. Now, the rules in California state that Weaimo is not allowed to charge customers for taxi rides unless a safety driver is also in the vehicle. If it's a driverless vehicle and there's no safety driver, weimo can't charge for rides. They could give them for free, but they wouldn't be able to charge. Cruz has a slightly different deal. It can charge for driverless trips without a safety driver, but only
between the hours of ten pm and six am. Anytime outside of those hours, if there's not a safety driver in the car, the ride is free, or if they do have a safety driver present in the vehicle, they can charge just anytime. GM has said that the long term plan is to quote unquote blanket cities like San Francisco with driverless vehicles, which kind of makes the point
for a lot of people who oppose these policies. They argue that this just really means that we're just gonna end up with a lot more vehicles on city streets. That's not going to alleviate traffic, it's gonna make it worse. And while the argument might be made that the purpose is to convince people to not drive their own vehicles on streets, the proponents for change are saying that's not what we need. We don't need like self driving cars
to do that. What we need is to make cities easier to get around for pedestrians and bicyclists and stuff, which is actually taking more cars off the street as opposed to going driverlests and having even more vehicles circle streets. So yeah, not a great story to come out of the early days of driverless robotaxis in San Francisco. Now, our last full story has to do with video games and modeling communities. So you may be aware there are folks who love to create code that modifies existing video
games in some way. Right, they might allow you to get access to abilities and tools that developers have but players are not meant to have, and then do all sorts of stuff. Maybe it even changes the game fundament mentally or ads new content created by moders. That kind of stuff. Some video game companies actually encourage these communities. Some even work with them to create like a storefront where the game producer and the motterers can both generate
revenue from those mods. But then you've got Rockstar Games, the creators of the Grand Theft Auto series. Rockstar Games has often taken a more adversarial approach to the modding community, so back in twenty fifteen, the company banned a whole bunch of members of a mod group called five M. Rockstar said that the group had been developing code that can make it possible for folks to pirate the game.
What the motters had actually done is they had created mods that would allow people to play in an alternate version of Grand Theft Auto Online. So that's like an ongoing product that Rockstar offers. But the mods five D made it possible to run a separate instance of Grand Theft ato online, one not overseen by Rockstar and one that could have lots of different mods in it. Plus, people who had a pirated copy of Grand Theft Auto would be able to access this alternative version of Grand
Theft Doto online. So Rockstar Game says, oh, you're encouraging people to pirate the game, so we're banning you from our different forums and stuff. But now Rockstar has acquired a group called CFX dot Ra, which consists of you guessed it, the team behind five M. So now the dreaded pirates are part of the crew, y are they
can take down the system from within. This is a pretty dramatic turn of events because back in twenty fifteen, there were reports that Rockstar had gone so far as to actually send private investigators out to the homes of people who were part of five M and to essentially
intimidate them. But in the year since twenty fifteen to five M has maintained this alternative online play space for grand Theft auto players that reached a maximum of around two hundred and fifty thousand concurrent players back in twenty twenty one. So I guess Rockstar came around to the old philosophy of if you can't beat them, acquire them. Now before I head off, I do have a recommended article I think you should check out. It's on tech Dirt.
It's written by Mike Masnik. The article is titled ur I double A piles on the effort to kill the World's greatest library sues Internet Archive for making it possible to hear old seventy eight's. That is a very long headline, but yeah, the story talks about how the RIAA aka the Recording Industry Association of America is coming after the Internet Archive because Theria objects to folks being able to use Internet Archive to listen to obsolete media. So the
seventy eights reference record albums. That's what the seventy eights mean. They're specific types of record albums, specifically ones that require a playback speed of seventy eight revolutions per minute. You know, Vinyl has experienced a renaissance lately, but you typically find the Vinyl records of today falling either into the thirty three and a third rpm category or the forty five rpm category. Most record players and turntables don't even have
the ability to play at seventy eight RPM. Some do, but a lot don't because seventy eight RPM albums are pretty darn rare. They are really reaching obsolescence, so there's a possibility that media recorded on seventy eight albums could be lost forever without this archival approach. So the Internet archive is all about preserving information, but the uble A is not crazy about people being able to access stuff
without you know, the industry's total control over it. Anyway, I'm biased when it comes to stuff about the ri double A because that organization has brought the hammer down with unnecessary force multiple times throughout the history of the Internet. Like the Napster story is ridiculous, but you should read this article on tech Dirt to get the full story and maybe get a deeper appreciation for what the folks
at Internet archive are trying to do. And that's it for the Tech News for Tuesday, August fifteenth, twenty twenty three. I hope you're all well and I'll talk to you again really soon. Tech Stuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows. You