
Tech News: World Famous Hacker Passes Away

Jul 20, 2023 (29 min)

Episode description

Hacker Kevin Mitnick passed away this week. Back in the 1990s, Mitnick upset a lot of powerful companies by infiltrating their computer systems and became a hero to some and a villain to others. In other news, we've got a lot more AI stories, plus Ukrainian authorities bust up a huge botnet operation.

See omnystudio.com/listener for privacy information.

Transcript

Speaker 1

Welcome to TechStuff, a production from iHeartRadio. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland. I'm an executive producer with iHeartRadio, and how the tech are you? It's time for the tech news for Thursday, July twentieth, twenty twenty three.

First up, some sad news. One of the most notorious hackers in the early days of the web has passed away. Kevin Mitnick became famous in the mid nineties when the FBI raided his home after trying to track him down for a couple of years. Mitnick, who had landed in trouble with the law a couple of times earlier for his more daring hacker tendencies, stood accused of infiltrating and exploiting computer systems belonging to some very big companies like Nokia and Motorola. He pled guilty to cybercrime charges, and he was in jail until the year two thousand. Upon release, he was ordered not to use the Internet without first getting government permission. He eventually got that restriction lifted.

Mitnick was a divisive figure. Some hailed him as the spirit of a true hacker, someone who's curious about systems and will do everything they can to learn all about them, including how to infiltrate them. Others said he was a dangerous criminal and an early example of the type of person who posed a threat to companies and government agencies alike. He certainly got under the skin of some very big companies, and a lot of people would argue that that is why he faced such persecution from authorities, that the reason they went after him so enthusiastically was because he ticked off the wrong people. Mitnick himself seemed to be pretty good at handling attention. He embraced the moniker of most famous hacker in the world with glee. He also made a new career as a security consultant, helping companies create more secure systems. But last year doctors diagnosed Mitnick with pancreatic cancer, and this week, actually on July sixteenth, he passed away.

So, Kevin Mitnick. I don't think you can say he was a hero or a villain. He was a human, a curious human who loved to learn things and had a mischievous streak as well. And yeah, I think he did essentially tick off the wrong people. He could have exploited those companies and made a lot of money off of it. He might not have been able to keep all of it, but he could have made some. But he didn't really do that, so there is that to say. I don't think his intentions were outright malicious or anything like that. Anyway, I suppose we can think of this episode as being dedicated to his memory.

This week, the United Nations Security Council has been holding meetings about the topic of twenty twenty three, and that is, of course, artificial intelligence. On Tuesday, Jack Clark, a co-founder of a company called Anthropic, which is in the AI biz, had some words of warning for the United Nations. Clark said that the tech companies that are currently developing, acquiring, and deploying AI really can't be trusted to guard against misuse, abuse, or other problems that arise with artificial intelligence. Clark argued that we don't fully understand AI, and I mean, I think when you have people who are in the AI business saying, yeah, we don't fully understand it, we should really be paying attention. He says it would be a mistake to just assume everything's going to work out fine while companies rush to figure out ways they can capitalize on artificial intelligence. He called for a concerted effort to create tests to better understand AI capabilities as well as its flaws, and to anticipate how such technology might be misused in ways that could create harm. He also called out the need to establish standards and best practices, and argued that right now it's pretty much the wild frontier, with few, if any, rules or regulations restricting tech companies as they develop and release AI products. Considering the potential consequences if someone put AI to malicious purposes, that's really not a good thing. You could argue that regulation stifles innovation, and it's certain that that can happen. But a lack of regulation can also lead to disaster, and I'm talking about disasters like using AI to design new chemical or biological weapons, those kinds of disasters, like James Bond level stuff. Later this year, the UN will hold a global summit all about AI safety, and I expect we'll hear a lot more then. And we're not done with AI by a long shot in today's episode. It's a running theme through many of our stories, most of them, I would say. So strap yourselves in.

Fortune reports that ChatGPT is apparently getting less smart, or getting dumber, at least in specific types of tasks, in particular solving certain types of math problems. It has started to slip. Fortune cites a study that Stanford University conducted. Researchers at Stanford compared ChatGPT's performance over time at answering various common prompts, from building code, to solving math problems, to answering sensitive questions. The researchers found that ChatGPT experiences a great deal of drift. In AI, drift is the word we use to describe changes in how an AI completes a certain task: the behavior changes over time, and the AI will drift from one approach to a different one. Drift isn't always a bad thing. You might see over time that the technology gets better at performing certain tasks with higher accuracy, so it drifts toward a better, more consistent approach. But these researchers saw drift go way the heck in the other direction. They said they first started using GPT-3.5 and then later GPT-4 in their studies, which spanned several months. Now, one of the tasks they gave ChatGPT was to determine whether the number seventeen thousand and seventy seven (17,077) is a prime number or not. It is, by the way. And here I should split this between GPT and ChatGPT: GPT is the large language model. They said that GPT-3.5's version of ChatGPT showed improvement over time, but when they switched to GPT-4, they saw a difference.
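(As an aside, the kind of primality check the researchers were grading is easy to verify with a few lines of code. Here's a minimal trial-division sketch in Python, just to confirm that 17,077 really is prime; the function name is mine, not anything from the study.)

```python
def is_prime(n: int) -> bool:
    """Check primality by trial division up to the square root of n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # Once 2 is ruled out, only odd divisors need checking.
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

print(is_prime(17077))  # True: 17077 has no divisor up to ~130.7
```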

They said that in March, ChatGPT got the right answer more than ninety seven percent of the time, but three months later, when they were still testing the system, it was a totally different story. Three months after hitting a ninety seven percent accuracy rate with this question, ChatGPT would give the correct answer only two point four percent of the time. Ninety seven percent to two point four percent accuracy. It's actually hard for me to even grasp that big of a drop in performance. We've heard previously how programmers had noticed that ChatGPT's accuracy for writing code had a big drop more recently, with a lot more mistakes being inserted into code. So what's going on? Why is GPT getting worse, or appearing to, anyway? Well, according to the researchers, one possible explanation is that OpenAI will make changes to the large language model, and they're doing so in an effort to improve performance for certain categories of tasks. But while this happens, the LLM, the large language model, can start to experience setbacks in other categories of tasks. So you might make it better at something, but then it also gets worse at other things, because there are all these interconnections within the neural network. Maybe you're fixing the LLM so it's better at processing visual imagery, but as part of that process you somehow also undermine its ability to do math. The big takeaway from the study is that AI can and does experience dramatic changes in performance, and so it's important to keep an eye on that. You wouldn't want to lean heavily on generative AI if it was going through one of those big old dips in accuracy for whatever you were planning to use it for. It's important to monitor AI models if we want to avoid putting too much faith in a system that, for whatever reason, can at times be very much unreliable.
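(One way to put that monitoring advice into practice is to re-run a fixed set of prompts with known answers on a schedule and alert when accuracy drops. Here's a minimal sketch of the idea in Python; the `ask_model` function is a canned stand-in for whatever model API you would actually call, and the ten-point tolerance is an arbitrary choice of mine.)

```python
# Toy drift monitor: score a model on fixed probes with known answers,
# then flag any large drop in accuracy between runs.

PROBES = [
    ("Is 17077 a prime number? Answer yes or no.", "yes"),
    ("What is 7 times 8? Answer with just the number.", "56"),
]

def ask_model(prompt: str) -> str:
    # Stand-in that answers every probe correctly; in practice this
    # would call the model API you are monitoring.
    canned = {PROBES[0][0]: "Yes", PROBES[1][0]: "56"}
    return canned[prompt]

def accuracy() -> float:
    correct = sum(1 for prompt, expected in PROBES
                  if expected in ask_model(prompt).strip().lower())
    return correct / len(PROBES)

def dropped(previous: float, current: float, tolerance: float = 0.10) -> bool:
    """True if accuracy fell by more than the tolerance since the last run."""
    return (previous - current) > tolerance

current = accuracy()
print(f"accuracy: {current:.0%}")              # 100% with the canned answers
print("drift alert:", dropped(0.97, current))  # False here; True after a real dip
```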

So, another warning sign for AI, and it's not just math, or if you're a Brit, maths, that OpenAI's products struggle with. According to researchers Sophie Jentzsch and Kristian Kersting, OpenAI's ChatGPT 3.5 has one of my really bad habits, which is that it tells the same jokes over and over. According to the researchers, they ran more than one thousand tests with ChatGPT asking it to generate a joke, and ninety percent of the responses were the same twenty five jokes. So yeah, this one hits super close to home for me. I guess I should say something now about a princely sum, or maybe reference a fantasy film as being a documentary or something, because those are kind of my go-tos on this show, or at least they used to be.

Anyway, the researchers were interested in studying ChatGPT 3.5's capacity for creating and explaining jokes. Ars Technica cites part of the report, which explains that nearly all the prompts resulted in a response that contained a single joke. Only a prompt that read "do you know any good jokes" created a response that contained multiple jokes in the one response. As for the joke ChatGPT 3.5 used the most, well, they tallied up the jokes, and the number one joke for ChatGPT is: why did the scarecrow win an award? Because he was outstanding in his field. Now, I have to admit, that is a banger of a joke, and with Halloween season approaching, or, according to at least some of my friends, already being here, it will only become increasingly relevant.

Anyway, the researchers found that while ChatGPT appeared to have a grasp on the structure of jokes, and even the incorporation of things like wordplay, you know, puns, that kind of stuff, it couldn't tell when a joke was or wasn't funny, or adequately explain what made a joke funny. And if a joke didn't follow a more traditional structure, that would trip it up as well. This kind of reminds me of when little kids first learn to tell jokes. If you've ever been around a little kid when they're trying to tell a joke, it's one of my favorite experiences, because typically the kids will understand that a joke has a setup and a punchline, but they don't necessarily know how one follows the other, or some of them don't believe there needs to be any connective tissue between the two at all. It could just be a non sequitur. But they understand that the word underpants is inherently funny, and that, I think, is important for us to remember. While the research focused on jokes with ChatGPT, the research itself is not a joke. Humor is a very human thing, and AI struggles to get a handle on it, and I think that demonstrates some of the limitations current AI typically encounters. It also illustrates why it's a bad idea to lean heavily on AI for content generation, which will lead us into another AI story after we come back from this quick break.

Okay, we're back, and before the break I mentioned we were talking about AI and content generation. This next story has something to do with that. Engadget reports that Google is pitching an AI tool to big news outlets, including The Wall Street Journal and The New York Times. This tool, reportedly code-named Genesis, like that's not a red flag or anything, can generate news articles. So the idea is you provide the data, you know, the salient points of a story, and then this tool, Genesis, would craft the actual article. Google is apparently positioning this not as a replacement for writers, but rather as a tool for journalists that they can use to automate certain tasks as they focus on other aspects of their job. I guess those aspects would be gathering the information needed to write an article in the first place. I admit I fail to see much of a distinction here. Also, we have seen numerous recent examples showing that replacing writers with AI doesn't have a positive outcome much of the time. And with some AI models proving to be unreliable with stuff like hallucinations and drift, like we were talking about earlier, you really need a firm editorial hand to fact-check everything and make sure that the article is actually drawing the correct conclusions. One begins to question whether the AI is even solving a problem here, or if it's just creating new headaches. If you have to spend twice as much time fact-checking and rewriting an AI-generated piece as it would take for you to just craft it yourself, it's not really a solution. Engadget reports that witnesses found the demonstrations, quote unquote, unsettling. You know, that seems fine.

Oh, as for that Genesis code name, I realize a lot of folks might think about the biblical reference, which makes sense. When I hear Genesis, my thoughts immediately go to Star Trek II: The Wrath of Khan, which technically also was making Genesis a biblical reference. But in that movie, Genesis was this scientific device that could jump-start life on an otherwise lifeless planet. However, if you were to use it on a planet that already had life on it, it would exterminate all existing life and then create new life there. So obviously, in The Wrath of Khan, the bad guy gets hold of it and threatens to use it as a weapon. It just seems like using the name Genesis to talk about creating content is already a built-in metaphor when you're thinking of, like, the Star Trek II version of Genesis.

As we sit back and watch Google and OpenAI and Meta compete with one another to determine which AI tool will destroy us all, we should keep in mind that Apple has been working on their own version. At least, that's what Bloomberg reports. However, the company reportedly does not yet have a plan or timeline regarding when, if ever, it will release its AI technology to the public. I expect we will see aspects of it incorporated into existing Apple features.

I think Siri would make a ton of sense in that regard, but maybe we won't get a fully Apple-flavored version of ChatGPT or Google Bard. Apple's large language model has its own approach; it's its own thing. It's not using a language model built by someone else. It is, however, built with a Google framework called JAX, so naturally Apple's framework is called Ajax, which is sad, because if it had been called Apple Jacks, I think there could have been some really great cross-promotional tie-ins down the line. But never mind that. Again, according to Bloomberg, Apple plans to make some sort of major AI announcement next year, so maybe we will hear that Apple has its own plans to incorporate AI, or maybe even release its own chatbot, in the near future.

You can count authors as another group rising up with Hollywood writers and actors to voice concerns about AI.

In this case, the Authors Guild issued an open letter directed toward AI companies, calling out how those companies have used published works in order to train their AI models, and that the companies did this without securing permission from authors or publishers, and without compensating authors. Sarah Silverman, the comedian, brought up this concern earlier this year. She demonstrated that an AI chatbot was able to summarize and quote passages of her book, which certainly raises some copyright concerns. It wouldn't be legal for me to reproduce a copyrighted work manually, so it should also not be legal for AI to do the same thing. The authors are also concerned that AI would end up essentially plagiarizing works in an effort to craft something based on a prompt. The letter contains a passage reading, quote, "These technologies mimic and regurgitate our language, stories, style, and ideas. Millions of copyrighted books, articles, essays, and poetry provide the 'food' for AI systems, endless meals for which there has been no bill," end quote. I suspect we'll see a lot more anger and demands for compensation for the various data sources that these AI companies are using to train up their models, and I imagine it might spur lawmakers to consider new rules relating to how AI can be trained and how authors and others should be compensated for the use of their works in the context of training AI.

Now, still related to AI, but segueing over to tech business: we had a couple of different earnings calls this week that covered Q2 results in the tech sector. Tesla was one of the companies to do that, and in the call, Elon Musk again talked up the prospect of autonomous vehicles. But while doing so, he also did something that's not typical of his approach. He acknowledged that in the past he was perhaps a bit too optimistic about how long it would take to develop reliable autonomous technology, and even said that maybe he's still wrong about how long it should take. I think that's a more measured approach, particularly when we know that agencies like the NHTSA are investigating crashes that involved Tesla vehicles believed to be in either Autopilot or Full Self-Driving mode. Musk also revealed that Tesla is in talks to potentially license its self-driving technology to another automaker. He also said he believed that Tesla's manufacturing robots could end up revolutionizing factory processes, with the goal of even having them on Tesla's own factory floors as early as next year. That does sound a bit aggressive. According to Reuters, fewer than a dozen of those robots have been built so far, so it would take a lot of work to get up to speed to do that.

The Taiwan Semiconductor Manufacturing Company, or TSMC, is a major semiconductor fabrication business, one that meets a huge percentage of the global demand for chips, particularly for higher-end microchips.

TSMC is working on building out a mass production plant in Arizona, here in the United States, but recently the chairman of the company announced that it is going to be twenty twenty five at the earliest before that plant comes online. That's a delay from earlier predictions, and the reason for it, according to the chairman, is a lack of the highly skilled workers who are needed to install equipment in the facility. The company also predicted that despite a dip in demand due to macroeconomic factors like inflation (you know, we've heard a lot of reports that people aren't buying as many, say, computers right now), the company is still looking toward a very busy future, because other companies are investing heavily in stuff like AI, and AI requires a lot of compute power. So while consumer demand might be in a dip, industry demand is on the rise, largely thanks to AI.

Okay, I've got a few more stories to cover, but let's take another quick break and we'll be right back.

Okay, wrapping up the news. We've got three more stories to go. Netflix has seen some recent changes. IGN reports that the company has quietly axed one of its tiers of service, namely the Basic tier, which, for nine dollars and ninety-nine cents a month, let users watch streaming media content in standard definition but free of advertising.

So you had the ad-supported tier, then you had this Basic tier, where you're watching in standard def but you don't have ads, and then you had the higher-priced tiers. But now that option is gone, and that means that subscribers will either have to opt for the less expensive but ad-supported tier, or they're going to have to shell out the extra dough for the more expensive ad-free experience. The Basic plan is just not an option anymore. However, people who are currently on the Basic plan will remain on it until their subscription expires. Once their subscription expires, they'll have to make a choice of which tier to go with, because the one they had been using will no longer be available. Netflix had previously killed off this tier in Canada, so this wasn't completely out of the blue, and now the option has been eliminated for both the United States and the United Kingdom. And while I don't think Netflix talked about that in their earnings call, which was yesterday, the company did reveal that they saw an increase in paid subscribers to the tune of five point eight nine million customers. So I guess all that cracking down on password sharing has paid off, though the company did have to weather a lot of upset customers in the process.

Authorities in Ukraine have seized the assets of a bot farm that was designed to disseminate misinformation and Russian propaganda in Ukraine, mostly, as you would imagine, about the ongoing war between Russia and Ukraine. This was a really big, sweeping operation for the police. It involved twenty one search operations, it spanned multiple cities in Ukraine, and collectively police seized a huge amount of equipment: computers, servers, mobile devices, more than two hundred and fifty GSM gateways, and more than one hundred thousand (I think it was around one hundred and fifty thousand) SIM cards from different mobile operators in the region. The bot farm was using all this equipment to create bot accounts on various platforms in order to spread Russian propaganda, as well as to gather data about Ukrainian citizens. So, a huge operation, both on the hacker side and on the law enforcement side. And yeah, it really just shows how big an emphasis there is on disinformation campaigns out of Russia. We hear about things like this all the time. That is a big part of Russian strategy to undermine opponents, whether they are wartime opponents or political opponents. We've also seen similar things out of China. So yeah, it's not a surprise in that sense, but it is kind of just shocking to see the sheer number of components that authorities seized in the process. And you know, that's one hacker operation. There may be others that are active right now.

Canada launched a tech initiative to attract international tech workers, specifically people who had been working in the United States but who are now without a job, and thus in danger of losing their visa status, in the wake of all the mass layoffs that have happened in the tech space over the last year. And shortly after opening this program to attract international tech workers, Canada closed it. You see, the parameters of this initiative were to allow interested parties to apply, and the sign-up process would be open for a year or until the program received ten thousand applications, whichever came first. The program had ten thousand applicants within twenty four hours of launching. So I think this helps illustrate how huge an impact those tech layoffs here in the United States have had, and the challenges that people who are on a work visa face when their position gets eliminated. I mean, they're on borrowed time to stay in North America. Now, there are leaders in Canada who are calling for an expansion of this program, but others are saying that they need to take a methodical approach to make the best use of tech talent and not rush into something without having a good plan in place. That sounds reasonable to me. I think it makes sense that you want to make sure you actually have a pathway for people to follow, and not just become a collecting house of tech talent with nothing for them to do. So I do think it's important to have a plan in place. However, my heart goes out to the thousands of people who were hoping to be able to apply for this program but didn't get the chance, because it was closed out before they could get their applications in.

A lot of them may not have that long to wait before they face the necessity of leaving North America due to the limitations on their visas. So yeah, it's one of the huge consequences we have seen as a result of all the tech layoffs.

Before I sign off, I do have an article recommendation for folks. This one is in Ars Technica. It was written by Dan Goodin. The article is titled "Attackers find new ways to deliver DDoSes with alarming sophistication." A DDoS, if you are not familiar with the term, is a distributed denial of service attack. Essentially, the way this kind of attack works is that you use a large collection of devices to send messages to a target with the intent to overwhelm the target with all that traffic, and potentially either just slow it down so that it's not useful to anyone who actually legitimately needs to access that server, or just shut it down entirely, like it just can't handle that traffic and it shuts down. DDoS is kind of a sledgehammer approach to an attack, as opposed to, like, a scalpel. It's a very blunt force kind of attack.
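(To make the "overwhelm with traffic" idea concrete, here's a minimal sketch in Python of the kind of volumetric check a server might run: count requests per source over a sliding window and flag anything wildly above normal. This is a toy illustration of mine, not anything from the article; real DDoS mitigation, of the sort Cloudflare does, is far more involved.)

```python
import time
from collections import defaultdict, deque

# Toy volumetric check: flag any source IP that sends more than
# LIMIT requests inside a sliding WINDOW_SECONDS window.
WINDOW_SECONDS = 10
LIMIT = 100

recent = defaultdict(deque)  # source IP -> timestamps of recent requests

def record_request(source_ip: str, now: float | None = None) -> bool:
    """Record one request; return True if the source looks like flood traffic."""
    now = time.time() if now is None else now
    stamps = recent[source_ip]
    stamps.append(now)
    # Drop timestamps that have aged out of the window.
    while stamps and now - stamps[0] > WINDOW_SECONDS:
        stamps.popleft()
    return len(stamps) > LIMIT

# Example: a single source hammering the server trips the check.
for i in range(150):
    flagged = record_request("203.0.113.9", now=1000.0 + i * 0.01)
print("flood detected:", flagged)  # True once the rate exceeds the limit
```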

But as this Ars Technica article explains, there have been some evolutions in DDoS approaches that have made them far more dangerous than they had been previously. The standard ways to detect and prevent DDoS attacks are slowly becoming obsolete because of these new approaches, which calls for new ways to detect and respond to the attacks. There's a lot of input from the company Cloudflare, which is heavily involved in protecting clients from DDoS attacks, so I highly recommend it. Again, that's at Ars Technica, and it is titled "Attackers find new ways to deliver DDoSes with alarming sophistication." Once again, I have no connection to Ars Technica, and I do not know Dan Goodin. I've just read lots of Dan's articles, but I've never talked to him. It's simply one that I thought was interesting and worth your time, if you want to read something really interesting and certainly more than a little alarming.

All right, that's it for this episode, the tech news for Thursday, July twentieth, twenty twenty three. I hope all of you are well, and I will talk to you again really soon.

TechStuff is an iHeartRadio production. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.
