DeepSeek Disruptions, NVIDIA Vulnerabilities and More: Cyber Security Today Weekend Panel for February 1, 2025


Feb 01, 2025 · 48 min

Episode description

Cybersecurity Today: DeepSeek AI Disruptions, Nvidia Breach, and TalkTalk Hack Revisited

In this weekend edition of Cybersecurity Today, our panel reviews the most significant cybersecurity stories of the past month. This episode features Laura Payne from White Tuque, David Shipley from Beauceron Security, and Dana Proctor from IBM. Key topics include the sudden emergence of DeepSeek AI, Nvidia’s vulnerabilities and their effect on stock prices, and TalkTalk’s latest data breach. Additionally, the discussion covers the soaring API security vulnerabilities reported by Wallarm and the UK’s potential legislative action on ransomware payments. Stay tuned for expert insights and analysis on these pressing issues in the world of cybersecurity.

00:00 Introduction and Panel Welcome
00:41 DeepSeek AI Disruption
02:09 Security Concerns and Reactions
04:06 NVIDIA's Vulnerabilities and AI Security
07:15 Economic and Geopolitical Implications
12:13 AI in Business and Security Practices
20:57 Open Source AI and Cybersecurity Risks
25:37 Responsibility in Data Management
26:25 AI's Unstoppable Progress
26:53 API Security Concerns
28:41 Non-Human Identities and API Challenges
30:36 The State of Cybersecurity Awareness
35:05 Legislative Hopes and Cybersecurity
37:25 TalkTalk Breach Revisited
44:10 Ransomware Legislation Proposals
45:34 Shoutout to Cyber Police
47:04 Closing Remarks and Audience Engagement

Transcript

Welcome to Cybersecurity Today, our weekend edition. This is our regular month-in-review show, where our panel looks at some of the key stories from the past month. And today we have back on the show Laura Payne from White Tuque. Welcome, Laura. Thanks, Jim. And we have Dana Proctor from IBM. Welcome, Dana. Pleasure to be here. Hello. And we have resident guest David Shipley, our artist in residence. Yeah. And culture critic.

Also manages in his part-time life to be the head of Beauceron Security. Welcome, David. Thanks for having me. Good to have you guys here. I want to talk about the news that's come up, and we'll jump in and present the stories that we want to present. If anything happened this week in the world, it was DeepSeek. And for anybody who has been living under a rock, I'll just do a quick analysis of this.

In the middle of the weekend, somebody announced this, and we paid attention. It's not that they came out of nowhere: DeepSeek had been offering a version of what would be ChatGPT-4o, or something like that. And all of a sudden they dropped in a new model, which was at least as good as, or probably better than, anything else in the marketplace. It was a learning model. It was absolutely state of the art. It was online in China, but they're giving it away free. So it's free.

It's open source, so anybody can download it. And so you can get a copy of this for nothing. It costs 98 percent less to run than OpenAI's. And OpenAI came in, and they were incensed about this. They said, oh, this is a great challenge in the market, we love competition. Then they said, wait a minute. They used ChatGPT to train this model. So they're stealing the information we stole. Fair and square.

And Microsoft got behind them, was incensed, and put out a note supporting OpenAI. Of course, the next day Microsoft put DeepSeek on Azure. So I'll do the easy shots on this. First of all, there was a great security threat here.

This, as I've said before, is the great shadow IT. And if you have people in your office who actually got on there the first day and were putting corporate or personal information on a server in China, from software they'd barely heard of, then forget the training. I think you should take their computer away, give them an Etch A Sketch, and just leave them there. Cause that's just over the top, because everybody was up in arms as well.

And then, this software, not surprisingly, got hacked. Because there are a bunch of guys who do math, or ladies who do math. Actually, it's a 29-year-old lady who is one of the brains behind this. And they're all great. And who hasn't had that in their shop, where you've had a lot of people working on a system who were very technical and ignored security? They really ignored security. Apparently they didn't have a password on their database.
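To make the "no password on their database" point concrete: a database that speaks HTTP and has no authentication will answer queries from literally anyone who finds the port. Here is a minimal, hedged sketch of the kind of check a defender or researcher would run; the endpoint URL shape is hypothetical, and the status-code classifier is the part that generalizes.

```python
import urllib.request
import urllib.error

def classify(status: int) -> str:
    """Map the HTTP status of a credential-free query to a verdict."""
    if status == 200:
        return "EXPOSED: query answered without credentials"
    if status in (401, 403):
        return "auth required (expected)"
    return "inconclusive (HTTP %d)" % status

def probe(url: str, timeout: float = 5.0) -> str:
    """Send one unauthenticated request to a suspected database endpoint,
    e.g. http://db.example.internal:8123/?query=SHOW+TABLES (hypothetical)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as e:
        # Server answered with an error status; classify it the same way.
        return classify(e.code)
    except OSError:
        return "no answer (port closed or filtered)"
```

The point of the sketch is how low the bar is: if `classify` ever returns the EXPOSED verdict against your own infrastructure, anyone on the internet can read that data.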

And so they're going through the growing pains on security right now. Actually, somebody let them know this week how easy it was to jailbreak this. So this is where we are. I just want your reaction to that, panel, because I'm a big supporter of open source. I think this is great. But the security mistakes that have been made this week are a year's worth in one week. I'll take the first stab at that one. Not surprising that something would emerge like this.

Now, when everything is being given away for free, you always have to question what's the motivation behind the free, right? And something really interesting kind of popped out of NVIDIA's woes, of course, around this, having crashed in stock price over this release of an alternate AI that doesn't require all their GPU compute. But also the fact that they had to disclose seven vulnerabilities this week that require updating. So you wonder, was there more to the timing on this than just.

Hey, here's some fun new tech, go play. I don't know. How do you mean? First of all, the seven, was that NVIDIA? NVIDIA has seven vulnerabilities with patches out for them. Three are high severity. One of them allows arbitrary code execution if you exploit it. And it is exploitable. I haven't dug deep enough to see who is actively exploiting it, if it's actively being exploited. But yes, the proof of concept seems to be possible.

So yeah, we've got a double whammy for Nvidia on that side of things this week. Dana, what did you think of this week? It was a fantastic week, let's just say, from an AI perspective. This has been a pursuit from an IBM perspective for years, so this absolutely reinforces that we're on the right trajectory, we're on the right path, that AI is here for the future. But in a lot of ways, I love exactly what you said, Laura, that it amplifies that: wait a second. If it's free, why?

Why is it free? But larger than that, it reinforces what we know is best practice. We know best practice is securing, and I'm just going to stop right there. Securing AI is a conversation that we're not well equipped for in so many different ways. We're not well equipped in securing data centers right now. That's why we continue to have data center breaches.

What it's reinforced to me, in a lot of our client conversations, is actually securing, and we can get into how: securing the actual machine learning models, securing the platform or the security service that it's running on, and actually securing the objective of the large language model or the small language model. So there are many aspects of how to secure all of that. I love that it's now a conversation, because people are saying, I could use DeepSeek.

Hold on. As an individual on your own phone, looking for some advice, that's one thing. Using it as a business, in a business context, hopefully it gives people some pause. It is an open source program. That's the beauty of it from a security point of view: the code can be exposed, people can look at it, we can improve it. But talk about rushing it into your business this week, that might be a bit of a problem. Let's talk about that too, using it.

The foundational benefit of using AI, and I think we talked about this last time, was that when we input the right formula into a calculator, we have strong confidence in what we're going to get back. The challenge with AI is we don't know if there's exfiltration. Are they injecting code? Are they changing how the components that are in the AI model are being conducted?

That's where we need to focus on the whole pipeline of an AI, and I'm hopeful that this, I'll call it situation, amplifies that conversation of the whole pipeline: secure the pipeline, the machine learning pipeline, secure the model, secure the platform, secure the actual application that you're using for it. So one of the questions I had in my mind is, in criminal investigations there's this fascinating Latin term, cui bono: who benefits?

And so it's been interesting, as the week's gone on from the massive impact on that stock, to do an analysis back on short selling. Short selling is when you take a bet that a stock is going to perform poorly. And short sellers made $6 billion plus on Monday when Nvidia's stock tanked. Now what's interesting is Bill Ackman, who's a famous billionaire hedge fund manager in the U.S., is asking some interesting questions.

A lot of people don't know that DeepSeek is actually a subsidiary of a Chinese hedge fund. They started as a bunch of quants doing analysis. So all of a sudden they get into making an AI, a highly disruptive AI model, and they time the release so it impacts the stock market with maximum fear. It raises interesting questions that I'm sure the SEC investigators are all over. But they wiped out a trillion dollars worth of value. And that's the other geopolitical, strategic implication.

It wasn't the only announcement that weekend designed to rattle the United States. You had this fusion announcement, and there's been a constant stream of fusion news coming out of China to rattle the Western world about possible Chinese dominance in clean energy and in AI. I think we can't look at all of this in isolation from our typical security silos, and we should say, okay, follow the money.

There was a lot of money made on shorting this thing, and who knew what, when? These are good questions to dive into. And then, what was the broader context? America is embarking on an America-first foreign policy. And that is going to invite a response back in various ways. I think we just saw the Chinese respond using an economic weapon, in a very intelligent way. What's fascinating is next, if I go into the user behavior thing, and let's call this the TikTok effect.

Yeah, before you go there, let's just wrap up the thing on the investment side: this is an investment company that put this forward. You're absolutely right. And this was a side project for them. This is what they did in their spare time, which is the astonishing thing. So you're absolutely right. This is an investment company, and this was a side project for them.

And the exuberance, the up and to the right, that all AI is going to require ever more consumption power and ever more expensive compute, which fueled NVIDIA's amazing ride to the stratosphere, was always ludicrous. In every other field of computer science, we've always figured out how to reduce the cost and improve the performance. They're like, no, not this. This is going to be up and to the right, infinitely, forever, 500 billion Stargate data center. It's going to be amazing.

And all of a sudden it was like, maybe not. Assuming for the sake of argument that DeepSeek is actually a genuine leap in performance on a compute basis, and not the result of distillation, which is a fun term that I learned on the weekend doesn't just apply to whiskey. It applies to how you abuse APIs on an AI chatbot to learn how it works, to shortcut the $100 million training cycle, allegedly.
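If distillation really is bulk harvesting of a chatbot through its API, the unglamorous defensive counterpart is per-client rate limiting. A minimal token-bucket sketch in Python; the rates and capacities are illustrative numbers, not any vendor's actual policy.

```python
import time

class TokenBucket:
    """Per-client token bucket: each request spends one token,
    and tokens refill at `rate` per second up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # burst ceiling
        self.tokens = capacity      # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if this request is within the client's budget."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per API key; a scraper hammering the endpoint drains
# its bucket and gets throttled while normal clients are unaffected.
buckets = {"client-a": TokenBucket(rate=1.0, capacity=10.0)}
```

It is worth noting this only raises the cost of harvesting; a determined distiller with many keys routes around any single bucket, which is why the panel's point about identity (who is this client, really?) is the harder half of the problem.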

But what's interesting is, if it was a leap forward, it's because of the American embargo on NVIDIA's latest and greatest chipsets. They weren't able to use the latest stuff, so they innovated. And under that argument, what I would suggest is that it was inevitable that someone was going to challenge the up-and-to-the-right-for-infinity compute and AI chip requirement. And in some ways, maybe this was a small mercy that the bubble burst now, versus later.

So maybe it'll turn out to be a good thing. But that's just on the economic side. There's also a bunch of people who have learned new words on this. I always love it when you get newscasters who learn about things. There's a thing called Jevons paradox, which everybody's now touting. Jevons paradox goes back to the industrial age and says: when we brought steam engines on and made them more efficient, people bought more steam engines.

And so now everybody's trotting this out going, if we drop the cost of AI processing, there's going to be more usage of it. I think Nvidia is gonna bounce out of this, and not poorly, because of Jevons paradox. I still don't understand, and I think where the Chinese might have wanted to do some damage was, you've got OpenAI, where Microsoft's invested like 10, 20 billion dollars. All kinds of people have invested billions of dollars in this.

They're gonna lose four to five billion dollars. That's why there's been a lot of hand-wringing saying, this is cheating, they can't really do this, only OpenAI can do it. And if OpenAI could have done it, they wouldn't be losing 5 billion this year. So this is a major hit, but I want to bring it back to the security piece of this, because I think it was a wake-up call in many aspects, because it was the big one that happened. There are other AI announcements.

We keep bringing these AI things into our shops, and we have no proper way to bring AI in. And can you hack these things? You could hack OpenAI. These things are quite porous. In many cases there have been a lot of hacks, and we'll talk about that, because APIs have been where the hacks have been very big. But we still haven't figured out how to bring this into a corporate environment. That's a problem. Dana, I think you were alluding to that. Yeah, it's a problem.

It's a problem when we're leaping before we're thinking. Unfortunately, I think with mass adoption, security is becoming the afterthought. But when it's planned out, when it's used within a conscientious approach, we're seeing great success. But that's the push: I need the efficiencies, I want the benefit, I want the speed. We sometimes act far quicker than we should. We all know best practice; doesn't mean we always follow it, right?

Best practice is: let's look at this before we just bring it into our network, before we just open it up and allow our developers to use tools that we have not vetted, that our third-party supply risk programs haven't assessed. The other piece of this, and that was one of the things I said when I did the story this week: you shouldn't be bringing any piece of software into your organization without it being vetted, first of all. It doesn't matter where it came from, China, New Zealand. It doesn't matter.

You shouldn't be bringing in a piece of software with no vetting whatsoever. But the other piece, and this is the learning lesson that I think everybody should pick up, because you can blame these guys who had this side project and didn't secure it properly: how many test environments are there out there that have lax security? Where people are saying, it's just a test environment.

And there have been some big hacks where people remembered, oh, the test environment is hooked to our other network, or, I used my office PC on this insecure test environment and then trotted it back to the office. If there's one big learning lesson out of this, it's the one we've talked about: you don't bolt security on afterwards, you build it in. We have to get that right. We have to start getting that, yeah.

I love the advent of this: before, we talked DevOps, and then we started talking DevSecOps, right? Put security right in the middle. Now it's AI or ML DevSecOps. Security, to your point, brought through that life cycle, so it just becomes something that we are.

But, if I digress for a moment: talking to some potential early professionals and graduates coming in for internships, any guesses on how long they spend in their computer science programs right now on cyber security, much less API security programming? David, you jumped right to the end. Yeah. Because it's more important that we still teach really old programming languages, because it's easy, and we remember how to teach those. Dana, you're onto something: modern computer science

programs in Canada need to be teaching ethics, need to be teaching critical thinking, need to be teaching security by design, and thinking about assertiveness training for programmers, who can become more confident in saying: Hey, I understand we're on a deadline and we're in a rush, Mr. Project Manager, Madam Product Manager, but here are the concerns I have about, okay, we're just going to get DeepSeek to write the last bit of this particular module and just slap it in.

And we need to get this rolled out. We need to get it out. We need to release. The Facebook move-fast-and-break-stuff methodology combined with AI-generated code equals a lot of work for firms like Laura's, I think. And the propensity of these things is to take what is out there and amplify it: if 99 out of a hundred programmers make the same silly SQL injection mistake, and that's the 99 that any model has ingested, guess what you're going to see? The same SQL mistake.
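The SQL injection point is easy to make concrete. A minimal Python/sqlite3 sketch contrasting the "silly mistake" a model may have ingested thousands of times with the parameterized version; the table and rows are invented purely for illustration.

```python
import sqlite3

# Toy database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'staff')")

def find_user_unsafe(name: str):
    # The classic mistake: user input spliced straight into the SQL string.
    # An input like  x' OR '1'='1  rewrites the query and returns every row.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameterized query: the input is bound as data, never parsed as SQL,
    # so the same payload matches nothing.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

Feeding the payload `x' OR '1'='1` to the unsafe version dumps the whole table, while the safe version returns an empty result, which is exactly the difference a code-generating model will not appreciate if its training data is full of the first pattern.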

I don't know, Laura, if you have any thoughts about where this fits. Oh, I think so. Where I was going to take a tangent on this was in the direction of: we've talked a bit about programmers, people who notionally should know what they're doing, although I think we know there's a broad spectrum of programmers, and the first priority is make it work. There's some appreciation for that as well. But I really like that thought around

requiring ethics and security principles as part of our programs. But we also have a lot of people who are not technical at all who are adopting these technologies. And the fact is that with button clicks you can implement a lot of these in your environment, or you can access them without bringing them into your environment. I think that's where there's a lot of gray area that people don't understand. And to Jim's point earlier: this is the shadow IT of shadow ITs, right?

It's so easy, if there aren't any controls and there isn't good practice and communication with employees about what's acceptable and not acceptable, for anybody in your organization to start funneling files into these services. And they're just trying to do their job for the most part, I presume, right? 99 percent of actions taken are just to get something done, and they're done with good intention. There's still the 1%.

And that's not a scientific number. I don't know that many people are using AI as their exfiltration method of choice if they're an insider threat. But there's just a lot of people who

see the opportunity, and they're willing to just set aside that fear or that concern, because the deadline in front of them is more important than that really difficult task of both understanding how to decide if AI is trustworthy, and then the work of vetting the AI to see if it's trustworthy, because the vendors don't make it very obvious. There's a lot of paper policy around how we do things. They say, this is how we do it, but then there's no actual how within it.

It's just statements that we've made it secure, and we don't move your data around, and we don't allow this and we don't allow that. But it's harder and harder to find what the mechanism is that they use to make those things true. And my security spidey sense always goes off. I hear lots of promises, lots of we-don'ts or we-do's, and not a whole lot to back it up, to say: this is really how we do it, in a way I feel I can trust.

And you raise a really interesting point about the people bringing this into the workplace. What I was thinking about was: the first major disruptive technology that really turned over the IT central-control apple cart was the iPhone. Before that, smartphones were carefully issued by the IT department. God love them, they were BlackBerrys. Fantastic Canadian company. And we had control. But then the CEO saw the iPhone and wanted to be cool.

And they were bringing the iPhone in, and you had no choice, you were supporting the iPhone. And then everyone saw the CEO had the iPhone, so they wanted the iPhone, and here we are today. And we now have the app universe. And how many organizations have good control over that? And so what was terrifying was DeepSeek was the number one app on the App Store. It was that lemming moment where I was just like, if all your friends were jumping off a cliff, would you jump off a cliff?

And it turns out, if it's an AI thing I can play with on my phone, yes. The teenage answer is, I would jump off that cliff. Isn't that a great commentary for our society right now, though? It's the populist, if I can get that word out, but also the advent of the dupe. How many things do we see online of, well, you can get the dupe of this, the cheaper alternative that looks like it? This essentially is the dupe of the GPTs. Don't get me started on the app stores.

Cause as soon as this app was out there, there were like clones of it, and who knows what those things do? I no longer have faith in all the great filtering that's supposed to happen for the app stores. There's just app after app that is a duplicate or a takeoff. And what do those things do? I want to bring you back, though, because there's another piece that I think people should be thinking about very carefully. I'm an open source guy. I love open source.

I think it's a great thing to happen, but you now have open source software that is as good as anything on the market, and every cyber crook in the world, every hacker, has access to it. And that's gotta be something that people worry about from a cyber security point of view. Oh yeah, there's already been a copy made so that it can turn out malicious code and turn out phishing. There's a reason why phishing's up 250 percent in 2024. This is bananas. Phishing wasn't small

before you had a 250 percent increase in it, and in the quality. Vintage phishing. Damn, man. Of the 99 problems we had, better phishing wasn't one. So there's, yeah, absolutely, that mess of it, which goes back to the whole AI arms race. And my favorite Ian Malcolm quote from Jurassic Park, the speech where he turns to John Hammond and says: you didn't earn the science of genetic manipulation.

You stood on the shoulders of those who earned it before you, and you said let's do it, not whether we should do it. And I feel the same way about the AI madness race right now that I did about the point Michael Crichton was trying to make about the bleeding-edge science of the 1990s, which was genetic manipulation. That's where we stand. Is there going to be another company that can now do it for 3 million, and they're going to race?

Yeah. This is the start of that particular thing. But the more that this gets open sourced, the more that this is going to be abused. I can't believe I'm actually coming out in support of OpenAI. Not that, OpenAI, if you're listening, I believe you're being ethical or responsible, FYI, but they are at least not open sourcing it. So, yay. Don't blame the Chinese, blame Mark Zuckerberg. It's Llama code that this new DeepSeek was based on. You can thank Mark for that one.

Although, like I said, I think open source is a good thing, but again, we have to be prepared for it. We don't open source the 3D printer model to make AK-47s. Not yet, but you can make a pistol. And I think that responsibility is a really great conversation, right?

In reading a lot of the reports of everything that came out yesterday, a few things struck me. One of the comments that I read, and this plays into a conversation we've had before about fatigue and apathy, was: sadly, our data has been anyone else's data for years already. So that bothers me. That concerns me. Because they're not wrong, but at the same time, surely to goodness we can stop the proliferation.

And I think, to your point, that's where releasing these, and I'm just going to keep calling them dupes of AI, generic GPTs, comes in. We need to be better custodians, and consumers of that as well. My ability to do something faster, quicker, more efficient, and Laura, to your point, getting it out sooner, is a great objective. Getting it out and leaving a path of destruction for anybody that uses it is not really the definition of success, or certainly not long-term success.

So that's why I'm loving the conversation now. This AI is something everybody, from a 97-year-old grandmother to my children, is using. I know people in the construction industry that are using AI. I know people in agriculture using AI, because they want to see: can this improve some of my mundane tasks? Can it give me a leg up and let me do things more than I did before?

Those are brilliant reasons, but it's created cottage industries, and one that we're already in within my business: protecting the AI model so that you can trust it. Doesn't mean I can't get somebody off of Facebook Marketplace to come and do the electrical for my new home, but will it pass building inspection? Probably not. I still like the analogy of:

Think of AI as an intern. And yeah, we've graduated to some really smarty-pants interns here, but would you let them run around in your business and do whatever they feel like, with whatever they can get their hands on, at the beck and call of anybody in your business, and hope that maybe they actually produce something of value? I don't think so. And maybe the same principles need to be applied here, right? If you bring one of these in, it's on you, right?

And it's your job that's on the line if that person screws up, and you need to manage them, and you need to know what they're doing, and you need to be responsible for not misdirecting them into abusing resources. And I think if people thought of it that way, okay, they're responsible for what they bring into the organization, or what they share their data with, and they have to take responsibility for directing and managing it, maybe they'd stop and think a little bit more.

Yeah. We're going to have to come up with a better strategy, though. This is not going to go away. I still remember the days when I was a project manager, then I was running projects, and project managers would come fresh from their training and give me the speech that said: you can have two of the three, faster, better, cheaper. You pick two. I'll pick one. I said, oh, here's the one I'll pick: you don't work here anymore. I said, if that's your only game, forget it.

I have to move fast, I have to do it quick, and I have to do it well. And I think we have to accept the fact that the AI world is out there, and we're going to have to find a way to cope with it, because they're not going to stop. Nobody's going to put their hand up and say, okay, pinky swear, we'll hold off for six months. Not going to happen. Yeah, Elon tried that. Remember, at the start of the great AI mania it was: hey, can we have a six-month pause? So I can build my own.

So I can catch up. That's never going to happen again, kids. There's going to be no pause. Another big story I want to bring up, and then I'll drop it to you guys, because I'm monopolizing this, but I think these two AI ones were the big stories. The other one: a firm called Wallarm, that I hadn't heard of before, but they do API security. I'm not doing a commercial for them; I have no idea if they're good or not.

But they published a great report this week on APIs and made some stunning claims: a staggering 1,025 percent increase in CVEs over the last year that were attributable to APIs. I have to say that I've done a lot of work in APIs. I don't think we did as much thinking about security as we might have. Knowing everything I know now, I might have insisted on some things differently. I don't think that we made them insecure, but I think we all bought the API economy

thing, and I don't think that security kept up with it. And this report seems to be saying that we have a huge problem with APIs. It ties together with what's driving most of the communication with AI: APIs are a big weak spot. Was anybody else as shocked by this as I was?

No. The interesting thing is there's been this whole new term. Normally I hate when the security industry decides to reclassify and come up with a new acronym, because we need new acronyms like I need to be balder; it's just a non-starter, right? But in this case I'll accept the non-human identity, the NHI, argument. The core of APIs, or at least any API that's worth a damn, is the idea of some kind of identity and access control schema applied to it.

And this is really freaking hard. Are you talking about tokens, and then what's the rotation? Is it certificate-based? All kinds of fun things. Non-human identities are to the internet what IoT is to humans. There's what, 50, 100 billion devices now on the internet, compared to the 8 billion plus humans, right? Non-human identities far outweigh all of this. We suck at identity and access management for the smaller amount that is still carbon-based. We really suck at non-human identities.

That's problem number one. And problem number two is that API security takes a degree of sophistication and cleverness, and it used to be a scaling problem. You'd have to find some super clever, I-drink-Jolt-and-caffeine-18-hours-a-day pen testers, and they could do some really clever things with APIs. But thank you to AI, and I guess that's our segue over from that to this.

You now can have the bots work those API endpoints, running all kinds of different experiments until you figure it out. So yeah, this is just our inability to manage identity and to code securely coming home to roost. That's my summation. I feel like APIs were built with security by obscurity as the model: nobody will ever guess my one key that allows access to everything, right? Anyway, that's the state of things. And it's finally catching up.
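To make the "one key that allows access to everything" point concrete, here is a minimal Python sketch contrasting that pattern with short-lived, HMAC-signed tokens. The secret, the TTL, and the token format are all illustrative assumptions, not any particular vendor's scheme; real deployments would typically reach for an established standard like OAuth 2.0 or signed JWTs instead.

```python
import base64
import hashlib
import hmac
import time

# Anti-pattern: a single static key checked with string equality,
# valid forever, shared by every caller. One leak exposes everything.

# Sketch of the alternative: per-client tokens that expire and are
# signed server-side, so a stolen token is only useful briefly.
SECRET = b"server-side-signing-key"  # hypothetical; never ship in a client

def issue_token(client_id: str, ttl: int = 300) -> str:
    """Mint a token binding a client id to an expiry time."""
    expires = str(int(time.time()) + ttl)
    payload = f"{client_id}|{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str) -> bool:
    """Reject tampered, malformed, or expired tokens."""
    try:
        payload_b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(payload_b64.encode())
    except Exception:
        return False
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):  # constant-time compare
        return False
    _, expires = payload.decode().split("|")
    return int(expires) > time.time()
```

The design point is the one the panel is making about non-human identities: the server can now say which client a request came from and revoke it by time, instead of betting everything on one guessable, never-rotated secret.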

But we're still building more APIs, apparently, based on security by obscurity. I don't recommend it: zero out of 10 for security. Yes, bad review. Bad Google review for that one. Yeah. I was surprised by that stat. Is it the number one attack surface? I don't know about that. I dare say, our threat intelligence index will be coming out shortly. Last year's was certainly around the misconfiguration of hybrid cloud.

And hybrid cloud being the most prevalent configuration of any of our organizations. But the other one, and David, I'll absolutely align with you on this one, was the reuse of existing IDs, right? Logging in instead of breaking in continues to be the prevalent approach that hackers, nation states, or our next-door neighbor having some fun are able to take, and they continue to do so. So: hygiene around our identities, and the configuration by obscurity, as you said, of APIs.

But dare I say, and I'm with you, Jim, a lot of what we've ever done around APIs does almost rely upon the nestedness of it, that there are other compensating controls around it. So it's almost: if you can find me, you can probably extort me or exploit me. The proliferation, I think, is amplifying it, but I would challenge that I don't know that it's the most attacked. I still think the hybrid cloud, or the proliferation and the lack of configuration controls in the hybrid cloud, is the bigger one.

You're not implying that because this company does API security, they would make this a bigger deal? I'm going to be talking with the CEO, I actually got an interview with them this afternoon, and I was going to raise that minor point. And I'm not dismissing it. I think this is an important thing to bring up, and it's something we don't think about a lot. But maybe just everything went up by a thousand percent. I think that's a brilliant point, right?

What we're seeing from our X-Force team: last quarter we had a 26 percent quarter-over-quarter increase in ransomware attacks. Was the root cause of those the exfiltration or the exploitation of an API? I don't think so. Third-party validation, I'd love to hear more. But here's the other thing. I don't doubt it is heavily attacked. Is it the area where, if I was a CSO of an organization, I would say stop the presses, we need to change course?

And address our APIs? And I think this is like the maturity we talk about within security shops. And remember, folks: criminals are smart but lazy. Smart but hardworking, they usually go into tech; smart but lazy, they go into crime. Mostly, excluding some brolegarchs. But the point here is, say you really want to target X, right? And they're a bank, or they're a medical facility. You're going to start with phishing, because why wouldn't you? And you're going to see, okay, how far can I go?

And then you're going to use credential reuse, maybe even start with it. Do I have these creds from a broker to begin with, who's already done the phishing? No. Okay, do some phishing myself. No, no creds. What have they got exposed, unpatched? Oh, they're running Exchange unpatched. Sweet, and away we go. Okay, looks like everything they've got is patched. They got any custom applications? Oh, yeah. Let's poke the APIs. We almost need to do a Maslow's hierarchy of attacker needs.

At the bottom, ye olde I-already-got-the-creds-from-somebody-else. It's the food, water, basics kind of thing. And then work your way up. And then when you're getting into APIs, you're getting into self-actualization. You're getting into the really hard stuff. You really want to do it, but thanks to AI, it's getting easier. I don't know. So if that becomes a thing, Maslow's hierarchy of criminal needs, cybercriminal needs, I claim that. Shipley's hierarchy of criminal needs.

I am amazed and know not what to say, to quote Shakespeare. Who else has got another story for this week? Laura, you look like you're thinking. I already burned my story, and not that it was much of one anyway, but I'd just pick up on Nvidia and the relatedness. Oh, yeah, no, we should highlight that. There's always somebody who's got something every week, right? So it's not like Nvidia is special or unique in that they have some vulnerabilities posted this week. It's just really bad timing.

Like, the thing about the stock market is it's so emotional, it's amazing. And I say this as someone who's now studying psychology and neuroscience, right? And just now, when you get a panic, you're like, oh, and they've got insecure shit, oh my God, even more panic. And knowing what we've read about Jensen's leadership style, that the guy has his hands deep into everything, we should almost have a Jensen watch: like, man, I hope he's okay.

Like, this has probably been a shitty week for him. So that's my shout out to Nvidia and to Jensen. I hope you get a weekend off and I hope the stock price recovery has been good. In terms of other news, what I am hearing within Canada, and I'm going to put this in the faintest-hope Hail Mary of Hail Marys, is that there may be a shot at resurrecting Bill C-26 with all-party consent.

The theory of this particular miracle on the hill, should it actually emerge, is that sometime after the speech from the throne drops, there's a couple of days before they get to vote to knife it. In those couple of days, with unanimous consent, all they need is a single vote and they can pass it. Which I would like to encourage any of the parliamentarians who may be listening to this, which is probably an audience in the low single digits, but I'm ever so hopeful.

Please pass this legislation. If you are a political staffer or politician, before you put your political interest into the next thing, or if Parliament survives by the grace of Donald Trump's tariff apocalypse, could you please just take five minutes to pass critical infrastructure laws, because we're going to need them.

That's what I'm hearing from conversations in and around the hill, and that's why there seems to be some buy-in from all parties, which is positive. I did an interview with Senator Colin Deacon just this week. The buzz from him, because he's talking to a lot of people, is that there is a resentment, or at least a bit of anger, about the fact that they didn't get this bill passed. I think people like you, David, you've been doing open work on this.

And I think Colin has been as well, in highlighting that this is the ultimate stupidity. We should have passed that legislation. I think everybody going into this election may be looking at it and going, let's get this done. Let's not look like we're fools. So maybe there'll be something out of that. My heart's been trampled by this bill so many times, and here I am holding out a rose one more time for hope. Never fall in love with cybersecurity. It'll only break your heart.

Dana, did you come up with anything this week that twigged at you? Or did we talk it out through all the rest of this? No, I have one, but I will just remark: Bill C-26 should be the ultimate nonpartisan bill. It really is a no-brainer, as much as I know I voiced some of my concerns with how it's written. It needs to be passed. I will stay optimistic, David, that nonpartisan minds will prevail and we'll get some movement in Parliament.

But the story that occurred to me was of interest to me, and I imagine it'll be of interest to others, because it was back in 2015 that there was the initial hack of TalkTalk in the UK. TalkTalk is a phone provider in the UK, and in 2015, when they were hacked, it was one of those pivotal moments that I recall in my career, where it finally hit home to a lot of people, a lot of consumers.

What do you mean they had access to my credit cards, my email address, my phone, my banking, right? It really exposed people. And what did I read in the news? I think it was two days ago. TalkTalk has been breached again, this time about 19 million records. Initially, in 2015, it was apparently a kid in their basement, if stories are to be believed, an amateur hacker that got access to a lot of client data.

This one, if I've been reading reports correctly, is more that it was a supplier to TalkTalk that was compromised, which again speaks to third-party security. And that's where, when I look at things like AI, there are just some common threads. But yeah, Jim, that one, 10 years later, TalkTalk. It might be the same guy, still living in his parents' basement. You never know. Would it be the first time, though, that somebody got hacked and then somebody came back and hacked them again after that?

This is frequent. The cat came back the very next day, I believe, is the song. But, okay, there's a couple of things interesting about the TalkTalk breach. First of all, blaming the third party or supplier is like the new Fugitive defense: it wasn't me, it was the one-armed man. Nobody cares; Tommy Lee Jones takes you down anyway. That's how your customers feel about it: we don't care, we just want you to make it right.

I was on a panel earlier this week, and a panelist made the point: you can try and transfer risk, but you can never transfer accountability. And the fine in 2015 was 400,000 pounds. So now, what's this one going to be? The research-brain part of me has seen some really interesting things in the psychology of organizations with respect to breaches. One thing I've noticed is this concept of breach fatigue. A breach happens, and it absorbs all management oxygen and energy and money.

And it's typically a 12- or 18-month thing: top of the agenda, etc. Now, what's interesting is, 36 months or so after, it's at the exact bottom of the agenda. It's almost, if you've ever had a sugar high and then crashed from it, that there's this really interesting attention pattern. And what I wonder is, is this the first example, at a 10-year scale, of a high-target organization?

So this idea of sustaining cultural change and management interest in investing in this: the longer the time horizon grows, the harder that gets to actually do. There's an attention-decline thing. And I think this is a tremendous opportunity for management science. And the reason why I say this is that it's not going to come from the computer science cybersecurity researchers.

This is why I believe business schools should be spending time thinking about management issues related to cyber, and studying these issues with Harvard-case-study kinds of examples, because this is what it means to build resilience in management thinking around these issues. So, Dana, I love the 10-year arc of this, but it also supports the anecdotal examples that I'm seeing more and more, that this becomes a trend, or, as the Battlestar Galactica reboot series has it, all this has happened before, all this will happen again, which I really hope isn't the case.

And one of the things that occurred to me in reading this: back in 2015, I imagine the compromise was exploited through a lack of controls. Undoubtedly it was. Today, to your point, I imagine those controls are significantly bolstered. But it begs the question, and I was not paid to ask this: what's the cyber awareness training on the people that are actually bringing in the suppliers? Because, fair enough, the third party is the scapegoat, but who's vetted them? Who's vetted their engagement?

Is the third party integrating into the ecosystem someone's job, someone who actually has strong cyber awareness, or who knows what best practice would be? And that's why there's still so much Wild West to securing an environment. I think if you asked all of us, we'd all have a different opinion, hopefully similar, but slightly different, of what our priorities are.

And this one, as we've talked about before: they found a vulnerability, if we believe what's being reported, in a third party, and got access to nearly 19 million customer records. And what's interesting is the indirect and direct costs, according to a 2019 BBC article, were 77 million pounds. Wow. That's a lot, back then in 2019. And that's what they estimated: the revenue impact, the cost, the fine, which, by the way, was a whopping 400,000.

So I'm sure someone really sweated that one out. What's interesting is, if 77 million pounds doesn't create an enduring security culture, what will? But I do think your point stands: so many organizations treat awareness as check, check. They know passwords, they know not to click the link, we did the phishing test, we're done, great. But what we're actually finding is that the work of understanding attitudes and perceptions, and monitoring and shaping them, matters. And it doesn't cost 77 million pounds; it's a hell of an ROI. And Laura said it best earlier: this is not about check marks. We have too much of that in this industry. Yeah, checking boxes doesn't make you secure. I think, just on the note of the fines, if we look at what's been happening with the SEC fines in the last year and a bit, they're certainly increasing. You'll see multiple rulings over a million dollars. There was one that just landed in the middle of January for $45 million.

It wasn't exclusively due to cyber issues, but certainly the lack of stewardship over data was a major component of that level of fine being levied against the investment firm that received it. So there's some hope that the fines and penalties are starting to get serious enough to convince people that they actually need to pay attention, if for no other reason than they really like to keep the money they have instead of spending it on fines with the SEC. Yeah.

David, I'm going to let you wrap up with the final story. You got a story for us? Yeah, so the UK has put forward a legislative proposal. They haven't actually put forward the legislation; they've said, this is what we're thinking about ransomware, and they have until April 5th. They're addressing three possible proposals. They're saying: what if we banned ransomware payments outright? What if we required mandatory ransomware payment reporting? And they're giving people a chance to provide some feedback. There are some excellent case studies and examples in there. Obviously, I'm on team we've-got-to-cut-the-money-off. If this is a money-making machine, they're going to keep making the money. I think it's worth considering.

I think the idea of staging it, so maybe year one is everyone reporting what they paid, which, even in that moment of having to report, people are going to go, do we really want to do this? Great, if we can create some disincentives and frictions around that area. It'll be interesting to see if they do this. The Australians are on this track. This is something I talked about in Canada with federal stakeholders, but the blinders were on getting C-26 passed. I will say this:

In an era where Canada as a haven for money laundering is becoming an increasing friction point with the United States, and in an era where we're trying to reduce friction with the United States, perhaps following the money on ransom payments is a way of endearing ourselves to folks on that side. Yeah. I'm going to wrap up, and I'm hoping to ask you if you've got a stinky this month, David, but first I am going to come up with a shout out.

And that is for the brilliant cooperation of the police officers who brought down all those networks over the week, and we did a story on it. These networks had probably been responsible for millions and millions of dollars in damage: hackers exchanging tools, educating other hackers. And I don't think everybody realizes how much work police put in to do something like this. This could be a year-long job to make that final announcement.

But cooperation between all the different police services was able to bring down those networks. A big shout out to them. I don't think we pay enough attention to the work that those people do, and it's pretty thankless, for the most part. I keep hearing things like, what are the cops doing? They're doing a lot. If you really want to help them, go talk to whoever is in your government department,

U.S., Canadian, whatever, and tell them they might want to put a few more cyber cops in there, and they might want to make it so they don't have to request a PC and take six months to get it. Absolutely. We'll say, for the stinky, which is sort of the frown award: it goes to all the fraudsters targeting Canadians, who reaped $670 million last year, with reported fraud up 20 percent from the year before. So they had a great year.

Shame on you. And also the folks that are continuously cutting the budget of the Canadian Anti-Fraud Centre and the policing agencies trying to fight these folks: equal shame on you. So, a dual-nominee joint stinky on fraud against everyday Canadians. All right, guys, thank you so much for dropping in. This has been great. Thanks, Laura Payne from White Tuque. Thank you, Jim. It's great being here again. And Dana Proctor from IBM. Pleasure to be here, and I'll let you know the next book I'm reading.

There you go. And David Shipley, always a pleasure. We got Jurassic Park and not enough movie references. Although, for anybody who's under 40, the one-armed man is from an old TV show that was remade into a movie, The Fugitive, Harrison Ford in his prime, if you need a treat. Go Netflix it up this weekend. Great. And thanks to the audience for being with us. If you're having your Saturday morning coffee with us, or wherever you're listening to this, thanks a lot.

You could have spent your time anywhere, and you decided to spend it with us. Send us notes. You can reach me at editorial at technewsday.ca, or if you're watching this on YouTube, just go underneath and put a comment in there. I will get back to them. I'd love to hear what you're thinking, and I'll be back in the news chair on Monday morning with more stories from cybersecurity, because I'm sure there's somebody out there making the news as we speak. Thanks a lot.

Transcript source: Provided by creator in RSS feed