
Cybersecurity Today: Insights from BSides and RSAC

May 03, 2025 · 55 min

Episode description

In this episode of Cybersecurity Today, host Jim Love is joined by roving correspondent David Shipley to discuss his experiences at the BSides and RSAC conferences. They dive into the significant takeaways from BSides, including highlights from notable presentations such as Truffle Hog's AI Apocalypse and Eva Galperin's talk on the 'World's Dumbest Cyber Mercenaries'. They also explore emerging trends in AI, deepfake technology, and the human side of cybersecurity. The discussion shifts to RSAC, examining vendor presence, CrowdStrike's gamified approach to engagement, and the broader implications of cybersecurity costs and industry consolidation. The episode underscores the importance of ongoing education, responsible cybersecurity practices, and the need for clear communication in the industry.

00:00 Introduction and Guest Introduction
01:24 BSides Conference Overview
03:55 Key Highlights from BSides
04:31 AI Apocalypse and Security Concerns
11:21 World's Dumbest Cyber Mercenaries
15:57 Deepfake Technology and Countermeasures
22:45 RSAC Conference Overview
28:48 Experiencing Autonomous Cars in San Francisco
30:00 The Future of High-Tech Mobility Solutions
32:22 AI in Cybersecurity: Implications and Discussions
37:26 The Role of AI in Coding and Its Challenges
40:34 Chris Krebs and the Importance of Speaking Truth to Power
44:36 Human Side of Cybersecurity: Security Champions
46:49 Operation Shamrock: Tackling Pig Butchering Scams
51:47 CrowdStrike and Vendor Strategies at Conferences
53:16 The Cost of Cybersecurity and Industry Consolidation
54:46 Conclusion and Future Interviews

Transcript

Welcome to Cybersecurity Today on the weekend. And we have David Shipley. Now, he's not a co-host, he's a roving correspondent. I always wanted to do this: we've got a reporter from the field. David Shipley, over to you. You've been at BSides and RSAC. I always thought it was called RSA, but it's RSAC, right? It was just renamed this year to RSAC, and the C is supposed to stand for community and culture and conference.

And so yes, RSAC is brand spanking new. I was hoping the AC was for acronym, that RSAC was just "RS acronym." I was just glad it wasn't RSA AI. Dear God, we know your passion for AI, David. So let's start with, and I was really glad we could do this, because a lot of people can't get down there. It's a big deal for travel, and it's a fairly expensive trip too; the hotels are not cheap. And even if you can go, it's a bit of an investment of time. But you've been down there, so I wanted to cover what

happened, what your observations were, what you saw, what you learned, and some of the things we can maybe pre-research for our shows coming up, to see some of the guests we might invite or some of the topics we might be looking at in the next year as well. So that was my intent for this weekend show. Can we start with BSides? Because I have to say, and maybe I'm just losing it, I really didn't know anything about it until you said you were going there.

And then I started to look up the website. Interesting place. Can you tell us a little bit about BSides? The BSides conferences, and I can't remember if they got started in Las Vegas or San Francisco, began as the other conference alongside the major industry conferences back in the day. They've been going for a couple of decades now, and they appear in your local community.

In New Brunswick we have BSides Fredericton; Halifax has BSides Halifax; there's BSides Regina. And what's really awesome is that this volunteer, community-driven organization helps the industry create these events in a box. For many speakers, it's their first time. When we think about how we do skills development in this industry and build people up, this is such a tremendous moment. And I can tell you, there have been some just outstanding talks there.

And again, a lot of people don't get to present at DEF CON or Black Hat in the US, or SecTor in Canada, and you'll often find there are some really great speakers who never made it there who were totally worth your time. At BSides San Francisco, I was excited. We didn't have a booth, so I didn't have to go work it. I was going to RSAC for a number of meetings, as a startup, with clients, business partners, investors, all that fun stuff.

So I told my team, I said, I want to go on the weekend, and I just wanna be a nerd again. No one knows me there; I'm not that famous. I can just disappear into the crowd and see some amazing folks. And occasionally these larger ones will also bring in some rock stars, like Eva Galperin from the Electronic Frontier Foundation. And by the way, I know what it's like; the groupies for this program are incredible.

I'm sorry, but I thought it was really cool you had a picture with a fan there, in a crowd of thousands and thousands of people. You've got listeners of the show there. I gotta get out more and meet some people. I'll talk about the RSA media thing in a bit.

So I got there, and as far as BSides goes, this is one of the larger conferences: over 2,500 people at this particular BSides over the weekend. They took over the Metreon theater, which is near the Moscone campus complex that RSA takes over for that week. And they had lineups for every single talk.

And I had a chance to catch quite a bit. There were two sessions in particular, which I mentioned on the show earlier this week, that jumped out at me, both for the quality of the presentation and the nature of the content. And I'm happy to go into those two. Yeah, that's part of the thing. I do wanna talk about those, the speakers you saw, and particularly

new things. These are the shows where you hear about the stuff we're gonna hear about for the next year. I think the first one that obviously jumped out was the CEO and founder of the company that made TruffleHog, which is one of those secret-scanning tools. And he was giving this talk on the AI apocalypse.

And first of all, I just wanna give Dylan Ayrey a shout out, because he incorporated these AI rap mixes of some of the things he was actually talking about, and they were funny, but they were actually lyrically decent, Jim. I'll give the AI a check mark on that; I'll be a little harder on it later. And then through his talk he gave one of the clearest, cleanest explanations I've heard, and I've tried my best to follow some of this conversation, of how large language models work.

But one of the things I didn't know, or hadn't fully realized, is that in this whole statistical mapping of word relations, when generative AI is putting together what it thinks is the best response, there is this notion of a three-dimensional space. And so the linkages of the words, the strength of those links, and the distance between those linkages is part of how this calculus happens, which is fascinating, right?

Because for me, I've been doing a lot of thinking about the human mind and how we evolved. Spatial understanding is actually critical; physicality is critical to how our brains actually work. That the bots are mapping things out in this kind of geospatial way was fascinating, but that wasn't the key point. He pointed at this great research paper. The name of the paper was eluding me, but it's in the show notes from last week. We'll make sure to put it in the show notes.

He pointed to this really great academic paper about this issue of the 3D mapping: the AI guardrails that exist inside of these models can be found, and they exist as a single direction inside that three-dimensional space. So if you've got an AI guardrail that says, we're not gonna let you make a nuclear whatever, and I'm purposely avoiding that word so we don't get filtered by poor AI content filters for podcasts. Which is the exact same reason,

interestingly enough, why in Dylan's presentation he did not say the other part of "nuclear something." Yeah. So they were trying not to do it. But it was interesting, because it got the ire of the community. Their backs started to go up a little bit when they thought, okay, what, people are being censored? But it was because we're trying to work around the non-human AI censor bots.

But anyway, the point being, the researchers were able to find these guardrails, and these guardrails cover everything: don't create malware, and so on. And Ayrey's point was you can build really effective worm factories and command and control infrastructure using existing tech. And so his whole point about the AI apocalypse is: it's closer than you think. And he had this great chart, which I thought was phenomenal, that showed the impact of different worms, and the cost.
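The "single direction" idea described here can be illustrated with a toy NumPy sketch. This is purely a geometric illustration of what "removing a guardrail direction" means, not the paper's actual method; the vectors and numbers are made up:

```python
import numpy as np

def ablate_direction(activations: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of each activation vector along `direction`.

    If a model's refusal behavior really is mediated by a single
    direction, zeroing that component everywhere would disable it.
    """
    d = direction / np.linalg.norm(direction)      # unit vector
    # Project each row onto d, then subtract that projection out.
    return activations - np.outer(activations @ d, d)

# Toy demo: 4-dimensional "activations"; refusal direction along axis 0.
acts = np.array([[2.0, 1.0, 0.0, 3.0],
                 [5.0, -1.0, 2.0, 0.0]])
refusal_dir = np.array([1.0, 0.0, 0.0, 0.0])

cleaned = ablate_direction(acts, refusal_dir)
print(cleaned @ refusal_dir)  # → [0. 0.]  (no component left along the guardrail)
```

The point of the sketch is only that, once the direction is known, erasing it is a single linear operation; finding it inside a real model is the hard part the researchers solved.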

And you go back to NotPetya, remember the one that started in Ukraine? That was well into the multi-billion-dollar cost range. With AI, he was describing how easy it is to create a worm; he was laying out the cookbook, despite the guardrails. Because you could purposely go and find the guardrails and just remove them and build your own, with not that high a level of difficulty.

And I think that's the point that impressed me most at BSides: whether it was deepfake creation or malware creation, we're not talking PhD levels of brilliance here to be able to figure these things out. Oh no. I'm surprised he says he can find them and disable them, because nobody knows what's happening in these models.

And as a matter of fact, this is one of the problems that we have. If you take a look at, it was Amodei from Anthropic who was talking about this, and Anthropic is one of the companies that's trying to actually do some stuff on safety. And they're the best of a bad lot; I don't think that there's a great amount there. But they've been trying to map the neural networks and find out what's going on, and they've been able to trace

a little bit of it. And this is why this 3D model is actually a better way of thinking about AI than what we usually think about. It's a complex model, next to impossible to understand, easy to fool. So the paper already laid out how to do this. Now, they looked at particular models. I don't know if it applies to every single model out there, but they looked at multiple models that were available to go and do this.

And fundamentally, I think what this gets to is that the guardrails built into these large language models are superficial, right? Yeah. They're bolted on after the fact. The fact is these models were trained on the widest possible data set, including how to make a nuclear bomb of a certain kind. That was interesting to me because what he was showing was this up-and-to-the-right graph, which, as a startup founder, is the catnip, right?

We love seeing those, except when it's malware, and it's malware losses compounding and increasing from the Morris worm to NotPetya. With that, he was able to say: the chart shows that multiple billions of dollars of damage have already been done by worms. So therefore AI, used this way, can be a multi-billion-dollar damaging event.

Which was interesting, because in California there was a lawmaker who tried to pass a law to say these AI makers would have to be accountable and would have to demonstrate additional safeguards if they were gonna build something that could cause half a billion dollars of damage. And Ayrey was like, never gonna happen. Yeah, because the next thing, they'll be going after the people who run social media networks and holding them accountable.

Let's not live in a fantasy world. So that legislation did get vetoed by the governor; it did not happen. But it was interesting to see this wasn't some hypothetical the politicians were worried about. This laid the groundwork: the irresponsible behavior and gold-rush attitude towards this technology is setting us up for a lot of pain. And we'll get into some of the talks at RSA that spoke broadly to how AI is being integrated into the tool set of adversaries.

But so, Ayrey's talk was really good, and what was great was it's really approachable. I thought it was great, and his delivery style was phenomenal. But the next talk that really jumped out at me, and obviously I have been a huge fan, speaking of people who have fans, was from the Electronic Frontier Foundation, the EFF. Yeah, the only big organization that came out to defend Krebs. Yeah. And we're gonna talk about Chris, 'cause he actually came to RSA and moderated a panel.

And he was warmly received by the community, notwithstanding the fact that this week, in an escalation, they yanked his version of Nexus, the expedited airport-crossing program called Global Entry. So anyway, the other talk at BSides that was really good was Eva Galperin and Cooper Quintin from the EFF. They came and did this talk about the world's dumbest cyber mercenaries, and it was hilarious. Yeah, I was hoping you'd talk about that one.

I saw it, because we've been obviously slipping notes back and forth, and this was my favorite title of all the ones you sent me. They've been tracking this particular threat actor since about 2018, and they studied him from 2018 to just over 2023. And what's fascinating is they lay out all the comical mistakes he made in building his tool set. He was using a Windows baseline, not a Linux baseline, for his command and control infrastructure.

And so a lot of stuff was just done very quickly, not properly locked down, poor operational security. And they were able to just start pulling on all of these fascinating threads to understand who this threat actor was and who they were targeting.

And it was just a fantastic talk to watch, to really understand how much work goes into painting this picture. A shout out to Maltego, which is a German-made piece of software for creating complex graphs and mapping and understanding cybersecurity stories, 'cause they had one of the coolest Maltego graphs I have ever seen of this particular character. And amidst all the laughs, and trust me, there were a few laughs 'cause it was well delivered,

the talk also had this piercing moment: for as comical and silly as some of the mistakes were, this threat actor was still highly effective. And to me it was just like, wow. This is an example of how the environment is still so toxic that someone who's not that high on the food chain can be so effective, notwithstanding high levels of what we perceive to be incompetence. It didn't matter. Are there any good stories, any of the best humorous pieces that you remember from that?

I think the biggest laughs came from the very specific misconfigurations, even simple stuff: his command and control servers weren't locked down, so if a directory didn't have an index.html, you could pull full directory listings. This is just the bare-bones basics, and I was just like, oh my God, this is hilarious. But it was also nice to see that securing your stuff, even for criminals, is hard. It's, hey, welcome to our pain, man. Good to know.
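The misconfiguration described, open directory listings on a command-and-control server, is trivially detectable. A toy heuristic checker, matching only the index-page title that Apache and nginx emit when no index.html is present (a sketch, not a production scanner):

```python
import re

def looks_like_directory_listing(html: str) -> bool:
    """Heuristic: flag response bodies that look like an auto-generated
    directory index, the misconfiguration described above. Real scanners
    use many more signals; this matches the common server title string."""
    return bool(re.search(r"<title>\s*Index of /", html, re.IGNORECASE))

# Typical autoindex page vs. a normal page (made-up response bodies).
open_dir = "<html><head><title>Index of /payloads/</title></head>..."
normal   = "<html><head><title>Welcome</title></head>..."

print(looks_like_directory_listing(open_dir))  # True
print(looks_like_directory_listing(normal))    # False
```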

Yeah. But this also promotes this idea, and you know me, I'm much more optimistic about AI than you, about the tool sets we are giving to people. It was bad enough two years ago, when people could have these little franchises and any idiot could become a hacker, and often did. They weren't script kiddies anymore; these were sophisticated tools that people could run. Now we're giving them the keys to the Manhattan Project of intelligence and saying, go to town. Thank God they're sloppy,

at least. And there is a trend, we've covered it on the program, and I hope the FBI, and I hope Canada and the RCMP, are going to wake up and start to go after some of these guys, because the more you trot them out, the better. The more you trot some guy out, and I don't care if they're 24 years old or 68, working in their basement, whoever they are, trot 'em out and let people see what happens when they play with these toys, because that would be a good thing.

But having worked with the police community now and gotten a chance to learn from them, they're doing some of those things. So, the takedown of LabHost, that was a phishing-as-a-service provider: the consumers of that product were getting door knocks over the last couple of months, whether they were getting prosecuted or just got the "we know who you are, knock it off" speech. Which was a follow-on act from when the Genesis marketplace was taken down, here in Canada.

You had the Sûreté du Québec, you had the RCMP, even though they weren't necessarily gonna be able to prosecute these criminals, because, dear listeners, in Canada we have underfunded our justice system and we can't even get people with serious crimes through the justice system. We can't get around to getting these folks through the system, but at least they know: we know who you are. So, yeah.

You had this world's-worst-hacker Joe. And what else jumped out at you? So, one of the cool things: I decided to give Jim's love for AI a chance. So I went and hung out with the AI Village folks, and I would've been the village idiot in that crowd. That's okay. These are some of Silicon Valley's whiz kids, showing off the latest and greatest. So they put together this demonstration.

Unfortunately, the laptop they were trying to get it to work on just fell apart and died when they were about to do it. So we're sitting in this theater, they're scrambling, and they get this six-year-old laptop. And what they were trying to do was very ambitious: their whole concept was to do live deepfake karaoke.

And that's what got my attention. Now, what really was impressive was right there in front of us, as we were having this very interesting and deep conversation about deepfakes, et cetera, they were trying to get this thing up and running on six-year-old hardware. And it was interesting to watch. If you can get drivers to work in Linux, which is no small feat sometimes, that's the level of difficulty it takes to build a deepfake machine these days. It's not that hard.

And everything they were showing was open-source packages, some cool research from Japan, a few other things. And this is all available on GitHub and elsewhere to build this little deepfake factory. And they almost got it fully working, again, on really old hardware. I don't wanna out them, because they were just trying to show us the concept, but they took a very unpopular celebrity right now and had their face involved in some of these deepfake karaoke scenes, which was very funny.

I thought that was fantastic. But it was also interesting to learn the depth of some of the counter-deepfake technology, and the limitations of those deepfake technologies. What I mean by that is, we had a very interesting conversation about how you can measure someone's heartbeat. We can't see it with the human eye, but a sensor can actually measure your heartbeat in a video, through your skin tone, which is absolutely fascinating.
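The heartbeat-from-video trick mentioned here is the basis of remote photoplethysmography: blood flow subtly modulates skin color frame to frame. A simplified sketch of the idea, with a synthetic signal standing in for the per-frame green-channel averages you would extract from a real video (all numbers are made up):

```python
import numpy as np

def estimate_bpm(green_means: np.ndarray, fps: float) -> float:
    """Estimate heart rate from the mean green-channel value of a face
    region in each video frame, restricted to a plausible 40-180 BPM band."""
    signal = green_means - green_means.mean()          # remove DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # Hz per FFT bin
    power = np.abs(np.fft.rfft(signal))
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)    # 40-180 BPM in Hz
    peak = freqs[band][np.argmax(power[band])]         # dominant pulse freq
    return peak * 60.0

# Synthetic 10-second clip at 30 fps with a 72 BPM pulse plus noise.
fps, bpm = 30.0, 72.0
t = np.arange(0, 10, 1 / fps)
rng = np.random.default_rng(0)
green = 100 + 0.5 * np.sin(2 * np.pi * (bpm / 60) * t) + 0.1 * rng.standard_normal(t.size)

print(round(estimate_bpm(green, fps)))  # → 72
```

A deepfake that does not reproduce this physiological rhythm is, in principle, detectable, which is exactly the cat-and-mouse dynamic the conversation turns to next.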

It's really interesting to see some of these technologies, and then they were talking about how there are counter-deepfake technologies emerging. So the arms race is fully on in deepfake video detection, which I thought was interesting. But the last thing I'll say is this: one of the things that was very interesting was all the latency issues.

And one of the cool things about seeing them try to work this on a six-year-old piece of hardware was watching the latency issues almost get resolved. But we had this deep conversation about the audio, 'cause they were talking about cloning someone's voice with deepfakes, and cloning the face with deepfake stuff. And what was interesting was the point the researchers made, and we're actually experiencing this right now.

Jim, you can't see this, but the video quality on many video calls suffers. And so people are willing to forgive, and they're just used to forgiving, glitches that could actually be a deepfake model trying to do this live. They'll just assume it's the video connection. Yep. It's just the wifi. And I hadn't really given that enough consideration in all this.

Yeah. And don't forget, people were fooled, in the olden days, two years ago, three years ago, by just garbled voices on the phone. These fakes, I don't know what you saw, I'm just intrigued by the fact that they could do this on six-year-old hardware. The video cards would be the biggest issue for me on those. Right now, I'm working on an AI project with some people who are going to do a reveal of something. It'll come with a manifesto about AI and the project in there.

And we wanted to have somebody read it, and one of the guys, 'cause we're all into AI, I said, I'll just dial somebody up to do that. And even we freaked out with, no. But he came back with two deepfakes narrating this thing in less than 10 minutes, and they were hard to spot. Yeah. It was very interesting. One of the other talks at BSides that I loved was a local professor who came in and did an AI 101: all the different kinds of AI and the solutions they provide.

He had this fantastic example. When we get into RSA we'll talk about this, but generative AI, large-language-model-based AI, is only one kind of AI system; we forget there are lots of other different ways of doing it. He was talking about collision avoidance systems in air traffic control. These are simplistic models, not generative AI, thank God, 'cause you don't want these things hallucinating, and they have a clear benefit over a human.

Normally I am team human over AI at that point, but people in these situations can have these different error sets, and you can have two planes make the same move and then make the situation even worse when they're under a kilometer away from each other. He was explaining this very simple, logical machine model that didn't need to be generative AI. I thought that was really powerful, and he had some really great points on it. There was a lot about AI and a lot to learn from it.
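The failure mode described, two pilots both making the same evasive move, is exactly what a small deterministic rule set avoids. A toy sketch of complementary advisories, loosely inspired by how collision-avoidance systems tie-break on a unique identifier (this is illustrative logic, not real avionics):

```python
def resolution_advisories(alt_a: float, alt_b: float,
                          id_a: int, id_b: int) -> tuple[str, str]:
    """Issue complementary advisories to two conflicting aircraft so
    they can never both make the same move. Deterministic: the same
    inputs always produce the same answer, no sampling, no hallucination."""
    if alt_a > alt_b:
        return "CLIMB", "DESCEND"      # higher aircraft climbs away
    if alt_a < alt_b:
        return "DESCEND", "CLIMB"
    # Same altitude: break the tie on a unique ID, so the choice
    # is still coordinated rather than left to two independent guesses.
    return ("CLIMB", "DESCEND") if id_a < id_b else ("DESCEND", "CLIMB")

print(resolution_advisories(30000, 30000, id_a=7, id_b=12))  # ('CLIMB', 'DESCEND')
```

The whole point of the professor's example is that this kind of exhaustively checkable logic is the right tool here, and a probabilistic language model is not.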

The good news is a lot of the sessions were recorded. If you go to BSides San Francisco's YouTube channel in the next couple of weeks or months, there's so much fantastic content coming down the pipe, and you'll get a chance to see these talks, and maybe you'll agree with me or disagree. But there was way more content and more talks than I could possibly get to. So just mark that on your calendar if you wanna have some really good conversations, or really good learning.

Check it out. And yeah, we'll have to check out where the local BSides are. You've got some on the East Coast? I haven't been as aware of them out in the Ontario area, but maybe I've just missed them. A really good one is BSides Ottawa. And actually last year, at the most recent one, they started to work on creating a policy village.

And what I mean by that is teaching the hacker and security community: okay, if you're concerned about the state of laws, legislation, regulation, the education level of our lawmakers, how do we change this? And I thought that was phenomenal. I had some brief conversations with some of the organizers ahead of it. I had some scheduling conflicts, so I couldn't participate more actively, but I thought it was awesome, right?

There's a small group of security professionals and hackers in Canada starting to replicate Hackers on the Hill, which was that moment, almost 20 years ago, when Mudge and a few others just marched into Capitol Hill to go tell the man what was coming their way. So that's phenomenal. But yeah, there are phenomenal ones. I've had the pleasure of going to BSides in Vancouver; I ended up doing it virtually once, 'cause of the pandemic, as well.

So I think we could do a lot more to cover these. So if you're listening and you organize a BSides, we would love to know about it. Yes, sir. And yeah, 'cause if I missed it in Ottawa... people that watch the show don't know this, but I don't live in Toronto or Ottawa; I'm probably equidistant to both. No, this would be great. Let's talk about RSAC, just as a contrast. I presume RSAC is a little more commercial than BSides.

And listen, what I'm about to say, I will fully acknowledge: I own a cybersecurity company. I am a vendor, right? I get this. But even for me, it is something else to go see this. Now, for those not familiar, what's the scale of this conference?

Between 42,000 and 48,000 people descend on San Francisco for this conference. In the area there's this campus of massive conference centers called the Moscone Center: there's Moscone North, Moscone South, and in there is typically this absolutely massive collection of vendors. There are almost 400 vendors on the floor, and some of these booths are multimillion-dollar investments. And it's not even just the booths. To paint a picture: around this multi-building campus in the heart of San Francisco,

various vendors rent out entire bars and restaurants for the week. And they have one that's for the mid-tier folks, okay, and then they have the other ones, for the executives who actually write the multimillion-dollar checks, that are even higher-end than that. Some of them, like CrowdStrike, had multiple locations at this thing. Proud to say Canada's 1Password, and I think we can still lay claim to them,

had a pretty wicked setup on one of the nearby blocks as well. So this is massive, and the trade show floor is incredibly loud. I easily got about five to six kilometers worth of walking per day cruising through the vendor floor. Now, obviously I'm there to learn, I'm there to figure out what is going on here. And there are two things that I saw way too much of at that conference: Patagonia vests, and agentic AI on every vendor's booth.

So you know I've got my fill of those for a while. 'Cause I always hate to toss in a term: agentic AI is the new independently operating AI. In other words, it can take a task and execute it from start to finish without human intervention, or at least a large part of it. This is the thing that is going to fill our hearts and minds for the next 12 months from the commercial side of AI.

There's a lot of other great stuff happening, which if you follow my AI program we could talk about, but this is going to be on everything. So get your BS meters going on this stuff, because you're gonna get hit with it on every sales pitch out there for the next 12 months. Your Python script is not agentic AI. We're only a marketing brochure away from it, though. Oh my God. Everybody and their dog.

And here's the funny part. Agentic AI in particular is built, in many cases, on large language models and generative AI again, and they have the same hallucination problems. And in fact, there's some really interesting research that says right now they only actually do the thing you want 20 to 40 percent of the time. It gets really interesting. And then what I find fascinating is the possible collision of agentic AIs.

'Cause let's assume that multiple pieces of software are running on the same device or endpoint. Are they running locally? Are they running in the cloud? Are they running hybrid between those? What happens when they compete with each other in unintentional ways? And all of a sudden you're that very overworked manager with a bunch of high-school-level interns who require constant attention. Is this the productivity boost we think it is?

And this is where it's so important to use these tools, to understand these tools and use them correctly. 'Cause you're gonna say hallucinations, and I'm gonna step in and say: these models are probabilistic. The hallucinations are not a bug, they are a feature. They're probabilistic models; they will not even answer the same question exactly the same way twice. And the more you restrict them to exactness, the more they just become search engines: find the text and repeat it.
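The "probabilistic, not a bug" point can be made concrete with generic temperature sampling (a textbook softmax-sampling sketch, not any vendor's API; the logits are made up). High temperature gives varied answers to the same "prompt"; temperature near zero collapses to a single repeated answer, the "search engine" end of the spectrum:

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float, rng) -> int:
    """Sample a token index from logits at a given temperature.
    Low temperature -> nearly deterministic argmax; high -> more varied."""
    scaled = logits / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

logits = np.array([2.0, 1.5, 0.5, -1.0])   # model's scores for 4 tokens
rng = np.random.default_rng(42)

hot  = {sample_token(logits, 2.0, rng) for _ in range(50)}   # varied picks
cold = {sample_token(logits, 0.01, rng) for _ in range(50)}  # always top token

print(len(hot) > 1, cold)  # → True {0}
```

Same model, same input, different answers at any realistic temperature: that is the design, which is why aiming it at tasks that demand exactness is the mismatch being described.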

And so there's a constant game there. You don't take a sledgehammer to go installing windows, and you don't take a probabilistic engine to do exact things. This is why I'm such a big critic of the marketing piece of this, because there are lots of good things you can do with AI, and you should do them. Who wants to read logs? Nobody. You can now read logs at a scale where you could never hire enough people to do it.

And yet if it misses 2 percent, well, you weren't gonna get that 2 percent anyway. But putting it in there to land your plane? Probably a really dumb idea. You probably want an algorithm. Yeah. So, speaking of transportation and AI, inevitably one of my friends convinced me that we had to try out Waymo.

And for those not familiar, this is the Alphabet company that, in certain US cities, I think Phoenix, Los Angeles, and San Francisco, has fully self-driving robotaxis on the road. Eat that, Elon. I take a great deal of pleasure in being able to say that. But anyway, we're gonna get letters on you again. Listen, a lot of people were waiting for the Tesla robotaxis; they have yet to emerge. But here's the thing. So I get in this thing, and first of all,

props to Waymo. This is not a budget car you get into; this is a Jaguar. They know their crowd, right? San Francisco, California. This is a nice car, and you get in, and it's very comfortable. But I swear to God it is the freakiest thing. I feel like that moment where I'm a horse-and-buggy driver seeing the horseless carriage go by with nobody driving. That was the moment of realization.

And so everyone in the car gets their phone out and we're videotaping, and we go through Chinatown, and we're going on a little bit of a rip with this thing. And Jim, it pains me to admit it, but that car was a safer driver than some of the Ubers I was in this week. It was patient; it didn't feel like it had to rush. There was one particular sticky traffic situation we were in, and I was like, oh my God, is this thing just gonna have a breakdown?

Are we about to be in a bricked Waymo that's just gonna give up? Nope. It waited for the right moment, when it was safe to do it, and it executed the maneuver around a vehicle that was in the intersection, that shouldn't have been in the intersection, driven by a human. Fascinating. So it was interesting to see that particular use case of the technology. It was stunning. It was sitting in the future.

And the reason I support this idea of high-tech mobility solutions is, when I think about when I get older, there's the possibility that in 30 years this will be more normalized, and a senior may not need a driver's license anymore but can still own a car. Or better yet, they can lease part of a car in a very affordable way, and they can still have independence.

Having had to take away my mother's driver's license, being the oldest son, and I probably went down there a year or two late in terms of her driving, I went to visit her and convinced her she couldn't drive the car anymore. I'm much more comfortable with a Waymo than I am with some of the people on the road, never mind my mother driving on the 401 in Toronto. You know you'd rather have AI driving cars, but this is a good example: you're dealing with probabilities.

Statistically, you're safer in a Waymo than you are in a car driven by a human. There's just nothing wrong with that. If you wanna sidetrack on driverless cars: Waymo took the expense and time to develop both a lidar approach and a camera approach. Musk has either rushed his self-driving to production or is a genius, and that will be adjudicated by the accident rate in self-driving cars. But he has only used cameras.

His argument is that people can only see with their eyes, so why should a car need lidar? The answer is that you should have it because it makes the car twice as safe, but that's a different discussion. And there is a YouTube video you can look up that uses the Road Runner analogy: it shows a Tesla going 40 miles an hour through a painted wall on a desert highway. That's the limitation of cameras alone. But anyway, we ran that on the show.

It was actually a brilliant demonstration of why you need a belt and suspenders with AI. And bringing this back to cybersecurity: this is the type of logic we need. People listening to this go, why are these guys just going on and on about AI? Because we need to become educated in this. It's going to matter to our ability to evaluate these tools, understand how they work, and understand which ones are just hype and where they're useful.

On that note, there were some really great discussions I got to have this week about the security implications of AI. Let's go right back to NIST, right? Identity and access management. We still suck at that for human beings, and we're even worse with the non-human identities that already exist. Just think API keys, think about all of these things. And if you have a combinatorial explosion in identities, we are in a lot of trouble on that foundational piece.

Now, what was really interesting: a very smart senior executive I was talking to over dinner said, okay, let's game this out. You've got one or multiple agentic AIs working on your behalf and an incident happens. How am I going to know, from a security perspective, whether David clicked on the phish or his agentic AI clicked on the phish? What are we gonna do about attribution? The complexity of this is really fascinating.

So that was one very interesting angle on agentic AI, and AI in general: the synthetic identity management problem. I really enjoyed that. The other part that was interesting was that in some of the discussions there was a lot of hype. People felt self-aware criminal AI was imminent, based on all the vendor language down on the expo floor. And the speakers pulled it back and said, here is how criminals are actually using it in the attack chain, in smart ways.

You still need a criminal; it's just that the criminal can now scale. One keynote highlighted Gen AI's ability to do malware and vulnerability discovery and development, just speeding that cycle up in interesting ways. Rob Joyce, the former head of cybersecurity at the NSA among a few other things, pointed out that he was originally a skeptic of the AI bubble and the hype around it, but came to see that, no, this is becoming part of the tooling that's used.

And, to the credit of the software industry and the security industry, finding zero-days is a lot harder now. There's no single 0-day that completely owns something; you've gotta chain a 0-day, or at least numerous n-days, together to get somewhere. It's not as easy as it was a decade ago, and thank God. But on the other side, the productivity boost for criminals means the ability to build really clever chains is now gonna surge again. And this is the ebb and flow of the industry.

So those were some really interesting conversations. And some of the academic track that was happening was kind of a calm, deep breath of: okay, the vendors down there are hyping the ever-living hell out of the threat, but here's where it's actually at. You can't dismiss it. To your point, we're spending a lot of time talking about this because it's not going away. It's not a flash in the pan; this is one of those big technological epoch shifts.

Who knows how it's all gonna play out? It's not the immediate threat. But what I loved about some of the programming messages that were there, whether in keynotes or not, was: it's still the basics, guys. It's still the basics.

Yeah. Going into this call, I was looking at some numbers. We did the World Password Day show, and I always forget what day it is until I've already recorded the show and say to hell with it, but I actually got tuned into World Password Day this year, looking for a password story. It wasn't hard: 1.7 billion passwords dumped out there on the dark net. That was a really easy find. And I started to think, we need to get

past passwords in a world of AI, I get that, but until that time, a 12-character password that's not reused is a really powerful thing to have. Two-factor authentication, a lot of these things. And we mystify AI. To anybody in the audience listening out there: self-awareness is irrelevant. The power that we now have in the computational abilities of AI to affect us and industry is already there. There's just no question about that.

Self-awareness is an interesting psychological and philosophical thing, and maybe an impact on human society and all that sort of stuff. But don't think you escape just because we don't have artificial general intelligence. The tools are incredibly powerful. And when you evaluate all of those tools and put them up against just good old-fashioned doing the right thing, you will find that even AI-driven attacks can at least be slowed enormously.
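That 12-character, non-reused password claim holds up to back-of-the-envelope math. A minimal sketch in Python; the 10^12 guesses-per-second cracking rate is an assumed figure for illustration, not something from the episode:

```python
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Bits of entropy for a random password of `length` characters
    drawn uniformly from `alphabet_size` possible symbols."""
    return length * math.log2(alphabet_size)

def years_to_crack(bits: float, guesses_per_second: float = 1e12) -> float:
    """Average brute-force time (half the keyspace) at a given guess rate."""
    return (2 ** bits / 2) / guesses_per_second / (60 * 60 * 24 * 365)

weak = entropy_bits(8, 26)     # 8 chars, lowercase only: ~37.6 bits
strong = entropy_bits(12, 94)  # 12 chars, full printable ASCII: ~78.7 bits

print(f"8-char lowercase: {weak:.1f} bits, cracked in well under a second")
print(f"12-char full set: {strong:.1f} bits, ~{years_to_crack(strong):,.0f} years")
```

Going from 8 lowercase characters to 12 drawn from the full printable set moves the brute-force estimate from fractions of a second to thousands of years, which is the whole argument for longer, non-reused passwords while we wait for a post-password world.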

There are some places, and I wanted to ask you about this, where I wondered whether people were talking about AI in coding. By the way, anybody who thinks you're gonna stop AI from being used in coding: that horse has left the barn as well. Just an announcement here: Google's doing 30% of their code with it; Microsoft's doing 30% of their code with it. But there are some real problems in rushing that out as well, and I didn't know if anybody was talking about that.

One of the things is that AI code tends to make stuff up. Hallucinations aren't as big a thing as everybody says they are, but they're there, and statistically they will appear. It'll make up a library where there is no library. And if you're smart enough to scan that code, using AI, and find that nonexistent library: there you go. Wait a minute, I could plug something into that. Ooh. So there are all kinds of flaws in the security.

I'm more worried about the security of AI-generated code than I am about the accuracy of AI-generated code. Yeah. And we saw this; we were reporting earlier in April about that trend of going and sniffing for those hallucinated package names, what they call slopsquatting: registering an actual repo and library under the made-up name, and then poisoning a code base.
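One crude defense against slopsquatting is refusing any dependency that isn't present in a vetted package index before installing it. A minimal sketch; the package names and the registry snapshot below are hypothetical, and in a real pipeline you would check against the actual index (PyPI, npm) or an internal mirror:

```python
# Simulated snapshot of a trusted package index. In practice this check
# would query the real registry or a vetted internal mirror, not a literal set.
KNOWN_PACKAGES = {"requests", "numpy", "flask", "pandas"}

def audit_dependencies(requirements: list[str], registry: set[str]) -> list[str]:
    """Return dependency names not found in the registry: likely hallucinated
    (and therefore slopsquattable) package names."""
    suspicious = []
    for line in requirements:
        # Crude requirements.txt parsing: strip version pins and whitespace.
        name = line.split("==")[0].split(">=")[0].strip().lower()
        if name and name not in registry:
            suspicious.append(name)
    return suspicious

deps = ["requests==2.31.0", "numpy>=1.26", "flask-security-utils"]  # last name is invented
print(audit_dependencies(deps, KNOWN_PACKAGES))
```

Anything the audit flags gets a human look before it ever reaches `pip install`, which is exactly the window slopsquatters are counting on you to skip.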

The other part that's interesting is something you mentioned earlier about the probabilistic nature of these engines: you're never gonna get one to write the same code exactly the same way twice. And one of the things I think deeply about, and this is the liberal art to me, and maybe I'm completely wrong,

I'm gonna have a moment of vulnerability on this, but I think you really need to understand the structure, grammar, and purpose of language, whether it's a human language or a programming language. Because languages evolve, code evolves, the way that we talk about things evolves.

And if you don't understand how that code was actually designed to work, and how to improve it; if 30% of the code was generated by machine and you have no idea what logic and structure it was trying to build, if it was just bashed together; well, it works. Lots of code works, man. But it's not great code, either from a security vulnerability perspective or because it's wasteful.

And the best programmers I've ever had the privilege of hanging out with, and again, keeping in mind the limitation that my languages are web-based plus a little bit of Python, the best developers are equal parts translator, sure, taking the idea and turning it into a computational language. But the really good ones are artists, and I actually mean that: it's the cleverness of how they put things together. I have the privilege of working with my CTO, who is one of those off-the-charts

guys. If he's listening to the podcast, and he probably doesn't, he's so busy, he's ten, a hundred times smarter than I am at doing this. But no generative AI has that creative, artistic nature that he has. Exactly. And the ability to go and do that. I do think there's a challenge there, but I know we're running long, and there are a couple of other things I wanna talk about from RSAC. Absolutely. On to this: Krebs.

So for those listening, Chris Krebs is the inaugural director of the Cybersecurity and Infrastructure Security Agency, or CISA, in the United States. In November 2020, after the US election, he came out and said there were no signs that this election was interfered with, from the perspective of the actual voting process, et cetera. A fairly innocuous statement, not politically loaded, just a statement of fact. He told the truth, and that got him fired.

And that really should have been the end of the story, in some respects. But of course, in the last couple of months we've seen this revenge streak from the Trump administration: his security clearances were yanked, and the clearances of his employer, SentinelOne, were under threat. This created an untenable situation for SentinelOne and for Krebs, so he had to leave his job and is now fighting multiple investigations.

So that's the context of what's going on with Chris Krebs. He came on stage, and he's an incredibly brilliant and eloquent speaker who clearly knows his stuff. Again, one of those really smart folks you just bump into and go, wow. Like Jen Easterly, like Rob Joyce. And yes, I am clearly a fan of all these cyber superstars. Who knew we would have those?

He got up there and did this great panel with the guy from the New York Times who was the executive producer of the Netflix series Zero Days. They were talking about the Hollywood effect, how and why, what they were trying to accomplish, and whether it accomplished what it set out to do. It's a really great talk.

But towards the end of the talk, the elephant in the room came out, and Krebs said: listen, the organizers of this conference asked me to do this way before this whole thing blew up. I was originally really reluctant, but they really wanted me to come. So I did; I'm a man of my word. He showed up and he did it, and he didn't make it political. If it were me up there, I would be an emotional wreck, right?

That's a lot of heat, unwarranted heat, on him, but he was composed and classy about it. I was thrilled that the community gave him such a warm reception, and I would love to see more from the community, because his point at the end of his speech was that what's happening to him isn't just about him. It's about trust, and that's the fundamental part of the cybersecurity community. And I worry about this, 'cause like I said, we get complaints when we get political. I get that.

And I am personally political. I have my own personal beliefs, and they might surprise you; as I've said, you've had to rein me in. I can be a law-and-order guy with the best of them at times, and people have had to soften those edges of mine. The issue here, in corporate life, is that we've all been that person in the room who had to speak truth to power and tell them there is a problem here.

And if you've had a career as long as mine, you've heard these words: Jim, we want you to be honest, but sometimes you can be a little too honest, and that serves no one. We all know leadership comes from the top. So if the best people in our industry can be smacked down and told, if you tell the truth we will stomp you out, we will take your career and come after the people you work with, that should be setting off warning lights for everybody right now.

If you're a citizen, whether in the US or Canada, or one of the listeners in Denmark or Britain or around the world: if the people working on cybersecurity in your government don't feel they can speak the truth, you are at risk. So it's not a political thing; it's a professional thing. It is, and it's a community thing, right? This is bigger than one man being persecuted because of a vendetta.

It is about trust and integrity in our profession, in what we do. I do wanna go back to a couple of other things about RSAC. One of the most fantastic panels, and obviously I have a passion for the human side of cyber, came from a good friend of mine, Dr. Jessica Barker. Full disclosure: I've known her for almost a decade now. She's one of the world's top experts on the human side of cyber.

She had this phenomenal security champions panel with Tanya Janca, a Canadian who's written several amazing books on AppSec, along with two other brilliant women. They gave this phenomenal journey through how you create positive security cultures with champion programs.

What was really interesting was at one point, I think it was Tanya, who had done some work with the federal government in Canada, said that by turning developers into security champions, by changing the culture of software coding from constantly reacting to vulnerabilities and incidents to proactively caring about security, their sick-day utilization went down massively. And these are government employees who normally take a large number of sick days, probably

'cause the environment is not always ideal. And their turnover dropped. She actually got a call from HR one day asking, what are you doing? She said, okay, what is this about? And then they laid the numbers out and painted the picture, and they said, we need more of this. Oftentimes I hear executives go, oh, security awareness, oh, security culture, oh, security champions, yeah. Now let's get back to how we're gonna buy more hardware and software.

But here, the clearest, cleanest return-on-investment argument I heard that entire conference happened at that session. We're gonna get her on the program. Oh yeah. I've seen her talk at several conferences, including the Atlantic Security Conference. And again, we have such brilliant community members to learn from, for anyone listening to this and going, how could I possibly catch up?

We're all running our own races, but there is a phenomenal number of people to learn from. So go to these local conferences, the BSides and other ones, educate yourself, and follow some of these really smart people. Not the cyber influencers, but the actual folks who write the books and do the work. Always be learning, on that side.

One of the talks that hit the hardest for me emotionally at RSAC was by Erin West, a former prosecutor in the US who created Operation Shamrock. Operation Shamrock is tracking the proliferation of pig butchering scams, and she's gone to Southeast Asia. She has seen these complexes. She had pictures of them. She had stories of the horrific violence visited upon the individuals who were lured and human-trafficked to be part of this, and it hit hard. What was really

Interesting was that Erin was bucking a trend. There's been a movement to stop using the phrase pig butchering, because people find it unsympathetic to the victims, and culturally insensitive in terms of the original language it was coined in, et cetera. So there's been a lot of discussion about calling it romance baiting or financial grooming instead. And she said, no, we need to keep this visceral. We need to talk about it.

Because her point, and I thought this was interesting, is that the criminals' idea is to fatten the target up, then take everything from snout to tail, and keep taking. It's not just the way these people see their victims; it's this extraction of everything potentially possible, in this visceral way. So I thought it was interesting, 'cause it was a really powerful counterpunch to

the movement. Even in my own company, we've moved away from the phrase pig butchering because it was making people so uncomfortable. But there was that moment, sitting in the audience, going: damn, she has a point about some of this. So yes, it makes us uncomfortable. It should make us uncomfortable. I really wanna make sure we're clear on what this is. You've got two elements: the people who are the victims, and the people carrying out the scams, who are also victims. One group is the vulnerable.

People who are taken advantage of because they're lonely, or they have problems, or they're easily led. Maybe they're having problems in their lives; maybe they're seniors, maybe whatever. But these are the people who are easiest to victimize, and the scammers take everything. We did that story about the couple that committed suicide. There may be a lot of suicides, and there may be a lot of people living miserable lives.

Picture it. I'm retired now, more or less. If you took all my retirement savings, that would be just devastating. How would you come back and tell your partner that had happened to you? How would you live? You go from being retired to being impoverished. But the second group is the people who are human-trafficked. They are beaten, they are prisoners, and, as you point out, horrendously treated. It makes me sick.

Yeah, and then you've got the top-tier criminals who bail out of these complexes when, and if, a raid finally happens. She showed this satellite image, and I apologize, I can't remember which country it was. At the start of the pandemic, it was just a couple of buildings. By the time it finally got raided, it was a multi-city-block camp, like the downtown of a small Canadian city. And one of the stories she told just broke my heart.

Beyond the horrific violence on these people, there were also cycles of victimization. If you hit your numbers in one particular scam compound, where they held a group of trafficked individuals in sexual enslavement, the reward for one criminal was to go victimize another group. The depravity and the scale of this. And then she mentioned that this movement is now spreading like a cancer outside of Southeast Asia.

It's now starting to pop up in Latin America and other jurisdictions. And look at the numbers. I reported on this later in April, when the IC3 report came out: $16.6 billion in reported cybercrime. Of that, $13.5 billion was cyber-enabled fraud. And of that, this pig butchering, romance fraud, financial grooming, was five to six billion dollars. This is

six times larger than ransomware, even as we have this giant conference dedicated to stopping attacks rather than the cyber-fraud side. And what was interesting is that Jen Easterly, the most recent former director of CISA, when she was talking about that Hollywood thing in Zero Days,

and the whole original discussion about the cyber Pearl Harbor, the cyber 9/11, said: what worries me more is the cyber boiled frog, this explosion of fraud in which we're being boiled alive without realizing it. And I thought that was really interesting. So that was one. It was just, yeah, unnerving. Yeah. No, Erin did a great job of putting a human face to it.

Last thing I'll mention: in case anyone was worried that CrowdStrike was not gonna be a thing after the crowd-apocalypse last summer, no, they're doing just fine. And two things I learned about our community, actually three things: vendors have got us dialed right in. Number one, our obsession with Lego in this industry is fascinating. Everybody had a Lego giveaway, right? So that was interesting. CrowdStrike gets the award for most sought-after tchotchke.

They had little action figures of the different criminal groups. Apparently every year they have a different limited-edition action figure, and to get it, you've gotta go sit for a demo and see the various product lines. They give out little tokens, and they've gamified the conference so people chase that tchotchke. And I just gotta say, as an amateur behavioral scientist, an amateur psychologist, an amateur neuroscientist,

man, you guys have dialed this right in. But the third thing: dear vendors, honest to God, I went looking to actually try to do some shopping, and it was the most frustrating experience of my entire life trying to get a straight answer on how much a thing will cost me. Give me a ballpark so I know whether to even have this conversation. I saw every iteration of that dance possible. And I'm gonna go back and do some deep reflection about my own company,

'cause man, walk a mile in someone else's shoes as a buyer for a bit, and all of a sudden you gain a great deal of empathy. But the last thing I'll leave with is this, Jim: we desperately need to build a sense of the cost of security.

I've been thinking about this more as a possible research project, and how to do it ethically, et cetera. In many ways it's like how we build the CPI basket of goods, which is loaded with everyone's feelings about what's in the basket and what's out of it. But we need to build a cybersecurity basket and talk about the cost: okay, this is the annual cost this year to cover all possible threats. And it's gonna be an astounding number.

And you're gonna realize that we are living in an age where, on one side, there's a plethora, a cornucopia of security solutions, and on the other, we are literally starved because we could never come close to affording them. And bonus point for anybody who says this industry is consolidating: I picked up Richard Stiennon's awesome Security Yearbook. There are 4,000 vendors still listed in there, and 400 of the most well-off of them were at this conference. This industry is nowhere close to consolidating.
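The cybersecurity basket idea could start as simply as a weighted list of line items. Every category and dollar figure in this sketch is invented purely for illustration; the point is that even a modest basket adds up fast:

```python
# Hypothetical annual "cybersecurity basket" for a mid-size organization.
# Every category and dollar figure here is invented for illustration only.
BASKET = {
    "endpoint_protection": 45_000,
    "identity_and_access": 60_000,
    "email_security": 25_000,
    "siem_and_logging": 90_000,
    "awareness_training": 15_000,
    "incident_response_retainer": 50_000,
}

def basket_cost(basket: dict[str, int], employees: int) -> tuple[int, float]:
    """Total annual basket cost and the cost per employee."""
    total = sum(basket.values())
    return total, total / employees

total, per_head = basket_cost(BASKET, employees=500)
print(f"Annual basket: ${total:,} (${per_head:,.0f} per employee)")
```

Like the CPI, the hard part isn't the arithmetic; it's the argument over which categories belong in the basket and how to price them, which is exactly what a research project here would have to settle.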

We'll leave it on that, and on the motto of the show: just because you can't do everything doesn't mean you shouldn't do anything. Clean up your Active Directory. Yeah. David, the hour has evaporated. I'm sad that I couldn't make it, but I'm glad I had this chat with you, because I at least got filled in on some of these things. You're very welcome.

Over the next weeks and months, we'll be calling on some of the people David saw, doing some interviews, and talking about some of these topics. Thanks for spending your weekend with us, and have a great rest of it. I'm your host, Jim Love. Thanks for listening.
