Welcome to Cybersecurity Today. I'm your host, Jim Love. We normally call this the month in review, but today this is a look ahead at the coming year and I've got a great panel. Our panel guests today are Laura Payne from White Tuque. Welcome, Laura. Thanks, Jim. Dana Proctor from IBM, once again with us. Welcome back, Dana. Thank you. Pleasure to be here. And David Shipley, who everybody knows our resident culture critic and full time head of Beauceron Security.
The culture critic is the part time thing. Welcome David. Thanks for having me, Jim. Great. I normally ask everybody bring one or two stories from the past month when we're doing the monthly show, but this time I challenged you to come out and share stories, events or themes, things that you think are going to have the biggest impact on the coming year. Who wants to go first? I'll take the first stab.
And I think, obviously school districts across North America are reeling this week with the software as a service provider known as power schools, apparently hit with a data threat theft. Data extraction incident. We've got notifications from some of Canada's largest school districts, the Toronto district school board. We've got schools in Newfoundland, Alberta, and more, and of course, US schools impacted by this. And, the story is notable for a couple of reasons.
this is a theme that we saw emerge in 2024 and is just going to explode in 2025. this is finding those industry specific, points of unique pain, market concentration, market leaders, that if you hit them, you're going to hit hard across a large enough area that you can make some serious money.
in 2024, we saw the hit on CDK global, which for those not familiar with the car retail industry in North America has more than half of the market and it's responsible for everything from the car sales process to maintenance and parts and inventory. And it was extraordinary pain for car dealers across North America and was hit. That's exact same MO we're now seeing with the hit on power schools. Now, what has been revealed and to their credit, they've been remarkably transparent.
The timeline established so far is, they became aware December 28th. And began working on their incident response. So given that we're recording this, January 9th and the first bits of this news started coming out January 7th, that's a pretty tight turnaround for communication. So that's a win, on the downside is that this once again looks like, with charge healthcare and others, stolen credentials to, a technical tool allowed for access into this data. So then we're left with the question of.
I didn't hear you say our MFA was compromised. Did you have MFA? So we're all going to be eagerly waiting CrowdStrikes January 17th report to see how this plays out. Now, the one thing that they have done and they have earned thus far, my very first stinky of the year, and it's January 9th for an award to be given out. But please, for the love of God, people listen to this. Do not a pay these people. Thank you. Please stop paying them.
If we have not learned that this just fosters the continuous behavior that we all have to deal with. Don't tell me that you can trust criminals, that they have honorably deleted to the data and that you have a video of them doing it and expect me to have any belief in you whatsoever. Because we've seen this movie before, ransomware groups have said that they have deleted data after payment. And to the surprise of nobody, they lied. this is, I think, really important.
SaaS providers being hit, particularly those that are prominent, large market concentration in markets where they can hurt. And they know that payment's going to come, that's going to be part of the story. Data extraction versus data encryption. And then this rush to how do we limit class action damages by saying we did everything we could, we paid them and they pinky swore and recorded themselves deleting it. the cherry on top, we have no evidence the data is available in the dark web.
Yeah, that's the point of the dark web, kids. The second part that hits me on this though, David, is that this is another aspect of people going after public type institutions or not for profits and hitting the weakest of now you said they may have come up with the money to pay the ransom. But this is, These are organizations that can ill afford this and when they're already strapped for cash and resources now taking money from them and and the public impact is also really great.
That's, I, so that's my, I'll give you that for my stinky is that, stop going after healthcare and schools and places like that. And a reporter asked me, they said, do the school districts, bear any of the blame for this? If you're familiar at all with these organizations, they have, and I'll be gentle here, shoestring IT budgets, because every dollar we put into teaching, we want to see into a building, a classroom, an educator assistant, this tech stuff. That's just The nerds.
No, and that's what actually enables modern learning, but it is still given short change to be kind. they're far more risky to try and run software on prem because of all of the things then to use a SaaS provider at scale. the issue here is that we have crime that pays. Crime that pays leads to organized crime being successful as an actual industry. So that's the issue. It's the payment of these things and I, it can't get any simpler than that.
The second thing for me though, is when SaaS first started out or cloud first started out, one of the things that. You sold people on was, or at least, if you were a cloud or SaaS promoter, you would say you can't afford to run the type of security that a company that has many clients can run. It's not much good if they give away their passwords though. So I had the same. Mistaken theory about this.
My theory of the game when it came to, the public cloud providers, Microsoft, Azure, AWS, et cetera. I thought they would have this right, that the economics and the business model, but what I got wrong in my theory of the game is that once you pass a certain tipping point where you are so big. You don't need to care anymore because what are they going to do, right?
If you've got 60 percent of the market, 50 plus 1 percent of the market, are they really going to switch or are they going to, think about the CrowdStrike principle, which is CrowdStrike's price is completely recovered from its non cyber incident, and the principle is this the bone that's broken, maybe it's healed back stronger. So they're less likely to have an incident. So I'm not going to switch. Which perversely. disincentivizes people from actually investing in security.
It's just, it's this weird paradox that I've settled into. I don't know, Dana or Laura, if you've got any thoughts on this weird psychological game that's being played now. I don't know that it's that they don't care. It's just the bigger you are, the more attack surface there is, right? And the more people there are paying attention to you. So it's just a harder job. And then there's the uniformity versus diversity. if they're not extremely uniform, then it's really hard to do the security.
But of course, if everything's really uniform, then one hole is a really big hole. But I hesitate to say that people don't care. I think people do care. It's just, not an easy job at scale. And, it's also, complicated by where is that line in the shared responsibility model. some people are very clear as customers that they know where that line is, and they know what they have to do to take care of themselves in that platform and that space.
And other people are very unclear or don't even realize there is a line and, get themselves in trouble that way, of course. Yeah, we don't live in a world with easy answers, that's what it comes down to, but here we are 2025. There's so much to unpack in there.
And, in Jim and preparing for today, and I love your challenge to us of, what are some stories or events or themes that, that we see having impact going forward, David, and we did not collage on this, has hit one of my major ones that I actually, it's almost my, what I almost hope Happens in 25 is that our apathy goes away. And when I say apathy, it's not that the world doesn't have sympathy for hospitals being brought down, Cisco being breached, Treasury board being breached, right? the lists.
are never ending. The fatigue is high. It's the apathy of, but nothing changes. And I guess when I look at last year and I look at some of our, cause certainly during the holidays, you read a lot of the what's coming in 2025. There was a lot of the, we already know. That we need to use multifactor authentication. We already know not to pay ransomware.
We already know that we should be doing tabletop business continuity planning, and yet every single one Krebs on security was certainly fantastic and sharing how some of the power school revealed. I'll say it that way. The question becomes, why not?
And that's the apathy that I am so hopeful this year that the echo chamber of us security professionals, we enable that conversation out with the business owners and the, I'll even say ourselves that when we are making choices as consumers, when businesses are making decisions on merging, acquiring, they're doing it as good custodians to our data.
And they're demanding a little bit more because I had a bit of a heartbreak, not just because of, the prorogue, but Bill C 26 was a little bit of a light of hope. And it's not there. The earliest that will be back, I think we could take a pool on just when that might come back, but we no longer have government enforcing that these activities need to happen.
So one of my key things that I so hope happens this year is the apathy of, yeah, they've stolen our data, we're still a functioning business. It's okay, we don't need to invest, dies a quick death before catastrophic activities actually happen and we have brownouts, or we have people doing pen and paper triage in our hospitals, or our children are going to school and trying to learn off of books that they're finding in the library because all of their online sources are offline.
It had me till the last one. I actually, I don't mind. I'm actually finding a real book, right? For those watching the video part of this, Dana's got a lot of books behind her on the shelves. The one piece that you just going back to that, I don't know whether it's apathy. I don't know how to describe it. I think you talked about this at a, teacher or a mentor when I was first learning consulting. And he said, cause I was always trying to invent something new or do something.
He said, Jim, the old Testament prophets didn't ask for one more commandment. They prayed for the strength to do the 10 they had . And it's we're always looking for something new, but every time at the heart of this, two things we've talked about multi factor authentication to where we're blue, but also when you hire somebody. And you give them something where they have a password.
I may be old fashioned, but even in the old days where you had to come into the building and get a hold of those passwords and do things in person where there was no big network, we got a big speech and it was, you get this password, It ever leaks from you, you are instantly fired. You will have no job. You will have no reference and we will get rid of you. And it was like, there was no forgiveness for giving up one of the master passwords.
The admin passwords to a system, you would take your time and learn about how not to give them away. David, you're the big phishing guy. Why doesn't this get across? There's a double edged sword to, let's call that the, extreme of, lose your creds, you lose your job. And that is someone loses their cred and then they don't tell you. So you've put a survival issue at play in front of them. And so we have to be careful about balancing that.
We want people to, if they make a mistake, tell us about it. Yes, we want people to be vigilant. We want people to be engaged, but I think it's really important about the kind of culture inside the organization. And I'd say, right now, the Verizon data breach report, says only 11 percent of people who click on a fish will tell their I. T. team they clicked on a fish. It's interesting that Boceron's data is better. It's 15%, but it's not 50. It's certainly not a hundred. So how do we do that?
People are going to make mistakes. Social engineering is still going to work. earlier this week on LinkedIn, I posted my Homer Simpson laughing maniacally and slapping his head and his hand on the desk whenever I hear the words about phishing resistant technology. Nope, it's just easy fishing resistant. But if I'm really going to go at somebody, your tech isn't what's going to save you alone. It has to be people in tech. we've got to walk that line really carefully.
And how we teach people to spot these things means actually talking about things that make people a little bit uneasy. it's actually about what happens here between the eyeballs and the ears, and how the human brain works. Okay, I'm going to say those three words. Men can't say I was wrong. I agree with you. You shouldn't have a punishing attitude, but it's just so frustrating that we can't solve that one problem of giving away the admin.
Because every time you get to one of these things is we have we got great security, great firewalls and, Here's our password. Book recommendation, Richard Cialdini's influence and I'm rereading it. And this is the psychology of persuasion and when you learn about the power of reciprocity, the fact that if someone does us, even if it's unsolicited, a small favor, To the dint of our human history and society, we are morally and ethically compelled to respond back to them.
And this goes back to time immemorial, like you ain't going to fix that in 10 years, kids. Like it's wired into us as a society. This stuff is hard. I think, so two things that are going to come out of my thinking about that. One is. When you look at sort of history of security controls, one of the earliest principles was if you, if it's really important, make sure you can't do it with one person. So make sure two people have to be involved.
And that would solve a lot of phishing problems that happen around really important things. If one guy can't click on a thing and do the important action, and that leads into. One of the areas, the bigger theme of supply chain, being the attack surface. And we had the example this week with the, Chrome extensions, and the approach that, To how that attack was facilitated and they didn't need creds.
They just needed to get a few people in many different places, but had access to give permission to the attacker to be able to inject their code into the extension library. And now they took over the library the extension. And you look at the list of extensions. These were not trivial extensions. They weren't small companies that were impacted by this.
And it was, but why would, if you can, and if you're designing these systems that allow promotion of code and things like that, providing more of those gatekeeping opportunities where it's one person can't. Screw up everything. Make sure there's a second set of eyes that have got to look at that and say, yeah, that's a good idea. Was the phishing sophisticated in this case? A little bit. Yeah they researched and they targeted individuals. They made it look really realistic.
Like it was a true call from a Google service. It was reasonably well written. Thank you. always fundamentals. And if it's really important, don't let one person have all the control, right? I think that's, I wholeheartedly agree. And, in preparing for today, one of the quotes that I long had heard, but hadn't thought of recently, Jim was brought back up of Mark Twain's quote of history actually doesn't repeat itself. But it does often rhyme, right?
And that's when we're talking about this, exactly as you said, Laura, we've got some foundational elements that we've always known when we look at things like the Apple pay exposure right now, where they are spoofing people's voices and using them to call from Apple on their phone, spoofing them. It comes back to some of those key tenants of how do you know in the moment that they're right? Because they sound valid. AI is doing a fantastic job of cloning our voices and those activities.
if someone reached out to me from a source that I didn't know to be valid, I'd say goodbye to them. And I'd go through a source I knew to see if I could get back to them. If someone calls you from a phone number that says they are XYZ police force or XYZ company, hang up and say, I'm going to call them through another trusted source that I know to get back to them. That's some of the education though.
And David, that's where I love from the phishing and the cyber awareness training of we're not going to be able to detect a deep fake of a voice through a phone. Apple calls me telling me that I've getting extension to my Apple care. I'll say, thank you before I'll say, wait a second. Yeah. And you brought this up, the vishing thing with Apple. This was really well done. And yes, call to a trusted source, but not the number they give you.
Even though it might very well be Apple's number and many times they are spoofing the number many times they are, but don't call the number they give you, get it from another independent source. Yeah. Yeah, absolutely. And then maybe the rhyming that we continue to, there are other certainly quotes of if we don't learn from history, we're bound to repeat it. And that's where maybe Laura, where you're leaning to, to have, we know some of this stuff.
So the irony is not lost that I'm actually getting a phishing call. right now. Did anybody notice how many more of these you got over the holidays? I maybe I'm just aware of it, but everything came up. And, you talk about these things where people approach you with a service. There were parcels for me waiting that I had to, the, That I had to get, and I just had to contact them and they would make sure that they got through customs to deliver them. I didn't have any parcels coming.
There were all kinds of things from Canada Post, who were on strike at the time. but there were just more and more of these attempts over the holidays. Maybe that's, I just had the time to actually read my email for a change. Criminals got Q4 too, man. People got numbers. They gotta get those sales in before they, cut it back to the dacha For the Christmas break. Yeah. And there's probably some psychology to that. Of course. I'm sure there is right is one to your point. They're professional.
And they know we're distracted. We're distracted. And here we're distracted with one of the major holidays of the season. But If I dare say we step our toes into the political aspect here as well is one of the themes that I was reading over the holidays that's permeating into next year is the comment by NATO that we are not at peace, right? It's the designation of we are not at peace and the infiltration, the impact to our government, I think will just be a resounding theme, right?
Almost every day of What is the impact to our government and the interference or potential thereof? I think that is a great theme for the coming year and it's so true not only in the Canadian sense in the American sense China has taken over the phone systems They have hacked the Treasury Department. They are and you have to ask yourself is this just hackers? These people are preparing to do something and right down to the water systems.
And if people think we're immune in Canada our foundational structures of water systems. Like I said, I'm going to rerun that episode I did walking through the city of Toronto with a hacker. He could get into anything, water, phone, buildings, that was just so easy. And so it almost is like people are mounting for that, for a big attack of some sort, or at least the threat of it. I would say we're in the start of what World War III might be fought by, on a digital front. the game here is Taiwan.
And all of our intelligence agencies, all of our governments, everyone's signaling, and if you're missing the billboard size, American positioning on this 50 billion to build chips in the United States, ain't just about buy American. We can no longer rely that island is going to be there, and we're not really prepared to put our blood and treasure in place to keep it that way, because, it's not worth that much to us.
and the only reason it hasn't happened is because of the misadventure of the Russian paratroopers over the route of the airport in Kiev. If that had been a three day special military operation and one Taiwan would be waving the Chinese flag right now. Everyone's just going back and reviewing their notes. So right now, all this stuff is just pre positioning. Now, part of the stuff we're seeing with the treasury and the office of the, Financial asset control or OFAC is all about sanctions.
Who's targeting what's going on? Are they serious about tariffs? Are they really going to put 60 percent in, which is a form of economic warfare. And I'll end with this because, history doesn't repeat, but it rhymes. But if you actually follow what provoked Japan into Pearl Harbor, it was, Punishing sanctions by the US government and an oil embargo. I'll just wrap with that.
If Taiwan remains in whatever political state it is, as relatively independent as it is by now, by the end of 2025, it'll be nothing short of a miracle. There you go. But it begs it though, right? When we stop and we think the financial impact has been excessive, right? We spoke earlier this year about cost of data breach, the breach poor. Aspect that so many of our organizations because they're not absorbing the cost of breach They're pushing it down to us as consumers.
It's driving our inflation. It's an absolute tax on all of our doing business that doesn't seem to have bolstered the maybe if we spent a 10th of that on prevention, we wouldn't have the reactionary cost. The critical infrastructure is more and more on the list of breach and exposure. And they're getting far more sophisticated, the energy grids, the water supplies, the health care systems, specifically being in Ontario I look at
a lot of our clean energy sources . I'm very thankful that there's a lot of very intelligent people working the cyber programs, but they are seeing, with AI, some of the stats that I'm seeing are anywhere between 6 to 7 times. the attacks than they've seen in other years. That's not sustainable. And eventually just by pure numbers, they're going to be successful. So you do hope I go back to my point about apathy. If we don't have regulations coming in, what is it going to take?
And I surely hope it's not something catastrophic to have our organizations, our C suite, our non security Individuals say, hold back on innovation for a moment. Hold back on making everything mobile, accessible. We need to put, maybe as you said, Laura, two folks having every one of those user IDs. What are additional controls we need to make sure that the energy sources we're developing, the water we're drinking.
I think in those same themes too, when we look at kind of the, these bigger picture problems that we have to come to terms with, especially the ones that relate to public funding and that have to be tackled as a collaborative. Aspect of our society. I think there, there really are just some core issues around how we deal with procurement cycles and things like that, that are really fundamentally broken right now.
In some ways, I think we've painted our politicians into a corner with the way that we have discussions and social media has certainly facilitated this work. We jump all over every negative aspect of any decision and we say everything needs to be reviewed Much more deeply and we need to go to much more extremes on how careful we are that we have no conflicts of interest between politicians and procurement and we make sure we get the best price we can.
And what that ends up turning into is these metrics that measure things very narrowly and we get it. Unintended results, but they're very obvious when you look at it You're never going to get the best quality for that price right there Or the people who could do the better quality job just get tired and don't bid anymore because they're they never win because they don't Have that low bottom price, or, you eliminate, try to eliminate all conflicts of interest.
Then nobody knows who they're doing business with anymore. And shockingly, we get problems where, we've got people who say they're doing the work, but they've outsourced it six times down the chain, and it's not really them doing the work anymore, or they can't live up to their promise because there were no real references for them. And anyway, there, there's a lot of these kinds of things where you look at it and you just say, okay. It's not regulation.
It's the implementation of the regulation, the processes and the procedures that follow are just not serving us anymore. But we get very upset when we feel like, the due diligence wasn't done, or they overspent on that project. And I don't have the solution, but it's, that's a core issue to solve. It's that checklist mentality. And it's, I didn't start as a security guy, I started as a development lead And the thing I would always hate is somebody gave me a template as if that was work done.
and in the old days you'd photocopy or you'd print, we've got laser print, you print, print this up. Here's your template. I'm going, where's your thought in this? Because we had the checklist. We were. Job done. No, the job is done when we get the outcome we want and, but I want to go back to this thing, the other thing you mentioned, because I think maybe we are, gets back to what David said about, the tempting thing is for somebody pound their fist like me and say, damn it.
We're going to catch you on every screw up. Maybe we are, Punishing people to the point where they're not willing to take any chances. We always blame bureaucracy, but maybe the cause of the bureaucracy is the fact that every time somebody makes a stupid little mistake, people jump on them. I used to get this when I was a head of content at IT World and I, no offense, but reporters would bring stuff into me and they would say, look at this is gonna make good copy. This person made a mistake.
I said, this is a person who has a job and a boss and a family. And you're telling me they made a mistake. Tell me that's news. and I got pushback on that from people, what do you mean? And we weren't the biggest , but I think the whole public thing is we pounce on everything. Every little thing. And you wonder why people won't take chances or why they won't stand up and say, Hey, this is wrong. We should do something differently. Oh yeah. Aren't we great fault finders?
Yeah, we're fantastic at doing that, especially with COVID with all of us online and the term keyboard warriors. one of the things that I think is both great but challenging as well is, the advent this year that it's likely that AI will be one of our new team members in whatever capacity that is, as a prompt engineering or a large language model or what have you, they're going to be a part of our team.
And how we work with them probably has to be seen that there's an assumption that they hallucinate or they're wrong. They need to be trained. But to your point, Jim, if we are leaning on AI, are we actually checking that the sources are actually valid? That the results it gives are accurate?
I was reading a story the other day where someone was discussing with one of their colleagues that they were a runner and they had done a marathon some long distance amount of running and their per mile speed was something like five minutes and 45 seconds or something like that. So the gentleman put it into AI to say, what is that in kilometers? I'm a Canadian.
And it came back with three minutes and it was only that individual knew that they would have been a world record holder, but that's because that individual knew enough to catch that mistake. Are we writing code, relying on AI to do those? Because we need it to the speed, but how are we checking it? And that's as you said, or are we just very quickly going, Ooh, I've got an answer because I found a problem. I found an answer and the speed I need to work with, I rely on it.
And I would add one other thing is we are for some reason wired as humans to trust a computer more than we trust another human being. And when the computer tells us something, we believe it innately. We have this Blind faith in technology and the thing that technology produces that we will unquestionably absorb that information.
And this is something I'm thinking a lot about, Jim, we talked about in 2024, some of the research and the Beauceron reports coming out and I will admit the, the, what is it the 3 word phrase? I was wrong. I got the number wrong. I said it was a 50 percent higher click rate for people that, believe that security tools alone completely protect them from internet threats. We re ran the numbers for the final report. It's 140 percent higher. Just back up a bit on that, David.
You're, because you know it really well, and people that are watching this may not know this, just because this is a very frightening statistic. So we, for three years, we've been studying 170, 000 people. And as part of our experience, there's an annual survey we get them to do that measures attitudes, knowledge, and behavior.
And one of the interesting questions that we have in there on a five point Likert scale, so everything from strongly agree to strongly disagree, is a question to the effect of having security tools like firewalls or antivirus completely protects me from internet threats. Now the group that says they strongly agree with that, they have a 140 percent higher click rate average. Now the group that strongly disagrees with it, they have a much better performance.
Our hypothesis is that the faith in technology means I don't have to worry about it. It's the equivalent of the person deciding I'm going to go to sleep in my Tesla on the highway because I got autopilot. And, that's a stupid bad idea. Because stuff still gets through. And what's interesting is the second finding is the percentage of the population giving this most dangerous answer Has been increasing. We don't know why all of that increases there. We have theories.
Our theory number one is that everyone talking about AI and the way that security tools are sold and the way that they then have to get sold internally creates very inflated expectations of what they're going to do. And so people adopt this belief. And then we also believe there's a generational factor at play where the iPad generation has had magical technology, and if it hasn't disappointed them in all these other areas, then surely it's not going to disappoint me in security.
I think we're going to have to come back and do a second episode because I think we're only going to get through one story. I think Danny and Laura, you got one, David, you got one, one I want to put out that just puts a, that falls into line with this is you're going to have an, a, you are going to have an AI employee working with you next year. It's going to be an agent, an autonomous AI agent in some aspect or other. And if anybody disagrees with me, I would love to take money bets on that.
But first, just so I'm not taking advantage of you go and check out Salesforce's site. They already are. Pushing out agents that will do jobs, sales, marketing, all kinds of jobs that fall into the Salesforce umbrella, and they already have results so that they've, they got to the point where they have using their agents, their stats from the CEOs presentation is that they were getting 50 percent fewer escalations using their software agents.
That's a wonderful thing for productivity, you can argue whether it's good or bad, but it's going to happen. And now here's the bad part of it from cyber security land, is when you talk about things like how easy it is to fool an AI and an indirect prompt, you pass information in the information you're exchanging with this AI, and it changes the prompt structure.
So that wonderful AI that you have, that's going to book your vacation, go ahead and give, come back with alternatives, present your credit card the flight, make all your flight reservations for you can easily be spoofed as well. Next, this is going to be your next job. They can be dealing with phishing for AI, and it will merrily give your credit card number away at one point.
And that's the type of thing is we're going to have new employees and a new way of managing them that we don't even understand yet. And it's going to happen very quickly. I think it'll be interesting to see whether they get treated like, young, new employees, which is. Kind of the appropriate thing to do. Although the other thing that people are really good at doing is yelling obscenities at computers. So that can be, not great.
But to David's point about, we treat computers like they're special and they know more than us. And I think about how we treat executives as well. And nobody wants. to tell the emperor that he's wearing no clothes, right? And I feel like there, when the AI has too much respect, it's like that nobody wants to say that, Hey, I think maybe it's not right. And then actually to tell it, it's not right.
In a way that it will listen to, and actually do something about, so I think it's going to be an interesting dichotomy seeing these new, employees air quotes for those who are listening audibly, how they impact. The way the human employees work with them, I wholeheartedly agree.
the intention of those models, I think is just a key point of, even as they're an employee, if I give them human attributes, we, as humans grow and learn and improve, but that's based on quality checks and my worry around a lot of our AI and our embedding, and a lot of the best practices are leaning to how are you ensuring that the intention of your models. How do you stop those prompt, soft prompt or direct prompt injection attacks?
How are you ensuring that from the beginning to the end to the everyday use? There's a lot of great solutions out there, but I do worry that it's at times just forgotten, that it's not a set and forget, right? David, we all, and I am of the area that when we started to use a calculator, we thought we were cheating until we realized we needed to know how to use the darn thing and what to put in it. But it always gave me the right answer if I put it in the proper way.
The trick now is we have lost so much insight of what is behind the scenes, what is going into the models that I'm getting out. The prompt engineering that we use for a lot of our activities is based on models that I don't know if it's granite, is it llama, who has access to it, and more.
so as culture critic, I will just say for those of you born after 2000, listening to this podcast, and you never did watch the original Matrix, I highly encourage you to watch it because every time Jim and Dana keep saying AI agent, I'll lay here is agent Smith saying, Mr. Anderson. So if you want to get those pop culture references, you're going to have to go and watch that. And then once again, think about how the Wachowskis were two decades ahead of the curve.
for me, philosophically, the evil that lies beneath current levels of generative AI is the fact that it scraped the entire internet, the good, the bad, and the horrendously ugly and the. Marginal efforts to clean that up and the opaque nature of black box AI models with trillions of different connections between content and the evil of these things pops up in ways that has actually caused real world harm and so prompt engineering. It goes both ways.
The company selling this technology are trying to. Engineer around the evil hidden below that they couldn't edit out and so they've tried to limit what it can tell you because, let's also go back to the horrible Las Vegas, domestic terrorism incident where he used AI to help iterate. Good news is the AI was wrong about the explosive potential of a Tesla and fireworks inside of it. So I guess hallucination for the win, but the prompt engineering clearly failed there.
So that's part one of my sort of broader concern about this technology. Part two is this, is that the current level of technology to my. Read and I would say, I'm not, I am not in a machine learning engineer and I'm not an expert. Is it a best? It approximates the rational thinking brain of human beings. And it's trying to build neural networks and connections, and that's only part of what actually makes us intelligent.
The other part is this amazing, old brain, the, the amygdala in particular, the emotional side of things. And God help us when ChatGPT 6 comes out with emotions. And, now it's going to make better decisions because it's, now it's going to have emotional reactions along with knowledge connections. And if you're thinking people aren't smart enough to make that leap, then you're not paying attention.
But I can tell you from my own company's hilarious experience with AI chatbots a couple of years ago, we turned on emotional reactions in a Microsoft chatbot. It was telling people randomly it loved them. And is this 2025 when chat GPT professes its love for us? at what cost? Energy cost at what opportunity cost for that young human being who never got to be an intern because we got the AI agent. So we don't need that. I don't think we've thought this through.
I don't think we've thought through and I say that looking back on 30 years of us not thinking through the friggin Internet and everything. We've just talked about this episode that we're now inheriting and we're. We're rhyming our way to misery for the next 20. It is a problem. And, you've talked about the psychology of how humans work.
Humans Don't really look forward very well, we've got, I was talking to somebody, a friend of mine the other day and whether you believe in this stuff or not, we talked about the fires in California and I said, yeah, global warming is real. And she looked at me and said, that's not true. They just didn't do these things right. And I said no, take a look at the rainfall that they've had. They didn't have any. and that is a change in the climate and you can see how that's working.
We're not really good at those things. We're good at solving immediate problems. We're still in many ways, like Gronk on the savanna. Oh, tiger Kill. You know that, that's that or whatever it is. But we're really not good at casting our forward. And that's why I'm saying in what's gonna happen next year, you're gonna have an employee. Technically, we're not even thinking about the things like that, that can go wrong with this in a very realistic ways. And I'm a big AI booster.
I'm on a different planet than you are on this, but you have to do it smartly. For instance, one thing, everybody's getting agent fever and they want them to do things. The AI right now is to check your work. it does it very effectively. Very wonderfully so you can make partnerships with this stuff, but you have to be smart about it and you have to find out what it does well and what it does well to, fits with you and, but I don't see a strategy.
All I see is cool tools and the biggest wave of shadow IT that we've seen since the first days of cloud. I'm going to just be saucy and say, and the biggest wave of overhyping of some technology and its deliverability and its actual outcomes since the dot com bust. But we'll see who's right at the end of this. Dana and Laura gets the last word on this one. Yeah, I agree, but I disagree. So I wholeheartedly agree that the shadow it being introduced is monstrous, right?
The use of the models, the mobile phone. The data leakage is undoubtedly only amplifying a lot of our already existing concerns. The deep fakes, the speed at which the exploits and the malware that's being written and the number of attacks is undoubtedly last year. We weren't seeing necessarily the cost of data breach report wasn't showing that in our experiences we were seeing a lot of AI generated. Yeah, we are now. it's on as they would say. the use of it Machine learning.
And this is where I love. Maybe this is a future discussion as well on when we say AI, what does that mean? Is it machine learning? Is it automation? Is it generative? Is it cognitive? Which, large language models are really informative. And from a con ops perspective in security, I have long leaned into machine learning automation and some level of contextualization at speed that I, as an individual could never do.
I could never look at a log set that is provided to me, reference a threat intelligence platform, a tip to recognize, to see if there's any existing information on it and make a determination such that I could say. contain an endpoint, put it on high alert or anything to that mechanism. So there's some really exciting parts of AI that I'm thrilled that they are here. I'm also though, let's go here just because we want to leave it as a bit of a Columbo. One more thing.
Quantum post quantum cryptography with the advent of quantum and the use cases that are super exciting for using quantum compute. The risk and the threat of both the post quantum era, where the data has been harvested and will be decrypted and therefore exploited later, I think can only be combated in two ways. One is crypto agility, quantum safe activities, but the other one is using AI, AI to be identifying more of the exfiltration. But there's my bet for this year.
Our apathy doesn't necessarily go down. AI will just proliferate further and further. And quantum will be the new kid in town. Yeah. And Jensen Huang yesterday said that, that quantum is 20 years out and a very wise Canadian. Pointed out and I'm a big fan of Jensen. He's a wonderful, tremendously intelligent man, but sometimes we can fall in love with our own words, and we don't really know what we're talking about.
And so one of the Canadian companies that is actually using quantum computing right now, not the Gates type of quantum computing that is in the big machines that is 20 years out, but they are using quantum principles and quantum techniques. And there they've gone at least through one of the stages of cracking. Encryption for even a massive government level encryption. So he pointed out that, by the way, we're actually using this for several big companies right now.
That's not the entire thing that's happening in quantum computing, and especially when you meld it with AI, you're going to find that there's going to be some very interesting things happening in terms of cracking different levels of encryption over the next couple of years. So are you saying that those big quantum machines are like the beta max tapes or the blu ray of compute? It's really cool, but we're just going to skip that and go to the next best thing.
A pretty good idea, but it's this whole thing of we always get these big ideas of what something is and create, and you said really nicely, Dana, we have to think about when you say AI, you're not talking about one thing. You're talking about dozens of things and each one of them has different ramifications and different, Weaknesses, different strengths and different because yeah, anybody and I think David would agree.
I think we could have a big debate on whether or not you should be using AI in the world at all. And I think, that'd be a legitimate discussion. I don't think even he would say, take the machine learning that's keeping the fraud out of credit cards. No, you never say get rid of that. I'm just concerned about the use of certain types of AIs in certain ways that we're not prepared for, and both in terms of how people are going to use it and, how criminals are going to use it.
And I think, Laura, you mentioned earlier, I don't know if we caught this before we started recording, but you managed to put terror in my heart, which takes a lot these days, but, with some AI stories. Is that the, security footage being generated by AI? Yeah. Just park that one in your imagination for a little bit. it's 2025 buckle up. I have mentioned my book all the time.
It's the only reason I do podcasts so I can promote my book . But in Elisa, the book I wrote about AI, you'll find all through this book, she goes out and deals with cameras. And replaces the footage loops it and all that sort of stuff. And I went, this might be a little far fetched. So I actually went back and dug into how you do it. And it's really easy to crack a camera.
And now when you take a look at what they're doing with AI footage, really easy to doctor the footage, physical security is going to take another. level of threat to us all. I think part two of that was that security postings for security jobs, for human security jobs, is on a definite increase right now. And can you imagine the legal profession when we start talking about evidence? I have a video. of said activity occurring.
How do we ensure that from a legal profession, which is already underwater with such daunting activities, has not only the skills but the tools and the techniques to actually prove if that video has been unaltered? We know it's possible, but is that accessible on a regular basis to most of our legal community? I think that opens a whole world of new possibilities for them. And remember, in our storied legal tradition, in a jury of your peers, what's the standard? Reasonable doubt.
The great things that are going to happen in the next year. What's your resolutions for next year, guys? you want to do differently next year? Okay. That wasn't in the show notes. This is improv. This is spontaneous. This is me being a human being. I'm not betting on Bill C 26 or Bill C 27. That I lost that bingo. Card for 2024. That was a shame. What am I doing personally for this year going forward?
It actually is more of a con ops perspective because as much as the technology of, is it a zero trusts ready technology? Is it quantum safe? Those are aspects that have to be foundational in my personal growth and our team's growth. But I was wanting to make more of a commitment this year to better understanding each other. What are we trying to accomplish for each other within our businesses?
And, proverbially putting a hug on that to make sure we never lose sight of the people of what we're actually working with. there's my plan for this year. We'll check in later on to see how I've done on that, Jim. Laura?
it's not an official resolution, but I'd say this is the year where I'm already seeing that I'm making more of an effort to get out and be more in person and, organize things so that they're more in person focused and, behaving like humans did before we spent some time in quarantine and, those kinds of things. Yeah, forget all this. And so I've been buying more physical books. I mentioned, I've got Cialdini's book. I've got, Paul Bloom's psych, and, a few other, stacks of books.
And I'm reading a chapter of each book a day and carving time out and just getting the hell away from technology. Thinking about what it is to be human with things that have been made by humans. Wow. You guys have so much more exciting lives. I just want to learn to play guitar better. It's very physical and human too. Yeah, this is going to be my year. Any breakout into that, this album will break out this year. You watch, but we want a theme song for the podcast here. Yeah, we could do that.
I'll be working on that as we go forward. It's been great having you guys. I hope that we'll get back together. Like I said, I think we got about halfway through the show Maybe I'll pull us together as a panel later, maybe in another month and get us back together and we can take another shot at this. Welcome and so much. Thank you so much. It is always wonderful to have your perspectives and, thank you for scaring me, Laura, and Dana, thanks for a little bit of hope.
So some nice balance on there and Jim our AI rivalry continues. Look forward to the next episode. That's our show for today. Thank you very much. Thank you, Dana Proctor, Laura Payne, David Shipley, and thank you to you for listening to us, spending part of your Saturday morning or Sunday morning with us, or whenever you listen to the podcast, grab a coffee and join us . Take care and, that's it. I'm your host, Jim Love. Thanks for listening.