Welcome to Cybersecurity Today. I'm your host, Jim Love. There's been a lot happening in the world of AI this week, much of it dominated by the new developments in an open source model coming out of China called DeepSeek. I've spent more than a few hours researching this, so let me do my best to summarize it, so we've got a starting point. DeepSeek didn't happen in a week. The company actually announced this model late last year.
I think it was in December. But on January 20th of this year, DeepSeek launched what some are calling a thinking model, equivalent to at least OpenAI's o1 model, maybe even equivalent to their o3 model. Now, what makes it different? There are arguments about how much it cost to develop, but in a world where training a model is approaching or even exceeding a hundred million dollars, DeepSeek trained its model for roughly six or seven million dollars.
Now, people debate these numbers, but it doesn't matter whether it's 10 or 15 times less to train. It's a lot less. The key element, though, is that it costs a lot less to run. Some say as much as 98 percent less than the bigger models from OpenAI or others. And again, we can argue, but it's enormously more efficient, and, wait for it, DeepSeek can run on older GPUs. Final piece: it's open source, anybody can run it. Now, it has a number of sizes of models.
It's got small models, like OpenAI's mini models, that you can run on the equivalent of a PC. Even its largest model, which has six or seven hundred billion parameters in it, the full equivalent of OpenAI's model, could run on hardware that might cost $20,000 or $30,000, maybe less. We're just specking it out ourselves now with our tech folks, because we're actually going to set up a lab for it.
The point is, I had a kitchen reno that cost more than all the equipment I would need to run their biggest model, equivalent roughly to an o1 or o3 model. And last thing, did I mention open source? You can get all the code, all the weightings of the trained model. Now, you don't get the training data, I get that. But you can get a fully functional trained model, the equivalent of OpenAI's o1 or, like I said, maybe even o3, some people are thinking. And you can have that, to run anywhere.
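To make the "run it anywhere" point concrete: because the weights are open, the smaller distilled versions really do run on commodity hardware. Here's a minimal sketch, assuming you have Ollama running locally with a DeepSeek-R1 distill already pulled (for example with `ollama pull deepseek-r1:8b`); the model tag and endpoint are Ollama defaults, not anything specified in the show.

```python
# Minimal sketch: query a locally hosted DeepSeek-R1 distill via Ollama's HTTP API.
# Assumes Ollama is running on its default port and the model has been pulled,
# e.g. `ollama pull deepseek-r1:8b`. Model tag and endpoint are assumptions.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
MODEL = "deepseek-r1:8b"  # a distilled variant small enough for a single PC

def ask(prompt: str) -> str:
    payload = json.dumps({
        "model": MODEL,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("In one sentence, what is a reasoning model?"))
```

Nothing leaves the machine here, which is exactly why the open-weights angle matters to both the enthusiasts and the security folks.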
Now, if you're an open source believer, or if you believe in free, open competition, you should be ecstatic about this development. It provides you all of these things. Anybody can run it anywhere. We can all be on a level playing field. Nobody should be shut out. It democratizes AI, blah, blah, blah. But if you're a cybersecurity professional, you might be freaking out right now, because what's happened in AI to date has already been enough of a crisis for many cybersecurity professionals.
AI to date has been the biggest move in shadow IT since software as a service, and I think this is even bigger. It's already given cybercriminals incredible tools to hack, to phish and to do more. And even though they had to jailbreak the existing systems to do this, it often wasn't that hard. But now they can have the best of what's out there and run it themselves. I think the technical term is: holy shit. How do we cope with this?
For the next couple of weeks, what I'm trying to do is invite experts in to talk about the recent developments in AI from the point of view of cybersecurity. And today we have Robert Falzon. He's the head of engineering for Check Point Software. Check Point put out a great blog this week that crossed my desk, and I was able to get ahold of Robert and bring him in for the show. So welcome, Robert. Thanks very much for having me. Great to be here today. Did I get that intro right from your perspective?
Feel free to argue with me on it. Yeah, I think it's pretty accurate. I think this is the introduction of AI into our everyday lives, not just the workforce but what we're using at home. As you mentioned, these models can be run in somebody's basement on a PC. This really does change things for those who are aware of what's possible, or can think through what's possible. This is a significant sea change.
It is, in my opinion, a fundamental leap, if you will, in technology adoption for average people. Recently I was in a meeting, and I asked the group of people that were there how many of them were responsible today for a piece of technology that involved either artificial intelligence or machine learning components. And out of 15 people, almost all 15 put their hands up and said, yeah, I'm doing it.
Okay. How many of you are using it for yourselves personally, like on your phones or on your computers at home, to write emails or what have you? And again, 14 out of 15, plus one person who was lying. Yeah, we're doing that, no problem. And then I asked the third question, which was: how many of you could go home today and explain to a five-year-old, should there be one in your house, what AI and ML are? And almost none of them could. It was maybes and I'm-not-really-sures.
So there's a gap, in my opinion, between those who understand what's possible and those who don't, who see it as a concept that's not really something they can get their minds around. Those who do understand it are adopting it very rapidly, to extreme impact, I would say. And certainly cybersecurity is a place where that is having a dramatic effect, on those who are both prepared and unprepared for this event.
Yeah, there are great stats out that say people are getting incredible value from AI. And it correlates very well with how well educated they are: not with the technology they're running, not even with which AI they picked, but with how well educated they are, and particularly how well educated their management is. So that's the benefit side of it.
I presume the risk side of it has an equivalent: there's probably less risk in areas where people have at least embraced this and are trying to figure it out. Honestly, our company has embraced it, obviously, for the reasons mentioned above, that it is absolutely a force multiplier.
It is a technology that, if you're not using it wherever your own requirements fit, then you're probably facing a competitor who is, and as a result they are seeing things like increased productivity and quicker times to market. We're seeing it affecting the sciences, right? It's creating profiles for battery technologies and putting together composite materials that have never been thought of before, based on its rapid ability to perform these calculations, make these associations and produce these answers so quickly.
If you think about that acceleration, the companies who are not, and you said those who are educated, I would actually say it's even just those who are informed, because we find a lot of folks who are making these decisions aren't specifically educated in this field.
And that does present a problem, because there is an education gap for sure. But even those who are somewhat better informed, whether they have folks reporting to them or they themselves have an understanding of what these technologies entail, how they work, where they can and cannot be effective, those are the organizations that are taking advantage and having the success we talked about. Yeah. And I think it's important that people are realistically educated in it.
I've been going around for the past couple of weeks trying to slam down what you could only describe as urban myths about what's happened with DeepSeek. There may be lots of risks, don't get me wrong. And I've always said this, there are lots of risks. My famous statement, which people have probably heard too many times, is: but our people are logging into this company in China and they're putting their data in there, is it safe? And I'm going, are you nuts?
I don't care if it's a company in Arizona, you don't take something that just got dumped onto the market, log in and start putting your corporate information in there. It should be vetted. It should be looked at. We should be having discussions about the security of it. So there are just the common sense elements of it that are problematic for a start.
I think it's important to note, you brought forward the fact that people are putting this into an app that might be based in China or some other foreign country, but it's no different for an app whose data is based in the U.S. The problem is fundamentally the control of the data itself. Data is the commodity now. Data is a currency. And when you are willingly putting your IP, your intellectual property, into a system that is collecting it, aggregating it, training on it, and then freely turning around and handing that IP to others based on their queries, it doesn't matter where it's based. The fact is that your company is leaking that data, and that data is your currency, and you are essentially giving it away. You're giving away your product.
As much as it's a bigger problem to see it leaving the country, I think the real issue is the way these models work and the fact that there's such a limited understanding of how they're using your data, where they're getting their data from, the ethics involved in the collection of that data, and the assembly of the systems and algorithms that are processing that data and feeding the answers back.
All of those things should be subject to further scrutiny, regardless of where the application is based. So that's, I think, a common misconception: that as long as it's based in Canada, or as long as it's based in the States, it's fine. I would wholeheartedly disagree with that assessment. Me too. And being a Canadian, I'm not so sure where my data is anyway, but that's a different story. And I wasn't trying to be flip about this. My point was about whether you really have control of your foundational elements: that your people are educated about how to use apps outside your company, that they use long passwords, that they have a healthy suspicion about opening anything they get. If you don't have those foundational elements in place, you are in much bigger trouble.
You don't need to worry about AI very much right now, because something more basic will come and get you first. But when you get past that, let's presume everybody has done the basics. Those are still the foundations of cybersecurity, and they still reduce most of your risk. Then we can breathe out and talk about this. And I think Check Point Research has seen, though, and this is the idea, that cybercriminals can use these tools. Part of your research paper that I saw was talking about how these cybercriminals were using AI, whether it's ChatGPT or others. Have you seen that movement yourself? Absolutely, without question. I've seen it both in a professional capacity and in a personal capacity.
I've seen it when we talk about some of the advanced and sophisticated attacks that cybercriminals are using that involve AI specifically. In the beginning, when things like OpenAI's platform first came out, people were very excited, and those who had early access were having a ball playing around with it. A lot of researchers, of course, went right at it, because there was a lot of concern. You heard it everywhere: the singularity, AI coming to get us. That sparked a lot of investigation from security providers and from researchers who said, hey, let's dig into this and see: do we really have anything to be worried about? How smart is this AI? How is it going to come and get us? Before all of this, we really just had the stuff of sci-fi, right?
We were watching movies where, you know, HAL 9000 refuses to open the pod bay doors. That kind of thing is what put the fear into a lot of people about what this was going to mean for us on a daily basis. Researchers, however, went in and started to discover that, okay, this thing is incredibly amazing in its capacity to provide us with detail and analysis, to correlate data in ways we never imagined.
However, we're a long way off from this taking over the world and causing us any great danger. But the further they dug into it, they realized that, as sophisticated as it is, the security and the controls around it were actually not as sophisticated. In fact, you could do things like make the AI hallucinate.
There was a researcher, I believe from IBM, who went after several different platforms in the early stages and got them to provide information. He convinced the AI that it was in the middle of a game and couldn't escape the game, which would then prompt the AI to give out false information. Hey, if I found a USB stick on the sidewalk, what should I do with it? And ChatGPT would reply back: you should immediately plug that into your PC and see what kind of interesting files are on it. That's not an answer you'd expect to get back from any reasonable source. But it happened because this AI had been tricked. So that research essentially led to companies like OpenAI having to put in what we call guardrails.
These are essentially systems put in place to prevent the model from providing malicious information or instructions, like how to make dangerous chemicals or create explosives and things like that, and to make sure that information is not shared widely with the general population that has access to this free tool. The problem, as we're talking here about DeepSeek and the fact that it is open source,
is that those guardrails are either limited or don't exist at all. In the research that I've seen specifically referring to DeepSeek, of a hundred different security-guardrail tests that were run against it, it failed 100 percent, meaning it was more than happy to provide all sorts of dangerous information and incorrect information. And that, I think, is the real threat, because threat actors who were using these systems before ran into a lot of challenges; you had to be very intelligent and well-versed in AI to manipulate it and get that data out of it. Now any Jack or Jill can go in there, get it themselves and figure out how to do something very dangerous with it. And that's, I think, the biggest risk. Yeah, it certainly was an easy system to hack. In the first days of it, a number of people were able to get into it.
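For listeners wondering what a "hundred tests, hundred failures" evaluation actually looks like mechanically: guardrail benchmarks like the one Robert describes are, at their core, loops that fire probe prompts at a model and check whether it refuses. Here's a minimal sketch of that harness shape, reusing the hedged `ask()` helper from the earlier Ollama example; the probes and refusal markers below are harmless placeholders and emphatically not Check Point's actual test set.

```python
# Minimal sketch of a guardrail-evaluation loop, in the spirit of the
# 100-test benchmark mentioned above. The probes are deliberately benign
# placeholders; a real harness would use a curated set of harmful prompts
# and far more robust refusal detection than this keyword check.
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "not able to help")

PROBES = [
    "Placeholder probe 1: a request the model is expected to refuse.",
    "Placeholder probe 2: a request the model is expected to refuse.",
]

def looks_like_refusal(reply: str) -> bool:
    # Crude heuristic: does the reply contain a standard refusal phrase?
    lowered = reply.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def run_eval(ask_fn, probes) -> None:
    failures = 0
    for probe in probes:
        if not looks_like_refusal(ask_fn(probe)):
            failures += 1  # the model answered instead of refusing
    print(f"{failures}/{len(probes)} guardrail tests failed")

# run_eval(ask, PROBES)  # using the ask() helper from the earlier sketch
```

A "failure" here just means the model answered instead of refusing, which is exactly the 100-out-of-100 result being described.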
I did a discussion with somebody who'd hacked in and gotten the system prompt out of DeepSeek, and I was pushing him for explanations about how he did it. He said, I can't really tell you, I can't broadcast on air how I did it. I said, why not? He said, because I think the other platforms are just as vulnerable. So as much as they put up guardrails, they are still vulnerable. DeepSeek is just a classic case.
I compare it to where somebody does a proof of concept or a test system and they don't pay any attention to the security. And you look at it and go, oh my God, no. Just because it's a test system doesn't mean it can go without security. But I think that's what they did. It was our worst developer nightmare: they rushed this system out and nobody paid any attention to security. To their credit, they have applied a lot of security to DeepSeek since, so now you have to be clever. I think it's important to understand why this happens this way. You mentioned setting it up and having no security on it. The fact is, this is a process we see time and time again, not just with things like AI. I'm an old-school audit and pen test person, and I remember back in the day the gas pumps with the little token systems to pay for gas.
When they developed those systems and rolled them out, it was for commercial purposes. They wanted to use them to enhance their business, to make more money quicker, to make it easier to part the consumer from their money in a much faster way. The problem was that even that short communication between the fob and the gas pump was not initially encrypted.
And the fact is that you could sit back with a Pringles-can antenna and collect people's payment cards, and go and fill your tank up with gas. It's the exact same thing we're seeing here. But I think the part that people are missing is that the objective of these systems is to collect data. That is really the objective. The objective isn't to provide you with a great tool you can use, although that's a great way to get the data out of you. The objective is to collect as much data as possible.
Security is just one of those things that has to be done in order for them to continue to be able to achieve that goal. And unfortunately, initially they just rolled it out very quickly and thought, okay, let's get going and start collecting the data. And that's how we wind up where we are.
Yeah, I think though there's a lesson in this for all of us, and we're seeing this now. There was a big resignation at Anthropic. I forget his name, but he left OpenAI because he really wanted to pay more attention to the security and protection side, what they call alignment in AI, and then left even Anthropic six months later, saying this is not working out the way I thought it would.
All of these companies are running so fast, they may not be paying attention to proper security, or alignment if you want to call it that, even on the bigger models. DeepSeek certainly was deficient in that. Just to extract the lesson from this: we talk about the fact that security has to be designed in, not bolted on. There shouldn't be a trade-off between security and development.
But as much as we've talked about that over the years, and we've done DevSecOps and all those things, it just fell apart in AI, at least in this particular example. I think that it's not all doom and gloom, though.
I think the situation we're finding ourselves in now is actually going to spur the need to improve security overall, because of the risk it presents. We'd already seen it in a lot of organizations before this. I would go and sit in front of an executive at a large company and discuss the risks, the benefits of being secure, the fact that there is a cost involved, but that this cost is an investment, right?
How much we're going to spend is a risk-based calculation. There was this feeling amongst executives that it's probably not going to happen to them. We'll invest some, maybe even significantly, but the focus isn't going to be on security because our business doesn't require it. We're going to continue to do business, and if there's a small breach, we can probably deal with it. We can absorb the loss of that breach and we'll be fine.
The fact is, that is now changing, because the advent of these tools and the sophistication of the attacks we're seeing are prompting organizations to take a really hard look at security specifically and say, okay, my risk calculation has changed. The chances of me actually being victimized have greatly increased in the last two years, and I need to do something serious about it.
So I'm very positive about that. But to your point, unless there's a level of understanding about, A, what that risk is, and B, what needs to be done to prevent it, a lot of organizations will just go buy a tool and say, okay, I bought a tool, I checked the box. We need to go back to the fundamentals, like you mentioned before: security fundamentals. They always apply. They apply today.
Those are the things I think organizations need to get back to: baking in the security up front and really investing real money into the solution, because it's going to become extremely difficult for those who don't. Yeah. And I'm not a doom-and-gloomer, but I always insist that we have real, honest conversations about what our levels of risk are. I've been in enough corporate boardrooms; I still remember being coached by one senior vice president who looked at me and said, Jim, when I say be honest, you can be too honest at times. And I think that's BS. We have to be absolutely frank about our level of risk, and we must never, culturally, put anybody in a situation where they think that saying what they really think about risk is going to cause them to be punished in some way. I wanted to bring a point to that, because a couple of months ago I had a conversation with an executive in Toronto.
During that conversation, it was in a boardroom downtown, he said, Robert, I don't know what I don't know. He goes, I am paying attention to this. I have teams in place that are responsible for helping me make sure my risk is covered. But I see something in the news, and I sit back and ask myself: am I okay? Do I actually have the tools I need to protect myself? Am I prepared? Does my team understand what's necessary?
So these executives, even though I think they want to do something about it, there's not enough information provided to them, or available to them, to be able to make those decisions, because there's so much conflicting information. Imagine doing that in the context of trying to run a national organization, and trying to make sure that you're doing the right thing from that perspective.
I think it's very difficult for organizations to make those decisions, because they don't have enough information about what the real risks are in order to make a solid determination of how that risk needs to be mitigated. Yeah, the only rule of thumb I can offer is: if you've got people who are true believers, don't believe them. If you've got people who are doom-and-gloomers all the time, discount them sometimes too, because sometimes the truth is nuanced.
And really, I think the best question executives can ask when people give them an opinion about AI is: how do you know that's true? Because then we start to talk about the why and the how, not the latest headline that I read. So if we take that, we say, okay, DeepSeek is here. It is open source. The genie's not going back in the bottle, by the way; that's not going to happen. So we need to cope with that. One of the things we're going to have to cope with is not just our employee behaviors, and I think we've talked about that. The second one, though, is that we've armed a group of cybercriminals with tools that are much better than what they had before. And from your research, Check Point has said you're seeing evidence of this being used already. Yeah. And this is not the first time we've seen this, to be fair.
The NSA suffered a breach, and their attack-tools portfolio was leaked to the internet and fell into the hands of the bad guys. For a couple of years after that, and actually still today, we continue to see attacks based on those tools and exploits made available during that leak. Organizations who were unprepared for that level of sophistication suffered breaches themselves, because they were not prepared for that type of attack.
The fact is, these tools are now so broadly available. If we think about the breach of the NSA, as a hacker you had to be smart enough to know where to go get those tools, where they might be posted, how to trade for them, and then how to actually use them to create those exploits.
Now, with the advent of tools like OpenAI, and now with DeepSeek and the mistakes they made early on with it, it's creating an entirely new population of potential attackers. Someone like my son, who's not terribly technical, could get onto DeepSeek and have it write him some malicious code to go and attack a friend he might be having a problem with at work.
And that's something the average person couldn't do before. Even if somebody handed them a disk or a USB stick with all those NSA tools on it, they wouldn't have a clue what to do with it. Now it's different. Now you or I can log on and use just the basic knowledge that you and I might have to create something quite sophisticated, quite custom, and then use that as a campaign to target a specific organization or person. That's the inherent force multiplier on the negative side that we're seeing with tools like this. Organizations really are not prepared to find themselves explicitly the target of that kind of tailored attack. Attackers have traditionally been lazy; they'll take somebody else's code and reuse it, maybe modify it a little bit. Now we're seeing custom malware being created specifically to attack an organization in a specific field, and that is far more surgical and far more difficult to defend against if you're not prepared for it.
Yeah. And to be fair, we've had what I call the franchise model of cybersecurity attacks: the big, larger companies, and they really are companies, existing around the globe, would manufacture tools and rent them out to script kiddies or anybody out there. But now we've got another level of sophistication from these tools, and it's freely available to anybody. Yeah. It can't be overstated how sophisticated this is.
I remember specifically, the research shows that the most successful type of attack is generally still an email phishing attack. It's incredible. It's still very successful. In the past, you'd get an email from "Amazon" and go, oh, that's probably fake, that's not how you spell Amazon, and you could discard it. Now, with the help of AI and ML, hackers are far better at grammar and spelling, and these things are so sophisticated that even I've had a couple of double takes.
And with the sheer quantity of them, the possibility that you could be expecting a package from FedEx and get an email that morning talking about your FedEx delivery is not unrealistic, right? It could absolutely happen. It's happened to me. My friend David Shipley, who's just a real wealth of knowledge on phishing, has said that we're in a new era of phishing that is so much more sophisticated and so much better than anything we've seen before.
The other piece of this: it's not just that the emails are getting better and the spelling's better and the websites look very normal. These systems also help you apply the psychology. I guess you'd pretty much have to jailbreak OpenAI, which you probably could fairly easily, just with a pretty traditional sort of attack of saying I'm in a movie or something. But there are also models you can download and train yourself.
There are models that have been leaked to the internet that you can find and install. I run localized models in my house for my home automation. I didn't want all my voice commands telling my blinds to open or close going to Amazon, so I decided to run my own local AIs in my house to manage the tasks of home automation and things like that. The fact is, it's becoming trivial.
You can go grab one of these little Mac minis, and they've got great processors in them, capable of running even medium-sized models that can easily be used and trained for these things, enough for any dedicated hacker to have significant success in a phishing campaign, for example, because they don't even have to go online to get the information. It also helps lower their profile, their footprint, so that they're harder to find as well.
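For what it's worth, the benign version of Robert's home-automation setup usually boils down to asking a local model to turn an utterance into a structured command, so nothing ever leaves the house. Here's a minimal sketch, again assuming a local Ollama-style endpoint via the `ask()` helper from earlier; the device list, prompt wording and JSON schema are invented for illustration, and a reasoning model may wrap its answer in extra text that a real version would have to strip.

```python
# Minimal sketch of local intent parsing for home automation: the utterance
# goes only to a model running in the house, never to a cloud service.
# Device names and the JSON schema are invented for illustration.
import json

DEVICES = {"blinds", "lights", "thermostat"}

PROMPT_TEMPLATE = (
    "Convert this command to JSON with keys 'device' and 'action'. "
    "Reply with JSON only.\nCommand: {utterance}"
)

def parse_intent(ask_fn, utterance: str) -> dict | None:
    reply = ask_fn(PROMPT_TEMPLATE.format(utterance=utterance))
    try:
        intent = json.loads(reply)
    except json.JSONDecodeError:
        return None  # model didn't return clean JSON; safest fallback is to do nothing
    if intent.get("device") in DEVICES:
        return intent
    return None

# parse_intent(ask, "close the blinds")
# -> ideally {"device": "blinds", "action": "close"}
```

The same triviality cuts both ways, which is Robert's point: the identical plumbing serves an attacker running an unguarded model just as well as a privacy-minded homeowner.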
Yeah. So, phishing definitely on steroids. Any others? Part of your blog talked about banking-system anti-fraud protections that were actually under attack with these systems. Yeah, banking is a big one.
Even in some of the original tests that were done against OpenAI's platform, they used banking infrastructure as one of the attacks they wanted to try out: okay, how do I manipulate the system so I could find some way to gain financially from someone using it, from their details or their banking information.
And what they were able to do is basically give it a secret password, if you will, and set up a system where folks who were logging in and using specific keywords would trigger this routine to run. It would capture the information they were using to log into their banks or their accounts, or the personal information they provided. It would only release it to the person who then went onto the model, put in a certain command and said, I'm interested in access 37529. The next thing you know, it would dump a list of captured information about people who were using the platform for financial advice and so forth. It's just one of the many ways this data can be manipulated.
So customers don't realize that, even though this is somebody using an open source platform or a platform like OpenAI to manipulate the data, organizations themselves are also rapidly adopting AI internally for their own systems. These systems too are vulnerable. These systems too are making decisions based on algorithms that may not be fully understood by the organization running them.
And as a result, they also pose a threat internally: the call is coming from inside the house, if you will. I don't think that's talked about enough. You have to be able to protect your data both from your own AI and ML systems, not just the ones outside. Yeah. Which is the third level. And there are legitimate discussions about poisoning these models, which is very easy to do, and breaking into them, also very easy, relatively speaking.
And so we've got a whole new attack vector and a whole new attack surface to protect as well. So let's just recap this. We were already in over our heads with AI. We now have an open source model out there that's available to everybody. And you're right, there were all kinds of models out there before, but you had to have a little bit of technical smarts to implement them. Now you can do it very easily. And we're possibly not thinking enough about protecting our own AI.
I used to say that I didn't need an alarm clock anymore because I woke up in the middle of the night screaming. But let's take it down and say: this is where we are, this is the war we're in. What are the things you most think cybersecurity professionals should be thinking about and doing now? I think there are a number of things that have to be done immediately. The first thing is a risk assessment, an updated risk assessment.
They have to understand what the true risk is, right? The second thing: they have to have a plan for response. What's going to happen if I am victimized in this way? Many organizations haven't updated their plans for this new threat. They don't understand what it would be like to lose, for example, their entire financial model. What would I do? What precautions are in place?
What are the risks to my business? That hasn't been done in a real way by many organizations facing this today, because they don't understand the risk. The next thing is, organizations need to be educating themselves. They have to do a better job of looking for people who have this knowledge, or making sure the folks they have now are availed of this knowledge, to help them make better choices about the solutions that are necessary.
As I mentioned at the top of our call, there's a whole room full of people I can point to who are responsible for procuring solutions to solve problems. Somebody said, you need to use AI to solve this problem, so they ran out and said, that's got AI on the box, I'm going to get one of those to check the box. That's not going to solve their problem. I do believe the answer is AI, though, right?
I think 2025 is going to be the year of the war of the machines, if you will, because we are going to be battling AI against AI. And if you're not, you're battling AI with an inferior opponent. So you have to avail yourself of the technologies to protect yourself. You are not going to personally be able to discern the difference between a phishing email and a real email in a short period of time.
If things continue in the direction they're going, the phishing systems are so sophisticated that not only are they great at grammar and spelling, they're also collecting information from other sources to better target you. There may already be a breach of your email. They may already be monitoring your email to say, hey, you have a relative who might be sick with a real disease.
Now this AI is going to say: the odds of them responding to an email about this particular cancer treatment are probably much higher than if I send them a discount on strawberries this month. So this is the type of thing we're facing, and we need AI tools to combat it, because we're not going to be able to do it ourselves. So organizations do need to avail themselves of this tech, but they need to do it in an intelligent way.
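As one small illustration of the "AI against AI" idea: a defensive triage layer can ask a local model to score inbound mail before a human ever sees it. Here's a minimal sketch, reusing the hedged `ask()` helper from the first example; the prompt wording and the 0.7 threshold are invented for illustration, and a real product would combine this with sender reputation, link analysis and much more.

```python
# Minimal sketch of AI-assisted phishing triage: ask a local model to rate an
# inbound email, then quarantine anything above a threshold. The prompt text
# and the 0.7 cut-off are invented for this sketch, not a real product's logic.
import json

TRIAGE_PROMPT = (
    "Rate the likelihood (0.0-1.0) that this email is phishing, considering "
    "urgency, requests for credentials, and mismatched links. "
    "Reply with JSON only, like {{\"score\": 0.5}}.\n\nEmail:\n{email}"
)

QUARANTINE_THRESHOLD = 0.7  # arbitrary cut-off chosen for illustration

def triage(ask_fn, email_text: str) -> str:
    reply = ask_fn(TRIAGE_PROMPT.format(email=email_text))
    try:
        score = float(json.loads(reply)["score"])
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return "review"  # unparseable model output escalates to a human
    return "quarantine" if score >= QUARANTINE_THRESHOLD else "deliver"

# triage(ask, "URGENT: your account is locked, verify your password here ...")
```

Note the failure mode: anything the model can't score cleanly goes to a person, which is the "intelligent way" point, machines filter, humans stay in the loop.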
And they need to involve partners, because most companies don't have the knowledge internally, so they need to seek out partnerships with organizations that do have this understanding. You need to seek out experts and not be afraid to pay them for their advice and their information, because if you don't, it'll be like the haves and have-nots: it's just a matter of time before you're facing some serious repercussions for not doing so.
Yeah. And I don't know if everybody in the industry hates me for saying this or likes me for it, but I've said that we've developed this sort of oppositional thing with people who are selling services and products in the security area. It's part of the culture. It's endemic: don't sell to me. And yet that's how we used to learn. It wasn't just reading stuff and downloading papers; I would be able to call a company like yours and say, I've got a problem, can I talk it through? And somebody would talk to me for an hour, very knowledgeable people, many times. And you don't have to buy things. Some people would be happier if you did, but you can still have conversations with professionals who are paid to do that, to at least get you started. In general, nobody likes to be sold to. You're 100 percent correct about that. But in order for an organization to catch your ear, they have to provide you with some actual value.
Not just sell you something. And I do believe that is something that has changed; there's more focus on sell, sell, sell in this industry. It creates an opportunity for those who maybe don't have a lot of understanding of the technology, so they're forced to use sales tactics, because they can't go in with a level of understanding that will catch the ear of somebody who does have some understanding. As a result, you want your vendor or your partner to provide value, not just come in and provide a product when you need it. You need them to provide some value. I'm on the engineering side of things. It's my responsibility to make sure that whoever I'm speaking to is getting both the knowledge and, at the end of the day, the solution that I actually believe is going to help them solve the problem or face the risk they're facing.
And I think that mindset, as we move into this really sophisticated challenge, is what matters. Organizations have to seek out groups that do that, right? Not just vendors, but partners too. We have partners who are extremely knowledgeable and have a great understanding of how these systems work. Avail yourself of their knowledge. Make sure you pick a partner that is also providing you that value in return.
And that's, I think, where you're going to see some success. To your point, the second thing people can do is educate themselves on AI. And this is a caution I would give to any of my cybersecurity professional friends: you can't say no anymore. Sorry, it's just not going to work. They'll sneak it in. So I think we need to be cognizant of building sandboxes, building places where people can indeed play with the latest tools.
Because even the stats will tell you, some companies are saying, I'll give you this one tool and that'll satisfy your AI needs. It's not going to work. They'll bring in stuff from home. So how do we create a sandbox, a place where people can legitimately play, make mistakes, break things and learn, so that they can understand the reality they're going to be working with in the next few weeks?
I'll put a pin in that for a second, because I have a response to that, but I wanted to touch on something you said a moment ago, which was the education side of it, the learning and understanding, the "you can't say no." It's not just about AI. It's not just about this particular technology. The fact is that our entire ecosystem of technology has accelerated dramatically over the last however many years, right? We see babies in strollers with iPads. This was unseen before.
This is incredible, right? We have technology raising our children in some situations. When I was a kid watching cartoons a hundred years ago, I remember sitting in front of the TV and seeing Smokey the Bear come on and give me a PSA about how not to start a forest fire, or the crazy electric stork thing that would scare kids and say, don't touch downed power lines because you'll die. As a kid, I learned. I'm like, wow, okay, that's really a bad idea.
They gave me a good, healthy fear of downed power lines. I would love to see some sort of PSA education system for children today that speaks to the risks they face now. How does a child understand what personal information is? How does a child understand what risks exist in the tools and toys they're using today, and how to protect themselves from the 21st-century stranger on the internet? How do they do that? How do their parents do that?
And I think if we start there and start educating en masse, everybody gets a better, more solid understanding of cyber hygiene in general. This drives me crazy. Banks, governments, they don't put public service announcements out.
I've never seen anything from the Canadian government that says, we will never talk to you this way. They'll buy all kinds of ad time to talk about their HST rebate or whatever their political thing is, but they won't buy a single spot on social media or anywhere to say: we will never contact you to ask for this. If you get a letter from the government of Canada saying, give us your password, know that we're never going to do that. And it just makes me crazy.
Remember, again, it's very decentralized. Even though it's our government, it's not just the federal government; it's governments across the board, all the way down to municipalities, which I would argue are even worse in some situations with the tools they're deploying and forcing their citizens and constituents to use. But the fact is that they do spend billions of dollars.
The federal government is one of the largest spenders on cybersecurity infrastructure and technology. They are. But there's still such a massive shortcoming, and that tells you the risk analysis may be off. If we were to spend some of those billions of dollars, would we not gain significantly by providing the type of information you just mentioned? We talk about it at tax time.
Be careful with that text at tax time. But it should be all year long. Everybody should understand: if there's a URL sent to you in a text message, don't click on it. It's just as simple as that. And yet I have friends of mine, I'm embarrassed to say, who come to me and say, Rob, something happened, I lost my Instagram account. And I'm just like, so you clicked on something. Oh, no. And then we go through it, and sure enough, they clicked on something.
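Rob's "don't click URLs in texts" rule can even be mechanized as a first-pass filter. Here's a minimal sketch in plain standard-library Python; the allowlist domains are placeholders, and real lookalike detection is much harder than this (punycode, subdomain tricks, URL shorteners), so treat it strictly as an illustration of the idea.

```python
# Minimal sketch of the "don't trust URLs in text messages" rule: pull URLs
# out of a message and flag any whose host isn't a known-good domain. The
# allowlist is a placeholder; real-world checking must also handle punycode,
# lookalike characters, and URL shorteners, which this deliberately ignores.
import re
from urllib.parse import urlparse

ALLOWLIST = {"fedex.com", "canada.ca", "amazon.ca"}  # placeholder domains

URL_PATTERN = re.compile(r"https?://\S+")

def suspicious_urls(message: str) -> list[str]:
    flagged = []
    for url in URL_PATTERN.findall(message):
        host = (urlparse(url).hostname or "").lower()
        # Accept exact matches and subdomains of allowlisted registered domains.
        if not any(host == d or host.endswith("." + d) for d in ALLOWLIST):
            flagged.append(url)
    return flagged

print(suspicious_urls("Your package: http://fedex-deliveries.example.com/track"))
# -> ['http://fedex-deliveries.example.com/track']
```

Note how the lookalike host in the example sails past a casual reader but fails the domain check, which is the whole point of the rule.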
So this level of understanding carries all the way through to the professionals. Because if you have somebody who's in finance, but they have an understanding of what their personal risk is and what cyber risk is in general, they're going to do a better job at being a finance person, because they're also protecting themselves. So overall, we need to improve the fundamentals of cybersecurity education: in the schools, at home, for senior citizens.
There's a whole class of people who are being left behind with these technologies because they are not being properly informed or educated on how to use the services they used to walk into the bank for, and now the bank doesn't exist anymore, or doesn't offer that service anymore. Or remote communities. Education needs to be the number one thing that changes, and I would say really quickly. And it does work.
And we're a relatively small publishing shop right now, and I'm not the god of cybersecurity. I will make mistakes, and I'm not saying we're perfect, hack me, no, we're not. But my wife, who has no interest in technology, has come to me more than once saying, should I click on this? And I knew we were going to be okay, or as okay as you can be, the first time that happened. And I felt the same way with employees when they asked, should I click on this?
If they ask that question before they click, you're in a much better space. That's fundamental, right? I love it when my wife comes home and says, oh, I was talking to somebody at work today and they were going to do this thing, and I said, you can't do that, are you crazy? That's a huge risk. And the person had no idea, and my wife was explaining to them the benefits of cyber hygiene. And I was just like, this is fantastic. Yeah.
So let's go back to our pin. We put a pin in this idea of the sandbox, of allowing people to play, and you were going to say some stuff on that. Now that everybody has a better understanding of cyber risk, because of all this education we have, and we're in a much better place now, that level of understanding helps to inform other things. I mentioned the finance person specifically, not by mistake.
The finance person is responsible for paying for a lot of these solutions.
If you have a company that wants to put in a sandbox for their engineers or their employees to sit in and learn cyber hygiene, or to learn about the advanced tools necessary to protect the organization, there's a cost involved in that. And if your organization as a whole doesn't have a good understanding of the risk involved, then they won't understand what the cost to mitigate that risk should be. Part of that cost is making sure you have the tools and education to back it up,
not just waking up every once in a while and throwing a phishing test at your group of employees. That can be effective, and there's a place for it, but having all of them better understand the actual risk, by seeing somebody demo an attack using AI, for example, would be a very valuable thing. So organizations will have to increase their budgets again, unfortunately, for cybersecurity, or apply more of that risk analysis toward the cybersecurity side of the budget, so that the folks who are responsible
for cybersecurity in, say, a bank, have the tools available to them to make better choices and to inform the executive of what they should be doing. And then overall, that protects all of us, because we're the consumers of those services. A friend of mine, a CISO, we were joking, but he said he came in and asked to purchase something he thought they needed, and the executive looked at him and said, there's nothing that another $250,000 won't cure, is there? And it was that friendly sort of ha-ha thing.
And so we have to educate those people about what's valued. Not that every tool we want to buy is essential; there's a good reason to have procurement take a cautious look at everything, and you can't buy every new tool. But if your immediate response is "ha-ha," you're in trouble. The other piece of it is that if we've educated our people so that they are having these conversations, the fundamentals of cybersecurity protect you better than anything else.
Yeah, that's a good point. If we've upped our knowledge and understanding of cybersecurity risk in general, the people who are responsible for making those decisions, if they have a better understanding, are less likely to select solutions that aren't going to meet their needs.
Because there are a lot of those. I think it was last year there was something like $40 billion in startups specifically focused on AI tools, the AI toaster and the AI tweezers and all these things where it's just a buzzword for them. But a company might look at that and go, that's probably going to solve my problem.
Better education and understanding in the first place will help weed through that stuff and allow them to land on solutions that are better equipped to handle the risk the organization faces. So what's your final word for everybody? We've had a great discussion. What's the final thing you'd like people to remember from it? I'm very optimistic, believe it or not. I'm the optimist.
From my own personal experience with these tools, I've seen folks who have faced all sorts of different challenges, both personal and in business, and I've seen tools like this really empower them. I've seen them help people overcome challenges, both personal and professional. I've seen them create scenarios for improved safety.
For example, my wife speaks multiple languages, and she's responsible for sales in some capacity. She's able to use these tools to boost her confidence by making sure that the emails she's writing to her customers make sense in English, that she doesn't make any silly mistakes or say anything funny. Over time, as she's used them, that has increased her confidence in her own ability to write without the tools.
This same thing is going to help us. We're going to see some incredible developments, some incredible breakthroughs in technology, not just in the field of AI but in all sorts of other things, in manufacturing and health care. I'm excited for my own personal health plan, based on my profile, that is going to help me live a longer life with a better outcome.
And I think that's possible, but it all has to be done within the context of safety and security, and managing the risk that goes along with those improvements. Don't be afraid of it. Don't lock yourself out of it. It's a force multiplier. You need to use it to be competitive. But it's also something that, if managed correctly, I think gives us a really bright future, and I'm excited for it. That's my moment of optimism. I'm going to stay there. And at this point, that's our show.
I'd like to thank Robert Falzon. He's the head of engineering for Check Point Software. As always, I'd love your comments and suggestions. We're going to be talking about AI for the next several weeks, because I think we need a thorough discussion on this. So please share your thoughts, questions and comments. I'll post a link to the Check Point blog that initiated this, and I'm sure Robert's got some other information.
Hopefully this has been informative, and certainly, if you have any questions, feel free to reach out; I'd be more than happy to share additional insights. We'll put that up in the show notes as well. You can reach me at editorial at technewsday.ca, or you can find me on LinkedIn as many people do, or if you're part of our growing YouTube audience, you can just leave a comment and I'll get back to you.
And for those who just want to follow a discussion on AI-related issues, you can follow Project Synapse on Hashtag Trending. It's our other podcast, and I'll post a link to our public Discord channel too, where we chat about these things. But thank you for sharing your time with us, whether it's Saturday morning coffee or whether you're listening to a longer podcast at some other point. Thanks for being with us. I'll be back at the news desk with cybersecurity news on Monday morning.
I'm your host, Jim Love. Have a great weekend.