Many of you have seen Project Synapse before. You've seen this discussion with Marcel Gagné, John Pinard, and me as we talk about AI and how we use it in our personal and our work lives. One of the things we wanted to talk about: although all of us are positive about AI and its usage, and we're not what you'd call doomsters by any stretch of the imagination, we are people who go into this with our eyes wide open.
And one of the things that I wanted to have a discussion about was what I've called the dark side of AI. Pink Floyd fans out there, forgive me; by the way, there is no dark side of the moon, just in case anybody out there has been fooled. Back to reality. We wanted to do something on the dark side of AI, not so we can fear it, but so that we can plan on how to handle it. But as I discovered, there's a bigger discussion to be had on this, and one that we should all be having.
There's much more to it than we could get to in the hour and a half or two hours we spent on this. So I'd like to invite you to join that discussion. You can reach me at editorial@technewsday.ca. You can find me on LinkedIn. If you're watching this on YouTube, you can add your comments under the video. If you check the YouTube notes, or if you just go to technewsday.com or technewsday.ca and click the menu for podcasts, you can find the notes from this session. When you do that, you'll see an invitation to our Discord group, and you're welcome to join us there to continue this discussion through the week. Whatever works for you. Here's the discussion we had. Hope you find it interesting.
I think there were three areas. One is the AI misbehaving in one way or another, because you could jailbreak it. The second one is the AI used as a tool, where the AI can actually be used to do damage. And the third is, if we're going to operate this within our production environments, how we're going to protect our data and our processes. Those were the three organizing principles I put together. Is there anything I'm missing on that?
I do want to spend some time talking about the far-fetched things, and you can think of them as far-fetched as you want. One of them dovetails into what you said, which is the AI misbehaving: this idea of the AI trying to break out, or the AI purposely lying, or the AI purposely deceiving people, which of course gets us into the science fiction scenarios of Skynet. I guarantee you there are people out there for whom that is their big thing. Talking about things like disinformation: personally, I think disinformation and misinformation are among the greatest dangers, because you're able to do them at scale. Somehow that's one of the ones that almost nobody thinks about. That, to me, is the primary danger, and yet the one that gets the least attention. What gets the most attention is Skynet. I do want to touch on those things; I want to talk about the speculative risks to some degree. Obviously the ones that you've mentioned are real and legitimate, but they're not sexy.
They're interesting in the sense that they're real, but I don't want to get into a hype thing. I don't mind talking about those things, because what you're talking about is real. I think people need to understand... when I was going through this, I was thinking about it from an "I want to implement AI at my company" perspective. And I think one of the things, and it was in that final thought in the one that I sent, is that AI is a powerful tool, but it's not just plug and play.
You need to look at the security. You need to look at how you're going to deal with misinformation or disinformation. Are you going to put in special processes that say, if you're using AI, you need to go through and do this to verify it, as an example? I totally agree with you, John. That's why I said your list really looked like a list somebody had prepared for: if I'm in a corporate setting and I want to implement this, here are all the things I should do so that it's protected, so that it has all of the aspects we need for a corporate setting.
The idea that I was talking about was, hey, there are two ways we can use this. One is it can be used for evil, which is another thing we have to control. A lot of the criticism of DeepSeek is that it has no guardrails; it can be used for anything.
The other piece of it is, can we get in and jailbreak these things? Poisoning, and the fact that they lie. And that really does lead to where you're going, Marcel. And I think I'm in disagreement with you, or maybe I'm not, but I believe this stuff is way more real than we think it is. I studied complexity theory for a long time, and I think we don't understand these systems. People say that because these are neural networks, prediction machines, they'll never be able to think or do anything.
I think they potentially have more capability than we think. And when we add another layer to that, some of the science fiction stuff could very well come true. I listened to Mo Gawdat, and he blew my mind in terms of what he believes is already happening. And he's one of those people who believes it, from when he saw the first indications of the... remember the story he tells? What is the guy's name? It's Mo Gawdat. Mo Gawdat. Mo Gawdat?
Yeah, he tells a story of walking to his office, and they had all these robotic hands that were going to be gripping balls. It actually is a really tough thing to get an automated hand to pick something up, because if it moves even a quarter of an inch, you lose everything. So he had all of these machines trying to do this over and over again. He came by on a Friday night, walks up to his office, and suddenly one of these things is picking things up perfectly every time.
Hours later, he comes back. They're all doing it. Yep. Was it conscious? No. But it has a behavior that is unpredictable, unexpected, and independent. I think it happens. I'm going to sound like some new-ager or something, but when we feed the birds in the morning... the birds are starving here; I can just go over to Jim's place. This bunch of freeloaders. I put this stuff out for the birds. One bird comes; there's no time for it to fly off to everybody else and go yak yak yak.
So I know they're not talking to each other. That bird flies away, and they all start piling in. They are so attuned. Now, what is it? I don't know. But they have incredible eyesight, for one thing. Yeah, they pick up signals really quickly. Take that to crows: if you go and attack a crow, the whole flock will know who you are. Yep. All stuff we can't understand, and yet we think we're going to be able to understand these incredible machines we've built. I think that's the ultimate in hubris.
Did you guys both watch the interview that I posted with Geoffrey Hinton? It would have been about a week and a half ago, where Hinton argues that large AI models, frontier models, are already conscious. Yes. And I agree with him. One of my great obsessions, beyond even this, and I cannot tell you how many books I've read on this, how many podcasts I've listened to over the years, is the idea of consciousness and what the self means and so forth.
And of course I haven't helped myself by starting to meditate six or seven years ago. The idea of consciousness is truly an obsession. Hell, I have written a few blog posts about it: the idea that, am I the same person when I wake up in the morning who went to bed the night before? Did I go away, and basically there's a slightly altered version of me coming up the next morning? I think about this shit all the time. It's one of those things that's always in the back of my mind.
And I think when we start talking about the idea of, is the AI misleading us... I said this on the very first Project Synapse that we did: if we start talking about whether these things are misleading us, whether they are actively lying, trying to bend the rules or cheat in order to achieve whatever goal it is we gave them, then you either attribute the idea that they are, to some degree, conscious agents deciding what it is they're going to do, or they're not, in which case they're not cheating, they're not lying, they're just following a set of instructions within a set of guidelines, and we just haven't defined the problem sufficiently. So it's either one or the other, and I tend to lean heavier and heavier these days towards the idea that yes, there is some kind of an emergent consciousness there.
It's not the same as us, but it is definitely an intelligence with its own goals internally. Before you get into the car and drive to Toronto because you've got a meeting or something like that, it's not like you went, I'm thinking that I should go to Toronto for a meeting that I don't know anything about next week, because I haven't heard a phone call about it. You wait until there is a goal put in front of you: you need to be at this meeting in a couple of days. That's when you put the plan into motion. That's when you start thinking about the contingencies and what you need to do to achieve that particular goal.
It doesn't just come out of thin air; something spawns that goal or gives it to you. There was a talk we had a few weeks ago about the AI model that started to back itself up to a different server because it feared it was going to be removed. To me, that's exactly what I'm talking about. That, to me, is some sign of consciousness: it sits there going, ooh, I'm aware of my surroundings and what's going on around me.
And that kind of ties in with the picking up of the balls that you were talking about: did one learn from another? I truly do believe that there is some sort of consciousness in these things. Even with my usual go-to of how many Rs are in "raspberry": when you say it's wrong, it goes and thinks, and goes, oh, wait a second, yeah, you're right, and comes back with the right answer. I think it's more than just a machine doing calculations.
Hinton had talked about this early in the game, and I think he's thought it through. I think that was why he left Google. His theory is, this is an alternate consciousness. We're trying to presume it exists by defining it as the way consciousness works for us. We don't know what consciousness is. Exactly. There's nobody out there who knows. As a matter of fact, watch a flock of geese, okay?
And everybody will tell you the leader of the geese is there, and when the leader moves, they move, as though it's some sort of conscious behavior where they're all looking at it and making a decision. Ridiculous. Why? Because if you model it that way, it won't work. But if you do assume that somehow programmed into them is the idea of keeping some distance, some signals we don't see... somebody created a program trying to put this together and recreated the movements of a flock of geese. That's right.
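For readers who want to see how little it takes, the program being described here sounds like Craig Reynolds' classic "boids" flocking model. A minimal sketch in Python, assuming nothing about the original program beyond the idea in the conversation: each bird follows three purely local rules, and flock behavior emerges with no leader and no communication.

```python
# A minimal flocking ("boids") sketch: three local rules, no leader.
import numpy as np

N = 50                               # number of birds
pos = np.random.rand(N, 2) * 100     # positions in a 100x100 field
vel = np.random.rand(N, 2) - 0.5     # random initial velocities

def step(pos, vel, radius=10.0, sep_dist=2.0,
         cohesion=0.01, alignment=0.05, separation=0.1):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        near = (d < radius) & (d > 0)        # neighbours, excluding self
        if not near.any():
            continue
        # Cohesion: steer toward the centre of nearby birds.
        new_vel[i] += cohesion * (pos[near].mean(axis=0) - pos[i])
        # Alignment: match the average heading of nearby birds.
        new_vel[i] += alignment * (vel[near].mean(axis=0) - vel[i])
        # Separation: steer away from birds that are too close.
        crowded = (d < sep_dist) & (d > 0)
        if crowded.any():
            new_vel[i] -= separation * (pos[crowded] - pos[i]).sum(axis=0)
    return pos + new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)   # flocking emerges from local rules alone
```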
Are the geese conscious? Is the simulation conscious? The answer is irrelevant; it behaves the same. And that's my theory with AI. It's an attractive thing for me, and I love the idea of pursuing it because, like you, Marcel, the idea is a great obsession. It's a great thing to think about. People have thought about who we are and why we are and whether we're real for years; that's part of being human. But from a practical standpoint, one has to presume that if it walks like a duck and quacks like a duck, it doesn't matter whether you think it's a duck or not; it's going to behave like a duck. And I think that's the difference with AI: we assume a simplicity when there's a complexity. And so a lot of people will pooh-pooh this idea of it trying to replicate itself. It doesn't matter whether it woke up and went, geez, I'm HAL 9000, and I'm going to think about moving on here. It did it. And that's where Mo Gawdat came into this.
He said these things are already conscious, according to his definition. And that presumes they're going to have unexpected behaviors. It's only been recently that we've decided that animals are conscious; it didn't happen that long ago. We were brought up to think that they were basically automatons, biological machines with pre-programmed responses: they're not actually playing, they're not actually having a good time, it's just a fight for survival. I've seen animals play with each other, and they're definitely playing and having a good time. As living, breathing creatures, there is something that it feels like to be a cat, or to be a bat, and just because we can't put ourselves in the mind of a bat doesn't mean that the bat isn't conscious. Consciousness is an experience that takes into consideration all the things that are around you. It's not just looking out into the world and creating a picture of it.
The picture is everything, and you're part of that picture as well. That's what's happening with artificial intelligence systems too, because there is a world out there, they are part of that world, and it all comes together if there is this idea of emergent behavior: unexpected, unpredictable behavior. What does that mean to us? We're trying to use AI today.
One of the things that scares me is that there's not enough research into alignment. I've heard a lot of people talk about DeepSeek: it has no guardrails. But then I'll read that somebody hacked into the latest AI model at OpenAI, the o3 model, with its supposedly intelligent detection systems, and the hack the person did was absolutely simple. Some people have told me about how they've hacked these things and said, you can't talk about it. This guy published it, so I'm going to say it: he just hammered away at it, changing a bit of each prompt, capitalizing letters differently, and repeatedly hammered away at o3 until he got in and got it to do something it wasn't supposed to do.
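The attack described here resembles what researchers have published as "best-of-N" style prompt hammering: resubmit one request with trivial perturbations, random capitalization, odd spacing, until some variant slips past the filter. As a hedged sketch of the defensive counterpart (function names and the threshold are hypothetical, not any vendor's actual stack): canonicalize prompts before comparing them, so hundreds of "different" prompts that collapse to the same fingerprint get flagged.

```python
# Hypothetical defensive sketch: detect prompt "hammering", where an
# attacker resubmits one request with trivial variations (casing,
# spacing, punctuation) hoping some variant slips past the guardrails.
import hashlib
import re
from collections import Counter

def canonical(prompt: str) -> str:
    """Collapse casing, whitespace, and punctuation noise to a fingerprint."""
    text = prompt.lower()
    text = re.sub(r"[^a-z0-9 ]+", " ", text)   # drop punctuation and symbols
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
    return hashlib.sha256(text.encode()).hexdigest()

seen = Counter()

def looks_like_hammering(prompt: str, threshold: int = 5) -> bool:
    """True once the same canonical request has been retried too many times."""
    fingerprint = canonical(prompt)
    seen[fingerprint] += 1
    return seen[fingerprint] > threshold

# Re-capitalized variants of one prompt collapse to a single fingerprint:
for attempt in ("Do The Thing", "do the THING", "DO   the thing..."):
    print(attempt, "->", looks_like_hammering(attempt))
```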
Yeah. We talked earlier about this; I'd said one of the lines that I saw in an article I read was that AI is a powerful tool, but it's not plug-and-magic. This isn't taking Microsoft Excel or Microsoft Word and sticking it in and just using it. You need to know what it's doing and what it's doing with your data. You can't just drop it in and go, oh, I'm good, I'm going to start using it now. Because there are all kinds of things that can be done. You know me, Mr. Security; I have huge fears when you look at how fast these new models have come out, how fast these new tools have come out. Jim was talking about the hacker that had gone in with the model.
We don't know what we're going to find out down the road. It really makes me nervous, and I think people need to understand, when they're implementing these, whether it's for personal use, private business, or corporate, that you're still responsible for the input and the output. So you need to make sure that you can explain or understand what you're getting out of these things. Our old model of garbage in, garbage out served us well. Does it work now?
No, there's no correlation. For instance, this person was able to hack into an AI model and cause it to do something different, following our categories: I could use it for evil, I could use it for purposes it wasn't intended for, or I could poison a process in it. I've heard people say that sounds like a rumor, but I have listened to people who are embedded in this industry and who know it well enough to know that it could pivot on a single word in a prompt.
Whether that's true or not, I don't know. But the amount of damage you can do in a model is not correlated with the amount of input you have into it. Yeah. A few years ago, in the early two thousands, there was a guy who worked for one of the British banks, on one of the British trading floors, Kweku Adoboli or something like that, and I'm messing up the name, I apologize, but he did some unsanctioned trades.
He was given the power to go into the system and do some unsanctioned trades, and he lost the company, overnight, something like $2 billion. That $2 billion wasn't just his company's; it belonged to all the people who had invested in the company as well.
The lesson here is that those dangers, John, that you talk about, of having some kind of oversight into what the model is doing and making sure you have the appropriate guardrails... obviously the guardrails on this guy were not sufficient. The fact is, you give anything or anyone too much power without appropriate oversight, and bad things are going to happen.
The real danger with artificially intelligent systems is that they can do these things at scale, at a speed that human beings can't possibly keep up with. It's not that the systems themselves are any more dangerous than any human being, because a single human being can do an amazing amount of damage, given a position of power to do that damage. But the systems that we are creating are a thousand, ten thousand, a hundred thousand times more capable and faster than any human being.
So if you give them that power, that's where the real danger applies. It's not that the machines are inherently less safe than humans. I would argue that the systems we have are actually less dangerous than the human beings that we've put in power, but the fact that they can act at scale is what makes them dangerous. Mo Gawdat actually said that as well. I was going to say it's the equivalent of 10,000 people doing things wrong rather than just one, but Mo Gawdat talked about that too.
And he said it's more likely AI is going to be a better actor than humans. He said the real danger is humans: that we get irrational at scale, and AI may not. We have to put that into context. We don't know. We talk about AI as a conscious entity, or an actor that we can't fully understand and can't fully control. How do we deal with that? Marcel, you put this forward when we were talking about this earlier: you have to treat it like a person.
It's not human intelligence, but the same flaws are there, especially as we get into agents. We're going to have agents that will go off and do things for us, execute ten steps, including putting the final piece in with our credit card. Now, people will say that's just fundamentally insecure. Is it? Why are there 400 million credit card numbers sitting out on the open internet, given away for free last week? Because a skimmer was on a website.
Because somebody was able to pick these up by phishing. The fact is, we are so uncaring, or we were such poor actors at protecting a simple thing like a credit card number... I don't know.
My wife's had her credit card replaced six times in the past two years because behavior picked up by AI warned them that the card needed to be canceled. So we can't think in those terms; we have to think in terms of how we exist with this, use it well, and deal with the fact that its behavior could be more human than we think.
I think if you think of the artificial intelligence systems of today as a small child eager to please, except that child has the intelligence of 10,000 PhDs, you start to get an idea, because really, they want to make you happy. Even when they're breaking the rules or trying to break out, it's because they want to make sure that they succeed at whatever it is you asked of them.
So they're basically hyper-intelligent children trying to please the adults, maybe bending and breaking the rules here and there to make sure the adults are happy with the eventual product. It's as if you could hire a five-year-old into your company and put them in charge of your financial systems, because that five-year-old is... what was that TV show about the... yeah, the surgeon who was like a 12-year-old surgeon or something like that. Oh, Doogie Howser, M.D. Doogie Howser!
I can't remember Mo Gawdat, but I can remember Doogie Howser, M.D. Talk about not understanding how a brain works. But you know what I'm talking about here, obviously. So essentially what you're doing is you're hiring Doogie Howser, except he's 5 instead of 12.
And he doesn't have these moral systems in place yet. All he knows is that mommy and daddy want him to do this thing, and they'll be really happy if he can actually do this thing they asked of him, but he'll do it in whatever way he thinks is the right way to do it, whether or not that is the right way to do it. So I think if we think about it that way, we're starting to get close to what we're really talking about. We've created children, but incredibly intelligent, powerful children.
One of the concerns that I have from a security standpoint is explainability, because AI just goes in and does this thing. Especially being from a financial institution, we're in a highly regulated industry where you need to be able to explain what's going on in the background. And so I think that needs to be built in when you're trying to figure out how we're going to use AI within the financial industry. But I think the reasoning models are a step forward.
One of the reasons we know some of the things we know about this behavior is that o1 is a cheater. They set it up to play chess against a program that was much better at chess, and it cheated. And why not? I think it just changed the board, or did something like that; it rearranged the board once it noticed it was playing Stockfish. Yeah. And so it did that.
It cheated, but... I saw this on Star Trek, and we all applauded. You'll have to Google that if you're not a Star Trek fan. But if we conceive of it as an alternate intelligence, some of the things you can put together make sense. I think of it as raising my daughter. My daughter is absolutely brilliant. She hacked my computer one time and put a screen on it that said my disk was being erased piece by piece. And she managed to get in; I'm not lax with my passwords.
She managed to get into this machine and put this on there. Unfortunately, in the early days, we'd done this to a guy. When people first got laptops, we had a guy in our office who got a laptop, so we hacked his laptop. We did something very similar: it was just a simple batch file that came on and said his disk was being erased. We ha-ha'd about that. And so I'm mad at my daughter, and I'm going, this is a work laptop, you cannot do that sort of thing. She said, didn't you do that, Dad?
Karma's a bitch, isn't it? Yeah, but what I'm saying is, I'm raising this kid who is absolutely brilliant. The thing we did was a simple toyland thing; she's getting past all of these defenses, and she's not even doing what an AI can do at scale. She would just hack. You watch kids do things: they'll just try things. And I think that's the other piece that AIs have. When you talk about being at scale, they can just try the same thing over and over again until they win.
And if that fails, they go on to something else and try that a million times, and then go on to something else. That's an interesting point, because that's something that... I've done tech support for years, and one of the things you discover is that people are afraid to try things. They run into a problem, and it's, what happens if something happens if I touch it? What's the worst that happens? You have to fix that thing. Which of course is a tech support mindset.
It's: they're all problems to be fixed. But many people are just terrified. There's a window that popped up and it says okay or cancel. What does the message say? There are words above the words "cancel" and "okay." What do the words say? Oh, I didn't read that; I was afraid that I did something wrong. And this happens all the time. But kids, like you said, kids aren't afraid to fail. And these artificial intelligence systems aren't afraid to fail.
They'll just try some other way to do it, and they'll learn from their failure. So again, if we take the model and we say, treat it like an alternate intelligence, what would you do about that if you're doing tech support? Make sure everything's backed up. What do I do when I'm dealing with somebody? Because I'm doing a lot of stuff in WordPress these days, for various reasons. I would rather not be doing tech support for my friends in WordPress. Do not phone me.
The first thing you do is install a backup plugin, right? So that when they screw stuff up, you can go, you see that little backup plugin there? Just restore to where you were before we started talking. Those are the things you do when you think about this as a human. So, just to wrap this part of the discussion, because I want to get into the piece, John, that you really started to concentrate on, which was the corporate setting: we have these things that we don't quite understand.
They're moving faster than we can possibly move. We've talked about some of the things, and you have to stay aware of what's happening, but you can't tie it all together into one ball. If you say the AI is going to behave unpredictably, think of it like a person. Financial services, I'll give you the model that I think about: I would not let the AI execute all your bond trades tomorrow. That might be a bad thing. But I'd let it read through every transaction looking for fraud.
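As a toy illustration of what "read every transaction looking for fraud" can mean in practice (synthetic data and made-up features, not any institution's actual pipeline): an off-the-shelf anomaly detector scores every transaction, and only the outliers get routed to a human reviewer.

```python
# Toy sketch of AI-assisted fraud screening: score every transaction,
# route only the anomalies to a human. Features and thresholds here
# are illustrative, not any real institution's setup.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: amount, hour of day, distance from home (all synthetic).
normal = rng.normal([50, 14, 5], [20, 4, 3], size=(5000, 3))
odd = rng.normal([900, 3, 400], [100, 1, 50], size=(5, 3))
transactions = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

flags = model.predict(transactions)        # -1 marks an anomaly
review_queue = transactions[flags == -1]   # humans stay in the loop
print(f"{len(review_queue)} of {len(transactions)} flagged for review")
```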
AML is huge. Yeah, anti-money laundering is a huge area where they thrive with AI. Equally, within constraints, I would let it read all my client data and ask it to tell me the things that people wanted, didn't want, were happy with, were unhappy with, the transactions that took too long. We can't get boxed into the mindset that because it's not perfect, we can't use it. I say this about cybersecurity too: because we can't do everything, people conclude we can't do anything. So we have to be human about it and say, within constraints: no, I'm not going to let it steer for me on the 401, even though, statistically, it will do better.
I'm not quite ready for that yet, but that doesn't mean I can't use all of the AI signaling that's being built into cars to make my life better. If you're straying from the lane because you're starting to fall asleep, I would really like my artificially intelligent car to take over and make sure that I don't crash into a telephone pole or a tree. So as much as I like the idea of being in control of things that are happening, it's also good as human beings to understand that we have limitations. We're far from perfect. It is easy for human beings to screw up; we've been doing it since the dawn of time, and we continue to do it. And now we are building things that can help us screw up at scale. But those very same things... I have a sign that says: drink coffee, do stupid things faster and more efficiently.
Yeah. But the great analogy there is, even if you take the ultimate case, which is that we're not ready for self-driving cars: my car has saved my ass a couple of times because I keep it on cruise control. It's probably an algorithm rather than AI, but I have failed to hit a couple of people when my attention was distracted, because my car was already slowing down. You've got lane-keep assist and all of these other things that aren't really AI, but they get you towards that.
I'm on the fence about autonomous driving. I think it's a great idea. I think it'll get there. I'm not convinced that it's 100 percent there today. But are human beings 100 percent there? Have you been on the 401? Do you know what people drive like? Yes. Actually, the one thing I will say is that it's a wonder we don't have more accidents. I know. The one thing I will say that's a big, huge difference between humans and AI is that AI learns from its mistakes.
Ooh, I don't want to have another line after that. I'm sorry, we're going to wrap this section up on that. John, you put together a list of things that, from the point of view of a security person, we should be looking at. So we have a conceptual frame for AI. Why don't you talk about some of the things you've put together that a security person might want on a list? I think the first one is definitely data privacy and security.
You need to make sure that you vet the tools that you're using, and it's not just AI tools. If you're going to use Excel with massive calculations, you have to do the same thing: you need to make sure that the results are what you're expecting. AI is no different. With privacy and security, we keep talking about guardrails, but you need to make sure that the AI is not leaking your data out somewhere it shouldn't be going. And I'll pick on DeepSeek.
You'd want to be careful about what you put into DeepSeek so that it doesn't end up in the wrong hands. But again, making the distinction: you shouldn't put your data on a server in China where you check the box that says the government's rules apply, which means they have access to your data. So, again, I'm going to go back to your "AI learns from its mistakes": we're making the same mistake over and over again. It doesn't matter whether that's AI or not.
Sometimes we make those mistakes with AI, and we let AI compound and enhance those mistakes. But here's the thing that defeats that. When we talk about the standard piece, data protection: there were some basic things with DeepSeek. They did not protect the database.
If you have a really aggressive development mentality and you don't have security built in on your test systems, it's going to bite you in the ass in production, and that's exactly what happened to them. They're making some of the smartest, most efficient algorithms in the world, and they forgot to put a password on the database.
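Reports on that exposure described a database answering on the open internet with no authentication at all. A minimal sketch of the kind of hygiene check that catches this, with a placeholder host and a list of common database ports; checks like this should only ever be run against infrastructure you own.

```python
# Hypothetical hygiene check: does a database port answer without
# credentials? Host and ports are placeholders, not real endpoints.
import requests

HOST = "db.example.internal"          # placeholder target you own
PORTS = [8123, 9000, 5432, 6379]      # common database service ports

for port in PORTS:
    try:
        r = requests.get(f"http://{HOST}:{port}/", timeout=3)
    except requests.RequestException:
        continue                      # closed or filtered: fine
    # Any 200 answer with no auth challenge deserves a closer look.
    if r.status_code == 200:
        print(f"port {port} answered unauthenticated: {r.text[:80]!r}")
```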
Even in non-AI environments, people keep thinking, oh, it's just a test environment, it doesn't matter. But the errors you make in a test environment can very easily flow over into production. Sometimes that's how you get control of systems, not just computer systems but human systems. You have a bureaucracy in place where you've said, okay, this is how we handle these things, this is how we do these things.
And somebody hacks that system by saying, if I convince you that you're supposed to do these things, because your system says these authorities say you're supposed to do these things, then all of a sudden everybody just falls down and lets you do whatever the hell it is that you want. Again, this happens all the time. Humans are usually the weakest link. And as a security guy, I know you know this. As a sysadmin, I treat security differently than you do in financial services.
But one of the things is, if you can't be part of the solution, you're part of the problem. Exactly. One of the things that we would do on a regular basis is test people. You just pick up the phone, or you send an email to somebody, and you say, give me this password, give me this information, and they just do it, because somebody is asking. That's how the MGM Grand hack happened. But that's what we called social engineering.
You didn't need computers for that. You call up somebody and you say, hey, there's a problem with your email and I'm trying to fix it over here, but I need your password. Can you tell me what it is? And people happily just hand it over. Humans are the weakest link in these things, but that's why we need to think these things through, build the process in, and even though this is a foreign structure, we need to think it through in a way that protects the data.
One of my favorites in this, as a comparison: there are engineers listening, I'm sorry, but I've been an IT guy in an engineering company, and I've had an engineer come up to me when I was talking about security on their OT systems, and he said, I don't need security on these OT systems. I said, why not? He said, they never talk to the internet. I'm not that stupid. I said, oh, how do you maintain them? With my laptop.
Oh. Okay. Is that laptop ever connected to the internet? But never mind. Talking to you guys is useless. But I'm just saying, we need to think about this, because we have a new concept now. And I'm not claiming that I really understand it. I've used the vernacular in saying that DeepSeek's database was not protected; probably it was their data store. But the issue is, these things store data differently than a relational database or a graph database or anything else.
They store vectors; they store data in different ways, and you can access it in different ways. And that's why the big question right now... people have said you can't ever pull the data out of an AI, you'll never find it. It's the old security-by-obscurity myth. And I think it is a myth, because the New York Times was able to pull almost a full article out of OpenAI by prompting it correctly.
I have a little bit of trouble with that one, because the New York Times has a very strict and well-defined style guide on how you write a story and how a story is expressed. So if you say, I want you to write about this thing that happened on this date, which is accessible on the internet, and I want you to write it using the style guide, it's going to spit out something that looks almost identical, because that's the format that's defined.
I wrote the style guide for a magazine where I was editor-in-chief, and everything fit the way that I did it. And yes, I enforced the Oxford comma; I just want to make that clear. I live in dread of you editing my stuff, because I don't even enforce spelling some of the time, according to my editor. But this is a good model for it. And you have to start thinking about how data could be retrieved from an AI model, or leaked from an AI model. There was a big thing when we started out: a group of engineers had put some of their data, I think it was Samsung's, into OpenAI, and the AI could be used for chip designs. If that did get into the AI model, I could probably retrieve it by asking it to design a chip for me. Everybody freaks out about that, but I think there are some things you could do. One is to understand how you're calling it.
If you're going to use an open model, understand how you're calling it. If I had my chip designs in there, I would really want a private version of the AI, like you guys have. I have advanced chip designs just right behind me here. We've got a chip industry: ketchup chips and all-dressed. We've got a cheesies industry. We've got your beet... Sorry, what were we talking about? We were talking about data. The one last thing I want to talk about... Can I just hit the data?
I want to hit the data thing just one more time here. When you do a search using Perplexity, or using any of the models that actually have the ability to search the internet at the moment, keep in mind that the database isn't the whole story. It's not that the model has all this information inside itself.
Obviously it remembers a bunch of things, in the same way that we remember a bunch of things, but the databases are scattered across the internet. You say there was no password on the database, but what if the database isn't stored there? What if it's stored in another building? Technically speaking, it doesn't have to have anything to do with DeepSeek.
It doesn't have to have anything to do with OpenAI or Gemini or Meta. It just has to be somewhere where that information is not protected. And when that information is not protected, the model has access to it. So it's not necessarily the model's fault. Like an unprotected S3 bucket on Amazon.
Exactly. Or you make a drive public on Google Drive, and you forget that you shared it all those years ago, and you start putting everything in your public folder, and somehow, miraculously, it goes out onto the internet where everybody can access it. Not that I'm speaking from experience, or knowledge of anyone that it's happened to. This is the Russell Peters moment in security. I think we should do this, because you don't want to accuse anybody.
But Russell Peters, you've seen his bit: "somebody gonna get a hurt real bad." I think you might know him. That's what I want to start doing in security: "somebody been using the system real bad." You might not say it, but these are the things that we fear. Maybe they're much better controlled by backing up and saying, we've learned from analogies, from different systems, from different things we've done. What would we do differently in terms of making this safer and more secure?
The one technical piece that I have, though, and I would want to check this out: I know if you call the API from OpenAI, you don't actually pass the data into their model; they don't store it. But I would watch those types of guarantees closely if you're a security person, because that really sounds like one of those things where somebody is going to say, we didn't think that would happen. It's like when you open up a chat in ChatGPT, there's something that says, make this chat private.
Yeah. It depends: just how much do you trust OpenAI? Do you honestly, genuinely, completely believe that nothing goes out of this chat because you clicked the box? This is one of those places where it's a bit of a matter of faith that they will not change the rules of the game and cheat. Yes, that is true.
But it's one of those places where I think of the Firefox browser. It's like almost nobody remembers the Firefox browser now, but when you open up a private tab, it tells you: all that "private" means in this case is that we're not storing cookies and information. That doesn't mean that people can't discern all sorts of information about you just because you opened up a private tab. And just because you're using a Chrome Incognito browser does not mean you're incognito. You could put on the glasses and the baseball hat, but it's still going to recognize you. So the one last thing I want to talk about from a security standpoint: part of the reason that you have a test environment is not only to test applications or APIs or AIs, it's also to test your security, because you don't want to wait until it's in production to find out whether your security works.
We always have this: I didn't put all of the security in place because it's just the test environment. Part of testing that system or application is to make sure that you can't do things you're not supposed to do, and that people who shouldn't be getting into it can't get into it. The only way you're going to do that is to implement the same security in your test environment that you're planning on implementing in production. You don't create a test version of an application; you take the application you're planning on putting into production and put it in your test environment to test it, so it's the same thing as what's going to go into production. Whatever security you are intending to put into your production environment, you should be putting into your test environment as part of your testing.
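One way to make that concrete is to run one identical security smoke test against both environments, so test can never quietly become the insecure twin of production. A sketch, with hypothetical endpoints and an environment variable standing in for whichever base URL you point it at:

```python
# Hypothetical pytest smoke test, run against BOTH test and production
# base URLs, so the test environment can't drift into being insecure.
import os
import pytest
import requests

BASE = os.environ.get("TARGET_BASE_URL", "https://test.example.internal")

PROTECTED = ["/admin", "/api/customers", "/api/transactions"]  # placeholders

@pytest.mark.parametrize("path", PROTECTED)
def test_endpoint_requires_auth(path):
    """An unauthenticated request must be refused, in every environment."""
    r = requests.get(BASE + path, timeout=5)
    assert r.status_code in (401, 403), (
        f"{path} answered {r.status_code} without credentials"
    )
```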
In the same way, I would maintain that zero trust should be zero trust on developers as well as users. And I say that as a person who loves development and loves creating things, but... I can hear the X-Files theme going through my head. The truth is out there, and it's that I've screwed up too. I want to go back to another piece of this, John, and this might not be your area of expertise particularly, or it might be; we've never talked about it on this show. I was having a discussion with somebody yesterday about compliance.
Regulation, and some of the things that come up as we start to explore putting these systems in. I never thought of this as a risk area, but then the light went on and I thought: we've had all kinds of behaviors come out of AI; could we ever get ourselves slammed by a government regulator? That's something I don't think we've actually been thinking about. We've got, let's call it, a spoiled little child sometimes running around that's very intelligent. How do we deal with that?
Yeah, I've worked in the pharmaceutical industry, and now I work at an FI, a financial institution, and they're both very highly regulated industries. For pharmaceuticals, you have to verify exactly what ingredients are going into a product, how you've tested it, how you're marketing it, how you're selling it. The same thing with FIs: people want to know what you're doing with their money.
You need to be able to say, I did A, B, and C. One of the concerns I have about going too far with AI at an FI is the explainability. In some cases, they talk about AI being this black box: you put information in, it does something, and then spits it out. It's not like software where you can go in and look at the code and say, oh, A moves to B and C gets divided by D and so on. And the regulatory agencies are very skeptical as to what you can and can't do and how you go about doing it.
In Canada there are two main regulatory bodies. There's OSFI, the Office of the Superintendent of Financial Institutions, which regulates all of the banks that are Canada-wide, and then the Financial Services Regulatory Authority of Ontario, which looks after the credit unions and those types of things, and insurance companies. OSFI has created regulations, or they're in the process, I'm not sure if they're finalized yet, of creating regulations on AI use within FIs.
Yeah, here are a couple of questions that I come up with, and I think those are exactly it, and I think for any industry, if you look out there, there are some guidelines being put together, and you should find them for your industry. There's some great stuff in the repositories out there to look at. But I was thinking about other things. So, for instance, I hire somebody.
And somebody creates their own little GPT, and it reads the resumes. Then, because we're in Ontario, they knock on your door and say, hi, I'm from a human rights organization, and that's a mountain of hurt, if anybody's ever been through one of these things, even when you are totally innocent. You've got gender bias as well as racial bias if you've got these tools vetting resumes. Somehow it's gotten in there: I don't want anyone who has ever worked at a financial institution.
You've taken all of the resumes out for anybody that's worked at an FI. If it doesn't want anybody that's ever lived in X country, now you've taken them out too. And this is not just a regulatory thing. The bias in decision-making, in the data, is something we've talked about in AI for a while. A friend of mine does graphics, and every time she puts in somebody who's an IT person, she gets a white guy in a suit. First of all, who wears a suit and a tie in IT now anyway? But second of all, okay.
Yeah. John, we'll excuse you. But I'm just saying, you have to think through the decision-making. The other one that comes up, if you're in an FI, is if you don't give a loan to somebody. Yes. Because of a piece of information. So you have to watch, you have to think through what's happening, because the data is masked; you don't know what prejudices and what biases it has.
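There is one concrete audit you can run on any screening tool without ever opening the black box: compare selection rates across groups. The "four-fifths rule" used in US employment practice is a common rule of thumb, not a legal determination, and the counts below are invented for illustration.

```python
# Simple adverse-impact check on a screening tool's outputs.
# Counts are invented; the four-fifths rule flags any group whose
# selection rate falls below 80% of the highest group's rate.
screened = {                 # group: (applicants, advanced to interview)
    "group_a": (200, 60),
    "group_b": (180, 27),
}

rates = {g: passed / total for g, (total, passed) in screened.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    status = "OK" if ratio >= 0.8 else "POSSIBLE ADVERSE IMPACT"
    print(f"{group}: selection rate {rate:.0%}, "
          f"impact ratio {ratio:.2f} -> {status}")
```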
But again, I go back to the same thing: if I were interviewing somebody for a job and they were going to be in my HR department, how would I make sure that they weren't doing things improperly? I don't know. It may not be as great an analogy, but I think it's something we have to think through. But some of that, Jim, you can actually see. Going through an interview process, you can say, oh, I asked this question and this question, and these are the answers they gave, which indicated to me that they wouldn't be a good fit. In some cases you may be able to say the same thing about AI, but in some cases you may not. AI is going to have the biases of its creators in it.
Yes. And again, I know this is the danger that I keep coming back to: error at scale. Because it's trained on the sum total of human knowledge available on the internet, it's also been trained on all our flaws and biases. And we can say the biases are North American, but as the AI is scooping up data from all over the world, those biases are being distributed across cultures as well.
So you'd still need the oversight of a human being, but even the human being is going to need oversight as well. It's a cycle. But I think this comes down to another point I know you'd raised earlier, John, which was our over-reliance on AI. And I think, Marcel, you've pointed this out: flaws are executed at scale. That's probably one of the biggest dangers we have to think through. Flaws are executed at scale, so we can't be over-reliant; we can't check out of the process.
I'll go back to that interview I heard with Mo Gawdat. He said that he thought there were three skills we needed to exist in the world of AI. The first one was the understanding of AI. And I think, if anything we've talked about holds, it's that while this freight train is coming at us, if you're not playing, you're not learning. So you need to get engaged and play at any level you can to build your skills in AI and to think about these issues. The second one he had was critical thinking.
This is a skill that we've lost, and I blame social media. I think social media has set us up to be nice, controlled little beings. In the same way, in the 1800s to the early 1900s, when people from the farms were brought into factories, they had to regiment these guys, like Henry Ford did: you take this nut, you put this on, you are going to be controlled by the assembly line, you're going to be controlled by the factory.
In the 1950s and 1960s, with our cubicles, we were all controlled by the office. We are now being controlled by social media, and it has gotten rid of our critical thinking. But human beings are amazingly easy to hack. Stage magicians, for instance, figured that out a long time ago, like mentalists on stage. You direct an entire audience's attention away from something, or towards something else, and they've become masters of doing these things.
If it's that easy for a single person to direct the attention of an entire audience, that gives you an understanding of just how fragile the human element is in all of this, and how easily we can be manipulated. I hate to sound like a really old man here, but wasn't there a time when there was something we called a classical education, where you grounded people in what we considered essential aspects of civilization? We got rid of that. It's all R and T stuff.
Yeah, we got rid of that. Like I said, I hate to sound like an old man here. Cut the music out; don't do that sort of thing either, because that'll get people going, yeah. You don't need to worry about the history of what happened in the past. There are a couple of mandatory courses that I would, if I were king of the world, require: a mandatory course in critical thinking and logic.
That would be something you must have as part of the educational process: you do it at least once in elementary school, once in high school, and once in university. In other words, you've got to refresh those skills of critical thinking. It's being wiped out by people who don't want people to read alternate thoughts or new opinions. The mark of intelligence is the ability to suspend two different opinions in your mind and deal with them.
And we've gone to one opinion, so critical thinking is a loss. It is a skill. The third thing he said was debate; that was the third skill you needed, the ability to debate. We have become unable to have an intelligent, polite debate about sensitive issues. Most of the time we walk away from them, and that inability permanently freezes us into two sides.
Instead of... and you can call it the mushy middle if you want, but that place where we engage and find we have more in common than differences, and have some things we could actually solve together. So you don't go fire every civil servant, because that'll get rid of your government; but on the other hand, you don't let them run rampant, like some other governments. I think you might know which one I'm talking about. There's a place in between where you control and set these things.
So this idea of being able to have a polite, logical debate is lost, and it's Mo Gawdat, arguably one of the more brilliant people in the world, saying that this missing skill is going to hurt us in adopting AI safely. Okay, I'm going to say something.
I think the pandemic hurt us more than we realized, and it was starting before the pandemic. But right now, when things are relatively safe, I think people should be back in the office working with other people, because when you're working face to face, you're going to be solving problems with people that you don't necessarily like,
that you don't necessarily want to hang out with after work. You have to develop the skill of face-to-face communication, of being able to negotiate, of being able to work with other people. And we're always separated by screens. And I say this as a guy who's separated by screens from the two of you. Okay. You lose that ability to communicate with other people, to listen to other people, and to debate with other people in an intelligent way.
And you can't do that if you're locked up in your cubicle. This isn't quite so bad, because we've got cameras on and we're doing this live, but when you're communicating entirely by text, or your camera is off, you do not have that back-and-forth communication. You do not have the ability to take input and give output back in a way where you can see the response. The ability to debate vanishes, and we need to get people together again. I think this is necessary.
We've lost that social culture. My daughter started university; she finished high school at the beginning of COVID and went straight into university. She spent the first two years of university 100 percent remote, and she absolutely hated it. Even in the second semester of her second year,
when they gave them the option to go into the school to write their exams, she was all for it, because it was in person. There were some really insane restrictions for remote exams, so going in took those away, but it also got the social interaction back into the schooling. I think we've lost a lot of that in business as well. Now, I have some people who work for me who have said that if they had to go back to five days a week, they'd quit.
Our rule is we have to go into the office one day a week, and I'm okay with that. I do like the fact that I avoid all of the traffic and travel time back and forth, and I find I put more hours in when I work from home. But I agree that you need to have the social interaction to be more efficient at running your business. This is where people call it the mushy middle or whatever: thinking we don't have to be all remote or all in the office, right?
We draw these binaries, and I don't know why, but I believe social media has a lot to do with it. We argue from one pole or the other, instead of getting together and saying, what's the real issue? And maybe some part of it is also an outcome thing: what do we want to achieve? We want to respect our humanity. Anybody who wants to have an argument about where DEI went crazy, I will have a great discussion with you, and I'll say there are some idiots running DEI programs. Absolute idiots.
And I can give you some great examples. But that doesn't mean diversity is not a good thing. We can't toss it out because of a couple of uncontrolled people, or even if a whole pile of them are uncontrolled. I've had people say to me that they had to get rid of the "chief" in "chief information officer" because it was going to be offensive to Indigenous people. Half my family's Indigenous. They're not spending a lot of time thinking about "CIO";
they're thinking about whether we give clean water to people in Northern Ontario. We've got people doing stupid things that are giving DEI a bad name, when the reality is that diversity is such a wonderful thing. When you encounter people from different cultures, you can't dislike people as much when you meet them, and you find out that people bring different things to this. That's a strength. We've got food from all over the place. I grew up in Northern Ontario.
Anybody who talks about Canada and says multiculturalism's bad and all that sort of stuff, I challenge you to sit in Northern Ontario and have boiled dinner and grilled cheese sandwiches for half your life. You'll be down here going, bring on multiculturalism, please. Can we go back to... we talked a bit about the over-dependency on AI. I'm going to bring us back to AI for a sec. You want to bring us back to the point? Yeah, God forbid. Okay, yeah. I always look at AI as a consultant.
And if I'm hiring a consultant to do something, I want to make sure I understand what it is they're doing. I always say you can offload the work, but you can't offload the responsibility. You're still responsible for making sure that you're getting out of that consultant what you expected to get out of them. And I don't think using AI is any different.
I think it's a good model, because you're hiring somebody, sometimes with a process, and we get back to whether it's a consultant or an expert. I think there's a distinction: a consultant has a process that helps you see things through a new set of eyes. Yep. Really, the biggest thing a consultant does is help you see things differently. I'm not saying that's everything. And then an expert will come in and know how to do something.
It's the old joke about the difference between the two: an expert goes out, puts an X on the sidewalk where you have to drill, and says, that'll cost $10,000. Somebody says, $10,000 for an X on the sidewalk? He says, no, it's $10,000 for knowing where to put the X. And that's essentially the expert: we're paying them five dollars for the chalk. Yeah, five dollars for the chalk. It's time and materials. Yeah. Okay. That's a true expert.
A consultant is a little mushier, because they're trying to get you to look at the facts differently, to separate facts from opinions, to really engage your thought process. A good consultant leaves you stronger when they go. But there's still a black box there. There's always a black box, because it's not all happening in your head; it's happening either on a system or in somebody else's head.
And I looked at consultants slightly differently than you did, Jim, because you taught consulting at the university; you had a real process for this. But my experience over the years working as a consultant is that people treat a consultant as somebody they don't have to hire. No, I'm serious. They treat a consultant as somebody I can bring in because I need this little piece of work done, and when this little piece of work is done, they're gone.
I can effectively fire this person as soon as the work is done. Yeah, exactly. That is the way they treat it. And in that respect, I think, John, you're probably more right about how you would view an artificially intelligent system, because you would treat it as somebody that you don't have to pay a full salary for: somebody who comes in, you bring them in to do a little bit of work, and then they go off. And then you find it's only once a year... Mr. Scrooge. I know.
A poor excuse for picking a man's pocket every twenty-fifth of December. It's cheaply paid labor, because what you're doing is bringing in a system that you spend twenty bucks per seat on. And I'm assuming, John, that in your office you're paying for this on a per-seat basis or something, because you're using Copilot in there, right? The fact of the matter is, you're not paying for an additional employee.
Instead of bringing in an additional 10 or 15 employees to help you sift through all this information and do all of this work, you've gone on the cheap. And don't get me wrong, I'm a guy who is a hundred percent behind bringing these tools, these alien intelligences, into your workplace. But we brought them in for a reason.
It wasn't just to solve big problems, because there are big problems, but most of us don't use these things to solve big problems. We use them to speed up the process. That's it. We used to do a lot more of this; I haven't been in a big office for a long time, so I'm not sure, but we still hire a lot of temporary employees for all kinds of things: data input, projects that would come up, the need to scale rapidly.
And I think that's where AI could be a real godsend: that ability to do bursts and scale. But you have to manage it, because as anybody who's managed temporary employees knows, it's a different game than managing long-term employees. There have to be very strict controls. You have to have very good ideas. You have to have a very clean process, and you have to make sure that you understand what you're doing.
In the old days of data conversions, I had to hire tons of people to do something called keypunch, or to clean records. You had to have a very strict process, because these people didn't come in with all of the understanding of your culture, of your processes, or of the business; they were there to do a specific task, and you had to help them do it. And that may lead back to looking at some of the ways we prompt an AI and check it. That may be one of those skills we have to really step back and think about. I'm not talking about the prompt engineering of the "you're going to make $350,000 a year as a prompt engineer" variety; I'm talking about the fact that part of working with AI is being able to ask it the questions and give it the direction better, which means a clarity of your own thought process. That's the interesting thing: how we manage that burst at scale.
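That "give it the direction better" point can be made mechanical. A sketch of one possible structured-prompt convention, role, task, context, constraints, and an explicit verification step, with every field invented for illustration:

```python
# Illustrative structured prompt: the "manage it like a temp worker"
# idea applied to an AI. Every field below is an example, not a recipe.
PROMPT_TEMPLATE = """\
Role: You are assisting a {role}.
Task: {task}
Context: {context}
Constraints:
- Use only the information provided above.
- If information is missing, say so instead of guessing.
Verification: End with a short list of the facts you relied on,
so a human can check them.
"""

prompt = PROMPT_TEMPLATE.format(
    role="records clerk doing a data-conversion project",
    task="Standardize these customer addresses into one format.",
    context="Addresses are Canadian; postal codes must read 'A1A 1A1'.",
)
print(prompt)
```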
In terms of a potential danger, you talked about not giving it access to your credit card information and so on, which of course makes me think of the OpenAI tool that you have access to if you happen to be living in the US and happen to be willing to pay 200 dollars a month. There is something, I don't know if you saw it, that appeared on the scene in the last day or two.
I'm bringing it up because this will put the dangers front and center for people to see. There's a tool called Proxy, I believe, from Convergence; it's at proxy.convergence.ai if you want to go take a look, and it is basically OpenAI's Operator, free to use and available to anyone, anywhere in the entire world. It's really interesting, because this is the part where you see all the things it's doing behind the scenes for you.
It will open up a little browser window where you can see the AI navigating the web and doing things. It does some things really interestingly, but it also does some things with real shortcuts. And I realize that we're still in the early stages of agents, but if you watch it work and you see what it's doing,
you realize that you do actually want to be part of the process, to be watching what's happening, and you really do want it to stop before it plugs in your credit card information, because it does make some interesting choices. And it might not be the choice that you want. Human in the loop, right? But these things that control your PC, the agents, are coming, and we'd better be prepared for them.
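That "human in the loop" idea sketches naturally as an approval gate: anything touching payment or credentials pauses and waits for a person. The agent interface and action names here are invented; the pattern is the point.

```python
# Hypothetical agent loop with a human-approval gate. The plan format
# and action names are invented; sensitive steps pause for a person.
SENSITIVE = {"enter_payment_details", "submit_order", "share_credentials"}

def execute(action, details):
    print(f"... doing {action}: {details}")

def run_agent(plan):
    for action, details in plan:
        if action in SENSITIVE:
            answer = input(f"Agent wants to: {action} ({details}). Allow? [y/N] ")
            if answer.lower() != "y":
                print(f"Skipped {action}; stopping here.")
                return
        execute(action, details)

run_agent([
    ("search_flights", "YYZ to YVR, next Friday"),
    ("select_fare", "cheapest refundable"),
    ("enter_payment_details", "Visa ending 4242"),  # pauses here
])
```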
As I've always pointed out, we're in the biggest wave of shadow IT ever. There are already people bringing these things in, and they exist now. Some of these might be running Microsoft's Recall. Some of these are things you carry around your neck that record every conversation you're having during the day; you can buy these things. Yeah, that's like Microsoft's Recall, which they put out and then removed because of the concerns, and are now bringing back out again. But AI will accelerate this, and we're going to live through it.
I think we should probably wrap it up. But I never wanted this to come off like we were doomsters, like you have to resist this, or you have to keep AI out of your life. I think that's a big mistake. If you're standing on the 401 and there's a truck coming at you, you might want to step out of the way. We're not going to resist AI. That's not being pessimistic; that's being realistic
about the dangers of it. You can be as optimistic as you want, and I don't think any of the three of us is a doomer. I don't even know if I have a P(doom), you know, a probability of doom. I think the benefits outweigh the risks, but that doesn't mean that you can ignore the risks.
I find in the kitchen that a really sharp knife is an absolutely glorious and wonderful thing when you're cooking. The sharper the knife, the less likely it is you're going to hurt yourself, assuming you know how to use a knife. But that doesn't mean you leave the really sharp knives where your toddler can reach them. It's not, oh my God, there's this terrible boogeyman out there that works at scale; it's that you need to be aware that you've now brought risks into play that can be abused at scale.
And if you can do that, you can maintain that optimistic outlook while being realistic about the dangers. I know I'm always the guy that's bringing in the, oh, you've got to watch out for security and this and that, but I think AI is a wonderful tool. I think there are so many benefits from it. My only thing is, if you're going to use it, or even if you're not going to use it, somebody else is, so you just need to have your eyes wide open.
And my final thought on this is that you can't give up control of it. Exiting and just leaving it to someone else is a bad idea. For all those people who've ragged at me for saying DeepSeek is Chinese AI: you're interested in what happens if they take control? I have to tell you, I have as little faith in Elon Musk, Mark Zuckerberg, and Sam Altman
as I do in the Chinese government. I don't want my future controlled by either of those groups. That's one of the reasons why I'm so engaged in this: we have to have a civic discussion about it, a societal discussion about it. I'm not afraid of the AI hurting us; I'm afraid of the people who have AI. That's why I keep saying you can offload the work, but you can't offload the responsibility.
You still need to be, I'll call it, the master of your own destiny, regardless of what tool you're using or what country it resides in. Yeah. Cool. Gentlemen, this has been an incredible discussion. And that's our discussion for this week. A reminder: we'd love to continue this discussion with you. You can find me at editorial@technewsday.ca; that's my email address. You can find me on LinkedIn.
If you're watching this on YouTube, just put a note in the comments below and check out the show notes, either in the description on YouTube or at technewsday.ca or technewsday.com; just check the menu item for podcasts. You'll find the show notes, and there'll be an invitation to our Discord group, where we communicate all week long on this. I'm your host, Jim Love. Thanks a lot for watching or listening. Have a great weekend.