
Questions Executives Should Ask About AI

Feb 15, 2025 · 1 hr 1 min

Episode description

Unpacking AI: Executive Insights & Essential Questions

Join us in this special edition of Hashtag Trending and Cybersecurity Today as we dive deep into AI with technology consultant Marcel Gagné and cybersecurity expert John Pinard. We discuss the necessity for executives to understand and implement AI despite limited knowledge, the need for question-based learning, and the significance of a comprehensive AI bootcamp. From real-world applications to the evolving AI landscape, this episode provides a nuanced view on leveraging AI in business while addressing the critical question of safety.

00:00 Introduction and Welcome
00:19 Meet the Panelists
00:38 AI in the Executive World
00:54 Bootcamp for Executives
01:17 Starting the Discussion
01:44 Understanding AI Challenges
03:00 The Importance of Asking Questions
07:45 Historical Context of AI
11:30 Practical Applications of AI
15:06 Generative AI and Its Impact
23:09 Future of AI Models
30:39 Introduction to Google Recorder App
31:11 AI for Meeting Transcriptions
33:18 AI in Marketing and Business Applications
34:07 The Future of AI in Business
36:03 Debating AI's Potential and Limitations
38:09 Advanced AI Models and Their Uses
40:12 AI in Consulting and Decision Making
49:47 Risk Management in AI Implementation
59:34 Final Thoughts and Wrap-Up

Transcript

Welcome to this shared edition of Hashtag Trending and Cybersecurity Today. If it's Saturday and you're listening to this, or even Sunday, welcome. If it's Monday, this is your reminder that we're off for the holiday and we'll be back again on Tuesday morning. And now for this weekend's show.

For those of you who have heard this in the past, this is a discussion group that meets weekly featuring Marcel Gagné, a technology consultant who wrote the famous Cooking with Linux blog; John Pinard, an executive with a financial institution and an expert in cybersecurity; and, of course, me: podcaster, author, and consultant. Today's discussion came out of a study that revealed that most senior executives say they have to, and are going to, pursue AI.

But many of them, in an unguarded moment, would tell you they really don't understand it. And above all, they don't know what questions they should ask. So that's where we started. In preparing for this, I put together notes for a workshop that we'll be offering remotely: a private bootcamp for executives on AI. If you're interested in that, you can contact me at editorial at technewsday.ca. Just put "bootcamp" in your subject line. And now, the music's starting.

Welcome to Project Synapse. My guests today are Marcel Gagné. Marcel, welcome. Thank you. Good morning. How the heck are you? And John Pinard. How are you doing, John? Doing well, just finished shoveling. Now I'm all set for the podcast. Shoveling? You're back at work? No, I'm shoveling that later. Bam. Bam. Okay. There'll be a little trimming on that, I'm sure, but who knows? Maybe we'll just have fun this morning.

Now, the topic. We've talked about this, and both of you have mentioned it in different ways to me. John, you said we should do something a little more high level. And Marcel, you said: when are you going to actually live up to what you've scheduled and talk about the state of the nation in AI? Depending on how you present that, I came up with this crazy idea, and it came out of a survey that was done by Cisco.

And just as I was reading this thing, as you go through these surveys and studies, things jump out at you. And one of the things that just leapt out at me from the page was: 97 percent of CEOs plan to integrate AI into their operations. That sounds good. Only 2 percent feel truly prepared. Now, if you've heard your CEO making a speech, they all sound a little more confident than that. That's because you didn't ask them anonymously. We had a meeting yesterday with a great guy.

He wants to do more in this, but he doesn't know what to do. And this study came up with something else that I thought was really interesting: 74 percent of these CEOs, and this is 2,500 CEOs of companies over 250, a significant group, believe that their limited understanding of AI hinders their ability to ask the right questions in the boardroom.

And a light went on for me. It's not like I've never been guilty of this, and maybe you guys are better people than me, but for a long time in my career, I could catch myself not asking a question because I thought it was going to make me look stupid. And that's every day for me. Yeah, I always ask stupid questions. Yeah, Marcel, you're special.

My mother used to say that he's special. No, but it's so true: many of us don't ask questions. I think I was in my fifties before I finally gave up and stopped trying to be the smartest person in the room. Maybe it's just ego, but I think a lot of CEOs are like that. They don't want to appear stupid in front of their people. I eventually gave it up, and I can track it back to one morning

where I was doing this meditation thing, trying to become a better person and all that sort of stuff. And I was thinking about honesty, because Buddhism is about honesty. I'm not trying to sell religion to anybody; I'm just trying to explain why this started. And I finally went: I'm going to be honest, I don't understand this. So I turned to somebody who had used some terminology, and I said: maybe I should know this, but I don't know what you're talking about.

And he could not explain what he was talking about. And I knew this happened. Maybe I'm just not that smart, but I've never gone down that route again. A lot of people, if you take them off the script, are lost. The stuff they've memorized, they can talk to you about until they're blue in the face, but as soon as you ask a question that takes them on a tangent, they can't answer it.

It's like the actor dressed up in a white coat on television talking about a particular drug product or whatever. They're not actually doctors. They have memorized a script. Exactly. They're not? There's a second thing I've learned: oh my God, don't take that stuff. But the other thing too, Jim, is you talked about the survey being about CEOs. I'd say it goes way lower than that: down to VPs, directors, managers. There's an awful lot of people.

I get it all the time at work: people going, yeah, we want to use AI. Oh, what do you want to use it for? Oh, I don't know. Any ideas? And by the way, nobody's making fun of anybody for this; I just want to make that clear. I'm not roasting people for saying they're enthusiastic about it. But now we can get to the heart of the problem, and that is they don't know the questions to ask.

So part of the way I wanted to frame this thing, and I'm calling it a bootcamp for CEOs or other executives, is: what are the questions that you should ask? Here's some basic information; now, what are the questions you should ask? So I'm going to start out with two questions that I got, and I didn't invent these; I'm sure maybe the person I got them from didn't invent them either. But John Thorpe, who was a mentor of mine, was one of the authors of the Information Paradox.

And the Information Paradox said: we're getting better and better at building systems, but we're getting fewer and fewer benefits from them. Why is that? And that was, and I think may still be, a problem, but it was a huge problem in the 1990s, when John wrote that book. And he used to say there are two important questions that you should be asking, all the time: So what? And who cares? They sound rude, but it really is true.

And we got the culture of our company down to that, and I attribute that to John and others who really worked on this. So if somebody came up with a great idea for a system, or a great idea for a function or feature, we could say to each other: so what? Who cares? And that was code for: you have to talk, in real terms, about why you're doing this. Not because it's cool; what's the benefit of it? And I think that's a discussion we need to start having on AI.

But first, people have to have a basic understanding. You can't ask somebody to ask the right questions if they don't have the foundations. And I'd almost add a third one to that, Jim: what's in it for me? And the "me," I think, could be the company, it could be an individual department, it could be an individual user, right? When you talk about AI, sure, we're going to implement AI, but what am I going to get out of it? What's the outcome that we're looking for?

Not the features, not the thing. What's the outcome that we want? And that's a good place to root yourself. If we were going to give enough of a foundation for people to be able to ask these questions, what would you say about AI? I explained it by asking: what is AI and why do we care? And I explained it from the point of view of saying: look, we were really clever with systems from the 1950s to about the early 2000s. We were very clever using two little things, a one and a zero.

We did amazing things. When I started, and I don't know what you guys felt like, I worked on a little computer system for a financial company, cross-country. I was working on my first projects there. We thought we were magicians. We could make stuff happen with just these crazy languages and ones and zeros, and we could do all this stuff, and I found it amazing.

And I used to walk around saying: if you can tell me the process you follow, and it's reliable, whatever you do, I bet we can automate it. And that was my start as an analyst. I didn't realize till later that I was describing an algorithm. And that's what computer systems for all those years were based on: algorithms. A predictable pattern is followed to get a solution. If you can understand that, you can automate that. So why is artificial intelligence different?
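That idea, a predictable pattern followed to get a solution, can be made concrete in a few lines of code. The compound-interest rule below is just an illustrative stand-in for any reliably described business process, not anything the panel mentioned; the point is that the same inputs always produce the same output:

```python
def compound_balance(principal, annual_rate, years):
    """A classical algorithm: a fixed, repeatable procedure.

    Given the same inputs, it always produces the same answer."""
    balance = principal
    for _ in range(years):
        balance *= (1 + annual_rate)  # apply the fixed rule once per year
    return round(balance, 2)

# If you can state the process reliably, you can automate it.
print(compound_balance(1000, 0.05, 10))  # prints 1628.89
```

That determinism is exactly what the next few minutes of the conversation contrast with machine learning, where no one writes the rule down explicitly.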

It approaches the problem differently. It really can look at the outcome, start to back-project how you get there, and start to fill those things in. Take the famous machine learning example: you look at a picture of a cat. You break it down to the pixels. You see a million pictures of cats. Pretty soon you build a routine by which you're going to be pretty reliable at picking out a cat. Why you'd want to pick out a cat, I don't know.

Maybe you should. But I got on the wrong track; I should have started with dogs. It works the same with dogs and cats. But that exact way of seeing things is the way our brains work. You look up at the sky and you see a bunch of clouds and you go: oh, that one looks like a bunny. That one looks like the Battle of Hastings. You had me till the Battle of Hastings. Okay. Yeah. Anyway, we are pattern-seeking creatures.

We actually look for things that look like things that we recognize. And in a sense, that's exactly what we're building with these machines.
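The learn-from-examples idea described above can be sketched in miniature. This is a toy, not a real vision model: the four-pixel "images" and their labels are invented purely for illustration, and real classifiers learn from millions of examples rather than four. But the shape of the approach, label new things by their resemblance to labelled examples, is the same:

```python
from math import dist

# Toy "images": four brightness values each, with labels invented for illustration.
training = [
    ([0.9, 0.8, 0.9, 0.7], "cat"),
    ([0.8, 0.9, 0.7, 0.8], "cat"),
    ([0.1, 0.2, 0.1, 0.3], "dog"),
    ([0.2, 0.1, 0.3, 0.2], "dog"),
]

def classify(pixels):
    """Label a new image by its nearest labelled example (1-nearest-neighbour)."""
    _, label = min(training, key=lambda ex: dist(ex[0], pixels))
    return label

print(classify([0.85, 0.85, 0.8, 0.75]))  # prints cat
print(classify([0.15, 0.15, 0.2, 0.25]))  # prints dog
```

No rule for "cat" was ever written down; the answer falls out of resemblance to past examples, which is the jump from algorithms to pattern recognition that the panel is describing.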

The question to ask sometimes comes out of, if you'll pardon the expression, laziness. When I first went to work professionally in IT, I was working for a company called Honeywell, which at the time was the second biggest computer company in the world next to IBM. I went to work on the support desk, but one of the jobs that I was given was a job that nobody wanted. Absolutely nobody wanted it. It was called software distribution.

So someone would call up and they would say: we're going to be ordering this big system, but we need a compiler, we need this application, blah, blah, blah. And you had to create a big, giant magnetic tape or a big spool or something like this, and then you had to ship that to the customer, and they would load it into their system. Now, this was a horrible job, because you had to go through "this package depends on this package," and it was all on paper.

So this was a tedious process, which is why nobody wanted to do it. I did it for a week. And then the very next thing that came up in my head was: I work with computers. I'm surrounded by computers. Is there a way that I could basically feed all this shit into a computer and have it spit out all of the requirements necessary to create a software distribution project, so I don't have to do any of the work? I spent about another two weeks working on that.

Came up with what became the software distribution system for Honeywell at that time. And the whole process was automated. They want these packages? Push a button, walk away while the system builds everything. And by the way, it wasn't instantaneous; it took hours for this crap to happen. The only thing that I had to do beyond that point was mount a tape or mount a disc or whatever. But I think what we need to do with artificial intelligence is the same sort of thing.

It's: what don't I like doing that I could maybe get the program to do? And that's what we've done every single step of the way in IT when we've automated anything. We look at AI like it's, and it is, so different, and it is transformative, and it works in ways that we've never come across before. But in the end,

it's a question of: what did we see in computers, in the ability to write a program to do something, to automate something, that we could make this thing do for us as well, including writing new programs that we need? So sometimes it's not "how do I internalize or use AI in my business?" It's more a question of "what do I need to do?" And maybe, if I've got somebody that actually knows something about AI, they can help. And sometimes that's what it is.

You bring in a person who knows something and you say: is there a way that we can make this easier by using this new tool that we have at our disposal? The same way people did it 30, 40 years ago, using the new computer tools they had at their disposal to make something work simpler and better and faster. In some ways, take the magic out of it: get yourself a person who knows something on your staff, and that's their job.

Their job is to figure out how we make this work, as opposed to how we build a corporate AI something-or-other. But that's how we build it, I think: getting rid of the monotonous tasks by letting AI, or computers of some kind, do them, and moving the person to areas where they have more value.

I always talk about this at work: being able to automate things like meeting minutes, meeting bookings, all the rest of that, because, quite frankly, they pay me way too much money to be spending time writing minutes after a meeting. If you can take that away and allow people to focus on the key things that you want them to do, I think that's a big plus.

Yeah. I used to love that. When I was at Ernst & Young, I had a partner, David Doncaster; I've probably talked about him before. David was the perfect outcome guy for business, just had it inherently. He'd have 12 of us around a table, and he'd just go around and count up our billable hours. And he'd say: this is going to be the most expensive two hours you've ever spent in your life. Make it good. We don't think about what we're sacrificing, or what we're giving up, when we do monotonous things.

I think that's it. That's a reason: I don't think we actually value some of the things that we automate and the things that we've already been able to do. But as I said, even what you guys have talked about predicates that you know the process, or you've got an expert who knows the process. We came to a different world with machine learning, where it could say: wait a minute, I can spot this pattern. And I can spot it in real time, which is really cool,

because you can start doing things like fraud detection. You're just way faster than a human, and you don't have to be right all the time; you just gotta be close enough. And that machine learning was there. The problem with machine learning, of course, is it's really expensive. You gotta find a thousand pictures of cats or dogs or whatever, or a hundred thousand of them. You gotta show them to it. You gotta tell it each time it's wrong. So you've got an incredibly expensive piece.

As a matter of fact, I think one of the biggest problems with that was just getting the data that you could train these things on, and having the resources to tell the thing when it's right and wrong, for all of this time. Once you do that, you've got a pretty good, reliable pattern detector. But then generative AI came up. I still believe this story that I heard about OpenAI:

they were sitting around, or maybe not sitting around, but they put a lot of information into this original processor, which I think had eight GPUs or something. It was a relatively small thing, and they just dumped a pile of information in, and then they started to see that it could answer questions. And it could talk. And I think for the longest time after that, they dumped more data into it and made it more scalable.

But we saw something that could create. And I don't know what you guys felt the first time you saw ChatGPT; what did you ask it to do? I asked it to write a poem. You guys? To be clear, the first thing that I saw, that I experimented with, that I built on my computer at home, was GPT-2, which was two years before ChatGPT came out. In fact, I wrote an article about it for Linux Journal, teaching people how they could do it at home, if they were into that sort of thing.

The very first thing I did, and for some reason with me it's always a haiku: write a haiku about this. It's always the very first thing that I do, for some strange reason, and I did that back then as well. And then I went on to ask it questions, or to reflect on this topic or that topic. Oh yeah, this was a master of hallucinations. But by the time ChatGPT came out, I was already swimming in this stuff. Even then, it was a game changer.

It was like this incredibly friendly, easy-to-communicate thing. And when I was doing GPT-2, it was all command line stuff, right? You're doing everything on the command line, which guys like us are used to. I don't know about John, 'cause he's a Windows guy. Oh, ow, ooh. Sorry. Zing, yeah. You guys always beat up on me for being a Linux guy, okay? Anyway. No, we beat up on you for being an Android guy. Oh, okay. We have nothing against Linux.

No. But like I said, what made that revolutionary was the idea that you could just sit down and talk to it. And the interface was so incredibly simple: you point a web browser and you start talking, as opposed to loading up and running software and compiling stuff. It was magic. Yeah. And the difference between Marcel and I is my haikus all begin with something that rhymes with Nantucket. You can do haikus with Nantucket? Yeah. Yeah. You asked what the first thing was.

The first thing that I did with ChatGPT was ask it to write a job description for a job that already existed. I already had a job description that I had written, and I asked it to write one for that position because I wanted to compare them. And it was quite impressive. There were things that it missed, and some things that it got wrong.

But for the most part, there were actually a lot of things in there that weren't in the job description we were currently using. I want to be practical like John when I grow up. Wow. He's amazing. I'm just boring. No. You jumped to something you could use in business. Yeah. Whereas I looked at it as: let's eke out the intelligence behind this thing. Let's see if there is sentience behind this program. You went straight to business applications. Good grief.

We should just hang up the show here, Jim, and let John take over for the rest of us. The end. Yeah. But understanding this, this is the thing: however we got there, and there is some great history there and some great experiences, I think all three of us had the same revelation, which is that this could create things. And that was the new experience for me.

So we went from algorithms, where if you know it and you can get the formula exactly, you can automate it, but miss a point, miss anything, and you're doomed. Then you went to pattern recognition, which was: I can find a pattern very reliably, and I can tell you yes or no, or stop or start, but that's about all I can do. And then this magic happened, which is that it could create things. One of the things that served me well when I was starting out was that I understood the basics of computing.

I understood what binary digits were. I understood how it fit together. And I think a lot of us did; if you were really early in computers, you figured that out. That foundation served me for 30 years, because I could sit in a room and say: somewhere in there, something is a zero or a one where it shouldn't be.

I knew that to be true, and I would find it. And now we're in a different world, and it is a different piece, but we still have to understand the fundamentals. And I think one of the fundamentals that people don't get, and maybe this could answer a real question for somebody who's an executive or a manager, or somebody just trying to figure this out, is this: it's a probabilities network.

And by probabilities I mean what it does is absorb a whole pile of things, words in particular, and we refer to them as tokens for the most part when you're talking technically, and just predict the next one in a sequence. When you get down to the foundation, I think that's what people need to understand. And I always say that if you take the expression "sly as a..." Everybody's filled in... Cat, Marcel?

Yeah. Marcel always has to be different. Yeah. But with "sly as a fox," we know that these patterns exist, and the bigger and bigger amounts of data you can ingest, the better and better it gets at predicting these patterns, which explains two things. One, it explains hallucinations: if you're predicting the next word all the time and you get off that track, you will wander off. The thing it doesn't predict, that we started to see, is what I call emergent behavior.
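The "predict the next token" idea above can be shown at toy scale. This is a crude word-counting sketch, nothing like a real neural network, and the eleven-word corpus is made up for illustration (real models train on trillions of tokens), but the prediction objective is the same shape:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus, standing in for trillions of tokens of training text.
corpus = "sly as a fox quick as a fox sly as a whip".split()

# Count, for each word, which word follows it and how often.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word."""
    return successors[word].most_common(1)[0][0]

print(predict_next("as"))  # prints a
print(predict_next("a"))   # prints fox
```

It also illustrates the two points made above: scale (more text means better counts means better predictions) and hallucination (pick a less likely continuation like "whip" and every further step compounds the drift away from the track).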

With all of that happening, how do they store that? They created a neural network, a different way of storing information, in mathematical vectors. And that can create, by doing the same thing: predicting the next token in a sequence. Diffusion models for video are primarily the same, except they really work with pixels more than word tokens. That whole thing brought us a new level of computing. And so it's a little bit unpredictable,

but very creative. And that, I think, is the foundation, the picture, that people need to build so they can understand what they're dealing with. I think they'd fear it a lot less if they had that foundation. I think the other thing, though, is that AI has just become this big generic dumping ground, where whether you're talking about machine learning, or artificial intelligence, or gen AI, or AGI, it's just "AI."

And when you have two people talking about AI, they could be talking about two completely different things that all dump into the same bucket. So I think, for CEOs, for any of us, that can create some confusion, some misunderstanding or limited understanding, because people will start to try to understand AI, and then somebody will talk about machine learning or AGI, and they go: I don't get it. So I think one of the key things is being able to explain the different

flavors, if you will, of AI, so that people can say: okay, there are four different buckets, and I'm focusing on bucket number two. I think this is happening already. In some ways, if we go back to people first throwing a computer on their desk, what they were really doing was learning how to use the mouse and move things around the screen and so forth. But I think at the beginning it was the computer, a la Star Trek: this magic box that does stuff.

And then at some point you realize that there were databases, and there were spreadsheets, and there were word processing programs, and there was a web browser, and there were games that ran on these things, and all of a sudden it starts to differentiate, and the computer is not just this thing, it's just this thing that runs all these other things, on which all these other things work.

And in the case of AI, first of all, there have been artificially intelligent systems of various types for decades now. The magic started when we could talk to it and it talked back to us like a normal person: November 30th, 2022, when ChatGPT broke onto the scene.

But we realize more and more that there are specialized areas. Even when we take a look at a system like Qwen 2.5, which does videos, images, the deep learning stuff, and all these other sorts of things, magically, seemingly all together, it's handing off to other tools in the background. And so this is starting to be clouded for us. But yes, there are tools that are specific to making videos. There are tools that are specific to making images.

There are tools that are really great at doing mathematics, great at doing research, and so forth. And I don't know if you saw it: in our Discord yesterday, I posted the post Sam Altman made on X, where he said that GPT-4.5, and then 5, are, as he put it, weeks to months away. And he mentioned that GPT-5 will be the last of the monolithic models. It's the last one; everything else will be some kind of thinking model.

There are also going to be models that understand that if you're just trying to write a haiku or something like that, maybe you don't need the reasoning models that cost tens of thousands of dollars an hour, or a minute, to run. You can pass it off to a lower layer. And we forget that human brains work the same way. We are not using the whole brain to do everything everywhere all at once. We have areas that control motor movement.

We have areas that interpret what we see with our eyes. We have things that dig through memories to find information that we need. And we're building artificial intelligence, but maybe sometimes we should take out the word "artificial," because what we are building is intelligence. And in many ways, it works like we do. And maybe that's part of what takes the magic away, but maybe it brings a level of understanding of what it is that you're dealing with: a really smart person.

But yeah, we've roughly modeled it on ourselves, whether or not it works the same way. Geoffrey Hinton, who's the godfather of AI, the guy who won the Nobel Prize, the Canadian who did this, talks about an alien form of intelligence, and he does the same thing you did. He says it's intelligence we should be talking about, but it is roughly modeled on what we can do. And this is what I mean about building a mental model for yourself,

so that you can start to ask questions about this. What did we have first? Algorithms. We do that all the time; we have a part of our brain that will process things. The famous version was System 1 and System 2. Daniel Kahneman came up with this idea. He said System 1 is really automatic. It's like an algorithm: I see this, I do this. Really useful for really quick moves.

If you're running through the savannah and you see a lion, you really don't want to sit down and go: oh, let me think about this. So you've got that automatic part of the brain. Then you have pattern recognition, the idea that we can take in all kinds of things, and we can see stuff and distinguish it. That was the first piece of machine learning. Now you add in this idea: oh, I can start to analyze things and predict. I know the most logical thing that's going to happen next.

And when you put that together, nobody quite understands how it works in the human brain, but we actually think; we become intelligent. And we are at that point now. I think we've worried so much about artificial general intelligence and all this sort of stuff. What's happening now, though, is an emergent behavior: the reasoning model, whatever you want to call it. These things are starting to behave in ways we don't yet understand. But they do reason.

And primitively, right now. Primitively? Primitively last year. We make fun of these things. This AI is smarter than half the people you know now. No, it's smarter than 90 percent of the people. I was trying to be kind. Okay, be kind. He's one of those people who says it can't do this, it can't do that, or it won't be able to do this, or something like that. And of course I use the line I like: this is the worst you'll ever use. So you might not want to lean too heavily on that.

Never. But what I suggested to him, basically to shut down an argument at one point, was: it doesn't have to be smarter at everything. It just has to be smarter than you at most things, and it already is. So let's go back to the premise we started with. All of that rambling leads us to a point where there is a really good question to ask if you are a CEO or a manager or somebody else: what type of AI is necessary to do this, and what problem are we trying to solve with it?

And I think you can start to match those up, because I see this all the time, and I think we figured this out because we talk about the cost of compute all the time. For those who are listening, you may not quite understand this; this whole idea of scalability is big, right? They took GPT-2, the one you were working with, and essentially, with some fine-tuning, they just kept making it bigger and bigger. Scalability.

And even Altman, in his latest post, has said they're not at the end of scalability yet. So as long as we can keep making it bigger, it can think. Now, that costs a lot. You've got to have hundreds of millions of dollars to do a training run. There's so much at stake in a training run: if you blow a training run, you blow a hundred million dollars.

So they're trying to work this thing through, and it's fascinating listening to these guys. You make life-changing decisions. In a hundred-million-dollar project, you don't want to get it wrong. I forget the phrase they use: this is the time of your life here; you're going to make a big-time decision. So you've got these huge training models that you're working with, and they're very expensive.

So if you're going to use AI, these fantastic, huge foundation models, you're not going to need that to do the minutes of your meeting, right? You can spend a little time with a relatively small portion of an AI, and Marcel can build it for you in an afternoon, because he's probably paid for all the tools, and it does minutes better than anybody or anything. Yep. So you don't need that. So the question that I think people need to ask is: what problem are we trying to solve?

What level of AI would be appropriate? Now, as you pointed out, Marcel, because of the cost of running one of these things, if we want to solve a big problem, engaging that whole neural network at once is really expensive, just to answer the question. That will get cheaper, by the way; it's not going to stay at that level. But for today, it's very expensive. So you want to use the right tool for the right job.

And I think, Marcel, you were pointing out that OpenAI is really structuring it so that it will go and get the right tool for you. Exactly. If you look at that post from Altman, he said, everybody hates the model picker, and so do we. He actually says that in the post that I put in the Discord. In other words, there will be no model picker. You will have a window.

Here is ChatGPT, and everything will happen through that one interface, and it will miraculously figure it out by itself in the background. I just want to throw in the meeting minutes thing. Okay. So for people that want to walk away from this conversation with something that is damn useful: this is an app that is available. It's a Google app, and it's called Recorder. Obviously it doesn't hear you guys at the moment, but as you can see, everything that I say is being transcribed in real time.

So all you do is fire up this app and put it on the desk in your meeting or whatever. And instead of worrying about the meeting notes or worrying about buying a subscription to Otter.ai, sorry Otter.ai, I really apologize for this, you just put this on the desk and it will identify who the individual speakers are. And it will separate them into conversations, and at the end of it,

you've got the entire transcription, and not just the entire transcription, but broken up according to all these speakers. And if you want to get really fancy, you take the audio file that's generated from this thing and you throw it into NotebookLM. And you tell it to make a conversation, and you use NotebookLM Plus so that you can direct the AI to effectively create minutes of the meeting.

In other words, so-and-so talked about this, and then so-and-so talked about this, and so on. And it's magic. It's already at your disposal. This is so incredibly cheap, and it's already out there. Don't worry about getting John Pinard to take the notes of the meeting. And by the way, you should really have an Android phone for this. Put an Android phone on the table, run Google Recorder, and it'll do it all for you. There's got to be an iOS version of that. We'll find one.

That functions better, I'm sure, Jim. The question is, what do you want to use that's appropriate? And you've brought up another good question. Why is this appropriate? What are we trying to do? Are we making the best use of AI? Because with simple AI you can solve a lot of other problems, if you can get down to which level you need to solve them.

Because some of these things, as you pointed out, Marcel, can be, I wouldn't call it kludged together, but you can put them together using very simple desktop tools that are very reliable today. You don't need to spend a lot of money. And John, I'm talking to your boss here: if you think that 20 bucks a month is too much for John to help him with those minutes, we'll have a bake sale for him once a month. No, but it's trivial. GoFundMe. That's right. Don't forget my GoFundMe.

I need more money. But the cost is trivial. This was one of the first things that I did with AI: get rid of notes and minutes and things like that. The second thing I did with AI was to get rid of marketing writing. I used to do a lot of marketing writing. I don't do it anymore. If you read anything that's marketing for me, including a press release for my new book, I have AI do it. Why? It's formulaic. It works perfectly with even the earliest versions of ChatGPT.

And if it makes a mistake, it just makes you sound better. That sounds like a marketing person to me. So the questions are: are we using the most appropriate tools? And you should be able to ask why, and what you need to do, and how it fits, because you don't need to get to the Zen of AI to get incredible benefits. As a matter of fact, I'm going to make a statement, and you guys can argue with it.

I think that the biggest danger that happened last week in terms of AI, if people were really smart, was: you now have an open AI, DeepSeek, which can do 90 percent of everything you need in a company. I really don't see anything else that AGI or some super-smart AI is going to do corporately. And how do I know that? Because we've run these companies with human beings, and it's as smart as we are for those things. I really don't see the necessity, commercially, to exceed that.

And that's the latest question that's going through my mind. I'm going to disagree with you, but I hope you're right. That's why we have these meetings, not just so I can listen to myself talk. I listen to myself talk and I bore me. Oh, really? I look at myself in the mirror and I think, God, this is some of the best conversation I've ever had. Before Marcel jumps in, I just think there are certain purposes for, I'll call it, higher-level AI, right? The reasoning models and things like that.

But I think, Jim, you're right: for probably 95 percent of what a business needs out of AI, they can get it out of a ChatGPT-4o kind of thing. Sorry, John, I don't want to interrupt. I believe we're at the reasoning model. I think we're there. I think that what we saw was a reasoning model. It has emergent behavior and it can learn, and that, I think, is a foundation. You do need that before you say you've got almost everything you need.

So yeah, like an o3 or the DeepSeek reasoning model. But to go beyond that, obviously there will be certain places where it's of benefit. But yeah, I think you can cover off almost everything that anybody could need in business with what we already have. Marcel? Marcel? Okay, yeah, sorry, I thought you were going to go further with that. But, okay, so let me just disagree. You're going to blame him for you making a lousy argument, okay? No, on the contrary.

I was waiting for him to give me the punchline. But I disagree, partly because, if I translate this entirely into human terms, okay? You've got the machine. It doesn't need to be any smarter than this. But often, if you put a lot of people together and you give them all little bits of things to work on and to think about towards a common project, you get a lot more work done than if you throw it in the hands of one person and say, I'll come back in six months and see what you came up with.

As much as I like to think that I'm really smart and so forth, I don't think I'm as smart as the 20 smartest people that I can gather and put in a room together to discuss or work through a problem. I don't believe that for a moment. And I think it's the same here. I think that there are places we can still go from here, places that will incorporate the idea of models talking to other models, for instance.

Not just this one model that can do these things, but essentially bringing 20 big experts, super-experts, into a room together. Which is one of the reasons I don't think we're going to get what's been called a singleton. The singleton is the one AI to rule them all, that eventually takes over and is the singular AI for the world. And I think that's bullshit.

I think what we're gonna wind up with is a world with a thousand, a hundred thousand, a billion AIs, all communicating with each other, some obviously higher up on the intelligence scale than others, and they'll pass the information on. And I think there's gonna be this give and take between all of these things communicating.

I don't disagree with you. And I think there are uses for more intelligent models than we have now, some of the things in medicine and chemistry and advanced science. The scary thing is they're going to be used for defense or offense, depending; everybody calls it defense. I hope it stays that way. They're going to be used for modeling. They're going to be used, potentially, in some types of governments, for controlling populations. There are going to be big good and evil uses of a superintelligence.

But the point that I was getting at is, if you're a CEO or a manager or whatever you are in business right now, and you're waiting for the next level of AI before you think you can use this to run your business effectively, you have made a mistake. The question you should be asking, and again, I wanted this to be a question show, is: why do I need more than I have today? That should be a huge question. Why can't we be doing this today?

And I think that's a reasonable question. You absolutely can. First of all, as my old buddy Sun Tzu once said, in times of peace, prepare for war. And offense and defense, I think, are basically the same thing. Much as I'd like to think otherwise, I think we can translate that logic into anything in the real world. It's when things are good that you gather your crops and you can goods, because it's going to be bad at some point.

You always take advantage of the fact that things are good right now to build up for the time when things are not quite so good. There are people who I actually trust quite a bit in this business who have started using the Pro plan, because somehow they've managed to cough up $200 a month. And the reasoning abilities, yes, John, the deep research reasoning abilities of OpenAI's Pro plan are apparently mind-boggling.

It's not the sort of thing where you say, hey, what's the weather today, or create for me a haiku or something like that. Basically you lay out what it is that you're looking for, and this thing will actually have a conversation with you: I'd like to clarify this; do you need information on this sort of thing? And then there's this back-and-forth while it tries to figure out what it is that you're trying to achieve. So there's a question-and-answer period.

And at some point you both agree, you and the AI, that you have put in all of the things that you need, and at that point you push the button and it says, let me go away and think on this and work on this; I'll send you a message when I'm done. Google has their deep research as well, but every indication is that Google's is a pale imitation of OpenAI's deep research at this moment, which again is only available on the $200 plan.

But if you're in any kind of a business that understands the concept that every once in a while you need to spend money, that $200 a month seems amazingly cheap.

For the ability to effectively brain-dump this stuff and not just get an answer spit back out, but have something that will go out and research papers for you and look through websites and look at processes that have been done in the past and put it all together into a report that you can then throw into another AI, if you want, to analyze and break it down in other ways and so on. So I guess what I'm getting at here is there are two things.

One of them is, you do this yourself: you spend the 200 bucks a month and you actually go in with it. We spend 15 minutes looking at a house that we're going to spend a million dollars on, and then we spend a month researching the best computer system we can get for $600. We have these weird ideas about what's valuable in terms of time and money. And this is one of those places where the cost seems high, when you look and say all the other models are like 20 bucks a month or 15 bucks a month.

I'm going to spend 200 bucks a month? That's fricking insane. No, it's not insane if you're in a business and you're trying to make money and you're trying to make things better, faster, stronger. Just go with it. 200 bucks is cheap. And an app that can record your meeting notes is cheap. But in the scenario that you're talking about, Marcel, and I agree completely,

where there's a need, spending $200 a month that eliminates the need to have 10 people spending three months looking through all of this research makes total sense. But you wouldn't go and deploy the $200-a-month plan to everybody in your company. That's not what I'm suggesting at all. I know, but that's what I'm saying: it's a specialty need. What do you want to accomplish? What do you want to achieve? In some cases, you might need that $200-a-month plan to be able to do that.

In most cases, you can probably get away with the $20-a-month plan. But the smart CEO or CIO is going to have this available. Every CIO, $200 monthly. They're going to have that tier available to them if they're even remotely smart. Think about it. And if they're a little tiny bit smarter, they're going to have a person who understands this stuff, who looks over their shoulder and says, maybe you should think about this a little bit more. Maybe you should think about that a little bit.

Jim was talking 15 minutes ago about this guy he used to work with who would count the number of consultants around the room and say, this is going to be the most expensive two hours. This is a $200-a-month consultant, for God's sakes. It's cheap. Absolutely. McKinsey and other firms, including Booz Allen, should be quaking in their boots about this, and they'll be the first ones to bring it out. This is a real game changer.

And if you're a consultant and you don't have this in your toolkit, shame on you. I did some research for a client of mine. I probably will, when it comes to Canada, get the $200-a-month plan if I'm still doing consulting. I did consulting for that same guy we had the meeting with. It took me an afternoon to do two weeks' worth of work: to find out who his competitors were and to come up with a reasoned analysis of their products from what was publicly available and all of that.

And I gave that to him, and it took me about two hours to do with ChatGPT. That would have been a two-week assignment, or at least a week's worth of work, at any other time. I looked at everything, I analyzed what the products were, I went through all of this stuff, and did it really quickly. Now, using deep research, and I think it's quite reasonable from what we've seen, I would just plug it in, let it go, and have it produce the report.

Now, after that, I'm gonna look at it. And I'm going to go through it and come up with some ideas and places, because nobody's going to read a large report. They still want somebody to translate it for them. Now, if you're really smart, you can just dump that into NotebookLM and it'll read it, it'll do a little podcast with you, and you can actually interrogate the document.

I think we're getting very close to this idea, and we're really far afield now, but we're getting close to this idea of one AI, and you talked about it, Marcel: these AIs are going to talk to each other. One AI is going to write a report, the other AI is going to read it. We're almost there right now, by the way. I confess this: I have not read one of these large research papers. I have them analyzed.

I look at the points that are made, and if I'm really interested, I'll go and read the paragraphs where it's important. I do not need to have 18 pages of text so that I can focus on the one or two paragraphs that I want to dig into. And do I trade something away? Yeah, I guess. In the days when I could actually spend a whole afternoon reading a paper, pull out my yellow highlighter and go through it and all that sort of stuff, that was immense fun and immense learning.

I don't have that time anymore. I couldn't highlight anyway, because something in my mind rebels against the idea of marking up a book, for instance. I have a friend of mine who is a voracious reader, who has those little sticky-note tabs in every book in the house that he's ever read. And he highlights passages that he thinks are interesting. And I just shudder every time I look at that. It's, no, you can't do that.

The thing with NotebookLM, the reason that it's actually that powerful, is because it has been shown and proven that if you throw a vast amount of information in front of somebody and say, read this, at some point the brain just shuts off. And if you're just reading that stuff, at some point the brain shuts off. There are parts that just disappear.

However, if you're listening to a conversation, which, by the way, bodes well for us, okay? If you're listening to a conversation where people are going back and forth, you're more engaged. There's a part of your mind that is actually listening at a much higher level than if you're just hearing text going by or just reading text going by. And that's the power of NotebookLM, and that's why it's such a cool thing. It's another reason why you want a pro plan on Google, or Google Workspace.

If you have Google Workspace in your organization, NotebookLM Plus, where you can direct the AI to put a podcast together that concentrates on certain areas and so forth, is actually included in the product. So it's one of those little bonuses that comes with it. Yeah. And those are the three big foundation models, and we haven't even gotten into the idea of what you can start to run yourself: you're going to be able to have these tools available to you in your own organization.

They will be able to be run internally, and that will happen more and more over the next few weeks and months as these open-source models get better and better. I've already seen one open-source deep research model that was pretty darn good, considering it came out weeks afterwards, that could do a lot of what OpenAI was talking about. Will it be perfect? No, but do you need that? And that's really the question. What's the AI you need? What level of expertise?

Speaking of open-source AI, Daddy needs a brand-new PC with a much bigger GPU. I would totally go with something like an NVIDIA Jetson Nano. If people want to put together a little pot for Marcel to build a decent open-source artificial intelligence that he can run from his home, I might even open it up to the public. Who knows? Daddy needs a new computer. Daddy needs a better GPU. It appears, poor Marcel, he doesn't have the latest in AI.

Could you, for just 19 cents a day, you could be helping Marcel. By the way, no one as yet has thrown any money my way. I just want to point that out. Yeah, we're working on it. So one of the questions is, why doesn't Marcel have all the money he needs to buy everything he wants?

But if we take the discussion we've had so far, what we've talked about in AI and its possibilities, there are other questions, and the fact that we're not asking these questions drives me crazy. And one of the questions, and this should fall right down your line, John, is: how do I know it's safe? Yes. And I think that's a realistic question, especially between experiment and production. And I think people should be experimenting.

So everybody hear me clearly: I think you should be playing with these things all the time. You should find any safe way you can to release as many tools as possible to your employees and let them play. You should have a very realistic discussion about what can go wrong, and you should be asking: when is it safe? How do I know this is safe? And that shouldn't be a threatening question.

So if you're using it for meeting minutes, what's the cost of getting something wrong in the meeting minutes? Nobody reads them anyway. You just produce them because you have to, so that at the next meeting you can go over the minutes and tell the person who wasn't there. No, I'm just kidding. I'm a little cynical about corporate life, you might have noticed. But I'm just saying, meeting minutes: what's the cost of a mistake in meeting minutes? Minuscule.

What's the cost of a mistake in dealing with a customer? And everybody says, oh, you can't make a single mistake. BS. You put people on that phone to talk to customers who've had two weeks of training. You don't give them all the answers. So cut the BS. There's a risk level in terms of dealing with AI, and by the way, you can identify it, and when you start to talk about the risks, you can figure out how you can mitigate them.

And that's the thing: you shouldn't be regarded as a doom-and-gloomer just because you're asking, how do I make this safe? How do I know this is safe? How do I make it safer? And the reason why I'm saying that is because right now, I don't know if I'm going to do it this Saturday, but I am going to do a show on risk, and it'll scare the pants off people as to what you can do to any of the current models, regardless of their safety. Two days ago, somebody hacked

OpenAI's latest model, o3, and it was relatively easy to do. Because I do the cybersecurity show, I've been walked through the hacking of DeepSeek, and when I was pursuing it with the person, he said, you can't put this on the air. I said, why not? DeepSeek has fixed it. He said, the other models are just as vulnerable. So there's a lot of risk out there.

And by the way, if you're a CEO and you say, I'm not going to have any AI in my corporation, then take out your Microsoft software as well. Take out your firewall as well. I can just go on and on. Don't let anybody have a phone. Never, ever use a PDF file. Wi-Fi? Bad idea. No Wi-Fi. Get rid of Wi-Fi. These are all being hacked on a regular basis, and we have ways of dealing with it. Why? Because we have conversations about risk.

And are you going to stay ahead of the bad guys? I don't know, but you're going to at least be able to understand what the risks are. And I don't hear that conversation happening in AI. And I think that's wrong. As the great Captain James T. Kirk once said, risk is our business. No, actually, let's deal with this. Business is about making something at one cost and selling it for a higher cost. That's what business is about. The rest is the mechanics of business.

Now, do you have to take a risk to do that? Yes, you do, but one should understand the risk they're taking and have a way to manage those risks too. You need to plan for them and be prepared for them so that you can deal with them when they come up. And if you're a CEO, or even a senior manager, even a director, you don't have to have all the answers.

You have to have the questions: what are the risks that we think are out there, and how are we going to manage them if they happen? It's really that simple. And I think those are probably the big questions. We've focused a lot on the rewards, and you should. That's why I say, it may sound crass, but the business of business is to make something at one cost and sell it at a higher cost. We should be looking at the reward.

But we should equally look at the risk and say, what is the risk? And if you take zero risk, I guarantee you, you make zero money. Yes. In a world of AI, that's coming. If you've studied economics, perfect competition is a machine that can automate everything. And we're getting closer; we don't have to be there, but we're getting closer and closer each day.

So if you take the risk of doing nothing, what's the risk of doing nothing? An interesting question to ask. If you don't make a choice, a choice will be made for you. Yeah. Not making a choice isn't a choice. And that's why I keep saying, the question is, even though AI is not perfect, should we just wait till it's finished? Should we wait until human beings are finished? Ooh, that's a scary thought now that I think about it.

Yeah, when AI is finished, humans will be finished. Sorry. But if you get that far, then I think you can start to understand. So the perfect boot camp for an executive is: here are the foundations, here are some of the potentials and possibilities that are out there for you, here are some of the great questions you should be asking. I think we've all agreed on this point since we've had these discussions.

We should be starting now. And Marcel, you're always saying that this is the worst AI you're ever going to use. Yeah. So you should be doing something now. Something's better than nothing. And that's the thing I always get at in cybersecurity too. It drives me absolutely nuts when people, because they can't do everything, decide they can't do anything, so they'll do nothing.

Companies are used to the idea of hiring consultants to come in and look at things with a different set of eyes than they have inside the company. That's part of it. Sometimes it's specifically for the expertise, but sometimes it's fresh eyes to look at the process and say, we can do things a little bit differently. But you don't actually have to take your consultant's advice when they say, I think it would be better if you did it this way.

What they have told you, whether you think it's the right answer or the wrong answer, is actually good, because now you see things from a different perspective. And if you think of your AI as a consultant that you're bringing into your organization, just remember that you don't have to do what it tells you, but it is giving you a different way to look at how you do things right now. And that, perhaps, is the greatest value in it.

It gives you a different perspective, and it gives you the ability to strike up a conversation between those two perspectives. So that, as you said, Marcel, it's another angle of looking at things that will hopefully help you to think differently about the direction that you're going, and give you some ideas or some other areas or avenues to think about.

Yeah. And, little-known fact, I'm a fellow of the consulting association, and I trained, I think, maybe a thousand or more consultants over my time, maybe more than that. And I will tell you the piece of advice I gave to them all: if you're giving answers to your client, you're making a big mistake.

If you're a technical expert in something, and you know how to split an atom better than anybody in the world, you're not a consultant. You're a technical expert. By all means, go for it. Tell people how to do things; be prescriptive. But if you're a consultant, your client is taking the risk. After you leave, you have to leave them better.

And I'm sorry, but I hate the type of consulting where somebody comes in and says, I've been at Ford and I've been at GM and I've been at Toyota, and here's how you should make a car. Yeah, okay. How are you going to do it? How is it going to fit your business unit? Your client needs to think through that themselves, and you should be the person that helps them do that. And that's why, naturally, I went back to the person who helps them frame the questions that they need answers for.

And some of those answers are going to come from experts. Some of those answers are going to come from research, and some of those answers are going to come from decisions that they take based on the risk that they're willing to absorb and the rewards that they're looking for. And I think that's a great definition of consulting.

And based on that, Marcel, I honestly believe that AI, in the same way that Elisa, oh, sorry, Elisa is the title of my book, Elisa, a Tale of Quantum Kisses by Jim Love. Sorry, I didn't do this on purpose, but I am going to plug that every chance I get. But Eliza, the therapist, and we started with that: Eliza was an algorithm. It asked questions, and some of the questions it would ask were as simple as, tell me more. It was quite simple.

And in many ways, just from a number of questions, it gave better therapy than many therapists. That's just a reality. And now we're getting to a place where, can an AI, by getting information and presenting you with some questions and some things to think about, give better advice? Yeah. And for any consultants who are out there listening, you need to sit down and do the same exercise with yourself.

If you want to be a consultant for the next 10 or 20 years, you're going to have to ask yourself, how do I reinvent consulting so that I can use this to help? So, just summing it up from you guys: the aim that I wanted to have with this discussion was to raise some of the questions and understand some of the foundations. I didn't want to turn this into a boot camp, but I wanted to think through what a boot camp for an executive would look like. Did we miss anything? Of course we did.

Yeah. And I think the big thing is, don't be afraid to ask questions, and don't have that fear of, am I going to look stupid if I ask this question? There are no stupid questions, especially not in the days of learning AI and what it can bring to the business. You have to ask a lot of questions, some smart, some dumb, to get to where your comfort level is, so that you can start to move things forward.

And you have to spend money to make money, but you don't necessarily have to spend a lot. Yeah, 200 bucks a month, Marcel, is all you need. Yep. I'm going to disagree with you on one thing, John. There are lots of stupid questions, but you only find out stuff if you ask them. Yes. Yes! Okay. So we're going to wrap it. Thank you, Marcel Gagné. Great to be here. John Pinard, thank you very much. This has been a great discussion. Always an enjoyable morning.

I'm your host, Jim Love. If you have questions, or you have ways that you'd like to help direct this, consider that you're talking to the AI, and you don't have to pay us $200 a month. We'll be glad to include those in our discussion. Or you can join our Discord group; I think Marcel finally got me the link, so I'll make sure I post that again in the discussion thread for this, and it'll be in the description on our YouTube channel as well.

And if you're watching this on YouTube, please put your questions in the comments. I've had some great discussions with people so far, and it's wonderful to talk to you. So ask us questions. We'll be glad to direct the conversations we have to answer those, because sometimes we ask stupid questions too. Live long and prosper, and the thumb is out. Have a good week. That was our show. We hope you enjoyed it.

We'd love to hear your questions and comments, or if you're interested in a private executive or management group boot camp session, just drop me a note at editorial at technewsday.ca or look me up on LinkedIn. I'm your host, Jim Love. Have a great long weekend.
