Hey friends, we have some news from our sponsor, Text Control. They just released version 32 of their document processing library, which includes new core functionality such as document footnotes, SVGX Board and more. You can integrate document editing, signing, collaboration and PDF processing into your ASP.NET Core and Angular applications with TX Text Control. These powerful libraries will let your developer teams focus on their core competencies while Text Control handles your digital document processing. Check out all the new features
and see the technologies live in action by visiting the demo at demos.txtcontrol.com. That's demos.txtcontrol.com. Hi, I'm Scott Hanselman, and this is another episode of Hanselminutes. Today I'm chatting with Noelle Russell. She's the chief AI officer at the AI Leadership Institute. How are you?
I'm glad to be talking to you. Everyone's talking about AI. We know it's the hotness, we know it's fun to say, because we hear it on the news every day. I like to say that the word AI has good mouthfeel. People like to say it, because ML and data science don't feel good in your mouth.
You and I worked together for a time at Microsoft, you've worked all over, you've worked on Alexa, you've got a lot of experience in this space. You were talking about AI, you were talking about generative AI, years before anyone else was thinking about it. How did you know that you picked the right course? Did you know at the time?
It's interesting because I always tell people earlier in their career, try everything. It's less risky earlier in your career. As you get a little older and a little more years behind you than ahead of you, you become almost risk averse.
I knew back then, and it wasn't really AI, it was the art of being able to talk to your tech. You and I, I think, share a kind of passion of just, how do I remove all friction? I have a son with Down syndrome, so that was my mission: I want to remove all friction. When I first saw my son start using it, he was nine at the time, he's now almost 20.
Then I introduced my dad, who has a traumatic brain injury, and he started using it. I started to realize this is going to change everything. I literally saw the future: we may not need keyboards. Everything on Windows, we taught people to use buttons, mice, menus, drop-downs, and all of that could go away.
It was really much more alignment with, I can see the problem this solves by the billions, as opposed to some cool app that I got behind. I remember earlier in my career, I used to call myself JBoss Girl. Do you remember JBoss? It was middleware, and Red Hat ended up buying it. I was all about the one product. I didn't really think strategically about that, but this time was different.
Yeah, you were doing JBoss middleware work like 15 years ago, and you also worked for years at IBM. So at some point you got yourself to Amazon when Alexa was just starting. So this is almost 10 years ago, when we were like, hey, we're going to be talking to this thing. And I have to say, I have Alexas, I have them in the house, I've got several of them, I just got a brand new one, the one that looks like a clock.
I think no one ever says what the reality is: Alexa, it's a dumb terminal, isn't it? The magic is not on the device, the magic is on the back end. Absolutely. And actually, I always tell people who are trying to think about how to build good experiences, and they focus on Alexa the device.
The device, well, number one, we as developers didn't do much for that, right? That market, the device industry, CES, put these devices in people's homes. So now everyone has one, which is amazing. They have some kind of device that can talk to them. Now all I need to do is create a reason for them to talk to me, my brand, my product, my service.
And if, and I've never seen it go a different way, if I let someone successfully talk to my tech, they'll never go to my website again, if I give them a good voice experience or a good user experience. So yeah, I agree. It's either Alexa, or it could be, you know, we had Google Home devices. That was a good one. I mean, I'm going to say all of us have voice devices, either on our phone or on our wrist.
We're just starting to scratch the surface of a world that's trained to use these technologies. It did take us, though, 10 years to get that adoption curve going, right? If we had tried to do what we're doing today 10 years ago, people would have just been like, what do you mean, talk to this box? Like, I don't get it.
But now we get it. And that's why I do think, you know, the work that's happened on OpenAI and ChatGPT, I mean, it was primed. We had primed the whole world to start talking. And when it actually started working well, that's when it started to take off. It always makes me think of Star Trek IV, when Scotty sits down at a Macintosh and they want the formula for transparent aluminum.
Yeah, all right. Go ahead, and he goes, computer. And they say, just use the keyboard. Yeah. How quaint. And then he starts, you know, slapping the keyboard, very successfully. I do think, though, that there was a bit of an uncanny valley of speech, where people were disappointed by Alexa, by Siri, by Cortana. I'll use my wife, my non-technical partner, as an example. I would sometimes show her what I thought was the most amazing thing. I would say, hey Siri,
set up an appointment with my wife next Tuesday at seven o'clock at, you know, Chipotle, a very complicated compound thing. But it wasn't parsed with a language model. It was parsed with regular expressions and dreams. And it would work 20% of the time. And if it didn't work, she would tease me brutally. And in her mind, and I think in the mind of some people, when Alexa or Siri or Cortana doesn't do a thing, they go, ah,
they promised us this. It doesn't work. Do you think that the language models are going to flip that bit, and we're going to enable all of these devices, these smart devices, to be smarter?
I think it's a bit of graceful handling, right? In the past, if we didn't say what we wanted exactly the way the machine was trained to hear it, it would break. It was extremely fragile, very brittle. And that experience, to someone who doesn't know the technology, they're going to be like, it's broken.
And there's an interesting data point that I've uncovered: people over the age of 75 tend to be way more lenient. They're more tolerant of bad behavior from robots. They're willing to say something over and over and over again. And I have six kids, and the same is true for my family; everyone under the age of 20 also has that same willingness to just keep trying and working with it.
It's us people in the middle that have zero tolerance for inaccuracy. And I think there's something to say about that. And I do think generative AI has created the ability for us to at least handle those failures gracefully, even be funny about them, even be able to provide quips, or redirect the user down a rabbit hole, something that wasn't the original thing they wanted anyway, but they end up being happier with the experience. I think that is all true. But I do think it's kind of part of our process, being in the middle, of being like, why doesn't this work the first time, instead of maybe rethinking it. It's kind of like prompt engineering: maybe I rethink how I said that, maybe I should say it in a different way.
And maybe that's the most important thing. I think generative AI is definitely assisting us with that.
Did you ever see the Saturday Night Live skit on Alexa Silver, Alexa for the greatest generation? 100%. Yeah, it was on repeat for a while. My dad is like the epitome of that. I actually went, um, you know, Marshawn Lynch. Yeah. Yes. Of course, you're a Seattle guy, so you would know. But Marshawn Lynch, during the, this is like 2015, I don't know, Marshawn Lynch calls up Jeff Bezos and complains to him and says, hey, Alexa's broken. And he actually asked a bunch of people, who knows Marshawn Lynch? I had no idea. I'm not an NFL person. I'm a little more now, but I was definitely not then. Heads down, and they're like, no, well, do you know who Marshawn Lynch is? I'm like, no. No, no, great. We need you to go to Oakland and help him fix Alexa. And it was literally a Saturday Night Live Silver episode, where I was like, I asked him to call the device
so I can fix it for you. And he's like, yeah, great. Okay. Alexas. And I was like, well, oh, there you have it. It's not Alexas. It's Alexa. And he's like, yeah, that's what I said, Alexas. And we went back and retrained the model to just do whatever he said. And that's when I learned that many times these magical experiences are really just data scientists trying to make it work. Yeah. I just remember Kenan Thompson going, oh,
Dessa. Yeah. Alphia. It's like, just answer to anything, and they had a whole list of things that it will answer to. I love that. And yeah, it's amazing who you need to be to get on that list.
Yeah. Now, it's been some years since you've been there, so I'm not asking for inside baseball. But using your educated understanding as someone who has expertise in AI and also was there when Alexa was built: we're looking at ChatGPT being amazing, and then looking at our Alexas and going, nothing interesting has happened to these devices in a couple of years. Don't you think something's coming? It has to be coming. Absolutely. I mean, I built over 100 apps
for Alexa. And those apps still get, you know, I have about two million unique users. And the code on those apps has not changed. One of them is about daily affirmations. It's one of the most popular skills on the platform. You ask for an affirmation. There are 365 of them. It's like 800 lines of code that just takes you through and remembers where you were. And if you come in on day seven, you get the seventh one. It is not magic at all.
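[Editor's note: the skill logic described here, a fixed list of affirmations plus a remembered position per user, can be sketched in a few lines. This is a hypothetical illustration; the names and in-memory storage are assumptions, not the actual skill's code, which would persist progress in a real data store.]

```python
# Sketch of a "daily affirmation" skill handler: a fixed list of
# affirmations and a per-user counter so a returning user picks up
# where they left off. Names and storage are illustrative only.

AFFIRMATIONS = [f"Affirmation {i + 1}" for i in range(365)]  # stand-in content

# A real skill would keep this in a persistent store keyed by user id;
# a plain dict is enough to show the idea.
user_progress = {}

def handle_affirmation_request(user_id):
    """Return the next affirmation for this user and advance their counter."""
    day = user_progress.get(user_id, 0)
    affirmation = AFFIRMATIONS[day % len(AFFIRMATIONS)]  # wrap after a year
    user_progress[user_id] = day + 1
    return affirmation
```

A user coming back on day seven gets the seventh affirmation, exactly as described: no model, no magic, just an index into a list.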
It's primed for a generative AI reboot. So the good news is that it's happening, but yeah, we're going on two years that ChatGPT has been in the market, and Alexa is just now starting to revive that. So I'm excited, because I do think it'll give us developers who built for the platform a chance to make skills that we always dreamed could be possible. I mean, why can't I make a skill like daily affirmations happen in 40 languages? There's no reason that's not possible
instantly out of the box, without me as a developer having to do anything special. But I can't do that right now. So I do think we'll see Alexa kind of come into its own, which is exciting. But I think a lot of these companies have to think about that. Microsoft's the same way. These teams are built a certain way, to deliver software a certain way. And when you now are going to inject something, it's just like every other company on the planet, people are actually worried, like, wait a second, that's my baby you're calling ugly. That's my stuff. I built that model from scratch. And so I think there's a little bit of holding on to the old way until it's absolutely necessary to move away from it. And, I mean, Amazon's there now. I think we're weeks away from that launch. I have a couple interviews with some periodicals in anticipation of the launch of the new gen AI model on Alexa. So it should be exciting, exciting times. But yeah,
why wait? I anticipate it's a human problem, not a technical one. Yeah, but there is also cost of goods sold. There's the COGS of the thing. So like the Alexas that I have in my house, and the one that's on my desk right now that I had to put on mute because it kept hearing me mention its name. You know, I paid $59 for it three years ago, and I haven't paid since. How many dollars, cents, pennies, fractions of pennies does it cost every time I tell it to turn
the lights off? I think that when people are building naive products, they just send strings to GPT-4, you know, or 4o, and it costs God knows how much money. I can only assume that if I were going to make an Alexa smart speaker kind of thing today, I would have to come up with some kind of a subscription. But I also worry about sustainability. How many trees are getting burned because I just didn't want to stand up and turn the lights off? I don't want to do that.
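[Editor's note: the per-request economics being raised here can be put in rough numbers. This is a back-of-envelope sketch where the token counts and per-1K-token prices are purely illustrative assumptions, not any provider's actual published rates.]

```python
# Back-of-envelope cost model for a voice assistant that forwards every
# utterance to a large hosted LLM. All numbers are made-up, illustrative
# assumptions -- not actual prices for any real API.

PRICE_PER_1K_INPUT_TOKENS = 0.005   # dollars, assumed
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # dollars, assumed

def cost_per_request(input_tokens, output_tokens):
    """Dollar cost of one round trip to the model."""
    return (input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
            + output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS)

def monthly_cost(requests_per_day, input_tokens=200, output_tokens=50):
    """What a month of 'turn the lights off' requests adds up to."""
    return cost_per_request(input_tokens, output_tokens) * requests_per_day * 30
```

At these assumed rates, a device handling 1,000 short requests a day would run about $52.50 a month in inference alone, which is the point being made: a $59 one-time purchase can't fund unmetered calls to a giant model forever.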
I don't want to feel like that. Well, there's that too. But you know what I'm saying? How is it a sustainable business to have a dumb-terminal smart speaker pointed at a giant model that costs two bucks every time I ask it a dumb question about how old Brad Pitt is? Yeah, exactly. And I actually think Amazon's kind of struggling with this too, right? In the news, we saw them have huge layoffs and cuts to the Alexa team, because they realized, wait, okay, the promise...
And I also think leadership changed, right? This was Jeff Bezos' pet project, to create the Star Trek computer. And now he's off building rockets; that's his pet project. So there isn't someone who has the same drive. And I always tell people, when you build AI, one of the most important things you need to have going for you is an understanding of your core values. What's going to keep you burning the midnight oil in order to deliver this project to production? Because it has to be tied to who you are. Or you're going to give up, like many facial recognition companies have done. It gets very hard, and if you don't have the stomach for it, you won't be able to win the game. And that's kind of, you know, I always look at the leadership of these companies. Who has the stomach to do all the hard things? Train it on a bunch of different languages that they weren't expecting. You know, when you have to add additional training data,
that's more cost, almost exponential cost, compared to what you originally anticipated. So yeah, many times that ROI is not instantaneous, as we wish it would be, right? It ends up happening over three to five, oh wait, Alexa, 10 years, and they're just barely in the black. So I think it does take both, from a sustainability perspective but also from a business model perspective: you have to be really in it for the long haul. And why are you in it? Are you in it because it's a direct tie to revenue? Or are you in it because it's making happy customers, who buy more stuff on Amazon because of this thing that they have in their house? And that, I think, is the broader correlation: people who have Alexa devices buy more stuff on Amazon, not through Alexa, but just in general. And I think that might be a lesson to learn. Yeah, I think that they thought that everyone would be buying groceries on Alexa, and it would be a great loss leader. It would be just like, hey, Alexa, give me some paper towels. But I don't think that happened, you know. Yeah, no, I agree with you. But I do think people use it for their lists. I think they think of Alexa and devices like it as a household name, right? There's not many people I can talk to that don't know what it is. And many of
them have had it, to your point, for years. So the world is well trained. And I think that's the good news, that the, I should say, the Western world is well trained on using these devices to interact with software. And that goes back to your earlier point. What are we doing as builders, as business owners? What are we doing to capitalize on that? Now that we have an audience that has a terminal, where's our very cool experience on the back end that's going to have them build a better relationship with us as a business? That's not happening. And that, like I said, is more of a people problem, I find, than a technical one. The tech is ready. We could do so much cool stuff. Yeah, well, it's just organizational willpower, and reminding people that it is a UI, it's just an unused UI. Yes. Yes. Yeah. Hey, friends and digital marketers. This one's for you. I've got 30 seconds to tell you about Wix Studio, the web platform for agencies and enterprises.
So here are a few things you can do in 30 seconds or less when you manage projects on Wix Studio. You can work in sync with your team on one canvas. You can reuse templates, widgets and sections across sites. You can create a client kit for seamless handovers and leverage best in class SEO defaults across all your Wix sites. Time is up, but the list keeps going. Step into Wix Studio to see more. One of the things that I also think is worth noting is that I don't know if it happened
on purpose, but I appreciate that ChatGPT and Copilot are now not names. They're not proper names. We spent 10 years yelling at invisible women to do stuff. And that just felt, it just feels gross, you know what I mean? But I don't want the wake word to be computer, because then it would happen all the time. So I need some word that is a non-gendered word, because I just have never liked the idea of anthropomorphizing the thing.
There's nobody in there. Absolutely. Yeah. What do you think about that? Easy to say that now, but I don't know that, without anthropomorphizing it, we would have gotten the adoption that we got over the last 10 years on a device. I think there was some strategy, and there were reasons, behind Alexa. How do you find a word in the lexicon that causes the fewest false wakes with the highest level of recognition? The linguists out there would confess that there's not many words that fall into that category, and the more general the word, the more likely. And we had lots of false wakes anyway; even with this intentionality, devices were waking up when you said anything similar. So I do think that that's part of the challenge. If you're going to build a product
that you want to reach 100 million people or a billion people, what do you name it? And that's why I also think we're now mature enough to be like, hey, we've learned some lessons. There were some people that were hurt by the fact that that device was named Alexa, specifically young girls in second grade with that name. Alexa? Yeah. Yeah. That was not a good life for them. And they ended up trying to file a class action lawsuit; their moms were like, I'm not changing my daughter's name because you built a device with it. Well, yeah, nobody wants to wake up and find out that their name sucks because some person who did something horrible has their name and they share it. It's a joke in every conversation you have. Right. And then you've got to switch to your middle name or whatever. So yeah. Oh, no. And then the moms were like, oh no, we're not changing the name. I negotiated hard for it during the pregnancy. So,
yeah. Yeah. You know, I've always thought that maybe Echo would be a good word for it, because it's already a brand that they own, and it's on the list; I can name it Echo. It would be a good statistic, though. I should look and find out how many devices have which wake words. I'd be
interested in that. But I wonder: you see online, Sam Altman, the OpenAI guy, has a very strange and possibly problematic relationship with how he thinks AI should work, because he's obsessed with that Scarlett Johansson movie, Her, which again gets into these uncomfortable relationships with, you know, gendered individuals who do not exist. Right. This person does not exist. And he wanted it to sound like Scarlett Johansson.
She's suing him now. Right. Yeah. But I'm noticing, though, you mentioned the generation gap; my sons, they don't think about it that way, because ChatGPT is just a thing. It's an app. It's a service. And by not naming it a real human name, we're moving away from anthropomorphizing it. So I think you're right: anthropomorphizing it jump-started it, but I don't know if it's required anymore for its success. I agree with you. And I think
ChatGPT proved that, right? And it's interesting, because we're almost reversed back into a push-to-talk mode. In the beginning of Alexa, I don't know if you remember, there was a device, I have it right over there, called a Tap, and you would press it and talk to it in order for Alexa to work. And then that became kind of the old way of doing it. We just want a wake
word. But now if you go to ChatGPT and use its voice application, it is also push to talk. And people actually like it. And they're using it. It's still, I think, a little bit clunky, but, I mean, it's in V1, so we'll see it get better. But yeah, I think it's interesting, what happens in a world where, I mean, I have one of those Humane AI Pins, right? It's also push to talk. I have to tap it for the device to start listening to me. So I'm, you know,
I don't know. I think there's something to be said, in a user experience that is voice enabled, about how much of this wake word stuff we even have to do if these devices are going to be very personal and wearable, as opposed to having to yell at one across the room. I think that was the dream in the beginning. But right, right, we're already using these devices in our hands. Yeah. Well, I mentioned this analogy; I want to hit it harder and see if it lands for you. This idea of the uncanny valley. The uncanny valley in, like, 3D animation is when it's getting more and more realistic and then, oh, its face is dead inside, it looks like a zombie, and then it becomes amazing again. So Thanos is on the far right side of the uncanny valley, because it's like, wow, that looks amazing. And then Tom Hanks in The Polar Express is
deeply in the uncanny valley. So there's a creepiness factor. And there was a Humane Pin competitor, low-key launched yesterday at the time of this recording, called Friend, that is basically a necklace pendant that is always listening. It's always buffering the last couple of minutes of what you're doing, and if you want it to act on that, you tap it. So that's another model. Rather than tap to speak, it's speak, speak, speak. I'm interested in your
input. Don't you think that the always listening, which was a huge problem with Alexa when it first launched, gets into the uncanny valley of creepiness? 100%. So it's interesting: when I wore the AI Pin on stage in San Francisco, 2,000 people were there, and the number one question I got when I got off stage, as opposed to, oh my gosh, that was so cool, you talked about this and this, was: is that thing on your shoulder listening right now? That was the number one question I got.
And so yeah, I told them, well, a good user experience with a voice device will let you know when it's listening. And Alexa does the same thing. My little Moxie device also does the same thing. Robots will tell you when they're listening, and usually it's a color change of some kind of LED display. So for example, the Ray-Bans, I'm sure you've seen the new Meta ones, I had one of the first ones: as soon as you start recording or taking pictures, there's a
light that shows you. You can't be all spy and do things without people knowing, because there's a light. And that's the right thing to do. Ethically, people should know that you're listening. The challenge with the device that's listening, listening, listening is that as people enter your world, as they come and talk to you, as they pass by having a conversation with a friend, all that data gets picked up too. And what are their rights to that information?
Then on top of that, once someone has your utterances, now we understand this concept of building neural text-to-speech and neural speech-to-text; someone can use those utterances to create your likeness. And I don't know if you know, but in the legal system there is currently no legislation that states that you have the right to own your own likeness, which is why people have to go and fight for their likeness to be preserved. It should be a given right that, hey,
that's my face, that's my voice, you can't use it. But no, people have to sue. They have to create and establish a precedent, and these precedents don't exist right now. So I think, 100%, it is uncanny valley. It's also kind of ethically questionable when you have devices that are just going to be open air, listening to everything. In your house, that's one thing, right? Which is why I think Alexa survived that kind of debacle from a security perspective, because it's in your
house, at least. I don't know if you remember, people used to scream at Alexas from outside the house to get them to unlock doors, before we figured out the whole voice PIN thing. So they all have their challenges, I think. Yeah. That's so interesting. I was in a meeting at Microsoft Build a couple of months ago, one of these NDA partner meetings after the presentation. And I was in the meeting talking to some folks, and then I noticed that
one of these guys had a light on his glasses. And I was surprised at my own visceral reaction. I'm usually into the gadgets. And I didn't think this reaction, I felt it in my chest. And I said, are you recording? And it was such an inappropriate thing. We were in a closed-door NDA meeting, and he thought it was cool to just, oh, but this is just for my own use. And I was like, but I don't know that, dude. You're just sitting there. I
was so frustrated. How do you not say something? Like, I always just tell people. He didn't say a word until we mentioned it. And then he was like, oh, I'm so sorry. Yeah. He got busted. But, I mean, busted isn't quite the word; he was just thoughtless. And I was maybe a little more surprisingly crisp than I would have ordinarily been. Oh, sure. But I mean, that's what we're finding out data-wise, right? Stanford did a study on trust, and I think
it's extremely interesting, this whole AI trust thing, and what happens when people realize there's a different level of trust. You lose trust no matter what when you implement AI. Your customers will trust you less because you're using a robot rather than you. That is just the truth. However, the level of trust decreases much further if they find out it was a robot after they thought it was a human, or after they thought you weren't recording, right? So it's way
worse to walk into a meeting and then later be like, oh, and I recorded this, would you like the notes? Rather than saying, hey, I use this device to help me record. Even just from a psychological perspective, if you tell people, I do it for this reason, most people are not going to say no. Most people are going to be like, oh, I get it, yeah, sure. Yeah. Well, it's like a newsperson taking their recorder and putting it on the table.
Would you mind if I record this? And Microsoft Teams already announces to everyone, we're about to record; make sure everyone's cool with that. But just walking in with a camera on your face, without saying anything. Okay, so then do you think, though, as a person who's focused on ethics and responsible AI, that there's going to be this tension between doing it right and convenience? Because ultimately we give up
our privacy, we give up things, simply because it's so convenient. Gosh, 100%. One of my challenges, for example: I work with realtors a lot. I'm about to speak at the National Association of Realtors, delivering their opening keynote. And in this keynote, I always like to mention, they're very quick to be like, well, they're selling, right? They're just super excited. They want to sell. They want to do as much as they can to get
as many leads as they can and close as much business as they can. So a product came out called Air. I don't know if you know it, but it basically creates neural text-to-speech. It allows you to write a script, and a human-sounding voice will get on a phone and have a conversation with the customer. It'll drive them to even buy something. The challenge, though, is that it does not
disclose at any time that it isn't human. It does not. And again, there are principles of ethical and responsible design that would tell you not to ever do that, because it's just not cool. But this has gone to market. And these people that are immediately interested are, to your point, more interested in the benefit it offers them than the impact or consequence. And right now, there is no legal consequence. There is nothing. I mean, there might be some fine that they
may get eventually, when we finally get some legislation through. But right now, there's no consequence. It's literally just, do I do the right thing and tell someone when I'm using this technology? Even if it will give me a small hit on trust, it'll actually build trust in my brand as a whole. I think people really have to start thinking very strategically about what kind of company they want to be in this world of AI. At the end of the day, people
pick people. People do business with people. You might be able to do some shortcuts on the front end to get someone in, but to create lifelong customer relationships, you and I know this very well, people care about you. And so I always tell people, lean in: I'm building responsibly. Even if it takes a little longer, even if you have to do a little bit more work, because at the end of the day, people will trust you. And then you can use AI to accelerate those
results for your customers and for your employees. Yeah, it feels, though, like we need to vote with our feet. I think that I am a person that prefers a little bit more regulation, but then in order to have regulation, you need educated and informed legislators. Then the free-market capitalists are like, well, let it figure itself out. And I think that'll always come down to convenience. But it does seem, though, like people are voting with their
feet and they're going, you know, that doesn't feel good. I don't like that. Yeah, I mean, just think about Google Lens, right? If it came out now, it might do okay. Oh, Google Glass. Yeah, Google Glass. Back in the day, people literally voted with their feet and were like, that is creepy, I'm never going to use anything like that. Now we're seeing visors and all sorts of things, yeah. No one is calling the Ray-Bans glassholes like they did.
So I do think, yeah. And that's one of the benefits. When I build anything with AI, I always tell people: in cybersecurity, they call it red teaming, right? You need this team that's going to ethically attack and try to break your system. And in AI, there's a similar concept. It's called AI red teaming, but it's a little bit different, because in addition to adversarial attacks, you actually just need a group of people who are going to tell you, this is a good
experience, this is a bad experience, this experience resonates with this audience, this is going to hurt these people, this is going to offend this group of people. Who's going to tell you that if you end up having a team that's very homogenous in nature? So I think that's also part of it for anyone who's building AI systems. You're going to want to collect as many of these diverse personalities and perspectives into one room as possible, so before you go to
market, you can get that feedback that says, wow, you're totally uncanny valley here, buddy. You should probably take a double take on what you're trying to build. Most people, they do that testing in production. And you can just do a little Google search to find out how well that's going. Well, yeah, we know how well testing in production went in the last couple of weeks. Not awesome. Cool. Well, I look forward to seeing if your predictions come true. And I thank you
so much for hanging out. And I also want to take a moment to congratulate you. I understand that you were recently given a Microsoft MVP award in a new category, Responsible AI, specifically. So congratulations. It's super cool. Yeah, super exciting. I'm excited for the year. So thanks for having me. It's been a blast. We have been chatting with Noelle Russell, expert on AI and chief AI officer at the AI Leadership Institute. This has been another episode of Hanselminutes, and we'll see you again next week.