Episode 97: Securing AI - podcast episode cover

Episode 97: Securing AI

Jun 06, 202440 minSeason 1Ep. 97
--:--
--:--
Listen in podcast apps:

Episode description

In this episode Michael and Sarah talk with guest Richard Diver about securing solutions that use AI and LLMs. Richard also talks about his new book on AI Security, and Michael and Richard talk about what it takes to write a book.

We also discuss Azure Security news about Chaos Studio, API Management, Azure Bastion, Front Door, AKS and Copilot for Security and lots more!

Also note, we have changed the URL for the show notes web site, so please use this from now on: https://aka.ms/azsecpod.

Transcript

Welcome to the Azure Security Podcast, where we discuss topics relating to security, privacy, reliability and compliance on the Microsoft Cloud Platform. Hey everybody, welcome to episode 97. This week it is myself, Michael, with Sarah. Everyone else is busy. And our guest this week is Richard Diver, who's here to talk to us about artificial intelligence security. Actually, the note that we have here says AI security stuff.

But before we get on to our guest, let's take a little lap around the news. I'll kick things off. I actually want to start with a personal news item. I've actually taken a new position within Microsoft, which I'll start in the new fiscal year, which is July the 1st. So right now, as some of you probably know, I work in the Azure Data Platform. So I work on Azure SQL Database, SQL Server, Cosmos DB, Postgres SQL and MySQL, all from a security perspective.

I'm now actually moving over to basically that sort of similar work, but for the whole of Azure. So I'll be working on engineering stuff and sort of learning from attacks and how we can change our processes. So I started that in January the 1st. Really excited. I just, you know, I love the security stuff. And by the way, I couldn't say anything nicer about the Azure Data team. It's a fantastic team of engineers. They really know what they're doing.

They've learned so much about database products. The stuff that I didn't buy, there's still a lot of stuff I don't know. I don't know about databases. Just the magnificent database team or engineering team in general. So it's going to be a bit of a bit of a bittersweet to be honest with you. But I'm really looking forward to this new endeavor. So in terms of news, first one is Azure Kailh Studio now has a feature where it can pause processes inside of virtual machines.

This is really useful when it comes to things like mimicking updating processes rather than just rebooting the VM. So it's good to see that. Just one, you know, again, just one more thing to add a little bit of chaos to your environments. Azure API management now supports the circuit breaker pattern. So there's a whole bunch of sort of design patterns that are out there. And one of them is a circuit breaker.

And the idea of this is that when a front end becomes completely overloaded, rather than sort of failing in a horrible mess and, you know, impacting all the stuff behind the API management, it does so in a graceful way. And that's the circuit breaker pattern. So you can now actually enable that patterns, a whole bunch of parameters you can have in there. I'll provide a link not just to that news item, but also a link to the circuit breaker pattern, which is on the Azure architecture center.

The other bit of news is Azure Bastion now has a developer skew. Right now, if you deploy Bastion, you have to pay for it. There's no sort of free version. So what Bastion lets you do is, I know the Bastion guys are going to hate me for saying this, but it's almost like a jump box. So you're not connecting directly to your environment. You're going through the Bastion and then you can put all sorts of policies around the Bastion.

And that way you're connecting just through that, as opposed to potentially compromising back ends if there's an issue. So it's a really magnificent product because it's really seamless the way it works in Azure. But historically you had to pay for it. Well, now there's a developer skew, which is free of charge. I'm not sure about all the limitations and what have you, but it is essentially free of charge for developers.

And I think it only allows you to connect to one VM, I think, but I could be wrong there. And the last bit of news that I have is Azure Front Door now has log scrubbing of sensitive data and that's in general availability. This is really nice. So if you've got logs, you can put rules in there to scrub out sensitive data. So I'll put like asterisks where there may be sensitive data.

So for example, if you're logging, you may decide you don't want to log all of an IP address or you may not want to log parts of a URI, that kind of stuff. Then you can have as your front door, just essentially scrub that out of the log so it's not there. Again, this kind of stuff is just great to see because it's not uncommon for attackers to go after log files because they might have sensitive information in there.

So anything you can do to get that data out of there in the first place is always a good thing. So that's my news. Sarah, what you got? So a couple of bits. I've got some Kubernetes and AKS and then some copilot for security. So let's go for the AKS type stuff first. So firstly, the open source project called Draft that Microsoft keeps an eye on and maintains has been updated. It's including a new feature called Validate.

So what Validate will do is it allows you to scan your manifest to see if you're following best practices so you can catch any issues early in the development lifecycle. And this is all part of our new AKS feature around deployment safeguards. So if people make silly mistakes, we catch them early, early. And then also in GA is support for disabling Windows Outbound NAT in AKS because we know it can cause a bit of problems in AKS pods.

So now if you need to, you can go and turn that off when you're creating new Windows agent pools. Next is could it be my new baby? I don't know because I'm very loyal to Sentinel, but Microsoft Copilot for Security has a couple of new things in public preview. It has got Azure Firewall integration. So that means that you can retrieve IDPS signatures from an Azure Firewall. You can look at the threat profile of an IDPS signature and do some other things there.

And then also, we also have WAF integration, public preview and copilot for security. So that means that you can have a look at, you can summarize and get copilot to pull things from the WAF and you can have a look at it in the prompt. So definitely exciting. If you're already on the Copilot for Security bandwagon, you should go and have a look at that. And that is my news. And with that, I get to kind of be Michael this time. We will turn our attention to our guest this week, who is Richard.

Richard, do you want to tell us who you are and what you do at Microsoft? Hey, thank you very much. Yeah, Richard Diver. I've got 29 years experience now in the tech and security world, starting in the Navy back in the 90s. But my current role is the technical story design lead in Microsoft security marketing. Now, Richard, what does that mean? For those of us who might not know what technical design marketing is. I get to work between engineering and marketing.

So I get to work with fun people like yourself and Michael and many other experts, work out the behind the scenes story and then bring it to the real world. That's basically what I get to do every day. I know. Well, I do know because I actually talk to you a lot, Richard. So, there's a couple of things. Now as Michael had said earlier, we're going to talk about AI security stuff, which I have to apologize was my note that I wrote on the podcast notes that we have. But let's talk about that.

So we've had some guests on already talking about different aspects of AI security. So some earlier episodes, we had Ryan who talked about co-pilot for security. And we also had AI Red Team folks. We had Amanda and Pete on. But I know that you tell stories more kind of, how do I put this, straight up AI security. So if people are trying to get their heads around all of this, and obviously many people are trying to use AI, security people are worried because they need to understand it.

Where would you say people would start with that? Good question. So AI security is not much different to any other security we've done before. There's a lot of basics in the data and identity layers. I think those are the two that most people get worried about is who's got access to it and what are you going to do with my data? So we have to go back and review some of the things we've maybe not done properly in the past.

But the AI specific security problems, and this is where the Red Team comes in very handy, is that it's a little bit like you can social engineer it. And so that becomes a whole new problem. You can't social engineer machine learning. Machine learning has a start and an end. You put data in, you get data out. It's pretty solid.

But generative AI, as we have today, that ends up with its own special unique characteristics that we have to be aware of, put some guardrails in place, and then monitor and watch and not let it fall down. That's basically where it goes. So you mentioned that people are concerned about identity and data. There's some top concerns. So let's start there. Now I know you did a Build session with my boss last week at Build or last week from the time we're recording this.

That was all about data security. So why don't we start on that one? Tell me problem, what we should do about it, why it's different in the AI space. Yeah, so we luckily came up with the three golden rules that made life a lot easier for us to present it. And we had a lot of fun doing that one. So the rules basically say that data already has a lot of protections in place.

So whether you're accessing a database or you're connecting to file servers, or maybe your user is uploading an email, looking at a website, every kind of data type you might use to interact with AI comes from somewhere. So hopefully it's either a trusted source like SharePoint or Office, or maybe it's coming from the internet, which means it might be untrusted.

So knowing the kind of data you've got coming in, the idea is you should make sure that if there are already access controls or maybe API security, or maybe you started doing the more advanced labeling of content to say exactly what kind of sensitivity label this data has, you would make sure that when you get to your AI system, you do not want to let that AI system have unlimited access to the data on its own, because otherwise I will trick that AI into giving me all the data it has access to.

So the first layer is make sure that your data is protected and that you don't undo those protections by giving even read-only access through a service account is not a good idea. So instead, you should use the user's credentials and get access to the data on behalf of the user. There are different ways of doing that, but that's the fundamentals of that first one. And then the second rule is about identity.

And this one was surprising to a few customers I spoke to about this, but you should think about multi-factor authentication, what device the user is using. So when you get to the prompt of an AI system, that should be locked down and controlled to your trusted users.

Now if you're making a customer-facing or public-facing AI app, you have a different set of problems, but mostly we're talking to enterprises and businesses that want to build some kind of chat GPT type functionality, but on their own data. And for that, they need to limit the scope down to the users that are trying to get to the data that that AI has access to.

I have spoken to a few customers of building monolithic applications that has all the data with all the applications and all the users in one place. And really you're asking for trouble there because you're trying to expect that the LLM or the large language model is controlling access and I would not use that as a security layer. And then the third one is layers of security around the model. So models today, they're getting better.

There's a, our red team works regularly with our own and other model makers to give feedback on how to improve model robustness in the first place and they're becoming more secure by default. However, I would always recommend that you put layers of controls like content safety, some kind of filtering. You don't want the user to work directly with them with an AI. You want some layers in the middle that look for potential bad things, filter them out.

And then even when you get the answer back from the AI, you double check it and say, is the sensitivity of the data you're giving me either malicious or higher level than I trust this user to have access to. And that's, you get one last chance before you hand it to the user. So we've got, if you watch the build talk that we did, there's diagrams in there and it makes it much easier to explain. So many, many moons ago, I worked on this little thing called the security development life cycle, SDL.

I heard through the great via that there's been some changes made to the SDL in light of AI security. Yes, there is because the build conference is coming up. We wanted to celebrate the 20 years of SDL. And so just before the build conference, we made it the new updated site go live.

If you go and look at the SDL site now, we'll put a link in, but aka.ms slash Microsoft SDL, you'll see that we've created new 10 new practices and these practices cover end to end security for both developers and through DevSecOps and into live operations. And the intent is we will continue to update this. We have an initial set there today and then we'll continue to add detailed practices within each of these 10 in the future.

Some of the best ones that we've added into the SDL is things like performing security design reviews and threat modeling. The number, I asked people to raise their hand in the session, but also other conversations I've had when I asked people if they do threat modeling, there's a little bit of an awkward silence of what is that? And I don't think we do it very well. So I'm glad that's now being called out something serious you need to go do.

Another one would be the things like performing security testing. There's a lot of different security testing you have to do apart from pen testing and red teaming. And so in the software world and also physical world testing, and then as we get to the end of that, there's also training. And so as we all need to do is everybody needs to learn about security, which is why we bought more security sessions to build this year.

We need everybody to take part in the security conversation going forwards. From a practical perspective, it's interesting you should bring that up. I can tell you right now, looking at the threat models that I review in Azure data, over the last, when I say nine, 12 months, there has been more of a focus on asking machine learning, large language models, AI protection of data of the models themselves, basically asking those questions during the threat model reviews.

Whereas prior to the sort of wave of artificial intelligence, those questions honestly were not really asked, except in environments that were just pure, true AI. But yeah, so now there's a much bigger focus. That comment and those topics are brought up during basically every threat model review, as they should.

Another work I got started in when I started learning AI last year, and I started to think, what are we going to say when all these different AI people at Microsoft are doing AI and they're doing security? And threat modeling became one of the quickest things we had to talk about because we had 50 plus teams all rapidly building co-pilots and only one team having to check them all before they went production. And so that team needed some kind of standardization in how do you talk about AI?

What are the components? And so building a, even building a tech stack, the fundamental tech stack for AI is the platform, the application, and then the usage. If you can go those three layers, then it's a good starting point to bucket things into at least three big areas. And then you need to make sure that you think of the threat from the user's prompt in the usage area through the application and then to the model. And it's always surprising how simple that is, but really effective as well.

Well, I'm realizing I'm going completely off script here. By the way, for anyone who wants to know, we don't really have a script in this true sense of the word. We have just a couple of bullet points to sort of discuss. This is not in the list. So if you look at co-pilots, right? So co-pilots is the big Microsoft brand around using AI to bring it to the masses essentially. That's my perspective.

I look at, we were designing the AI, the co-pilot for Cosmos DB and where you could basically give it a database and it could ask you questions like, how do I do this, that, and the other build up the SQL statement for you. And it was interesting, the safeguards that we had to put in place on the understanding that there was always the possibility that what question you asked could end up producing a query that was wrong.

And we need to make sure that we had defenses in place to mitigate a potentially incorrect query coming back from the large language model. There's all these sort of things you have to think about, right? I mean, could you imagine, you know, you say, how do I query for ABC? And it gives you the prompt back or SQL statement that drops the table instead. Right. So, you know, we do things like warning people like, hey, you know, you're connecting as an admin.

You really shouldn't be running these queries as an admin. First of all, it's just a bad idea anyway. But you know, if something goes wrong, then it can go wrong because you're an admin, you're violating the principle of least privilege. So you really probably shouldn't do this. And there's stuff in there that's like, you know, there's prompts to kind of just educate people on that.

But there's also some real good stuff that we did under the covers to help mitigate and detect a potentially, you know, a hallucination of a query. Do you see that kind of stuff happening as well? Yeah, that makes a lot of sense. From a zero trust perspective, you should trust that the users probably put something in wrong and then trust that the, or don't trust that the LLM interpreted what they wanted to do correctly.

And LLMs or large language models, even small language models, you know, they're very broad in all the things they can do. That doesn't mean they're very good at specific tasks. And so there was a really good session, a build between Mark Rusinovich and Scott Hanselman, where they were doing a very basic demo where they were trying to clean up the desktop icons. It's a really good storyline.

But in doing so, what they got the AI to do or the generative AI to do was actually call features and functions that knew what they were doing. So basic tasks like count how many text files are on my desktop. The LLM, terrible at that. It didn't, it got the number wrong every time. But if it called a function and it called the correct function, it would get the correct answer back. And so I think what we're seeing is AI using AI is where we're going in the future.

And that's good and bad because it all comes down to the developer putting these Lego pieces together. But if one AI gets it wrong and another AI gets it wrong, how many chances you got of getting it right. So the final part of that is we're looking at using something called an AI watchdog. And just purely from a security perspective of jailbreaks and prompt injection, you can't always tell from the language the user used that what they were trying to do was a jailbreak.

And so you need AI, separate AI that's not influenced by the user's instructions, watching both the input of the prompt into the LLM and then the output from the LLM and say, do these two things match? Does any of this look suspicious in any way? And it's not just one-on-one, but like the whole conversation. And you're looking at things like intent and semantics. And so that's where we're heading with it. Like we need AI to protect AI or protect users and AI interactions.

So Richard, you mentioned earlier on about the different layers of an AI application. But for the security folks out there, how does that translate into something that we can do something with in terms of say defenses or mitigations that we put in place? Can you run us through that? Absolutely. So if we started the usage layer, the first thing to think about is that we still have all the same cybersecurity risks from things like an insider, which is growing in popularity.

Then there's social engineering and phishing links and phishing documents. They're all going to be amplified by the use of AI by the attackers. They're going to find better ways to do that in localized languages or targeting individuals. And then we have this not a new concept, but we're resurfacing it with poisoned content. And that idea being that you might trust the content is benign. Let's say I'm a hiring manager and I'm going to look at a bunch of CVs.

And so I might want to take 10 CVs from my top candidates and have the language model review them all for me. What I don't know is that somebody might have hidden some kind of instructions inside of that Word document or even an image or an audio file, whatever it might be.

So at that usage layer, there's a whole set of things that we can do to protect ourselves, like identity security, make sure we're not logging in as admin so that if we do trigger some kind of phishing link, we're not causing more damage later on. We can do content filtering, some basic things like just know where the content came from. Looking for white text on white background is such an obvious one. And also special characters, smiley faces.

There's lots of telltale signs, very basic and simple you could do. I think those defenses will get more mature in time. Once we create the prompt and then we create the content and we add it all together, we're now into the application. So we've sent that in when we're waiting for a response. That's when prompt filtering comes in. This is something that in Microsoft Copilot, we build all this in and we look after all that, we make sure it's best defenses we can make.

But if you're building your own, you need to make sure that your prompt filtering is kind of top of the game to prevent some of the obvious harms. It's like malware filtering today. Then you have all your applications, security controls, it's the normal things like SQL commands or API security, et cetera. Eventually you're going to build a meta prompt.

So a system meta prompt is where you take what the user said they wanted you to do, you've cleaned it up and then you might add additional guardrails. Like I want you to act like a professional health and safety representative and I want you to answer the user's questions in a certain way. Then you give a whole bunch of instructions that comes along with whatever the user asked for. Now we package that up and then we send that to the LLM.

So now we're in the platform and in the platform again we've got layers of security here but in that deep safety system we're certainly filtering for harmful content and there's a bunch of categories for that. We've just updated the Azure AI Content Safety Filter now has customizable filters. So if you don't want certain words or certain kinds of language being used, you can change it.

But we've also got political problems and racist problems and harmful content like bio weapons, all of that can get filtered out. And then we ask the LLM the question. So it's taking a long time to get there, it's milliseconds in the real world, but by the time it now gets the LLM it's going to process whatever information it's given and it's going to send it back out. That's your on the way back out, we're back into the application layer.

We can look at what data is if I used content from HR and content from finance and content from legal, what is my outcome? Is it a mix of all three? Is it none? Because we've cleared all the sensitive data out like what is it? And so you have to do some kind of sensitive labeling on the data going out. But you also want to check it again to see if it's appropriate for the user that's using it. So are you giving out customer pricing lists when it's the customer you're sending it to?

No, I want to send that to my sales reps. I don't want to send it to my customer. And so you can check all these things before you then send it out. And then we're back into the first layer again, which is the usage layer. If the user isn't on a corporate managed device or on the right kind of application, if it's anything that's suspicious, you might not want to send that data to them.

That's some of the normal layers of security we put in place, like don't send it to Dropbox, just only send it to OneDrive or only send it to an office application. So there's controls, that's 10 controls I've just told you about in the diagram I'm staring at. But there's many different ways that you can try to prevent, expect it's going to go wrong and then do detection and response on the way back in real time.

Richard, you're talking to loads and loads of customers and I'm sure people who are listening are thinking about this too. But I'd like to ask people, where do you think, because there's a lot to do here, where do you think is the best place people can start and be the most impactful with these controls and mitigations and things they should be worried about? Because of course, we'd love to say do everything straight away, but we all know that that's not really possible, right? Absolutely.

I can plug my book here, I think, and say go read my book. So I put all this together just a couple of months ago, I decided to write a book on this topic. And through the chapters, I try to structure that of like all the things you need to think about. But apart from reading the book, I would say that at a bare minimum, start drawing diagrams is my personal offering to the world is like without a good diagram, you don't know what people don't know. And you don't know what you don't know either.

So by putting the diagram together of what you think the system looks like, then trace the data through that system and see if everyone else in the room agrees or as what ifs, you know, and just keep asking what if what happens if I interject a command here or what if I leak the data there. And so they're going back to threat modeling and that idea of getting more people involved in threat modeling.

A diagram for me is why I'm that's why I moved into marketing is I want to make sure that we we provide people with the tools to go and make a difference. And that's a very nice segue into our next into what I was going to ask you about next, Richard. As you said, you've written a book, both Michael and I have also written books. And I know there's a lot of people in industry who quite like the idea of doing it.

So although a little bit of a segue, I wanted to ask you what what's your advice for for people who might want to get their name on a book and be an author to show it to the nearest and dearest. I mean, I'm not going to lie. That's basically my was my motivation originally for doing it, that I just wanted to say I was an author more than anything. That's about it. I'll be honest with you. You don't do it for the money. I would say you do it because the best reason I found is to learn.

So I thought I knew a lot about AI as like confident enough to write a book. Why not? And so I started to track like, what does it take to write? I've written books before, but I've written them on Windows and Sentinel. They're very technical, and I wanted to make sure this book was to a broader audience. So I just started writing it. But I don't know how you both approached it. Maybe you can share your experiences here.

But the easiest way to get started, if anybody wants to do it, is you create your 10 chapters and inside each of your 10 chapters, you write 10 bullet points. And that gives you the bones of your book. Now with that, it will take you, I'd say 140 hours of writing, and then 60 hours of perfecting, which is research and diagrams and editing and getting reviews from 10 more people.

But eventually, you're aiming to write somewhere between 60 and 70,000 words, and it will take you about 200 hours of effort. So depending on how many books you sell and how much you sell the book for and whether your publisher takes most of the profit or not, you're probably not going to get minimum wage for writing a book. But what you do get is all the lessons learned of having to, the two hardest chapters for me, has to be the ethical framework and then AI governance.

So these are two things that don't normally think about ethics in a security world, some people do, but not everybody. When you're from a technical background, ethics weren't the thing I studied. And then AI governance is so new that took a lot more research to understand where are we at in the world of governance. And so you have to dig into the bits that you don't know very well.

But now I can say I'm more confident in being able to have these conversations just because I spent those 200 hours in writing. And then it is nice to feel it in your hands and you've got a book and you can put it on the shelf. And every now and again, someone says, Hey, I bought your book and I'm reading it. That's pretty cool. Yeah. I don't know how many hours we spent on the last book, which are designing and developing Secure ASI solutions.

As you don't know, I don't really track it to be absolutely honest with you, but I do agree with agreeing on basically the chapter outline and then the main points in each chapter on the understanding that you're probably going to change. We were pretty lucky. I think that the outline that we had for the book, I think we deleted one chapter and we took the content of that chapter and infused it into other chapters.

So for example, that actual chapter was going to be a whole chapter on Defender, Defender for cloud. And we realized it was actually better to not make it a separate chapter because it was changing so much and rather just infused the best practices into the other chapters instead. Just things that you should be aware of when it comes to, for example, in the Key Vault chapter, we talk about Defender for Key Vault, right?

In the storage stuff and the database chapter, for example, is a bit better example. We talk about Defender for databases and we talk about sort of the overall security score just conceptually. But I agree with you. I think getting that structure in your head helps substantially. So, but yeah, I agree with that a hundred percent. As for timing, I don't know. I'm terrible when it comes to predicting timing. Some people know this story.

When I wrote the crypto chapter, it was supposed to be 25 pages long and ended up being 88. So yeah, that'll give you an example of how good I am at working out what we should be doing in terms of time and length. Yeah. That's why I recorded all my times to just work out how long does it take. It's about 3, 530 words per hour and that's when you'll spend 140 hours. So if you go to 60,000 words. But you're right.

And that's actually why a lot of people don't write a book because they don't know how long it's going to take or what efforts involved or they get two chapters into it and it's amazing. And now you've got the grunt work to do. You've got to finish it. So well, it's more than just that. You're absolutely correct. But it's more than that. You're under a time constraint. Like you can't take forever to write a book.

You just can't, you know, I mean, people, you know, they're holding your feet to the fire. You can't just lollygag your way through writing a book. You know, you've we actually hit our schedule. In fact, we were actually about about a week and a half ahead of schedule when we were done. We were very proud of ourselves, but you know, we put in the hours. My guess is we probably put in more hours than we thought to make that schedule though. But that's just the nature of the beast.

I mean, you bring up a really interesting point about learning stuff. You will learn stuff as you write the book. Even if you're an expert in the topic, you will still learn things. And in fact, one thing I've found with doing, you know, sort of the stuff that I write about, you'll find bugs in products too. So there's a product from the Key Vault team called Managed HSM, Managed Hardware Security Module. And there was a bug signing data. There's a bug in their REST API.

It wasn't a bug in the HSM. But I, you know, I was doing some experiments and I found this bug calling the REST API. It was brand, by the way, it wasn't even shipping when I was using it. So you know, bugs are expected. But yeah, you'll probably find bugs and discrepancies in products and features as well when you're writing a book because you're being very disciplined about it when you're writing. I hope, you know, I hope people are disciplined when they're writing.

And yeah, you'll probably find issues. Is that something you came across too? Yeah, I started to come across the what if scenarios. So I'm not in the red team, but I get to work with them so often that I work, you get to see how they work or some of the thought patterns they go through. And it's really just what if. And so I thought about things like what would a poisoned well or a honey pot look like if you reverse what we're using for today, the honey pot is to attract the attackers.

Well, what if we try to attract AI? What if we get AI to go on? And there's been some examples in the news recently. I won't repeat them now, but there's some interesting stories of AI telling users to do things because it found it on Reddit. And so if you can attract the AI to go and use something as a source of truth, and it's really not, that's a great scenario. And on your point like about being under the gun, when normally I would be and I'd have been told you're on a schedule.

So if you go to a publisher, you'll be under a schedule and you'll say, I'll submit X chapter by a certain date and then version two X number weeks later. I didn't, I went self-published. And that's another angle is you do everything yourself. It means you've got the freedom of time, but you're right. If you start writing and you take two years, how relevant is the information you wrote two years ago? And you'll end up rewriting it so many times. So I gave myself six weeks.

I wrote it in six weeks. So I don't encourage anybody to do that. I had a lot of spare time on my hands. I just uninstalled Instagram and spent all the spare time writing a book instead. Yeah. Uninstalling social media is probably just a good idea anyway. Yeah. But that's interesting. So you did it all in six weeks. That's pretty good.

I mean, all things considered, you know, but again, and just for people who are listening need to understand, we're not paid as Microsoft employees to write these books. I mean, they're basically deemed moonlighting. You actually have to get an agreement from your manager.

In fact, in my case, I had to get an agreement from my manager's manager as well to actually write the book and you actually sign a moonlighting document because you're essentially earning money outside of, outside of Microsoft and it's not a Microsoft sanction thing. It's not part of your job. You are literally moonlighting. So yeah. And to your point, you're not going to, you're not going to retire on many books.

That being said, writing secure code and writing secure code second edition were immensely successful by any measure. I mean, I'm not talking John Grisham novels successful, but in terms of tech, tech books successful, they were ahead of the pack. We sold a lot of those books, a lot of books. But yeah, for the most part, you're not going to, like you say, you're not going to retire on the earnings from a, from writing a book. It's almost a labor of love if nothing else.

It's good to see people are still reading books with the opportunity of AI and search to just go find it, you know, summarize this for me or, or tell me what you think of X topic. You can learn so fast. That's one of the things I love about AI is I don't search anymore. I just ask AI for its opinion on something and it does all the searching for me and then brings you back to top five websites.

Now not always accurate and you got to double check it, but it's easier than doing like a search and getting 10,000 pages. I can go look at. So they're one of the problems we've got in publishing is it's getting swamped with low quality AI created content, which people are just trying to do for like a quick buck. So if you are an expert out there and you do know your topic, you should think about contributing to the proper corpus of the library, right?

And having content that people can read and appreciate because there is an audience out there. And this is where you might not self publish and you might go down the path of a publisher because if you can convince them it's a good book, they can do all the legwork of getting that book out there. And we're still selling books we wrote four or five years ago, one or two copies a week maybe, but it's still a, you know, Windows security and Sentinel.

They're still books people want to pick up today and go and read about. So they have long lifespans too. So be careful what you write in your book because it will still be around in five and 10 years time. But you bring up an interesting point there though. I love to read. I read all the time. I do use AI to summarize some things once in a while, but I do enjoy reading a book.

Look, this is probably more information than anyone needs to know, but I have a whole bunch of Kindles around, let's just say strategically placed around the house and leave it at that. And I have one in my laptop bag. Whenever I go on vacation, I take my laptop with me because of course I have to take my laptop with me. I have a Kindle in there as well. Because they use so little power, you know, the thing can stay on power for a long time.

But yeah, and then that way, you know, if I'm reading something, I can just pick it up and you know, where I left off. But yeah, I read all the time. I probably don't use AI as much as I should to sort of summarize stuff. I probably should start doing that a little bit more. The best part of course, you can ask AI to summarize things in a snarky voice, which is even more interesting. But yeah, is there anything else you want to add before we wrap this up?

If you're wondering why Sarah's not chiming in, she's actually got a, just right now, she's got a horrible internet connection and she's just dropped. So I just sent her a message on Teams and just say, Hey, just hang in there and Richard and I will wrap this thing up. And on that very topic, so Richard, one thing we always ask our guests is if they had one final thought to leave our listeners with, what would it be?

I think as I was writing the book, there's one quote that I came back to time and time again, and it's from Michael Crichton and Jurassic Park. And he said, your scientists were so preoccupied with whether or not they could, that they didn't stop to think if they should. And when it comes to AI, I think that we could all learn from that one still. I love the movie, but as we get into the idea of AI, it's like, should you be doing this with that data and using AI?

And it doesn't mean no, just stop and think. I love Crichton's work. I like a lot of Crichton's thinking beyond his books. All right. So with that, let's bring this episode to an end. Richard, thank you so much for joining us this week. You and I meet somewhat regularly inside of Microsoft, so it's good to sort of interview you beyond the work that we do. So again, thank you so much for joining us. And to all our listeners out there, we hope you found this of use.

Stay safe and we'll see you next time. Thanks for listening to the Azure Security Podcast. You can find show notes and other resources at our website, azsecuritypodcast.net. If you have any questions, please find us on Twitter at Azure Setpod. Background music is from ccmixtor.com and licensed under the Creative Commons license.

Transcript source: Provided by creator in RSS feed: download file