Episode 93: Continuous Security Development Lifecycle

Mar 25, 2024 | 39 min | Season 1, Ep. 93

Episode description

In this episode, Michael, Sarah, and Mark talk with guests Tony Rice and David Ornstein about the Continuous SDL (Security Development Lifecycle).

We also discuss Azure security news about Azure Key Vault, Cloud PKI, OAuth2 app auditing, updated SQL Server password verifiers, memory safety, and Azure SQL DB.

The Microsoft Azure Security Podcast (azsecuritypodcast.net)

Transcript

Welcome to the Azure Security Podcast, where we discuss topics relating to security, privacy, reliability and compliance on the Microsoft Cloud Platform. Hey everybody, welcome to episode 93. This week is myself, Michael, with Mark and Sarah. And this week we have two guests. We have Tony Rice and we have David Ornstein, who are here to talk to us about the continuous Microsoft Security Development Lifecycle or the continuous SDL.

But before we get to our guests, let's take a little lap around the news. Mark, why don't you kick things off? Yeah, so for my news, a couple of things coming up for me. BSides Tampa: I'm going to be presenting at the main show on Saturday.

And so I'm going to be talking about, I think we called it the No-BS SOC, but some real straightforward truths around security operations and how it works, what good looks like, and how teams actually grow from one or two part-time people into a global, 24-7 operation. And I'm also doing some training on the Friday before, which still has some slots available, talking about the Security Adoption Framework from Microsoft.

So essentially, how are we guiding customers on the overall security modernization piece, talking through how we guide that and some of the lessons learned and best practices there. That's a two-hour session on the Friday from, I think, three to five. And then I'm also talking at RSA. So for folks that are going to be in that area, I'd love to meet you face-to-face. I've got a couple of theater sessions on the show floor and then a main conference session.

It's actually going to be a really fun one. It's called You're Doing It Wrong: Common Security Anti-Patterns. So we're really zooming in on all the things that we see organizations get wrong, often for all the right reasons and without really knowing it, and making sure we call out, hey, these are the opposite of best practices, these are the things you need to fix, and then how to address them with best practices.

And we've got some interesting themes that we're going to do there and reveal some interesting new content as well. So that's the news for me. I just have one bit of news this time, which is that Trusted Launch is now in preview for AKS. Essentially, that means Trusted Launch improves the security of AKS nodes against persistent attack techniques.

It means the machines underneath will have boot loaders that are verified and signed, and the same goes for the OS kernels and the drivers. The goal, of course, is an entire boot chain that has been secured end to end. So you should go and have a look at that because, of course, we love all of our boot chain being nice and secure. Go and have a play around with that if you're using AKS. And that's it from me.

I actually have quite a bit of news this week, but I'll try and keep it brief. The first one is there's been a huge improvement in Azure Key Vault. In fact, I was talking to a customer this week who said that they couldn't use Key Vault; they had to use Managed HSM instead because they have a requirement for FIPS 140-2 Level 3 hardware. Well, guess what? Azure Key Vault now supports FIPS 140-2 Level 3 if you're using the hardware-backed SKU, one of the two variants you can have.

So if you use the Premium SKU rather than the Standard one, that's the one that includes support for FIPS 140-2 Level 3 HSMs, which is absolutely fantastic. All you have to do is create new versions of your keys and they will roll over to the new hardware. There's also something which actually blew me away when I saw it, because I did not know this was coming out.

And that is that the Microsoft Intune Suite now supports Microsoft Cloud PKI, which basically means you can set up your own certificate service, essentially, to issue your own certificates for your own devices, your own services, and so on. This is really magnificent to see.

So I've got a couple of posts on that information, because this is really, really cool; it means that you can now start issuing your own certificates for your own use as opposed to using a third party. The next one: Merill Fernando, who is a product manager over in Microsoft Entra, has a video called Run a Quick OAuth App Audit of your Tenant Using This Command to Protect Yourself. Basically, it's a tool that he's added to his MSIdentityTools PowerShell module.

And it's a PowerShell command that will dump all the app IDs that you're using inside of your tenant so you can see whether they're being used, whether they're old, and whether they need revoking. You know what? A lot of people don't have a lot of visibility into the applications that are running in their tenant with managed identities. So a big hat tip to Merill for doing this.
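(For listeners who want to try something similar without PowerShell, here is a rough Python sketch of the same kind of audit against the Microsoft Graph API. It is not Merill's tool, just an illustration of the idea; token acquisition is elided, and you would need a permission such as Application.Read.All.)

    import requests

    TOKEN = "<access token>"  # placeholder: acquire via your usual Entra ID auth flow
    url = ("https://graph.microsoft.com/v1.0/applications"
           "?$select=appId,displayName,createdDateTime")

    apps = []
    while url:  # Graph pages results; follow @odata.nextLink until exhausted
        resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
        resp.raise_for_status()
        body = resp.json()
        apps.extend(body["value"])
        url = body.get("@odata.nextLink")

    # Dump every app registration, oldest first, so you can review what is
    # still used, what is old, and what might need revoking.
    for app in sorted(apps, key=lambda a: a["createdDateTime"] or ""):
        print(app["createdDateTime"], app["appId"], app["displayName"])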

Next one is, actually, something I'm very proud of. This is actually the first feature I've shipped in a Microsoft product in a very, very long time. And that is we now have support for iterated and salted hash password verifiers in SQL Server 2022 Cumulative Update 12. Historically, we stored a password verifier using SHA-512, which is okay; there's no vulnerability there, but some compliance programs require something even more secure, which is to perform iterations over that hash, again, slowing the attacker down.

So we've now added that support. It's hidden behind a trace flag, so it's not enabled by default; you must go and enable it. But basically it's there for compliance purposes. And yeah, it's really good to see that being added. We started the work late last year, and it was driven primarily by customer requirements.
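(To make "iterated and salted" concrete, here is a minimal Python sketch of the general technique, PBKDF2-style iteration over a salted hash, so each password guess costs an attacker many hash computations instead of one. This illustrates the concept only; it is not SQL Server's actual verifier format, and the iteration count is just an example.)

    import hashlib
    import hmac
    import os

    def make_verifier(password: str, iterations: int = 100_000):
        """Derive an iterated, salted password verifier (PBKDF2-HMAC-SHA512)."""
        salt = os.urandom(16)  # unique random salt per password
        digest = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations)
        return salt, iterations, digest

    def check_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison

    salt, rounds, verifier = make_verifier("correct horse battery staple")
    assert check_password("correct horse battery staple", salt, rounds, verifier)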

Next one: unless you've been living under a rock recently, there's been a big discussion about using Rust versus, say, C and C++ for systems development, or more accurately, about using memory-safe languages. A document came out from the White House explaining the benefits of using memory-safe languages, and Rust was called out as an example in there. Well, then a salvo came from the other side, from Bjarne Stroustrup and Herb Sutter.

Herb works at Microsoft, and he's actually on the C++ ISO standardization committee. He put out a post which is worth the read; his argument is basically that the White House document ignores all the changes that have been made in modern C++ over the last half-dozen years or so. So it's well worth looking at both sides. Don't get me wrong, I'm actually wearing a Rust t-shirt right now. I'm a huge fan of Rust, I really, really am. But modern C++ is also still worth a look.

It's not your father's C++ anymore. Unfortunately, a lot of C++ code is still written as essentially a glorified C, where you're dealing with pointers directly and buffers directly and so forth, whereas modern C++ abstracts all that away with almost zero performance degradation. So it's worth reading, especially if you're involved in that space. The next one, which is my last one, is that Azure SQL Database now supports Microsoft Entra ID logins even when they have non-unique display names.

Historically, you had to have a unique display name, which isn't always the case; it's quite possible you may have two, I don't know, John Smiths. Now you can actually create a login in Azure SQL Database using CREATE LOGIN FROM EXTERNAL PROVIDER with an object ID, and you can put in the GUID for the user as opposed to their name. That way you get rid of this name collision, which is good to see as well.
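(As a rough sketch of what that looks like in practice, here is Python via pyodbc; the server name, login alias, and object ID are placeholders, and note that CREATE LOGIN runs against the master database in Azure SQL.)

    import pyodbc

    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=myserver.database.windows.net;"  # placeholder server
        "Database=master;"
        "Authentication=ActiveDirectoryInteractive;"
    )
    conn.autocommit = True  # run the DDL outside an explicit transaction

    # Pin the login to the Entra ID object ID (a GUID) so two users who share
    # a display name can no longer collide.
    conn.cursor().execute(
        "CREATE LOGIN [john.smith] FROM EXTERNAL PROVIDER "
        "WITH OBJECT_ID = '00000000-0000-0000-0000-000000000000'"  # placeholder GUID
    )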

Always great to see these kinds of advancements being made in our database platforms. All right, that's my news out of the way, so let's turn our attention to our guests. This week, as I mentioned, we have two guests, Tony and David, who are here to talk to us about Continuous SDL. So gentlemen, welcome to the podcast. We'd like you to take a moment and introduce yourselves to our listeners. Hi, everyone. Michael, Mark, nice to chat with you again. It's great to meet you, Sarah.

My name is Tony Rice. I've been with Microsoft for 25 years. Yes, it's been that long. I've known Michael for a fair few of them. I'm currently in the Digital Security and Resilience Organization. I'm responsible for Microsoft security policy all up, including the SDL. In my career throughout Microsoft, I've held various roles, including working in the field as a security consultant.

More recently, working with the engineering groups, helping them implement the security development lifecycle. And that's where I first discovered that it actually needed to be updated. And so one of the things that we've been working on is something called Continuous SDL. Hey, everybody. My name is David Ornstein. I'm an engineering manager at Microsoft. I work in the Digital Security and Resilience Organization with Tony. We're peers in the company.

I've been at Microsoft for 24 years, just sort of chasing Tony's tail a good amount of the time. I'm a software engineer more than a security expert, but for the last 15 years or so, I've been working in the security space, helping run the Trustworthy Computing Initiative for quite some time. And then for the last 10 years or so, building tools and systems that are underneath our implementations of running the SDL at scale in the company.

So I'll kick it off. I've got gray hair, what little bit of it is left, so I remember the original SDL and when it got rolled out. But for our audience, can you just give us an overview of the SDL: what it is, what the purpose is, and how people should be using it? So what is the SDL? That's a great question. The SDL is a process that was invented a long time ago; it feels weird to be interviewed by Mr. Howard, one of the grandfathers of the SDL.

Its main aim is to help engineers build secure software by helping everyone find security issues as early as possible in the lifecycle and remove them, or, if we can't remove them, reduce their severity. What we've found as we've been progressing over the years is that it's really, really hard to do things in the ways that we used to. We used to have a final security review. We used to run tools occasionally, but now we run them all the time, always.

And this is where the foundation of continuous SDL comes in. We're using a data-driven approach to continuously evaluate everything that happens within both the engineering pipeline and the source code, and we use that to make decisions for the benefit of security on a daily basis. It's funny. First of all, I'm not sure whether to be thankful or not about being referred to as the grandfather of SDL. I'm not sure if I'm being called old there or not.

But anyway, it's interesting though, because if you look at the book that Steve Lipner and I wrote called The Microsoft Security Development Lifecycle, I still think it's a useful book, it's a good book, but there's a lot missing. And to your point, David, one word that's missing in there is cloud. There is nothing about cloud. We touch on pipelines a little bit, but certainly there is nothing about cloud scale, cloud deployment, this constant continuous delivery of software.

And to your point, the SDL needs updating just because of that alone, as more practices become more agile. Is that a fair comment? I think you can think about it as two primary drivers, one internal and one external. The internal one is just the nature of building software and the kinds of things that we're building and how we're doing it has just changed radically. The SDL has been pretty much continuously changing for all this time.

Continuous SDL is just a name for the latest sort of wave of changes. But if you look at the SDL originally, we were releasing products every two or three years or something like that. And the way you can think about tackling security when you're doing that is just completely different than if you're building large complex services where you're releasing them on practically a continuous basis every day, maybe multiple times a day. You just have to tackle things differently.

And I think that's sort of the internal side. And then I would say the external side is just the nature of the threat landscape and the attackers and the way they are functioning. Unless you're asleep, unless you're not paying any attention, you understand that the world is just under some pretty serious threat. And we are and our customers are. And so a lot of what we're doing right now is responding to that.

And the changes for continuous SDL have been underway for a couple of years, but are really being accelerated as part of a broader initiative, which is Microsoft's response to a lot of what's going on in the outside world called the Secure Future Initiative, which includes continuous SDL, but includes a range of other things. By continuous evaluation, I assume you mean essentially tooling, right? So static analysis and potentially dynamic analysis.

Can you give some examples of what that might look like and perhaps even what customers, people listening to this could potentially use as well? OK, well, I'm happy to start with the continuous evaluation component because that's where we were. So the number of things that you want to make sure are true about a piece of software has grown substantially. We used to want to make sure there were no buffer overruns. And now there's 10,000 things we want to make sure are true.

Because as the threat landscape has changed and as the attackers have gotten smarter, there are just more and more ways that they can get in and more and more ways for us to keep them out. And so if you think about how long does it take to go sort of take one pass over a fairly large complex piece of software and figure out whether you're actually meeting all of those requirements in the SDL, that's a pretty substantial process.

And originally, because, as I said earlier, we only released, let's just say, infrequently before the world of cloud, you could take some time during the tail end of the development cycle of a piece of software and sort of have your security push, have your security focus, and do your evaluation. That's just not feasible anymore, because if you wait for that, what you find is just an enormous pile of stuff to do at the end.

And in fact, if you find the problems late, they're way more expensive to fix than if you can find them early. So the first major part of continuous SDL is continuous evaluation. And what that's about is using data. We'll talk more about the data stuff in a little bit.

But that's about basically doing as much evaluation as you can throughout the entire lifecycle, the design process, code, build, deployment and test environments, all that kind of stuff, and being able to examine the security state relative to all of the different things that we're trying to control and manage in SDL, examining that security state on a continuous basis and then being able to take action really as early as possible. Some people call it shift left, catching things upstream.

And it really makes it possible to fix things as early as possible, and makes it really feasible to catch them all, or hopefully all. All right. So let's get stuck into some of the guts of this thing. What are the core elements of continuous SDL? If each of you would like, take one topic and just run with it. Sure. Well, I mean, some of the most obvious ones are automated tools that look at the code.

So static analysis and dynamic analysis tools. One is the CodeQL tool, which I think we'll probably talk about a little more later on. That's available to Microsoft customers; we run it internally and we've been broadly deploying it. And it's a great tool for doing really sophisticated analysis of the code.

But the kinds of security vulnerabilities that exist in the systems that we're building today go way beyond the kinds of things that you can find with a static analysis tool by looking just at the code.

There are things that involve, let's say, the use of secrets: being able to find all the places where you're using secrets, the methods of authentication that you're using, where and how you're managing those secrets, and making sure that you don't have loose secrets in code or loose secrets in text files in your repositories. And there's a range of tools for that, some of which we have available in our commercial products like Azure DevOps, GitHub, and Azure.

In fact, a lot of the automatic scanning stuff that we're talking about that we use internally is available for our customers there. And so we sort of monitor the status of all of those things. Plus we have a range of things that are more specific to the kind of products and services or maybe the engineering infrastructure and tools and systems that we use internally where we run automation.

So you can imagine a system that wakes up every day, or a couple of times a day, and looks left to right across all the telemetry and the exhaust out of all the different systems that are part of the engineering environment where we are building, testing, and operating the services, and then applies a set of rules to that data in order to understand, as much as possible, the true security state of the software.
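(As a toy illustration of that evaluation loop, here is a short Python sketch that applies rule functions to evidence records pulled from different engineering systems. The record fields and the two rules are invented for illustration; the real pipeline is far richer.)

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Evidence:
        service: str
        source: str  # e.g. "repo-config", "codeql"
        data: dict

    # Each rule inspects one kind of evidence and returns pass/fail.
    def repo_requires_reviewers(e: Evidence) -> bool:
        return e.source != "repo-config" or e.data.get("min_reviewers", 0) >= 2

    def no_critical_findings(e: Evidence) -> bool:
        return e.source != "codeql" or e.data.get("critical_findings", 0) == 0

    RULES: list[Callable[[Evidence], bool]] = [repo_requires_reviewers, no_critical_findings]

    def evaluate(evidence: list[Evidence]) -> dict[str, bool]:
        """One left-to-right pass over the day's evidence: service -> compliant?"""
        state: dict[str, bool] = {}
        for e in evidence:
            ok = all(rule(e) for rule in RULES)
            state[e.service] = state.get(e.service, True) and ok
        return state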

Okay. So, gents, I'm going to ask this, maybe because I'm the SDL noob of the people on here. Of course I know what it is, but I'm definitely not a grandfather of SDL. Sorry, Michael. But why are we updating the SDL now? It's been around for a long time; what's driving the update right now? Is there any particular reason? So I think that's one of the big innovations that we've made, if we can claim it as an innovation.

Similar to what we see outside from regulators, governments, and the Cyber Executive Order, there are more and more organizations asking for transparency: transparency into what you do, transparency into the tools you use, transparency into the things that you find. But we see the same requests inside; it's just through a different pivot, through a different eye. So if we're going to run a tool, find issues, and ask an engineer to do a repair if necessary, they want to know why.

They want to know what tool was run, when it was run, what particular issues were found, and why they were important. So pulling all this information together and being able to say, on this day this tool was run and these issues were found, or none of these issues were found but all these rules were tried, is a really, really great way to bridge the gap internally for transparency, letting the engineer know why we're asking them to do work.

But conversely, it also lets us say, when we work with auditors and other regulators, hey, look, we ran this tool on this day, at this moment in time, with all these rules, and we found nothing. It's a great way to build a traceable evidence path, but also to provide transparency into our processes. And this is going to be ever so important as we move forward. You talked about the scale and the speed of deployment in the cloud; any audit that you do in that environment is a point in time.

It was old the minute you completed it. And so being able to provide this level of evidence will really go a long way. I think you mentioned one of our dear friends before, Mike Steven. The industrial complex for audits and review, there's no way they can keep up with this. No, there's no way that a physical audit at a point in time is valid for anything other than that point in time.

But what we're building here is a way to be able to measure things more immediately, take action more immediately. And I think it's the way for future compliance. So you bring up an interesting point there about transparent and traceable evidence. What does that look like? What could customers do as well? Let's say there's a requirement to run static analysis tools to find specific kinds of vulnerabilities.

What do you provide that shows that you did this and these issues were found and these issues were rectified? What does that look like? In our system, and so David's the engineer here, and so he's built a wonderful system. What that looks like if we pick on, say, CodeQL. So across Microsoft, there's thousands of products used and tens of thousands of repositories with different languages in each repository.

The way that we've implemented CodeQL is that we capture all the scan information centrally, and therefore we can measure whether a particular language in a particular repo was scanned, and we also have the catalog of rules that were scanned for. So from an external perspective, we know that at this point in time, all the languages in a given repo were scanned, and we put this together in a package. Maybe David could speak more to this.

I, for instance, know that ten repositories of certain language types were scanned on this day using this rule set and nothing was found. That's really good for the outside. Inside of Microsoft, if an issue is found, it's the same deal: the evidence that gets provided to an engineer in that case would just be the issue that was found.

They would be given actionable information, including potentially the how-to-fix guidance that we have. And some of the innovation David's working on here is to lead an engineer down the right path to fix things, and to fix them immediately. The more we can give them, and the more precise we can be, the better; our solution will actually take them straight down to the line of code that needs to be fixed and suggest a fix. And in this case, we've captured the fact that there was an issue.

In the solution that we built for CodeQL, the issue remains with the code until we get new evidence, a new scan, showing that the code has been fixed, and therefore the issue goes away; the new evidence pointer doesn't include the issue. That's a real benefit for anybody in security assurance, because the issue travels with the code, or ideally the code gets fixed and the issue doesn't travel with it anymore. All too often in old-school SDL, an issue would be tracked as a security bug.

It becomes disconnected from the code, and trying to keep all those things in sync in a world of fast-moving software development is nigh on impossible. So I think the way that we capture that information, and the way that a discovered issue only goes away once it's been repaired, is a big step forward.
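(A tiny Python sketch of that reconciliation idea: key each finding off a stable fingerprint, so an issue stays open until a newer scan of the same code no longer reports it. The fingerprint field and record shape are hypothetical.)

    def reconcile(open_issues: list[dict], latest_scan: list[dict]):
        """Return (issues now open, issues resolved by the new scan)."""
        latest = {i["fingerprint"] for i in latest_scan}
        known = {i["fingerprint"] for i in open_issues}
        still_open = [i for i in open_issues if i["fingerprint"] in latest]
        resolved = [i for i in open_issues if i["fingerprint"] not in latest]
        new = [i for i in latest_scan if i["fingerprint"] not in known]
        return still_open + new, resolved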

I think the short version of my answer to your question, Michael, is that you have to design keeping the evidence in a connected way into your system on purpose. Obviously, inside of Microsoft, we use a lot of the same tools and systems that are used outside by a lot of our customers. But we also have special stuff inside, just like everybody does with how they've set their own things up, with custom tools and so on.

And the systems that Tony is talking about, the reason that we have this transparent and traceable evidence element of continuous SDL is because it's literally designed in. So starting right with the SDL, the SDL is comprised of a set of requirements. Those requirements have identities.

When we perform the data-driven evaluation to figure out whether a particular product or service is meeting each of those requirements, we record a claim that says: this particular product or service, with this identity, met this requirement or didn't meet this requirement. And then there's a list of evidence pointers. All of that is in a data structure that is stored and persisted on purpose.

And those pointers to the related evidence may themselves point to other claims of conformance or non-conformance. I mean, you literally have to design a system that's going to keep track of all of that data. So it's built in, but not by accident. Broadly speaking, what I would say is: keep that evidence.

If you're a customer building systems and you want to have confidence that they're secure, just like everybody understands, sometimes when you discover something's wrong, you want to be able to go back and look and figure out what happened in the past. That's the same reason we all keep audit logs around and stuff like that. Keep that evidence around so that you can go back and look at it.
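(For a sense of what a persisted conformance claim might look like, here is a minimal Python sketch. The field names, IDs, and evidence URIs are invented for illustration, not the actual internal schema.)

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ConformanceClaim:
        subject_id: str       # identity of the product or service
        requirement_id: str   # identity of the SDL requirement
        conformant: bool
        evaluated_at: datetime
        evidence: list[str] = field(default_factory=list)  # pointers: scans, commits, other claims

    claim = ConformanceClaim(
        subject_id="service:payments-api",      # hypothetical service identity
        requirement_id="sdl:static-analysis",   # hypothetical requirement ID
        conformant=True,
        evaluated_at=datetime.now(timezone.utc),
        evidence=[
            "codeql://org/repo@commit/results.sarif",          # hypothetical URIs
            "claim://service:payments-api/sdl:repo-policy",
        ],
    )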

We've made a lot of progress, and we're pretty excited about systematically, in a heterogeneous way, being able to glue all that stuff together. But the core idea is you've got to keep it, and you've got to know where it is. On to the next topic: David, you mentioned the data-driven methodology before. Why don't you give us some insight into what that means?

If you think about how most, let's just say broadly speaking, auditing and compliance work happens, somebody says, okay, well, the standard says you have to do these 10 things. And then an auditor comes in and they've got a clipboard with a little checklist of 10 things. And they want to know whether those 10 things got done. So they go ask some people and they say, did these 10 things get done? And somewhere, ideally, you would find a document that's a record of those 10 things being done.

So that's the broad concept. For the SDL, a large portion of our original methodology was based on what we often call manual attestation, which is: you go ask a developer on the team, hey, for example, are you using a particular deprecated form of encryption that we no longer think is a good idea to use? And our SDL compliance program, at scale across the company, would rely on that engineer answering that question correctly. That's what I mean when I say think of it as manual attestation.

The challenge is that the landscape of things you need to know about your systems has become so large that this is really not a reliable way to do it. I don't remember how the code I wrote three years ago works; I can barely remember how the code I wrote six months ago works. And so the truth comes from data. The truth comes from looking at logs. The truth comes from looking at the code.

The truth comes from looking at databases that are records of things that actually happened. And so that's really what this data-driven methodology is about, is about taking all the SDL requirements and moving as much as we can away from anything that's based on manual attestation to actually evaluating data that we know tells the truth.

If you want to know whether every server in a data center associated with a particular service is keeping up with patching, you don't want to ask a human being that, even if they're responsible for making sure that that happens, because people can make mistakes. This landscape is very complicated. But if you interrogate the scan logs from all of those servers, you will know the truth.
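(In the spirit of that example, here is a small Python sketch that interrogates patch scan records rather than asking a person. The record layout and the 30-day freshness window are made-up stand-ins for whatever your scanners actually emit.)

    from datetime import datetime, timedelta, timezone

    MAX_SCAN_AGE = timedelta(days=30)  # example freshness window, not a real policy

    def patching_compliant(scan_records: list[dict]) -> bool:
        """True only if every server scanned recently and has no missing critical patches."""
        now = datetime.now(timezone.utc)
        for record in scan_records:  # one record per server, straight from the scan logs
            fresh = now - record["scanned_at"] <= MAX_SCAN_AGE
            patched = record["missing_critical_patches"] == 0
            if not (fresh and patched):
                return False
        return bool(scan_records)  # no servers reporting is not compliance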

So the data-driven thing is really all about having the evaluation process, the compliance engine at the middle of all this stuff that's doing the continuous evaluation we were talking about earlier, use data as much as possible: looking at the issues that are found during static analysis with CodeQL, or looking at the configuration of people's Azure DevOps instances to figure out whether, yes, actually, every repository is configured to require two full-time employees to sign off on every pull request. For each of these things, you can look at the data and find out the truth. And that's what the data-driven methodology piece is about. And I suppose it just gives you better insights into whether you're making progress on that, right? I mean, if the numbers are going up, that's probably not good. But if the numbers are going down, then perhaps that's a good thing. Perhaps we're making progress.

Yeah. Well, I like to see the numbers go down to zero, but of course, sometimes you need a glide slope to get there. I think the other thing, going back to the point about evidence and transparency, is this: if the evidence I've recorded says that the reason we decided somebody didn't have a certain kind of vulnerability is that somebody said they didn't, that's evidence, but it's not very strong evidence.

Whereas if I can say, well, here are the static analysis rules that we ran with CodeQL against the source code, and it was this version, these commits, in this repository, really, really detailed, then you know what actually happened. That's stronger evidence. So increasingly, being data-driven ties these different pieces together; it enables the transparent evidence collection and organization as well.

If I was going to add one additional thing, it would be that it also enables us to see where we can still make improvements. So to your point, the numbers might go up: you introduce a new tool, you introduce a new rule, and you get to see that immediately across the entirety of the estate, because we're taking the data-driven approach. It's all there.

And so we can immediately tell from our implementation what the cost is to Microsoft and what the cost is to an individual engineering group, and help them come up with a plan to address the issues, if there are indeed issues, when we bring in a new rule. From time to time, we have an incident, and in some cases it's because we didn't have the right rule, or it might even be a new attack pattern that we weren't aware of.

And so when we add that to our tooling, whether that's CodeQL or another tool, the fact that we can take a broad, data-driven view across the entirety of the Microsoft estate and see what impact it has before we take action is a benefit to everyone. And that's an important point, right? The fact that this is continuous means that as we see new vulnerability classes, or new variants of existing vulnerability classes, we can adapt quickly, right?

We're not waiting two years to update some tools or some education or whatever, right? This whole thing is designed to be updated rapidly as new threats evolve. It was always the case that the SDL team was tightly connected to the MSRC, the Microsoft Security Response Center team, for learning about new variants of issues. But now we're even more tightly integrated with those things. And the tool that's closest to my heart is the CodeQL tool.

We can literally have a new rule in place and scan for a new incident, a new variant. We can have a new CodeQL rule in place in the afternoon, say, if we find something in the morning, and we can have scanned all of Microsoft by the following day and have the results. It's a pretty impressive feedback loop in terms of just how fast we can respond these days.

So while we're on the CodeQL topic, and I really don't want to spend too much time on CodeQL right now, one of the things I love about CodeQL is the fact that it is, in the overall scheme of things, relatively easy to write new rules. That just blows me away. In fact, I'd even go as far as to say it democratizes writing rules, because you don't need to go to some person who will quite happily relieve you of $100,000 to write a new rule for some bespoke static analysis tool.

It's something you can do yourself if you're using CodeQL. I'm not saying that it's really, really easy, but it's certainly a heck of a lot easier than writing your own compiler in the first place. I've written CodeQL rules. I'm not going to say they're awesome, but there are things that I've seen where I've written simple CodeQL queries to help me narrow down where vulnerabilities may exist.
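(If you want to try that on a repo you own, a rough sketch of driving the CodeQL CLI from Python follows. It assumes the codeql CLI is installed and on your PATH, and uses the published codeql/python-queries pack as an example query suite.)

    import subprocess

    # Build a CodeQL database from the source tree in the current directory.
    subprocess.run(
        ["codeql", "database", "create", "codeql-db",
         "--language=python", "--source-root", "."],
        check=True,
    )

    # Analyze it with the standard Python query pack and emit SARIF results,
    # which CI systems and editors can consume.
    subprocess.run(
        ["codeql", "database", "analyze", "codeql-db",
         "codeql/python-queries",
         "--format=sarif-latest", "--output=results.sarif"],
        check=True,
    )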

So I'm a huge fan of CodeQL and I think we should at some point get someone probably from the CodeQL team on the podcast to really go through it in detail because this isn't a tool that's just Microsoft only. This is available on GitHub and you can run it on repos that you own. Yeah, this is a really great tool, which kind of is a nice segue actually into the next section, which is modernized practices. David, do you want to give us the lowdown on modernized practices?

Actually, I'm going to let Tony take that one because it's much more in his area. It struck me, you know, Sarah's question at the very start was, you know, why are you just updating the SDL now? And the answer is it's not just now. We've been updating the SDL for the longest time. It gets updated all the time. We've just been talking about how incidents, new classes of issues, variants of issues, how they affect the requirements and tooling. So we're updating all things all the time.

The requirements that we have are constantly being updated, but we also see things happening in the ecosystem at large. If anyone's taken the time to read the white paper, they'll know that we updated, well, sorry, added six requirements last year. So what was all that about? Some of it was around issues that we just needed to pull out of where they were already embedded within a requirement.

We wanted to pull them out to bring them into focus for all of Microsoft, so we could really zero in on one issue at a time. The other additions are around the engineering system. We've seen over the years, and recently, that adversaries don't just go after the code; they go after the infrastructure the code is built on. And so we spent a lot of time looking at what we could do in the environment to help improve the security of the systems that we engineer on.

So that was one of the things that we did. The SDL now is all-encompassing: it's the software that gets created, it's the operational environment that the software is released into, and it's also the engineering systems that the software is built from. But we're also constantly looking at what's happening in terms of Microsoft, where the innovation is going, and what we need to do. And of course, AI is on the tip of everyone's tongue right now. And so we're deeply embedded.

People on my team are embedded in the AI group, the artificial generative intelligence and security team inside of Microsoft. A team has been pulled together looking at all the potential threats and vulnerabilities that come along with AI, and we're planning for them together as a group and bringing that back into the SDL.

And so we've made updates both in terms of how we pen test, how we go about practically finding vulnerabilities in that new innovation, and also things such as threat modeling: how do you effectively threat model AI systems? So we continually look across everything that we do and always think about how we can improve all our requirements based on what's changing.

Okay, gents, so I know that you've been prepped for this, but something that we always ask our guests at the end of the podcast episode is for your final thoughts, something that you'd like to leave our listeners with. Well, I'll just toss in, this is David, I'll toss in my thought, which is really in some ways just a recapitulation of what we talked about earlier, which is that the nature of building software is changing internally and the threats are constantly changing around us.

And the four areas in continuous SDL that we're driving at Microsoft, continuous evaluation, being data-driven, having evidence and transparency, and continually modernizing practices, those are absolutely essential for us. And I think pretty much no company and no set of software really lives as an island anymore. So all of our customers and all of our partners around the world who are building systems have to be thinking about all these things to some degree or another.

So some of these lessons about continuous evaluation, being data-driven, and so forth, I would recommend you internalize. You can read more about the stuff that we talked about here today; the white paper that we've been chatting about is linked on the podcast web page. All right, so let's bring this episode to an end. Gentlemen, thank you so much for joining us this week.

It's just fantastic seeing SDL alive and kicking and being updated, especially in light of the new threat landscape, new development tools, new attack techniques, new everything. So it's really good to see. So again, thank you so much for joining us. And to all our listeners out there, we hope you found this episode of use. Stay safe and we'll see you next time.

Transcript source: Provided by creator in RSS feed.