Welcome to the Azure Security Podcast, where we discuss topics relating to security, privacy, reliability and compliance on the Microsoft Cloud Platform. Hey, everybody. Welcome to episode 98. This week it's myself, Michael, with Gladys; Sarah and Mark are both away today. And our guest this week is David Weston, who's here to talk to us about a whole slew of things, really, from the Secure Future Initiative to software development to various other topics that relate to that.
But before we get to our guest, let's take a little lap around the news. Gladys, why don't you kick things off? Hi, everyone. I have been really busy with many other internal employees at Microsoft working on the Secure Future Initiative, or SFI. We will talk in a little bit about SFI, but first, let's talk about a public preview of the Microsoft Defender for Cloud integration with Microsoft Copilot for Security. This integration was first announced at Microsoft Build and launched on June 10th.
Defender for Cloud, as you may know, is the first cloud-native application protection platform, or CNAPP, a solution that not only helps address posture-related questions, but also assists security admins in understanding the environment, remediating issues and mitigating risk with AI-generated actions. It includes a step-by-step guide with ready-to-execute scripts and pull requests for fixing vulnerabilities in code.
It also assists in identifying the appropriate resources, the owners and the developers for task delegation. This makes it easier for junior team members to perform tasks like experienced security administrators. It's definitely like a teaching tool. There are several free training videos, so please visit our site to see the links. Also in public preview, I want to announce a new SKU, Microsoft Azure Bastion Premium.
This service is aimed at customers that handle highly sensitive virtual machine workloads. It gives customers advanced recording, monitoring and auditing capabilities for sessions. This is something that many customers have been asking about, and we have finally released this needed capability. I have one news item. It's an interesting one, not because of the product that it represents, but because of the overarching rationale. I'm going to read this verbatim.
Log search alert rules using linked storage will require a managed identity starting July 2024, which is not far away. On the surface, this doesn't sound overly exciting, but in actual fact it's very important. We, Microsoft, and certainly Azure specifically, are moving away from using credentials for basically anything. We know it's a long journey, but the point is we need to get credentials out of the environment. That includes not just us, but also customers of Azure.
We need to move away from usernames and passwords that get embedded somewhere, or SAS tokens, those kinds of things. We need to get away from those. The reason is that's what the attackers are doing. They're getting hold of those credentials, and that helps them move laterally in the environment. If the credentials are not there, then they can't get them. The way we solve that, certainly for client authentication in Azure, is to use Entra ID.
What that means for processes is managed identities. You really have two options: system-assigned managed identities and user-assigned managed identities. I'm going to provide a link in the notes. I really think everyone who's listening to this podcast needs to dig deep and understand managed identities and why they're so important.
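For the curious, here is a minimal sketch of what that looks like in practice, not something from the episode: inside an Azure VM, a process with a managed identity asks the local Instance Metadata Service for a token, and no secret ever lives in code or config. The IMDS endpoint and Metadata header are the documented values; the helper name is made up, and reqwest is just one HTTP client choice.

```rust
// Sketch: fetch an access token for a VM's managed identity from Azure IMDS.
// Assumes the reqwest crate (with its blocking feature) and serde_json are available.
use std::error::Error;

fn get_managed_identity_token(resource: &str) -> Result<String, Box<dyn Error>> {
    // Documented IMDS token endpoint; reachable only from inside the VM.
    let url = format!(
        "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource={resource}"
    );
    let body: serde_json::Value = reqwest::blocking::Client::new()
        .get(&url)
        .header("Metadata", "true") // required by IMDS
        .send()?
        .error_for_status()?
        .json()?;
    // The bearer token comes back in the `access_token` field.
    Ok(body["access_token"].as_str().unwrap_or_default().to_string())
}

fn main() -> Result<(), Box<dyn Error>> {
    // Ask for a token scoped to Azure Resource Manager; no password or SAS token anywhere.
    let token = get_managed_identity_token("https://management.azure.com/")?;
    println!("got a token of length {}", token.len());
    Ok(())
}
```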
Ultimately, there is no credential there, and that's incredibly important from a client authentication perspective, but also because the attackers can't get the credential, because it's not there. That's the most important part. Again, this story isn't that exciting, but the overarching rationale is critically important. That's the news out of the way. Now let's turn our attention to our guest.
As I mentioned at the top, our guest this week is David Weston, who's here to talk to us about a whole myriad of things, especially developer-related stuff. David, thank you so much for joining us this week. Would you like to take a moment and introduce yourself to our listeners? Yeah, absolutely. I'm Dave Dwizzle Weston, as many of you who follow me on Twitter will know me. I'm a vice president of operating system security here at Microsoft.
I actually have two jobs, but I just get one paycheck, but my boss likes to point out it's a large paycheck, so I don't complain. I build security into the operating system. Everything from crypto to authentication, code signing, mitigations, you name it. I work on a team of builders and I lead them.
One of the things that folks may not know is that not only do we build many different flavors of Windows, from Xbox to the Windows 11 you know and love to the Azure host, but we also build Linux distributions. A good example of that is Azure Sphere, which is our IoT product that I help lead, and it has its own flavor of a Linux distribution.
We also use Linux now directly on Azure hosts as part of Azure Boost, and we actually have an RPM-based Linux distribution that anyone can download, called Azure Linux, and we do a fair bit of upstreaming work on that. Besides building the operating systems, I also lead a penetration testing team that does things like code reviews, implementing the Secure Future Initiative or SFI, SDL, red teaming, you name it.
So we kind of have this really cool culture of people building security products, but also a lot of offensive expertise, and I think that really helps to make a better product. All right, so let's start off with something you mentioned at the top, which is Dwizzle with three Z's. So where on earth does Dwizzle come from? Yeah, you know, when you red team and you find good bugs, you often entertain yourself with those bugs. So this is years and years back.
As Michael will know, many of you will know, there's a way to present a headshot or profile picture in Outlook, in the Exchange client, that actually used to be held on a SharePoint server way back when. And we actually found an arbitrary file upload vulnerability years back that would allow you, if you knew the picture name, to overwrite anyone's photo.
And so of course, I immediately, rather than reporting that bug, used it to draw eye patches on my red team colleagues back in those days. And so of course, we got it fixed. But those eye patches stayed in the Outlook client for a while. And eventually, some folks found a way to change the presentation name in Exchange and they paid me back by changing my name to Dwizzle. And I actually found it was kind of cool. Pretty soon, execs were addressing me by it.
And then I was branded and couldn't get rid of it. So that's where it stands. And you know, along the way, there's been some other hijinks. Like I've done some reports in some pretty prestigious journals where, on a dare or a lost bet, you know, I was attributed as Dwizzle of the M$, things like that. So yeah, the history goes back a long ways. And most of the red team folks here at Microsoft have pretty similar stories.
M$, man, I haven't seen that as an abbreviation for Microsoft for a long time. That's funny. That's funny. The M$. That's right. I'm dating myself for sure. Microsoft has been talking about the Secure Future Initiative, or SFI, and many of us have been working intensively on it, as I mentioned earlier. Can you talk a little bit about it and your role in it?
Yeah, the Secure Future Initiative, the way I generally look at it is, you know, a cultural change and a prioritization across Microsoft. Security becomes job one. You know, I definitely think initiatives like Trustworthy Computing and SDL have certainly done that in the past. I look at SFI as even more all encompassing. You know, SDL and things like that talk specifically about how we're going to be secure by construction.
But SFI includes things like operational security, mindset, training, and I would say a secure by default ethos. And so I think that I play sort of two major roles in that. The first is as someone building the operating system, I think SFI really helps and supports what I think the security team's objective always is, which is to prioritize security.
You know, there's a lot of talk of that online, especially in the security community, which is sometimes a little cavalier about how difficult that is. At the end of the day, when you're building an operating system product, you know, people are not going to buy that if their software can't run or if it's too slow.
So a lot of what we do is balancing secure by default, but also figuring out, from an engineering perspective, how do we also balance compatibility, which is sometimes, you know, the enemy of security, and also maintain performance. And so one of the big steps my team has taken is turning on things like virtualization-based security by default, BitLocker by default, and code signing and code integrity by default with features like Smart App Control.
And I think, Michael, you'll like this. We're really close. Hopefully in the near future, you'll hear more from me on this, but we're really close to removing admin by default and having just in time admin become the default in Windows. So I think that's a good, you know, example of where SFI is being put into action. We're empowered to do things that maybe previously we would have optimized for performance or compatibility.
But outside of that, my teams do a lot to secure operating system-related services as well as security features. So of course, implementing the SDL you know and love, which is threat modeling, static analysis, fuzzing, security bug bars and response, but also taking that up a notch, you know, redesigning features to take advantage of modern security boundaries.
So in many cases, we're getting very involved at the design level to use things like the virtualization based security boundary or hardware. We also get involved pretty deeply in the operational security. So things like testing app specific defenses and making sure we're building bespoke detections through red teaming.
So across the board from design to implementation, you know, my team gets pretty involved there, which means I spend a lot of time helping guide them towards a strategy that can execute well on that. So talking about compatibility, I noticed just recently there was an announcement made by the Windows Server 2025 team that NTLM was finally going bye-bye. But also mailslots.
I think, if I remember correctly, didn't mailslots first appear in LAN Manager on OS/2? I think it's all the way back to OS/2. Yes, OS/2. Yeah. So there's a whole bunch of other stuff that's being deprecated. I mean, you know, I imagine the code behind that is probably old and crusty and, you know, it's just time to deprecate it. Probably very old C, barely C++ code.
And I'm sure it's been reviewed and fuzz tested over the years, but by the same token, it's probably just old and crusty, right? So it's time to, I don't even know, does any feature actually even use mailslots? I don't know if anything in the box uses it, but certainly, you know, you wouldn't be surprised to hear this, but there are many, many crusty old applications that people rely on in Windows. And so there are certainly third-party applications that continue to do that.
You know, but I think attack surface removal is a big sort of implied part of secure by design. For example, you know, things like NTLM, we know at the end of the day there are limitations to NTLM as a protocol and as a standard in terms of security. And so when you talk about security by default in that particular area, I think the best thing we can do is, you know, disable it. I think that goes for a lot of attack surface.
There are many, many, I would say perennial features where we've had security challenges around them that we're really looking at redesigning. You know, I'd throw Win32k out there, NTLM, and there are many other surfaces that, you know, we continue to look at to try to either remove or fundamentally redesign.
And that's actually the exciting part of Windows right now, I think, is there's a ton of energy and a ton of support at the leadership level for really revolutionizing the guarantees that we can make. So that's always fun, right? When you think you kind of know what the limit is, I think we're in the midst of redefining it, which is an exciting time to work on an operating system at Microsoft. So, you mentioned redefining, perhaps even redesigning, and even mentioned revolutionary in there.
Rust. Tell me about, I mean, I'm a huge Rust fan. And that's coming from someone who's done C and C++ code for a long, long, long time, but I enjoy doing Rust. So tell me what's going on at Microsoft around Rust. Sure. So the first thing I'll say is I think Microsoft actually has a long history with, I'll call it, type and memory safety. Michael, you'll remember this, but big parts of Vista were actually originally designed in C# and .NET.
And one of the reasons was to achieve, you know, type and memory safety. We actually had a pretty long-running experimental operating system called Midori that used a version of .NET we called System C# that had pretty strong security guarantees. And even in products like Azure Sphere, we had a subset of C called Safer C that required some things around spatial safety, et cetera. So that's been a long-running goal.
Traditionally the problem is, well, you know, languages that offer things like temporal safety and spatial safety, and things like garbage collection that come along with that, are just usually incongruent with the performance and efficiency needs of a low-level operating system. You know, having a garbage collector, for example, in your bootloader, things like that, is generally incongruent.
So I think what Rust, and to an extent Golang and other languages like Swift, have really brought about is a revolution in systems-level languages. So languages that are appropriate from an efficiency and performance standpoint to use in operating system construction while also having memory safety. And I think that people understand the elegance of that approach and the benefits it gives you of being secure by construction, especially deterministically.
And so I think it's caught the eye of folks. You know, Mozilla originally designed this language as a way to secure their browser, and that makes a lot of sense. But I think the community as a whole has been waiting for something like this. So we've seen a lot of folks gravitating towards it, including at Microsoft. And really, we just haven't had this opportunity for a very long time to get the performance we need out of a memory safe language.
The thing I'll say about Rust in particular, why it's moving so fast, is that while the security folks, call us wonks maybe, have wanted this for a long time, what we're actually seeing, what I'm observing, is kind of a groundswell of support from your average individual contributor developer. They love Rust. They want to write code that's secure by construction. They're very interested in the language.
So a lot of the changes are less Microsoft strategy, although I'm happy to talk about why it's now Microsoft strategy, and more the passionate, you know, boots-on-the-ground devs just saying, I want to do this, and going and building really cool stuff with it. That's how we got this far. And so I want to make sure that those intrepid folks are really the ones that get the credit, because I think that is the truth.
So while you're on a roll, I mean, why don't you itemize a couple of things that are currently using Rust and talk about some of the strategy. I think it's important. I mean, I think Rust is an important language. It's definitely not going away. It's evolving rapidly. You could say that's good and bad. And then let's just talk about, you know, what's the role of C++ here? In fact, even C# as well, right?
Because they're not languages that are going away. But I'll leave that one for later. Yeah, so I think we got kind of our feet wet with some really important projects. So the first, I would say, is the two things that have maybe been the most attacked from a pure operating system component standpoint in terms of memory safety.
And for those of you who don't know, memory safety issues are about 70 to 75 percent of what we patch via MSRC issues and, you know, the updates you get through Windows Update. So just to set expectations, about three-quarters of the bugs Microsoft fixes are in that memory safety category. And maybe the two most attacked components have been our font parsing.
So the things that would parse, you know, TTF or OTF, especially on remote attack surfaces like the browser or Office clients, as well as Win32k. Win32k is our internal graphics component that has a long, long legacy. I think it was originally designed in the late 80s. And it's become frequently attacked because it offers a convenient escalation of privilege. So if you want to get out of a browser sandbox in Windows, you know, Win32k is often a common choice.
And so we started Rust right in the place where it would offer the most security value, which is taking the sort of modern WinApp SDK font parser that we call DirectWrite and converting that into Rust. And that gave us an idea. And so I did a presentation on this about a year ago, but it took roughly two to three months of a couple of developers' time. And one of the interesting things that came out of it is that performance actually got better.
Right. So I mentioned at the top of my spiel that the traditional problem with memory safety is that it was thought of as inferior in terms of efficiency and performance to C and C++. Well, our first experiment, although it's a small example, ultimately proved that you can write high-performance Rust code. And so our second experiment, which was more about learning about interop and less about replacing a whole component or proving performance, was some GDI surfaces in Win32k.
And there we were trying to learn about what is the extent of interop. Can we slice out individual components of Win32k and make sure that those can work well? This is especially important because, you know, Microsoft Windows is compiled with the Visual C++ compiler, or C compiler, while Rust's backend is actually LLVM. And so just making sure that we have, you know, ABIs that are going to work together, et cetera, is really important.
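As an aside, and purely as a sketch rather than how the DirectWrite or Win32k work was actually done: the basic interop mechanism for carving a piece out of a C or C++ component is to rebuild it in Rust behind a stable C ABI that MSVC-compiled callers can link against. The function below is hypothetical, not a real Windows API.

```rust
// Sketch: exposing a Rust routine over a C ABI so existing C/C++ code can call it.

/// Count the characters in a UTF-8 buffer handed over from C/C++.
/// Returns -1 if the pointer is null or the bytes are not valid UTF-8.
#[no_mangle]
pub extern "C" fn count_chars(buf: *const u8, len: usize) -> i64 {
    if buf.is_null() {
        return -1;
    }
    // SAFETY: the caller promises `buf` points to `len` readable bytes.
    let bytes = unsafe { std::slice::from_raw_parts(buf, len) };
    match std::str::from_utf8(bytes) {
        Ok(s) => s.chars().count() as i64,
        Err(_) => -1,
    }
}
```

On the C++ side that is just an ordinary extern "C" declaration; the harder part, as Dave describes, is making sure debug info, code generation and calling conventions line up across the MSVC and LLVM toolchains.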
And again, that was successful and is now shipped in Windows. So those were really the two projects that, I don't know that they've gotten us to the North Star in terms of memory safety, but got us to learn a lot. And so based on that, as part of SFI, we have made a commitment that, moving forward, parts of our trusted computing base, the most security-critical new code, will be written in Rust. And to achieve that goal, we have to do a few things.
The first is we need a stable toolchain with compilers and other things that can work at that incredible Windows scale. So you've got to be able to debug with WinDbg. You have to have PDB files. You have to make sure code generation is stable. And so we announced that we were spending, as a company, around $10 million on Rust-based tools, with Azure being the first target. And I'll talk a little bit about where we're using Rust in Azure to start.
The second thing we did is we have donated, or are in the process of donating, a million dollars to the Rust Foundation, with really a goal of stabilizing those tools. Ideally, we'd like something equivalent to the Windows, or sorry, the Linux long-term servicing branch. We want sort of an LTS version of Rust. You can imagine calling rustup in the middle of the Windows build is probably not ideal.
So we want something that has a lifetime of support of several years, and we can base sort of future Windows off of that. So really the major thing that we've got to take care of right now, which is sort of in front of us before writing large-scale Rust in both Azure and Windows, is getting those tools and that long-term servicing branch up. But we're hard at work at that. And so I expect a lot more announcements on that.
And the other thing I wanted to touch on is Mark Russinovich, our CTO in Azure, announced a commitment to moving towards Rust, again contingent on this toolchain. But one of the areas where we're spending a lot of time writing Rust is in a product called Azure Boost. Azure Boost is available now, but it's also sort of our future architecture for Azure.
And that's where we are offloading more of the performance-critical aspects of Azure hosts onto specialized cards like SmartNICs and our FPGAs for storage. The goal there is twofold. The first is really to reduce jitter in a multi-tenant shared environment like an Azure host. If your neighbor in another VM is taking a lot of IOPS, potentially that can impact your workload. So Boost eliminates that by doing more hardware offload, allowing sort of deterministic performance closer to what people would have on bare metal.
But it also eliminates attack surface. So a lot of the network surface and a lot of the storage surface moves away from being implemented in software to being implemented in hardware, where you can use more of the hardware verification technologies, more deterministic secure-by-construction approaches that you would get in integrated circuit designs, et cetera. And so there's a huge benefit to that.
Part of reducing jitter has also been moving more things, more agents that do things like network routing, et cetera, which are ultimately hypervisor attack surface or cloud-critical attack surface, off of sort of Dom0 and Hyper-V, the traditional host attack surface, and onto what we call the Azure Boost control plane, which is a small embedded operating system running Linux. And more of that agentry is being re-implemented in Rust as a standard.
So we are really betting on Rust as our future implementation path. In fact, if this blog isn't out already, I can break some news here. We have a future VMM that we're using for Azure that will manage sort of Hyper-V, and that is being re-implemented in Rust and called Underhill. So if that blog isn't out yet, you'll see it very soon. So I can say without a doubt that we're betting heavily on Rust in Azure. It's a big part of our Secure Future Initiative strategy.
And the last thing I'll say is, you know, sort of as a side project, I work with CISA, which is the part of DHS that looks after cybersecurity here in the United States. And I work on the technical advisory committee led by Jeff Moss, who founded DEF CON and Black Hat. And we wrote a paper recently on memory safety that's helped CISA kind of codify their secure by design guidance.
And so we're seeing a lot of momentum both from them as well as the White House's cybersecurity strategy department around memory safety. So I think Microsoft is really in line with where the industry momentum is going. And it's been really exciting to see how fast we're moving with Rust and other memory safety technologies. Yeah, actually, Google has an amazing free public training on Rust. I don't have the link, but I'm sure we can find it and get it into the show notes.
I've run through it and it's pretty awesome. There are also a number of free books on Rust. I think the Rust documentation right off rust-lang.org is actually sort of a free ebook that you can print to PDF. And between the course from Google and that book, I think you can get a very long way towards implementing your first Rust programs at high quality. And both of those are completely free. Yeah, I want to back that up.
I think the secret, whenever I'm learning a new programming language, one of the first things I do is set myself a project to build. Absolutely. 99 times out of 100, that's a web server. So I build a web server. That's a good idea. So you're doing network IO, right? And then you're fetching files, and then you're caching, and then you're doing multi-threading, and then, for grins and giggles, you add TLS support on top of it.
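For anyone who wants to try the same exercise, here is a toy version of that starter project, a sketch using only the Rust standard library, before you layer on file serving, caching, threads or TLS.

```rust
// Sketch: a toy HTTP server on the Rust standard library alone.
// It answers every request with a fixed body; no routing, caching or TLS yet.
use std::io::{Read, Write};
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    println!("listening on http://127.0.0.1:8080");

    for stream in listener.incoming() {
        let mut stream = stream?;

        // Read whatever fits in one buffer; a real server would parse the request properly.
        let mut buf = [0u8; 1024];
        let _ = stream.read(&mut buf)?;

        let body = "hello from a toy Rust web server\n";
        let response = format!(
            "HTTP/1.1 200 OK\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{}",
            body.len(),
            body
        );
        stream.write_all(response.as_bytes())?;
    }
    Ok(())
}
```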
And don't get me wrong, I just do some of the simple verbs, GET, PUT, POST, and that's about it. But yeah, Rust is not a difficult language to learn. The only problem, I hate to say problem, but the one thing that anyone who's learning Rust will end up pulling their hair out over is the borrow checker. Until you get used to the borrow checker, you're going to be absolutely just punching your screen with your laptop.
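If you have not hit it yet, here is the classic first collision with the borrow checker, a small sketch rather than anything from the episode: you cannot keep a reference into a Vec alive while also mutating that Vec.

```rust
fn main() {
    let mut names = vec![String::from("gladys"), String::from("michael")];

    // The borrow checker rejects these lines if uncommented: `first` borrows
    // `names` immutably, so the mutable borrow needed by `push` is not allowed
    // while `first` is still used afterwards.
    // let first = &names[0];
    // names.push(String::from("sarah"));
    // println!("{first}");

    // One fix: take your own copy (or finish with the borrow) before mutating.
    let first = names[0].clone();
    names.push(String::from("sarah"));
    println!("{first} / {} names total", names.len());
}
```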
I found that, but once I understood the borrow checker and how it worked and why it was so useful and frankly unique to Rust, as far as I know, it's actually very- There's actually a borrow checker implemented for C++ now that's pretty interesting, in a C++ compiler called Circle. Is that right? Is it part of a standard or is it just experimental? I believe it's not a standard. There's a fellow called Sean Baxter who's got a really interesting Twitter.
If you're interested in memory safety, I recommend following him. And he's got a very full-featured compiler for C++ called Circle that has, I don't want to put words in his mouth, but most of the memory safety features in Rust are actually supported with C++ semantics, or the full language. That's been a pretty interesting thing to watch come together. Interesting. While we're on that topic then, so C++.
You and I spoke about this some time ago and I was trying to say, hey, but if you're doing C++, what about modern C++, blah, blah, blah, blah, blah. You had a strong objection to that argument. Which is fine, which is completely fine. So let's go through that. Why shouldn't people seriously consider modern C++? The whole goal of modern C++ is that things like pointers and array offsets and so on are handled for you as opposed to you doing it by yourself. What's your opinion there?
I think when we say memory safety, we're talking about completeness. So I don't have any objection at all to people refactoring and retrofitting existing code bases for memory safety when they cannot, for reasons of compatibility, resource constraints or others, convert to Rust. I think we have to be very careful conflating tools available to expert-level C++ developers with something like Rust that is just secure out of the box.
The big thing that we've learned after 30 years of chasing memory safety is just because there is a way to do things safely doesn't mean people will. And I think the major, major, major advantage of Rust, Golang, Swift, and other languages is it's difficult to get wrong. And I would say that C++ remains inverted, which is it's difficult to get right. And as long as that's the case, I don't think we should recommend from a pure security perspective, people writing net new C++.
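A small sketch of that "difficult to get wrong" point, not from the episode: the classic temporal bug, handing out a reference to an object that is about to die, compiles fine in C++ and crashes later, while rustc refuses to compile it at all.

```rust
// The commented-out function is rejected by the compiler: `s` is destroyed at the
// end of the function, so a reference to it can never be returned to the caller.
//
// fn dangle() -> &String {
//     let s = String::from("freed when this function returns");
//     &s
// }

// The safe version transfers ownership instead of handing out a dangling reference.
fn owned() -> String {
    String::from("ownership moves to the caller, so nothing dangles")
}

fn main() {
    println!("{}", owned());
}
```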
That's not to say I would discourage someone from improving security. For example, Herb Sutter and other folks who are awesome in the C++ community for driving safety have invented things like gsl::span, which does a lot for spatial safety and provides a container format that is spatially safe. But then you have things like temporal issues, where you need smart pointers, garbage collectors, et cetera. And still, I don't know if there's a perfect standard.
Google has things like MiraclePtr, et cetera. But they aren't a standard part of the language. They aren't standard out of the box. And so until that's the case, I don't think it's quite comparable as a North Star to Rust. And I think that's an important point. The word that you mentioned there is North Star. And yeah, look, I've been developing in Windows, sorry, in C++ forever. And I love the language. I actually really do. And I actually really enjoy using modern C++.
But to be honest with you, when it comes to writing new stuff, my preference is Rust. The tool chain is good. The compiler actually emits usable error messages that actually help you. So yeah, I'm a big fan, even though I brought up on C and C++ for a long, long time. And I think that's especially important in team environments, Michael.
We know that you're one of the ranking experts in this, but I'm sure you could inform us: how many projects at Microsoft have you worked on where you're going to be the forever maintainer, or that aren't worked on by a team of folks with a variety of experience?
And so as long as you have to be an expert and know how to use these language tools to be safe, I think it's really hard to reach our goal, which is ultimately memory safety and complete eradication of certain classes of memory safety issues. So you brought up before that Mark Russinovich said that new projects should be written in memory safe languages. I assume that also means C#, right? Where it makes sense. Yeah, totally.
I think there's a lot of focus on Rust because Rust has been sort of replacing C and C++ in cases as a systems language. But let's be clear, on platforms like Android, which has a billion users, I mean, people have been writing Java. Most web-based services don't have a ton of memory safety issues. They suffer from logic issues, because they're built in things like Python, C#, Golang, et cetera. And, you know, even JavaScript is memory safe.
And so I think if you were to take, I forget who does it, maybe Stack Overflow, the top languages every year, the most popular languages, I think probably six or seven out of 10 are probably memory safe. So what Rust is really filling in is that systems-level, high-performance language slot while providing memory safety. All right. Hey, Gladys, is there anything else you want to ask? I think we're kind of done, if not.
There is one thing I wanted to add that I think we haven't touched on that would be pretty cool. You know, we talked about how to retrofit C and C++ with, you know, safer pointers or, you know, spatially safe containers like gsl::span. But there's actually a really exciting set of technologies that I think can make this quite a bit easier at the operating system level. And that's memory tagging.
So ARM has a standard called MTE that's actually widely available on Pixel 8 phones, and Intel, AMD and others, I think, are looking into these technologies. But in a nutshell, what these memory tagging technologies do is allow an allocator at the system level, so effectively malloc, to provide to the CPU the ranges of memory when memory is allocated to a pointer.
And then what is really exciting is, depending on a variety of schemes, from encoding the pointer to having, you know, bitmaps, the CPU can actually go and check every memory access and make sure it's within the expected boundaries of the allocation. And that is almost like sprinkling, you know, software checks all throughout the operating system, but you don't have to remember to do it. And it's done in a very high-performance manner.
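To make that mechanism concrete, here is a toy software model of the idea, purely illustrative and not real MTE intrinsics or Windows code: the allocator stamps a tag on each allocation, pointers carry the same tag, and every access checks that the tags still match and the offset is in bounds.

```rust
// Toy model of memory tagging: the allocator records a tag per allocation, and the
// "hardware" check on every load compares the pointer's tag and bounds against it.

struct Allocation { base: usize, len: usize, tag: u8 }

struct TaggedHeap { memory: Vec<u8>, allocations: Vec<Allocation>, next_tag: u8 }

#[derive(Clone, Copy)]
struct TaggedPtr { alloc: usize, tag: u8 }

impl TaggedHeap {
    fn new(size: usize) -> Self {
        Self { memory: vec![0; size], allocations: Vec::new(), next_tag: 1 }
    }

    fn alloc(&mut self, base: usize, len: usize) -> TaggedPtr {
        let tag = self.next_tag;
        self.next_tag = self.next_tag.wrapping_add(1).max(1);
        self.allocations.push(Allocation { base, len, tag });
        TaggedPtr { alloc: self.allocations.len() - 1, tag }
    }

    // Roughly what the CPU would do on every load: compare tags, then check bounds.
    fn load(&self, p: TaggedPtr, offset: usize) -> Result<u8, &'static str> {
        let a = &self.allocations[p.alloc];
        if p.tag != a.tag { return Err("tag mismatch: stale or forged pointer"); }
        if offset >= a.len { return Err("out of bounds for this allocation"); }
        Ok(self.memory[a.base + offset])
    }
}

fn main() {
    let mut heap = TaggedHeap::new(64);
    let p = heap.alloc(0, 16);
    println!("{:?}", heap.load(p, 8));  // Ok(0)
    println!("{:?}", heap.load(p, 32)); // Err(...): caught instead of silently corrupting memory
}
```

The real schemes encode the tag in unused pointer bits and in tag storage the hardware maintains, so the check costs almost nothing, which is the point being made here.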
So we've actually published several papers on that at Microsoft; again, I can link those in the show notes. But there's also been, you know, Google Project Zero and many folks have looked at this. But what you find, depending on the scheme, is that we get pretty close to the spatial safety of retrofitting C and C++ code without doing anything other than enlightening the allocator.
And so I'm a big fan of this, especially as we're going through this transition to the Rust North Star, for not only retrofitting your own code, but also code bases where we, you know, lost the source code or just, you know, don't have the experts to retrofit them. We can actually at least provide spatial safety with these architectures. And going maybe even a step further, there is an exciting architecture based on ARM.
And in fact, ARM shipped some experimental devices called Morello, which include what's called CHERI, a capability hardware architecture. And CHERI goes a step further than memory tagging in that every pointer actually has encoded into it things like, you know, what type of memory it is, whether it's code or data, whether it's a pointer, et cetera. And those are capabilities that can be checked on every memory access.
And so CHERI gives you a lot of primitives for building a memory safe operating system, regardless of the language you build in. And so Microsoft has even worked directly with the RISC-V community to build an ISA, or instruction set, that provides CHERI in RISC-V. So ARM's got Morello and has been, I think, promoting this and looking for a big technology partner to take on CHERI, which would provide more comprehensive memory safety for existing operating systems.
And then I know the RISC-V and CHERI communities, lowRISC is actually the name of the organization, have been providing a bunch of, you know, basic developer boards and chips that people can go and hack on. So if you're interested in memory safety at all, beyond Rust, I think looking at these architectures as another tool in the toolbox to get to that North Star is really, really important. And I expect those to be hugely important as we transition towards a Rust-based, or memory safe, future.
All right. Is there anything else you want to add, Gladys? Because if not, I'll just move to the, you know, one final thought. Okay. All right. All right. So David, you know about this, right? We ask for a final thought, just a simple single thought. You're okay with that? Sure. All right. So one thing we always ask our guests is, if you had one final thought to leave our listeners with, what would it be?
Yeah, I think thinking about how software is now running the base infrastructure for the world, everything from healthcare to food to industry, is really important. The decisions we make as developers will be around much longer than probably any of us are alive and have larger consequences than we think. So thinking safety first, and thinking like a car engineer does, in the software space is something I'd really like to encourage everyone to do. All right.
Well, thank you so much for joining us this week. I'm always excited to nerd out about programming languages a little bit. Absolutely. It's always my thing. But yeah, again, thank you so much for joining us. I know you're ridiculously busy, especially with everything that's going on with SFI. You can't keep me away from a memory safety conversation. Yeah, I know. I know. Yeah, we didn't even talk about analysis stuff, but we can do that another day. We could go even further.
Yeah, let's do it again. All right. And yeah, so thanks again for joining us and to all our listeners out there. We hope you found this episode useful, nerdy, but hopefully useful. If you have any questions, you can just post it on our Twitter account and we'll happily get back to you. So stay safe and we'll see you next time. Thanks for listening to the Azure Security Podcast. You can find show notes and other resources at our website, azsecuritypodcast.net.
If you have any questions, please find us on Twitter at AzureSetPod. Background music is from ccmixter.com and licensed under the Creative Commons license.