Welcome to the Azure Security Podcast, where we discuss topics relating to security, privacy, reliability, and compliance on the Microsoft Cloud Platform. Hi, everybody. Welcome to Episode 42. This week, it is myself and Mark. We also have a guest, Dave Lubash, who's here to talk to us about Azure Monitor, both Gladys and Sarah are on vacation. But before we get to our guests, why don't we take a lap around the news? Mark, why don't you kick things off?
Yeah, the big thing that I wanted to make sure folks are aware of this week actually happened just a day or two ago: the release of the Zero Trust Commandments. For those familiar with a little bit of the history, this is essentially the updated version, or the replacement, for the original Jericho Forum Commandments, which set out some really clear, specific truths around security and how security needed to be modernized.
These are heavily linked to and based on those, but it's a modern version for the age of Cloud and how things have changed in the past decade or so. I'm really proud of this work; we participated very actively in The Open Group's effort around this, and it just got published by The Open Group.
Just to cover a few really quickly: Validate Trust Explicitly is one of the first ones, which is the exact opposite of assuming trust; you actually validate it and build on explicitly validated trust. Another is around enabling modern work, because effectively IT, the business units, and the users all have a vote in today's world, so how do you incorporate that? There are some interesting elements that we got to put in there.
One of the things we realized is how important it is to have the accountability land in the right place. If you essentially hold your CISO accountable for decisions they didn't make, you're not going to get the best decisions by the people that are making them or by the CISO. You want to make sure that you're keeping accountability for risk where all the other risk is, which is oftentimes on the asset owners in the business. That's one of the key pieces there.
Pervasive security, this was directly tied to the original Jericho Forum. Securing assets by value: don't waste your time on the stuff that doesn't matter, the proverbial cafeteria menu. Spend it on the things that actually make a difference to the organization, a very asset-centric, data-centric type of approach. Simple and sustainable: otherwise, we found, you get lost in complexity and end up not having effective security.
The attackers have time to figure out the weaknesses there, so make sure your people can understand and manage it. Utilize least privilege, no surprise there. Improve continuously: this is a critical one we found, as everything is changing: business models, Cloud, threat actors, security capabilities. You really have to have a program built around continuous learning and continuous improvement. Then make informed decisions: you've got to base them on data.
That means you have to gather the data, you have to use the data, and you have to constantly ask: do we need to bring in more data, or are we fully using the data we have? But yeah, I'm really proud of how these came out. I highly encourage people to check them out; we've got the link in the show notes. Yeah, a few things piqued my interest over the last few weeks. The first one is in a product that's near and dear to my heart, and that's Azure SQL DB.
There's now the ability to enable Azure Active Directory-only authentication. Historically, SQL Server has supported three authentication mechanisms, starting with the original SQL Server authentication, where all the authentication and the identities were managed by SQL Server. That's been around forever; literally since I first started working with SQL Server back in, dare I say, the OS/2 and LAN Manager days, and it's essentially not changed that much.
Then with Windows, we added Windows authentication, which included Kerberos. Now we have Azure Active Directory as an option. That's been around for a while, but now you can actually require that Azure SQL DB only use Azure Active Directory, which is really great to see, because now you've got a Cloud-native solution with Cloud-native authentication, and backing that, an authorization scheme as well.
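For listeners who want to see what that looks like in practice, here's a minimal sketch, assuming Python with the pyodbc and azure-identity packages, of connecting to Azure SQL DB with an Azure AD token instead of a SQL credential. The server and database names are placeholders.

```python
# Minimal sketch: connecting to Azure SQL DB with an Azure AD access token
# instead of SQL authentication. Server/database names are placeholders.
import struct

import pyodbc
from azure.identity import DefaultAzureCredential

# Acquire an Azure AD token scoped to Azure SQL Database.
credential = DefaultAzureCredential()
token = credential.get_token("https://database.windows.net/.default").token

# pyodbc expects the token as a length-prefixed UTF-16-LE byte string.
token_bytes = token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

SQL_COPT_SS_ACCESS_TOKEN = 1256  # ODBC pre-connect attribute for access tokens

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;Database=mydb",
    attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct},
)
print(conn.execute("SELECT SUSER_SNAME()").fetchone())
```

With AAD-only authentication enabled on the server, a connection attempt using a SQL login simply fails, which is the point.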
Also available, and I don't pretend to be an expert here, I wish Sarah was here: AKS support for the Secrets Store CSI driver. We're announcing the general availability in AKS, the Azure Kubernetes Service, of support for the Secrets Store Container Storage Interface (CSI) driver. Storing secrets has always been a huge Achilles heel for applications, so it's great to see a standardized way that's now generally available and supported on Azure Kubernetes Service.
There's another one that piqued my interest as well, but since we have Dave here, when we get around to him, I think he's probably better placed to talk about it than me. We now have, in general availability, Log Analytics Workspace Insights in Azure Monitor. The cynic in me thinks that sounds like Azure Monitor for Azure Monitor, but I don't know. So when we get around to Dave, I'll ask him to shed some light on that particular item.
Also, Mark Russinovich published a blog post on some of the key foundations for Azure confidential computing. We talked about these a little bit last time, with some of the announcements around specific VM types that support things like SGX, the Software Guard Extensions, which are one of the linchpins of confidential computing. Confidential computing is ultimately encryption of data while it's in use.
I mean, there's more to it than that, but that's one of the most important parts. So I would definitely recommend that anyone who's interested in that topic go and read the blog post. The next item is in Azure Logic Apps. Azure Logic Apps now supports using a managed identity when connecting to services that support Azure Active Directory.
So for example, if you've got, say, Azure SQL DB or Blob Storage, you can now use that managed identity in a Logic App both to authenticate and to provide the authorization mechanism to that particular resource. I've touched on this many times, but I'm going to do it again because I think it's critically important. We're seeing three massive areas of improvement across Azure. Different products are at different stages, but we're well along the way.
The first is customer-managed key support for encryption of data at rest, depending on the service; the vast majority of services support that now. Another one, for PaaS services, is the use of private endpoints. The third is making sure that services that connect as a client to another service have support for managed identities. The nice thing about managed identities is that the credential is not managed by you, it's managed by Azure.
So when the process starts, the credential is automatically used by Azure, and you don't have to worry about storing and protecting credentials. I'm a huge fan of managed identities. They remove one of the biggest stumbling blocks: okay, so where do you store the credential, and how do you protect it? That's a non-issue when it comes to managed identities.
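To make that concrete, here's a rough sketch, assuming Python and the azure-identity and azure-storage-blob packages, of a client using a managed identity instead of a stored credential; the account URL is a placeholder.

```python
# Rough sketch: a client app using a managed identity instead of a stored
# credential. Azure issues and rotates the credential behind the scenes;
# nothing secret lives in code or config. The account URL is a placeholder.
from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobServiceClient

credential = ManagedIdentityCredential()  # no key, no password, no cert
blob_service = BlobServiceClient(
    account_url="https://mystorageaccount.blob.core.windows.net",
    credential=credential,
)

# The identity still needs an RBAC role (for example, Storage Blob Data
# Reader) granted on the storage account for this call to succeed.
for container in blob_service.list_containers():
    print(container.name)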
The other item I want to bring up, I don't even know how to broach this one. Earlier in November, there was an issue where some Windows 11 features failed to run correctly; they wouldn't work, they wouldn't load. The reason was that a certificate had expired. As you're probably aware, just about everything in Windows is digitally signed, so when we start a process up, we check the signature and do all the usual X.509 stuff. Well, if the certificate has expired, what do you do?
Well, the correct thing to do is to fail the operation altogether, and that's exactly what happened here. There's an ongoing issue with all this X.509 this and X.509 that, and it is the management of certificates. The real lesson that came out of this is that, as an industry as a whole, this is still an area we've really got to focus on. If you're managing certificates, then you need to concern yourself with the correct life cycle of those certificates as well.
Because... In case you need a reminder that it's people, process, and technology altogether. Yeah, I was wondering when you would pipe up about that. You're absolutely right. At the end of the day, X.509 certificates are fantastic. I love the technology. I might be the only person in the world who does, but I really do. But man, when certificates aren't managed properly, things like this happen, and it's not uncommon. All right, so that brings the news to an end.
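On that certificate life cycle point: a tiny sketch of the kind of expiry check a certificate-management process should run continuously, assuming Python and the cryptography package; the file name is a placeholder.

```python
# Minimal sketch: flag certificates that are expired or close to expiry, the
# kind of check a certificate life cycle process should run continuously.
from datetime import datetime

from cryptography import x509

def days_until_expiry(pem_data: bytes) -> int:
    """Return the number of days until the certificate's notAfter date."""
    cert = x509.load_pem_x509_certificate(pem_data)
    return (cert.not_valid_after - datetime.utcnow()).days

with open("mycert.pem", "rb") as f:
    remaining = days_until_expiry(f.read())

if remaining < 0:
    print("certificate has EXPIRED")
elif remaining < 30:
    print(f"certificate expires in {remaining} days, renew now")
```

Running something like this on a schedule, with alerting well ahead of the notAfter date, is exactly the life cycle discipline the Windows 11 incident illustrates.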
With that, let's turn our attention to our guest. This week, we have Dave Lubash, who's here to talk to us about Azure Monitor. Dave, why don't you spend a moment and introduce yourself to our listeners? Hi, I'm David Lubash, and I have worked at Microsoft for 23 years. Most of that time has been in the compliance and security space for shipping products, and the last seven years in Azure, starting with Application Insights.
You mentioned Log Analytics Workspace Insights and how that's releasing. That is very meta. You were right, Michael, when you said it's insights about your logs. What we've done is gone and made curated experiences around many different resources in Azure, many of the ones you were just talking about: Azure SQL DB, containers, Key Vault.
We work with the owning teams and build these rich curated experiences, across both the data they can give us and the views you can get on top of that. This most recent one with Log Analytics is actually insights into your own logging experience across all of Azure Monitor. Yeah, I'm glad we had you all lined up to discuss that. Again, I was like, man, this looks like a layer on top of Azure Monitor to monitor Azure Monitor or something.
I don't know. Anyway, I'm glad I wasn't too completely off the mark. The topic, I mean, Azure Monitor, right? It's always been a point of confusion, and I'm going to be honest with you: with customers, you'll see people say, oh, you know, Log Analytics, and then you'll hear people say, well, Azure Monitor. So what have we got going on here? Is Azure Monitor something that sits on top of Log Analytics? What is it?
So to a really large extent, we created Azure Monitor, maybe about three or four years ago, to try to help reduce that confusion. First off, we had many ways to monitor, right? We had the old OMS, System Center; Log Analytics came out of that. We had Application Insights, which is really your curated experience on top of monitoring your applications. We had all of these external-facing, different products.
First, we wanted to bring them all together under the Azure Monitor brand, but we also brought the teams together internally. That first part meant we brought these teams together so that we could look and go, wow, we have three different experiences built on top of logs; let's make one better experience on top of logging. And similarly around alerting, and the UX on top of logs and on top of metrics.
We took these teams across multiple geographies, multiple teams, multiple areas, and brought them together in an attempt to make one better experience and also to reduce some of that confusion. So Azure Monitor is the collection of capabilities that enables us to have a better, unified experience on top of logs, metrics, and alerting going forward. I want to make sure I completely understand this, because I don't want to get it wrong.
I mean, I use Azure Monitor every single day, but I don't pretend to know all the moving parts or the machinations underneath it. It's like magic; stuff just happens. But at the end of the day, is it fair to say that Azure Monitor is a layer, a way of viewing the contents of Log Analytics? Different products can feed into Log Analytics, and then Azure Monitor can be used to extract intelligence out of those logs, as well as perhaps even raise alerts.
That might be fair from one point of view. The other view is that when we created Azure Monitor and started combining all of these capabilities under it, we knew we couldn't just go delete the experience that people think of as Log Analytics, and the experience people think of as Application Insights, because these already had many customers. So we've built on top of Log Analytics as part of Azure Monitor.
So instead of thinking that Azure Monitor allows you to view your logs, you should think of Log Analytics as just the logging capability that is now built into Azure Monitor. It combines the logging capability that was already built into Application Insights and some of the other products. Right. So you brought up Application Insights and so on. Is there general guidance around where App Insights is used versus, say, Sentinel or Security Center, those other tools like that? Sure. That's a great question.
So when you're sitting on top of a resource that creates logs, like VMs, compute, Windows laptops, these types of resources already create logs. You can then improve those logging experiences by filtering on the device itself, so you reduce the number of logs coming in to the right level.
When you're looking at a resource that already creates logs, we say, okay, let's put the agent on there, either the new agent or one of the older OMS/Log Analytics agents. The agent sits in that resource and then routes logs to Log Analytics. But there are also types of resources, including applications, that you want to collect data from.
That's the context in which Application Insights is used: you're coming to your application and you're using an SDK to instrument it, and the SDK sends data to what used to be an Application Insights endpoint.
As we converge this platform, it's an Azure Monitor endpoint, and your logs show up in a very similar way; they're in the same set of workspaces, so you can then look and say, oh, I have this workspace here and it has my application logs, my resource logs, all of that in one place. Then you have your curated views on top of those logs.
Now, you mentioned Sentinel, and some of the other Microsoft Cloud security products are going through their own convergence and branding. Those products typically sit as an extension to the Log Analytics agent on the resource. What you're getting is that the Sentinel team has done the work to try to keep the logging at the right level.
So you're getting the right number of logs, because it's really easy to turn everything on and end up collecting more telemetry, and spending more money on telemetry, than you would ever spend on the actual compute resource itself. Yes. I mean, I guess the way I'm thinking about this, and tell me if I'm on the right track here, is that whenever you look at an architecture diagram, you have the foundational stuff at the bottom, and I feel like Azure Monitor is kind of that.
It's like the trucking company that gets it there, not necessarily all the analytics on what's being shipped and whatnot. It just makes sure the package gets there and the logs get out to all the right locations.
That's both the agent and the cloud service piece, and it sounds like Log Analytics and then Sentinel and App Insights essentially stack on top of that and create the value-added insights: hey, your application is running slow, or hey, you have a security attack going on. It feels like that's the configuration. Is that the right way to think about it? I think so, but it's two layers.
Using the package-distribution analogy, maybe it's more the distribution center, because at the bottom layer, at the resource level, Azure Monitor has very frequently worked with the resource teams, whether that's Sentinel, Azure SQL DB, or Key Vault, in order to collect the right data. That data then gets routed to what, in your analogy, would be the distribution center.
Then on top of that, the views get built, and those frequently come back to the teams that helped provide the data in the first place. What you see from Sentinel is both the lower level, helping collect the right data, and the upper level, ensuring the right data is displayed, including, of course, the right alerts built on top of that data. Now, you just mentioned alerting there. How do we do alerting in Azure Monitor?
What sort of things can I, A, alert on, and B, how do I get notified? Sure. To start with, when you look at the data we've ingested into Azure Monitor, it really all comes down to two types: logs and metrics. The logs are essentially the raw logs that have been indexed from any of the Log Analytics and Insights ingestion pieces. Then, on top of that, we have the capability to create curated metrics.
In some cases, you can think of those as part of the log itself, but with a bit more of a time-series pre-aggregation on top. So for alerting, you can build alerts on top of metrics and on top of the logs themselves. Then again, going back to that convergence story we've been through these last four or five years, there were three or four different teams at Microsoft building alert capabilities on top of different types of data.
Now we really have one larger, more focused team whose job it is to build alerting on top of the entire platform.
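For a flavor of what that looks like in practice: a log alert rule is essentially a scheduled KQL query whose results are compared against a threshold. Here's a rough sketch, assuming Python and the azure-monitor-query package, of running the kind of query an alert rule would evaluate; the workspace ID and query are placeholders.

```python
# Rough sketch: running the kind of KQL query a log alert rule evaluates on a
# schedule. The workspace ID and query below are placeholders.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Count failed operations in 5-minute buckets over the last hour, the sort of
# aggregation you might alert on when it crosses a threshold.
response = client.query_workspace(
    workspace_id="00000000-0000-0000-0000-000000000000",
    query="""
        AzureDiagnostics
        | where ResultType != "Success"
        | summarize failures = count() by bin(TimeGenerated, 5m)
    """,
    timespan=timedelta(hours=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```

An alert rule runs this kind of query for you on a cadence and fires an action group (email, webhook, and so on) when the result crosses the configured threshold.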
You mentioned before about pre-aggregation and that sort of stuff, so let me make sure I get this right. If I go to Azure Monitor, I've noticed that in the pane on the left-hand side there are some products, and I think they're called Insights or something, like the Key Vault stuff. I can click on that Key Vault option and it will give me all these really nicely laid out, really actionable charts and telemetry, showing access failures to Key Vault, the number of puts, the number of gets, the number of key creations, and so on and so forth.
Is that where you guys have worked with Key Vault, for example, to say, okay, we can display your critical information as part of Azure Monitor in one pane, as opposed to having to go into Key Vault and start messing around inside of Key Vault? You can display it inside Azure Monitor. The short answer is yes.
Then of course, additionally, whether it's Key Vault or Storage, we've also tended to make it a more combined, comprehensive story, because instead of looking at my one storage account or my one Key Vault, I'm looking at a collection of storage accounts, a collection of Key Vaults, across a broader spectrum, so I can get an idea of how all of these resources are behaving, whether that's across one geo or worldwide.
So it just gives that better, comprehensive view across the entire set of resources you're operating in your tenant. I love that view because I use Key Vault all the time. Having that one little view, with absolutely no effort required from me, looking across all my Key Vaults; and it's a hierarchy as well, so I can see an aggregation of the data, or I can drill down into an individual Key Vault if I see one being problematic. It's actually really, really nice.
So my guess is what you're doing is just taking some really low level telemetry from the Key Vaults, and then aggregating it and then making it look pretty so that I can actually understand what the heck is going on.
Yeah. That's what we've been working through these last two years or so as we started this insights journey. Long term, of course, we want to do it for all of the resources in Azure, but we're starting with the ones customers are most interested in first: things like containers, VMs, Key Vault, and Azure SQL.
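If you'd rather query that underlying Key Vault telemetry yourself, the curated charts are built from the same diagnostic logs you can reach with KQL. A hedged example, using the table and column names as they commonly appear for Key Vault diagnostics; run it with the LogsQueryClient shown earlier.

```python
# Hedged sketch: roughly the kind of KQL the Key Vault insights views
# aggregate. Key Vault diagnostic logs commonly land in AzureDiagnostics with
# ResourceProvider == "MICROSOFT.KEYVAULT"; operations show up as
# OperationName values like SecretGet, KeyCreate, and VaultPut.
KEY_VAULT_FAILURES_KQL = """
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| summarize total = count(), failures = countif(httpStatusCode_d >= 400)
    by OperationName, Resource
| order by failures desc
"""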
One of the things that I always worry about, and maybe I'm too paranoid as a security guy, is what happens if sensitive data like PII or PHI, probably more PII than PHI, but sensitive stuff, even passwords, gets logged into the system accidentally. How would someone find that and clean it up?
So first of all, if you did find something in a log that you didn't expect to be there or that was accidentally logged, and this does happen, the view on that is: you should think of it as not necessarily exposed, but it is something that needs to be rotated, because you've probably put it into a system that doesn't have the same RBAC controls on top of it as your production VMs or other resources. In those other resources, even if someone did have access, they may not be able to see something like a plaintext password. So in the accidental-logging scenario, we can delete individual rows. We have purge functionality so that customers can come through and say, oh, this data is not supposed to be in this log. There are three steps. One, stop sending it, because you need to stop the bleeding.
Two, go through and look at whether you need to rotate these secrets, or whether you already have rotation plans in place for them. Then finally, execute the purge commands against the data that has shown up in these logs. Who's "we"? Is "we" Microsoft, or is "we" somebody else? Sorry: "we", in this context, is the customer. The customer themselves has full control over their data and can call the purge API to delete it.
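The purge workflow Dave describes is driven through the workspace's management API. As a hedged sketch following the documented Log Analytics purge operation, with all identifiers as placeholders and the api-version worth double-checking against current docs:

```python
# Hedged sketch: calling the Log Analytics workspace purge operation to remove
# rows matching a filter. All identifiers below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token

url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights"
    "/workspaces/<workspace>/purge?api-version=2020-08-01"
)

body = {
    "table": "AppTraces",
    "filters": [
        # Example filter: purge rows ingested during the window when the
        # secret was being logged. Simple operators like ==, >, < are
        # supported on column values.
        {"column": "TimeGenerated", "operator": ">", "value": "2021-11-01T00:00:00"},
    ],
}

resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
# Purges run asynchronously; the response includes an operationId to poll.
print(resp.json())
```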
I was just saying that; and also "we" Microsoft, because we've had this experience many times at Microsoft, where teams come to us and say, we've found some secrets here, what do I need to do to clean it up? Hang on a minute. I'm going to take Mark's paranoia and raise it with a bit more paranoia. Mark has a valid point, right? If we have some sensitive data being logged, how do we scrub it? How do we clean it up? But how do we stop someone from covering their tracks?
Because they realize that something bad happened, and now they want to go and delete stuff from the logs. How do we prevent that? Or is that the question you were going to ask as well, Mark? Yeah, you stole my question. We're both paranoid, so we're on the same track then. Okay, good to have big paranoia. So every Azure resource has a logging story where it needs to send its audit actions to the audit logs.
Now, interestingly, the audit logs are of course handled by Azure Monitor directly. The view customers get for audit logs is also part of the Azure Monitor experience across all the other resources. But I'm worried that somebody is going to go and delete part of this logging. Well, there are a couple of things here. One, the RBAC controls on the purge command are more restricted, so you do have to have the purge permission.
I forget the exact name of the control itself, but basically you need to be granted the permission to operate a purge command. Once you have those permissions and you do operate it, that is of course stored in the same audit log: the fact that you executed this purge is recorded. Additionally, that purge record is kept in immutable storage, so, for example, you can't use the purge controls themselves to purge the record of the purge. It's immutable at that point.
Ultimately, like so many other things, and as with any cloud system for that matter, it all boils down to the RBAC controls on the specific resource being protected. That makes sense. Gee, that's interesting. So if I delete a row from a log, there is an entry written to another log that says, Michael deleted this row from the log. Yes. All right. It's a little bit like the Windows Event Log.
If you clear the security log, there's an event written straight away that says Michael cleared the security log. If you try to delete that, there's another entry written that says Michael tried to clear the security log. There's always that record there. Even if something is deleted, there's a record showing that it's been deleted, which is absolutely critically important, I think. But where does Azure Data Explorer fit into all this Azure Monitor stuff?
Going back years, to how we built these systems: the Azure Data Explorer and Azure Monitor teams started down this journey together. They built a fantastic platform for handling large amounts of data and running really great queries on top of it. That became our back-end for logs; the back-end we use internally is built on top of Azure Data Explorer.
It was first used just for Application Insights, and then, as we converged the next round of platforms, Log Analytics started using it too. The way it fits in is that when you're operating the log capabilities of Azure Monitor, we're running that entirely on Azure Data Explorer. It's the same KQL language, with maybe a couple of commands restricted because it's a multi-tenant platform.
But that's how and why the queries are as fast as they are, and how you can go ahead and bypass the curated experience and go directly to your own query experience. You brought up an interesting word there, which was multi-tenant. Is there a solution here for customers who, for whatever purposes, require the use of a single tenant logging environment?
Sure. In addition to storing the data in the multi-tenant Azure Data Explorer, customers can come in, really in the context of logs only, and create their own Azure Data Explorer cluster, and then we push their workspaces into that. Now, they do this on a per-region basis, and they can have as many workspaces as they want in that one region, on that one Azure Data Explorer cluster they've configured. What about Log Analytics itself?
I heard that there's a single-tenant version of Log Analytics as well. Yeah, sorry, maybe I didn't quite explain that accurately. In the Log Analytics API, you would go through and create the dedicated cluster. It is an ADX cluster, but it's still paid for as part of Log Analytics and, for the most part, managed directly through Log Analytics. Is that a special version of Log Analytics?
Is that something you can opt into when you create a new Log Analytics workspace? How does that manifest at a practical level? Yes. At a practical level, for the most part, you can see it as: I'm creating a new workspace that I'm going to host on this platform. It's a little more nuanced, in that a customer could have an existing workspace sitting in the multi-tenant platform, and after a while they look at their data and go, I have a lot of this data.
Some of it they really want to move, maybe because it's a little more sensitive, or they want a little more control, or even to be able to manage the costs a little differently. They can create a new workspace, or create the new dedicated cluster and associate the workspace with it. We don't typically move the data, because you might be leaving behind hundreds of petabytes of data.
Instead, the new data goes to the new cluster, because these logs tend to have age-out times of 30 days, 90 days, two years. The data sits in the old cluster as well, but that older data gradually ages out, and of course all new data lands in the new space. Queries are merged on top of this, so the customer is able to see data from both portions.
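For the curious, here's a heavily hedged sketch of what creating a dedicated cluster and linking a workspace looks like through the management REST API. The api-versions, the minimum capacity, and all names here are assumptions to illustrate the shape of the calls; check the current Log Analytics dedicated cluster docs before relying on any of it.

```python
# Hedged sketch: creating a dedicated (single-tenant) Log Analytics cluster
# and linking a workspace to it. All identifiers are placeholders.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

scope = ("/subscriptions/<sub>/resourceGroups/<rg>"
         "/providers/Microsoft.OperationalInsights")
arm = "https://management.azure.com"

# 1. Create the dedicated cluster (billed as a capacity reservation).
cluster_body = {
    "location": "eastus",
    "identity": {"type": "SystemAssigned"},
    "sku": {"name": "CapacityReservation", "capacity": 1000},
}
requests.put(f"{arm}{scope}/clusters/<cluster-name>?api-version=2021-06-01",
             json=cluster_body, headers=headers).raise_for_status()

# 2. Once the cluster provisions, link the workspace. New data then lands on
#    the dedicated cluster; older data stays behind and ages out, as above.
link_body = {"properties": {
    "writeAccessResourceId": f"{scope}/clusters/<cluster-name>"}}
requests.put(f"{arm}{scope}/workspaces/<workspace>/linkedServices/cluster"
             "?api-version=2020-08-01", json=link_body,
             headers=headers).raise_for_status()
```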
One of the things, I mean, I've heard the term Azure Monitor Essentials. Can you explain what that means in the context of Azure Monitor all up, and any premium options and whatnot? So we started out monitoring around the company, around Microsoft, providing monitoring solutions for Microsoft teams, and additionally, of course, building the premium offerings that became OMS, Log Analytics, and Application Insights. One of our teams owned this almost-free experience built into Azure that is required for operating your resources in Azure.
These are your diagnostic logs and your audit logs experiences, auto-scale, some alerting on top of all of that. These pieces were being built by, of course, a slightly different team than the teams building for Log Analytics and Application Insights. They also didn't have the need to bill because these are the essential pieces. When we joined all these teams together, we said, let's still keep this separate in order to help people understand where we are on our journey.
The Azure Monitor Essentials pieces are the free, or mostly free, components built on top of diagnostic logging, audit logging, and the data collected as part of that. Then, as the teams merged together and worked on these pieces, we've combined them and hopefully made improvements in both the UX and the scale of data we're able to handle. That's really where Azure Monitor Essentials fits in.
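In practice, that Essentials layer is what you touch when you turn on a diagnostic setting so a resource's logs and metrics flow to a workspace. A hedged sketch using the Python management SDK follows; the resource IDs are placeholders, and log categories vary by resource type (AuditEvent is the Key Vault one).

```python
# Hedged sketch: the Essentials layer in practice. A diagnostic setting routes
# a resource's logs and metrics to a Log Analytics workspace. Resource IDs
# below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.diagnostic_settings.create_or_update(
    resource_uri=("/subscriptions/<sub>/resourceGroups/<rg>"
                  "/providers/Microsoft.KeyVault/vaults/<vault-name>"),
    name="send-to-workspace",
    parameters={
        "workspace_id": ("/subscriptions/<sub>/resourceGroups/<rg>"
                         "/providers/Microsoft.OperationalInsights"
                         "/workspaces/<workspace>"),
        "logs": [{"category": "AuditEvent", "enabled": True}],
        "metrics": [{"category": "AllMetrics", "enabled": True}],
    },
)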
Then the premium offerings are still the ones we had before, with Application Insights and Log Analytics. These are the offerings where, because in many cases significant Azure capacity is used to operate them, we do need to pass those costs along to the customer. So if I think of the monitoring Essentials as the core, basic logging features that are just naturally part of the platform...
...then anything that cranks it up to 11, with huge storage requirements and whatnot, those are essentially the premium, paid ones. Yes, that's correct. One thing that always interests me is how large customers especially use Log Analytics. I mean, we're often dealing with petabytes of data, and gigabytes if not more of ingestion on a regular basis.
What are some of the things customers have told you about the platform, and some of the practices they follow themselves? We do have some best-practices documents published now, but let me go through some of the challenges we've had with large customers, particularly when it's petabytes of data.
When we first started integrating with some of the additional teams, including Sentinel, a very common onboarding experience for a large customer was: I enabled Sentinel, or the security tooling at the time, I turned everything on, and then, oh wow, I got bill shock. I looked at the bill and it was more than I was spending on the compute being monitored. I think we've done a pretty good job of improving that.
We got a lot of feedback from customers on that initial onboarding experience, particularly the larger ones. We worked with them to help manage their bills, and we have our own internal learning now, so that when we see a new customer coming in and incurring very high costs, we'll immediately send them an email saying, hey, was this expected?
Some of the other challenges we've had with large customers over the years, particularly as we've merged the teams that support internal monitoring and external monitoring, is that external customers have almost a different set of access rules than what we would see at Microsoft.
Really, it's been about understanding how these customers do their deployments. A customer might have, say, a thousand engineers, and you find out that only maybe 10 or 15 of them are allowed to touch production with deployments. For these types of access to the portal, we had built one experience that we expected everyone to use: this is how you JIT, this is how you get your access, and it's fantastic.
Then we turn around and find that a large customer operates on Azure very differently. We did have to make changes in what we support through our RBAC controls, so that you didn't need the same read-write access directly to the resource in order to go in and have access and controls on top of dashboards, or to tweak an alert.
It's been an interesting journey as we built out Azure Monitor and as we onboard some of these larger customers.
Another point you alluded to at the start of the broadcast is how you deal with credentials, and MSI. Part of this most recent round, maybe the last three to five months, has been making sure we enable the same monitoring capabilities using MSI, rather than using credentials and moving certs around, because cert management is a problem both internally and externally. It really becomes a challenge as we look at it all up.
During the week, you and I were talking about some of the improvements that have been made around security in App Insights with AAD. What's going on there? I was looking for the actual document on this today. Application Insights has some new SDKs in preview; these SDKs should be GA-ing in the first half of 2022. They allow AAD authentication to be used as part of sending telemetry.
Instead of today, where most telemetry endpoints for Application Insights are unauthenticated, we can start collecting strictly authenticated data. Now, that is of course only for a portion of the types of data, and there will certainly continue to be types of applications where the entire telemetry ingestion experience is intended to capture things from the moment an external customer first hits a website.
You have a shopping portal, a customer hits it, and well before they're logged in, you want to capture what they put in there, what their experience was in their workflow as they added items, looked at items, navigated through different pages. All of that is unauthenticated.
You can still, of course, use an unauthenticated experience, but when you're sending data from your own servers, your applications running in your own context, your own VMs, having that authenticated-only flow provides an extra layer of security on top of your data. There are some customers I work with who will be very excited to hear that. That's good news.
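As a rough illustration of that authenticated flow, here's a sketch using the preview Azure Monitor OpenTelemetry exporter for Python. Whether your SDK version supports the credential parameter depends on the preview you're on, so treat this as illustrative; the connection string is a placeholder.

```python
# Hedged sketch: sending Application Insights telemetry over an AAD-
# authenticated channel, using the preview Azure Monitor OpenTelemetry
# exporter. The connection string is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

exporter = AzureMonitorTraceExporter(
    connection_string="InstrumentationKey=<key>;IngestionEndpoint=<endpoint>",
    credential=DefaultAzureCredential(),  # telemetry is AAD-authenticated
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

with trace.get_tracer(__name__).start_as_current_span("server-side-work"):
    pass  # spans emitted here go to the authenticated ingestion endpoint
```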
Well, I think this brings things to an end, but before you go, we always ask our guests: is there any one final thought you would like to leave our listeners with? I think the final thought goes back to what you were talking about at the beginning: what is Azure Monitor, and some of the confusion around Azure Monitor and these different pieces. I think it really comes down to viewing Azure Monitor as our convergence across both Microsoft-internal monitoring and external monitoring.
We're bringing all of the monitoring capabilities into one space, so instead of thinking of it as, am I using Application Insights or am I using Log Analytics, think of it as: I'm using Azure Monitor to monitor all of my resources and all of my applications, and to get the right rich, curated experiences and insights on top of those. David, thank you so much for joining us this week. I really appreciate you taking the time.
I know you're incredibly busy, but I also know Azure Monitor is such a cornerstone of the management and ongoing use of the Azure platform. It's been great having you on here. For some of our listeners out there, I hope this has helped clear up some of the misconceptions or confusion about what Azure Monitor actually is. To our listeners, thank you so much for listening. Stay safe out there. We'll see you next time. Thanks for listening to the Azure Security Podcast.
You can find show notes and other resources at our website azsecuritypodcast.net. If you have any questions, please find us on Twitter at Azure SecPod. Music is from ccmixter.com and licensed under the Creative Commons license.