
Episode 99: Securing Copilot AI Data and Purview

Aug 16, 2024 · 37 min · Season 1, Ep. 99

Episode description

In this (late) episode, we chat to Andrew McMurray, a Principal Product Manager at Microsoft about securing Copilot data as well as how Purview can play a role in doing so. We also cover news about MFA access to the Azure Portal (Important), PostgreSQL, Entra ID and Windows authn metadata, Backup Vaults, Conditional Access Policy, ADFS, and Azure Container Apps.

Transcript

Welcome to the Azure Security Podcast, where we discuss topics relating to security, privacy, reliability and compliance on the Microsoft Cloud Platform. Hey everybody, welcome to the podcast. This week's episode is episode 99. Yes, we're just one away from the big 100. This week, our guest is Andrew McMurray, who's here to talk to us about securing Copilot data. But before we get to our guest, let's take a little lap around the news. Mike, why don't you kick things off? Thanks, Michael.

So, my big news is I took a vacation. Yeah, I actually ended up doing kind of like the big family vacation while the kids are kind of at that age that's old enough to enjoy it, but not so old that they are teenagers. One of the things that sort of struck me, we visited things like the ancient Roman ruins and some of the ancient Greece stuff and whatnot. I was really struck by how much history some human disciplines have.

Like when you think about what the ancient Greeks did 25 centuries ago, like the Parthenon on the top of the Acropolis in the center of town, there was not a straight line on it because it wouldn't look straight if it was straight. They slightly curved the columns. They made the columns slightly different sizes so that it looked perfectly straight. And it was just an amazing amount of stuff that they were able to accomplish back then. And I was thinking like, wow, like 25 centuries of stuff.

Like in cyber, we barely have two, maybe three or four decades to lean on. I mean, obviously we have other disciplines and other conflict and all that, but cybersecurity as a discipline is so new. I mean, we got old people around that were literally there at the beginning and figuring some of this stuff out. I mean, like Michael Howard's still with us. Sorry, couldn't help it.

Ultimately, it was just, it really struck me as like how new of a discipline we are and how much we have to kind of figure out the basic rules of it. And we've been sort of bootstrapped into this like super important role in the world, you know, protecting elections, protecting democracy, you know, safeguarding the world's information and knowledge.

It's just, that was one of the things that really struck me, is like how new we are, but also how important we are to the world and how much we have to learn from other disciplines in the world as well. And so that's kind of the big thing I've been thinking about since the last episode. Okay, so I've got a couple of bits of news, both of them about Azure Container Apps. So hooray. Azure Container Apps support for Azure Key Vault certificates is now GA, which of course is a very good thing.

And you should definitely be using Key Vault to store things. And in public preview for container apps is managed identity support for scaling rules. So of course we do love managed identities. Managed identities are important and we've been, I've been having a lot of discussions about those over the past year or so. So of course you shouldn't be just storing a random secret in an app. You should be using a proper managed identity wherever you can.

So it's nice that we are now supporting that too. Yeah, it's good to see the managed identity stuff. Again, I've been talking about this for many, many years now: as more and more applications move to managed identities, you're not storing a credential somewhere to authenticate. And this is really important, because a stored credential is exactly what the attackers are going after.

So for any of the applications that have historically not had managed identities, you're certainly going to see more and more take on that technology, which is always good to see, because if the credential is not there, then it can't be compromised by the attackers. So yeah, this is good to see.
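To make that concrete, here's a minimal sketch of reading a secret via a managed identity instead of an embedded credential, using the Azure SDK for Python. The vault URL and secret name are hypothetical placeholders; when running inside a service with a managed identity assigned (such as Azure Container Apps), DefaultAzureCredential resolves to that identity automatically.

```python
# Minimal sketch: fetch a secret via managed identity (no stored credential).
# The vault URL and secret name below are hypothetical placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# On a service with a managed identity assigned (e.g. Azure Container Apps),
# DefaultAzureCredential picks up that identity automatically.
credential = DefaultAzureCredential()

client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",
    credential=credential,
)

# No connection string or API key lives in the app itself.
secret = client.get_secret("example-secret")
print(secret.name)
```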

Well, since we are going to be talking about Purview in a little bit, and I really love conditional access due to how it uses information gathered from the infrastructure, including user and device, in almost real time, and takes that information to make authorization decisions on the fly, I'm going to talk about the conditional access policy that blocks access for a user with insider risk.

Insider risk is a conditional access condition that basically leverages the signals from Microsoft Purview's adaptive protection capability to detect and automatically try to mitigate insider threats. For example, Purview may detect unusual activity from a user, and conditional access can enforce security measures such as requiring multi-factor authentication or blocking access. This is a premium feature and requires a P2 license.
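As a rough illustration of what such a policy can look like programmatically, here is a sketch that creates a block policy keyed off insider risk level through the Microsoft Graph conditional access API. The property names and values reflect the Graph schema as we understand it and should be verified against current documentation; token acquisition is elided.

```python
# Illustrative sketch only: a conditional access policy that blocks access
# for users whose Purview adaptive protection insider risk level is elevated.
# Verify property names and values against current Microsoft Graph docs.
import requests

GRAPH_URL = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"

policy = {
    "displayName": "Block access for users with elevated insider risk",
    # Start in report-only mode to observe impact before enforcing.
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "applications": {"includeApplications": ["All"]},
        "users": {"includeUsers": ["All"]},
        "insiderRiskLevels": "elevated",
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

token = "<bearer token with Policy.ReadWrite.ConditionalAccess>"  # elided
resp = requests.post(
    GRAPH_URL,
    json=policy,
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
```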

For more information, visit the "Common Conditional Access policy: Block access for users with insider risk" blog. I found the information provided really interesting. The second news item that I wanted to share is about Active Directory Federation Services (ADFS). Microsoft has enabled some new migration capabilities provided by Entra ID. This is important because by migrating away from ADFS, we decrease the attack surface.

Organizations basically do not have to maintain the extra software and devices; instead they can use Microsoft Entra ID. The ADFS migration wizard allows customers to quickly identify which ADFS relying party applications are compatible with being migrated to Microsoft Entra ID. The tool also reports the migration readiness of each application, highlights issues and provides suggested actions to remediate them.

In addition, it provides guides to help prepare each individual application for migration and configure the new Microsoft Entra application. I've got a few items. The first one is, and this is a really important one, we're changing our multi-factor authentication requirements for Azure sign-in. Right now we're starting to roll out MFA enforcement for the Azure portal only.

Over time, starting early next year, we will also include the Azure CLI and Azure PowerShell and other infrastructure-as-code tools like that. Right now it's just for the Azure portal. So you need to make sure that you're all configured to support that. We will notify global admins of each tenant to make sure that they're aware that this change is coming. Next one, which is from my old stomping ground in Azure data: Azure Policy support is now GA for PostgreSQL Flexible Server.
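As a sketch of the kind of policy involved, here is an Azure Policy rule, expressed as a Python dict, that denies PostgreSQL flexible servers created outside an allowed-locations list. The parameter name is a placeholder, and in practice you would typically assign the equivalent built-in policy rather than author your own.

```python
# Illustrative Azure Policy rule (as a Python dict): deny PostgreSQL
# Flexible Servers created outside an allowed list of regions.
# The parameter name is a placeholder; built-in policies cover this too.
policy_rule = {
    "if": {
        "allOf": [
            {
                "field": "type",
                "equals": "Microsoft.DBforPostgreSQL/flexibleServers",
            },
            {
                "field": "location",
                "notIn": "[parameters('allowedLocations')]",
            },
        ]
    },
    "then": {"effect": "deny"},
}
```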

This is great to see. I'm a huge fan of Azure Policy. So there's all sorts of different security policies, for example restricting locations and so on for PostgreSQL. So that's always good to see. Next one is Microsoft Entra ID using Windows principals for SQL Managed Instance.

So one of the cool things about Managed Instance is that it's a really good bridge between on-prem SQL Server and the cloud, because it's the service that's most compatible with SQL Server, a lot more compatible than Azure SQL Database. So now we support Entra ID login using Windows principals as well. So this is good to see because it just makes things a little bit more seamless.

And the last one I have, which again is something that we see across the board, is moving encryption keys to support customer-managed keys. And the latest service that supports that is Backup Vaults. So now you can have your own customer-managed key rather than just a service-managed key provided by Azure. Great to see. Okay, so with that, let's turn our attention to our guest.

As I mentioned at the top of the podcast, this week our guest is Andrew McMurray, all the way from Australia. And he's here to talk to us about securing copilot data. So Andrew, welcome to the podcast. Would you like to spend just a quick minute and give our listeners an idea of what you do? Yeah, absolutely. Thanks, Michael. So my name is Andrew McMurray, Macca to his friends. In fact, Macca to everybody. The only people that call me Andrew are people that are angry with me.

So being Australian, everyone with a Mac in their name becomes Macca, which is unfortunate given that we call McDonald's Maccas out here. And that effectively makes me named after a hamburger. But apart from that, I'm part of the Purview engineering team. I'm a principal product manager. I tend to be looking after two areas of Purview: firstly the data governance experience, the new data governance experience that we'll be bringing out later this year.

And I also do quite a bit of work with Purview for AI, looking at how we can secure interactions with the various Copilots that we have in our stable. Alright, so let's start at the very, very beginning, because I know a lot of people, and I'll be honest, I'm one of them, are really confused about Purview and what it actually is. So spend as long as you want explaining to our listeners precisely what Purview is, including all the moving parts.

Yeah, well, that is a fantastic question, because we have done a fairly reasonable job at confusing a lot of our customers out there. Ostensibly, Purview is a suite of capabilities that span three major pillars: data security, data governance, and risk and compliance posture. So those are the three areas that we look at. We've conglomerated a bunch of different capabilities into the one umbrella brand very recently, in the last couple of years.

And prior to that, we had Azure Purview, and Azure Purview has now become the data governance pillar of the Microsoft Purview suite. We also had M365 compliance, which really covered the data security and the risk and compliance posture capabilities. Now we did a merge of all of these capabilities into one umbrella term called Microsoft Purview.

So Purview gives you firstly, data security capabilities in M365, as well as multi-cloud capabilities in terms of data loss prevention, insider risk management, information protection. And we use this capability called adaptive protection that sits underneath it to really understand the risk profile of a user and then dynamically adjust their access to information based on that current risk profile.

In the data governance space, we're all about finding and governing rather than securing structured data. And we're starting to make forays into unstructured data as well. We have a capability called the data map, which is an area that allows us to scan in various different data source types and get an understanding of the metadata of those types and what's actually out there in our structured data estate. We can do that across a multitude of different providers.

We have scanning connectors into around about 150 plus different providers ranging from Azure, AWS, GCP, and the various workloads that live inside of those. When we scan that information into our data map, we're then able to promote that information into a data catalog that is essentially an area where a data consumer working in a business can easily go in and find the types of data products that they need to work with based on their job role.

This stuff can be curated by data stewards in the organization. We can provide access control and policy-based frameworks for getting access to that data. And we also have the ability to start scanning unstructured data in certain areas such as AWS S3. And we're really bringing to life a lot of unstructured scanning of data sources. Worth being aware that the data governance solution is a metadata scanner.

So whilst we go and scan the data in situ, we don't bring any of that data back into the data map. We simply bring metadata about that data back. And then finally, we have the risk and compliance posture capabilities that are in Purview. Now these capabilities include things like Compliance Manager, which allows you to get an understanding of whether your environment is compliant with certain regulatory compliance frameworks.

We have eDiscovery and audit, which is very important for us to keep hold of information when it's needed in discovery cases. And of course, the audit capability is really manifested in the unified audit log that's currently sitting inside M365, so that we can get an understanding of all of the administrative actions that are taken across the tenant and what's actually happening there. We also have communication compliance and data lifecycle management that sit inside this pillar as well.

Communication compliance allows us to really understand what people are doing in a Teams environment and be able to track down issues in that environment, such as profanity, sexual harassment and various other types of unacceptable conduct. And then data lifecycle management, with its component records management capability, allows us to really understand how long we should be keeping our data.

So many of the organizations out there will keep data indefinitely and that is not necessarily a great thing to do. So being able to say, how long should we be retaining this data? Should we be retaining data no more than five years? And if that is the case, fine.

As the data hits that particular mark, we'll go and mark it for deletion and remove it from the tenant, so that we don't have old data that's no longer relevant but could potentially pose a security risk if it was left lying around. So that suite of capabilities represents what we consider today as Microsoft Purview, from the M365 side of the tenant all the way through to the structured data existing inside Azure and outside sources as well.

Now that we've done our little recap of Purview, what we wanted to talk more about is, as we know, tons of people are doing AI things. And for a lot of people who have Microsoft environments, their first foray into AI is using Copilot for M365. Now I know from talking to customers, and I'm sure you do too, that one of the concerns they have is around, can I put AI, a Copilot, across my data when I don't know what's there and things can be overshared?

So do you want to kick off and tell us, am I right? Is that what people are worried about and what can we do about it? Sure. So it is something that people need to be worried about. It's fairly common in larger organizations and even smaller organizations for permissions to become poor over time, for areas of the organization to have sensitive data inside them that may be a little bit more open than it should be.

I mean, I think anyone out there can probably think of a time when they've gone to an internal intranet site, typed in a reasonably innocent search term, and then seen stuff come back that is not necessarily something they should see. Copilot is great. Copilot really extends our ability to find and use information, but Copilot is extremely good at finding information based on the prompt that you give it.

And sometimes that information that comes back might be stuff that you shouldn't see that just happens to be sitting inside a SharePoint site or something similar that is probably more open than it needs to be. Perhaps that document is effectively security by obscurity because nobody is really aware of the presence of the SharePoint site and we haven't effectively locked that down.

So I could quite innocently type in, give me some salary expectations for level X and all of a sudden I could be pulling back potential salary information for executives or something like that if it's not locked down securely. So that becomes something that we really need to think about.

How do we make sure that we're not oversharing data in the first place, or at least making sure that we have appropriate controls over that data, so that Copilot is not accidentally giving a user something that they shouldn't be seeing? And that definitely correlates with the experience I've seen with customers, where everybody knows data security is important, that data is the thing that matters. And people have talked about that.

But it's always been on the list; it's just always been towards the bottom of the list, after the SOC, after identity, after all these pressing concerns. And oh, we're migrating to the cloud, we need to take care of our infrastructure and make sure we're doing DevOps, we need to make sure there's security in it. And so it always ended up just slipping to last place, is one of the things I've noticed.

And I've really seen, since the advent of AI, how much it's popped forward onto so many security organizations' radars. And it's just been kind of like one of those, instead of giving it lip service that yes, it's important, we're starting to see some real action on it. And I'm really glad to hear about all the different Purview tools that can help with those challenges. At the end of the day, it's just fundamentals, right, Andrew? I mean, it really is just fundamental. It's labeling.

I guess the point is that Copilot honors those labels, is that true? Absolutely right. Yeah, for sure. So if we think about Purview's capability of sensitivity labeling, this is something that becomes really important when thinking about where the data is sitting in the data estate and the level of access that I should or shouldn't have to it.

Now labeling has been around for quite some time, but it tends to be one of those things that's a little bit intimidating for organizations to put in place, because an aspect of sensitivity labeling is the ability for us to apply an encryption template to unstructured data, to documents sitting inside various document repositories. And that encryption can make it very secure, and that encryption will move with the file because it's baked into the file.

But the danger there is that if we're putting too stringent a control onto it, then users that might legitimately need to open that particular file, but aren't referenced inside the ACL associated with the encryption template, may not be able to open that particular file. But it's one of those things that, when planned properly, is actually very, very powerful and quite easy to do.

One of the benefits that we provide you out of the box in M365 is a default sensitivity label taxonomy. A lot of customers out there have not really thought about the taxonomy they should use for their sensitivity labels.

And over a period of many years now, when we think about the acquisition of a company called Secure Islands that became our sensitivity labeling capability back in 2016, we've always had a set of default sensitivity labels, but we've really optimized them over time so that they make sense for most customers that are coming into this as a new customer.

Those five default sensitivity labels, when we look at the upper two, confidential and highly confidential, we assign encryption templates to them to allow access to people within the organizational tenant, but not necessarily outside that. However, we need to think about how this is also done. So if I have those generic labels, then I could assign something as confidential and that would allow anyone within the organization that has an Entra ID account to get access to that info.

We're probably going to need to make some changes to that. There's going to be certain pieces of information that are more sensitive than what the default taxonomy will give me. So I'll probably want to tweak those a little bit. Executive salary information, for instance, should probably be very highly confidential and be locked down to only certain people in the organization.

Now when I'm searching for things via Copilot, one of the cool things that Copilot will do is it will aggregate sources of data to try and get me a final answer as to the question that I'm asking. And those sources of data might be multiple files with different sensitivity labels.

If I'm attempting to access something through Copilot that is protected with a sensitivity label that I do not have access to based on an encryption template, the first thing that will occur is Copilot will say, I will not give you that information. It will indicate there's information out there; however, it is at a sensitivity level that you do not have access to, and it will not show me the actual results of that information.

So if I want to find out the CEO salary, that is completely locked down via a sensitivity label that only allows three or four people to see it. So Copilot will tell me, sorry, can't show you that. Additionally, the Copilot response itself will be tagged with a sensitivity. So if, for instance, I produce a prompt that gives me back, let's say, a document that is generally sensitive, in other words, has no real problems with me seeing it.

And another response that's highly confidential that I happen to have access to, the entire response will be labeled as the highest of those sensitivity labels because I'm giving you information that is considered highly confidential. I will tag the entire conversation as highly confidential as well. So having the ability there to ensure that we're not returning data you shouldn't see.

But when we are returning data you should see, we will make sure you know what the highest sensitivity flag of that response is. This makes sure that the end user can be somewhat responsible with how they use that response. Kind of reminds me a little bit of back in the day, I'm already aging myself here. Index Server back in Windows used to keep the access control lists of the files that were being indexed.

So that way it could actually honor the ACLs in the query results, rather than just returning everything, because Index Server runs as SYSTEM on the box, so of course it can read everything. And the reason why it runs as SYSTEM is for very good technical reasons. But because it's running as SYSTEM, it can read everything. But that doesn't mean you can. That doesn't mean the person doing the query can read everything.

So yeah, the way Index Server used to do it back in the day was by maintaining the ACL information from the files. So I want to make sure I get this 100% right. So you're saying that if I do a Copilot query, and let's say, just humor me, that two sources of data, two files, are used to build up the results. And one is public and the other one is confidential. The result, no matter what's in the result, will be confidential. Absolutely correct. That's great.
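That aggregation rule is easy to picture as a tiny "most restrictive label wins" function. The sketch below uses a made-up label ranking purely for illustration; it is not Purview's actual taxonomy or implementation.

```python
# Illustration only: the response inherits the most restrictive label
# among its sources. Label names and ranks here are made up.
LABEL_RANK = {
    "Public": 0,
    "General": 1,
    "Confidential": 2,
    "Highly Confidential": 3,
}

def response_label(source_labels: list[str]) -> str:
    """Return the most restrictive sensitivity label among the sources."""
    return max(source_labels, key=lambda label: LABEL_RANK[label])

# A response built from a Public file and a Confidential file
# is labeled Confidential.
assert response_label(["Public", "Confidential"]) == "Confidential"
```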

I think one of the other things to be aware of with that is that in the response, it will also clearly identify the sensitivity rating of each of the sources that it used to compile that result. So there should never be any confusion over which one of those references that came back was the highly confidential one. It's all very clearly laid out to the end user. If I have a derivative work, which is the query result, can I save that file? Can I save that result? How does that work?

So the result will appear, obviously, in the interaction between the user and Copilot. When the results come back, the assets that we use to compile that result are linked into the result that comes back from Copilot. So if I have access to any of that information, the guts of the information will be in the response, but there will also be a link to the file. The link to the file will still remain an encrypted file.

So if I go and download that file and then I go and give it to somebody else who shouldn't have it, that end person that I've given it to, if they're not part of the encryption template that determines the ACL for who is allowed to access and decrypt it, they won't be able to get access to it. So it's not like the result is shipping me decryption keys that I can then utilize outside of that experience.

So at the beginning, you mentioned all the different services that are part of the Purview suite and the data protection capabilities they provide. Can you elaborate on why logging and eDiscovery are important and what capabilities they enable, especially with Copilot? Absolutely. So logging and eDiscovery are particularly important. Any type of interaction with Copilot gets logged into the unified audit log so that we can get an understanding of who is using Copilot and what they're doing.

eDiscovery becomes particularly important because, in the event of some kind of issue that is of a legal nature, eDiscovery can capture the exact interactions that occurred between a user and the service and take a copy of that into a discovery package that can then be used in a legal scenario. And it doesn't matter where that's coming from, we'll capture that. So if I'm using Copilot in a Teams environment, we'll capture that information.

If I'm using Copilot inline, inside Word, Excel, PowerPoint, for instance, we can capture that information as well as part of eDiscovery. So it's really important that we have the ability to see exactly how the interactions with Copilot are occurring and that we have the evidence that we need to provide if things are brought up in a legal scenario. So capturing all of that stuff automatically is extremely important and allows us to bolster any cases that we might find ourselves involved in.
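For programmatic access to that audit trail, the unified audit log can also be consumed outside the portal through the Office 365 Management Activity API. The sketch below shows the general shape of listing available audit content; the endpoint path follows the Management Activity API as we understand it, the tenant ID and token are placeholders, and the exact content type carrying Copilot interaction records should be verified in the documentation.

```python
# Sketch: list available unified-audit-log content via the Office 365
# Management Activity API. Tenant ID and token are placeholders, and the
# content type to use for Copilot records should be verified in the docs.
import requests

TENANT_ID = "<tenant-guid>"   # placeholder
TOKEN = "<bearer token>"      # placeholder, ActivityFeed.Read permission
BASE = f"https://manage.office.com/api/v1.0/{TENANT_ID}/activity/feed"

headers = {"Authorization": f"Bearer {TOKEN}"}

# List content blobs for the 'Audit.General' content type (a subscription
# for this content type must have been started beforehand).
resp = requests.get(
    f"{BASE}/subscriptions/content",
    params={"contentType": "Audit.General"},
    headers=headers,
)
resp.raise_for_status()

for blob in resp.json():
    # Each entry points at a downloadable batch of audit records.
    print(blob["contentUri"])
```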

Can you give me a practical example of that? Sure. So let's just say that I've been working as a designer for some time and I'm working on a brand new product, and I want to have a look at some of those confidential design documents. Now let's assume at the moment that we don't have a really good sensitivity label taxonomy in place. So what I do is I go and open up my document and I'm working with that.

Then I go to Copilot and I say, please give me a list of all the information related to the spec for this particular design. So Copilot gives me that information. And in this case, because we don't necessarily have a sensitivity label framework in place, I take that information and I email it to a competitor, one of my friends at a competitor, and say, look at this cool thing that we're building. Now we've got ourselves a bit of a problem. We've leaked some information.

It's highly confidential. Unfortunately, we didn't have those other controls across it. The logging and the e-discovery allow us to go back and look at the interactions that occurred and then ensure that we maintain those interactions and don't delete them so that if something does occur from a legal perspective, we now have evidence as to what happened, who did it and when it occurred.

People are listening to this and they're thinking, okay, I want to use Copilot, and I am pretty sure our data is a mess, or we don't even know, because I can say from my experience, I think people don't even know, but they probably suspect that their data is overshared, etc. What can we do? Now I know that we've been working on some steps, like a maturity model that people can take, but let's talk about maybe the first two phases, what you recommend people do to start with. Absolutely, yes.

We are definitely working on guidance around things like oversharing concerns for your Copilot for M365 deployment, absolutely doing that. Hopefully that collateral will be released soon. But just at the very start, one of the things you want to get a handle on is exactly how much of my content is overshared in the first place, and where is that content residing? One of the first things that you can do is go into your AI Hub in the compliance portal and run an oversharing report.

An oversharing report in AI Hub is really useful because it will give you a 30-day backdated report of the number of unprotected files in SharePoint Online that were referenced by Microsoft Copilot, which is really, really useful. Over the last 30 days, here's all the unprotected files that Copilot accessed. Then just by doing that, you can get an understanding of, well, what sites are they in? Why are they not locked down?

Maybe I need to go in there and start working on securing my SharePoint Online environments a little bit more. If you look at something like the SharePoint Advanced Management capabilities in SharePoint Premium, there's a data access governance report that you can run to get an understanding of what are the permissions across my sites and what do I necessarily need to do in order to fix that.

At the very beginning, we shouldn't just be jumping into saying, right, let's throw sensitivity labels on everything with the highest levels of encryption. Let's first find out what our level of exposure is before we go any further. Once you've done that, you can start taking some steps to look at your sensitivity label taxonomy, make sure it works for you and make sure that the ability to lock down that information is in there. Some of your labels will require encryption.

Will there be a static set of ACLs in the encryption, or will it be more open so that the user, when they apply the label, can then choose which users, groups and domains should have access to this information as well? But also, don't forget that SharePoint sites themselves have the ability to have sensitivity labels applied to them. In my organization, I can create sensitivity labels that determine the privacy settings of SharePoint sites as well as the access control settings of SharePoint sites.

By labeling a SharePoint site itself as highly confidential, I might immediately restrict it to a private group, a private site. I might immediately say that on unmanaged devices, I can't download any information from it. Just various things that will ensure that Copilot is not getting too much access to information itself from the very beginning.

And then of course, there are other things you can do around things like site lifecycle management, making sure that sites are not left lying around when they're no longer relevant. Things like auto labeling. So getting to a point where we're not responsible for the end users to label things themselves, but actually setting up auto labeling rules across things like Exchange and SharePoint to look for the presence of sensitive data.
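To make the detection side of that concrete, here is a toy sketch of the kind of pattern-plus-checksum logic a credit-card classifier applies. It is illustrative only, and not Purview's actual sensitive information type engine.

```python
# Toy illustration of credit-card detection (pattern + Luhn checksum).
# This is not Purview's sensitive information type engine.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Check a digit string against the Luhn checksum."""
    digits = [int(d) for d in reversed(number)]
    total = sum(digits[0::2])
    for d in digits[1::2]:
        total += d * 2 if d < 5 else d * 2 - 9
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return candidate card numbers that also pass the Luhn check."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits

# Example: 4111111111111111 is a well-known Luhn-valid test number.
assert find_card_numbers("card: 4111 1111 1111 1111") == ["4111111111111111"]
```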

Find those credit card numbers, automatically label the files inside them with the right sensitivity labeling and the right encryption template. So again, the user doesn't have to worry about it. But also we know that Copilot will not accidentally be giving out information that maybe it shouldn't. Are you serious? There's an oversharing report? Absolutely. Yep. There's an oversharing report.

If you go to your AI Hub, you run your oversharing report and it will give you the number of unprotected files in SharePoint Online that were referenced by M365 Copilot in the last 30 days. That's funny. We need one of those for social media, I think. Anyway, that's another discussion. So tell me what a day in the life is like, what it's like to work with Microsoft Purview. Like, is it one role that generally tends to do all these things?

Is it different sort of customer roles and jobs within the organization and different teams? Like what does that look like? And then, you know, like who, you know, basically who uses it and what is their kind of daily workflow look like? Sure. So generally speaking, when you think about Microsoft Purview as an umbrella, it does tend to span, you know, certain teams and those teams tend to match up quite nicely with the pillars themselves.

And that was very deliberately done because we tend to find in our customers that we have teams dedicated to data security, stuff like, you know, monitoring data loss prevention, checking for data loss incidents, looking at things like insider risk. We have the data governance office, which is generally very different to the data security office.

The data governance office is all about democratizing access to data for your data consumers, whilst ensuring that they are governed effectively to ensure that the right people are accessing the right information at the right time. And then you have your risk and your compliance groups, which are really responsible for that more legal side of the fence, ensuring that data is no longer left around that doesn't need to be there.

And also making sure that anything that does occur is, you know, prepared for any legal opportunities that come along. So we tend to find that those three pillars generally represent three different teams. And the levels of interactions between those teams can be greater or less depending on the company in question. Smaller companies, you tend to find data security and risk and compliance merged into a single area.

So your average Purview user in there will have quite broad responsibilities, but data governance tends to be something that is siloed into its own department. And it's quite often not necessarily controlled by IT itself, but by high-level business roles. And IT becomes very much a provider for the data governance function.

That's actually a beautiful segue into something we want to add to each of the episodes, which is, when we talk to our guests, to get an idea of what their day in the life looks like. Like what, Macca, on an average day? I mean, what does your job involve? You know, you wake up in the morning and what's next? Just sort of walk through what a typical day looks like. Yeah. So for me, basically my day is split between engineering internally focused work.

So things like reviewing specs, commenting on plans for what we're intending to do over the next six months. Also talking with engineering and helping to advocate for certain pieces of functionality in a product over others over the next six months because of customer demand. I spend an awful lot of time in front of customers.

In fact, I would say probably 70% of my time is talking to customers, understanding their blockers, helping them get deployed with the Purview solution, and then taking those results back to engineering. So in any Purview deployment, you are going to find blockers for someone, for anyone. There will always be something that they need that's not in the product.

And it's one of my jobs to make sure that I'm getting those requirements and then interpreting those requirements in a way that engineering can understand and act on. It keeps me busy. So Macca, the thing that we ask folks right at the end of the podcast, to wrap up: if you had a final thought to leave our listeners with, what would it be?

Sure. So we've talked a lot about, you know, the types of work that you probably need to put in if you want to ensure that you are appropriately securing Copilot interactions. And I think Mark said before, you know, quite often a lot of this work is that piece that gets forgotten about during the rush to actually get the solution in place. And my advice is don't put the work off.

If you're starting to think about using Copilot, take into account the stuff we've been through today and make it part of your initial deployment plan. Fundamentally, don't try and fit the roof before you build the walls. Yeah. Words to live by. So, hey Macca, thanks so much for joining us this week. Yeah. Purview's a complex beast.

So it's good to have someone from, you know, the engineering side of the house talk about it, especially on the more practical aspects, you know, the integration with Copilot. So with that, again, thank you so much for joining us. Really appreciate you taking the time. And to all our listeners out there, well, our next podcast is episode 100. I'll leave it at that. It's going to be a special episode. Everyone stay safe and we'll see you next time.

Episode 100. Take care. Thanks for listening to the Azure Security Podcast. You can find show notes and other resources at our website, azsecuritypodcast.net. If you have any questions, please find us on Twitter at Azure Sec Pod. Background music is from ccmixter.com and licensed under the Creative Commons license.
