
Episode 109: Securing GenAI Applications with Entra (2 of 4) - Overpermissioning

Feb 19, 2025 · 38 min · Season 1, Ep. 109

Episode description

In this episode, Michael, Gladys and Mark talk to guest Bailey Bercik about the problem of overpermissioning and how to use Microsoft Entra Permissions Management to identify and manage over-permissioned identities in multi-cloud environments to reduce security risks, especially for AI apps.

We also cover the latest security news about AI red teaming, Azure SQL DB logging, Azure Confidential Ledger, Star Blizzard spear-phishing campaign and CISA Zero Trust Maturity Model.

https://aka.ms/azsecpod

Transcript

Welcome to the Azure Security Podcast, where we discuss topics relating to security, privacy, reliability and compliance on the Microsoft Cloud Platform. Hey everybody, welcome to episode 109. This week is myself, Michael, with Mark and Gladys. This week our guest is Bailey, who's here to talk to us about over-permissioning and how to detect and remediate it using Microsoft Entra Permissions Management. But before we get to our guest, let's take a little lap around the news.

Gladys, why don't you kick things off? Yes, I actually wanted to talk about Star Blizzard, a Russian threat actor that we track, that Microsoft tracks. They're sending spear-phishing messages luring targets to join a WhatsApp group. Why do we care about this? We care about all our customers. We have provided a blog with a lot of different remediations, or mitigations. For example, you could use Defender on Android and iOS. In addition, WhatsApp can be installed on Windows.

There's a list of capabilities, including in Defender Antivirus, that you could enable in order to protect WhatsApp users. The next news is about Microsoft guidance for Zero Trust. Microsoft has been providing guidance for Zero Trust for many, many years. Mark keeps talking about all the different changes and additions that we make to the documents. Many customers, such as the US government, have different strategies that they need to follow.

In order to help customers align to those specific strategies, in April 2024 Microsoft released guidance for aligning to the Department of Defense Zero Trust strategy. Just now, we released another guidance for aligning to CISA's Zero Trust Maturity Model. My news is actually very thematic to today's guest. The Microsoft AI Red Team released a white paper on top lessons learned, and whatnot, from attacking AI, which is really, really interesting.

Some of it is a bunch of insights that came directly from the paper, but also that made me think about certain things like just how important it is that you have the intent and the context for what the heck you're trying to do with an AI. It's like LLMs are very much a general purpose technology. You really need to know what your app is supposed to do because you can't just shortcut that, oh yeah, it's whatever the code does.

We didn't document our code, which is not a great shortcut, but it's somewhat tolerable when you're talking about classic deterministic code that does the same thing every time; AI does something different every time. It's really, really important to know what the thing is supposed to do so that you know when it's going off the rails. Otherwise, you don't know. The paper also reinforces that, hey, attackers are rational and go for the easiest things, because we saw the red team go for that as well.

This interesting idea of hybrid attacks of essentially, hey, put this in a visual image, and then you throw text in the visual image that could be a malicious instruction. There's this interesting multimodal or hybrid angle that you have to watch out for because of the different modalities that these generative AI models work in.

Lots and lots of interesting discussions, coverage, and documentation of how an LLM could be used, and in all likelihood probably has been used, to automate scams and whatnot. Really some interesting stuff in that paper. There's a link to it in the show notes. That's all I got. All right. For my news, I have two items. One is following on from last week's news about Azure Confidential Ledger receiving SOC 2 Type 2 certification.

They've just received ISO 27001 certification as well. That's really good news. The link that I provide will actually be to the Azure Trust Portal and the documentation around that. The other one, which is from my old stomping ground, is Azure SQL Database. They've basically rejigged, re-architected is probably an even better word, major portions of SQL auditing.

Some customers were concerned about the performance of auditing and some customers just don't even bother turning it on because of the potential performance impact. Well, they've completely re-architected it, rewritten a bunch of code, and it is now substantially faster, which really boils down to more people turning SQL auditing on, which is critically important for determining if there is a breach, what actually went on. So I tip my hat to the guys in Azure SQL.

So with that, that's the news out of the way. Let's turn our attention to our guest. As I mentioned before, Bailey's here to talk to us about some more Microsoft Entra Permissions Management, this time around generative AI. As you probably know by the title, this is two of four. We've got two more to go. Last week we had a quick sort of kickoff, just a brief overview of what the other three weeks will look like. So Bailey, thank you so much for joining us this week.

We'd like you to spend a moment and introduce yourself to our listeners. Well, thank you for having me. My name is Bailey Bercik and I'm a senior product manager working on Microsoft Entra. I've been at the company for the past six years wearing all sorts of hats. And now one of the things that I've gotten to work on has been Security Copilot and a lot of guidance around how you secure generative AI apps with Microsoft Entra. So I'm excited to dive a little bit more deeply into it today.

So Bailey, there are several issues that customers should prepare for when using AI. What is the underlying problem that we're trying to address in this show? Well, I think the big one that I want to chat with folks about today is over permissioning.

And I know that as security teams, we've probably talked about that a million times, but AI is just going to shine a big flashlight on that and make it so much bigger when we look at the sprawl of permissions that folks have inside of their environments. So really want to dive a little bit more deeply into that, how we can enforce that for the different applications you might be using and also the permissions that you have for accounts that we can trim back on.

All right, Bailey, so that's a good, brief introduction. So let me give you a little story about over-sharing and over-permissioning, just in the interest of telling stories and sharing the battle scars. Many, many moons ago, I was working with a customer on some really cool stuff; they're in healthcare. And we designed a system for them, designed to be highly isolated, to provide data only to people who needed the data. It wasn't an AI solution by any stretch.

About six months later, I gave a presentation to another customer and I'd accidentally inserted a slide that was from the first customer. Luckily I noticed it like 30 seconds before giving the presentation and I pulled the slide out. So I spoke to my manager later that day. I said, hey, I just want to let you know this happened. Nothing bad happened, but I want you to know what happened just in case anything comes of it. I said, hey, I accidentally slipped a slide in there.

It was from another customer. It was an accident. But I pulled it out because I realized before the presentation that this shouldn't be in there. So I pulled it out. So nothing happened. No harm, no foul. But I want you to know it happened. And he said to me, he said, yeah, it's a good job that didn't happen. Because if it did, we actually literally have a process for handling that sort of over sharing and over permissioning of data, giving stuff to people who don't need the data.

And it would unfortunately involve the lawyers. So he said, very good. I'm very happy it didn't happen. But if it ever did happen, let us know. And that way we can pull in the correct processes. So I imagine you've got similar stories, but more around generative AI and the problems that we've seen. No, for sure. I think the example you brought up was perfect.

And I actually want to steal a story from somebody else who gave the example on a RunAs Radio episode that I'll make sure to share out to you all, so we can put it in the show notes for our listeners. Nikki Chappell, she's a Microsoft MVP and does a lot of work in the Purview space. She had an episode talking about Copilot and data governance and AI applications.

And one example she gave was a case study that she led about medical doctors putting patient information into ChatGPT, which I know for folks can raise a lot of alarm bells, but it's similar to the example that you gave, Michael, about how you were, I imagine, going through crunch time trying to get something out the door for a customer.

And then you put some data in there that you probably shouldn't have. Or in this case, a doctor using an application because, at the end of the day, we're all being told to use AI applications to do our jobs better. I think that right now, regardless of what industry you're in, you're being told that AI is the future and that you need to put these applications into your workflow or you might get left behind.

And so I think that in that example, doctors are experts in medical information, but they might not be experts in data privacy, data governance, and what applications they should be using. And I think that's where, as IT and security professionals, we need to come in and think about, okay, what are ways that we can prevent this from happening while still empowering our users, regardless of what business we're in, to do their jobs in a safe and secure way.

So Bailey, I'd love to hear your perspective on how you set up controls here. How do you make sure there's defense in depth? Because there's a lot of potential for mistakes and oversights, and for something sneaking through, essentially tricking or social engineering the AI or LLM or the people. So tell me how you think about the controls in a scenario like this. For sure, and I think it's definitely a defense in depth approach that folks will have to take.

And we can start at the first layer of talking about the actual applications that we're even going to allow in the environment in the first place. So in the example I gave previously about using an unapproved AI app, it could be any application that your employees are going to be using, whatever might be trending in your industry for folks to leverage, whether it's something creative or something that they might be inputting financial information into, for example, or medical information.

You would want to have a list of approved AI apps that are relevant to your organization that would empower your users to do their job better. But also where you know and you've done the due diligence to see that these applications are not leveraging your data to then be training data sets for other users to go leverage.

So first of all, looking at can I have an allow list purely for specific applications within my organization or register certain apps for my organization, making my employees aware that those are the AI apps that we're going to empower them with. Another thing to consider with that is going to be the actual permissions that those applications have. And I know this is a couple of years old, but I'll be sure to share this out for the show notes as well.

A colleague of mine, Mark Morowczynski, led this initiative where we were talking about, as he called it, hiding in the clouds: how attackers can use applications for sustained persistence and how to find it. And in that series of presentations, we talked about both malicious applications and applications that are simply over-permissioned, and how they can be leveraged if they're compromised.

So to bring that back to what you mentioned, Mark (Mark Simos, the other Mark in the conversation), we're really looking at: is this an application that folks may be leveraging that is malicious, that is providing that service but with poor intent, where a bad actor is leveraging it to get information about your environment?

Or maybe it's just an overly-permissioned AI app that then you're going to have to trim down and look at ways to make it run more efficiently. You know, it's funny you should bring that up. It's not just over-permissioned apps. I mean, obviously, over-permissioned apps are a big deal, but it's also apps that are no longer used, right?

I mean, I'm sure many customers, Microsoft included, in fact, we can talk to this because we've actually published this, but we've removed thousands of unused apps within our own subscriptions because they just hadn't been used in, I don't know, let's pick a number, six months, nine months, 12 months, two years, whatever. And some of those were also over-permissioned.

And so if they could, if an attacker could compromise one of those apps that wasn't being used and it was over-permissioned, I mean, all sorts of nefarious things could happen, right? I mean, I guess that's something that you see as well. That's exactly what we've been seeing and is super important to be thinking about because it is something where it just becomes a more attractive target. And I went to school for software development.

I've been there when you're just trying to get the dang app to work and you're using all the dot-star permissions, and then you think, oh, I'll go back later and trim it down. But you don't. Or there are the apps that are used within your organization and are just stuck there, where you don't know: should we roll this back? Are people really using it? But as the IT person, you might be nervous that you're going to disrupt the flow of business in some way.

So to your point, Michael, looking back at the application usage, are people actually leveraging this? Is there another app that we can consolidate with and move people over to? That way we just don't have to manage so much that if it does get compromised, we would just have less of a blast radius there. The problems existed forever. I mean, it's not just an Azure thing or an AWS thing or a GCP thing or Oracle Cloud thing.

That problem has existed in Windows for a long time, where people have run processes with higher elevation, which is the ability to do more than they may need to do. That problem has existed for a long, long, long time. It's the same thing that happens in Linux with daemons and services and whatnot running as root. I mean, it's a human computing thing.

Yeah. Well, the big problem, right, is to Bailey's point, is if you're rolling one of these things out and it's running elevated, the last thing you want to probably do on a Friday afternoon is drop its permissions and hope that it works because it's probably not going to work. Something's probably going to fail spectacularly. So people just leave it like that. Then three years later, when you speak to the people involved, you say, well, why does it run as system? Well, why does it run as root?

Or why does it run with, as Bailey said, all these dot-star permissions? I do realize we're talking about different permission models here, but bear with me. People just say, well, that's just the way it is. It just works and don't touch it because it just works. That's the wrong answer. It's completely the wrong answer.

If I can do a quick rant on this, because one of the things I've started to appreciate is the root cause of almost everything in cybersecurity is that security is the security team's problem, this myth, this false belief. If the person says, you know what? It's Friday afternoon. I'm not going to get blamed if there's a security incident. That's on the security team.

It doesn't make their Monday to-do list to figure it out, go into a lab and do that hard work, because if something goes wrong in security, that's the security team's problem, not mine. I think a lot of this is probably also due to that classic misconception, and the accountability is wrong in organizations. But I'll flip the rant bit back off. Okay. My rant on top of yours, Mark, is going to be that I absolutely agree.

I think it's a security team's thing to be looking at and administrating; at the end of the day, yes, it is so important that we educate our end users, but also it's our responsibility to empower the business to do business in a safe and effective way. Part of this, with the introduction of AI apps, is that now there is this influx of applications that folks are going to be using. How could I prevent it from being shadow IT?

How can I prevent it from being inappropriate use cases for data or people just trying to do their job well? I think that that's something that we don't need to be reprimanding our users in a negative way if they are trying to do their best at work. They're being told that you will be left behind or it's not that AI will take your job, but somebody using AI will. Right? And so to stay competitive in the marketplace, you're going to need to acquire these skills.

At that same time, what are some approved applications we can provide for these users? What are some controls we can put in place to then restrict some of that sharing and make sure it's done in a secure way? But it's absolutely, to your point, Mark, a security team's problem at the end of the day. Yeah. And I actually think, yeah, I think it's actually a little beyond that. Sorry to do a little bit of a debate here, but I agree, security is the experts, right?

They're the ones that understand the threats and whatnot. But you can't have one team accountable for one half and the other team accountable for the other half. Agreed. Yeah, because it's like, hey, do we blame the lawyers when the business leaders say, you know, I'm going to do this illegal thing, and they said no? Right? Like, no. We don't blame the finance people when the CEO spends too much, right? So we need to be thinking of ourselves in that way, as, like you said, an enabler, right?

We're part of the business. We're here to make them succeed. But, you know, we have a duty to inform. And the folks that are making those decisions have a duty to decide and have a balanced view, just like they think about safety. They think about cost. They think about all these other things. And your app is costing too much money. Your app is doing something illegal. Your app is doing something insecure.

And so it's really about building that partnership, but recognizing that we both have to look at it from both sides, and, this is for senior leaders, helping make sure the accountability structure drives that right behavior as well. You know, so going a little bit beyond the scope of AI a little bit.

Absolutely. And I think since you mentioned finance, I do want to tie it back with another example that I've heard from customers. There was a story I heard about an organization where they were using an AI app internally. And again, this is the importance of over-permissioning here: a financial analyst was asking it a question about some of the data sets they were working with to do some modeling and forecasting.

And then when they ran that prompt within the AI app, it was giving that analyst information about a merger and acquisition that was being worked on on the other side of the firm. So, you know, that's an incident where we're looking at naturally to your point, you know, there's within reason things we need to look at.

Yes, we need to account for insider risk and, you know, those certain events where people are doing something notably malicious, or something outside of that pit of success, right? Where they're doing some action where it's like, ah, really, you're doing this silly thing here today.

But when somebody's using a tool as intended and it's an approved application, but it's because of data permissioning that wasn't, you know, appropriately applied, like the DLP labels there or those application permissions, that can be really difficult when an employee is earnestly doing their job correctly.

Yeah. And one of my favorite things, and this is just the human nature aspect of it, is like favorite questions for users to ask because there's always someone in your org that's going to ask this, are there layoffs coming? How much is such and such getting paid? Like, people will ask that. You have to plan for that. Yes. I think that y'all even mentioned this too on a previous episode. Oh my goodness.

It was with Andrew McMurray, I want to say, where y'all were talking about that exact example of, you know, querying salary information where you have the prompts that are a little bit spicy where it's like, oh, okay, you're asking about that. Or it could be, you know, completely relevant prompts that somebody could be asking and then they just get some data back where they're like, whoo, I did not mean to discover, you know, this bit of information here.

So you know, accounting for both, obviously, and again, you know, if employees are doing something funky, that is a different conversation to be had about, you know, how you monitor and administrate that and, you know, take appropriate HR actions there. But you know, we're really talking about over permissioning and stuff with that.

When we're really talking about over-permissioning in this context, I do want to focus on some of the unintentional aspects, where the employee is doing the right thing that they've been told and then something they get back is incorrect, or they might naively be putting in information without realizing the total impact of what they may be doing. So I want to go back to the security responsibility.

I have talked with many developers, many administrators and engineers, and I have discovered that sometimes they don't think about the time before cloud. When they get these services, they think some of these services are solely for the identity people, solely for the security people. And now we are interconnecting many systems together. So now there's the capability for different audiences within our organization to be getting information from these services.

So my question to you is, since we are talking about who is responsible for security: which services does Microsoft offer that can help both administrators and developers assess the overprivileged issue, monitor it, and even fix the problem? Oh, my goodness. Okay. Thank you, Gladys, because that was the whole purpose of this. And we were just going on some wild rant about it. So thank you for bringing us back to the things that folks can do about it.

So some guidance that we've come out with, and that's kind of the purpose of this whole series is to really break it down, is aka.ms slash secgenai, where we've broken down some step-by-step practices about how you can go about securing generative AI apps. But to your point, Gladys, I do want to dive into some of them more deeply that touch on that over-permissioning aspect.

And then my colleagues, Christina Smith and Sharon Chahal, are going to talk about monitoring and learning and some governance stuff in the future episodes. So, focused on over-permissioning. What we've really found success with is Microsoft Entra Permissions Management, which is a tool that I believe we've had for about two years now. And for those who aren't familiar, it's a CIEM tool, or cloud infrastructure entitlement management. So that's C-I-E-M, but it's pronounced "kim," like the name.

That term was coined by Gartner in about 2018, referring to a tool that can give you visibility across multiple clouds and look into really that delta between permissions that identities are using versus the permissions that are assigned. And I think, Michael, you mentioned something about that way earlier on when we were talking about applications that are kind of this vestigial organ in your environment. And you're like, does anybody really use this? Are the permissions here really necessary?

There's a lot of dot-star, and it's the same with the identities as well. So when we look at people who've stayed with an organization for a long time, we generally see that the permissions that they have tend to increase over the years that they're at a company, just by nature of them taking on more projects or working on another team. And then very rarely are those permissions properly revoked and trimmed down and looked into.

So in order to really shine a flashlight on that and discover what's going on, within that guide that I mentioned earlier, we talk about how to identify and really break down some of those permission sets, and Entra Permissions Management can look across Azure, AWS and GCP. So it's a really neat multi-cloud solution there, looking into both your human identities, so your admin accounts, privileged accounts, all that stuff, and then your non-human identities.

So that can cover your application identities, for example. Yeah, I just realized we're doing a bit of inside baseball here. A lot of people may not know what dot star means. Oh my goodness, we are. I know, so why don't you spend a couple of moments and just talk about that. Oh my goodness. So way back when, and excuse me if I do butcher this because I have not been doing software development in like over like maybe seven, eight years.

But back when I was in school for it, when we're talking about a dot star permission that might be looking into all of the permission sets that may fall under a category. So it could be, and I'm making this up, so don't quote me on it. If you look into our permissions and you're like, Bailey, that's not one.

So to define those dot star permissions, if we're familiar with the terms like read, write, like those CRUD permissions, create, read, update, delete, instead of enumerating each of those, you could just do dot and then the little asterisk. And then that could mean that you're doing all of those or anything that would fall under that category. Or at least that's what I remember from being a software developer like eight years ago.
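To make that wildcard idea concrete, here is a minimal sketch in Python. The permission names are made up and this isn't any particular cloud's permission model; it just shows what a dot-star grant expands to compared with enumerating only the actions an app actually needs.

```python
# Hypothetical permission names; not any particular cloud's permission model.
from fnmatch import fnmatch

# Pretend catalog of the concrete permissions available on a resource type.
AVAILABLE = [
    "documents.create", "documents.read", "documents.update", "documents.delete",
    "documents.share", "documents.purge",
]

def expand(grant: str) -> set:
    """Expand a grant like 'documents.*' into the concrete permissions it covers."""
    return {p for p in AVAILABLE if fnmatch(p, grant)}

wildcard = expand("documents.*")            # the whole category, including purge and share
least_privilege = expand("documents.read")  # only what the app actually uses

print("wildcard grant covers:", sorted(wildcard))
print("extra, unneeded permissions:", sorted(wildcard - least_privilege))
```

The gap between those two sets is exactly what a least-privilege cleanup goes after.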

It's been some time, but that would be what I mean with the dot-star permissions. And as a software developer, you just want the dang thing to run. So you do all of them and then you're like, huh, the code works. And then you say to yourself that you'll go back and trim it down to actually get to least privilege, and very rarely do you actually do that. Okay. So we've talked about some of the stuff like, for example, Entra Permissions Management.

So let's go back to our little scenario for a moment, especially the healthcare environment. So you've got, like, doctors and non-doctors, I should say, because doctors have access to medical data. Perhaps you want to go one step further than that and have, like, cardiologists and, I don't know, something different. I'll let you come up with something with a different title.

So how do we make sure that a cardiologist, for example, sees their data and an administrator in the environment doesn't see the cardiologist data and a cardiologist doesn't see the financial data? How do we do that? Like give me some examples of how sort of bottom up how we would actually solve that problem. Yeah. So there's a few ways and this could apply to any business. So if you're listening to this and you're like, oh my goodness, I don't work in healthcare.

It could apply to if you're in retail, financial services, any sort of organization where you have a lot of different job functions. We may first look at our privileged roles. So if we're thinking about our IT workers in any business, right, those are going to be our administrators that may have access to, you know, create new user accounts, make updates to things and do some stuff that would be perhaps equivalent to root access, right?

So something that might be a bit stronger in terms of actual identity or administrative changes. That would be one category we're looking at. And when we're thinking about those permission sets, if they're using something like Copilot, for example: Copilot is able to work in the user's context. So you might have a certain permission set while another user does not have that same permission set.

So let's say we have somebody who works in IT at an organization, they're an admin, and then somebody who works in marketing on the other side of the company. If they run the same prompt in Copilot, they're going to get different information back because of the different privileges that they have and the different things that they can do within the environment.
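As an illustration of that idea only, here is a minimal Python sketch of "security trimming" at the retrieval step: results are filtered by the caller's own group memberships before anything reaches the model, which is why the same prompt returns different context for different users. The users, groups and documents are hypothetical, and this is not how Copilot is actually implemented.

```python
# Hypothetical users, groups and documents; not how Copilot is implemented.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    allowed_groups: frozenset

CORPUS = [
    Document("IT runbook", frozenset({"it-admins"})),
    Document("Marketing launch plan", frozenset({"marketing"})),
    Document("M&A working notes", frozenset({"corp-dev"})),
]

def retrieve(user_groups: set, query: str) -> list:
    # A real assistant also matches on the query; here we only show the
    # permission filter applied before the model sees anything.
    return [d.title for d in CORPUS if d.allowed_groups & user_groups]

# Same prompt, different context, because the two users hold different groups.
print(retrieve({"it-admins"}, "what's on the roadmap?"))  # ['IT runbook']
print(retrieve({"marketing"}, "what's on the roadmap?"))  # ['Marketing launch plan']
```

Note that neither user can surface the M&A notes, which is the point of trimming on data permissions rather than trusting the prompt.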

On top of that, you know, not just the actual privileges for the identity, we're also looking at some of the data that people have access to. So that could be the DLP and the different sensitivity labeling, or if I have access to a file set and you don't, how we would go about querying that. And then another layer we can look at is the actual applications that I have access to that you do not have access to.

So if I work in IT for a hospital, perhaps I shouldn't have access to, you know, play around with or run a medical-specific app, because during my day-to-day at work I'm not putting in medical information. So if I'm going to my My Apps page, for example, or whatever application portal you use, that should not be something that's surfaced up to me or that I'd be able to leverage. Same with the marketing team; they may be, you know, using an AI tool to create new images.

Maybe we could trim that down where, in your example, Michael, a cardiologist might not need to be doing that in their day-to-day. And that just further reduces the risk of patient information, or something else we wouldn't want input into an AI model, while still allowing them to use certain tools that are appropriate for their job function. Does that kind of answer the question you had there? Yeah, I think it does. All right. So let's be brutally honest.

We've sort of gone all over the map, including three separate rants. So why don't we just bring our listeners back to a centered position? Bailey, why don't you start from the very top, the coarsest level of protection, and work your way all the way down to the bottom, explaining what the different protections are, with some examples.

Absolutely. Sorry, I got passionate there with some of the ranting, but to bring it home for our listeners, if they, you know, tuned out a little bit doing something else and are really wanting those key takeaways from an over permissioning perspective.

If we're really starting out with, you know, the explosion of AI apps in your environment, and not wasting a good crisis, or rather not wasting a good opportunity, to go about doing some of these good security best practices, we could start, you know, at the furthest-out layer, looking at those network access controls. So what can we block for our users versus allow, just straight up?

So in that initial example that we led with, that Nikki Chappell spoke about on RunAs Radio, her work looking at medical doctors putting patient information into ChatGPT: what can we just outright block, where that makes sense for your business? And of course, you know, we're looking at the different risk tolerance that organizations may have, different industries, and things that you approve of versus don't approve of.

So if it makes sense for you to go ahead and just block it, that might be helpful for your end users to know what's allowed versus not allowed, empowering them to make the right decision and also giving you a little bit less stuff to go ahead and have to administrate.

Of course, though, if you are going to allow access to certain applications, go ahead and create an allow list, or nudge your users towards some approved AI applications where you know that the data they would be putting in is not going to be used to train models that may be used by other organizations or used publicly. So that application access, approve or deny, is going to be a big one.
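A toy Python sketch of that allow-list-plus-nudge idea, with made-up application names; it isn't tied to any specific enforcement mechanism, it just shows the shape of the decision.

```python
# Made-up application names; the allow list and nudges are illustrative only.
APPROVED_AI_APPS = {
    "contoso-copilot": "general drafting and summarization",
    "contoso-image-studio": "marketing image generation",
}
NUDGES = {
    "public-chatbot": "contoso-copilot",
    "random-image-site": "contoso-image-studio",
}

def check_app(app: str) -> str:
    """Allow approved AI apps; block others and suggest an approved alternative."""
    if app in APPROVED_AI_APPS:
        return f"allowed: {app} ({APPROVED_AI_APPS[app]})"
    suggestion = NUDGES.get(app)
    return f"blocked: {app}" + (f", try {suggestion} instead" if suggestion else "")

print(check_app("public-chatbot"))   # blocked: public-chatbot, try contoso-copilot instead
print(check_app("contoso-copilot"))  # allowed: contoso-copilot (...)
```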

Then we're looking at, okay, those applications that you've allowed in your environment, what permissions do they have? What can we trim back on if they do have, as we, you know, had a bit of inside baseball earlier on those dot star or just those overprivileged applications where it could be that this application is very helpful for users, but it just doesn't need all those permissions to run, especially if it's an internally developed app.

Reach out to your developers and see if we can trim that down and drop, you know, all of those extra permissions. That's something that, within Entra Permissions Management, you'd be able to view on those what we label as workload identities or non-human identities. You'd be able to see if that application is truly using those permissions and trim that down accordingly. Something else that we mentioned earlier is kind of those vestigial organs of applications.

So also if you're seeing, hey, there's this AI app that we've approved and we have in our environment, but nobody has used it within the past 90 days. Of course, check to make sure that it's not a seasonal app. So if you're in retail, something that may be very popular around the holiday season, if you're in finance, maybe during tax and audit season, people may be using that app more heavily. Use your best judgment for your business, of course, and the seasonality there.
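A minimal sketch of that stale-app check in Python. It assumes you have already exported each application's last sign-in timestamp from your logs; the app names and dates below are made up, and a longer lookback window is used as a crude guard for seasonal apps.

```python
# Made-up app names and dates; assumes last sign-in timestamps were already
# exported from your sign-in logs.
from datetime import datetime, timedelta, timezone

NOW = datetime.now(timezone.utc)
LAST_SIGN_IN = {
    "expense-copilot": NOW - timedelta(days=12),
    "legacy-report-bot": NOW - timedelta(days=410),
    "tax-season-helper": NOW - timedelta(days=200),  # quiet outside tax season
}

def review_apps(last_seen, stale_days=90, removal_days=365):
    """Yield (app, idle_days, verdict) for apps with no recent sign-in activity."""
    for app, seen in last_seen.items():
        idle = (NOW - seen).days
        if idle >= removal_days:
            yield app, idle, "candidate for removal"
        elif idle >= stale_days:
            yield app, idle, "review; could be a seasonal app"

for app, idle, verdict in review_apps(LAST_SIGN_IN):
    print(f"{app}: {idle} days idle -> {verdict}")
```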

But you could look at removing that application altogether and then just removing a possibility for if a bad actor were to compromise that application, what they would be able to do in your environment because it wouldn't even exist. Then another thing that we may be looking at is the permissions for the actual objects that that application may be accessing.

So certain examples of if we're leveraging an AI app like Copilot, for example, where I may want to look up different documents that I've collaborated with coworkers on or different information that may relate to a project. Let's say I put in a prompt and I want to look at what's coming new or what's on the roadmap for a certain year within the context of what I'm working on. And then all of a sudden it tells me about a product launch on the other side of the company I shouldn't know about.

That's where that DLP labeling comes into place. And it's very helpful when we're looking at what's restricted access for certain groups, what might be something confidential versus for public consumption or for consumption across other parts of the company. Trimming down on those privacy labels will be important there. And then lastly, when we're looking at the actual human identity. So this is going to be if we're looking at the privileges that users have.

So that might be an administrator permission, if we're looking at creating users or creating different resources. And again, we're talking about AI apps that are going to have the same permissions that that user has when they're leveraging them. If we're able to trim down the permissions that user has, then we're reducing the possibility of something intentionally going wrong, or unintentionally going wrong if the user is leveraging a prompt that may not be appropriate for them.

So in order to trim down on some of those privileges for those identities, that's where a tool like Entra Permissions Management could come into play, where we're again able to look at that difference between the permissions that the user is actually using versus what they're assigned. And we can get super granular in the tool, down to what they call the task layer, where we're looking at whether I'm actually doing a specific task within that permission set.

And you can get as bespoke as you'd like. If you're looking at two colleagues, so if Alice and Bob are both security administrators, but Alice is using more permissions during her day-to-day than Bob, you could create a bespoke role for Alice or for folks on Alice's team. So of course, go as coarse as you would like or as granular as you would like, depending on your tolerance and also the amount of time you have to pour into something like that.
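A small Python sketch of that assigned-versus-used delta, using the Alice and Bob example. The role and permission names are hypothetical, and this is not Entra Permissions Management's actual data model; it only illustrates the comparison a CIEM tool surfaces.

```python
# Hypothetical role and permission names; not the product's actual data model.
ASSIGNED_ROLE = {  # what the shared "security administrator" role grants both of them
    "users.read", "users.resetPassword", "policies.read", "policies.write",
    "keys.rotate", "logs.read",
}

USED_LAST_90_DAYS = {
    "alice": {"users.read", "policies.read", "policies.write", "logs.read"},
    "bob": {"users.read", "logs.read"},
}

for person, used in USED_LAST_90_DAYS.items():
    unused = ASSIGNED_ROLE - used
    print(f"{person}: uses {len(used)} of {len(ASSIGNED_ROLE)} assigned permissions")
    print(f"  right-sized role candidate: {sorted(used)}")
    print(f"  trim candidates:            {sorted(unused)}")
```

The "trim candidates" set is what you would fold into a bespoke, right-sized role for each person or team.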

But it's just a few different ways that you can take a defense in depth approach to securing your AI applications. So Bailey we're on episode two of four in this little mini series. Can you give us a bit of a preview of what's to come? We are and what a great little mini series it is. So I am the second episode. I got to cover a bit of over permissioning. And then we have two more.

So one is going to cover governance, and that's going to be led by Christina Smith, who does a lot of our governance work within the product group for Entra. She's going to be talking about those joiner, mover, or leaver scenarios, what you can trim down, how you're looking at reviewing who has access to what, and going a layer deeper than I did in that area. And then we're going to close it out with Sharon, who's going to talk about monitoring and learning.

So, you know, the fact that, okay, you've done all of this once: what do you do for the future to make sure that, you know, things stay in a good way? You don't have too much drift, and you're able to monitor if something funky happens in your environment. Very cool. Okay, so why don't we bring this episode to an end. Bailey, as you know, we always ask our guests for one final thought. So if you have one final thought to leave our listeners with, what would it be?

I do, and I'm a fan of the show, so I've heard about the final thoughts before. So I've been ready and excited for this. My big final thought that I want to leave listeners with, and I think Mark Simos mentioned this toward the beginning of our episode, stealing my thunder there, is that this is a lot of just the basics all over again, right?

I think that, you know, for a very long time we've been talking about over-permissioning, how we can clean stuff up, how we can look at least privilege and these defense in depth strategies. For a lot of admins and security folks, I think, you know, the conversation around AI applications and these AI tools is new and fresh. But a lot of it is just going to surface when we're doing the basics incorrectly or doing the basics not so well.

And so this is a great opportunity to be able to really enforce those basics within the business and be able to empower your users to leverage AI apps. So you know to kind of close it out and put a little bow on it, it's that this is just another example of when you need to do the basics really well especially in regard to AI applications. You know it's funny I think doing the basics is one of the most common final thoughts.

The other one that's really common is to use multifactor authentication, which I would argue is just the basics anyway. So yeah. Well, it's good.

Yeah. And these are the things that I think you know people tend to chase all the sparkly new stuff and I think that this is an example of a sparkly new thing that is going to force people back to you know the basics where it's the same thing as anything else where you know brush your teeth, get eight hours of sleep, you know eat during the day, drink water. Like these are all some really basic stuff that can be boring but you know it's important to do.

Yeah. I couldn't agree more and don't forget the sunscreen. All right. So with that let's bring this episode to an end. Bailey thank you so much for joining us this week. I know you're really busy so we all appreciate you taking the time out. And to all our listeners out there we hope you found this episode interesting. Don't forget this is the second of four so make sure you tune in for the next two episodes. Stay safe and we'll see you next time.

Thanks for listening to the Azure Security Podcast. You can find show notes and other resources at our website, azsecuritypodcast.net. If you have any questions, please find us on Twitter at AzureSetPod. Background music is from ccmixter.com and licensed under the Creative Commons license.
