Hello, and welcome to the Let's Talk Azure Podcast with your hosts, Sam Foote and Alan Armstrong. If you're new here, we're a pair of Azure and Microsoft 365 focused IT security professionals. It's episode 15 of season four. Alan and I had a discussion around Insider Risk Management recently. It's a compliance solution to discover and manage potential insider risks in your organization. Here are a few things that we covered: what insider risks are and why organizations should manage them, what Insider Risk Management is, which systems can be monitored, and how Insider Risk Management allows organizations to take action on these identified risks. We've noticed that a large number of you aren't subscribed. If you do enjoy our podcast, please do consider subscribing. It would mean a lot to us for you to show your support to the show. It's a really great episode, so let's jump into it. Hey, Alan, how are you doing?
Hey, Sam. Not doing too bad. How are you? Yeah, very good, thank you. Anything new and exciting in the world of Azure and security this week? No, nothing I'm aware of from all the product teams at Microsoft. We've obviously got Ignite coming up, which is exciting. Lots of potential new additions coming out of there. Anything else that you can sort of think of? I think this week, thinking about it, Microsoft 365 Copilot was announced to go GA in November.
Okay. Interesting. Yeah, I think it's one of the first ones to do that. So it's going to be just before Ignite, I think. Nice. Yeah. It'd be really interesting to see the efficiency gains that organizations can potentially make with Microsoft 365 Copilot. Is it something that you've played around with yet?
No, I've not had the chance. I've seen some of the stuff it can potentially do, and it seems interesting. So, yeah, I think we should take a look at it, but I don't think it's something we'd purchase on top right away anyway.
Yeah, exactly. I think you've got to be able to realize some real efficiency gains, right, to make that investment, which I'm sure in certain circumstances you could definitely make, if you've got a workload that aligns well to it. It could be really interesting to see, because it's more of a productivity sort of SKU, right? It's not really in our day-to-day focus, if that makes sense. So it'd be really interesting to see some real use cases come out of that area.
Yeah, exactly. And that's probably what Security Copilot is going to do for us, make efficiencies where we can. Okay. So, Insider Risk Management. Alan, should we get started? Yeah, sure. So, Sam, what do we classify as insider risk?
Okay. Yeah. So to start off with, I thought I'd take it back to sort of the most basic part of it, really. When we think about a data risk in the organization, a lot of the time we tend to focus on technology and security controls for preventing data leakage. That could be basic security hygiene, posture management; there could be many different technical controls that you put in place. And one area that is of real focus is what we call insider risks, and this is effectively the humans inside our organizations.

I think it's probably worth pointing out to start off with that I don't really like using the term insider risk in some respects, because to me it feels like it's got connotations that it's malicious insider risk, right? But one thing worth calling out now is that a lot of insider risk comes from inadvertent activity. I'll give you an example of this. Let's say your SharePoint permissions are overly permissive and you allow people in your organization to share public links to files without anybody having to log in, which is sort of the most permissive setting inside of SharePoint. Now, if your controls are set at that level, then any user legitimately in your organization can share information, and that user behavior is sort of enticed by your permission setting there in SharePoint. So what we would say there is, it is an insider risk because a human has actually undertaken that action, but they've done it in a sort of legitimate environment because they've been allowed to do it. Now let's say you have very restrictive SharePoint settings, and at that point a user, legitimately from a business perspective, in their day-to-day workflow, needs to share a file with an external user. Let's say you've got somebody in marketing that's sharing marketing collateral externally. They might look to a Dropbox or a WeTransfer, a USB stick, any other medium to try and move that data outside the organization.

And really this is under the umbrella of data loss prevention. This is where we put security controls in to try and limit, or at least discover, these activities happening. In this episode we're not really going to dive into data loss prevention; we'll definitely have another episode on it in the future. But essentially data loss prevention is, as it says, preventing data loss from the organization. When we talk about data loss prevention, we start to feed in signals to get visibility of data leaving the organization at the edge. So it may be that you use something like Defender for Endpoint on your machines to look at and identify traffic and files leaving those devices. It could be detecting somebody plugging in a USB stick, transferring a bunch of files onto it and then exfiltrating data that way. And data loss prevention can also be in email, Teams, SharePoint, OneDrive, a wide variety of sources. Now, in your organization you may not block these activities. Some organizations go to the level of blocking USB sticks, putting data loss prevention policies in place to actually block the exfiltration of this data, trying to put up walls. Now, any one of these signals on its own may not constitute risky behavior from a user. And that is because what you can see with real insider risks is like a sort of chain, a sequence of events that leads to data exfiltration.
So it may be that somebody syncs a new folder from OneDrive or SharePoint, and then they plug in a USB stick and exfiltrate some data that way. What Insider Risk Management does is aim to correlate these events together. So you might have a DLP policy that would raise an alert on files being sent via email. But if you did that for everybody all the time, that could be very noisy, because there could be a lot of legitimate information leaving the organization via email. But you might also want to highlight when sensitive information is leaving the organization; let's say you have sensitivity labels and you want to make sure that no sensitive information is leaving. So insider risk is, I would say, like an umbrella tool above DLP. It can utilize DLP policies, and it's got its own sort of policy creation engine, I'll call it, built in. But essentially what it's doing is monitoring for user activity, ranking that activity on a risk scale, piecing it together and then raising alerts where applicable. So you're sort of piecing together multiple bits of data into one area.
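To make that correlation idea concrete, here's a toy Python sketch of cumulative risk scoring over a sequence of user events. This is purely an illustration of the concept, not Microsoft's actual scoring engine; the event types, weights, sequence bonus and alert threshold are all invented for the example.

```python
from dataclasses import dataclass

# Toy model of "correlate signals into a cumulative risk score".
# All values below are invented for this illustration.
WEIGHTS = {
    "sharepoint_download": 1.0,
    "onedrive_sync": 0.5,
    "usb_copy": 3.0,
    "external_email": 2.0,
}
SEQUENCE_BONUS = 5.0    # downloads later followed by a USB copy look riskier together
ALERT_THRESHOLD = 25.0

@dataclass
class Event:
    user: str
    kind: str        # one of the WEIGHTS keys
    item_count: int  # how many files were involved

def risk_score(events: list[Event]) -> float:
    # Each event contributes weight * capped volume, so one noisy event
    # can't dominate; the cap of 10 is arbitrary for the example.
    score = sum(WEIGHTS.get(e.kind, 0.0) * min(e.item_count, 10) for e in events)
    kinds = [e.kind for e in events]
    # Crude sequence detection: a SharePoint download with a USB copy after it.
    if "sharepoint_download" in kinds:
        after = kinds[kinds.index("sharepoint_download"):]
        if "usb_copy" in after:
            score += SEQUENCE_BONUS
    return score

# A single OneDrive sync alone wouldn't cross the threshold,
# but the full sequence of events does.
timeline = [
    Event("sam", "onedrive_sync", 1),
    Event("sam", "sharepoint_download", 200),
    Event("sam", "usb_copy", 500),
]
score = risk_score(timeline)
if score >= ALERT_THRESHOLD:
    print(f"raise alert for sam (score {score:.1f})")
```

The point of the sketch is the shape of the problem: individual signals are cheap and noisy, and it's the weighting and sequencing of them per user that turns raw activity into something worth alerting on.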
Okay, cool. So you talk about insider risks being users either potentially accidentally leaking or sharing data, things like that, but it could also, the more frequent one, be a malicious act to exfiltrate data, maybe with them leaving the organization as a trigger, or things like that.
Yeah, so there's definitely two sides to it. And I think when I talk to customers about it, there's a big skew towards malicious risk, right? Disgruntled employees, leavers from the organization, and specific events in an employee's tenure at an organization which could put them at a higher risk. They could also be a more sensitive user in the organization; they could be like a head of finance or something like that, who primarily deals in more sensitive information, so their risk scores and modifiers are different. But a lot of the time we do see a lot of inadvertent and accidental insider risk and data leakage in organizations. Who enforces the permission settings in SharePoint and OneDrive for sharing? How do we know that our RBAC model is right and that our DLP policies are in scope for all users? Insider risk really sits a level above that, sort of watching many different signals to try and identify those types of patterns.
Okay, so I think we kind of talked about this a little bit, but how does Microsoft Insider Risk Management help with these risks?
So obviously it's giving you that visibility, we've spoken about that. But what it's also doing is giving you a system to take action on those identified risks. It's one thing to actually be able to identify and correlate this data, but then when you're actually investigating an insider risk, there's a process that may need to be followed internally; maybe there are different stakeholders that need to be engaged on it. So it's also giving you sort of an ops process and tooling to take you through that, basically step by step.
Okay, cool, so that sounds really good as well. So it's not just alerting you on a possible insider risk, but it's also helping you with going through that process of doing the reporting, investigation, et cetera. Okay, so how do you define policies inside Insider Risk Management?
Okay, so it's probably worth calling out that there are some inbuilt predefined templates for policies, which are a really great starting point. Let's talk about a few of those scenarios; they're quite self explanatory.

Data theft by departing users. Effectively what you can do is connect your HR system. I won't go into the details of that because it can get a bit technical, but effectively you can connect your HR system and basically tell Insider Risk Management when certain events are happening in a specific user's case. So as an example, you can feed in from your HR system when a user is departing the organization. You can also identify the day that they handed in their notice, as an example, and then you can tell Insider Risk Management the day they're actually leaving the organization. That won't necessarily fire loads of alerts on its own, but it's going to feed into that risk model. If we talk about malicious insider risk for this one: this user has never shared anything on a USB stick in all the time they've been at the organization, and then suddenly they start downloading gigabytes' worth of information from SharePoint and putting it on a USB stick. I don't really want to go with a negative, scary angle, but I'm just trying to give you an example of actual data theft by a departing user. It's probably worth calling out now that you do really need to think about any regulatory risks or user privacy risks, sorry, not risks, regulatory compliance sort of initiatives that you need to follow, or anything that's inside your privacy statements internally, because you are effectively correlating a bunch of private signals on a user and you are looking at that. There is another tool which I'm not going to talk about today, which is very closely linked because it's another part of insider risk, called communication compliance. I will do an episode on communication compliance, but really that is looking at actual communication, so it's a bit deeper in terms of a user's privacy; this might be one-to-one messages between people in Teams and things like that. Insider Risk Management is really looking at data leaving the organization, not necessarily text between two users or something like that.

Data leaks. This is sharing, or DLP kicking in, effectively, to see what data is leaking outside of the organization. And then you've also got sort of some modifiers on top of that. You've got priority users; I've mentioned this briefly in passing, you might assign different users as being priority users. It might be your C-suite, your heads, your management, people that would generally tend to have access to more sensitive information because of their role. It might be that you have a product team that deals with intellectual property, so you might put those as priority users as well, because if you're a product company, they have direct access to the intellectual property you're building in house.

Security policy violations. This is basically feeding in signals from Defender for Endpoint, looking at security policy violations on machines. So the compliance status of machines: making sure that machines are patched, trying to understand any vulnerabilities that are on those machines, and feeding that in, because there could be a correlation between people with out-of-date machines; maybe they've been breached, maybe their identity is being used to exfiltrate data. That's another common scenario as well.
You might have an insider risk that's not actually driven by the actual user themselves, so feeding in information from Defender for Endpoint is important. There are a couple that I've never really looked at. One is patient data misuse; that's not really a scenario that I've actually had, and it's still in preview. And then there's risky browser usage as well now. I think that's feeding in signals from block lists in Defender for Endpoint, about hits to your block lists from a web filtering perspective. You can start off with those templates. You can also feed in alerts from your own custom DLP policies; you can base a policy in Insider Risk Management on a DLP policy if you would prefer to add your own metrics in, because it might be that you want to alert on different types of information, or labeling, or sensitive information types, trainable classifiers, things like that as well.
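Since the HR connector came up in the departing-users template, here's a minimal Python sketch of the general pattern Microsoft documents for it: register an Azure AD app, acquire a token, and POST a CSV of departure records to the connector's ingestion endpoint using the job ID shown in the compliance portal. Treat the scope, CSV columns and endpoint shown here as assumptions to verify against the current HR connector documentation, not a drop-in script.

```python
import msal
import requests

# Values from your own tenant / connector setup.
TENANT_ID = "<tenant-guid>"
CLIENT_ID = "<app-registration-guid>"       # app registered for the connector
CLIENT_SECRET = "<client-secret>"
JOB_ID = "<connector-job-guid>"             # shown in the portal after creating the connector
SCOPE = ["<connector-resource>/.default"]   # assumption: use the resource from the docs

# Assumption: CSV schema for a resignation record, per the HR connector docs.
csv_payload = (
    "EmailAddress,HRScenario,ResignationDate,LastWorkingDate\n"
    "adele@contoso.com,Resignation,2023-10-02T00:00:00Z,2023-10-31T00:00:00Z\n"
)

# Client-credentials flow: the connector runs unattended, so no user sign-in.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=SCOPE)

# Assumption: ingestion endpoint used by Microsoft's sample connector scripts.
resp = requests.post(
    f"https://webhook.ingestion.office.com/api/signals?jobid={JOB_ID}",
    headers={
        "Authorization": f"Bearer {token['access_token']}",
        "Content-Type": "text/csv",
    },
    data=csv_payload,
)
resp.raise_for_status()
print("HR records submitted:", resp.status_code)
```

In practice you'd schedule something like this to run after each HR system export, so resignation and termination dates keep feeding the risk model automatically.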
Okay, cool. That sounds like a lot there. And if I remember, even though you've got those policies, there's also quite a lot of triggers, are they called triggers? Triggers and data points that you can use as well, isn't there, to identify those types of activities?
Yeah, and it's probably worth saying that the amount of customization you can do on these policies is pretty bananas, in a good way, I'd say. I see really good benefit in just the out-of-box default settings. I'll talk a little bit more about that when we get to alerts, but the best way to describe it at a high level is that there are modifiers and different properties you can set on these policies to tweak how and when they fire, because with a lot of these tools, especially around DLP, they can become extremely noisy. And that's something that I think organizations are a bit worried about. I've got a quote that somebody once told me: it's like, well, it's great that I'm getting all these alerts and all this discovery, but I'm going to need to hire somebody just to manage the output of it. Right. So it is probably worth saying that you should carefully consider the number of policies that you enable, and also how you tune those policies as you're creating them. I don't know about you, Alan, but my sort of feeling on it is that you will need to refine those policies, potentially multiple times, to get them to a place where you're comfortable that you're not getting false positives.
Yeah, definitely. Even with some of the other products, like Microsoft Defender for Cloud Apps, there is tuning in that as well, because it can be noisy initially. Because, like you said, you might have a scenario that is valid use of sharing files, things like that, which might cause false positives. The baseline is good to start with, but then it's tweaked to the organization's uniqueness, I guess, is one way to put it.
Yeah, and that's right, because all organizations are different. Let's say you were a design studio, and all you did was create IP for clients and send it to them on OneDrive, right? That's very different from, what's a good example of people that wouldn't share many files? Maybe you're an accountancy practice, and you don't share accounts directly with customers. Maybe you've got a SaaS application that you do all of that in and it's all secure, and it would be weird for somebody to download the accounts of a customer and put them on a USB stick, as an example. And it may be that just using DLP for that isn't appropriate for the tracking of that.
Yeah. And there'd be some similarities in different industry verticals and things like that, but again, there's still another layer of uniqueness. Yeah, exactly. Okay, so if a risk is identified, how do we investigate it?
Okay, so the first starting point for any risk that's identified is for an alert to be triggered. And an alert is, I suppose, no different to any other alert in the sort of Defender world. An alert will be raised, and it's probably worth calling out that you initially need to triage that alert. When we're talking about triaging, we're talking about giving feedback to the tool about how you want to progress that alert, because one thing that Insider Risk Management does is tune itself based on the feedback that you give it about specific users. So, as an example, Sam from marketing shares loads of information all the time; alert after alert gets triggered because all this information keeps getting shared. What you can then do is effectively dismiss that alert, and that will feed back and tune things for that specific user.

And what's really great about the alert interface is you get a list of alerts and you can drill into one, and it shows you all of the risk factors which have accumulated to reach the decision that an alert should have been raised. So you'll see exfiltration activities. Let's say a user had 2,000 exfiltration activities: they copied 500 items to a USB stick, they downloaded 200 items from SharePoint, and then, doing my maths, they emailed about 1,300 files to external recipients. But any one of those might not be enough to trigger an alert; it might be the accumulation of those over and over again. So it might be that the copy to the USB drive was the last thing after they synced these files, and it's really going to show you the sequence of activities that have led up to that.

And when we're talking about roles and responsibilities, because we haven't actually talked about this, a lot of the time when organizations are investigating insider risk, there are multiple stakeholders involved. There's IT, SecOps and Compliance actually looking at these signals, and maybe they're initially triaging it. I've had an example where somebody triggered an exfiltration activity, and I was on a call with an IT admin, and it's like, oh, well, that triggered because we reset their laptop and they resynced the whole of their OneDrive; their whole marketing folder came down. And then they did share some files externally, which is just normal for that user. But those two things combined look a bit odd for a user. So you are going to see those cumulative exfiltration activities happening, you're going to see the information about the user, and it's going to rank it with a score out of 100 for how severe the system thinks that alert is. So you're going to get all of this information sort of displayed to you right away.

Then you can start to look at the Activity Explorer. What the Activity Explorer does is give you a timeline; it's a bunch of dots with lines, quite an interesting sort of graph, the way that it's presented. What you can then do is look at the activity of a user, with everything in and around the risky activity as well. So you can really drill in to see what a user has been up to in and around the risky activity, because it might be that the folder that was synced was a non-commercial marketing folder, and actually it's fine for them to share that information. Or it could be that they synchronized a bunch of confidential information and exfiltrated it that way.
You can also look back, I won't go into all of the areas, but you can also look back into the history of a user, at how many other alerts they've had as well, and investigate those alerts that have previously happened, even if they've been dismissed in the past. So you can start to really see a history of that user and really get context: okay, well, this user has triggered these a few times and we've sort of dismissed them as false positives, but now it's starting to become a pattern. We're seeing exfiltration activity every time the accounts are run, every single month. Maybe you wouldn't let that go on for months, but you can start to correlate those events together. I won't go too deep into it, because I can't really explain it that well, to be totally honest with you, but there are a few different ways that you can investigate those alerts.
Okay, cool. So there's definitely a lot of tooling there to help with that process. And like you said, I can't remember whether it was this product or if it was communication compliance, but I think, can you anonymize the users that have initially triggered it, so that you can do some initial investigation?
You can do that for both sides, communication compliance and insider risk. Yeah, so that's a really good point to call out. It might be that you don't want the person that's responsible for this system to know the specific users that are triggering these alerts. And that could simply be because it could be themselves, or it could be somebody else in their team. So you can pseudonymize their usernames so that you don't actually see their real names until a case actually gets opened and you bring your stakeholders in to take it to the next stage.
Cool. Yeah, that sounds really powerful as well, because like you said, if it comes up for yourself, you might just dismiss it. Yeah, exactly. Okay, how do we take action then on an identified risk?
Okay, so you've had a policy trigger, you've got an alert, you've triaged it, you've basically said, okay, it's ready for us, and you've investigated it. You can then take action. It's probably worth talking about the creation of the case, because the case is really what turns an alert into a working area. And when you start this, you can feed these actions into eDiscovery; you can also feed them into Sentinel, into your SIEM, basically to raise these alerts as they come in.

So, taking action: what we then get is a case dashboard, so we can see the case information, and then you can assign ownership of a case to someone else. What that really allows you to do, if you're in a larger organization, is that you might be the one identifying these risks, and you might be passing it to another team to actually go on and action it. When you build a case, you then see the alerts that have been triggered into that case, and the people that have access to that case can then see all of the information that has been raised, but just in that specific area. You can look at the Content Explorer, which actually allows you to see the content that has triggered that alert. And to be fair, you can see that in the alert anyway, but you would then start to look through that content to understand what has actually been exfiltrated from the organization. And you can build case notes in there so that you can describe and document the processes that you've been through whilst investigating that case.

Contributors can be brought into the case, basically to add context and to give their input on the investigation of that user and that alert. Because it might be that, if an insider risk has been deemed genuine, you might bring somebody's manager into that case to understand what's going on. You might bring somebody from IT in to spearhead it. You might have somebody from HR, legal, compliance. If you're a smaller organization, that might just be one or two people, or maybe even one, but in larger organizations you might want representation from many different areas, especially when we're talking about data theft. You can escalate a case, which basically identifies that it needs legal review, and this is where the integration with eDiscovery comes into play, because you may then trigger other mechanisms in your organization to prevent further data loss and to really understand what that user has been doing. It might be that you've only identified one alert or one aspect of what they have been doing.

You can also run automated tasks and flows for the case. So you might send a notification to somebody's manager when that's happened. You might even notify the user themselves when they've been added to an insider risk policy, if that's part of your policy as well, for full visibility. And you might want to create a record in, say, ServiceNow or something as part of that process, so it's good to have that Power Automate integration for some automation. You can integrate Microsoft Teams with the case as well, which will create a Microsoft team automatically when an alert is confirmed, and then you can effectively all jump in there to discuss and manage it as you go through. So that might be appropriate for organizations where you've got a lot of stakeholders having to manage that.
And then once you've gone through that process, you can resolve the case. In resolving it, you're either going to say it's a benign case, it didn't actually warrant further investigation, or you can say it was actually a confirmed policy violation, which is going to feed back in at that point.
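On the Sentinel point from a moment ago: once the Microsoft 365 Insider Risk Management data connector is enabled, the alerts land in the workspace's SecurityAlert table, so you can triage them alongside everything else in the SIEM. Here's a minimal sketch using the azure-monitor-query Python library; the ProductName filter value is an assumption to verify against what your connector actually writes.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-guid>"

# KQL: recent insider risk alerts. The ProductName string is an assumption;
# check a sample row in your own workspace to confirm the exact value.
QUERY = """
SecurityAlert
| where ProductName == "Microsoft 365 Insider Risk Management"
| project TimeGenerated, AlertName, AlertSeverity, Description
| order by TimeGenerated desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=7))

for table in response.tables:
    for row in table.rows:
        print(list(row))
```

The same KQL can obviously be run straight in the Sentinel portal; the library route is handy if you want to pull these alerts into your own case-creation or ticketing automation.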
Wow, okay, that's cool. Yes, I guess as you're going through that process, you're collecting your evidence and everything. So if it does need to go to maybe a legal case for, like you said, data theft and things like that, then at least you're capturing everything you did, who was involved, and things like that. So at least then you've got that audit trail, I guess, as well, which is really great.
Yeah. And it's probably worth just calling out a preview feature that's in there, called forensic evidence, which is pretty cool. It allows you to see visually what was happening on a user's machine at the time of that data theft. So effectively, from what I believe, I haven't actually seen it in action, it visually captures clips of security violations, basically in real time, on people's machines. Wow, that's interesting.
I wanted to call it out because, well, it's kind of scary and really cool. What's probably worth noting is that you can exclude certain desktop applications and websites, and you can apply multiple levels of approval for activating the capturing feature for a user, so there are some safeguards in place. Now, I need to call out that I haven't actually used this part of it yet, but it's definitely something to also think about, because sometimes the issue you can have with insider risk is that you see all this activity and it's hard to sort of cut through the noise. Really, you're trying to pinpoint a needle in a haystack, because if you imagine a user just working on a normal day and then they just randomly exfiltrate a bunch of files, that's one activity potentially in a slew of other activity as well.
Yeah, cool. Okay. I think I'll have to go and investigate that myself and see what that's like. Okay, so how much does it cost? How do you license it? So E5 has it included, and let me just get this right, it's also included inside the E5 Compliance add-on, is that right, Alan? I think that's correct. And I believe there's a separate, is there a separate SKU for insider risk as well? Yeah, Insider Risk Management on its own. Yeah. But it gives you a few other bits as well.
Right, okay. Yeah. Because it's also going to give you things like communication compliance, right? Because as I said before, there are a few different tools in the insider risk sort of sphere of tooling.
Yeah, exactly. So that's cool. Okay. So it's like some of the other products from the security side: you can buy them as part of E5 Security, or you can buy them sort of in their smaller SKUs as well. So that's really good as well. Okay, so is there anything else, Sam, that you can think of that we might have missed?
Not really. I think the sort of technical complexity is in the policies and the tuning of those policies. I would say the HR connection can be a little tricky, I'll call it a little tricky; there is definitely some integration that you need to do there. And it's probably worth calling out that I'm a real fan of Insider Risk Management with Defender for Endpoint in play as well, because without Defender for Endpoint you can sort of start to get sequences forming, but you can't really see the full thing sometimes, if that makes sense. And we saw that with the policies: there are security policy violations, there's risky website activity, there's other device-related policy and activity that's flowing in there, and that really starts to give more accurate sequences. Because what you don't really want to do is just have the viewpoint of, say, SharePoint, OneDrive and email, because you've effectively got a gaping hole, which is your endpoints, right, that you can potentially exfiltrate data from. So that's definitely worth calling out. My gut is that it's one of those tools that's really great when it's connected with other tools inside of Microsoft's ecosystem. But if an organization is at the E5 sort of level, or they're purchasing E5 Compliance add-ons, you could probably argue that they're quite advanced in Microsoft's ecosystem already, right? Because when you're up at those levels, to make them justifiable to the business, you really do need to consume them. You need to consolidate your other tools and replace them with better-together, all-in-one solutions from Microsoft. So I think once companies and organizations are at that level, it's kind of a no-brainer to configure these policies, tune them, and have that level of continuous visibility that it gives you.
Yeah, definitely. And I think the only other part of sort of the deployment of it is probably that business process of how to manage the incidents. You may have something similar in place already, but then it's just applying it to this, isn't it? Yeah. And we've seen some sort of good technology integrations there as well, like automations with Power Automate, Teams creation, and things like that. So yeah, really good to see.
Okay, so are there any of our other episodes that kind of relate to this?
I had to go back in time a bit, back to season two, for anything that was even close. Because it's Let's Talk Azure, I'm including these episodes because the Purview suite, I believe, is hosted on Azure, right? So I'm calling it: it's all one. The only thing that's remotely close to this one, I think, is information protection. We did an episode way back in season two, episode twelve, so definitely go and check that out if data security and privacy is a focus of yours.
You probably need to update that one, I expect. I think we probably need to do a recap; a lot's changed since then. Exactly. I did think about that the other day: when all these new things happen, do we retire episodes? I don't know how that should work, but it'll be a cycle, I assume. Alan, it's your episode next week, so what are you covering? So I'm going to dive into Intune again. I think it's probably hosted on Azure, so we can talk about it.
It's not AWS, is it? But, yeah, I've been doing some episodes on bring your own devices, things like that, and zero-touch deployments, but I think it's worth probably talking about modern management for Windows, and how you can, in most cases, move away from Group Policy on Active Directory if you've still got hybrid-joined devices, and move to managing via Intune, I think. Yeah, great. Yeah, that sounds good.
Okay. So, did you enjoy this episode? If so, please do consider leaving us a review on Apple or Spotify; this really helps us to reach more people like you. If you have any specific feedback or suggestions, we'd love to hear from you. We have a link in our show notes, so get in contact. Yeah. And if you've made it this far, thank you very much, and we'll catch you on the next one. Thanks all. Bye.