Welcome to the Azure Security Podcast, where we discuss topics relating to security, privacy, reliability, and compliance on the Microsoft Cloud Platform. Hey everybody, welcome to Episode 44 and welcome to 2022. This week we have the full gang, we have myself, Michael, we have Sarah, Gladys and Mark. We also have a guest, Jess, who is here to talk to us about, frankly, just some of the boring security stuff that's really important.
It's not necessarily the shiny objects, but we'll leave that until we get to Jess. But before we get to Jess, let's talk about the news and some of the stuff that's front and center on people's minds. I'll kick things off. A couple of things really took my interest over the last few weeks. The first of which is in Azure Key Vault, we now have automatic key rotation in public preview. Now, I want to caveat this with something critically important.
I read a blog post some time ago about, please be pedantic with your words when you're talking about cryptography. That is no less true when you're talking about key rotation, because you have to know which keys you're rotating. The way Azure Key Vault works in this scenario is it is rotating key encryption keys, which is generally required for compliance requirements. The chances are good it is not going to be rotating data encryption keys.
That's a whole other ball of wax, that's a very difficult topic. Not many products actually support that very well. That being said, Azure SQL DB with Always Encrypted actually does support data encryption key rotation. This is available now in public preview, it's great to see it. You can basically say, I want those key encryption keys rotated every 12 months or something to be in compliance.
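As a back-of-the-envelope illustration of what a 12-month rotation policy is checking, here is a small Python sketch. To be clear, the `key_needs_rotation` helper and the simple age check are mine for illustration, not Key Vault's actual implementation, which is configured as a policy on the key itself:

```python
from datetime import datetime, timedelta, timezone

def key_needs_rotation(created: datetime, rotation_interval_days: int = 365) -> bool:
    """True once a key's age meets or exceeds the rotation interval,
    mimicking a 'rotate every 12 months' compliance policy."""
    age = datetime.now(timezone.utc) - created
    return age >= timedelta(days=rotation_interval_days)

# A key created two years ago is overdue under a 12-month policy.
old_key = datetime.now(timezone.utc) - timedelta(days=730)
print(key_needs_rotation(old_key))  # True
```

The point of the feature is that this kind of check, and the rotation it triggers, happens automatically in the service instead of in a script someone has to remember to run.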
But again, I want to point out, most of the time this is going to be key encryption keys, not data encryption keys. Next one is in Azure Storage. We now have attribute-based access control conditions in public preview. This is actually really cool. You can actually put a rule on say a blob store that says, if someone has these attributes essentially in their OAuth token, then allow access.
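To illustrate the idea of a declarative, attribute-based rule on something like a blob store, here is a toy Python sketch. The `is_access_allowed` helper and the attribute names are hypothetical and only mimic the concept; Azure's actual ABAC conditions use their own expression syntax on role assignments:

```python
def is_access_allowed(claims: dict, required_attributes: dict) -> bool:
    """Deny by default: allow only when every required attribute appears
    in the caller's token claims with exactly the expected value."""
    return all(claims.get(k) == v for k, v in required_attributes.items())

# A declarative rule: only high-clearance finance users may read the blob.
rule = {"department": "finance", "clearance": "high"}
print(is_access_allowed({"department": "finance", "clearance": "high"}, rule))  # True
print(is_access_allowed({"department": "finance"}, rule))                       # False
```

The key design point is that the rule is data, not code, so it can be attached to the resource and evaluated per request against whatever is in the caller's authentication context.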
So what you're basically doing is having declarative rules based on the contents of someone's authentication context. This is really great to see; some customers I've worked with have been asking questions about attribute-based access control rather than role-based access control, or RBAC. So this is a welcome addition to the stable. The last one is, and I'll be honest with you, I've never come across a customer who wants this, but apparently some people obviously do.
Again, in Azure Storage, you can now access a storage account from a virtual network and subnet in any region. Historically, you had to be in the same region. So if you're accessing, say, storage accounts from, let's make something up, an Azure Function, that storage account and the Azure Function had to be in the same region. That is no longer the case. That is all I have in the news department. Hello, everyone, and happy new year.
Like many others in Microsoft, I took a very long break for the holidays, so I do not have a lot of news to share. However, I found a few good pieces of information that I think will be helpful to many of our listeners. First, the Microsoft Security Community is continuing to present many free live presentations about capabilities within our cloud services. For example, on January 12th, the Microsoft Defender for Cloud team will be introducing Microsoft Defender for Containers.
On January 19th, the Microsoft Sentinel team will present the present and future of user and entity behavior analytics in Microsoft Sentinel. On January 20th, Microsoft Defender for Cloud will be presenting what's new in the last three months. In February, there are several other presentations, especially from Microsoft Sentinel. One is Becoming a Jupyter Notebooks Ninja, that's February 3rd, and the next one is Automate Microsoft Sentinel Triage with RiskIQ Threat Intelligence, that's February 10th.
For details and registration, please go to aka.ms/SecurityCommunity. And if you want to see past recorded videos, please go to aka.ms. As many of you that follow me on LinkedIn have seen, I post all the time about all the free training, webcasts, podcasts, and other Microsoft training that we provide. We even have websites that provide free training. As you know, we need more people with security backgrounds to help secure our customers.
So here is a way to gain some of the knowledge needed. The next item I wanted to talk about is a blog that Stuart Kwan published in early December, talking about Azure AD Custom Security Attributes in ABAC, or Attribute-Based Access Control. As I mentioned in previous podcasts, I am really excited about the capabilities that ABAC brings, since it adds more capability to extend user attributes to do verifications.
For example, in some scenarios, you may need to store sensitive information about users in Azure AD and make sure that only authorized users can read and manage this information. Or you may need to categorize and report on enterprise applications with attributes such as business unit or sensitivity. As this becomes available, I think it will extend the key-value pair verification needed to control a lot of different access in the digital estate.
So if you get a chance, take a look at that blog, since it is very informative. Actually, all the Stuart Kwan content that I have seen is really, really informative. I always love the authentication basics videos that he posted on YouTube. So if you haven't seen them, take a look at them. Last, I wanted to mention another blog named Simplify Your Identity Provisioning with These New Azure AD Capabilities.
With the updates described, organizations will now be able to allow password writeback from the cloud when using Azure AD Connect cloud sync, provision to on-premises applications, verify their System for Cross-domain Identity Management, or SCIM, provisioning endpoints, and much more. So if you have a chance, just take a look at that blog. Big thing on my radar here is the cyber reference architecture. I threw a quick Twitter and LinkedIn post on this and got more response than I expected.
I think it was like a quarter million feed views or something like that on LinkedIn. So very popular. Made some changes to the cyber reference architecture and posted it up. A couple of big ones are adding SASE, or Secure Access Service Edge, S-A-S-E. So we have a section in there explaining SASE and which Microsoft capabilities map to that framework. We also added a zero trust transformation journey.
So think of this like how do we get from a flat network, over time, five to ten years, this kind of history at Microsoft, to a sort of full on zero trust journey. And what are the different stages, the priorities, what you do first. It's got some nice little morphs and transitions there to show you visually how those things change over time. And then the defender for IoT and OT and all the IoT and OT attacks were added to the attack chain diagram.
I added a couple of modifications to the people diagram and actually added a new people diagram that kind of aligns different roles to a plan-build-run, governance-prevention-response, identify-protect-detect-respond-recover kind of framework. So it's a little different view on roles and how to map security to standard business plan-build-run stuff.
A bunch of zero trust updates: added the zero trust commandments, the latest version of the RaMP, the rapid modernization program, on what to do first, next, and after that as you kind of modernize your stuff with a zero trust strategy. And then some tweaks around threat intelligence and whatnot. A big one that people were asking for was, hey, you have new product names, can this be updated? So I get that a lot after our marketing department decides that they found better names.
So that is all taken care of in there. So that was the thing that was big for me in the last month or so. So before we get to Jess, let's talk about the elephant in the room. I mean, obviously this broke early to mid December last year. And that is Log4j. I know a lot of our customers are still struggling trying to come to grips with, you know, what instances they have of Log4j.
I'd like to give you my sort of two cents on the issue before Mark and Sarah, if you have any thoughts. It's really interesting looking at this bug. Actually, there was a series of bugs. The first one was basically through logging, you could actually manipulate the inputs to JNDI, which is the Java naming and directory interface. And what made this interesting from my perspective is they really sort of violated a principle that I've believed in for 20-something years.
And that is that all input is evil until proven otherwise. In fact, in Writing Secure Code, I should have a copy of the book in front of me, I was actually looking at it earlier today, Chapter 10 is literally All Input Is Evil. And that's ultimately the problem that this particular bug had. They accepted input from an untrusted source and then used that to build essentially a privileged operation.
Other input trust problems are things like SQL injection, XML injection, LDAP injection, directory traversal, all these other kinds of vulnerabilities where you take some input, you don't validate it for correctness, and then you use that to perform some kind of sensitive operation. And that's ultimately the problem here.
I've often joked when I've been teaching developers around Secure Software Development, we can talk about all different classes of vulnerability, but ultimately the one lesson you have to learn as a developer is that all input is evil. You need to make sure that any data that you get from an untrusted source is validated for correctness. And I mean validated for correctness, not validated for badness, because that assumes you know all the bad things and no one is that clever, believe me.
You need to check for validity and making sure that it's correct. And if it's not correct, you reject the request. It's really that simple. And this ultimately is just an input trust problem. The interesting thing is that there was actually a black hat talk in 2016. I've got a link in the show notes. And they actually say the exact same thing. It's like basically don't use untrusted input for calls to JNDI, the Java naming and directory interface. I mean it's there in black and white.
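The "validate for correctness, not badness" rule can be sketched in a few lines of Python. The allowlist pattern here is just an example policy of my own, not a universal one; the point is that anything not matching a known-good shape is rejected outright, so strings you never anticipated never reach a sensitive operation:

```python
import re

# Allowlist: a username must start with a letter and contain only
# letters, digits, underscores, or hyphens, 3 to 32 characters total.
USERNAME_PATTERN = re.compile(r"^[A-Za-z][A-Za-z0-9_-]{2,31}$")

def validate_username(value: str) -> str:
    """Accept only input matching the known-good pattern; reject all else.
    No attempt is made to enumerate 'bad' inputs."""
    if not USERNAME_PATTERN.fullmatch(value):
        raise ValueError("rejected: input does not match the allowlist pattern")
    return value

print(validate_username("alice_01"))      # accepted, echoed back
# validate_username("${jndi:ldap://x}")   # would raise ValueError
```

Notice there is no list of forbidden substrings anywhere; a JNDI lookup string fails simply because it is not a well-formed username.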
So that's all I want to say. I think it's part of a much bigger problem that we still see across the whole industry where people are blindly believing that input coming in is correct. That is not the case. So if you've got code where you blindly accept input and you don't validate it for correctness, there's a bug in there waiting to happen. If you're lucky, the application just crashes. And if you're unlucky, you've got a problem like the Log4j problem.
The angle that sort of struck me on this, and sort of channeling a little bit of my inner Jess here, is it just reinforces how important the difficult stuff is, like inventorying and patching. And, you know, I hate to say the word SBOM because it's not quite ready for the scale and prime time, but like those kinds of things, just knowing what you have to be able to take care of it is so critical.
And recognizing that there are limitations in the tooling and technology available today. But one of the rules that I sort of think about from more of an infrastructure perspective is if it's easy to do something, you're going to have a sprawl problem. So if it's easy to copy data, you're going to have a data sprawl problem. If it's easy to just add in code and it's just easy to do for anyone, you're going to have a code sprawl problem.
If it's easy to create a VM, you're going to have a VM sprawl problem. That's just how it goes. If it's easy for people to do things, great, you just unlock business value. You also just unlock the sprawl problem that could impact security and IT and other things. And so that's kind of the thing that I really took away from this is we have to be ready for that is that there's these low friction, easy copy. There will be everywhere things. And that is going to have a negative corollary.
So that was kind of my lesson learned out of the chaos. And I feel for all the IT folks out there that had to deal with this and especially the waves of updates and having to sort of go back to what you just did. Yeah, that caused my inner IT person and my old ops memories to just cringe. Michael and Mark have given their kind of 10 cents. But I guess I'll just talk about because everybody knows unless this is the first time you're listening to the podcast that my baby is Azure Sentinel.
And of course, one of the things we were being asked straight away by customers is, well, how can we use Sentinel and how can we use Microsoft tools to detect this in our environment? Now we'll put a link in the show notes. Everyone was very busy before Christmas, and it has been updated since then, updating all of our tools, tooling and products where we could to help people detect things.
So in Sentinel, we've got a solution which is full of detection queries and hunting queries, Defender for Cloud and Defender for Endpoint also have things too. So yeah, it's been pretty busy. And as Mark said, I also had a lot of sympathy for the operations people who will have had a really rough couple of weeks with this. And of course it happened just before the holidays.
So for those of you out there, and I'm sure there are at least some, that had their holiday plans affected, I really hope that you get a bit of a rest now in the new year, now it's all calmed down a bit. And fingers crossed, we'll have a little bit more time before the next security thing happens, because that is the circle of life. So it'll happen again with something, as we all know. Here's one more thing.
I wasn't going to talk about this, but the more I think about it, the more I need to talk about it. One thing that drives me a little bit bonkers. So I look at the, you know, I was going through the patches today for log4j. And one thing I've never understood is, so I look at one of the patches for log4j and it fixes like almost a dozen things, including changing port numbers from like default 514 to 512. And again, I don't understand all the innards of log4j. I've never actually used it.
So I realize I'm not coming at this from a level of knowledge when it comes to log4j. But the Apache folks do this a lot. They're issuing a security update, but it actually fixes a whole bunch of other stuff too or changes the way some features work too. I honestly don't like that. My preference is if you've got a security patch, it's like just fix the problem or the related security problems.
Don't add features or change other features because if there's a regression in there, like for all we know, someone may have been using port 514. And again, I don't know. Now you just change the default port to 512. So now your application breaks. So you deployed a critical security update and your application breaks because they change something else on the patch as well. I honestly don't like that. I think a security fix should be as surgical as humanly possible.
I mean, if there are other security vulnerabilities, sure, fix them as well. But I don't think you should change functionality otherwise. And that's just my opinion. It's been a bugbear of mine for probably close to 20 years. And this is actually the first time I think I've actually mentioned it publicly, let alone on a podcast. But anyway, just my opinion. Yeah. I mean, I definitely understand where you're coming from, especially on something as high profile as this.
You definitely want to be clean and surgical. If you're talking about a routine thing where you have to deliver a whole bunch of stuff every month anyway, it's a little bit of a different story. But when you've got a special thing that you know half the world is going to be applying and the other half is going to regret not applying. I tend to agree with you on the cleanliness, but I think that we have to differentiate like this kind of emergency from kind of a routine thing.
Because the reality is it's like maintenance. You got to change the oil in your car. That has to be just a normal part of how things go. You have to selectively figure out what you bundle and what you don't. But there's a decent volume because software is complicated. I'm not a fan of analogies either. Just saying, you know. But anyway, my view has always been that if you've had to make an argument by way of analogy, then your argument's weak. But anyway, I'll just leave it at that.
Anyway, let's get back on topic. Okay. So I get to introduce our guest this week. Our special guest is Jess Dodson, who I know from Australia. Jess, do you want to introduce yourself? And tell us how long you've been at Microsoft and what you do. Absolutely. So my name is Jess Dodson, as Sarah has said. I'm a senior customer engineer. I've probably been with Microsoft now. I'm coming up to three years this May, which is very, very nice. I've probably been doing tech now.
I'm coming up to 20 years, which is also making me feel very, very old. So I started off as a sysadmin and kind of slid from operations into security. And I kind of like having that background of operations from a security perspective, because I think it often gets missed. So Jess, the reason that we invited you on, of course, was to talk about one of your pet peeves. And I know you have done many conference talks about this in the past. So I know what they are.
But just before we dive into it, what is your main pet peeve generally about things you see day to day? Trying to get people just to do what we consider to be basic security hygiene, basic security best practice, and it's just not being done. So I know that there's been a discussion about Log4J. And for me, probably the big one that comes out of that, if you don't know what you have, how are you going to be able to protect it?
So if you don't have an inventory of your systems, if you don't know what systems you have, what operating systems they're running, what applications are in your environments, how are you supposed to be able to protect it when something like Log4J comes out? And I don't know any organization that's doing this well. And I don't understand why it is so hard for people to understand. At the same time, I do. It's boring. It's not fun. It's not sexy. It's not using new and shiny tools.
It's really monotonous. No one likes documentation. So I do understand it. But why are we not seeing it done more? You know, it's interesting you bring that up. Back in the day, you know, in a galaxy far, far away. Actually, it must have been around 2002, I think. My boss at the time was a gentleman by the name of Steve Lipner. He coined a term, which was giblets, which is basically components that you depend on that you don't actually create yourself.
And really the product that sort of raised that to our attention was SQL Server. Because when the slammer worm hit, a lot of people had SQL servers and didn't actually realize it because they were using the developer edition of SQL Server, which is essentially an embedded version of the database. You know, it's not the classic SQL servers. You know, it's essentially a stripped down embedded version.
And we see a very similar thing here with the Log4J stuff, right, is they've got this embedded library. A lot of people didn't know they had it. You know, this is what led to a lot of, you know, a lot of successful attacks because people didn't know they were even using it. And so I think you bring up a really interesting point that there needs to be better inventory management.
So you know what? We've got a VM over there that's running this and we've got a VM over there that's running that and we've got a Cosmos DB here and we've got a SQL Server there and we've got, you know, Azure Functions or, you know, whatever. It can be on any platform. It's not just, you know, not just Azure. So yeah, I think you bring up an important point there.
I mean, unless you know what you have, you don't know if you have vulnerabilities or not, and if a vulnerability strikes, then, you know, it could be really problematic. Another example I saw here was with Kubernetes, when Kubernetes had a serious issue a few years ago. It was an amplification vulnerability. I was working with a finance customer and they honestly did not know where all their Kubernetes systems were on-prem.
They could easily find them in the cloud, whether it was, in this example, AWS or Azure, but on-prem they had no clue. And all they knew is they had to go and find them quickly and then patch them quickly, but they had no idea what they had.
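A crude first pass at that kind of inventory hunt, in the Log4j case specifically, can be as simple as walking the filesystem for bundled jars. This Python sketch is illustrative only; the `find_log4j_jars` helper is hypothetical, and a filename match like this misses jars nested inside other archives (WARs, fat jars), which is exactly why real software inventory and SBOM tooling is harder than it looks:

```python
import os

def find_log4j_jars(root: str) -> list:
    """Walk a directory tree and report any bundled log4j-core jars,
    a rough first pass at inventorying an embedded dependency."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            # Match e.g. log4j-core-2.14.1.jar by filename only.
            if name.startswith("log4j-core") and name.endswith(".jar"):
                hits.append(os.path.join(dirpath, name))
    return hits
```

Even a rough script like this would have told the organizations in these stories more than many of them knew at the time; the deeper fix is keeping that inventory continuously, not scrambling to build it mid-incident.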
And I know, Mike, you're going to nail me for having another analogy here, but especially, yeah, because one of the things I always talk about whenever I'm giving like senior leadership guidance and try and slip in there somewhere is, you know, these systems are like, you know, having a fleet of cars or planes. If you don't maintain them, you're toast because I think the problem with paying attention to these kinds of things is it shouldn't be about whether it's boring or not.
It should be about whether it's important or not. And if your senior leadership doesn't care and isn't making your IT and business people prioritize proper maintenance, then how are you going to expect them to listen to the security people when the business leaders say, yeah, I'm not paying you to maintain stuff. I'm paying you to do new features, right? Like if it's not important at that level and they don't recognize the risk of it, you're going to be fighting an uphill battle every day.
And I think when we talk about the ROI on things like this, I think reactive versus proactive is what it comes down to. And I think a lot of organizations don't see the ROI in that proactive work. And we don't do a very good job of selling it either. From a security operations perspective, a lot of security operations stuff, I don't think they know how to sell that to their organizations that this is what the return would be.
So an example for me would very much be around some of the ransomware stuff that we saw going around. Proactive maintenance would have saved some of those organizations millions of dollars when it came to DR and BCP and backup and HA and yet because it wasn't put in place, they ended up having to spend millions of dollars in ransom instead.
So Jess, I have a question for you, because I've made this observation that there's sort of like this magic line an organization crosses when they actually get a SOC manager, like someone in the management team, the leadership team, that can advocate for, hey, it's going to cost a bunch more from a bunch more incidents if y'all don't start doing patching and all these other things. Like, it costs me money and I'm going to have to take it out of budget and headcount and whatever.
Like I've seen that there's that sort of magic change point when organizations make that commitment to security operations or a SOC or whatever you want to call it. I mean, do you see the same thing there? 100%. I think bums on seats, which is what I call it, is more important than tools because a lot of the time it comes down to people. It comes down to who is able to look at the things.
And if you have particularly management who is advocating for all of the operation staff and from a security perspective and from that proactive perspective, if you have someone who's willing to do that, not only will you get the ability to do that proactive work, you're more likely to get more headcount as well, which gives you the ability to do the things that you want to do. If all you're doing is fighting fires, you're never going to get the chance to do any of that proactive work.
So it's just going to go by the wayside. And a lot of that comes down to not having enough bums on seats. That's literally what I think it comes down to. And I've spoken about that one before, that all of the tools in the world are fantastic, but if there is no one to look at it, if there is no one to utilize those tools, what is the point? So inventory is like one of the first things that you brought up. What other areas do you think customers need to spend more time and more focus on?
For me, I think some of the other big ones, and I feel a little bit bad because I know that a lot of the people who will listen to this are going to be technical people. So I can already feel a lot of them cringing going, oh, that's me. I call it dog food, and that is you need to eat your own dog food. If you are expecting your users to do it, you should be doing it yourself.
I know, from the perspective of being the person putting in a lot of those changes from a security perspective, that if you are not willing to do it yourself, you come off as a hypocrite, saying, no, I'm making my users do this, but I'm not going to do it myself. A really good example of that is local administrative rights on workstations.
If you are ensuring that there are no local administrative rights on workstations for your users, but you still maintain local administrative rights on your standard workstation with your standard account, we shouldn't be doing that, and it's something that we really need to get better at from a security operations perspective. Right, and you're talking about it from a productivity perspective, right?
So Jim Bob, the admin, who is an admin in the environment, just doing their email and just browsing the web and doing their online banking and what have you within the organisation, they should not be an admin. Oh, 100% not. They don't need administrative rights, but at the same time, even as security operations, we don't need to either. Yes, we do still need to be able to do some administrative functions, but that doesn't mean that we should be modifying the rules for us.
If we are expecting our users to jump through hoops in order to be able to get any form of administrative rights, then we should be doing the same for ourselves. Because if we're not willing to do it, why should we be expecting our users to? Remember some years ago, I was working with a, I'm not going to say who it is, but it was a legal institution. And they'd been hit really badly with malware.
It turns out that patient zero was actually a person in reception running as admin, local admin on the machine. The attack came through that person, and because the person was admin on that machine, it was just pretty easy from that point forward. So I see this problem a lot with developers as well. A lot of developers think they have to run as admin. 99 times out of 100, you don't. There are some scenarios, but they're relatively rare.
And with automation, you probably don't even need that anyway. One of my favorites in the Windows environment is, well, I need to be an admin to debug my application. No, you don't. Well, there's a debug privilege. I'm like, yes, there's a debug privilege, but that's only if you're debugging a process that's not running under your account. It's not for debugging your own code. A lot of people don't realize things like that. So yeah, I see this a lot with developers as well.
And trying to pull admin rights from developers is often an uphill battle, but it's for everyone else's safety, ultimately. And I love developers. I do. They hold a little soft spot in my heart, but I do agree that trying to get those rights off them is really, really tricky.
I also think that particularly when we're looking at where some of our risk vectors are in our environments, Sandpit environments and development environments and proof of concept environments are where I tend to see a lot of those nasty things coming in because they're not as tightly controlled. It's dev. It doesn't matter. We don't really need to worry about it. No, no, no, no, no. You do need to worry about that.
If it is attached to your production environment, if it is using your production identity systems and it is touching any of your production data, you need to worry about them. And I see that quite a lot, particularly, and I know that Sarah will find this hilarious. When setting up Sentinel, when you connect Sentinel up and start streaming stuff in, it is often those dev tests, Sandpit environments that start flagging quite heavily. Oh, yes. I do know about this.
And I also have to say, I have seen, again, not naming any particular organizations because I've seen it in more than one during my career, some very sloppy dev and test environments, because it doesn't matter, it's dev, it doesn't matter, it's test. And we know that attackers actually go and look, specifically because of this, attackers will go and look for those environments because they know it's likely the security controls aren't as good.
And as Jess said, of course, if they all get hooked up to a SIEM, whether it's Sentinel or something else, that can generate some noise pretty quickly. So yeah, it's definitely a thing. And just to add on that developers thing as well, with the developers having high admin, I've also seen, and again, I've seen it more than once, security teams give themselves very high admin access. And when you ask and say, hey, why does the team need this access? They say, well, it's the security team.
We trust them. They need it. And of course, least privilege says you don't need it. If you don't actually need it to complete a task at any point during your job, it's access you shouldn't have. So yeah, it's not just the devs. It can be other people as well. That was just the thing I wanted to add in there.
The thing that I'm always a big fan of is, just set up the guardrails the same from dev all the way through, and then you have no surprises as you move from dev to test to prod or whatever you call your stages. Like, I've always been a huge fan of that.
And the other thing that, you know, as we were kind of building the PAW and the ESAE architecture and whatnot, we realized was kind of a nice trade-off is if you force them onto an admin workstation, it's like, well, if you want admin privileges, you need a separate workstation. Maybe I don't need them that much. So that was one of the funny things as we were developing that, that we kind of tapped into the psychology. That's a beautiful segue.
And when I look at the list of things that Jess wants to talk about, the next one is actually exactly that, privileged access workstations. So Jess, do you want to give us your thoughts on that? Privileged access workstations are something that we definitely do advocate for and we want to see more of. I'm yet to see them done to what I would consider to be, like, gold standard.
And I think that's probably the issue that I have with privileged access workstations, is that people seem to think that you have to get it right first go. And I don't think that is necessarily the case. When it comes to getting something like PAWs in, something is better than nothing. And I hate to use this quote, but it is true: the enemy of progress is perfection. There is no point in saying, I'm not going to do it until I get it perfect.
But you're better off trying to do something than doing absolutely nothing. All right, I'm going to get the popcorn out because I want to hear what Mark has to say on this. No, I'm actually in agreement with it. I might have had a different opinion eight or ten years ago. Oh my God. When we first came up with the ESAE and PAW architectures and recommendations and guidance around them, the idea of an admin desktop was not necessarily our original idea, but we did codify it and formalize it.
But we've been just trying to make it easier and easier ever since. The big update we did about a year ago now was, hey, just use the cloud to manage and secure it. It's actually going to be more secure and it's a heck of a lot easier to deploy than going through all sorts of crazy on-prem AD isolation, GPO kind of stuff. We want it to be as easy as possible and we're constantly looking for what is a good logical step in. That's one easy, but two provides a meaningful step up in security.
Always trying to, as much as we can, avoid that psychological barrier thing because, yes, we want to limit the amount of admins and use the PAW to do that if we can. But at the same time, we don't want people to go, I'm not going to do the PAW because that's too hard. And we're always trying to figure out the ramp up. I just realized a lot of people may not even know what a PAW is.
Do you want to just real quickly explain, super-duper quickly, what a privileged access workstation is and why? Yes. So PAW, privileged access workstation: effectively, the idea is that instead of using your standard-issue OS that you do the web browsing and the dangerous stuff and clicking on email links with, you actually have a separate operating system for that. You can do a separate operating system by having two physical pieces of hardware. Or you can have a separate VM.
Now, you have to be very careful on this because a host OS can control a VM that's hosted on it. So you end up having to have a trusted underlying OS tied to the physical hardware. And then your productivity stuff lives in a VM. But ultimately, it's just a separate OS where admin stuff is done versus user stuff. And actually, to Jess's point, right? In a perfect world, you would have separate devices, but it's better to have a VM running on a host and separate the jobs that way.
Your admin workloads from your non-admin workloads, versus having the same machine doing admin and productivity at the same time. Yeah. It's just drawing a line between them, a security boundary, if you will. So I just want to give you a couple of examples of this that I've seen over the last year, actually. I'm not going to say anything, but one was healthcare, one was finance.
One was a company who said that if you ever touch production from a non-privileged-access workstation, you will lose your job. That's simple, because they don't want anyone in production from their productivity keyboards, because, like you say, with email slash phishing attacks, you can't guarantee that those, air quotes, keyboards are clean. And that's the whole point of a privileged access workstation.
The second one was we're building a threat model for a customer and we have this Azure storage account. And as we go through the threat model, I'm like, okay, so what sort of accessibility does that storage account have? And it turns out it contained relatively sensitive information, but it's also accessible to the world. It didn't have any kind of IP restrictions on it. I'm like, you're kidding, right? I mean, please tell me there's some kind of isolation on this.
It turns out that there wasn't. So one of the engineers actually on the call from his laptop on the Teams meeting actually went straight into production from his laptop. And so I actually sent a message to the, essentially the sponsor of this project. I'm like, did I just see what he just did then?
He went straight into production from his normal developer laptop, and the person sent me a message back saying, yeah. I said, that's possibly worse than having the internet-accessible storage account. We need to talk about that. So we had a chat about it a little bit later, and they, you know, changed some of their policies and so on to not allow devs straight into production. So Jess, we've done inventory, eating your own dog food, least privilege and admin rights.
We've done PAWs, no need to be perfect. What else is on your, your hit list? I think this one's going to be a little bit close to your heart. And I think you told me that you snaffled this one from Mark as well. For me, logging, and I love my sticker of "collection is not detection". There's no point in collecting the stuff if you're not going to look at it.
So I have a really good example of this one, working for an organization, not going to name any names, where they had turned on file auditing at the very top level of their domain, and they were ingesting somewhere between one and two terabytes of data into their Log Analytics and Sentinel. It was costing a bucket load, and they'd been doing this inside their on-prem SIEM for years. And yet they couldn't explain to me why they needed it. Oh no, we need to have that turned on. Why?
What information are you going to get out of it? So when it comes to logging data, absolutely, you do need to collect your data, but unless you are doing something with it and you are tuning the information that you're getting out of it and making sure that what you're getting out of your logs is valuable, you're literally just paying for file storage. That's all you're doing.
Now, this one is very close to my heart, Jess, because I have seen this numerous times, that folks just, pop, they just move everything. Whether they had an on-prem SIEM or they didn't, they're just like, oh, logs, let's put them straight into Log Analytics or Sentinel or whatever. And that is a premium product, and there is a premium cost involved with ingestion.
And really, if you're not proactively using those logs for hunting or you're not reactively using them for detection, you probably do need to think about it. And I'm not one to bang on about costs and stuff. I mean, personally, I'm far too technical and I largely sort of switch off when we start talking about costs, but it is something everyone has to bear in mind.
And you do need to think about if you're not using those logs for anything, why are you ingesting them or is there somewhere else you can put them? Because we know that there are many organizations throughout the world and in different industry verticals who have some kind of retention requirement. It could be two years, five years, seven years. Seven years seems to be the sweet spot nowadays for a lot of places.
But think about, do you actually need it in your SIEM, or is there a cheaper place? Can you put it in blob? Can you put it in Azure Data Explorer? There are other, more cost-effective ways potentially, depending on how you need to use the logs. So yeah, honestly, that is a conversation I have time and time and time again, Jess. So yes, I know all about this. Also, I think "collection isn't detection" definitely sounds like a Markism to me.
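The SIEM-versus-cheaper-storage trade-off Sarah describes can be sketched with back-of-the-envelope arithmetic. The per-GB prices below are hypothetical placeholders, not real Azure pricing; check the current pricing pages for your region and tiers before making any decision.

```python
# Rough comparison of where to keep logs you rarely query.
# All per-GB-per-month prices here are HYPOTHETICAL placeholders.
PRICE_PER_GB_PER_MONTH = {
    "log_analytics_retention": 0.10,  # assumed premium analytics tier
    "adx_cold_cache": 0.02,           # assumed Azure Data Explorer cold tier
    "blob_archive": 0.002,            # assumed Blob archive tier
}

def yearly_cost(gb: float, tier: str) -> float:
    """Rough yearly storage cost for `gb` of logs in the given tier."""
    return gb * PRICE_PER_GB_PER_MONTH[tier] * 12

# Example: 2 TB of compliance-only logs kept for a 7-year retention window.
gb = 2000
for tier in PRICE_PER_GB_PER_MONTH:
    print(f"{tier}: ${yearly_cost(gb, tier) * 7:,.0f} over 7 years")
```

Even with made-up numbers, the point stands: data kept only for a retention requirement can cost an order of magnitude less outside the premium analytics tier.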
It is a Markism, and I will admit that I stole his phrase and put it on a sticker, but I don't hide that I stole his phrase. So I'm okay with it. I think I originally stole that, because it's always a chain of theft, right? I think I originally stole that from an awesome MCS consultant named John Rodriguez. That was my upstream. I have no idea where he stole it from. Well, knowing John, he definitely stole it from somewhere else.
Okay, so we've had on the list so far: inventory, eat your own dog food, least privilege, privileged access workstations, no need to be perfect, and log management and collection isn't detection. What's next on the list, Jess? So tied very nicely into logging is your SIEM, SIM, whatever you want to call it, any of your security systems, and your threat protection products. A lot of the time when I'm going into organizations, I'm helping them set that up.
So I help a lot of customers set up Sentinel, set up Defender for Cloud, set up Defender for Cloud Apps, Defender for Identity, all of those, and they're great products, but they aren't set and forget. They are systems that constantly require tuning, and it ties in very nicely to the whole logging collection, because unless you are tuning those systems and the information that they are sending through to your SIEM, you will be getting noise.
You will be getting false positives or benign positives. You will be seeing information that isn't of use to you. So you need to make sure that you are tuning them. You need to make sure that there is maintenance being done on these systems because they aren't just set and forget. So for a lot of customers that I go into, I come in, I help set up Sentinel, six, 12 months later, I come back, nothing has changed. I'm like, why haven't we added in new analytic rules?
Why are there not new log sources connected? Why haven't we tweaked these particular analytic rules, because things have changed in how you're operating or how your systems are configured? Why hasn't anything changed? So I'm definitely going to get the popcorn out for this one, because Sarah tells me that Azure Sentinel is absolutely awesome, and you're telling us that you've got to sort of feed it. So Sarah, what do you have to add there?
I agree 100% with what Jess has to say, but let me give you my perspective from a Sentinel perspective. Sentinel is awesome, but Jess is absolutely right in that it isn't entirely set and forget.
So what we do, the whole point of Sentinel is that we have lots of out-of-the-box content that you can turn on, but we always do say you should have a look at the logic in there and you should check that it actually suits your organization because they're written by our security researchers, these detections and their hunting queries, but we still can't write them completely cookie cutter. Some of them may not be appropriate.
And we're also building this out a lot at the moment in Sentinel, if you've been following Sentinel for a while: we have also now got smart tuning. So if something's generating a lot of alerts and you're marking them as incidents and marking them as false positives, you can actually see that flow down into Sentinel, which means you know that you need to go and possibly look at retuning that rule.
So as awesome as Sentinel is, we definitely make that as easy as we possibly can. It's not that you can set and forget it, and I'm sure that any Microsoft Security product person from any of our sister products would say exactly the same thing for theirs. We try and minimize that overhead as best we can, but we can't get rid of it.
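The tuning loop Jess and Sarah describe, watching how the incidents from each analytic rule get classified and retuning the noisy ones, can be sketched roughly like this. The function, field names, and thresholds are illustrative, not a real Sentinel API.

```python
# Illustrative sketch: flag analytic rules whose closed incidents are mostly
# false positives, so an analyst knows which rules to look at retuning first.
from collections import Counter

def rules_needing_tuning(incidents, min_incidents=10, fp_threshold=0.8):
    """incidents: iterable of (rule_name, classification) pairs, where
    classification is a hypothetical label like 'FalsePositive' or
    'TruePositive'. Returns rule names that fire often and are mostly noise."""
    totals, fps = Counter(), Counter()
    for rule, classification in incidents:
        totals[rule] += 1
        if classification == "FalsePositive":
            fps[rule] += 1
    return sorted(
        rule for rule, n in totals.items()
        if n >= min_incidents and fps[rule] / n >= fp_threshold
    )
```

The thresholds are the interesting design choice: requiring a minimum incident count stops you from retuning a rule off a single bad day, and the false-positive ratio keeps genuinely useful-but-chatty rules in view.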
And at the risk of triggering people who have heard this quote way too much, the way I think about it is like the Spider-Man thing: with great power comes great responsibility. The more customization, the more you can tailor it, the more you can do this, the more you can extend it, the more that you have the responsibility to do that, right? The more that you have to maintain it and make sure it's on track.
If it's a self-driving car and it steers for you, okay, and some of the XDR features are like that. But if it's something where you have to actually steer it and you get all the great road feedback and all that kind of stuff, because I'm really trying to annoy you with the analogies, like, sorry. The more that you have control over it, the more that you have responsibility to keep it on track and tuned.
And that's just it, because attackers, they don't get paid if they don't evolve their stuff and evade things. So it's a constant battle to always keep up with that leading edge of what the attackers are trying this week. So, from a practical perspective, Sarah, I'd just like your thoughts on this. The Log4J stuff that we have in Sentinel, we've obviously issued some kind of background baseline checks that are in the product.
Would we expect to see those evolve over time as new attacks or new exploits or even new vulnerabilities are found in the product? Yeah, definitely. In fact, for the Log4J solution that we released, it's already, it's definitely had at least one or two updates. And that was just as the situation was evolving. We added in additional hunting queries, some of them were tweaked a little bit.
I believe the tweaks were to do with, at least some of the tweaks were to do with the IOC list that were associated with it. And so in the Sentinel content hub, where we have this out of the box stuff, you can actually see now where something is due an update because we do keep updating things. So Microsoft is actually doing it as well where we can because even our stuff, we have to tune. Customers have to do the same thing. So yeah, definitely stuff will change.
And as we see new attacks come out, we will release new things. Sometimes we'll release brand new detections and hunting queries for new attacks. It might be that we see slightly different tactics and we tweak things. It'll depend on what happens. But yeah, even Microsoft, within the core stuff of what we do in the Sentinel product, we're always reviewing and updating things. And IOC meaning indicators of compromise. Yes. Thank you for reminding me.
And IOC is an indicator of compromise. If you're not familiar with that acronym, it's an IOC is something that we know is bad to give it the very high level. So it could be a URL, it could be an IP address, it might be a file hash. And generally, if you see an IOC in your logs, that might be an indicator that you've got something going on. It's not definitely, but it's probably something you want to look at.
I just want to go down that little rabbit hole from Mark, like your comment, and Jess as well. So IOC, and I realize we're getting sort of on a tangent here. I mean, that's indicator of compromise, not indication of attack, right? I mean, attack's easy. Indicator of compromise means that you've found something that may indicate that you've actually been compromised, not just attacked. Is that fair?
Yeah. I would say if someone's got into your logs and so therefore the IOC is in your logs, so it's got into your environment somehow, I think it's definitely worth investigating to see if you actually have been compromised because someone's obviously breached your perimeter if an IOC is turning up in your logs. Well, I mean, the thing about it is an indicator, right? And it's a matter of probability. Because we put out, everybody puts out, hey, this is part of an attack.
And there are false positives, right? And it's just a question of what the false positive rate is of that and is it a false positive or true positive? So IOC is definitely something worth investigating because, hey, this was based on a real attack and this is a sign of something. But it's by no means in my experience a guarantee that it's absolutely there. There is always, hopefully, a low false positive hit rate on those. And that's true of all products, all sources.
Yeah. And I think when it comes to those indicator of compromise, a lot of the time they do come with probability. So you actually get that this is how confident we are that this is definitely an indicator of a compromise or this is how confident we are that this is definitely a risk that you need to be aware of. I think the whole idea of assume breach and assume compromise, like it's no longer I am completely protected and I am in my fortress and nothing can penetrate.
It is: there is likely to already be something within my environment, and I have to do everything I can to prevent them from being able to exfiltrate data or laterally move or being able to obtain administrative credentials, those kinds of things. So when you see those IOCs popping up in your logs and you are paying attention to them, it's more of a, hey, you need to remember that you have to be constantly vigilant. And I think that's more prevalent now than ever before.
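The IOC-matching idea discussed above, checking log events against known-bad indicators weighted by a confidence score, can be sketched like this. The data shapes and field names are hypothetical, not a real threat-intelligence feed format.

```python
# Illustrative sketch (not a real Sentinel API): match log events against a
# set of IP indicators of compromise, each carrying a confidence score, and
# surface only matches above a confidence floor for an analyst to review.
def ioc_hits(events, iocs, min_confidence=50):
    """events: iterable of dicts, each with an optional 'ip' key.
    iocs: dict mapping IP address -> confidence score (0-100).
    Returns the matching events annotated with the IOC's confidence."""
    hits = []
    for event in events:
        confidence = iocs.get(event.get("ip"))
        if confidence is not None and confidence >= min_confidence:
            hits.append({**event, "ioc_confidence": confidence})
    return hits
```

Note the confidence floor: as discussed above, an IOC match is a probability signal, not proof of compromise, so low-confidence indicators are filtered out rather than paging someone for every hit.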
We need to just remember that there are always things we need to be looking for. All right. So we certainly covered a huge swath of topics today. We covered inventory, eat your own dog food, least privilege, privileged access workstations, no need to be perfect, log management, collection isn't detection, and everything is not set and forget.
Obviously, we went on a little bit of a wild goose chase there, but I think it's really important to understand what tools do and, frankly, what they don't do, and where the human interaction is required. So one thing we ask our guest on every podcast is, if you had one thought to leave our listeners with, what would it be? So it's kind of summing up everything.
So for me, it's tools are amazing and tools can help you so much, but they aren't going to help you if you don't have your basics right. And I know that the basics might be boring and they are time consuming and they can be very monotonous. And there's likely to be a lot of internal and political considerations in regards to it. You might be ruffling some feathers, but at the same time, you have to fight for them. And I've found that that is the way things move forward.
You need to be that change in your organization that fights for these proactive things to get done. And you can't know everything. So you have to ask for help. You have to ask for people to come on that journey with you. Because without the basics, the shiny means nothing. Yeah, I agree 100%. I mean, probably the most secure organizations I've ever seen have gotten the basics right. They don't necessarily have the latest and greatest and shiniest tools.
I mean, they obviously use shiny tools, but they've really nailed the basics. And I think that's critically important. Actually, the funny thing is that the basics kind of haven't really changed over the years. They really haven't. The basics are just fundamentals. So yeah, I concur 100%. So with that, let's bring this podcast to an end. Jess, thank you so much for joining us this week. I know I can speak on behalf of all of us. Thank you so much for joining us.
We always learn something or gain a new perspective on things, and you've obviously brought that to this podcast as well. And to all our listeners out there, again, welcome to 2022. I hope this year is a little bit better than the last couple of years. Thank you again for listening. Stay safe and we'll see you next time. Thanks for listening to the Azure Security Podcast. You can find show notes and other resources at our website, azsecuritypodcast.net.
If you have any questions, please find us on Twitter at Azure SecPod. The music is from ccmixter.com and licensed under the Creative Commons license.