Welcome to the Azure Security Podcast, where we discuss topics relating to security, privacy, reliability and compliance on the Microsoft Cloud Platform. Hey, everybody. Welcome to episode 82. This week, it's myself, Michael and Mark. And this week, we have a guest, a security MVP, Truls, who's here to talk to us about all things related to security strategy, which I have no doubt Mark and Truls will have a fun time. But before we get to our guest, let's take a little lap around the news.
Mark, why don't you kick things off? So the big news from my side is that the Zero Trust Commandments, which are now a standard, have been released. And so what this is, is it's actually a combination of the previous publication on the Zero Trust Commandments, as well as the Open Group's core principles for Zero Trust, and put together into a single guiding document and updated, refined, organized, cleaned up a little bit.
And so this is a pretty big deal, that we now have our first standard for Zero Trust, which I'm extremely excited about. Just for a little bit of context, I'm going to do a mini-speech here to put these things in perspective. In the old days of security, a lot of people had a belief somewhere along the lines of, if it doesn't have anything to do with the network perimeter, it's crap.
So there was this deep, deep faith in the network perimeter keeping us safe, whether it was said out loud or it was just implied and understood. And so what happened was, what I see Zero Trust as, and the Open Group, Microsoft, and many others, is that essentially what's happening is when you kind of tear down that wall and that mindset of the perimeter can keep us safe, that's Zero Trust.
That's, okay, let's go ahead and reinvent security the way we should have in the first place minus this one broken, flawed assumption that we can somehow keep a network and everything on it safe. And so it really sort of opened up the aperture, which was a good thing, but it also required us to sort of, the first thing you need to do is you need to bound that world because security isn't an infinite thing. It doesn't do everything magically.
And so you need to have some rules that kind of define what this new definition, this new scope of security actually is and does. And so that's what we tried to do with the commandments and sort of, that's one of the reasons why we made that the, one of the first standards that we pushed through is to get that clear definition. And these aren't everything to do with security. There's going to be a lot more to come. There's guiding principles for architects of all types, security and otherwise.
And then there's a reference model coming and all that. But what these do, these commandments do, is all of the things that elevate up to a must or a shall statement, the things that are absolute, hardcore, worthwhile boundaries to set, which of course took a whole lot of thought and debate and consideration and feedback from the first round. And of course, we're always open for more feedback.
But these are the things that basically help provide that new kind of hard boundary of clarity, which is this is what security does. If you aren't doing this, or if you're doing something that violates this, you're doing it wrong. And so that's really what these commandments are. Really proud of this work, really looking forward to seeing people use them, getting feedback on them, continuously improving them as needed. Yeah, that's the big news there.
And so I'll put a link in the show notes for the commandments themselves, as well as a post I put on LinkedIn with some commentary. There's a couple of news items. The first one is around TLS. We've now updated the default TLS policy for Azure Application Gateway. If you're not familiar, there are different predefined policies with names that start with App Gateway SSL policy. I wish they'd change that to TLS, but anyway.
Yeah, so it's like App Gateway SSL policy followed by a date, year-month-day. The latest default policy is 2022-01-01. The minimum protocol version is now TLS 1.2, and it enables TLS 1.3 as well. And we've also really restricted the cipher suites to the more modern ones, like more use of Galois/Counter Mode, for example, and the ones that still use cipher block chaining are using SHA-384 or AES-256 and so on.
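To make that cipher suite restriction concrete, here's a minimal, illustrative filter in the spirit of the policy described. This is not the actual App Gateway policy definition; it's a sketch that just encodes the pattern from the discussion: AEAD modes like GCM stay in, Triple DES goes, and remaining CBC suites are limited to SHA-384 or AES-256 variants.

```python
def is_allowed_cipher(suite: str) -> bool:
    """Illustrative filter in the spirit of a modern TLS policy:
    AEAD suites (GCM, ChaCha20) are in, Triple DES is out, and CBC
    suites survive only in their SHA-384 / AES-256 forms."""
    # Normalize IANA-style (underscores) and OpenSSL-style (dashes) names.
    s = suite.upper().replace("_", "").replace("-", "")
    if "3DES" in s or "DESCBC3" in s:
        return False  # legacy Triple DES suites are dropped entirely
    if "GCM" in s or "CHACHA20" in s:
        return True   # modern AEAD modes are preferred
    # CBC-mode suites only with the stronger hash / key sizes
    return "CBC" in s and ("SHA384" in s or "AES256" in s)
```

So, for example, `is_allowed_cipher("TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA")` returns `False`, matching the retirement of the old backward-compatibility suite mentioned here.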
And one that was in there for a long time, really for backward compatibility with some really, really old devices, was a suite using Triple DES. I'm glad to see that one has finally fallen out of favor. The second item is in public preview: firmware analysis in Defender for IoT. This is actually pretty cool. I didn't even know we could do this.
We can actually take a binary firmware image from an IoT device and conduct automated analysis, looking for potential security vulnerabilities and weaknesses. This is actually really, really cool. I think this is pretty amazing to see that we can do this stuff. So Mark, one up for the Defender team. Absolutely. I love that feature. I've been waiting for a little while because I was actually, you know, part of managing the Defender for IoT business for a short period of time.
And you know, I saw the ReFirm Labs acquisition. I'm like, oh, I can't wait for engineering to bring this into the product, because that's going to be awesome and start getting us proactive. Now that we have the news out of the way, let's turn our attention to our guest. This week we have Truls, a security MVP from Norway, who's here to talk to us about a blog post he wrote earlier in the year called Field Notes on Security Strategy. And we can obviously take it any direction we want.
Truls, welcome to the podcast. Thank you so much for joining us this week. Would you like to take a moment and introduce yourself to our listeners? Yeah. So as mentioned, my name is Truls. I'm a Microsoft MVP based out of Oslo, Norway. In my day-to-day, I work as a security architect slash engineer slash everything that needs to be done. I work at a company called Sopra Steria, in a security operations center.
So it's mainly like MSSP focused, the work that I do, focus on security monitoring, automation and security tooling. And given the MVP status, it's mostly in the Microsoft stack. And as you mentioned, I'm very passionate about security monitoring strategy and security strategy. And that's why I wrote the blog post. Yeah, I really enjoyed reading your post. This is not just the memes and the choice of the memes, but very much enjoy the line of thinking.
Do you want to maybe take a moment and talk a little bit about the post and what led you to write it and the key themes that you're emphasizing there? Yeah, sure.
So to start with, I think the idea for the post was born when you go around reviewing different deployments, and this is mainly SIEM deployments of Sentinel. You see a lot of people, as you said before, chasing a silver bullet: we need every data connector, we need every single point of data, and we need all of the analytic rules, the default templates, enabled. And you'll get the situation where a lot of
people that work in security monitoring will be familiar with this: what we define as alert fatigue, where you can only meaningfully handle like 50 alerts a day and you get 200 every hour. So you're going to miss a lot of important alerts, and you're just skating by on pure luck, basically. We definitely see that a lot, especially in anything to do with security operations. I tend to call that one the collection is not detection dynamic.
You need what I call ruthless prioritization. Human minds are tuned towards scarcity and gathering, towards being a pack rat and bringing more stuff in. That's a natural human thing, but especially in security, where logs weren't even turned on in the pre-cloud days and all sorts of stuff, we crave more information, more data. But there's a limit to that. At some point there are too many books in the library and you need a librarian to curate it.
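As a sketch of what that ruthless prioritization can look like mechanically (the scoring model and the capacity number are made up for illustration, not a real SOC formula): rank alerts by severity and asset criticality, hand analysts only what they can meaningfully handle, and treat the overflow as candidates for tuning, suppression, or automation.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    severity: int           # 1 (informational) .. 4 (critical)
    asset_criticality: int  # 1 (lab box) .. 3 (crown jewels)

def triage(alerts, daily_capacity=50):
    """Rank by a simple severity x criticality score; return the slice a
    human team can actually work, plus the overflow to tune or automate."""
    ranked = sorted(alerts,
                    key=lambda a: a.severity * a.asset_criticality,
                    reverse=True)
    return ranked[:daily_capacity], ranked[daily_capacity:]
```

The exact scoring hardly matters; the point is that the analyst queue has a hard cap, and everything past it becomes an engineering problem rather than a human one.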
So yeah, we see the exact same stuff. And touching on that point, it's interesting that a lot of people still have this backwards, old-school mentality of, oh, we need the firewall and the NetFlow logs, we need all of these in the SIEM, we need detections on them. But given today's state of infrastructure and the perimeters you work with, the network is no longer the main perimeter for most companies; it's identities.
So it doesn't make any sense to put that big of an emphasis on the same kind of logs that we used to emphasize like 10, 15 years ago. I think that's another thing that really bothered me was that we would see a lot of focus on these infrastructure heavy logs and not as much focus on just getting a good value first and foremost out of the identity based logs.
So that was also a big motivator for getting the thoughts out of my head and onto the page, to explain where I was at with that. Yeah, the analogy that pops into my mind is, we used to have one mine with gold in it, so we basically set up a perimeter around it and defended it. Well, now we've found gold everywhere; all the organization's data and valuable assets are everywhere else.
And why do we still have all of our guards around the mine when all this stuff is out there? That's not the right intercept point. I used to talk about this where I visualize an organization in the olden days (saying the olden days makes me feel old) as like a village with a fence around it. And you have the firewall where all the traffic goes in and out; it's like the port, and you have a guard there. And some people might have guards posted on the fence.
I think there's an old like adage or something where they say if you build 15 meter tall walls, you're just opening a market for 16 meter tall ladders. But the idea is that before security monitoring, if you even had security monitoring, the logs you would be gathering were like firewall ingress and egress logs because that made sense, right? It's the shaft down to the gold mine, as you said.
So now that we've moved away from that, people can access via proxies and things like that, and SharePoint on their phones. You can work from a home office, you can work from, I don't know, Bermuda if you wanted to, right, and still have decent access to things like mail and SharePoint. So it doesn't make sense anymore. So I guess the point I was also trying to make in the post is that not all log sources are created equal; you need to prioritize them.
And people put too much emphasis on gathering all the logs and then enabling the default detections, thinking that's enough. But I think that's not the right way to work when you're designing a security strategy for how you want to monitor your stuff. The way I look at it is you just have to look at it from an outcome basis.
And we have to ask the question that quite often wasn't asked at the beginning of security operations as it sort of was emerging as sort of an offshoot of IT operations or of network operations or wherever it happened to originate within an organization. We have to ask, why are we doing this? What is the purpose? What is the outcome? And not just the technical outcome, like, oh, we have to handle these alerts. Why?
You know, kind of do that, what do they call it, the five whys or whatever that Ishikawa diagram or something. Sorry, I was part of a Six Sigma project a long time ago when I picked up some terminology. But like, what is the outcome? What are we actually trying to do? And it's you're trying to protect the business from attackers having access to your stuff. And that's a better North Star than we got to take care of these alerts, right?
Because that then guides you to, okay, what kind of alerts do we need to look at? Because like you said, we have 200 an hour, we can handle 50 a day. Which ones do we investigate? Which ones are worth it? Because it's all about that, you know, ruthlessly prioritize. And you know, what do we need to do? That's the most likely to cause damage to the business to be an attacker that has access to the goodies.
And it's really interesting because I've read some of your posts on the topics as well, especially relating to when you talk about security tooling and talking about what security tool do we need to bring in to solve this issue, which is not the right question, as you've pointed out.
So it's, I think it's interesting to hear your thoughts on the things I've been sort of like turning around in my brain for a while, because it's, as I mentioned at the end of the blog post, it's what I'm writing isn't gospel. It's not like 100% correct for everyone. So you need to, the biggest thing you need to take away from it is that you should question your strategy and you should create your own strategy.
You can't follow 101 guides on how to set up security monitoring, because it's not going to be a one-size-fits-all kind of thing. A good example of that is we have two different customers. One of the customers does RDP nesting as a normal thing for them. So they RDP into one server, and then they RDP from that server onto another server.
Now for most of our customers, this would be a sign that someone is trying to pivot from one server to another, and this is shady business. But for this particular customer, it's a normal usage pattern. So a detection that flags RDP nesting would not be very good for that customer, because it would produce almost only false positives. So we need to take a step back and do proper use case design for every single customer, for every single thing you want to monitor.
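A sketch of what that per-customer use case design could look like in code. The event shape and the allowlist here are hypothetical, not a real Sentinel rule; the point is that the same detection logic carries an explicit, documented exception for customers where nesting is a normal pattern, instead of being enabled identically everywhere.

```python
def rdp_nesting_alerts(events, benign_customers):
    """events: iterable of (customer, source_host, dest_host) RDP sessions,
    in time order. Flag nested RDP (a hop whose source host was itself an
    earlier RDP target), unless the customer is known to use nesting as a
    normal usage pattern."""
    targets = {}  # customer -> set of hosts someone has RDP'd into
    alerts = []
    for customer, src, dst in events:
        if customer not in benign_customers and src in targets.get(customer, set()):
            alerts.append((customer, src, dst))  # pivot-like hop
        targets.setdefault(customer, set()).add(dst)
    return alerts
```

Run against the same traffic, the rule fires for a customer where nesting is anomalous and stays quiet for the one where it's routine, which is exactly the tuning decision being described.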
And as you mentioned previously that it's not about necessarily starting with like tooling and stuff like that, but it's about finding the north star. Like what are the things that I want to protect? Like what is most important? And then also looking at what kind of business am I in? What type of threat actors will try to target me? What are the most likely scenarios based on those threat actors?
What are the most likely scenarios based on the users that I have and the level of security knowledge in the company as a whole? So it's more to do with knowing yourself and then designing the security strategy on that basis instead of just working from tooling. Yeah, I love that theme. Like I actually was helping out with a new unified offering, basically a repeatable engagement that we do. The Microsoft Unified support, formerly Premiere for those that have been around as long as I have.
It's on Sentinel adoption. It essentially started off as a migration thing, but we decided to reframe it as an adoption one, because we're not just going to go ahead and rerun whatever Splunk or ArcSight or whatever queries you've got. Let's make sure there are use cases in there. Let's make sure we're thinking through that, because you have to be able to express those risks in a way that you can technically implement across different platforms.
Because one of the things we see with folks that have never worked with an XDR, for example: there are so many common attacks you can detect that look pretty much the same everywhere. That's your basis. That's your baseline. Then, once you have those in place, what are you looking for in a custom, query-based tool like Sentinel that allows you to do custom scenarios and use cases?
One, you've got to be thinking in terms of use cases so you can actually have a common language, because an XDR doesn't work in the same exact query way that a SIEM does. You need a common language across those, but you need to be able to say which ones we already have covered here, so we don't end up writing a bunch of duplicate queries. The time and effort that we spend, to the point you're making, is actually on: hey, this is unique to our organization.
This would always be a false positive, but we also do this. If the attackers go with the standard thing, that's going to stick out like a sore thumb if we're looking for it. How do we write use cases that are custom tailored to our environment for things that the adversary could not know or would not know and they're going to basically just find themselves standing out in the middle of a main street going, oh, I thought it was an alley. Yeah, completely agree with that.
I love starting with knowing your infrastructure as well as the threats and all these other things, because you're defending the environment you've got. Yeah. I think it's, as you said, the ruthless prioritization thing. I think it goes for when you're setting up your SIEM and integrating with an XDR as well. It's about prioritizing after you've figured out, okay, what are the most important parts of this?
Let's say, ah, I need to monitor my sign-in logs, my audit logs, my Azure activity logs, just as an example. You've identified what you want to protect and then you identify the log sources you would need and then you can start working on seeing what your user's patterns in those logs are, what's the baseline and start working on, as you said, the use cases.
Because I think one thing that a lot of people do wrong is just enabling templates and then saying, oh, now we have detection for this thing. But just because you surfaced an alert doesn't mean you're being a good SOC or MSSP or security team. If most of the alerts you get are false positives, you're doing a very bad job, in my opinion, because an alert isn't necessarily actionable. It needs to come with some notion of: what do I do when this alert surfaces?
What's the point of making this alert surface? We need to have a reason for it, at least in my mind. So that's where use case development comes into play, where you say, we want to look for these kinds of patterns. And why do we need to look for them? Okay, this is the reason. And then, what do we do once this emerges? So a bad use case would be something like password spraying against Azure AD identities, because we have multi-factor authentication enabled, right? You should have, at least.
And then I think, what do you do if you get an alert that someone's trying to brute force? Well, I can maybe check that this user hasn't been part of any leaks, but that's the extent of what I can do unless someone is actually gaining access. I can't do anything about that. So it's pointless to have as an actionable alert. Maybe you want an informational alert to see what's going on. But it's very important that there's a use case behind the alerts you generate.
No matter where they surface, in the SIEM or the XDR, they need to be actionable. At least in my mind, that's a very important part of a good security monitoring strategy. Yeah, 100% agree. A couple of things that we emphasize in our workshops, which I do a lot of development on: it's got to be incident-centric, not alert-centric, right?
So you're looking at this through the lens of not just, hey, this thing came up and it's a password spray, or whatever it happens to be, which was hopefully blocked, as you point out. But this thing came up; is it part of something else? Because the adversary may trip a few alarms in the process of stumbling through the house trying to find the safe, right? And so you've got to be thinking of it holistically.
And is this one snapshot of the unicorn's tail something that we need to correlate to the horn and the hoofs and everything else that we've got for sensors? And so having that incident-centric mindset we found is definitely super important. And then the other thing that you reminded me of is the other benefit of use cases is if you can express this in a simple use case, here's the threat, this, that, and the other, you can then look and say, hey, is this something we already prevented?
Is this something we could prevent? Because the last thing you want to be doing is have your SOC chase down 100 incidents a month that you could have just blocked with MFA, right? The whole point of the SOC is to apply that precious resource, those few humans you're able to get and train and spend time on, and whose brains burn out if you give them too much stuff, right? Let's make sure we're protecting them against garbage. Let's not give them a false positive.
Let's not give them something we could have blocked. Let's not give them a garbage alert when we could give them a high quality actionable alert because you have a very finite set of people at the end of the line and you need to clean up that pipeline and filter out the garbage before it gets to the point where you actually need a human to pull out a weapon and do something. Yeah, that's a very good point.
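The actionability test the speakers keep coming back to can be sketched as a simple gate. The use case names and playbook actions below are made up for illustration: an alert only reaches an analyst as actionable if a response is actually written down for it; everything else stays informational.

```python
# Hypothetical mapping from use case to a documented response action.
RESPONSE_PLAYBOOKS = {
    "impossible_travel": "revoke_sessions_and_require_mfa_reset",
    "malware_on_endpoint": "isolate_host",
}

def alert_disposition(use_case: str) -> str:
    """A use case earns 'actionable' only if someone has written down
    what to do when it fires; otherwise it stays informational."""
    return "actionable" if use_case in RESPONSE_PLAYBOOKS else "informational"
```

So a spray attempt that MFA already blocked, with no defined response, would land in the informational bucket rather than in an analyst's queue.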
It's funny you mention it, because I'm working on a presentation and a blog post right now about that exact topic, security automation. One part of it is the strategy behind it and the use case development and the data you ingest, which is a very big part. But at the same time, as you said, you need some alerts to be informational or low severity, to paint part of the picture of the unicorn you're chasing. So you need some way, at least, to correlate that data.
And sometimes, if a security analyst has to go into VirusTotal every single time and copy and paste something, do cross-reference searches, do enrichment that could be done automatically, it's a waste of time.
And so security automation as well plays a big part in, I think, fighting against alert fatigue and just making sure that, as you said, the precious finite minutes of human brainpower can be used to actually do something that requires human brainpower and isn't just something where you sit and do the exact same thing, just copy and paste and do repeatable tasks that could be automated. So that's a very good point.
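A toy version of that enrichment step. The alert shape is invented, and the lookup dictionary stands in for a real threat-intel client (a VirusTotal API wrapper, say), so treat it as a sketch of the pattern rather than a real integration: attach verdicts to every indicator before the alert ever reaches a human.

```python
def enrich_alert(alert: dict, reputation: dict) -> dict:
    """Return a copy of the alert with a verdict attached to each
    indicator, so the analyst never copy-pastes hashes by hand.
    `reputation` stands in for a real threat-intel lookup service."""
    enriched = dict(alert)
    enriched["verdicts"] = {
        ioc: reputation.get(ioc, "unknown")
        for ioc in alert.get("indicators", [])
    }
    return enriched
```

Wiring something like this into the pipeline is exactly the repeatable, copy-paste work that should never consume those finite minutes of human brainpower.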
It kind of calls up the power of the NIST Cybersecurity Framework, you know, the life cycle of Identify, Protect, Detect, Respond, Recover, and soon to be Govern as well, if they accept the proposals in the version 2 draft that's out.
And I mean, honestly, I think the original point of the NIST one was because we had a very prevention-centric mindset in the industry at the time, was to sort of open up the life cycle so people were thinking holistically, including the back end of it, respond, recover. But I feel like the pendulum has swung over, especially for folks that work in security systems, they're only thinking in terms of respond and recover.
And it's like, y'all, you need to be talking to your architecture and engineering and operations counterparts in IT and in security, because the last thing you want to be doing is chasing the same incident, that could have been prevented, 100 times over. So it's a reminder, but now in the other direction. Yeah, exactly. So, Truls, when you're talking to customers, do many of them use an industry-standard set of frameworks or controls?
So for example, one thing that we use a lot is NIST SP 800-53, a whole set of different controls used in the US federal government. They're used for things like FedRAMP: FedRAMP Low, Moderate and High. So they'll take those actual controls and codify them for moving federal solutions to the cloud. When you're talking to customers, do you come across these kinds of frameworks, or are people kind of winging it themselves?
I think it depends a lot on what kind of industry you're in. A lot of our customers are in regulated industries, so they will use industry-specific standards. We see a lot of ISO 27001. We see NIST, obviously. We're mostly Azure-based for our detection parts, so it's mostly the cloud vendor's take on a NIST standard that gets deployed. But I think most people are also using the Microsoft built-in cloud controls, which are pretty good.
So I think that's about the extent of what we see. Also CIS, obviously, like the different baselines for VMs and stuff. But yeah, it depends. Center for Internet Security, CIS, I assume. Yeah, yeah. So, you brought up the Azure controls. By that, I assume you mean the Azure Security Benchmark. Oh, actually it's now the Microsoft Security Benchmark, right, Mark? Microsoft Cloud Security Benchmark, I think, is the official new name. So is that commonly used as well?
Because one of the things I do like about that is that it maps to other controls like NIST SP 800-53. Yeah, that's very commonly used. And I think, as a focus for us as well, a lot of people in the security industry, security people, we like numbers. Which is why I made the point that we're very binary.
If things are secure or not secure, that's just the two default options we have, which is, I guess, historically why we've been bad at communicating business value.
But yeah, the secure scores, like the identity secure score and the Defender for Cloud secure score, are very good ways to get your platform engineering team or your engineers and architects to design around making that score better, because it's something you can set as a key performance indicator. So I think the fact that you can integrate them with a scorecard is very intriguing to people.
And they want to work to get that score up, because it's something you can very easily project onto a customer-facing dashboard. The customer can see the score and ask, why hasn't the score improved? Why are we still at 89%? So I think that's a very good thing; a lot of people are working just on improving that score. Look, I love that. I'm not going to lie, I love that.
But I love that with a grain of salt. I love the idea of having a number that people can aim for and see to improve. But I've also seen customers spend a lot of time trying to improve that score for something that actually has very little return. So they're willing to spend a lot of money to raise that score. But what they're really doing isn't really improving their security posture that much. Look, I'm not going to say that's something that I see all the time, but I have seen it happen.
Like just the quest for increasing a number regardless. So my only concern there is, hey, if you're going to focus on increasing that score, then make sure it's the stuff that actually matters for your environment.
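One way to keep the score honest, sketched with purely illustrative numbers (real posture recommendations don't come with tidy risk-reduction figures): rank recommendations by estimated risk reduction per unit of effort rather than by the raw score points they're worth, so a cheap, high-impact fix beats an expensive cosmetic one.

```python
def prioritize(recommendations):
    """Sort hardening recommendations by estimated risk reduction per
    unit of effort, not by how many score points they are worth."""
    return sorted(recommendations,
                  key=lambda r: r["risk_reduction"] / r["effort"],
                  reverse=True)
```

Under this ordering, a recommendation worth many score points but little actual risk reduction naturally sinks to the bottom of the backlog.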
One of the things I've seen is because they're, and this is actually going to one of the points you made in the article, is there's almost a binary view, which for people that grew up in the technology world isn't particularly surprising because there are truths and absolutes in the technology world that it will always execute in the exact same way because that's what the code says.
But the interesting thing is the risk of the organization itself at the very top is dealing with a very, very complicated world where economic things could change and weather and all these other kind of risk factors to the organization. One of which is cybersecurity risk is a business risk. You've got to translate that sort of fuzziness and probability world down into this sort of technical absolute thing. It's always sort of interesting to see that.
I think what you're seeing, Michael, from my perspective, is probably just that people recognize, hey, we need to go do a bunch of things, great, but then they apply the old mindset, the literal, binary, do everything and it has to be 100% correct. It's probably because they set the expectations poorly with the board or the business leaders: hey, listen, if we get this score, we're good. Well, that's not right.
It's actually this score is a good indicator for us, but it should be considered along with four or five others and we're constantly looking at what the most important risk is, which of course we asked you three months ago, we hope. Hopefully that they're actually understanding what's important to the business. And so I don't know, I feel like it's that same old kind of technical gremlin mindset rearing its head again to just sort of go back to the absolutes and binaries.
That speaks to the point about getting someone to go along with security: it's a business expense, so you need to communicate it as something that's actually beneficial to the business. And then, with the score, I think it's a good tool where, if you need to actually get some traction, you can use these scores to get some momentum on your security work.
But again, and this is very important, sometimes you might get an 8% increase for doing something that will have no impact in your environment. So do you do that just to make the number go up, or do you prioritize elsewhere? I think that's the trap you can fall into, and that's where other standards come into play, and actually knowing your environment. So if you're having trouble getting traction, I think the score is a good starting point.
As you evolve, you should maybe look away from the score and look at what's business-critical to you. It's really like the training wheels to get you started on the bike, but at some point you need to take them off and ride the bike itself. Yeah, it gets you started moving, and once you start rolling, it's easier to do it based on more complex things. I've actually got two last questions I wanted to ask before we get to the final question.
One I was kind of curious how many organizations you run into are actually using kind of a use case based approach? And then the other is I'd love to get your comments on the cyclical or iterative approach that you recommend. Yeah, so I think a lot of people, at least the organizations we work with, are quite familiar with use case based approaches to at least the security monitoring. We do a sort of, how would you call it, like a co-managed SIEM solution.
So their security team and our security team can work together. And so we have like a process where they can suggest like, oh, this is something that's important to us to monitor and then we work together to develop the use case. So I think it's something we see among quite many of the customers we work with. But outside of like security monitoring, it's not my field of expertise, so I couldn't say 100%.
As for the cyclical approach, I think that's one thing that we forget and a point that is very closely related to when you're talking about security tooling. I think we are very good at seeing the new Shiny and running after it and, oh, this is going to help us protect against this thing instead of looking at our existing tool stack or the strategies and the designs we already have in place and going over it and seeing if we could do something better.
So my idea with like suggesting a cyclical approach was that once you go through protecting your identities as an example, once you've enabled security controls that make sense to your organization, you have enabled monitoring and detection that makes sense to your organization, you shouldn't stop at that. You should go back maybe a month later, three months later, whatever makes sense and what you can fit in and look, am I still covering all my bases? Are there new options?
Are there any new developments in the tools I already have that I can use to improve my security posture? And then do that continuously; put it into a system. Instead of, okay, now this is enabled, this is set up, now it's going to work, and look, there's the next tool, I'm going to move over to that. Because then you'll end up with a lot of tools and a decent security posture that will degrade over time, because you're not actually maintaining it.
Yeah, I'm a big fan of continuous improvement, because I think it addresses a couple of problems that are very prevalent in security, like that, hey, we got the check mark, we're done. That's the dark side of checklist thinking, because there are a lot of benefits to checklist thinking, but there is a dark side of, hey, it's done, we don't ever have to go back and look at it. We finished our 800-53, so therefore we're compliant, we're done.
The continuous improvement helps break that, but it also helps break the fear of perfection, that we have to get this thing perfect the first time out. Then you spend all the time in the lab doing the thing and you don't actually put it out there and try it and learn on it and iteratively and cyclically improve. I'm a huge, huge fan of that continuous improvement type of growth mindset, some call it. All right, so now it's time to wrap this episode up.
Because as someone who listens to the podcast, you're probably well aware that we have one little question for you at the end, which is if you had just one final thought to leave our listeners with, what would it be? I think it sort of boils down to my mindset when it comes to everything in security and everything in IT is to keep it simple. I used to say, or something I learned in the army is keep it simple, stupid, because you need to remind yourself, don't do things in large batches.
It's easier to just do it in small batches, keep building on it. Just keep it as simple as you can. That way it's way easier to get things done. All right, so let's bring this episode to an end. Truls, thank you so much for joining us this week. It was fun listening to you and Mark just pontificate. To all our listeners out there, thanks so much for listening. We hope you found this episode of interest. Stay safe and we'll see you next time. Thanks for listening to the Azure Security Podcast.
You can find show notes and other resources at our website, azsecuritypodcast.net. If you have any questions, please find us on Twitter at @AzureSecPod. Background music is from ccmixter.com and licensed under the Creative Commons license.