Welcome to the Azure Security Podcast, where we discuss topics relating to security, privacy, reliability, and compliance on the Microsoft Cloud Platform. Hey, everybody. Welcome to episode number 30. This week, it's myself, Gladys, and Sarah. Mark is just absolutely slammed, so we'll have to listen to his news next week. We also have a guest. We have Pete Bryan.
He's a senior software engineer in the Microsoft Threat Intelligence Center, and he's here to talk about everything you need to know about MSTIC. But before we get on to Pete, let's take a look at the news. Gladys, why don't you kick things off? Yeah, I wanted to mention that my coworkers David Sanchez Rodriguez, Javier Soriano, Marcelo de Olio, and myself have recorded the first Azure Security Podcast in Spanish. We have published it.
Currently, we are recording on a monthly basis, but we expect to change to more often, depending on the outreach. Thank you, Michael, for helping us get all this set up. It takes a little bit of learning, but we are doing it now. It has been well-received, based on the Twitter and LinkedIn comments that we're seeing. At the end of this month, we will be interviewing Roberto Rodriguez about his SimuLand creation, so be on the lookout for that.
From the cloud capability perspective, I'm really excited about the conditional access filter for devices that has been added in preview for Azure AD. Basically, this gives the ability to filter on devices as a condition. For example, one can restrict access to a privileged access workstation or a secure access workstation. Sometimes in our documentation these are referred to as a PAW or a SAW.
For those of you not familiar with what a PAW or SAW is, basically, these are computers that are really hardened, with a limited set of applications. We recommend not doing email or regular web browsing on them; they're only used for administration. You are able to connect to cloud administration or on-prem applications for administration. To configure this, basically, all you have to do is go to Azure AD conditional access and set the device filter as a condition in your policy.
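For anyone who wants to script this rather than click through the portal, here is a minimal sketch of what such a policy might look like via the Microsoft Graph conditional access API. It is only an illustration: the role ID, the token acquisition, and the convention of tagging PAW devices via extensionAttribute1 are all assumptions, and the device filter condition may still require the beta endpoint, so check the current Graph documentation before relying on anything like this.

    # Sketch only: create a conditional access policy that blocks privileged access
    # from any device NOT tagged as a PAW. Token acquisition (e.g. via MSAL) is omitted,
    # and the extensionAttribute1 == "PAW" tag is a hypothetical convention.
    import requests

    GRAPH = "https://graph.microsoft.com/beta"   # device filters may still be beta
    token = "<access-token-with-Policy.ReadWrite.ConditionalAccess>"

    policy = {
        "displayName": "Require PAW for privileged admin access",
        "state": "enabledForReportingButNotEnforced",   # report-only while testing
        "conditions": {
            "users": {"includeRoles": ["<privileged-role-template-id>"]},
            "applications": {"includeApplications": ["All"]},
            "devices": {
                "deviceFilter": {
                    # "exclude" scopes the policy to every device EXCEPT those matching the rule
                    "mode": "exclude",
                    "rule": 'device.extensionAttribute1 -eq "PAW"',
                }
            },
        },
        "grantControls": {"operator": "OR", "builtInControls": ["block"]},
    }

    resp = requests.post(
        f"{GRAPH}/identity/conditionalAccess/policies",
        headers={"Authorization": f"Bearer {token}"},
        json=policy,
    )
    resp.raise_for_status()

Starting in report-only mode is deliberate: it lets you see which sign-ins would have been blocked before you enforce the policy.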
The last thing that I wanted to talk about is that, two years ago, Microsoft launched Windows Virtual Desktop. With the pandemic, Microsoft has seen the need to support an evolving set of remote and hybrid work scenarios. To support this broader vision, we are rebranding Windows Virtual Desktop as Azure Virtual Desktop. You're going to start seeing a lot of documentation referring to Windows Virtual Desktop as Azure Virtual Desktop.
Cool. Some of the news I have, I'll start with Azure Backup. Anyone using the Azure Backup service and any resources that are using the Microsoft Azure Recovery Services (MARS) agent, you need to be using TLS 1.2 or above. We will stop supporting TLS 1.0 and 1.1 as of the 1st of September, 2021. As of the day we're recording this, that is June, July, August, September, so about three and a half months away.
I know that in production IT terms, that's not necessarily a long time if you have to go through a change board and stuff. Definitely get onto that if you're using that. Secondly, let's talk about my favorite baby, Azure Sentinel. Just a quick one this time, I will actually not talk about it too much, but we have made some great changes to the pricing of Sentinel and this is pretty cool because it means it should be cheaper for folks.
Now, when I say cheaper, I'm not saying we suddenly dropped the price, but there are a couple of things to know. First, capacity reservations are now called commitment tiers, because we like to change names. And with the commitment tiers, we now have higher commitment tiers. If you're familiar with them, you'll know that we went from 100 gig a day up to 500 gig a day. Now we are also doing one terabyte a day, two terabytes a day, five terabytes a day.
So you can actually just configure that commitment tier in the UI without having to talk to a Microsoft person. The other thing that's really, really cool is the way that we bill for data ingestion over the commitment tier. What used to happen was, if you were on, say, the 100 gig a day commitment tier and you went over 100 gigabytes a day, you would pay the pay-as-you-go rate for Azure Sentinel on the overage. Now, if you go over your commitment tier, you will just pay the effective rate, because each commitment tier has a discount built in. You can tell I'm a tech person and not a salesperson, because I am appalling at explaining this. But basically it means it's cheaper. We'll put the link in the show notes; go check it out. It is, I think, a big improvement, because previously, if you went over your commitment tier, you would get charged quite a bit more for that overage.
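To make the "effective rate" point concrete, here is a tiny worked example with made-up numbers; the real prices are on the Azure Sentinel pricing page.

    # Hypothetical prices only -- check the Azure Sentinel pricing page for real rates.
    # The point: overage is now billed at the commitment tier's effective per-GB rate
    # rather than at the higher pay-as-you-go rate.

    payg_per_gb = 2.00            # hypothetical pay-as-you-go price per GB
    tier_gb = 100                 # 100 GB/day commitment tier
    tier_price_per_day = 150.00   # hypothetical daily price for that tier
    effective_per_gb = tier_price_per_day / tier_gb   # 1.50/GB in this example

    ingested_gb = 120             # a day where you went 20 GB over the tier
    overage_gb = ingested_gb - tier_gb

    old_model = tier_price_per_day + overage_gb * payg_per_gb        # 150 + 40 = 190
    new_model = tier_price_per_day + overage_gb * effective_per_gb   # 150 + 30 = 180

    print(f"old model: ${old_model:.2f}, new model: ${new_model:.2f}")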
So it's going to be cheaper now, which is lovely. Then for Azure Security Center, there are a couple of things to talk about. It's got some new recommendations for hardening Kubernetes clusters. So if you're using Kubernetes, we're going to have some more hygiene recommendations, which is great. There are also going to be some new recommendations to enable trusted launch capabilities, in preview. Then for GA, because Azure Security Center is pretty much as busy as Azure Sentinel, I reckon, there are things that have gone GA that you may have already seen. Azure Defender for DNS and Azure Defender for Resource Manager are now GA. Azure Defender for open-source relational databases is GA. We've got some new alerts in Defender for Resource Manager, and the severity of the SQL data classification recommendation has changed, and that's all GA too. Then the last thing that I wanted to talk about, again Security Center, is in public preview, but this is very cool.
So it gets its own mention: Azure Security Center now integrates with GitHub Actions. If you're not familiar, GitHub Actions are a way of doing automation within your GitHub repo. I have had some experience with them. I'm a bit of a GitHub noob, but I have had some experience trying to post automated messages in the Sentinel repo. So I've done a tiny bit with this.
It's very cool because what it means is that you can incorporate security and compliance into your CI/CD pipeline, and it will help developers identify issues faster. So definitely go check that out if you're using a GitHub repo for your code. Over to you, Michael. That's all my news. One of the first items that I have is the fact that we now have SecDevOps practice support in GitHub and Azure.
So for example, if you're using GitHub as your main pipeline, then we can actually use tooling that we have now, for example Azure Security Center working with containers, to provide that end-to-end view and the tooling to help you secure the products that come out of your pipelines. That's really great to see. The next one is the general availability of key rotation and expiration policies for Azure Storage.
Before I get stuck in, I need to explain what the keys are here. These are not encryption keys, these are not cryptographic keys. These are the keys that are used as, essentially, the access token you use to access a storage account. If you're not familiar, there are two major ways of accessing storage accounts: either through a token or by using AAD identities. On the last podcast, we talked about the ability to use policy to disable the use of keys, so you only use AAD accounts; that's the data plane. Well, if you need to use access keys, for example as shared access tokens, then you may want to rotate those on a regular basis. Now you can put policy in place to require rotation and expiration policies for those access keys. Some people still want to use access keys, and I totally understand that, but this is just giving you more control over making sure that those things are rotated on a regular basis.
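As a rough sketch of what turning this on programmatically might look like, here is the key expiration side using the azure-mgmt-storage management SDK. The class names reflect recent versions of that SDK, so treat them as an assumption and check the SDK reference; the resource names are placeholders.

    # Sketch, assuming azure-identity and a recent azure-mgmt-storage are installed.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient
    from azure.mgmt.storage.models import StorageAccountUpdateParameters, KeyPolicy

    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Require the account's access keys to be rotated at least every 90 days.
    client.storage_accounts.update(
        resource_group_name="rg-demo",
        account_name="stdemostorage",
        parameters=StorageAccountUpdateParameters(
            key_policy=KeyPolicy(key_expiration_period_in_days=90)
        ),
    )

The companion Azure Policy piece is then just auditing or denying storage accounts whose keys are past that expiration window.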
The next one is that we now have the ability, also in public preview I believe, to have identity-based connections in Azure Functions for triggers on various services. This applies right now to Azure Blob, Azure Queue, Event Hubs, Service Bus, and Event Grid. Basically, what it does is it now lets you leverage an identity instead of a connection string when these services are talking to each other.
As you're probably all well aware, storing a secret is always a painfully difficult thing to do. More importantly, if it's compromised, then the attacker now can impersonate that particular service. So this gets rid of that problem by using managed identities. So if you've set this in place, you can have two services talking to each other just using managed identities to authenticate against each other.
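A minimal sketch of the underlying idea follows, using azure-identity and azure-storage-blob; the storage account URL is a placeholder. For Azure Functions trigger bindings specifically, the equivalent is done through identity-based connection app settings rather than code, but the principle is the same: no connection string stored anywhere.

    # Sketch: authenticate to Blob storage with a managed identity instead of a
    # connection string. Assumes azure-identity and azure-storage-blob are installed
    # and the code runs on an Azure resource with a managed identity assigned.
    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobServiceClient

    credential = DefaultAzureCredential()   # picks up the managed identity at runtime
    blob_service = BlobServiceClient(
        account_url="https://stdemostorage.blob.core.windows.net",
        credential=credential,
    )

    for container in blob_service.list_containers():
        print(container.name)               # works with no secret in code or config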
The last one I have, which I was going to talk about last week but totally forgot about, is that Cosmos DB now has support for client-side encryption using Always Encrypted. Always Encrypted is a technology that first came out in SQL Server. It's essentially client-side encryption, so the keys are actually maintained by the clients. SQL Server doesn't know about them, and in this particular case, Cosmos DB doesn't know about them. They're maintained completely at the client.
There are certain kinds of data and certain kinds of configurations that will allow you to do queries over that data even though it's encrypted. This is the beauty of Always Encrypted. So that technology is now available in Cosmos DB in preview. We talked a couple of months ago now about the SDK that's available. It's up on GitHub. It's essentially the same code.
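To be clear about what "client-side" means here, the following is a conceptual illustration only, not the Always Encrypted SDK itself: the key stays with the client (in practice it would come from Key Vault), so the database only ever stores ciphertext. The Cosmos DB write is shown as a commented-out stub.

    # Conceptual sketch of client-side encryption -- NOT the Always Encrypted SDK.
    # Assumes the 'cryptography' package is installed.
    from cryptography.fernet import Fernet

    client_key = Fernet.generate_key()       # in practice, fetched from Key Vault
    fernet = Fernet(client_key)

    document = {
        "id": "patient-001",
        # Sensitive property is encrypted before it ever leaves the client.
        "ssn": fernet.encrypt(b"123-45-6789").decode(),
        "city": "Redmond",                   # non-sensitive fields stay as-is
    }

    # container.upsert_item(document)        # a normal Cosmos write -- the service sees only ciphertext

    plaintext = fernet.decrypt(document["ssn"].encode())   # only a key holder can do this

Always Encrypted adds the important extra that certain deterministic encryption configurations still allow equality queries over the encrypted values, which this simple sketch does not show.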
So if you're using Cosmos DB and you want some incredibly robust cryptographic control at the data plane, then this is certainly worth looking at. So with that, let's change tack and turn our attention to our guest. This week, we have Pete Bryan. As I mentioned, he's a senior software engineer in the Microsoft Threat Intelligence Center, otherwise known as MSTIC. Pete, welcome to the podcast.
We'd like you to spend a couple of moments explaining how long you've been at Microsoft and what you do. Thanks, Michael. Yeah, thanks for having me on. So officially I work as a software engineer at MSTIC, but really I'm more of a security analyst or researcher. I've worked for MSTIC for a couple of years now, and I've been at Microsoft for getting on for nearly five years in a variety of roles.
I actually started off as a security engineer at Skype back in the day, and then I did some customer-facing roles before moving into MSTIC. But my background really has always been the defensive side of cybersecurity: SOCs, incident response, that sort of work. Very cool. So I mean, I have to ask this pretty straight-up question. What is MSTIC, what is the role of MSTIC within Microsoft, and how does it relate to our customers?
So MSTIC has a number of different missions; as the name suggests, threat intelligence is a core one of those. What that involves is investigating and tracking the more sophisticated threat actors that are targeting Microsoft and Microsoft customers. These are typically nation-backed groups or advanced e-crime actors. And you might have heard of some of these groups that we track when we talk about them publicly.
We name them after periodic table elements, so things like Strontium, Gold, Nobelium; these are all names of threat actor groups that we track as part of the core threat intelligence mission. And the objective of that mission is to feed into both Microsoft's defender teams, so the teams that protect Microsoft as an organization, but also out to our customers through our security products.
So the intelligence that we gather as part of the TI mission feeds into all of the products that our customers use day in, day out. But it's not just that threat intelligence mission that MSTIC does. We also have a number of other engineering and R&D roles. So we spend a lot of time and effort researching new attack techniques and new defensive techniques and feeding them, again, into the product groups, into the product ecosystem that Microsoft has.
And some of that is providing domain expertise to other groups. Some of it is providing core engineering platforms that actually do some of this detection as well. We also try and engage with the community more broadly. So as part of the threat intelligence mission, we have a lot of industry partners who we work very closely with on threat actor tracking, information sharing and so on. But we also try and share out through open source projects and openly in the community.
So one of the big open source projects we have is MSTICPy, which is one I work on. But there are others as well; in the news section, Gladys mentioned SimuLand, which was created by Roberto Rodriguez, who's my colleague at MSTIC. And there are a number of other areas where we're just trying to contribute back to improve the security ecosystem for Microsoft customers, but also just more generally. Can you explain a little bit about MSTICPy?
Sure. So MSTICPy is very much my baby. I could talk about it all day because it's something I've worked on for the last couple of years. What it is is a set of Python tools to support threat intelligence analysts and threat hunters. Most of it is derived from expertise and experience in-house at MSTIC.
We actually have a very similar tool set internally that has a different name and is geared towards our specific internal processes, but it has a lot of the same capabilities. And the idea really is to provide an easy and simple way for security analysts and threat hunters to use Python, and primarily Jupyter notebooks, to conduct this investigation work. So it has tools to help you collect data, analyze data, visualize data, and really improve your workflow speed and capabilities, based off the experience we have within Microsoft and specifically within MSTIC.
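As a flavour of what that looks like in practice, here is a minimal sketch of pulling Azure Sentinel data into a notebook with MSTICPy. The class and query names are based on recent msticpy releases, and the workspace details are assumed to be in the local MSTICPy configuration, so treat the specifics as approximate and check the MSTICPy docs for your version.

    # Minimal sketch: query an Azure Sentinel workspace from a Jupyter notebook.
    from datetime import datetime, timedelta
    from msticpy.data import QueryProvider
    from msticpy.common.wsconfig import WorkspaceConfig

    qry_prov = QueryProvider("AzureSentinel")
    qry_prov.connect(WorkspaceConfig())      # workspace/tenant IDs come from msticpyconfig.yaml

    # Built-in parameterized queries return pandas DataFrames...
    alerts = qry_prov.SecurityAlert.list_alerts(
        start=datetime.utcnow() - timedelta(days=1), end=datetime.utcnow()
    )

    # ...and arbitrary KQL works too.
    failed_signins = qry_prov.exec_query(
        "SigninLogs | where ResultType != 0 "
        "| summarize failures=count() by UserPrincipalName | top 20 by failures"
    )

Because everything comes back as a pandas DataFrame, the results drop straight into the rest of the Python data stack.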
So one of the big benefits of doing this in Python and creating a Python-based tool is that it also opens up integration to the wider Python ecosystem, and all of the other capabilities people have built out there, partly for security, but more really for other projects. If you think about the data science and ML community, they're heavily invested in Python and there are loads of great Python tools out there, such as scikit-learn, that make ML a lot easier to conduct. And they're all written in Python.
So having a security tool set written in Python as well means we can integrate those two sides of the Python ecosystem for the defenders. I'm always fascinated by the human element of tools like this and people's journeys, you know, as things change, as technology shifts and so on. And obviously one of the biggest changes over the last decade has been the use of AI and ML.
A couple of weeks ago, actually, at the end of April, we had Sharon Shah on, who does the AI and ML for Azure Sentinel. And she briefly talked about her journey as a security professional through the AI and machine learning landscape. Could you share with our listeners your own journey, like what things you've learned along the way as a security person learning AI and ML?
A big thing with security, particularly from a threat hunting, threat intelligence perspective, is it's really just a data problem. You need to collect your data, format the data and then find the interesting things in it. And that's fundamentally not that different from what data scientists do.
And having talked to other data scientists, and particularly working at Microsoft, where we've got teams of great data scientists doing other things, being able to collaborate with them has shown me how powerful ML and AI capabilities can be for threat hunting, even when they're potentially quite basic, or at least what a data scientist would see as basic. And Python just makes them so accessible.
It's provided me a way of really easily learning and leveraging some capabilities that really help. If you think about kind of what our data scientists do internally, they spend a lot of time creating really cool, very granular data models and machine learning algorithms that help kind of find specific events in a whole stack of data.
But for me, what I can do is take some of their learnings at a basic level and apply them in threat hunting, to do things that don't have to be anywhere near as sophisticated to be valuable. So if I've got a big set of data, and I can create a simple ML model using some of the pre-built capabilities in something like scikit-learn just to cut that data set down to 10% of what it was originally, that's a huge help to me. And so having those capabilities and those tools easily available to me as a security person, through Python and just a few lines of code, is really powerful. And it means that I can learn a lot about ML as I go. I'm far from an expert. We work pretty closely with some data science experts, and to be honest, a lot of the maths they talk about goes over my head. But I can understand enough and leverage enough to make it useful to me as a threat hunter.
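A sketch of the kind of simple model Pete is describing might look like the following; the column names, features, and contamination rate are all made up, and the point is only how little code it takes to shrink a data set to the oddest few percent.

    # Sketch: use an off-the-shelf unsupervised model to cut a big log set down
    # to a small anomalous slice for manual review. Assumes pandas and scikit-learn.
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    logons = pd.read_csv("host_logons.csv")          # hypothetical export

    features = logons[["logon_hour", "bytes_sent", "distinct_processes"]]
    model = IsolationForest(contamination=0.10, random_state=42)
    logons["flag"] = model.fit_predict(features)     # -1 means flagged as anomalous

    suspicious = logons[logons["flag"] == -1]        # roughly 10% of rows left to eyeball
    print(f"{len(suspicious)} of {len(logons)} rows kept for review")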
Cool. So Pete, I get asked, and I'm going to let you talk about it rather than me, why do I need Sentinel if MSTICPy can do all these things? Like, how do they work together? Do they complement each other, etc.? Because I know not everyone is clear on how those two might coexist. I guess the first thing to say is MSTICPy is built to work with Sentinel, but it's not Sentinel-specific. It has capabilities to work with other data sources.
It hooks up with things like Splunk, it hooks up with Microsoft Defender, it hooks up with local data you might have. But it's also not a replacement for any of those tools, really. It's focused more on the less structured parts of the security process. So you're not triaging alerts, necessarily. It's more the experimentation that comes with threat hunting or a particularly complex investigation.
One of the advantages it has is that it takes the power of something like KQL, which we have in Sentinel, and opens it up to, again, pretty much anything you could think of doing in Python. If you think about where MSTICPy sits, it's probably not going to be something that every security analyst is going to use. It is definitely one of the more advanced capabilities in a tool set.
But it allows you to do things that you maybe could do in Sentinel, but wouldn't necessarily want to do. And I think one of the really powerful things about it is its ability to connect to multiple different datasets at once.
You can use it and pull data from Sentinel as your starting point, but then also pull data from other locations and analyze it together without having to then kind of ingest all that data into Sentinel, and the kind of storage elements and engineering side that comes with that. Really, it's kind of your extension out of Sentinel into other elements.
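As an illustration of that "analyze it together without ingesting it" point, here is a sketch that joins a Sentinel query result with a local CSV that never touched the workspace. It reuses the hypothetical qry_prov object from the earlier sketch, and the table and column names are made up.

    # Sketch: combine Sentinel data with an external data set purely in the notebook.
    import pandas as pd

    sentinel_df = qry_prov.exec_query(
        "CommonSecurityLog | where TimeGenerated > ago(1d) "
        "| project TimeGenerated, SourceIP, DestinationIP, Activity"
    )
    external_df = pd.read_csv("third_party_firewall_denies.csv")   # never ingested into Sentinel

    combined = sentinel_df.merge(
        external_df, left_on="SourceIP", right_on="src_ip", how="inner"
    )
    print(combined.head())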
And really, it's a case of the world's your oyster once you're in MSTICPy, because you're not really constrained by UI or features at that point. The general sort of view I get is Sentinel is this big tool, whereas MSTICPy may be more applicable for some people who want to have some more programmatic access and fiddle around with different settings and so on just to get certain types of data back. It seems more program-y rather than being an infrastructure tool.
Is that a fair comment or not? Yeah, absolutely. And it's also not a tool that's really built for structure, if that makes sense. It doesn't have integration with a ticketing system or a nice process queue like you get with the incident experience in Sentinel, say. So there are scenarios where you definitely need that structure, such as triaging alerts, that you're just not going to have in MSTICPy, because that's not really what it's intended for.
So can you give an example of how MSTICPy has been used in the wild, perhaps by some of our customers? Sure. So along with MSTICPy, we've also created a number of Jupyter notebooks that go with Sentinel and use MSTICPy to allow people to do specific things. And one that we created last year that I think is a good example is one that was looking at COVID-19-themed threats.
And we wrote this back in, I think, March or April last year, when we were seeing a huge volume of COVID-themed phishing attacks and other influence-type operations. And rather than just releasing a feed of IOCs that we were seeing, which would have grown on an exponential basis, it made a lot more sense to create a notebook using MSTICPy that allowed people to analyze the stuff themselves.
So the notebook collects various data sets, primarily from Sentinel, and then looks in them for COVID-themed elements: domain names, document names, things like that. It then uses a number of the features of MSTICPy to help highlight which of those might be something worth investigating a bit further.
So we can look them up in threat intelligence feeds, and we can get details on domains and when they were registered. Are they something that was just set up 10 minutes ago, or has this been around for a couple of years? What's the reputation of this? Again, it's just allowing us to take that core data that you've got in Sentinel and enhance it with all of these external data feeds to help you drill in and investigate this data.
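A sketch of that enrichment step might look like the following, using MSTICPy's threat intelligence lookup. The TI providers are configured in msticpyconfig.yaml, the DataFrame and its 'domain' column are hypothetical, and the exact class locations and result columns can differ between msticpy versions.

    # Sketch: pull out COVID-themed domains and enrich them with TI feeds.
    from msticpy.sectools import TILookup

    covid_hits = sentinel_df[
        sentinel_df["domain"].str.contains("covid|corona", case=False, na=False)
    ]

    ti = TILookup()                                   # providers come from msticpyconfig.yaml
    ti_results = ti.lookup_iocs(data=covid_hits, obs_col="domain")
    print(ti_results[["Ioc", "Provider", "Severity", "Details"]].head())

On top of the TI verdicts, whois-style registration data is what tells you whether a domain was set up ten minutes ago or has been around for years.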
And I think that's really important, right? I mean, you've got this massive amount of data, and you're essentially using MSTICPy and its AI and ML to whittle it down to some smaller data set that has a higher likelihood of being real attack data. Absolutely. And you can't necessarily get it down to something that has zero false positives. But as I was saying before, as a threat hunter, you don't need to do that necessarily.
If you can just cut it down to a manageable level that you can go and investigate a bit further, that's a huge win. From the looks of it, there are a lot of lessons learned that can be gathered from the work that MSTIC does. Does MSTIC publish data on attacks or threat actors? Yes, we do, fairly regularly now. We publish things publicly and also via the Microsoft security tooling. So you'll see our public blogs. We've recently posted ones about the
groups Nobelium and Hafnium. And these go into detail about the techniques and the malware that we've seen those threat actors use. But it's not just those public elements. We also have detections and reports available via things like Defender for Endpoint. So if you are targeted by one of these groups, you'll get access to reporting about those groups through the portal, where you can learn a bit more about them, what their history is, what their typical targeting
pattern is, TTPs that might be associated with them. And we also extend this out to other customers we have. So where we're seeing the groups or the threat actors that we're tracking, where we see them targeting customers specifically, we let those customers know what we've seen, when we've seen it, and help them respond to that. So we're using the intelligence that we're gathering to feed into customers and the community as much as we can
through this. And we're always looking to step this up. The blog cadence has increased over the last year. We're also starting to share threat intelligence data sets more often than we have done in the past. And really, we're just trying to be as open as possible and publish as much as we can to help people defend themselves against these threat actors, but also allow other threat intelligence organizations and companies to build upon the work we've done and expand it using their
own visibility. We've seen some great recent examples of that, where we published a blog about a threat actor and another threat intelligence organization has been able to take the information there, build and expand on it, and produce their own blogs with some new information based on the unique visibility that they might have. So there's a real advantage for everyone in allowing this kind of public sharing of the work that we do. So that leads into a good question about
Nobelium. What have we learned from Nobelium? Oh, I mean, we've learned a lot from Nobelium and we're still learning from them. I think the last six to eight months have been a continual learning process with this threat group. There are so many things that they've done that have been interesting and maybe not completely new, but used in a way, and at a scale, that we haven't seen before, which has
allowed us to focus our research, but also develop new investigative areas. So the way that they focused and pivoted from on-premises identity up into the cloud, this wasn't necessarily a completely new technique. The stealing of AD FS key material and the minting of SAML tokens had been documented by researchers before and used by attackers before, but not at the same scale or sophistication that we saw with Nobelium. Recently, we published a blog about the phishing campaign Nobelium has been running for the last few months and the techniques used there. Again, the techniques weren't completely new or novel, but the way they went about doing this, the TTPs they used and the methodical approach they took is something that we're learning from.
And again, when you're tracking these very sophisticated threat actors, you learn a lot just from the way that they approach these attacks, how they spend a long time developing and testing capabilities before launching attacks. I think if you look at the supply chain attack Nobelium launched, they spent years on this, basically: getting access, persisting it, tweaking it, testing it, and then finally exploiting it for their end goals. So that kind of timeline and persistence is super valuable for a defender to learn from, and gives us a whole wide range of data points that we can use to improve our own tracking of threat actors, but also the defenses that go into our products. What about Hafnium? Have we learned anything there? Any particular patterns? So Hafnium was a really interesting one. And I think one of the great things about
that was the cross-company approach we took. So Hafnium was a threat actor that we had been tracking, who we saw exploiting Exchange vulnerabilities, and specifically a number of Exchange zero-days that were disclosed in March this year. So about the time that we started seeing them exploit these capabilities and understanding what was going on, other parts of Microsoft were also focusing in on this. External security researchers had reported some of these
exploits and vulnerabilities to the Microsoft Security Response Center. And this meant that we could team up with them and the Exchange group to take the information we had from researchers, the threat actor information that we had seen as MSTIC, and the research we'd done internally to create a really good response, allowing us to have a comprehensive patching and protection capability for customers, as well as detection and threat hunting resources for people to go and see if they've been impacted. And that was a really good example, I think, of the threat intelligence mission that MSTIC does enriching and enhancing the security work that goes on across all of Microsoft. I will have to ask about ransomware. There's a lot of talk about it lately, and I have heard customers asking why their antivirus couldn't catch it. Can you provide some learnings from there and the overall process and ecosystem?
Yeah, so obviously ransomware is a big problem for the whole industry at the moment, and it's certainly one that MSTIC is focusing on. You might have seen the recent reports about the FBI responding to the attackers who targeted Colonial Pipeline here in the US. And the FBI called out in their press conference the other day the support that they'd had from MSTIC. So it shows the kind of work we're doing across not just Microsoft, but the wider community, to help respond to ransomware and impose some cost on these ransomware actors. But really, what is interesting about ransomware is the way it's often depicted in the news versus the personas behind it. So ransomware is often seen as a malware problem and it's reported as a malware problem. So again, if we look at Colonial Pipeline, the reporting was about
DarkSide. And really, DarkSide is a type of ransomware. It's the software that's involved in it. But behind DarkSide, there's a whole number of personas and actors that we can track and look at. Generally, with these ransomware groups, it breaks down into three parts. You've got the people who create the ransomware, who do the coding. You've got the sellers, effectively: the people who advertise it on the dark web, provide people access for a
fee, and maintain the infrastructure behind it. And then you've got the operators at the end. These are the people who buy access to the ransomware platform, deploy it at a victim, and are that initial interaction point with the victim, demanding the ransom. So you've got all these different elements that you need to contend with here, and there are a number of places where you can learn from and track them. And I think part of the problem we have with ransomware is that we see it as a malware problem because we talk about it as malware. Whereas really, you've got to think of it much more broadly than that. You've got to have an approach that thinks about it a lot more holistically, particularly for these really big intrusions. It's not just a case of someone attaching a ransomware payload to an email and sending it so it gets deployed.
These operators who are compromising organizations are acting like a sophisticated threat actor: gaining access through an initial compromise point, pivoting around, gaining domain dominance and then deploying ransomware at the end. So yes, you need to think about blocking the malware aspect of ransomware. But really, if you get to that point, that's very late in the ransomware kill chain. You need to be looking way earlier at that initial access and lateral movement: how are you going to detect and stop that? Because that's really the stage you need to be doing it at, rather than just trying to block the ransomware executing on the endpoint. So when we started the interview, you mentioned that MSTIC is also involved in research and development. Could you give us a brief overview of what that research and development looks like? Yeah. So R&D in security is one of those never-ending problems: you've always got to
kind of keep up with the latest attacks. You've always got to develop detections for the latest TTPs that have come out or the latest piece of malware. And obviously, doing that kind of churn is part of what we do. But we're also looking at how we can leverage the community to kind of take
our R&D to the next level. And one of the things that we're looking at and my colleague Roberto is really championing is this idea of how can we engage with the security community, particularly the research and offensive security community, to build our R&D in at their development stage.
So if you think about red team tools, quite often what happens is these great offensive security researchers will develop a great new technique, build it into a tool to help them and other red teams, and release it. It will then get abused by a malicious actor who will compromise a bunch of innocent people using it. The defensive security teams will then focus on it, develop detections and allow people to find it. Well, really, we want to skip that whole innocent-people-getting-compromised piece and see if we can work with those offensive security researchers early on to help them develop the detection capabilities as they're developing the offensive tooling, so that when it's released, defenders are already there
rather than having to wait. And that's a big project for us, and it's a big bit of work for the community, because it typically hasn't been something the community has done so well, that red-versus-blue element of coming together. But also the R&D side from the defender's perspective has been limited by things like a lack of good tooling and also just a lack of time. SOC teams have a number of objectives and can often be quite time-constrained, so making the time for this defensive research can be hard. So we're trying to do things like developing frameworks and tool sets that can help these defenders do that research. So SimuLand,
which was mentioned in the news section, is one that Roberto has created to help with this. But there's also the work we're trying to do with these researchers to get ahead of the game, effectively, on the defensive side of that research. Through Roberto's persistence and the work that he's done building the community, we're having some great success there, allowing us to work with these researchers to make sure that we understand their new techniques and capabilities before they're public, and also to make sure we've got those protections and defences built into our products before they're public. And for me that's probably going to be a real game changer, not in the next week or two, but in the next year or so. If we can really get that process working, I think we're going to make a tangible difference to the security landscape.
There's a question we ask all our guests at the very end, and that is: if you had one final thought, just one thing for our listeners to really hang on to, what would it be? I think a key thing to focus on is keeping perspective on security threats. As threat intelligence practitioners, we quite often like focusing on the really sophisticated elements
and talking about them. So a good example would be Nobelium: all the focus was on their long-running supply chain compromise, but really that was just part of what they did. A lot of the other elements of their attack were techniques and processes that are well known to defenders, and by keeping that in perspective and focusing on the security basics, people can do a great job of protecting themselves even from the most advanced threat actors out there. Things like making sure you've got MFA enabled, and restricting what you're exposing to the internet. They make a huge difference, and not just against the commodity threats but against these really advanced actors as well. It might not stop them, because they're going to be persistent and find other ways in, but what it does is (a) impose a bigger cost on them and (b) give defenders way more opportunity to detect them. You'll find that even advanced groups will be quite noisy, particularly if you force them to circumvent good security controls. So really what I'd say is focus on making sure you've got the basics in place,
regardless of who the threat actor in your threat model is. So with that, let's bring the podcast to an end. Pete, thanks so much for joining us this week; I know we all really appreciate it. We also learned a great deal, and hopefully our listeners learned a great deal as well. And to all you listeners out there, thank you so much for joining us this week. Take care of yourselves and we'll see you next time. Thanks for listening to the Azure Security Podcast. You can find show notes and other
resources at our website, azsecuritypodcast.net. If you have any questions, please find us on Twitter at azuresetpod. Background music is from ccmixter.com and licensed under the Creative Commons license.