Welcome to the Azure Security Podcast, where we discuss topics relating to security, privacy, reliability and compliance on the Microsoft Cloud Platform. Hey everybody, welcome to episode 102. This week we have a guest, Nestori Syynimaa, who's here to talk about purple teaming Entra ID. But before we get to our guest, let's take a little lap around the news. This week it's just myself, Michael, and Sarah. So Sarah, why don't you kick off the news? I will do.
So I'm going to go with the first thing, because this is top of mind for me at the moment, which is Ignite. So you may know that registration for Microsoft Ignite has opened. It's in Chicago in November, just the week before Thanksgiving for those of you in the US. Now it is generally sold out, but a cool thing is that if you are a security professional, there is an RSVP code to get you in as a security person, which is very cool.
So if you're listening to this and you wanted to go to Ignite and you missed out on a ticket, we've got the links in the show notes, so you can go use the RSVP code. Something of note: apart from the normal Ignite things like the breakout sessions, we've also got the pre-day stuff. The main Ignite event starts on November the 19th and the pre-days are on November the 18th, and there are three security pre-days.
There is the Microsoft Ignite Security Forum, which is a decision-maker type thing. And then we also have two pre-day labs. Now I'm biased, but I'm extremely excited about these, and that might be because I'm running them or am overall on the hook for these things. So the first one is AI Red Teaming in Practice. It's a hands-on workshop run by the AI Red Team folks. It was also taught at Black Hat earlier this year. Those people are amazing; they really know their stuff.
It's very different from anything we've done at Ignite before, because there are actually no Microsoft products involved in it. It is just the principles and some of the open source tooling and techniques you can use to red team things. And the second one is Secure Your Data Estate for a Copilot for M365 Deployment, and that's going to be a lecture-based workshop. And of course, top of mind for a lot of people is: I want to deploy a Copilot.
And unsurprisingly, a lot of people start with M365 Copilot. So in that workshop, we're going to talk about how you can do that, how you can understand the state your data is in, and how you can make it secure so you can let some AI and Copilot run loose on it without the Copilot accidentally sharing information it shouldn't with the wrong people. So that's the first one. I gave far too much time to that, but it's something I'm very heavily involved in.
But let's look at the other things. We've got Build and Secure Your Apps with App Service and Defender for Cloud. There's a new capability for the better-together story: if you're using Azure App Service, you can now use Defender for App Service, which is part of Defender for Cloud. Now you may remember we've had a good friend of the podcast, Yuri, on many times to talk about Defender for Cloud. So you should go and have a look at that.
Now, a couple of episodes ago we talked about FIPS mutability support in AKS; that has now gone GA, which is very quick for something to go from public preview to GA. So well done to AKS. FIPS is the Federal Information Processing Standards, so if you're in the US and you need to use that, it's now GA. Hooray. And Azure Monitor Metrics export is now in public preview.
So that means that through DCRs, data collection rules, you can now route Azure resource metrics out to storage accounts and event hubs, and from event hubs you can send them elsewhere. So if you want to put your Azure Monitor metrics somewhere else or do something with them, you can do that. And then a couple of big announcements that, at the time of recording this, were only announced a couple of days ago. So, exciting: we're doing Trustworthy AI.
So this kind of does what it says on the tin: we've announced that we're doing Trustworthy AI. It's a set of principles that we're building our AI on. We'll put a link to it so you can have a look and read all about it. You'll be hearing quite a lot about that in the next few months for sure. And then last but not least for me, and this has been a big news section for me, we have the public preview of hybrid Azure AI Content Safety.
Now Azure AI Content Safety has been around for a little bit now. That is essentially when we put filters in your AI so it can filter for hateful content, violence, sexual content, etc. But hybrid Azure AI Content Safety will also help you look at stuff on-prem and in disconnected containers. So it's a little bit more flexible. As always, we'll put the link in the show notes and you can go and have a look. And that is me for the news, Michael. So over to you. Yeah, I've got quite a few as well.
So the first one is we've just released an update on the Secure Future Initiative. It's basically a progress update, and it's well worth reading; it gives an idea of all the things that we're working on right now. Essentially the big overarching metric, if nothing else, is that we've got the equivalent of about 34,000 full-time engineers working on this stuff.
And we're starting to see a lot of the fruits of that effort, not just internally, but also some of the products that are getting changed as well. So I'll actually touch on a couple of those. The first one is we've just released a video on secure and scalable quick starts for Azure functions using the Azure Developer CLI.
There's a huge focus in this video on security: best practices for securing apps, how to use VNet integration, how to use managed identities and so on to access things like Azure Storage. Really good to see; huge fan of managed identities. Next one: Sarah touched on this with Microsoft Trustworthy AI. Definitely go and check out the video on that; it's well worth looking at. Microsoft Playwright — I didn't even know what Microsoft Playwright was, to be honest with you, but now I do.
It's an end-to-end web testing tool. And we've now added Entra ID support for Microsoft Playwright testing. Very cool because if you want to test a system that's using Entra ID, then this is the way to go. Next one is managed identity support for WordPress on app service. This is really cool.
Again, I swear that every few episodes I talk about some new feature, some new product that is getting managed identity support, it's critically important that we remove credentials from the environment and managed identities are one way of doing that. And this is actually a big focus of SFI as well. So here we've got a third-party product, WordPress, and we're now supporting managed identities for that as well, which is great to see.
Sort of in a similar vein, Azure Bastion now has Entra ID support for SSH connections into the portal. Again, this is really cool to see because I don't think SSH would normally have worked with Entra ID without some kind of extra work. So this is good to see as well. So again, removing credentials from the environment. Next one is managing network security groups on subnets via Azure Policy. This is really cool to see. I'm a huge fan of Azure Policy.
I'm a huge fan of NSGs as well, but Azure Policy is just a way of deploying an environment or configuration in the environment so that the policies are always applied. So you can say that certain subnets must have certain kinds of network security group rules by default. Something from my old stomping ground, SQL Insights, which apparently never came out of preview. I didn't know that.
But SQL Insights will be retired on December the 31st this year, 2024, and you should replace it with products like Database Watcher for Azure SQL. So again, if you're monitoring your SQL environment using SQL Insights, it's going to be retired at the end of this year. The last one I want to touch on because I think it's incredibly important: no doubt by now most of you have seen the email, and if you haven't, you need to.
And that is that we're announcing mandatory multi-factor authentication for Azure sign-in. Now, this will be a slow rollout, and what I mean by that is we're starting with the Azure portal. So basically, if you're going to access the environment through the Azure portal, the accounts doing that must have MFA, multi-factor authentication, enabled. This is going to come into effect towards the end of October this year.
You can opt out of it, I believe, until about April next year. But if you don't, it will come into effect in October. And I also want to point something else out that's really, really important here: this is just for the portal for the moment. Over time, we will expand that to cover more things like API access and so on. But right now, it is just for the portal.
I realize it's not a complete defense, because you can still potentially access other resources using, say, API access. But this is a starting point, to make sure that everyone gets on the train at the beginning so that we can roll this out further in the future. All right, that's all I have in the news department. So why don't we switch our attention to our guest?
As I mentioned, our guest this week is Nestori Syynimaa, who's here to talk to us about purple teaming Entra ID. So with that: hey, Nestori, thank you so much for joining us this week. Would you like to take a moment to introduce yourself to our listeners? Yeah, thanks for having me. So my name is Dr. Nestori Syynimaa, also known as Dr. Azure AD, and I've been working at Microsoft since January this year. So what is it, nine months or so.
And I'm working now in MSTIC, which stands for Microsoft Threat Intelligence Center. And I'm pretty much doing the same thing I did before joining Microsoft. So I try to break our stuff, then help to fix it, and of course help to build detections for it. And those eventually become part of our tools like MDE or MDA or whatever Defender products we have. And I think people know me best from my toolkit called AADInternals, which is used quite often for purple teaming Entra ID.
So that's a very quick introduction. Okay, Nestori, I'm going to have to ask you first of all: as you said, you're known as Dr. Azure AD. How did you get that nickname? Okay, well, I am a doctor. I did my doctorate in 2015. It wasn't about security as such; it was about informatics, and it's from the University of Reading, England. So that's the doctor part. And I know a little bit about Azure AD, which is now Entra ID. So that's why I picked the name.
Now, I warned you I was going to ask you about this when we had our little pre-record chat. Can I ask you about your car registration? Yeah, sure. So it was about 15 months ago or something like that. I ordered a new car with vanity plates, of course, and the registration number is AAD365. We were having a vacation in Spain, and when we flew back I was able to pick up the car. But during the vacation, Microsoft announced that, okay, we are going to change the name to Entra ID.
And I was like, oh yeah, that's the story with my car plates. So it's kind of a joke with our friends that every time Microsoft rebrands something, it costs me like 1000, because that's the price of new plates. But I'm still not going to change the plates. I think I'm going to have to find a link to your socials where there's a picture of that to put in for the listeners, because I remember seeing it and I felt so bad for you.
But I guess I wanted to ask you, how did you decide, or how did you get into doing research about Azure AD specifically? Was it an accident? Did you choose it? Because as you said, you've been looking at how we can purple team Azure AD or Entra ID for a long time. How did you get into it? Yeah, well, it's kind of a long story. So back in 2008 or something, I was CIO of the third largest Finnish telecom operator.
And we were building up a hosted messaging and collaboration environment, like email and SharePoint and that kind of thing. So we built that environment. Then fast-forward to 2013 or something like that. Yes, it was 2013. One of my old colleagues was a director in a Finnish organization that provides IT training for administrators and those kinds of roles. And nobody else in the company, I mean the other trainers, believed that cloud was going to be a big thing.
So he asked me: because you built that hosted messaging and collaboration environment, could you come here and train cloud? And I was like, okay, why not? At that time it was just Office 365, which is now Microsoft 365 and so on. But Entra ID was part of that since the beginning. And when I was doing the training, of course, you need to make sure that everything is secured and the environment is secure and so on. So I kind of accidentally went down that path.
Because in the training, you build your environment, you connect your on-prem AD to the cloud, you build the synchronization. At that time, we did a lot of ADFS also. So that's kind of how it started. And then I started to research more and more and more. And the first version of my tool, I think, was published in 2018. So yeah, that's how it started. Do you want to talk about the AADInternals tool and just explain what it does and how people use it on a daily basis?
Well, it's a PowerShell module. And why PowerShell? Because everybody who is administering Windows is using PowerShell, and if you're administering servers or the Microsoft cloud, you usually use PowerShell too. And also, when I found something that I thought might be a vulnerability, I reported it to Microsoft, to the Microsoft Security Response Center.
And they usually require a proof of concept that's easy to use, so that they can replicate the issue. So I implemented each one as a new feature in my tool, and then I was able to send that to MSRC and they were able to replicate it. So over the years, I have put a lot of things in there. I can't even remember how many different functions there currently are, but there are hundreds. And it is used by, well, bad guys are using it also.
But typically it's used by administrators who want to check what the security posture of their environment is. I have implemented in the tool the same techniques that threat actors are using. So when you have the toolkit, you can run it in your own environment and test, using the same tactics, techniques and methods that threat actors use, whether you can detect those attacks, and then how you can prevent them after you have detected them.
So that's kind of how it is used. Do you want to give some examples? I'm just curious. I mean, I agree 100%. I think knowing how bad actors go sniffing around environments, looking for vulnerabilities, and then potentially exploiting those vulnerabilities or configuration weaknesses is something that all people who deploy on any platform should understand. They need to understand what they're up against.
So, do you want to give me some examples of what sort of things it can do? What would someone do on a daily basis if they were using the tool? Well, this is probably not going to happen on a daily basis, but if you remember the Solorigate attack a few years ago, that kind of started with a Golden SAML attack. I mean, when the attackers were pivoting from on-prem to cloud.
So for instance, you may run my tool on a server that is running ADFS, which is typically used for identity federation. You run it and export the token-signing certificate. That's one thing you should be able to detect on the endpoint: that someone is running this command, or somebody is extracting those certificates. And when you have those, you can forge SAML tokens and then log in as basically any user in the tenant. You can also bypass MFA with that.
Now, while you have that certificate, you can start doing this: you log in as any user you want, maybe bypass MFA, and then as a purple teamer, maybe the blue side of the purple team can try to see how well we are detecting that, I mean, the forging of those SAML tokens. So that's the kind of thing you can do with it. I want to be really pedantic: when you say exporting the signing certificate, I assume you mean the certificate and the private key? Yes, correct. So yeah.
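The Golden SAML technique Nestori describes hinges on one fact: the relying party trusts anything signed by the stolen token-signing key. As a rough sketch (in Python rather than PowerShell, with simplified element names and made-up contoso.example values — real assertions are namespaced SAML XML signed with XML-DSig), here is the shape of what an attacker holding that key could mint:

```python
import hashlib
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

def build_assertion(name_id: str, issuer: str, minutes_valid: int = 60) -> str:
    """Build a minimal SAML-like assertion claiming to be `name_id`."""
    now = datetime.now(timezone.utc)
    assertion = ET.Element("Assertion", {
        "Issuer": issuer,
        "IssueInstant": now.isoformat(),
        "NotOnOrAfter": (now + timedelta(minutes=minutes_valid)).isoformat(),
    })
    subject = ET.SubElement(assertion, "Subject")
    ET.SubElement(subject, "NameID").text = name_id
    return ET.tostring(assertion, encoding="unicode")

def assertion_digest(assertion_xml: str) -> str:
    # In a Golden SAML attack, this digest is signed with the *stolen*
    # token-signing private key; the relying party checks only that
    # signature, so whatever NameID the attacker chose is trusted --
    # which is why MFA can be bypassed entirely.
    return hashlib.sha256(assertion_xml.encode()).hexdigest()

xml_doc = build_assertion("admin@contoso.example",
                          "http://adfs.contoso.example/adfs/services/trust")
print(assertion_digest(xml_doc))
```

The defensive takeaway is the one Nestori states: detection has to happen at the point where the certificate and private key are exported, because once the forged token exists it is indistinguishable from a legitimate one.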
What does it take to do that? I'm just curious more than anything else. That would assume that, first of all, whoever is getting in there has access to the certificate and private key. So you have to run as an account that has access to that, right? Because we're not going to let just anyone have access to the private key. Yeah, that's correct. But usually you only need local admin on that ADFS box, and then you can do that. Okay, fair enough. All right, what other examples?
I'm just curious, what other examples have you got? What else? So you can simulate phishing attacks also. You lure a user into using this authentication code thing; it's the device code authentication flow. It's pretty similar to if you're using, say, Netflix: it shows you a code, and you need to use your phone or your laptop to go to Netflix and enter the code, and then your TV kind of logs in.
So Azure has a similar kind of functionality. If you want to log in to Cloud Shell, for instance, you don't have the GUI, so you need to use some other way, and this device code is one of those. So basically you can do that: you lure a user into logging in using that code, and then you will get a refresh token and access token. And you can log into, let's say, Teams, for instance, or Office. So that's what's shown to the user.
And then when you have the refresh token, you can use it to get access tokens to other services. And that's also what you can do with AADInternals: you can simulate the stuff that attackers would do. So it does sound like you need quite a bit of knowledge about tokens, and probably OAuth 2 and SAML tokens and that sort of stuff. Is that a fair comment? I mean, to take full advantage of this; or is it really just a black box?
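The device code phishing flow just described maps onto two HTTP calls. The endpoint paths and grant_type below follow the standard device authorization grant (RFC 8628) as exposed by the Microsoft identity platform v2.0 endpoints; the tenant, client and scope values are placeholders, and this sketch only builds the requests rather than sending them — a real tool would POST these payloads and poll the token endpoint:

```python
# Two legs of the OAuth 2.0 device authorization grant (RFC 8628).
AUTHORITY = "https://login.microsoftonline.com"

def device_code_request(tenant: str, client_id: str, scope: str):
    """Leg 1: get a user_code to show the victim and a device_code to keep."""
    url = f"{AUTHORITY}/{tenant}/oauth2/v2.0/devicecode"
    return url, {"client_id": client_id, "scope": scope}

def token_poll_request(tenant: str, client_id: str, device_code: str):
    """Leg 2: poll until the user enters the code; a successful response
    carries an access token *and* a refresh token for later reuse."""
    url = f"{AUTHORITY}/{tenant}/oauth2/v2.0/token"
    return url, {
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "client_id": client_id,
        "device_code": device_code,
    }

url, data = device_code_request("contoso.example", "<client-guid>",
                                "openid offline_access")
print(url)  # -> https://login.microsoftonline.com/contoso.example/oauth2/v2.0/devicecode
```

The phishing angle is that the attacker performs leg 1 and sends the victim only the user code and verification URL; once the victim completes sign-in, leg 2 hands the attacker the tokens.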
Like, you just say, I want to do this, and the tool just goes ahead and does it? Or is there some requirement to have at least a baseline level of knowledge of what the heck is going on under the covers? Yes, you need to have some basic knowledge so that you at least know what you are doing. But for instance, for the parts which export the certificate, you don't need to know anything. You just say, give me the certificate.
And it gets it from the box using different methods. So I'm going to ask you a question, Nestori. There are probably people listening who've never heard of AADInternals. If anybody listening wanted to use it or give it a go, are there any particular modules or parts of it, or particular things you can run, that you'd recommend people start with?
No, not really, because it's like a toolbox and there are tools for different purposes. So it depends on your purpose. I can't say there's a specific way to start, but well, maybe, you know, just try to get those tokens: sign in, get a token, and try to access some service. So maybe that's where to start. So what do you see as common weaknesses?
Like, if there was one thing that people seem to find the most using this tool; if you were to enumerate the top two or three, what would they be? That's a really good question, because I don't actually know. I just made the tool, and others are using it. But what people quite often tell me is that they have learned so much about how things work, because I also used to blog about things a lot.
And when I implemented the tool, because it's PowerShell script based, you can just open up the script and see how it's working under the hood. So maybe that way, I would say, it has been most useful to people. Yeah, so it's kind of an educational tool also. Yeah, I really back that point up; I'm the sort of person that really needs to understand how something works.
You know, if I want to know what something does, I want to know all the machinery underneath. I'm reading a book right now that explains how Rust does all its stuff under the covers, and I found that to be more useful than just reading a book on, you know, advanced Rust, because I have a much better idea of how the borrow checker works. And I realize we're going completely off topic here.
But the point I'm trying to make is that, yeah, I personally learn by knowing how something works. I mean, you've got to understand the basics. Once I understand the basics, then I know what I need to go ahead and learn. I made a comment about this on X a few weeks ago, but I got a new Surface Laptop 7, which has an ARM64-based CPU. So have a guess who's learning ARM64 assembly language.
You know, I don't have to learn it, but I just want to learn it, because now I'm in a position where I can actually experiment with it. I know people learn in different ways, but certainly the way I learn is by learning how things actually work. So yeah, it's pretty cool stuff. Yeah, the process of how the tool has evolved is that I want to study, as you mentioned, how something works, and then I implement it in my toolkit. So okay, let's go back a little bit.
So, how does cloud work? Basically you have some client that is talking to some service on the internet using APIs. Everything happens using APIs. So say there's some new functionality and I want to learn how it works. I implement it in my tool so it works as it should. And then I try to tweak some parameters and do things that shouldn't be possible. And that's the way I found most of the vulnerabilities I have found. So that kind of helps there.
So first implement it as it should work, in the correct way, and then try to, you know, tweak it and do it in some other way. That's a good point. I don't think a lot of people realize this, but every single thing that you do when you're talking to Azure or any other cloud platform, whether it's through the portal, through the CLI, or through some .NET assembly, you know, the Azure SDK, at the end of the day, it's just REST calls, just REST API calls.
That's really all of it. In fact, I think there's sort of a meta best practice there, which is if you are building anything that uses REST endpoints, REST APIs, then you need to make sure those things are secure as well. In fact, there are, you know, REST APIs are becoming an increasingly important attack point. In fact, I'll put a link in the show notes.
There's a little newsletter that I get, I think it's every week or something, which is like, you know, "this week in REST endpoint security". And vulnerabilities have been found where attackers, to just make something up, could enumerate all the movies that someone's watched on a streaming service, anonymously, because they just call the API with some key, some random key, or something that's predictable.
These are all incredibly common attacks these days. So yeah, your point there is well taken. At the end of the day, they're just API calls, that's all they are, REST endpoints. And it's incredibly important that those endpoints are secured. Yeah, also one thing that is related to this is: how does the cloud know who you are when you are calling those APIs? Well, you are telling the cloud who you are.
You are telling the cloud what device you are using, what client you are using, and so on. So that was also one thing I realized about how the cloud works, and it allowed me to find some interesting stuff. And that's the Entra ID part. Correct. Right. You may decide not to use Entra ID; there may be some other authentication or authorization model that's less secure. So it's okay.
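A concrete way to see the "you tell the cloud who you are" point is to decode the claims inside a bearer token: they are just base64url-encoded JSON that the issuing flow filled in partly from what the client presented. This sketch fabricates an unsigned token to show the mechanics; the claim names mirror common Entra ID access token claims, but real tokens are signed and must be verified before being trusted:

```python
import base64
import json

def decode_jwt_claims(jwt: str) -> dict:
    """Decode (without verifying!) the payload segment of a JWT."""
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated, unsigned example token: header.payload.signature
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "none", "typ": "JWT"}).encode()).decode().rstrip("=")
claims = {
    "upn": "user@contoso.example",                    # who the client claimed to be
    "appid": "00000000-0000-0000-0000-000000000000",  # which client app was used
    "deviceid": None,                                 # no managed-device claim
}
payload = base64.urlsafe_b64encode(
    json.dumps(claims).encode()).decode().rstrip("=")
token = f"{header}.{payload}."

print(decode_jwt_claims(token)["upn"])  # -> user@contoso.example
```

Tweaking these client-asserted inputs, as Nestori says, is exactly where "things that shouldn't be possible" start to show up.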
Nestori, you already mentioned, and I know this already, that you joined Microsoft quite recently compared with your career in, you know, looking at Azure AD and doing research. So how did you come to join us, rather than carrying on your Azure AD or Entra ID research as an external researcher? Yeah. So when I was working outside Microsoft, everything I did was so-called black box testing. My only interface with the services was the API.
So I was sending stuff there and something came back, and I didn't really know how things worked internally. So it could take a long time to see if something was secure or not. At Microsoft you have access to more telemetry and you have access to the source code. So you can now see inside the box; it's not black anymore. It's more like white box testing, or glass box testing, or whatever. So I can now do more good.
I can find more vulnerabilities faster, and in some cases they can even be fixed before they're in production. So that was basically the reason why I joined. And then I talked about this with some of my current colleagues and asked whether there might be something interesting for me, maybe not. We did a bit of negotiation over a couple of months, and then there was a nice position for me, which I applied for and got selected.
So that's kind of a very short story of how I ended up here. I guess the last thing I wanted to ask you was: from what you're allowed to talk about, are you investigating any new, interesting types of identity attacks or identity research that you can share with our audience? Or anything in the identity space that you think folks should be aware of, that we're seeing more or less of; anything you've noticed that would be interesting for folks to hear about?
Yes. So nowadays, when you're using those APIs, you are using quite modern authentication, called token-based authentication. And there are a couple of attack scenarios there. For instance, in the Solorigate attack there was a token-based authentication attack, where you stole the secrets which are used to sign those tokens. That's a prerequisite for being able to forge those tokens. So that's one example.
And to mitigate that, you just need to protect very well whatever on-prem server holds those secrets, so attackers can't do that anymore. And now that we have made that quite difficult, nowadays the bad guys are doing the token theft thing: they are doing phishing, or they are compromising endpoints and then stealing access tokens from there. But now we have made that also quite difficult.
For instance, we can require that you can access our services only from compliant devices. So if your laptop, for instance, is managed and is marked as compliant, meaning it has certain security features enabled, it is up to date and so on, depending on your organization; if you only accept access from those kinds of devices, you are good to go.
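The compliant-device requirement is, at its core, a policy gate evaluated at sign-in. This is a toy model only — real Entra ID conditional access evaluates many more signals, such as location, sign-in risk and the target application — but it shows why a stolen token replayed from an attacker's unmanaged machine gets blocked:

```python
from dataclasses import dataclass

@dataclass
class SignIn:
    mfa_satisfied: bool
    device_managed: bool
    device_compliant: bool

def evaluate(sign_in: SignIn) -> str:
    """Toy conditional access gate: require MFA plus a managed, compliant device."""
    if not sign_in.mfa_satisfied:
        return "block: MFA required"
    if not (sign_in.device_managed and sign_in.device_compliant):
        return "block: compliant device required"
    return "allow"

# Stolen-token replay from an unmanaged device fails the device check even
# though the MFA claim carried in the token looks satisfied.
print(evaluate(SignIn(mfa_satisfied=True, device_managed=False,
                      device_compliant=False)))
# -> block: compliant device required
```

Layering the checks is the point: each signal the attacker cannot forge from their own hardware is another gate they fail.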
So this means that things are currently very hard for threat actors if we have this zero trust configuration. So now I'm currently focusing on: can we break this thing somehow? Yeah, sad story there. I had a cell phone, and one day I got a message saying, hey, you can't use this cell phone anymore, because the version of Android you're using doesn't support the security features that we require for you to connect to the corporate network.
So I had to go out and buy a new cell phone. I mean, there was an option to put a FIDO key in the USB port on the bottom, but I felt that was just going to be a little bit messy, so I didn't bother with that. So I got a new cell phone. But yeah, that's a really good example of something where, you know, the risks have increased and we require endpoint protection of a certain level, and the version of Android that I was using didn't support it.
So yeah, I can't access the corporate network with that device anymore. Yeah. And when I look back to 2013 or so, when I started training on this Office 365 stuff, at that point the message from Microsoft was a little bit different. At that point it was bring your own device: everybody was able to connect to the cloud from anywhere using their own personal devices. But now it's a little bit different, because the risks are also different than they were at that time.
So it's been quite good to see how this has evolved to the current point, where you actually require secure devices, multi-factor authentication or phishing-resistant authentication, passwordless authentication as I would say. Yeah. So it's bring your own device, but we're going to tell you which devices you're allowed to bring, is kind of what it sounds like. But that's fine.
I mean, at the end of the day, if you've got some sensitive corporate assets, you don't want some hacked endpoint accessing the corporate data. You just don't. So yeah, I'm all for that. Yeah. Also, 10 years ago, there was, in quotes, "only" email and Skype, and then you had SkyDrive, which changed to OneDrive. Anyway, it was more like productivity tools, but now everything is in the cloud.
So it can be your line-of-business application that gets compromised if your user's endpoint gets compromised. So it's a bit different than it was then, but yeah, I like the current, let's say, security posture much better. All right. So a question that we've decided we're going to start asking our guests is: what does a typical day for Nestori look like? Okay. Yes. Well, I'm located in Finland, so it's a little bit of a different time zone than the US, especially the West Coast.
So I usually wake up quite early, but I do things other than work in the morning, because I want my working hours to overlap with my colleagues in the US. I start in the morning by looking at social media. I follow a lot of researchers, and it's good to know if anything happened during the night. Then I start doing my work, and it depends where I am and so on, because, well, there are a couple of things I do. One is the research thing.
And another is that I do a lot of speaking. For instance, Sarah mentioned the Ignite conference. That same week I'll be in the US, but I'm speaking at three other conferences, so I'm not able to make it. And when I'm doing the speaking thing, well, I'm usually traveling, tuning my slides a couple of minutes before the presentation and so on. So it's a kind of different life.
But when I'm back at my office, my home office, after doing all that social media stuff, I start doing the research. It depends what I'm researching. And I have meetings with my colleagues, I comment on blog posts or technical publications and so on. So it's very, how would I say, it depends. It's always a different day; there are never two days the same. And that's also why I like this job a lot.
It's not like my grandfather, who used to go to the factory for 30 years and do the same kind of work day every single day. No, not in this job. So Nestori, we always ask our guests at the end of the show if they'd like to leave our listeners with a final thought. It can be advice or whatever you like. What is the final thought you'd like to leave them with? Use MFA and require compliant devices. That's it. Quite simple. I think we'll just leave it at that.
I'm a big fan of short and sweet. I think if you can tell a story or give some advice in a very tiny sentence, it's generally good advice. All right, so with that, let's bring this episode to an end. Nestori, thank you so much for joining us this week. I know you're incredibly busy, and I know it's getting on in the day in Finland. So again, thank you so much for your time. And to all our listeners out there, we hope you found this episode of interest.
Stay safe and we'll see you next time. Thanks for listening to the Azure Security Podcast. You can find show notes and other resources at our website, azsecuritypodcast.net. If you have any questions, please find us on Twitter at AzureSetPod. Background music is from ccmixter.com and licensed under a Creative Commons license.