
An Inside look at Securing Azure

Jan 22, 2021 · 34 min · Season 1 · Ep. 20

Episode description

In this episode Michael, Sarah, Gladys and Mark talk with guest Alex DeDonker, a member of the Azure STRIKE team, about his team's role in helping secure the Microsoft Azure cloud platform. We also discuss the latest Azure Security news for the following services: Azure Sphere, Azure Backup, Managed Disks, Azure Security Center, Azure Policy, Azure Defender for SQL, Azure Health Bot and Azure Automation. Finally, Mark discusses some updated Solorigate resources and human-operated ransomware.

Transcript

Welcome to the Azure Security Podcast, where we discuss topics relating to security, privacy, reliability, and compliance on the Microsoft Cloud Platform. Hey everybody, welcome to episode number 20. We have Sarah, Mark, and myself. Gladys is away this week. She's actually doing some training for her new position. We also have a special guest, Alex DeDonker, who works on the Azure Security team. But before we get to Alex, let's take a look at the news. Sarah, why don't you kick us off?

A couple of cool things took my fancy in the news recently; so much stuff, even though we're just into the new year. To start with, Application Change Analysis has got a new UI, which shows changes in all the Azure resources under the subscriptions that you've got at the moment. If you're familiar with the old UI, it only showed a limited number of changes, and you couldn't filter them.

If you had noisy changes that were being made regularly, you wouldn't be able to see the actual changes you were interested in. Definitely go and have a look at that, because now it's in public preview and it's much, much nicer. It obviously wouldn't be a news segment without me talking about something to do with Azure Monitor. The next one is that, in public preview, you're now able to monitor Azure Data Explorer clusters with Azure Monitor.

Lots of things that sound familiar here. If you're not familiar with these products, Azure Data Explorer is like a big data lake where you can chuck all your data in, query it, and do whatever you want with it. Azure Monitor, of course, is our entire monitoring platform, and now you can get some stats from your ADX cluster. You can look back at what's been queried, who's been using the most CPU in your ADX cluster, and see who's been running all the queries.

Essentially, you can get some operational stuff, which is good for understanding who's using your ADX, and maybe who's abusing it, hopefully not. But yeah, that's now in public preview. We've also now got built-in Azure Policy support for NSG Flow Logs, which is lovely; it means you can actually enforce NSG Flow Logs being turned on or off in Azure Policy as part of your security baseline, which is sweet. Last but not least, we have the ASC, Azure Security Center, updates for January 2021.
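
To make those ADX operational stats concrete, here's a minimal Python sketch that aggregates toy query-log records to answer the questions above: who ran the most queries, and who used the most CPU. The record field names are made up for illustration and are not the actual Azure Monitor schema.

```python
from collections import defaultdict

# Toy records standing in for the query telemetry Azure Monitor can now
# surface for an ADX cluster. The field names are illustrative only.
query_log = [
    {"user": "alice", "cpu_seconds": 12.5, "table": "Events"},
    {"user": "bob",   "cpu_seconds": 48.0, "table": "Traces"},
    {"user": "alice", "cpu_seconds": 30.0, "table": "Events"},
]

cpu_by_user = defaultdict(float)
queries_by_user = defaultdict(int)
for record in query_log:
    cpu_by_user[record["user"]] += record["cpu_seconds"]
    queries_by_user[record["user"]] += 1

top_cpu_user = max(cpu_by_user, key=cpu_by_user.get)
print(top_cpu_user)              # bob
print(queries_by_user["alice"])  # 2
```

In practice you'd express this as a Kusto query against the cluster's diagnostic logs rather than in client-side Python, but the aggregation is the same idea.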

There are only two updates this month. We've got the CSV export of recommendations. Security Center gives you hygiene recommendations based on the CIS benchmark and best practices. You can see them in the UI, and you can also export them via Event Hub. But now you can also get them through a CSV file, which I have had some customers ask for.
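
Since the recommendations now come down as a CSV file, a quick way to slice them is with Python's standard csv module. The column names and rows below are hypothetical, just to show the shape of such a post-processing step; the real export's schema may differ.

```python
import csv
import io
from collections import Counter

# Hypothetical excerpt of a recommendations CSV export.
# Column names are assumptions for illustration, not the real schema.
exported = """recommendation,severity,resource
Enable MFA,High,sub1
Apply disk encryption,Medium,vm1
Enable MFA,High,sub2
"""

rows = list(csv.DictReader(io.StringIO(exported)))
by_severity = Counter(row["severity"] for row in rows)
print(by_severity["High"])  # 2
```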

The other exciting thing that's happened this month is that the vulnerability assessment, which uses Qualys in the background, for on-premises machines and machines in other clouds, has now gone GA. That's great, because it means you can vulnerability scan everything. Some customers, of course, won't use things, or can't use things, until they're in GA, when they have an SLA and support for them. If you've been waiting for that one, it's here.

Hooray. That's all my news for this time. Quite a few things piqued my interest over the last few weeks. As you said, it's actually been pretty busy even though it's still only January. The first one is there's now support for backup of Azure Managed Disks. As you might guess, prior to this, there was no backup for Azure Managed Disks. But now there is, which is fantastic because it's a totally agentless backup.

You can just set the policy and essentially Azure goes ahead and does all the work, which is absolutely fantastic. The next one, talking about Azure Backup, is that we now have support for encryption at rest using customer-managed keys. This is something that just about every customer I have worked with over the last 24 months has said they wanted, customer-managed key support, because backups often contain sensitive information.

Obviously, you could encrypt the data itself, so the backup will automatically contain the encrypted data. But in this case, you can actually encrypt the backup itself. I know of at least two or three customers that immediately come to mind who are very happy that that's now finally available. The next one is Azure Defender for SQL; there are some updates that came out. The first one is that it's now generally available for SQL Server inside of a VM, or on machines, I should say.

This is really important because I know a lot of customers who, for specific reasons, want to run, say, SQL Server in a VM rather than as a PaaS, platform as a service, offering. Perhaps they're just doing a lift and shift, and so they want to do it with the least amount of friction possible. Well, Azure Defender for SQL servers will now actually look inside that particular VM and do things like baseline configuration analysis. This is actually pretty cool.

You can set a baseline for security in the SQL database. There's actually a PowerShell script that you can also run; I'll put a link to that in the show notes. The PowerShell script is outside of the Azure Defender aspect, but it's useful for understanding this SQL baselining. We can measure against that baseline, and there's detailed benchmarking information. There's also much better integration with Azure Security Center. It just looks a lot nicer.

The update also includes Azure Defender for SQL support for Azure Synapse Analytics dedicated SQL pools, which is also generally available. Yeah, so just all-around fit and finish, making the Azure Defender for SQL support look a lot more complete. It's fantastic work. The next one is Azure Health Bot, which is now generally available. I want to point out this has got nothing to do with security whatsoever, at least not directly.

The only reason I brought this up is just in light of the fact that I've been working a lot with healthcare organizations of late, and with COVID-19 obviously being front and center for everybody on the planet today. We now have this Azure Health Bot, and this is actually pretty cool. It allows developers in healthcare organizations to build and deploy AI-powered, compliant conversational healthcare experiences at scale.

Again, it's not a direct security thing, but I thought I would put this out there because it's pretty topical these days. The next one is that Private Link support for Azure Automation is now generally available.

As I've mentioned many times, you're probably sick of hearing me say this, but we've seen over the last few years a bunch of trends inside of Azure, especially for platform as a service offerings: things like customer-managed key support for encryption of data at rest, and the other one is Private Link support. This is where you can have a PaaS offering, in this case Azure Automation, and essentially have it as an extension of your on-prem network. That's really great to see as well.

The second to last one is that public IPs can now be upgraded. Not a lot of people know this, but an IP address is actually a resource in Azure; it's a feature and a service. You can create these public IP addresses, and there are two versions, basic and standard. Historically, if you said, I want to use a basic IP address, you were basically stuck with the basic IP address.

Well, the standard IP address includes lots of other capabilities that are not there in basic, and you couldn't upgrade. Well, now you can, and that's another great usability feature that I think a lot of customers will be happy to see. The last one is one of my favorite topics, as he looks across at his Azure Sphere dev kit; I have my Azure Sphere board in my hands right now. Azure Sphere OS version 21.01 is now available for evaluation.

There are some new features in there, most notably around wolfSSL. If you're not familiar with wolfSSL, it's a very small library that does SSL and TLS, designed for small IoT devices. I actually have it running on a little Arduino; an Arduino Uno works nicely. Yeah, that's my last topic there. Azure Sphere OS 21.01 is now available for evaluation. Over to you, Mark. News in my space. We'll start with human-operated ransomware.
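
What a library like wolfSSL provides on a microcontroller, Python's standard-library ssl module provides on a desktop: a TLS client context with certificate validation and hostname checking on by default. This is purely an illustration of what any TLS library does, not wolfSSL's own API.

```python
import socket
import ssl

# create_default_context() gives a client-side TLS context with
# certificate validation and hostname checking enabled by default.
context = ssl.create_default_context()

def fetch_tls_version(host: str, port: int = 443) -> str:
    """Open a TLS connection and report the negotiated protocol version."""
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

# Sanity checks on the context itself (no network needed):
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```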

We've been talking about this over the last couple of episodes, and finally we do have a landing site and the mitigation plan has been released. It's in the form of a PowerPoint deck. I like PowerPoint for those out there that don't know that. We did get that one out there.

It has a very prescriptive plan for addressing human-operated ransomware, which, honestly, given the flexibility of these attackers and their willingness to profit using pretty much any technique, tends to be a very broad one. It could also be used as a general security template plan, like what should I be doing and how do I protect myself against any threat, but ransomware is just the top one at the moment. That is out there. It's just aka.ms slash human operated.

We'll send you that link and put it in the show notes. We are expecting that trend to continue to grow and to impact our customers significantly. The next thing that some of you folks in cybersecurity may have heard of is this Solorigate or Sunburst attack. We have continued to release information on that. We have kept adding to our existing resource center, aka.ms slash Solorigate. That will also be in the show notes. A couple of notable new blogs.

We did talk about Zero Trust and Solorigate in a really nice blog by Alex Weinert, and how Zero Trust principles stood up to it. Fairly well, is my estimation. Then, for the more technical folks, the threat hunters and forensics deep-divers, there is a blog that just released as well on the second-stage activation, on how the different pieces that you've seen technically connect together. Very, very interesting read.

A lot of it honestly went over my head in terms of technical detail, but I was quite impressed by the sophistication of these attackers as I read through it, because a lot of pains were taken, a lot of effort, by these particular attackers to stay stealthy. Now, we've been getting a lot of questions lately around, how does Microsoft secure our environment?

We thought we would get a number of guests on the podcast to share how Microsoft is securing our code and our infrastructure, and lessons learned and best practices from there. Our first guest in this informal series is Alex DeDonker. Alex, would you like to introduce yourself? Yeah, thanks Mark. Like you said, I'm Alex, a program manager within Cloud Security here at Microsoft. I work on a team within Azure called STRIKE and have been a part of the team for close to two years now.

Strike is an internal security education and compliance program that provides employees with actionable knowledge and resources to support our all up security strategy. We're most known for our ability to create and deliver quality learning experiences on those strong security principles based on current and real-world threats. Essentially, Strike is a platform for community engagement, knowledge sharing and broad collaboration promoting on-the-job learning.

All the while, we're empowering our engineers and participants to raise the bar and build a healthy security culture. We do this by offering a variety of trainings and hands-on experiences that motivate and ready participants to securely design, build, and operate services. So Alex, if I'm an Azure developer, all of the security education, the mandatory stuff, the cool courses I'd want to take to learn more about security, that comes from your organization, right?

Most definitely. At the moment, Strike offers close to 100 online courses, not only for those developing our services on Azure, but also for those interacting with them, maybe even for the first time, both how to build securely with additional offerings that focus on the user and or customer.

So for example, we have a course that covers best practices to keep your Cloud applications and infrastructure secure, but also a far less technical course named How to Explain Cloud Security Basics to Anyone, just to give you the gist of it. Because I know Azure, developing Azure, has a lot of different roles in it, with people with different skill sets and different responsibilities.

Now, if I understand correctly, Azure effectively has a core base set of services that are out there for everyone, and then Azure is really a set of different individual feature teams that develop these individual capabilities and services. That's your customer; those two constituencies are the folks that you educate. Is that right? Correct, Mark. These individuals are within engineering. That's the common theme.

But as you know, engineering is broad and covers a ton of backgrounds, including but not limited to obviously software engineering, program management, design, data science, hardware engineering. Basically anyone that has the engineering discipline, they are most certainly within scope. So we do try and provide a diverse set of courses and trainings to meet all of the different players inside of our company.

Now, if I was in a customer organization and I wanted to set up my own education program to educate my own developers, application teams, and application security teams, what would be the things that would be most important to set up and build into that program? First and foremost, I think understanding your goals holistically and knowing the big picture of your product. I think the best way to go through this is threat modeling.

Threat modeling is an extremely valuable exercise that our team does a ton of workshops on, and gets behind acquisitions; for example, we'll always offer a security basics session, a threat awareness session, as well as threat modeling and then a threat modeling lab.

It's not going to cover all grounds, but as either a new company or someone trying to get started in that space, you'll have to go through all the security concepts, understand what threat modeling is, and you'll have to know your product inside out. Find people with pen testing experience, people that have threat modeled in the past, and almost build a V team or a team that can go through everything holistically, but make sure there's security experts in there.

That's where I'd start, and where, say, if an acquisition comes on board, my team will typically get involved. Very cool. Michael, I first learned about threat modeling from some of your work in the early days of Microsoft, some 15, 20 years ago. Are those similar recommendations to what you would have given back then? Yeah, nothing's changed, I think. We've always found threat modeling to be a really useful tool.

By tool, I just mean technique, not necessarily a tool per se, but just a very useful technique for understanding how an attacker may try to compromise the system, and also making sure you have the appropriate mitigations in place. We have plenty of knowledge and technology around code level issues, static analysis, dynamic analysis, and so on. But in the area of secure design, threat modeling seems to be the most common by far, and it's certainly grown a lot in the last 20 or so years.

If I look at the material that we created 20 years ago, it's nowhere near as slick and as efficient as it is today. Yeah, I always think of threat modeling as the design part of the process, whereas all the SAST and DAST and those kinds of things are much more the implementation part. But of course, it's a creative process, so design isn't always completely separate from implementation.
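
As a sketch of the design-time exercise being described here, this toy Python pass enumerates the six classic STRIDE threat categories against each element of a system diagram. The elements are invented for illustration; a real threat model would weigh each pairing and record mitigations.

```python
# The six STRIDE categories used as prompts in many threat-modeling sessions.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
]

# Invented elements of a hypothetical system sketch (data flows and stores).
elements = ["browser -> web app", "web app -> database", "backup store"]

def enumerate_threats(elements):
    """Pair every system element with every STRIDE category as a
    'what could go wrong here?' prompt for the modeling session."""
    return [(element, category) for element in elements for category in STRIDE]

threats = enumerate_threats(elements)
print(len(threats))  # 18
```

The value of the exercise is the discussion each pairing prompts, not the list itself; tooling like the Microsoft Threat Modeling Tool automates this enumeration against a drawn diagram.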

So Alex, one of the things I was very interested in digging into is the relationship with the red team: how involved is the red team in the various aspects of the STRIKE program and the education components? They're very much involved, and the red team actually provides some of our most popular content. I'll give you an example of what that partnership has looked like in the past.

So we'll take lessons learned from previous Azure red team operations, where we have actual pen testers present on their tactics, findings and insecure practices, and we just don't leave it there. We highlight the remediation steps taken and how best to work with the Azure red team after the operation is carried out. We typically have the impacted team even co-present on the matter at hand, and everything that went into securing the service after the fact.

Beyond that, we actually have a really exciting product that we built with the Azure red team called Cloud Capture the Flag. I know Capture the Flags are big time in cybersecurity, but ours is super unique. This partnership consisted of a lot of huddle ups, a lot of time with the red team determining what challenges and what vulnerabilities we should highlight in this experience.

If you've never heard of a Capture the Flag, basically it's a series of challenges that vary in degree of difficulty, and the participants are required to exercise different skill sets, that hacker mindset, and once a challenge is solved, a flag is given out that is usually tied to points, and you submit these to a CTF Capture the Flag server.

Typically, the highest-earning participants will indeed get a bunch of prizes, accolades, bragging rights. But what's unique about our version, and why we needed to partner with the Azure red team, is that it's in an Azure-hosted environment.

You actually get to fiddle around with our services, attack them in a real-world setting, and the different vulnerabilities and procedures were identified as current and valuable threats to go through in these different challenges, including the MITRE ATT&CK matrix, which is super hot right now and something everyone should understand. But at the end of the day, people unlock new knowledge for improving security in their Azure environment just by hacking it.
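
The flag-and-points mechanics described above can be sketched in a few lines of Python. The challenge names, flags, and point values are invented for illustration; the real Cloud Capture the Flag runs against live Azure-hosted targets.

```python
# Toy CTF scoreboard: each challenge has a difficulty-based point value,
# and submitting the correct flag credits the player. All data is made up.
challenges = {
    "storage-misconfig": {"flag": "FLAG{public_blob}", "points": 100},
    "priv-esc":          {"flag": "FLAG{role_chain}",  "points": 300},
}

scores: dict[str, int] = {}

def submit(player: str, challenge: str, flag: str) -> bool:
    """Score the challenge if the submitted flag matches; reject otherwise."""
    entry = challenges.get(challenge)
    if entry and entry["flag"] == flag:
        scores[player] = scores.get(player, 0) + entry["points"]
        return True
    return False

submit("mallory", "storage-misconfig", "FLAG{public_blob}")  # correct flag
submit("mallory", "priv-esc", "wrong-guess")                 # rejected
print(scores["mallory"])  # 100
```

A production CTF server would also deduplicate resubmissions and rate-limit guesses, which this sketch deliberately omits.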

Now I'm going to step into a little bit more of a manager-hat question. How do you measure success? I mean, is it all subjective, highlighting key wins and key impacts in an anecdotal way, or are there some concrete metrics of what good looks like or what success looks like? That's the million-dollar question, I must admit. Aside from staying out of the news, I can share a few examples. My first would be some of our more targeted efforts.

For example, we have a training series in itself called an Introduction to Social Engineering and Security Risk for Data Centers. The title probably gives it away, but this is specifically for data center employees onboarding and maybe starting their careers in that space. But due to the nature of their work, it's been key to put policy and processes that are unique to them in the forefront. When you can be intentional and not generic with security training, it pays off.

Additionally, our team will support service adoption and onboarding, which is very much measurable and monitored by our team. Since you've had the Azure Security Benchmark team on previously, I'll share an example of how we got behind their efforts and demonstrate impact with them. Essentially, there are processes to establish security benchmarks by selecting specific security configuration settings to secure Cloud deployments.

We'll pinpoint service owners who are either new to Azure or needing to improve the security posture of existing deployments. What my involvement looks like is the coordination and execution of a workshop, where we provide hands-on assistance while developing the security baseline for their Azure offerings: setting a security baseline containing recommendations to help our customers meet their security controls in the Cloud.

Basically, this reviewed baseline gets published on Azure Docs, and this whole process offers consistency and security guidance to our customers. This is a prime example of measured success, because it demonstrates their configuration of Azure meeting security capabilities that are pre-mapped to the industry benchmark. Once it's complete, these pre-identified teams are meeting the predetermined needs. That tangible shift of the needle is indeed a measurement of success.

As someone who's been actively involved for many years in training software developers, I have to ask: which classes seem to be the most popular? There are so many, but let me quickly talk through a few. One's called What 99 Pen Tests Against Azure Have Taught Us. I can't say much, but it's as good as it sounds. Another would be Security Concepts and Threat Modeling Overview. Threat modeling is one of our favorite topics.

I think we've gone there, but there's a sneaky fun course called A Journey from Engineer to Hacker, where a colleague of ours takes you through staging a safe and controlled environment to, well, hack. The underlying message is, when you understand who your threats are, where they're coming from, and what their motivations are, you're in a much better position to defend against them. But the last few, I'll quickly cover these.

They've actually already been presented to external audiences, or we plan to bring these to our external communities. We're doing that through the security community team and their webinar series. We'll make sure you all get the link, and I think Michael will make sure to publish that. But the first one that's already available is called securing you, basics and beyond.

It's an extremely valuable message about the current threats, protecting your accounts, and staying secure at work but also at home. Additionally, in February, the next one you'll be able to catch is called The Billion-Dollar Central Bank Heist: Costly Lessons in Cybersecurity. Basically, it's a riveting case study of the largest financial cross-border cybercrime to date. What we'll do is talk about the Azure offerings to prevent such attacks.

Beyond the February plans, be on the lookout, because we're going to be doing these all the way through summer; I think we end in June. We have a talk on authentication and authorization, one on Azure Security Center and Cloud App Security, and then I believe one about open source. Check us out. We'll be offering these through June, and you can catch our next one in February. Yeah, Alex, I've got a question for you. What resources are available to get started with a program like this?

What I'm going to do is actually shift the tone and share some advice, because our program is actually only five years old. A lot of the things maybe going through your mind were tactical and thought out based on our recent growth as a team. What I think was most evident in our success was how we were positioned to be alongside these teams, the red teams, the blue teams, the purple teams.

To not be detached. You see a lot of companies that will take their security training and group it into an HR bundle, a generic requirement for everyone at the company to meet this security compliance checkbox, maybe it's phishing or something you'd see on a day-to-day basis. If you are building as much as a company like Microsoft is, that generic training is not going to even scrape the surface. That'll leave the doors wide open for insecure coding practices.

But where we invest a lot of our time and energy is bolstering that generic training with other opportunities to deep dive. You'll need to understand who everyone is, who's building, what their needs are, and put together different experiences, either hands-on or just diverse sets of trainings, that go almost beyond what you'd need to even know.

Our team is so unique that we're putting out content that's 300, 400, 500 level for people that want to become their team security expert. Not every team has a tried-and-true security champ or even security engineer. So it's essential to equip some of the other folks to be that lead. A lot of our efforts are that community building, getting people in touch with the right folks.

So if you don't know who's who or what's what, I think you should do a little bit of a self-audit, but know that the generic, run-of-the-mill training is not going to be a catch-all. You've got to offer some more robust content and cover more ground. There's so much that goes into this. So Michael, I'd be interested to hear how what Alex has been describing compares to how we started doing this 15, 20 years ago. Is it a lot different, a little different?

We're talking about the threat modeling piece, and I'm just curious, from an all-up program and advice perspective, how much has really changed over that time? From a 50,000-foot view, nothing's really changed. It's the same concepts that we laid out in the early 2000s. Some of the major differences would be that the development processes are significantly different. Back then, it was mainly waterfall models, C, C++ stuff.

We had classes of vulnerabilities that we still have to care about today, obviously, because C and C++, especially old C and C++, or people who write C++ code as basically glorified C, those issues are really of concern, especially memory safety issues.

But now we're talking about different types of issues: persistent storage of credentials in configuration files, or cross-site scripting, or cross-site request forgery, or poor cryptography, poor random number generation used for keys, those kinds of things. Memory corruption issues are not front and center like they were back in the day.

Obviously, they still exist, but the average developer developing on a modern cloud platform today generally tends to be using higher-level languages, C#, Java, Go, Python, PHP, languages that abstract you away from the low-level machinations of the machine. In terms of process, it's very similar. The way we develop software using the Security Development Lifecycle, a lot of that is still very similar.
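
To illustrate one of the web-era vulnerability classes Michael mentions, cross-site scripting, here's a minimal Python sketch using the standard library's html.escape. Higher-level web frameworks typically do this output encoding for you, which is part of the abstraction benefit he describes.

```python
from html import escape

# Cross-site scripting happens when untrusted input is echoed into a
# page unescaped, so the browser runs it as markup rather than text.
untrusted = '<script>alert("xss")</script>'

unsafe_fragment = f"<p>Hello, {untrusted}</p>"         # vulnerable
safe_fragment = f"<p>Hello, {escape(untrusted)}</p>"   # escaped

print("<script>" in unsafe_fragment)  # True
print("<script>" in safe_fragment)    # False
```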

Again, some of the nuances have changed, but from a 50,000-foot perspective, it's good to see that what we started doing 20 years ago is not just still being used, but has also been modified and updated and modernized to adapt to a rapidly evolving cloud platform. Alex, I had one last question for you. Bug bounties, how do we approach that? What's our philosophy and our thought process and how do we use them?

Yeah, bug bounties are most certainly a huge priority for us, and I'm in a unique spot where some of our partner teams are the ones going through the different reports that get sent in. We're taking a lot of those findings, obviously, and working through the remediation steps, but we also take those messages and put them in front of the people building the products that may have released something that was found to be a bit insecure.

But I think what's most important is to drill into the bounty programs specifically, because I know a lot of the people out there are doing the work. We want to support you however we can and ensure you're set up for success. So I'd say before you just go poke around anywhere, make sure you understand what the actual ongoing programs are. We have a ton focused on Azure at the moment.

There's even some on Windows and other products, but a lot of these campaigns are happening from the product teams. So they're kind of specific in terms of what they're looking for. So be proactive in not wasting your time just finding anything random, but really looking at what Microsoft and what our teams are wanting you to focus on as we hopefully can pay you out and work with you to solve some of our security issues.

We definitely want you all to be trying Azure and seeing what you can find, but we don't want you to get caught off guard if you're going down a path that's not part of a tried-and-true bounty program. So learn about the bounty programs. We'll be sure to provide a link to the current ongoing programs; make sure to read through it, because often you may find something, and we want you to report it, 100%, but it may not be part of the current ongoing bounties.

Alex, one thing we always ask our guests is if you had just one thought to leave listeners, what would it be? Be authentic. And have your message come straight from the source.

Where we really excel is when we partner with the security teams; they're essential to our success, and a lot of the priorities and messaging we distribute is in coordination with those teams. We happen to be in the same organization, which I've kind of shared, and that direct alignment with those setting the security strategy is what's unique about us.

Our involvement, it's not just amplification of these messages, but rather a true partnership with the individual subject matter experts. We're not paying actors to carry out trainings. Instead, we're grooming individuals to present on the matter themselves. Like I mentioned earlier, we'll take the pen tester and we'll put them in front of an audience. These experts, we support them in various ways.

So it actually comes across as a professional-quality training, and our process requires a full intake, content reviews, rehearsals, presenter coaching, and design support. And we're there to moderate and produce the trainings in real time as well as in post-production. So to sum it up, partner closely with those experts. We couldn't do it without them, but make sure you're doing enough that they know they couldn't do it without you.

A lot of security teams will kind of flounder as they try to put their own messages out. Get a team that does this day in and day out to put them together, and those messages will come together a lot crisper and a lot cleaner. And I know you'll start seeing security results and improvement in your security posture. So with that, let's bring this to an end. Alex, thanks so much for coming on this week. We really appreciate it. And I learned a few things.

It's kind of been a bit of a blast from the past, to be honest with you. It's a modern spin on something that we frankly started doing 20 years ago, especially in Windows, Office, SQL Server and Exchange and Visual Studio back in the day. With that, I'd like to thank all of you for listening as well. So stay safe out there and we'll see you next time. Thanks for listening to the Azure Security Podcast. You can find show notes and other resources at our website, azsecuritypodcast.net.

If you have any questions, please find us on Twitter at azuresecpod. Background music is from ccmixter.com and licensed under the Creative Commons license.
