
Episode 360 – Who owns your data in Azure?

Nov 16, 2023 · 40 min

Episode description

In Episode 360, Ben laments a change in the data processing for Azure Sentinel, and Scott contributes some suggestions around Microsoft's data processing policies, how customers can keep up to date on them, and how to manage your personal data in Log Analytics and Application Insights. Then they close out the show talking about how to track resource ownership over time for your Azure resources.

It's also the season of giving and we're raising money for Girls Who Code. Donate today at https://give.girlswhocode.com/msclouditpro! Like what you hear and want to support the show? Check out our membership options.

Show Notes

Microsoft Azure Legal Material
Microsoft Products and Services Data Protection Addendum ("DPA")
Large Language Model (LLM)
Managing personal data in Log Analytics and Application Insights
The latest Microsoft Digital Defense Report came out and it is a sobering read
Azure Monitor Insights overview
Azure Monitor activity log
Azure resource logs
Define your tagging strategy

About the sponsors

Intelligink utilizes their skill and passion for the Microsoft cloud to empower their customers with the freedom to focus on their core business. They partner with them to implement and administer their cloud technology deployments and solutions. Visit Intelligink.com for more info.

Transcript

- Welcome to episode 360 of the Microsoft Cloud IT Pro podcast, recorded live on November 3rd, 2023. This is a show about Microsoft 365 and Azure from the perspective of IT pros and end users, where we discuss a topic or recent news and how it relates to you. Today, Ben laments a change in the data processing for Azure Sentinel, which, side note, less than two weeks later has now been reversed.

And then Scott provides some suggestions around Microsoft's policies and how to keep up to date on them. Then they wrap up the show by talking about ways to track resource ownership over time for your Azure resources. Scott, I feel like we're maybe sort of back on schedule for a little bit. We've been kind of recording all over the place lately. Uh, - It is what it is. We'll get there. Holidays are coming - Up. Yeah, that'll throw us for a loop. We're only 15 minutes late today,

- Oh - Yeah, I got caught in a meeting about some stuff coming to the business, future announcements coming down the road. But I have an announcement. Let's just dive right into it. Scott, I have an announcement I got from Microsoft that was actually kind of disturbing, and it was a little ironic with some other conversations I was having.

So the other day, and I don't know where this conversation was, I was talking to a friend of mine and we were discussing browsers and our preference for browsers, with Firefox versus Edge versus Safari and all the options. Brave, all - Those, all the fun things. Yes. - And he made a comment about trying to start transitioning from Edge to Firefox.

One, he's been having some issues playing videos in Edge, but the other comment he made was that he's starting to get a little worried about this AI stuff, with some of these companies training LLMs and using his data to train their models, because there's also been some debate that we've had around this, about training models off of website data.

If some of these companies, not necessarily specifically Microsoft, but the OpenAI models, Bard, all of these: how they're learning is, are they potentially scraping copyrighted material off of websites to train these large language models? Like, who owns the stuff on my blog if they start using my blog and answers for these large language models?

All of that. So it was kind of a security discussion around AI and access to data and who you trust and who you don't trust, all of that. Then, literally like three or four hours later, I got an email titled "Microsoft Sentinel will begin collecting security research data on December 4th, 2023." So December 4th of this year. And this is not for my tenant, this is actually for one of my customers' tenants, but it says we're changing how we collect data starting on this date.

You are receiving this message because one of your Microsoft Sentinel workspaces, so Log Analytics essentially, has opted out of sharing data with Microsoft security research. This was turned on for a reason. We did not want this particular data being shared with Microsoft security research.

Turns out, Microsoft security research protects our customers against threats and attacks by delivering built-in state-of-the-art detections, investigation tools, attack disruptions and mitigations. Building more accurate models (AI models, I am reading between the lines there; it doesn't specifically mention AI) requires diverse data sets and signals.

Starting December 4th, 2023, Microsoft security research will have access to your data to help build, test and optimize analytics models and detections. Okay, so I turned this on specifically because I wanted this data to stay private, and now you're telling me - And by turned it on, you mean turned - It off - Explicitly opted out of this thing. Yeah, - Out of this. And now they're just saying, sorry, too bad. We don't care. We need to build our models, so we're gonna go grab your data.

Anyways, they do go on to say, so then they have some more information, and I haven't dug into this too far yet, but it's that the privacy and data handling commitments in the Microsoft Online Services Terms and Online Services Data Protection Addendum still legally apply. It also says customers using customer managed keys (CMK) won't be affected by this change.

So I am theorizing, based on that single sentence down towards the bottom, that if I go switch everything to customer managed keys, it's not gonna get shared, because potentially it's then encrypted beyond their ability to grab it. And then it also has a help and support section if this change affects your environments. Well yeah, 'cause you're actually changing settings on me. And you'd like to learn more about it.

Please contact support, and then it just walks through the path to go through under your issue type and support in Azure. Yep. And then it gives the account information, subscription name, all of that in the Azure subscription that this is using. I'm not a big fan of this. I get it if they wanna do this and they send an email and ask permission, or say, would you like to help? Or even, and it would probably be borderline, saying if you don't do anything, this will be turned on.

But you can go back in and toggle this checkbox to leave it off, or something along those lines. But just kind of blatantly saying, we don't care what you said before about sharing this data, we're gonna go get it anyways because we want to train our large language models better. I totally get how it helps with security, how it helps building the models with Security Copilot coming out.

I'm assuming that also has something to do with this, but just going in and blatantly saying, now we're gonna access it. Especially when they do tend to focus a little bit more on security than maybe some of the other big players. I've tended to trust Microsoft more than Google when it comes to keeping my data secure and the privacy of my data. But this just kind of rubbed me the wrong way when I saw it. - Yeah, so I'm over here shaking my head and just rubbing my eyes.

So there's a couple of things. I think part of it is the just abject fear that's been instilled in everybody. Like, I would not want an LLM hallucinating on top of my data, either back to me or certainly in the context of potentially my data with other customer usage, things like that. So, reading between the lines, and I'm not a lawyer, I don't work on the Sentinel team, anything like that.

A lot of Sentinel isn't necessarily AI driven, but it is ML driven, and ultimately these large language models, yeah, those are actually ML as well. Like, we've conflated the whole ML/AI thing, blah blah blah, all that stuff. Yep. I think it's good to have the signals in there. I think it's bad to kind of revert on a decision like that.

It's a good reminder to everybody that when you sign up for these services, there is just an absolute crap ton of legalese underneath the covers that drives these things, and I don't think enough customers actually go and read those. I've had the opportunity a couple of times over the course of my time at Microsoft to actually have to go read up on the legal agreement for Microsoft customers when it comes to what's known as the MOSA, or the Microsoft Online Subscription Agreement.

So that governs things for MCA customers, sorry, Microsoft Customer Agreement, MCA, and, you know, EA customers, enterprise agreement customers, have a slightly different set of terms. Even if you don't use Azure, there's always just the general Microsoft terms of use for the tools you're using, things like that along the way. So there's really two things that, even if you're not into the legalese of the stuff you're doing, are kind of interesting if you can go and stomach your way through them.

One is that Microsoft Online Subscription Agreement, and then the thing that everybody always forgets is the DPA, which is the Data Protection Addendum. So the DPA in many cases is actually the thing that drives the contract, like the legal contract between you and Microsoft, particularly with regard to the processing and the security of your customer data and your personal data. In your case here, yeah, you've got a couple of options. One is, go familiarize yourself with the DPA.

And again, I encourage every customer to actually do that. Not just your lawyers; folks should go and read it. Go read the DPA and see what Microsoft does with your data. I don't think Microsoft is, like... they're definitely not as clear as companies like Apple when it comes to this stuff.

Like, Apple's very clear: hey, we do processing on device, here's why we do it on device, blah blah blah, privacy, all that kind of stuff. With providers like Google, Microsoft, Amazon, all that stuff happens in the cloud. Or the other option, like you said, is yeah, flip over to CMK.

So CMK is customer managed key, where effectively you're managing your encryption key, including things like rotation of that encryption key, inside something like Azure Key Vault, so that you are in full control end to end and you have that capability. So I run into a lot of customers (like I said, I don't work on Sentinel), but particularly in storage land, people are very sensitive to this kind of thing. So we do customer provided key.

So we actually do customer managed key, where customers can manage keys inside of Key Vault, but we also have this capability called customer provided key, where per request, like per upload, you can actually send a different encryption key for every single object, if you're really super paranoid about that thing and how it comes together and what that looks like. The other thing the DPA is helpful for is that it doesn't just govern how Microsoft uses your data.

It governs how things like data destruction go, and what happens when you issue a delete request or you delete a particular resource, something like that along the way. And then for this one you're actually kind of lucky in that you're in an Azure service here. Your encryption capabilities between services are going to vary. It's a little bit different over in Office 365 / Microsoft 365 land. It's different over in Dynamics land.
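The customer provided key idea Scott describes, a different key per upload, can be illustrated with a simple key-derivation scheme. This is a hedged Python sketch of deriving a distinct per-object key from one master secret; it is not the actual Blob Storage API, and the function name is hypothetical:

```python
import hashlib
import hmac
import os

def per_object_key(master_key: bytes, object_name: str) -> bytes:
    """Derive a distinct 256-bit key for each object from one master
    secret, using HMAC-SHA256 as a simple key-derivation function."""
    return hmac.new(master_key, object_name.encode("utf-8"), hashlib.sha256).digest()

master = os.urandom(32)  # the secret only the customer ever holds

key_a = per_object_key(master, "logs/2023-11-03.json")
key_b = per_object_key(master, "logs/2023-11-04.json")

# Each object gets its own key; the service never needs to store any of them.
assert key_a != key_b and len(key_a) == 32
```

With the real customer provided key feature, you send the key along with each request; the service uses it for that one operation and discards it, so the data is unreadable without the key material you control.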

You might not necessarily be able to come to consensus across all of those things, especially when it comes to some of the large language models you were talking about. Like those rolling out and getting to where they need to be. Like they're just gonna be baked into experiences in some places and customers might not have the level of control that they necessarily want there upfront.

And that becomes part of the rationalization exercise of just trying to figure out: is this the thing that I want to do? Is it the right thing for me at this given place and time? That kind of thing. - Definitely. And maybe it has this, and I have probably not spent as much time reading these DPAs and legalese as I should, but the other thing that would've been nice to include in the email, or have a link to, is what data is actually being sent.

Like, is it row-for-row data from my tables in Sentinel, or is it things like, when Sentinel detects a brute force attack, the IP addresses of a potential brute force attack? Is it almost anonymized, I guess, in a sense, or a summary of the data that's coming in, versus this sending IP addresses and usernames and countries? I don't know 100%, at least from this email, what it means when this data's being provided now to Microsoft security research, in terms

of the level of data or the details around what that is. - That's an interesting one. I actually pulled up the DPA while you were chatting, just to see: is there anything specific to Sentinel called out in there, too? And yeah, there's not anything in that one.

The other thing is it's gotten kind of harder over time to find some of this information. Like, the DPA used to be an online thing, or one of them did, the Online Services Terms, and even SLAs and things. Now they're turning into PDFs and Word docs, so you have to explicitly go out and download them and figure all that stuff - Out. Yeah, it's interesting.

And then there's a conversation kind of going alongside of this in Discord, too. You mentioned this is Azure; this also comes into play as to what this would mean potentially for Microsoft 365 down the road.

This particular case isn't applicable to Microsoft 365, and in Azure the customer managed key isn't necessarily an expensive paid-for service, versus Microsoft 365, where I believe if you wanna do a customer managed key you need to be on an E5 license, and I can't remember which E5 it is, if it's the EMS E5 or the Office 365 E5. Microsoft 365 E5 would definitely cover it 'cause it has both of those in there.

But if I remember right, customer managed keys in Microsoft 365 are E5. So if you ended up in this boat with, say, Microsoft 365 Copilot as that's coming out, where you got an email saying you've elected to not share certain information with Microsoft 365 security research and the only way to block it is customer managed keys, you could find yourself in a position where the only way to stop that would be to upgrade from an E3 plan to an E5 plan. The whole large language model thing.

Again, to your point, it's my data, but it's in Microsoft's data centers. They're hosting it all. And again, I haven't read all of the legalese around when they can or can't do this type of stuff, but it definitely does lead to some interesting things to consider as everybody is now scrambling for data for these large language models. Where before, part of my thing was, Google always wanted your data, and I'm gonna call out Google specifically because they're in the advertising business.

Facebook or Meta always wanted your data 'cause they're in the advertising business. They benefit from getting as much data as they can about you to better advertise and market to you. Where let's face it, that wasn't Microsoft's business but now with AI and everybody kind of battling for these large language models and the training of 'em, all of a sudden I find Microsoft wanting a lot more data than maybe they did before so that they can train these models.

Which definitely leads to, like someone said, some interesting days, interesting times, to think about some of the implications of this beyond just "Copilot can help me with my work" now. - I hate to be a stickler for language, but I kind of would continue to encourage you to not think of everything in ML land as a large language model. Large language models are specific things; in many cases, like in the case of ChatGPT, we're dealing with language transformers.

So these are machine learning models that are built around natural language processing. If you've ever played around with, say, LUIS, sorry, the Language Understanding Intelligent Service, and building out chatbots in Azure or something like that, you're dealing with natural language processing. There's all sorts of other ML models that can and do exist out there.

So that could be things like you said: maybe I'm somebody like Google and I'm trying to figure out the spread of advertising and cohorts of customers, so I can give my advertisers the best experience where they can target their ads to people in a certain demographic. Like, I only want to target 25 to 30 year olds that live in Spain, or even a certain region. You can get down to that level of granularity sometimes, depending on what's going on out there.

And that's very different than, hey, I'm gonna throw a large language model against this thing, or some type of transformer against it, so it can ingest and do a bunch of natural language processing. So I would bet in the case of something like Sentinel, 'cause Sentinel is a product that you can bring your own ML models to as well.

Like, you can do your own inference on top of it, even on top of your own custom data that you potentially bring into Sentinel, say you're doing TAXII feeds or things like that and kind of sidecarring your own data in there. So I would bet that it's more about all-up signals, especially 'cause if you look at the way things have been going with security in general, like across the tech companies, I don't think Microsoft's any exception here.

The more signals you have, the better off you can be. Yep. What's the sensitivity, like, just devil's advocate, what's the sensitivity of a signal in Sentinel versus something in your sign-in logs in Azure AD being anonymized and correlated across the service, right? Microsoft always talks a lot about, hey, why does security work so well in Entra ID, the artist formerly known as Azure AD?

It's because they have billions of logins and billions of signals that they can run that inference on and figure out the next thing there. - Do you feel overwhelmed by trying to manage your Office 365 environment? Are you facing unexpected issues that disrupt your company's productivity?

Intelligink is here to help. Much like you take your car to the mechanic that has specialized knowledge on how to best keep your car running, Intelligink helps you with your Microsoft cloud environment because that's their expertise. Intelligink keeps up with the latest updates in the Microsoft cloud to help keep your business running smoothly and ahead of the curve.

Whether you are a small organization with just a few users, up to an organization of several thousand employees, they want to partner with you to implement and administer your Microsoft cloud technology. Visit them at Intelligink.com/podcast, that's I-N-T-E-L-L-I-G-I-N-K.com/podcast, for more information or to schedule a 30-minute call to get started with them today. Remember, Intelligink focuses on the Microsoft cloud so you can focus on your business.

- I think that's the part that I get, where, to your point, Sentinel would absolutely... I was trying to find a tweet, but I think I lost it in our chats. Being able to just pull those signals, and if they're anonymized enough, I totally get it.

Like, you look at the amount of signals that Microsoft gets. And I've talked to red team companies before that are actually writing viruses trying to bypass things like Microsoft Defender from an antivirus perspective, and they've told me, and I don't have any validation for this other than word of mouth talking to these people, that Microsoft Defender antivirus is one of the hardest antivirus software to bypass when they're writing a virus from scratch,

and that once they are able to write one, Microsoft Defender is also one of the ones that shuts it down the quickest. They said, I essentially get one shot at it: I deploy that virus, I use it, I get in, and it never works again. And I think some of that is, to your point, that Microsoft is able to take all of these different signals from their products and use them to protect other tenants.

So myself, and anybody else listening to this podcast that uses Microsoft 365, is benefiting from the signals that these Microsoft security products are getting from people also trying to attack Microsoft. You look at who's on Microsoft 365: 90% of the Fortune 500. How many of them are deploying Defender, Sentinel, some of those? But there's signals coming in from those that Microsoft uses to protect the small guys as well.

So I also do understand that if those are anonymized enough, if they can figure out those signals, and this is where maybe you have to trust Microsoft a little bit that they're not grabbing a bunch of PII-type data or really sensitive internal data, but looking at some of those signals to really help strengthen that, I think there absolutely is a benefit to that.

I think it was a tweet from Merrill that I saw about the amount of signals that Microsoft gets and analyzes a day, and it's in the trillions, I believe, of signals a day that Microsoft gets around people trying to breach Microsoft products, essentially. Or maybe it was even specific to Entra. - It's a ton of data, right? Particularly in the case of Entra and login data, 'cause Microsoft is able to aggregate and correlate signals across the public service as well.

So like your endpoint: when you do an Entra login to login.microsoftonline.com, that's the same thing that MSAs and all that use as well. I don't know if you have any options here, so I'll put a link in the show notes. One thing that does come up from time to time is handling personal data in things like Log Analytics. I haven't seen specific guidance for Sentinel, and I've been kind of Binging, DuckDuckGo-ing, and Googling in the background here.

I couldn't find much, but there is specific guidance out there for Log Analytics and App Insights customers, so maybe you want to take a look through that as well. I don't know if there's things that can potentially make it a little bit better, like getting that data over to a different Log Analytics workspace once you're done with analysis in Sentinel, so you can still retain it for yourself.
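For reference, the Log Analytics guidance mentioned here centers on the workspace purge API: you POST a filter describing the personal data to delete. A hedged sketch of building that request body in Python; the table and column names below are made-up examples, and the exact endpoint and API version should be checked against the current docs:

```python
def build_purge_body(table: str, column: str, value: str) -> dict:
    """Build the JSON body for a Log Analytics purge request: delete
    every row in `table` where `column` equals `value`."""
    return {
        "table": table,
        "filters": [
            {"column": column, "operator": "==", "value": value},
        ],
    }

# Hypothetical example: purge rows containing one user's email address.
body = build_purge_body("AppEvents_CL", "UserEmail_s", "someone@example.com")
```

Purges run asynchronously and can take a while to complete, so this is a compliance tool (think GDPR delete requests), not a day-to-day cleanup mechanism.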

Are there potential retention things you wanna do? At the end of the day, the data's the data, though, right? So you've gotta kind of figure out your path to navigate through there. Whether that's saying, well, Sentinel's not the service for me given a set of constraints that exist, or it is and I'm gonna live with it, or Sentinel's not the thing for me in its current form.

So I need to do something and potentially upgrade to some form of my own encryption key, like customer managed key, and then live with any licensing implications that come with it to have that associated level of privacy. - Yeah, I found the tweet. It was not about signals; this was just about password attacks per second: blocked 4,000 password attacks per second this year.

Like, that's just an insane amount of data coming in, because it's not just that they blocked 'em. I'm sure they were looking at where they're coming from, all of that.

But anywho, I'll have to go look through that article as well and see if we want to make any changes to Log Analytics and how that data's in there. - The nice thing on your side is that you work with enough customers that it's also all learning that you can carry forward for them as well and say, okay, here's potential considerations. We might wanna think about this as we're turning it on.

So you can kind of front-load the conversation with new customers in this crazy, always ever-changing landscape. - Exactly. So, interesting scenario. You had something else you wanna talk about today? We spent a long time on that, but let's dive into your topic. - We did. So mine's a little bit of a logging thing as well.

This is one that came up in a customer conversation with me recently, and I think it impacts a lot of customers in Azure, or there's the potential for customers to ask this question.

I don't know that it's a big impacting issue, but have you ever been in the boat where you're working with somebody and you sit there and you do a bunch of deployments and you kind of turn stuff on, and then you walk away from it for a while and you move on to the next thing, and then the next person comes in behind you and they say, huh, what is this thing? Who created it? Do I still need it anymore? Does it need to be on? Can I delete it?

And it's one of those things that came up. I was talking with a customer, funny enough, about some storage accounts. They had a couple hundred different accounts out there and they're like, can I delete them? Well, do they have activity? Have you checked just the metrics to see if you're driving any transactions in there? Oh no, I haven't done that. All right, well, let's go do that.

Let's spin up an Azmon workbook, an, uh, Azure Monitor workbook, and we'll look at some of this stuff and see what's going on. Oh, I see these 10 accounts over here don't have any activity. They go, can I delete 'em? I don't know what data's in there. Well, I don't know. Maybe I should go and talk to the person who manages it.

So the crux of the question is, who actually manages your, yeah, who manages resources in Azure, and how do you go back in time and figure that out for resources that have potentially existed for years and years and years?

- A hundred percent. And it's funny you bring this up, 'cause I had a customer doing this exercise as well, where I'm supposed to actually go do some work trying to sort through resources and figure some of this out, because they're in the boat where they have everything in five subscriptions and we need to break it out into 14 subscriptions, but we don't wanna move stuff that's not being used anymore. I mean, this is not an insignificant amount of resources, either.

We're talking thousands of resources. I don't think it's up in the tens of thousands, it might be, definitely not hundreds of thousands. But to your point, it's, okay, we have these five subscriptions, thousands and thousands of resources. First, before even who owns them or who would know, is how do we even go through and figure out which of these resources are still used, which can be deleted, which maybe can we archive?

Are there resources that we could change the SKU on, have SKUs changed so that we can get some price savings by making some changes to 'em? And to your point then, who actually is in charge of making those decisions? Because some of these are production services, some of those are dev, some of those are test and what's going on.

So since I have that question, what's the answer, Scott? - I had a couple of thoughts here, and some of it is, eventually we're gonna hit a roadblock and we can't go back in time all the way. So first thing is, is this resource being actively used right now? I think one of the best ways to do that is to start over in Azure Monitor. I often refer to that in shorthand as Azmon, and over in Azmon there's a bunch of built-in workbooks around kind of core service metrics.

So if you're dealing with a core service, so I'm thinking things that fall into kind of compute, networking, storage, many of the data analytics services, things like that, they all have a set of default metrics which often drive back to things like transactions in the service, right? If I'm dealing with a storage account, am I seeing puts and gets actively to that storage account?

And if I am, I know somebody out there is using it, and potentially that helps me take the next step to go and answer the question: okay, now I know it's on, who is using it? And who is using it can go a couple of different ways.

If I'm dealing, say, with a virtual machine and who is using it, I might want to go and just check out the activity log for that VM, or spit out some diagnostics to something like Log Analytics or a storage account, some service like that, and take a look and see: okay, great, I know this thing's powered up and it's on. Am I seeing active logins to it? Can I actually infer anything from a login name or, like we talked about Bastion last week, a Bastion connection, things like that?

If it's a storage account, I have resource logs that I can turn on. Most customers tend to complain once we get down to this step where we're saying, go look at the fine-grained resource logs for the thing, 'cause they go, oh well, I gotta turn that on, I gotta configure it, and it costs me money for the logs. Yeah, I get it. Time is money, and it turns out when you use additional resources, those cost money as well. We're not talking about keeping this stuff on forever, right?

You could even turn on diagnostic logs on a service for 10 minutes, whatever the period of time is, and just kind of see what you get in there. It doesn't need to be tens of thousands of dollars, or hundreds, or even tens of dollars to spin that stuff up and turn it on. If none of that gets you anywhere, the next place that I would go off to is the Azure activity log.

So by default, at the subscription level, there is a set of information that's retained when management plane operations happen. So basically, when you fire off a request against the ARM APIs, which the portal is doing all the time, like a create event for a resource, an update event for a resource, things like that, those will all be in the activity log. There are some limitations there: the activity log only keeps three months of data by default.

Unless you've gone ahead and configured like diagnostic export for that and potentially sent it out to another place. But I look at that as a little bit of a learning exercise.

Hey, here's another opportunity: if this is something that's important for you to audit in your environment, you really might want to think about turning on export for the Azure Monitor activity log and getting that data out to something like a Log Analytics workspace, so that you can have a richer interaction to go back in time over that resource. - I would agree.
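Once the activity log is exported, the "who owns this" question becomes a group-by on the caller field. A minimal offline sketch in Python, assuming records shaped like the activity log's real operationName, caller, and resourceId fields (the values themselves are invented):

```python
from collections import Counter

# Simplified activity log records; the field names mirror the real export,
# the values are invented for illustration.
records = [
    {"operationName": "Microsoft.Storage/storageAccounts/write",
     "caller": "ben@contoso.com",
     "resourceId": "/subscriptions/.../storageAccounts/logs01"},
    {"operationName": "Microsoft.Storage/storageAccounts/write",
     "caller": "ben@contoso.com",
     "resourceId": "/subscriptions/.../storageAccounts/logs01"},
    {"operationName": "Microsoft.Storage/storageAccounts/write",
     "caller": "scott@contoso.com",
     "resourceId": "/subscriptions/.../storageAccounts/logs01"},
]

def likely_owners(records, resource_id):
    """Rank callers by how many write operations they performed against
    a resource, a rough proxy for who manages it."""
    writes = Counter(
        r["caller"]
        for r in records
        if r["resourceId"] == resource_id
        and r["operationName"].endswith("/write")
    )
    return writes.most_common()

print(likely_owners(records, "/subscriptions/.../storageAccounts/logs01"))
# [('ben@contoso.com', 2), ('scott@contoso.com', 1)]
```

In practice you'd run the equivalent query in the Log Analytics workspace itself, but the shape of the analysis is the same.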

And that's kind of the approach we've taken. Like you said, it's Log Analytics, it's Azure activity logs, it's really just going through all the logging and looking at what's there. This is gonna help with the activity.

I don't know that it always helps with the owner, though, and I think this is a... - So the nice thing is, with activity often comes identity, because users have to authenticate to these things. Like, hey, we have to figure out if you're authorized, and before we figure out if you're authorized, we need to know who you are. Yep. So lots of that stuff is just happening inside of OAuth, and it's all authenticated against Azure AD anyway. So AD potentially becomes a source of logs for you.

Okay, am I seeing sign-in logs during a given period of time? As long as you have the log, it'll have a GUID or something else in it that ties you back to a user identity over in Azure Active Directory, which ultimately gets you down to ownership. Now, all this stuff only helps you if you have the data there.

The other thing that I had mentioned, and it came up in the chat over here on Discord, is there are other things you should just think about, maybe from a governance perspective. There are some best practices around things like tagging your resources, be it tagging a resource group or tagging individual resources in resource groups in Azure. You might wanna tag those with, maybe not an individual user, but a responsible group for it.

Or quite often, I encourage a lot of customers that serve multiple departments and wanna do chargebacks and stuff like that: hey, let's rationalize not just the environment this thing sits in (is it prod, is it dev/test, is it staging?), but let's actually tie this back to something like an IO code or some type of internal finance code that you have, or even just a department.

Does it belong to finance, HR, anything like that? Yeah. - I was gonna bring that up as well. Going through this exercise and what we talked about with logs helps you maybe find what happened before. This is also a good opportunity, I think, to go through some of what you just said: maybe this is a good time to start thinking about tagging, and even using Azure Policy to enforce that a certain resource is tagged with an owner.

And again, whether that be a person or a department, tagged with what type of resource this is. Is this something that's getting stood up for test or dev, or is this a production-level resource?
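The enforcement piece mentioned above, denying resources that arrive without an owner tag, boils down to a small Azure Policy rule. Here's the core of it sketched as a Python dict; the `field`/`exists`/`effect` shape follows the Azure Policy definition schema, but verify the full definition structure against the current docs before using it:

```python
# Hedged sketch: the policy-rule fragment of an Azure Policy definition
# that denies creation of resources missing a given tag.

def require_tag_policy(tag_name):
    """Build a policy rule denying resources that lack the given tag."""
    return {
        "if": {"field": f"tags['{tag_name}']", "exists": "false"},
        "then": {"effect": "deny"},
    }

rule = require_tag_policy("owner")
```

In practice you might start with an `audit` effect instead of `deny`, so existing workloads surface as non-compliant without blocking deployments.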

I've seen customers tag the date a resource was created. But think through some of that additional metadata that you should start adding, or do wanna be adding, onto your Azure resources, so that as you encounter this scenario again, or as you move down the road and six months from now, a year from now, wanna do a regular audit of what's out there: do we still need it? Is it still being used?

Having some of those tags can also be very beneficial, but you may not have them the first time you do it. It might be something that says maybe we should start doing this. - I think it's one of those things where, when you run into it, you're like, oh, maybe I should start doing this now. Yep. I see a lot of customers, when it comes to tags, that tend to just get stuck in the rut of trying to rationalize all the options.

'Cause it's basically just name-value pairs, and you can be very freeform in your names and values, things like that. There's actually some decent guidance out there in the Cloud Adoption Framework about just getting started with tags and defining your strategy, where it calls out a lot of these things. Okay, you might want to tag the department that owns the resource, you might wanna tag the operations team that owns it.

I think that's very helpful in the context of, even if I don't have the individual owner, hopefully the people who operate the thing have a contact, right? And they can get back to it. Cost codes as well: whether it's an IO code or whatever ties you back to a cost center within your organization, that's also super helpful. Not just for chargeback, but for, hey, where's the throat that I need to go choke to figure out the answer to this thing?
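Once you've settled on a taxonomy, auditing against it is trivial to script. A tiny sketch along those lines; the required tag names here are just examples, not a recommendation:

```python
# Hedged sketch: check a resource's tags against a required set.
# The taxonomy below (owner-team, environment, cost-center) is an
# illustrative example, not prescribed by Azure.
REQUIRED_TAGS = {"owner-team", "environment", "cost-center"}

def missing_tags(resource_tags, required=REQUIRED_TAGS):
    """Return the required tag keys a resource is missing, sorted."""
    return sorted(required - set(resource_tags or {}))

missing_tags({"environment": "prod"})  # lacks owner-team and cost-center
```

Run something like this over the tag bags returned by a resource inventory and you get an instant punch list of untagged resources.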

I've also seen customers that do things, and it is in the adoption framework, where they tag things like the owner and the requester, and they actually put email addresses in there. I'm kind of sensitive to that stuff sometimes. I don't know that you should, 'cause then you're potentially pumping PII out into things, and then we're back to that whole what-does-Microsoft-do-with-my-data question. Like, did I pump those activity logs out to Log Analytics?

Say I'm a company doing business in Europe, are there GDPR implications for me, or things like that? So your mileage may vary in how far and how granular you wanna go, but I think it's definitely a good idea to sit down and just do that initial rationalization. And if you haven't done it today and you're just onboarding, hey, now's a great time to do it. If you haven't done it and you're already onboarded, hey, now's a great time to do it.
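If email addresses in tag values are the worry, one cautious option before exporting tags into logs or reports is to mask them. A hedged sketch (the regex is deliberately simplistic, not a full RFC 5322 email matcher):

```python
# Hedged sketch: mask email-like values in a tag bag before it leaves
# your environment, to avoid leaking PII into downstream log stores.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def mask_emails(tags):
    """Replace email-like substrings in tag values with a placeholder."""
    return {k: EMAIL_RE.sub("<redacted>", v) for k, v in tags.items()}

mask_emails({"owner": "ben@contoso.com", "env": "prod"})
```

The alternative, as discussed, is to avoid putting individual addresses in tags at all and point to a team alias or cost center instead.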

- Sounds good. I don't think I have anything else on that topic. Anything else you wanna cover on tagging, cleaning up Azure resources, deleting everything? I mean, there's always the approach of you delete it all and see who screams, right? Make sure you have backups, then delete it, and whoever screams is the owner. - There is that as well. Yeah, just take it and turn this thing over a different way and see what happens.

And you might have some options there. If you're dealing with a website, you might not have to delete it, you can just shut it off; same with a database. That's certainly a way to do those kinds of things. Just make sure you leave it off long enough that you give people enough time to scream. I've seen people do things like shut off a database and leave it off for a day or two, and they're like, well, nobody said anything, so I'll delete it.

But it turns out that database was only used at month close, or closing out a quarter, or even the end of the year, and three months later they get that ping that says, oh hey, what happened to blah blah blah. And you're like, oh crap, I deleted that. Nobody ever talked to me. So yeah, you do kind of have to think about those things as well. - For sure. Awesome. Well, thanks Scott. It was an interesting discussion today. - I got one more for you. Yes.

If anybody stuck around this long: we are back in November. So last November we did this thing where we tried to raise some money for Girls Who Code. - Oh yeah. - And we had a bunch of awesome listeners donate and we raised $1,500. This year I'd like to see if we can raise $2,000 and get out there and help the next generation of IT pros or developers, whatever that happens to be, with Girls Who Code. So I'm gonna be harping on this one for the next couple weeks just to see if we can get people

going during this season of giving. Yes. - So we should do that. Should we extend this longer than November? Because with some of our recording schedule, for those that are still sticking around, you actually won't hear this until November 16, if I'm doing my date math right. So - Here's the thing, it's open all year, so - That's true. You - Don't have to wait and come in. But we - Need to set a sense of urgency. We need to set a time, like, we wanna get it by this time.

Should we see if, end of the year? Should we go through the end of the year? - I forgot about the lag in release. Yeah, we'll go to the end of the year. That works for me. - See if we can get $2,000 by the end of 2023. By the time people first hear this, that gives you a month and a half through the Thanksgiving holiday season. Yep. - If you celebrate it. But yes, during November, December, whatever that happens to be. So yes, - That's true. Thanksgiving, I was talking to my kids about that.

They were like, what holidays are worldwide versus just the US? And there's really only a couple I can think of that are (well, there's a handful), but there's a lot of them that are just localized to countries, Thanksgiving being one of those. Yes. Yeah. Anyways, we do not need to go down the holiday rabbit hole today, but through the end of 2023, $2,000 for Girls Who Code. Sounds good. - All right, let's do it. All right, thanks as always for the time.

- Yes, thank you. And we will talk to you again next week. - All right, perfect. Thanks Ben. Yep. - Bye-bye, Scott. If you enjoyed the podcast, go leave us a five-star rating in iTunes. It helps get the word out so more IT pros can learn about Office 365 and Azure. If you have any questions you want us to address on the show, or feedback about the show, feel free to reach out via our website, Twitter, or Facebook. Thanks again for listening and have a great day.
