
Episode 377 – Microsoft Copilot for Security

May 23, 2024 · 39 min

Episode description

Welcome to Episode 377 of the Microsoft Cloud IT Pro Podcast. In this episode, Ben and Scott talk about a recent incident at Google Cloud where one of their customer accounts was completely wiped out without notice. Then they dive into Microsoft Copilot for Security. Ben has been getting hands-on with it, and it is expensive. They discuss pricing for Copilot for Security, how to think about approaching the multiple embedded experiences in it, and how to think about building a corpus of knowledge and truly leveraging it as an assistant and accelerator for upping your security game in your Microsoft cloud. Like what you hear and want to support the show? Check out our membership options.

Show Notes

"Unprecedented" Google Cloud event wipes out customer account and its backups
A joint statement from UniSuper CEO Peter Chun and Google Cloud CEO Thomas Kurian
What was the recent outage caused by?
Microsoft Copilot for Security
Microsoft Security Copilot to be available April 1 as a capacity-based service
Microsoft Copilot for Security - Pricing
Manage usage of security compute units in Copilot for Security
What is Microsoft Copilot for Security?
Microsoft Copilot for Security experiences
Copilot for Security prompting tips - Create effective prompts

About the sponsors

Would you like to become the irreplaceable Microsoft 365 resource for your organization? Let us know!

Transcript

- Welcome to episode 377 of the Microsoft Cloud IT Pro podcast, recorded live on May 21st, 2024. This is a show about Microsoft 365 and Azure from the perspective of IT pros and end users, where we discuss a topic or recent news and how it relates to you. This week, Ben and Scott discuss a recent Google Cloud event where a customer account and all of their data was completely wiped out without notice, and we share some thoughts around customers protecting their cloud deployments.

We also have some updates and thoughts around Copilot for Security after Ben has been able to get some hands-on experience with it. We talk about pricing, approaching the experience, and how to think about leveraging it in your environment. We've had a bit of a chaotic week, so we're gonna get through today, but I should go bring up this article. This was an interesting one, and we don't like throwing other cloud providers under the bus because, oh, what am I doing? Because stuff happens.

Microsoft has stuff happen, Google has stuff happen, and Amazon has stuff happen. But this was just, there's some thoughts that I had that came out of this news article that I ran across. I actually had a friend that sent this to me the other day and I'm curious to see what you think, Scott. So this was on Ars Technica, and you can actually go read this article as well on the website of the company this happened to. But the news headline is unprecedented.

Google Cloud event wipes out customer account and its backups, and then it's UniSuper, I think that's how you pronounce it, UniSuper, $135 billion pension account, details its cloud compute nightmare.

- This is a rough one. - Reading through this, it essentially sounds like there was some configuration change, something that happened as Google was standing up the private cloud for this client. And somehow, like, they wouldn't have been standing up a brand new one because they already had a bunch of data up there, but it essentially wiped out UniSuper's GCP account, including all of its information and all of its backups that were stored in GCP.

It's like they went in and just said, delete UniSuper from GCP. - Yeah, that's one way to think about it. So you sent this one over and I was kind of reading through it and trying to think about it in context. I think it'll be interesting to see if an RCA ever comes out for this one, like a root cause on what actually happened here.

Yeah. But from the outside looking in, what it looks like is this customer had an existing set of workloads and an existing account with GCP, with Google Cloud, and for some reason that existing account was deleted. I would be willing to bet it was some weird series of circumstances, something really crazy, like the customer's account got flagged for fraud because of something that was happening, maybe due to an automated fraud detection system or something like that.

And then due to a series of errors in the fraud detection system, it just went completely off the rails and potentially took out their data. And there are other things that sidecar alongside that: hey, this is a customer's account and weird things happen. So you hear about this all the time. Think about it in context, maybe, like, let's take a step back from cloud providers and think about social media providers.

Like, I don't know if you've ever been on a platform where you've had your account banned just unilaterally. Yep. Hey, this thing goes away and you can't do anything. You know, for me, I had my Google Ads account banned, and I'm banned for life from Google Ads, and there's no remediation forum, there's no one for me to go talk to. This is a permanent decision and it's been made, kind of thing.

I was reading about one last week with a venture capitalist, MG Siegler, where his Instagram account got banned by Meta. Same thing: hey, you've been banned unilaterally, there's nothing that you can do about it. In that case, you know, MG happened to have contacts at Meta and could turn it back on.

And in this case, this provider, UniSuper, is a superannuation fund in Australia, effectively think like a 401(k) provider here in the United States, but a government-mandated retirement fund kind of thing. Not a good look to have it all go away and get blown up that way. So you quite often do hit these very weird code paths and things that you don't know you were going to encounter until you encounter them.

And unfortunately in this case it took down an entire customer along the way, and not just the entire customer, it took down their entire estate, because they were so all in on Google and GCP for their workloads. Like, so all in to the point where they've been quoted in joint press releases in the past saying, hey, we're all in on Google for their VMware Engine, you know, their migration capabilities for getting us from on-prem to the cloud, all those kinds of things.

So what a not great week that company had, right? You know, be it from their CEO all the way down to the folks who made the decision to go all in on Google, and then for Google itself, like these are the kinds of things that follow you around as a company for a while. And I wouldn't be surprised if this one doesn't get more mainstream press, or hasn't had more mainstream press, beyond Ars.

I think Ars Technica was just the first place that you had seen it, and you sent it over my way. - The part that blows me away about this, and Google said this should never have happened, which is kind of apparent, but one, they said someone should never just be able to delete their account. But it was deleted to the point that, in this joint statement from UniSuper and Google, from both their CEOs, or the Google Cloud CEO, it was noted that UniSuper had a bunch of redundancy in place.

They had two geographies to protect against outages and losses, but none of that protected them. What protected them was that they had their backup in another cloud provider. It wasn't even like Google could go back to some backup that they had internally and restore it.

To your point about this unilateral decision, I would've expected that Google would've had some customer account backup or protection in place, even behind the scenes, to where maybe UniSuper couldn't recover their own account because it was all deleted and everything. But it sounds like Google couldn't even recover their account; they had to go stand up a brand new GCP instance and restore all of their servers, all of their data, from a backup they had with another cloud provider.

- It depends on how that data is stored. So I can absolutely see how such a thing would happen. You know, if you think about all the things that we've talked about in the past with your data in the cloud, when it comes to things like data ownership, you know, here in the United States, if you think about things like NIST standards for shredding hard drives, right?

If I have, if I have my data on a hard drive at a provider, how does that provider shred my hard drive and ensure that I'm the only one who has access to my data? Those kinds of things. Yep. There are these very real kind of kill switches in place that effectively once the key, the primary key is gone and it's severed from the data, there's really no way to bring that relationship back. And quite often the hyperscalers are doing things like doing garbage collection on data pretty aggressively.

You know, it's not a bank error in anybody's favor to retain terabytes, petabytes, potentially tens or hundreds of petabytes of backups for customers, and just have those sitting around for weeks and weeks waiting for a customer to go, oops, I didn't really mean to delete that, kind of thing. And especially, it's back to the whole, you know, how could this happen?

You know, think about it, especially if it's a fraudulent workload, like you don't want that stuff on your system to begin with. True. So you potentially just nuke it from above and call it a day, especially if you're very, very sure that it is in fact data that should be nuked, or a workload that should be nuked, an account, a billing account, things like that, whatever it happens to be. So pretty unfortunate. More than pretty unfortunate. Very, very unfortunate.

Good lesson though, in kind of thinking about the multi-cloud thing and DR in a multi-cloud world, how you think about positioning yourself with other providers as a customer, right? Like you might be all in on Google, but then you might leverage, say, Entra ID for your identity and as your security token service. You could be all in on AWS, but you might leverage a component of Microsoft or Google for something in your workload.

You know, there's a whole bunch of customers that do those kinds of things too. It's rough. Make sure you got backups, right? That whole rule of three thing becomes pretty critical here. And not just do you have backups. This was another good lesson in, even though they had backups, recovery still took forever. It wasn't just about RPO; RTOs were extremely elongated in this case.

And if you think about it for a financial firm, that's kind of a super critical thing when you have money flowing in and out. In the case of something like this, which is effectively a pension fund, pensioners who are in that fund still need to get their payment, right? To be able to buy food and survive and all those kinds of things too. So just a bad situation all around. Yeah. And hopefully they find a way through. - It was out two weeks here.

It was May 2 when it started, and they had full restoration of services on May 15. So it sounds like everything's back up now as of about a week ago. But yeah, being down for almost two weeks, and I agree. I think one of the biggest takeaways for me, even thinking about my own Office 365 environment, was that whole thing of having some of those backups.

'cause you hear so much about this, and I've talked about this a little bit more recently too, of, oh well, Microsoft has multiple data centers or multiple regions. I mean, AWS, Google, everybody has multiple data centers, multiple regions, redundancies in place, and I would say six, seven years ago I subscribed to this a little bit more than I probably should have: those redundancies are gonna protect me.

Like, I don't need to have my own backups because Microsoft is building in so many different backups, so why do I need to go pay for another one? This one highlighted that, well, it's not common. I mean, this is a one-off in Google's case. I can't say I've heard of any accounts in the Microsoft cloud where somebody's gone in and just everything's gotten deleted.

But having those backups, to your point, somewhere else, so that if you are the one that finds yourself in one of those one-off scenarios, you can get your backups. Like, I can't imagine, for a company this size, if they didn't have those backups in another cloud, how much worse this could have been for them. - Yeah, definitely detrimental. You know, the other thing to think about here is, so you know, you're kind of calling out, hey, do I have the backups?

Do I have the backups? Like, let me think about that. Absolutely, think about backing up your data. But I think it's also critical, and I'll put the link in the show notes to UniSuper's timeline of what happened, and if you read through that timeline and how it goes, one of the things that potentially delayed them was also not just having the backups, but having the configuration right and the ability to stand it all back up on that side.

So for you, let's say, as an M365 subscriber, yep, are you using things like the community tools, like Microsoft365DSC, to back up the configuration of your tenants on a regular basis? Are you testing that you can stand up a new tenant with a similar configuration, beyond just kind of the data pieces and the backup and restore bits there? - You're gonna catch me. I'm not doing that with mine. I have clients that I'm doing that for, but I don't do it with mine.
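
For reference, here's a minimal sketch of the kind of configuration export Scott is describing, using the community Microsoft365DSC module. The component names, output path, and credential-based auth are illustrative assumptions; check the Microsoft365DSC documentation for the resources and authentication options that fit your tenant.

```powershell
# Minimal sketch: export selected M365 tenant configuration with Microsoft365DSC.
# Component names, path, and auth are illustrative; verify against the
# Microsoft365DSC docs before relying on this for real DR.
Install-Module Microsoft365DSC -Scope CurrentUser

# Export a handful of high-value configuration areas to a local folder.
Export-M365DSCConfiguration `
    -Components @('AADConditionalAccessPolicy', 'AADApplication', 'EXOTransportRule') `
    -Path 'C:\Backups\M365DSC\2024-05-21' `
    -Credential (Get-Credential)
```

Running that on a schedule, and keeping the output somewhere outside the tenant, gets you the "configuration" half of the recovery story the hosts are talking about.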

- So I think that's the other click stop that folks need to consider. Like, we talk a lot about application recovery and having backups, and user data is absolutely a critical piece of that. The other thing that comes into play here very much is recovery of configuration and all those kinds of things.

So to the degree you can, with the providers that you have and the systems that you stand up, really do think about that stuff holistically if it's within your wheelhouse. - Configuration. - And for some people it is, for some people it isn't. You know, if you're out there and you're listening to this and you go, like, oh, I pay an MSP or something to do all this for me so I don't have to worry about it. Yeah, maybe go ask 'em some questions, right?

Just make sure that you've got the warm fuzzies about what they're doing and how they're actually providing you value in cases like this. - Yes, I would highly encourage you to ask your MSPs questions about this type of stuff and it is, it's your value. And I will say like the clients I'm backing up configurations for it's, it makes sense. I think in my case, like for my tenant personally, if I lost my conditional access policies, I wouldn't really care App registrations.

I mean, some of those you think about too, like all my app registrations that are tied into Azure AD. If I lose Microsoft Entra ID (it is always going to be Azure AD, Scott), either way, if I would lose that and lose all my app registrations, and then not be able to authenticate to some of my third-party apps, do I have backup credentials saved for the native logins for those, versus just my SSO logins? And it was a good callout.
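
As a concrete example of the app registration concern Ben raises, here's a small, illustrative sketch that snapshots Entra ID app registrations to JSON with the Microsoft Graph PowerShell SDK. The output path is made up; the point is simply to have a record, outside the tenant, of what existed.

```powershell
# Minimal sketch: snapshot Entra ID app registrations to JSON so you at least
# know what existed if the tenant (or your access to it) ever goes away.
# Requires the Microsoft Graph PowerShell SDK; output path is illustrative.
Connect-MgGraph -Scopes 'Application.Read.All'

Get-MgApplication -All |
    Select-Object DisplayName, AppId, SignInAudience, RequiredResourceAccess |
    ConvertTo-Json -Depth 10 |
    Set-Content -Path 'C:\Backups\Entra\app-registrations.json'
```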

There's a lot of stuff besides just do I have my emails and my files to think about when you're in these cloud DR scenarios. - Do you even know what the stuff is? That's an interesting one. So yeah, the other thing I often think about is, if you are building and deploying software, how do you think about recovery, and standing up assets again if you have to, around things like build pipelines and deployment pipelines?

You know, so if you're using GitHub Actions, do you have that YAML saved someplace? Like, what happens if somebody comes in and nukes that repo, right? And it just goes away one day. How do you get over that and how do you do it? So I think taking a step back, having that good holistic view of your entire estate, not only what resides in your estate but how that stuff was built and how it's configured, becomes extremely important.
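
On the pipeline point, one low-tech way to keep a copy of a repo, including the GitHub Actions YAML under .github/workflows, outside of GitHub is a periodic mirror clone. The org name, repo URLs, and backup path below are placeholders.

```powershell
# Minimal sketch: mirror-clone repos so pipeline definitions (.github/workflows)
# survive a deleted repo or account. Org name and backup root are placeholders.
$backupRoot = 'D:\RepoBackups'
$repos = @('https://github.com/contoso/app-one.git', 'https://github.com/contoso/infra.git')

New-Item -ItemType Directory -Force -Path $backupRoot | Out-Null

foreach ($repo in $repos) {
    $name   = [IO.Path]::GetFileNameWithoutExtension(($repo -split '/')[-1])
    $target = Join-Path $backupRoot "$name.git"
    if (Test-Path $target) {
        git -C $target remote update --prune   # refresh an existing mirror
    } else {
        git clone --mirror $repo $target       # full copy of refs, branches, tags
    }
}
```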

And in some cases, you know, you can't automate your way out of a job when it comes to doing recovery with some of these things. But I think it's important to just understand kind of where those rough edges are and that you've accounted for 'em in your runbooks and all the other things. - Yes, a hundred percent. Do you feel overwhelmed by trying to manage your Office 365 environment? Are you facing unexpected issues that disrupt your company's productivity?

Intelligink is here to help. Much like you take your car to the mechanic that has specialized knowledge on how to best keep your car running, Intelligink helps you with your Microsoft cloud environment because that's their expertise. Intelligink keeps up with the latest updates in the Microsoft Cloud to help keep your business running smoothly and ahead of the curve.

Whether you are a small organization with just a few users or an organization of several thousand employees, they want to partner with you to implement and administer your Microsoft Cloud technology. Visit them at Intelligink.com/podcast, that's I-N-T-E-L-L-I-G-I-N-K.com/podcast, for more information or to schedule a 30-minute call to get started with them today. Remember, Intelligink focuses on the Microsoft cloud so you can focus on your business.

All right, Scott, should we move on to our next topic? - We've spent a quick 15, 20 minutes there. - 15, 20 minutes on this one. Yeah, but it's an important one. Again, calling out, I think, using this story to highlight the importance of some of these backups and things to think about. This next one is a little bit more of an update on a topic we talked about earlier, diving a little bit more focused into the Microsoft space: Security Copilot.

I don't remember which episode it was. It's been a few episodes since it first went GA and was available to stand up, and I had said that I went and turned it on in my environment. I went and created a security compute unit instance, or an instance of Copilot for Security, which you can scale based on security compute units. It's $4 per hour per security compute unit. And I spun it up for like two or three days and didn't see anything on my Azure bill.

I was like huh, I've only used it a couple times. Maybe it only bills when you ask it questions. Something like that. And people were like, well let me know when you find out what happens. So I turned it off for a while 'cause I started getting scared that it was just racking up a bill in the background and even though I had my quota on there and I just got nervous. So the other day I turned it back on again just to find out what would happen and it started charging me

very quickly. - Do me a favor. Yeah, just so everybody has context, flip your web browser back over to... - For those that are seeing this, yeah. - Flip your screen back over to cost management here. - Yes. So here it is. I went from, and if you can't see it, I had like $59 that I had accumulated on May 3, May 4. I was up to $162 and then it kept going very linearly.

So the first two days it looked like a massive hockey stick, 'cause I went from increasing my bill like 12 bucks a day to increasing it by roughly $113 a day, a hundred dollars a day, and then it continued. And I do still have my $150 limit on the subscription.

So even though it shows my accumulated cost up around $1,400 now, because I may or may not have forgotten to go turn it off once I saw it hit that, it is very much one security compute unit at $4 per security compute unit per hour. It has absolutely nothing to do with how frequently you use it. Once you light this thing up and turn it on, you are going to get billed $4 per hour as long as it is created and in existence.

So realistically, Copilot for Security is absolutely going to cost you, I think it's like $2,920 a month, if you go do the math and multiply out how many hours and average days in a month over the course of a year, all of that math. And that's just for one security compute unit. Now, when you go spin these up, Microsoft recommends a minimum of three security compute units.

Again, not saying you need it, they still let you do one, but three would absolutely cost you $12 per hour over the course of a month, which, you can do the math, roughly $3,000 multiplied by three, or about $9,000 a month, for Microsoft's recommended minimum configuration for Copilot for Security. So pricing: it is absolutely, if you wanna use this, going to cost you a minimum of three grand per month unless you do something different.
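
To make that back-of-the-napkin math explicit, here's the arithmetic behind those numbers, using an average of 730 hours in a month.

```powershell
# Back-of-the-napkin math for security compute unit (SCU) pricing at $4/SCU/hour.
$ratePerScuHour = 4
$hoursPerMonth  = 8760 / 12   # average hours in a month (730)

$oneScu   = $ratePerScuHour * $hoursPerMonth   # 2,920 -> the "$2,920 a month"
$threeScu = $oneScu * 3                        # 8,760 -> the "about $9,000 a month"

'1 SCU: ~${0:N0}/month; 3 SCUs (recommended minimum): ~${1:N0}/month' -f $oneScu, $threeScu
```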

And I had some conversations with some people the other day who are starting to use Logic Apps or PowerShell or things like that to ramp their security compute units up and down, or, I don't know if they were actually going to the point of destroying Copilot for Security and recreating it when they'd wanna use it.

So again, if you're not gonna use it at all during the night, do you really need to have three or four or five security compute units provisioned from 6:00 PM until 8:00 AM the next morning, or do you actually blow it away and recreate it? It led to an interesting discussion I had about different ways people are trying to manage the cost of Copilot for Security based on the number of security compute units provisioned within that instance, or even just, can we just blow it away?
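
For anyone tempted to script this the way Ben describes, here's a rough sketch using the generic Az PowerShell resource cmdlets. The resource type (Microsoft.SecurityCopilot/capacities), the resource names, and the numberOfUnits property are assumptions about how the capacity is exposed in ARM; confirm against current documentation before automating anything.

```powershell
# Rough sketch: ramp a Copilot for Security capacity down after hours.
# Resource type and property names are assumptions; verify before using.
Connect-AzAccount

$capacity = Get-AzResource -ResourceGroupName 'rg-security' `
    -ResourceType 'Microsoft.SecurityCopilot/capacities' `
    -Name 'copilot-capacity' -ExpandProperties

# Drop to a single SCU for the overnight hours (e.g., from a 6 PM scheduled job)...
$capacity.Properties.numberOfUnits = 1
Set-AzResource -ResourceId $capacity.ResourceId -Properties $capacity.Properties -Force

# ...or tear the capacity down entirely and recreate it when you next need it.
# Remove-AzResource -ResourceId $capacity.ResourceId -Force
```

An Azure Automation runbook or a Logic App could call the same update on a timer, which is essentially the scale-down-at-night pattern discussed here.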

It's expensive, Scott, which led us into other discussions about ROI, or even some of the pros and cons of blowing it away versus ramping it down, things like that. - In other news, AI be expensive, right? I guess that's the takeaway there. So while it's a loss leader in some places, maybe think consumer Copilot versus Copilot embedded within other products versus M365 Copilot, Copilot for Security is definitely not a loss leader kind of thing.

So you need these SCUs, these security compute units, to actually have the associated compute to run and action your queries within the underlying LLM. So, you know, responding to a query in an LLM takes a bunch of CPU and memory and other things on the host to go and actually retrieve the data, read out of the vector databases, do all that kind of stuff.

If you're doing RAG or something like that along the way, retrieval-augmented generation, well then you've also gotta go out and have the compute to be able to retrieve that data, say it was a Word document or a PowerPoint, something like that, be able to parse that in an LLM, and then be able to construct these meta prompts and all the other things. So it's not free to get there. It's also very fuzzy as to what that looks like and how it manifests.

So, you know, you do have some usage monitoring within Copilot for Security, so directly within the Security Copilot portal, securitycopilot.microsoft.com, you can go and see things that way, or you have this just raw view of costing within Cost Management and how that carries through. But it's an interesting thing.

You're sitting at $4 per hour, at least in the hero regions out here in the US. You know, that equates to, like you said, basically three grand a month, $2,920; I think it's safe to round up a little bit and just call it three grand in that case. Yep. And then you have the recommendation for compute units. So if you don't know what you're doing with these things and you just kind of look at 'em and you go out and read, like, hey, where should I start?

Well, you can start with one; I believe the recommendation is three. So you're kind of sitting at a minimum cost of nine grand per month before you've really done much with it. So like all things, that needs to be measured and weighed, right? Like, what's the ROI there and what's the value for me as a customer? Once you're starting to hit nine grand, you know, is it worth having that for an entire month, or should you just pay for an MSP, pay for a consultant, something like that?

Once you're doing a couple of these, and you're getting up to maybe not the nine grand marker, but let's say you hit the point where you're at six SCUs and you're running those for an entire year, now all of a sudden you've gone from $90K-plus a year to $180K-plus pretty quickly, and that's theoretically somebody's salary, right? Including benefits and other things on top of it, at least here in the US.

So now, is having one person better than having a bunch of compute units? I don't know. It needs to be weighed out organization by organization, I think. - This is where it does start getting interesting for me. Some of our discussion even before we started recording was how do you start showing the ROI of Copilot for Security in that particular case? Because, again, me, I have myself and a couple of contractors that are doing work for me.

I am not gonna go out and spend $30K a year for Copilot for Security. I can go into my audit logs, I can go write PowerShell, I can go write KQL, and it is probably not going to save me 30 grand of time a year to have Copilot for Security in place. I'm not gonna spend that much time asking security questions of my environment with the size of company I am.
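
For context, this is the kind of hands-on alternative Ben means by going into the audit logs with PowerShell: pulling events yourself with the Exchange Online Management module. The date range, result size, and output file are arbitrary placeholders.

```powershell
# Small example of the do-it-yourself route: pull unified audit log events
# with the Exchange Online Management module. Dates and output path are
# arbitrary placeholders.
Connect-ExchangeOnline

Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) -ResultSize 5000 |
    Select-Object CreationDate, UserIds, Operations, RecordType |
    Export-Csv -Path '.\last-7-days-audit.csv' -NoTypeInformation
```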

But to your point, this is where the scale gets interesting, and where you can maybe show that ROI. Now you start getting up into a hundred, 150 employees, 10,000, 20,000 employees, and there's a lot more data. If I go write a KQL query, I may pull a lot more data that I have to sort through, or if I'm running a PowerShell script, there's just a lot more data as you get to be a bigger organization.

So does having Copilot for Security, where I can go in and ask natural language types of questions, ask questions about my audit logs, or if I have Intune, ask questions about devices and about events in Intune, does having a copilot to go over that much data give me an ROI? Or, to your point, do I start ramping up now because I have that many more employees, I'm having to buy six, ten security compute units, and now my cost is getting up into the multiple hundreds

of thousands of dollars. Like you said, now it's a salary, and I can pay somebody, or multiple somebodies, a full-time salary to go in and write KQL queries and build out other processes to detect things in the data. It's an interesting ROI discussion. I don't know that I have the answer on how you would calculate that, but I think it's something a lot of companies are going in and looking through, and to your point, it's expensive.

I get it. I would love to see this be a little bit more per-use, and maybe Microsoft would add some auto-scaling in the future where you can scale your security compute units up and down, because there may be some validity to having just one security compute unit during the night and then ramping it up to six or nine or ten during the day. 'Cause one of those things we talked about: if you actually completely blow it away, you are gonna lose the history of your queries.

There's that conversational history, and while it's not retraining of the models, there are some benefits that come from keeping that Copilot instance up and having those queries, that historical query context, in your Copilot.

But I could see where there could be some rationalization, especially for a large company where they're using Copilot all the time, to having 10 security compute units during your working hours when you're actually diving in, or if there's some type of incident raised in your environment and you need to ramp up those security compute units so you can get responses to these questions a lot quicker, but then when you're not using it, ramp it down

to one security compute unit. You retain your history, you retain that environment, but you're not paying for all those security compute units 24/7. - I think it depends a lot on the functionality and volume. So, one of the, I don't know, maybe you can help me out here. So one of the confusing things to me with the Copilot for Security thing is it's kinda like back to that suite of suites stuff, right? Like what we've been talking about with Intune and other things.

So there's the concept of the embedded experiences within Copilot for Security. So that could be Copilot for Security as it's embedded inside Entra, it could be Copilot for Security as it's embedded in Intune, it could be Copilot for Security as it's embedded within Defender.

And then each one of those embedded experiences has its own set of nuance in the way things like logs are stored, how you query those logs, how you write effective prompts around those, and how all that stuff goes. So there's that piece of it, and then there's the things that actually happen kind of automagically in the background, right?

If I think about, hey, I have an active incident and I'm trying to query for risky users in Entra and I don't know how to do that today. You know, there could be value in having that stuff spun up right there, and you probably don't need a whole bunch of SCUs and a whole bunch of compute sitting behind it just to effectively prompt and get a KQL query that then you can go and run and bring that data back.
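
To ground that example, the risky-users data Copilot would be prompting over can also be pulled directly with the Microsoft Graph PowerShell SDK (Identity Protection, which requires Entra ID P2); this is roughly the query a prompt would save you from writing by hand.

```powershell
# The kind of "risky users in Entra" query Scott describes, written by hand
# with the Microsoft Graph PowerShell SDK (Identity Protection / Entra ID P2).
Connect-MgGraph -Scopes 'IdentityRiskyUser.Read.All'

Get-MgRiskyUser -All |
    Where-Object { $_.RiskLevel -in @('high', 'medium') -and $_.RiskState -eq 'atRisk' } |
    Select-Object UserPrincipalName, RiskLevel, RiskState, RiskLastUpdatedDateTime |
    Sort-Object RiskLastUpdatedDateTime -Descending
```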

The thing that you might need it for is something like, say you're an organization that experiences a lot of live sites and you're doing a ton around incident management. So one of the capabilities in the embedded experience for Copilot for Security with Microsoft Defender is doing automated incident summaries.

So if you can start to automate incident summaries and distill those down, and potentially automate summarization of RCAs, can you eventually get to the point where this thing can write salient RCAs for you? Maybe. Is it tomorrow? Probably not. Is it a year from now? Is it two years from now? I don't know, but that's an inflection point that's likely to be on the horizon.

And then if you think about being able to do really good, crisp incident summaries and RCA summaries, and then eventually write RCAs or potentially even automate incident response, that's super valuable. Like, if you think about being an on-call DRI or something like that, the burnout could be real if you're the human that's on call 24/7, versus having the AI bot, LLM, whatever thing that can do it for you.

Like I don't see how most folks wouldn't kind of pine for that and grasp onto it but you know, you kind of need to get it all to the point where it's like ooh those incident summaries are really good and ooh those RCA summaries are really good and can I take this to the next click stop and get it to where you know, it's turning more into automate all the things and if you can like hey great, there's likely to be more than enough value that's kind

of delivered by the tool there that makes it worth, you know, whatever the cost is that's associated with it. So I don't know, we'll see where a lot of this stuff goes.

It does feel a lot like automate yourself out of a job when it comes to things like Copilot for Security, and I think there's a ton of angst there in general. You know, you mentioned writing queries when you're doing threat hunting; at some point you're probably just building a whole body of individuals who are either getting really good at prompting or they can be really good at the actual hunting experience. I think for now you still want them to be good at the hunting experience.

You don't want them to just be good at prompting and being able to draw on that as kind of their superpower. So yeah, we'll see where all this stuff goes. It's gonna be interesting, and I don't know what the timeline is. It's very hard to understand right now 'cause this stuff is just moving so fast. Is that tomorrow? I don't know. Probably not. But is it six months from now? Is it a year from now? Is it two years? Is it five years? That's very hard to discern.

- The other thing you still run into with copilots, and I saw this some even with Copilot for Security, and to be fair I haven't had a ton of chance to play with it 'cause frankly I can't keep it running long enough to play with it for very long. I need to set out dedicated points of time where it's like, I'm gonna spend this day really diving into it, and go spin it up and spend 50 bucks to have it running for a day. But there's still the thing of hallucinations too, right?

Like, if you're doing threat hunting and you are saying, show me all the attacks on my Exchange Online environment or on my Azure Front Door instance coming from this set of IP addresses, or you are creating these prompts to bring back your investigation, you want that to be 100% accurate. You don't want to miss something because Copilot misinterpreted something or hallucinated on something or anything like that.

So I think that's still very much an aspect of, especially, Copilot for Security, and to your point, you still are gonna want good people that are experienced in hunting and not just prompt engineering. Because I think while it can pull a bunch of data quickly, there's still a validation you'd want to take place, especially initially, to make sure that whatever is happening in the background when you're asking Copilot for Security these questions

means it's returning the right information. Because I did see, when I did play with it and was asking it different questions about Intune or about different data, that I wouldn't get the same responses all the time, and my data hadn't changed. I would expect the same responses every time around some of those, and even some of Copilot for Microsoft 365.

When I ask it about different tasks, or tasks' upcoming due dates, or different tables to summarize different things, it's not always the same. And I think that's still one risk with all of these copilots: people, and we've talked about it, people always treating it as 100% accurate all the time when it's maybe not.

And it's going to continue to improve as they continue to improve models, continue to look at data, and figure out how to make these in a way that they're more accurate. I am not worried about it running me out of a job right now.

If anything, there are days when I'm poring through rows and rows and rows of data, and I'm like, man, if Copilot could help me dig through all of this quicker so I can move on to the next task that I have, because I have more to do than I have time to do it. I'm actually looking forward to the day where Copilot can help me optimize my time a little bit better, because I don't see Copilot for Security running

me out of a job anytime soon. - I mean, I think that's the right way to think about it, as an accelerator. So walk into it with some intentionality. Like, hey, this is all fairly new stuff, it's early days. Can I use it as an accelerator? Yes or no? Can our business use it as an accelerator? Yes or no? Are we thinking about it the right way?

Like do we need to think about it as something that's on for a year or do we use it for three months to upskill ourselves and get to where we need to be? Right? Build that kind of prompt book and you know, wikis and all those kinds of things that you potentially want to have in place. Like use this to augment and improve your process is a good way to think about it.

And you know, from that lens, I've worked places where dropping 200 grand on a consultant to have them come in and write a 20-page document for you is something companies do a lot, right? Like, hey, come help me improve this process kind of thing. Sure, whatever, we've got just the consultant for you that can help you do that. And there's value in those kinds of scenarios and lifting that stuff along. But you do have to be walking into it with that level of intentionality.

You can't be just sitting here and saying, like, what am I gonna use this for and how's it gonna go? - Should we call it a day? About 15, 20 minutes on backup and 20 minutes or so on Copilot for Security, and I'm guessing we both have meetings coming up. - We can do it. I'm running low on coffee. - I am out of coffee. My coffee's gone. I just ended up with a few grounds in the bottom of my coffee cup, but that's about it.

- Fire your barista. Yeah. - Another story, another day. I don't know. But with that, Scott, enjoy. What day is it? Tuesday. Enjoy the rest of your week. We normally do this on a Friday. We've had some sickness, some crazy busy schedules, and recorded a bit at an off time and an off day. So enjoy the rest of your week. - We'll get back on track here. Hope everybody is back healthy, and we should be back to our normal schedule here. Well, maybe soon.

Now we have summers and vacations coming up, Scott. - Yeah, we do. My kids end school on Friday, last day for them. So woo-hoo. - Are you excited for all the noise to return to the house? - My kids are teenagers. There's no such thing as noise. They're gonna be sitting in their bedrooms and playing video games, let's be honest. - Sounds good. Well, congrats on the end of school, summer coming up, and enjoy your week, and we'll talk to you again soon.

- All right, great. Thanks, Ben. - Thanks, Scott. If you enjoyed the podcast, go leave us a five-star rating in iTunes. It helps get the word out so more IT pros can learn about Office 365 and Azure. If you have any questions you want us to address on the show, or feedback about the show, feel free to reach out via our website, Twitter, or Facebook. Thanks again for listening and have a great day.
