Welcome to episode 378 of the Microsoft Cloud IT Pro Podcast, recorded live on 05/31/2024. This is a show about Microsoft 365 and Azure from the perspective of IT Pros and end users, where we discuss a topic or recent news and how it relates to you. Today, we're diving into a new preview feature that was recently announced at Microsoft Build: Azure Compute Fleet. Scott and Ben dive into what exactly the service is and some of the intricacies of
what it entails. They also discuss what a deployment involves and some of the possible use cases they see for the service, based on the features implemented within the preview. You wanna get into some Azure adventures today? Yes. Like fleets, Azure fleets, boat fleets, helicopter fleets. Compute fleets of what? Azure compute fleets, fleets of computers. Yeah. I figured it's been a hot minute since we talked about Azure stuff, and it's only fair that we get back to
it. And this is a new preview feature that came out, or was announced, in the Build time frame. What was that? A week ago, two weeks ago? Something like that. I don't know. Time has no meaning to me anymore. Could've been, like, a month ago. But it was recent, at Microsoft Build. Yeah. May 2024 is when this one publicly came out and
landed in public preview. So I figured between virtual machines, virtual machine scale sets or VMSS, virtual machine scale sets flex or VMSS Flex, which we've talked about in the past, and now Compute Fleet, we've got yet another permutation for at-scale deployment of
virtual machines. And I think this one is kind of interesting when it comes to the quote-unquote scale aspect of it, because it goes a little bit above and beyond what things like VMSS do today, just with the units of compute that can be deployed, like, if you're thinking individual VM counts, things like that. So real quick, just to kinda ground everybody: Compute Fleet, like I said, is this new thing. It's effectively a new infrastructure service
within Azure around compute. And it's meant to kinda just streamline, end to end, that whole provisioning and management aspect of compute... compute capacity across a whole bunch of different VM types, potentially at the same time, availability zones, kinda mix and match to your pricing model.
Also, that you can get to VM deployments at scale that are performant, because let's be honest, it's a little bit hard to do multi-VM deployments today, especially if you're doing, like, a scripted deployment or something, say with, like, PowerShell, where you're just going, like, hey, I'm gonna spin up a VM, and then I'm gonna wait, and I'm gonna spin up another VM, and I'm gonna spin up another VM. It's a lot of API calls. It's a lot of potential polling operations,
things like that. So the cool thing about this is, we do need to kinda talk about it as an infrastructure service or an infrastructure component. It's a single API call to do an Azure Compute Fleet deployment, which is really kind of a cool thing, especially when you think about the scale of it. So you can deploy up to 10,000 VMs in a single call through Compute Fleet.
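As a rough sketch, that single call shapes up as one ARM resource along these lines. The apiVersion, property names, and values below are our reading of the preview docs at the time of recording and may well change, so treat this as illustrative only, not a definitive template:

```json
{
  "type": "Microsoft.AzureFleet/fleets",
  "apiVersion": "2024-05-01-preview",
  "name": "demo-fleet",
  "location": "eastus",
  "properties": {
    "spotPriorityProfile": {
      "capacity": 900,
      "minCapacity": 100,
      "evictionPolicy": "Delete",
      "allocationStrategy": "LowestPrice",
      "maintain": true
    },
    "regularPriorityProfile": {
      "capacity": 100,
      "minCapacity": 10
    },
    "vmSizesProfile": [
      { "name": "Standard_D8s_v5" },
      { "name": "Standard_D16s_v5" },
      { "name": "Standard_E8s_v5" }
    ],
    "computeProfile": {
      "baseVirtualMachineProfile": {
        "comment": "the single image, admin credentials, and NIC settings shared by every VM in the fleet go here"
      }
    }
  }
}
```

One deployment of that document, and the service fans out all the individual VM provisioning asynchronously behind the scenes.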
You've got a bunch of different settings in there where you can prioritize things like deployment speed, you can prioritize operational costs, say, like, spot compute versus regular pay-as-you-go compute, you can kinda have a mix-and-match model, potentially balance both those dimensions together, because you're deploying different units of compute, and those different units of compute, the underlying SKUs, all have different costs
associated with them. You've also got, kinda, cost management aspects, being able to mix and match pricing models in a wide-swath, single-API-call deployment. What else? Oh, you can also... because you're deploying potentially so many virtual machines at a time. Like, deploying 10,000 VMs at once isn't necessarily a small ask when it comes to things like quotas, availability of that compute in a region, anything along that
dimension as well. So it's kinda nice from the fire-and-forget aspect to say, hey, fire off an API call and kinda load me up based on what my quota is for my subscription, a given region, things like that along the way. So it looks like a kinda fun, nifty new feature. It is definitely not a fit-for-everybody feature, I think, given the scale aspect of it. I don't know that many people are looking to deploy 10,000 VMs at once, at least not broadly.
But there are definitely customers out there that would love to deploy 10,000, and frankly more than 10,000, virtual machines at a given go to get through some of their stuff. So you think about, like, customers with a large AI training workload kind of thing. That's a ton of compute, a ton of GPU, and it can be fairly expensive. So, you know, you might not wanna run the thing the whole time. But, like, yeah, definitely, while we're training a model, we need
to get those units of compute up and running and those VMs configured, all those kinds of things along the way. So it looks like a fun one. Yeah. This was interesting when you first brought it up to me. We're like, hey, Ben, go check this one out, because... I can't remember, it was a couple months ago.
We were talking about AVD. And I was creating some of these environments where we were actually, like, destroying and rebuilding, or tearing down and recreating, VMs on a nightly basis, and we were only doing... I mean, we're only doing, like, 30 or 40 VMs. But we were running into issues where, when we did all 30 of them at once, again, it was a bunch of single API calls, not all of them were coming back up. We split it up into 5 or 10 VMs at once, and it worked better.
And you were like, hey, Ben, now you can do Compute Fleet and do, like, a whole bunch of them at once. And it led to a little bit of that use case question, where this is very much, like: when you stand up a compute fleet, you go deploy a bunch of them, but then you have to continue managing it through the fleet. And that was one thing I was curious about when you told me to go check it out. It's like, can I use a fleet to
go deploy 30 VMs? And then, like, in the case of AVD, if I wanna scale up or scale down, how does that look within a compute fleet? Like, can I go turn VMs off, or delete them, or how do I manage these VMs once they're in the fleet? And this is not really for that use case. Like you said, this is: I'm gonna go deploy a bunch of VMs, but then you continue to manage that capacity of VMs within the fleet, and it's not something that you could easily... I would say it's not something you can
easily scale. It's like what you said: we wanna go spin up 10,000 VMs, we're gonna go spin up 5,000 VMs for a workload. And we were even talking probably more of, like, an HPC-type workload, where it's, we're gonna go train something for a week, or train something for a month, or... I mean, certain industries have certain times a year where it's busy and they're spinning up a huge number of VMs. Think of shopping sites over the months of November and December
where... Mm-hmm. They need to massively scale up for a short period of time, and then they scale back down, or they just, frankly, blow them away. It's like, we hit the middle of January, nobody's shopping anymore, because everybody's spent their Christmas money and is done shopping. Let's just delete everything. Like, this is very much spin up a bunch of VMs for a short period of time and then get rid of them all, is kinda how I interpret
this. There's the nuance, potentially, of that statement: Compute Fleet is kind of like an infrastructure service versus a holistic compute management service. Right? So I think about things, maybe, like, VMSS or VMSS Flex as being a little bit more feature-rich there. Like, one of the things with Compute Fleet... at least, I haven't seen a way to do it. It's not documented, nor, like, do I see an easy way to do it through the API surface
or the portal or anything like that. But, like, one of the big differences is, when you deploy a compute fleet, you're deploying compute, you're not necessarily configuring that compute. So when I think about, like, a traditional single-VM deployment, or, like, a VMSS Flex deployment, things like that, it's not just about deploying the compute. It's also about configuring and managing that compute throughout its life cycle. So
I don't know. Maybe I'm standing up, like, that, you know, traditional multi-tier web application thing, where, you know, I've got a front end, some middleware, and a back end. I might do that in VMSS Flex, and then within my deployment configuration, I would say, hey, for this Ubuntu-based front end, deploy nginx on it, and load these SSL certs, and do this configuration for nginx. My middleware, do this kinda configuration for my API hosting. On my data side, like, with my
SQL server... like, say I'm doing, like, MySQL or something like that. Well, go ahead and actually spin up MySQL, install it on the box for me, or even deploy that as a PaaS service, whatever it happens to be. And you don't have that same flexibility within Compute Fleet. Compute Fleet is literally, like, fire and forget: I want to create just a ton of virtual machines at the same time. I'm gonna have, like, min and max targets for how much I want to deploy, and
of what type I want to deploy. But then, once it's deployed, it's up to you to go and stand up your applications, do all that kind of management on top of it. So it's kinda nice in that it's got a single API, fire and forget, for the at-scale deployment piece. It'll be interesting to see where this goes in the future, like, if they bring in, like, at-scale management as well. Like, I would love to have a way... and if it's possible today, maybe the docs just need
to be updated. I would love to have a way to say, like, go in and create an ARM template to do a compute fleet deployment, and then within that fleet, as I'm defining my VM SKUs, I'm saying, okay, you know, I want 10 of this spot, up to a hundred of this spot. I want 10 of these, you know, D96s, up to, you know, 50 of them, whatever
happens to be. Like, in that same ARM template, having, like, further child properties that I can call out, so that I can do things like execute boot automations, so maybe, like, cloud-init scripts, things like that, on a Linux box, just to get them up and running. But that stuff's kinda not there today. The other thing is, it's a little bit different, and I do think of it as maybe a little bit more of an HPC thing. So when you fire off a request to initiate a compute fleet... so you go
in and fire off this API request. And again, you can do that through an ARM template, you do it through the portal, you do it through the ARM APIs, like, however you do it. You fire off that request. That request can take... it's effectively asynchronous compute deployment at that point. Like, it all happens in the background, it just churns and churns and churns. These are, like, super long-running jobs. So, like, an initial compute fleet request can be active for up to a year. 365 days.
If you need to clear a request... like, hey, I did the fire-and-forget thing with that API, and now there's a bunch of async stuff happening in the background... your only option is, like, you can't really, like, pause it. You can just delete the compute fleet request. And if you do that, all the VMs inside that compute fleet come down at the same time. So, yeah. It's a little bit weird in that it doesn't have parity with things like VMSS Flex. It's its own new, weird
thing. Like, whether you wanna think about it as, potentially, like, a new API surface, or as a new infrastructure service built on top of existing constructs like virtual machines, things like that. The public docs refer to it as a building block. So it's just kinda like this foundational little block, or, like, a Lego brick, that's going to accelerate your access to compute capacity in a region. So if you think about it that way, like, hey, I just need to
secure a bunch of compute? Really good for that. I need to secure a bunch of compute, and I need to configure it, and I need all this post-configuration, I need down-level management and all this other stuff? I don't know, it might not be the best thing for that. But it's not like VMSS or those other things are going away. You might have to kinda mix and match, or it just very well could not be the right thing for
you along the way. I'm curious, and I don't know that you have an answer, because we talked about this a little bit, and this is in the FAQ, the 365-day thing you brought up. If you go in and look at the FAQs for Compute Fleet, it says: how long does my compute fleet request stay active? Compute fleet requests are active for 365 days. If you delete it, it deletes all the VMs. If I don't delete it, the VMs continue to run and charge you. Does this mean... and maybe I already know what you're gonna answer, but I'm curious how this plays out... could it actually take 365 days to provision all 10,000 VMs? Or does this mean that, like, you put in a request, and within that request, you're setting, I need certain amounts of VMs, and you can go in and update your fleet, I believe, to change that number of VMs. Like, do you only have a year to change it, and after a year, a compute fleet is, like, stuck or locked? I can't imagine anybody would wanna spin up 10,000 VMs and not know if it's gonna complete in a week or in a year.
Like, the whole, what does "active request" mean? I don't know. Right? Like, I... so, a couple of things. So it's in preview. I think you've gotta kinda look at it from that lens of, hey, it's not all gonna be there, and I think as questions are asked, those will get answered, and they'll build some of that out over time. You know, do you wanna wait a year to deploy your 10,000 VMs? No, probably not, but that's why
you're gonna have mins and maxes. Like... so, you had the portal up earlier. I was gonna say we should walk through that and just talk through what a deployment looks like. Yeah. I think that gives you a little bit of a view into that world and what some of the dimensions are that you're looking at. So, like any other ARM deployment, right? Subscription,
resource groups, that's pretty easy. You're gonna have a resource name, your fleet name. What region are you deploying into? Which, this is limited right now in preview: it's East US, East US 2, West US 2, and West US. So really, East and West, the original and the 2s, are the only 4 options at this point in time. Effectively, some US hero regions in Azure. Yeah. So you've got that. Availability zones is your
next one. So do you wanna use AZs for your VMs as they deploy? And then, just like a regular VM, you know, zone 1, zone 2, zone 3 kind of thing. What is your security type for those virtual machines? So I think we discussed this a couple episodes back. Trusted launch VMs are the default everywhere right now, but you can revert to standard, or you can do confidential compute along the way. You know, trusted launch are the ones that give you... like, you've got the screen up now.
Those are the ones that give you secure boot. They give you virtual TPMs, and then some of the integrity monitoring around, like, memory, things like that, coming in from the hypervisor. Then there's the single image that you're gonna choose. So it's kind of interesting. Like, again, this isn't like VMSS Flex, where you would say, hey, maybe I have this VM that's on Ubuntu, I have this one that's on Windows, things like that. That doesn't occur over here.
So you just kinda pick a single image, and then you're lining up the units of compute that are all gonna use that image underneath it. Yep. And with that image, you can do your custom images too. So you still get that option, like, go see marketplace images, see other images, where you can go select my images, shared images. You can do Ubuntu, Red Hat, Oracle, Windows. Like, you have all of your normal images that you would expect. So this isn't
limiting in that aspect. It's just limiting in that you can only pick one for your entire fleet. Correct. Yeah. Yep. Do you feel overwhelmed by trying to manage your Office 365 environment? Are you facing unexpected issues that disrupt your company's productivity? Intelligink is here to help. Much like you take your car to the mechanic that has specialized knowledge on how to best keep your car running, Intelligink helps you with your Microsoft
cloud environment, because that's their expertise. Intelligink keeps up with the latest updates in the Microsoft cloud to help keep your business running smoothly and ahead of the curve. Whether you are a small organization with just a few users, up to an organization of several thousand employees, they want to partner with you to implement and administer your Microsoft cloud technology.
Visit them at intelligink.com/podcast, that's I-N-T-E-L-L-I-G-I-N-K dot com slash podcast, for more information or to schedule a 30-minute call to get started with them today. Remember, Intelligink focuses on the Microsoft cloud so you can focus on your business. Next up for you is the types of virtual machines. So I think this is an interesting one. Like, I mentioned in kinda the opening that you can do mix and match for cost optimizations, so being able to mix and match regular VMs and spot VMs. And then you can select the sizes, or the VM series, that you wanna use along the way for those specific components. So I think it's a minimum today of 3 VM series that you have to select, and you can go up to 15. I'm gonna spin up some big ones, Scott. I'm gonna go do 10,000 D96s v5s. Yeah, go for it. That's a beefy bill. So, yeah, you can go select a bunch of those. Like you said, spot, regular.
So... and then after that, you start picking capacity. This is the one where it gets interesting for me. Like, after you select all your sizes, now you can go choose these capacity preferences. So you can maintain capacity, and this is: once the target is met, replace evicted VMs to maintain target capacity. So it sounds like once you spin this up, you can set a capacity. So target capacity is a number between 1 and 10,000, and it sounds like, once you spin these up, it must be able to kick VMs out. Like, you could essentially go in and say... I'm guessing eviction is essentially deleting that VM, saying this VM isn't part of this fleet anymore. It's a spot compute thing. One of the things that you run into with spot compute is, it's kinda based on availability, and it's based on demand shaping within the region. So if you're doing spot, spot can be evicted at any time. So it's really good for kinda, like, stateless workloads.
If you have stateful things, you know, you kinda make sure, like, hey, that you're responding to things like eviction notices, so you're potentially persisting state in, like, object storage, or off to disk someplace else, things like that. So basically, what it's saying is, if you wanna maintain your capacity... like, let's say you did a thousand spot VMs,
and you tell it to maintain. Well, over time, once you've spun those spot VMs up, they're gonna get evicted just naturally through the life cycle, as other customers come in and out of the region and the elasticity there. So what you're telling it to do is saying, like, hey, I need a thousand VMs
all the time. So if any get evicted, say it evicts a hundred VMs and you go down to 900, the async engine in the background that's deploying that compute for you will come in and request another hundred spot VMs to get you back up to a thousand. And if it can't get you to a thousand, it'll kinda keep churning away in the back end and keep trying to get you to where you need to be. Or you can say, don't maintain that capacity. Like, hey, I wanna start at, like, a thousand, and then stuff's just
naturally gonna get evicted over time. And once it gets evicted, I don't really need it back, or maybe I spun up another compute fleet or something else along the way. So you can kinda let it go naturally, kinda downhill that way, if you want to. This would get interesting then with the 365 days, because is it technically, then, like, no longer active after 365 days? And if something gets evicted, it isn't active to maintain capacity? There is my question. I guess so. I don't know. Preview service,
there's no SLA, anything like that. Figure it out. I imagine some of this, just to be brutally honest, is kinda like a shot in the dark from folks going, like, hey, how long do we need to do this? It's like, a year sounds like a long time for a long-running async job just to kinda keep sitting in the background. It's like a job scheduler. Especially when you consider that, at least as of today, for Compute Fleet,
it doesn't have any cost. So it's kinda like AKS, in that, you know, you're getting the management piece for free, but you're paying for the underlying compute along the way, and all the constructs that come with that. So you're paying for the virtual machines. You're not paying for the ability to deploy those virtual machines and manage them via Compute Fleet. I don't know if part of it is maybe, like, a little bit of just a cost thing, even, like,
internally. Like, hey, how many of these jobs do we need to run? How long do they need to poll for? What does that look like, and how does it manifest? I imagine a lot of that stuff gets cleaned up before it goes to a production state. I would assume so, because there's a lot of... I have a lot of questions, like, how some of that works, that aren't answered. I think some of the questions you have... you know, folks are kinda sitting in the background and they're asking these same questions. A lot of it is... this might not be the service for you. Like, that's true. There are other things out there. Like, this definitely has a use case and a place. And if you're looking at it, and you're going, like, yeah, I don't really see the use case, I don't see the place, like, that's okay. That's why things like VMSS
and everything else exist out there. Because, like I said, this isn't VMSS. So even though you can do the capacity deployment through a single API call, you've still got the potential, like, configuration concerns and all the other stuff that comes along the way with it. So you have this little trade-off there that you're kinda balancing: do I want to deploy a lot of compute, or do I want to deploy a little less compute and then have all the
manageability aspects that come with it? Continuing down: setting maintain capacity or don't maintain, what is your target capacity? So what are you gonna set this at? And then you can set the eviction policy. So...
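Purely as a mental model of that maintain-versus-don't-maintain behavior, here's a tiny sketch. The class, method names, and reconciliation timing are all invented for illustration; this is not the service's actual API, just the logic as we understand it from the docs:

```python
# Mental-model sketch only: how "maintain capacity" behaves for spot VMs.
# FleetSimulator and its methods are invented names; the real engine is a
# managed background job inside the Compute Fleet service.

class FleetSimulator:
    def __init__(self, target_capacity: int, maintain: bool = True):
        self.target = target_capacity
        self.maintain = maintain
        self.running = target_capacity  # assume the initial deployment landed

    def evict(self, count: int) -> None:
        # Spot VMs can be evicted by the platform at any time.
        self.running = max(self.running - count, 0)

    def reconcile_tick(self) -> int:
        # With maintain on, the async engine keeps requesting replacement VMs
        # until the target is met again; with it off, evictions just stand.
        if not self.maintain:
            return 0
        requested = max(self.target - self.running, 0)
        self.running += requested  # assume the request is eventually filled
        return requested
```

So a fleet of 1,000 with maintain on that loses 100 spot VMs asks for 100 more on the next pass; with maintain off, it just settles at 900.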
That's all spot stuff, spot-specific. Delete, allocation strategies, max hourly price per spot, it's all the spot stuff, and then you get down to the VM capacity, so non-spot VMs, and you lose some of those extra settings, because now it's just regular compute, where it's your capacity, and this one's... I don't know if this is spot. Yeah, maybe you can help me out with this. Like, you can set a capacity for these normal VMs of
say, a thousand. And then you set your minimum starting capacity of something like 10, and this minimum starting capacity is essentially saying that VMs in the fleet are provisioned gradually. It starts with your desired minimum capacity if you specify it, so it could start with 10, and then it's gonna slowly grow up to that target. It doesn't say how slowly, but this is not something you have with
spot VMs. Like, spot VMs, you just set the capacity, and I'm guessing it does it all at once, versus regular VMs, where, if you want, you set a minimum and slowly ramp up to your capacity. I think it's implied that spot is also going to ramp, just by nature of spot. By nature of spot, like, you're basically saying with spot compute:
Give me your extra compute in the region, and I'm gonna pay a little bit less for that compute, but the reason that I'm paying less is because you, Microsoft, as the host and hyperscaler, can rip that compute away from me. I can be evicted. And when you've ripped that compute away from me, you're gonna go give it to another customer. So let's say, in this case, like, I'm the one who's consuming a lot of spot, and you come in and you say, hey, I'm here as a
customer, and I don't need spot compute, I need real, non-ephemeral compute, and it's in the same series that I have in spot. It's in Microsoft's best interest to evict me out of there and just let it go. So I think a lot of it is just kinda, like, sitting and trying to balance that nature of spot versus everything else that's there and available for you. The other interesting thing about this, too, is, like you said earlier, you can select up
to 15 different sizes. So you could go in and select 15 different D-series VMs, E-series VMs, spot VMs, but you can't set capacity on a per-SKU level. I can go in here, and I can say, I want my target capacity to be 3,000, my minimum is a hundred, but I've selected 5 different sizes. It's gonna go... well, it's technically gonna break the way I have it now. If I'm spinning up 5 different sizes with 3,000 capacity, I believe, is it spinning up 3,000 of each, or is it taking... now I'm
questioning myself. Total capacity. So this is total capacity. It's taking 3,000, dividing it by 5, probably, and spinning up 600 of each SKU that I've selected, doing that math in the background, versus 3,000 of each SKU that I've selected. Again, very unclear from the documentation, even from the sense of, like, hey, self-documenting code, if you go look at, like, the ARM templates for these, or, like, the actual ARM
endpoint that sits out there. I don't have, you know, 10,000 cores that I can go play with in my pay-as-you-go subscription. So, like, I wasn't even able to test one of these at scale. Like, even getting yourself lifted beyond, like, the default core limits in, like, a Visual Studio sub is pretty rough these days, for how that stuff comes together. But, yeah, you're basically doing your allocations that way, and it's gonna split amongst them. It's unclear how it divides things
up. So the interesting thing is, it's not exposed in the portal, really, or at least it doesn't map back clearly to the options in the portal. But in the ARM template, if you go look at the ARM template... so, mm-hmm, the spot profile and the regular priority profile, they have this property that's called allocation strategy. And allocation strategy can be capacity optimized, or it can be lowest priced.
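Since the docs don't spell out how the total target capacity gets split across the selected sizes, here's a sketch of the two plausible behaviors being discussed: an even split, and a lowest-price-first fill. Both functions, and any SKU names and prices used with them, are guesses for illustration only, not the service's documented logic:

```python
# Two hypotheses for how a compute fleet might split one total target
# capacity across the selected VM sizes. The real behavior is undocumented
# at the time of recording; both functions are illustrative guesses.

def even_split(target: int, sizes: list) -> dict:
    """Hypothesis 1: divide the target evenly across every selected size."""
    per_size = target // len(sizes)
    return {name: per_size for name, _price in sizes}

def lowest_price_allocation(target: int, sizes: list, per_size_limit: int) -> dict:
    """Hypothesis 2 ("lowest price" strategy): fill the cheapest sizes first,
    up to a per-size cap, until the total target is reached."""
    plan = {}
    remaining = target
    for name, _price in sorted(sizes, key=lambda s: s[1]):  # cheapest first
        take = min(remaining, per_size_limit)
        if take > 0:
            plan[name] = take
            remaining -= take
    return plan
```

With five sizes and a target of 3,000, the even split gives 600 of each, which is the division-in-the-background guess from a moment ago; a lowest-price fill would instead exhaust a per-size cap on the cheapest SKUs first and may never touch the expensive ones.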
So that's kind of interesting, because then what it's doing, if I'm reading it the right way based on the API, is it's using the logic of, hey, you selected 5 D-series VMs... what are you frantically clicking on over there? So... so, yes, you've selected 5 D-series VMs, you can go in, and you can set your allocation strategy to be lowest price. And then what it'll do, based on that, is... it would literally choose the lowest-priced
VM series, and it would start with deploying those first, and then come behind it, like, it would just keep kinda incrementally stepping up the ladder based on the price. Or, like I said, the other dimension is capacity optimized. So, what I was clicking... I was trying to see... I can't find small VMs. That's another weird thing with this. Like, the smallest VM I was able to find to select was 8 cores. I was curious, if I just selected 2 small ones
and set it to 4, if I get 8 VMs or if I get 2 of each. Okay, not the service for you if you're looking for small VMs. If I'm looking for small VMs, I know. I was trying to test it out to answer my questions. This very much does seem geared towards, like, AI training scenarios, HPC scenarios, where I'm gonna spin up a bunch of compute to run some kind of job on top of things.
It's just going above and beyond, like, the click stops of some of the scale-out capabilities of things like, you know, the current HPC offerings, current VMSS offerings. But, yeah, it's a little bit of weird nuance there. So then, once you set your capacities, it's really, after that, just, what's my admin username and password for all these VMs? Again, set it once, get it for everything.
If you wanna do hybrid benefit, if you're doing Windows. And then networking is just virtual network, subnet, and then you can go in and tweak network interfaces, really, if you wanna do NSGs or accelerated networking. They do have an option, too, to do a new load balancer in front of it with networking. So, relatively basic. You're essentially just saying, dump all these VMs in a network, use an NSG if you want to, and if you're gonna do any type of load balancing... Yeah, I believe you have to
create a load balancer. I don't think it's optional. Do you? I'm gonna try it. Oh, basics. What did I fail on? Oh, fleet name. Ben's test fleet. Ooh, it validated without a load balancer, Scott. It's gonna initialize the deployment. So I tried to spin it up through an ARM template, and I just pulled their default ARM template. So the default ARM template, at least as it's documented today, includes a VNet and a load balancer in it. So, yes, I just did VNet, subnet, no
load balancer. So it doesn't look like it's required. We'll see what this actually creates when it goes and deploys everything. Well, I mean, if I have to come back and check in a year... Yeah. What else? There's that FAQ out here. I'm trying to think if there was anything else in here that we wanted to cover. Start and stop time. Yeah, no start and stop time. You
spin it up, it's running. It will be interesting, like you said, to see if they change some of this stuff going forward. But also, like you said, it's not for me. This is not a service I would use. The only use I kinda thought of, but even as we've looked at it... is if you wanted to do it for a lab. If you're spinning up, like, a large lab for a learning environment. But even that, given you'd have to do an image, and that there's no start and stop time, you're probably better off going other routes, even
for labs. Like you said, it does tend to be... it appears to be more HPC, AI-training-based stuff, like you were mentioning. I'd go back to: there are services to do those kinds of things. Right? Like, there's literally Lab Services. Lab Services, we've talked about that one before, which does sit out there. There's Dev... so there's kinda all these different
constructs. So you do have to kinda go through and pick the right functionality, like, underlying deployment model, all those things that are gonna work for you within there. The only thing I'd call out is, like, if folks are watching this and they're trying to get hands-on with it, and if you are more of, like, an ARM template deployment person, versus a go-and-click-in-the-portal person for your first deployment, you do have to
register the resource provider for this one. So there's a little bit of a flag that needs to be cleared: register the resource provider, and then you can go ahead and do your deployments. You didn't have to register the resource provider, because you initiated through the portal, and then that registration happens automatically. And then it broke, because my subscription is not registered to use the namespace Microsoft.Compute. You've never deployed
a VM in there? Apparently not. That's a weird one. I thought I had. Now I'm gonna go look. Oh, this is gonna be annoying. Yeah, we'll go look later. That wraps it up; that'll be for another time. Yes, yes. Yeah. So, kind of an interesting service to go look at, explore, play with. If you wanna go spin up a whole lot of VMs: Compute Fleet. Get it done. Alright. As always, thank you, Ben. Thank you. Enjoyed it. Enjoy your weekend. Try to stay cool in this... Florida got hot really fast
this year. We went from winter to summer, like that, yes, in an instant. So enjoy, and we will talk to you again soon. Alright. Thanks, Ben. If you enjoyed the podcast, go leave us a 5-star rating in iTunes. It helps to get the word out so more IT Pros can learn about Office 365 and Azure. If you have questions you want us to address on the show, or feedback about the show, feel free to reach out via our website, Twitter, or Facebook. Thanks again for listening, and have a great day.