S4E24 - Azure Container Apps - Fully managed containers in the cloud

Nov 24, 2023 | 52 min | Season 4, Ep. 24

Episode description

This week we cover Azure Container Apps. Container Apps is a serverless container service from Microsoft Azure, built on Kubernetes, designed to simplify the deployment and scaling of containerized applications. It offers an integrated platform that lets developers focus on building applications without managing the underlying infrastructure.

Sam takes the lead covering:

  • What are containers, and why do organisations use them?
  • What is Azure Container Apps, and how does it compare to other services in Azure?
  • Which types of apps can be deployed and hosted?
  • What are the SKUs, and how much does it cost?

What did you think of this episode? Give us some feedback via our contact form, or leave us a voice message in the bottom right corner of our site.

Transcript

Hello, and welcome to the Let's Talk Azure podcast with your hosts, Sam Foote and Alan Armstrong. If you're new here, we're a pair of Azure and Microsoft 365 focused IT security professionals. It's episode 24 of season 4. Alan and I had a recent discussion on Azure Container Apps, a serverless container service provided by Microsoft Azure, designed to simplify the deployment and scaling of containerized applications using Kubernetes.

Here are a few things that we covered: what are containers, and why do organizations use them? What is Azure Container Apps, and how does it compare to other services in Azure? What types of apps can be deployed and hosted? And what are the SKUs, and how much does it cost? We've noticed that a large number of you aren't subscribed. If you do enjoy our podcast, please do consider subscribing.

It would mean a lot to us for you to show your support to the show. It's a really great episode, so let's dive in. Hey, Alan, how are you doing this week? Hey, Sam, not doing too bad. How are you? Yeah, good, thank you. Are you back in our time zone yet? How was your jet lag from Ignite last week? I don't think it's been that bad. A little bit. I do seem to flag by the evenings, but, yeah, I think I'm okay.

The journey back was as good as it could be, I guess. It's quite a long journey, isn't it? But, yeah, back to work and cracking on with the week. So how about you, has your week been busy? Yeah, I think it's mostly been getting my head around a lot of the announcements at Ignite, to be totally honest with you. From a product perspective, I would say there's been a lot of announcements across the board. Is that sort of what you think?

There's been an absolute ton of new stuff, especially in our area as well. Yes, definitely a lot that's coming out now, and customers and the community are asking what it means. So, yeah, it's definitely a lot out there. Yeah, no, 100%. Cool. Okay, shall we crack on with this episode about Azure containers? Yeah, let's do it. Okay, Sam, so what are containers? Yeah, let's begin with, I suppose, the basics on containers, really, because this is probably our first proper episode where we're

actually focusing on containerized applications. Okay. So, containers: if you imagine, traditionally you would have a server, and that then got optimized and brought into virtualization, so virtual machines, which you may use in your organizations. Virtual machines give you, well, kind of exactly what they say on the tin, really: a virtual, literal machine. A hypervisor is used to effectively mimic all of the hardware components of

a system in a virtualized, software way. Containers are a subset.

I suppose it's quite hard for me to explain the comparison. It's, in effect, a more sandboxed environment than a virtual machine. Imagine just a smaller, sandboxed environment that you may run your singular application or workload in. Usually these still use a hypervisor, but you've effectively got a sandboxed, jailed environment in which to run your application containers. Weirdly, they do still use operating systems, so the base image for a container

may be Debian, Ubuntu, Alpine, X, Y and Z, other operating systems that you would actually use in virtual machines. But containerization systems effectively segment those applications at, I'm going to say, a higher level. You may have a single machine, and it could be a virtualized machine as well, that runs a containerization system, such as Docker as an example. And then within Docker you would run a set of containerized applications.

Typically, a lot of Dockerized applications are web applications, as an example, but there are many, many different types of applications that can run in containers, and different application lifecycles that can be supported. And containerization allows you to effectively chop up a machine even further into smaller chunks.

When we talk about containerization, I think we've got to talk about service-orientated architectures and microservices architectures, because this is where a lot of containerization has come from. And what I mean by that is, traditionally you may have had a monolithic application that you would run, and lots of people still run monolithic applications. Nothing inherently wrong with monolithic applications. But what we have seen is the rise in popularity of, say, a microservices or a service-orientated architecture, where software has become so large that segmenting it up into its component pieces makes a lot of sense, and then you orchestrate those pieces together.

So, whereas an organization traditionally may have been hosting one, two or a handful of applications, if you are going down a microservices architecture, you could have hundreds of literal microservices, aka small applications, that you've got to orchestrate and manage. You couldn't have a virtual machine for every single application, because the management overhead of that would be nuts, basically. So containerizing applications makes a lot of sense. You've got another level of segmentation and protection there, so that you can run workloads side by side. That's also great if you're running a tenanted system, a tenanted application where multiple of your end users are sharing the same resources; it gives you another level of abstraction there.

So, yeah, containers are immensely popular, shall we say. Many, many different software and workloads can be supported, and you can create your own custom containers. And containers themselves traditionally don't have any data storage within them; you use volumes or bind mounts to mount volumes and folder structures inside of them. So containers are supposed to be able to be destroyed at any time, moved, upgraded, X, Y and Z.

So the lifecycle of a container is very different to a traditional web application and hosting platform. I'm not really going to go into it in too much detail because, one, I'm going to butcher the explanation, and two, if you're looking at Azure Container Apps, you're probably already on that journey anyway. Cool. Yeah.

So you're kind of saying that, using microservices, like you said, you wouldn't want to manage multiple virtual machines, because it might be hundreds of microservices. I guess that's kind of also a cost saving as well, because you can run as much as you can on the same hosts without them bleeding into each other, kind of thing. Like you said, they're all sandboxed away, so they can't communicate unless you allow them to from a networking perspective.

Yeah, consistency is really important as well, because the developers of software usually define the requirements of their containers, so they get consistency. So even if the infrastructure team is running a different operating system to what you use, you can define all of your requirements in your containers. You get flexibility: they're so flexible that you can wrap the vast majority of applications inside of a container. And you also get massive efficiencies, basically, because you can provision so many containers on a single host, and you use these management tools to make that a lot simpler for you. You can take micro chunks of CPU resources and memory, because containers are so lightweight.

So if a lot of your business applications are effectively just batch jobs that run and create, read and update databases, there's sometimes not a lot of processing power required. You can really fine-tune your resources and split up your vCores even further than you can in a virtualized environment.

Cool. And you can also have different dependencies and packages, can't you, in each container. So you haven't got that worry of, I need version ten of this package and this other application needs version nine, because they're so containerized. Yeah.

So multiple containers on the same machine don't share any resources between each other. Any packages are, I suppose, baked into those containers, is probably fair to say. When you build a container, like if you're using Docker, you have a Dockerfile, and that's effectively the commands it goes through to build that image. And you can define your dependencies there, basically. Cool. Okay, so Container Apps: what are they, and how do they compare to other options that are in Azure?
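A quick aside on the Dockerfile Sam just mentioned: a minimal one for a small Python web app might look like this. The base image, file names and port are purely illustrative; the point is that the dependencies get baked into the image, as discussed above.

```dockerfile
# Start from a small base operating system image, as discussed above
FROM python:3.12-slim

WORKDIR /app

# Bake the dependencies into the image so every host runs the same versions
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

# The command the container runs on startup
EXPOSE 8080
CMD ["python", "app.py"]
```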

Okay. So we've talked about some very simple examples of singular containers, and we have sort of jumped into microservices. But what happens time and time again with applications is that it's all well and good building them in your own environment, running them on your local dev box, but when you get to actually running them in a production environment, that's a very different conversation.

There are management tools; a big containerization system is Kubernetes, as an example. We're not really going to touch on Kubernetes, because it is a podcast in itself. That's not a criticism of it, that's just a reflection of the sophistication of that tooling. Right, so you're a developer: you build your application, you define your Dockerfile, you get all your dependencies ready to go. Then you've got to somehow host that image, that container, somewhere.
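That "host the image somewhere" step usually boils down to a couple of commands. Here's a sketch assuming a hypothetical Azure Container Registry called myregistry; the image name and tag are made up for illustration.

```shell
# Build the image locally from the Dockerfile in the current directory
docker build -t myapp:v1 .

# Tag it for a private registry (here, a hypothetical Azure Container Registry)
docker tag myapp:v1 myregistry.azurecr.io/myapp:v1

# Authenticate and push the image up to the registry
az acr login --name myregistry
docker push myregistry.azurecr.io/myapp:v1
```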

So what you could do is fire up a virtual machine inside of Azure, install Docker on it, push or build your container image on that machine, and then start running it. You could give it a public IP address, fire traffic at it, and you could manage it end to end, fully yourself. What Azure Container Apps is going to do is give you a managed service from Azure, which is actually backed by Kubernetes, but abstract away a lot of that complexity for you.

You are effectively going to give it your container. You're going to tell it where you want it to run, what you want it to run on, the resources it consumes, what can talk to it from a networking perspective, and you define its lifecycle. And then Microsoft is going to run that for you. So, we've talked previously about App Service. Now, a slightly confusing thing about App Service is that you can run Docker containers there, but we'll pause that thought for a moment. What App Service is for applications, a platform as a service, Azure Container Apps is, but for containers instead of the source code directly. So you'll build your container image, and you'd probably push it to a private repository. You could use a registry inside of Azure to do that, because there's a container registry service inside of Azure.

And then you'd effectively say, okay, I would like to run this container in this way, based on this image that's come down. I think the big thing for me is it's going to do your networking for you. It's going to give you HTTPS and TCP ingress without having to manage infrastructure. I think that's really important, because getting your application terminated to the Internet correctly is a job in itself, basically.

It will handle specialized hardware. So you can run on, I'm not sure what the actual name of it is, either dedicated or isolated hardware; we'll come to that in the pricing. But if you need access to GPUs for your workloads, as an example, or you want to run on specific hardware, then you can do that. One of the big functions is that it's going to give you container revisions and application lifecycle management. Right? Because one of the big challenges is: okay, I've pushed a new version of my application out. What if there's a regression or a bug in it that I haven't spotted in my QA process? That obviously never happens, right, because developers test everything before they push it out. But it will happen at some point. So you want to be able to stage releases.

Maybe you load balance 50% from a previous version to a new version, or you slowly roll out, but you want a way to be able to revert back, and it's going to basically give you that, and split traffic for A/B testing. It doesn't sound like a hugely complicated endeavour, but if you're running that yourself, that is going to take time. You can also pull containers from any registry, public or private. You can pull from Docker Hub if you just want to run a container that's already out there and publicly available, or you can use a private one, such as Azure Container Registry. If you want to bind it to an existing VNet, you can also do that when you're defining the environment for your containers to run in.

What it's also going to do is give you access to monitoring, logging and observability; that is challenging for any application stack and technology. So Microsoft is going to give you tooling in and around that which is going to help you accelerate binding into some of that stuff. Also, I think I want to call out now, because I'm probably not going to have a separate section on security, that managed identities are also supported for injection into containers. Getting identity flowing through into containers is a challenge, because they are an isolated, sandboxed environment, if you will. So passing identities and authentication through there securely can be a challenge.
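Circling back to the revision traffic splitting Sam described: with the Azure CLI that might look something like this. The app, resource group and revision names are invented, and exact flags can differ between CLI versions, so treat this as a sketch.

```shell
# Send 80% of traffic to the current revision and 20% to the new one,
# a simple staged rollout / A/B test as described in the episode
az containerapp ingress traffic set \
  --name myapp \
  --resource-group my-rg \
  --revision-weight myapp--v1=80 myapp--v2=20
```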

Cool. It kind of sounds like this is great if you are building containers, because there's probably still a lot that you have to configure, but in effect, you just have to provide the location of your container, get it uploaded, make sure your networking is correct and things like that, and then you're kind of sorted. It kind of feels like, anyway, it's there, it's done, you can start testing. There's a low barrier to entry, kind of thing, of getting your app up and running, at least testing it and seeing if it works before you go into production.

Ironically, containerizing an app is very simple from a developer user experience perspective, right? You've got your nice safe space that you know your app will operate in. If I run a container using Docker on my Mac laptop and then I push it onto a Linux box, like an Ubuntu server that's running Docker, I have really high confidence that it will just work, because the environment inside of my container is exactly the same. But the way that I kind of see it is that containers then add a management overhead to you, because you're not just dumping your code onto a box and putting a web server on it. Let's say you've got a web app as an example: you're not just configuring the box and throwing your application on it. You've then got to understand the lifecycle of that container and manage it.

I'm not saying that's overly complex, especially with the technologies that these types of individuals have to deal with. I'm not really saying it's arduous or anything like that. It's just another thing that you need to think about on top of IaaS. Right? Because if you went IaaS, you maybe just put Windows Server on it and run IIS as an example, right? Does IIS even still exist, Alan? I don't even know, because I haven't used any of that for a long, long time. Right, I use words that I know. But you can put a web server on, you drop your app into it, and it will start running, won't it? You have to make sure that box is configured, hardened and everything you want to do with it. But then you've still got the box, and then you've got, let's say, Docker on top of it that you've got to maintain. Microsoft is going to take that next level, so you get all of the benefit of the great developer experience, and once again the cloud platform goes, oh, just give me your container and I'll run it for you.

Right, yeah. And I guess, if you were hosting it yourself, like you said, it's not impossible to do, but you've then got to think about resilience and all that sort of stuff. I have two machines running in different regions, things like that, maybe, or even just in the same region, let alone if you need an application that needs to scale for you.

I'm guessing the Container Apps service, at least from a resilience perspective, if the host it's on goes down for any reason, it's literally up on another one somewhere else that you don't even have to worry about. Yeah, exactly.

And I think it's probably worth calling out one of the sort of disadvantages of Azure App Service. I know we're not talking about App Service in this episode, but we do need to compare the other options in Azure, because there are some things you need to think about. One of the complexities with App Service is that it's not the same environment as your local environment, so there is a process of validating that your app will run in App Service. But if you give your container to Azure Container Apps, it will just run. The caveat that I need to add is that you can actually run containers on Azure App Service, so that makes it a little bit more complex. Should we talk about the other options in Azure? That's probably a good segue into talking about them. Yeah, let's do that.

Okay, so Azure Container Apps is going to give you a lot of help and support, and take a lot of your workload off in terms of binding up a lot of these things. Azure App Service, as we've talked about, can run containers, Docker images, Docker containers, sorry, but that's really for long-running applications. Azure Container Apps has the ability to do event-driven jobs as well, so you can set up a schedule to run a container on a particular job cycle. You don't have to leave a Docker container running constantly, and long-running applications are really what App Service is all about.

There is Azure Container Instances as well, but that is really around pods of Hyper-V isolated containers, so it's better to think about that at a lower level than Container Apps. You're not going to get things like scaling, load balancing, certificate termination, all of those niceties there. But if you do need a lower-level experience, that could be a good option for you.

Azure Kubernetes Service: as I've mentioned before, Kubernetes is a very mature and sophisticated platform that people dedicate their whole professional careers to. Azure Kubernetes Service does wrap up a lot of the complexity and layer on simplicity for you. They'll effectively run the control plane side of Kubernetes for you, but you do still need to interact with and manage it in a very similar way to running your own Kubernetes cluster. They're just giving you a lot of that out of the box, basically. So if you do need to actually build a virtualization platform and an orchestration platform, that might also be a better option for you, because Container Apps is really around singular containers and managing them, maybe scaling those containers, but not at that scale.

It's probably worth talking about Azure Functions as well, because Azure Functions is effectively a serverless environment that allows you to execute code, and really you can do that with Container Apps too. But what you're talking about with Azure Functions is kind of like App Service: you're giving Microsoft your code, they're doing a build for you, and they're running that build for you.

And is it not containerized? Well, it is containerized, because App Service and Azure Functions containerize everything as you send it up; they're just building that container for you on the fly. There are two other options that I have never used, so I'm just going to call them out because I've got no experience: Azure Spring Apps, for Spring developers. I'm sorry, but I have absolutely no clue what that is whatsoever. And Azure Red Hat OpenShift; again, OpenShift is not an area that I've looked at at all.

So even if you are looking to containerize your apps and deploy and orchestrate them in Azure, you do have a few options, and you do need to think about what is best for your use case. But the reason why I picked Container Apps to talk about is that if you do want to containerize your apps, it's probably the easiest one to get started with. That's up for debate and that's just my gut. So it's a really friendly and welcoming start if you're moving into that area and you want some assistance with that journey.

Yeah, okay. It does sound like Azure Container Apps, even for someone starting off, at least in development, or they understand containers and maybe, like you said, they're using another container service and they want to host that service in Azure; it kind of does feel like that's the easiest, simplest way to start off with, at least until you understand what other advances you need in that space, when you maybe then have to decide on another Azure service to help you out.

Do you know what I love about it? It's just helping you orchestrate those containers. There is very little vendor lock-in with it, right? If you want to take your containers elsewhere, you are free to do so, because they're giving you the management and orchestration platform, but there is, in theory, no change to your containers whatsoever. If you're currently on GCP and you've got your own private registry there, you've just got to get your image to a place that this thing can talk to, right? And worst case, that's a rebuild of your container into Azure Container Registry, and if you're doing that somewhere else, that's not going to be out of the question.

And yeah, if you don't like the service, or you want to run on multiple clouds at the same time and load balance between them, in theory you could have an external load balancer pointing at these endpoints, right? You don't have to do the load balancing in Azure if you don't want to. So it's really flexible from that perspective. Yeah. Okay, so what sort of types of applications can be hosted then, within these containers? Yeah, so like I mentioned, long-running applications can run. They can just sort of stay up,

be replicated, scaled, kept warm, ready to go. You effectively have an environment which has a number of containers within it, and then you have revisions of those containers inside your application. And that's how you can basically decide what your active revisions are versus your inactive revisions as you go through.

But just talking about applications: long-running, live applications are supported. You'll have your entry points and you'll have your startup mechanisms predefined in your containers. There's also the ability to run jobs, and with jobs you can effectively have different types.

You can have a manual job that's just triggered on demand. You can have a scheduled job, so you specify a cron schedule, basically. And then you can have event-driven jobs as well, so they can be triggered via, like, a message arriving in a message queue, as an example. And effectively what that job does is it just runs an image. You pass data into it, or the image wakes up, finds the job that it's got to do, does its work, and then shuts down after that. And that's a really powerful thing, because event-driven systems are really popular now. In the way that you orchestrate flows of logic running through a larger organization, having an event-driven system can simplify a lot of the job flows, especially if you've got long-running tasks and lots of interconnected pieces. So, yeah, it's really good that it's supported here.

If you are into microservices, it also allows you to run, well, there's a native Dapr integration. And what Dapr does, this is the best way of explaining it, is it effectively runs a separate application alongside your containers, with, like, a native messaging and queuing system that you can use to communicate and orchestrate between your containers. Dapr is open source, so it's great to see that there's, again, a non-vendor-locked-in, inbuilt mechanism to do messaging and queuing inside of this system. That's a lot of words, right? But bringing up your own queuing system, your own messaging framework, again, is a role and responsibility in itself. So in theory, you could align to Dapr, and that's just supported out of the box with Container Apps. So if you already use it, big win, and if you're looking for something to help, then, yeah, you've got it there, ready to go.
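To make the jobs idea from a moment ago concrete, a scheduled job might be created with the Azure CLI along these lines. All the names are made up, and `az containerapp job` flags can vary between CLI versions, so treat this as a sketch rather than a recipe.

```shell
# A scheduled job: runs the container image on a cron schedule
# (here, every night at 2am), then shuts down when the work is done
az containerapp job create \
  --name nightly-batch \
  --resource-group my-rg \
  --environment my-environment \
  --image myregistry.azurecr.io/batchjob:v1 \
  --trigger-type Schedule \
  --cron-expression "0 2 * * *" \
  --replica-timeout 1800
```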

Wow, okay, so there's definitely a lot of different scenarios there, rather than just hosting your code and running it like a web service. That's great. Okay, so I think we kind of talked about this a little bit, but how do we get the apps deployed? I think you said something about putting them into repositories, things like that. Yes, let's talk about deployment options, because I think that's really important personally. Getting your code from text editor to cloud, and improving the efficiency of that, is really important for organizations. Right? They don't want a hugely bureaucratic system that they need to go through.

So you can deploy an application from your code editor; you can do it inside Visual Studio and Visual Studio Code directly. I haven't done that. My assumption is you do the build and push the container image on the fly from there, but I'm not 100% sure. You can define your deployment in the Azure portal. So if you're bringing a public image down, let's say, because you could run a database here, right? You could run an actual database in Container Apps. Let's say you did MySQL as an example: you could bring down the MySQL image and just run it alongside.

Also, you can drive deployment from GitHub Actions and Azure Pipelines. I believe there are extensions, and I think they're called extensions on both sides, might be extensions and plugins, I can't 100% remember on the Actions side, but you can effectively hook into these here as well. There is also infrastructure as code. I'm not sure about Terraform, to be totally honest with you, because I haven't looked at it, but the Azure CLI and the newer Azure Developer CLI both have support for it, basically.

So, yeah, lots of ways to get your apps up there. But what's interesting here is, and I know I keep coming back to App Service, but it's a good thing to compare against, especially if you're coming from that world: your deployment is a lot more simplistic in this model, because you're not pushing a package of your code. You have already built the container, you have done all of the QA to make sure you're happy with that container, and you are literally pushing the image of that container to this host environment. Right? So all it is, is getting a big blob of data ready for them, and some configuration to boot the thing up. It's not sending all your code and your configuration. You do do some configuration with environment variables, granted, but you don't go to the same level as you do with App Service. No. Cool. Okay, so there's definitely a couple of ways to get your apps up there. Okay, so let's talk about managing and monitoring.

So you kind of talked about it being available, but how do you monitor and manage your applications in the containers? Yeah, so there's a lot of tooling in and around that that Microsoft are basically binding in for you. You've got log streaming, the ability, kind of like in App Service actually, to see a live output of the logs coming out of a system. You can connect to a container console, so you can actually connect in to your container and debug through it, because that's an important thing with containers: you want to be able to actually get into them, to debug them and work out what's going on. A lot of the platform metrics are going to come through Azure Monitor and Log Analytics as well. So because that ecosystem is in place, you're going to get a lot of the logging, analytics and alerting coming through there too.

Those things that I've talked about, you're going to use in very different ways depending on the phase of your application deployment. So if you're in development and test, you might be looking at log streaming and the container console, but when you're deploying and actually running it, trying to do maintenance on it, observing it in production, you might be using things like Azure Monitor and Log Analytics at that point.

Cool, okay, so it's kind of the same, like you said, similar monitoring to what you might be used to in Azure anyway, from Insights maybe, to, like you said, Azure Monitor, et cetera. That's good, because it's not something specific to this service; it's kind of the norm across Azure. If you've done it somewhere, you know kind of what you're looking at, apart from the stats being different. Cool. Okay, so what sort of SKUs are there, and our favorite question: how's

it priced? Okay, I'm reading from the pricing page, because it's a complicated one, I admit that right now. Okay, so the pricing is done in vCPU-seconds, gigabyte-seconds for RAM, and requests. You get your first 180,000 vCPU-seconds per month for free, you get your first 360,000 gigabyte-seconds per month for free, and you get 2 million requests each month for free as your base. That's what you get.

That seems powerful in itself, doesn't it? You can start out and try and work out your deployment of your application, your container, and it sounds like you can do it for free before you go to production. It kind of feels like that, exactly. Yeah, like a lot of these platform-as-a-service managed services, you are getting a free tier so that you can validate your code in a dev or pre-prod environment. 100%. Okay?
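To put those free grants in perspective, here's a back-of-the-envelope check, using only the numbers quoted in the episode, of what a single small container allocated 0.25 vCPU and 0.5 GiB would consume if it ran flat out all month:

```shell
seconds_per_month=$((30 * 24 * 3600))    # 2,592,000 seconds in a 30-day month
vcpu_seconds=$((seconds_per_month / 4))  # 0.25 vCPU running continuously
gib_seconds=$((seconds_per_month / 2))   # 0.5 GiB running continuously
echo "vCPU-seconds used: $vcpu_seconds vs 180000 free"
echo "GiB-seconds used: $gib_seconds vs 360000 free"
```

Even a quarter-core app that never scales to zero blows well past the 180,000 free vCPU-seconds, which is why the idle pricing and scale-to-zero behaviour discussed next matter so much.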

Now, when you go above this, there are basically two modes: active usage and idle usage. Okay? Your application is deemed to be in active mode if it's above 0.1 CPU cores, or when the data is above 1,000 bytes a second. So when I'm talking about 0.1 cores, I think you can see the granularity that's able to be gleaned by chopping up virtual cores, right? Because in the world of microservices and containerization, you can literally chop up a single virtual core into hundredths of a core of resource, and schedule down to that level.

So the numbers are just simply ridiculous, and I can't really give you a great example, but I'm just going to tell you that one virtual CPU second is $0.000034, and memory is $0.000004 per gigabyte-second. Now, what's even more interesting is the idle rates. So that's active usage, right? That first number I gave you was basically the 34 per second for vCPU; I'm not even going to talk about the zeros in front of it, right, we'll just talk about 34. Memory is 4 per second. But if it's idle and not doing anything, then it's 4 per second on each side, basically, right? If you're not really doing anything, you're saving even more, basically. But what is slightly more insane is you can buy a one-year and a three-year savings plan to go on top of that.
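As a sketch of how active versus idle billing plays out, using the per-second rates as quoted in this episode (read off the pricing page at the time of recording, so do confirm the current figures), and ignoring the free grants:

```python
# Rough active-vs-idle cost model using the rates quoted in the episode.
# Figures are illustrative; check the current Azure pricing page.
ACTIVE_VCPU_PER_SEC = 0.000034   # $ per vCPU-second when active
IDLE_VCPU_PER_SEC = 0.000004     # $ per vCPU-second when idle
MEMORY_PER_GIB_SEC = 0.000004    # $ per GiB-second (memory, charged either way)

def monthly_cost(vcpu: float, gib: float, active_hours_per_day: float) -> float:
    """Cost for one replica over a 30-day month, split into active and idle time.

    Free grants are ignored here to keep the comparison simple.
    """
    active_s = active_hours_per_day * 3600 * 30
    idle_s = (24 - active_hours_per_day) * 3600 * 30
    vcpu_cost = vcpu * (active_s * ACTIVE_VCPU_PER_SEC + idle_s * IDLE_VCPU_PER_SEC)
    memory_cost = gib * (active_s + idle_s) * MEMORY_PER_GIB_SEC
    return vcpu_cost + memory_cost

# A hypothetical 0.5 vCPU / 1 GiB replica that's busy 8 hours a day:
print(round(monthly_cost(0.5, 1.0, 8), 2))  # ≈ 28.51
```

The point of the idle rate shows up if you push `active_hours_per_day` down: the vCPU portion of the bill falls towards the idle floor rather than staying at the active rate.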

I'm not even going to talk about the numbers, but they are cheaper, basically. It's 15% savings for one year and 17% savings for three years as well. Now, that is for container apps that are running in what we'll call the consumption plan, the shared environment, as an example. Okay. So what you can also do is run in a dedicated plan. This is where I was talking a while ago about isolated; it's called Dedicated.

Effectively, what that does is it gives you a single-tenancy guarantee, access to specialized hardware, and actually more predictable pricing as well, really. Because we talked about these insane numbers, right? Like, how many virtual CPU core seconds are you going to use in a month, Alan? I don't know. Right. That's going to fluctuate with the amount of users that I've got.

What you can do with the Dedicated plan is buy basically bigger chunks, over and over again. So you're buying in actual vCPU-hours instead of seconds, and gigabyte-hours on the memory side. A vCPU is nine cents per hour in Dedicated, and then memory is 0.6 cents per gigabyte-hour, something like that. Really, like, nothing. And again, you can get a one-year and a three-year savings plan as well.

There is also a plan management charge, I believe, that you've got to put on top of it as well, which is ten cents per hour. So you do have the ability to go completely single-tenancy and dedicated if you wish. Does that mean, then, that what you pay for is basically your limit? So it's kind of like, if we talk about App Service, you bought a SKU that had X CPU, et cetera. Correct. Basically, once you hit that, that's your max, kind of thing? Yeah, but it is pay as you go, though.
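Putting the Dedicated plan numbers together, again using the hourly rates as quoted here ($0.09 per vCPU-hour, $0.006 per GiB-hour, plus the roughly $0.10 per hour plan management charge), all of which you should verify against the current pricing page:

```python
# Rough monthly cost for a small always-on Dedicated-plan environment,
# using the hourly rates quoted in the episode (illustrative only).
VCPU_PER_HOUR = 0.09         # $ per vCPU-hour
MEMORY_PER_GIB_HOUR = 0.006  # $ per GiB-hour
MANAGEMENT_PER_HOUR = 0.10   # $ per hour plan management charge

HOURS_PER_MONTH = 24 * 30    # 30-day month

def dedicated_monthly(vcpus: float, gib: float) -> float:
    """Always-on Dedicated capacity for a 30-day month, management fee included."""
    hourly = vcpus * VCPU_PER_HOUR + gib * MEMORY_PER_GIB_HOUR + MANAGEMENT_PER_HOUR
    return hourly * HOURS_PER_MONTH

# Hypothetical example: 4 vCPUs and 16 GiB reserved around the clock.
print(round(dedicated_monthly(4, 16)))  # 400
```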

So you can scale. Yeah, you're just buying it in bigger units, I believe, basically. I've personally not used the Dedicated plan before; that's not a bit that I've been in. But what I can tell you is what's very nice about the consumption plan, the last time I used it: you could set your minimum RAM for your container really low. I want to say it was like 192 meg, or maybe it was 128, something ridiculous, or 96 meg or something like that.

A lot of containers require like 50 meg of RAM to boot, or something like that, so you can really bring it down. And I think, if I remember rightly, the minimum you could put a container onto was 0.25 of a vCPU, basically. And, I don't know, I might butcher the numbers, but an app that I was running, that stayed up all the time, was costing me like $2 a month, basically. So if you had a job that ran, like, once a day, I think you'd struggle to spend the 180,000 free vCPU-seconds, basically.

Okay, so the size of the container is what you define when you build it, kind of thing: what resource you want to give it. And then you don't pay for all of that; you only pay for the seconds, the minutes, et cetera, of the RAM and CPU that it uses within that. So you can kind of limit how much you use based on what container size you give it. I guess in some form, yeah.
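As a quick sanity check on that once-a-day job point, here's the arithmetic, with a hypothetical job size (0.25 vCPU running for five minutes a day); the free-grant figure is the one quoted earlier in the episode:

```python
# How much of the 180,000 free vCPU-seconds would a once-a-day job use?
# The job size and duration here are hypothetical illustrations.
FREE_VCPU_SECONDS = 180_000

job_vcpu = 0.25        # cores allocated while the job runs
job_seconds = 5 * 60   # a five-minute run
runs_per_month = 30    # once a day

used = job_vcpu * job_seconds * runs_per_month
print(used)                      # 2250.0 vCPU-seconds
print(used / FREE_VCPU_SECONDS)  # 0.0125, so about 1% of the free grant
```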

What's even weirder is that if you run your container constantly but it's idle, you'll still pay the memory fee, the 4 per second, right? But your CPU cost is going to drop from the 34 as well. So if your app idles overnight, you're not going to pay your active usage; you're going to pay even less overnight, because you're literally using less CPU. Have you ever heard of that from any sort of Microsoft platform, where you can basically have idle versus active vCPU? I haven't seen that, no. It's not very capitalist, is it? It doesn't make sense, does it? So let's just hope... Yeah, but I think that is based on how people interact with and use containers. Containers are designed to be very efficient in their use of resources, and they're usually deployed in environments where there are many of them orchestrated together. So I think that's why those things are in place. Yeah, it's never really just one on its own sat there; it's always ten, a hundred, a thousand, et cetera.

Cool, okay, is there anything else you can think of that you might have missed, or any other areas you might want to quickly talk about? No, there's loads more that we could talk about with it, but we've already been going, what, 48 minutes now, so I think that will give you a good overview. If you do have workloads that you think you could containerize and put in here, I would definitely encourage you to at least try.

Give the free tier a go. I say free tier; it's not a free tier, is it? It's the same tier, but obviously there's always a time element and a bandwidth resource requirement there. So yeah, just have a look at it; it's a great bit of kit. Cool. So, Alan, next episode, what are you going to be covering? So, we've not been actively not talking about Copilot, but... Which one are you talking about, Alan? I haven't said that yet. Sorry. I think we've been waiting for a bit more visibility of its usage and things like that before we did an episode on it. So I'm going to do Microsoft Copilot, so the Microsoft three six five Copilot, the production one, next week. We do know a bit more about Security Copilot and what its capability is from Ignite, but I think I just want a bit more information to come out before we do an episode on that, because I want to get that nailed. We've been umming and ahhing about when is the right time to start talking about Copilot. Right.

And I think, with all of these technologies, we do like to wait until we can get our hands on it and really understand it, if that makes sense. And that's not us being pessimistic, I don't think; that is just us making sure that we can convey it in the right way, right? And it feels like, with Microsoft Copilot now it's GA, it's the right time to start talking about it, if that makes sense. Yeah, 100%. Yeah, exactly. We'll do an episode on that, on its capability. I'll probably throw in a bit of AI Studio as well, because it's kind of related in some form. But yeah, we'll cover that. Nice. Okay, so, did you enjoy this episode? If so, please do consider leaving us a review on Apple or Spotify. This really helps us reach out to more people like yourselves. If you have any specific feedback or suggestions for episodes, we have a link in our show notes to get in contact with us. Yeah, and if you've made it this far, thanks very much for listening, and we'll catch you on the next one.

Yeah, thanks all. Bye.
