Hello and welcome to the Let's Talk Azure podcast with your hosts, Sam Foote and Alan Armstrong. If you're new here, we're a pair of Azure and Microsoft 365 focused IT security professionals. It's episode 19 of season five. Alan and I recently had a discussion around Azure AI Search. Azure AI Search is a cloud-based service that uses AI capabilities to provide advanced indexing and querying over your content, delivering relevant, contextual search results. Here are a few things that we covered: what problems is Azure AI Search aiming to solve? What does it integrate with? What are the use cases for the solution, and how much does it cost? We have noticed that a large number of you aren't subscribed. If you do enjoy our podcast, please do consider subscribing; it would mean a lot to us for you to show your support for the show. It's a really great episode, so let's jump in. Hey, Alan, how are you doing this week?
Hey Sam, not doing too bad. How are you? Yeah, good, thank you. Good, thank you. It's Build week this week as we record. Does it kick off today? Yep. Yeah, it's kicking off, well, definitely right now.
Yeah, we're recording a little bit earlier because Alan's out on his holidays, so we're a little bit earlier this week than we usually are. But yeah, hopefully there should be some good updates from Build. I kind of find that Build feels like a fun feature release, maybe because it's not so security focused, if that makes sense for us. Right. A lot of the products are integrated, but it's kind of the other side of the fence in some respects, right?
Yeah, definitely. It's bound to be AI as the key topic of the release, I guess, for development and for building solutions. So let's see what comes out. And you're right, as the event's name sort of suggests, it's more around building solutions rather than technology products. We had RSA recently as well, I think maybe last week or the week before, and there were a couple of security announcements then. So they're probably just sharing the love across the events.
Yeah, the Microsoft build machine is definitely cranking, isn't it? That's for sure. Updates come in thick and fast across every single area at the moment. It's just non-stop. Absolutely non-stop. Really impressive to see. Yeah, I wouldn't say it's got worse, there's just more being released now. And it was a lot last year sort of thing. It's not bad, it's just difficult to keep up.
Bad for us. I agree, I agree. It's a good problem to have, right? We like investment in technology. It would just be nice if their product teams weren't so absolutely huge. It's an absolute beast now. Yeah, I think you're probably right. It's probably our problem, because we're so wide on what we cover generally, so it's probably our fault anyway. Yeah, exactly. Cool. Okay, so this episode is Azure AI Search.
Yeah, yeah. It's definitely a newer product that I've taken for a spin in my, I'll call it, side quest of AI in the evenings and weekends. It's quite interesting because I haven't really been going down to the lower levels of AI and machine learning models and that type of space. I kind of think of it like a computer scientist versus a software or platform engineer: I've come at it from a "how do we utilize these technologies more effectively?" angle, if that makes sense. I do get the power of LLMs and chat prompting and scouring data from the web, but really what I've been trying to understand is the best way to apply these technologies to automate certain processes. So I've been starting from running them on my machine and trying to get them going with open source technologies, and I'll talk about that a bit more. But what I'm continuously finding is that when I search for a solution to a given problem, Microsoft Azure technology seems to whittle its way in, if that makes sense. I did a previous episode on AI Studio, which is effectively an end-to-end way to subscribe to and consume these models in a relatively friendly way, and I think that's what Microsoft's doing with AI Search, which is why I wanted to do an episode on it. I'm not going to be able to deep dive into the intricacies of all these different machine learning concepts and the data science and computer science behind them, but I am really going to try and talk to you about real, applicable use cases, if that makes sense, and why I think this product is actually quite important for the ecosystem.
Yeah. Okay, so let's get started then. I guess the first question is: what problems is Azure AI Search aiming to solve?
Okay, so throughout this episode I'm going to use retrieval augmented generation, or RAG as it's known by its acronym, as an architecture for leveraging large language models, LLMs, like ChatGPT, as an example. I'm going to use this scenario because I think it's easy to digest on a podcast without me having a PowerPoint deck in front of everybody to actually diagram through, but it's also a real use case that's being used today; people are seeing value from RAG-assisted LLM interactions. So let's talk about RAG and what it is to start off with. Think about interacting with a large language model that is trained on web data, like ChatGPT. There are probably a lot of people in this space that wouldn't agree with this, but it's probably the most ubiquitous and, I would say, popular and most marketed LLM, so let's use that as the example. You put your prompt into ChatGPT. That large language model is trained on data that was scraped from the Internet at a certain point in time. So you put in your prompt, it goes and searches that vast amount of data that it's got, and then it uses its trained large language model to return a response that it believes matches the intention of what you wanted to receive. It is generating all the time; it's effectively guessing what the next word should be in the sentences that it's replying back to you with. Say you ask it, "get me some stats about different cats", as an example. It's going to go: right, this person wants to know about this specific type of cat; I'm going to go and search my database of information to return as much relevant, contextual information about that cat as I can, and I'm going to use my large language model to not just return the data, but also to synthesize a response for you, so you can ask it to speak in a certain way. And this is really the difference from what we've had previously, which we call full text searching, and we'll talk about full text searching as well, because that's also important. The traditional search engines and search indexes that we've had use what's called full text searching: they're just looking for keywords in the data and not augmenting it with a large language model. So if you say "I want to know about this specific type of cat", they will just list the stats back to you instead of describing it in a generative way, if that makes sense. Now, ChatGPT and these web-trained models are really powerful. They allow you to pull data out of these large data sets; the web is a huge, huge dataset. The challenge comes when maybe you don't want to search the Internet. Let's say you want to search your own data set. So let's think of an example: I'm a hotel chain, and I've got a bunch of hotels. I'll use that example as we go through. On my website, I've probably got a list of all my hotels, all of the marketing imagery, all of the fluff and descriptions, and the stats about each hotel: its location, whether it's accessible, what the check-in and checkout procedures and timings are, that type of thing; specific information about those entities. Now, let's say some of that information isn't publicly known. I did use the example of a website there, so one could argue that you might be able to use ChatGPT to actually summarize that information from your site.
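To make the full text searching distinction concrete, here's a tiny, self-contained sketch of what a plain keyword search does. The hotel data and function are made up purely for illustration; a real search index adds tokenisation, stemming and ranking on top of this.

```python
# A toy full-text search: match keywords, return raw records, no generated prose.
hotels = [
    {"name": "Harbour View", "description": "City-centre hotel with a rooftop pool and gym"},
    {"name": "The Meadows", "description": "Countryside retreat with a spa and indoor pool"},
]

def full_text_search(query: str) -> list[dict]:
    terms = query.lower().split()
    return [h for h in hotels if any(t in h["description"].lower() for t in terms)]

# The caller just gets matching records listed back; a RAG pipeline would
# instead hand these matches to an LLM to synthesise a conversational answer.
print(full_text_search("rooftop pool"))
```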
But let's say you've got this custom dataset of things that you want to search. Now, you could go and train your own large language model on that data set, but you would need a plethora of different specialists to help you do that, not to mention a huge amount of intensive computing power. The RAG pattern allows you to essentially search for and build context inside the large language model in real time. To give you an example: if you were to open ChatGPT and paste in a list of all the hotel names and say, "okay, hey ChatGPT, save this information in context", all you've got to do is tell it, and then you can ask it about that data afterwards. With RAG, we're doing that in real time from a database. So I'll give you the workflow. You have what's called a vector database, and we'll talk about vector databases in a little more depth. What we do is take our data and put it into that vector database. So we have a job: maybe our hotel information is a bunch of PDFs. We'll extract all the text out of them, then transform and vectorize that information. I will touch on that a little, but I don't want to get too technical. So we're basically building a database of information. Then let's say, for our hotel chain, we're going to have a chatbot on our website that people can ask questions about our hotels. Somebody goes onto the website and asks: "tell me all of the hotels in your chain that have rooftop pools". That could be quite a specific question, right? It might require an actual agent or operator, a real human, to answer, because it might not be completely obvious from the website, especially if you've got hundreds of hotels to search through. As you put that prompt in, the RAG pattern sends the query to the vector database, which returns data from your data set that it thinks is relevant to your search query. So the first part of it is just looking at your database and saying, "I think Sam wants to know about hotels one, two, three and four", and returning that data along with the prompt, because we've got the prompt from the user. We pass the prompt and the knowledge into a large language model to add the knowledge into that model's context, and then we ask the large language model to respond. So the large language model has all of its own language processing and generation abilities, but it's now also got the context of the data that it thinks it needs to know about. And we're not passing it the whole database, we're passing it just the relevant parts, because what's called inference, actually running prompts through a large language model, is very compute intensive. The less data you pass to the LLM, the cheaper you're effectively going to make it, in simple terms. So we've filtered down our hotels. In this example I can't tell you exactly what it would return, I'm just giving you a working example, but we've got a filtered list of hotels, and we might even have filtered data about each hotel; it might just pull out specific sections of that content. The LLM then has that data.
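Here's a small, runnable sketch of that retrieval step. The embed() function below is a crude bag-of-words stand-in for a real embedding model, and the final LLM call is left as a comment; everything here is illustrative, not the actual Azure implementation.

```python
import math

DOCS = [
    "Hotel Aurora: rooftop pool, city centre, 24-hour check-in.",
    "Hotel Birch: countryside spa, indoor pool, pet friendly.",
    "Hotel Cedar: rooftop pool and bar, airport shuttle.",
]

VOCAB = sorted({w for d in DOCS for w in d.lower().replace(",", "").replace(":", "").split()})

def embed(text: str) -> list[float]:
    """Fake embedding: word counts over a fixed vocabulary (a real model captures meaning)."""
    words = text.lower().replace(",", "").replace(":", "").split()
    return [float(words.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """The vector database step: return only the most similar documents."""
    qv = embed(query)
    return sorted(DOCS, key=lambda d: cosine(embed(d), qv), reverse=True)[:top_k]

prompt = "Which hotels have rooftop pools?"
context = "\n".join(retrieve(prompt))
print(context)
# A real pipeline would now send both pieces to the LLM, e.g.:
# answer = llm.chat(system=f"Answer using only:\n{context}", user=prompt)
```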
It responds back to you, and your customer then has a much richer experience, because you get the text generation from the large language model. It can say, "hey user, these are the hotels from our portfolio that I think would be a good fit for you", and then it can actually list the hotel information there as well. And if you index the URLs to the web pages too, it can effectively give that level of guidance as well. The way that I got to know about Azure AI Search is that vector database. Processing your data and storing it yourself is possible, but it does require a level of engineering knowledge. You've got to know the pieces that you want to put together; there are lots of different vector databases and lots of different pieces. Unless you've got the skills and knowledge, it can be a challenge to cut through the noise and glue it all together. Lots of companies are now trying to productize around this challenge and build these end-to-end systems, and that's where Azure AI Search really does come in.
Okay, yeah, thanks. That was quite detailed about the process from prompt to LLM to getting your response back, and how RAG gets involved in there. I think you kind of started to get onto this, but is this problem of building a vector database and things like that hard to solve today? What would you do if you didn't have something like Azure AI Search, or those other solutions that other organizations are building?
Yeah. So there are two ways that I've really seen this happening. Because I've got a development background, I started with the developer mindset of thinking about the libraries, the SDKs, the open source technologies that you've got in front of you to actually start working on it. But really the challenge isn't just the knowledge about how to piece these things together, it's also the resources that you need to do it. A lot of AI acceleration and performance comes from the use of graphics cards, or GPUs. Nvidia, as an example, has accelerated to being insanely valuable, and they essentially can't build GPUs fast enough because of the explosion of growth of these types of technologies. In order to run some of these models, and this is slightly different from what we're talking about here, I suppose it's all intermixed, you need large amounts of video memory. We're seeing new models appearing all the time which need less memory and have lower requirements, and that's because a lot of people are pushing AI to the edge. A good announcement just in time: the new Surface laptops can effectively run micro models locally, actually on the machines themselves, to do the new Recall feature. It's kind of a bit scary, the new Recall product and things like that. So it's not just an engineering and a brain problem, it's also a resources and infrastructure problem. You've got to piece it all together, and then once you've got all the pieces, you've also got to host the thing. Azure AI Search encapsulates a lot of these different technologies and wraps them up in a nice bow for you, so that you can effectively just transact, get access to it in Azure, and start using it with no real massive capex investment, potentially.
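As a rough illustration of why video memory is the constraint, here's some back-of-envelope arithmetic; a sketch, not vendor guidance, and real memory use is higher once activations and context caches are included.

```python
# Rule of thumb: parameters x bytes-per-parameter = memory just to hold the weights.
params = 7e9            # e.g. a 7-billion-parameter model
bytes_per_param = 2     # fp16/bf16 weights

weights_gb = params * bytes_per_param / 1024**3
print(f"~{weights_gb:.0f} GB of VRAM just for the weights")   # ~13 GB

# 4-bit quantisation roughly quarters that, which is one reason smaller,
# quantised models can now run on laptops and edge devices.
print(f"~{weights_gb / 4:.0f} GB at 4-bit")
```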
Yeah, that's probably the key thing, isn't it? If you did it yourself, as you said, you'd need a lot of resources, the hardware we're talking about, and again, you'd still need a lot of knowledge, like you said. If you decide this is the way you need to go, you might buy your hardware and then realize you haven't got the right stuff. Whereas with Azure AI Search you can almost try it at a known cost, and at least you can pick it up and put it down, I guess, as well. Okay, so we should probably move on to AI Search itself. How does it work? Can you take us through how you get it deployed and use it?
Yeah, okay. So let's talk about how you transact and get started. To start off with, AI Search is a completely hosted product inside of Azure itself. You can go onto the Azure portal, create an Azure AI Search resource, start configuring it, and get started. You do have to pick a tier and some service limits, but we'll talk about that when we get to costs. One part of that is also reliability, as with any production system, because a lot of these AI tools are actually being put into user-facing and critical workflows, I would say. You can effectively deploy multiple AI Search instances and start load balancing between them, with a 99.9% SLA. There is also a free tier, which is good, but there's no SLA on that side of things. And you can put instances in specific regions to give you a sort of logical separation. The other thing, because this is effectively a large database you're creating (there is more to it than that, but effectively), is that they have also thought about multi-tenanted SaaS scenarios, which is another complexity on top of complexity in the database world. Let's not talk about complete segregation, because I suppose for that you'd have to have a separate service per customer. But if you do want some level of actual segregation, which in something like SQL you might do with separate databases, there's effectively a built-in mode, I believe it's called high density mode, which allows you to segregate workloads for multi-tenant scenarios. That's just part of the product, and it would be a real challenge to put in place yourself; it's something that you do need to think about, especially if you've got sensitive information. Then you effectively create what's called a search index, which is where you start to populate all your data into. You get a certain number of search indexes per paid tier, and I'll talk about that when we get to charging, but it's effectively another bucket for you to load your data into, basically, if you're using vector search. Actually, let's talk about the different types, because all I've really talked about is vector searching; let's talk about full text searching as well. If you need to return specific raw data that you don't want "LLM'd", so to speak, it has the ability to just store and do full text querying on the data as well. So instead of having to have multiple data stores, you can effectively loop it all into one singular place. I'm not really going to go into the process of loading the data in, but what I will call out is that there are APIs to develop against. There's a REST API and there are .NET libraries for interacting with the system, and there are SDK processes that you can follow to load data and to query the data. So there's an end-to-end developer mechanism there for you to administer the content, but then also interrogate the content when you actually want to use it. It's one thing having the data loading side or the querying side, but this is all completely looped into one specific place. There's a lot of talk around security as well with AI solutions, like responsible use of AI, but also we can imagine that these systems could be processing sensitive information that needs to be protected.
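To give a feel for that developer mechanism, here's a minimal sketch using the azure-search-documents Python SDK: define an index, load a couple of documents, then run a plain full text query. The service name, keys and schema are placeholders, and you'd check the current SDK docs for exact details.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchIndex, SimpleField, SearchableField, SearchFieldDataType,
)

endpoint = "https://<your-service>.search.windows.net"  # placeholder service
credential = AzureKeyCredential("<admin-key>")          # placeholder key

# Define a small index schema for the hotel example.
index = SearchIndex(
    name="hotels",
    fields=[
        SimpleField(name="hotelId", type=SearchFieldDataType.String, key=True),
        SearchableField(name="name", type=SearchFieldDataType.String),
        SearchableField(name="description", type=SearchFieldDataType.String),
    ],
)
SearchIndexClient(endpoint, credential).create_index(index)

# Load a couple of documents, then run a plain full text query against them.
client = SearchClient(endpoint, index_name="hotels", credential=credential)
client.upload_documents([
    {"hotelId": "1", "name": "Hotel Aurora", "description": "Rooftop pool, city centre."},
    {"hotelId": "2", "name": "Hotel Birch", "description": "Countryside spa, indoor pool."},
])
for result in client.search(search_text="rooftop pool"):
    print(result["name"])
```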
With any of these types of systems, you can effectively configure inbound and outbound connection security so that you can manage ingress and egress in and out. You can use customer-managed encryption keys, and also apply document-level or row-level security filters, so you can segregate the data that's actually available. So far we've really just talked about basic storage, so let's talk a little more about some of the more advanced features, because tokenizing, creating vectors, and storing them in a vector database is relatively simplistic, shall we say; that's just part of it, basically. So let's talk about what can actually be supported. Inside a vector database, what happens when you put the data in is that it creates tokenized and vectored representations of the data, and the vectors represent the similarity of different bits of data relative to one another. I saw a visualization of it the other day: it's like a three-dimensional plot with points in space, and the distance between the points is effectively how similar two pieces of content are. But they use a special model to calculate how similar two things are, so the similarity isn't just in the sense of how similar two words are; it's actually, in the context of a model that's been generated, how similar two concepts are. So if you're searching for different types of transport, there will effectively be vectors computed for different types of transport to group them together, to say these are all different forms of transport, not just matching what sounds like transport; a plain keyword match might return a trucking company with "transport" in the name, as an example. We've just really talked about text so far, but you can also do multimodal content types. You can encode images, which is called creating embeddings, and you can use OpenAI CLIP or GPT-4 Turbo with Vision via Azure OpenAI to compose vectors of both content types together. So if you do want to search across different types of data, that's also really, really important. They also have a concept called hybrid search, which allows you to combine vector and keyword querying in the same request. So if you want to pull back blocks of plain queried content alongside generated results, you can combine the two in the same system. You can also do multilingual searching, so if you have a mix of different supported languages in your content, you can search across it, which is really powerful. So yeah, I don't want to jump into it too much, and I know I've been talking for a long time, but there's so much in this product and so much I haven't even talked about; I've only really covered one basic scenario. But if you are looking to integrate AI into your product or your service, my recommendation would be to look at one of these tools and reverse engineer it if you want to build it yourself.
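Hybrid search in the same SDK looks something like the sketch below: one request carrying both a keyword query and a vector query. The index name, vector field and embedding are placeholders; in practice the query embedding comes from the same model you used when indexing.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

client = SearchClient(
    "https://<your-service>.search.windows.net",   # placeholder service
    "hotels-vectors",                              # placeholder index name
    AzureKeyCredential("<query-key>"),             # placeholder key
)

# Placeholder embedding; a real one comes from your embedding model/deployment.
query_embedding = [0.0] * 1536

results = client.search(
    search_text="hotels with rooftop pools",       # the keyword half of the query
    vector_queries=[VectorizedQuery(
        vector=query_embedding,
        k_nearest_neighbors=5,
        fields="descriptionVector",                # placeholder vector field name
    )],
)
for r in results:
    print(r["name"])
```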
If you've got no understanding of AI and these types of large language model processes today, you're going to get a lot of the value from building on top of this, and then you can reverse engineer towards the actual implementation that you want to build, if that makes sense. Or you just roll this solution into production and make hay while the sun shines, so to speak, because it is prod-ready and backed up with a lot of engineering.
Yeah, I was just thinking, like you said, it sounds like you've only scratched the surface of it and there's still loads of capability there. And you're right, I guess we might see that when we get to the costs. But there's a lot of engineering in the background to build something that, like you said, supports multi-tenancy potentially just by flicking a switch and maybe a little bit of config, where previously you might have had to design that yourself, or bring up loads of services or processes to support that kind of thing. So yeah, this is, like you said, ready to go and you can get started with it. I guess you've kind of covered what it is, but what does it integrate with? Is there other stuff in Azure that it can just tie into quite easily?
Okay, so there are two use cases I wanted to cover, and I wanted to start with Copilot for Security. If you're testing or using Copilot for Security, you may have noticed that there is now an integration for Azure AI Search where you can effectively bring your own data into Copilot for Security. In Copilot for Security you can upload your standard operating procedures and your documentation directly into the interface, and I assume it's got something running in the background very similar to this, if not this exact service, that's loading that data in for you. But if you do build your own datasets, you can load them in via a connector. It's a bit of a niche use case, I suppose, but we're already seeing that integration there in that product. The main one I wanted to call out, the sort of proof-of-concept or piloting route, is Azure OpenAI Studio, or Azure AI Studio. What you can effectively do in OpenAI Studio is connect your own data source to it, and it'll build a chat interface, a chat playground, for you once you've made that connection, using a large language model that you effectively rent in Azure. It then also allows you to package it, call it your own model, and deploy it for your users to start consuming. So Azure AI Search builds your database of information. You get your information in there, and there are some really good examples on Learn about how you get the data into the system: examples of loading data in from different systems and JSON files, loads of different scripts and examples. There are also some examples of how to proxy requests into it; you can use Azure Functions to proxy requests and put rate limits on, things like that, to protect it. But yeah, get that data into AI Search and then use Azure OpenAI Studio to rapidly iterate and test it without having to set up the end-to-end process internally. Because even if you end up using an open source model and you want to run it on your own hardware, you can at least test your data sets and basically MVP and test your model, your thinking, and your processes end to end. So your feedback loop is massively reduced, your time to execute is massively reduced, and the knowledge that's required is, I'm going to say, reduced, because it's well documented and described out piece by piece in the portal. So I'm a real advocate of, even if you don't want to or can't use this in a production scenario, maybe it's too expensive or there's an intellectual property reason or a regulatory reason why you can't use it, I don't know, I'm just catastrophizing there, at least using it to validate your hypotheses is going to be valuable, that's for sure.
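For the "bring your own data" pattern Sam describes, the programmatic equivalent of the OpenAI Studio playground is pointing an Azure OpenAI chat deployment at your AI Search index. A hedged sketch with the openai Python package follows; endpoints, keys, deployment and index names are all placeholders, and the exact request shape may vary by API version.

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-openai>.openai.azure.com",  # placeholder
    api_key="<openai-key>",                                   # placeholder
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-gpt-deployment>",  # placeholder deployment name
    messages=[{"role": "user", "content": "Which of our hotels have rooftop pools?"}],
    # Ground the model on an Azure AI Search index ("on your data").
    extra_body={
        "data_sources": [{
            "type": "azure_search",
            "parameters": {
                "endpoint": "https://<your-search>.search.windows.net",
                "index_name": "hotels",
                "authentication": {"type": "api_key", "key": "<search-key>"},
            },
        }]
    },
)
print(response.choices[0].message.content)
```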
Yeah. Okay, cool. That AI Studio definitely sounds like something to use. I think when you did your last episode on it, there was an explosion of information and use. Yeah, I know, and I need to redo that episode, because the more I investigate it, the more applicable some of these solutions actually are. I think it was only 13 episodes ago. 13 episodes. Jesus.
And it's changed so much since then. That's 13 weeks in effect, isn't it? Give or take. True. Okay, so let's go on to costs. How much does it cost to run? You did say something about a free tier; we always like something for free. Well, we do, we love a free tier. It depends on how much you can do for free, though.
Okay, so there is a free tier. There are actually seven tiers in total that you can go to. I'm not going to go through all of them, but let's see how we get on. So there is a free tier, and you can store 50 megabytes on it. That doesn't sound like a lot, but it is a lot of text, if you think about it. You can have three indexes in your service as well; those are effectively buckets of data. You can't scale the instance at all, but it costs nothing per month, basically. Then there's a Basic tier, which gives you 15 GB of storage and a maximum of 15 indexes, and you can also scale out. Per scale unit it's $73 a month, and you can scale up to nine units per service. I believe the scale-out limit effectively reflects processing costs, because this is the database part of the system more than anything: the data loading, the searching, the returning of data. So think about this like your data layer. So that's Free and Basic. Then you go to Standard S1, S2 and S3, which give you effectively 160 GB, 512 GB and one terabyte of storage, so they just ramp up storage, and your maximum indexes go up too: 50, then 200, then up to 1,000. And you can scale up to 36 units of service. But the costs go up quite dramatically at that point: $245 a month per scale unit at Standard S1, and then $2,000 a month at S3. So you're really starting to ramp up there, but you're storing a terabyte of data, and you've potentially got 200 indexes or partitions as well. Then there's also storage optimized. There are two storage optimized SKUs, at two terabytes and four terabytes, but you can only have 200 indexes on those, so they're for smaller, discrete indexes of data but with larger amounts of data in them. Up to 36 units again, and it's $2,800 a month or $5,600 a month per unit, basically, at that point. So yeah, a big ramp-up in costs, but four terabytes is four terabytes, as well. There are also some additional features. I haven't really talked about these, but we'll quickly skim over them. There's a custom entity lookup skill, which allows you to provide a defined list of words and phrases that you want to label inside documents as matching entities, and you can effectively process text records with it. From zero to one million text records, it's $1 per thousand text records. I'm not sure of the size of a text record; I assume it's a sentence or so, maybe. Document cracking is another additional feature, for content and image extraction. You can pass it documents and it will extract text for free; image extraction is then billed on top of that. So if you want your images described as they go in with your documents, you can do that as well, again at $1 per thousand transactions, up to a million images. Semantic ranker is the last additional feature. It effectively uses, I believe, the Bing model, and it gives you an enhanced semantic search ranking. Basically, it's supposed to optimize the quality of the ranking of your search results, so it's used when you want to improve the quality of search results. I haven't used it, so I wouldn't know.
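As a quick worked example using the per-unit prices quoted above (always check the Azure pricing page for current figures in your region):

```python
# Per-unit monthly prices as quoted in this episode (USD, illustrative only).
PRICE_PER_UNIT = {"basic": 73, "s1": 245, "s3": 2000, "l2": 5600}

def monthly_cost(tier: str, units: int) -> int:
    return PRICE_PER_UNIT[tier] * units

print(monthly_cost("s1", 3))   # an S1 service scaled to 3 units: $735/month
print(monthly_cost("s3", 2))   # an S3 service at 2 units: $4,000/month
```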
But the first thousand requests per month are free, and then it's $1 per thousand additional requests on top of that. I didn't go deep into those things because this episode is long enough as it is, but it's not just storing the data, it's also processing the data on the way in and out. It's not just a bog-standard vector database, basically.
Yeah. Okay. I think Bing search got renamed to Copilot. Oh, did it? Okay.
Yeah, I remember it being Bing search, and then I think it just got called Copilot, and then obviously Copilot like everything else. Yeah. Wow. Okay. So yeah, you can definitely get started with some really small data sets, to get an idea of whether your LLM can in effect use RAG with that data, or at least a subset of it, and then ramp up as you need. And I guess that also indicates how much resource and development time you might need to build one of these solutions on-prem and things like that, if that's the cost Microsoft charges to use the GPUs they've got.
Yeah, because the way that I see businesses using this is to validate their use cases for AI in a programmatic sense. We've got our Copilots and our ChatGPTs, and that's great for augmenting actual users. But what if you've got a use case where you could potentially use a large language model for searching your custom data, your custom intellectual property, as part of an automated process? It might be a process automation that would be really hard to build using traditional development tools, low code, no code, or actually writing code. One of the things I've really learned about AI is that, for certain specific use cases, it can be a great way to reduce the amount of code that you've effectively got to write, because it can search large amounts of data with relative ease once it's all plugged together and working. And I think this tool gives you a lot of that plugging together very quickly. So if you do want to rapidly iterate and validate your ideas, a system like this should definitely, I think, be considered. This might not be the right system for you; it might not do the things that you want to do, you might want more control, or you might want to run it in house because of IP and sensitivity. But you could at least validate your thinking in this system, even with mock and dummy data, and do it relatively quickly.
Yeah. Yeah, definitely. Okay. Is there anything else, Sam? I mean, obviously there are lots of other things, but we really haven't got the time.
To go through them all on this one, no. But yeah, 100%. I just want everybody to be aware of this product and try to understand where it might fit, instead of teaching you every single part of it to the nth degree: one, because I don't know it all yet, and two, because it's an absolute beast. It's a completely new sort of area, especially for me, that's for sure.
Okay, cool. I think we mentioned it before, but it was season five, episode six where you did your first look at AI Studio. So if you want a bit more info on that, go and listen to that one for now, until Sam updates it with another episode, because, like I said, it's changed, or at least your knowledge and usage of it has changed since then. Yeah, definitely.
Okay, so next week's episode is mine, and I'm going to be talking about device discovery: why it's important to understand what's on your network from a device perspective, the integrations into the Defender XDR portal, and what the benefits really are of bringing that data in there. So that should be a nice episode, hopefully. Yeah, no, it should be good.
Yeah. Okay. So, did you enjoy this episode? If so, please do consider leaving us a review on Apple or Spotify; this really helps us reach more people like yourselves. If you have any specific feedback or suggestions, there's a link in our show notes to get in contact with us. And if you've made it this far, thanks ever so much for listening, and we'll catch you on the next one. Yeah, thanks all.