Unsupervised Learning is a podcast about trends and ideas in cybersecurity, national security, AI, technology and society, and how best to upgrade ourselves to be ready for what's coming. So I want to talk today about how to think about AI and cybersecurity, and specifically how to think about and what to build regarding AI for cybersecurity. There are a million different directions you could go, and you only have so much time and so many resources. So I want to
give a possible direction here. So I want to frame everything today by asking and answering two questions. One, how do software and security change when we add agents? And two, how should we prioritize our efforts on adding AI and agents to our cybersecurity program? These are the main ideas I want to talk about. I'll kind of bring us around to answering those questions, and these ideas build on each other. So we're going to take one
at a time. The first one is the concept of intelligence pipelines, which is a way to visualize workflows within a business. The second one is the Theory of Constraints, which talks about how systems struggle to do things at scale, and where the blocker is. The third one is something I call AI state management, which is how I see AI actually replacing most software. And the final one builds on all of those, and that's the AI security attack and defense framework. So let's start with something I call
intelligence pipelines. So I started thinking about intelligence pipelines in the context of how AI will replace human workers. These are explainable, visualized workflows that show how business processes require human-level intelligence. So imagine you have someone named Mark, and he works at a company called Claim, and this is what his day looks like. He does all these different tasks that you see in this diagram. The brain icons are where human action is required.
And let's say this is a world without AI agents. So this is human work; these are things that only humans can do. And Mark is a really good employee, because he can do 124 of these claims per week and he has an 87% quality score. It's really hard to find people like Mark, so he makes a lot of money and they give him nice perks, because they don't want Mark to leave. 124 claims a week at 87% quality. So that's insurance. This one's
for a medical company. This is called Badspot. And Badspot is a company that reviews moles in person using licensed dermatologists. Now, Kim is one of the dermatologists here, and she sees 212 patients a week with 92% accuracy. And again, it's really hard to find employees as good as Kim. And the key point here is that the only reason we have jobs at all is because of these blue icons, right?
These blue brain icons are the specialty. They are the genius of general human intelligence: the fact that somebody can do what Mark can do and what Kim can do, right? Is it a new spot? Analyze the mole. Was it dangerous before? You have to do these manual checks. This is human work. This is why we all have jobs, right? Otherwise, this would have just been a script or automation. We've had programming and automation for years and years and years. Decades.
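To make the idea concrete, here's a rough sketch of a workflow like Mark's as plain data, with each step flagged by whether it needs that blue-icon human judgment. The step names and flags are my own invented example, not anything from a real claims system:

```python
# Sketch: a business workflow as plain data. Step names and the
# "needs_human" flags are invented for illustration.
workflow = [
    {"step": "receive claim",        "needs_human": False},
    {"step": "extract claim fields", "needs_human": False},
    {"step": "is it a new claim?",   "needs_human": True},
    {"step": "analyze the evidence", "needs_human": True},
    {"step": "manual fraud checks",  "needs_human": True},
    {"step": "file the decision",    "needs_human": False},
]

# The blue icons: every step that still requires human-level intelligence.
human_steps = [s["step"] for s in workflow if s["needs_human"]]
print(f"{len(human_steps)} of {len(workflow)} steps need human intelligence:")
for step in human_steps:
    print(f"  - {step}")
```

Once the workflow is data like this, you can count the human steps, measure them, and later ask which ones agents could take over.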
But there are certain things that only humans can do, and that's what these blue icons are. So I just want to stress that again, the only reason we have jobs is because somebody is paying us to be one of these blue icons. We as humans are inside of a workflow that looks like this. And I'm trying to get you to think of your job in terms of these workflows, because this is exactly how AI is going to think about your job, right when it comes to
"optimize" it, air quotes. We don't normally think of our jobs or our pipelines or our workflows in terms of a visual flow like this, but this is exactly how McKinsey is going to think about it, or KPMG, or any random consultancy that comes in to "optimize" your company and your department and your team and your workflows. They're going to say, okay, tell me exactly what it is that you actually do, and they're going to produce some sort of visual. It's going to look something like
this for your job, your team's job, your department. And actually, a collection of them will be your company. So you have to start thinking about this. That's why this is the first concept here. This next one is for a military intelligence company. You have to look at satellite photos, do a bunch of analysis, and then create narratives around them and write the report. So just like the other workflows, you see the different pieces and you
see why they require human intelligence, right? This is analysis. And Amir is the best at his job because he can do 12 of these assessments per week, and his accuracy is really high: 84%. Next is an attacker organization based out of Eastern Europe. Let's say this is their primary attack workflow. Again, this is a company called CyberAttacks or whatever. So they find targets, they run recon tools. Again, these are things that manual security testers
will have been doing for quite some time. And again, you can't just script everything, right? You can script a lot of this stuff, but a lot of it requires the blue icon of human intelligence and human thought. And that's again why we are employed. So you've got to find and filter targets, because they can only do so many assessments. They've got to run a bunch of
recon tools. They attempt to exploit what they find, and then they try to do various tasks afterwards: expand access, sell the initial access, gain persistence, move laterally, things like that. So the core idea here is that human work can be broken down into workflows like this. Humans don't normally see things this way, but I can guarantee you this is how the companies coming to replace you will see things. And unfortunately, that's going
to include your management, right? That's going to include the C-suite and the board. They're going to be like, why can't we just do this with AI? So they are going to be thinking about how to make this kind of diagram for the work that you do, for the work that we all do. And by the way, as of 2024, AI consulting was already 40% of McKinsey's business. 40%.
This is according to the New York Times. So this thing is coming fast for us. So that was the first concept. The next one is the Theory of Constraints, which a lot of you have probably already heard of, and it relates directly to the intelligence pipelines we just talked about. The Theory of Constraints basically says that you should stop trying to optimize everything simultaneously, because not all problems are actually
hurting you the same amount. It says the biggest thing hurting you is the slowest point in your overall workflow. Or as Goldratt puts it, your overall output is equal to the output of your worst piece, and that's what you should address first. My friend Joel works at OpenAI, on a team that tries to figure out how to make AI benefit defenders as much as or more than attackers, and we do a couple of 3
or 4 hour walks a month. On one of these walks, many, many months ago, he gave me religion on this. In his mind, everything comes down to: what constrains attackers the most right now, as well as defenders? And how is AI going to change or unblock those constraints? And his quote is: "Attackers aren't constrained by a lack of access. They're drowning in access. Instead, it's the human labor currently required to exploit that access. That's the limit."
Remove that and we are effed. And again, that's my friend Joel Parish. So think about that. We don't have a target problem and we don't have an access problem. We are stuck at the exploit phase, right? And if you think back to those pipelines and workflows, you can put a little red mark around one of those blue brains and say, this is the one that's really hurting us. Now start
to think about AI. Now start to think about AI agents, and ask: can we spin up 10 or 100 or 1,000 or a million agents to help with this one particular spot? Which, based on the Theory of Constraints, massively speeds up the entire pipeline, because that was the blocker, that was the constraint. So thinking back to our pipelines, again, you want to think about what your pipeline
looks like, right? Your productivity, your security workflows. Think about where they're currently constrained, and how your constraints compare to your attackers' constraints. The core idea here is to ask how AI will affect those constraints for both you and your adversaries. And once those constraints get unblocked, where does it get blocked next? And how can we use agents to consecutively keep unblocking, or even apply the agents?
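As a rough sketch, here's the constraint logic in a few lines of Python. The stage names and per-week rates are made up for illustration:

```python
# Sketch: throughput of a pipeline is capped by its slowest stage
# (Goldratt's point). Stage names and per-week rates are invented.
stages = {
    "find and filter targets": 500,
    "run recon tools":         300,
    "exploit what's found":     12,   # the human-labor constraint
    "monetize access":          80,
}

def throughput(stages):
    # Overall output equals the output of the worst piece.
    return min(stages.values())

def constraint(stages):
    # The stage with the lowest rate is the blocker.
    return min(stages, key=stages.get)

print(f"constraint: {constraint(stages)}, throughput: {throughput(stages)}/week")

# Unblock the constraint with agents, and the next-slowest stage takes over.
stages["exploit what's found"] *= 100   # say agents give a 100x speedup
print(f"new constraint: {constraint(stages)}, throughput: {throughput(stages)}/week")
```

The point of the second half: the moment agents unblock the exploit stage, the constraint simply moves to the next-slowest stage, which is exactly the "where does it get blocked next?" question.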
I mean, kind of breaking away from the concept of the Theory of Constraints, but you could find all the blue icons and just say, hey, let's scale the crap out of this, and let's try to increase our quality scores at the same time. So we were talking about 124 assessments previously, at an 84% quality level. Well, if we actually have those metrics, if we actually have a workflow like this, a diagram, it just becomes a numbers game. Okay, can we do a
thousand assessments instead of 100? Can we take the 84% quality level to 85%? Or is it only 79%, but we're doing a thousand, so it's still worth it, right? Those are the types of questions people are going to be asking when they start doing this. So before we go into the next piece, I want to quickly cover my definition of agents, because we're going to talk about agents a decent amount. There are lots of different definitions out there, and I think it's good to level set before proceeding.
So I think it's an AI system component that's capable of autonomously taking multiple steps towards a goal that previously would have required a human. And if you break that down: it's a component, right? It's not all of AI, it's a piece of AI. It's autonomously pursuing multiple steps. So it's given a goal, and it's autonomously chasing that goal by taking multiple steps on its own.
That's the autonomous part, and that's the goal part. And the last part is kind of the most important: steps that could only be done by a human previously. This means not scripting, not automation, not basic programming, because if that were the case, it would already be scripted, right? So one or more of these steps that it's taking autonomously towards the goal can only have been done by a
human previously. And I think my overall favorite definition of AI is actually: technology that does something cognitive that previously could have only been done by humans. So it's kind of in that frame of mind. The next idea I want to talk about is a frame of thinking I've been working through for the last six months or so, which I call AI state management. So the idea is, AI's ultimate form, or one of its ultimate forms, is to collect and understand the current state of a system,
to capture and articulate the desired state. We know what the current one is; now we want to know how we wish it looked. And then use however many pieces of intelligence, or agents, or thinking, or reasoning, or whatever you want to call it, which is basically AI, to help us go from the former to the latter. Again: collect and understand the current state, capture and articulate the
desired state, and then use intelligence. Automated intelligence, AI. I didn't realize that was actually an acronym; that was accidental. Artificial intelligence is what it really means. But you're using all the tools available, reasoning, agents, context, all of that, to go from the state that you're in to the state that you want to get to, right? The state that you wish you were in.
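A minimal sketch of that three-part frame, with invented state values standing in for the real telemetry a system like this would ingest:

```python
# Sketch of the current-state / desired-state / transition idea.
# The metrics and values are invented; a real system would feed these
# (plus logs, projects, budget, team) into a large-context model.
current_state = {
    "mfa_coverage":      0.60,
    "critical_fix_days": 21,
    "edr_coverage":      0.50,
}
desired_state = {
    "mfa_coverage":      1.00,
    "critical_fix_days": 7,
    "edr_coverage":      0.95,
}

def gaps(current, desired):
    """Everywhere we are not yet at the state we wish we were in."""
    return {k: (current.get(k), desired[k])
            for k in desired if current.get(k) != desired[k]}

for metric, (now, goal) in gaps(current_state, desired_state).items():
    print(f"{metric}: {now} -> {goal}")
```

Steps one and two are just data collection and articulation; the intelligence goes into the third step, proposing the actions that close each gap.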
This is an extremely powerful, universal use case for AI as it gets more and more advanced, especially as it gets larger and larger context. Because even for a small company, you can't fit everything into the context windows we currently have. The latest model that just came out has a 10 million token context, but even that is not big enough to hold the full state of an IT system, the full state of the business, the full state of
all employee activity. That's a lot of stuff. Plus you have to update it, right? Every five minutes, every ten minutes, every 30 seconds, every hour, however often you're going to do it. That takes a lot of context. This is the ultimate idea, the main concept here: current state, desired state, how do you transition? So in order to do this, you need to have a massive amount of data, as I just talked about, and that state has to be updated as often as possible. It's maybe easier for
a smaller company, though still not actually possible for a smaller company today; soon it will be. But for larger enterprises, it's extremely non-trivial. If you wanted a really high-resolution state, that would have to be terabytes, or at least gigabytes, depending how much you summarize. But what you build is your current state and your desired state, and then you ask questions continuously as your main form of managing the program. And you finally get to recommendations from
the system. It's really powerful and it's not really theoretical; I actually built a system like this in '23. The contents of the context are, of course, everything. This is everything about the system. The advantage is you can feed it your goals, your logs, your projects, your budget, your team, what people are currently working on, what the current status is. All of this stuff, all those stand-up meetings, all those planning meetings, that all kind of goes away.
It's just data inside of this context. And I want to take a look at how it all fits together. So like I said, I built a system like this back in '23, very early, right after ChatGPT came out. It's a working system that demonstrates this using a fictional company called Alma. I started off with tons of context about this fictional company: history, mission, goals, projects, risks, team members, budget, all that. And then I said, okay, what questions do I actually
want to ask about this? My background is information security, cybersecurity, and I've been doing a lot of security consulting, a lot of security assessment, a lot of security program management. And I had essentially been using this system before AI came out, because it doesn't really require AI. It just requires really good questions and a really good understanding of the company. I was doing that manually, without AI. And now with AI,
it's completely ridiculous. So let me show you some of it. Here's one of the most common questions we get when we're managing a program: what is the list of projects that we're working on? And here's the answer. Notice that I just gave it an echo, right? This is a command-line version of the system. "Give me a list of the projects we're working on, along with a ten-word summary." Okay, and now here's
all of our projects. This is the type of thing that would take quite a while to get an answer on if you're a manager or a director or the VP or whatever, and you're like, hey, what are we currently working on? Not too many places, unfortunately, have an intranet that you can go to and just get the current list. Turns out that thing's old, there are actually four versions of it, there are competing versions of it because some other team is doing the same thing. Actually, it
lives in this Google Doc. Oh wait, we got rid of that Google Doc, that one's deprecated, now go check this new one. Oh, it's also in Confluence. Oh, did you check Jira? It's a total mess. Here, you're asking the single unified AI context and it gives you the answer. Here's another common one: what are our metrics that are related to our projects, and how are they related? This is the type of thing, this is a giant
blue icon. This is mental work that a human on your team must do to try to figure this out. This thing just instantly mapped it, just like that. It instantly told you: we are working towards fixing or improving these metrics, and that's why we're doing these projects. Here's another one. We start with the risks, and we want to see how those risks relate to our projects. So we have our risk register in here, and we can see why we're working on particular projects
because of these risks, and we have a mapping for that. So now we have a breakdown of why we're working on what, and we can ask questions like: are we focusing on the right things with our efforts? Here I'm asking about remediating critical vulnerabilities on crown jewel systems. Look how clean this narrative is, right? Here's our progress on
remediating critical vulnerabilities on crown jewel systems. Cool. This is the type of thing you can just copy and paste and send on to a leader or whoever you need to send it to, assuming they have the access to see it. And yeah, it shows a consistent reduction in the time taken to remediate critical vulnerabilities on crown jewel systems, improving from 21 days in October 2022 to less than six days by March 2024. That's a
cool narrative, and you've got the data right there. This is total critical remediation, and this one is calling a pattern to actually output the graphable data. This is from something called Fabric, which I'll put a link to in this video, right here, should be over my head or something. So this one says "create a TRC graph." Now look at that. You might recognize that's actually CSV.
Guess what you can do with CSV. Boom. Now you have graph data you can add to any tool to get an instant visualization of progress over time, to put in a presentation or whatever. That just flew out of the AI automatically in a couple of seconds, right? So those are pretty cool. That's like level one cool.
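For example, CSV like that drops straight into standard tooling. Here's a sketch with made-up months and remediation times in the shape of that TRC output:

```python
# Sketch: parse CSV like the pattern emits into graphable points.
# The months and day counts are invented for illustration.
import csv
import io

csv_output = """month,days_to_remediate_criticals
2023-10,21
2023-12,15
2024-01,11
2024-03,6
"""

rows = list(csv.DictReader(io.StringIO(csv_output)))
points = [(r["month"], int(r["days_to_remediate_criticals"])) for r in rows]
print(points)  # drop these into any charting tool

first, last = points[0][1], points[-1][1]
print(f"improved from {first} days to {last} days")
```

Any spreadsheet or charting library takes it from there.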
Let's move on to level two of pretty cool. And I could give you a million more examples, because you can literally ask anything of the system, and as long as it's somewhere in the context, it's going to make all the connections itself, right? But those don't truly show the power of the Telos file structure, which again is another project you can check out here, which I put together a while back. If you want to see the actual power of this, level two is
where it starts. So let's look at a more advanced example: a what-if question. I tell the system that Nadya is leaving, which she isn't, but I tell it to help me readjust things based on project priorities, and here's what it gives me. And keep in mind, the more detail I have in the team section about skill sets and experience, the better this gets. But look at this: "Nadya is leaving the team and I need to give her projects to someone and/or
get additional help. Use your expertise as a risk professional to help me reassign her work and/or get additional help, keeping in mind our risks and priorities, especially our critical projects." That last bit is a little unnecessary; you can actually not give it that context and it's still going to figure it out. Look at this: reassign this project to this person, assign this one to that person, hire a contractor to do the WAF install.
And it's got reasoning for this. Keep in mind, what we're asking is a complete and absolute cybersecurity expert, okay? It knows the different skill sets, it knows the different requirements. So it's giving us advice on what to outsource, what to do later or just completely defer, what to prioritize, and which person to give it to. That's insanely helpful. Insanely helpful. So what about adding new stuff to the context?
How do you update the program? Well, because the AI is seeing the whole picture, you can just add it to the bottom of the context. If you're using RAG, you just add an additional entry. If you're using a single file, or you're using CAG, or
whatever you're using, you can simply append to it. You don't have to go in and clean it and keep the thing perfectly maintained, because when it reads the update, it'll realize that it supersedes the previous thing, as long as you have a timestamp, which we do here. So this is how you can update progress on a KPI, for example: "July 2024: criticals are now being fixed in nine days." What I added here are a few lines with new metrics updates,
the bottom items here. And guess what? When I run my previous question, here's the graph it produces. Look at that. It added new items to the graph, to the CSV output, because we added them to the context, and the AI just figured out, okay, we're going to update the graph. So now let's see how powerful the system is for knocking out very time-consuming everyday tasks for any security program. I mean, so much of security is actually just
doing this stuff: responding to auditors. Here, I'm telling it I need a narrative on critical vuln remediation times for an external auditor. It not only wrote the narrative, but added in our actual numbers; it just wrote it with all the numbers already intertwined. This is the type of thing that can derail somebody on the security program for hours or a couple of days: going to find all this data, pulling from the different systems or whatever.
This stuff just takes massive time away from security teams, so think about how much time an AI like this actually saves. And here's one for the executive team. Notice how it changes the tone according to who you're actually making the report for. And the more detail you put in the context file around sensitivities and reporting preferences, or in the reporting pattern you use for each audience, the better this output is going to be. So you can set these up for
as many groups as you regularly report to, right? Internal team, the board, auditors internal and external, whoever. So those are pretty cool. Level three. Now let's look at some insanely helpful functions that will save even more time. By a show of hands, who likes security questionnaires? Okay, let the record show: nobody. This is absolutely insane. This is another section of the Telos file, the context file that I created for this fictional company. It's a set of updates
to Alma's IT infrastructure and security controls. And just like everything else, look how casually these are written. This is like a logbook: "Migrated from Google Workspace to Office 365, with MFA enabled for all users." It's not in a schema, it's not in a format; it's got a basic timestamp there. "Rolled out SentinelOne on 50% of corporate laptops in August of 2020." That's all you need to say, right? And you see how I've got a history here,
all the way back to July 2019. Basically, we were a soup sandwich back then: admin accounts still not required to use 2FA, company laptops with no MDM, everyone is admin. It's mass chaos, pandemonium. And then as the updates go down through the months, it starts to get better and better. And it's just a logbook. So now the question is, we're being asked by a customer: what percentage of endpoints are protected by endpoint protection?
How often are you asked stuff like this in a security questionnaire? All the time. And they'll ask it in a different way each time, so you can't have a static question and static answer. If you're in vendor management, you've been down that road before. So here's the answer, written in a form we can copy and paste directly to the customer. This all came back from Alma, the AI system. Here's another one, asking how much of our infrastructure is US-based.
I asked it to give me an answer for both cloud and on-prem, and to give the response in an email. It gives me the 100% answer, which comes from our Telos context file. We just recently got to the point where all of our stuff, all of our cloud environments, all our different AWS regions, all of our on-prem stuff, is all US-based. That was an update at the bottom of the Telos file; keep in mind, the context higher up could have given the
wrong answer, because that's the old stuff. But now that we have that update, the answer is 100%, and we have it perfectly in an email we can just send to somebody. And ultimately we're heading towards being able to ask questions like these. How should we move from our current to our desired state? Going back to that idea we talked about. What should be in our desired state? That's a big-brain question. What should be in our desired state that you
actually don't see in there? How about this: how will attackers exploit our desired state? Right, a natural follow-on. How should we change that desired state to make it less exploitable? All looking at the same unified context. So the insane thing about this is realizing that most software in cyber is kind of the same. We're pulling in data from somewhere, we're trying to pull signal out of that data, and
we try to do something with it, right? But it's exactly the same with productivity apps or B2B processing or whatever. It's all the same. So the core idea here is that when the AI understands where we are and where we want to go, it can help us get there. And that's what this entire program management thing I just showed you, that I just demoed, is. That's a real AI system; it's sitting right here, I can ask it questions. And obviously my context file for this thing was a
few hundred lines, right? I think it eventually got to a few thousand lines. But as the AI systems scale, we've already got millions of tokens available, and this is just going to get easier and easier to do at larger and larger scales. So we're going to be able to do the same kind of stuff even for a whole company. This next idea is kind of a culmination of all of this, and a way to prep for attackers in a more tactical way. Like, what exactly are they going to launch at us, and how can we get
ready for that? So the whole concept starts with asking one question: what do attackers wish they could do to us? What can they do now? What can they do six months from now? And what can they do in 18 or 24 or 36 months? And obviously, you can't know the exact answers to those, because you don't know what tech exists at that time. You don't know how advanced the AI is going to be,
how good agents are going to be. But we can start with a giant list of attacker capabilities, some of which they have already, some of which they're about to get, some of which are coming soon, and some of which are distant in the future. We start with what we know, and the ideal is to start with what would be devastating to us. I'll give you an example
of devastating to us. They can instantly identify all open services in the entire United States at any given time, and can launch an attack against all of those services simultaneously. For any of those attacks that get inside and are successful, they can pivot within 30 seconds to compromise other systems internally. They can instantly crawl around and find everything, like an advanced human red team. They can figure out what the business does, what its most sensitive
talking points are, what its worst-case scenarios are, and it can then write the exploits and the ransomware and the ransom notes and the verbiage and everything. And then it can actually call with AI and start negotiating the ransom, because it already has the stuff all zipped up. Oh, and by the way, they're already selling to all the different information brokers at the same time. And this took them four minutes. Okay? And they could
do that to an entire country. That is science fiction right now. That is science fiction because we're talking about millions upon millions of IP addresses, web locations, APIs, websites, attack surface components basically: IPs, services, net ranges. Think of how much work you'd have to do to actually go and do that in a short amount of time. Think of how much infrastructure you would have to have. So in this list, these capabilities are
further down the list. So the idea of, okay, look at a very small startup with ten people, find their infrastructure, find their subdomains, determine what open services are available, see if any of those open services are sensitive, and get back a list of those within ten minutes: that might be possible today. It might be possible in less than ten minutes fairly soon. It depends on the capability of the attacker, but it's the type of
thing that's within the realm of possibility. Further down the list is doing that for a larger company, or doing it in five minutes instead of ten. So what you start to see is: the better the tech gets, the more realistic these things become. You keep moving down the list of capabilities until you start moving towards the sci-fi worst-case scenario I gave you in the beginning.
So we basically start with this giant list of attacker capabilities, some of which they already have and most of which are not possible yet. For each one, we collect a bunch of information, some metadata that will help us figure out how it might be used against us, or how difficult it would be: cost, talent, scalability, constraints, whatever. Because this is what's going to determine, when the tech improves, which parts
of these are going to change. And here's what makes this thing so powerful. When I talk to companies who are thinking about cybersecurity and AI, the number one question I hear is: what should we build? What should we build? Like, okay, I hear about agents, so I'll build agents. What do I build agents to do? What should they defend? What should they monitor? They don't know where to start. So
this framework is a really good answer to that. It's basically saying: here's what's going to be thrown at you, because here's what the attacker wants to do. So what are you doing to be ready for each of these? You basically take this attacker capabilities map. That's why it's ACAD, right? It's for defense as well. You use this attacker capabilities map to build your defense. The same way that they're using AI to power this list of
capabilities and move down the capabilities list, you could be doing exactly the same for defense. So what we end up with is an AI cyber infrastructure that basically looks like the attacker's. We are constantly assessing ourselves. Basically, we are executing everything on this attacker capabilities list as a unified engine. It's kind of like a continuous
red team type situation. Or in the current parlance, it's essentially attack surface management, or external attack surface management. But really that starts to merge with automated pen testing, automated red teaming, attack surface management. All these things start to merge into this AI-powered execution of these
attacker capabilities. And it's just continuous. The same thing the attackers are doing is what the defender has to be doing to themselves, so they can get there first. As we get this capability up and running, the entire game becomes two things. One, making sure we're adding new techniques to look for, right? So we're making this thing smarter.
We're making it more knowledgeable: new techniques, new attack surface, new services to watch out for, new ways of finding information, new threat intel, whatever. And two, making sure we find our issues and fix them faster than our attackers. So this is a race. This is a giant game where they are running their ACAD system and we are running our own ACAD system, this automated, continuous
red teaming system that knows everything about our company, going back to the context thing. They're going to have this giant context that knows everything about us too, right? But they have a disadvantage, or they should have a disadvantage: they don't know all of our internal configs, hopefully. So we should be able to move faster than them. But we have to start by building the same thing
that we know they are building. So both attackers and defenders basically end up building a world model of our company: what we do, what we care about, what matters to us, what our weaknesses are, what's most valuable to different types of attackers, and what we must avoid at all costs. Then we look at the capabilities map to see what new capabilities are coming online as AI advances, and we can say: okay, we don't have to defend against that yet. For example, we don't have to defend against an attacker who can see every single change and attack it within five seconds. Defending against that right now would cost tens of millions of dollars. We would have to hire a massive staff and massively scale our automation. It would be an investment that would most likely sink the company, because the company needs
to be building products; it can't spend that much on security. So that's a thing we don't have to worry about yet, because we know the attacker doesn't have it yet either. That's why this map is so important: as the tech improves, we can say, you are here, at this level of what's possible in the world given the current state of automation and AI.
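A toy way to see how a "you are here" marker on the map translates into defensive scope. The tiers and descriptions below are invented for illustration; a real map would come from tracking what the current state of AI actually makes practical:

```python
# Hypothetical attacker-capabilities map, ordered by the level of
# AI/automation maturity required to pull each capability off.
CAPABILITY_MAP = [
    (1, "scan public attack surface for known CVEs"),
    (2, "continuously monitor for new services and acquisitions"),
    (3, "auto-generate exploits for newly disclosed bugs"),
    (4, "attack every change within seconds of it shipping"),
]

def capabilities_in_scope(current_ai_level):
    """Return the attacker capabilities defenders must handle now;
    anything above the current level can safely wait."""
    return [cap for level, cap in CAPABILITY_MAP if level <= current_ai_level]

# If the tech is at tier 2, the five-second attacker is out of scope for now.
print(capabilities_in_scope(2))
```

The value is in the cutoff: investment goes to the tiers at or below the "you are here" line, and the tiers above it become a roadmap rather than a current obligation.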
So we're looking at the capabilities map to see the new capabilities coming online as a result of advancing AI, and we can say: okay, this is where we need to be currently, and this is what we need to be thinking about next, because we know that's what our attackers are about to get. So it's not just about what you need to build; it also helps inform when to make what investment, of what size, in what area. Now, I'm still talking to a few people about how much of this I should actually release, but I'm pretty sure attackers are going to figure this out anyway, so I think the priority needs to be on enabling defenders. We've got over 60 capabilities so far for attackers and around 40 so far for defenders, because even though defenders should be using the attack map as well, there are also defender-specific capabilities. And this is something Jason Haddix and I talk about a lot.
I think his idea was to start with the defender side: what are the core things, like automated SOC and incident management, that we could use current AI to just improve? So we're thinking about the combination of the two together. We've got 60 attacker and 40 defender capabilities right now, and we're looking to release this within 30 to 60 days, depending on a few outstanding conversations. What I love about this is that it gives us tremendous focus as defenders. We can base our efforts on what we expect to face in the real world from our attackers, based on the combination of what's possible with AI and what the attacker wishes they could do to us. So I want to end by answering the two questions we started with. How do software and cybersecurity change when we add AI, and specifically agents? It changes everything. It replaces human intelligence in workflows.
And it does so at a scale that addresses the Theory of Constraints problem for attackers, making them far more capable. That's what's going to allow them to move through these stages of the attacker capabilities map: it solves the various bottlenecks in the Theory of Constraints. Second question: how should we prioritize our efforts around adding AI agents to our cybersecurity program? Start by building your AI state management system. I'm calling this a UCC, a unified company context. And there are a million companies working on this, right? Microsoft has their own version, Databricks is working on theirs, Splunk is probably working on theirs. I imagine the game is about to become bringing all company data into a thing that AI can see and hold in its brain all at once. I'm calling it UCC, Unified Company Context, but who knows what Gartner is going to call it? Everyone's just going to
figure this out fairly soon. So you've got to start building this thing now, because your attackers are going to be building a UCC for you, which all their attacker tools are then going to use to attack you on a continuous basis. If you weren't vulnerable this morning at 9 a.m., maybe you will be at noon; it's going to check again when something changes. Maybe they learn that you just acquired a new company and you have new attack surface, so now they attack you because of that. Maybe they learn things through your bug bounty. Whatever it is, the context and the situation on the ground keep changing. So they're going to have this world model of you, and you need to have a better one. That's the trick of this. And the second thing you want to be doing with this context is figuring out what your desired state is and what your current state is. This is going to allow you to say things like:
what should I fix first? Vulnerability prioritization. I've been in the vulnerability management space forever, and it's so frustrating to try to figure out how to prioritize a vulnerability. People try to put that information inside the vulnerability itself, with a CVSS score or something, but you can't get the prioritization from the vulnerability itself. It has to come from the context of what it's affecting.
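To make the context-driven approach concrete, here's a toy sketch. The UCC structure, asset names, repos, and the "no critical vulns on crown jewels" rule are all hypothetical, invented purely for illustration:

```python
# Hypothetical miniature of a unified company context (UCC).
UCC = {
    "assets": {
        "payments-api": {"crown_jewel": True,  "repo": "github.com/acme/payments", "owner": "team-payments"},
        "blog":         {"crown_jewel": False, "repo": "github.com/acme/blog",     "owner": "team-web"},
    },
    "vulns": [
        {"asset": "payments-api", "id": "VULN-101", "severity": "critical"},
        {"asset": "blog",         "id": "VULN-102", "severity": "critical"},
        {"asset": "payments-api", "id": "VULN-103", "severity": "low"},
    ],
}

def gap_to_desired_state(ucc):
    """Desired state: no critical vulns on crown-jewel systems.
    Priority comes from the asset's context, not from the vuln itself."""
    fixes = []
    for vuln in ucc["vulns"]:
        asset = ucc["assets"][vuln["asset"]]
        if vuln["severity"] == "critical" and asset["crown_jewel"]:
            fixes.append({
                "vuln": vuln["id"],
                "repo": asset["repo"],          # where the fix PR would go
                "route_pr_to": asset["owner"],  # who would approve it
            })
    return fixes

print(gap_to_desired_state(UCC))  # only VULN-101: critical AND on a crown jewel
```

Note that the two critical vulnerabilities look identical in isolation; only the context (which asset they sit on, who owns the code) separates the one that matters from the one that can wait.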
The company itself is that context. So this UCC is going to be the context for doing vulnerability management. With current state versus desired state, you might say: I don't want any critical vulnerabilities in my crown-jewel systems. That's one of the items in my desired state. Now, when you ask the question, what can I do to make that real, it can say: well, based on the vulnerabilities you have, the things in your risk register, and the fact that we know all the dev teams, which GitHub repos they use, what code changes go in, and who owns which applications. This is the holy grail. This is the thing that management has never had, which you will have due to
unified company context. You'll be able to find the actual vulnerability that can be fixed by the actual developer in the actual code repo, and you can submit your own PR and just have them approve it. That's the type of thing you can do with a unified company context. So the next piece is to start working on your set of defender capabilities. Now you have your own context, and you understand the attacker context that they're going to be moving through as this tech tree improves. You can start improving your system and assessing yourself continuously, using these same exact techniques and your own internal UCC. And if you do that, I think you're going to be in extraordinary shape. That's what I wanted to share today, and we'll see you in the next one. Unsupervised Learning is produced on
Hindenburg Pro using an SM7B microphone. A video version of the podcast is available on the Unsupervised Learning YouTube channel, and the text version with full links and notes is available in the Daniel Miessler newsletter. We'll see you next time.