A Conversation with Bar-El Tayouri from Mend.io - podcast episode cover

A Conversation with Bar-El Tayouri from Mend.io

May 06, 202546 min
--:--
--:--
Listen in podcast apps:
Metacast
Spotify
Youtube
RSS

Episode description

➡ Get full visibility, risk insights, red teaming, and governance for your AI models, AI agents, RAGs, and more—so you can securely deploy AI powered applications with ul.live/mend

In this episode, I speak with Bar-El Tayouri, Head of AI Security at Mend.io, about the rapidly evolving landscape of application and AI security—especially as multi-agent systems and fuzzy interfaces redefine the attack surface.

We talk about:

• Modern AppSec Meets AI Agents
How traditional AppSec falls short when it comes to AI-era components like agents, MCP servers, system prompts, and model artifacts—and why security now depends on mapping, monitoring, and understanding this entire stack.

•  Threat Discovery, Simulation, and Mitigation
How Mend’s AI security suite identifies unknown AI usage across an org, simulates dynamic attacks (like prompt injection via PDFs), and provides developers with precise, in-code guidance to reduce risk without slowing innovation.

•  Why We’re Rethinking Identity, Risk, and Governance
Why securing AI systems isn’t just about new threats—it’s about re-implementing old lessons: identity access, separation of duties, and system modeling. And why every CISO needs to integrate security into the dev workflow instead of relying on blunt-force blocking.

Subscribe to the newsletter at:
https://danielmiessler.com/subscribe

Join the UL community at:
https://danielmiessler.com/upgrade

Follow on X:
https://x.com/danielmiessler

Follow on LinkedIn:
https://www.linkedin.com/in/danielmiessler

Chapters:

00:00 - From Game Hacking to AI Security: Barel’s Tech Journey
03:51 - Why Application Security Is Still the Most Exciting Challenge
04:39 - The Real AppSec Bottleneck: Prioritization, Not Detection
06:25 - Explosive Growth of AI Components Inside Applications
12:48 - Why MCP Servers Are a Massive Blind Spot in AI Security
15:02 - Guardrails Aren’t Keeping Up With Agent Power
16:15 - Why AI Security Is Maturing Faster Than Previous Tech Waves
20:59 - Traditional AppSec Tools Can’t Handle AI Risk Detection
26:01 - How Mend Maps, Discovers, and Simulates AI Threats
34:02 - What Ideal Customers Ask For When Securing AI
38:01 - Beyond Guardrails: Mend’s Guide Rails for In-Code Mitigation
41:49 - Multi-Agent Systems Are the Next Security Nightmare
45:47 - Final Advice for CISOs: Enable, Don’t Disable Developers

Become a Member: https://danielmiessler.com/upgrade

See omnystudio.com/listener for privacy information.

Transcript

S1

Unsupervised Learning is a podcast about trends and ideas in cybersecurity, national security, AI, technology and society, and how best to upgrade ourselves to be ready for what's coming. All right, welcome to Unsupervised learning. I'm here with Burrell Taylor, head of Minaya, and it's great to see you.

S2

It's great to be here. What a pleasure.

S1

Awesome. So before we get into the IO stuff, uh, it looks like you have a pretty interesting background. And I just want to, like, get a walkthrough of that real quick. Like, uh, what gets you excited about tech? Like, what have you been doing in tech all these years? And, uh, just like to hear about you.

S2

And, wow, it's a it's a very long and short time. Um, basically, since I'm 12, I'm programming, um, I'm into tech. I also like I wasn't too into game development and I also thought game development. And then I went into slightly into cyber of it, like hacking games into getting more like a, you know, getting money in the game or, you know, points or something.

S1

Yeah. Yeah. Higher. Higher scores.

S2

Exactly. And it's so fun. And yeah, especially in the early days, it was so easy.

S1

Yeah. Because everything everything was local, right? Uh, all the resources, everything was stored locally. So you could just edit it and it would appear on the server side. Yeah.

S2

Yeah, I remember all the all the Hexa, all the Hexa. Yes. All the hex that they tried to find the right place with the, with the scores to, to change. And it's also fun. Um.

S1

Yeah. You imagine the tools that we have now. It would be so much easier. But I.

S2

Know it's.

S1

Crazy. Yeah, yeah, yeah. Um.

S2

And then, like, you know, since then, I'm, I'm moved into more network security operating systems. I also continue into the Army. And so I have like a lot of experience in Army stuff like cryptography, networks, research. Um, then I was the first engineer in augmented reality startup. Um, and afterwards I built my own company and we did prioritization for, uh, for cloud native alerts, like for container, container image scanning. Uh, and then we got acquired and

into mend, which is the same company I'm in now. Um, yeah. And it's now basically it's called Mend Container. Today it's the container reachability. And now also I moved into mend AI, which is a new product, and we founded, uh, to do AI security.

S1

Very, very cool. What do you think? It's like the common thread going through all of this. Like the main thing, uh, driving curiosity.

S2

Um, wow. Amazing question. Um, I think I always was really, uh, really excited from new, uh, new industry changing technology. It was always the technology that, uh, that, that, like, was enabling something and, and make it so exciting, uh, especially in the startups world where you have these giant companies and one day or like in a process, pretty quick process, um, they're all disrupted. There's tons of new things, uh, you can build. And yeah, it's a challenge it to challenge

the Giants. So it's amazing.

S1

Yeah. So, um, are you mostly interested in the network security stuff? Obviously you're doing AI stuff now. Everyone's doing AI stuff. Um, but like network security, application security. Uh, like, what is like your center of mass? Like your favorite thing.

S2

So for sure, 100% application security. It's so exciting. But basically, at the end of the day, it's all about developers and the vulnerabilities, um, like you have in your code. Um, so my focus is now application security and AI security for inside the application. And that's like our special take. Um, we look just on application, we try to, you know, there's so many issues there. Um, also on the AI components there. So.

S1

Yeah, what do you see is the biggest problems right now in Appsec? Um, obviously a big part of that is going to be AI because there's so many AI applications. But would you say AI is the biggest sort of application security thing happening right now?

S2

I thought like we looked on many issues and when we founded the Atom Security and it was clear the biggest issue is prioritization and the the like the size of the backlog of vulnerabilities of critical vulnerabilities you have. Yeah. And then now with all the reachability technologies that allows you to to like reduce the noise, basically remove the false positives and the, the trend of platforms and aspm allows you to take all the findings in one place

and prioritize them smartly. And like we see the biggest issues now like a what's introduced in AI with AI. And then there's like two different things. There's the I the employees are using in order to be more productive so that it's reduced some risks. And and what we see as like the biggest risk. And because we like, uh, we see how the world is changing and how our

customers are moving like towards the, um, in the products. Um, is the AI components inside applications and especially the AI agents, um, inside these applications that are in production.

S1

Okay. So when you say AI components, do you mean like, uh, libraries? Like what other pieces other than the agents do you mean by AI components?

S2

So it's a great question because like in the beginning, um, when when I like did simple data science before before, you know, ChatGPT and llms, um, there wasn't many components. You know, you had data set and then you build with it some some machine learning model. And and now it's the amount of components is exploding because you have the, um, you know, the data layer and you have data set and then you have also data is like changing because

you have data for training. But if you take already a model which is another component and then fine tuned it, or just doing alignment. So you have many types of data. You also have many types of models. You have models you're using for training, models you're using for uh, for, uh, for like for fine tuning and models are using as is. Yeah. So that's more components. But we also have all the

components on the code layer. And we have the system prompt and the agent Ancient tools and the third party tools like MCP servers. Yep. And the actual agents that can speak with many of these tools. And we have like many agents we're already seeing, we already see adoption of like multiple multiple agent frameworks.

S1

Yeah. And maybe the APIs themselves, which they're not actually AI components, but those agents will be calling back to traditional APIs as well. Yeah. So that's a pretty good list of the components. You're going to mention something else like another another component.

S2

And I'm definitely someone just asked me today like what's the difference between like old APIs, third party APIs and MCP servers or agent tools? And basically like you, you would you would think there's no difference. And I would argue, but like, tell me what? Maybe it's also a question for you. Tell them what you think. I argue the main difference is the way you're using them and not

the tool is basically the same tool. The way you use them is that it's more close and more connected to your data, your critical data.

S1

Yeah, yeah, I would say another big difference is that, um, usually when you build an API from scratch, you have a skilled developer who is going through very systematic process to define exactly what methods are possible. They still make mistakes, obviously, because we have API security problems. However, at least it's being manually done right. Whereas with um, I feel like with MCP servers, the problem is you could have a, you know, data and you can have APIs on the

back end. You spin up this NTP server and it just kind of goes and collects all that functionality and presents it's its own new APIs that could be used by the ancient tools. So I feel like you could create more functionality without knowledge. And that that's kind of the issue, because you might be surprised by actually what

can happen through that NCP server. So I think, yeah, I think a lot of stuff is being stood up with NCP servers, and the people hosting it don't actually know all of its capabilities.

S2

It's a great point. Also, in a way, you can't know all the capabilities because part of the like, the best thing about this AI revolution is that the interface, but also the worst, is that the interface is fuzzy. The input and the output is fuzzy. It means.

S1

Yeah, it's 100%.

S2

You can't define exactly the signature of the function with all the variables. That's only integer. You can't put string there, right? So now you can afraid only from integer overflow maybe. And and and now the interface is so fuzzy. It's text, it's PDF it's voice. It's PDF with image that has some something inside it. You have so many options. So the text is huge.

S1

That's right. And then there's also the issue of like the fuzziness of if you're actually interacting with an agent on the receiving side that is front ending but separate from the MCP server. If you're talking to an agent and it has the ability to use the tools, you might be able to confuse it or trick it into using the APIs it has available in unsafe ways. Right. Um.

S2

That's a great point.

S1

Yeah. And it might it might respond back and say no. And you ask in a different way and it still gives you results.

S2

And by the way, we've seen in the wild and many, many patterns, malicious patterns we've seen with open source libraries, like in the beginning of a, you know, the concept of SCA and open source security and with models and like a concept, like a typosquatting when you're trying to do phishing to humans. Yeah. So humans are close, you know, the wrong open source library because they change something. They added a dash. So same with models. We've seen it.

S1

Yeah. For like for like npm packages, stuff like that. Package managers.

S2

Exactly. So we've seen it also with models in the evolution of us of our product. And now we're seeing it with both with agent tools and libraries. So cursor uh you can very easily precursor uh to use a typosquatting package. Let's call it this way. And same for NTP servers. You have malicious FTP servers.

S1

Right? That's a that's a great point. And that could just be a man in the middle that just passes on requests. Right?

S2

Exactly. It's totally it's look 100% legit.

S1

Yeah. But I'm saying when you submit to that malicious one, it could be still submitting to the the real one and returning you real results. But in the meantime, gathering data or doing whatever.

S2

I guess that's the worst, because when you have a Bitcoin miner, I guess like the let's call it malicious actor knows that it's going to be caught in the next few months, and you're trying to make the best out of this month. And this silent, silent man in the middle type of malicious packages, malicious models and malicious servers, that's probably the worst, especially when it's third party. so you don't need even to open source your server.

S1

Yeah, MCP is going to need some serious security help very quickly. It's because everyone's just running full speed, installing as many of these things as they can. And yeah, it's it's a good point. It's it's a really big mess right now. Um, what what other things? Uh, we got malicious MCP servers. Um, we've got agents that can be tricked. I always talk about just like the, um,

the agents having too many tools available. Uh, because oftentimes I think when if the business is pushing, like, we must have I, we must have I. Oh, and by the way, we must have an agent. They just give the agent access to too many tools. And, um, the guardrail infrastructure isn't really there yet. I don't know if, uh, you guys have something around this, but, um, Um, like, uh, I use bedrock a lot, and bedrock has, um, uh, some pretty cool guardrails stuff built in, but we're running

way faster than the guardrails are being laid down. So I just feel like there's a there's a mismatch between the amount of power that agents have and the amount of, um, access and power that they have.

S2

I think this asymmetry is was right for every new category in security. That's true. I think AI security is probably one of the quickest categories to to catch up. If you look like on on games, for example, game security on operating systems, um, in the early days or networks in the early days, we still have protocols like DHCP and ARP or DNS. It's so easy, so much trust into into other, uh, other people in the early days.

S1

Yeah.

S2

Categories like a car's security. Um. Oh, yeah. Security. Very. It's amazing how quickly they adopt security practices even before the log4j2, uh, happened. Of the industry?

S1

Yeah, it's a good point. Yeah. ARP is always the one that trips me out the most. It's like. It's like, um. You don't even have to ask a question. You could just receive answers like, oh, by the way, um, here is the Mac address that you're supposed to talk to. And your host is like, thank you very much. I will update that table immediately. It's just.

S2

It's amazing.

S1

And the fact that it all still works. Yeah. Interesting. So I think I agree with you. I mean, we're basically running with scissors with I, um, it's funny because I've been in the space for so long, like 90 since 99. So we, uh, and I started mostly in network security. And as you move to web security, you have to relearn all the network security issues, right? We learned that those lessons for ten years, 15 years, we forget them when we move to web security, we forget

them again. You go to mobile security, you forget them again a little bit when you go to cloud security. And now with AI but yeah, maybe. Well, okay, here's the question. Why is AI security picking up so fast compared to the other ones? Why is the delay shorter?

S2

Um, I think it's I think if you like, if you think about it, um, everyone understand all the products are going to have AI. And because of this fuzzy interface and that you want to you want it to be connected into your most important data sources. And I think everyone understands kind of a game of of who's going to be the first lock forward in a way. Maybe everyone just understand it's a matter of time and you don't want. It's just you don't want it to be you. The first lock for Jay.

S1

That makes.

S2

Sense. I understand I understand all the companies, like all my customers that like trying to push hard for to develop with AI, to develop AI into their products. Because if they won't do it, all the competitors will do it. I mean, it's a huge advantage. Also us as like a security vendor, we need to we're using AI to solve problems in our products to to understand code and

to suggest code. Remediations. And it's something that you know, or you're going to be the first to do it and you have this advantage, or you'll be the last to do it. And you Like you lose the game.

S1

Yeah, that could be it. The the fact that it just feels so big. People just have a natural fear of it. Whereas maybe it was slower with the previous revolutions. How do you see the distinction between security of AI versus AI security.

S2

Security of AI? Basically, with every new every category in security. When you say security, it usually means securing X, right? Like, yeah, a cloud security. It's securing the cloud. SaaS security is securing SaaS somehow with AI security. Uh, it's it's still confusing because I guess it's in the early days, uh, I think we should align that AI security is securing AI and not like it's not, uh, securing the the output of AI or something like that. Um, but a time will tell, I guess.

S1

Yeah, yeah. I wonder if they start to merge. I think the reason, maybe one of the reasons that it started out being separate was this whole concept of, um, just the model, uh, remember, uh, data poisoning. It's not talked about as much as, like, um, in 22 or 23, but it was like, what is the, um, can the data be poisoned that, uh, you know, the eyes are being trained on? Right. So it was like, I don't know, there's just not nearly as much focus on that anymore.

And now it's more about it. I would agree with you. I think it's actually merging more now.

S2

Yeah. Also. Yeah. Also like, uh, in the beginning everyone spoke about like, AI driven security. Um, and it's kind of funny because, uh, anomaly detection, it's basically like using AI in order to find anomalies. Um, and so many of the categories are already using heavily, heavily based on AI. But if we look in the future, there's not going to be any vendor that's not using AI. That's right. Also in application security, like your promise at the end

is suggesting how to harden your code. And of course you're going to use AI.

S1

That's right. That's right. We don't have any database companies because um, what we do that that only make databases or whatever. But every company is a database company. Every company is an Excel company. Like. Yeah, it's just, uh, it's just built in, um, so so what what do you think about the current solutions? Uh, what do you think about current solutions in terms of, like, what are the current, like, appsec vendors, like the ones that have

been around for, you know, 15 years or whatever? How are their solutions like By solving these problems that we've been talking about.

S2

And the really simple answer we just don't. And like a classic abstract, abstract solutions, doing a really good job in detecting some patterns in your code of vulnerabilities, like CW is at the end of the day, inherently is a pattern, and not it's not a technique to solve it. To find the CW, the way we think about it is about finding patterns and CVS, which today is maybe a bit sad day for CVS. And let's say there's many other security advisories. GA the others, you know, the

GitHub security advisory, the Ruby Security advisory. So all of them, um, it's something that it's in your libraries. Um, so the way you think about it is only libraries and code issues. But the thing about AI security is that first, to even to detect these components, it's not enough to look in libraries. And libraries can give you a hint. And actually it's something it's a great thing we do. We only using the libraries tell you the hints. It it. I remember the day when we found you can do it.

You can just just take the libraries even before you put, you know, some heavy scanners on all the places. Just take the libraries and extract all the hints you can about the usage of AI. And that was like a great, uh, you know, uh, Aurora for us. Um, and, uh, and that's maybe the first step, basically. How do you find models? Models is something um, or you find it the art of the model artifact, which can be either the repository in the container in some S3 bucket or spatial, uh,

you know, models, repositories. So that's a new place for models. But if it's, you know, third party, if we're using some inference providers, which from my experience and most customers starting from using inference providers, most companies, they start with some prototype and they will use the easiest, you know, OpenAI, Azure, OpenAI bedrock. Yeah. And they can so so that's something you can't find anywhere. Not in the container. Not the it's not an artifact. And the only place is in

the code. And none of our current solutions is like is positioned us to find it. And so, so like, uh, you need a new, new solution. And you I think the AI security companies are going to be in your products are going to be something like totally new, and that is going to be merged into the current DevSecOps and Appsec workflows.

S1

Yeah. The way I think. Think about it. Or I always used to think about it because my background is largely web security, um, which we would have a major distinction between dynamic and static security. Right. So, like, I was at 4 to 5 for a very long time, and there were the static people over there, and we were the dynamic people. I feel like the AI piece definitely is on the dynamic side. Um, it has to be right. So rather than just like web testing and

API testing, it's got to be more comprehensive. Um, so let's just jump right into.

S2

We, you know, we started with, uh, with the static part of AI because we found that the biggest issue is threat modeling just to discover what you don't know you have. And we found that, uh, that's we found that it's not just a big issue. It's like huge issue because most companies just don't know what they have on the order of ten. But then we because of that,

we moved into the dynamic section. And when we're now like trying to offer both because at the end of the day I a model, there's a bunch of numbers and like code you can understand. You just can't understand it. And the only way is through through a conversation, through like dynamic and simulating attacks, through pentesting basically.

S1

Yeah. Yeah. Absolutely. So. So let's think about that. So, um, just talk me through like how your solutions are set up. Like what is the the basic tagline for it like, um, is it, uh, is it asset discovery? Are you discovering like the the structure of the application? Are you enumerating like controls? Roles. Are you testing controls? What exactly is it that the suite does?

S2

Um, so it's it's kind of both. Um, we're starting from, uh, statically scanning all the assets, all the AI resources, components, you can call it in different names, and to find all the AI you have. And usually at this stage we find that usually we start like, you know. Like average company will say we have some AI components, some AI driven applications. Then where we do the initial scan, we found it's more than that by a factor of ten. And and that's every time exciting to see like and

you know it's it's you in the security. It's really easy to say that it's bad. But actually it's pretty amazing how the industry is trying to push harder to push forward. So so even without the security knowing everyone, just trying to use it, trying to use it, trying to make a smarter applications and more valuable products. Um, so so we detect it. And then the question is, uh,

if I have dopesick, uh, it's a drug. Maybe it's bad because rag connected to data, you know, you don't want to leak this data, and you may be concerned about dopesick unless you're in China, which is a very it's the opposite. You probably want to use dopesick. And so you just want to know what you have. Then you want to detect all the risks you have in these components. And which is kind of a composition analysis, um, when you think about it in deeply, not only libraries

in SCA. Um. And that's amazing. You find, uh, legal risk. You find security risk for all these components. It can be models, uh, you know, uh, agents and speed to MCP servers, agent tools.

S1

And how are you getting all the components? Are they providing them to you, or are you getting them dynamically? Like, how are you getting these from the customer?

S2

Um, so we started with taking the hints from the open source libraries, and then we used these hints to look into the code and find, uh, all the components in code. Um, and if we have, uh, if you're using self-hosted models or like open source models, uh, we looked for artifacts. So we'll find a file that we, that, you know, from the magic of the file, from the headers of the file, we know it's a model. And then we have some fingerprinting, uh, technique to, to match

it into the actual file. It's a inherit from. So if you took a file, fine. Tuned it, you know, took something from hugging face. Finding it and using it will tell you about this file and that it's related to hugging face model and hopefully not malicious one.

S1

Oh, interesting. I mean, that's that's an offering by itself, right? Just like asset discovery.

S2

Of course, you'll be surprised by how much, um, you know, there's a the proportion between what you know and what you don't know, uh, is, is large.

S1

It's pretty.

S2

Amazing.

S1

Yeah. Whenever, whenever I do security assessments, I usually have a really large visual that I'm building out throughout the week. And honestly, it's just laying out what the application does. It just lays out what the functionality is, where the data is flowing. And as I'm interviewing more and more developers and more and more people in the company, I just bring them in and show them this thing and they're like, oh yeah, everyone starts taking pictures, they start taking,

they're like, oh yeah. Yeah. So because they don't have any documentation that's actually this good. And it turns out if you just see it, if you just visualize it, you're like, it's obvious to everyone who walks in the room that this is a problem and, you know, when it's not visualized, um, or it's not explicitly laid out. Yeah. You just miss the stuff. Okay. So so you have the list of components.

S2

By the way. It's a, I like like all the trend of showing topology.

S1

Yes.

S2

What you have. Yeah I love this trend. Um, I think there's a really large debate in the security industry because people say at the end of the day, it's not it's not showing the data. You need tables. But there's something about visualizations that like pass the give you the value shows it in a different way. And it's funny.

S1

I would just show an arrow and I would like color code the arrows like according I would have the data classification for the company on the board. And if it's one of the like last two data classification like, um, sensitive secret, whatever their classification is, then I would just have all the connecting dots or the connecting arrows be red. They're like, why are these red? Because, um, you know, Sarah over here or John over here said that that

type of data classification is included in this data. And suddenly after you talk to 20 different people, the whole board is red, right? The whole board's red. And they're like, okay, I didn't realize the problem was this bad. Yeah. And then those documents end up being used as the official

document for the application going forward. So I feel like this is this is absolutely needed, especially for I when I talk to people about AI implementations, I'm just like, Show me exactly where the agent is in this workflow. Show me exactly which APIs it has access to. And as they start writing it down, they're like, oh, well, I think I see the problem. I haven't even said anything yet.

S2

Yes. It's amazing. You know, I love threat modeling. I love, like, doing it, uh, with customers. And you started from it so much.

S1

Yeah, absolutely.

S2

And at the end of the day, uh, like, it's not enough because you need to detect the issues. Um, so there's malicious models, which is, like, surprisingly, surprisingly, surprisingly, like, uh, getting, uh, evolving a category. And because in open source models, there is a you think a model is only like a metric of numbers, but actually it has also some serialized code. It's like many of the many types of models are

like pickle files. Yeah. Not necessarily. It can be a family of, uh, of types of people files, uh, which is a code that is serialized into some, like, uh, some opcodes.

S1

It's always it's always the parsers. It's always the parsers.

S2

Yes. Yeah. And it's going to, you know, pull code. It's theoretically it can pull another binary, uh, from remote and run it. Yeah. So it can be super malicious.

S1

Yeah.

S2

It's also like a known risk, you know, like other classic vulnerabilities. Um, only this week I saw a company that, uh, scanned all the papers, like, all the academic papers, and try to extract the existing attacks and to map, you know, attacks to models. Hmm. I think we still it's still not, you know, we're still not there at the end of the game. It's not like, uh, see that you have so many security advisories. I think we're getting there. And we're for sure more mature as an industry.

S1

Yeah. Okay. So let's say someone what is an ideal customer look like for you, like in terms of they come to you and they say, I have this problem or this problem or this problem and you say, okay, perfect. That's exactly what we do. What would those problems look like and what would you tell them? The solution is.

S2

Um, there's two types of companies and I see all. It's a company that's concerned about AI, but they don't know what to do. And so that's kind of the free threat modeling stage. That's that's a stage where the first thing you need to do is discover everything. You have all the agent models. Um, usually a company will say, we have a policy that says we're only using, you know, Azure AI. And then we'll find many, like a hugging face model lipstick and other service providers. And and that's

the first stage. And once you we hope to do the threat modeling, we see how the companies start like thinking and getting more advanced. And the other type of companies are the a bit more sophisticated ones where they already have some like 2 to 3, let's say, or a few like major AI driven products. They know about them. They already like a threat model them. So the discovery will still help them a lot, but the more advanced they know, they have some, let's say uh, tax uh application,

uh reviewer, automatic reviewer. Um, that like is very smart, but it's very risky because it's a PDF and something can happen. Usually they wouldn't know exactly what the risks. They wouldn't know that what they need is, you know, some something that will simulate attacks of sending PDF with an image inside the image. It will have some prompt injection,

but they will know they will. Heard about the OWASp top ten for applications, which is, by the way, great a list of threats and, you know, great awareness.

S1

Yeah, that makes sense.

S2

And then we like tell them, hey, you have here potential a malicious model. Check it. You have here IPsec. And that's the findings from the dynamic scanning of the you know, all the attack simulations. Okay. And then we need to fine tune how to have to do what's the right attacks. What's the most important attacks to check.

S1

Nice. Okay. And then what what is the product like. What is the product suite do like what are the the pieces of functionality.

S2

And so it's basically what we spoke about. And it starts with discovery of all the components. Then it moves to the risk of each component individually. And once you know about all the components you need somehow to connect them into a behavioral risk and to to understand, to contextualize all the risk of the entire components together. And because you can't have a system prompt, you can't have a model. And you can have some, you know, rug

that connects them to a database. But unless you understand you have this database, this system from twist, this model, that's the only way to understand that you have context leakage. And so for that we have these behavioral risk rating automatic rating attack simulations and drop everything. We're doing a mitigation what we call governance and mitigations and where we where you can create policies to prevent let's say if

it's Rug and Leipzig block it, this facial combination. And in the near future we're going to release the ability to what we call guiderails, which is not guardrails. It's guiderails.

S1

Interesting.

S2

The concept of mitigating putting like a the mitigations into the code.

S1

Not like.

S2

A.

S1

Firewall. Like dynamic. Dynamic detections as it's coming in.

S2

No. So actually not dynamic. And the point is that integrated into the development process. And the developers will get suggestions for mitigations to the code. For example, add to the system prompt this and that in order to block this redeeming findings in order to mitigate from these findings. For example, a format your output of LLM into more like a less fuzzy format because you don't need a fuzzy output. So why, uh, like why open your attack surface and.

S1

So many.

S2

Small things that everyone has. Everyone got these issues and you have agent, maybe the same agent should not access the data and run code afterwards, because if someone will like prompt injection the agent, you can run code that access the data. And so you know these simple. There's so many simple steps no one knows.

S1

Yeah, that makes sense. Okay. So it's it's the initial assessment of, like, what the attack surface is. Um, just discovering the assets. Then there's the dynamic assessment, and then there's mitigation.

S2

Exactly.

S1

Perfect. Um. All right. What do you think is happening next? What are you worried about happening coming up? Uh, trends or risks that you see coming up soon?

S2

Um, I think we it seems like we have a every new like it seems like every, every every old category in security and need to find a way to adapt. And because, like speaking with customers and we see that identities that is like, you know, kind of an existing, uh, category with tons of issues now has a new aspect in the AI driven applications because you want to manage, you want to make sure one user can't access a different user data. And so, you know, it's old problem

with a new suit, let's call it. And so.

S1

Yeah.

S2

We see so many so many, so many nutrients. And exactly like you said before, so many new same problems and new problems which basically the same. So that's one thing I think, which we need to remember it that all the new things are basically old.

S1

Yeah. Yeah. I like you're talking about um, yeah. I think the distinction between what an agent is doing and making sure it's not acting on behalf of an actual human. Right. Uh, making. Yeah, non-human identity versus human identities and making those distinctions, having separate policies, separate policies for those. I think that's going to be important.

S2

Yes. And, you know, like the same concept. You want multi-tenant also for agent. You want permissions also for agent and.

S1

Separation of duties. Like all like you said, all the old stuff we have to re-implement relearning the lessons from, you know, 25 years.

S2

Exactly. And but I think like if you, if you look on the new things classes and you see that multiple agents of course, but multiple agent frameworks, agents orchestration and create more issues because it's way more complicated to understand how it works and the communication between agents. Open the make the exploit way more complicated. It's way more complicated,

but like way stronger. A very similar to to what you have when you try to exploit memory corruptions and that you try to jump between a different places in the program in order to finally run the code. And so it's very similar to here. And if the code is if you have more attack surface, when you have the code is larger and you have more places and more gadgets that you can more places you can jump between each other until you run the code. And here

you have many agents. So you can you need just to find the agent, you know, to send the right prompt into to run the code, I think.

S1

I think it's worse, like you said, I think it's worse with multi-agent because that's more opportunities for tricking. Like you might have a smart one. Um, so a prompt injection might not detonate with the first one you're talking to, but it might pass it along to a dumber one behind it. And that one it will detonate on. Yeah. Interesting. Um.

S2

Also, if you think about it, when you have more components that speaking with each other, it's hard to track. What's the purpose of each one. So you're going to have these, you know, uh, let's call it a small drift in, in the communication between them. Uh, there's one that should do X, the one should do Y, but they will have somewhere like some very small interaction that will probably be exploitable.

S1

Yeah. Because for each one.

S2

Individually is totally okay. It looks fine.

S1

Yeah. So for the CISOs that are listening, like, what's the one piece of advice like what's the one thing you would say to someone who's implementing AI or trying to secure it?

S2

Um, I would say the, the easiest thing to do is, at least from my experience, is to try to create really hard and forcing policies. And that prevents everything and blocks everything. And that's really easy. Um, but I think the right way to go is to create integrated workflows like any other security issues, and that will help to enable, um, enable developers and enable the products to be really good and valuable in the AI driven, um, and more and

less disable, um, and it's right for everything. And I think it's especially for AI and I see it firsthand, uh, maybe a bit too much, unfortunately.

S1

Yeah, that makes sense. And where can people learn more about the company?

S2

Um, so Mend-ooyo is an apps company? Um, traditional apps company. One of the first SCA vendors that grew into, you know, SAS, SCA container scan container image scanning. Like I spoke before about the ability technology that got acquired for and now also AI security. And we love what we do. We have a lot of customers. Um, and uh, we're I think we're the first AI security vendor in the market

doing it. Shiftleft. And so we're really proud about it and hoping to make awareness in the industry about it.

S1

Awesome. Well, it was great chatting with you and, uh, look forward to chatting again.

S2

Lovely. It was my pleasure.

S1

All right. Take care.

S2

Thank you.

S1

Unsupervised learning is produced on Hindenburg Pro using an SM seven B microphone. A video version of the podcast is available on the Unsupervised Learning YouTube channel, and the text version with full links and notes is available at Daniel Miessler newsletter. We'll see you next time.

Transcript source: Provided by creator in RSS feed: download file
For the best experience, listen in Metacast app for iOS or Android
Open in Metacast