Part 2 of 2: Leveraging Artificial Intelligence (AI) for Enhanced Cybersecurity

Jul 26, 2024 · 23 min

Episode description

Welcome to “HSDF THE PODCAST,” a collection of policy discussions on government technology and homeland security brought to you by the Homeland Security and Defense Forum. 

In this second of a 2-part series, our panel of cybersecurity experts shared their thoughts on applying Artificial Intelligence algorithms for security orchestration and response, the threats posed by adversarial AI and how to defend against them, transitioning products from R&D to operational areas within the Department of Homeland Security, and workforce training needs around AI.

 Featuring:

  • David Carroll, Associate Director, Mission Engineering, Cybersecurity Division, CISA
  • Donald Coulter, Senior Science Advisor, Cybersecurity, DHS S&T
  • Meghan Good, Vice President and Director of the Cyber Accelerator, Leidos
  • Dr. Reggie Brothers, former Under Secretary for S&T, DHS (moderator)

 This discussion took place at the HSDF’s 4th Annual Women in Homeland Security Celebration on March 22nd, 2024. 

Follow HSDF THE PODCAST and never miss the latest insider talk on government technology, innovation, and security. Visit the HSDF YouTube channel to view hours of insightful policy discussion. For more information about the Homeland Security & Defense Forum (HSDF), visit hsdf.org.

Transcript

Dr. Reggie Brothers, former Under Secretary for S&T, DHS (moderator)

DC, can you elaborate more on what you're learning from your social scientists in terms of trusting the metrics?

Donald Coulter, Senior Science Advisor, Cybersecurity, DHS S&T

Absolutely. I think what we're learning is that the way you present information to people matters, and you have to be able to tailor that to the person, to the individual. One system won't fit all the different people.

Whether they're in different roles, or even different people in the same role, they still have their own personal preferences for how to interact. One of the things that is actually very important for building their confidence is the belief that, if something goes wrong, they can roll those changes back.

So it's not only in the design of the systems. Up front, you're thinking about the implementation and the architecture around your implementation. How can you make sure that you're building confidence that, hey, this is how it works?

You'll be able to see the results and the impacts of the changes being made, whether they're recommended or even implemented automatically, but here is an easy-button way to roll back if you have major concerns with what's happening.
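[Editor's note: to make the "easy button" concrete, here is a minimal sketch of a reversible-change pattern in Python. The ChangeLog structure and the example block rule are illustrative assumptions, not a DHS or CISA implementation.]

```python
# Editor's sketch (hypothetical, not a CISA system): an "easy button"
# rollback pattern for automated changes. Every change records an
# inverse action before it is applied, so an operator can undo it.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Change:
    description: str
    apply: Callable[[], None]
    revert: Callable[[], None]

@dataclass
class ChangeLog:
    history: list[Change] = field(default_factory=list)

    def apply(self, change: Change) -> None:
        change.apply()
        self.history.append(change)   # remember how to undo it

    def rollback_last(self) -> None:
        change = self.history.pop()
        change.revert()
        print(f"rolled back: {change.description}")

# Example: an automated block rule the analyst can reverse in one step.
blocked: set[str] = set()
log = ChangeLog()
log.apply(Change("block 203.0.113.7",
                 apply=lambda: blocked.add("203.0.113.7"),
                 revert=lambda: blocked.discard("203.0.113.7")))
log.rollback_last()  # the "easy button"
```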

Dr. Reggie Brothers, former Under Secretary for S&T, DHS (moderator)

And David, what are you seeing in your environment?

David Carroll, Associate Director, Mission Engineering, Cybersecurity Division, CISA

I'll get to the rollback in a second, because I run a large DevOps/SRE/MLOps team, so that's interesting; I can give practical experience with that. But before all that: on another panel I was asked about this, and I just keep saying that anybody in this field has to have a basic data competency.

I think it was in this chair that you just saw my CIO at CISA talking. We're big partners in data governance and in trying to get the norms around this right. He's leading that effort, and we're working with him on it. I think those are just critical skills.

The practical thing that folks in the audience might want to do to get started is just to start getting that literacy out there. What you're going to encounter from the social science perspective is, 'I really like doing this, and this is just another new thing.' So what I've tried to tell folks is that it really didn't matter to me.

I went through that: we acquired a $100 million AI company at one of my positions, and, as the head of data protection and cloud security, I disrespected them. I just didn't understand. So I had to roll back, take the time to understand, and get educated.

I had a little bit of time on my hands, went to UT, got myself that AI/ML background, and that was painful. So I empathize, but I just think it's essential.

I think if we aren't doing that for our employees, if we aren't giving them those opportunities, if we aren't teaching our people, then all these concepts we're talking about, these tools, are going to remain mythical.

They're still going to think these tools are something they're not, and I try to teach this when I teach classes on it or when I talk to folks. I said it in the green room and I'll say it here: it's math, it's not magic. And it's not at parity with us; we aren't at the point where we're talking about things like I, Robot.

I could go on and on, but we're not there. So I think if we get that started, and folks start to normalize it and understand it and it gets less scary, that's when we start to see traction on some of these initiatives. Now, on rollback: canary testing is very important for us when we push things out.

We push it out through site reliability engineering. We practice everything that any software engineering group would practice, and what that does is allow us to ease into these things, observing as we go along. There's no big bang. There's no 'this is IOC, this is FOC.'

Boom, that's a programmatic thing. When you're talking about the scale we're talking about, you have to start and work your way up to it, because that's the only way you're going to see these behaviors and patterns we're talking about, and they come from places that have labels on them right now that folks maybe don't pay as much attention to.

On the leadership side, it's your DevOps team, your security operations team, your systems engineering team: the folks who are watching this, who have those placemats that say, yeah, this is what it should look like. Then all of a sudden something goes off, and the notional reaction right now is, 'Oh well, let's go get data science, that's interesting.'

No, it's not interesting; it needs to be fixed. It's that type of thing, but it doesn't happen without that discipline.

So when folks say, 'Hey, I want to go from traditional big-bang releases to agile releases, with full-stack engineers looking this stuff over and then pushing it out,' that is critical, and not just because you want those folks to have those great skills.

It's because that is the way you ease into something in any global infrastructure, in any large enterprise.
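[Editor's note: a minimal sketch of the canary-style rollout and rollback loop described above. The traffic steps, error budget, and telemetry stub are illustrative assumptions, not CISA's actual pipeline.]

```python
# Editor's sketch (illustrative only): a canary rollout loop in the
# spirit described above. Traffic shifts to the new version in small
# steps; a bad health signal at any step triggers rollback, so there
# is never a "big bang" cutover.
import random

STEPS = [1, 5, 25, 50, 100]   # percent of traffic on the canary
ERROR_BUDGET = 0.02           # max tolerated error rate

def observed_error_rate(percent: int) -> float:
    """Stand-in for real telemetry from the SRE observability stack."""
    return random.uniform(0.0, 0.03)

def rollout() -> bool:
    for percent in STEPS:
        rate = observed_error_rate(percent)
        print(f"canary at {percent}%: error rate {rate:.3f}")
        if rate > ERROR_BUDGET:
            print("health check failed -> rolling back to previous version")
            return False
    print("canary healthy at 100% -> promote")
    return True

rollout()
```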

Meghan Good, Vice President and Director of the Cyber Accelerator, Leidos

If I could just jump in too: I think that applies a lot to taking R&D, and S&T work, to production, to an operations environment. R&D works really great in the lab, and then you keep scaling up and scaling up and scaling up, and then you need that site reliability engineering.

You need that DevOps practice around it, and that, I think, drives adoption as well, where you're very honest: this is R&D, this is a prototype, it does great with this kind of protocol and this sort of data, these sorts of attacks that we can detect. You know the bounds, and then how do you keep incrementally improving it?

I think that makes the users adopt it too, because you went through that; you had to go through all the valleys of tears.

Donald Coulter, Senior Science Advisor, Cybersecurity, DHS S&T

Yeah, I think expanding that MLSecOps pipeline to be inclusive of your development pipeline, your R&D pipeline: you get those people talking together, you get those systems talking together earlier, and you're thinking through the challenges of how to share data between those systems, especially when, a lot of times, they are operating at different levels of security and sensitivity. You're working through some of those problems that you're already going to have to solve in the production system anyway, and building relationships between the people who are going to be involved.

So hopefully that's building trust. I'll add, at the end, that I go further than that: I want my CRCX teams involved. They are bookends: this is what we were thinking about, and this is what the analytic was supposed to do.

Maybe it didn't really do that in the end, because you can get into that mode where you're like, 'I got it done, it's okay, we're good,' and then you don't get a feedback loop, and then you wonder why nothing is being used. Why did we get this data again? What was the operational mission? I don't remember.

So the only thing I would add is: your CRCX teams on both ends, like bookends, just watching that and spinning it in a continuous cycle. It's very valuable.

Dr. Reggie Brothers, former Under Secretary for S&T, DHS (moderator)

Let me ask you guys a question, because if I'm sitting in the audience, I may have heard, as a leader in my organization, that I've got to give many of my people data science creds. What are you guys really saying?

What level of education or fluency do you think a user has to have in order to be more comfortable with these tools?

Meghan Good, Vice President and Director of the Cyber Accelerator, Leidos

I think there's a whole range, first off, but there are easily available ones from Coursera, these little nanodegrees, data science for dummies. There's so much out there right now that I think is very easy to consume. Other options are even just following things from OpenAI.

I actually feel like OpenAI has done us all a great service in making things very simple to use, and it's all out there. Then, at the other side of it, practitioners have to have more in-depth knowledge. That's probably traditional education, but it's a whole lot experiential too: actually trying things out on these tools.

I don't think I have a UT degree in it, though. I'm just saying that because it sounded like one of the most painful things.

David Carroll, Associate Director, Mission Engineering, Cybersecurity Division, CISA

I hate Python. I never want to see it.

Meghan Good, Vice President and Director of the Cyber Accelerator, Leidos

But you have a point of view on it, and that's an important competency builder too.

David Carroll, Associate Director, Mission Engineering, Cybersecurity Division, CISA

You know, the thing is, you have to also be empathetic to folks. It was a journey. It was not easy, it was painful, and it wasn't something I was used to. At that point I wasn't even building things; I was a risk guy going, 'No, you can't do that, this is how it has to be built.'

And then they look at me and go, 'We can't use the data that way.'

If I were to tell you how many times a day I hear that where I work at CISA: 'Well, you know, we've got PPNI, or we've got PII issues.' Maybe they aren't issues. Maybe they're just assumptions, and that goes back to the labeling and everything.

And we say, 'Well, we can't really analyze it then, because we just don't have access.' There are whys in there, and you've got to remember the folks who are dealing with that.

If they just get something from the privacy office saying, 'Nope, can't do that,' then the whole thing goes off the rails at too early a stage, where you're not actually pushing the envelope a little bit. Because the machine is going to push the envelope for you; it doesn't care about your privacy impact assessment.

The adversary doesn't care about your rules or your FISMA score. They don't care. It's essential that we act ethically, and we do act this way. But if folks know more about the data, if they know more about how this works, they're less mystified. I don't even want to say they need to be experts at it; they just need to be less mystified by it.

You know, like the ChatGPT thing, I like to joke around. There's a reason why it can't spell things right. There's a reason why pictures look kind of wonky. I'm a Navy officer when I'm not doing this, and I asked, 'Hey, could you draw me a picture of an aircraft landing on a carrier?'

And I got this weird-looking flying squirrel thing with wheels coming right down on the water, and I'm like, that's interesting. The model has no concept of what the term 'carrier' means in this context.

But that got me laughing. You see the mistakes it makes, and you hear about it on the news, and you say, 'Well, we shouldn't eat glue, we shouldn't do that.' But it's essential for folks to have that, because I still tell all my team: every one of you sitting there has the bigger cognitive machine. Use it. Be deliberative.

You're the heuristic machine; you're more powerful. Give them that confidence, get that mystification out of there, and let them go to work.

Dr. Reggie Brothers, former Under Secretary for S&T, DHS (moderator)

Thanks. We're running out of time, but I do want to touch on one more topic: adversarial AI. So, Meghan, if you could give us a definition of adversarial AI, and then I'd like you guys to talk about your concerns and how you think we can mitigate this.

Meghan Good, Vice President and Director of the Cyber Accelerator, Leidos

So I think there's a bit of what I referred to as adversarial AI before. There are probably textbook definitions, but here is my working definition, with AI really out there now.

ChatGPT is even a great example: there are ways to manipulate the data, ways to find things around the model, ways to avoid being detected by a model, and ways to actually steal it or replicate it. There's a whole host of adversarial techniques being developed to do that.

And the thing about adversarial AI, for me, is that it's very interesting as we're at an adoption point of more and more AI in our systems: how are we preparing ourselves for that?

I think that's something very easy for us to identify as cyber folks, because we're used to an adversarial domain. We're used to things always being in contest with the defenses that we have, and we have to build our defenses based on that.

So exploring adversarial AI in that context makes a lot of sense, and I think for us, what we're doing is making sure that the models we deploy are actually defensible: that we're able to anticipate that adversarial pressure and have ways to be resilient to it.
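[Editor's note: a minimal sketch of one technique in this family, evasion against a linear detector. The weights and sample are toy values invented for illustration; no panelist's system is depicted.]

```python
# Editor's sketch (illustrative only, not any panelist's tooling):
# the simplest form of an evasion attack against a linear detector.
# The attacker nudges a flagged sample against the gradient of the
# model's score until the detector calls it benign.
import numpy as np

w = np.array([1.0, -1.5, 0.5, 2.0, -0.5])   # detector weights (stand-in model)
b = -0.5

def malicious_score(x: np.ndarray) -> float:
    """Probability the detector assigns to 'malicious'."""
    return float(1.0 / (1.0 + np.exp(-(w @ x + b))))

x = np.array([0.8, -1.0, 0.3, 1.2, -0.4])   # a sample the detector flags
print(f"before perturbation: {malicious_score(x):.2f}")   # ~0.99

# FGSM-style step: the gradient of the logit w.r.t. x is just w,
# so move every feature a bounded step against its sign.
epsilon = 1.0
x_adv = x - epsilon * np.sign(w)
print(f"after perturbation:  {malicious_score(x_adv):.2f}")  # ~0.28, now 'benign'
```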

Donald Coulter, Senior Science Advisor, Cybersecurity, DHS S&T

Yeah, one of the things I'm excited about is doing this even as we develop our tooling.

So, for instance, we have a program developing enhanced malware binary analysis and identification capabilities, and as an inherent part of that program we're using adversarial AI to attack the tool even as it's being developed: to identify flaws, weaknesses, or potential vulnerabilities in the resilience and efficacy of that tool, to find new holes, to ask, 'Okay, we developed this, but what could go wrong?' and then to quickly address that, using AI to identify ways to improve the tool's efficacy.
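[Editor's note: a hypothetical sketch of what "attacking the tool as it's being developed" can look like in a test harness. The toy_detector, the byte signature, and the mutation list are invented for illustration; this is not the S&T program's code.]

```python
# Editor's sketch (hypothetical harness): adversarial testing folded
# into development. Apply functionality-preserving mutations to
# known-malicious samples and record any that flip the detector's
# verdict -- each one is a resilience bug to fix before release.
import hashlib

def toy_detector(binary: bytes) -> bool:
    """Stand-in classifier: flags anything containing a known byte signature."""
    return b"\xde\xad\xbe\xef" in binary

def mutations(binary: bytes):
    """Functionality-preserving edits an adversary could make."""
    yield "append padding", binary + b"\x00" * 64
    yield "prepend benign header", b"MZ" + binary
    yield "insert junk mid-file", binary[:4] + b"\x90" * 16 + binary[4:]

def adversarial_test(sample: bytes) -> list[str]:
    assert toy_detector(sample), "baseline must be detected"
    evasions = []
    for name, mutated in mutations(sample):
        if not toy_detector(mutated):
            digest = hashlib.sha256(mutated).hexdigest()[:12]
            evasions.append(f"{name} (sha256 {digest})")
    return evasions

malware = b"\x00\x01\xde\xad\xbe\xef\x02"
for finding in adversarial_test(malware):
    print("verdict flipped by:", finding)   # the mid-file junk splits the signature
```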

So I think, as we learn more about adversarial uses of AI technology and adversarial attacks against AI-based systems, from a cyber perspective we're probably at an advantage, because we're always thinking adversarially. But incorporating that not only into our cyber programs but into our tech-intensive programs in general, having that adversarial mindset as you develop and building it into your program and your development plan, is critical.

David Carroll, Associate Director, Mission Engineering, Cybersecurity Division, CISA

It is critical. And again, it's always a joke, right? There's the old book How to Lie with Statistics, and I just add 'at hyperscale,' because that's what we're talking about.

But you know, when you're talking about the ghost in the machine, the model and an attack, yes. To that end, I think one of the things we're really focused on, whether it's AI or not, at least at CISA and at least in my division, is trying to be what I would call the bellwether for secure by design.

That's a big initiative at CISA. You're going to see a lot more of me talking with my friend Bob Lord on that, and it's essential that folks understand it. To me it's still basic knowledge, right? We get a lot of requirements. We get, 'Okay, check the box, control, control, control.' But what actually counts as adversarial?

Because in testing you won't always get the results you expect, and some of that is actually good discovery; it's not necessarily adversarial.

And when it's happening as fast as it's happening in these environments, you can quickly check the wrong box and let it go, and not know the difference between something that has a necessary effect, that maybe is teaching you something, and something you kill too quickly. That's the danger, right?

It's a very fine line, and again, I think it goes way back to teaching folks that this is a different domain, that we are doing things differently. Compute, whichever way we want to call it, at least in cyber, is evolving into something else, and we've got limited time to get on board with that.

It's not like cloud, which for most of us, including myself, took years to get our heads around.

Dr. Reggie Brothers, former Under Secretary for S&T, DHS (moderator)

The time slice is just tiny compared to that. Do we have time for a couple of questions?

Audience

I'd like to go back to secure by design, because I hear a lot of AI talk about incident response, sort of the attack side. What's the other side? Does AI have anything to play in terms of building systems more securely from the outset?

Donald Coulter, Senior Science Advisor, Cybersecurity, DHS S&T

Of course. At DHS we've got the big responsible AI initiative, and that trails right down into our cyber focus.

Again, and I'm just going to speak to government in general, and maybe the big integrators on the commercial side too: you can't just offload that and say, 'This is the requirement.'

You have to have a responsible and comprehensive understanding of what it is you're putting out there, whether that's a model or a system or a program. And so we've just kind of said, 'Okay, well, this is a thing.' I mean, it's been a thing to everybody in the room for years.

Right, most of us work in this environment, but to the average person out there, it's just IT. We all deal with that with our families, right? We're just the IT person who does this stuff. And so I think that's the revolution, and you'll see things like secure code, secure binaries. We're going deep on this stuff.

Secure models are just another version of that. We have to think in terms of tying into zero trust and everything. We have to treat compromise as a practical matter: just like anything else, the day you put it on the wire, whatever the change is, it's probably on its way to being compromised in one way, shape, or form.

So if we get that ideology, and we take a breath and understand that it's always going to be that way, then we start to learn how to get faster and better.

Dr. Reggie Brothers, former Under Secretary for S&T, DHS (moderator)

Questions? Sure, go ahead.

Audience

So, yeah, thanks. Really interesting panel. I guess this question would be for Meghan, but it really could be for anybody.

You mentioned that having AI in our enterprises adds new attack surface, and I certainly agree with that, but I'd like to know what you think that attack surface is.

What are the areas of new risk that the deployment of AI brings to us that you're most concerned about or most working on?

Meghan Good, Vice President and Director of the Cyber Accelerator, Leidos

Great question , neil . Thank you . So I think with that there's similar systems . Right , it's already deployed . Ai gets deployed to software . Right , it gets deployed with data coming in , data coming out right and a decision there . So , from the actual surface itself , it's not that much of an evolution to what we've already been doing .

But I think the struggle becomes what you're doing with the data and the actions that you're taking , the responses going on and the changes that you might make which would modify your attack surface very quickly . Are you blocking something , are you moving something ?

And I actually am excited about the application of AI and having that with even deception capabilities and how you can change your attack surface faster but add a whole lot more complexity in the mix there . But I think , with what I'm seeing is that there's there's just another avenue of a manipulation to the data .

That then is changing what we do to respond , and there's a bit of it that with learning systems and reinforcement learning that we're deploying , it's happening in ways that we might not be able to explain at scale yet . So I do think there's some uncertainty and a risk factor that's added in as well .

Dr. Reggie Brothers, former Under Secretary for S&T, DHS (moderator)

Questions? Sure, go ahead.

Audience

As a follow-on to the previous question, thinking about the fact that code assistants have been around for a while and we've been training them for a while: have we looked at any adversarial poisoning of open source, of Stack Overflow questions, and the other things that AI coding models have been trained on? If so, what are we doing to mitigate that problem?

Oh man, yeah.

Donald Coulter, Senior Science Advisor, Cybersecurity, DHS S&T

Why don't you get started?

Audience

That's a big 'we,' right? Oh man.

Donald Coulter, Senior Science Advisor, Cybersecurity, DHS S&T

But I do think you're right that a major concern is how we are vetting and validating the type of stuff that's out there, and this really affects a lot: not only the stuff on these websites, but also what's getting pulled into our open source software.

So being able to understand and identify provenance, being able to have an ongoing relationship with: where did this code come from? Where did it get updated? Who made changes to it? What does that thing do?

I think these generative AI capabilities will continue to get better and better at reasoning about and explaining what particular code does, and we will be able to identify those things, see them, and make better risk decisions about what to incorporate into the software that we're actually building, using, and deploying.

But I think that's still a major area of concern, and it always will be. Out there, people are going to be poisoning things, putting up fake, malicious packages, or co-opting packages that were at one time trusted, but someone forgot to pay their registration and someone else got hold of them and inserted something malicious.

So I think that's a huge problem that hasn't been solved yet, from my perspective.

Maybe he's solved it? Nope, it hasn't been solved. But we're working hard, right? And again, folks, there's a balance there too, because the whole purpose of open source is collaboration, advancement, ingenuity. You don't want to stifle that. But I've seen varying levels of this in varying environments, both commercial and government, where you have to take a rational approach.

And so I've seen bastioned versions of Git where you bring it in, you scan it, and you make sure. Unfortunately, that will not survive this next iteration, because it's going to be happening faster than we can get it through those old checks and balances.

So I think we've got a physical problem, as far as space and time and how fast we can move against the variability of the AI, and we've also got to understand and respect the philosophical problem. That goes back, again, to the traditional 'I put a stamp on it and everything's certified and we're good,' which changes in the next tenth of a second in this environment. So we have to be practical as well, and respectful of the fact that the whole purpose in doing this is to accept some risk in the interest of innovation. It's a hard problem. There's no silver bullet for that one.

I don't know that there ever will be.
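[Editor's note: a minimal sketch of the "bring it in, scan it, make sure" gate described above, reduced to digest pinning. The package name and pinned hash are made-up illustrative values, not a real allowlist.]

```python
# Editor's sketch (hypothetical policy values): admit a third-party
# artifact to an internal mirror only if its digest matches a
# reviewed, pinned allowlist. Co-opted releases that reuse a trusted
# name but ship different bytes fail the check.
import hashlib

# Digests recorded when a human (or tool) last reviewed each artifact.
PINNED = {
    "leftpad-1.3.0.tar.gz":
        "a6f1c1e23f9d9a3f3a9f6f0a4b1c9d2e8b7a6c5d4e3f2a1b0c9d8e7f6a5b4c3d",
}

def admit(name: str, payload: bytes) -> bool:
    """Gate an artifact: unknown or tampered packages are rejected."""
    expected = PINNED.get(name)
    if expected is None:
        print(f"{name}: not reviewed yet -> quarantine for scanning")
        return False
    actual = hashlib.sha256(payload).hexdigest()
    if actual != expected:
        print(f"{name}: digest mismatch -> possible co-opted package")
        return False
    return True

# A co-opted release reuses a trusted name but ships different bytes.
admit("leftpad-1.3.0.tar.gz", b"malicious payload")   # digest mismatch
admit("shiny-new-0.0.1.tar.gz", b"unvetted code")     # not reviewed
```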

Meghan Good, Vice President and Director of the Cyber Accelerator, Leidos

And I don't know that there's one organization that can do it.

Donald Coulter, Senior Science Advisor, Cybersecurity, DHS S&T

It's a community thing. It's a community, a collective.

Meghan Good, Vice President and Director of the Cyber Accelerator, Leidos

But then I think the challenge there is: what do we each do as part of that, right? You can control your additions to it, you can control what you take from it and how you update it over time, but you're still responsible for that change over time too.

Transcript source: Provided by creator in RSS feed.