Welcome to CyberFocus from the McCrary Institute, where we explore the people and ideas shaping and defending our digital world. I'm your host, Frank Cilluffo, and we're going a little off programming here, since we're on location at RSA for their big conference and have the privilege today to sit down with Phil Venables. Phil is the Chief Information Security Officer for Google Cloud. I think he is the CISO. He's been at this for a long time. He was CISO and more at Goldman Sachs and at a number of banks. And the reality is I've admired his work for many years. I'm happy to say, and I'm not sure he would agree, but we've been friends for about 20 years. So I'm really excited to sit down with Phil today. Yeah, no, like it's really great
to be here. So, you know, great to see you at this event as well.
Thank you, Phil. And you seem to up the ante every time you go on to a new job. So firstly, before we jump into some of the good work you're doing right now with Google Cloud, I'd be curious. You also co-led a major effort for the President's Council of Advisors on Science and Technology on cyber-physical convergence, which, quite honestly, I think is here. This isn't something that we're looking at tomorrow. Do you want to maybe go through some of the high points there? Yeah, yeah, sure. So one of the reasons we started this: PCAST, for the President, as you know, does many, many things, but we picked one report around cyber-physical resilience, just recognizing a couple of things. One, that cyber and physical are clearly coming together. You just can't really distinguish them now. IT and OT just blend together.
And also the importance of resilience. We see attacks over and over again where, no matter how good the defenses are, there's always the possibility for an attack to get through. We all know there's never going to be 100% security, so we have to think about resilience in the face of attacks. And so the genesis of the report was, what can we say about this and what can we recommend? And there were a number of things we recommended. I'm not going to go through the whole thing. But some of the biggest brains were part of that. Yeah, no, exactly. And I was privileged. I co-chaired it with Eric Horvitz, who's the Chief Scientific Officer at Microsoft. We had a huge number of great people. Dan Geer was part of this, who I know many in the security community know well. And we came up with a number of recommendations. I'd say the highlights were that government agencies and critical infrastructure providers should really focus on setting more ambitious performance goals, such as minimum viable delivery objectives for how they can sustain public services in the face of attacks. Then also to shift from lagging indicators of cyber performance, like breaches, malware events, and vulnerabilities, more towards leading indicators: what can we measure about how to produce more reliable software? How can we make sure that we can do provable cold-restart recovery of infrastructure? All of the kinds of things that, if you do well, then you know you're going to achieve the lagging indicators. The other thing we really focused on, one of the recommendations which has attracted a lot of attention, is the notion of creating this National Critical Infrastructure Observatory. And so we've asked DHS, along with some of the FFRDCs, to basically build a digital twin of United States critical infrastructure.
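As a concrete illustration of what such an observatory could surface, here is a minimal sketch of concentration-risk scoring over a dependency graph. All asset names and edges below are hypothetical, and this is a toy stand-in for illustration, not how DHS or the FFRDCs would actually build it:

```python
from collections import defaultdict, deque

# Hypothetical dependency edges: ("X", "Y") means Y depends on X.
# Toy stand-in for the kind of data a critical-infrastructure
# observatory might aggregate (all names invented).
edges = [
    ("grid-substation-A", "water-plant-1"),
    ("grid-substation-A", "hospital-1"),
    ("dns-provider-X", "hospital-1"),
    ("dns-provider-X", "bank-1"),
    ("dns-provider-X", "water-plant-1"),
    ("telecom-Y", "bank-1"),
]

def downstream_impact(edges):
    """Count, for each asset, how many other assets transitively
    depend on it -- a crude concentration-risk score."""
    deps = defaultdict(set)
    for src, dst in edges:
        deps[src].add(dst)
    nodes = {n for edge in edges for n in edge}
    scores = {}
    for node in nodes:
        # Breadth-first walk of everything reachable downstream.
        seen, queue = set(), deque([node])
        while queue:
            for nxt in deps[queue.popleft()]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        scores[node] = len(seen)
    return scores

scores = downstream_impact(edges)
print(max(scores.items(), key=lambda kv: kv[1]))  # → ('dns-provider-X', 3)
```

A real observatory would need far richer data (timing, sector, physical dependencies), but even this toy shows how a shared hub like the invented dns-provider-X becomes visible as the concentration point.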
And then the logic behind that was that our adversaries have much deeper visibility into us than we have into our own systems. And that's just wrong. And so, you know, if we had this observatory, we could look for weak points, we could look for concentration risk, we could use it in real time. And so that's something we're excited about. Does that have legs? Is that something? So we've got some implementation summits coming up. DHS, and the team at DHS, particularly the National Risk Management Center, are very focused on this. Awesome. And there are a number of FFRDCs. For example, we met recently with MIT Lincoln Labs, who've been doing a bunch of work already in this space. So we think there's a great path to bring some of the technology together. And then finally, one of the things we also focused on was the need for more professional education. So again, we talk a lot about creating a much bigger cybersecurity workforce. I think that's great. I think we have to do that. But we also have to make sure that all of the other engineering professions have an element of professional cyber certification inside them, so it's not just all on the cyber people. Phil, you and I had this discussion just before
we got on. That's obviously something I'm very passionate about as well. And I think it segues very well into some of the work that Google is doing right now, because it's not only the onus of our CISOs and our computer science programs and the like. Everyone needs to have a modicum of cyber awareness, and it needs to be integrated into all disciplines. How do you see that playing out a little bit with your secure-by-design and secure-by-default work at Google? No, that's exactly
right. I mean, the way we think about this is, we all talk about secure by design, but what does it really mean? It means you've got to have security built into the technology and the platforms, rather than bolted on after the fact. You know, I'm well aware that here at RSA there's an entire show floor of security products. Some of those products are absolutely fantastic and will always be needed. But a lot of products really should just be actual features of the platform, where things are built in, not bolted on. And when we look at this, I've often advocated that we need secure products, not just security products. And a big part of the responsibility of all of the tech companies, not just the cloud companies, is to keep doing that: building in security, making sure security is on by default, making sure that people don't have to pay large amounts of extra money to take a weak platform and make it a secure platform. That should just be built in. And that's ultimately what we drive towards. And it aligns
very well. The administration last year came out with a significant national strategy around cybersecurity. And one of the big takeaways is to shift some of the burden away from end users and small and medium-sized businesses and the like, to entities that can do more. You're in a pretty significant place where you can. No, exactly. And we
take that responsibility very, very seriously. And it's not just Google Cloud; it's the wider Google and what we do for the billions of people that are on our platforms and services every day. I mean, you know, some of the things that we've done, whether it's malware and spam filtering in Gmail, whether it's Safe Browsing built into Chrome, all of the security defaults in Google Cloud, all the things that we do are about making it easier for end users and customers to operate securely without having to do that extra work. And that's the only way forward, I think. And pretty awesome that you're there in a position, because, quite honestly, I don't know how you keep running so fast in so many areas. But I think it is very different than financial services, is it not? Well, I think it's different in terms of scale. I mean, one of the things, it's been great coming from financial services, especially where I was. You took it seriously before anyone else. Exactly. And where I was at Goldman Sachs was a very kind of engineering-led, collaborative culture.
But again, I think regulation and regulated environments are important. And again, I wouldn't necessarily say all of financial services is universally perfect, but generally speaking, financial services, like some other highly regulated industries, is more secure because of the regulation, and also because there's a natural alignment between the shareholders and the leaders of organizations to protect customers, because that's right for the business. And I think as well, coming back to the PCAST report, one of the things we called out in an analysis of incentives is that regulation is important, but also leaders of organizations need to understand more about what customers expect. And having a secure, privacy-adherent, compliant business means that they're going to do better as a business in the long run. And you opened up another set of questions. So
if more of the onus is on some of the larger entities who can do more and are doing more, that has to come with some incentives as well, right? I mean, I would imagine liability issues, or do those issues concern you? Well, I think, you know, we have to have a level playing field. And when we look at this, I'll give you an example. As a cloud provider, we get appropriately highly scrutinized and deeply reviewed by some of the world's most sophisticated companies with some great security teams. And we love that. And you're getting tested every day by the bad guys. Exactly. We test ourselves, we're being tested by adversaries, we're being tested by vulnerability researchers and bug bounty programs. And it's great, and that's helped. But one of the things you've also got to look at across the industry is that there are plenty of tech companies and SaaS providers and other organizations that maybe don't get that level of scrutiny. And I think when we set out expectations for all of tech, we have to be consistent in those expectations. And when there's a level playing field, then there's an incentive for people to invest consistently in security. If one set of companies is held to a different set of requirements than another, and other companies get rewarded for having lower standards and being first to market, then that's a structural disadvantage. So for me, liability or not, the most important thing is a level playing field and everybody being held to the right standards. And that's very legit. And I'm glad
you brought that up. So, you know, I'm going to go a little off-piste for a second. The administration recently promulgated National Security Memorandum 22, which designates critical infrastructure sectors. What are your thoughts? Should cloud have been designated, or is it well covered?
I think it's well covered. I mean, it's kind of a standard joke. When I left financial services, a lot of my friends in financial services said it must be so great going to an industry where you're not going to have to deal with regulators. And I'm thinking, well, wake up. I now get to deal with the regulators of all 16 critical infrastructure sectors. So I probably spend more time with financial regulators than I did in my prior job, because, appropriately so, regulators in every sector come to the big technology companies and they expect a lot from us. So we feel like we're being held to a lot of standards. I would actually be supportive of there being more standards, more consistently. But certainly I don't think designating cloud or not really makes much difference, because the regulators in every sector are coming to us anyway. And in fact, our sophisticated customers in each regulated sector are very exacting about what they expect of us and how they monitor what we do
as well. Well said. We can't escape without having a discussion, especially this year. AI
seems to be everywhere. It's blinking in our faces, in some ways very real, in other ways not so. But I'd be curious, how is Google, how are you actually implementing AI in your innovation and in some of the programs you're working on? Well,
it's fascinating, and I know everybody wants to talk about generative AI, but we should all remember, as I think we all do, the past 10 years have been transformational in the use of traditional deep learning, predictive AI. And so, as I mentioned before, we protect billions of people every day with AI. That's all grounded in how we do Safe Browsing and email malware filtering and many other things. But again, generative AI
does represent a significant change. Organizations are figuring out how to use it for a variety of business purposes. We see it being transformational in security. Ultimately, I think it does yield a decisive defender's advantage. And again, it's not just the really, really exciting things like using it to find vulnerabilities. We're already using it to decode malware, we're using it for anomaly detection. But there's also a lot of possibility for just transforming the productivity of the workforce. I mean, one of my favorite recent examples is our detection and response team using generative AI to improve the write-up of incident reports and post-mortems. And the AI writes them at better quality than the individuals, and it takes half the effort. Now again, that sounds like a really kind of mundane example. Multiply that by... Exactly. You stack up like 50 of those and you've just kind of created another 50 headcount on your team. And as we're all thinking about how to scale the workforce, that becomes important. So, as I've been
told, a pessimist is an optimist with experience. I still fight back as an optimist. And you do think that AI, the red-blue adversarial AI vis-à-vis the defender, you think we can get ahead of this? Oh, yeah, yeah. No, I think it's even more so. I think it's a decisive defender's advantage. Now, I know it'll come across as I'm talking my own game here. Loving to hear this. Yeah. But here's the reason. I'm not diminishing the
threats that come from attacker use of AI. We absolutely have to be paranoid. And again, it's clearly not just cyber threats; there are threats in the misinformation and disinformation landscape, threats that drive fraud and all sorts of impersonation-driven fraud. So again, we have to take this very seriously. But when you think about the defender's advantage, the defenders have all of their data, they have all of their context, they have all of their ability to integrate this into their environment. And defenders that get going with using this technology are going to be able to scale in the face of attacks. And I think there's work to be done. This is not magic; it's real work. But I think we get
there. And I hope, as you learn some of these lessons, you can share that with others as well. Well, in fact, it's interesting. So we published what's called the Google Secure AI Framework, which is more about how to control and manage the risks of broad AI deployment while still getting the benefit from it. But we're also publishing through blogs and through other handbooks, and actually also embedding in some of our tooling, how to do all this and how to create advantage for defenders. And again, I'm not going to do a product pitch, but a lot of our announcements this week were about how we've taken our SecLM, a large language model we've trained on all of our threat intelligence and security data, and how we're embedding that in product, to just put this under the hood of everything that people will use so they don't have to engineer it all themselves. Which is pretty awesome. And looking ahead,
what do you think your job will look like five years from now? Well, I
don't think it's going to get any simpler. I mean, I think it's just going to keep changing. Technology changes, but human nature remains consistent. That's right. There's always going to be bad actors. It's interesting, though. What we're starting to see, in the broader question of the changing role of the CISO, is that as AI happens in organizations, you're not just managing the security risk of the AI deployment; you're managing compliance and privacy, you're managing data governance and data lineage of the training and test data. It's becoming quite a broad issue. And then you're managing the operational risks of the deployment of the AI. And for many organizations outside of very regulated industries, that's kind of a new set of risks to manage. And in most organizations, the boards and executives are turning to the CISOs to think about how to manage end-to-end risk. And it's almost like, for organizations that don't have chief risk officers, the CISO is becoming more and more like the chief risk officer as the business digitizes and as AI becomes an integral part of the business. It's an evolution of the CISO. And
we're watching CISO teams evolve and take on this mantle. It's actually quite impressive to see. And your background is unique. You were Chief Risk Officer as well. Right. So I love to see it. I mean, it's typically financial risk, but cyber is risk at the end of the day. I mean, our attack surface is growing exponentially. We get that. But at the end of the day, it's about managing risk. And unless you can articulate that to the C-suite in those terms, we're always going to be playing catch-up. No, exactly. And I actually think, you know, financial services have
had chief risk officers for a long time alongside the CISO, and they typically work as a check and balance and support for each other. I actually think a lot more organizations should have a chief risk officer role in addition to the CISO role, so that the CISO, or a variant of the CISO, can focus on product engineering, embedding security in the technology of the company. And then the CISO as a chief risk officer, or an actual chief risk officer, can focus on the independent validation to make sure that the organization is focused on things in the right way. So it's going to be interesting over the next few years, as we continue to digitize business and embed AI, just how the CISO role changes to become more like a chief risk officer. Yeah, that will be fascinating to see. But your job will
not get easier, right? I think it'll just change. I think some things are going to get easier. Some things are going to, you know, bring new headaches. As soon as you solve a thing, you have to move to the next thing. And that's just kind of what makes the role interesting. And we can't have a conversation around AI and not touch on some of the ethical conundrums. What are your thoughts there? Well, so this is why I think, for a lot of organizations,
it's not just about the security, privacy, or compliance; it's about the trust and safety, the bold but responsible use of AI. Trust is the coin of the realm. Exactly. And I think a lot of organizations are building teams that manage the trust and safety of this. It's intrinsic to what we do across Google, across all of our platforms, to think about this. And we've been taking all of our experience and putting that into the platform itself, so that customers do get a choice of where they set, you know, the safety parameters on things. But we work with customers to help them set up these trust and safety teams. And it is going to be important. And again, deploying AI inside the context of a set of operating controls, rather than just naively throwing it out there, is going to be one of the most important things. Now, I'm not sure you want to go there, so
don't feel like you have to. But I've also seen a much stronger public-private partnership between Google and some of the three-letter agencies, and just government writ large. And quite honestly, I think that's the trust that you're bringing to some of those discussions. I mean, we've got a terrific partnership, and kudos to all the people in this administration. You know, we work with the NSA Cybersecurity Collaboration Center, we work with DHS on the JCDC and on the Cyber Safety Review Board, and we're doing a lot of work with NIST. So we're very much more heavily engaged with our government partners, not just here in the US but around the world as well, because we recognize our position in supporting critical infrastructure, and we know we have to partner on intelligence sharing, on collaboration, on sharing best practices. And as well, many governments around the world are also our customers. And so, you know, I now know more about FedRAMP than I ever did before. And probably more than you ever wanted to know. But that is a lot of paperwork, too, that should be AI-automated. Well, so
it's interesting. There's a lot of automation of the compliance process. NIST has a great standard called OSCAL, which is machine-readable control assessments, and using AI to analyze those controls and ship machine-readable attestations around is already starting to transform the compliance industry. So I think we're going to see some good stuff. And let's talk for one second about the threat. How would you paint the threat picture that we're grappling with right now? Yeah, well, look, I think the threat continues. Nation states and organized
criminals continue to get more sophisticated. I still think there are a lot of targets of opportunity, particularly in the criminal environment, around ransomware and other things. And I think we need to keep making it harder for the attackers. And the good news is, through defense. Or do you think through scaling response, through defense? I mean, I think when you look at a lot of breaches still, and you look at the root cause of the breaches... The same ones, over and over. We've seen that movie. Exactly. And I know everybody knows this, and I won't go through it all in detail, but implementing strong phishing-resistant multi-factor authentication, keeping systems up to date, segmentation, all the basics, what you might call hygiene. Which is not to say that it's easy; for organizations of any sufficient complexity, doing all this can be quite tough. But when you do it, you mitigate a whole bunch of risks, and a lot of attacks don't happen. Now, of course, as you know from your background, with nation-state attackers that really want to exhaustively and exclusively go after you, if you are a prime target, you're going to have a whole different set of defenses and detection and response
needed. But generally speaking... And they're going to use more than just cyber, all sorts of things. Well, and I think organizations have to pay attention also to insider risk.
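One common building block for surfacing that kind of insider signal is behavioral baselining against a user's own history. A toy sketch of the idea follows; the numbers and threshold are invented for illustration and don't reflect any specific product's logic:

```python
from statistics import mean, pstdev

# Hypothetical per-day file-access counts for one employee over two weeks.
history = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]
today = 55  # a sudden spike worth a second look

def zscore(history, value):
    """How many standard deviations `value` sits from the user's own norm."""
    mu, sigma = mean(history), pstdev(history)
    return (value - mu) / sigma if sigma else 0.0

score = zscore(history, today)
if score > 3:  # common rule-of-thumb threshold for "far from normal"
    print(f"anomalous access volume (z={score:.1f})")
```

Real insider-risk programs combine many such signals (access patterns, data movement, timing) and weigh them against privacy obligations; a single z-score is only the cartoon version of the idea.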
Not just trusted insiders going bad, but otherwise-trusted insiders being coerced into being bad by outside actors. I think that is going to be a thing in the future: as people's digital defenses get better, the attackers are going to have to go to the good old-fashioned ways of getting inside organizations. You know, one of the challenges we're dealing with is that a lot of the bad actors, the ransomware gangs and the like, are operating in countries we don't have extradition treaties with. Do you think there are some technical solutions we'll see that can maybe enhance the long arm of the law? Because we still, to one extent or another, blame the victim rather than impose cost and consequence on the bad actors. No, I think we absolutely have to keep
imposing cost and, through that, deterring threat actors across the whole range of activity. I think that's right; we shouldn't be blaming victims. But there's a degree of responsibility, like in most walks of life. You know, there's a minimum standard, and if you fall below it, it could be considered negligent. I don't think we have a societal consistency on where that line is. And I think that comes back to the question on standards. But it's interesting. I mean, I think, you know, the work that the various agencies here in the US have been doing on this so-called defend forward strategy of imposing cost, dismantling attacker infrastructure, all the way through to sanctions and indictments, they all have a deterrent effect. And you know, if you're a criminal actor in Eastern Europe, you can't go on your vacation to Miami anymore. It may not affect much, but it gives you pause for thought. Well said. The tyranny of
time requires I be a tyrant. But one last question. What questions didn't I ask that I should have? Well, so I think one of the things to think about
on the secure-by-design and secure-by-default stuff is that it's not just on the tech companies; it's on everybody to accept that. Universities. Exactly. And I think, you know, we have to be great vendors, but customers also need to be great customers and really pound the table and be prepared to implement this stuff. And the final thing I'll say is, you know, we talk about executives and the tone at the top. Executives also need to make sure there are the resources in the ranks of their organizations to get security done. So I think we have to have a broader conversation, beyond just the tech companies. Phil, thank you for all you're doing. Thank you for your leadership. You really have led so many people and inspired so many people in this field, and I'm just happy you're fighting the good fight. So thank you, Phil. Thank you. Appreciate all you do. Thank you.