
Secure by Design - Eoin Woods

Sep 10, 2024 · 20 min · Ep. 93

Episode description

How can you bring security principles into the development process early on, instead of somehow bolting them on afterwards? Eoin talks about his past experiences and shows how security awareness has grown over the years. He emphasizes the importance of principles such as "defense in depth" and the use of secure defaults. We also talk about the challenges of keeping up with the latest security threats, and about the value of involving security engineers and testers in the project early.

Transcript

Hello and welcome to a new episode of the podcast Software Testing. Today with another episode from OOP 2024 in Munich. My guest just now was Eoin Woods. I talked to him about how to implement security by design, which principles you can follow and how we can implement this in a development process. Because we all know that when security is finally tested, the damage has often already been done. The episode is in English, but you will still have a lot of fun with it.

Hello Eoin, nice to have you here on the podcast show. Well, thank you. Thank you for the invitation. It's very nice to be here. Yeah, it's great. We had, I think it was the last preparation call I had for the OOP conference, and so we got this slot now, and I'm very happy to have you here because you have a very interesting topic in my opinion. Because in testing, we often have the job of testing security, and that's always something which happens too late.

And your talk here is about security by design. So, make security upfront a first-class citizen, as opposed to the thing that's always shoved in at the end and nobody wants to touch. Yeah, yeah. That's the whole idea. Yeah, great. Yeah, so let's start into this topic. What are your ideas on that? What are your expectations for security by design?

Well, the really interesting thing about security is that I worked in security engineering briefly and I worked in a number of places where security is very important. This was back many years ago. And then when I started working, even in large financial organizations, it was quite hard to get people interested in security.

So about 10 years ago, I started giving talks on it and I used to get five people in the audience and they were all security people and I was just talking to people who knew about security. And the amazing thing that's happened is that now you get a room full of people who actually want to know about security. So that's just fantastic. Something's changed. It's a really, really great trend. But the thing that I think that still persists is exactly what you talked about a moment ago.

It's not just security, is it? Security, performance, resilience, availability. They're all things that, of course, we know they're so important and we will get around to them eventually. And that's what just never works, as any tester will tell you. If you start thinking about security in the last 10% of your project, you're going to get 10% security. So you really need to start early.

And the difficulty with that practically is that most software engineers don't have lots of background in security. It's changing a little bit. Some university courses today, I observe, at least in the UK, are starting to have a lot more security right from the first year and that's terrific. But a lot of people in the industry, they know about security in abstract terms. They know that they've got authentication, authorization, and auditing and so on. But actually, what do you do?

And then when they try and engage with the security community, they meet security engineers. They're very clever people and that's all they do. And the problem is that's all they do. And they don't know much about software delivery. They've typically come from an infrastructure background, or if not, they did a bit of development years ago and now they do security. And they just speak a different language.

And they will very quickly overwhelm a software team with all of this complicated stuff the team absolutely has to do now, otherwise there will be a disaster. Whereas, of course, we know that nothing's absolute. Everything's got to be in balance. One of the things I noticed was that principles really help people. I'm a big believer in design principles to help guide design decisions.

And quite a long time ago now, I realized if we could identify some principles that were easy to understand and remember, that might just get people thinking about security much earlier. And that's when I came up with this set. The reason being, I didn't really want to invent a set, but I went out to look for them and there were sets everywhere from minimal sets with eight or nine up to-- I found one that's got hundreds of principles in it.

And they're all valid, but how would you possibly use them? So I tried to find a middle ground that distills them down to a set that's pretty accessible, but I think covers a lot of what software teams need to understand about security. Oh, very good. So let's look deeper into these 10 principles. So maybe we-- let's see how much time we have. So maybe we start with your top five. Sure, sure. Some of the important ones are things like defense in depth.

It's very common today that we have very sophisticated attackers. You can't assume that one security mechanism, say, a piece of encryption or an authentication system is going to protect you. They may well defeat one of your mechanisms. It's really great to have more than one mechanism so that once they're in, they haven't got everything. So defense in depth is quite important. The other thing is that nearly every security mechanism has flaws, or we make mistakes in how we apply them.

So it's important not to be totally reliant on one kind of security mechanism throughout your system. So in really secure systems, those that, for example, governments defend, you'll find every single layer has its own security mechanisms. They're all independent. They all interlock. You break into one level, and you've just got the same problem again and again and again. What they're trying to do is make it very expensive to break in. So that's one thing I always remind people.
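To make the layering concrete, here is a minimal sketch, with entirely hypothetical names, of what independent mechanisms at each level can look like in code: a signed-session check, a separate per-operation authorization check, and an audit record, so defeating any single one still doesn't give an attacker everything.

```python
# Minimal, illustrative defense-in-depth sketch (hypothetical names throughout).
# Each layer is an independent check; defeating one still leaves the others.
import hmac
import hashlib

SESSION_KEY = b"example-session-signing-key"  # in practice: from a secrets manager

def layer_1_authenticate(session_token: str, signature: str) -> bool:
    """Layer 1: verify the session token was signed by us."""
    expected = hmac.new(SESSION_KEY, session_token.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def layer_2_authorize(user_roles: set[str], required_role: str) -> bool:
    """Layer 2: independent per-operation authorization check."""
    return required_role in user_roles

def layer_3_record_access(user_id: str, resource: str) -> None:
    """Layer 3: audit record, so misuse is at least visible afterwards."""
    print(f"AUDIT user={user_id} resource={resource}")

def read_account(session_token, signature, user_id, user_roles, account_id):
    if not layer_1_authenticate(session_token, signature):
        raise PermissionError("authentication failed")
    if not layer_2_authorize(user_roles, "accounts:read"):
        raise PermissionError("not authorized for accounts:read")
    layer_3_record_access(user_id, f"account/{account_id}")
    return {"account": account_id}  # the stored data itself would also be encrypted at rest
```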

Don't depend on one thing. The other thing, which is unfortunately still quite common, is don't invent your own security technology. It's a lot harder than it looks. Nearly every relatively inexperienced software engineer I've met thinks, how hard can encryption be? I read a book on it recently. I'm going to create my own password vault. I just go, "Don't." It sounds fun. It's a lot harder than it looks.

And that goes for all kinds of mechanisms, even things that look quite straightforward, like integration with OAuth, with authentication and authorization systems. If you can possibly do it, find a library that does it for you already. And the reason is because even security technology produced by the experts, by the professionals, has got flaws in it. The first thing that happens when they produce it is that people start beating on it.

All those expert security testers start testing it immediately, and they always find problems with it. What are the chances that you're going to have no problems with yours? It's pretty slight. The other thing is just that, honestly, it's an overhead you don't need on your project. I know it looks fun, but honestly, perhaps there are better things you could be spending more time on.
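As a hedged illustration of "find a library that does it for you already", the sketch below uses the Fernet recipe from the widely reviewed Python cryptography package instead of hand-rolled encryption; the wrapper functions around it are purely illustrative.

```python
# Sketch: use a vetted recipe (Fernet from the "cryptography" package,
# pip install cryptography) rather than inventing an encryption scheme.
# Wrapper names here are illustrative only.
from cryptography.fernet import Fernet, InvalidToken

def make_vault_key() -> bytes:
    # Generates a random key (URL-safe base64 encoded).
    return Fernet.generate_key()

def seal(secret: str, key: bytes) -> bytes:
    # Authenticated encryption: confidentiality and integrity in one call.
    return Fernet(key).encrypt(secret.encode("utf-8"))

def unseal(token: bytes, key: bytes) -> str:
    try:
        return Fernet(key).decrypt(token).decode("utf-8")
    except InvalidToken:
        # Tampered or wrong-key ciphertext: fail loudly, never return partial data.
        raise ValueError("could not decrypt: token invalid or key wrong")

if __name__ == "__main__":
    key = make_vault_key()
    token = seal("db-password-example", key)
    print(unseal(token, key))
```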

Do you recommend, if you think about libraries doing the encryption stuff, to go to open source because it's more transparent what happens there, or closed source from companies? Do you have any... I think it's quite context-dependent, actually. I don't think either has a monopoly on good security. You're absolutely right. The transparency around open source is quite reassuring. There's a lot of independent security researchers looking at that stuff all the time.

But in certain markets, for certain applications, there are actually closed source solutions which are really the superior offering. You have to ask the right questions of the vendors, but they may be the right choice in your case. I personally tend to veer towards the open source ones, but it really does depend on what you're building. Are there any principles that fit into the process of how I develop software?

I can say "don't build your own, use a library", but that's one point at the top of the whiteboard and nobody sees it because they are doing their daily business. How can we deal with that? Really, the design principles are... I'm aiming at designers. They're the things designers should be thinking about when they're doing their design. You're raising a much bigger and very important point. How does the whole team go about doing secure software delivery together?

Really, they need a secure software delivery lifecycle. It's just a posh name for saying make sure you do the security work all the way through the lifecycle. There are a number of industry standard models. OWASP is a great organization, full of resources, and they have a secure software development lifecycle that you can use. It's very well developed. It's been used by lots of people. But there are a number of models for secure software delivery out there.

A number of government bodies have them as well, for example. I always recommend starting from a secure software development lifecycle from experts. Of course, you'll tailor it. Everyone tailors software development lifecycles, but start from one that's been used already. I think we have to implement this into the process. For a lot of my clients, security is one part of the non-functional requirements. If it isn't written into any user stories, it's often too late to think about it.

What does it mean to do this or that? One of the things we always recommend is to bring security user stories into the process, so-called abuser stories. How is someone going to attack this system? That's a simplified version of a technique which is in all the secure development lifecycles, called threat modeling. It sounds complicated. It's really not. It's about sitting the team down and saying, "Okay, what have we got that's valuable? Who would think it was valuable?

How would they attack us in order to get that valuable thing?" It could be a piece of data, it could be a financial transaction, it could be an operation that they want to either enable or disable. It's a question of going through that process in a structured way. There are a number of very good books out there. Adam Shostack's is the classic one on threat modeling, but there are actually quite a number, which are all very good. As the real experts in threat modeling like to point out,

it's a really simple process. Don't overcomplicate it. Just go through the process of thinking out what could go wrong. It's a little bit like with high availability. You need to sit down and work out everything that could go wrong, and what will happen when each piece of it goes wrong. Once you've been through that process, you're in a much better place to know how robust your system is. Yeah, I understand.
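One lightweight way to run that exercise is simply to record, for each valuable asset, who might want it, how they might get it, and what you will do about it. The sketch below is a hypothetical, minimal data shape for capturing a session's output, not a formal threat-modeling tool.

```python
# Minimal sketch of recording a threat-modeling session's output.
# Names and fields are illustrative; teams often use STRIDE categories or a wiki table.
from dataclasses import dataclass, field

@dataclass
class Threat:
    asset: str                              # what is valuable
    attacker: str                           # who would want it
    attack: str                             # how they might get it
    mitigations: list[str] = field(default_factory=list)

threats = [
    Threat(
        asset="customer payment data",
        attacker="external criminal group",
        attack="SQL injection through the public search API",
        mitigations=["parameterized queries", "input validation", "WAF rules"],
    ),
    Threat(
        asset="admin operations",
        attacker="disgruntled insider",
        attack="reuse of a shared admin account",
        mitigations=["individual accounts", "MFA", "audit trail on admin actions"],
    ),
]

for t in threats:
    print(f"{t.asset}: {t.attack} -> mitigations: {', '.join(t.mitigations)}")
```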

Often, attackers and all the cyber criminals use a lot of zero-day vulnerabilities to get into a system. Is there any way we can address these, with up-to-date mechanisms, in our design? I think it's really tough, actually. One thing is defense in depth, which we talked about: if you're relying very much on one component to keep you safe, and it's got a zero day, then someone's going to exploit that.

The other thing I think is around processes, as you were saying. Do you have a strategy for keeping your components up to date? But you know, in the days of open source, that is really difficult. Some ecosystems, I mean, the Node.js ecosystem would be a great one to use as an example. They have such fine-grained components. You've really got to have automated help to stay on top of that. One of the other principles in my set is trust cautiously. There's a number of things that that implies.

One of them is, for example, around network connections: don't either make or accept unauthorized or unauthenticated network connections. But another one is be careful what you insert into your system, what you bring in. It could be data. Be cautious about what data you accept, because there are lots of exploits that involve putting malicious data into a system. But also think about what you're building it from, and not just open source.

Commercial stuff, commercial libraries, for example, you're using as well, or commercial platforms, how secure are they? And as you very rightly point out, if they had a zero day, how would you know and what would you do? Unfortunately, zero days are a really intractable problem, if nothing else, because there is a black market in zero days. So actually even knowing they're there can be difficult.

There's only so much we can do, but definitely knowing what's in our system, and probably having automated support, particularly for the open source, for keeping on top of whether there are vulnerabilities in our components or not, is really important. I think there is a big, huge problem there. What you mentioned is: which data can I put into my system, and what will I allow to be put into my system?

I know a lot of clients who have a landscape of applications with data flowing through them, and the APIs are very lax, so they let everything through. And I think there is a big security issue there, because some of these infections can travel through all the systems and then cause a big bang at every edge. So I think that's a big point: to look at how secure my contracts or my APIs are, to make a good interface. Yes, I think you're absolutely right.

It's a bit of a trade-off, isn't it? Because the simple thing to do is always lowest common denominator, make everything a string, because that's going to be really straightforward and you're not going to reject anything. Everything will get translated to a string somehow. You're right.

People can craft very clever malicious attacks that send you malformed strings that your string processor thinks are fine, but then when they're translated to something else, they perhaps just cause a failure. So it's actually a denial of service attack, or perhaps they do something much more subtle and malicious, which ends up in, for example, remote command exploits. So yes, it's something you've got to be really careful about.

The trade-off is that really strongly typed interfaces take longer to develop, they're less flexible, they're more difficult to evolve. So yes, I understand why people want to make them very flexible, but always bear in mind that if you're not validating things really, really carefully, as you say, as soon as you've got the ability to take large amounts of unformatted data in, you've got potentially a huge security problem. Yes, that's true.
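To illustrate the "validate really carefully at the boundary" point, here is a hypothetical sketch of a typed request object that parses and rejects malformed input instead of forwarding raw strings downstream; the field names and limits are assumptions made for the example.

```python
# Sketch: validate at the boundary instead of forwarding raw strings.
# The field names and limits are illustrative assumptions, not a real API.
from dataclasses import dataclass

MAX_NOTE_LENGTH = 500

@dataclass(frozen=True)
class TransferRequest:
    account_id: str
    amount_cents: int
    note: str

    @staticmethod
    def parse(raw: dict) -> "TransferRequest":
        account_id = str(raw.get("account_id", ""))
        if not (account_id.isalnum() and 8 <= len(account_id) <= 16):
            raise ValueError("account_id malformed")

        try:
            amount_cents = int(raw["amount_cents"])
        except (KeyError, ValueError, TypeError):
            raise ValueError("amount_cents must be an integer")
        if not (0 < amount_cents <= 100_000_000):
            raise ValueError("amount_cents out of range")

        note = str(raw.get("note", ""))
        if len(note) > MAX_NOTE_LENGTH or any(ord(c) < 32 for c in note):
            raise ValueError("note too long or contains control characters")

        return TransferRequest(account_id, amount_cents, note)
```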

What other principles are in your set, so we can go a little bit deeper there? Sure. So other ones would be making sure that you use secure defaults and that you fail securely. My classic example goes back many, many years, to when you used to install relational databases; Oracle was a classic example. They always enabled three or four very powerful accounts with standard passwords.

And in most enterprises, if you knew about this, you could go up to a production Oracle system and find at least one of them still had the default password. It was a security cliche for many years. Have you got Oracle? In that case, you'll have a Scott Tiger account, meaning there was a user called Scott with a password Tiger. It was a demo user. You could get into almost all the Oracle systems. Oracle did, to be fair, fix that many years ago.

An awful lot of open source stuff, cloud demonstration software, does come with standard logins. They're unlocked and they've got standard passwords. Similarly, network hardware. I'm still amazed how much enterprise-grade network hardware comes out with default users and passwords installed. Because, of course, it's convenient. We know it's convenient. It's completely insecure and really quite dangerous. And the related thing is a bit like that. Don't fail to an insecure position.
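A small sketch of the secure-defaults idea, assuming a hypothetical service that reads its admin credentials from the environment: it refuses to start when the password is missing, too short, or a well-known default, rather than quietly accepting it.

```python
# Sketch: refuse to start with default or missing credentials (illustrative names).
import os
import sys

KNOWN_DEFAULTS = {"admin", "password", "changeme", "tiger"}

def load_admin_password() -> str:
    password = os.environ.get("APP_ADMIN_PASSWORD", "")
    if not password:
        sys.exit("Refusing to start: APP_ADMIN_PASSWORD is not set.")
    if password.lower() in KNOWN_DEFAULTS or len(password) < 12:
        sys.exit("Refusing to start: admin password is a known default or too short.")
    return password

if __name__ == "__main__":
    load_admin_password()
    print("configuration passed the secure-defaults check")
```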

So a classic example is years ago, I know of a database vendor who was very-- they were very concerned about their performance and their availability, and they were well-known for it. Rock-solid database used by big banks for transaction processing, that kind of thing. And they had this problem that it luckily didn't get beyond the beta stage, but they built an audit into their database. The customers asked for it. It seemed very sensible. It's got this rigorous audit trail.

It was tamper-resistant and banking-grade, as people like to say. And then what did the engineers do? Well, of course, the engineers were very concerned about performance and availability. So if the audit started malfunctioning or filled up, they disabled auditing and continued processing. Because, of course, they were completely focused on performance and availability. Luckily, it got to beta, and some customer went, "No, hang on a minute.

Our audit trail filled up, and then you just continued processing without the audit trail." But you can see how the tradeoff happened in someone's mind. We are the performance and scalability database. Therefore, auditing is optional. "Oh, no, no, just a minute. No, it's not." Similar thing if you--I mean, years ago, Unix systems used to panic, and they used to come up with a prompt where you could get into root without a password.

Subtler things today are things like when you've got message-driven systems: what happens when you get a complicated multi-component failure and you bring it all back up again? Does everything authenticate correctly and refuse to process until you've got end-to-end security in the system? Or does something start processing opportunistically? So, will everything always wait for all the security services to be available? Those kinds of questions. Yeah, yeah. Very interesting part.
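Returning to the audit-trail story, here is a hedged sketch of failing securely: if the audit record cannot be written, the hypothetical transaction below is rejected rather than processed without a trail.

```python
# Sketch: fail closed. If the audit write fails, the transaction is rejected,
# not processed silently without a trail. All names are illustrative.

class AuditUnavailable(Exception):
    pass

def write_audit_record(entry: dict) -> None:
    # Stand-in for a real, durable audit sink; raises AuditUnavailable
    # when the store is full or unreachable.
    raise AuditUnavailable("audit store is full")

def process_transaction(tx: dict) -> str:
    try:
        write_audit_record({"event": "transaction", **tx})
    except AuditUnavailable:
        # Secure failure mode: stop processing instead of dropping the audit trail.
        return "rejected: audit trail unavailable"
    return "processed"

print(process_transaction({"id": 42, "amount_cents": 1999}))
```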

I wonder how we can put more awareness into the design phase to mention all these points, these principles, because a lot of clients are very feature-driven, where we have to go-- Very much so. So, how can we argue that we have to do this work? I'm amazed the tester's asking me that question. The answer is: testers! Because testers always think about what can go wrong, and that's such a valuable service, because most people aren't.

I like to think software architects are always thinking about what can go wrong, too, but they're only human. They're often under lots and lots of feature pressure, or they may have very specific quality attributes that have been a problem up until now. Say it's throughput, and they're very focused on throughput. They know security is important, but they're very focused on throughput for the two or three sprints.

That's a great job for the tester to go, "I bet this security stuff, we appear to have just ignored it for a while, and I'm quite concerned about this new feature. It appears to bypass the security control." That's just exactly what we should be paying testers to do. And also, of course, supporting testers so that testers feel they're being encouraged to ask those questions. There's always a danger the tester feels that actually they're quite unpopular for asking the difficult questions.

We should be embracing it: "Thank you so much for asking that question. Otherwise, we would have made a mistake." I think it's good in all this agile and DevOps stuff that the testers can get into the team earlier in the process. Yeah, very much so. In the past, the tester was the last one who sees the software and says, "Oh, my God. What have you done?" Yes, yes.

The other person, of course, to get involved is your friendly security engineer, ideally one who's got an application security background as well as infrastructure. And make sure that you're asking them all the right questions, and you're inviting them in to do some threat modeling with you, be asking you all the questions that you might not have thought of.

If you've got a pen test team in-house, you might not want to save them just for the final tests; they're very, very valuable earlier on, so get them pen testing things early. Maybe everything will fail. Probably that's okay right now, but you're suddenly getting everyone thinking about, "This needs to be secure, and currently we're not." So we need to put the effort in. Maybe even the product owner, who knows, might finally go, "Oh, yeah, actually, that's a big problem."

"I need to spend some more time on security." Yeah, great. Ewan, thank you very much for this inspiration, for these tips for us. I think we can use it in our projects and in our teams to think again how to make our systems more secure, and to think security up front and not in the testing phase at the last one. So thank you very much for these insights here. Thank you very much. It's been a pleasure. I always love to talk security. Yeah, thank you. Have a nice conference here.

Thank you very much. (classical music)
