
Sponsored Conversation: Ev Kontsevoy from Teleport

Mar 07, 2022 · 41 min

Episode description

In this sponsored conversation, I talk with Ev Kontsevoy of Teleport.

In this series I have organic conversations with entrepreneurs as if having lunch with them and hearing about the product for the first time. They give their pitch, and I dig deeper with questions.

Teleport, in my own words, is a way of rethinking how people access and use computing resources. It's a policy-based system that controls who can do what across your entire infrastructure using a central access plane. 

Become a Member: https://danielmiessler.com/upgrade

See omnystudio.com/listener for privacy information.

Transcript

S1

All right. In this standalone episode, I'm doing a sponsored interview with Ev Kontsevoy of Teleport. So we've all heard sponsored conversations before, and the structure I really like is imagining that you're having a lunch conversation with an entrepreneur and you're learning about the product for the first time. So that's really how I approach these. Basically, I say, look, we're having lunch. Tell me about the product. And I get to comment and ask questions just like in a normal conversation.

Now these are sponsored, so I'm not likely to blast someone from orbit. But I'm also going to be honest if I see a challenge or a question, just like I would during a lunch conversation. And the way we're looking to avoid conflict here is by pre-filtering who we allow to do sponsored interviews. So that's the approach: the natural pitch in a conversation over the time span of a meal.

S1

And with that, here's Ev from Teleport. All right.

S1

Well, it's very nice to meet you. Likewise. Yeah, so I guess, could you tell me about yourself and tell me about the company?

S2

Absolutely. So I'm an engineer who was always obsessed with computing infrastructure. Probably the reason for this is that, like most engineers, I started programming at a fairly early age, and I always liked to write code that makes computers do things, like physical things: play music using something that moves inside of a computer, or do some special effects with a monitor. And for that reason, when I grew up and got into the workforce,

I was naturally attracted to this cloud revolution that started to happen, because just being in a data center, seeing racks and racks of servers reaching the ceiling, has always been very fascinating. So this is the second company I've started to make the lives of other engineers easier. The first one was an email cloud delivery technology: if you were running applications in the cloud and you wanted to send and receive email messages at massive scale, that was

my first company, called Mailgun. But after Mailgun got acquired by Rackspace, which at the time was the second biggest cloud provider, I got exposed to big cloud problems. And one of those problems was access, because as companies continue to push more and more data to the cloud, the importance of data security in data centers is

now more important than ever. And it just so happens that when it comes to infrastructure security, when it comes to infrastructure access, the technology we use for that is surprisingly lagging application-level security by, like, 10, 15, 20 years. In other words, when you're accessing web apps online, and by web apps I mean, like, banking: when you log into your bank to check your balance or pay your bills, you're actually using state-of-the-art technology.

But if you are an engineer at a SaaS company and you're accessing production servers, computing environments, you're using antiquated stuff, and people don't realize that they're actually using better tools than the engineers who build the software. But that's true. So that's why Teleport was started: to bridge this gap, to get this state-of-the-art technology into the infrastructure access space, so software developers and other types of engineers can securely and conveniently access infrastructure.

S1

Awesome. And so just looking at it at a cursory level, it looks like the idea is controlling ingress and egress. It's like you have a single control point for all the different operations that need to happen. Is that the way you'd characterize it?

S2

So that's absolutely accurate. But I would say that this type of description doesn't communicate much, right? If someone were listening, it almost sounds like a network solution, like, oh, you have a firewall or a proxy, because they could be described using very similar language. The interesting thing is, it's all about identity at the end of the day. So even if you put security aside and you think about how it is that we

do computing, like, what is the process of computing? Who's involved? You will see that there are three very different kinds of actors in that dance. You have hardware: the actual things that store data and perform operations on the data. Then there is software, and software acts intelligently, because we are pretty good at building software that makes decisions. So software is controlling hardware to make computing happen. Mm-Hmm.

And then you have humans. Humans are obviously the most important thing. Humans create software. So there is this kind of loop: humans make decisions, and then they create software, and then the software makes decisions on behalf of humans, and then it tells hardware what to do, and hardware, on behalf of software, makes changes to the data. OK. So then if you are thinking about stealing someone's data, you now have a choice. You can attack hardware, right?

So you can try to gain access to that machine, maybe even physically. Just break into the data center, get the server out of the rack and run away. Like a Hollywood movie? Hard to imagine, but probably possible. Or you can attack software. You can try to inject your code into that software somehow, maybe through cross-site

scripting or by attacking the supply chain. So you put your code into the software, and then you get into the hardware, and then you get to the data. Or you attack humans: you send an email and say, hey, click on this thing to claim your whatever, and then you end up on their laptop. And from that laptop you get into software, then hardware, then the data, and so forth. So the important thing to realize is that there are these three different entities, and an attack to steal data could come through any of these three dimensions.

And historically, there have been completely different industries, completely different products, completely different organizations responsible for protecting each one separately. So you've heard terms like endpoint security, or things like infrastructure access, and the like. These solutions would say: we protect your laptop, or we protect your code, or we protect your servers. And that is a broken approach. You see how fundamental that is. It's broken because they're

all disjointed. It means that if you were to have complete protection, you have to, first of all, use different solutions for each of these three components. But then you also have to synchronize. You have to synchronize how they're configured.

S1

Well, isn't that because of the history, though? They were very distinct components: the software was way up top and the hardware was a piece of iron sitting in a data center somewhere. So physically, the history is that they were very separate.

S2

Correct. Correct. And you see, we're intelligent human beings; we don't do anything obviously stupid. There's always a history behind it. We've been making these incremental decisions, and each of those incremental decisions historically has been the right move. Right. But the end state we find ourselves in right now is just terribly wrong. Here's a very simple example of why it's broken.

Most companies probably want to enforce one simple rule that states that a software engineer who no longer works here doesn't have access to our infrastructure. Yeah. Now, for that to be true, you have to configure multiple tools in the same way. Mm-Hmm. So if you forget to say that this laptop is no longer trusted, that laptop will be allowed to get in. Or if you forget to say that this password is no longer valid in a web UI somewhere, like a cloud

control panel, engineers will be able to get in. If you forget to say that SSH no longer accepts their key, they will be able to access infrastructure. You see, simply because your data sits in this house, the data center, and that house has dozens of doors for software, hardware and people, you have to synchronize access across all of them. And that now becomes almost actually impossible for most companies, just due to the complexity of it.

S1

Well, especially when using separate software and also

S2

policies. And software comes with expertise: if you bought a solution, you'd better have experts who know how to set it up, configure it and use it. And expertise is always in short supply. Like, every single company is

struggling to hire engineering talent, and security talent in particular. And that, I would argue, is the fundamental problem that we solve with the Access Plane concept, where we say that treating software access, hardware access and people access separately just no longer scales. We have to have a single plane that works for all three actors. And in order for that to work, we need to agree on a common technology, or a common technological platform. It

needs to be open. It needs to be an open standard. It needs to be easy to understand and reason about. And then you say, we're not going to support anything else; all the legacy stuff needs to go away. That is what Teleport is. Teleport states that for software, hardware and people to seamlessly work and create this trusted computing environment, everything and everyone really important has to have an identity, and identity is represented

in the form of a certificate. There are two standards for certificates that exist, and we support both: SSH certificates and X.509 certificates. And Teleport says that to do anything, machines, software and humans, all three, have to have certificates, and deciding whether something is allowed or not is done by looking at all three certs for every security-related action and then saying yes

or no. So that is the major innovation here: this consolidation of the three actors of computing on top of a common foundation, which is the certificate, which is identity. So first of all, it solves the fragmentation issue, where you have these kinds of silos of security all over. But it also methodically eliminates the huge risk that exists in your system if you have secrets.
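The "look at all three certs for every action" idea can be sketched in a few lines. Everything here is illustrative: the field names (`valid`, `env`) and the action string are hypothetical stand-ins for certificate attributes, not Teleport's actual API or data model.

```python
# Sketch: every security decision inspects the identities of all three
# actors (human, software, hardware) before saying yes or no.
def allowed(human: dict, software: dict, hardware: dict, action: str) -> bool:
    actors = (human, software, hardware)
    # Every actor must present a currently valid identity.
    if not all(a.get("valid") for a in actors):
        return False
    # Touching production requires all three identities to carry the
    # "production" label.
    if action == "touch-production-data":
        return all(a.get("env") == "production" for a in actors)
    return True

engineer   = {"valid": True, "env": "production"}
service    = {"valid": True, "env": "production"}
prod_host  = {"valid": True, "env": "production"}
contractor = {"valid": True, "env": "staging"}

print(allowed(engineer, service, prod_host, "touch-production-data"))    # True
print(allowed(contractor, service, prod_host, "touch-production-data"))  # False
```

The point of the sketch is that the decision function never asks "is this a machine or a person?"; it only looks at certificate attributes.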

So when companies talk about secret vaults, when they're talking about password rotation, when they talk about encryption at rest, they're basically saying that our infrastructure has certain things on it that are protected by encryption. Mm-Hmm. So a protected secret is protected by encryption; you cannot claim that the data itself will never be stolen, because things get stolen every once in a while. But you are relying

on encryption and decryption as your last line of defense. Mm-Hmm. So here's why it is just statistically not going to work. Say you have a secret; let's use an SSH key, for example. You have a private SSH key somewhere in your system. The worst case scenario, there are a lot of them on engineering laptops, but let's just assume you have an encrypted key. Which means that there's decryption happening somewhere. Mm-Hmm. It could be happening

automatically: you have some kind of script, some application, some automation that does the decryption. Or it could happen manually: there is a human that needs to type in the decryption key on a keyboard. OK. So in the first case, if it's a piece of automation that does decryption, you could have a bad deployment. Your code that does decryption, along with the decryption key, might accidentally end up on GitHub, visible to the entire world.

You know, mistakes happen. Mm hmm. The probability is close to zero, especially if you're doing your best not to make these bad deployments. And if you're doing it manually, well, humans are humans. You might end up with a sticky note on the monitor somewhere. Mm-Hmm. And that sticky note may end up in the news, if you can believe it. It was a real story in

S1

an interview on video.

S2

Yeah, yeah, yeah. Well, again, the probability of that happening is comically low. But notice what happens as you scale: as you acquire more and more secrets, as you get more and more hardware, and you get more and more humans who can make mistakes. The combined probability starts to creep up, and then eventually it happens.
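The scaling argument can be made concrete with a back-of-the-envelope calculation; the per-secret leak rate below is invented purely for illustration.

```python
# Chance that at least one of n independently held secrets leaks,
# given a tiny per-secret leak probability p (illustrative number).
def p_any_leak(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

print(round(p_any_leak(0.001, 1), 4))     # one secret: 0.001
print(round(p_any_leak(0.001, 5000), 2))  # 5,000 secrets: 0.99
```

A per-secret risk that is "comically low" becomes a near-certainty once an organization accumulates thousands of keys, passwords, and sticky notes.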

S1

So let me jump in here real quick. So you're essentially saying: these three things, let's wrap them with some sort of access control plane, and then have a policy inside of that plane which looks at whether certain actions are allowed to be done, tying that directly to people

S2

Or to true identities, yeah. Actually, it's important to say that we should not treat machines or humans separately from each other. They need to be treated the same.

S1

OK, so there are

S2

three parties, A, B and C, right? A is hardware. B is a piece of software, like a microservice, for example, and C is a human. And when they interact, you should pay zero attention to whether it is a machine or a person; it doesn't matter. You simply look at the certificate, you look at the properties of the certificate, and then you look at policy. So if the certificate says that one of these actors is in the production environment,

it's not staging, aha, so it triggers the production policy, because what is allowed to happen to production data versus staging is very different, right? And then if one of these certificates says, I'm a temporary contractor, now you know that there is something happening on production with a temporary contractor. So you want to see what policy to enforce, but you should not be paying attention to whether it's a human or a machine; that itself doesn't matter.

That is the key distinction, because if you implement a system like this, then you have this massive unification. You're essentially saying there is a single source of truth that issues identities to everyone, a single source of truth that makes authorization and authentication decisions, and a single source of truth where the audit goes.

S1

Well, then how about the asset data? How do you populate the details of the policy? So: I need to access this web service, I need to contact this API, I need to get onto this hard disk and pull this data. When you're trying to write a policy that says only this identity can do this during this time of day, or whatever, doesn't that require a lot of asset metadata to exist?

S2

Correct. So, the asset metadata. Although by saying "assets" you inject that distinction, because "asset" assumes machines, and the humans are not assets. This is why I want to double-click on: do not make that distinction. Simply say metadata, right? So you have metadata associated with identity. Where is it coming from? That's, I think, your question.

So here's the thing. If you are a human and you go through a login process, that metadata will get injected by the IdP, the identity platform your company uses. It could be Active Directory, SailPoint, Okta. All of these things deliver a lot of metadata when you go through authentication. Teleport doesn't do authentication, by the way; we rely on your SSO. We just get your identity; it is given to us. OK, so that's where it comes from in

this case. It's already there: your company knows who you are, you are a member of a group, you have an email address, you have a manager. All of this is going to be in your certificate. Then if you're a piece of software, like a microservice, and you are launching inside of, let's say, a Kubernetes cluster, your identity will be handed to you in the form of a certificate, and the Kubernetes environment will be encoded in there. The Kubernetes cluster knows

whether I'm staging or production. If you're a database machine, it will all be there. So we already have this metadata at the infrastructure level; those technologies have been built by other people, so we don't really need to reinvent them. And the same is true for hardware: when the company issues you a laptop, if it's an Apple laptop, there is a hardware security module on it, and you can mark that this laptop

belongs to us. So if some other laptop shows up and it has a TPM with a different fingerprint on it, it will not be trusted, right? So the key technologies for having metadata and storing it already exist. What Teleport does is suck all of this metadata out, put it in a certificate, and then make sure that these certificates are available when the decision is to be made, either to allow or deny a specific operation.

S1

Yeah, very, very interesting. So how does this tie in with the cloud? Because it sounds very cloud-friendly, since that's where we have a lot of metadata present.

S2

It's easier to do in the cloud, definitely, because the cloud allows you to do everything through code: as you provision machines using your scripts, as you create Kubernetes clusters and your services, there's just a lot of it. It's just awesome if you're in the cloud. If you run your own data centers, well, companies that do that don't run just bare metal and nothing else. Yeah,

they have things like VMware, OpenStack; they have private versions of what AWS offers, and those have similar capabilities.

S1

OK, so you're going to get metadata from somewhere, whether it's VMware or an MDM or something. OK. Very interesting. So how upfront is the policy editor? Because it seems like the policy is very key, right? You have the control plane and then you have the policies that sit on top of it. Is that a huge part of the product and the interaction point with the product?

S2

So the answer is yes and no. I would say that companies historically struggled not with defining policy. Mm-Hmm. Because for mid-sized organizations, the policies are not that complicated. Even the basic ones are often not enforced. For example, a policy like: engineers must not touch production data. Easy enough, right? Mm-Hmm. So if you are in engineering, you should be dealing with staging, you should be dealing with test environments; don't touch

customer data. So the policy for that is like a couple of lines of configuration using an arbitrary language; you and I could come up with a way to declare this policy. The difficulty has been to actually enforce it, because of this siloing of access: you have cloud access through the API, you could go through the cloud control panel through a web UI, you can go through Kubernetes. You see, there are just so many protocols, so many different doors.

So synchronizing it, having a single way of defining that, is the much bigger problem. And I'm not saying your question is not important; I simply want to remind you that the focus is consolidation, having a single voice. But then, to define policy in Teleport, we offer kind of two ways of doing it. You have a static way of defining policy; you can think of it like a YAML file. The YAML says, for this group, this is what they could do inside of Kubernetes, this is

what they can do with the database. But the interesting one is policy as code, where you basically use the programming language of your choice, and Teleport will just ask your code: what shall

I do now? Here are the identities of everyone involved. This allows you to extend Teleport's behavior beyond static configuration, so you can implement completely arbitrary rules, like: if your last name ends with a Z, then you cannot do something on Tuesdays after 2pm. So those are the two ways of doing it: dynamically through code, or statically via static config.
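The static style mentioned above can be pictured as something like the following role definition. It is loosely modeled on Teleport's YAML role format, but the specific field names and values here are illustrative rather than exact.

```yaml
# Hypothetical role: members may SSH into staging nodes as "ubuntu",
# while production nodes are explicitly denied.
kind: role
metadata:
  name: staging-engineer
spec:
  allow:
    logins: ["ubuntu"]
    node_labels:
      env: "staging"
  deny:
    node_labels:
      env: "production"
```

The deny block wins over allow, which is how a rule like "engineers must not touch production data" becomes a couple of lines of configuration.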

S1

Yeah, it just seems to me that the policy piece is huge. Not the technical aspect of, like, how you write it or implement it, but the aspect of explainability: understanding your security program by being able to look at the set of policies that are in place. So it's like, someone comes to audit you, a customer or a regulator or something, and they say, I want to know the current state of your environment. You could say: here are all the

policies that are currently in place. Here are the violations.

S2

That's right. Yep, yep. It's actually an interesting computer science problem that you're reminding me of. So, for those of us engineers who went to school, if you took, like, a Prolog class. Mm hmm. Policy is very similar to declarative programming. Mm hmm. If you're writing a piece of software using those languages, you basically define a set of facts, like predicates. For example, cats

are animals. Dogs are animals. So then the program will say that both cats and dogs are animals, and you can ask the question: is a dog an animal? And the system will respond yes or no. As you add more and more statements like this, your application becomes more complex, and then you can ask the question: is Bob an animal? And the system of predicates will reason: Bob is a human, and a human is not

an animal; therefore, Bob is not an animal. That's just a simple example. But notice that the answers will always be correct. You will either get yes, or no, or not enough data. But we don't have a system like this to query policies today, because policies exist, as I said, with that fragmentation and siloing: you have different policies for different systems, declared using different languages and components. So you cannot ask the question, can Bob

touch production data, right? Because you have no one to ask, simply because of that fragmentation. So that is why we rely on audits. Mm-Hmm. Audit is a retroactive way to see if you have an error in your policy. It's essentially troubleshooting policy; that's what audit is. This is why auditors look for these audit logs. Mm-Hmm. Because they want to see if you

have a way of spotting an error in there. I am basically saying that audit logging in the future should be considered almost like an obsolete practice. Instead, we should have a system that is similar to Prolog, where you can ask a question: can Bob access production? And if the answer is no, then it simply cannot happen.
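The "can Bob touch production data?" query can be sketched as a tiny fact base in the same declarative spirit. The users, groups, and access rules below are invented for illustration.

```python
# Facts: group membership, and which environments each group may touch.
membership = {"alice": "engineers", "bob": "contractors"}
group_access = {
    "engineers": {"staging", "production"},
    "contractors": {"staging"},
}

def can_touch(user: str, env: str) -> bool:
    """Derive an answer from the facts, Prolog-style: yes or no."""
    group = membership.get(user)
    return env in group_access.get(group, set())

print(can_touch("alice", "production"))  # True
print(can_touch("bob", "production"))    # False
```

The fragmentation problem is that today no single system holds all of these facts, so there is nothing you can point this query at.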

S1

Right, just give me the current state of the policy.

S2

So that is the future that we're driving towards. But the first step is to consolidate everything in one place. You see, this is why I answered your first question that way. When you said, it's just a single control plane that everything goes through, that is true, but you see how deep that consolidation goes: you're making entire things like audit logs and observability not needed, obsolete.

That is the kind of benefit that you get if you consolidate access policy and identity source in one place.

S1

Interesting. And so what are the technical steps here? What does the plane look like when it's installed inside a different environment? How does it look for physical assets? How does it look for Kubernetes versus EC2 versus all these other places?

S2

So there are essentially, I would say, maybe three components to the system. On a low level, Teleport is a single binary. Would you believe that simplicity is absolutely essential to security? Every engineer is familiar with the concept of a Unix daemon, a system process that's running on every box, and we use that model. OK, we're going to be just like that, because everyone knows how to run it, everyone knows how it works, everyone has an idea of

how much resources it requires. sshd is also almost stateless. It has a little bit of config, it's simple, it never goes down, it has no dependencies. It's the first thing that comes up and the last thing that goes down. Mm-Hmm. Maintenance-free, or almost maintenance-free. So that is what Teleport is. It's just a drop-in replacement for that process. OK. So with that out of the way, now:

how does it work when it's running? You have to say that on certain machines Teleport works slightly differently; in other words, you run the daemon with different flags in different places. OK. So you have to say, these machines are going to be proxies, so you select the proxy role. OK, so: proxy. It means that it's a machine that is exposed to the outside world, and also to the inside. So that's your kind of front door.

So, the Teleport proxy. Again, SSH works the exact same way: if you think of a jump host, that's really your proxy.

S1

OK, so this is like your take on zero trust, rather than VPNs; like, these are your proxies to get into everything behind them.

S2

Correct. But the Teleport proxy speaks all protocols. That, I feel, is the massive difference there. Hmm. So when you try to connect to MongoDB through Teleport, the proxy will start speaking the MongoDB protocol to you. If you're talking SSH, the proxy talks SSH. So your existing tools think that they are talking to the actual Mongo.

S1

So if I'm connecting to another distant host that's behind the proxy, how is it being routed to it? How does it know where to go?

S2

Yeah. So what the proxy does: the proxy gets your request, and in there it sees which host you're actually trying to go to. So it makes this transparent connection and you go into that host. On that host you have Teleport running, and Teleport is interacting closely with whatever it is you're accessing. If it's a Linux operating system, that's a direct comparison to SSH:

it will just do a full SSH implementation, right? But if it's Mongo or MySQL, or if it's maybe a Kubernetes cluster, then it will connect that connection directly to the cluster and it will put the certificate on the wire, because all of these workloads actually support certificates; Kubernetes supports certificates, databases do. Hmm. But the proxy by itself is really dumb. It simply connects sockets together, and that's

by design. Mm hmm. Because you might have an attack on the proxy; not because Teleport is vulnerable, but simply because there is always an attack surface. You might have WordPress or some other older application deployed on the same machine, and a human made a mistake somewhere, and bad people have gotten onto the proxy. Fine: there are no secrets on the proxy. The

way it works is that before you even connect through the proxy, Teleport will look on the wire and see if you have a certificate. And if you don't have a certificate, it will redirect you to an identity manager, like Okta, SailPoint, Active Directory. It will say: go do your SSO. So you go through that, you log in using whatever choice of authentication your company prefers, and then that system will redirect you back to Teleport with a certificate

on the wire. Mm-Hmm. Now, actually, that's wrong; that's not how it works. It redirects you back to Teleport with metadata on the wire. We have standards like SAML or OpenID Connect where your identity is going to be there. So the proxy will then take that identity and send it to the second component of Teleport, called the Certificate Authority. The Certificate Authority will look at you and say, OK, which protocols are you using? You need the

SSH, you need MongoDB. And then it will issue certificates for everything you need, and those certificates will go back to you, they will be put on the wire, and then you'll be redirected back to the proxy. And then the proxy will look at these certificates and it will connect you to this thing. And on top of that, it's encrypted end to end; the proxy only looks at the certificate, it doesn't actually see any data. So then your connection is established, let's say, into MongoDB or a Linux

box, and decryption only happens there. So that is the magic.

S1

Reminds me of Kerberos a little bit.

S2

Yeah, exactly. You see, these ideas are not new. We as an industry know what the best practices are; it just so happens that they're not always available to people, so they have to resort to workarounds. So that's really what we do. We just make best practices easy by making

them the default. So now when you're connecting to, let's say, a database or Kubernetes, you have certificates on the wire, which means that Kubernetes will put you in the right group and role-based access control will kick in. And your metadata will be in the audit log, so it will all now be synchronized. And then the same thing will happen if you are a bot. If you're a backup script and you get started by a scheduler inside of a Kubernetes cluster,

the certificate will be injected into Kubernetes secrets, right? So now you have certificates. So if you want to make connections to anything: in every programming language runtime, when you open a socket, there is an optional certificate parameter for TLS, right? Just put it there. That's all you have to change, one parameter, one line of code, and then you

don't need API keys anymore. That's it. Now your bot, your automation, has an identity, and that identity will be used to give you access to whatever it is you need to back up. Mm-Hmm. So you see how it works for software. Now, on the hardware side: when the machine is booting, you see, Teleport is a daemon, so the first thing it does when it comes online is its own auth. It will go to the certificate authority and say, hey, I'm

over here, I'm a production host. If that's true, the certificate will be issued and it will land on the box. So you see, every microservice has a certificate, every human has a certificate, every piece of hardware has a certificate. And on the roadmap, we're going into the client side as well, which means your laptop: when you try to get in, the laptop needs to be compliant, and it has a TPM, so your laptop will receive a

client hardware certificate. And so now you have basically covered everything with certificates. They have these identities, and now you can do these fun things with policy enforcement, asking questions: who has access to what?
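The "one extra parameter" change for automation, presenting a client certificate instead of an API key when opening a connection, looks roughly like this in Python's standard library. The host and file paths are placeholders; in practice the cert and key would be issued and refreshed by the certificate authority.

```python
import socket
import ssl

def connect_with_identity(host: str, port: int,
                          cert_path: str, key_path: str) -> ssl.SSLSocket:
    """Open a TLS connection that presents a client certificate."""
    ctx = ssl.create_default_context()
    # The one-line change: attach the client's identity certificate.
    ctx.load_cert_chain(certfile=cert_path, keyfile=key_path)
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)
```

The rest of the client code is unchanged; the server decides what this identity may do.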

S1

Yeah, really, really cool stuff.

S2

Yeah. But let's talk about the negatives, because if it was so obvious, why didn't it exist before? So the sacrifice is a complete goodbye to backwards compatibility. Mm-Hmm. Because you have so much infrastructure out there. Let's start with routers: every piece of network equipment has SSH baked into it, and you cannot get it out, and it works using public/private keys; there is no certificate support. If you look on the client side, like on

Windows using PuTTY: no certificate support. Yeah, there are versions of the SSH agent on some Linux distros that cannot hold certificates. So we basically made the choice that if we were to make a difference, we would have to go where the puck is going and say: if it doesn't do certs, you should stop using it. So that's probably the most visible drawback of the system.

S1

That makes sense. And it would make sense that it's easier for newer orgs: if someone were to spin up a new company tomorrow and start with this, it would be much easier.

S2

Well, you would be surprised. On one hand, what you're saying is definitely true. But let's also remember that any large company is basically a collection of orgs, right? Yeah. And certain ones are newer than others. Mm-Hmm. Samsung is arguably a huge company that's definitely not new; it feels like it's half of the South Korean economy. Sure. But we have a significant presence at Samsung. The teams that are starting new projects, or simply operating on a newer technology stack, are adopting Teleport more and more frequently, and we have plenty of large companies on board.

S1

Yeah. And there are a lot of companies who periodically say: how should we be doing this differently? There are regular intervals where it's the perfect time to reevaluate and move into the future rather than keep doing the old thing.

S2

Yeah, I think, other than fragmentation of access, which is absolutely killing everyone, another huge problem is the treatment of secrets. Mm-Hmm. For a long time, it was considered acceptable to store secrets in an encrypted way, as long as you do it properly. So you would use some kind of encrypted vault; you would rely on encryption to protect your infrastructure. And it's okay, it's fine. We have solid algorithms; we've had amazing mathematical breakthroughs. So let's do that. And that's true. But the reason why secrets no longer work is scale. So let me walk you through why. Key management is one use case, but you can think about it in generic terms: you have data, and the standard says data should be encrypted at rest. OK, encrypted with what? With a secret. So where does that secret go? And you have instances of that sprinkled all over: you have API keys for internal and external services, you have messaging credentials, you have data like backups, encrypted, just like all of the secrets. Where do they go? What is happening? So that is a growing concern and problem in the space, because it's not scalable, and

this is where scale breaks. So imagine for a second that you have a single secret in your infrastructure, your infrastructure is a single server, and your team is five people. Everything is really small. So you encrypted something with that secret. And then you have to ask yourself: what is the probability of that secret being stolen? OK, let's look into it. If you encrypted something, it means there is decryption happening. Mm-Hmm. If decryption is happening, it's either manual, meaning one of your five engineers will just remember the secret and do the decryption, or it's a piece of automation, something that you periodically deploy. Mm hmm. OK. So if you are periodically deploying, there's a slight chance you will have a bad deployment: type the wrong thing on the keyboard, flip the wrong flag, and your secret ends up in your public GitHub repository, because you check in code every once in a while. We make mistakes; we're humans. Mm hmm. So there is a tiny chance that might happen on the automation side. But if it's on the human side, same thing: humans make mistakes. There is a tiny chance you're going to click on the wrong attachment, you're going to get hooked by a phishing attack, your laptop will get compromised, and the secret will be stolen. But you would think: oh, we're going to follow the best practices, we hired good engineers, smart people, it's not going to happen. OK, fine. But then what happens when you have two secrets, and then two servers, and then 10 engineers, and then 100 secrets and 100 servers and a thousand engineers? As you scale, as you process more and more data, that combined probability of human error keeps increasing. And I will never accept, and we shouldn't, that humans are infallible.

We will eventually make a mistake. Which means that the mere existence of secrets in your infrastructure is a liability, and the bigger you get, the bigger that liability becomes. So the future, then, is to move to a completely secretless future, where there are zero infrastructure secrets present anywhere. And that is where certificates come in. Maybe we don't eliminate the need for secrets for all things infrastructure, but we eliminate the need for static credentials for access. The API keys go away, because your applications will get these ephemeral certificates that automatically expire, so you don't really need encrypted storage for your API keys. Things like SSH passwords go away, things like private and public keys go away, things like passwords to Windows go away. You see, that's a huge liability exposure that certificates and identity just eliminate, which is a big, big shift that's happening in the industry right now.
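The scaling argument above can be made concrete with a little arithmetic. If each event that touches a secret (a deploy, a manual decryption, an email opened) carries some tiny independent chance p of leaking it, then across n such events the chance of at least one leak is 1 - (1 - p)^n, which creeps toward certainty as n grows. The value of p below is an assumption chosen purely for illustration.

```python
# Sketch of the "secrets don't scale" argument:
# probability of at least one leak across n independent risky events.
def leak_probability(p: float, n: int) -> float:
    """Chance that at least one of n independent events leaks a secret."""
    return 1 - (1 - p) ** n

p = 0.001  # assume a 0.1% chance of a mistake per event (illustrative)
for n in (1, 100, 1000, 10000):
    print(f"{n:>6} events -> {leak_probability(p, n):.3f}")
# Roughly: 0.1% at 1 event, ~9.5% at 100, ~63% at 1,000, near-certain at 10,000.
```

This is why "lower the error rate" is not a fix: as long as p is nonzero and n keeps growing with the organization, the curve heads toward 1. Removing the static secret sets p to zero for that class of failure.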

S1

I love this. I feel like it's where things have to go, because complexity just continues to grow. Exactly. This is the only way to actually address it: with a system like this.

S2

You know, I enjoy seeing the similarities in how humans think about different problems. When we talk about carbon footprint and global warming, it's basically the same answer. Yeah. It needs to go to zero. Simply lowering something by 20 or 30 percent is not enough, because it keeps accumulating; if you keep adding 20 percent all the time, eventually that curve is going to get you.

S1

It's just like you said with population: the population grows, and what was a small problem becomes a big one. Yes.

S2

Yep. Yep, yep, exactly. Exactly.

S1

All right. Well, this has been fantastic. Definitely enjoyed this. I wish it was a real lunch. Me too. Yeah. Maybe sometime soon. And thank you so much for the time.

S2

Yeah. Thank you for asking all the right questions. I couldn't have hoped to cover so much so quickly. Awesome.

S1

All right, you were listening to this standalone episode. We'll see you next time.
