¶ Engineering in the AI Era
It's become a joke how the era of AI has led to the demise of every job in tech. In fact, so many jobs have died this year, I'm surprised I haven't been invited to more funerals. Product management is dead, user research is dead, and most egregious of all, software engineering is dead. I'm probably preaching to the choir here, but anyone who really believes that software engineering is dead because your friendly neighborhood LLM can write code is definitely not an engineer themselves.
My guest today, Cortex.io founder Anish Dhar, would argue that engineering is definitely not dead. It's just growing up. Formerly an engineer at Uber, Anish founded Cortex to make it easier for engineers to understand complex codebases. As an engineer himself, and someone whose users are engineers, what he's seeing in this space is actually an evolution in engineering excellence and a disconnect between new and old ways of measuring it.
We share next-gen thinking on measuring and evaluating excellence in engineering, and a hot take on how the vibe coding craze fits into the conversation. Oh, by the way, we hold conversations like this every week, so if this sounds interesting to you, why not subscribe? Okay, now let's jump in. Welcome back to The Product Manager podcast. I'm here today with Anish Dhar. He's the founder of Cortex.io. Anish, thank you for making time to talk to me today.
Thank you so much for having me.
Yeah, so can you tell us a little bit about your background and how you arrived at your role today?
Yeah, absolutely. So I'm the co-founder and CEO of Cortex.io. We started the company about six years ago, but before that I worked as an engineer at Uber. I really started my career there, and a lot of the problems that I faced as an engineer at Uber inspired the reasons we started Cortex, which I founded with two really close friends of mine. Uber has this massive internal service architecture.
As an engineer there, it was really difficult for me to understand different parts of the codebase, especially when I joined. There were so many different services being built, and it added this intense complexity. And I was talking with a really close friend of mine who was an engineer at a very small startup called Lend. They only had a hundred engineers while Uber had over a thousand.
But we were both facing similar challenges around organizing and understanding our service architecture. And so that rang these alarm bells: okay, if Uber on one end of the spectrum, at planet scale, and this other company that's just starting its journey on microservices have the same problems, it's clear that this is a big problem in the industry. And so we ended up starting the company. We went through the Winter '20 Y Combinator batch. Fast forward to today: we just raised our Series C, and a few hundred different enterprises use Cortex to manage their complexity.
Cool. That's an amazing journey, and it's always wonderful to hear when a company comes outta the ashes of an issue that you know intimately. And speaking of which, we're gonna be talking about engineering excellence and what that looks like in today's tech landscape on this episode. So of course this is an issue that you've been very close to throughout your career.
¶ Defining Engineering Excellence
So to kick us off, how do you define engineering excellence in 2025, and why is it becoming such a critical focus for CTOs and VPs of engineering right now?
Yeah, absolutely. It's a great question. So what we found at Cortex is that for a long time, the conversation was really focused on developer experience. And developer experience is a really critical part of any engineering organization, right?
It's simple things like making sure that when developers join a company, it's really easy to get their internal systems set up and connected to GitHub and the various tooling they have, or when they're deploying a service, that the infrastructure is set up in the right way so that there aren't a lot of troubleshooting steps to get there.
But what we found over the last couple of years especially is that the conversation has shifted a lot from just developer experience to what we call engineering excellence. And I'd say the big difference between the two is that engineering excellence is really the focus of different teams within your organization, so you can think SRE, security, developer productivity, even developer experience, but it really aligns them to actual business outcomes.
And I think that's the key difference here, where a lot of engineering excellence is thinking about how the work I'm doing actually impacts the business and how it actually moves forward the goals that we have, all the way from the CEO's organization down to the specific SRE, for example, that's working on it. A good example: as an organization, we're really focused on improving our customer experience. We want our product to be reliable when customers use it, and a better customer experience leads to more revenue because people are using our product more. So as an engineering excellence initiative, your SRE team might have a production readiness checklist that they're trying to implement, because before services are deployed, they wanna make sure that all services are meeting the standards of the organization. And so you can see how an initiative that starts with the SRE team drives back to this real business outcome that the organization cares about. And I think it's really critical for teams to think about their initiatives in this way, because it reaffirms the value and aligns the technical initiatives with something the business cares about, which is what I think engineering excellence is all about.
¶ The Four Cs Framework
Interesting. And this, I'm sure you've described many times, is kinda like a never-ending journey which involves many disciplines working in tandem. Tell me about this framework that you've developed. What do the key pillars look like? How do you look at this in your own organization?
You're absolutely right. It really is a never-ending journey. A lot of the companies that we work with vary: especially at the larger enterprises, there's a lot of legacy infrastructure, versus maybe a newer company that's built on newer technologies and is maybe even AI-first. There are still different initiatives that revolve around engineering excellence, and they really require, I think, a thoughtful approach across all these different teams about what the work is and how it ultimately impacts the business excellence goals we have. And so from a framework perspective, one of the things that we have really worked on is, yeah, how do you define engineering excellence for your organization?
And I think the way that we think about it is that it starts with business excellence, right? There are different goals that you have as a leadership team, and they can be around things like unlocking innovation and reducing time to market. It could be lowering costs and increasing efficiency. And then usually the third one we see, which I just mentioned, is how do you improve quality and customer experience? And then underneath that are really the pillars of engineering excellence. These are the different teams and practitioners that make up the initiatives that drive those eventual goals, and that's things like velocity, efficiency, security, reliability. Even within those subcategories, you'll have initiatives: maybe there's a security migration or a production readiness checklist, maybe there's an incident management process that you're trying to implement.
Or just something as simple as: we want to track DORA metrics to understand, from a productivity standpoint, how our engineering team is actually performing. And then the foundation of any engineering excellence initiative really comes from what we like to call the four Cs: complete visibility, continuous improvement, consistent developer experience, and, of course, clear ownership, because without ownership and understanding of the different parts of your codebase and all the services, it's really hard to actually drive these initiatives forward. And so typically we find that without that foundation, it's really hard to drive any initiatives. We also see that IDPs, or internal developer portals, are a really strong way to build that foundation. It can also be done through internal tools, but you just need some sort of system to understand what people are building so you can drive these engineering initiatives forward.
Okay. So I wanna dig into something that you mentioned there about measuring performance, because I know that there's a little bit of tension around measuring developer productivity, and metrics like lines of code can be a little bit controversial among engineers. So how should engineering leaders think about measuring productivity in a more holistic way that takes into account all these Cs and that kind of thing?
Yeah, absolutely. I think the interesting thing is that a lot of developer productivity over the last few years has been about lines of code or DORA metrics, and there are a lot of different frameworks that get created to simplify how to think about productivity, and there's some truth to how those metrics are calculated, right? Yeah, lines of code isn't a good indication of whether someone is being productive or not. But if you're delivering zero lines of code consistently, quarter to quarter, there's something clearly wrong with the output, and even comparing team to team, it's interesting sometimes to see those data points. But I think the conversation has really shifted from, okay, I have this data, to, how do I actually get engineers to think about that data or improve it? If you really break it down, it's a completely different problem.
A way more difficult one, because anyone can go in, hit the GitHub API, get these metrics, and get a snapshot of how your team is doing. But just because I show a set of metrics to an engineer and say, hey, we have to improve this metric, as an engineer that doesn't really mean anything to me, right? I'm focused on building software for the business, and I'm focused on doing that in the most efficient way possible. But I think the conversation has really shifted, especially with the CTOs that we work with, to: okay, I have these metrics, now how do I translate them into something that an engineer cares about? And I actually think that's where engineering excellence plays such a critical role, because engineers, especially ones who work at fast-growing companies, want the business to grow. They're building products because they wanna see the impact of their work with customers. I think developer productivity has really shifted from here's just a bunch of metrics to, as a business, these are the things we care about, and these metrics tell a story as part of that. But for an engineer, it's: how do you translate that into something where the work I'm actually doing means something? And so I'd say that's the big shift that we've been seeing.
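To make the "anyone can hit the GitHub API" point concrete, here's a minimal sketch of pulling a deploy-frequency snapshot from GitHub's standard REST deployments endpoint. The owner and repo names are placeholders, and a real rollout would paginate the results and count only successful production deployments:

```python
# Rough deploy-frequency snapshot from GitHub's REST deployments endpoint.
# "acme" / "checkout-service" are placeholder names; set GITHUB_TOKEN first.
import os
from datetime import datetime, timedelta, timezone

import requests

GITHUB_API = "https://api.github.com"
TOKEN = os.environ["GITHUB_TOKEN"]


def recent_deploy_frequency(owner: str, repo: str, weeks: int = 4) -> float:
    """Return the average number of deploys per week over a trailing window."""
    cutoff = datetime.now(timezone.utc) - timedelta(weeks=weeks)
    resp = requests.get(
        f"{GITHUB_API}/repos/{owner}/{repo}/deployments",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"per_page": 100},  # a real version would paginate past 100
        timeout=10,
    )
    resp.raise_for_status()
    recent = [
        d for d in resp.json()
        if datetime.fromisoformat(d["created_at"].replace("Z", "+00:00")) >= cutoff
    ]
    return len(recent) / weeks


if __name__ == "__main__":
    freq = recent_deploy_frequency("acme", "checkout-service")
    print(f"{freq:.1f} deploys/week over the last 4 weeks")
```

The point isn't the script; it's that a raw number like this is cheap to get, and the hard part, as Anish says, is translating it into something engineers can act on.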
So have you noticed that there are some outdated evaluation methods or KPIs that people are starting to move away from? What would you say is the new school of evaluation, or do you have any specific examples you could share?
¶ Input vs Output Metrics
Yeah, absolutely. So I would think about productivity in terms of input and output metrics. Output metrics are all of the classic frameworks that you see today to track developer productivity. One of the most famous and popular ones is DORA metrics, which is a set of metrics that's supposed to give you a holistic view of how your engineering team is performing. I would say that for most engineering organizations today, while they want to see and capture these output metrics, it goes back to what I was saying a little bit earlier: okay, then how do I actually influence those metrics and see them move? And so what we've been seeing, especially with our customer base, and I think why we've seen this interesting growth at Cortex over the past few years, is this whole concept of input metrics that influence output metrics.
For example, let's take something like deploy frequency, right? Deploy frequency is a great metric to look at because the rate at which your engineers are deploying software is probably a good predictor of how fast you're shipping product, which at the end of the day is how you beat your competition to market and how you just move faster as a business. So maybe as an organization, you've decided that deploy frequency is the main metric that you wanna track. Okay, I have a dashboard of deploy frequency, I can put it in my OKRs, pull it up, and show the whole engineering team: hey everyone, we're deploying two times a week and we wanna get that to four. Okay, but as an engineer, how am I actually supposed to think about that as it relates to my work and my part of the business, or the services that I own, right? Obviously deploying faster means maybe you have to ship more, but does that lead to increased bugs? Because I'm shipping more, is reliability gonna go down? There are so many different variables that go into this, and I think that's where input metrics become really critical, because the input metrics ultimately influence the output metrics. And so for deploy frequency, maybe there's a process that you put into place to actually see it go faster. I can give you an example.
One of our customers, O'Reilly, had a very similar initiative, and they were tracking deploy frequency using one of our engineering intelligence dashboards. What they found was: okay, we want our engineers to deploy faster, but the way we're gonna do that is by putting really important guardrails on how engineers deploy and giving them really clear guidelines on what a good deploy looks like as it relates to our reliability guidelines. Because what was happening was that engineers were trying to deploy faster, but it would lead to bugs or things would break. So there was this hesitance to actually move as quickly as possible, because there was all this customer impact happening.
And so going back to those input metrics, they basically came up with this production readiness checklist, and it was a list of eight or nine different input metrics that as a whole give a really good understanding of whether our deploy process is actually healthy or not. These are metrics like: is our on-call set up correctly? Is our build process actually passing? Do you have tests that are passing on your services? And what putting those input metrics in place did was give engineers a really clear guideline: okay, I own these ten services, this is the process, and these are input metrics that actually mean something to me because they represent the services and how they work. What they saw was a gradual increase in deploy frequency, from two times a week to three to four, especially with the critical services, and also a reduction in things like incidents.
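For a sense of what input metrics like these can look like in practice, here's a minimal, hypothetical sketch of a production readiness checklist in code. The service fields and checks are illustrative stand-ins, not Cortex's actual scorecard model; in a real system, each check would query your on-call, CI, and test tooling rather than static flags:

```python
# Hypothetical production readiness checklist: each entry is an input metric
# expressed as a pass/fail check against a service's metadata. In practice,
# these booleans would be populated by querying on-call, CI, and test systems.
from dataclasses import dataclass


@dataclass
class Service:
    name: str
    oncall_configured: bool   # e.g., an escalation policy exists for this team
    build_passing: bool       # latest CI build on the main branch is green
    tests_passing: bool       # test suite runs and passes for this service
    runbook_linked: bool      # an operational runbook is attached


CHECKS = {
    "On-call rotation configured": lambda s: s.oncall_configured,
    "CI build passing": lambda s: s.build_passing,
    "Tests passing": lambda s: s.tests_passing,
    "Runbook linked": lambda s: s.runbook_linked,
}


def readiness_report(service: Service) -> dict:
    """Evaluate every input metric for one service."""
    return {name: check(service) for name, check in CHECKS.items()}


def is_production_ready(service: Service) -> bool:
    """A service is deploy-ready only when every check passes."""
    return all(readiness_report(service).values())


if __name__ == "__main__":
    svc = Service("checkout-service", True, True, False, True)
    for name, ok in readiness_report(svc).items():
        print(f"{'PASS' if ok else 'FAIL'}  {name}")
    print("Ready to deploy:", is_production_ready(svc))
```

Each check is something an individual engineer can act on directly, which is exactly what makes input metrics feel meaningful in a way a raw deploy-frequency number doesn't.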
And so going back to your original question, I think a lot of enterprises are thinking: okay, yeah, we have those output metrics, but then how do I translate them into something engineers care about? They feed into each other in a very meaningful way, and I think you have to think about your developer productivity and metrics in general as a complete story between the two.
¶ AI Tools and Production Quality
Yeah, that makes it a lot more... I think holistic is a really good way of looking at it. Switching gears just slightly, while we're on the same topic of deploying more, and more quickly: we can't have a conversation about engineering in 2025 without talking about vibe coding. Let's talk about these AI tools that are transforming coding workflows right now. I've heard you say before that you can't vibe code your way to a million users per day. Maybe a hot take, maybe not. So what would you say is the reality versus the hype when it comes to using AI in production-scale environments?
Yeah, it's certainly a very hot topic, and I think that every engineering organization is thinking about AI or has adopted some sort of AI coding assistant, right? There are several really popular ones out there in the market, including Cursor. Even on the engineering team at Cortex, almost everyone is using some sort of AI coding assistant to help them with their day to day. But from what we've found talking to engineers on our team, and from working with customers who have been thinking about similar sorts of initiatives, I think AI coding assistants are great for when you have an initial idea and you want to quickly validate something, or if you're a front-end engineer and you wanna quickly mock up how something could look and feel. What we've seen is that, from an engineering development perspective, it's perfect for something like that: you want something quick and dirty, something to just show people how something can work. Or if you have an idea as an entrepreneur and you wanna quickly validate it. I think that's where you're seeing such amazing growth.
But the reality of where coding assistants and vibe coding are today is that you could never trust code shipped from vibe coding to actually power a production system that is being used by millions of different users. And that's just from what we've seen, right? At the end of the day, I think vibe coding is at best a junior engineer who has just learned how to code, versus the senior and staff engineers who understand system design and how infrastructure is actually deployed at scale. We are just not nearly there, and I'm not saying that it can't get there. I think the rate at which AI is developing is unbelievable, and it'd be stupid to say there's not a world in which AI systems can understand actual production instances. But the reality of it today is, I can tell you for a fact, there's no enterprise out there that's relying on vibe coding to power any production system that handles millions or billions of users, because the level of technical expertise you need to set up those systems, diagnose them, and make sure that they're scaling... it's just not even close to that. But as a whole, I would say productivity has improved with coding assistants. It's just in different areas than the market likes to talk about, because it's sexier to talk about vibe coding. The reality is it's just not ready for enterprise production systems.
I think that's very relatable to a lot of professionals who feel like folks outside their profession are getting excited about the accessibility of their profession using AI tools. But that doesn't mean that it's necessarily coming right behind you. This is an important thing to talk about, though. We have leaders listening to the show, so for engineering leaders trying to balance investment in AI tools versus their headcount: from your perspective, what should they be considering?
Like how do you evaluate the actual impact that AI is having on a team's productivity, and how do you take that into account with your budget?
It's a great question, and I think there are a lot of different facets to it. Taking it from a very macro point of view: engineering leaders that prevent their teams from looking at these tools, or from accessing things like Cursor or GitHub Copilot or whatever, I think it's a big disservice to the overall health and quality of your engineering team over the very long run. Because the reality of it is, in the next 10 years, a lot of systems, or at least the initial code that people write, will be AI-assisted. Going back to what I was saying earlier, if you're just starting a company or an idea, or you want to quickly iterate and test something, it's just 10 times faster using something like Cursor, because you can so quickly iterate on your ideas, and you don't really need to think about scale or how things work. And so even if you take a very long-term view of it:
I think that's why engineering teams that are adopting this AI tooling and learning how to use it in different parts of their coding lifecycle, even at the largest enterprises where you need production-scale systems, those engineers as a collective whole are gonna be at an advantage compared to people who are saying, oh, it's just hype. So starting there, I think it's very important that engineering leaders let their teams explore these types of things and even give their non-technical users access to these tools.
Because that's probably the most interesting innovation that I've seen, especially from our customer base: product managers and TPMs and data scientists, who understand technical concepts but maybe didn't have the expertise to code, can take ideas and share them with the engineering team in a much more powerful way, because they can actually spin up code and things like that.
I think from that perspective, it'd be really foolish not to create budget to give your engineers access to these tools. Now, I think the million dollar question is: how much productivity are we actually gaining from this? And honestly, every single enterprise is trying to answer this right now. We see it all the time, right? A lot of times customers will buy our product in conjunction with something like GitHub Copilot.
And then the first question they ask us at Cortex is: okay, we have this tool that is supposed to 3x or 4x the output of our engineering team; we actually wanna figure out whether that's happening. And I think it comes back to that earlier system of input and output metrics I was talking about, right? It's not enough to just pull up deploy frequency and ask, did GitHub Copilot have an impact on that? Because maybe GitHub Copilot is making you ship faster, but it's really bad code, and that leads to real reliability issues, right? So you have to take almost a 360 view of it and look at different metrics. And I think it comes back to engineering excellence at the end of the day, right? As a business, forget vibe coding, forget everything else: what are we focused on as a business right now? What are the things that are blocking us from hitting our next scale of growth, or whatever it is as a business you care about? You have to think about whether, at the end of the day, the introduction of the coding assistant or AI tool is actually making a difference or an impact from that perspective. And the reality of it is, for a little while, you might not really see an impact or difference, because I do think it takes some time to understand where these kinds of tools will have an impact on your business.
I think maybe the mistake that I see a lot of enterprises making is buying into the hype immediately, just buying 5,000 licenses of whatever cool tool they see, and then six months go by and they ask, okay, is there actually an impact? And you might have a very negative reaction without having had a thoughtful strategy on why you're introducing these types of tools. And maybe the strategy is what I was talking about earlier: we don't wanna be left behind, we want engineers to continue to feel like this is a cutting-edge place to work, and we just think it's advantageous for us to have AI-assisted tools on our engineering team. Even if it's something as simple as that, at least you know the intentions of why you're buying it. And maybe that's the fix for the mistake: just understand your intentions, and then you can do gut checks along the way of, is this actually doing what we thought?
I tend to agree. I think that right now there's so much pressure to have mastered these tools and figured out where they fit into the workflow. But what we're running into across many departments is that the difference between quantity of output and quality of results is vast. And we're really having to be intentional, not just in the engineering departments: are we applying these tools in the right context in order to facilitate and empower the best of our headcount, rather than just blindly assuming next year's headcount is going to be lower because AI tools will be able to replace it? So it's interesting to see everybody figuring this out in different ways at the same time, and it's a very disorienting time to be in tech.
Speaking of AI tools becoming more prevalent in development workflows, and speaking of quality: how do we think about finding that balance of maintaining code quality, security standards, and reliability, while also wanting to make sure that we are cutting-edge workplaces taking full advantage of these tools? In your organization, for example, how have you approached that?
Yeah, it's a really good question. And just really briefly, I will say I think the idea that AI is replacing engineers is so far-fetched and silly. What is true is that maybe when you're first starting out, instead of hiring like 15 engineers, maybe you can hire a few less than that, just because in the early days of iterating and trying to find product-market fit, you can do so much more with a tool like Cursor than you previously ever could. You can try out 10 different ideas really quickly, and I think that speed of iteration is really powerful.
But yeah, I think once you're actually deploying production systems, in my opinion, enterprises that are saying, oh, we don't have to hire as many engineers because of AI, are just trying to get some nice media attention or something. There's just no way. I know that for a fact.
Without naming names.
Yeah, all of them are gonna continue hiring; engineers will never not be in demand. But going back to your question about how AI tooling and AI coding systems are gonna impact some of these key pillars like reliability and security: honestly, I think that's the big open question right now, and probably the biggest thing that scares teams like SRE or security.
Or operational excellence. Because I think the introduction of these tools ultimately creates more surface area, since people are shipping just more code. And the more of your system that is built with AI, the more likely it is that you don't really understand how everything works internally. What that means is that when there's inevitably a reliability issue, because that is a forever constant, no matter how much you prepare, one day there will be something that happens, whether it's an influx of users that you didn't anticipate or a part of your codebase that you didn't fully understand that blows up, the more of your system that was AI-assisted, the more difficult a place you're in, because as an engineer, how do you actually go and traverse all the different parts of your codebase if you don't fully understand it? And I think that's maybe the biggest downside I see right now, and why, actually, I think that foundational layer of engineering excellence I was talking about becomes more critical than ever. Complete visibility and clear ownership are really key pillars of any engineering excellence initiative, and the more of your systems are created through AI, honestly, the less coverage you have on those things, because who actually owns it? Who actually understands it and has that visibility? I think those are the things that scare a lot of security teams and reliability teams.
And honestly, we see it with some of our customers, and we even see it with our own engineering team, right? When we do publish code with AI, or something is created with AI, engineers are very careful to mention that, hey, some of this was a hundred percent generated through Cursor or whatever, and we take an extra look at those systems. I've also seen this huge trend around AI-assisted testing. There are a lot of companies right now that are actually creating like an AI engineer that reviews your tests and stuff. Sometimes I see that and it's a little bit dicey, because at the end of the day, I think AI systems are extremely powerful and are doing more good than harm in engineering teams today. But it's just scary to think about a world where 80% of my system is written through AI. It's gonna lead to reliability incidents that take longer to resolve, and security incidents might go up, because you just have less visibility. And I think that's the thing that you have to watch out for.
I tend to agree, and I think that's the less talked about logistical problem that a lot of teams are dealing with when it comes to enhanced output, which also means an enhanced demand for oversight. We just had a conversation last week with the SVP of product management at Mastercard Gateway, and he was talking about how a lot of AI tools are really accelerating the ability to complete things like the forms required to enter new markets: the inordinate amount of paperwork that you need to do in order to enter a new market and comply with all the regulations. AI is allowing a lot of that to be completed at a really rapid pace. But then there also has to be that level of oversight to ensure that there's ownership over those submissions. And so there's that push and pull: you can complete it quicker, but you also need to be able to be accountable at the same speed. And that, I think, is a huge logistical bottleneck for a lot of engineering teams, it sounds like, as well as other kinds of folks who are using these tools to accelerate. So I think that's a layer that bears underscoring a little bit: sure, we can ship fast, but can we also, at the same speed, say, yeah, I'm accountable for that work? This is so interesting to talk about.
We've talked now to leaders in all kinds of departments and all kinds of disciplines, and it's very fascinating to see how a lot of these concerns are so parallel to each other despite the fact that the discipline itself is so different. So I really appreciate all the insights you shared today. That concludes our episode. Where can folks follow you online, Anish?
Yeah. I would say that you can follow me on LinkedIn or Twitter; if you just search my name, you'll find me. And then Cortex as well: you can find us on LinkedIn. We're always posting, and we're actually hosting engineering excellence summits all across the world, so if there's one in a city near you, definitely come through. We just like to have a community of people who think about these types of things, and every company's thinking about them. So it's really cool to see such a great community come from that. And then we also started hosting our conference, called IDPCON, and it's really centered around engineering excellence. We have leaders from different types of enterprises who come, and developers who are just interested in connecting with other developers around engineering excellence. Lots of great talks. It'll be in New York City in October, so hopefully we'll see you there as well.
Yeah, that sounds fantastic. Thanks for letting us know. We'll make sure to plug that in the description box, and thank you so much for making the time to join us.
Yeah, thanks for having me. It was great.
Thanks for listening in. For more great insights, how-to guides and tool reviews, subscribe to our newsletter at theproductmanager.com/subscribe. You can hear more conversations like this by subscribing to The Product Manager wherever you get your podcasts.