A lot of the bets we're making inside the company are for things that are not, you know, three, four weeks away. We should be cannibalizing the existing state of our product every six to 12 months. Every six to 12 months, it should make our existing product look silly. It should almost make the form factor of the existing product look dumb. How do you know when it's time to hire someone?
I wanted the company to almost be like this dehydrated entity. Every hire is like a little bit of water. And we only go back and hire someone when we're back to being dehydrated. Any other skills you think people should be investing more in with the rise of AI, building more and more of our product? The engineers are now able to produce more technology.
The ROI of building technology has actually gone up. This actually means you hire more. The best thing to do is just get your hands dirty with all of these products. You could be a force multiplier to your organization in ways in which they never even anticipated. Today, my guest is Varun Mohan. Varun is the co-founder and CEO of Windsurf, which has quickly become one of people's favorite AI coding tools and is basically the main competitor to Cursor, with over 1 million users four months in.
In our conversation, Varun shares what makes Windsurf unique, why they decided to invest heavily in enterprise sales very early in their history, why agency is going to be the most important skill for engineers and product builders to build, also the story of how they started out as a GPU infrastructure company and realized there was a much bigger opportunity up the stack, and the two pivots that got them to where they are today.
He also gives a live demo, advice for being successful with Windsurf, and so much more. Thank you to everyone on LinkedIn and Twitter and my newsletter community for suggesting great questions to dig into with Varun. If you enjoy this podcast, don't forget to subscribe and follow it in your favorite podcasting app or YouTube. Also, if you become a yearly subscriber of my newsletter, you get a year free of Perplexity Pro, Notion Plus, Linear, Granola and Superhuman.
Check it out at Lennysnewsletter.com. With that, I bring you Varun Mohan. This episode is brought to you by Brex, the financial stack used by one in every three US venture-backed startups. Brex knows that nearly 40% of startups fail because they run out of cash.
So they built a banking experience that focuses on helping founders get more from every dollar. It's a stark difference from traditional banking options that leave a startup's cash sitting idle while chipping away at it with fees. To help founders protect cash and extend runway, Brex combined the best things about checking, treasury, and FDIC insurance in one powerhouse account. You can send and receive money worldwide at lightning speed.
You can get 20x the standard FDIC protection through program banks, and you can earn industry-leading yield from your first dollar. while still being able to access your funds anytime. To learn more, check out Brex at brex.com slash banking dash solutions. That's brex.com slash banking solutions. This episode is brought to you by Product Board, the leading product management platform for the enterprise.
For over 10 years, Product Board has helped customer-centric organizations like Zoom, Salesforce, and Autodesk build the right products faster. And as an end-to-end platform, Product Board seamlessly supports all stages of the product development lifecycle, from gathering customer insights, to planning a roadmap, to aligning stakeholders, to earning customer buy-in, all with a single source of truth.
And now, product leaders can get even more visibility into customer needs with Product Board Pulse, a new voice of customer solution. Built-in intelligence helps you analyze trends across all of your feedback and then dive deeper by asking AI your follow-up questions. See how Product Board can help your team deliver higher impact products that solve real customer needs and advance your business goals. For a special offer and free 15-day trial,
visit productboard.com slash Lenny. That's productboard.com slash L-E-N-N-Y. Varun, thank you so much for being here and welcome to the podcast. Lenny, thanks for having me. Long-time listener. Oh, I really appreciate that. I'm so excited to have you here. I feel like you guys have become this
overnight success, which is definitely not an overnight success, but I feel like I've been hearing about Windsurf more and more as people's favorite AI tool. And I just don't think people know the story behind Windsurf, behind Codeium, the company that you built. So I thought it'd be good to maybe just start there. And have you just briefly share the history of Codeium and how Windsurf emerged out of Codeium?
Yeah. So the company was actually started close to four years ago. As you know, AI coding was not a thing four years ago. ChatGPT was not out four years ago. At the time, we actually started out building GPU virtualization and compiler software. Right. Before this, I worked in autonomous vehicles. My co-founder, who I've known since middle school, worked on AR VR.
at Meta. And for us, we believed deep learning would touch many, many industries, right? It wouldn't just touch autonomous vehicles, it would touch financial services, defense, healthcare. And we believed these deep learning applications were hard to build, so we made it possible for you to effectively run these
complex applications on computers without GPUs. And we would handle all the complexity of being able to actually run the workload on the GPUs for you, right? And we were able to optimize these workloads a ton. And the middle of 2022 rolled around and we had a couple million in revenue. We were managing upwards of 10,000 GPUs. We had eight people. At the time, we were free cash flow positive.
But I think what we felt was, once these generative models started to get very good, a lot of what we built was not as valuable. And this was a very, very hard moment for us at the company. We were only eight people at the time, but we felt, hey, would people be training these very bespoke sentiment classifier models anymore, these very, very custom models? Or would they just ask GPT-N, is this a positive or negative sentiment?
It's probably going to be the latter, right? And in a world in which everyone was going to run generative AI models, why would an infrastructure company be a differentiator? Because everyone is going to run the same kind of infrastructure down the line. So instead, what we decided to do was, we believed
generative AI was almost going to be like the next internet. And in that case, what we should go out and do is build the next great apps, like Google, like Amazon. And we vertically integrated and actually took our infrastructure, our inference infrastructure, to go out and build Codeium at the time. And at that time, we were early adopters of GitHub Copilot, and we thought the coding space was going to get tremendously disrupted in the coming years.
So we actually took our infrastructure. We ran our own models at massive scale. We even trained our own models. In the very beginning, it was very, very simple. It was purely an autocomplete sort of model, right? Which basically means that as the user was typing, we'd complete the next one or two or three or four lines of code. But we provided the product entirely for free in all the IDEs that developers coded in, right? That meant VS Code, JetBrains, Eclipse, Visual Studio, Vim, Emacs.
And the reason why we were able to build it for free was because of our infrastructure background. We were able to optimize these workloads a ton. And I guess very quickly after that, some large businesses also wanted to work with us. And we built out kind of this enterprise motion to work with these large companies like Dell, JPMorgan Chase.
And for them, the bigger thing wasn't just, hey, could we autocomplete code or could we chat with the code base. It was, could you offer us a secure offering that was also personalized to all the private data inside the company? So we took our infrastructure and invested a ton in making sure that we deeply understood these large companies' code bases.
Right. And that's what we were working on until six months ago. And it's not that we've stopped working on that. But basically what we realized six months ago was we were getting limited by the IDEs that we were already working in. So VS Code, which is a very popular IDE, had a ceiling for the AI capabilities we could showcase to our users. And because of that, we decided to go out and fork VS Code and build our own IDE
with some of these new agentic capabilities. And over time, in the last couple of years, the model capabilities have also been growing exponentially year over year. That's sort of where we are right now. I skipped a lot of pieces there, but that's where we landed.
There are so many interesting threads there. One is just, there's always this question of where value will accrue in AI. And it's so interesting. You guys started almost at the bottom layer of infrastructure, GPUs. And then you went to what people call a GPT wrapper. Not actually, but... So I guess, any lessons there? Just thoughts on where you think value will end up in this world of AI and the stack of AI tools?
Maybe I can start by just saying one thing about startups that I think is really true. It's very unlikely the first thing that you believe you should go work on is going to be the right thing, which is a very hard thing to wrangle with as a startup founder, right? You need to be irrationally optimistic that what you're going to do is going to be differentially important, because otherwise, why would you go out and do what you're doing?
And if it's obvious, then a bigger company would have already done it, right? But then you also need to be really, really realistic, because most ideas that are, I guess, non-conventional are usually bad ideas. Right. So it's this weird kind of tightrope you need to balance on, where you're pushing for a future that you believe is true. But all the while, as you're getting new information, you need to
kill the beliefs that you had. And if I were to start with the infrastructure piece, we first went in with the assumption that model architectures were going to be really, really heterogeneous. Coming from an autonomous vehicle background, there were many different types of model architectures out there. There were convolutional neural networks, graph neural networks, recurrent neural networks, LSTMs, LiDAR neural nets like Frustum PointNets.
Right. And there were maybe like tens of architectures we were dealing with. And at that point, we were like, the complexity of this is so high that it's very clear if someone offloaded the complexity, there would be a lot of value. Fast forward to the middle of 2023, everything looks like it's going to be a transformer. So now our hypotheses are just wrong.
So at this point, most of the value is probably not going to accrue purely at the infrastructure layer, at least this is our belief. It's going to accrue somewhere else. Where is the layer that you can actually differentiate? And we believe the application layer is a very, very deep layer to go out and differentiate.
What are the number of ways we can build better user experiences, better workflows for developers? We think there's effectively no ceiling on that, on how much better we can make the lives of developers basically.
You touched on the second thread that I thought was really interesting here, which is just how you guys pivoted from ideas that were working. Like, you were making money. People loved it. You said you had millions of dollars of ARR. And then you just went, no, we're going to completely change the business.
So the question there is just, what have you learned about knowing what to follow? And one thing I heard there that was really interesting is, once the assumptions you built your idea on change, it's time to rethink the idea and maybe try something else. You know, I think the way we sort of think about this is, even when we're working right now, we just accept that we're going to get a lot of things wrong. We're just going to get a lot of things wrong.
Obviously, that was a very big moment, because that was a bet-the-company moment, in the sense that we basically told our investors, hey, we're making money on this. We had already raised $28 million of capital and we were just like, hey, we're just going to pivot entirely from this. And we did that overnight, right? This wasn't a thing where we said, hey, maybe we'll take a quarter, one or two quarters, because one of the things we knew that's very important for startups is focus.
And if you're trying to do another thing that you think is big and you're focused on something that you don't believe is valuable, you're guaranteed to fail at the thing you think is going to be big. Right. So that's a very obvious thing there. But I think you go in with the assumption that a lot of your hypotheses are going to be wrong,
but you will do the most concentrated work possible to go out and validate these hypotheses. And you won't be in love with your ideas. Like I think ideas, it's awesome when you have a great idea, but you should never be too in love with your ideas.
And you have an organization that is very truth-seeking. I think a lot of people at the company have had their ideas tested over and over again. Even just building Windsurf, that is not a complete company pivot, but that's a big decision that we made at the company. And sometimes you're wrong and sometimes you're right. But if you have an organization where morale is not going to be low if you made the wrong decision,
that's the best, right? That means you have optionality for the rest of time. And Lenny, one thing that I try to tell the company about this is, this year, the total amount of engineering output we will have is much larger than the engineering output we've had from the beginning of the company's creation till now. So that almost means every year is a new lease on life for us.
Right. It's almost a new way for us to test out an entirely new set of hypotheses. And maybe we were wrong about our original hypotheses in the first place. What makes us smarter than everyone else, to be right more times than not?
That's so empowering. It makes me think about Uri Levine, who was on the podcast, co-founder of Waze. He has this phrase that he wears on his shirt, and his book is called this: Fall in Love with the Problem, Not the Solution. And I feel like that's exactly what you're describing. Okay, so let's talk about Windsurf. What's the simplest way for people to understand what Windsurf is?
An IDE, right? It's an application to go out and build software and build applications. The crazy thing is a lot of people who use the product probably don't even know what an IDE is, which is crazy, and we'll get into that in a second. But why did we go out and build Windsurf? And what is Windsurf, maybe? Why couldn't we have just done this on top of conventional IDEs like Visual Studio Code?
So maybe just to get into this a little bit: as we saw that AI was getting more and more powerful, we thought the interface for the way people go out and build technology was going to change remarkably. It was not going to be a conventional pure text editor where the user is writing a handful of lines of code or most of the code, and the IDE provides maybe some
basic feedback on what the user is doing right or wrong, the basic feedback being, hey, there's a bug in your software, a compiler error in your software. It could do much more, right? It could actually go out and modify large chunks of code. And one of the key pieces that we recognized was, with this new paradigm with AI, AI was probably going to write well over 90% of the software. In which case, the role of a developer and what they're doing in the IDE is maybe reviewing code.
Maybe it's actually a little bit different than what it was in the past. And we'll see this very soon with Windsurf. Maybe, you know, when you're using the product, a good chunk of the user's time is actually reviewing what the AI is outputting. So we needed to build custom review flows into the IDE to make it easier to go out and do that, right? Because the developer is not spending all their time writing code.
And this is the fundamental premise of why we built the product. We thought we were going to get limited a ton if we had very, very basic UI out there. And I'll give you even a simple example here. We have this autocomplete product, right, that completes a handful of lines of code. Now we've actually launched this offering called Windsurf Tab that basically shows you refactors as well. And these refactors are almost inline refactors. And we were able to build a custom UI for that in Windsurf.
But in VS Code, because of the access to the APIs, we needed to dynamically generate images right alongside the user's cursor, because we just didn't have access to the capabilities to showcase the edit properly. And what we realized is, immediately by porting over to Windsurf, our acceptance rate tripled. Same ML models. It just tripled. So what that, I guess, gave us confidence in is,
yeah, you could argue technology is very important. And I think technology is very important. But if our users are getting very little value from the technology we're building, you need to really clarify, maybe we do need to build a new surface and interface. And that's what Windsurf is. So the big bet you took there, just to make this super clear, is you were initially working within existing IDEs that everyone was
familiar with. And then it was like, this isn't going to get us where we need to go. We're going to try to convince people to switch to something completely new because it's going to be so much better. It's our own IDE. I think maybe people may not recognize just how risky that is, convincing engineers to use something completely new. That's a huge deal. Yeah, no, of course. And one of the key pieces, Lenny, that would be important to share is:
a lot of our developers do use Visual Studio Code, but there are lots of people that write in languages like Java, C++ and so on and so forth, and they might use the JetBrains family of IDEs, like IntelliJ. And for us, we're actually still committed to building on those platforms, right? We just felt, though, that one of the dominant IDEs, which was Visual Studio Code, was limiting the sort of user interface that we could give to our actual customer.
What is the current state of traction for Windsurf? You hear all these crazy numbers about all the competitors in your space. What can you share there for folks just to know? Yeah, so maybe a handful of things. We launched the product a bit over four months ago. And in that period of time, over a million developers have tried the product, and obviously we have many hundreds of thousands of monthly active users.
But right now... I love how, like, these days, a million? No big deal. The numbers are absurd these days. We're just getting used to 100 million ARR here, a million users in four months there. It's just like, oh, of course. How could you not have that? But that's absurd. It's just an insane time right now.
You touched on something that I wanted to get to later, but I may as well bring it up now. The question of just how engineering will change in the future. You throw out the stat that like 90% of code is going to be written by AI in the future. Dario from Anthropic recently said the same thing. You guys have a really interesting glimpse into just how things will look in the future. So I guess the question is just, how do you think coding,
specifically, will look in the next few years? How different will it be from today? I think when we think about what an engineer is actually doing, it probably falls into three buckets, right? What should I solve? How should I solve it? And actually solving it. And I guess everyone who's working in this space is probably increasingly convinced that solving it,
which is just the pure, I know how I'm going to do it, and just going and doing it, AI is going to handle the vast majority, if not all of it. Right. In fact, with some of the work that we've done in terms of deeply understanding code bases, how should I solve it is also getting closer and closer to getting done. Right? If you deeply understand the environment inside an organization, if you deeply understand the code base,
how you should solve it, given best practices within the company, also gets solved. So I think what engineering kind of goes to is actually what you wanted engineers to do in the first place, which is, what are the most important business problems that we do need to solve? What are the most important capabilities that we need our application or product to have?
And actually going and prioritizing those and actually going and making the right technical decisions to go out and do it. Right. And I think that's where engineering is probably heading. Now, does that mean that no one needs a CS degree? I think that's maybe a little bit overplayed, just because, you know, maybe here's my argument for that.
A lot of developers nowadays that build full stack applications, at least until like a handful of years ago, they probably went to college and took an operating system course. Right. And in theory, they're not really like playing around with the operating system, like the kernel scheduler, like very frequently. But do those principles help them in understanding why their applications are slow?
Do they help them in understanding why some design decisions are better than others? Yeah, that makes them a much better engineer than another engineer. And I think that idea and the understanding of what's going on at the bottom will make a good engineer even better. But also, at the same time, it empowers a bunch of people that never understood all of those things to actually build as well, which is another remarkable sort of
thing that fell out of this whole process. I don't know if you have kids, but say you had kids, or you had a niece or nephew going into college, let's say. Would you suggest they do software, they do computer science? Or would you suggest, like, you're not going to have a good time if that's the career you choose right now?
Yeah, I think, you know, maybe I think back a little bit. So I went to MIT. A lot of us at the company went to MIT together on the engineering team. And when I think about what we learned the most for engineering or computer science, it was not exactly how do you write code, right? That's almost a given, that you can kind of write code after going to college.
It's more like the principles of how you think about a problem and how you break it down, right? And how you solve it in an interesting way, right? So an example of a class that I really enjoyed was our distributed systems class. And there you're kind of reading through literature and understanding how some design decisions were made.
And I think it's more like problem solving, right? It's a major of how you solve problems given some constraints of how computers today function. Right? Like, here's the speed at which memory operates, here's how much computation you can do in one cycle or one second.
And based on that, you can make some trade-offs and solve a problem. So I don't know if I would say that you shouldn't go get a computer science degree. I think computer science is almost synonymous with problem solving. In that case, I think it's pretty valuable. Is everything you learn in your computer science degree useful? I'd say a lot of things that I learned in my computer science degree are not useful. I'll give you an example. I took a parallel computing class in Julia.
And I don't think Julia is a very popular programming language anymore. Am I very sad that I took the class? No, the principles of parallel computing are still very useful, I would say today. So what I'm hearing is skills that you still want to build, whether it's computer science or maybe some version of computer science is kind of like building the mental model of how computers and systems work, parallel processing, memory, hard drives.
internet, things like that. And then there's just problem solving skills, being able to solve interesting problems. Are there any other skills you think people should be investing more in with the rise of AI building more and more of our products? I think one of the things that's maybe a little bit undervalued is this agency piece. And I think about this a lot, which is:
you know, you have a lot of people that could go through college and go through school, and they're basically told exactly what to do, starting from the PSAT. And, you know, they're given these very, very, I would say, well-defined paths that they need to take. And I think maybe in society and just school, we don't prioritize,
how do you make sure you get people with real agency that want to build something, right? Like, their goal is not just to maybe graduate from college and then get a job at a big tech company, where they're told exactly what to do or where to put the pixel for this one website. And I think that's maybe a skill set that has been undervalued just right now, probably in the last, you know, maybe 10 years or so. And I think that's going to be really, really important.
You know, for us, for a startup, obviously these are skills that we just look for. We look for people that are really high agency, because we just recognize that by default, if we don't innovate and do crazy things, we're going to die. The company is just going to die. So we just look for this, right? But I would say, for most software engineering jobs, that's probably not the case.
Right? Just think about, you know, big company X and what they're hiring for in the average software engineering interview. It probably doesn't look like that. I love how you phrased that. If we don't do crazy things and innovate, we're going to die. That would be a great title for this podcast episode. And I think it's 100% true. There's just a lot of crazy things happening and a lot of innovation happening. And if you can't keep up, you'll die.
So let's talk about hiring. You have a really interesting approach to hiring. There's a few questions I have here. One is just how do you try to stay really lean? That's a common theme across all the AI startups these days. How do you know when it's time to hire someone? I love the idea of being a lean company, but I don't idolize it in the way that
hey, it is a dream to be a 10- or 20-person company that's making 50, 100, 200 million in revenue. That's not, I think, what we idolize inside the company. I think what we idolize is: be the smallest company we can be. Right. That's what the goal is. And maybe, Lenny, the way I would put that out there is, if I told you, hey, I'm going to build an autonomous vehicle, and I said our team is 10 people, you should rightfully say, hey, Varun, you're not serious. And you'd be right. I'm not serious
at that point. So I think the answer is: what is the minimum number of people to go out and build the crazy, ambitious project that you have? And the project we are trying to go out and do is completely transform the way software gets built. We've mentioned this inside the company. Our goal is to reduce the time it takes to build apps and technology by 99%.
It is a tremendously sort of ambitious goal. And it's not possible for us to be a 10, 20, 30, 40 person engineering team in the long term and actually satisfy that goal. We think there's a very, very high ceiling. So that's maybe the first key piece there. It's like, actually, if we can crack actually being a fairly sizable company.
but still operate as if we're a startup, that's the dream, right? That's the dream. In terms of hiring philosophy, the way we think about things is we only hire for a role if we're actually underwater for that function. So let's say we're going out and building inference technology. Unless we are underwater there, we will not go out and hire someone to work on that. And the reason for that is, I actually think this is
a feature, right? When you hire for a role and you already have enough people there, you get a lot of weird politics that ultimately ends up happening. And it's not because people are bad people. I think most people are really well intentioned. But what happens when you have people that join a company and, in reality, you didn't really need them? They will go out and manufacture some other thing that they should go work on,
right? They will go out and figure out something else to work on. And realistically, it's not that important, but they will go out and try to convince the rest of the organization that it is important, right? And I just think as a startup, we don't have the bandwidth to go out and deal with that.
Right. For me, I would like to see everyone almost be raising their hands up, being like, I'm dying. We need one more person. And that's when we go out and hire someone. And one of the analogies I like to give is, I want the company to almost be like this dehydrated entity. Right. And every hire is like a little bit of water. Right. And we only go back and hire someone when we're back to being dehydrated.
And it sounds painful. It sounds painful that you need to be underwater and raising your hand, I'm about to die, I'm dehydrated. But I also know that it's a really exciting way to work. Like, it sounds hard, but if you're in it, it's just like... I guess, talk about that side of it, because I think it could
sound like, this is terrible, I don't want to work this way. You know what I actually think, Lenny? It's really good, for a handful of reasons. We respect and trust the people that work at the company. So this forces ruthless prioritization.
You know, you have a team that's going out and doing something. They will never ask to work on something that's not important. In fact, if there are two things that they're working on, they're just going to just tell me, hey, there are two things on my plate. I just don't have the ability to do two. I can only do one and they will pick the one that's most important. And this actually goes back to one thing that I think is true about startups and just companies in general.
You don't win by doing, you know, 10 things kind of well. You win by doing one thing really well, and maybe you fail at nine things. This is the thing that I've told the company. This is very different than school, right? In school, you optimize for your total GPA.
But for companies, I just need to get an A plus in the one class that matters. And then I can get an F in all the other classes. And an F in all the other classes doesn't mean, you know, doing illegal things. It basically means you just deprioritize things that don't matter.
You know, that actually forces this organizational prioritization that is just really, really good. Right. And, you know, Douglas and I, Douglas being my co-founder, we can tell the company these are the two things that are the most important. But if we go out and say these are the two things that are the most important to the company, and then the company has 20% more people than necessary, what's going to ultimately happen?
Right? It's almost a forcing function for ruthless prioritization to have fewer people, or people that are just underwater internally at the company. Everyone listening that works at a big company knows exactly what you mean.
When there's just too many people, they will all find work to do and they will all be pitching ideas. You know, they all want to show impact. They want to do well in their performance reviews. That's just the nature of too many people at a company. And so I think this all really resonates. To get even deeper on just what it looks like when someone's underwater to tell you it's time to hire: is it just someone coming to you, Varun, saying,
we need someone on this team, this is just not possible? Like, what does that look like even more practically? Yeah, I think it's basically along those lines. It's that, hey, there's some pressure to get
something done in a short period of time. By the way, one of the things that we do believe, though, for software, if you want to do great things, it's not possible to just say, hey, I want to get it done in one month. Because you have to think about it from this perspective. If a software project could get built in two to three weeks...
what does that really mean about the true complexity and differentiation of what you built? It's probably not very high, unless you believe you are way smarter than everyone else, but I think that's hubris. I think we actually have a very exceptional engineering team, but also, at the same time, I don't think our engineering team is so exceptional that we can do things in three weeks that the rest of the world can't do in six to nine months. It's kind of stupid to believe that.
So I think basically it comes down to that person coming out and being like, hey, look, I don't have enough time to do X. Us having a conversation to be like, okay, what can you do then? And if the answer is, I can only do less than that, then maybe we make a decision, actually. Oh, wow, that's great. Maybe we actually should deprioritize Y.
Because this is actually also another thing that's very hard, even for people like me and my co-founder. We also want to do a lot of things, right? There's an urge to do a lot of things. But we are forced to make a decision constantly, like, we cannot do it. Because our engineering interview process also has an extremely low acceptance rate, it's not very easy for us to very quickly spin up people and have them join the company really quickly either.
So I think it's clarifying for everyone. It's clarifying for the person that wants more people. We can just tell them, hey, look, we don't believe you should be doing this other thing. And it's also clarifying for us because we can also get on the same page.
Right. And sometimes we just kind of agree, hey, our teams are very flexible, that actually we do need to get something done. And one of the things that we've kind of tried to make sure is true on our engineering team is, people's value to the company does not have anything to do with the size of their team.
There are projects inside the company. There are directly responsible individuals for these projects inside the company. And if we feel like one project is very important, then people can move from one project to the next, right? There's no notion of someone owning people at the company.
That is a very bad and gnarly idea, right? In fact, the person that is the most valuable at the company is the person that can do the most crazy project out there with as few people as possible. And that's what you should be rewarding internally. How many people do you have at Codeium at this point? So we have close to 160 people, and the engineering team is over 50 people. Awesome. Oh, what are the other bigger functions?
So we have go-to-market. We have a... yeah. Oh, right. Okay. I want to talk about that, the sales learning that you guys had. Okay. Well, let's close out this hiring conversation. So we talked about what you look for to tell you it's time to hire. What do you look for in the people that you interview and hire? One of the key pieces that we look for... we have a very high technical bar. So assuming that they actually meet the technical bar, I think
we look for people that are really, really passionate about the mission of what we're actually trying to solve, and people that are willing to work very hard. I think one of the things that we don't try to do is convince people, hey, look, we're a very chill company and it's great to work here. I think, no, this is a very exciting space. It's very competitive. You should expect us to lose if the people at the company are
not working very hard. And I think one of the biggest dog whistles I hear is when I ask people, how hard are you willing to work, some people ultimately say, hey, I work very smart. And I basically ask them a question:
If we have many smart people at our company that also work hard, what's the differentiator going to be? Are you just going to pull them down? Because I think one of the things that's true about companies is it's like this massive group project.
And I think the thing that's bad about a person that is not pulling their weight is not the productivity, right? At some point, when the company becomes many hundreds of engineers, I'm not going to be thinking about the one engineer that's not pulling their weight. It's the team of people they work with
that are almost basically asking, is this the bar internally at the company? Is this the expectation? And I guess, Lenny, if I told you you have a team of five people and, you know, the four other people you're working with just don't care... Not too much. Exactly. So for us, I think that's what we care about more, right? We have a culture that is very collaborative. It's not an individual sport, but people feel like they can rely on other people to get complex tasks done.
So the question you asked there is basically, how hard are you willing to work? How hard do you want to work? And I know there's this whole group of folks that are just like, work-life balance, how dare you ask me to work crazy hours. And I love just the filter up front of: if you work here, you will work really hard. You will work a lot of hours. It's a crazy space to be in. And we will win by working smart and also really hard.
You said at some point earlier that your engineering pass rate, as you said, was like 0.6% of candidates, something like that? Yeah, it's probably post take-home. It's probably that, actually. So the take-home itself filters probably another, you know, 10, 15x on top of that. Here's a question that I've been hearing more and more: how do you do interviews these days with
tools like Windsurf out there that solve all your problems? We're okay with people using the tools, because I think one of the worst things is if someone comes here and doesn't like using these tools. We believe they're massive productivity improvements.
We do bring people into the company, on-site, so we can actually see how they think through problems on a whiteboard and all these other pieces. So we do want to see how they think on their feet, and hopefully they're not just taking what we're saying, putting it in a voice translator, sticking it in ChatGPT, and getting the answer out. So there is a way to do this.
My viewpoint on this is, the tools are really, really important, but I do think we still look for some problem solving ability, right? Like, if the only way you can solve a hard problem is to put it into the chat, I think that's a concern to us. Today's episode is brought to you by Coda. I personally use Coda every single day to manage my podcast and also to manage my community. It's where I put the questions that I plan to ask every guest that's coming on the podcast.
It's where I put my community resources. It's how I manage my workflows. Here's how Coda can help you. Imagine starting a project at work and your vision is clear. You know exactly who's doing what and where to find the data that you need to do your part.
In fact, you don't have to waste time searching for anything, because everything your team needs from project trackers and OKRs to documents and spreadsheets lives in one tab, all in Coda. With Coda's collaborative all-in-one workspace, you get the flexibility of docs, the structure of spreadsheets, the power of applications, and the intelligence of AI, all in one easy-to-organize tab.
Like I mentioned earlier, I use Coda every single day, and more than 50,000 teams trust Coda to keep them more aligned and focused. If you're a startup team looking to increase alignment and agility, Coda can help you move from planning to execution in record time. To try it for yourself, go to Coda.io slash Lenny today and get six months free of the team plan for startups. That's C-O-D-A dot I-O slash Lenny to get started for free and get six months of the team plan. Coda.io slash Lenny.
Okay, let's talk about this go-to-market sales experience that you guys had. So you started, obviously, like most people, building without a sales team. And then you realized, from what I hear, that that was a huge miss and a big opportunity. Talk about that, because it's really unique, I think, that you guys have a large sales team and go-to-market team.
Yeah, we actually made this decision pretty early in the company's history, I would say. Like, we hired our VP of sales over a year ago, actually. And the go-to-market team is now over 80 people inside the company. So it's a pretty sizable function inside the company. Yeah, maybe a little bit of backstory here. So when we started the company, we actually had a handful of angels that were operators, go-to-market operators. So an example of one was
Carlos Delatorre, who used to be the CRO of MongoDB. And I think for us, we never viewed enterprise sales and sales as a very negative thing. I think this is an interesting thing: technical founders sometimes don't really like it. They think sales is a very negative part of the process, that everything should be product-led growth.
And I think it's not that black and white. I think enterprise sales is really valuable. But maybe when we were a GPU virtualization company, an infrastructure company, the reason why we never hired a salesperson is I didn't know how to scale the function. I was the one who was selling the product. So ultimately speaking, if it was hard for me to sell the product incrementally, I didn't know how we could make that into a process that we could then go and scale.
Right. Even if I could do one, I didn't know how we could take the revenue of the business from a couple million to hundreds of millions, let alone even tens. So if I didn't know how to do that, how could I go out and hire someone and make them scale it out? On the other hand, for Codeium, very quickly, a lot of large enterprises reached out to us.
And from that alone, in the middle of sort of 2023, we started, I guess, me and a handful of other folks at the company started selling the product. And we were doing tens of pilots concurrently with large enterprises. And we were very quickly able to understand that there was a large enterprise motion that needed to be built in this space.
So by the end of 2023, we actually hired our VP of sales and very quickly after that scaled a sales team. And yeah, I mean, look, if you want to sell to the Fortune 500, it is very hard to do that purely by swiping a credit card. Let's talk about Cursor. I don't want to spend too much time on competitors, but that's what everyone's always thinking about when they think of you guys. You guys are kind of the leading players, I think, in the space. Also, there's Copilot, but that's different.
So what's the simplest way to understand how you guys are different from Cursor, and also just how you think you win in the space long term? So I think maybe a handful of things that I could share. So on the product side, I think we've invested a lot in making sure code base understanding for very large code bases is really high quality. And that's just because of where we started, right? We work with some of the world's largest companies,
like Dell, JPMorgan Chase. Companies like Dell have singular code bases that are over 100 million lines of code, right? So being able to understand that really, really quickly to make large-scale changes is something that we've spent a lot of time doing. And that requires us actually building our own models that can consume
large chunks of their code base in parallel across thousands of GPUs and almost rank them, to be able to find out what the most important snippets of code are for any question that is asked about the code base, right? So we've gone out and built large distributed systems based on our infrastructure background to go out and do that. That's maybe one.
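(For readers following along, here's a rough, illustrative sketch of the kind of snippet ranking being described. The types and the naive keyword scoring below are hypothetical stand-ins; Windsurf's actual system uses its own models and distributed infrastructure, not anything this simple.)

```typescript
// Illustrative only: a toy ranker that scores code chunks against a question.
// The CodeChunk shape and keyword-overlap scoring are assumptions for the
// sake of the example, not Windsurf's implementation.

interface CodeChunk {
  filePath: string;
  content: string;
}

// Score a chunk by how many of the question's terms appear in it.
function scoreChunk(question: string, chunk: CodeChunk): number {
  const terms = question.toLowerCase().split(/\W+/).filter(Boolean);
  const body = chunk.content.toLowerCase();
  return terms.reduce((score, term) => score + (body.includes(term) ? 1 : 0), 0);
}

// Return the k highest-scoring chunks for a question about the code base.
function topSnippets(question: string, chunks: CodeChunk[], k = 5): CodeChunk[] {
  return [...chunks]
    .sort((a, b) => scoreChunk(question, b) - scoreChunk(question, a))
    .slice(0, k);
}
```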
Let me actually follow that thread, because I think people may underestimate just how big of a deal that is. So, we had the founders of Bolt and Lovable on the podcast, and those products build something from scratch. They write the code for you. So that, versus just loading, say, Windsurf on your million-line code base, say at Airbnb or Uber, and it understanding what the hell you have and how it works and where to go change things without breaking it, is
insanely hard. And so what I'm hearing is that's kind of a big differentiator. You guys started there, actually, and then Windsurf is now building on that advantage.
That's right. Yeah, so that's a big thing that we spent a lot of time on, which is just understanding what the code base is doing. And actually, one of the other things is, what are all the user interactions with respect to the code base? And happy to show that also, you know, in a bit here. Awesome. The second key piece probably is,
we're not only tied to Windsurf. Actually, this is probably a weird statement given we are talking about Windsurf, but we're pretty focused on supporting IDEs like JetBrains. JetBrains, or IntelliJ: over 70 to 80% of all Java developers code in JetBrains-based IDEs, right? The reason why we don't feel as big a need to build a competing product to JetBrains is that JetBrains is actually a very extensible product in a way that VS Code is not.
VS Code is not very extensible. So I think for us, our goal here is not only to satisfy a subset of users that can actually switch onto our IDE; we want to give this agentic experience to every developer out there. And if that means there are Java developers that write in JetBrains, that's fine. We work with a lot of large enterprises that have 10,000-plus developers, where over 50% of the developers are on JetBrains.
Right. It's a very, very large product. And by the way, that company itself is a privately held company that makes many hundreds of millions of dollars a year. Right. So it's a very, very large company. So for us, that's another key piece. We actually want to meet developers where they are. And if they use a different platform, we'll work on that, too.
The third key piece, for enterprises, is we work in a lot of very secure environments. We have FedRAMP compliance, which means we can sell to very large government entities. We have a hybrid mode of using the product, which means that all the code that is indexed actually lives on the tenant of the user, right? Code is one of the most important pieces of IP for the company.
So I think, if you were to look at it from a big company perspective, there are many reasons why, over the years of building an enterprise product, we've handled a lot of complexities that large companies want to see. And part of that is because of the history of how we got here. Okay, Varun, enough teasing. Let's do a live demo of Windsurf so folks can see what it's like. And then I'm just going to ask you a bunch of questions as we're going through it.
So I'll let you pull up a little shared screen where you have Windsurf pulled up. Great. So some context: this is a very basic React project. There's nothing in it right now. So if you were to open any file, it's the default React app project. And I have this basic image here. You can pass Windsurf images of what you'd like the project to look like,
of what I would like an Airbnb for dogs website to kind of look like. Beautiful, beautiful mockup, by the way. I love that this is all you need. This is all you need. This is all you need. So, basically, what we're going to do is we're going to say, hey... one of the cool parts about Windsurf is it can actually work in an existing project already. Right. So I can basically say, hey, change this React app to an Airbnb for dogs website based on this image,
and preview it. So now it'll just go out and start executing: reading through the repository. It obviously doesn't know what the current code base looks like, so it'll go out and analyze the code base to find out the set of changes necessary. So we'll wait and see what it's going to do. But while we're doing that, let's continue the conversation. Awesome. Okay. So first of all, you opened up Windsurf, you had a boilerplate
React project ready to go, and Windsurf had never really seen this code before. You ask it to do stuff on your code base, which is just like, change this to Airbnb for dogs using this design. That's right. That's exactly right. Yeah. Okay, cool. So we'll let it run and we'll talk. So let me ask you this question that I've been asking everyone that comes on that is building a product that helps engineers build products, and product managers build products, and designers.
Say you could sit next to every single new user that opens up Windsurf and whisper a couple tips in their ear to help them be successful with the product. What would be a couple tips you'd share? Tip number one is: just be a little bit patient, both patient and explicit, right? When you ask the application to go out and make some changes, it could actually go out and make many more changes than you intended. What
prevents this the most is to just be really, really explicit, or as explicit as possible. And one of the things I ask people to do is, in the beginning, start by making smaller changes, right? If there's a very large directory, don't go out and make it refactor the entire directory. Because then, if it's wrong, it's going to basically destroy 20 files.
And I think from there, one of the key pieces that comes to the users that use the product is they learn what the hills and valleys of the product are. And the analogy I like to give is kind of similar to autocomplete. Right. When you use a product like autocomplete, you would think a product that is suggesting things but only getting accepted 30 percent of the time would be really, really annoying. But the reason why it's not very annoying is actually because
you've learned that, hey, 70% of the time, I don't need to accept it. And the times that I do, I know how to get value from it, right? And you also know beforehand, if a command that you write is very complex, you just expect, hey, the autocomplete is not going to work for you. Right. So I think it's almost like understanding what the hills and valleys of the product are. The crazy thing is, every three months that kind of changes, and you reevaluate.
It almost becomes the case that it becomes materially better than it was in the past. So I think maybe patience and being explicit are the two important key pieces I would tell users. And I think something that was kind of between the lines there is, like, get a gut feeling of what the model is capable of, how specific to be versus how abstract it can be. And there's kind of this gut feeling you start to build over time. That's right. Yeah. And with that,
it feels like we have an actual preview. And guess what? We have a nice dog app. And one of the cool things that we've also done, beyond just modifying code, is actually being able to point to different pieces. And I guess I could point to different elements and say, hey, make the background... This is not great design, but I could basically say, make this background red.
Right. And just take a particular element, change it, and make it red. And it should go out and be able to do this. The preview aspect of the product, of being able to showcase the app while it's getting built, helps in that now you can actually live entirely in app world, right? You don't maybe even need to look at the code. Granted, this looks hideous, but in some ways, if I wanted to, I could go out and do that, right?
This is what happens when there's no more designers. Yeah, when there's no more designers. Maybe the answer, when you ask me what people should be doing, is they should study great taste, having great taste, because I think taste is also very, very hard.
All right. But maybe the other key piece, Lenny, that I wanted to showcase here is, obviously, you could keep going here. I could take different components and kind of change them. You know, we have a lot of plans here that are beyond just point-and-click changing components.
But one of the cool pieces is the AI... there's an AI review flow as well, right? Which is kind of what I was saying. The role of the AI has now changed a lot, in that it is now modifying large chunks of code for you. And the job of a developer now is to actually review a lot of the code that the AI has generated. And granted, right now during this podcast, I'm not going to review all the code that's getting generated.
But let's say I want to go out and modify some of this code, right? And this is where, if you're an actual developer that actually wants to go modify... maybe I don't like my variable name being called title, I want it to be called titleString instead, like this. And if I wanted to go out and make that change, I make the change to say titleString, right? And that's what I'm going to do. Then I'm just going to tell the AI to continue.
And the cool part about this is Windsurf not only kind of knows about what the agent has done, it also knows everything that the user has done. Our goal here is to have this almost flow-like state where everything the user has done, the AI also knows. And it's able to predict the intent. And as you can see, it said, I noticed that the interface property title was changed to titleString. And then it has now gone out and modified all the locations within the app from title to titleString.
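(For readers following along at home, here's a minimal sketch of the kind of rename being described. The interface and function below are hypothetical, not the demo project's actual code.)

```typescript
// Before the edit, the interface property was called `title`:
// interface ListingCardProps {
//   title: string;
// }

// The developer renames it to `titleString` in one place, and the agent
// propagates the change to every place the property is read or passed:
interface ListingCardProps {
  titleString: string;
}

function listingHeading(props: ListingCardProps): string {
  return `Listing: ${props.titleString}`;
}

// Call sites elsewhere in the app get updated too.
const heading = listingHeading({ titleString: "Cozy kennel with a view" });
console.log(heading);
```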
And now it no longer says title there. So this is where, even if I'm writing software and I want to go and make point changes, the AI can go out and quickly make these changes on the user's behalf. Imagine doing a refactor or migration: you just change one part of the code, and you can just tell the AI to continue the rest.
And because it deeply understands the code base, it should go out and find all the corresponding places to make the change. And obviously, now when I reload my app, there's no bug in the app. It still loads properly. I can obviously tell it to do even cooler things, like make the app retro.
Right. I don't know what that means, but I guess I can do that. And it should go out and make the change correspondingly for me. But yeah, those are maybe the high-level parts there, where the AI is not only able to operate entirely in app space,
but also in the code space of the user, going out and modifying code, and to bridge the gap between the two. So it should add leverage not only to non-developers that are just purely building apps, but also to developers that are hands-on keyboard too. Amazing. By the way, if you're not on YouTube, you can't see that you can just select any element of the page and then reference that in your ask of, here's what I want changed. I didn't know that was a feature, and that is extremely cool.
So interestingly, having just looked at Lovable and Bolt and Replit and apps like that, it's basically doing all the things those apps do. Oh, wow, there's the retro version. I like that it built on your red and made it really nice, actually. Actually, the red looks way better now. Yeah, with the little green button. This is great. Okay, so I don't think people realize this about apps like Windsurf, but it can actually do a lot of agentic work for you where you just tell it, here,
I want you to do this, versus it's autocompleting code for you. The big difference is you need to start it with some code base. So you have this kind of boilerplate React project. Is there a reason you guys aren't taking that step and just doing that automatically for you? Is it because you're targeting engineers and they don't need that? Or is there another reason?
Lenny, the interesting thing is the base app that you saw for this was also generated by Windsurf. The reason why we sort of didn't generate it live is installing all the dependencies takes like three or four minutes, and for the demo I didn't want to wait. But totally, actually, most of the users of the product probably zero-to-one build these apps.
If I can say one interesting thing: when we launched Windsurf, we actually tasked everyone at our company to go out and build an app with Windsurf. That included our go-to-market team and our sales team. And there was a crazy stat that I think people would find surprising: we saved over half a million dollars of SaaS products we were going to buy, because our go-to-market team has now built apps instead of buying them. Like our head of partnerships,
instead of buying a partner portal product, has actually built his own partner portal, right? And he had never built software in the past. So we've actually come up with ways inside the company to deploy these apps easily, in a secure way. And we're actually now building very, very custom software for our company to operate more efficiently.
I would not have expected this probably six months ago. That is incredibly interesting. You don't need to name company names, but I guess what's a space you're least bullish on, that you think is going to have the most problems here with people building their own version of these sorts of products? You know, I think maybe my viewpoint is these very, very verticalized niche products, I think, are going to get
they're going to get competed down a ton. And I think sales products are an example of one of these things, right? And maybe this is, I don't want to be very negative, but it's very hard inside a company like ours to task our best engineers to build a best-in-class sales product. There's not enough interest to do that, or to invest in building a legal software product or a finance software product. It's very, very hard for us.
Right. And actually, that's a very big moat for the companies that built these products: they were able to come out, have an opinionated stance on how to do this, and hire good enough engineers to go out and build the software.
And our company is unwilling to do that. So previously, we would go out and buy the technology, right? Because there would be no alternative. But now one of the crazy things is that the domain specialists have access to build the tools that they ultimately wanted, right? Which is actually crazy. Right? If you think about why these software companies were able to exist,
these vertical software companies, the reason is that they had many features. The kitchen sink of features worked for a lot of companies, but each individual company only wanted 10% of the features. And the problem is each individual company was not capable of maintaining a piece of software, or building a custom piece of software, for 10% of the features. But that has now changed entirely. Now they can.
And there's always been the story of, why would I spend any time building my own software if I could just buy it? But now it's like five minutes of time. It's five minutes, and maybe even more custom to your system, right? Like how many times have you bought a piece of software and you're almost like, why is there no integration to X? And I actually use X.
How annoying is that? That actually makes the software less useful to you. So I think what's cool is, if someone zooms back to the beginning of when you started the demo, it's basically a PM talking to an engineer. Hey, build me an Airbnb for dogs. Here's a stupid mock that I made with some boxes. That's almost like a bad PM talking to an engineer, and it
just actually works. That's what's insane about this. And so that's why this example you're sharing of go-to-market folks building their own things, it's like they don't need to know anything about product building. It's just, describe it in some ridiculous way and draw a couple boxes of what you want it to look like, and it makes something for you. Which shows that agency is what matters. If you have a product manager that has an idea, there's no reason why that idea cannot be more well
fleshed out, right? Like how many times do you have a product manager that continually has ideas, but it just feels like they are extremely unsure how to execute on it? They just want to say things for the sake of saying things. But for the people that have ideas and a lot of, I guess, agency, they can go out and prove out what they want without any sort of external resources.
I think even more acutely for product folks listening to this, it's the salesperson coming to you being like, hey, I want this thing. It's going to help me with my sales team. And you're like, I have a million things to build. I don't have time for this. And so that problem goes away, which I think will make a lot of product leaders really happy. The model that this is sitting on, is it Sonnet? So just to break down how it ultimately works.
We have a model that does planning, and I would say right now, Sonnet is a really, really good planning model. I think OpenAI's GPT-4o is also good, but... The crazy thing is, what we try to do is make the Anthropic-based model, the Sonnet model, do as much of the high-level planning as possible.
And then what we try to do internally is run all the models necessary to do high quality retrieval for the agent. As you can see, the agent needed to understand what the rest of the code base ultimately did. We actually make sure we run models to actually chunk up the entire code base and understand the code.
so that, obviously, it would not be a good idea if we had a 100-million-line code base to send that entire code base to Anthropic. First of all, you couldn't do that. That's over 1.5 billion tokens of code.
Right. So obviously that would be three or four orders of magnitude larger than the largest context windows right now. But you also wouldn't want to do that from a cost and latency standpoint, too. So that's one. And the second piece that you saw was the model is able to very quickly make edits to the software as well. We have custom models that we built that are post-trained on top of popular open source models
that can make these edits really, really quickly to the code base. And the reason why you would want to do that is, A, it's faster, and B, that model can actually have more of the code base in context, so it can be better at applying changes than even Anthropic's model too. So I think the way we like to think about it is, our only goal is how do we build the best product possible?
Right? How do we build the best product possible, and how do we make the ceiling as high as possible? And we will go out and build models and train models wherever necessary. But if we're not going to be good at a task, and we think the open source is better or Anthropic is better, we'll just go and use the open source.
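As a rough sanity check on those numbers, and a sketch of the planning/retrieval/edit split being described: at a back-of-the-envelope average of around 15 tokens per line of code, 100 million lines works out to roughly 1.5 billion tokens, three to four orders of magnitude beyond typical frontier context windows. The function names below are invented for illustration and are not Windsurf's actual API.

```typescript
// Back-of-the-envelope: why a 100-million-line repo can't just be sent to a frontier model.
const lines = 100_000_000;
const tokensPerLine = 15;                   // rough average for source code (assumption)
const totalTokens = lines * tokensPerLine;  // 1.5 billion tokens
const contextWindow = 200_000;              // ballpark frontier context window (assumption)
console.log(totalTokens / contextWindow);   // ~7,500x too large, i.e. 3-4 orders of magnitude

// Hypothetical shape of the split described above: in-house models chunk the repo,
// retrieve the relevant pieces, and apply edits quickly; the frontier model (e.g. Sonnet)
// only does the high-level planning over the retrieved context.
type Chunk = { path: string; text: string };

declare function chunkAndEmbed(repo: string[]): Chunk[];                    // in-house
declare function retrieveRelevant(task: string, chunks: Chunk[]): Chunk[];  // in-house retrieval model
declare function planWithFrontierModel(task: string, ctx: Chunk[]): string; // e.g. Sonnet
declare function applyEditsWithFastModel(plan: string, ctx: Chunk[]): void; // in-house edit model

function runAgent(task: string, repo: string[]): void {
  const chunks = chunkAndEmbed(repo);
  const relevant = retrieveRelevant(task, chunks);
  const plan = planWithFrontierModel(task, relevant);
  applyEditsWithFastModel(plan, relevant);
}
```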
And so the models you guys are building, those are built on open source models that people have released. Interestingly, the one that does retrieval is actually completely pre-trained in-house. Yeah, for a lot of different pieces, it's based on open source. Interestingly, the one that does the edits and autocomplete, that is also in-house.
Like as you're typing, we actually do some autocomplete-related stuff. I'm happy to show that, but I think a lot of users are familiar with that capability. So I think the way we like to look at it is, what could we be best at? And we will go out and train. But if we're not going to be best at it, we should not, just for the sake of ego, go out and train.
This may be getting too technical, but is there anything interesting around what you train on? Yeah. So one of the interesting things that we have from our users, and this is where we try to think, why would we be any better? It's that actually every hour we get probably tens of millions of pieces of feedback from our users. We get a lot of feedback on what they like and what they don't like.
For something like autocomplete, we get a lot of preference data, a lot of preference data. And the preference data is weird. It doesn't look like data that you find on the internet. It's data as the user is typing, right? Imagine you're typing some code in a code base. The code's going to be incomplete as you're typing it, right? It's not going to be in a full-fledged form. It's not like it is on GitHub. But we have a lot of data that looks like that.
So we are uniquely well positioned to actually build a good model that can complete code, even when it's in an incomplete state, when the frontier models that are out there have consumed very little code that looks like that.
So for that case we're like, hey, we can go out and do a much better job potentially, right? And we'll go out and train models on all the preferences that we have. The same is kind of true on retrieval, right? There's a way to find out, are we retrieving the right data? Did the user accept the code change afterward? Was the retrieval actually a good retrieval? That's a signal that we can get.
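To make "preference data" a bit more concrete, here's a hedged sketch of what one such record might look like; the field names and values are invented for illustration, not Windsurf's actual schema.

```typescript
// Hypothetical autocomplete preference record: code captured mid-keystroke (not the
// polished form you'd find on GitHub), the suggestion shown, and whether it was accepted.
// Retrieval gets a similar signal: which chunks were in context when a change was kept.
interface CompletionPreference {
  prefix: string;            // code before the cursor, possibly mid-identifier
  suffix: string;            // code after the cursor
  suggestion: string;        // what the model proposed
  accepted: boolean;         // did the user keep it?
  retrievedChunks: string[]; // files that were in context at the time
}

const example: CompletionPreference = {
  prefix: "function totalPrice(listings: Listing[]) {\n  return listings.redu",
  suffix: "\n}",
  suggestion: "ce((sum, l) => sum + l.pricePerNight, 0);",
  accepted: true,
  retrievedChunks: ["src/models/listing.ts"],
};
```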
So basically the way we like to look at it is, if something is just purely code planning, there's not a great reason why we would be the best at it, right? I think that would not be, I can't come up with a coherent argument for that. But for something that looks more along the lines of, hey, here's an intermediate code base that is very gnarly,
and here are some changes that need to get made, and we know the evolution of the code, or we've seen the evolution of code across millions of users, we feel like we can do a great job. I think what's interesting about this is this is another differentiator slash moat for companies that end up winning in the space: you just have more and more of that data than other companies if you're ahead.
Yeah, this is sort of why, maybe at a high level, we like the zero-to-one app building product space. I think it's really a good product space. But ultimately, I think it needs to boil down to you understanding the code, because otherwise you're living at too high a plane, where it's not clear why you would be able to be the best at that compared to everyone.
As a company, you mean? As a company. It feels like it might get competitive in a way where it's not clear how you would continue to differentiate over and over with time. I see. Because if they're just sitting on top of Sonnet and just doing what every other Sonnet wrapper is doing, there's not a lot of differentiation or moat. It depends on how you do it. But maybe if I was to say this: if the inputs you're consuming are just web elements, like extremely high-level web elements,
then the interface might be high-level enough that it's hard to get better than what the frontier models are doing just across the board. You are just better off plugging in Sonnet for everything. Got it. Awesome. One thing I wanted to come back to that I wrote down, that I think is really important for people to understand:
You talked about how with Windsurf there's a boilerplate code base that you want to start with, because it's not an abstracted zero-to-one app builder. It's an actual IDE you're coding in. And you talked about how it has to install dependencies, which is kind of this painful thing.
And the reason it has to do that is because it's running locally on your machine versus in the cloud, like, say, Lovable and Replit and all these guys. Although I think Bolt runs in your browser in this really cool way. So that's an important distinction. You're running this locally on a machine, and it has all the libraries you need to actually run it. I think that's important. I think we believe a lot of people
sort of build software in what are called codespaces and things, in a remote machine. I just think a lot of developers like building locally, for what you said: if you're doing things that are more than just full-stack applications, you might have dependencies on your machine that are just system dependencies that are gnarly to install. Let's imagine you're building a GPU-based application and the NVIDIA drivers are necessary.
You just want to give people the flexibility to build where they can build. And I think, you know, the IDE and building locally has been a thing that people have done for, you know, decades. So probably it's not going to go away in the next couple of years. I love that your sales folks now are running like local host servers.
Well, with the browser previews, it's easier, right? You kind of just open it up on the side. Yeah. Yeah. Oh, my God. Okay. I have a few more questions just about how you think and operate at Codeium. So you guys are kind of at the forefront of how product teams are going to operate, like you're seeing the future every day. And so I'm curious if there's ways you guys have structured your teams,
engineers, product, design, that might be different from how other companies are doing it, or have tried stuff that has worked really well, or tried stuff that hasn't worked and has been a huge disaster. You know, one interesting decision that we have for core engineering is that we don't have pure product managers for the core engineering side of the company. And by the way, that's purely because we build for developers and our
product is built by developers. So I think the intuition from our own developers is hopefully valuable. If not, we might be hiring the wrong type of people. So I think our developers are in some sense flexing to be more like
conventional product managers. Now, on the other hand, if we were building something that looked more like Uber, where the persona was very different and we didn't ourselves understand it, I think the organization wouldn't look the way it looks.
For the enterprise side of the company, because we do work with a lot of large enterprises where the requirements are not something that like our engineers would automatically understand, right? I don't think our engineers wake up and they're like, we need FedRAMP.
Right. Like this is probably something that a lot of customers come to us and tell us. We have people that kind of flex in this product strategy role, that understand what the customer wants and understand the technical capabilities that we have, to best build a product that would help them at scale. So I think we have an interesting organization in this regard, but mostly, I would say, because we are a developer-based product.
Right. I would say that's true. And then also kind of like what you said for the engineering team itself. The team structure is fairly flat. We try to go with two pizza teams, teams that are fairly small, just because I think the problem is when a team gets too big, the person leading the team is no longer able to get in the weeds of the technology itself.
And I think in a space that's moving this quickly, it's dangerous to have leaders that don't understand the technology deeply and are not building. It's very, very dangerous, because there's too much armchair quarterbacking. And so I think that's maybe one other decision we made. And then teams are very, very flexible. So if we decide something is a new priority, we're very quick to change the way a team looks, and it's very centrally planned in this regard.
The two pizza team concept I saw a tweet long ago where someone from India was like, there's always talk about two pizza teams, but pizzas in India are much smaller. And so the teams end up being smaller. And they're like, why can't we build as much as these teams in the US? Oh, man.
How many PMs do you have? So you said you have 150 employees, something like that? Yeah, we have. So in terms of the product strategy function, we have three people in that role right now. I see. So their titles are product strategy, not product management. That's right. Interesting. And then 50 engineers, you said 80-ish sales folks.
Yes, that's right. And then obviously we have functions like recruiting, you know, parts of G&A like finance, right? We have marketing at the company, right? So some other functions internally as well. What's interesting, and this is something that you hear all the time: people like Dario from Anthropic, for example, talking about how 90% of code is going to be written by AI. But all at the same time, you guys are all hiring engineers like crazy. Yeah.
Is that contradictory? Is that contradictory? Will there be an inflection point of like, all right, now we don't need them anymore? You know, I think it really comes down to, do you get incremental value by adding more engineers internally? First of all, you know, maybe just to set the record straight: if AI is writing over 90% of the code, that doesn't mean engineers are 10x as productive.
Right. Engineers spend more time than just writing code. They review code, test code, debug code, design code, deploy code, right? Navigate code. There's probably like a lot of different things that engineers do. There's this one famous law in parallel computing. It's called Amdahl's law. I don't know. I don't know if you've heard about it, but it basically says if you have a graph of tasks and you have this critical path,
and you take any one task and parallelize it a ton, which is to say you make it take almost zero time, there's still a limit on how much faster it made the whole process go. So maybe put simply, let's say you have 100 units of time, and only 30 units of time is being spent writing software. And I took the 30 and made it 3. I only took the 100 and made it 73. It's only a 27% improvement, right? In the grand scheme of things.
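The arithmetic in that example is just Amdahl's law applied to the coding slice of an engineer's time; here's a quick sketch, with the 30% and 10x figures taken from the example above rather than from measured data.

```typescript
// Amdahl's law: overall speedup = 1 / ((1 - p) + p / s),
// where p is the fraction of time you sped up and s is how much faster that part got.
function amdahlSpeedup(p: number, s: number): number {
  return 1 / ((1 - p) + p / s);
}

// The example above: 100 units of time, 30 spent writing code, coding made 10x faster.
const newTotal = 70 + 30 / 10;            // 73 units of time
const speedup = amdahlSpeedup(0.3, 10);   // ~1.37x overall
const timeSaved = 1 - 1 / speedup;        // ~0.27, i.e. the 27% improvement quoted
console.log(newTotal, speedup, timeSaved);
```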
So I think, look, we're definitely seeing over 30, maybe close to 40% productivity improvements. But I think for the vision that we're solving for, even if I were to say the company in the long term had 200 engineers, that would probably still be too low at that point.
So the question is like, how much more productivity do you get per person? You know, actually, maybe just to even say one other thing for some of these large companies, let's say you took the CIO of a company like JP Morgan Chase, right? her budget on software every year is 17 billion dollars and you there's over 50 000 engineers inside the company right and you told her hey like each of these engineers are now able to produce more technology that's effectively what you've done right
The right calculus that JPMorgan Chase or any of these companies will make is that the ROI of building technology has actually gone up. So the opportunity cost of not investing more into technology has gone up, which means that you should just invest even more. And maybe in the short term you have even more engineers.
Right now, that's not true across the board. There are some companies that are happy with the amount of technology they're building, and there's a ceiling on the amount of technology they want to build. But for companies that actually have a very high technology ceiling, this doesn't mean you stop. This actually means you hire more.
This is a great bull case for engineers. I feel like there's like the canary in the coal mine for the engineering profession is when companies like yours slow down on hiring engineers. Yeah, that's not happening. Seems like Anthropic is also hiring a lot. Yeah, everyone is. So I think that's really promising. I think if you're in college, still makes sense to get into engineering at this point.
Okay, let me ask you this question as kind of a final question, maybe. What's maybe the most counterintuitive thing you've learned about building AI products, building Windsurf, and just being in this space? I think one of the weird things is, online, everyone is very excited about the short-term wins that we're making, right? Like what we're putting out maybe weekly. We do these waves every couple of weeks.
But actually, a lot of the bets we're making inside the company are for things that are not, you know, three, four weeks away, but maybe three, six, nine months away. That's what we're working on internally. Because I think this is kind of, Lenny, what I was mentioning to you before: one of the goals that I tell everyone at our company is we should be cannibalizing the existing state of our product every six to 12 months.
Every six to 12 months, it should make our existing product look silly. It should almost make the form factor of the existing product look dumb. So there's this weird tension where you want to have a product in market and you want to incrementally iterate and listen to users and keep making it better and better. But I would say we were the first agentic IDE product out there, right? That's what we landed with.
And I think that the value of that is going to depreciate very quickly unless we continue to reprove ourselves. And we will need to reprove ourselves in ways in which our users are not even asking. So there's this tension here, where incremental feels very safe, right? Add this one more button. Users say, hey, I would like to be able to have this dropdown to do X. But that is not the reason why we're going to win.
That's almost table stakes. Yeah, we'll decide to do some of these. We might not decide to do a lot of these things. But it's these longer-term efforts inside the company, that almost disrupt the existing product, that are ultimately the reason why we're going to succeed. And it's this weird tension that you need to have in your head, because you can't also not listen to your users at all, because they're the reason you exist.
This reminds me of a recent podcast guest. We had Gaurav from Captions on the podcast, and he told us that he has two roadmaps. They have two roadmaps at the company. They have the real roadmap, the typical roadmap based on feature requests and user feedback and data and things like that. And then they have the secret roadmap,
which is completely not informed by users or data. It's just them making bets on where they think the world is going. That's right. And I love that he calls it the secret roadmap just to make it very mysterious. That's very smart. Okay, I have one more question. I apologize. What's one thing that you wish you had known before starting Codeium? Honestly, I wish I had... Maybe humility is the wrong term, but this idea of just being okay with being wrong faster.
Like, I always think about things, when we make decisions, me and my co-founder, we always talk about it. We're almost like, hey, I wish we had made the decision to do this a couple months earlier. We always talk about this. And the weird thing is, outside looking in, everyone's like, wow, actually, you know, the decision was made at the right time. But in my head, I'm always banging my head being like, what if we had made it a couple months earlier?
And I think part of that is, you know, I wax poetic about, oh, you need to be irrationally optimistic and uncompromisingly realistic. But it's very hard to do this in practice. Because you drink your own Kool-Aid too. Because if you're not drinking your own Kool-Aid, you won't get up out of bed. Otherwise, the answer is already solved. It's not actually any of these startups. The answer is Microsoft is going to be the winner in any software category.
Right. Isn't that the answer? Just because of distribution, resources, and capital, right? They're going to commoditize every space. So I think in some ways, this amount of just understanding that, hey, reevaluate your hypotheses and get into an uncomfortable space way more frequently, is something I need to remind myself even to this day. And probably something that I didn't know coming in and starting the company, right? We started the company at like peak ZIRP time.
At that time, probably everything seemed like it was going to the moon, and there was probably a lot of irrational confidence, I would say, that we shouldn't have had. Varun, we covered so much ground. What an incredible conversation. I learned so much just sitting here listening and asking you questions. Is there anything else that you wanted to share or leave listeners with? Any last nuggets of wisdom before I let you go?
To be honest, I could give predictions about the space. Probably most of them are going to be wrong. I think the best thing to do is just get your hands dirty with all of these products. And I think one of the most obvious things that's going to happen is in the next year, there will be a tremendous amount of alpha for anyone that is able to take maximum advantage of these tools.
Right. Just imagine how many of your coworkers just don't even know of the existence of these tools, don't know what they can do, and how much less productive they will be. I would just say, get your hands
as dirty as possible as quickly as possible. And when you say get your hands dirty, basically it's like, download Windsurf, start coding, ask it to build things for you? Yeah, build apps. Build apps, start using it for maybe even making mocks, modifying your existing code base. There's probably ways in which you could be a force multiplier to your organization in ways in which they never even anticipated.
Right. Imagine if you were a product manager that could actually very quickly make edits to the code base and just start pushing changes yourself. You'd probably get a tremendous amount of respect from your own engineering peers. You could probably get way more done because of that. I feel like there's no sort of ceiling at that point. I think this is such an underestimated point you're making here. There's apps that can build things from scratch, and then there's...
apps like this that can edit your existing code base. If you're a PM, what's the largest company you work with people-wise? Publicly, I'd say, let's just say JPMorgan Chase. They have over 50,000 developers. Okay, so you could be a PM at JPMorgan Chase and be like, I have a problem I need to solve. I want to move this metric. I want to... change the step in the signup flow. You just open up Windsurf and tell it to do the thing you want.
And then can you push straight to GitHub and do a merge request? Yeah, actually, you could do that too. Okay, a PR. Yeah, it could make a PR for you. This is insane. Okay. The future is out of control. Okay. That was such an important point at the end there, because I think people may not realize this. They see all these other apps and they're like, oh, it builds a nice prototype.
But this is legitimately something a PM can actually do work with. Yeah. When you think about the people, at least the ones, I don't know, Lenny, who you respect the most, they're the people that somehow, despite their title, their level of agency and just output, from all the way down in the weeds
to the highest-level strategy, is just perfection, right? They know when to go all the way down. And I think, you know, sometimes you see people that talk about roles, and they irrationally feel like, oh, because I'm this role, I'm not allowed to touch this. Well, now everything's open season. And I think this is an opportunity to almost go all the way down to the weeds and all the way up to the top, and just be effective on every level.
Unbelievable. All right. Well, with that, we'll leave it there, folks. Varun, thank you so much for being here. Awesome. Thanks a lot, Lenny. What an incredible conversation. Thanks, Varun, and bye, everyone. Thank you so much for listening. Also, please consider giving us a rating or leaving a review, as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at Lennyspodcast.com. See you in the next episode.