
The rise of Cursor: The $300M ARR AI tool that engineers can’t stop using | Michael Truell (co-founder and CEO)

May 01, 2025 · 1 hr 11 min

Summary

Michael Truell discusses Cursor's journey to becoming a $300M ARR AI code editor, driven by the vision of "what comes after code." He shares insights on building custom AI models, the increasing importance of "taste" in software development, and the future of engineering roles. The conversation covers market dynamics, defensibility in AI, and advice for those preparing for the AI-driven future.

Episode description

Michael Truell is the co-founder and CEO of Anysphere, the company behind Cursor—the fastest-growing AI code editor in the world, reaching $300 million in annual recurring revenue just two years after its launch. In this conversation, Michael shares his vision for the future, lessons learned, and advice for preparing for the fast-approaching AI future.

What you’ll learn:

• Cursor's early pivot from automating CAD to automating code

• Michael’s vision for “what comes after code” and how programming will evolve

• Why Cursor built their own custom AI models despite not starting there

• Key lessons from Cursor’s rapid growth

• Why “taste” and logic design will become more valuable engineering skills than technical coding ability

• Why the market for AI coding tools is much larger than people realize—and why there will likely be one dominant winner

• Michael’s advice for engineers and product teams preparing for the AI future

Brought to you by:

Eppo—Run reliable, impactful experiments

Vanta—Automate compliance. Simplify security

OneSchema—Import CSV data 10x faster

Where to find Michael Truell:

• X: https://x.com/mntruell

• LinkedIn: https://www.linkedin.com/in/michael-t-5b1bbb122/

• Website: https://mntruell.com/

In this episode, we cover:

(00:00) Introduction to Michael Truell and Cursor

(04:20) What comes after code

(08:32) The importance of taste

(12:39) Cursor’s origin story

(18:31) Why they chose to build an IDE

(22:39) Will everyone become engineering managers?

(24:31) How they decided it was time to ship

(26:45) Reflecting on Cursor's success

(32:03) Counterintuitive lessons on building AI products

(34:02) Inside Cursor's stack

(38:42) Defensibility and market dynamics in AI

(46:13) Tips for using Cursor

(51:25) Hiring and building a strong team

(59:10) Staying focused amid rapid AI advancements

(01:02:31) Final thoughts and advice for aspiring AI innovators

Referenced:

• Cursor: https://www.cursor.com/

• Microsoft Copilot: https://copilot.microsoft.com/

• Scaling laws for neural language models: https://openai.com/index/scaling-laws-for-neural-language-models/

• MIT: https://www.mit.edu/

• Telegram: https://telegram.org/

• Signal: https://signal.org/

• WhatsApp: https://www.whatsapp.com/

• Devin: https://devin.ai/

• Visual Studio Code: https://code.visualstudio.com/

• Chromium: https://chromium.googlesource.com/chromium/src/base/

• Exploring ChatGPT (GPT) Wrappers—What They Are and How They Work: https://learnprompting.org/blog/gpt_wrappers

• OpenAI’s CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai

• Behind the founder: Marc Benioff: https://www.lennysnewsletter.com/p/behind-the-founder-marc-benioff

• DALL-E 3: https://openai.com/index/dall-e-3/

• Stable Diffusion 3: https://stability.ai/news/stable-diffusion-3

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected].

Lenny may be an investor in the companies discussed.



Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe

Transcript

Our goal with Cursor is to invent a new type of programming, a very different way to build software. So a world kind of after code. I think that more and more being an engineer will start to feel like

being a logic designer. And really it will be about specifying your intent for how exactly you want everything to work. What is the most counterintuitive thing you've learned so far about building Cursor? We definitely didn't expect to be doing any of our own model development. And at this point... every magic moment in Cursor involves a custom model in some way. What's something that you wish you knew before you got into this role? Many people you hear hired too fast. I think we actually hired

too slow to begin with. You guys went from zero dollars to $100 million ARR in a year and a half, which is historic. Was there an inflection point where things just started to really take off? The growth has been fairly just consistent on an exponential. An exponential to begin with feels fairly slow, and the numbers are really low, and it didn't really feel off to the races to begin with. What do you think is the secret to your success? I think it's been...

Today, my guest is Michael Truell. Michael is co-founder and CEO of Anysphere, the company behind Cursor. If you've been living under a rock and haven't heard of Cursor, it is the leading AI code editor and is at the very forefront of changing how engineers and product teams build software. It's also one of the fastest-growing products of all time, hitting $100 million ARR just 20 months after launching,

and then $300 million ARR just two years since launch. Michael's been working on AI for 10 years. He studied computer science and math at MIT, did AI research at MIT and Google, and is a student of tech and business history. As you'll soon see, Michael thinks deeply about where things are heading and what the future of building software looks like.

We chat about the origin story of Cursor, his prediction of what happens after code, his biggest counterintuitive lessons from building Cursor, where he sees things going for software engineers, and so much more. Michael does not do many podcasts. The only other podcast he's ever done is Lex Fridman's. So it was a true honor to have Michael on. If you enjoy this podcast, don't forget to subscribe and follow it in your favorite podcasting app or YouTube.

Also, if you become an annual subscriber of my newsletter, you get a year free of Perplexity, Linear, Superhuman, Notion, and Granola. Check it out at Lennysnewsletter.com and click "Bundle." With that, I bring you Michael Truell. This episode is brought to you by Eppo. Eppo is a next-generation A/B testing and feature management platform built by alums of Airbnb and Snowflake for modern growth teams. Companies like Twitch, Miro, ClickUp, and DraftKings rely on Eppo to power their experiments.

Experimentation is increasingly essential for driving growth and for understanding the performance of new features. And Eppo helps you increase experimentation velocity while unlocking rigorous, deep analysis in a way that no other commercial tool does. When I was at Airbnb, one of the things that I loved most was our experimentation platform,

where I could set up experiments easily, troubleshoot issues, and analyze performance all on my own. Eppo does all that and more, with advanced statistical methods that can help you shave weeks off experiment time, an accessible UI for diving deeper into performance, and out-of-the-box reporting

that helps you avoid annoying, prolonged analytics cycles. Eppo also makes it easy for you to share experiment insights with your team, sparking new ideas for the A/B testing flywheel. Eppo powers experimentation across every use case, including product, growth, machine learning, monetization, and email marketing. Check out Eppo at geteppo.com/lenny and 10x your experiment velocity. That's geteppo.com/lenny.

This episode is brought to you by Vanta. When it comes to ensuring your company has top-notch security practices, things get complicated fast. Now you can assess risk, secure the trust of your customers, and automate compliance for SOC 2, ISO 27001, HIPAA, and more with a single platform, Vanta. Vanta's market-leading trust management platform helps you continuously monitor compliance,

alongside reporting and tracking risks. Plus, you can save hours by completing security questionnaires with Vanta AI. Join thousands of global companies that use Vanta to automate evidence collection, unify risk management, and streamline security reviews. Get $1,000 off Vanta when you go to vanta.com/lenny. That's V-A-N-T-A dot com slash Lenny.

Michael, thank you so much for being here. Welcome to the podcast. Thank you. It's great to be here. Thank you for having me. When we were chatting earlier, you had this really interesting phrase, this idea of what comes after code. Talk about that, just like the vision you have of where you think things are going in terms of moving from code to maybe something else.

Our goal with Cursor is to invent sort of a new type of programming, a very different way to build software that's kind of just distilled down into you describing your intent to the computer for what you want in the most concise way possible, really distilled down to you just defining how you think the software should work and how you think it should look.

And yeah, with the technology that we have today and as it matures, we think you can get to a place where you can invent a method of building software that's leagues higher-level and more productive, in some cases more accessible too. And that process will be a gradual moving away from what building software looks like today.

And I want to contrast that with a couple of visions of what software looks like in the future that are in the popular consciousness, and that we at least have some disagreement with.

One is, there's a group of people who think that software building in the future is going to look very much like it does today, which mostly means text editing, formal programming languages like TypeScript and Go and C and Rust.

And then there's another group that kind of thinks, you know, you're just going to type into a bot, and you're going to ask it to build you something, and then you're going to ask it to change something about what you're building. It's kind of this chatbot, Slack-bot style where you're talking to your engineering department. And we think that there are problems with both of those visions.

We think it's going to look weirder than both. The problem with the chatbot-style end of things is that it lacks a lot of precision. If you want humans to have complete control over what the software looks like and how it works, you need to let them gesture at what they want to be changed

in a form factor that's more precise than just typing "change this about my app" into a text box removed from the whole thing. And then the version of the world where kind of nothing changes, we think, is wrong, because we think the technology is going to get much, much, much better. And so a world kind of after code, I think, looks like a world where you have a representation of the logic of your software that does look more like English.

Right, you have written it down, you can imagine in document form, you can imagine in kind of an evolution of programming languages toward pseudocode. You have written down the logic of the software, and you can edit that at a high level, and you can point at it. And it won't be kind of the impenetrable millions of lines of code. It'll instead be something that's much terser and easier to understand and easier to navigate.
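A minimal sketch of the gap he's describing, assuming a hypothetical English-level spec (the spec and code below are an editor's illustration, not anything from Cursor):

```python
# Hypothetical English-level "logic spec" of the kind described here:
# terse, human-readable, editable at a high level (illustrative only).
SPEC = """
When a user signs up:
  - reject emails without an "@"
  - store the user with a lowercased email
  - send a welcome message, retrying up to 3 times
"""

# Today, the same intent has to be hand-translated into formal code:
def sign_up(email: str, store, mailer) -> None:
    if "@" not in email:
        raise ValueError("invalid email")
    store.save(email.lower())
    for _attempt in range(3):
        try:
            mailer.send_welcome(email)
            return
        except ConnectionError:
            continue
```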

But that world where, yeah, the kind of crazy hard-to-understand symbols start to evolve toward something that's a little bit more human-readable and human-editable is one that we're working toward. This is a profound point. I think I want to make sure people don't miss what you're saying here, which is that what you're envisioning, in the next year essentially, is kind of when things start to shift:

People move away from even seeing code, having to think in code in JavaScript and Python. And there's this abstraction that will appear, essentially pseudocode, describing what the code should be doing more in English sentences. Yep, we think it ends up looking like that. And we're very opinionated that that path goes through kind of existing professional engineers. And it looks like this evolution away from code. And it definitely looks like the human still being in the driver's seat, right?

And the human having both a ton of control over all aspects of the software and not giving that up. And then also the human having the ability to make changes very quickly, like having a fast iteration loop, and not just, you know,

having something in the background that's super slow and takes weeks to go do all your work for you. This begs the question: for people that are currently engineers, or thinking about becoming engineers, or designers or product managers, what skills do you think will be more and more valuable in this world?

I think taste will be increasingly more valuable. And often when people think about taste in the realm of software, they think about visuals, or taste over smooth animations and coloring, UI, UX, et cetera, kind of the visual design of things. And the visual side of things is an important part of defining a piece of software. But then, as mentioned before, I think that the other half of defining a piece of software is the logic of it,

and how the thing works. And we have amazing tools for speccing out the visuals of things. But when you get into the logic of how a piece of software works, really the best representation we have of that is code right now. You can kind of gesture at it with Figma, and you can gesture at it with writing down notes, but it's only really captured when you have an actual working prototype. And so I think that more and more being an engineer will start to feel like being a logic designer.

And really it will be about specifying your intent for how exactly you want everything to work. It will be more about the what and a little bit less about how exactly you're going to do things under the hood.

And so, yeah, I think taste will be increasingly important. There's one aspect of software engineering, and we're very far from this right now, and there are lots of funny memes going around the internet about some of the trials and tribulations people can run into if they trust AI for too many things, with constant sharing around

building apps that have glaring deficiencies and problems and functionality issues. But I think we will get to a place where you will be able to be less careful as a software engineer, which right now is an incredibly, incredibly important skill. And yeah, we'll move a little bit away from carefulness and a little bit more toward taste. This makes me think of vibe coding. Is that kind of what you're describing when you talk about not having to think about the details as much and just kind of...

going with the flow. I think it's related. I think that vibe coding right now describes exactly kind of this state of creation, which is pretty controversial, where you're generating a lot of code and you aren't really understanding the details.

That is a state of creation that has lots of problems. By not understanding the details under the hood, you very quickly get to a place where you're limited: you create something that's big enough that you can't change it. And so I think some of the ideas that we're interested in, around how you give people continued control over all the details

when they don't really understand the code, I think solutions there are very relevant to the people who are vibe coding right now. You know, I think that right now we lack the ability to let the tastemakers actually have complete control over the software. And so...

One of the issues also with vibe coding and letting taste really shine through from people is you can create stuff, but a lot of it is the AI making decisions that are unwieldy and you don't have control over. One more question along these lines. You threw out this word taste. When you say taste, what are you thinking? I'm thinking having the right idea for what should be built.

And then it will become more and more about kind of effortless translation: here's exactly what you want built, here's how you want everything to work, here's how you want it to look. And then you'll be able to make that

on a computer, and it will less be about this kind of translation layer where you and your team have a picture of what you want to build and then have to painstakingly, labor-intensively lay that out into a format that a computer can execute and interpret. And so, yeah, I think it's less the UI side of things. Maybe taste is a little bit of a misnomer, but it's just about having the right idea for what should be built.

Awesome. Okay. I'm going to come back to these topics, but I want to actually zoom us back out to the beginnings of Cursor. I've never heard the origin story. I don't think many people know how this whole thing started.

Basically, you guys are building one of the fastest-growing products in the history of the world. It's changing the way people build products. It's changing careers, professions. It's changing so much. How did it all begin? Any memorable moments along the journey of the early days? Cursor kind of started as a solution in search of a problem, and it very much came from reflecting on how AI was going to get better over the course of the next 10 years.

And there were kind of two defining moments. One was being really excited by using the first beta version of GitHub Copilot, actually. This was the first time we had used an AI product that was really, really, really useful, that was actually just useful at all and wasn't just a vaporware kind of demo thing. And in addition to being the first AI product that we used that was useful, GitHub Copilot was also one of the most useful, if not the most useful, dev tools we'd ever adopted.

And that got us really excited. Another moment that got us really excited was the series of scaling launch papers coming out of OpenAI and other places that showed that even if we had no new ideas, AI was going to get better and better just by pulling on simple levers, like scaling up the models and also scaling up the data that was going into the models.

And so at the end of 2021, beginning of 2022, this got us excited: AI products were now possible, and this technology was going to mature into the future. And it felt like when we looked around, there were lots of people talking about making models, but it felt like people weren't really picking an area of knowledge work and thinking about what it was going to look like as AI got better and better.

And, you know, that set us on the path to kind of an idea-generation exercise. It was like: how is each of these areas of knowledge work going to change in the future as this tech gets more mature? What is the end state of the work going to look like? How are the tools that we use to do that work going to change? How are the models going to need to get better to support changes in the work?

And, you know, beyond scaling and pre-training, how are you going to keep pushing forward technological capabilities? And the misstep at the beginning for us is: we did this whole grand exercise, and we decided to work on

an area of knowledge work that we thought would be relatively uncompetitive and sleepy and boring, where no one would be looking, because we thought, oh, coding's great, coding's totally going to change because of AI, but people are already doing that.

And so there was a period of four months to begin with where we were actually working on a very different idea, which was helping to automate and augment mechanical engineering, building tools for mechanical engineers. Me and my co-founders, we weren't mechanical engineers. We had friends who were mechanical engineers, but we were very much unfamiliar with the field, so there was a little bit of a blind-men-and-the-elephant problem from the get-go.

There were problems around how you would actually take the models that exist today and make them useful for mechanical engineering. The way we netted it out is you'd need to actually develop your own models from the get-go. And that was tricky: there's not a lot of data on the internet of 3D models of different tools and parts and the steps that it takes to build up to those 3D models.

And then getting that data from the sources that have it is also a tricky process. Eventually, what happened was we came to our senses. We realized we were not super excited about mechanical engineering; it's not the thing we want to dedicate our lives to. We looked around, and in the area of programming, it felt like despite

a decent amount of time passing, not much had changed. And it felt like the people that were working on the space maybe had a disconnect with us; it felt like they weren't being sufficiently ambitious about where everything was going to go in the future and how kind of all software creation was going to flow through these models. And that's what set us off on the path to building Cursor.

OK, so interesting. So first of all, there's this advice that you often hear of: go after a boring industry, because no one's going to be there and there's opportunity. And sometimes it works. But I love that it's like, no, actually go after the hottest, most popular space, AI coding, app building, and it worked out. And the way you phrased it just now is you didn't see enough ambition, potentially, and you thought there was more to be done.

So it feels like that's an interesting lesson. Even if something looks like, OK, it's too late, there's GitHub Copilot out there, some other products, if you notice that they're just not as ambitious as they could be or as you are, or you see almost a flaw in their approach, there's still a big opportunity. Does that resonate? That totally resonates. I think it's...

A part of it is you need there to be leapfrogs that can happen. You need there to be things that you can do. And I think the exciting thing about AI, in a bunch of places, and I think this is very much still true of our space, and I can talk about how we think about that and how we deal with that, is that the ceiling is really high. And if you look around, probably even if you take the best tool in any of these fields,

there's a lot more that needs to be done over the next few years. And so having that space, having that high ceiling, I think is unique amongst areas of software, at least the degree to which the ceiling is high with AI. Let's come back to the IDE question. So there's kind of a few routes you could have taken, and other companies are taking different routes. So there's building an IDE for engineers to work within and adding AI magic to it.

There's another route of just a full AI agentic, Devin sort of product. And then there's just a model that is very good at coding, focusing on building the best possible coding model. What made you decide and see that the IDE path was the best route? The folks who were, from the get-go, working on just a model or working on end-to-end automation of programming, I think...

They were trying to build something very different from us, in that we care about giving humans control over all the decisions in kind of the end tool that they're building. And I think those folks were very much thinking of a future where, in time, the whole thing is done by AI, and maybe the AI is making all the decisions too.

And so one, there was kind of a personal-interest component. Two, I think we always try to be intense realists about where the technology is today. We're very, very, very excited about how AI is going to mature over the course of many decades. But I think that sometimes people...

There's an instinct to see AI doing magical things in one area and then kind of anthropomorphize these models and think it's better than a smart person here and so it must be better than a smart person there. But these things have massive issues. And from the very start, our product development process was really about dogfooding and using the tool intensely every day. And we never wanted to ship anything that wasn't useful to us.

And, you know, we had the benefit of doing that because we were the end users of our own product. And I think that instills a realism in you around where the tech is right now. That definitely made us think that we need the humans to be in the driver's seat; the AI cannot do everything. We were also interested in giving humans that control for personal reasons too.

And so that gets you away from being just a model company, and it also gets you away from just kind of this end-to-end stuff without the human having control. And then the way you get to an IDE, versus maybe a plug-in to an existing coding environment, is the belief that programming is going to flow through these models, and the act of programming is going to change a lot over the course of the next few years.

And the extensibility that existing coding environments have is so, so, so limited. So if you think that the UI is going to change a lot, if you think that the form factor of programming is going to change a lot, you necessarily need to have control over the entire application. I know that you guys today have an IDE, and...

that's probably the bias you have of where the future is heading. But I'm just curious, do you think a big part of the future is also going to be AI engineers that are just sitting in Slack, just doing things for you? Is that something that fits into Cursor one day? I think you'll want the ability to move between all these things totally effortlessly.

And sometimes I think you will want to have the thing kind of go spin off on its own for a while. And then I think you'll want the ability to pull in the AI's work and then work with it very, very, very quickly, right? And then maybe have it go spin off again. And so these kind of background versus foreground form factors, I think you want that all to work well in one place.

And the background stuff, there's a segment of programming that it's especially useful for, which is the type of programming task where it's very easy to specify exactly what you want without much description, and exactly what correctness looks like without much description. Bug fixes are a great example of that, but that's definitely not all of programming.

So I think that what the IDE is will totally change over time. And kind of our approach to having our own editor was premised on that: it's going to have to evolve over time. And I think that will include being able to spin off things from different surface areas, like Slack or your issue tracker or whatever it is.

And I think that will also include, you know, the pane of glass that you're staring at is going to change a lot. And we just mostly think of an IDE as the place where you are building software. I think something people don't talk enough about when talking about agents and all these AI engineers that are going to be doing all this stuff for you: basically, we're all becoming engineering managers, with a lot of reports that are just, like, not that...

not that smart, and you have to do a lot of reviewing and approving and specifying. I guess, thoughts on that? And is there anything you could do to make that easier? Because that sounds really hard. Anyone that has a large team, or has had a large team, knows the feeling of, oh my god, all these junior people just checking in with me, doing

not-high-quality work over and over. It's just like, what a life. It's going to suck. Maybe eventually one-on-ones with... So many one-on-ones. Yeah, so the customers we've seen have...

the most success with AI, I think, are still fairly conservative about some of the ways in which they use it. And so I do think today that the most successful customers really lean on things like our next-edit prediction, where you're coding as normal and we're predicting the next set of actions you're going to take. And then they also really lean on scoping down the stuff that they're going to hand off to the bot.

And, you know, for a fixed percent of your time spent reviewing code from an agent, or from an AI overall, there are kind of two patterns. One is you spend a bunch of time specifying things up front, the AI goes and works, you then go and review the AI's work, and then you're done. That's the whole task.

Or you can really chop things up, right? So you specify a little bit, the AI writes something, you review; specify a little bit, the AI writes something, you review. And autocomplete is kind of all the way at that end of the spectrum. And still, we often see the most successful people using these tools chopping things up right now and keeping things careful. That sounds less terrible. I'm glad there's a solution here. I want to go back to you guys building Cursor for the first time.

What was the point where you realized this is ready? What was kind of the moment of, OK, I think this is time to put it out there and see what happens? So when we started building Cursor, we were fairly paranoid about spinning for a while without releasing to the world. And so to begin with, actually, the first version of Cursor was hand-rolled.

Now we use VS Code kind of as a base, like many browsers use Chromium as a base and work off of that. To begin with, we didn't, and we built a prototype of Cursor from scratch. And that involved a lot of work. There are a lot of things that go into a modern code editor, including support for many different languages, navigation support for moving around a code base, error-tracking support, things like that.

There are things like an integrated command line, the ability to use remote servers, the ability to connect to remote servers to view and run code. And so we kind of just went on this blitz of building things incredibly quickly, building our own editor from scratch and then also the AI components. And it was after maybe five weeks that we were living on the editor full-time. We had thrown away our previous editor

and were using the new one. And then once it got to a point where we found it a bit useful, we put it in other people's hands and had this very short beta period. And then we launched it out to the world within a couple of months from the first line of code.

And it was definitely a, you know, let's-just-get-this-out-to-people-and-build-in-public-quickly approach. The thing that took us by surprise is we thought we would be building for a couple hundred people for a long time. And, you know, from the get-go, there was kind of an immediate crush of interest.

And a lot of feedback, too. And that was super helpful; we learned from that. And that's actually why we switched to being based off of VS Code instead of this hand-rolled thing: a lot of that was motivated by the initial user feedback. And we've been iterating in public from there. I like how you understated the traction that you got. I think you guys went from $0 to $100 million ARR in like a year, year and a half or something like that, which is historic.

What do you think was the key to success of something like this? You talked about dogfooding being a big part of it. Like, you built it in three months. That's insane. What do you think is the secret to your success? The first version, you know, the three-month version, wasn't very good. And so I think it's been sustained

paranoia about all of the ways in which this thing could get better. The end goal is really to invent a very new form of programming that involves automating a lot of coding as we know it today. And no matter where we are with Cursor, it feels like we're very, very far away from that end goal. And so there's always a lot to do.

But I think a lot of it hasn't been over-rotating on kind of that initial push, but instead the continued evolution of the tool, just making the tool consistently better. Was there an inflection point after those three months where things just started to really take off? To be honest, it felt fairly slow to begin with. And maybe it comes from some impatience on our part. But there's the overall speed of the growth, which

continues to take us by surprise. I think one of the things that has been most surprising, too, is that the growth has been fairly just consistent on an exponential, just consistent month-over-month growth, accelerated at times by launches on our part and other things. But, you know, an exponential to begin with feels fairly slow, and the numbers are really low. And so it didn't really feel off to the races to begin with.

To me, this sounds like build-it-and-they-will-come actually working. You guys just built an awesome product that you loved yourselves as engineers, you put it out, people just loved it and told everyone about it. It being essentially all just us, you know, the team working on the product and making the product good in lieu of other things one could spend one's time on... we definitely spent time on tons of other things. For instance, building the team was incredibly important.

Doing things like support rotations was very important. But some of the normal things that people would maybe reach for in building the company early on, we really let those fires burn for a long time, especially when it came to things like sales and marketing. And so just working on the product, building a product that you like and your team likes, and then also adjusting it for some set of users, that can kind of sound simple, but it's hard to do well.

And there were a bunch of different directions one could have run, a bunch of different product directions. And I think that one of the difficult things is focus: strategically picking the right things to build and prioritizing effectively is tricky. I think another thing that's tricky about this domain is it's kind of a new form of product building,

where it's very interdisciplinary, in that we are something in between a normal software company and a foundation model company. You know, we're developing a product for millions of people, and that side of things has to be excellent. But also, one important dimension of product quality is doing more and more on the science, doing more and more on the model side of things.

And so doing that element of things well, too, has been tricky. But yeah, the overall thing I would note is, maybe some of these things sound simple to specify, but doing them well is hard, and it's tough to always get them right. I'm excited to have Andrew Luo joining us today. Andrew is CEO of OneSchema, one of our longtime podcast sponsors. Welcome, Andrew. Thanks for having me, Lenny. Great to be here.

OneSchema works with some of my favorite companies, like Ramp and Vanta and Watershed. I heard you guys launched a new data intake product that automates the hours of manual work that teams spend importing and mapping and integrating CSV and Excel files. Yes. So we just launched the 2.0 of OneSchema File Feeds. We've rebuilt it from the ground up with AI. We saw so many customers coming to us with teams of data engineers that struggled with the manual work required to clean messy spreadsheets.

File Feeds 2.0 allows non-technical teams to automate the process of transforming CSV and Excel files with just a simple prompt. We support all of the trickiest file integrations: SFTP, S3, and even email. I can tell you that if my team had to build integrations like this, how nice would it be to take this off our roadmap and instead use something like OneSchema? Absolutely, Lenny. We've heard

so many horror stories of outages from even just a single bad record in transactions, employee files, purchase orders, you name it. Debugging these issues is often like finding a needle in a haystack. OneSchema stops any bad data from entering your system and automatically validates your files, generating error reports with the exact issues in all bad files.

I know that importing incorrect data can cause all kinds of pain for your customers and quickly lose their trust. Andrew, thank you so much for joining me. If you want to learn more, head on over to oneschema.co. That's oneschema.co. What is the most counterintuitive thing you've learned so far about building Cursor, building AI products? I think one thing that's been counterintuitive for us, and I hinted at it a little bit before:

we definitely didn't expect to be doing any of our own model development when we started. As mentioned, when we got into this, there were companies that were, immediately from the get-go, going and focusing on training models from scratch.

We had done the calculation for what it would take to train GPT-4 and just knew that that was convincingly not something we were going to be able to do. And it also felt a little bit like focusing one's attention in the wrong area, because there are lots of amazing models out there, and why do all this work to replicate what other players have done, especially on the pre-training side of things: taking a neural network that knows nothing and then teaching it the whole internet.

And so we thought we weren't going to be doing that at all. And it seemed clear to us from the start that with the existing models, there were lots of things they could be doing for us that they weren't doing, because the right tools hadn't been built around them. In fact, though, we do a ton of model development internally. It's a big focus for us on the hiring front,

and we have assembled a fantastic team there. And it's also been a big win on the product-quality side of things for us. And at this point, every magic moment in Cursor involves a custom model in some way. And so that was definitely counterintuitive and surprising.

It's been a gradual thing, where there was an initial use case for training our own model where it really didn't make sense to use any of the biggest foundation models. That was incredibly successful, we kind of moved to another use case that worked really well, and it's been going from there. The helpful thing in doing this model development is picking your spots carefully,

not trying to reinvent the wheel, not trying to focus on places where the best foundation models are excellent, but instead kind of focusing on their weaknesses and how you can complement them. I think this is going to be surprising to a lot of people, hearing that you have your own models. When people talk about Cursor and all the folks in the space, they would kind of call them GPT wrappers. They're just sitting on top of...

ChatGPT or Sonnet, and what you're saying is that you have your own models. Talk about just, like, the stack behind the scenes. Yeah, of course. So we definitely use the biggest generation of models in a bunch of different ways. They're really important components of bringing the Cursor experience to people. The places where we use our own models: sometimes it's to serve a use case that a foundation model wouldn't be able to serve at all, for cost or speed reasons.

And so one example of that is the autocomplete side of things. This can be a little bit tricky for people who don't code to understand, but code is this weird form of work where sometimes really the next 5, 10, 20, 30 minutes of your work is entirely predictable from looking over your shoulder. And I would contrast this with writing. So in writing, lots of people are familiar with

Gmail's autocomplete and the different forms of autocomplete that show up when you're trying to write text messages or emails or things like that. They can only be so helpful, because often it's just really not clear what you're going to be writing just by looking at what you've written before.

But in code, sometimes when you edit a part of a code base, you're just going to need to change things in other parts of the code base, and it's entirely clear how you're going to need to change them. And so one core part of Cursor is this really souped-up autocomplete experience, where we predict the next set of things you're going to be doing, across multiple files, across multiple places within a file.

And, you know, making models good at that use case: one, there's a speed component, where those models need to be really fast and need to give you a completion within 300 milliseconds. There's also this cost component, where we're running tons and tons and tons of model calls. Every keystroke, we need to keep changing our prediction for what you're going to do now.

And then it's also this really specialty use case, where you need models that are really good not at completing the next token, just like a generic text sequence, but are really good at auto-completing a series of diffs: looking at what's changed within a code base and then predicting the next set of things that are going to change, both deleted and added and all of that. And we found a ton of success in training models specifically for that task.
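As a rough sketch of the constraints just described (hypothetical and illustrative; `EditDiff` and `model.complete_diffs` are assumed stand-ins, not Cursor's actual types or API):

```python
# A next-edit prediction call shaped by the constraints above: it runs on
# every keystroke, should answer within ~300 ms, and predicts diffs
# (deletions and additions, possibly across files), not just next tokens.
import time
from dataclasses import dataclass

@dataclass
class EditDiff:
    file: str
    start_line: int
    removed: list[str]  # lines predicted to be deleted
    added: list[str]    # lines predicted to be typed

def predict_next_edits(recent_diffs: list[EditDiff], model) -> list[EditDiff]:
    """Ask a small, fast specialty model for the next edits."""
    start = time.monotonic()
    prediction = model.complete_diffs(recent_diffs)  # assumed interface
    if time.monotonic() - start > 0.300:
        # Too slow to feel like autocomplete; show nothing instead.
        return []
    return prediction
```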

So that's a place where no foundation models are involved; it's kind of our own thing. We don't have a lot of labeling or branding about this in the app, but it powers a very core part of Cursor. And then another set of places where we're using our own models is to help things like Sonnet or Gemini or GPT. And those sit both on the input of those big models and on the output. On the input side of things, those models are searching through your codebase,

trying to figure out the parts of the codebase to show to one of these big models. You can kind of think about this as a mini Google search that's specifically built for finding the relevant parts of a codebase to show one of these big models. And then on the output side of things, we take the sketches of the changes that these models are suggesting you make to that code base. And then...

we have models that fill in the details. The high-level thinking is done by these smartest models; they spend a few tokens on doing that. And then these smaller, specialty, incredibly fast models, coupled with some inference tricks, take those high-level changes and turn them into a full code diff.

And so it's been super helpful for pushing on quality in places where you need a specialty task. And it's been super helpful for pushing on speed, which is such an important dimension of product quality for us, too. This is so interesting. I just had Kevin Weil on the podcast, CPO of OpenAI, and he calls this the ensemble of models. That's the same way they work, to use the best feature of each one. And to your point, the cost advantages of using cheaper models.
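A rough sketch of that ensemble, pieced together from the description above (illustrative only; `retriever.search`, `frontier_llm.generate`, and `diff_llm.expand_to_diff` are hypothetical stand-ins, not Cursor's pipeline):

```python
# Three-stage ensemble: retrieval model on the input, a frontier model
# for the high-level sketch, a small fast model to expand it into a diff.

def edit_codebase(request: str, codebase, retriever, frontier_llm, diff_llm) -> str:
    # 1. Input side: a small retrieval model acts like a mini search
    #    engine over the codebase, picking what to show the big model.
    context = retriever.search(codebase, query=request, top_k=8)

    # 2. A frontier model (a Sonnet/Gemini/GPT-class model) spends a few
    #    tokens deciding *what* should change: a high-level sketch.
    sketch = frontier_llm.generate(prompt=request, context=context)

    # 3. Output side: a small, fast specialty model (plus inference
    #    tricks) fills in the details and returns a full code diff.
    return diff_llm.expand_to_diff(sketch, context=context)
```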

These other models, are they based on, like, Llama and things like that, just open-source models that you guys plug into and build on? Yeah, so again, we try to be very pragmatic about the places where we're going to do this work, and we don't want to reinvent the wheel. And so we start from the very best pre-trained models that exist out there.

And sometimes that's in collaboration with these big model providers that don't share their weights out into the world. Because the thing we care about less is the ability to read, line by line, the matrix of weights that goes into giving you a certain output. We just care about the ability to train these things, to post-train them. And so, by and large, yes, open-source models, sometimes working with the closed-source providers to do this too.

This leads to a discussion that a lot of AI founders and investors always think about, which is moats and defensibility in AI. So it feels like one moat in the space is custom models. How do you think about long-term defensibility in the space, knowing there's other folks, as you said, launching constantly, trying to eat your lunch? I think that there are ways to build in inertia and

traditional moats. But I think, by and large, we're in a space where it is incumbent on us, and everyone in this industry, to continue to try to build the best thing. And I truly just think that the ceiling is so high that no matter what entrenchment you build... And I think that this resembles markets that are maybe a little bit different from normal software markets, normal enterprise markets of the past. I think one that comes to mind is the market for search engines at the end of 1999,

or at the end of the '90s and beginning of the 2000s. I think another market that comes to mind that resembles this market in many ways is actually the development of the personal computer and minicomputers. And I think that, yes, in each of those markets, the ceiling was incredibly high, and it was possible to switch.

You could keep getting value for the incremental hour of a smart person's time, the incremental R&D dollar, for a really long time. You wouldn't run out of useful things to build. And then, in search in particular, not in the computer case, adding distribution was helpful for making the product better too, in that you could tune the algorithms, you could tune the learning, based off of the data and the feedback you're getting from users.

And I think that all of those dynamics exist in our market too. And so maybe the sad, sad truth for people like us, but the amazing truth for the world, is I think there are many leapfrogs that can happen, many more useful things to build. We're a long way away from where we can be in 5, 10 years, and it's kind of incumbent on us to keep that effort going. So I'm hearing this sounds a lot more like a consumer

sort of moat, where it's just: be the best thing consistently, so that people stick with you, versus creating lock-in and things like that, like for Salesforce, where it's contracts with the entire company and you have to use this product. Yeah, and I think the important thing to note is, if you're in a space where you kind of run out of useful things to do very quickly, then that's not a great situation to be in.

But if you're in a place where big investments, and having more and more great people working on the right path, can keep giving you value, then you can get these economies of scale of R&D. You can kind of deeply work on the technology in the right direction and get to a place where that is defensible. But yes, I think there's a consumer-like tendency to it. And I really think it's just about building the best thing possible.

Do you think in the future there's one winner in this space or do you think it's going to be a world of a number of products like this? I think the market is just so very big. And this is also one thing that, you know, you asked about the ID thing early on and One thing that I think a trip of some people that were thinking about the space is like, they looked at the IDE market of the past 10 years. And they said, you know, who's making money off of editors? Like, you know, there's all these.

It's this super fragmented space where everyone kind of has their own thing with their own configuration. And there's one company that, commercially, actually makes money off of making great, great editors, but that company is only so big. And then the conclusion was that it was going to look like that in the future. And I think that the thing people missed was that there was only so much

you could do building an editor in the 2010s for coders. The company that made money off of editors was doing things like making it easy to navigate around a code base, doing some error checking and type checking, having good debugging tools, which were all very useful. But I think that the set of things you can build for programmers, the set of things you can build for knowledge workers in many different areas,

just goes very far and very deep. And I think that really the problem in front of all of us is the automation of a lot of busy work in knowledge work, and really changing all the areas of knowledge work in front of us to be much more productive. So that was all a long-winded way to say, I think the market we're in is really, really big. I think it's much bigger than people have realized, bigger than building tools for developers has been in the past.

And I think that there will be a bunch of different solutions. I think that there will be one company, and it's to be determined if it's going to be us, but I do think that there will be one company that builds the general tool that builds almost all the world's software. And that will be a very, very generationally big business.

But I think that there will be kind of niches you can occupy, doing something for a particular segment of the market or for a very particular part of the software development lifecycle. But for the general thing, as programming shifts from just writing formal programming languages to something way higher level, the application you purchase and use to do that, I think there will generally be one winner there, and it will be a very big business.

Juicy. Along those lines, it's interesting that Microsoft was actually right at the center of this first, with an amazing product and amazing distribution. Copilot, you said, was like the thing that got you over the hump of, wow, there could be something really big here. And it doesn't feel like they're winning. It feels like they're falling behind. What do you think happened there? I think that there are specific historical reasons why...

Copilot might not have, so far, lived up to the expectations that some people have for it. And then I think that there are structural reasons. And to be clear: Microsoft, in the Copilot case, was obviously a big inspiration for our work, and in general I think they do lots of awesome things, and we're users of many Microsoft products. But I think that this is a market that's not super friendly to incumbents.

A market that's friendly to incumbents might be one where there's only so much to do, where it kind of gets commoditized fairly quickly and you can bundle it in with other products, and where the ROI difference between products is quite small. In that case, perhaps it doesn't make sense to buy the innovative solution; it makes sense to just buy the thing bundled in with other stuff. Another market that might be particularly helpful for incumbents

is one where, from the get-go, you have your stuff in one place and it's really, really excruciatingly hard to switch. And, for better or for worse, I think in our case, you can try out different tools and you can decide which product you think is better. And so that's not super friendly to incumbents, and it's more friendly to whoever you think is going to have the most innovative product.

And then the specific historical reasons, as I understand them, are the group of people that worked on the first version of Copilot have, by and large, gone on to do other things at other places. I think it's been a little hard to kind of coordinate among all the different departments and parties that might be involved in making something like this. I want to come back to Cursor. A question I like to ask everyone that's building a tool like this.

If you could sit next to every new user that uses Cursor for the first time and whisper a couple of tips in their ear to be most successful with Cursor, what would be, like, one or two tips? I think right now, and we'd want to fix this at a product level, a lot of being successful with Cursor is kind of having a taste for what the models can do: both what complexity of task they can handle and how much you need to specify

things to the model. Like having a taste for the quality of the model, where its gaps exist, what it can do and what it can't. And right now, we don't do a good job in the product of educating people around that and maybe giving people some swim lanes, giving people some guidelines. So to develop that taste, I would give kind of two tips. One is, as mentioned before, I would bias away from

trying, in one go, to tell the model, hey, here's exactly what I want you to do, then seeing the output and either being disappointed or accepting the entire thing for an entire big task. Instead, what I would do is chop things up into bits.

And you can spend basically the same amount of time specifying things overall, but chopped up more. So you're specifying a little bit, you're getting a little bit of work, specifying a little bit, getting a little bit of work, and not doing as much of the let's-write-a-giant-thing-telling-the-model-exactly-what-to-do. I think that would be a little bit of a recipe for disaster right now.

So, biasing toward chopping things up. At the same time, and it might make sense to do this on a side project and not in your professional work, I would encourage people, especially

developers who are used to existing workflows for building software, to explicitly try to fall on their face and discover the limits of what these models can do, by being ambitious in kind of a safe environment, like perhaps a side project, and trying to go hands-off and use the AI to the fullest. Because sometimes, or a lot of the time, we run into people who haven't yet given the AI a fair shake and are kind of underestimating its abilities.

So generally, bias toward chopping things up and making things smaller, but to discover the limits of what you can do there, explicitly just kind of try to go for broke in a safe environment and get a taste for it. You might be surprised by some of the places where the model doesn't break. What I'm essentially hearing is: build a gut feeling of what the model can do and how far it can take an idea, versus just kind of guiding it along.

And I bet that you need to rebuild this gut feeling every time there's a new model launch, like when Sonnet, I don't know, 4.0 comes out. You have to kind of do this again. Is that generally right? Yes, though it's not...

you know, for the past few years, it hasn't been as big as, I think, the first kind of experience people have had with some of these big models. But yeah, this is also a problem we would hope to solve much better just for users and take the burden off of them. But yeah, each of these models has slightly different quirks and different personalities.

Kind of along these lines, something that people are always debating: tools like Cursor, are they more helpful to junior engineers or are they more helpful to senior engineers? Do they make senior engineers 10x better? Do they make junior engineers more like senior engineers? Who do you think benefits most today from Cursor? I think across the board, both of these cohorts benefit in big ways. It's a little hard to say on the relative ranking. I will say they fall into different anti-patterns.

The junior engineers we see going a little too wholesale — relying on AI for everything. And we're not yet in a place where you can do that end-to-end in a professional setting, working with tens or hundreds of other people in a long-lived codebase.

The senior engineers — for many folks, though it's not true for all — tend the other way. One of the ways these tools get adopted is through developer-experience teams within companies, and often those are staffed by incredibly senior people, because they're the ones building tools to make the rest of the engineers in an organization more productive. There we've seen some very boundary-pushing people on the front lines of trying to adopt the technology as much as possible. But by and large, on average, as a group, the senior engineers underrate what AI can do for them and stick to their existing workflows.

So the relative ranking is hard — they fall into different anti-patterns — but both, by and large, get big benefits from these tools.

That makes absolute sense.

I love that it's two ends of the spectrum — expect too much, don't expect enough. It's like the three bears; is that the allegory? Yeah. Okay — so maybe the sort of senior-but-not-staff engineer is right in the middle. Interesting. Okay, just a couple more questions.

What's something you wish you knew before you got into this role? If you could go back to Michael at the beginning of Cursor — which was not that long ago — and give him some advice, what would you tell him?

The tough thing with this is that so much of the hard-won knowledge feels tacit and a bit hard to communicate verbally. The sad fact of life is that for some areas of human endeavor, you either need to fall on your face to learn the right thing, or you need to be around someone who is a great example of excellence at it. One area where we have felt this is hiring. We tried to be incredibly patient on the hiring front. It was really important to us — both for personal reasons and, I think, for the company's strategy —

that having a world-class group of engineers and researchers working on Cursor with us was going to be incredibly important. Also getting people who fit a certain mix of intellectual curiosity and experimentation, because there are so many new things we need to build.

And then also intellectual honesty, and maybe micro-pessimism and bluntness, because with all the noise — especially as the company and the business have grown — keeping a level head is incredibly important. Getting the right group of people into the company was the thing that, maybe more than anything else apart from building the product, we really fussed over. We actually waited a long time to grow the team because of that. You hear that many people hire too fast; I think we actually hired too slowly to begin with. It could have been remedied — we could have been better at it.

The method of recruiting we eventually fell into, and that worked really well for us — which isn't that novel — is going after people we think are really world-class and recruiting them over the course of, in some cases, many years. It ended up working for us in the end, but I don't think we were very good at it to begin with. So there were hard-won lessons around both what the right profile was — who actually fits on the team, what greatness looks like — and how to talk with someone about the opportunity and get them excited when they really weren't looking for anything. There were lots of learnings there about how to do that well, and it took us a bit of time.

What are some of those learnings, for folks who are hiring right now? What's something you missed or learned?

To start with, we maybe biased a little too much toward looking for people who fit the archetype: well-known school, very young, had done the high-credential things in those well-known-school environments. Actually, I think we were lucky early on to find fantastic people who were willing to do this with us who were later-career. So yeah, we spent a bunch of time on maybe a little bit of the wrong profile to begin with. Part of that was a seniority thing; part of it was an interest-and-experience thing too. We have hired people who are excellent, excellent, excellent and very young, but in some cases they look slightly different from being straight out of central casting.

Another lesson is that we very much evolved our interview loop. Now we have a hand-rolled set of interview questions, and core to how we interview is that we actually have people on-site for two days to do a project with us — a work trial. That has worked really well, but we're increasingly refining it.

Then there's learning about what people are interested in, putting our best foot forward, letting them know about the opportunity when they're really not looking for anything, and having those conversations — we've definitely gotten better at that over time.

Do you have a favorite interview question you like to ask?

I think this two-day work test — which we thought would not scale past a few people — has had surprising staying power. The great thing about it is

that it lets someone go end-to-end on a real project. It's not a contrived exercise — there's kind of a canned list of projects — but it gives you two days of seeing a real work product. And it doesn't have to be incredibly intensive on the team's time: you can take the time you'd spend on a half-day or one-day on-site, spread it out over those two days, and give the person a lot of time to work on their project.

So that actually helps it scale. And it really helps with the "do you want to be around this person?" type of test, because you are around this person for a few days — you have a bunch of meals with them.

We didn't expect that one to stick around, but it has been really, really important to our evaluation process. It's also been important for getting people excited, especially at the very early stages of the company, before people were using the product and knew about it.

When the product is comparatively not very good, really the only thing you have going for you is a team of people that some people find special and want to be around. The two days would give us a chance to have the person meet us and, in some cases, hopefully get convinced that they want to throw in with us.

So yeah, that one was unexpected. Not exactly an interview question, but kind of like a full interview.

The ultimate interview question. Just to be very clear about what you're describing: you give them an assignment — build this feature in our actual codebase — and they work with the team to code it and ship it. Is that roughly right?

Yes — though we don't use the IP, and it isn't shipped end-to-end. But yeah, it's a mock project, very often in our codebase: here's a real mini two-day project,

and you're going to do it end-to-end, largely left alone — though there's collaboration too. And we're a pretty in-person company, so in almost all cases it's actually sitting in the office with us as well.

And you've said this has scaled even to today. How big are you guys at this point?

We're going on 60 people.

So small for the scale and impact — I was thinking it'd be a lot larger than that. And I imagine the largest percentage is engineers?

Yeah. And to be clear, a big part of the work ahead of us is building a group of people that is bigger and awesome and can keep making the product, and the service we give to customers, better.

So you don't plan to stay that small for long.

We would hope not. But part of the reason that number is small is that

the percentage of engineering, research, and design within the company is very high. Many software companies, by the time they have roughly 40 engineers, would be over 100 people, because there's lots of operational work and often they're very sales-led from the get-go, which is quite labor-intensive.

We started from a place of being incredibly lean and product-led. We now do serve lots of customers and have built that out, but there's much more to do there.

A question I wanted to ask you: there's so much happening in AI — things launching every day, many newsletters whose entire function is to tell you what's happening in AI every single day.

Running a company that's at the white-hot center of the space, how do you stay focused, and how do you help your team stay focused, heads-down, just building, and not get distracted by all these shiny things?

I think hiring is a big part of it — getting people with the right attitude. All of this should come with an asterisk: I think we're doing well there, but we could probably be doing better too,

and it's something we should probably talk even more about as a company. But hiring people with the right disposition — people who are less focused on external validation and more focused on building something really great and doing really high-quality work, people who are generally level-headed, whose highs aren't very high and lows aren't very low —

I think hiring can get you through a lot here. That's actually been a learning throughout the company: you need process, you need hierarchy, you need lots of things, but for any organizational tool you're introducing into a company — and the result you're looking to get from it — you can go pretty far by hiring people who already have the behaviors you'd want that tool to produce. The specific example that comes to mind: we've been able to get away with not a ton of process yet on the engineering front. I think we need a little more process, but for our size it's not a ton, because we've hired people who I think are really excellent.

So one is hiring people who are level-headed. Two is just talking about it a lot. Three is hopefully leading by example. For us personally, we've been professionally working on this — working on AI — since 2021, 2022, and we've seen a sea of comings and goings of various technologies and ideas. If you transport yourself back to the end of 2021, the beginning of 2022: this is GPT-3. InstructGPT doesn't exist. There's no DALL-E, no Stable Diffusion. Since then we've gone through all of those image technologies arriving, ChatGPT and its rise, GPT-4, all these new models, all these different modalities, all the video stuff. And only a very small number of those things really affected the business.

So I think we've built up a bit of an immune system, and we know when an event comes around that actually is really going to matter for us. This dynamic — lots and lots of chatter, but maybe only a few things that really matter — has been mirrored in AI over the last decade, where there have been so many papers on deep learning, so many papers on AI, in academia. The amazing thing is that a lot of the progress of AI can be attributed to some very simple, elegant ideas that have stayed around, while the vast majority of ideas put out there haven't had staying power and haven't mattered a ton. So the dynamic is a little bit mirrored in the evolution of deep learning as a field overall.

Last question. What do you think people still most misunderstand, or maybe don't fully grasp, about where things are heading with AI and building, and the way the world will change?

People still occupy a little too much of either end of a spectrum: either it's all going to happen very fast, or this is all bluster and hype and snake oil.

I think we're in the middle of a technology shift that's going to be incredibly consequential — more consequential than the internet, more consequential than any shift in tech we've seen since the advent of computers. And I think it's going to take a while; it's going to be a multi-decade thing, and many different groups will be consequential in pushing it forward. To get to a world where computers can increasingly do more and more for us, there are all these independent problems that need to be knocked down and progressed on. Some of those are on the science side: getting these models to understand different types of data, be faster, cheaper, and smarter, conform to the modalities we care about, take actions in the real world.

And some of it is on how we're going to work with them: what's the experience a human should actually be seeing and controlling on a computer while working with these things? But I think it's going to take decades, and there's going to be lots of amazing work to do. There's also

a pattern of a group that I think will be especially important here. Not to talk our own book, but it's the company that works on automating and augmenting a particular area of knowledge work and builds both the technology under the surface — integrating the best parts from providers, sometimes doing it in-house — and the product experience on top. We're trying to do that in software; people who do that in other areas will be really, really consequential — not just for the end value that users see, but, as they get to scale, for pushing the technology forward, because the most successful of them will be able to build very, very big businesses. I'm excited to see the rise of other companies like that in other areas.

I know you guys are hiring, for folks who are interested in "hey, I want to go work here and build this sort of stuff." What kind of roles are you looking for right now? Any roles you're most excited about filling ASAP? What should people know if they're curious?

There are so many things this group of people needs to do that we're not yet equipped to do — so, kind of generic across the board, first of all. If you don't think we have a role for something, reach out anyway; that may not actually be the case, and maybe we can learn from you and decide we need something we weren't yet aware of. But by and large, two of the most important things for us to do this year are to have the best product in the space and then grow it. We're in a land-grab mode where almost everyone in the world is either using no tool like ours, or using one that's maybe developing less quickly. So growing Cursor is a big goal. We're especially always on the hunt for excellent engineers, designers, and researchers, but also folks all across the business side.

I can't help but ask this question now that you mention engineers. There's this idea that AI is going to write all our code, yet everyone's still hiring engineers like crazy — all the foundation-model companies have so many open roles; it's not slowing down out there. Do you think there's going to be an inflection point where engineering roles start to slow down? I know this is a big question, but do you see engineers being more and more needed across all these companies, or do you think at some point there will be all these Cursor agents running around, building for us?

Again, we have the view that there's this long, messy middle — that it doesn't jump straight to a world where you step back, ask for all your stuff to be done, and that's your engineering department. We very much want to evolve from programming as it exists today with humans in the driver's seat, and we think that even in the end state, giving folks control over everything is really important — and you will need professionals to do that and to decide what the software looks like.

So both: yes, engineers are definitely needed, and engineers will be able to do much more. The demand for software is very lasting — not the most novel observation, but it's kind of crazy to think about how expensive and labor-intensive it is to build things that are pretty simple and easy to specify, or that would look that way to an outside observer, and just how hard those things are to do right now. All the software that exists today is what's justified by the cost and demand we have now; if you could bring that cost down by an order of magnitude, I think you'd have tons and tons more stuff we could do with our computers, tons more tools.

I felt this myself: one of my early jobs was actually working for a biotechnology company, building internal tools for them. The off-the-shelf tools that existed were horrible and didn't fit their use case at all, and with the internal tools I was building, there was a ton of demand for things that could be built — far more than I could build in the time I was with them. The physics of working on computers are so great: you should be able to basically move everything around and do everything you want to do, yet there's still so much friction. There's much more demand for software than what we can build today — simple productivity software can cost as much as a blockbuster movie to make. So long into the future, yes, I think there will actually be more demand for engineers.

Is there anything we didn't cover that you wanted to mention? Any last nugget of wisdom to leave listeners with? You could also say no, because we've covered a lot.

We think a lot about how you set up a team to be able to make new stuff in addition to continuing to improve the stuff you have right now. If we're to be successful, the IDE is going to have to change a ton — what it looks like is going to have to change a ton going into the future. If you look around at the companies we respect, there are definitely examples of companies that have continued to ride the wave of many leapfrogs and actually push the frontier, but they're kind of rare, too. It's a hard thing to do. Part of that is just thinking about it and trying to reflect on it ourselves, the first-principles side of things. Part of it is also trying to go in and study past examples of greatness here. That's something we think about a lot.

Yeah — what you just said reminds me: before we started recording, you had all these books behind you, and I asked, what's that over there? It was a history of some old computer company, influential in a lot of ways, that I'd never heard of. I think that says a lot about you: a lot of this innovation comes from studying the past, studying history — what's worked and what hasn't.

Okay. Where can folks find you online if they want to reach out and maybe apply? You said there may be roles they're not even aware of — where do they find that? And how can listeners be useful to you?

Yeah — if folks are interested in working on this stuff, we'd love to speak. If they go to cursor.com, they can both find the product and find out how to reach us.

Easy.

Michael, thank you so much for being here. This was incredible.

It was wonderful. Thank you. Bye, everyone.

Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Also, please consider giving us a rating or leaving a review, as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at lennyspodcast.com. See you in the next episode.
