Guy Podjarny, Snyk and Tessl founder - The future of programming


Dec 23, 2024 · 45 min · Ep. 116

Episode description

Guy Podjarny is the founder of Tessl - a startup that is rethinking how we build software.

Guy previously founded Snyk - a dependency scanning tool worth billions of dollars. Before Snyk, Guy founded Blaze, which he sold to Akamai.

This episode is brought to you by WorkOS. If you're thinking about selling to enterprise customers, WorkOS can help you add enterprise features like Single Sign On and audit logs.

In this conversation, we talk about the future of programming and the future of DevTools. 

  • The future of programming will focus on writing specifications.
  • Trust in AI tools.
  • Snyk is an example of how tools can integrate into existing workflows.
  • Code can become disposable, allowing for flexibility in development.
  • Specifications will serve as repositories of truth in software development.
  • Developers will need to adapt their skills to leverage AI tools effectively.
  • Community collaboration is essential for the evolution of AI development tools.
  • AI simplifies and democratizes the process of software creation.

Thanks to Anna Debenham for making this happen. 

Transcript

Jack

Guy Podjarny might be the most successful dev tools founder I've ever met. In 2012, he sold Blaze to Akamai and became a CTO. He then founded Snyk, a dependency scanning tool that is worth billions of dollars. I've used it heavily and it's great. So when Guy does something, you pay attention.

Guy believes the future of programming will be writing specs, as in like YAML files, and AI will write code to that spec and maintain it. His new company, Tessl, is very different to Cursor and Copilot, but he's raised $125 million and there is a ton of excitement around what he's building. In this conversation, Guy lays out what he calls AI native development and the future of programming, or whatever it is that we do to build software.

A lot of people are building kind of like the Tesla dev tool of, like, helping the drivers drive better, building a nicer experience to code in that sense. And it seems like you're more thinking about, like, what does it look like if you don't need a driver?

Yeah. Would that be

Guy

Yeah. I think so. Maybe even more, it's not so much just if you don't need a driver, but also, like, should this even be a car? I don't know. Maybe it's more teleportation, or like the flying car. Well, so I wrote this blog post about it, charting your AI native journey.

And I think, That's brilliant. I think you can think about kind of AI startups, probably true to an extent for any startup, but focusing on AI is easier, across these sort of two dimensions. Right? One is a dimension of trust that goes from attended to autonomous.

Yes. And that's, like, how much do you need to trust a tool for it to be useful for you? And the other is an axis of change: how much do I need to change how I work to be able to use this tool? So maybe let's talk about a few examples of it. And we can talk about dev tools in the context of it.

So when you think about it, the kind of bottom left quadrant is sort of low change, low trust. It's things that work within your existing workflows, and you can kind of eyeball them and sort of see that they are correct and choose when to say yes and when not.

Jack

Would you say Snyk was, like, a really good example? Because it was, like, kind of integrated into, like

Guy

I think to a large degree, because it's built into the way you work. And in fact a lot of the premise of Snyk was that if you want developers to embrace security, you have to build a developer tooling company, not a security company, one that adapts to how developers work. And so it comes into the pipeline, comes into the IDE, and sort of tells you, hey, there's a problem here, and makes it easy to act on it. I think in the AI world, the coding assistants are probably the best examples of those. In general, assistants are a good measure because you say, okay, here's a piece of help, a completion of your code, a proposal for a fix of a bug, a test that I created. And they have to be bite sized.

They have to be small so that you can eyeball them and easily review them. And then if you said yes, then you continue and they just, you know, make you more productive. To the extent that they give you valuable information rather than noise, I think that's very valuable. So that's kind of the bottom left quadrant, and it's where most of the AI tools as a whole sit, because it's easy to adopt them. Because you don't need trust and you don't need to change. And that includes the developer world.

I think the path of, sort of, the Waymo versus the Tesla, as such, is going up that autonomy axis. Right? And so some of that is around the nature of the task that you do. And so if you think about Intercom's support agents, that's like an autonomous support agent. It interacts with the user, and it gives them the answer or whatever, like, as a support chat.

So it's low change because users are already interacting through a chat interface with the support entity. But if you need a human on the kind of customer side to review every Yeah. response, you know, if you need it to be an attended process, then you lose all the value from that support agent. And so you need to trust that it would get it right

Jack

Yeah.

Guy

To be able to use it. And in the self-driving car world, the robotaxis are a good example of that because, you know, you call an Uber, not here in the UK, but in San Fran or others. You open the Uber app, you call a cab, it arrives, you go into the car, it drops you off, and you use it the same as with a human driver, but massive trust, right? You're kind of entrusting your life with the Yeah. with the AI.

And so those are, I think, IP routes. It's really about, can you... the moat over there, if you're building a company, is about intellectual property. Yeah. Because the workflow hasn't changed. So Uber in this context, you know, has the same kind of power, the same kind of control of the distribution.

And and so in theory, if you think about Uber and Waymo and their partnership right now, which is kind of interesting Yeah. The it it makes sense if Uber thinks, well, I don't think I can replicate what Waymo is doing. Mhmm. I can't really get to that IP. It's very hard.

And so I will instead hand it to Waymo, and maybe the other way around as well. It doesn't make sense if you think, hey, I can actually do this myself. And so when you think in dev about all sorts of attempts at autonomy, you go from the more lightweight, just sort of bigger tasks. Say, I will convert all of your code base from, whatever, Spring version A to version B.

Jack

Yeah.

Guy

And they don't really claim full autonomy. But if it's a big task, then it needs to be somewhat autonomous, because if you have to review every change, then it's no good. Or test creation, I think, is another example. So you see some of those. But mostly, I think what you see in dev is attended processes that just try to take bigger and bigger chunks.

And then you have the big ambitions like Cognition's Devin. And they veer a little bit into the change axis that we can discuss. But they talk about autonomous tasks. And I think that autonomy path is just a very hard path for startups to take. So if you're building a dev tool today, or building kind of an AI dev tool, then the bottom left quadrant makes sense.

It's just you should expect 10,000 other companies to be doing the same thing. Yeah. And you should appreciate that while you need to build a good product, you really have to build an amazing GTM. And you may be better off building it to accumulate some quick traction and then selling it to an incumbent, because you're not really changing who has control over the workflow. So if you're still working in GitHub's world and, you know, whatever the Yeah.

IDEs, existing IDEs, existing build systems, and all that, and you're just creating tests or just creating something, then it's a bit hard to say, like, you would need the incumbents to be much slower for you to actually gain distribution. But I think there's still a role to play here at the bottom left. But if you go the IP route, if you go to the top right, it's really hard. Like, it's gonna be hard for you to build that IP, and it's gonna take a while. And eventually, if it's mimicable by the incumbent, you're not gonna be in a strong position.

So you should only do that if you if you really think that the IP that you are building is is truly hard to replicate.

Jack

This episode is brought to you by WorkOS. At some point, you're gonna land a big customer and they're gonna ask you for enterprise features. That's where WorkOS comes in, because they give you these features out of the box. Features like SCIM provisioning, SAML authentication, and audit logs. They have an easy to use API and they're trusted by big dev tools like Vercel as well as smaller fast growing dev tools like Knock.

So if you're looking to cross the enterprise chasm and make yourself enterprise ready, check out WorkOS. We've also done an episode with Michael, the founder of WorkOS, where he shares a lot of tips around crossing the enterprise chasm, landing your first enterprise deals and making sure that you're ready for them. Thanks WorkOS for sponsoring the podcast and back to the show.

Guy

Does that make sense?

Jack

Yeah. It does. And so so I guess kind of playing that back, it's like the bottom left, it's people can adopt it, and you can get something out there that's, like, useful and people don't need to trust you that much.

Guy

Correct. Because it's small. You can Yeah. Because they can just review them and and attend to it easily.

Jack

See that it works. And yeah.

Guy

Yeah. So as long as the advice makes sense most of the time, it's quite useful.

Jack

Yeah. And but they may struggle because of all the competition, the distribution, and it's probably I guess you're, like, reading between the lines, you know, maybe Cursor is gonna be bought by GitHub at some point or that sort of thing. And,

Guy

It's interesting. You have to think about Cursor. It's interesting. They're really competing. They made a bold choice to say, we're not changing the methodology, but we are not gonna be where you are. We're gonna require you to replace your IDE. Mhmm. And that is a change. That is different to most of the existing coding assistants that integrate into where you are. So they actually embraced a little bit of a change. But their IDE is just a fork of VS Code.

And so it's not a dramatic change. It's a choice of a tool, and they support it. But they create a dependency on VS Code. And there are actually all these indications about how there's definitely an opportunity for the VS Code team, which clearly comes out of GitHub and Microsoft, to make it harder to fork. For instance, I don't know that they would do it, but, like, VS Code can choose to change its license.

Yeah. And now suddenly the Cursor team has to support and maintain all the forks, all the maintenance of VS Code itself. Right now, they're riding on it with their fork.

Jack

Yeah. It's in kind of a

Guy

So it's a little bit of a change. If you compare it to Augment or to Cody from Sourcegraph or to Tabnine, they integrate into the existing IDE that you have. So they give you code assistance. But what is their sort of defense against, you know, Copilot, or Microsoft? It's purely in how good their completions are.

And the incumbent has time. But the classic race that happens with startups is, does the incumbent gain innovation faster than the innovator gains distribution? Yeah. That's the equation. And then, how entrenched is the distribution that you got as the disruptor? So maybe I can talk about the change route. Is that useful? So the change axis is a little different. And I think probably the easiest example there is the text to video example. Yeah.

It's a bit harder actually to imagine it in dev, and that's what we're doing at Tessl. But in text to video, you know, it's an entirely new way of creating videos. There is basically zero chance that the current incumbents in video production will win the race in text to video. Because it is just entirely different.

Jack

And it's

Guy

not about getting extras and production setups and video and cameramen, and so maybe there's pieces of the pipeline that are similar, but most of it is entirely different. And so companies like Synthesia, for instance, are, you know, creating a new flow. This change makes it harder for teams to adopt. You know, it's harder for an existing studio that produces, whatever, a little clip or a movie or something. They have to think about, how would I produce this?

The levels of control that they have are different. You know, things that used to be easy to do are now hard, and things that used to be hard are now easy. So there's just a lot of learning. There's a lot of fear for your job and your change. So the humans delay it.

And it's harder to penetrate and might not yield the short term reward as a company. And Synthesia specifically has chosen a smart path focusing on training videos, like, whatever, the McDonald's training video for all of its employees in all of the countries and all of that, where there's just a more acute pain of wanting that in different languages, wanting that to be updated frequently. And so people are willing to take on that challenge, and, you know, that has led to a lot of success. And their ambitions are greater than that, but they Yeah. They start there.

Jack

That's the least human resisted change.

Guy

You need motivation. Like, people resist the change, and so you need to provide some immediate gain that is really necessary. But then if they do succeed, they have an opportunity to actually lead that market. Again, they don't really need to fear the existing video production companies. Yeah.

As someone that will see what they've done and mimic it and have better leverage. You know? Legal tech, I think, is an interesting one as well. Like, on one hand, a world in which, you know, there's a lot of data and such within the legal companies Yeah. to be able to train AI models that would actually be quite good as a lawyer.

At the same time, those companies are controlled by the very same people that are concerned for their jobs and are, I guess, kind of disinclined to think that a system can do their job. And from a consumer perspective, consulting a system to tell you about, you know... like, when you work with a lawyer, you get legal advice and you get pointed out a bunch of things, but you also kinda get a bit of this reassurance.

Jack

Yeah. It's like almost like therapy. Like, it's gonna be okay.

Guy

Yeah. It's gonna be okay. You can

Jack

handle this for you.

Guy

And I guess you also get, like, a little bit of, like, legal cover. Like, you can go and tell your investors, it's okay. I consulted a lawyer. Like, these good lawyers have said that this is okay, so it's okay.

And so it's actually quite fear inducing to think that you would consult it. So it's a little bit of change. It's actually not a lot of change in how you would interact, but it's some change because you don't go talk to a person. You have to interact with the system and work with it. And it is a lot of trust.

Yeah. But again, I don't think the risk of the incumbents is there. So this sort of change axis... the trust axis is a little bit of humans, but a lot of it is tech. How do you make the tech reliably work and get the right results? And, yeah, then how do you convince people, the humans, that it's correct?

The change requires the humans to change. And that is harder to achieve, but more durable once achieved. Mhmm. And in dev, I guess, at Tessl... I don't know, should we talk about Tessl?

Jack

Absolutely. Yeah. Yeah.

Guy

So at Tessl, what we're trying to do is we're saying, okay. The existing AI dev tools are very good at supercharging the existing activities. But by doing so, I think they have, like, a local maximum that they might hit. And also they're not necessarily tapping into the full potential of the AI. So I'm a big fan of these tools and I recommend that people use them.

So I'm not at all dismissive of them. And I think some of the innovations in the tech that come from there will also enable the future. So it's not, like, some waste of time. But today, a lot of them are mostly a new UX to create code.

Jack

Yes.

Guy

They're a different way. Like, I can create code by hitting tab

Jack

Yeah.

Guy

By accepting a pull request that a machine has provided me Yep. or a variety of those. But, eventually, the way the system operates afterwards remains the same, and it remains code centric. So over time, the code accumulates new requirements, new changes, new kind of enhancements, migrations. All of the history is in the code. The code is the only thing that matters Yeah.

In the system. Like, your docs and all those, they're all descriptive of the code. They fall out of line. It's a thankless job to update the documentation. There's no Yeah. There's no forcing function to actually kind of make you do that. And so the source of truth, the canonical artifact in software development, is

Jack

the code.

Guy

Right? Like, good luck two years after. Docs are

Jack

keeping up with the code. Yeah.

Guy

Yeah. And oftentimes, like, the requirements, you think, like, two years in, like, good luck finding the requirements that led to those elements.

Jack

Yeah.

Guy

So I like the analogy of, like, you go into a house and you look around, and it's, oh, there's a sofa over here and there's curtains and they don't really match, and there's this kind of crack on the wall. There's a bit of an odd pillar somewhere in it. And you don't know, like, that the curtains are a family heirloom that, you know, you cannot There's

Jack

a reason there.

Guy

That there's a reason there. But the sofa was just moved from the previous place you lived in, and would you be happy to shed that? And the column, is it actually a supporting column, or was it just again a design choice? And you just don't know. Like, you come along and this is just the house.

And so when the LLMs or humans read the code, that's the reality they face. They just see reality, and they have to deduce from it how the application works, what the application is supposed to do, and the logic. And so we see an opportunity to flip that around with LLMs and to create a new way of producing software that is spec centric, in which the canonical artifact would be a specification, which, loosely said, would be what it is that you want to do and some validations. Like, I want to create this ecommerce site and here are some checks. Like, when someone adds something to the shopping cart and hits checkout, I want to see that the balance has been reduced, or a bunch of those.

I'm clearly oversimplifying. But the core of it is some behaviors and some validations associated with them that will evolve over time, and, you know, something we need to kinda figure out: how do we specify? The intent is for the spec to tap into the flexibility of the LLMs, to allow you to use the fact that it's easier than ever to say what you want to the LLM, but counter the fact that it's also, like, probabilistic, not factual. And really lean into the fact that LLMs can fill in the gaps.
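To make the idea concrete, here is a minimal sketch in Python of a spec as "intent plus executable validations". Everything here is invented for illustration: `Spec`, `Cart`, and `checkout_reduces_balance` are hypothetical names, not Tessl's actual format or API.

```python
# Hypothetical sketch: a spec as a stated intent plus validations that
# ground generated code in truth. Not a real Tessl artifact.
from dataclasses import dataclass, field

@dataclass
class Spec:
    intent: str                 # what the software should do, in plain language
    validations: list = field(default_factory=list)  # executable checks

    def conforms(self, implementation) -> bool:
        # Any generated implementation is acceptable iff every check passes.
        return all(check(implementation) for check in self.validations)

# A toy cart class standing in for AI-generated, "disposable" code.
class Cart:
    def __init__(self, balance):
        self.balance, self.items = balance, []
    def add(self, price):
        self.items.append(price)
    def checkout(self):
        self.balance -= sum(self.items)
        self.items = []

def checkout_reduces_balance(make_cart):
    # The validation from the conversation: after checkout, the balance drops.
    cart = make_cart(100)
    cart.add(30)
    cart.checkout()
    return cart.balance == 70

spec = Spec(
    intent="e-commerce site with a shopping cart",
    validations=[checkout_reduces_balance],
)

print(spec.conforms(Cart))  # True
```

The point of the sketch is the last line: any regenerated implementation, in any language or framework, is acceptable so long as the spec's validations hold, which is what makes the code itself throwaway.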

Yeah. Then we can come back to that. But I think if you define the spec and you define the validations, then you've basically defined the degrees of freedom. Of saying, this is, like, the essence of my software. This is what it is supposed to do. Here is how I would ground it in truth.

It's like, here are the validations that say this is correct. It allows you to now build a development process that revolves around the functionality versus the implementation. And then the code becomes disposable. Like, you generate the code with AI. Yeah.

And once you're happy with it, like, once the spec conformance tests have passed, and maybe a few more, then you're good. You can throw the code away. You can create new ones. And suddenly, that opens up, hey, I can create code in Python or in JavaScript.

Jack

Yeah.

Guy

I can create something for AWS or for GCP. I can create something that is low latency or low cost, depending on my needs. I can tap into probabilistic improvements, like an LLM again that would come along and say, find vulnerabilities and fix those vulnerabilities in my code. And maybe in the process, they break it. That's the holdback today.

It's like, what? Did it break the application in the process? And having that validation, that sort of foundation, allows you to say, well, no. It didn't break it. And so this sort of anchoring in what is correct and what is the desired behavior, and an ability to define the degrees of freedom, really unlocks, like, a lot of opportunity.

But to achieve that, you have to figure out good techniques to say, how do you deal with the lack of predictability? You get high adaptability Yeah. and low predictability. And that's really, I think, where we stand and what we think the future of development would be. You know, on one hand, you have code as it stands today, which is highly predictable, but very much not adaptable. Yeah. Like, you have to work hard.

It's fragile. It's different. On the other side, you have this, like, go to Anthropic and, like, type in a prompt and or whatever and get some application come out where you have no controls. Like, next time you're gonna add a menu item, and it might change the layout, and it might break the checkout action. And it and there is no grounding in truth.

And the idea of the spec, in simple form, is to say, like, create these, like, repositories of truth, you know, in the process, that are not just a prompt, they're not just an ask, they're a specification, and accrue mileage on those specs. So, like, learnings and scars and edge cases and all those, they accrue to the spec. And so, like software matures, the specs mature. Yeah.

And they represent what it is. And so it taps into that magical easy creation, but it gives you some of the rigor. And finding that out, I think, is a new development... like, we call it a new development paradigm, a new approach to how you would develop software that really revolves around defining the spec, around having systems that are able to understand your requirements without being too laborious. So Yeah.

LLMs today, you can tell an LLM to create a Tic Tac Toe game Yeah. without telling it the rules of Tic Tac Toe. Yeah. And that has never been the case. So how do you deal with that? How do you know which decisions the LLM has made? How do you separate between, like, when do you want to persist those decisions? You said, I want the button. The LLM said that button is red.

You're like, fine. I don't care. But you don't want the button to turn green next time you change a word on the page. Yeah. So how do you persist those decisions? Sometimes you don't wanna persist those decisions. So managing those degrees of freedom, and figuring out how do you specify those, is, I guess, what we see as the craft of software development in the future. So that was very long
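One way to picture the "persist the red button" idea is a decision ledger kept alongside the spec, recording who made each choice and whether it is pinned across regenerations. This is a purely hypothetical sketch; the structure and field names are invented, not a real Tessl format.

```python
# Hypothetical decision ledger: user decisions and LLM decisions, with a
# flag saying whether a regeneration must honor them. Illustrative only.
decisions = [
    {"id": "add-buy-button", "made_by": "user",
     "value": "page has a Buy button", "persist": True},   # explicit requirement
    {"id": "buy-button-color", "made_by": "llm",
     "value": "red", "persist": True},    # incidental, but pinned so it stays stable
    {"id": "error-style", "made_by": "llm",
     "value": "throws exception", "persist": False},  # a free degree of freedom:
                                                      # a Python port may return -1
]

def pinned_decisions():
    """Decisions the generator must honor on the next regeneration."""
    return [d for d in decisions if d["made_by"] == "user" or d["persist"]]

print([d["id"] for d in pinned_decisions()])
# ['add-buy-button', 'buy-button-color']
```

Regenerating the same JavaScript version would have to keep the red Buy button, while a port to another ecosystem is free to change unpinned choices like the error style.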

Jack

Yeah. No. It is super interesting. The way I was thinking about it, and I'd love to hear if this is how you think about it at all, but it's, like, kind of, the v0 of the future. So, like, you could say, like, generate a website where I can play tic tac toe, and it can build that for you and deploy it, and it's there. And that's, like, a great experience.

And it seems like what you're trying to build is more like the v0 where you can build, like, a banking application or something. It's more like Vercel building the Vercel of the future, and you're building, like, the AWS of the future, in a sense. Is that sort of it in any way?

Guy

It's an interesting kind of analogy of the future. I think Vercel runs a platform that is more structured. It's more opinionated. And so when you use v0, which is amazing, you create an application within this opinionated structure. Yeah.

And once that has been created, there's still no interim artifact that represents the decisions that you've made. So v0 creates the React application or whatever it is that it's created. And then next time, when you ask Vercel to change it with AI, it needs to understand from the code what has happened and work with it. And so to some extent, it has the same challenges. It needs to look at the code and kind of reverse engineer what was, you know, the logic behind it.

And it's similarly lossy. So v0 is new. All of this stuff is new. But in three years' time, the prompts that created the changes, those disappear. They're not a versioned resource. They're not associated with anything. And so there will still be the problem of, like, why is this here? Was this just, like, a random decision by the LLM at the time? Or was this an explicit user requirement? And Yeah.

But where it's easier for them is that it's easier to at least understand the system, because it's a bit more declarative, and because they're working in a more opinionated surrounding. And so I guess my prediction is that many of these tools and platforms will naturally end up embracing specs within them. Like, I don't think spec centric development is a Tessl-only thing. Like, we're gonna build a platform, you know, we want to kind of help form the movement and build it out. But we think there will be many tools that will evolve into this methodology.

Because I think as these applications grow, someone will say, hey, you've created this thing, but I want to be able to say, like, keep this piece over here.

Jack

Yeah.

Guy

I want it stable. I wanna be able to specify some core of this behavior. And I want to define a few tests that help the LLM wrapped by v0, or others, ensure that that is grounded in truth. Right?

That it remains there. And then it expands. And so I think most of these tools, once they get past the initial creation, they will discover the need for creating specs. And maybe those specs look very different. Maybe some specs are just conversation threads and such. But it's hard for me to imagine how they don't.

Jack

Yeah. The specs is interesting. It's like it it almost sounds like, you know, you were talking about, yeah, as you said, there'd be different types of specs. And I'm almost thinking there's, like, you know, there's designs, and you can put in, like, almost, like, images into the specs and stuff like that. And, like, do you see these being very like, these aren't just like a, you know, YAML file. These are, like, kind of

Guy

Yeah. For sure. I think, I think specs I think there will be many specs just like there are many programming languages.

Jack

And Yeah.

Guy

I think the way to specify a mobile game is very different than the way you would specify a crypto contract or a pacemaker or a SaaS application. And so I think there will be different specs. I think specs will compose just like software composes, because Yeah. even in these examples, it might be that that crypto contract has some SaaS app around it. And so you might compose different types of specs. And I think you really lose a lot if you don't tap into the versatility that the LLMs offer in how you can express what you want built.

So, yeah, I think you would have specs of a varying level of looseness, if you will. Right? It could be that if you want something algorithmic, it's pseudocode. It's really quite detailed. Yeah.

And if you want something broader, then you might use loose language. And in some cases, if you want to say, this is a design system, then you might express that design system or design constraints. So I think that would evolve. It's interesting to also think about how good the platform is at understanding what you meant. So

Jack

Yeah. That was actually gonna be one of my big questions. Because it's almost like, you've got how well you can describe it. And it's like, let's say I build something and there's a bug. Is it a bug because my spec was bad, or is it a bug because, you know, let's say there's, like, something with Tessl not quite right? Like, what is the kind of, like

Guy

Yeah.

Jack

Because you're not exposing anything to them in that sense, or are you like Well,

Guy

I think it has to be transparent. So, you know, the platform is not out yet because we're still experimenting. Right? We have a lot of ideas. We have some principles. We're building them out. We're trying them out. Some prove to be bad, you know, as we build them. And so we have this sort of core conviction and increasing clarity about what a spec is.

But we still think that there's a journey to be had here. But generally, what we think is that it's important to be able to capture the LLM decisions in a spec of its own, and be able to say, here are the decisions that the user has made and here are the subsequent decisions the LLM has made, and to allow the user to interact with it. And if the system is very good, then most of the time you don't need to interact with it. So maybe you interact with it when there's a bug. But also, maybe those decisions are questions.

When you go from JavaScript to Python, as, like, two versions of your software, how much do you care? Like, you care that they would be the same in what's in the user spec. You shouldn't care that everything else is the same. Like, maybe one of them throws an exception when there's an error and another returns minus 1. If they are used in two different ecosystems, you might not care.

You might allow that degree of freedom. But within the evolution of the JavaScript version, you wanna say, no, I wanna persist those decisions, because someone using it right now would otherwise have a failed change. And so you need these LLM decisions both for the kind of human need to interact with and attend to, for debugging purposes, but also for the evolution of the software over time. And so they have to be transparent.

They have to be open and available to you. And if you really cast your eyes further, you can think about personalized software.
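The decision-capture idea Guy describes can be sketched in a few lines. This is a hypothetical shape, not Tessl's actual format: the user's spec and the recorded LLM decisions are kept separate, and the error-handling choice is pinned per ecosystem, so the JavaScript and Python versions can legitimately differ while both still satisfying the same user spec. All field names here are invented for illustration.

```python
# The spec the user writes: only their intent, nothing ecosystem-specific.
user_spec = {
    "name": "safe-divide",
    "behavior": "divide two numbers; signal an error on division by zero",
}

# Decisions the LLM filled in, recorded so a human can inspect or override
# them. The error-handling choice differs per ecosystem and is pinned so the
# JavaScript version keeps throwing even when it is regenerated.
llm_decisions = {
    "javascript": {"on_divide_by_zero": "throw exception", "pinned": True},
    "python": {"on_divide_by_zero": "return -1", "pinned": True},
}

def decisions_for(target: str) -> dict:
    """Return the full, inspectable contract for one ecosystem."""
    return {**user_spec, **llm_decisions[target]}

js = decisions_for("javascript")
py = decisions_for("python")

# Both versions satisfy the same user spec...
assert js["behavior"] == py["behavior"]
# ...but the LLM-made error-handling decision legitimately differs.
assert js["on_divide_by_zero"] != py["on_divide_by_zero"]
```

The point of keeping the two dictionaries separate is exactly what Guy argues: the LLM's choices stay transparent and persistable, rather than being buried in generated code.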

Jack

So you

Guy

think about software that says: well, when you browse your site, you really like, I don't know, a minimalist UI.

Jack

Yeah.

Guy

And someone else, you know, prefers a more luxurious, more decorated one. And it could be that you can express that preference, and the LLM can create these multiple versions for you, which is within its reach. But it needs to know what the sandbox is: what is it allowed to change and what is it not allowed to change. Today, humans define that with some responsive design investment. But I think with LLMs, you want more adaptability.
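The sandbox constraint could be made concrete with a toy validator. This is purely illustrative, with invented field names, not anything from Tessl: the spec marks which aspects of the UI the LLM may personalize per user and which are fixed, and a proposed variant is accepted only if it leaves the fixed ones alone.

```python
FREE = {"density", "decoration"}           # the LLM may vary these per user
LOCKED = {"brand_color", "checkout_flow"}  # these must never change

base = {
    "brand_color": "#123456",
    "checkout_flow": "3-step",
    "density": "comfortable",
    "decoration": "minimal",
}

def is_allowed(variant: dict) -> bool:
    """Accept an LLM-proposed variant only if it leaves locked fields alone."""
    return all(variant[k] == base[k] for k in LOCKED)

minimalist = dict(base, density="compact")       # personalization within bounds
rebranded = dict(base, brand_color="#ff0000")    # crosses the boundary

assert is_allowed(minimalist)
assert not is_allowed(rebranded)
```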

Once again, you have to ground to the truth. You have to define the boundaries. But I think what's important is that the platforms will get better over time at understanding you. Right? Say you work with an agency; I like the simple marketing website analogy.

Right? If you've built 5 marketing websites for me and I'm asking you to build a 6th, then I only need to give you a very short spec, because you know me, you know the domain, and it's easy. I don't need to tell you much. If I've never worked with you before, I will need to tell you a bit more. If you've never built a marketing website before, I will need to tell you some more.

And if I wanted to be more eccentric than usual, I would need to tell you more. If I'm asking you to build a more mission critical system, I'd want to give you more. But generally, you learn a domain and you learn a person, or a need, or an organization, and that helps you. It's like the best developer in the team versus the newest developer.

What is the organizational context that they have? What is the historical knowledge they have? So similarly with the LLMs: hopefully, over time, they learn domains, so that in this domain they're very, very good at creating, say, crypto contracts, or Terraform scripts for AWS. And then they learn you: you the person, you the organization. They learn preferences. They learn constraints.

They can evolve that learning. And so increasingly, it allows the specs to be simple, to be more concise. And that's sort of interesting, because it becomes dependent on the decision making of the LLM. And that was that third unlock. Right? We talked about these 3 unlocks of why spec centric development is made possible. One is that it's easier than ever to say what you want. Two is that machines can write code. And the third is that the LLMs can fill in the gaps.

And the better they are, the more black boxy it is. Yeah. But you can choose, unlike the current systems. So, I host the AI Native Dev podcast.

Jack

Yeah. It's good. And I

Guy

had the pleasure of talking to Simon Last, who's the cofounder of Notion. And he was pointing out that fine tuning, he finds, is a bit overblown, and that it's actually very hard to use. And notably, it's very hard to debug. Because in fine tuning, what you do is provide the model with a whole bunch of correct and incorrect answers: here's the input, and here's a good type of answer.

Here's a bad answer. And the problem is: you do that, you inform the system about it, the system gets smarter. But when it doesn't work, what can you do?

Yeah. And if you contrast that with something that's maybe more of an agentic workflow, then you can say: okay, I want to specify that when this happens, I want to survey or critique the results that come back. And so when something doesn't work, you can see where in the flow things broke down, where it went down the wrong route, and you can invest in that area. And I think that comparison shows it: the former is maybe more magical. Yeah.

But you're really quite dependent on the black box. The latter

Jack

Yeah.

Guy

Allows you more control. You can choose your levels of control. And so when we think about spec centric development, there will be a gradient. You should be able to lean on magical systems and require less grounding for them, maybe not that far from just going to Anthropic and prompting something today, when something is simple enough, well known enough, and, you know, low repercussions.

But you should have a gradient of control as you approach it, to be able to actually make this be the way you develop software.
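The debuggability contrast Guy draws between fine tuning and an agentic workflow can be made concrete with a toy sketch. The functions below are stubs standing in for real model calls, purely illustrative: because the agentic version runs as explicit, named steps, a failure can be attributed to a specific step, which is exactly what a fine-tuned black box doesn't give you.

```python
def generate(prompt: str) -> str:
    # Stand-in for an LLM call; deliberately returns a flawed draft.
    return f"draft answer to: {prompt} TODO"

def critique(answer: str) -> list[str]:
    # Stand-in for a critic step: flag problems it can detect.
    issues = []
    if "TODO" in answer:
        issues.append("unfinished: contains TODO")
    return issues

def run_pipeline(prompt: str) -> dict:
    """Run each step and record a trace so failures are attributable."""
    trace = []
    answer = generate(prompt)
    trace.append(("generate", answer))
    issues = critique(answer)
    trace.append(("critique", issues))
    if issues:
        # We know exactly which step to invest in: here, generation quality.
        return {"ok": False, "failed_step": "generate",
                "issues": issues, "trace": trace}
    return {"ok": True, "answer": answer, "trace": trace}

result = run_pipeline("summarize the release notes")
print(result["ok"], result["failed_step"])  # False generate
```

With fine tuning, the equivalent failure would just be a worse answer, with no intermediate trace to inspect; here the trace pinpoints the broken step.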

Jack

Yeah. This is very interesting. One of the things that I wanted to ask you is, getting into the future again, maybe this whole episode is just living in the future, but let's say in 10, 20 years' time you achieve what you set out to with Tessl, and it's the way to build. If there's a startup with 2 founders building something in the future, and maybe it's an AI boyfriend-girlfriend dating app or something, if that isn't already out there, what skill sets are needed at that point, and what does it look like when they're building with Tessl?

Guy

Yeah. So, first of all, I would say that we'd like Tessl to be, you know, the leader in AI native software development, but we do think that it is a movement and it is bigger than just Tessl. So they might be building with Tessl, hopefully. Or they might just be building using AI native development and maybe using some other tools in the process. So I think there are 2 aspects to your question.

One is: what are the skills they would need to build what they want? And I do think that AI native development extends, or expands, who is able to create software. In that sense, they might be able to get away without full-on development, and it depends on what it is that they build, especially if they build something that isn't entirely technologically novel. Their innovation is elsewhere: it's about the data they have access to, or the go-to-market approach, or the niche that they serve.

And so in those cases, I think they might be able to make do with less technical skill. They would still need product skills. They would still need to understand what the product is and what some of the trade-offs are. There are things like: do you want it to be more extensible or simpler? Yeah.

Those are choices. They would need to make some choices around architecture. Do you want to be all in on AWS and maximize everything you can do there? Or do you want it to be cross-cloud, kind of portable? Those are core decisions, but they might be able to make them.

But if they are trying to do something that's more technically deep, and maybe this is the second aspect of the question, I do think that there will still be a development craft. You know, there are many C developers or C++ developers who think that JavaScript developers are not real developers. Right? And I think they're absolutely wrong. They are developers, but there are levels and types of skills that you have around the granularity of your control.

And so I do think that for things that are a bit more innovative, a bit more elaborate, with more of these trade-offs, broader and bigger systems, there will still very much be a need for software developers who can produce more, produce better, produce more innovative software using this approach. An example that maybe illustrates this: if you think back to the early-to-mid 2000s, building a rich website with dynamic behavior, the kind of single page apps that existed back then, would have required a lot of work, and only a very small, select set of organizations were able to invest in it, the Facebooks of the world. Okay, they could afford to do this, but for the rest, only a few people had the expertise at the time to do that.

Fast forward to today: it's a lot easier to create those types of apps. And that's great, but it also means that now consumers expect those types of apps. Now if you have a website from the mid-2000s, from, like, 2005, it feels old. It feels archaic. Yeah.

And so consumer expectations, user expectations, enterprise expectations, they grow. And as we are able to produce more software, there will still be the organizations that excel on the technology side, on the software creation side, and that will manifest. And as it gets easier and easier, demands will grow accordingly. I want my software more personalized. I want my software more, I don't know, proactive.

I want better assurances around things working and not working. And maybe things that I don't even know how to describe today. So I think the opportunities will grow with the ease of creation. Ben Galbraith, our head of product, uses the word leverage a lot, which I like. Yeah.

Developers will have more leverage, and therefore, with a small amount of effort, you'll be able to produce a substantial outcome. But there's still a difference in whether you doubled that initial effort or not; you're still able to produce very substantially different outcomes. Yeah.

Jack

That makes sense.

Guy

So I think that would still be the case.

Jack

Effort still has reward. It's just that

Guy

Exactly. Yeah. It always has. But if you just want to cross a line, to say, hey, whatever it is, I want an application to manage my personal finances, or to help my kid organize their homework setup, or even for a business.

I want something that tracks, whatever it is, the flow of goods within my factory. Then those might be things that you can produce 100 times faster with a lot less skill. Yeah. And I think that's an exciting future. I think that's a place that allows more creation.

So just like AI does in images and in video and in audio. It's scary. It's different. You don't really know what the destination is. But generally, I think that AI simplifies creation and democratizes creation, and software creation is an important aspect of that.

Jack

Yeah. It's very exciting. I realize that you've got a good start. But, Guy, it was so nice to chat with you, and I'm very, very excited about Tessl. Where can people learn more?

Guy

Well, thank you. Great to be here and to share it. I think right now, if you go to tessl.io, or .ai, which just redirects to the same place because it is still the dev platform, you can register for the wait list. You can sign up for the newsletter, or you can subscribe to the podcast. They're all part of the same mix, to just be part of this nascent creation. And we'd very much love people to join our Discord, AI Native Dev, because, again, we think this is a movement, and people have opinions, and you should have a voice; you have an opportunity to shape the future of your profession.

And I think we're providing the stage for it. We're helping. But fundamentally, it's the community that should dictate where this heads. So we'd love to hear from you.

Jack

Yeah. I think everyone building a dev tool should be keeping at least one foot in your community, just to understand what's going on.

Guy

And maybe one last word there: we do think there's a whole domain of conversation about whether AI will drive us towards closed ecosystems versus open ones. The more you are in that magic box that just magically knows what you want, the harder it is for other dev tools to plug in and extend and say, I will do this step in this 30 step process much better, and I will specialize in that. And I think that's a concern. We want to make sure that there is a thriving world of developer tools that compose together, so that every consumer of them is able to build the right thing. And that's definitely how we're building Tessl.

And I think it's something we just need to be mindful of: how do we collaborate right now. Today, you look at a lot of the AI dev tools, and they all try to do everything. Yeah. Part of it is just that we're all learning, but we're trying to think about it more as a composable system. So what you can expect from us is, one, we want to give you a stage.

We'll kind of help you air it. If you're building an AI dev tool today, we'll help you share what you're building. And two, as we build our platform and put it out there, you'll be able to plug in and say: hey, I am amazing at creating tests, or at creating documentation, or at optimizing for this specific stack, whatever it is. You will be able to

Jack

Plug in.

Guy

Connect that superpower into the full platform. So, a lot to build, and, yeah, I hope to see everybody in the community.

Jack

Yeah. Amazing. Yeah. Thanks again, Guy. And, thanks everyone for listening. Thank you.

Transcript source: Provided by creator in RSS feed