We've basically grown 50% per month since then.
I'm joined today by my friend Eric from Trigger, and we dig into the journey of Trigger across four versions.
Like, the feedback we were getting was like, it's not working for some reason or another. It was a whole rewrite again.
Eric talks about how they discovered something that people really want.
That's really what's driven our growth, I think. We heard it in the frequency and the style of what was happening in our Discord, in our Slack, in our GitHub. There is no better feeling in the world than shutting off the servers.
So, there's been four versions of Trigger. Right?
Well, yeah. Three major ones.
Three major ones.
The fourth one is really just an evolution of three. With the fourth one it's more like, okay, now we're really doing version numbers that don't break everything and rewrite everything. So yeah, three major versions. Okay.
But still, it's an easier convention than, like, OpenAI's naming or something. Yeah. You've got four. In Trigger, there's, like, v four, v three. I was part of the v three wave, which I think a lot of people were.
Yeah.
And that's when I started using Trigger and found it to be really, really good. Do you wanna take us back to... Sure. Yeah. ...the evolution of how you got from, like...
How far back do you wanna go? I mean like
Well, let's start with v one Trigger.
So the first version, we kinda hit Hacker News with a headline that was something like, open source Zapier alternative for developers. And back then, it was really about taking what Zapier was for sales and marketing and stuff, and trying to bring it to developers. And what that was, is basically like, oh, you get integrations, and you get these kind of step-by-step workflows, and you just define the inputs and where the outputs go, that kind of thing. And then everything just runs, either on a schedule or based on some event, you know, a webhook thing. And it did really well on Hacker News.
Although it was really funny, because around that time, it was like three days in a row, there was almost an open source Zapier alternative each day. Really? And by the third one, people on Hacker News were like, what the heck is going on? Why are all these Zapier alternatives happening? And I was like, that's a good question. I would like to know that.
Do the others still exist, actually?
I don't know about the others. I think Windmill was one of them. Oh, yeah. Maybe? I can't remember, actually.
Maybe they were just in our comment section. But yeah. So that was the first version, and the tech behind it really only worked on long-running servers. So you had to have a long-running Node process running somewhere for it to work. And very quickly — this was during YC, where we launched that, and we were talking to people during the batch and stuff.
And we kind of quickly realized that, well, everyone that wanted this was on serverless. Right? The other thing we realized was the back office versus inside-of-your-app difference. So Zapier is really a back office tool. You, you know, have things going through a sales pipeline.
You have, like, marketing, you know, things that are behind the business but aren't deep inside the application's lifecycle. Yeah. And we were like, well, this back office stuff — the way Zapier charges, you could get away with just being back office, because if it's a sales pipeline, there's an ROI there. Right?
So you could charge more for, like, one single run, one actual thing, for a sales pipeline. Because, well, $1 spent on that equals, like, a $10,000 deal. Right? But for the stuff developers were doing, it didn't make sense to be back office. So we wanted to be inside of an application, and we wanted to be serverless. So we threw all that away and started on v two.
Were people using it?
Or... No. Not really. I mean, people were trying it out, but because you needed a long-running server — and the people that wanted our solution weren't running long-running servers.
And that was like two months or something?
Yeah. It was about a month. We worked on it for like four months. Okay. And it was live for like two. Okay. But we went down the route of, like, no one had long-running servers, so we were building a platform to run a long-running server.
Uh-huh. This has
been like... and then we were kinda like, what are we doing? So we were like, well, people on serverless wanna do this stuff. Right? We also wanted it to be a bit more inside of your application instead of, you know, a back office thing. So we came up with v two, and the constraints of serverless being you're running on someone's serverless function, which is, like, Vercel or Lambda or something. And so you actually don't have a long time to run a multi-step workflow.
Yeah. Because this could be something that takes minutes.
Yeah. Yeah. It could be making like hundreds of API calls. Yeah. In sequence.
Yeah. Yeah. So you can't do it in one run. So you have to break things up into these reusable chunks, and those are cached. What you end up doing is you have this whole special syntax — not syntax exactly, it's written in an SDK — to create these deterministic chunks that you wrap everything in. So you can do these multi-execution serverless function things that end up pretending to run a piece of code once, but really you're running it hundreds of times, one step at a time. And everything is serialized: every step's output is saved, so when you rerun it, the output's there.
And then, anyway, so it was this whole solution.
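To make that concrete, here's a toy sketch of that replay model — illustrative only, not Trigger.dev's actual v2 API. The whole function re-executes on every invocation, and each cache-keyed step either does its work once or replays its saved output:

```ts
type StepCache = Record<string, unknown>;

// Toy version of a cached "step": if the key has been seen, replay the
// saved output; otherwise run the work and save the result (in the real
// system, persisted server-side rather than in memory).
async function runStep<T>(
  cache: StepCache,
  key: string, // must be static across replays
  fn: () => Promise<T>
): Promise<T> {
  if (key in cache) return cache[key] as T;
  const result = await fn();
  cache[key] = result;
  return result;
}

// Hypothetical multi-step flow: welcomeFlow may execute many times, but
// each step's side effect happens exactly once.
async function welcomeFlow(cache: StepCache, email: string) {
  await runStep(cache, "send-welcome", () => sendEmail(email, "Welcome!"));
  await runStep(cache, "send-tips", () => sendEmail(email, "Some tips"));
}

// Stub for illustration only.
async function sendEmail(to: string, subject: string): Promise<void> {
  console.log(`sending "${subject}" to ${to}`);
}
```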
And that was v two.
That was v two. Yep. And I think we launched that, and it was, you know, all right received, and then we did some big co-marketing thing with Supabase, because we had this Supabase integration. We still had integrations because, you know, you're doing these steps, and each step you have to wrap in this code and all that stuff.
And so we built these integrations to make that easy. Right? So you do a call to, like, GitHub, and behind the scenes it was doing all this wrapping and stuff. You would just have to do the call.
And what would be an example of someone using it with Supabase?
Well, Supabase was interesting because Supabase was also on the trigger side. So you would, you know, write into the Supabase database and then it would trigger one of our jobs. Right? Yeah. Based on the database event.
So, like... Yeah. Send them, like, a welcome...
Yeah. The canonical example is: the user signs up, Supabase inserts into the user table, you listen for that event, it triggers the job, and then you send them a multi-step welcome email flow. Right? That's the canonical example. So that was one of the things that got us on the map. v two was on the map.
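Sketched out, that canonical flow might look like this — a hypothetical route handler. Supabase database webhooks do POST a `{ type, table, record }` payload; `client.sendEvent` stands in for the v2 SDK's event API:

```ts
// Stand-in for the v2 SDK client; treat the exact API as approximate.
declare const client: {
  sendEvent(event: { name: string; payload: unknown }): Promise<void>;
};

// Supabase fires a database webhook on INSERT into "users"; we forward
// it to the job platform as an event that triggers the welcome flow.
export async function POST(req: Request) {
  const { type, table, record } = await req.json();
  if (type === "INSERT" && table === "users") {
    await client.sendEvent({
      name: "user.created",
      payload: { email: record.email },
    });
  }
  return new Response("ok");
}
```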
People were using it? There were, like...
Yeah. Yeah. We had a decent amount of traction.
But it was all still open source completely.
Yeah. It's all open source.
But there was no... Did you have a SaaS at all at that time?
It was open source, and yeah, we've almost always done cloud and open source in tandem. We've never done, like, we're just doing an open source thing and we'll do the cloud later. Yeah. We've always done them at the same time.
And I think that's probably because, you know, we're a startup and we're YC, so we're trying to build a sustainable business. Yeah. And the cloud is part of that. So we always had both of those things going in parallel. Okay. And definitely there were a lot of people using v two self-hosted. They were just running it themselves. It was not that hard to run. It was, like, Postgres and Docker, and that was about it. That was all you needed.
Yeah. Because it was really just like orchestration.
Scaling DevTools is sponsored by WorkOS. If things start going well, some of your customers are gonna start asking for enterprise features. Things like audit trails, SSO, SCIM provisioning, role based access control. These things are hard to build, and you could get stuck spending all your time doing that instead of actually making a great dev tool. That's why WorkOS exists.
They help you with all of those enterprise features, and they're trusted by OpenAI, Vercel, and Perplexity. And if you use them for user management, you get your first million, yes, million, monthly active users for free. I honestly don't know any dev tools that have a million monthly active users, apart from GitHub maybe. So that'll get you pretty far. Here's what Kyle from Depot has to say about WorkOS.
We use WorkOS to effectively add all of the SSO and SCIM to Depot. It's single-handedly, like, one of the best developer experiences I've ever seen for what is, like, a super painful problem if you were to go and try to roll that yourself. So for us, we can effectively offer SSO and SCIM, and it's, like, two clicks of a button, and we don't ever have to think about it. It's, like, one of the best features that we can add to Depot. It's super affordable, which effectively allows us to, like, break the SSO tax joke.
Essentially, say, like, you can have SSO and SCIM as, like, an add on onto your monthly plan. Like, it's no problem. So it really allows smaller startups to essentially offer, like, that enterprise feature without a huge engineering investment behind it. Like, it's literally we can just use a tool behind the scenes and our life is exponentially easier.
So v two was doing well, but we had a Discord channel, and we had Slack Connect channels with people who were paying. And constantly, the feedback we were getting was: it's not working for some reason or another. So we'd constantly be trying to fix things. And a lot of times, what wasn't working was that the code was written in a way where it wouldn't work.
Because you have to be very careful about the code you're writing in these types of systems. You can easily get into a situation where you've written something non-deterministically, and then everything breaks. The whole thing breaks, and it's very difficult to figure out why. Because you are continuously executing the same function, the same code. Right? And if anything changes by the tenth time you've run it, before that tenth step, the whole thing might break.
What would be, like, an example of that kind of...
Yeah, like you're using a date, and that date is just, like, Date.now(). That would be enough to break people's
stuff. Yeah. Okay.
Which is, like, a normal thing to do. Yeah. And, like, everything had to have these cache keys. So you do a step, and you have a cache key that's a string
Yeah.
which basically defines the piece of work — yeah — that will be reused. Right? Yeah. And if you put anything in
there Yeah.
that is dynamic in any way, then you'll just keep doing that step over and over again, because it hasn't seen that cache key before on each execution. Yeah. And that would happen all the time.
Okay. So what should you have done with the Date.now()? You should have, like...
You have to do it inside of one of those cached things. Right? Because then it will only run once. Yeah. But people were just not putting code in those things; they were putting code outside them, and then it would run every single time. So it was like, yeah.
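Continuing the toy `runStep` sketch from earlier, the failure modes he's describing look roughly like this (`buildReport` is hypothetical):

```ts
declare function buildReport(): Promise<string>;

async function example(cache: StepCache) {
  // Broken: a dynamic cache key never matches on replay, so this step
  // re-runs on every execution instead of replaying its saved output.
  await runStep(cache, `report-${Date.now()}`, buildReport);

  // Also broken: non-deterministic code *outside* any step changes on
  // every replay, so everything derived from it can diverge.
  const cutoff = Date.now();

  // The fix: put the non-deterministic call *inside* a cached step, so
  // it runs once and the saved value is replayed afterwards.
  const fixedCutoff = await runStep(cache, "get-cutoff", async () => Date.now());
}
```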
Way to
work. And it was, yeah... you had to change the way you built code. It wasn't like you could take code from inside of your request-response handler, whatever you were using. Right? Like... Yeah. You couldn't take it from your App Router route. You couldn't say, oh, that's taking too long, and just take that code and put it over there. You'd have to rewrite the whole thing.
And I think that's the key thing as well. You're saying App Router, but you're, like...
Yeah.
These are kind of, like, Next.js devs.
Yeah. Yeah.
You're typically targeting like
Yes.
Kind of Yeah.
They're hosted on Vercel. Yeah. And, yeah, I mean, background jobs. Right? I guess the first need for a background job is: I have stuff happening in my request-response handler, and I need to get it out of there, because my user's sitting there waiting for the page to render and it's spinning. That's the canonical reason why you would need a background job. Yeah.
Yeah. So it wasn't a great story to just go put that code over there. And it was like that for good reason: you can't just take it and put it over there, because over there it is still a serverless function. Yep. Because we didn't run the code, we didn't host it. Yeah. We just called into your existing function.
Yeah. And that's they were typically deploying on Vercel?
Yeah. Yeah. So we'd have, like, a /api/trigger endpoint, and we'd manage all that and stuff. Right?
But we would just... You were piecing together, like, Vercel functions.
Yeah. Yeah. If you understood it and knew what you were doing, it would work really well.
Yeah.
But then there's all these other limitations. Oh, this was a fun one, actually. Very early on we had this very strange bug. We couldn't understand what was going on. It was almost like a two-way thing: we would call into the serverless function, but then from there, it would also do API calls back to us. Right?
To save things and things like that. Yeah. And we were like, it's almost like the POST request to us is being cached on that side. Like, that can't be what's happening. There's no way your POST is being cached. But after looking into it, we figured out that's exactly what was going on. fetch, which Vercel, you know, patched in their functions, by default cached POST requests.
So it thinks it's already done it.
So it would be like, oh, I've already done that fetch, I've already done that POST, so I'm not gonna do it again. And we were like, oh. And that's, like, a bug... no, it was actually by design that they did that. But they have since, you know, fixed that. I couldn't believe it. So there's all sorts of little things like that.
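The workaround, for reference, is the standard fetch cache option that Next.js honors — the URL and body here are placeholders:

```ts
// Explicitly opt the POST out of the framework's patched fetch cache so
// it actually executes every time instead of replaying a cached response.
await fetch("https://example.com/api/steps", {
  method: "POST",
  cache: "no-store",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ stepId: "send-email", output: "..." }),
});
```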
Like, it's compute that's running over there. Yeah. And we're trying to orchestrate it, but there's all sorts of stuff that could be happening. Like, you know, the request body limit is 4.5 megabytes. You know?
So if you're trying to invoke it, and you have all this cache data, and it's over 4.5 megabytes, then now you have to do some other thing. So anyway, we basically iterated on it a bunch. We tried to fix everything we possibly could, and we were still running into issues. Yeah. And there were issues that we couldn't fix, because we didn't have control. Right? And then I did a tweet.
I did a poll. I was like, when a trigger.dev job is running, where do you think it's running? On trigger.dev's cloud, or wherever you're hosting your service? Like 55% thought we were running it. Yeah. So I was like, oh. That's a problem.
Because I think even I still got a bit confused when I said to you that v three was open source, or that you weren't running it. Even now I got confused, where... Yeah. ...sorry, sorry, v two...
Yeah.
I knew that v three was where suddenly you were running everything. Yeah. Yeah. And I'd kind of assumed that v two, because I hadn't used it, was just fully open source. But I think what you're saying is, the code in v two was hosted with Trigger, but the actual running, the compute, was not on Trigger.
Yeah. That's right. Yeah. We would just call into the code. You would write trigger.dev code, but it was just running in your serverless function.
Yeah.
And yeah. So we were kinda like, okay, maybe we should just run it, you know? Then we'd have control. We'd be able to do the whole stack ourselves. So, yeah. That was late 2023, and that was when we decided to do it.
Actually, you went from basically, like, saving code in a database. Yeah. Like, in a sense, you're managing JSON.
Yeah. It's managing JSON. Basically, that's all it was. Yeah.
And then suddenly v three is like...
Yeah. We're running it. We have to build it, ship it to us, queue it, execute it — yeah, the whole shebang. So that's where the ignorance I was talking about came in. It was good that we didn't know all the challenges. And we had just hired our first non-founder engineer. He was one of our open source contributors.
And I don't think we would have been able to do it if we hadn't hired him. If we hadn't hired Nick. Because he built the whole execution side of things. And he was, like, a Linux nerd. He has Linux on his laptop. You know? Very, very...
Side point, not to go down a tangent, but he's such a cool guy. He was saying at the pub that he wouldn't have joined Trigger if it weren't open source.
Yeah. So, I mean, it was good timing, basically. I think we probably wouldn't have even considered it a possibility if he hadn't come on, because I don't think we would have wanted to take on the whole challenge. You know?
So it was a good time, and we were like, all right, let's just do it. But it was a whole rewrite again. Because you don't need all that stuff we were doing before if you're actually just running the code. So now we say, oh, you just run it until it finishes. Like, why have a limit on how long? It'll just run until it's done, however long that takes.
So we basically embarked on that in early 2024, and then we launched the first version in, I don't know, May 2024, something like that. It was a very quick schedule, because in March, Defer,
who was
doing something actually very similar to v three — they shut down, and they basically were like, by the end of the month, we're finished. And so we were like, let's get this done so people can jump over from Defer to us. Yeah.
And so we kinda rushed that last bit out to try to capture some of the people who needed somewhere to go. Because it was actually quite similar; they were doing the whole building and running and all that sort of stuff. Although it was a bit like... they're shutting down, right, with the same idea.
So we were kinda worried about that. But yeah. So we embarked on that, and then we launched the proper cloud version of it probably later that year. August, September, somewhere around there, 2024. It's not that long ago, really. But yeah. It's a complete rewrite, in that we run everything, and it's been completely different since then. Since that September.
So May was just the open source?
Yeah. It was sort of a private beta.
Okay. Okay.
And it was open source, and the cloud was private beta. Yeah. And we learned a lot of things between that and going live in September. And yeah, I mean, we're still learning stuff. But...
And so v three now is, like, people can... Yeah. ...essentially just write what feels very much like you're...
...writing in your language, yeah. They can take the code that was just written over there, top to bottom, and stick it in — we call them tasks now. Yeah. Because we had to come up with a different name. We were like, jobs, we've done that one, so: tasks. Now you just take it and put it in there, and it can just run.
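For reference, a v3 task looks roughly like this (shape per the trigger.dev v3 docs at the time; `sendEmail` is a hypothetical helper):

```ts
import { task } from "@trigger.dev/sdk/v3";

declare function sendEmail(to: string, subject: string): Promise<void>;

// Plain top-to-bottom code — no cache keys, no step wrapping.
export const welcomeEmail = task({
  id: "welcome-email",
  run: async (payload: { email: string }) => {
    await sendEmail(payload.email, "Welcome!");
  },
});
```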
And it feels to me like it's a Lambda function. It's like writing a Lambda function.
It's like a Lambda function. It's very similar to a Lambda handler. Yeah. And one big thing we got rid of was the integration side from v two. We don't do integrations anymore.
That was another thing we learned: trying to do integrations in code means you need a massive team of people to do it well. Because everything changes all the time and you have to keep up to date. I mean, our most popular integration was the OpenAI one, and even that — they were just always releasing new stuff. Yeah.
So you'd have to, day one, be like, okay, all of a sudden we have to stop what we're doing and go update the OpenAI integration. And so we were like, yeah, let's not do that. And we did this whole thing with webhooks in v two, where we would completely manage the whole webhook lifecycle, register them for you, handle the triggering side, all that sort of stuff. We got rid of that too, because again, you have to have a team of people to do that well. One of the lessons we learned with v two was: don't try to do too much.
Like, we were trying to do everything. Yeah. So with v three we took that chance to really simplify. Now you have to actually trigger a task manually. You can do that from wherever — your back end. Right?
Yeah.
So you can handle the webhook and trigger from that. Like, we won't do it — you do it. Yeah. Use Hookdeck, you know, and they'll handle the webhooks, and then you can just make the call. So use a tool that's actually made specifically for that.
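So the triggering side lives in your own back end — something like this sketch, where your webhook endpoint (delivered however you like, e.g. via Hookdeck) just triggers the task by id:

```ts
import { tasks } from "@trigger.dev/sdk/v3";

// Your own webhook route: receive the event, trigger the task manually.
export async function POST(req: Request) {
  const body = await req.json();
  await tasks.trigger("welcome-email", { email: body.email });
  return new Response("ok");
}
```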
The only thing we kept is the scheduling stuff. We kept that in because a lot of people have to have scheduling in their back end or their background jobs. So we did keep that. But yeah, we simplified stuff, and I think we made it possible to build a good product. With v two, it wasn't possible, I don't think.
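The scheduling that survived looks roughly like this in v3 (shape per their docs; details approximate):

```ts
import { schedules } from "@trigger.dev/sdk/v3";

export const dailyDigest = schedules.task({
  id: "daily-digest",
  cron: "0 9 * * *", // 09:00 UTC every day
  run: async (payload) => {
    // payload carries the scheduled timestamp, among other fields
    console.log("digest run for", payload.timestamp);
  },
});
```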
We couldn't have built what v two promised, because we have these constraints. Like, we're not gonna be able to hire 15 people to work on integrations.
Mhmm.
So we kind of realized, okay, let's build what can actually be good. And that's focus. But yeah, you can just take code, slap it in a task, and it works. But we have one little bit of special sauce that we built into the platform. And that is... because a lot of times what you do want is this idea of durable execution, right?
Like, you wanna be able to do this task, and you give it an idempotency key, and it will only run once, for example. And we have these tasks, and you can have a tree of tasks, so one task can trigger another task. But of course, you wanna know what the other task did. So you trigger and then you wait for the result. And the traditional way of doing that is the way we were doing it in v two, where you have to write everything in this, like, deterministic
Mhmm.
sense, where you'd be able to go back and execute the function again and have it skip down until you get to the point where you just were. That's how you resume code in a traditional durable execution system: you have to do it deterministically. The other way — which is what we've done now — was completely driven off of sketching what we wanted the code to look like. We were like, this is what we want the code to look like
in terms
of the user. And we were like, okay, how do we actually do that? Because if you're in the middle of a JavaScript function, there's no way to say, I wanna stop this process and resume it from that part of the function. You can't do it. Right? But that's what we wanted. We wanted to be able to call a function again and say, start from that point. You know, start from line 10.
And that's because something would take ages. Like, you wanna restart it.
Yeah. You might be triggering another task, and that might take a while. Yeah. So we don't wanna actually spend the CPU, the process, waiting for this other thing.
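A sketch of that parent/child pattern: the parent triggers a child task and suspends — no compute billed — until the result comes back. (Idempotency keys are passed as trigger options in v3; exact shapes approximate.)

```ts
import { task } from "@trigger.dev/sdk/v3";

export const transcode = task({
  id: "transcode",
  run: async (payload: { videoUrl: string }) => {
    // ... do the slow work ...
    return { ok: true };
  },
});

export const pipeline = task({
  id: "pipeline",
  run: async (payload: { videoUrl: string }) => {
    // Waits for the child's result without keeping this process busy.
    const result = await transcode.triggerAndWait({ videoUrl: payload.videoUrl });
    return result;
  },
});
```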
Well, like, I saw you had an example where you want, like, a Slack approval. Like, moderation or something like that.

Yeah. It could be something non-deterministic that interrupts it.

It might take... wait till Eric approves it.
Yeah.
And then restart from...

Right. Yeah. You restart from there. But obviously you don't want the compute running just to wait. Yeah. And so really what we wanted was to be able to call a function in JavaScript and say, start on line 10. But you can't do that, obviously — that doesn't exist. So how you do it, to start on line 10, is you have the previous lines, one through nine, be cached.
Yeah. Right? And that's the whole thing we were doing in v two. And we were like, well, we can't do that. So how do we actually get it to resume on line 10? What we built — which is completely hidden from the user, it all happens in the background — works at the process level. We take a snapshot of the entire process.
We save that off to storage. It basically freezes the memory and everything that's happening in the process onto disk. Yeah. And then you rehydrate it from disk, and it's like the process never exited. The process just comes back on line 10.
Yeah.
And so we're using this technology called CRIU — C-R-I-U. It's an acronym; it stands for, I think, Checkpoint/Restore In Userspace. It's this Linux thing. So we're doing all that in the background. Damn, that's cool. And that's what allows you to do, you know, line 10. Yeah.
So you write the code normally, but it doesn't run normally. Yeah. And that's the only thing we added that isn't just normal code running from start to finish. Super cool. Yeah.
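Which is what makes code like this sketch possible — it reads as straight-line code, but at the wait the process is checkpointed and nothing runs until it resumes (`sendEmail` hypothetical):

```ts
import { task, wait } from "@trigger.dev/sdk/v3";

declare function sendEmail(to: string, subject: string): Promise<void>;

export const slowFollowUp = task({
  id: "slow-follow-up",
  run: async (payload: { email: string }) => {
    await sendEmail(payload.email, "Welcome!");
    await wait.for({ hours: 24 }); // checkpointed here; no compute while waiting
    await sendEmail(payload.email, "How's it going?"); // resumes as if it never stopped
  },
});
```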
And then how was the reception of v three compared to v two?
Yeah. A lot different. I would say maybe, like, two orders of magnitude better than...
Two orders of magnitude. About a hundred times.
We launched the part where you could pay for the cloud — the usage-based cloud stuff — in September, I think. Maybe it was August or something like that. But we've basically grown 50% per month since then. That's insane. I wasn't sure
if you were gonna say that on the show.
Yeah. Yeah. Yeah.
You told me that.
No, I can say that. I can say that. I think...
...YC say the minimum is, like, 16 or something?
I think it's, like, 7% per week. 7% per week is really good. So that's what we shoot for, 7%...
...per week is what per month?
I think it's, like, 30 or something. Okay. And you're at 50. Yeah. And every month we're like, we can't grow 50% again, that's not gonna happen. Right? Obviously some months are more than others, but that's about our average. Yeah — revenue. And, you know, people wanna run compute for all sorts of different reasons. And if you give them a platform that just runs it forever, you take away that worry.
Yeah. And also, what we ended up doing — we didn't even really think about this — but when we did v three, instead of it being a Node.js-only thing, it's an image. Like, you get a Docker image. Right?
And you can customize it. So you run Node — it is Node — but then you can run Python or Go, or shell out to FFmpeg, or install LibreOffice and parse a PDF or something like that. So a lot of people ended up coming and installing these custom packages and doing stuff that's otherwise really hard.
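For flavor, image customization in v3 happens through build extensions in `trigger.config.ts` — this sketch assumes the documented ffmpeg extension; treat the exact import path and options as approximate:

```ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { ffmpeg } from "@trigger.dev/build/extensions/core";

export default defineConfig({
  project: "proj_xxxxxxxx", // placeholder project ref
  build: {
    // Bakes ffmpeg into the deployed image so tasks can shell out to it.
    extensions: [ffmpeg()],
  },
});
```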
You can do that stuff on Lambda — it's tricky, but you can sort of do it. But on a lot of other serverless platforms, you wouldn't be able to. So you have this full, actual compute platform. You can do all sorts of stuff. And what I mean by we didn't really think about it is: of course a background job platform particularly needs that kind of thing.
Right? Because you're not going to be doing an FFmpeg encoding job in a request-response cycle. Yeah. So the way we architected it meant we could do that fairly easily, and that's really what's driven our growth, I think. Mhmm.
It's that full platform ability. And then the fact that you can break things out into these sub-tasks, where you can do things idempotently. That's a way to save money, right? You're not doing work unnecessarily. And you're also saving money on the other side, where you're triggering something and waiting for it to finish.
And that thing isn't running either. So yeah, it's economical from that perspective. And v three has definitely been a different feeling — you can just tell. You can feel it: the amount of people trying it and giving you feedback, the amount of people in your GitHub or in your Discord or in your Slack.
Your Discord is very active.
And the amount of time it takes... like, some people just come on the platform and we never talk to them. And they're using the cloud like crazy, running massive workloads, and we've never even talked to them. They're just getting on with it. It's definitely a different feeling from v two.
Yeah. I feel like there is something there. Obviously you wanna be good at marketing and everything like that. But you probably weren't two orders of magnitude better at marketing when you did v three. No. Yeah.
That's true. That is for sure. Yeah.
There is something where it's like.
Yeah. I mean, the YC advice: make something people want. Right? So we kept hearing what people wanted. And you don't hear it, like, directly.
Yeah. It's not like anyone came in when we were doing v two and just spelled out what v three was. Right? But we heard it in, I guess, the frequency and the style of what was happening in our Discord, in our Slack, in our GitHub. You're talking to them, not necessarily about... like, we did do the whole mom test interview thing. We would talk to people and be like, what are you doing?
Like, what do you need? That is useful. Right? But I think when you hear, for six months...
Mhmm.
...from people about all these problems, then you're sort of like, okay, actually, if we just did that, maybe we'd solve most people's problems this way. Yeah. And it wasn't like everyone was super happy about v three. A lot of v two people weren't happy, because they liked that model.
Yeah.
Yeah. Yeah. People who were getting on with v two were happy with how that was working. But we were like, well, yeah, it is great when it's working for you and you understand how to do it and all that stuff.
But to get to that next step... that was the model that didn't work. And luckily, there are a few people out there doing that style of thing. Cloudflare Workers has one, and Upstash and Inngest, they all have that model. So if you wanted that model, it wasn't like we were the only thing doing it.
So that was good. And we did give people, like, six months. We basically had a six-month deprecation on our cloud. Obviously, you can still self-host it, and people are doing that currently. But we gave people plenty of time to...
Yeah.
...either migrate to something else or go to v three or something.
Yeah. Actually, do you have any advice for when you are doing big changes like that?
Well, yeah. It's interesting right now, because I think you really have to be careful about big changes now because of LLMs. We didn't foresee the problems of v two to v three from that perspective. Because back then — I mean, this isn't even that long ago, late 2023 — people were still reading docs. There was no Cursor.
No one was vibe coding. But now, no one reads docs. Why would you? The AI reads the docs for you, right? Maybe? And then you just use the AI to write code. So we didn't really foresee that. I'd be really careful about making huge changes now.
Mhmm.
I guess we're lucky because we have something that works now, so we probably won't be making big changes. But if that had happened later, we would really struggle, I think. And we still struggle to this day with people coming in with v two.
That's right.
Like, why doesn't this work?

And it's like, v two... Yeah. I mean, this happened the other day. A guy was just vibe coding and was using all sorts of stuff from v two — and also stuff that never existed. Yeah. Sounds pleasant. It was, like, a hallucinated package that we never even had. He was trying to get it to work, and I was like, well, that package doesn't exist. Yeah. Because we're an open source dev tool, I think we were really careful. We weren't gonna just shut v two down
Yeah.
on the cloud and be like, you guys are screwed. Right? Because, one, we've been in that situation before. Yeah. And you never wanna have a couple weeks to do something or your business is gonna be completely messed up. Yeah. So we said six months. Actually, I think it was slightly longer than six months.
Advertised six months.
Yeah. Well, it was supposed to be January 31 of 2025, but of course I think we ran it for another two months
after that.
You know, just because we had a few people who were still migrating, and we were working to shut it off. I'll tell you what, there is no better feeling in the world than shutting off a service like this and deleting the code. Yeah. That was amazing.
One thing we did really early was create, like, an LLM migration thing. So you could put it into the LLM and it would migrate the code for you to v three. Yeah. A lot of people found that useful.
Now you'd probably do that with, like, an MCP server or something. But yeah, I think we were just really careful about the community and making sure people weren't left in a bad place. And we were getting on calls with people and migrating them over manually. They were in our Slack — lots of people — and we were helping them. Yeah.
We're coming towards the end, Eric. Mhmm.
So
I don't know if you wanna just very briefly touch on v four and the future.
Yeah. Well, v four. We've rewritten the engine — what we call the run engine. Not much is changing from a user perspective. Other than, that whole thing where you can wait for other stuff to happen and then resume — it was a bit hard-coded before, and it wasn't very extensible. We've rewritten the engine to make that programmable.
So you can almost programmatically now stop where you want. That's the whole human-in-the-loop thing, where you can stop it and then manually resume from somewhere else based on, like, a Slack approval or
something.
Or like an agent running and it does something.
That's so cool. I really love that. It's just really cool. I saw you demo that, where it's like, here's an image someone created — approve it.
Yeah. And so you can do it through anything. Right? Maybe a lot of people have these other things running, like this pool of compute that's doing something. Right? And they can pass around the token — we have these tokens that you can use. So you throw the token to the other system, and then when you want the trigger.dev workflow to resume, you use the token to resume it. We call those wait points.
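A sketch of that wait-point flow — these API names (`createToken` / `forToken` / `completeToken`) are approximations of the v4 surface he's describing, and `askForApproval` is hypothetical:

```ts
import { task, wait } from "@trigger.dev/sdk";

declare function askForApproval(tokenId: string, imageUrl: string): Promise<void>;

export const approval = task({
  id: "approval",
  run: async (payload: { imageUrl: string }) => {
    const token = await wait.createToken({ timeout: "7d" });
    await askForApproval(token.id, payload.imageUrl); // e.g. post to Slack

    // Suspends here — no compute runs — until something completes the token.
    return await wait.forToken<{ approved: boolean }>(token);
  },
});

// Elsewhere (a Slack bot, another service), when Eric approves:
//   await wait.completeToken(tokenId, { approved: true });
```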
And so now that's very extensible and programmatic. That's going to unlock a whole new set of use cases. That's really v four in a nutshell. And we've made some improvements for agents. Almost everyone is running the AI SDK inside of their tasks in some way, so we've made a lot of improvements around that, and you can stream what's happening in the LLM straight to your front end. If it's happening in the background, you can stream it.
So basically, that's from your background job to your front end. And we've also added a thing where you can interrupt — you can stop the LLM from running. So it's like two-way communication with the LLM that's running in the background. Because obviously, if you're using an agent, you wanna be able to stop it.
Yeah. So we've made all that better. The observability of all the AI stuff that's happening is much better. We've generally made it a better tool for wrapping your tasks to provide them to LLMs. And we have better Python support, so you can run Python. A lot of the agent stuff in Python is a little more advanced than JavaScript, so a lot of people want to run agents in Python and then coordinate it all.
So the Python stuff is a bit better. Yeah — lots of little things. And we were like...
Far along. Okay. We once had a tsunami warning. I think we were interviewing Jake from Railway. We got a tsunami warning here, in San Francisco.
Oh. That was... I say we. I was gonna say, what? Yeah. No. The tsunami.
Yeah. That was Jake from Railway.
That was in town. Okay. It didn't happen, though. Okay.
No. And he was just like, oh, interesting. And then just carried on. He's, like... Jake's committed to the...
Yeah. Yeah.
To building B2B SaaS.
Nice. Nice. Yeah.
He's going down. He was like, I'm gonna go down.
I love the tweets where it's like, oh, a planet has been discovered with life or something. Like, oh, now we have a whole new cohort to sell our B2B SaaS to.

I feel like that must be James from PostHog.
Yeah.
Or like he's just taken that whole like
Yeah.
Anything joking.
Yeah. So...
If people wanna check out Trigger: trigger.dev.
It's in the name.
It's in the name. Yeah. Eric, thank you so much. I'm so glad we finally... we did one episode before, but this was the real Trigger one.
Yeah. I know.
You were always like, could we do an actual one rather than just talking about open source?
I mean, that was fun too. Yeah. I was geeking out about open source licenses. Yeah.
Okay. Amazing. Well, thank you very much, and thanks everyone for...

Yeah. Thanks for having me.