¶ Intro
Hi everyone, my name is Patrick Akio. Have you ever imagined software having a blueprint? That's what we discuss, as well as the challenges of being a technical founder. Joining me today is Anthony Alaviva, and I just had a blast of a conversation. I love his perspective on small companies and big companies. So enjoy. I love that you're saying you're combining travelling and working here and there.
¶ Enhancing observability
I'm lucky enough to do that at a company that's not my own, but you actually have your own startup, so you have that flexibility. What are you mainly doing at the startup currently? So we're doing monitoring for tech companies. And when I say monitoring, that's just one aspect — people think about different things when they think about monitoring. But I realised that when we build a lot of back-end
services — maybe you have a mobile app that is communicating with your systems, or you have other customers that are communicating with your systems, and they do that through these API interfaces — a lot of the
things that go wrong are usually reflected there. And I started to think — I mean, I've had a lot of experiences that forced me to do this — but I realised we're not doing enough monitoring at the level of those APIs, at the level of looking at the data and learning what the data should look like, and maybe detecting when something changes, when something goes wrong. You mean the request and response data specifically? Yeah, the payloads
themselves, what is in there. Yeah. And I think the decision to build this came from a previous
¶ Huge production issue
job. I used to work for Delivery Hero in Berlin, Germany, and I was responsible for a migration of a legacy service that had served the company for about ten years. We had hit a performance limit and needed to rewrite it onto a new technology stack. And of course we did this like you should: we wrote a lot of tests, we spent some months on this migration. But when we rolled out, put this change in production, of course things went wrong and we lost a lot of
money. I think we lost something like €2 million worth of orders. Really? Yeah. How long was the downtime? It wasn't really a downtime — I'll explain what happened. In some countries, specifically in Scandinavia, there is a requirement that you have to include the dietary requirements of every dish that you're selling. OK. And Delivery Hero is this company of food delivery brands,
in like 50 countries. And only the Scandinavian countries had this requirement, of course. So when we were developing and building, we were testing against, I don't know, Asian countries and European countries — I mean, like Germany — but we were not really testing against Scandinavia. So this issue happened in Scandinavia, and pretty much all of
Scandinavia was down. And those are tricky issues, because when someone says something is wrong and you go testing, you're still testing with the other countries. Yeah. So that is what happened. And we realised the issue was basically that there was a field missing in the payload which should be present in Scandinavia. Yeah. And everyone who worked on this project in its early days — one of them was a co-founder of the company, who was like,
no, I'm not touching that code. Yeah. And as for the other people — no one knows anyone who worked on this project anymore. So there was no one who could have maybe said, did you check this? So it was just a painful process of figuring out what was missing from the different payloads in those specific countries. And I realised that this is the kind of work that computers are good at.
Computers are very good at pattern matching, you know, finding patterns between things, and I decided to build APItoolkit.
¶ Orders are down, but why?
Good stuff. It's really funny — and I understand the use case. At some point, when you're talking about 50 countries, and all of the countries, or at least across countries, there are different regulations in what needs to be done, even on a technical level, you accommodate for that. You make changes in the future. The history of all the people is gone.
That knowledge might be gone. And then when you make changes — I mean, the team still maintains the test suite — if the test suite is not rich enough, then all of a sudden you can release a bug to production that only covers a few countries, and then you're in the big deep unknown, I feel like. Yeah, it felt like that. Right — it feels like the unknown unknowns. Because we had this other tracker that we kind of
have: we know exactly how many orders we should be processing at every moment. Yeah. And it's just like, yeah, orders are down in these countries. Why? Oh, like, shit, why? Yeah. And you go to the web page, it's working. And you go to, I think, the mobile app or something, it's crashing. And you're like, OK, why? We're actually doing what we should be doing. And there's no one to ask, no one to say, OK, do you have an idea why this is happening? Yeah, that's hard, man.
¶ Things we don't think of as developers
How different is it working on, let's say, your own startup versus working at — I mean, Delivery Hero is quite a huge company. Now, first of all, you're in control of what you build, but you're also responsible. I mean, you have a team, so it's a lot on your shoulders. How has that difference been so far? It's a different skill set. I mean, I'm an engineer; all of my career was writing
code and building systems. I realised late that I had to learn some other skills. I mean, I had this experience, and some other experiences from previous jobs that also reinforced the decision. Yeah, and I just went out and started writing code. Yeah, that makes sense. I think maybe the only thing I did was put up a landing page with a wait list, OK?
Which, you know, I shared around, and people joined the wait list, and I was like, oh, I have people on the wait list now; I can take my time and build, and they're waiting for it. And then, six months later, I'm like, oh yeah, I've built something now, maybe all these people on the wait list will sign up — and crickets. So I realised I had to learn how to talk to
customers, talk to people. Don't just have ideas and go implement; let the people who would use these things play a very important role in what you're building. Yeah. And then I had to learn, of course, managing people a little bit, and just a lot of things that we don't think about as developers. I can imagine. SEO, pay-per-click advertising — there are just a lot of things. Yeah. I mean, it's basically
running a company, right? Normally you have people that are specialised in one small part of running the company, whether that's selling the product or marketing the product or actually building the product.
And now, all of a sudden — especially when you're starting — you can do everything and anything, and then you have to decide. And I completely understand that you default to comfort, because the engineering side is bread and butter: we can do this, we can build the product. But then it's like, OK, how do you sell the product? And if no one knows it, how do people find it? So on the marketing side of the product, walk me through
¶ Getting the first customer
getting that first customer, because I think that's always a big win, when you actually land your first customer. So we have a free tier, and then we have people who pay. Yeah. We had people on the free tier pretty much immediately. I shared what I was doing on Twitter and everywhere, and a lot of people just had fun with it.
That's great. But the companies who really want to monitor critical systems, they're not just going to use any random tool out there. Trust becomes very important, and gaining that trust was really the hard part. I think it took a year from 'hey, we're launching' to actually having someone drop money and say, OK, I want to really pay. And I don't know what I did to get that
person. I think I just posted a lot more videos on YouTube, and it was someone who had signed up like a year ago but was just watching — just watching me post and share things — and who eventually paid. Then slowly more and more people started to use it and share it. I've actually had a lot of word of mouth, to be honest, just because at this point I didn't know anything about sales, I didn't know anything
about advertising. So a lot of the people who came in were word of mouth. And I found this niche, kind of sub-
¶ Finding a niche
niche, of users in fintech — but more like the younger fintech companies, OK. And the reason is that financial technology companies depend on a lot of other third parties. It could be Mastercard, it could be — for example, we have some Nigerian customers, fintech companies. And for them, there is this national interbank settlement
system. So when you want to send money to another bank, you're basically communicating with that system, and that system is very unreliable. Oh, really? It is, OK. So for these fintech companies, it became very important just to log all their communications with those third parties that they depend on.
Because when something goes wrong, they want to have proof and say, hey, I called you with this payload and you responded with, I don't know, something else. And I think I found a very good niche there, because they were actually looking for something like that. Some people, of course, rolled their own — maybe did something with Elasticsearch — but how much can you do
on your own, right? So there are a lot of people who are really searching for something; they just don't know what to search for. It's a product category that almost doesn't exist, so everyone's just rolling something themselves. So most of these people would come looking for just something that logs the requests, and they would stay for the
other things. For example, if there's any change — like a breaking change in your APIs or any APIs you depend on — we detect it and send you an alert: hey, we've noticed a breaking change, this field is missing or this field was updated. And then you can go to that provider and say, hey, you updated this field, why? So they changed it, yeah. So they would come for just the logging and then stay for the other things.
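The kind of detection described here — learning what a payload should look like and alerting when a field goes missing or changes type — can be sketched as a field-level diff between an old and a new payload shape. This is an illustrative simplification with invented field names, not APItoolkit's actual algorithm:

```python
# Sketch: detect breaking changes between an old and a new payload shape.
# Illustrative only; real tools learn shapes from many observed payloads.

def shape(payload, prefix=""):
    """Flatten a nested payload into {field_path: type_name}."""
    fields = {}
    for key, value in payload.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            fields.update(shape(value, prefix=f"{path}."))
        else:
            fields[path] = type(value).__name__
    return fields

def diff_shapes(old, new):
    """Report fields that disappeared, changed type, or newly appeared."""
    alerts = []
    for path, old_type in old.items():
        if path not in new:
            alerts.append(f"BREAKING: field '{path}' is missing")
        elif new[path] != old_type:
            alerts.append(f"BREAKING: field '{path}' changed {old_type} -> {new[path]}")
    for path in new:
        if path not in old:
            alerts.append(f"INFO: new field '{path}' appeared")
    return alerts

# A dish payload that loses its dietary field and changes the price type:
old = shape({"dish": "Pasta", "price": 12.5, "dietary": {"vegan": False}})
new = shape({"dish": "Pasta", "price": "12.50"})
for alert in diff_shapes(old, new):
    print(alert)
```

A diff like this would have flagged the missing Scandinavian dietary field in the Delivery Hero story before anyone had to dig through payloads by hand.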
I love that. I think it's really cool that you found a niche where the auditing is really important, and probably then holding those third parties accountable for any breaking changes — because especially when you're talking about financial technology, anything money, people will put extra money in to keep it safe. So it makes a lot of sense that that need was there. Yeah. I worked in
¶ The fintech industry
fintech, you know, years back — maybe five years ago, actually maybe more like seven years ago. But the niche is just so delicate. It's money. Money's super important. You don't want to lose people's money. But the stories you hear — I know a customer who lost basically all of it. They raised funding and they lost all the money they raised. No way. Due to basically a
provider issue like this. So when you send money — say you're sending, I don't know, €10 to someone — what would happen is that person would get €1,000.
So, those two zeros at the end. And why that happened was that the third party they relied on updated their APIs and said, we need you to send the amount with two zeros at the end — of course, something like that, because that's what you should do anyway — and they had not done that before. But they just made a breaking change overnight, and these guys lost a lot of money because they were sending extra money to users.
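The failure mode here is the classic major-units versus minor-units mismatch: once sender and receiver disagree about whether an amount is euros or cents, every transfer is off by a factor of 100. A hypothetical sketch — the field names are invented, and the real provider API isn't named in the conversation:

```python
# Sketch of the major-vs-minor-units mismatch behind the €10 -> €1,000 story.
# Illustrative only; field names and API behaviour are assumptions.

def build_transfer(amount_eur: float, api_expects_minor_units: bool) -> dict:
    """Build a transfer payload in the units the API is believed to expect."""
    if api_expects_minor_units:
        amount = round(amount_eur * 100)   # €10.00 -> 1000 cents
    else:
        amount = amount_eur                # €10.00 -> 10
    return {"amount": amount}

def interpret_amount(payload: dict, reads_minor_units: bool) -> float:
    """What the receiving side believes the amount is, in euros."""
    return payload["amount"] / 100 if reads_minor_units else payload["amount"]

# Both sides agree on minor units: the €10 arrives as €10.
ok = interpret_amount(build_transfer(10, True), reads_minor_units=True)

# The sender appends the two zeros, but a code path still reads major units:
# the €10 transfer is interpreted as €1,000 — money walks out the door.
bad = interpret_amount(build_transfer(10, True), reads_minor_units=False)
print(ok, bad)
```

This is why an overnight change in amount semantics, with no field rename and no version bump, is so dangerous: the payload still parses, it just means something different.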
So there are lots of those little stories of breaking changes and problems, and it's nice to help these companies figure that out. Yeah. I don't know how it's grown, but I think a lot of financial institutions have very old software. Nowadays there are practices to, for example, generate your documentation, or make sure you have a contract and then generate your APIs from it. So then breaking-change detection is kind of out of the box, because you have it just by
virtue of your way of working. But I do recognise that in a lot of, let's say, older institutions — I mean, at some point, if you have a team and that team grows and then you lose people, people leave, you all of a sudden have APIs and you might not even know what those APIs are for, or which ones are out there, and especially what they do. Yet you have to create software, you have to create change, because otherwise the business
value is stagnant. And from a business perspective, you need to create value to keep customers and to attract customers. So yeah, it's this kind of ever-evolving problem then, from a legacy perspective. Exactly. Especially in large
¶ The problem of many separate teams
companies, there's also the problem of many separate teams that don't always know what the others are doing. Yeah. And one use case that someone is solving with APItoolkit — which we did not design to solve, this use case — is that they have a documentation team, and they have, of course, the engineers. And the engineers are supposed to tell the documentation team: here, we made this change that should be documented. OK, but how does that work? How does that work?
Yeah, it's a very common thing that the engineers just forget, you know, and then the documentation is always different from what is actually out there. And the head of documentation in this company just realised: the dev guys really don't care about a lot of these other things, but I just want to know if there's a breaking change, so that I can tell my team to update something.
So keeping track of what's going on between these departments ends up being useful. Yeah, of course.
¶ Someone to hold you accountable
You mentioned you had a lot of skills to learn in creating this startup. I'm assuming the engineering side was more of a comfort zone, as you say. Was the biggest learning on the sales side, the marketing side, or anything beyond? It was on the sales and marketing side, and it still is, to be honest, because I'm still learning. Coding really is a comfort zone.
Sometimes I find myself just writing code and doing stuff, and then I have to remind myself: why are you doing this? Is it really that important, or should you be talking to customers, or, I don't know, figuring out marketing? I think one of the biggest things I did — you know Dennis, from SaaS Launch — she ended up becoming a kind of
coach, in some sense. Because sometimes we would talk about what I want to do with regards to selling and marketing, and then: oh, we have this customer bug report that also needs to be done; I think I'm going to spend the next two weeks fixing this bug. And she was like, is that really what you should be doing, or could someone else on your team do that, or could you outsource it? So that kind of reminder became very
important. And for sales, it's just a lot of learning, to be honest. A lot of podcasts — yeah, like this one — and a lot of books, and also a lot of lost money. Because, you know, you listen to people talk about strategies that are bulletproof, and you pump money into them and realise, no, there are holes everywhere. Yeah, OK. But do the learnings then come from how,
¶ Knowing the competitors in your niche and sharing it
for example, you find your customers, or how you talk to them, or what you say, how you hone your pitch — what on the sales side was the biggest learning there? How I talk to customers evolved a lot. When I started out, I read this book called Running Lean. OK. I don't remember the author's name, but it's a book based on The Lean Startup. Yeah. And he does a very good job of laying out a kind of road map for how you can have
these customer conversations. Yeah. When you get on a call with a customer, it's very easy to just start rambling and selling — oh, we do this, we do that. And sometimes that is not even how you sell, it turns out. How you sell is really: you jump on these calls and you try to learn about them, learn about their use case, and help them solve a problem. And sometimes that solution might not even be what
you're selling. And I've had a lot of that, where someone comes on a call and says, oh, I want to do this, this, this. And I'm like, OK, that's an interesting problem you want to solve — have you tried this other company? Have you tried this other one? And they're like, oh, I didn't know about these people. Because it turns out, as startup founders, we know a lot about the niche, just because we're always researching, always looking at
competitors, of course. So we end up being the ones who know. If someone has a problem in a related niche, we likely know all the other players that might perfectly solve that problem. So why keep that information to ourselves just because we want to sell what we're offering, when we could actually point someone to the correct solution? I learned that along the way, and the conversations became a bit nicer.
Yeah, of course. Because it became less selling and more just getting to know someone — more pleasant, and maybe more genuine as well. But also just the questions to ask, because at the end of the day you still want to sell, so you want to ask questions about what problems they're solving, and Running Lean helped a little bit with those questions.
On the customer acquisition side, I also learnt a lot, and I'm still learning. But one of the things I learnt is that there are just so many approaches to customer acquisition, and all of them work — they just work for different people, different niches, different kinds of problems. So what you do when you're B2C is different from what you do when you're B2B.
Yeah. What you do when you're B2B with a huge sales team is different from what you do when you're, I don't know, a team of ten people. And I'm still learning about this and realising that some things I like more. So even though certain kinds of marketing might work, some are just more suited to my team's personalities and, yeah, our strengths. Let's put it that way. Nice.
¶ Filtering out what features you need
I love that you go into those calls with the mindset of helping people, right? Because if you go in with this scarcity mindset — OK, I have to sell, it's a must — then probably you're not going to make friends with people, and you're not really going to help them. And you don't have a silver bullet: your product solves a specific niche, and if the customer doesn't have those problems, then it's just not a
good fit. And I love that you have this mindset of, I'll still help them out; I'll say, OK, you have these options, or you can go to these companies — because you are in this space and you're very much aware of what happens around you. I think that's very lovely. That's really cool. Yeah. There's this lady, April Dunford. She does some talks — she has some YouTube videos which are just good information already.
But she has a book called Obviously Awesome, OK, and a talk called 'April Buys a Toilet', OK. And in the talk she talks about how hard it is to buy a toilet — to buy anything, really; she used the toilet as an example. Because a toilet is something you've been using your whole life. You would expect that you'd be, like, the expert on toilets, having used one your whole life.
But then she goes to buy this toilet, and she's just learning about all of these features and terms — flappers and payload size. And she's like, what the hell is a flapper, you know? So she gave this example of just the toilet, and then made a correlation to, I don't know, the software things that we're selling. Yeah. To show how it's really hard for the people who want to buy things — there are just all these terms.
They don't know if they even need those things. They don't. So you, who are in the niche — you know all of this. It's your job to filter out just the information they need, you know? You don't need to know about the flapper; you need this toilet. Where do you want to put the toilet? Is it in the basement? Then maybe this kind of toilet — you don't need something that's super strong,
you use it once in a while in the basement — so maybe this category is good enough for you. Or are you putting it in, I don't know, a fancy hotel? If not, maybe you don't need these fancy high-end toilets. It's just: help position what products or what solutions your customers actually need. Yeah, yeah, for sure.
¶ Standard operating procedures
I'm curious — since you have the experience of a lot of production systems, you also have the experience of creating your own product and solving customer problems, where those customers are companies, and I think mainly developers, because you solve developer problems. There is not really a silver bullet when it comes to starting up a product. When it comes to documentation, I think everyone kind of does what they think is best, and then
they learn along the way, take that experience, and share it. But there's not really, let's say, a blueprint for how you create software. You can find a lot of them, and then you have to pick which one works best for you. When it comes to starting up, then making it scale, documentation, observability — it's very hard, I feel, to find the right way to do things nowadays, because there are just a lot of options. This is a rabbit hole. Yeah.
No, I know. It's something I've been thinking about a lot. So my background is in medicine — laboratory medicine — and in the medical field we have SOPs: standard operating procedures for literally everything you want to do. Yeah. So for almost anything you want to do, you don't have to be
creative. You can just go: oh, my patient has, I don't know, this disease — and you go get the SOP for diagnosing that disease, follow it, and you'll be good. OK, what that does — it sounds like you're taking away creativity from the doctors. But first of all, how much creativity do you want from your doctor? I want procedure. Yeah, exactly. But it doesn't mean these doctors are not creative. It just means they can
stand on the shoulders of giants and focus on the other things. I think we need more of that in software. When I speak with veterans in the industry who have been doing this since, I don't know, the 90s, the things they say they were doing in the 90s are almost exactly the same things we're doing now. What has changed? The tools have changed. Maybe the standard of user experience has changed, but it's
more or less the same things. And I think because we don't have these kinds of standard operating procedures, a lot of people keep reinventing the same solutions to the same problems, over and over. And I think there's a lot of room for that.
Then, if that happens, as engineers we can focus our time on being creative — thinking about more creative ways of solving the actual problems we want to solve, not how do I write to a database, or how do I read from a database, or how do I debug this thing. Can we get there, though? Because I feel like a lot of
¶ Standardizing runbooks
companies see this challenge and are trying to fix it. I've even seen — my friend has a code automation tool where he generates, based on a blueprint, kind of the API landscape, based on best practices, and the connections to databases. But there's not one tool that every developer uses, because developers love their creativity more than doctors do. Way more. Yeah, I think it's hard. Like you said, developers are very religious.
Yeah. You know, we fall in love with our tools — with this particular database, or this particular whatever. So when you have something like what your friend built, this scaffolding generator, then one developer is going to come and say: oh, this scaffolder is using this database, I hate that database, I don't want this. Exactly. So I'm thinking of it not even from a tool perspective, but more from a runbook perspective.
We're having more runbooks now in software, but most runbooks stay within companies. At Delivery Hero, for example, we had runbooks for pretty much any kind of incident. If there's a database incident, this is the runbook for that. If there's, I don't know, whatever incident. And what makes that useful is that I can hire pretty much a junior developer, right, and tell them: these are all the runbooks, these are the alerts we have, these are the monitoring platforms we have.
When you see this error, just go pick this runbook and follow the steps, step by step. So I think what would be good is if these runbooks went outside of those company walls. I know Google has published a similar list of runbooks, and a lot of big companies have the same — Microsoft has the same.
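The "see this alert, pick this runbook, follow the steps" flow is easy to imagine as data rather than prose. A hypothetical sketch — the alert names and steps are invented for illustration, not Delivery Hero's or anyone's real runbooks:

```python
# Sketch: alerts mapped to step-by-step runbooks, so an on-call engineer
# (even a junior one) follows the same procedure every time.
# Alert names and steps are illustrative assumptions.

RUNBOOKS = {
    "db_connection_errors": [
        "Check the database dashboard for connection-pool saturation.",
        "Look for a recent deploy that changed connection settings.",
        "If the pool is exhausted, scale read replicas per the capacity doc.",
    ],
    "order_volume_drop": [
        "Compare current order volume per country against the baseline.",
        "Identify which countries deviate, not just the global total.",
        "Check recent releases against the affected countries' requirements.",
    ],
}

def runbook_for(alert: str) -> list[str]:
    """Return the steps for an alert, or a fallback escalation step."""
    return RUNBOOKS.get(alert, ["No runbook found - escalate to the on-call lead."])

for step in runbook_for("order_volume_drop"):
    print("-", step)
```

Keeping runbooks in a structured form like this is also what would let them be shared, versioned, or partially generated across companies, which is the standardisation argued for below.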
But if these were more standardized across the industry, so people are not solving the same problems over and over — then in the details, maybe in one of the steps, you could have generators or something for some of the steps in between, and maybe have some creativity within the steps. But the general runbook would be kind of the same across the industry. Yeah, interesting. When you mention your runbook example — I have
¶ Multiple proven tracks to follow
when I started, from university I went into operations before development. That's where we had the most runbooks, because we would always analyse — and this was in a company where ops was separate from development as a whole. So we would have runbooks: when you have a specific error, either how to fix it or just how to debug it, basically. And when there was a hole in the runbook, you'd add to the runbook — you'd have to figure it out on your own at that point.
And the only runbooks I've seen from the development side are not really on the creation side. There are always ideas of, OK, this is kind of the latest way to create this — whether you're doing contract-first or generating documentation — there are options; I feel like sometimes too many options. But the separate runbooks I've seen are when it comes to releasing: all right, we've started from scratch, we haven't gone to production yet, this is going to be our first
release. Then you have a runbook for how to get your stuff to production, and people follow that when releasing. And that becomes easier and easier, to the point where you don't need a runbook anymore. So then the only runbook you have is, let's say, for a first-time release, where you make sure that all your production keys are there, your configuration is updated, all your feature flags are enabled, and stuff like that. So I see it from the ops perspective.
But from the development side, by virtue of having so many options, I don't know if a runbook would work. That is something worth thinking about. I still think it could work. I mean, even in medicine it's not that there are no options. Yeah. There are lots of options for things. For example — I specialised in laboratory medicine before I left. Yeah. And if you want to do a malaria test — I'm from the tropics, so malaria is a big thing.
When you want to test for malaria, you take a bit of blood, you put it on a slide, you stain it — there are like a dozen stains, each of them with slight differences — you put it under the microscope, and you examine it, basically. There are a lot of options. But the runbook is: yeah, we know there are lots of options, but this is the basic
option that everyone can follow. And then maybe you're doing something very special, or — in our case — maybe you're in a village without electricity; then you can swap one of the steps for something that works without electricity. But the basics can be there. That's what I think we need in the industry as well, and in what
¶ Anomaly detection and how to investigate
I'm trying to do with APItoolkit now. So the product has evolved a little bit, such that we monitor the errors, and we're now analysing a lot of things. We're looking at the headers; we're looking at whether there are runtime errors in the code — like with Sentry — and we can now associate those errors with the payloads that came in and out. And the next step is: from this data, can we give you a step-by-step list of things
you can do to resolve an issue? So for example, if we look at the headers and see that you're not following some best practices — maybe you're not passing authorization properly — can we tell you: OK, this is the industry standard for this, and maybe give you some links to figure out how to do it?
So that thought is still taking shape, but it would be nice if not just APItoolkit but any of the tools out there could, when they detect something or realise something about your systems, also give you steps. They can tell you: this is the industry standard, and these are steps you can use to fix it or implement that standard on your own systems. Yeah.
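The header-inspection idea could look something like the sketch below: flag requests whose headers deviate from common conventions and attach a pointer to the relevant standard. The specific checks and messages are invented for illustration, not APItoolkit's actual rules:

```python
# Sketch: lint request headers against a few common API conventions and
# attach a hint plus a standards reference for each finding.
# The rule set here is an illustrative assumption.

def lint_headers(headers: dict) -> list[str]:
    """Return human-readable findings for non-standard request headers."""
    findings = []
    norm = {k.lower(): v for k, v in headers.items()}  # header names are case-insensitive
    auth = norm.get("authorization")
    if auth is None:
        findings.append("No Authorization header; see RFC 7235 (HTTP authentication).")
    elif not auth.startswith(("Bearer ", "Basic ")):
        findings.append("Authorization header doesn't use a standard scheme "
                        "such as 'Bearer <token>' (RFC 6750).")
    if "content-type" not in norm:
        findings.append("No Content-Type header set on the request.")
    return findings

for finding in lint_headers({"Authorization": "my-secret-token"}):
    print(finding)
```

A tool emitting findings in this shape — observation plus a link to the standard — is exactly the "give you steps, not just alerts" idea being described.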
Yeah, very interesting. I love that on a smaller level, let's say when you have an operational concern, not only do you detect an anomaly, for example, or an issue, but you also give the step-by-step — not necessarily how to resolve it, because I think sometimes that's very context-dependent, but at least how to investigate it, or
where to find documentation. And I recently saw a talk about engineering culture, and it was very cool to see that one of the biggest factors in engineering
¶ Developer experience and tools
culture — and I'll probably have to find this research and put it in the show notes — a lot of it has to do with knowledge sharing. So not just from a tool perspective towards the people, but also across people: where they find an issue interesting, or a problem they solved, and
they share it with other people. Because I think if you have a passion for creating value and solving problems when it comes to coding, people will share that passion with each other, and it becomes this huge bonfire of the flame that is your engineering culture. So when tooling also provides that — provides them with insights that they might not have known or might not have found — I think that can be very valuable in that way. I mean, I agree with you, and, I don't know —
There's a thought that has not taken full shape in my mind yet, but I feel like developers build these amazing things for every other industry, but not so much for ourselves. Yeah. Because there's always, you know, things that we do over and over, things that we are analyzing. And maybe we just need more people thinking, you know, how can we solve things for ourselves as well? Yeah, yeah, I think.
I think people are trying. Like in the last few years I've seen more and more companies cater to developer experience. It's become a bigger and bigger term: this product solves so and so, it increases your productivity, it really leverages so and so, so you can do this more easily. Basically, I just think developers as users are super picky. First of all, I will not pay anything. So if it's not free, then I'm not even going to consider it
as an option, basically. And then when there's a free tier, I can see, oh, maybe if I spend so and so I get these extra things, and I'm hooked. That's the only way. I don't believe in paywalls anymore, especially as a
developer, basically. And then still I'm very picky with the tools I choose, because you don't want to have too many tools, and then you have to choose which tools are the best for your use case, and do you really need a tool, or are you then going to be reliant on that tool? It's so many thoughts, and I love the flexibility of being able to do things and switch on the fly as well. Those are the thoughts I have, so I think other people might
have them as well. Developers are difficult. Yeah. Sometimes I wonder if it was even a good idea to target that niche.
¶ We could build it better
But also developers would see something and feel, we could build this in-house. We can do better. Yeah, we can do better. Yeah, they don't do better. No, no, no. That's the hard one. But it just stops them from deciding to use something that could solve their problems. Yeah. And I like what you said about being very picky about adding new tools, and it makes me think of a joke.
Someone shared that, you know, the way you recognize a very veteran software engineer is that in his house he doesn't have any technology. He just has one printer, and a gun beside it in case the printer makes a wrong sound. I was like, yeah, I remember, because we know how things break all the time, and we just don't trust anything. Yeah, it's funny. When you said, when you were
¶ What works well at a bigger scale
talking about blueprints: if I were to start a new project, I do have the things I've liked from previous experience that I would start with. Like the last project I worked on, we had a contract-first approach with gRPC. We needed to make a mobile app, we needed to stream data, and the documentation, if you're contract-first, is just there, because you look at the contract, and that's what you use in production. That's kind of your API
landscape. And we didn't get to a scale where it was a lot of contracts, because I think starting is easier; once you have it at a certain scale with multiple teams, that's a whole different beast. And I've never actually experienced that to that degree. But that's how I would start, because then I feel like the documentation, at least from an API perspective, is there. We also had some conventions in place to document and generate, for example, architecture diagrams, which was very easy
when it's close to the code. But yeah, when it comes to scale, I don't know how you would scale beyond that, basically. I haven't had that experience. What have you seen at scale that works really well when it comes to conventions while still being productive? I don't know. It's still manual in every company I've been in, to be honest.
I think documentation... I mean, actually, I was going to ask what you're using for these architecture diagrams. Is it something like Mermaid? Yeah, we've done a lot with Mermaid, actually. Yeah, yeah. But there are a few other options as well; I just don't know them. The Mermaid one is the recent example. And so you had something that would look at the architecture and then generate the diagrams? Yeah. Did your engineers actually use those things?
So the thing is, we had a lot of annotations in place, and when you create a new API, you would have to put an annotation on there. This was another project, to create Swagger docs, because we had third parties that relied on our software. But because of those annotations, we also had an automation that would see if the Mermaid diagrams changed based on what you had in the code versus what was generated, and
then you would have to update. So there were some conventions in place, but still, there's a human factor in there. You can forget it, and then your automation might tell you; or you might forget everything as a whole, and then your automation cannot tell you, because you created the automation. The human factor is always going to be there. Which is why I really think the challenge of scaling is the hard part when we're talking about a blueprint of how you do
things. Yeah, it's the people, you know. You need to get everyone to buy into these kinds of processes. Yeah. And that is the hard part. I think that is also why anything that can be automated is always preferable. Then you don't have to depend on everyone remembering that they have to add some annotation or something like that. Yeah, but I've not seen it at scale, you know. In practice, at the larger companies it ends up
still being... I mean, you have a lot of squads, or smaller teams, and everything ends up being at the team level. Yeah. And some teams might just forget, or might not care about setting up processes as much as others. And so at scale, yeah, that's just how it is. I mean, as engineering leadership you can enforce it and say, oh, everyone has to do this and this, and everyone enforces it for, I don't know,
a quarter. But, you know, people leave the companies, new engineering managers come in, and they don't share your vision. I don't even know. Yeah. So yeah, I've not seen it work very well at any kind of scale, yeah. Yeah, it's the human
¶ Analyzing your code and beyond
factor that is the most difficult one. And I was thinking, in this kind of booming age of AI, especially the last year, we have a lot of
very interesting models that might even be able to help with that, because we have metrics when it comes to how complex your code is, when it comes to your nested ifs and for loops, for example. I've had coding conventions that would tell us, OK, your pipeline fails because your complexity is too high, just by virtue of the code going to the right instead of staying in a straight line. Cyclomatic complexity. Exactly, and especially with Go that was easy to put in there.
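The indentation-based complexity idea can be made concrete with a toy Go example. Both functions below are invented for illustration and behave identically, but the flat version is what a cyclomatic-complexity linter such as gocyclo would prefer:

```go
package main

import "fmt"

// classifyNested piles up nested ifs; each branch adds to the cyclomatic
// complexity (roughly 1 + the number of decision points), and the code
// keeps drifting to the right.
func classifyNested(status int) string {
	if status >= 200 {
		if status < 300 {
			return "success"
		} else {
			if status < 400 {
				return "redirect"
			} else {
				if status < 500 {
					return "client error"
				} else {
					return "server error"
				}
			}
		}
	}
	return "informational"
}

// classifyFlat has the same behavior, but the idiomatic switch keeps the
// code "in a straight line", which complexity linters reward.
func classifyFlat(status int) string {
	switch {
	case status < 200:
		return "informational"
	case status < 300:
		return "success"
	case status < 400:
		return "redirect"
	case status < 500:
		return "client error"
	default:
		return "server error"
	}
}

func main() {
	for _, s := range []int{100, 204, 301, 404, 503} {
		fmt.Println(s, classifyNested(s), classifyFlat(s))
	}
}
```

A pipeline rule like the one described would simply fail the build when the nested variant exceeds a complexity threshold.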
You had Go. Yeah. I love Go. I know you do as well. Yeah, yeah. But that's a different episode. I like Go because it keeps things simple. So we have tooling that enables us to observe what might be wrong with our software, how complexity might grow, and probably with AI we might get more options for that, more information, more out-of-the-box information, by virtue of having this blueprint that analyzes your software.
The problem is humans will still need to change the code, I feel like. And if you choose to ignore that... because even with conventions I've seen people say, yeah, I just don't agree, and I'm like, damn it, we have conventions in place in the first place, which means we aligned on this as a team, and you cannot just say you don't agree. So then we have this discussion. The human factor of change is
always there. Yeah. So I'm also using LLMs for some of these, but I think it's about where you want to analyze, right? So there are lots of tools that analyze at the code level, you know, cyclomatic complexity. And those are very important, because you want the code to be maintainable. You don't want a ball of mess that no one wants to dive into. But I've been thinking about it more from the user perspective.
Yeah. So what we've been doing is we take the requests that users are making, we kind of know the average request, and we pass this into some LLM to figure out what the user's flow, the user's journey, is, you know. And I think that is useful, because then you know, OK, when a user comes to this platform, they hit this endpoint, which is owned by this team; then they get this data and they hit this endpoint, which is owned
by this team, you know. And that is a different way of applying LLMs, because you're just looking at the data, what users are doing, and not the code. And it's a different set of insights that they would give you. Yeah. For the code level, I don't know. I've not been impressed by any of the code-level LLMs that I have tried. No, me neither. Not yet. Yeah, not yet. I am hopeful though, for the
future. Yeah, I am hopeful too, because they shine very well when you give them maybe a function or one file and say, OK, add this new function to do this, yeah. But when it comes to looking at the entire code base, I have not had any success. No, me neither. Not yet.
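The user-journey idea above starts with assembling per-user request sequences before any LLM sees them; a minimal Go sketch, where the `RequestLog` type and `journeys` function are hypothetical names for illustration:

```go
package main

import (
	"fmt"
	"sort"
)

// RequestLog is one observed API call; in practice this would come from
// the monitoring agent, not be hard-coded.
type RequestLog struct {
	UserID   string
	Endpoint string
	Sequence int // ordering within the session (a timestamp in real life)
}

// journeys groups logs per user and returns each user's endpoint
// sequence in order: the raw material you might hand to an LLM to
// summarize into a "user journey" across team-owned endpoints.
func journeys(logs []RequestLog) map[string][]string {
	perUser := map[string][]RequestLog{}
	for _, l := range logs {
		perUser[l.UserID] = append(perUser[l.UserID], l)
	}
	out := map[string][]string{}
	for user, ls := range perUser {
		sort.Slice(ls, func(i, j int) bool { return ls[i].Sequence < ls[j].Sequence })
		for _, l := range ls {
			out[user] = append(out[user], l.Endpoint)
		}
	}
	return out
}

func main() {
	logs := []RequestLog{
		{"u1", "/login", 1},
		{"u1", "/cart", 2},
		{"u1", "/checkout", 3},
		{"u2", "/login", 1},
	}
	fmt.Println(journeys(logs)["u1"]) // the ordered journey for user u1
}
```

The per-user sequences, not the raw payloads, are what make the "this endpoint, then this endpoint, owned by this team" narrative possible.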
¶ Opportunities for improvement
I like that you take the code as a boundary and say, OK, we go beyond that and we look at the interactivity. Because then something like anomaly detection, which is really hard for humans when you have a complex system, when things are tied together and interfacing, and even the sequencing of messaging, sometimes in parallel or linearly, just becomes a lot to comprehend. First of all, from an engineering perspective, how does
this work? And you can look at the code, but that's still the boundary of the code. And when it comes to the messaging, the interfacing in between, I think that's really important when it comes to anomaly detection. It's not just anomaly detection; I mean, there are lots of things we are not doing. But you know, when you work in this field you start to see the opportunities. For example, if you're looking at this communication that end users are making with all these
services, yeah, you start to see things. At least, if you can figure it out as a user, maybe LLMs can figure it out too. Opportunities for improvement. Maybe this particular endpoint is always sending the same kinds of data. Always. And maybe it should just be cached. Or maybe it should even be cached at the CDN level, because the last time the data changed was six months ago.
So we don't need it, yeah. Yeah, maybe it just needs some hard-coded CDN rules, you know. There's room for these opportunities. You see maybe some endpoint gets called very often and, I don't know, it's too slow. But what you need from this endpoint you're already getting from another endpoint by another team. Maybe this endpoint should not
exist. You know, these are the kinds of things where I imagine LLMs could be like pair programmers, or partners, in just looking at these patterns and recommending things that you can do. I like that a lot. It
¶ Most software is fast enough, or not?
triggered me, and I thought of Google's Lighthouse, where you just open up your browser, it runs through an automated set of tests, and then it gives you some suggestions based on how your website performed. With the history and the capturing of requests and responses over time, and the knowledge of, OK, this might not be a good idea, it can be very helpful to have an assistant there that recommends things to improve.
My problem is, I don't know how much we sometimes need to improve, because when you're talking about speed, I think a lot of software is fast enough, yet we do a lot of things to make it even faster, or we over-optimize for many, many users and we might not even reach that point. So then it's this balance of, OK, where's the value? Like exactly with your challenge of creating your own startup: should I really be fixing these bugs, or should I really optimize
these things? Or should I actually talk to customers and do those things instead? That human decision factor, especially when there's an assistant that tells you these are the best practices, you can improve on these things, is going to be even more crucial. Then I have to decide which hat to wear? Yeah, yeah, the founder hat. Like you said, most apps are fast enough; I probably should be talking to customers, yeah. But if I put on the user hat, I don't think most software is
fast enough. Like every day I'm frustrated with the things that I use. And now we're in the engineer hat: I think part of the reason is that these days we're building microservice architectures and platforms. So your service on its own might be fast enough; it might be 500 milliseconds. But then from a user's perspective, to satisfy what that user wants to do, you have to hit like four of those services.
Now the 500 milliseconds stacks up, and now you have two or three seconds you're waiting for a page to load, you know. So I don't think systems are fast enough, to be honest. Yeah, fair point. It's just, and this is the funny thing, the stuff I use might be fast enough, because if it's too slow, I'm not going to use it. Yeah. But I love thinking about how the industry can improve, whether through more knowledge sharing, assistants, or
blueprints. I do think there's a long way to go, and I love what you said about whatever we were doing 30 years ago: the landscape might be different, the context is definitely different, but what we do is not that different, and we should definitely work more on standing on the shoulders of giants, yeah?
¶ Automating ourselves away
So, I spoke with this engineer who used to work, I don't remember at which company, but he worked on these telephones in like the 80s or something, and he was just telling me about some of the microservices that they were doing at the time. You know, I think it was service-oriented architectures at the time. And I'm like, why did we spend the last 10 years reinventing this? We could have just read a book from the 90s. Yeah, we're going back and
forth. Yeah, it's just ping pong, ping pong. But there's a lot of new stuff that we should be inventing, I think. An ideal situation is if we actually automate ourselves out of our current jobs. Yeah, I mean, it looks like it's going that way, even though there are different opinions on this, but I think a human will still be in the loop.
But it does look like code automation and code generation is getting to a point where it can just sometimes do it better than a human. I would love that, yeah. I think what would just happen is, if the code generation gets better than us, it's getting better than us at the things that we're doing now. Yeah, yeah. Which means there will be new things for us to do and figure out. I'm in the same train of
thought. I think then your role will evolve rather than become obsolete, because with this automation tool that creates stuff, then you're going to be like, oh, so if we have these options and everything's super fast, how do we create and tie everything together to create value, basically? And that's then going to be the race, I feel like. I mean, if you think about our jobs now already, what do we do most of the day?
We're connecting APIs together, you know. This service does X, that one does Y and Z. So we're basically just connecting things. Yeah. So it would just be a higher level of that: maybe no longer connecting APIs, but actually connecting algorithms or use cases. But I don't believe we would be out of a job. Do you know how hard it is to describe a project to someone?
Yeah, yeah. I think half of our job as developers is just listening to the specifications from product or from management or whatever, and then just trying to make sense of it. Yeah, yeah. So I don't think anyone's automating that away. We might become prompt engineers; I've heard that. Yeah, cool, man. Let's round it off here. This has been a blast, Anthony, I must say. Thank you so much for coming on and sharing. Was this kind of what you expected coming into this? I
mean, I had a lot of fun. Yeah, that was good. That's always good to hear. I really appreciate you having me here. And I also appreciate Denise from SAS lunch for the connect; yeah, they made this very fun conversation happen. Awesome, awesome to hear. Thank you for recommending Anthony, Denise. I always love it when people do referrals, because usually they're really good, and I had a blast of a conversation. Thank you so much for coming on.
I'm going to round it off here. Thank you for listening. Please leave some comments and some love behind. Below the like button is something I'm trying to work on to make it more interesting than just the comment section. But in any case, thank you for listening. We'll see you on the next one.