Welcome to Bytes in Balance, the podcast where we navigate the wild world of software engineering together. I'm Dan, and this is Demian. We have been juggling code, teams, and sanity for over 35 years combined. From junior devs to principal engineers, we have worn every hat in the industry. In this podcast, we're sharing our journey, lessons learned, and mentoring tricks to help you find your own balance.
It's not just about the tech. We dive into people, psychology, communication, and all the messy bits in between. Think of it as group therapy for the digital age. We vent, swap war stories, and share what we think is solid advice. Sometimes we even bring guests to shake things up. This podcast is our way of tackling the stress, burnout, and growth pains that come with the job. It's as much a balancing act for us as it is for you. Grab a seat and let's navigate this madness together.
You'll find some interesting links in the episode description if you want to learn more about us or the topics we discuss. All right, let's get started. Hey, Dan. Good to see you again. Yeah, likewise. So what are we talking about today? Yeah, so today is a really interesting topic. We're talking about legacy systems. It's a very fancy topic. Yes. It's one of the most overloaded, overused terms in our industry, I would say.
It is basically an attempt to put a positive spin on something that we all think is crap. Ultimately, when people use the words legacy code, what they mean is, I don't like this thing and I want to replace it. That's the root of the problem, maybe, or part of it. It's funny because I was reading something or talking with somebody about that. And this person was saying that the use of the word legacy in basically every other industry
has a completely different meaning than in the software industry. When you think of any other industry, like the music industry, the legacy of this musician, the legacy of Picasso, is normally an awesome, positive thing influencing the culture. Something that has stood the test of time. It has lasted. It has outlasted. That's a positive thing in every other field. And then you look at software and you think, man, what a piece of crap.
Yes. It seems that software doesn't age well. I think that's part of the problem. Right. Ages like milk and not like wine. So why are we talking about this other than obviously it's fun? What are the motivations here? A lot of the software out there is legacy. You could argue, somebody said once, that the moment you deploy something, it's already legacy. And I think it's good to have this conversation, understand what is the spectrum, what is legacy, and how to deal with it.
From serious strategies to actually deal with legacy software, to things like, go get a therapist, don't go crazy. I think it's an important topic because there are a lot of legacy systems out there and they are going to be there for a long time. So we have to deal with them. Yeah, and you know, specifically, I hear this coming up a lot.
So both of us are doing this mentoring and coaching and interview coaching thing. And one of the things that I see in job descriptions a lot is that companies are looking for candidates that can deal with legacy systems, that know how to work on existing systems. They don't want to hire only people who want to come in and build new greenfield things that are shiny and have no dependencies.
You know, it's just not viable. So they want people who know how to come into an existing system that has lots of dependencies and maybe problems and warts. And they want people who know how to orient themselves and get into it without complaining and trying to throw it all away. That's a big thing. I see it coming up a lot. I had a really interesting experience in my first team at Amazon. We inherited this huge legacy system. We went through this whole journey of thinking it was crap,
wanting to throw it all away, trying to do that, realizing that was much harder than expected, and ultimately settling on this hybrid solution. But I feel like we went through this whole journey of learning what legacy code is like and why it sucks, what the problems with it are, and then ultimately, what you do about it. That first team was kind of a microcosm of that.
I think it's obviously the easy approach to anything, to complain and say, this thing is crap and I don't want to be there. Now, once you are beyond that point, which is the easy path, I think there are also a lot of things to learn from this type of system. I feel that I see a lot of new and junior engineers avoiding these things like they were the plague.
I see this same thing, for example, with things like ops and ops tasks and whatever people want to avoid. Oh, no, I want to develop greenfield things from scratch, not deal with ops. And people don't realize that, actually, so many aspects of software are related to legacy systems and to ops and to operating existing systems. And there's an incredible amount of learning in those aspects. And building the next greenfield or new shiny system, whatever, is just a small part of it.
So I think it's good to have that conversation in that regard, to see if we can change the mindset of people thinking, okay, this is crap, I'm just going to complain and always go somewhere else. So I wonder, maybe as we go into this discussion, do we want to maybe separate out different flavors of what we mean by legacy software? There are some people out there, again, who are using it as this general term for anything that
I don't like or anything that is poorly factored or poorly written or whatever. And that's not the definition we're trying to go with necessarily. But let's talk about the different versions of those and kind of what the implications are. Like, what is the problem? Why is it not fun to work on these kinds of systems?
Legacy software is called that because it has been running for a long time. Long could be a relative term, could be only a year or two, but it's basically been serving a job and been working for a long time. The nature of our industry is that software is easy to change; it's flexible.
And so because of that, often you find the requirements or the business context around a system have changed since the time it was put into production. So a lot of times what we mean by a legacy system is just something that was built for one purpose, and the purpose has shifted, or over time the requirements have changed, and now it's not a great fit. Maybe there are some obvious problems, maybe there's some obvious bottleneck.
You know, everybody looks at it and goes, man, I wish we could replace this, or I wish we had done it differently 10 years ago when we were building this, but we didn't and we're stuck with it. So it's our legacy system. That's often one kind of framing of it. Yeah, I'm trying to think how I would characterize this, and I'm thinking like,
it is something that is not necessarily easy to operate or to maintain. It could be old, but it could not be old. It could be just something that was born one year ago but has so much technical debt. It could be something where there is some type of knowledge gap: we don't really know how this thing works, for sure, which makes it hard to maintain. It could be something that is the product of some weird
Frankenstein architecture. Somebody had the great idea of combining three completely different technologies that don't mix well together, a couple of programming languages, whatever, and created this Frankenstein that is definitely hard to evolve. It could be something that is tied to old technology that is not maintained anymore. So I think there is a factor of lack of maintainability, lack of knowledge, difficulty of evolution, risk,
and so on and so forth. And that could come from an old system, an old mainframe, COBOL, whatever system, or it could be something new that we started to put together in the last year, and now it's incredibly hard to change. It fights everybody, the new people that come in and don't know how this thing works. I think you're hitting on probably the most important part, which is basically just the lack of knowledge.
Sometimes this is because it's not documented well. Sometimes it is documented well. And honestly, developers are just reluctant to dive into it. Sometimes I've seen exactly that. But I think the lack of knowledge is the big thing. The experience of working on a legacy system, you've seen engineers kind of shying away from this like it's ops work. Well, why do they do that?
The obvious reason is that it kind of sucks. Sometimes it's frustrating. Maybe it's hard to find the place. Let's say you have to do a task. You have to go add a feature, fix a bug, whatever, in a complicated system that you don't understand and that's poorly documented. You may have a really hard time finding where to make the change. That may not be obvious.
You may not be able to test the change. Maybe it's really hard to run the software locally so that you can reproduce the bug or at least just get it running so you can make the change. Deploying the change may be very complicated, may be fraught with risk.
You might be scared to roll it out, even if it's just a simple one-line change. Maybe if this system crashes, the impact is very high and it's risky. All of those elements combine to basically make more friction, more frustration, more unknowns when you're building something. Yeah, and I think that adds a lot of friction to doing things. Friction could be a key word here. It's like when you say, okay, once I've found the place where I need to make the change and I've made the change.
Is it the right change or not? Does it work or not? I don't have a way to test this locally. There may not be unit tests. I cannot run the stack and test, or if I can, it's just a partial thing. And then my only chance to see what's going on is to try to deploy, and deploying this thing that
could be broken has a lot of risk. So then you're probably going to go slow and you're probably going to do a lot of double-checking. And if you're lucky, you're going to deploy and it's going to work. And if you are not, it's going to break and you're going to have to hopefully roll back and then start over again. And that friction, I think, is frustrating.
And that's probably why developers tend to avoid this, because we want to deliver, we want to deploy features, we want to improve things. It's in our nature. Right. And you see that when developers have control, when they're given the freedom to go build something greenfield, that's often what we focus on doing a lot: making sure that we have
fast iteration, that we can develop really quickly, that we know it's predictable where to make the change and how quickly we can make that change. That's some of the things that we focus on right away because that's the most obvious source of frustration early on. Yeah, and if you think about this, all the practices of unit testing, automated integration testing, continuous integration, continuous deployment, etc., they revolve around having these
quick feedback loops that help us deliver and develop faster, which is very rewarding. Yeah. You talked a lot about some high-level aspects of legacy systems. I'd love to quickly go through my worst example of a legacy system I worked on, and I think it hits on all of those.
I won't go into a whole lot of details, but basically this was at Amazon. We owned this big, complicated data pipeline that was building reports of how efficiently various teams were using their servers. And it had a lot of inputs and a lot of outputs. We'd look at like... CPU and memory usage of all these different machines and pull data from data warehouse and stuff.
And it was one of these things that I think was developed over many years by many different people. Probably most of those people were interns. It sort of had that look of three different intern projects bolted together.
Ultimately, what we discovered, actually I discovered, I had to dive into this and rework parts of it: it was a bunch of Perl scripts that were invoking a Java program. So far, that's fine. This kind of stuff happens everywhere at Amazon. The Java program then turned around and invoked another Perl script on the machine. That Perl script dynamically generated, like, it built strings and spit out text, it basically dynamically generated Java code that it then compiled and ran on the fly.
So this was a Perl script that was running certain Java code, and this Java code was running a Perl script, and this script was writing Java code, like template-generating Java code, and compiling it and running it? Yes. And again, they hired one intern that knew Java, and a year later they hired another intern that knew Perl or something like that. And one person didn't intelligently design this system from beginning to end.
It was the result of multiple people who perhaps didn't want to understand the work of the previous person. And they just added on, oh, look, we can be clever and do this. And it ultimately resulted in something that, when I finally came back to my boss and walked him through, like, here's how it works, his jaw hit the floor. It was exactly the expression that you have right now. Yeah, my jaw just hit the floor. I think this one wins over a lot of things that I have seen.
Yeah, yeah, yeah. It was just such a classic example. And this one wasn't too painful to unwind because it had clear inputs and clear outputs. But when it's a system that doesn't have that, or has lots of different side effects or whatever, it's going to be very painful, very soul-crushing to go through and untangle. I can imagine. Side effects, state, etc. I have seen a lot of crazy ones too, but I can't think of one at that level. The closest that I can think of is that I was once given
something, because that's the only way I can describe it. This was like 20,000 lines of Visual Basic 6 software, you know, and it was a bunch of methods with very little structure, forms or whatever. And the goal was to migrate this thing to Java. Basically, the forms were relatively simple, but the whole business logic inside of this thing was impossible to understand, to test,
to reverse engineer; it was very hard to follow. I remember learning a bunch of Visual Basic 6, which I don't remember at this point anymore, but they had this object that is kind of equivalent to the Java Object, it's generic, could be anything. The approach was like, okay, let's just write a small piece of Java code that is going to parse
this Visual Basic code, and it's going to be kind of a compiler, if you want to think about it that way, but a very, very bare-bones, simple one. And it's going to spit out the equivalent in Java syntax. A transpiler, essentially. A transpiler, yeah. And this was like 95, 96% correct. The code would not compile for sure, but you could take that code, put it in a development environment, and just fix the compilation errors.
And we did that, and, let's hope it runs. We had no way to prove it was correct. We delivered this to the customer, and we never heard the customer complaining, so I'm hoping that it was correct.
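For listeners curious what that kind of bare-bones translator might look like, here's a minimal, purely hypothetical sketch in Java (not the actual tool from the story): it maps a couple of VB6 constructs line by line, passes everything else through, and leaves the leftovers for a human to fix, which is roughly the 95%-correct experience described above.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of a bare-bones line-by-line VB6-to-Java translator.
    // It gets you most of the way and leaves the compilation errors for a human.
    public class Vb6ToJavaSketch {

        // Translate a single VB6 line into something that is roughly Java.
        static String translateLine(String vbLine) {
            String line = vbLine.trim();
            if (line.isEmpty() || line.startsWith("'")) {
                return "// " + line;   // VB6 comments start with an apostrophe
            }
            if (line.startsWith("Dim ")) {
                // "Dim total As Integer" becomes "int total;"
                String[] parts = line.substring(4).split(" As ");
                String javaType = parts.length > 1 && parts[1].equals("Integer") ? "int" : "Object";
                return javaType + " " + parts[0].trim() + ";";
            }
            if (line.equals("End If")) {
                return "}";
            }
            if (line.startsWith("If ") && line.endsWith(" Then")) {
                return "if (" + line.substring(3, line.length() - 5).replace("=", "==") + ") {";
            }
            // Anything we don't recognize is passed through for a human to fix later.
            return line + "; // TODO: fix by hand";
        }

        public static void main(String[] args) {
            List<String> vb = new ArrayList<>(List.of(
                "Dim total As Integer",
                "If total = 0 Then",
                "    total = 1",
                "End If"));
            for (String line : vb) {
                System.out.println(translateLine(line));
            }
        }
    }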
You have to do this in a very short amount of time. So it's not that you can spend two years reverse engineering that thing. No, but that's often what you end up doing: you end up resorting to clever tricks or things like that to dig your way out of it. I assume the only way that company ended up in that situation was possibly years and years of sort of adding on little things, bolting on little things here and there, adding more methods to the original Visual Basic version.
Each time you just add a little bit more, and it's just one more little thing, how much more can it hurt? Ultimately it's just this big ball of hair that's too complicated to unwind. Yeah, and I think, as you mentioned, and I don't know this for a fact, but maybe this was an intern after an intern, so it's just adding feature after feature after feature with the least amount of cost possible. And sure enough, there's a moment at which it becomes unmanageable.
So that's a legacy system. Just go and try to make a change there and make sure it is correct. It's like, who knows? Yeah. Why don't we talk about this next? Basically, what are the things that led to the legacy code being created, or the tech debt being accumulated? Because people don't set out with the goal of building a big ball of spaghetti. It's hard to fight entropy to some extent.
Yeah, it's very easy to create legacy code, in my opinion. The moment you write some piece of code, you deploy it, you put it there somewhere, it is already legacy or it's on its way to becoming legacy. So I think everything goes by nature towards legacy. And if you don't want it to be there, it's a continuous effort to make sure it doesn't end up in that situation. And that continuous effort implies money. It has a cost. It has effort
related to it. I think if you don't keep pushing and you don't keep iterating on software, sooner or later it's going to end up going in that direction. There are things you can do to make it last longer or make it harder to get there, but you actually have to be willing to pay that cost. Yeah, this makes a lot of sense. And I think probably the next area to dive into is what do we do about it? How do we approach jumping into a legacy system?
I would say that probably the first step is just understanding those business pressures that led to developing this. Oh, this system wasn't just designed by idiots; actually, it was designed by smart people who were responding to real business constraints and business pressures. Fighting entropy boils down to money. And if the business didn't have the money or didn't have the resources or couldn't prioritize or whatever, then it didn't happen.
And I can actually argue that lots of legacy systems are actually extremely successful systems. Because if you think about it, if the system is not successful and it is not serving customers and it's not fulfilling a purpose, why have it there? You could just turn it off and nobody would care.
But these systems are normally systems that have been very successful. They are doing whatever they are doing well. They are serving customers. And that's why they last so long, to the point where nobody remembers how they work, or the hardware gets outdated if you go in that direction.
So you could build that case to some extent too, right? They even survived enough changes and feature additions. And of course, you pile up technical debt, but they survived that. So they keep operating even after adding those features. No, it really is a mindset shift. I think early on in my career, especially when I was on that team where we were dealing with all this legacy software, I was sort of thinking like,
oh man, I hope all my code doesn't become someone else's legacy code. Actually, nowadays, I think my perspective is totally different. I'm like, wow, I hope my code someday becomes legacy code because that'll mean it accomplished its goal and it's outlived its purpose and all that.
And maybe we're closing the loop with the conversation at the beginning about the meaning of the word legacy, right? It's like, okay, if your code is still there 10 years down the road, still working there, and somebody is struggling to kill it, hey, that's your legacy. No, no, no, for sure. I'm there, but not everyone is there. This was a huge pet peeve for me at Amazon when people would essentially just use the word legacy as a synonym for thing I don't like.
I remember talking to teams. Maybe you're trying to onboard onto the service, some service that this team offers. You go talk to them and they're like, well, you could use our legacy stack or you can use our new service. And I'll say, oh, I guess I'll use the new service. And they'll say, well, it's not ready. So you have to use the legacy one. I'm like, that's not legacy. That's just in production. That's prod. That's not legacy.
Do you remember that principle? Yeah, the tenets or something. Yeah. So Amazon has leadership principles, which are those 12 to 14: dive deep, learn and be curious. They separately have a set of, I think they're called tenets, maybe they're called principles, but it's basically for the principal-level employees, level seven employees. Other companies call this staff or whatever. They had a set of guiding principles.
I think the one you're referring to is respect what came before. That's exactly what I had in my mind: respect what came before. It's like, whenever you approach a system, I think the first thought that you have to have is that the people that put this together had reasons to put it this way. And you'd better understand those reasons. You'd better figure them out, make sure you know them.
Yeah, it's easy to just come in somewhere and say, oh, this is crap. As I mentioned at the beginning, that's the easy approach to things, you know, and the one that doesn't work. It's about understanding why people put this together this way, understanding those reasons. And that is when you can start making progress one way or another.
Yeah, that was a big learning that I had to go through. You can get caught up in complaining about the stack. And that's fun. I never want to take away someone's right to complain. I'll do that till the day I die. But try to channel it into productive things.
There's always this tendency to want to throw it all away and start from scratch. And by and large, that's the wrong approach. There are very few cases where you can do that and it's the right call. Yeah, in some cases, you have to do that; in some cases, it just doesn't make sense. I remember a specific case, a system, and this was probably not exactly what we're talking about, but this was a big monolith.
And the whole thing was like, oh, microservices, we need to split this thing into microservices, whatever. And it's the only way. I remember at the beginning feeling that, yeah, that was the way. And eventually I got to the conclusion like, no, that's not the best that we can do. This thing is worth it. And splitting this thing into microservices or whatever is going to take way more effort than people think.
What are the pain points that this thing has? And let's try to address some of them and see if we can reduce the pain. And when you start moving into that direction, you start addressing certain of these pain points and you improve the situation a lot. And it's what I call the rule of the 80-20. And you can approach the systems and try to think, what is the 20% of things that I can clean or that I can do to have the 80% of the impact over there?
I like that. One of the ways I like to look at it, and maybe I just do this to myself, like in my head, is I compare the situation that I'm looking at in the software realm to some other engineering discipline. I like to think about civil engineers building bridges or structural engineers building big buildings or whatever. And we're kind of spoiled in the software realm. Software is easy to change.
And so it's easy to just throw something away and start greenfield. I could quickly whip up a new package in five minutes and get it running.
That tendency, the fact that it is so easy, leads us to that solution way more often than is appropriate. So again, I think about civil engineers building a bridge or whatever and imagine them telling their bosses, like, oh no, this bridge is just too big. It's too much of a monolith. We need to split it up and build it with a bunch of different micro-bridges. Or like,
oh, this bridge is too old. It was installed a hundred years ago. Sorry, it just can't be improved. No, that would never fly. You just get it done, and you do exactly what you said: you define concrete outcomes. What do you want? How do you improve them? Maybe it will involve teasing it apart and breaking it into smaller services, but certainly not just for the sake of building microservices. I'm so glad the microservices trend is finally starting to wane. I'm so sick of it.
It's like everything in the industry. The problem of this industry is that we create buzzwords. And we don't realize that these buzzwords are really technologies or things that have purposes, that have pros, that have cons, that have flaws, that have situations in which you can actually apply them and situations where it doesn't make sense to apply them. But then when one of these words comes out, everybody is just
jumping onto them and trying to use them to solve all possible problems. When you have a shiny hammer, everything looks like a nail. And that's what has happened, with the same pattern every time. It has happened with Agile. It has happened with SOLID. It has happened with microservices, with crypto, with AI. You see a bunch of crazy things going on. It's like, oh my God, this is another one of those.
Something that caught my attention when you talked about the bridge and civil engineering: I think it's some type of cognitive bias that we have about software. In software, it's very easy to start and very hard to finish. It's very easy to launch a stack, put up a couple of APIs, and have this illusion or feeling that you are making progress.
But people tend to miss the real effort of actually getting something into prod, or getting something to prod level, at the right bar, with all the features that you need to replace this old system. That is extremely hard. I remember, going through my career like 15 years ago probably, having this realization that the start and the middle part of the journey are the easy part. But when you get to that last 15, 20% of a project,
that is where it gets really hard, because it's all the fine-tuning and fixes and making sure that every rule, every feature works as expected to fulfill the purpose of this software. That takes a ridiculous amount of time. And I suspect a lot of software delays happen because people underestimate that final closing aspect of it. Yeah, absolutely. And it's so easy to do that because...
All that fun part, the early part, you start a new package, you whip some features up, you have it running on your desktop. You could do that in a couple of hours. And then your CTO can look over your shoulder and look at it and go, that looks feature complete. It's working.
But really, you're so far away from having anything that's actually deployed, running in production, that's observable, that has performance metrics, that is not going to crash, that is secure. All those things add up to a huge percentage of it, and it's all sort of hidden. Or even that it is truly equivalent to the whole system. That true equivalence is harder to achieve than people think.
So you deploy this thing and the user says, this is 95% of what I need, but this 5% of features is critical, you know, and I cannot get away without them. And then you realize that those are really hard to implement. And they are in the old system; the old system somehow fulfills those features.
I know we went through this exactly on my first team at Amazon. We had this shiny new system we were building, and we focused on this one early use case. We put all this effort into solving this one very narrow use case, and we launched it, and we were like, cool. But we had spent all this effort on this very narrow use case. We didn't solve anything else. So we weren't able to actually throw away the old legacy system.
In the end, the legacy system won out and our thing died, because we couldn't ultimately deliver enough valuable features fast enough to convince the business that this was the future. That was the right call. Honestly, I'm not bitter about it. But yeah, it was because, adding all of those other things, all those other use cases, we couldn't actually replace the legacy system entirely. We were trying to go for this,
again, very naive, sort of one-fell-swoop approach of replacing everything at once. We couldn't do that until we did all 100% of the features, and the last 5% was going to be an enormous amount of effort.
Yeah. And in some cases, there is no other way. Like if you have something that has been running for 20-something years with extremely old technology, the problem with these types of systems is that they also imply a risk. There is a moment in which you start having security risks, you start having availability risks, and they could also imply costs.
If you have to run this on outdated hardware, or if you have to hire people that have extremely specialized knowledge that was there 20 years ago but is not there today, that could also become extremely expensive.
In those cases, you have to find a way to replace them. It's not that you can chip away stuff or do some 20% to improve it and get 80% of improvement. But in cases like the one you mentioned, it's probably not worth it to actually go and just try to replace the thing. Yeah, yeah, yeah. Do you see any other problems running these types of systems? I think those are the main ones. I think you articulated really well what you have to do with these systems that are still chugging along:
you have to do a good job of illuminating the risks to the business. The things like the availability risk or security risk of a system that's been chugging away, working fine for 20 years.
That's not obvious usually to business leadership. They don't think about, oh, this could suddenly have a security vulnerability that impacts our company heavily, or, oh, this could go down, and now, because we've left it running for so long, getting it back up and running will take three days instead of an hour. Those kinds of risks are not obvious to the business, and that's where we as engineers have to do a good job of explaining and quantifying them.
One of the problems that I see sometimes with engineers is that maybe this has created a bad reputation for us. It creates some miscommunication with management and the business. Sometimes engineers like to do things because of how cool they are, things that they want to do, and they don't think about the fact that whatever we do has to have a business justification.
And I have seen the trend sometimes of people just trying to do microservices, for example, just for the sake of microservices, not because there is some real justification or value there. So I think whenever we try to propose those projects, it's incredibly important to boil the reasons down to business value and business justification. Like, hey, this system,
sure, we don't have to replace it right now. We don't have to do anything with it. We can keep it running. It is safe. The security risk is low. The cost is manageable, etc. Maybe we don't have many changes or new features to add, so the cost of adding new features is not big enough, so we don't need to do much with that system. Or this other system, it's 20 years old, it runs on outdated hardware or uses extremely outdated technology that nobody is maintaining.
There could be security issues there, and there is a high security risk or a high cost of operation. We do have to be brutally honest and transparent in that case. And this is something that we need to think of replacing, and these are the whys, the true reasons. If you cannot justify the business value, then it's very hard.
Yeah, I've seen teams, and I've seen senior leadership teams, where you can tell they've been burned. They've almost lost trust in engineering. They have this feeling like, oh, engineering is always just going to propose all these
vanity projects or pet projects. All they want to do is fucking microservices everywhere. You know, you can tell that they've sort of lost trust in engineering, and that's a really bad situation, because in that situation, they didn't have the technical depth to be able to evaluate all those risks that they were missing, but they didn't trust the engineers when they told them about various risks,
because they were worried the engineers were just, oh, they always want to throw it away and start from greenfield, and we can't do that. So, you know, it was kind of an impasse. That's one of the big risks of going too far down that path: you lose the trust of your leadership. And at that point, you have to put in a lot of extra effort to actually regain that trust.
Probably our listeners are thinking, OK, but you guys, stop talking BS and give me some specific pointers on how to deal with my day-to-day. What are your thoughts about other approaches? I mean, we talked about respect what came before, and trying to approach things with a different mindset. We talked about the 80-20 rule, trying to go after that 20% of the tasks or improvements that give you 80% of the impact. What else do you think?
Yeah, it depends on what kind of a system you're talking about. There's this great article that I read a while ago. It's by Martin Fowler. He was one of the agile people, and he wrote Refactoring, maybe, or something like that. But anyway, he wrote about this pattern. He calls it the strangler fig design pattern or something. But basically, strangler figs are these plants,
these vines that grow around a big tree in the swamp or something. And they grow up around the tree, sucking away nutrients from the tree. And ultimately, sometimes the tree itself will die, and they will leave just these strangler figs all around it. Anyway, that's the metaphor that he uses for how you incrementally evolve and adapt a large legacy system.
Some of it was stuff we've already talked about. You can read the article, but basically he says: define the concrete outcomes that you want to have happen, the specific improvements that you want to make, and then figure out ways to sort of wrap the existing system, providing interfaces that maybe model how you want the system to evolve later. There are many different ways you can do this, many different sub-patterns.
But you can slowly replace or adapt or rewrite pieces of a large system incrementally, as opposed to what my team and many teams have tried to do, which is throw something away entirely and replace it wholesale. And so more and more, I believe that the incremental approach is right. That's kind of vague and high-level, it's hard to make immediately actionable, but it's about finding concrete, small things that you can isolate, modularize, split apart,
so that you can, again, incrementally work, supporting the old system and the new system at the same time, finding ways to redirect: we're going to move more of the business logic pathways over to this new package, or start making incremental small cleanups. But don't try to do this big, wholesale, all-at-once, boil-the-ocean kind of change. Usually you're not going to be able to deliver enough business value before you lose the trust of leadership. That's what it boils down to.
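To make that concrete, here's a minimal sketch of the strangler fig idea in Java. The names (ReportService, LegacyReportService, the pilot-team routing rule) are invented for illustration, not taken from any real system mentioned in the episode; the point is just that callers talk to a thin facade, and traffic moves to the new implementation one slice at a time until the legacy path can be retired.

    // A minimal sketch of the strangler fig idea: put a thin facade in front of
    // the legacy code and route one capability at a time to the new implementation.
    interface ReportService {
        String buildReport(String teamId);
    }

    class LegacyReportService implements ReportService {
        public String buildReport(String teamId) {
            return "legacy report for " + teamId;   // stands in for the old pipeline
        }
    }

    class NewReportService implements ReportService {
        public String buildReport(String teamId) {
            return "new report for " + teamId;      // the incrementally built replacement
        }
    }

    // The "strangler" facade: callers only ever talk to this. The routing rules
    // grow over time until nothing reaches the legacy path and it can be deleted.
    class ReportServiceFacade implements ReportService {
        private final ReportService legacy = new LegacyReportService();
        private final ReportService replacement = new NewReportService();

        public String buildReport(String teamId) {
            // Migrate one slice at a time, e.g. only teams already backfilled in
            // the new data store; everything else keeps using the old code.
            if (isMigrated(teamId)) {
                return replacement.buildReport(teamId);
            }
            return legacy.buildReport(teamId);
        }

        private boolean isMigrated(String teamId) {
            return teamId.startsWith("pilot-");     // placeholder migration rule
        }
    }

    public class StranglerFigSketch {
        public static void main(String[] args) {
            ReportService service = new ReportServiceFacade();
            System.out.println(service.buildReport("pilot-team-a"));  // new path
            System.out.println(service.buildReport("team-b"));        // legacy path
        }
    }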
Yeah, I stumbled onto that article that you mentioned years ago. I barely remember the details. It's a great piece of writing; for our listeners, it's definitely worth going and reading it. Things that I have in my notes: working incrementally is important. It's thinking, what is the next small change that I can do in the system? What can I chip away? What can I isolate? What can I wrap?
Sometimes you have these big components that have a lot of dependencies and are doing multiple things at the same time. It's like, how can I split this thing into two? And sometimes this is not about splitting one service into two. Sometimes this is splitting some package or something like that into two. Before, you had one single package with 20 dependencies. Now you have two packages, and this one has seven, this one has
maybe 12, and one dependency nobody needed, so you remove it. Just start clarifying what your dependencies are between different pieces of code, and approach it that way. You just start adding tests that actually tell you that the current behavior is correct, and then that helps you start making changes. Thinking of Martin Fowler and Refactoring, it's a slow process of constantly and brutally refactoring and making small changes until you get to something that is closer to the end state that you want.
Adding tests is huge. I didn't mention that, but that's probably one of the biggest things you can do; what basically gives you more confidence in making changes to the system is adding some tests. And a lot of times it's too hard to go back in and do testing the right way,
quote unquote: unit testing first and then integration. A lot of times you start from the top down as opposed to the bottom up. With a lot of these legacy systems that I've worked on, one of the first things that we do is go in and build some kind of big integration test suite that lets us exercise large chunks of the functionality in a test. And that's backwards from how you'd do it if you were building the system from scratch, because you'd design unit tests
and then add on a small layer of integration testing. A lot of times with these legacy systems, I'll end up having lots and lots of integration tests and that's it. But that's because it's the most convenient place to inject the testing. And then once you have automated testing, you have your security blanket, so you can go in and make changes, and you can do that with a higher level of confidence.
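As a rough illustration of that kind of safety net, here's a small characterization-test sketch using JUnit 5. LegacyPriceCalculator and its discount rule are made up for the example; the idea is that the expected values are captured from whatever the legacy code does today, not from a spec, so refactoring can proceed against pinned-down behavior.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // Imagine this is the tangled legacy logic nobody fully understands.
    class LegacyPriceCalculator {
        int priceInCents(int quantity) {
            int base = quantity * 250;
            if (quantity > 10) {
                base = base - (base / 10);   // some historical bulk-discount rule
            }
            return base;
        }
    }

    // Characterization tests: we don't assert what the system *should* do, only
    // what it *does* today. The expected values were captured by running the
    // legacy code once, so any refactoring that changes them gets flagged.
    class LegacyPriceCalculatorCharacterizationTest {

        private final LegacyPriceCalculator calculator = new LegacyPriceCalculator();

        @Test
        void smallOrderKeepsFullPrice() {
            assertEquals(500, calculator.priceInCents(2));
        }

        @Test
        void bulkOrderGetsTheHistoricalDiscount() {
            assertEquals(2700, calculator.priceInCents(12));
        }
    }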
Yeah, at least you can start refactoring things and know that you did not break anything obvious. That's a huge advantage and a huge confidence booster to keep making these changes. It's a great approach there. Other things that I would say: operate the system. You probably will have to do a lot of reverse engineering to understand how this thing works,
and be curious, operate the system and learn from the operations of the system itself. If there's something broken, everything that breaks carries a lesson about the system. And start finding lightweight systems or ways to document what you are learning, and go from there. Yeah, that makes a lot of sense. Those summarize some of the key things that I would say to our listeners to think about.
All right, then. I think we have talked about this for a while at this point. How do you feel that you have grown? How do you think about legacy systems in your career after your long years of experience? Yeah, I think we've touched on this many times here. It's really just a mindset shift. You know, at the beginning of my career, I looked at legacy code with sort of trepidation. I didn't like it. I didn't like working on it.
I would avoid it. Any chance I would get, I would say, oh, let's start a greenfield project here. And over the years, I've shifted to the point where, when I see greenfield projects being started up inside a company, I used to think, oh man, I bet that'll be fun to work on, and now I'm thinking, oh man, nobody will use it.
There's a part of me that's trying to get excited, and is legitimately excited, about working on legacy systems, because they're really used and they're really doing something real. More and more, I'm trying to appreciate that and the challenges that it brings. Yeah, that means that you can't just throw it all away and replace it with some stupid React app. When you do go through some careful exercise to refactor it or improve it, and you're successful at that, man, that's
rewarding, because you took this complex system that many customers are using, that's making money, and you improved it and upgraded it, and did it in such a way that it kept doing its job. And more and more, I'm trying to look at that process and get some satisfaction out of it. I think that for an engineer, one of the most important areas of growth is not necessarily in the technical aspects of things, but in the mindset and the way you approach problems and things.
And, you know, after being 20-something years in the industry, you start seeing a lot of the pieces of the equation. You start seeing people that just joined the industry versus people that have been here for 10 years or so versus people that have been here for 20-something years. And you even see your own evolution there. And I think a lot of this is about the mindset of how you approach a problem.
At the beginning, it's easy to think in terms of just launching new things, greenfield projects, new technologies, coding like crazy. And when you see something that kind of smells a little bit of all this, it's like, I don't want to be there. And you see these things even at Amazon. It's like when people are operating systems and managers are like, oh, we need to give people new features to work on, because otherwise they're going to
get tired and go away. To some extent, it's like, hey, operating is an incredibly important piece of the thing. So I think you start changing that mindset and the way you approach these systems. And I agree with you, right? Sometimes there are interesting challenges and interesting learnings in these things.
And, by the way, after dealing with legacy systems, you can ultimately argue that you can do better when you start greenfield things, because a lot of the lessons that you learn dealing with legacy systems you can bring to greenfield projects. If nothing else, I'm a lot less casual about throwing around the word legacy. When I talk about software and when I talk about systems, I'm careful when I use that word now.
I'm definitely more perceptive when somebody actually uses the word, and it's like, okay, what is it you're thinking? I don't always go out and correct people. I'm not trying to be a stickler, but I have to know: okay, are you using legacy to mean thing I don't like, or truly a legacy, a system that has been running and making money for 10-plus years? Sometimes it's a bit of both.
All right. Thank you, folks, for listening to another episode of Bytes in Balance. And don't forget that you can leave comments and reach out if you have questions or any feedback. Thank you, Dan, for the conversation. Yeah, likewise. I'll see you next time. And that's it for this episode of Bytes in Balance.
We hope you enjoyed our deep dive into the world of software engineering. Thanks for tuning in. We would love to hear your thoughts, so don't hesitate to reach out. Connect with us on LinkedIn to continue the conversation or simply follow our updates.
You'll find the links in the episode description. We aim to release at least one episode a month, but with our busy lives, it might vary. Subscribe to stay updated, and you might catch some surprise episodes when we're feeling extra chatty. If you are enjoying the show, please rate, review, and share it with your friends and colleagues. It really helps us reach more people in the community. To learn more about the podcast, check out our website.
The link is in the episode description. And if you're looking for more personalized guidance, we're available for mentoring through MentorCruise. There's a link for that too. That's all for now. Until next time, keep coding, stay sane, and remember: even when it feels like a total shit show, you got this. Thanks again for listening, and we'll catch you on the next episode.