
High Quality Software Development with Eugene Fidelin

Jul 10, 2024 · 56 min · Ep. 165

Episode description

Connect with Eugene Fidelin:

https://www.linkedin.com/in/eugef

https://x.com/eugenefidelin


Full episode on YouTube ▶️

https://youtu.be/alUfh7Nk4eE

Beyond Coding Podcast with 🎙 Patrick Akil

Powered by Xebia!


OUTLINE
00:00:00 - Intro
00:00:27 - What is high quality software?
00:03:18 - Preventing software bugs completely
00:05:29 - Tests becoming a bottleneck
00:06:56 - Removing tests
00:09:49 - Shift left approach
00:12:11 - End to end tests and retries
00:13:55 - Integration tests over unit tests
00:18:05 - Test Pyramid vs Testing Trophy
00:20:08 - The most important tests
00:21:09 - Balancing speed and quality
00:24:18 - DORA metrics
00:25:36 - Metrics without context
00:29:07 - Team performance metrics
00:31:27 - Outcomes AND output
00:32:46 - Early in career experience at big organizations
00:37:32 - Hiring junior engineers
00:39:54 - The joy of working in diverse teams
00:41:53 - High performing teams
00:44:35 - Domain chapter for sharing knowledge
00:48:04 - Not working in sprints
00:52:15 - Deciding your own way of working
00:54:03 - Who is the audience of this podcast?

Transcript

Intro

Hi everyone, my name is Patrick Akil and today's episode is all about software quality: what it is, how to measure it, and how to get it up to a par where everyone is happy, software engineers and the business as well. Next to that, we cover team performance and how to measure that through the lens of bigger organizations, and who better to have that conversation with than Eugene Fidelin, Engineering Manager over at eBay. So enjoy! For today, kind of as a

What is high quality software?

main, I wanted to start off with talking about software quality, 'cause I did way back, I think it's like episode seven or something, an episode on software quality. And I think there are different trains of thought when it comes to what quality is or how you perceive software quality. When I say quality and what that means in software, how would you explain that in the simplest terms? Oh, in the simplest terms, I think the hardest questions

always come. Absolutely. Yeah, I'm in favour of very pragmatic approaches everywhere. So for me, quality means that the software does what you expect it to do, and it also means that it's hard to break. Yeah, it's not fragile. So I think as a developer you would like your code base to be like this: it is predictable, it is stable, it does what you think it should do.

And it's not easy to break. Yeah, it's not that you move some piece over there and then somewhere in a different part of your system everything falls apart. Gotcha. I would say that could be a pragmatic definition: it is high quality software if it feels like that. I really like that explanation, because I was thinking about the same question. I was like, OK, it can go into a

lot of factors, right? For example, documentation with regards to usage or purpose of the software in general, from a developer standpoint, from a user standpoint; any docs, I think, are good to have. Unit testing to prove resilience and to make sure that you can make a change without things flipping over. But in essence, resilience and predictability is the biggest component, I feel like. Yeah.

I think that's the biggest challenge that we are dealing with on a daily basis, because usually you need to implement new functionality on top of what exists. Again, from my experience, if you work on a big product or a big project that spans many years, you don't often have the chance to start a greenfield project. Usually you develop on top of something that's already there, written by many people over many, many years.

And the last thing you want is that you touch something and it falls apart. Oh yeah. That's the worst case scenario, and you want to avoid it as much as possible. And I think tests are one of the approaches to guarantee that it keeps a solid foundation. Yeah.

Preventing software bugs completely

Can you ever fully prevent it, you think? Because as you mentioned, in a big company with, let's say, numerous lines of code, there's actual weight, because I think code can be a weight as well. Is it preventable that things will break at the end of the day? No, I don't think so. I've never seen this happening. So you always need to have a trade-off: if you want your software to be

very solid, very resilient, it means that it's very hard to change as well, and that's what business doesn't like. They like software that can be easily adapted to the customer needs on a daily basis or on an hourly basis. But at the end they want to have some quality guarantees. So it kind of contradicts itself. Yeah, that's interesting.

I was exactly going to say that if you see the quality aspects in resilience as well as predictability, then change is like the enemy of that, basically. But you need change with regards to innovation or, let's say, leapfrogging businesses. Yeah. I would say if your product doesn't change, it's probably that, yeah. I can't imagine that you develop something... I mean, again, we are talking about a business that earns money selling something to customers, some services.

I can imagine that if you wrote some CLI utility which is part of the Linux core, it's probably stable for many, many years. It's to be done at some point, yeah. It's done, and then you don't add anything and it stays like this for many years. But if you're talking about building services, like web services, web applications that deliver features to customers and solve some problems, you're working in a highly competitive market.

The business always wants new features, either to be on par with the market or be ahead, or whatever business wants. So they want changes. Change is what makes your product competitive. You have an advantage over your competitors if you can change it faster. Yeah, the speed is like the name of the game there. Yeah. And then if you're looking, OK, we want to

Tests becoming a bottleneck

change things faster, so what slows us down? Eventually, let's say you have a CI/CD pipeline, you have cloud, everything is really smooth, one button click and everything is

deployed. When you reach that stage, you will understand that, OK, tests are actually the next bottleneck, because probably you already had a lot of tests written in the past, maybe in the old-fashioned way where you had a dedicated QA team and they wrote a lot of tests in Python. Now you don't have Python engineers in your team at all; it's, I don't know, Java, Go and JavaScript. Then, OK, do you need to learn Python to maintain those tests?

Or maybe let's rewrite them in the languages that we know. So at the end, you will end up in a similar situation, and then you need to make a decision about what to do with all those tests that already exist. But the decision should not be to throw them away, right? Because, let's say, if you start out from scratch and you have zero tests, then tests are kind of the only safety net, like the safety net

that you have. And at some point you build to a scale where you have a huge test suite, and then it might become a bottleneck. But then you still need that safety net, right? Because if you're super fast but you break things along the way, then that might in the end make you slower. Yeah, yeah. And my observation is that many teams,

Removing tests

building different types of products, are indeed afraid of getting rid of those tests. They understand that the tests slow them down, but they feel uncomfortable removing them. Yeah. And you need to be really careful; you need to prove the value of removing tests. And by removing, it's not throwing away. You just need to understand what the value of a specific test is, and maybe you can find a replacement which is faster and

cheaper, whatever. So from my personal experience, a lot of end-to-end tests can be replaced by a series of, let's say, integration tests and maybe contract tests. Let's say you have a front end application that talks to some API; then you have one end-to-end test that basically tests these two applications together, and it's

very fragile. You can replace it with one integration test that only covers the front end application, another integration test that covers the API, and then a contract test in between. The combination of these three tests is usually much faster and more stable. Sometimes end-to-end tests cover some

very specific edge case, and then you should just be pragmatic and ask yourself: does this edge case bring value? Do we earn money if this edge case happens on production? Maybe this edge case doesn't happen so often. Maybe you can just remove the test and have monitoring on production instead, and if you see that the amount of those really weird situations goes up, then you need to take action. But maybe for the normal workflow it will never happen.

So why do you constantly run this test? Interesting. Yeah, it's the balance of prevention versus observability, right? Because you can only prevent so much, and if you prevent so much that it actually harms your speed of delivery, then that's probably also the

opposite. And it's interesting that you say that people are kind of scared or hesitant to remove the tests, because having been in a software engineering position for a long time, I've also been in projects where there haven't been any unit tests. And then the general notion is that changing the code is kind of scary or uncomfortable, because what if things break? We don't have the safety net.

I've never been in a place where there's so much safety net that we don't want to take it away because we're scared that things break, let's say. Yeah, yeah. So the safety net is there, but the cost of that safety net is huge. So you understand that you need some safety, but not at that price.

Shift left approach

Yeah, I think it's good then, because the example you gave is also that the safety net is in a different programming language, Python versus Java or Go or whatever you use cloud natively. And it might have been created just by virtue of having dev and ops be separate, and QA be in between somewhere as well, done by a different person or by a

different department in general. So then what they are testing is indeed more of everything. I have very limited working experience with those different departments, but I'm assuming their goal would be to make the safety net as big as possible and prevent as many things as possible. So then I understand that when it comes to the team, yeah, it's there, and what do you do with it, basically? Yeah.

I think one of the solutions, like I said, is to rewrite the end-to-end tests into something which is much faster. At least in our company we call it the shift left approach; I think it's also a term that is used outside of the company as well. Absolutely. So basically your end-to-end tests usually run at the very end of your pipeline, and your unit tests run very early in your pipeline. So if you watch it on the screen,

it's on the left. Yeah. So basically you want to move as many tests as possible from the right side of your pipeline to the left side, because it's a faster feedback loop. But also, on the left side everything is usually much faster: you run it inside Docker or maybe even locally. You don't need any specific environment for that; you can run it on a dev machine very fast. So it also gives you fast feedback.

Developers like that. Also, personally, I don't like end-to-end tests where you click the button and you wait for like 40 minutes until you have the results. You probably switch to some other topic, either writing an e-mail or maybe reading the next Jira ticket. And then 40 minutes later: oh, something is wrong. OK. But your head, your brain, is already focusing on some other

task. And then you need to roll back, basically offload what you have already, and then get back to the task that you were on. So it's very inefficient in that way. Yeah. I mean, starting out with

End to end tests and retries

end-to-end tests, I thought they were amazing, right? Because you mock a user perspective, you go through the application, you cover the front end, everything that's in between, up to the API and the API itself, on an existing environment, probably your test or acceptance environment. In theory it's really cool. But then I noticed, OK, sometimes there's weird behaviour, or things click too fast before things are loaded, and the tests are flaky.

So then it's like, yeah, should I just rerun it? And I've also been in teams where the habit is just like, yeah, the end-to-end test is not really reliable; if it fails, you just re-click it, and when it fails three times in a row, then you know you have an issue. Exactly. Exactly. We're completely missing the point. I can relate to that. I think all our end-to-end tests have at least three

retries. Oh damn, so they only send you a failure message after the three retries? Which, if you think about it, is kind of already an indication that something is wrong. Something's definitely wrong. Yeah, yeah, yeah. And that's not fast feedback either, right? If it has to run three times and only afterwards you know that something is wrong. That's right.

Now it sounds crazy. And I think that has come with experience, doing it again or doing it in a different project, having a huge suite of unit tests where I can reliably say, because I have done a lot of back end software engineering recently, that I would not even spin up a front end to see if my API change worked or if it succeeded,

because I would have a test. And if my test is green, usually everything would be fine, or I would confidently push it to test and then see, or then manually test and be like, yeah, it works as expected. It's because I have the unit test, I have the feedback as early as possible. I don't have to wait for an end-to-end process to get kicked off. And that felt really good, I must say, being in the driver's seat like that.

Integration tests over unit tests

Yeah, yeah, I fully agree with you. But I would like to add one perspective, maybe more front end related, because my experience is more in the front end world. From my observation, unit tests in a front end application, talking about a client side application, usually bring less value than integration tests.

And the reason for that is, if you're talking about the typical client side application, it's built in the modern component-based approach, so everything is a different component. Then what would you call a unit test? Do you need to test each component in isolation? That would be the very pure

unit test, but it doesn't bring a lot of value, because your components might just reuse other components. By testing each component you basically nail down the implementation details, but you still don't have the good feeling of whether the whole application runs as you think it should run. So usually in this scenario I would go for integration tests, where you don't mock at least the components that

you built yourself. Maybe you can mock the third party components; you don't want to test a library that was written by, I don't know, Facebook. And you try to test maybe the feature. Yeah, you have a set of components that work together and deliver some value to the customer: a login form, I don't know, a shopping cart, some view item carousel, whatever. They have some value for the end user, so you test them in one go,

I mean, as a single module, yeah. And then I would call it more an integration test, because there are a lot of components. A lot of people would say it's not a unit test by definition, and I would say yes, it feels more like an integration one. And in my experience, if you follow this approach and start with integration tests, you might end up not really needing to write unit tests.

Or you only need to write unit tests for some specific edge cases, and the question is, OK, do you really need to cover them? And the advantage of that is that you have much fewer tests. You can probably replace, I don't know, maybe 20 unit tests with two or three integration tests, and by reading the integration tests you already understand what value this functionality brings to the user.

So just by reading the test, you can already understand the intention of this code; by reading a unit test, it's hard to say, it's isolated. It's isolated, and it also makes it easier to add new features, because you can completely rebuild the internal implementation of your feature, but if the user behaviour stays the same, you highly likely don't need to change your integration test. Exactly. This is basically the first thing I do when I need to

refactor. Let's say, I don't know, there is a library that's outdated; we cannot use it, maybe for security purposes or whatever, and I need to replace it with something new. How can I guarantee to myself that after the refactoring the functionality will stay the

same? The unit tests I will probably throw away, because the new library might have different APIs that I use internally, but the whole functionality for the end user might not really change a lot. So if I have an integration test and I change the internal implementation, the integration test will tell me that, oh, everything is still fine. So the refactoring is done in the right way; I haven't broken anything. Yeah, exactly.

Test Pyramid vs Testing Trophy

I think it's maybe semantics still: when I say unit test, if I actually look at the semantics, it might be an integration test, even on the back end. And it's interesting, because there used to be a lot of talks on this pyramid model, where your base, the most amount of tests, would be your unit tests. Then integration is in the middle, so it's kind of in between, and end-to-end is your peak; you only want a few of them, very critical ones.

And now I've also heard a different mindset, this trophy model, where unit is still a bit less than integration, because that's the bulk of your testing. And then the bottom part of, let's say, a chalice would still be the end-to-end part, which would still be the least at the end of the day. I'm really fond of integration tests, I mean, on the back end.

So I would hook into an API and I would mock any database interaction, but everything in between I would test, unless I specifically want to have one function that is complex in isolation and I want to do a table-driven test, for example, because I just want to have many versions of input and output. Or I can even do that fuzzy, so I don't even have to define the input and output myself.

Those would be more unit tests. But I call everything a unit test, even though it might semantically be more of an integration test. So yeah. Yeah, I would call it an integration test, because internally you have a lot of parts: probably you have some filters, some middlewares that parse the incoming request. They enrich and validate those requests; they probably add some metadata that is later on used in your code.

So in my opinion it's more like an integration test. You mock the incoming request, you mock everything that's outside of your service, and then you test it as a black box. I think this type of integration test gives the most value per line of code, I would say. Yeah, they have a really wide reach with regards to how much they test, and I like that. Especially, like you mentioned, that you can use some, you know,

The most important tests

fuzzy data. I know it from, how is it called, chaos testing, when they just throw some really strange values and see how your software behaves. To be honest, I never tried it in a real project to understand the value. Instead, what I'm doing is I try to understand from product and from business what the most important scenarios for using this API are, and then I make sure that I test them.

Yeah, because that will prove that if I change something and the test is green, then the flow that brings money to the company is not broken on production. Maybe some edge case can be broken, but if it doesn't bring money, we can fix it later. Yeah, it's not as important. It's not as important because, as you mentioned at the beginning, speed for business nowadays is more important.

Balancing speed and quality

Especially, I'm curious how your experience has been balancing and keeping a level of quality that you and the team like, that is stable and up to par with your own quality standards, while also delivering with regards to any deadlines that might come from anywhere. Because usually that's how it is. I feel like people make these trade-offs, and I'm really curious how you've managed that situation when it comes to balancing speed and delivery.

Yeah, I think, again, in companies that focus on earning money, the software is just a way to earn money; it's not what they are selling. Yeah, yeah. Usually business needs outweigh, so delivery always outweighs quality. That's my observation. You can fight against that, you can try to go against that, but this is a given. At least in the companies that I've been working in, this is

the usual scenario: delivering something earlier is more important than delivering it with the highest possible quality. So then, as an engineering team, you need to balance that. You have business, who is very enthusiastic about delivering, and they have the power to make it happen, because basically they give you the money. So they tell you what to do at the end. But you also need to have the power to have a say in this

conversation. And then you sometimes need to slow down and say, OK, the quality is important, because if we deliver something really fast now, it means that the next feature we will not be able to deliver as fast as you want. Yeah.

So then it comes to the quality gate that the team has, because if the team has a very low quality gate, for the business it will be very easy to convince them: oh, just ship it, do it. People say, oh, we don't care, let's ship it, because maybe they know that in a month or so they will hand over the service to another team. So if something goes wrong in production, it's not their problem.

So why bother? But if your team knows that, OK, we're going to own this service or this functionality for the upcoming months or years, it's in your interest. You don't want to wake up, especially if you have on-call duties. Yeah, yeah, yeah, yeah. That's a very good motivator for developers to actually think twice: oh, I don't want to be awake at 3:00 in the night. Maybe I need to put in something extra. Yeah. I just need to make sure.

I just need to make sure, because it's also in my interest, and in the interest of the whole team, because maybe it will not be you, but your colleague who will be on call. So I think every team builds that internal level of quality that they feel is good enough, and then they just try to keep at that level. There are, of course, some... This is very subjective.

DORA metrics

Of course. There are, of course, some metrics that try to make it a bit more objective, like the DORA metrics; there is this change failure rate. Yeah. So how often your deploys end up introducing new issues on production. This is something that you can also look at, and I think many companies look at it from a very broad perspective: OK, how are we doing?

It's also a nice indicator: if you're moving fast, you can have this change failure rate higher for some period of time. Yeah. But after that you want to make your software more stable, so you take some actions to reduce that number. I think the measurement's very interesting, because, as you say, if I just look at the metrics and I don't know the context of a project, and I see a high change failure rate, I'm like, OK, that might not be

good, right? But if we're in this ramp-up and we're trying things out and breaking things fast, that's actually a fine thing at the end of the day. Yes. Once we've then proven something, I expect that rate to go down, and if it doesn't, that's alarming. Yeah, exactly. And I have a story about that. The team was working on

Metrics without context

a project and it had a very hard deadline; it was very important for the business. So they were actually delivering features at a very high pace, and they made an agreement that it's OK to break the master branch, because it's not exposed to the end user. Everything is behind a feature flag, so even if it's broken, it's fine for us. But of course, this change failure rate metric went really high.

Oh yeah. And then the DevOps team that basically observes the whole organization started throwing emails and then Zoom calls: OK guys, what are you doing? It's wrong, it's wrong, stop doing that. And then the team had to explain: guys, we are just writing a new feature. It's not exposed to the end users. It's perfectly fine to have this metric so high; we understand the consequences of that, and

we take this risk. But as you said, if you just look at the metric, if you don't understand which phase the product is in, or the internals of the product, the metric itself might give you the wrong feeling about what's going on. Did the DevOps team that raised the alarm bells understand, in the end? Yeah, yeah, they understood. But again, you need to spend time on that, and you need to explain to other people.

Just say, OK guys, we understand, just don't bother us, we will fix it later. Yeah, yeah, yeah. But then in all the reports, everywhere, your team is red, and everyone's like, oh, why? So you don't really want to have this. Yeah. You don't want your team to be associated with someone who brings trouble.

Yeah, it depends, because if, at the end of the day, that way of working outpaces your competitors, for example, or allows you to deliver a feature faster, and that's what the business needs, then it's fine. But if there are KPIs tied to those specific metrics, and therefore you can't use that way of working even though it might be more effective for what you need to do, that would be a shame, wouldn't it?

If business decisions or KPIs push you to ways of working that are more risk-safe in general, you kind of limit your way of working. I think that'd be a shame, because then it doesn't really allow you to experiment or innovate. Yeah, yeah. That's why, when these DORA metrics were just introduced, I was really enthusiastic about them, because it feels like they give you nice insights about what you're doing. But I think it's very easy to

abuse. As I said, you can only make judgements based on those metrics if you understand the specific product. Yeah, what's the phase of the product? What are the problems? Maybe it's a product that is undergoing a very critical rewrite, and then you really want to have this change failure rate very, very low. Or maybe it's something new that no one is using, literally zero users on

production, so the high failure rate doesn't say anything. But you don't see that from the dashboard; you only see the number. Yeah. And only the team can relate whether this number is good or bad. But how do... So for example, if you're saying

Team performance metrics

the DORA metrics don't give the total picture, then how would someone give the total picture on, like, team performance? Because it's not going to be on an output level, I feel like. We're not going to count pull requests or commits, because then, from a developer side, I'm just going to make smaller commits,

smaller pull requests. And at the end of the day, smaller pull requests also have a correlation with higher throughput, because those pull requests all need individual reviews, in some organizations even two stamps of approval. That's not really going to do it. But then what is, or what do you think there? I would say in an ideal situation everything should be measured by outcome. But I also understand that often it's very hard. Oh yeah.

Because, again, I think the use case that may be familiar to many people: product comes with a great idea, the engineering team did a great job, they delivered something, negative impact. So does it mean that the engineering team failed? No, they delivered probably the best software. Did product fail? No, maybe, I don't know. They were just coming with an idea, so they just want to learn. So the learning is there, but maybe the business impact is negative.

So the outcome is negative. Well, maybe not like this, because people say that negative is also learning. But let's be honest, if you deliver a feature and it has a negative business impact, no one will be proud of that. It's a learning, it's a learning, but unfortunately those learnings are, I would say, less important to the business than earnings. Yeah, learnings versus earnings. It's like a cost versus... yes, yeah, yeah. Learnings are good, everyone

will say, OK, good, but next time please deliver the feature that will bring value. Yeah, absolutely. If you're only doing learnings, then probably your team will not be perceived as a high performing team. Yeah. Yeah. But that's then the downside of only looking at outcomes, because then, yeah, if you go through the motions, you did a really good job, but the outcome is not there, or it's worse, it's negative, then that can

definitely impact. But then is it a combination of

Outcomes AND output

output and outcome, or what do you think there then? Yeah, I think, at least from what I've seen in the companies I was working in, it's a combination. So sometimes, if your outcome is negative, yeah, you can at least prove the work by showing the output. So you can say, OK, this is the work that we did, and maybe we can reuse some pieces of this work in something else. So it's not that we just throw it away.

We iterate on top of that. So it has a value. Maybe not as prominent as just real money, but it still has a value. But I would say it's a very tough question, and it is very related to the company culture. So how the company in general, how people in the company, perceive that. Yeah. Yeah, it's still subjective then. That's kind of a... I think it'll always be there.

And then if the culture decides it, it's just either how it's been, or it's the people that are there that have their experience, and they leverage that and how they make the decisions at the end of the day. Yeah, I did a Q&A recently

Early in career experience at big organizations

and I'm really curious, because you have this experience with, let's say, bigger organizations. There have been a lot of questions in that Q&A with regards to getting your first

job. And I'm of the mindset that, whether you're from a career switch, self-taught, or traditional education, your first job is most likely, and I'm being very general and black and white here, not going to be at a startup. So you're most likely going to be in an organization where things might have already been live for a while. You're going to work on a bigger project, but maybe a small part of it. You're not going to understand the whole picture yet.

That's probably going to take a long time, and that's how you're going to contribute. And then the question is, OK, but how do I build up the experience to be able to do that in an organization? How can I make sure that already when starting I have a leg up versus someone else? Because a lot of the advice out there is to build a small project, build just whatever interests you. If you like anime or gaming, make something yourself

out of that. But that's always going from zero to something that's then usable. It's not really going into an organization where things are already live. How can you account for that skill set, or how can you actually have a ramped-up start when you go into an organization like that? Yeah. So I can say that in our organization, we do hire junior developers with zero experience, like, I would say, paid experience. OK. Gotcha. So we have people that have

graduated from university. They probably did some projects during their university years, they maybe did some boot camps, and so on and so forth, but they never had experience in, you know, the commercial software industry. And to be honest, my experience with those people is actually quite good. They're really motivated, and if you have motivation, you can quickly learn and quickly fill the gaps in your

learning. And because, again, the company is big, it can afford a certain amount of people who don't really bring immediate value. So the company can invest in a person who will maybe only deliver value in, let's say, half a year, or in a year. As I said, for a startup that would be a no-go. They need to deliver something immediately. Otherwise, in the next round, no

money from investors. Yeah. But I know that junior developers also work for startups, so there's definitely a way there, but I can only say how it works in a big company. So, usually, if you have a team of only senior engineers, it's not very productive, because if you're a senior engineer, you're probably quite an opinionated person. You have experience, you want to apply this experience. You have your favorite topics.

You want to find those topics and work on them because you really like them. I relate to that myself. Yeah. So if you have too many of such people in your team, that would be quite a challenge, because there's always a lot of work that just has to be done. It's maybe quite routine work. Maybe nowadays you can automate it with AI things. But still, there is plenty of work that you need to do, and by doing this, you are learning.

So I would say maybe your first job as a junior engineer will not be exciting. Probably it will be boring. Yeah, probably it will be just, oh, we have a huge set of, I don't know, CSV files, you know, you need to convert them to JSON, and you do it, and then no one uses those JSON files for maybe years. Or you need to do something boring: oh, we have a legal compliance requirement, so we need to have

this copyright on all the pages. And then you need to go over 100 pages and update the copyright just because it's a legal requirement. It's boring work, but. It's important still. It's important still, and you can learn, because instead of doing it manually, you can improve your skills and automate this. Yeah, instead of manually going through the pages, maybe you can just write a script that will do it for you.
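The kind of automation he describes can be surprisingly small. As a sketch under assumptions (the directory names and the flat CSV-with-header-row layout are hypothetical), a script that batch-converts CSV files to JSON might look like this:

```python
import csv
import json
from pathlib import Path

def csv_dir_to_json(src_dir: str, dest_dir: str) -> int:
    """Convert every .csv file in src_dir into a .json file in dest_dir.

    Returns the number of files converted.
    """
    out = Path(dest_dir)
    out.mkdir(parents=True, exist_ok=True)
    converted = 0
    for csv_path in Path(src_dir).glob("*.csv"):
        with csv_path.open(newline="", encoding="utf-8") as f:
            rows = list(csv.DictReader(f))  # header row becomes the keys
        json_path = out / (csv_path.stem + ".json")
        json_path.write_text(json.dumps(rows, indent=2), encoding="utf-8")
        converted += 1
    return converted
```

The point is less the conversion itself than the habit Eugene is pointing at: turning a repetitive manual chore into a repeatable, reviewable script.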

So my experience says that it is doable. But when we are hiring, I would say, junior engineers, hard

Hiring junior engineers

skills are important, of course, but at this level hard skills are not as important as motivation. And also, I would say, the team fit. Yeah. And maybe it's a very vague thing. Maybe a lot of people would say, oh, team fit doesn't exist, or it means different things. But for me, the team fit is like: you wake up, you go to the office to see your team. Like, OK, do I want to see this person? What kind of feeling do I have

after a conversation with him? Exactly. Is he always complaining, or maybe he's struggling but still searching for a way to do something? Is he maybe motivating others, or asking around, or is he just sitting silent in his corner, and then, like, once a week you hear something from him? So you kind of have this understanding. Yeah. And then, if it matches your expectations, you say, oh,

it's a good team fit. And it could be different in a different team. It's very subjective. It's very subjective. If your team is, you know, people who really like to gather together, have a good conversation, I don't know, coffee, beer after work, then you probably want to find the guy. Or a girl, let's be inclusive here. That also kind of shares the same feeling. Yeah. The same, like, I don't know, lifestyle. Yeah. Or work style.

Yeah. Yeah. I mean, that also makes sense, because a lot of the feedback I've heard is that they go through this interview process and then it's a no. And then they very much think inward and think it's them. But it could also be the people on the other side. I told them there are many things that can go on in an organization. People might only know hours in advance that they have an interview

with someone, so they're not as prepared as you would expect them to be. And that also goes into possibly the organization, or the hectic pace within that organization you're interviewing for. And at the end of the day, if it's not a team fit, it's from both sides, right? It's not just you as an individual, it's also the team you're applying to, which at the end of the day has its own team culture and its own way of working. And that might not be a good fit then.

Yeah, yeah, it has downsides as well.

The joy of working in diverse teams

Yeah. So the risk here is that if you're only looking for the people who behave similar to you or share your values, you will end up in a team that is, you know. Very homogeneous, yes, yeah. And diversity, maybe again, another word that means too many things, but diversity helps. Yeah. So I've been working in teams that were very diverse in terms of age and gender and background experience, and I would say that's the most fun experience.

Yeah. You really enjoy working with these people. They come from different backgrounds, from different countries, different ages, whatever. It's also great if in the team you not only have technical people, but also product and marketing. It really makes a difference. You really start, you know, feeling that life is not black and white. It's not like, oh, my opinion or wrong opinion.

There's a variety of different things. That all makes sense. Yeah, yeah. I've shared that experience as you're describing it, and it enriched me professionally, but also personally, because of that difference in perspective. I just take that with me and

I offer that to another team. And I've even felt like, within those teams, I've been very productive as an individual because of that, or because of those conversations, or just because of my fondness of going to work and working with that team specifically. But that's only one of the aspects when I talk about productivity or team

performance in that way. That one you can go for from a hiring perspective and create teams that are diverse, where you can kind of try and make that magic on paper. It still has to happen in practice. When we talk about team

High performing teams

performance, how can you make a team that's non-performant at the end of the day performant? Or what has been your experience with that? I'll be honest with you, I think maybe I was lucky, or maybe there are no teams that just don't perform well, but I was always working in teams that were performing well. That's good. That's good. You probably contributed.

I hope so, yeah. So basically, the teams that I've been part of, as a developer, as an engineering manager, in different roles, were never perceived as an underperforming team. At least I never had the type of conversation with my manager that says, oh, you need to do something because you guys are underperforming. Maybe it has something to do with the overall expectations that the business has.

So maybe if you manage those expectations in the right way, and basically you under-promise and over-deliver instead of doing it the other way around, maybe that also plays a role. But it's hard for me to answer this question, because I've never been in such situations. I've been in situations where we want to improve. Yeah. So we know that we are performing well, but we also know that we can perform even better, because,

I don't know, we have an example that shows it's possible. So I think that's the constant, the permanent situation in most teams: they perform well enough, but they also understand that, oh, we can perform better. We can perform better because we can do many things, I don't know, differently. And we can improve our developer experience.

So we will be more productive. Or we can improve our testing strategy so we don't waste so much time on end-to-end tests, and instead we deliver the feature faster. So that's the common, the permanent situation. There's always room for improvement in this regard. But once you reach a level that is good enough for the business, they probably won't put a lot of pressure on you. It's basically the

need that comes from within the team. So the team itself understands that, oh, we can do better. The business is already happy with us, but we as a team understand that we can do even better. What did you focus on when you

Domain chapter for sharing knowledge

felt that you could do better as a team? You're already doing, like, it's not underperforming. You're doing good, solid work. You have the feeling that you could do better, because maybe you saw it in another team. What was it then that you changed? So, one of the approaches that proved to work is that you kind of create a parallel working track, let's put it like this. I call it a chapter myself.

OK. So let's say you work in a team, and the team does a lot of different things, and you have back-end developers, you have front-end developers. And then there's another team next to you; they also have back-end and front-end developers. And then you understand that you can do better in, let's say, one specific topic that you pick up. I'm more into front-end development, so I will speak to that. So you say, we can do better in terms of front end.

We can be more productive, we can develop front-end features faster. But how do we do that? There are always some grey areas where no one really invests time, because there is no clear ownership of those areas. Like, there might be some shared library that everyone owns, but no one spends enough time to, you know, make sure that it is up to the standards, or make sure that it is fast, performant, secure, and so

on and so forth. So then you try to make this chapter, and then you try to get people from outside of the team. Oh, OK. So you learn from them. Yeah. So if you just stay within the team, it's really hard to become better. Once you start inviting people from other teams, you do this, how to say, cross-pollination of knowledge? Cross-pollination of everything: of knowledge, of approaches, of

learnings, whatever. And it gives a boost. Every time I experienced that, it literally gave a boost in so many areas you never imagined. So usually you have very low expectations, like, OK, maybe let's work together on this shared library. But in the end, you have improvements that impact not only the teams that are involved directly, but many more people. And then next time you say, oh, guys, let's work on something else.

You have many more people who volunteer to work on it, because they really see the impact. And then everyone who participates in this chapter brings something back to his team as well. And then each team that dedicates a person to the chapter also becomes more efficient. Yeah. That's, I think, very important, because the bigger the organization, the more teams you're going to have, and somehow the growth path can kind of stagnate, right?

If everything is new, if everything is changing, you have this growth curve because you're learning. It's probably the first time you're in that environment. And then when you find yourself in that team for a year or even numerous years, then probably that's comfort, right? But there's not a lot of growth anymore in comfort, unless you go

back to uncomfortable. And then this building up of a front-end chapter across teams, I can see the value in that: bringing together minds that are from different teams to get a common perspective, or a diverse perspective, on a certain topic. You also collaborate then and actually develop things within that chapter.

Not working in sprints

That's really cool. Yeah, maybe one important thing to mention is that usually you should try to avoid inviting managers to that. Oh, really? Because managers, they really like planning and estimations. And if you want to have a real space for innovation and improvements, you don't really want to have very concrete plans. You want to have a North Star goal, where we want to go, but you don't want to plan each step. Like, put it in Jira.

It's boring. Yeah. It's not what most engineers would like to do in their free time. It's all extra, yeah. So try to avoid inviting managers, unless they are managers who also code. Yeah. So then they join just as developers. I was going to ask that, because on paper I saw engineering manager. Yes, yes. But I still do coding a lot, so I'm still trying to consider myself also an individual contributor. So it's still doable.

So I enjoyed that kind of combination of roles. That's great. Nice. I was thinking of something that you mentioned: when you're in a team and you're, let's say, trying to step it up as a team, you already saw it and you want to do better. That's something that I've had for a long time, and it has ebbs and flows, but usually it's there. Kind of like, everything is going well, but still there are these aspects where I think we can, we

can do better on that front. And then someone challenged me on that and said, OK, this tech space somehow also has a lot of burnouts, right? And it might even have to do with this constant drive of improving quality software, team dynamics, a lot of aspects there, or even sometimes the terminology that we use, right? We go sprint after sprint. There's no marathon runner that does sprint after sprint after sprint. But we do that.

And then you're in sprint 55, and it's like, yeah, the next one is just around the corner. We go again, basically. What has been your experience with burnouts, or people getting into that mode where they just have fatigue at the end of the day? Like, I'm trying not to work in sprints. So at least in my current team, we adopted Kanban, because we also don't want to have this constant... you know, you always have a deadline, like in two

weeks or in one week. And the feeling that you have a deadline is not a healthy one, I would say. It's called a deadline. You're dead. It's nice to have some sense of urgency, but by definition, a deadline doesn't sound really good. So that's why I really like Kanban over Scrum. Yeah, right. So then usually you plan in longer milestones, and usually those milestones deliver some value to the

customer. And in this case, you kind of run the marathon, but then at the end of the marathon, you have this moment of delight. Yeah. So you deliver something to the customer, and then you have some time to reflect on that. Yeah. So you gather some feedback from the customer, you maybe make some small adjustments based on their feedback. But it's already like, OK, you've done the work.

You see the accomplishment. You feel the accomplishment, because it is used by real people. It's nice to have a connection with the product in this regard. Yeah. So you don't deliver into nowhere. You know how your product is being used. Yeah. So for me, it feels like it kind of reduces the chances of burnout, because you've worked hard, or you've worked normally. Yeah. No crunching. Yeah. You deliver the value.

This value, you see that, OK, this is the actual value for your customers. Everyone is happy, product is happy, business is happy. So you got those accomplishments. You now have energy again to work on the next phase. So in my opinion, this is a more healthy approach. Were you autonomous in that? I only have limited experience

Deciding your own way of working

when it comes to bigger organizations, but were you, in the team, autonomous in making that decision? Because what I've seen in organizations is that, OK, everyone has kind of the same way of working. This is standardized. They're on Scrum, same terminology. People rarely have different ways of working when it comes to Kanban or Scrum, or how they deliver. Luckily, in our organization, no one cares how you organize inside the team. That's great.

So there is a way of working to basically make sure that, if you have cross-team dependencies, everyone understands at which stage you are and when you will need help, or if you have a blocker or someone is blocked by you. So at this level, there is a common approach which everyone follows. But what you're doing inside the team is up to you. As long as you deliver results, as long as the business is happy with you, no one cares. That's how it should be.

Yeah. Yeah, I like that a lot. That's great, man. Maybe in our scenario, it helps that the teams are spread across the globe. So I physically cannot check how the U.S. team is doing. Yeah. And they cannot check how I'm doing here in Europe, because physically we are in different time zones. So maybe it also helps that there are natural boundaries. Yeah. Even if they wanted to, it's not that easy.

Maybe if we were all in one big open space, then it might be a bit harder, because, OK, everyone is doing stand-ups and suddenly your team is not doing stand-ups. So everyone asks, OK, why are they not working at all? Yeah. The physical perception would then be the determining factor. That's interesting.

Who is the audience of this podcast?

I want to thank you, Eugene. This has been a blast. I love your perspective within the organizations and the experience you have there. Thanks for coming on and sharing. Is there anything you still wanted to share before we round off? Yeah, I just want to say thanks for inviting me. That's also my first experience talking like this, so probably not everything went smoothly. But I think you should do it more. I think podcasting suits you. Yeah, yeah. But it was a great experience.

Actually, we have one question for you. Yeah. So basically, this podcast, how do you identify your customer? Like, who is the customer of your podcast? You mean from a listener perspective? Yeah, yeah. So you're doing this product, so there should be a customer for this product. So how do you see that customer?

I mean, it has shifted, right? Because initially, I just threw out conversations, and I'm talking about three years ago, and I was hoping people would listen. But now, throughout the conversations, the topics have changed. They used to be very technical. Now they're a bit further away from technical, and there's still

some technical here and there. And I think I learned best who the audience is through those Q&A episodes, where people send in questions or challenges that they have. They might be the more vocal part of the listening audience. And through the type of questions that I get, I think I get a better feeling of where people are, either in their career or with regards to the challenges they have.

Other than that, I have some superficial statistics, but that's the same as looking at statistics in isolation. So I think those questions give me the context, which, paired with those statistics, tells me who they are. Age-wise, they're very similar to me. I like to also see myself as a listener in that aspect, because I cover topics that I would like to learn about. So then I would also like the listening experience.

Hopefully they're very open-minded, or they're looking for perspective, and what they do with it... I only have hopes, but if it anchors and they use it later, I'm already really grateful. So yeah, OK, thanks. Thanks, man. Thank you so much for coming on again. I'll put all of Eugene's socials in the description below. If you're still here, leave us a comment, let us know what you think of this episode. And with that being said, thanks again for listening. We'll see you on the next one.

Transcript source: Provided by creator in RSS feed: download file