
Measuring Engineering Productivity with Walter de Bruijn

Sep 11, 2024 · 37 min · Ep. 174


Transcript

Hi everyone. My name is Patrick Akil and for today's episode, we cover engineering productivity in scale-up-like environments. We answer questions like what to measure, how to measure, what experiments to run, and how to start right now in your own team. Listening through this episode, you'll wonder, should I start experimenting and should I start measuring? My answer is yes, absolutely. Start right now. Who better to join me at this table than Walter de Bruijn as Head

of Engineering Productivity? He's done this in many organizations and I love this conversation, so enjoy. One of the main reasons why I invited you was because I listened to this talk about developer productivity, and also specifically measuring that and how you've implemented it in the organization you work for. Can you shine a little bit of light on what it is you're doing as well as what you've achieved so far? So I'm active in the field of

engineering productivity. And if you look at engineering productivity, I always get the question, like, what is it? And to explain it, I think I also have to explain what it is not. And what it is not is a way to do performance measurement on an individual or team level. But it is a way to create insights into the effectiveness of your engineering processes, like what is going well, where can you maybe see improvements being

done? And if you start rolling out improvements, is it actually getting better or not? That is the part that engineering productivity brings to the table. And if you do it at scale, and if you can create an efficiency gain of 1%, like an improvement that you're creating there, if you have a 700-plus workforce, that's huge, what you can do there. The essence is that you create the insights for your leaders, so leaders can lead in creating the improvements or protect what

is already there. Yeah. I mean, from my product side, I love measuring things, right, and seeing the impact, as well as kind of estimating what people are working on, specifically from a product side, because I can then influence whatever roadmap or whatever we need to work on, whether that's tech debt or more functionality or getting more juice out of the existing

functionality. But then I put my engineering hat on, and this thought of, let's say, my way of working being tracked in such a way that we can actually make those decisions, that does sound a bit scary. Well, then you make the assumption that we start tracking your work. Individual tracking of individual work, I think, is not going to work. I always give the example: if an engineer is just staring out of the window thinking, that can be a very productive moment. It can

take an hour even. But in that hour a decision can be made that prevents us from making a huge architecture mistake down the road. Is that productive? I think it is. It's super valuable. Can you measure that? No. Should we even start thinking about measuring that? No, we should not do that. But what I can do is look at how efficiently, for instance, we are doing product engineering. How are things flowing, or where is it stuck?

Let's be honest, the things where it is stuck or what slows down or what is annoying to do are also the things that you would love to see changing. So let's bring that forward. Let's see if we can get some data. Can we improve that? Let's have a debate about that. Let's try something. Let's see how that works.

In the value chain of then thinking of a feature going to design, until it lands in the hands of the developers that create that feature, and then eventually when it lands in production, is that the chain that you measure, or what part specifically? I think if you start thinking about engineering productivity, my advice would be to start on the engineering side. Why? Most engineering flows are quite well defined. You have a CI/CD pipeline, which is mostly quite a stable flow.

So let's start there. But you can, further down the road, expand that further. So where does your CI/CD pipeline end? Are you looking at, OK, it's released in canary releases, for instance, and now it's hitting our production servers? Or are you going for general availability, like rolled out onto our entire server farm? Where do you stop on that aspect? And on the left side, you can include more from the product cycle as well.

Like how long does it take us from discovery until we actually start developing something, until it actually lands on our production server? So these lead times are super interesting. But the more you go to the left and to the right aspect, the harder it will be to start measuring things. Why? You have more actors. The process becomes more complicated, so it's more work

to get actually good data there. So when we're talking about then specifically this engineering process flow, would I be able to pick up what you've implemented in measurements and then apply it within my own domain because of those, let's say, standardized practices? If you have highly standardised processes, I would say yes. The truth is that mostly that's not the case. For instance, let's start with a

very classic example. Environments operating in a monolith are totally different when it comes to the metrics that you want to start tracking, compared to a microservice environment, where the deployment speed is way higher, and the PR length, and therefore the amount of PRs that you're doing, is totally different in volume when you compare it to a monolith, for instance. So it really depends on the

context. And I think it's also super important to take that context into account when you start measuring these things. As for a single metric that will tell you a lot, like, oh, if you measure this, everything will be fine and you can start making improvements: I've never seen one. If you look at DORA metrics, for instance, which out of the box give you some examples which you can track, that's great. A good starting point, mostly fairly easy to implement if you have some sort of

standardization. But if you take SPACE, for instance, you're looking at multiple aspects in the SPACE framework. So not only at the DORA metrics, which are part of that, but also at things like the satisfaction of people and how well we are productizing. So you're scaling up in what you want to measure, and the complexity there is also being added along the way. Could you explain the DORA metrics and then afterwards the SPACE framework?

To keep that very simple, and this is for you also an example if you really want to dive deep, there's excellent material on how to get started with DORA metrics. But for me, the DORA metrics are a set of metrics that give you engineering insights into how well you are able to produce code and bring it to production, and how often you fail and can correct for that.
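
A minimal sketch of what computing the four DORA metrics (deployment frequency, lead time for changes, change failure rate, time to restore) from a list of deployment records could look like. The field names and the 30-day window are illustrative assumptions, not the schema of any specific tool.

```python
# Sketch: the four DORA metrics from deployment records (field names are assumptions).
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import median

@dataclass
class Deployment:
    commit_at: datetime                   # when the change was committed
    deployed_at: datetime                 # when it reached production
    failed: bool = False                  # did this deployment cause a failure?
    restored_at: datetime | None = None   # when service was restored, if it failed

def dora_metrics(deployments: list[Deployment], window_days: int = 30) -> dict:
    lead_times = [d.deployed_at - d.commit_at for d in deployments]
    failures = [d for d in deployments if d.failed]
    restores = [d.restored_at - d.deployed_at for d in failures if d.restored_at]
    return {
        "deployments_per_day": len(deployments) / window_days,
        "median_lead_time_hours": median(lt.total_seconds() / 3600 for lt in lead_times),
        "change_failure_rate": len(failures) / len(deployments),
        "median_time_to_restore_hours": (
            median(rt.total_seconds() / 3600 for rt in restores) if restores else None
        ),
    }

if __name__ == "__main__":
    now = datetime(2024, 9, 1, 12, 0)
    sample = [
        Deployment(now - timedelta(hours=30), now),
        Deployment(now - timedelta(hours=6), now + timedelta(days=2),
                   failed=True, restored_at=now + timedelta(days=2, hours=1)),
    ]
    print(dora_metrics(sample))
```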

The SPACE metrics, if you would implement them fully, like all five letters, you're looking not only at the metrics that you're tracking on software development, but also at the satisfaction, or how well we are productizing this part. So what's the outcome, basically, that we're creating, all the way to the experience that we're bringing to individuals. So the set is broader and there are fewer specifics. There are only a couple of

examples in the SPACE framework: oh, you can try this, you can try this. So that can be a good starting point, but it's also about finding your way in. What do I want to measure and how am I going to measure that? That is mostly a journey that you need to grow into, because you always start with messy data. We don't measure that at all. We don't know how to measure that. Start with what you have, normalize it along the way, get it better, and then start building out your data set to

draw more conclusions. Both frameworks actually allow that, although SPACE is more aimed at that growth journey, where DORA is more of an, OK, this is a fixed set that you can start with and work towards getting that insight. Gotcha. Yeah, well, we're then talking about this engineering process flow, right? Regardless of whether it's microservices or monoliths, what are some of the measuring sticks that you can either choose to implement or choose to ignore based on your context?

So tracking everything on the individual level, I would say my recommendation: don't do it. Why? It doesn't tell you anything. I gave the example of staring out of the window; there are many more. What you can see on the individual level is minimal, so avoid that. When it comes to teams, there are some interesting things in your CI/CD pipeline. What I really like is looking at the

PR life cycle. So if you start creating PRs, how quickly are things being picked up and reviewed, and how long does it take until it's actually merged in? Why is that of interest? If I bring up a PR and it takes some time before someone starts reviewing it, that's an interesting fact if that goes up, because I can ask things like, do we have the proper knowledge to actually review this PR within our team, or are we well aligned with other teams that need to do

this review? So that gives you a lot of insight on, are we managing our stakeholders properly? Do we need to develop our teams? And then it comes to eventually getting the commit merged in, because that's where the value is created, because now we can start deploying. How long does it take us to get there? That also includes things like testing: flaky tests come into play, our speed of testing, are we able to test all the way to deployment.

So the overall lead time, broken down into various cycles, I think is a very interesting one that you can often quite quickly explore. There are many more. I would definitely also recommend looking at the deployment side. Often deployments break. Do you know that? How quickly can you fix it? What is the cause here? That is additional data, because only at the moment it's deployed is the value delivered to your customer, so have an eye on that as well.
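
A small sketch of breaking a single PR's life cycle into the stages mentioned here (pickup time, review-to-merge time, total time) from timestamps you already have; the field names are assumptions to be mapped onto whatever your Git host actually returns.

```python
# Sketch: splitting a PR's life cycle into stages from timestamps (field names are assumptions).
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PullRequest:
    opened_at: datetime
    first_review_at: datetime | None   # None if never reviewed
    merged_at: datetime | None         # None if not merged (yet)

def pr_stages(pr: PullRequest) -> dict[str, timedelta | None]:
    """Return how long the PR waited for a first review, and from review to merge."""
    pickup = pr.first_review_at - pr.opened_at if pr.first_review_at else None
    review_to_merge = (
        pr.merged_at - pr.first_review_at if pr.merged_at and pr.first_review_at else None
    )
    total = pr.merged_at - pr.opened_at if pr.merged_at else None
    return {"pickup": pickup, "review_to_merge": review_to_merge, "total": total}

if __name__ == "__main__":
    pr = PullRequest(
        opened_at=datetime(2024, 9, 2, 9, 0),
        first_review_at=datetime(2024, 9, 2, 15, 30),
        merged_at=datetime(2024, 9, 3, 11, 0),
    )
    for stage, duration in pr_stages(pr).items():
        print(f"{stage}: {duration}")
```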

That makes sense. I read an interesting blog post recently from one of my friends, Luca Rossi; he has a Substack newsletter and he broke down the different types of interactions with regard to pull requests. Actually, it was this morning on the train. And he says you have two extremes, right? Either you can have a group of senior engineers that doesn't do any pull requests, doesn't do any reviewing, and then their throughput initially is really, really quick.

And then you can see towards the future that there's knowledge silos or there might be different conventions and different code qualities here and there just by virtue of not

having any reviews. And then on the flip side of that, well, people say, OK, we need more digestible, smaller PRs, where then all of a sudden you bombard the team with pull requests, and then you're reviewing and context switching constantly, and you get this fragmented feature that you have to puzzle together just by virtue of having multiple pull requests. I'm very sure somewhere in there

is like the sweet spot. But then also you have this completely separate train of thought that is like, OK, ensemble programming. So mob programming or pair programming might be a really good alternative as well, because then every line that is written, just by virtue of having another person next to you, is reviewed on the fly rather than afterwards. What is your train of thought with regards to kind of this pull request life cycle?

Well, here you go. This is an excellent example where you can start experimenting. So let's take, for instance, the PR life cycle, and we break it down as I mentioned before, like first commit, first review, time to merge. But let's also talk about the throughput, like how many PRs are you doing? I would love to see a team where we maybe agree that in a sprint, we are going to create smaller PRs. Let's try that. Let's see if that actually happens. Your throughput should go up.

What happens with the total lifespan of the PRs? You can actually see what it does, and you can have in your retrospective maybe a discussion with your team members: what are we seeing in the data, what are we experiencing? And here we're bringing quantitative and qualitative data together, and I think that brings very powerful insights to have a discussion. And you can also start experimenting maybe with, OK, let's do some mob programming there. Let's start organizing that.

Let's limit our capacity maybe in the sprint, otherwise we cannot do this. But let's look at the data afterwards. So, running those experiments, and I would highly suggest using an experiment canvas for that, so you really define: what are we going to do here? How are we going to do that? But also, what is the outcome that we want to see happening, and how long are we going to do this in order to see that outcome?
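
One way to make such an experiment concrete is to write the canvas down as a small structure before the sprint starts, so the hypothesis, the metrics to watch, and the duration are agreed up front. The fields below are an illustrative assumption, not an official canvas template.

```python
# Sketch: a lightweight experiment canvas as a data structure (fields are illustrative).
from dataclasses import dataclass, field

@dataclass
class ExperimentCanvas:
    hypothesis: str          # what we believe will happen
    change: str              # what we will actually do differently
    metrics: list[str]       # which metrics we will watch
    expected_outcome: str    # what movement in those metrics would confirm the hypothesis
    duration_sprints: int    # how long we run it before deciding
    notes: list[str] = field(default_factory=list)

smaller_prs = ExperimentCanvas(
    hypothesis="Smaller PRs get picked up and merged faster",
    change="Split work so PRs stay small, agreed as a team for the coming sprint",
    metrics=["PR throughput per sprint", "time to first review", "total PR cycle time"],
    expected_outcome="Throughput up and cycle time down, without quality complaints in the retro",
    duration_sprints=2,
)
print(smaller_prs)
```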

So driving those experiments, I think that's the key that you want to unlock when it comes to engineering productivity. And this is an example where you try multiple ways of, let's see if we can bring down, for instance, the PR life cycle without influencing our throughput, and maybe the throughput even goes down or up; you don't know that. Is there a trend you've seen in running these experiments, to see what works generally?

And I hate speaking in generics, but in any case, what trend have you seen that works well for teams in most cases? It really depends on what you're doing, what your domain is. If you're, for instance, deep into databases and you're handling schema definitions and changes, your world looks totally different than if you're focusing on frontend, or if you're focusing solely on backend or infrastructure as code. So that's the first thing: you need to acknowledge that that's the case.

So I would say just start exploring: what is the sweet spot for us? Where do we feel that the team is running at a good cadence, where we can connect well with our surroundings, because we're not working in isolation. That is super important. So find your cadence, and I think it's different for everyone. And the stage of team maturity: how long have you been a team? How well do we know each other? What is the knowledge that we

have in our team? These are all factors which basically influence those metrics, and also how you should treat PRs, as an example, in sizing and complexity. Maybe a fun fact: what I did see using code generators from AIs was that they created perfect code, between quotation marks, but eventually the review took longer, because you sometimes need to process very sugar-coated syntax and ask, is this correct or not? So the review times took longer and actually the throughput went down.

Oh, interesting. Yeah, I didn't expect that actually. When it then comes to, let's say, these quantitative data points, because I want to go into qualitative later, how do you measure? Because I've used GitHub, we're on Azure DevOps now, I've used Bitbucket before. Do you measure through those, or more so your ticketing system, a Jira that you might use? We operate on a multi-source strategy, so your Git repositories are definitely a good place to start.

They are fairly standardized in approach, yeah. It's not an easy task to just pull that data from the APIs, for various reasons. Sometimes they break, sometimes they even give inconsistent data. So I would say there are quite some good products in the market that are able to pull that data and abstract it for you, put it in a data lake for instance, and start querying from that perspective. You can also use your ticketing system for additional information.
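
As an illustration of pulling that data yourself, a sketch against the GitHub REST API is shown below; it fetches recently closed PRs and computes time to merge. The owner, repo, and token variable are placeholders, other hosts (Azure DevOps, Bitbucket) have similar but different endpoints, and pagination is omitted, so treat this as a starting point rather than a finished connector.

```python
# Sketch: pulling closed PRs from the GitHub REST API and computing time to merge.
# OWNER/REPO and the token env var are placeholders; pagination is omitted.
import os
from datetime import datetime

import requests

OWNER, REPO = "your-org", "your-repo"    # placeholders
TOKEN = os.environ.get("GITHUB_TOKEN")   # optional, raises API rate limits

def fetch_closed_prs(owner: str, repo: str) -> list[dict]:
    headers = {"Accept": "application/vnd.github+json"}
    if TOKEN:
        headers["Authorization"] = f"Bearer {TOKEN}"
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/pulls",
        headers=headers,
        params={"state": "closed", "per_page": 100},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def hours_to_merge(pr: dict) -> float | None:
    if not pr.get("merged_at"):
        return None  # closed without merging
    opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
    merged = datetime.fromisoformat(pr["merged_at"].replace("Z", "+00:00"))
    return (merged - opened).total_seconds() / 3600

if __name__ == "__main__":
    for pr in fetch_closed_prs(OWNER, REPO):
        h = hours_to_merge(pr)
        if h is not None:
            print(f"PR #{pr['number']}: merged after {h:.1f} hours")
```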

I mentioned in the beginning that if you shift left more to the product side, for instance, if we create an initiative or an epic in your issue tracker, that's maybe the start where you say, OK, now I want to start tracking how long it takes us for this initiative to go through the discovery cycle. So when do I actually see the breakdown of epics there and the start of the flow of stories? So that's where the issue tracker comes in.

But as I mentioned as well, the complexity grows there. As an example, there are teams that treat their backlog as a to-do list: whenever an item is created, that's what we're going to do, and we have different ways of doing our discovery; we're doing that in spreadsheets or we're just doing that on whiteboards. So there are multiple options that you can explore. But there are also teams that use their backlog as a reminder list, and everything that we think of, we

dump it there. So you need to make sure that both cases will produce valid information. As I mentioned, that's where the complexity starts growing. Yeah. That's the difference in process flows, which are maybe not as streamlined as engineering flows, I guess. Yeah, yeah. I mean, right now I have the product hat on, because I've done product management since January.

And for me, engineering productivity has become more and more, let's say, top of mind, because I'm like, OK, we have a lot of things to do, at least in my environment. We're in EC, I'm in a banking domain. Everything is new and we can do everything. And everyone says everything is equally important. And then I'm like, OK. But then the things we work on really,

really matter. And I don't know if I've ever realized that, let's say with my engineering hat on being part of a team, because then decisions were already made with regards to priorities and I would just execute. But now I'm more so also tending towards, OK, what is the quality of what we deliver? Because if I could test my assumptions faster and then they turn out to be wrong and we don't need that feature, then I might not need 100% of the quality needed to go live basically.

And those are more so top of mind when it comes to our decision making. I want to be faster and more agile in the way we deliver, also with regards to quality. And I want the quality to accommodate for how sure we are with regards to these bets that we're making. And that's a completely different mindset. And then your talk about engineering productivity and measuring, it just hit home with me. I was like, that's what I want. And now I'm trying to figure out

how to get there. Well, once again, just start by collecting the data and insights. And what I think really resonates well: if you're from the business side and you start measuring the outcome of your products, hopefully you can tell, if you launch a new feature, whether it is bringing the stickiness, for

instance that we are hoping for. But if you can in addition determine how efficient were we in actually producing this feature or could we improve ourselves there, that will totally make or break potential future business cases.

So creating those insights and starting to combine them, I think that's a really interesting spot to have discussions when you start prioritizing your portfolio: not only saying, shall we build this feature, but also determining scope, and as you mentioned, fast iteration and getting traction is key.

And if you can actually show that the first iteration and getting it out of the door is something teams can do very quickly, avoiding, for instance, certain complexity for the future, like for scalability or quality, I think it's a very interesting conversation with the business to make the case. Let's get the first iteration out. What is that first version? Let's minimize it and see if it sticks, and then build on top of

that. And that mindset resonates as well with a lot of engineers: let's get something shipped, and not have a six-month project landing on our backlog without any customer value delivered. Yeah. And these business cases, like, when it's obvious, when I'm like, OK, we save two clicks and then the complexity is worth, I don't know, X amount of hundreds of engineering hours, then I'm like, OK, there's no business case.

But when they become more fine-grained, I think the better quality of data and the more data availability, those nuances are going to make the difference between those decisions then and right now, because I don't have any insights into those data points; they're more so a gut feeling. And with gut feeling comes communication and more challenging buy-in. I would think that with more data points I would get buy-in more easily; that's why I do

want to move there. Yeah, and there is a lot happening currently within that field, in the field of forecasting actually. And you're getting more advanced on the left side, on product creation, there as well, because you need to have the entire chain, otherwise you are not able to deploy, so there's no value created. And that forecasting part is really where you now see traction.

But this requires a lot of data points, for instance, your throughput: like, what is the normal flow of this team in terms of throughput? What can they handle? How often do they break things such that they need to correct them? So there are multiple factors. If you have good data on that, you can actually use Monte Carlo techniques to start doing predictions. What I've seen so far over the years is that teams are roughly, you know, 40 to 50% accurate in how well they can estimate

how long it takes for a feature to be delivered. If you have some data behind that, it doesn't have to be perfect, but just something that you can at least explain, it brings reason to the table and starts a debate. Are we going for these two products, or shall we first ship this one, based on what we see in terms of forecasting? We can't do both. Maybe the initial guess is yes. Yeah, but the data shows no. No, it's not going to happen then. No.
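
A minimal sketch of the Monte Carlo idea mentioned here: resample historical weekly throughput many times to see how many weeks a remaining batch of items might take, then read off the 50th and 85th percentiles. The throughput history below is made up for illustration.

```python
# Sketch: Monte Carlo forecast of delivery time from historical throughput.
# The history is illustrative and assumed to contain only positive weekly counts.
import random
from statistics import quantiles

def forecast_weeks(weekly_throughput: list[int], items_remaining: int,
                   simulations: int = 10_000) -> list[int]:
    """Resample past weekly throughput until the remaining items are done."""
    results = []
    for _ in range(simulations):
        done, weeks = 0, 0
        while done < items_remaining:
            done += random.choice(weekly_throughput)  # pretend a random past week repeats
            weeks += 1
        results.append(weeks)
    return results

if __name__ == "__main__":
    history = [3, 5, 2, 4, 6, 3, 4, 5]   # items finished per week, made up
    runs = forecast_weeks(history, items_remaining=30)
    percentiles = quantiles(runs, n=100)
    print(f"50% chance within {percentiles[49]:.0f} weeks, "
          f"85% chance within {percentiles[84]:.0f} weeks")
```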

How do you combine the two? Because we touched mostly on quantitative data with regard to throughput, with regard to engineering productivity. How do you combine that with the qualitative data? And in the end, do you need both? Because, I mean, engineers are very data-oriented, usually on quantitative data points I would say, but you mentioned qualitative data points being important as well.

It's a super important aspect to bring the human factor in; data will definitely not tell you the entire truth. And there's also this famous saying that it's only fixed when the engineer believes it's fixed. That's where you may need the data, but you can also verify that by asking. For me, there are two options there. The first one is the chat at the coffee machine, or meeting each

other in the office or wherever. But I think it's super important that you are strongly connected with your engineers and actually understand what the challenges are that they're facing, or what is really going well. Why are you very happy about certain improvements that were deployed recently?

You want to keep that. So don't change that part, because if you're going to drive change and you're going to touch something that people are actually quite happy about, that's where you also get a lot of questions. And then there's the part of surveying, which is a whole different discussion. I don't know about you, but whenever I get a company e-mail saying, oh, can you please fill in this survey, there's another survey. And if I look at my Slack channels, I have multiple

requests for, can you respond to this, it's just very quick, it only takes 15 minutes. If you're lucky, yeah. But it's a context switch, and often you don't know exactly what comes out of it. So we're solving that by over-communicating what we're going to do with the results and the next iterations of improvement. So that 15-minute survey is maybe eventually consuming four hours of my time if I want to read all the follow-up communication about it as well.

What I really like is if you are able to do tailored surveying in-channel, at the team level. So there are some excellent surveys out there with validated questions regarding engineering productivity or developer experience that you can reuse

there, and where you take one or maybe two questions that pop up in a team channel, let's say every week or every two weeks, and where you can just answer a question with a single click, an emoticon or whatever, but it already produces a lot of insight. Based on the outcome of that question, I determine what the next follow-up question is that I may ask based on the feedback that I received, or what I will bring to a retrospective, a team meeting, or maybe the last

five minutes of the stand-up to quickly discuss. And I think that low-touch collaboration that you're seeking in surveying produces a lot of results, hopefully also in the stand-up or retrospective, where you once again connect with the people. I see that this sticks better than the big surveys that you're going to send out with five topics, taking 15 minutes of your time.
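
A sketch of the rotation behind such micro-surveys: one question from a small bank per period, with a follow-up prompt when the previous score was low. The question bank is illustrative and post_to_channel is a stub; wiring it to Slack, Teams, or another chat tool is left out.

```python
# Sketch: rotating one micro-survey question per period (question bank is illustrative,
# post_to_channel is a stub standing in for a real chat integration).
from statistics import mean

QUESTION_BANK = [
    "How easy was it to get your PRs reviewed this week? (1-5)",
    "How satisfied are you with our build and test speed? (1-5)",
    "How clear were the priorities you worked on this week? (1-5)",
]

def post_to_channel(text: str) -> None:
    print(f"[team channel] {text}")

def run_pulse(week: int, previous_scores: list[int] | None = None) -> None:
    """Post this week's question; flag last week's topic for the retro if it scored low."""
    if previous_scores and mean(previous_scores) < 3:
        post_to_channel("Last week's topic scored low; let's take five minutes in the retro.")
    post_to_channel(QUESTION_BANK[week % len(QUESTION_BANK)])

if __name__ == "__main__":
    run_pulse(week=0)
    run_pulse(week=1, previous_scores=[2, 3, 2, 4])  # made-up responses
```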

Yeah, I feel like you lose people as well when you do those huge surveys and then people don't see the outcome of it. Or you might see it in half a year, in a year, and you're like, oh, one of these; I didn't see any change then, why would I do that? I feel like you lose trust a lot when it comes to that. If you can't deliver, or you don't have the mandate to be able to deliver, don't

ask. What are some of the learnings that you had which didn't really work out, where you had some assumptions where you thought, OK, this is really going to be effective in changing the way of working or changing what we measure, which turned out to be just false in practice? So one example that immediately pops to mind is, we had a concept, it was called the team health dashboard. And the idea was that there was a single stop where teams could go and view their key metrics.

And it also contained some company-wide metrics, for instance, uptimes of our systems or incidents that we encountered on our product suite. First learning there: don't overwhelm people. People will drown in their own data. There are so many things: where to start, do I need to start? No, OK, then I will click this away and I will move on with my day-to-day. And the other part is showing metrics to a team, or a stream, depending on how you organise.

But if you can't influence it, why am I looking at this? So overall uptime, for instance: it's great, but I can't influence it as a team. I can be a breaking point for sure, but I can definitely not ensure that our uptime is on par with what we're aiming for. Yeah, it just happens. Yeah. So why am I looking at it as a team? Why am I going to discuss this? That should be discussed in a larger forum.

So make sure that if you're presenting data to teams, it is relevant, and make sure you only present the things that are really important. What if you do need to have additional data for teams? Let me give an example here. We mentioned PR cycle time in the beginning, and I mentioned additional items there, like first response to a PR and getting it merged in. How long does it take us to merge it in? What is the throughput on the PRs there?

So these are additional metrics that you may want to have available as well, by using a technique of layering, where layer one only shows you, for instance, the PR cycle time, and when something is off, you want to learn more. But present that in a layer two, where you click and go to a different environment, maybe a different page of your reporting engine, and there you see more detailed information.

Make sure that the user experience, and it comes back to productising again, is just a good experience, showing only what is relevant. If you want to learn more, you have the option to learn more, but if you don't want to learn more, it's not distracting you. Interesting. When I thought of productivity, also in preparation for this conversation, I was thinking, we can optimise productivity,

something I like; I like being productive as an engineer. But does this also put strain on the people? Additionally, are we removing downtime that is waste, or downtime that people also need? It's just food for thought, I would say. I don't know what we are optimizing for sometimes, and there is this epidemic of burnout within the tech field that we have. Should we really focus on productivity and making people more productive? Does that also increase happiness, or what have you seen?

It definitely should increase happiness. I think people who are happy, who are enjoying the tools that they are given, who are able to do their job, which they fully understand, and understand why am I doing this, that is what brings the productivity part. So don't use it as a performance mechanism, like, oh, I want you to hit a certain amount of deployments, for instance. They will cheat the system. There are always ways to cheat the

system. So if you start doing that, you will lose any trust in using any data. If you start only optimizing on, oh, I want to go faster and the quality shouldn't be influenced, and push people over the boundaries, you will pay the price eventually as well. The machine will come to a stop. People will either leave your company, they will go on sick leave, or they will just lose focus, and they start shielding themselves, or entire teams, and the team leads themselves

start shielding them. It's just not beneficial doing that. What do you then, because you're measuring data points, and I still feel like there's this gap in my understanding of, let's say I'm a software engineer, my individual performance is not measured, but my team performance is. Will that then influence my performance at the end of the year, or how does it affect me, basically? It starts with the assumption that you can't measure overall team

performance. What you can do is have a discussion: what have you learned from the data that was provided? What kind of experiments did you run to actually improve some of the metrics that we were tracking? And in addition, did it work, or did you get any good learnings out of that? That is the ultimate expression of continuous improvement, I guess.

So what if we agree that good performance is actually operating this continuous improvement cycle, by utilizing this data or any other data points or quantitative data that you're getting? That, I think, is the expression of performance on the team level. And you can't place a sticker on what a team should hit, like, oh, this is your benchmark on the amount of deployments. But what if two team members are ill? I'm not going to hit it. It doesn't work.

No, it doesn't work. So never do that. But then everyone in this process needs to understand what to measure and what data points to look at, like this continuous improvement cycle being the goal rather than the data that comes out of it, right? If someone doesn't understand that, and all of a sudden you have one manager and one team saying, OK, you need to do X amount of commits and X amount of pull requests, this whole system goes downhill. Yeah. How do you educate people?

That's, I think, my main question. It starts with the creation part: what are you actually going to measure? And that depends on the needs of the people. If you're going to push out, let's say, 10 metrics and say this is what you need to measure, and there's no need for it or people don't understand it, then it's immediately perceived as, oh, we're being tracked here. So we either start playing the system so the numbers look good, or we totally ignore it.

We never talk about it. But in both cases it's a waste. You don't want to have that. So take people along at a very early stage: what can we do? And let's take the engineering process as an example. Where do you feel that something is off or we could improve, and can we just visualise that? Can we bring some data in? Can we have a discussion? Can we do the small surveys, the micro surveys; can we learn something there? That mostly sticks with people, like, OK, let's see what

happens. Make sure that you also have the ability to drive change there, that you have the mandate, that leadership is willing to commit to it as well. When people see that it's being used in a manner that actually improves life, that's where you can maybe introduce the next step. So that is also what you see currently in the market:

there are a lot of engineering productivity solutions, full sets of dashboards, and you just plug in your repository, and managers can see, oh, what are we working on and how efficiently are we doing that, and teams can dive into the efficiency of pull requests along the way. It's overwhelming. People don't really understand what they're looking at. Data quality is sometimes debatable. You lose trust.

People are not going to look at it, or they will start playing the system again. As a final thought, it'd be fun to think about this thought experiment, because in my own environment, I'm responsible for two teams. And it was really not possible for me to run two separate teams, so I merged them into one, because it was also feasible size-wise. And now we have one team, two stakeholder landscapes from what were two separate teams, and plans for next year to increase the number of teams even more.

And I'm thinking of developer productivity, as well as, more so on the left side, even the product side of things. And I want to start measuring things, because in my head right now we're measuring next to nothing. Basically I'm wondering, where do I start? Do I start within my team, running small experiments there? Or do I need a separate team that standardizes this across multiple teams? I'm just thinking of how I start in the first place with measuring things.

I would bring it to the team and ask the question to the team: if we have the ability to gain some insights into our engineering processes, what would be something that you are interested in learning more about? Get that first insight, because people know, like, oh, this is the annoying thing in my day-to-day, what they're actually experiencing day-to-day as annoying. So they will bring it forward if there's enough trust in the team. So I assume that that is all

there. So ask, that's one, and make sure that you have the enablement to actually create those insights, get the data validated and have a discussion, but also get the management buy-in that the data is not being used to track; it's being used to drive continuous improvement by the team. The team is in the lead, they own it. We have those conversations in retros, and one of the things that was most top of mind was pull requests and the throughput time with regard to that. We spoke about it.

We thought of running an experiment with it. Things are going better now, but everything is qualitative, right? I don't have any quantitative data. With this initial upfront cost of implementation and getting some quantitative data points, how much does it usually take to get stuff up and running and get your first data points in? Yeah, that depends on how complex your engineering landscape is and how much enablement comes from your organization.

So if there are turnkey solutions already within your organization, then it's way easier to do. In my experience, there is always a team in the organization that has already done something, either by using some open source stuff or building their own connectors. There's someone who's passionate about it, creating their own data engine that pulls data. So find these people in your organization and see what you can reuse. That's a great starting point if it's not provided by your

organization already. In addition, how much effort should you actually spend here? If people feel like, oh, we're in a good spot here, I would ask again, OK, what would be a thing that we could start looking into? Because mostly, when people feel like, OK, this is going well, it's not annoying for us, I wouldn't spend much time creating a lot of data insights. It's not worth it.

I love the initiative of looking at neighbouring teams and seeing whatever they've started measuring or just experimented with, and learning from that as well. I feel like we're not doing that enough. But I've loved this conversation so far, Walter, I must say; it's been a real pleasure learning about developer productivity. I'm going to ask my team to listen to this. And if you do, let me know in the comments. But in any case, before we round off, is there still anything you

want to share? Well, I really would like to invite everyone to start exploring engineering productivity, but do it in a team conversation. In this podcast we went over several things that you can start measuring, with your data on PRs. We also shifted left, and all the way to deployments on the far right. There's so much that you can

actually start measuring. I would love to invite everyone to make a very simple visualization of how their product creation is being done, what their understanding is within their company. Bring that to a retrospective and ask your team members, where do you think we're dropping the ball currently, and see if you can get some insights. And there are many techniques out there. Start simple, take the data that you have, just take the part of the data where you feel like, this is what we can trust.

It will never be perfect. And start implementing that. It's the conversations that will lead to scaling up this initiative of gaining those insights, and do that as a team or with your surroundings. Even better if multiple teams start joining this initiative; that would be my call to action. Beautiful. Awesome. Then thanks again so much for joining us. This has been a real pleasure

and thank you for listening. If you're still here, leave some love and comments in the comments section below and we'll see you on the next one.

Transcript source: Provided by creator in RSS feed