
Relationships Ruin Your Code Reviews

May 10, 2024 · 15 min · Ep. 77

Summary

This episode explores the influence of relationships on code reviews, drawing from a 2023 study. It discusses both the benefits of code reviews and the biases that can arise from personal feelings towards colleagues. The podcast also covers strategies to mitigate bias and highlights the positive aspects of relationships formed through code reviews, alongside the importance of systematic review approaches.

Episode description

Key Insights:

  • Importance of Code Reviews: Code reviews are essential for error detection, understanding new features, adhering to coding standards, and ensuring only reviewed code is deployed.
  • Emotional Impact: Emotional dynamics play a significant role, with 30% of developers reviewing code from less favored colleagues, which can lead to biased judgments and negative feelings.
  • Striving for Objectivity: Despite personal feelings, approximately 76% of developers strive for objectivity to maintain professionalism.
  • Impact of Developer Experience: The experience level of a developer also influences the depth of code reviews and the manner in which feedback is provided.
  • Perceptions Formed: Reviewers' perceptions of code quality can affect their views on the author's skills or character.

Strategies to Mitigate Bias: The episode outlines multiple strategies to reduce bias in code reviews, such as involving multiple reviewers, standardizing review criteria, and implementing anonymous reviews.
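To make the "standardizing review criteria" idea concrete, here is a minimal sketch of a shared checklist kept in code and rendered as a pull-request comment. The checklist items and function name are illustrative assumptions, not taken from the study or any specific tool.

```python
# A sketch of a shared review checklist kept in code, so every reviewer
# works from the same criteria. The items are illustrative examples,
# not taken from the study discussed in this episode.
CHECKLIST = [
    "Does the change do what the description says?",
    "Are edge cases and error paths handled?",
    "Are there tests covering the new behavior?",
    "Does the code follow the team's naming and style conventions?",
    "Is any user-facing change documented?",
]

def render_checklist(items=CHECKLIST):
    """Format the checklist as a markdown task list for a PR comment."""
    return "\n".join(f"- [ ] {item}" for item in items)

print(render_checklist())
```

Keeping the list in the repository means it is versioned and reviewed like any other change, so the team's shared expectations stay explicit.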


Conclusion: The podcast sheds light on both the positive and negative impacts of human factors in code reviews and emphasizes the need for strategies to minimize bias, enhancing both code quality and team dynamics.

Transcript

Hello and welcome to the Software Engineering Unlocked podcast. I'm your host, Dr. Michaela, and after some time of radio silence on my end, I'm so, so happy to be back on air and to talk to you. Today's episode is all about code reviews, and especially the impact that the relationships we have with our colleagues have on the code review outcome. Today's episode is very much influenced by a paper called "How Social Interaction Can Affect Modern Code Review."

It's research done by a group of people coming from different universities around the globe, so I will link it in the show notes and you can check it out. I found it very interesting because they are investigating the impact of our relationships, and of the emotions that we have towards our colleagues, on our code review process.

There are actually other papers that also show there is a big bias in code reviews and in how we conduct them. Findings about pushback came out of different studies, a lot of open source studies as well, where we see that how we perceive the person, their gender or age for example, really influences how we review the code. So there is a lot of bias in code reviews, often bias that we are not even aware of. This study now, a study from 2023, so a pretty new one, looks into the emotions that we have towards our colleagues and whether they influence our judgments.

Well, the first thing the researchers looked at is whether the people they interviewed found code reviews beneficial, and all of them said yes. I think this is probably also a little bit of bias in the study, because if you're not interested in code reviews, you're probably not willing to spend time talking about code reviews. But yeah, it's really good news, right? All of them said code reviews are very, very important.

And then what I found interesting as well are the benefits that people reported, because finding errors was again one of the most reported benefits of code review: 36% of the people said this is the number one reason. Then it was about understanding new features, which really helps with knowledge sharing. Another reason was that it makes you follow coding standards. Yes, that's true.

And then another one that stood out, with 20%, was that it allows us to make sure that we only push code, we only ship code, that has been reviewed. So it's a safety guard as well. Then this research really dives into this emotional perception. They did a grounded theory study, interviewing around 25 developers, and 70 percent of them indicated that they feel close to the people who review their work, and that it affects the way they review. And there were several camps in that; it's not one direction, it's several directions.

Some people said they feel like they are stricter because they have a relationship with the person, because they know them. One person said, well, this is my cousin, so obviously I'm a little bit stricter there and I'm looking more closely at what they're doing. And other people said, no, I try to be nicer because I have a relationship with that person. I like them, and I try not to be rude, to protect our relationship and what we have going on. And most of the people they interviewed were really aware that those relationships influence how they review code and how they give comments.

So this was the positive relationship, somehow, that they have, but the researchers also investigated what happens if you review the code of people that you can't stand. Around 30%, so still a significant percentage of people, said: well, yes, I have to review code of people that I don't like, that I can't stand. And a small percentage of those admitted openly that the emotions they have are impacting their judgment. They feel maybe aggression or anger. One person said, "I wish to rewrite his code." Another person said, "I didn't want to let his code go to production." And another person said, "If a person was extremely unpleasant to me, I simply ignore his request." So there's a tendency not to review the code of people that you don't like, or maybe to let them wait.

And I've seen that quite often, right? Not only if you don't like the person, but also if you know the person's code isn't really good, or they are writing a mess, or they are writing these really large PRs. It's all related to the person: you know what you can expect, and then sometimes people are not willing to take on the review request.

Yeah, this pushback on the code of people that we don't like was also reported in open source, but I think there's a different angle here, because in open source we often don't even know the person we are reviewing. The biases come from maybe the avatar or picture they have, the name, or what we think about the person. Here, the people are really working with each other. The researchers made sure that the people they interviewed had worked with the colleagues on their team for at least six months, so that they are not new to the team and they know a little bit who they are talking to. So this is really a team setting that we are looking at.

But the majority of the people, around 76 percent, said: well, I really try to be objective. So even if I know I don't like that person, I try to be objective, I try to see the professional side of things, I try to separate the code from the person, and all of that. This is a very good attitude, but from other research we know it's not always happening that way, even if you intend it.

Another interesting aspect that they dived a little bit into was the experience of a developer. If you perceive the other person as senior, as experienced, as knowledgeable, as skilled, quite a few developers indicated that this impacts how they review the code. It could be that they say, for example: well, if I know they know what they're doing, I'm not going so deeply into that code.

Whereas if I know this is a person new to the team, or a junior, or an intern, then I really go into the depth of the code and remark on all the nitty-gritty parts as well.

Another aspect they looked into was that the code that we review, in turn, influences how we see the person. So not only does the person influence how we review the code, but also the other way around: if we get poorly written code, or if we see people make the same mistakes over and over again, or mistakes that we don't think they should be making at this point, this influences our perception of the skills, or even of the character, of the author. Even though a couple of people said they try not to let that influence their perception, I think it's a natural cycle that's happening, going back and forth.

A last aspect the researchers covered here was thinking about how we can reduce the bias. They brainstormed with the developers they interviewed, not together but one by one, about what they think could help reduce these biases, and there were a couple of ideas. I want to throw them out.

I personally think some of them are just not doable, not feasible, but it's good to think about them, right? And I would love to hear what you think about them. So one of the ideas that evolved out of this research was to involve at least two reviewers in the review process. I think this is a good idea, and a lot of research papers actually suggest two reviewers as a better number of reviewers than one person. But we also have to think about the turnaround times, which we really drastically, significantly increase with two reviewers; that's probably double the time of what we normally have. So there's a lot of workload involved, but it's a good measure.

Another idea was to discuss the review criteria with the team and have some standardization. I think this is an excellent way to do it, to really have a very concrete way of knowing what we expect, of having a shared understanding. This is what I'm doing a lot in my coaching workshops as well: working with teams to build this shared understanding, to have this standardization, to have these review criteria in place, and maybe even the review approaches that they use to do the code review.
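The two-reviewer idea could be automated in many ways; here is a minimal sketch of randomly assigning two reviewers while excluding the author. The team roster and function name are hypothetical, not from the study or any particular review tool.

```python
import random

# Hypothetical team roster; in practice this could come from your
# repository settings or a team directory.
TEAM = ["alice", "bob", "carol", "dave", "erin"]

def assign_reviewers(author, team=TEAM, count=2, seed=None):
    """Pick `count` reviewers at random, never including the author.

    Random assignment spreads the review load and keeps authors from
    always choosing reviewers they are on friendly terms with.
    """
    rng = random.Random(seed)
    candidates = [member for member in team if member != author]
    if len(candidates) < count:
        raise ValueError("not enough eligible reviewers")
    return rng.sample(candidates, count)

print(assign_reviewers("alice"))  # two reviewers, never "alice"
```

Randomizing the pairing is one way to counter the relationship bias the study describes, at the cost of sometimes assigning a reviewer with less context on the change.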

Then another one was conversations. Well, there's not a lot of info given on what that means. Having conversations around the review can mean, I think, that we are changing the channel, which is also good: not only giving written feedback, but also getting face to face. And there is actually research, by Kruger et al. for example, that shows that if we have these face-to-face conversations, we reduce our bias.

They also talked about the allocation of teams, about anonymity, so doing anonymous code reviews, and about getting help from the team leader. I think involving somebody else if we have some conflicts and so on can be a good tactic. They also mentioned checking soft skills during onboarding, team hierarchy, and enabling the creator of the pull request to replace the reviewer. I found that a very interesting one. Well, yeah, so that's it, more or less, from this study.
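To make the anonymous-review idea a bit more concrete, here is a hedged sketch that redacts author-identifying fields before a change is shown to a reviewer. The dictionary shape is an assumption for illustration; real tooling would fetch this metadata from the review platform's API.

```python
import re

def anonymize(change):
    """Return a copy of a change with author-identifying fields removed.

    The dictionary shape here is a hypothetical example; real tooling
    would fetch this metadata from the code review platform's API.
    """
    redacted = dict(change)
    name = redacted.pop("author", "")
    redacted.pop("author_email", None)
    redacted.pop("avatar_url", None)
    if name:
        # Scrub the author's name if it leaks into the description.
        redacted["description"] = re.sub(
            re.escape(name), "[author]", redacted.get("description", "")
        )
    return redacted

change = {
    "author": "bob",
    "author_email": "bob@example.com",
    "description": "bob: refactor the parser",
    "diff": "--- a/parser.py\n+++ b/parser.py",
}
print(anonymize(change))
```

Note that full anonymity is hard in practice: on a small team, coding style or the area of the codebase often gives the author away.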

I would like to hear your ideas and your thoughts on these suggestions to reduce the bias in code reviews. As I said, from multiple studies we know there is significant bias in code reviews. There's pushback, often for reasons that are not valid, or reasons that are not connected to the code but more to the person, to what we perceive about the person, and so on, which is obviously not something that we want.

There are a couple of negative impacts from that: we are missing errors from people that we perceive as really good, we are unwilling to review work from some people, or we are giving pushback to people that we don't like, and so on.

So this is really interfering with our objectivity. What I missed a little bit from this work is a look at the positive things, because while it's true that our relationships introduce this negative bias, there are also positive things that come with these relationships. For example, friendships or mentorships might be formed through a code review.

I even had mentorship relationships that were purely based on code reviews; we never really worked together in another capacity than doing code reviews together, looking at pull requests together, communicating via pull requests, and so on. There's also the admiration that you can have for a person. They briefly mentioned that in the study but didn't go really deep into it. So you can really learn a lot

from people, and not only, you know, objectively learn, but also really feel admiration, feel thankful, having a relationship with a person whose code you read, whom you're learning from, even though you don't see them on a day-to-day basis. Some people are completely remote and maybe communicate only via code reviews, and I think they are forming relationships, not only in a negative way but also in a positive way.

Yeah, so that's it for today. I hope you liked this episode. But before you go, don't forget to look at my project, AwesomeCodeReviews.com. Last week, I finished a new article there about the 10 best code review approaches, which directly fits today's topic. Because if we have concrete, systematic code review approaches, not just ad hoc reviewing of code however we feel like that day, but a really systematic and explicit approach,

then we can also reduce the bias in our code reviews, and we can have these shared expectations in our team. So what are those approaches that I'm talking about? Well, code review checklists, which are very well known, I think. But you can also do a change impact analysis, do some cross-referencing, or do a data flow or control flow inspection of the code. All of that makes

code reviews much more systematic, much more explicit. It also means that we have a much more thorough and coherent approach across our team, and we reduce our bias. So yeah, hop over to AwesomeCodeReviews.com and check out this article. I will link it in the show notes, as well as the research article that I talked about today. I hope to see you soon. I can't promise when I will be on here again, because there are a lot of things going on in my life. Anyway,

I will probably be back in a couple of weeks with some news on code reviews or developer experience. So I'm really looking forward to that, and have a great day. This was another episode of the Software Engineering Unlocked podcast. If you enjoyed the episode, please help me spread the word about the podcast.

Send the episode to a friend via email, Twitter, LinkedIn, or whatever messaging system you use. Or give it a positive review on your favorite podcasting platform, such as Spotify or iTunes. It would mean really a lot to me. Thank you for listening, don't forget to subscribe, and I will talk to you in two weeks. Bye!

This transcript was generated by Metacast using AI and may contain inaccuracies. Learn more about transcripts.