Collaboration metrics represent the parts of a team's workflow where one member's actions depend on another's. Tracking these habits and the relationships between actions can help team leads understand collective behavior and reduce bottlenecks in the process.
Since reviewer and submitter responsibilities differ, the metrics are categorized separately for each side.
On the submitter side, the metrics are broken down into the following four metrics to provide better insight:
- Responsiveness
- Receptiveness
- Unreviewed PRs
- Comments Addressed
Responsiveness measures whether team members respond to a review within an acceptable period. It is the average time from a reviewer's interaction with the PR to the submitter's code update or reply to that review.
Responsiveness should be kept as low as possible; depending on development frequency and workload, keeping it around half a business day is generally considered good practice.
When setting an acceptable responsiveness target, teams working across different time zones should be taken into account. Even for distributed teams, however, responsiveness should not exceed 24 hours.
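As a rough sketch of the calculation, responsiveness can be computed as the average gap between a reviewer's interaction and the submitter's next response. The event pairs below are hypothetical; real data would come from your Git provider's timeline.

```python
from datetime import datetime, timedelta

# Hypothetical (review_time, response_time) pairs: each pair is a
# reviewer's interaction and the submitter's first update after it.
events = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 12, 0)),   # 3h gap
    (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 2, 15, 0)),  # 5h gap
]

def avg_responsiveness(pairs):
    """Average time from a review to the submitter's response."""
    deltas = [response - review for review, response in pairs]
    return sum(deltas, timedelta()) / len(deltas)

print(avg_responsiveness(events))  # 4:00:00
```

A production version would also need to decide whether to count only business hours, which matters when the half-business-day target is the goal.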
Receptiveness tracks the commits that happen after review comments.
This is a tricky metric to follow because what it reflects can be interpreted differently. After all, not all comments require the submitter to change the code.
However, if receptiveness is very low, it may mean that the developer is not open to the reviewers' suggestions at all.
On the other hand, high receptiveness can mean that the developer does not trust the code they push: they rely entirely on the reviewer's evaluation and leave obvious bugs to be caught in the review and testing processes.
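One plausible way to quantify this is the share of review comments that were followed by a new commit. The data model below is an assumption for illustration, not a formula stated above.

```python
# Hypothetical comment records: `commit_after` marks whether the
# submitter pushed a commit after that comment.
comments = [
    {"id": 1, "commit_after": True},
    {"id": 2, "commit_after": False},
    {"id": 3, "commit_after": True},
    {"id": 4, "commit_after": True},
]

def receptiveness(comments):
    """Share of review comments followed by a new commit."""
    acted_on = sum(1 for c in comments if c["commit_after"])
    return acted_on / len(comments)

print(f"{receptiveness(comments):.0%}")  # 75%
```

Because not every comment requires a code change, the interesting signal is the extremes (very low or very high), not the exact percentage.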
Unreviewed PRs show the percentage of PRs that were never subject to the review process.
This can identify whether PRs are getting an adequate level of review during development.
The problem is that code added without review can introduce bugs, which causes heavy traffic between the development and test departments and eventually delays development.
The best practice is to set up a workflow that does not allow any unreviewed PRs, or to be notified whenever such a PR appears in product development.
We strongly suggest that team leaders eliminate this type of PR, so this is a crucial metric to track constantly.
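The percentage itself is straightforward to compute. The PR records below are hypothetical; `reviewed` would be derived from whether the PR received at least one review before merge or close.

```python
# Hypothetical PR records: `reviewed` marks whether the PR received
# at least one review.
prs = [
    {"id": 101, "reviewed": True},
    {"id": 102, "reviewed": False},
    {"id": 103, "reviewed": True},
    {"id": 104, "reviewed": True},
]

def unreviewed_pr_rate(prs):
    """Percentage of PRs that bypassed the review process."""
    unreviewed = sum(1 for pr in prs if not pr["reviewed"])
    return 100 * unreviewed / len(prs)

print(unreviewed_pr_rate(prs))  # 25.0
```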
The Comments Addressed metric demonstrates whether comments are acknowledged by team members.
This is the percentage of comments that trigger action as a reply or code change.
If a reviewer takes the time to write a comment, it is most likely worth at least a response. However, this metric should not be read as a definitive conclusion; use it as one parameter for understanding general behavior.
The best practice is to keep this number high. Unlike Responsiveness, it shows whether the submitter takes any action on the reviewer's comment, not how quickly a response arrives.
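Following the definition above (a comment counts as addressed if it triggered a reply or a code change), a minimal sketch with hypothetical comment records looks like:

```python
# Hypothetical comment records: a comment is addressed if it triggered
# a reply or a code change.
comments = [
    {"id": 1, "reply": True,  "code_change": False},
    {"id": 2, "reply": False, "code_change": True},
    {"id": 3, "reply": False, "code_change": False},
    {"id": 4, "reply": True,  "code_change": True},
]

def comments_addressed(comments):
    """Percentage of comments that triggered a reply or code change."""
    addressed = sum(1 for c in comments if c["reply"] or c["code_change"])
    return 100 * addressed / len(comments)

print(comments_addressed(comments))  # 75.0
```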
On the reviewer side, the metrics are broken down into the following three metrics to provide better insight:
- Reaction Time
- Involvement
- Influence
Reaction time reflects how long PRs stay open before receiving a review.
Combining reaction time with Responsiveness highlights issues in the review process that keep PRs open longer than expected.
Reaction time is calculated by dividing the total time from the submitter's PR actions to the first responses taken by reviewers by the total number of responses provided by reviewers.
Reviewer responses include a comment, review, approval, merge, or close.
Submitter PR actions include a commit, comment, merge, or close.
Try to keep reaction time low to achieve a faster cycle time, which will give your team a better development process and let it ship new features faster.
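Following the calculation described above, a sketch for a single PR would sum the wait from the submitter's action to each reviewer's first response and divide by the number of responses. The timestamps are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical event times for one PR: the submitter's action time and
# each reviewer's first response time.
pr_action = datetime(2024, 5, 1, 9, 0)
first_responses = [
    datetime(2024, 5, 1, 11, 0),  # reviewer A, 2h later
    datetime(2024, 5, 1, 15, 0),  # reviewer B, 6h later
]

def reaction_time(action, responses):
    """Total wait time divided by the number of reviewer responses."""
    total = sum((r - action for r in responses), timedelta())
    return total / len(responses)

print(reaction_time(pr_action, first_responses))  # 4:00:00
```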
Involvement is a ratio showing whether reviewers own their reviewer role. It is also valuable for identifying which reviewers are more involved than others.
The expected level of involvement can differ between the individual and team perspectives: a reviewer may attend to most of the PRs addressed to them yet still play a small role in the overall review process.
Therefore, involvement should be evaluated from the perspective you are looking from.
Be aware that each reviewer may only be asked to review PRs within their specialty, so a low involvement ratio does not always mean poor review performance; they may simply have fewer PRs addressed to them.
It is reasonable to expect team leads and experienced developers to have higher involvement, to keep the process solid and reduce issues in the development cycle.
The involvement ratio is calculated by dividing the number of PRs reviewed by a contributor by the total number of PRs reviewed.
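The formula above can be sketched directly. The per-contributor review counts here are hypothetical.

```python
# Hypothetical review counts: number of PRs reviewed per contributor.
reviews_by_contributor = {"alice": 12, "bob": 6, "carol": 2}

def involvement(contributor, reviews):
    """PRs reviewed by a contributor / total number of PRs reviewed."""
    return reviews[contributor] / sum(reviews.values())

print(f"{involvement('alice', reviews_by_contributor):.0%}")  # 60%
```

As noted above, a low ratio for one reviewer may simply reflect fewer PRs addressed to them, so compare the ratio against their share of review requests, not only against teammates.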
The Influence metric measures whether the submitter is influenced by the reviewer's comments and changes the code (pushes new commits) after the review.
Influence is another insight that may indicate different perspectives for submitters and reviewers.
High influence is not necessarily a plus for the reviewer: it may mean the submitter is either not making the right changes or not capable of standing up for the changes they make. They may simply go with the flow even when they have a reason to implement things a specific way.
Team leaders and experienced developers can be expected to have a higher influence rate than others. When such users have low influence, it may indicate either that the reviews do not warrant changes or that submitters are not acting on the comments.
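One plausible reading of this metric (the exact formula is not stated above, so this is an assumption) is the share of a reviewer's reviews that were followed by new commits from the submitter:

```python
# Hypothetical review records for one reviewer: `commits_after` marks
# whether the submitter pushed new commits after that review.
# This formula is an assumed reading of the metric, for illustration.
reviews = [
    {"pr": 1, "commits_after": True},
    {"pr": 2, "commits_after": True},
    {"pr": 3, "commits_after": False},
    {"pr": 4, "commits_after": True},
]

def influence(reviews):
    """Share of a reviewer's reviews followed by new commits."""
    followed = sum(1 for r in reviews if r["commits_after"])
    return followed / len(reviews)

print(f"{influence(reviews):.0%}")  # 75%
```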
Review Collaboration is a report that lists every submitter and reviewer, together with the submitting or reviewing actions taken by each team member.
It presents the number of commits by each user on the submitter side, and shows which users reviewed each submitter's PRs on the reviewer side.
By hovering over users on either side, the relationship between reviewers and submitters can also be evaluated.
Even at first glance, this report surfaces the most active submitters and reviewers, making it easy to identify the team members actively involved in the process. If any user should take on more responsibility on either side, proper adjustments can be made in light of the information provided here.