Git Analytics


When working on client projects, Git analytics metrics help ensure efficient processes, early detection of bottlenecks, and the reliable delivery of quality products.

While Engineering Leads are at the forefront of leveraging these metrics, Team Leads, Developers, and Product Managers in the squads also benefit from the data they provide for their day-to-day activities.

Development activities generate a high number of data points. However, not all of them are meaningful to the type of work done at Nimble, and some of them are contrary to the way the team thinks about development metrics. Therefore, the following metrics have been curated for the value they provide and their alignment with the engineering team’s culture.

While based on industry standards, all the recommended optimal values for the below metrics are specific to Nimble. Other organizations or teams might have different standards.

Delivery Metrics

Cycle Time

Cycle time is the gold standard metric as it measures the total development time from the first commit to the deployment of the code.

An efficient Cycle Time is 2-3 days on average.

In practice, it represents the following standard development flow for developers:

  • Day 1: developer A picks up a task, commits the changes to a feature branch, and opens a pull request. For larger backlog items, the pull request on Day 1 might still be work-in-progress, but a pull request must be opened in any case.
  • Day 2: other developers in the squad pick up the pull request for review, and developer A engages in the code review process.
  • Day 3: the code review process is completed, and the PR is merged.

A cycle time of 4 or more days is a deviation from this standard development workflow. A longer cycle time can stem from Engineering deficiencies (e.g., technical skills, code reviews), Product deficiencies (e.g., stories’ requirements, QA), or client dependencies (e.g., API readiness). Whatever the reason, a longer cycle time can delay the delivery of backlog items and of the sprint as a whole. Therefore, monitoring cycle time allows detecting bottlenecks early.
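
Since Cycle Time is simply the elapsed time between the first commit and the deployment of the corresponding code, it can be computed from two timestamps. The sketch below is a minimal illustration under that assumption; the timestamps are hypothetical placeholders for data that would normally come from the Git host's API and the deployment pipeline.

```python
from datetime import datetime

# Hypothetical timestamps for a single backlog item; in practice these would come
# from the Git host's API and the deployment pipeline.
first_commit_at = datetime(2023, 5, 1, 9, 30)  # first commit on the feature branch
deployed_at = datetime(2023, 5, 3, 16, 0)      # code deployed

cycle_time_days = (deployed_at - first_commit_at).total_seconds() / 86_400

# Flag items that deviate from the 2-3 day standard described above.
if cycle_time_days >= 4:
    print(f"Cycle Time of {cycle_time_days:.1f} days exceeds the 2-3 day target")
else:
    print(f"Cycle Time: {cycle_time_days:.1f} days")
```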

Coding Time

This metric measures the time between the first commit and the moment a pull request is opened in a ready-for-review state.

An efficient Coding Time is less than one day on average.

The shorter, the more it signals that developers work on well-sized stories and do not face any technical or product-related blockers.

Pickup Time

This metric measures the time between the opening of a pull request and its first code review submission.

An efficient Pickup Time is less than one day on average.

The shorter, the more it demonstrates that the squad prioritizes code reviews efficiently. A pickup time under one day ensures that no pull request gets stale for more than 24 hours.

This metric works in tandem with the throughput metric of Pull Requests Merged. Indeed, the shorter the pickup time, the faster pull requests can be merged.

Review Time

This metric measures the time between the first code review for a pull request and when the pull request is merged.

An efficient Review Time is between 1 and 2 days on average.

Similar to the metric of Pickup Time, the shorter, the better. Indeed, it demonstrates that the squad prioritizes code reviews efficiently, with pull request authors being prompt in making the required changes and reviewers being prompt in performing follow-up reviews. It also signals that there are no technical or product-related blockers discovered late in the development process.

While a healthy amount of review comments is beneficial to code quality, lengthy and overly frequent discussions at the pull request stage can also signify that there are issues to address. For instance, they can indicate that a developer has deficient technical skills causing lots of rework or that the acceptance criteria are unclear, not well understood, or worse, incorrect. That is why this metric is often the culprit for an inefficient cycle time and therefore must be monitored very closely.

As a warning, a review time of less than a few hours can signify that the squad performs insufficient code reviews, thus endangering code quality. This metric must therefore be checked in relation to the quality metric of Review Depth.
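
Coding Time, Pickup Time, and Review Time are all differences between pull request lifecycle timestamps, and together with the time from merge to deployment they make up the Cycle Time described above. The sketch below shows one way the three could be computed side by side; the PullRequestTimeline structure and its field names are hypothetical stand-ins for whatever the Git host's API returns.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, List

@dataclass
class PullRequestTimeline:
    """Hypothetical lifecycle timestamps for one pull request (e.g., from a Git host's API)."""
    first_commit_at: datetime
    opened_at: datetime        # pull request opened in a ready-for-review state
    first_review_at: datetime  # first code review submitted
    merged_at: datetime

def average_days(durations: List[timedelta]) -> float:
    """Average a list of durations, expressed in days."""
    total = sum(durations, timedelta())
    return total.total_seconds() / len(durations) / 86_400

def delivery_breakdown(prs: List[PullRequestTimeline]) -> Dict[str, float]:
    return {
        "coding_time_days": average_days([pr.opened_at - pr.first_commit_at for pr in prs]),
        "pickup_time_days": average_days([pr.first_review_at - pr.opened_at for pr in prs]),
        "review_time_days": average_days([pr.merged_at - pr.first_review_at for pr in prs]),
    }

# Example with a single hypothetical pull request following the Day 1-3 flow above.
pr = PullRequestTimeline(
    first_commit_at=datetime(2023, 5, 1, 10, 0),
    opened_at=datetime(2023, 5, 1, 17, 0),
    first_review_at=datetime(2023, 5, 2, 10, 0),
    merged_at=datetime(2023, 5, 3, 15, 0),
)
print(delivery_breakdown([pr]))
```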

While Deploy Time is a metric often used to measure efficient delivery, the team does not generally use it since the squads deliver once per sprint. All deployments are also automated through Continuous Deployment. As a result, Deploy Time is never a bottleneck, thus it does not need to be monitored closely. However, not having a release in a sprint is an issue that can be detected via the Cycle Time metric.

Quality Metrics

Pull Request Size

This metric is based on the lines of code (LOC) in pull requests.

An efficient Pull Request Size is below 300 LOC on average.

Smaller pull requests are reviewed faster and with fewer errors. Therefore, this metric impacts not only code quality but also throughput indirectly.

Closely monitoring the pull request size is also crucial to detect product-management-related issues, e.g., wrong-sized or wrong-scoped stories.
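
How LOC is counted (additions only, or additions plus deletions) depends on the analytics tool in use. As a rough approximation, a pull request's size can be read from `git diff --shortstat` against the merge base of the target branch, as in the sketch below; the branch names are hypothetical and the command must run inside the project repository.

```python
import re
import subprocess

def pull_request_loc(base: str, head: str) -> int:
    """Approximate a pull request's LOC as insertions + deletions reported by
    `git diff --shortstat` between the merge base of `base` and the `head` branch."""
    out = subprocess.run(
        ["git", "diff", "--shortstat", f"{base}...{head}"],
        capture_output=True, text=True, check=True,
    ).stdout
    insertions = re.search(r"(\d+) insertion", out)
    deletions = re.search(r"(\d+) deletion", out)
    return sum(int(match.group(1)) for match in (insertions, deletions) if match)

# Hypothetical branch names; flag pull requests above the 300 LOC guideline.
if pull_request_loc("main", "feature/login") > 300:
    print("Consider splitting this pull request into smaller ones")
```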

Review Depth

This metric measures the average number of comments per pull request review. It indicates the quality of reviews and how thoroughly they are performed.

An efficient Review Depth is above 4 comments on average.

The higher, the more it demonstrates that the squad dedicates enough time to perform detailed code reviews, maintain a high bar for quality, and ensure defects are detected before code is merged, thus reducing the need for rework.

However, similar to Review Time, while a healthy amount of review comments is beneficial to code quality, too many discussions at the pull request stage can signify that there are issues to address either on the side of the pull request’s author or reviewer. Both parties must follow the team conventions for code reviews to get the most benefits from this process.
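
Review Depth reduces to an average of comment counts over pull requests. A minimal sketch, assuming the per-pull-request comment counts have already been collected from the Git host's review API (the numbers below are hypothetical):

```python
from statistics import mean

# Hypothetical review-comment counts for the pull requests merged during a sprint.
comments_per_pull_request = [6, 2, 9, 4, 5]

review_depth = mean(comments_per_pull_request)
if review_depth < 4:
    print(f"Review Depth of {review_depth:.1f} is below the 4-comment guideline")
else:
    print(f"Review Depth: {review_depth:.1f} comments per pull request")
```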

Throughput Metrics

Commits/day

This metric measures the number of daily commits pushed across all the relevant branches in all the project repositories.

While the workflow of each developer varies and the stage of the codebase can have an impact on the velocity of developers, a standard average is no less than 3-5 commits/day per developer.

This metric allows assessing if a developer commits efficiently and follows the team conventions when using Git. Smaller and regular commits are better.

This metric also acts as a sanity check on daily activity. Significant gaps between commits can often be an early indicator that a developer is stuck or not working efficiently on a task.
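
One way to compute this metric is to group pushed commits by author and day across the relevant branches. The sketch below assumes the commit metadata has already been extracted (e.g., from `git log` or a Git host's API); the authors and dates are hypothetical.

```python
from collections import Counter
from datetime import date

# Hypothetical (author, date) pairs for commits pushed across the project's branches.
commits = [
    ("alice", date(2023, 5, 1)), ("alice", date(2023, 5, 1)), ("alice", date(2023, 5, 1)),
    ("bob",   date(2023, 5, 1)),
    ("alice", date(2023, 5, 2)), ("alice", date(2023, 5, 2)),
]

commits_per_developer_per_day = Counter(commits)
for (author, day), count in sorted(commits_per_developer_per_day.items()):
    note = "" if count >= 3 else "  <- below the 3-5 commits/day guideline"
    print(f"{day} {author}: {count} commit(s){note}")
```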

Pull Requests Opened

This metric measures the number of pull requests opened.

An efficient average of Pull Requests Opened is any number close to 1 per developer per day. Given a squad with three developers, an efficient average would thus be 3 per day. Since backlog items come in various sizes, the actual average is usually below one per developer; the closer it gets to one, the better, as it means that each developer can complete the development of at least one backlog item per day.

Since all squads work in one-week or two-week sprints, knowing how many pull requests are opened at any given time (per week, per sprint) allows detecting bottlenecks in the development process. For any work in progress, the earlier the pull request is opened, the earlier squad members can identify implementation issues, thus reducing the risk of delays. Conversely, if a developer commits regularly but does not open pull requests, it can signify that the developer is stuck or not working efficiently on a task.

Pull Requests Merged

This metric measures the number of pull requests merged. It is closely related to Pull Requests Opened and shares all of the same concerns.

An efficient average of Pull Requests Merged is also any number close to 1 per developer per day.

Not only does it detect bottlenecks in the development process, but it also helps address code review issues. Indeed, the squad might be speedy in opening numerous pull requests, thus scoring well on the previous metric, while not managing to merge pull requests efficiently. Deficiencies in this area correlate closely with the delivery metrics of Pickup Time and Review Time.
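
Both throughput ratios are simple counts normalized per developer per day, which makes the gap between them easy to spot. A minimal sketch with hypothetical numbers for a three-developer squad over a one-week sprint:

```python
# Hypothetical counts for a three-developer squad over a one-week sprint (5 working days).
developers = 3
working_days = 5
pull_requests_opened = 13
pull_requests_merged = 11

opened_per_developer_per_day = pull_requests_opened / (developers * working_days)
merged_per_developer_per_day = pull_requests_merged / (developers * working_days)

# Both ratios should trend toward 1. A large gap between the two usually points at
# code review bottlenecks (see the Pickup Time and Review Time delivery metrics).
print(f"Opened: {opened_per_developer_per_day:.2f} per developer per day")
print(f"Merged: {merged_per_developer_per_day:.2f} per developer per day")
```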