Our Metrics
In our development process, we track a set of key performance indicators (KPIs) to evaluate and improve our efficiency, quality, and collaboration. These metrics are aligned with the SPACE framework and are complemented by relevant metrics from DORA and other frameworks.
SPACE Framework Metrics
Performance
- Code Review Velocity: Measures the speed at which code reviews are completed.
- Code Review Approval Rate: Tracks the percentage of code reviews that are approved, broken down by user and team.
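As an illustration, both of these metrics can be derived from the GitHub REST API. The sketch below is a minimal example, assuming a `GITHUB_TOKEN` environment variable and a single repository; pagination and error handling are omitted.

```python
import os
from datetime import datetime

import requests

API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def first_approval_hours(owner: str, repo: str, pr_number: int):
    """Hours from PR creation to first approval, and whether it was approved."""
    pr = requests.get(
        f"{API}/repos/{owner}/{repo}/pulls/{pr_number}", headers=HEADERS
    ).json()
    reviews = requests.get(
        f"{API}/repos/{owner}/{repo}/pulls/{pr_number}/reviews", headers=HEADERS
    ).json()
    approvals = [r for r in reviews if r["state"] == "APPROVED"]
    if not approvals:
        return None, False  # feeds the approval-rate denominator
    opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
    approved = datetime.fromisoformat(approvals[0]["submitted_at"].replace("Z", "+00:00"))
    return (approved - opened).total_seconds() / 3600, True
```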
Activity
- Commits: Counts the number of commits made to the codebase by users and teams.
- Lines of Code: Counts the lines of code changed in commits, broken down by user (see the sketch after this list).
- Deploy Frequency: Measures how often code changes are deployed to production.
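A minimal sketch of the commit-counting side, reusing the `API` and `HEADERS` constants from the sketch above; it tallies commits per author in one repository over a given window (pagination again omitted):

```python
from collections import Counter

def commits_per_user(owner: str, repo: str, since: str, until: str) -> Counter:
    """Count commits per GitHub login between two ISO-8601 timestamps."""
    commits = requests.get(
        f"{API}/repos/{owner}/{repo}/commits",
        headers=HEADERS,
        params={"since": since, "until": until, "per_page": 100},
    ).json()
    # `author` is null when the commit email is not linked to a GitHub account.
    return Counter(
        c["author"]["login"] if c.get("author") else "unknown" for c in commits
    )
```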
Communication & Collaboration
- PR Merge Times: Tracks the time taken to merge pull requests.
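For example, merge time per pull request falls out of the `created_at` and `merged_at` fields (a minimal sketch, with the same assumptions as the earlier examples):

```python
def merge_times_hours(owner: str, repo: str) -> list[float]:
    """Hours from PR creation to merge, for recently closed PRs."""
    pulls = requests.get(
        f"{API}/repos/{owner}/{repo}/pulls",
        headers=HEADERS,
        params={"state": "closed", "per_page": 100},
    ).json()
    times = []
    for pr in pulls:
        if pr["merged_at"]:  # closed-but-unmerged PRs have merged_at == null
            opened = datetime.fromisoformat(pr["created_at"].replace("Z", "+00:00"))
            merged = datetime.fromisoformat(pr["merged_at"].replace("Z", "+00:00"))
            times.append((merged - opened).total_seconds() / 3600)
    return times
```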
Efficiency & Flow
- Code Review Timing: Measures the duration of the code review process.
DORA and Other Metrics
Efficiency
- Merge Frequency: Measures the number of merges per developer per week.
- Deploy Time: Measures the time taken to deploy code changes.
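A sketch of the merge-frequency calculation, grouping merged pull requests by author and ISO week (same assumptions as above):

```python
from collections import defaultdict

def merges_per_dev_per_week(owner: str, repo: str) -> dict:
    """Map (login, ISO year-week) -> number of merged PRs."""
    pulls = requests.get(
        f"{API}/repos/{owner}/{repo}/pulls",
        headers=HEADERS,
        params={"state": "closed", "per_page": 100},
    ).json()
    counts = defaultdict(int)
    for pr in pulls:
        if pr["merged_at"]:
            merged = datetime.fromisoformat(pr["merged_at"].replace("Z", "+00:00"))
            year, week, _ = merged.isocalendar()
            counts[(pr["user"]["login"], f"{year}-W{week:02d}")] += 1
    return counts
```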
DORA Metrics
- Deployment Frequency: Measures how often code changes are deployed to production.
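Deployment frequency can be approximated from the deployments API; the sketch below assumes deployments are recorded against a `production` environment:

```python
def deployments_per_day(owner: str, repo: str) -> Counter:
    """Count production deployments per calendar day."""
    deployments = requests.get(
        f"{API}/repos/{owner}/{repo}/deployments",
        headers=HEADERS,
        params={"environment": "production", "per_page": 100},
    ).json()
    return Counter(d["created_at"][:10] for d in deployments)  # YYYY-MM-DD prefix
```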
Quality and Predictability
- PR Size: Measures the number of changed lines (additions plus deletions) in a pull request.
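The GitHub API exposes PR size directly as `additions` and `deletions` on the pull request object; for example:

```python
def pr_size(owner: str, repo: str, pr_number: int) -> int:
    """Total changed lines (additions + deletions) in a pull request."""
    pr = requests.get(
        f"{API}/repos/{owner}/{repo}/pulls/{pr_number}", headers=HEADERS
    ).json()
    return pr["additions"] + pr["deletions"]
```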
Others
- When We Are Working: Tracks when commits and pull requests are created, by hour and day of the week, to monitor activity and performance with efficiency and developer well-being in mind. For example, it lets us identify who works outside regular hours or on weekends (see the sketch after this list).
- Teams Contributions by Commits: Tracks commit contributions by team, showing each team's percentage share of total commits over time.
- Pull Request Status Alerts: Monitors the number of pull requests that remain open, or were closed without being merged, by month and year. This helps us catch code that was never pushed to a production environment.
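The working-hours metric boils down to bucketing commit timestamps by weekday and hour. A minimal sketch follows; in practice the timestamps should first be converted to the team's local timezone, which is omitted here:

```python
def activity_by_weekday_hour(owner: str, repo: str) -> Counter:
    """Map (weekday, hour) -> commit count; weekday 0 is Monday."""
    commits = requests.get(
        f"{API}/repos/{owner}/{repo}/commits", headers=HEADERS, params={"per_page": 100}
    ).json()
    buckets = Counter()
    for c in commits:
        ts = datetime.fromisoformat(c["commit"]["author"]["date"].replace("Z", "+00:00"))
        buckets[(ts.weekday(), ts.hour)] += 1
    return buckets
```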
These metrics provide us with valuable insights into our development process, helping us identify areas for improvement and track our progress over time.
GitHub Copilot Metrics
Seat usage
- This metric measures the number of seats or licenses being used compared to the total available. It is useful for evaluating resource utilization efficiency.
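As a sketch, seat usage can be read from GitHub's Copilot billing endpoint for an organization; the field names below follow the documented `seat_breakdown` shape, and the token needs the appropriate organization scope:

```python
def copilot_seat_usage(org: str) -> float:
    """Fraction of Copilot seats active in the current billing cycle."""
    billing = requests.get(f"{API}/orgs/{org}/copilot/billing", headers=HEADERS).json()
    breakdown = billing["seat_breakdown"]
    return breakdown["active_this_cycle"] / breakdown["total"]
```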
Activity per user/team per day
- This metric monitors the number of actions or tasks performed by each user or team in a day. It helps understand productivity and daily engagement with the tool.
IDE per user/team
- This metric measures the use of Integrated Development Environments (IDEs) by user or team. It is useful for understanding preferences and usage patterns of development tools (a sketch covering both IDE and language breakdowns follows the language metric below).
Sentiment analysis
- This metric evaluates developer sentiment towards the tool, collected through a weekly survey. It is used to compute the tool's Net Promoter Score (NPS).
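NPS itself is a simple formula: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A minimal sketch over raw survey scores:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score from 0-10 survey responses."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 7, 6, 10]))  # 3 promoters, 1 detractor -> 33.3
```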
Language per user/team
- This metric tracks the programming languages used by each user or team. It is useful for identifying which programming languages are being used the most.
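Both the IDE and language metrics can be derived from the per-day `breakdown` entries of the Copilot usage endpoint. The sketch below assumes the original `GET /orgs/{org}/copilot/usage` endpoint and its field names; GitHub has since introduced a newer `/orgs/{org}/copilot/metrics` API, so treat this as illustrative:

```python
def acceptances_by_language(org: str) -> Counter:
    """Accepted suggestions per language across the returned usage window."""
    days = requests.get(f"{API}/orgs/{org}/copilot/usage", headers=HEADERS).json()
    counts = Counter()
    for day in days:
        for entry in day.get("breakdown", []):
            # Each breakdown entry also carries an `editor` field for the IDE metric.
            counts[entry["language"]] += entry["acceptances_count"]
    return counts
```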
% Suggestion accepted
- This metric measures the percentage of suggestions accepted by users out of the total number of suggestions made by Copilot. It indicates how receptive developers are to the suggestions delivered (a sketch covering both acceptance percentages follows the next metric).
% Lines accepted
- This metric measures the percentage of suggested lines of code that are accepted into the developer's code. It helps evaluate the quality and acceptance of the suggestions.
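Both acceptance percentages fall out of the same usage payload; a minimal sketch, with the same endpoint and field-name assumptions as above:

```python
def acceptance_rates(org: str) -> tuple[float, float]:
    """(suggestion acceptance %, line acceptance %) over the usage window."""
    days = requests.get(f"{API}/orgs/{org}/copilot/usage", headers=HEADERS).json()
    suggested = sum(d["total_suggestions_count"] for d in days)
    accepted = sum(d["total_acceptances_count"] for d in days)
    lines_suggested = sum(d["total_lines_suggested"] for d in days)
    lines_accepted = sum(d["total_lines_accepted"] for d in days)
    return 100 * accepted / suggested, 100 * lines_accepted / lines_suggested
```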
% Suggestion by chat
- This metric measures the percentage of suggestions that are made through GitHub Copilot Chat. It is useful for understanding how much of Copilot usage comes through Chat rather than inline completions.
Daily usage heatmap of GitHub Copilot
- This metric provides a visual representation of the daily usage of GitHub Copilot. It helps identify peak usage times and patterns over a period of time.
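Such a heatmap can be rendered from weekday/hour buckets like those computed in the working-hours sketch earlier; a minimal example with matplotlib:

```python
import matplotlib.pyplot as plt
import numpy as np

def plot_heatmap(buckets: dict) -> None:
    """Render a 7x24 activity heatmap from (weekday, hour) -> count buckets."""
    grid = np.zeros((7, 24))
    for (day, hour), count in buckets.items():
        grid[day, hour] = count
    fig, ax = plt.subplots()
    ax.imshow(grid, aspect="auto")
    ax.set_yticks(range(7))
    ax.set_yticklabels(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"])
    ax.set_xlabel("Hour of day")
    fig.savefig("usage_heatmap.png")
```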