Metrics That Matter - Moving from What is Easy to What is Impactful
Metrics - for many, it may seem like a dirty word. We know velocity is not an interesting metric, but we can’t use that as an excuse; behind every organization asking about ‘improving their velocity’ is an honest concern: “How do we know this change is working?” It is a valid concern, and right after that question comes the metrics question.
Let’s be clear - the best metrics provide insights that help us take action to improve the current situation. Metrics give us data to answer a question: should we continue to invest? Should we change our investment? Should we stop investing?
Sadly, metrics are often arbitrary or they tell an incomplete story. Single metrics fail to capture the interplay and tradeoffs between different metrics. We’ve heard many stories of how organizations optimizing for one metric created detrimental results overall. (We’re looking at you, capacity utilization.)
So, What Should We Measure?
The majority of our work is helping teams learn and making their work easier. The primary goal is to foster learning. We need to measure the effectiveness of that learning and, ultimately, the economic impact that learning has on the organization. But it’s not learning at any cost. We’re aligned with Don Reinertsen on this point.
In product development, neither failure, nor success, nor knowledge creation, nor learning is intrinsically good. In product development our measure of “goodness” is economic: does the activity help us make money - usually by making work easier or making better products?
We have started organizing our metrics into three groups to help with this discussion as not every organization is ready to jump to the deep end on metrics. Our goal is to help orient stakeholders, leaders, and teams around what actions these metrics will help them take. We also want to help them understand the level of effort required to collect the metrics and the timeframes in which they will be available.
THREE CATEGORIES OF METRICS
Simple To Capture
These metrics simply show the amount of “activity” in an area.
- Number of teams doing a ‘practice’ (maybe agile)
- Total number of certified people
- Code Coverage
Astute readers may critically call these “vanity metrics” and they would not be wrong. These metrics do not equate to impact. They don’t help us answer the questions “Were the practices helpful?”, “Were the certifications applicable?”, or “Did the code coverage help with future changes?”
However, these metrics are simple to collect and can be used as leading indicators once we know some ideas are working. For many organizations, these metrics are important because they imply value early on, even though they don't prove it. They are metrics everyone is comfortable with. But comfort does not equal good.
Harder To Capture – Directional/Team Based Improvements
Metrics in this category are more important than those in the previous one in that they look at the direction of a change and the impact it is having.
- Distribution of automated tests over time
- SQALE code quality index
- Percentage reduction in defects
- Cycle time reduction to deliver a product increment
Again, these metrics are far from perfect. The testing related metrics do not prove the right tests were written (or the right code for that matter). Metrics showing products were built faster don’t shed any light on whether those products should have been built in the first place (what if nobody buys them?).
What these metrics do show is the adoption of product delivery practices - practices that our experience, and the experiences of other organizations, have shown to have a positive impact on lifecycle profits. These metrics can be collected with agile project management software, SonarQube, Hygieia, or other comparable tools.
When we use these types of metrics we need to have a baseline. It’s helpful to have data for teams for two to three months prior to introducing a change.
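As a sketch of what establishing that baseline might look like, assuming cycle times are exported as start/finish dates per work item (the item data, field layout, and change date below are all hypothetical):

```python
from datetime import date

# Hypothetical work items as (started, finished) dates exported from a
# project-management tool. All values here are illustrative only.
items = [
    (date(2023, 1, 3), date(2023, 1, 12)),
    (date(2023, 1, 9), date(2023, 1, 20)),
    (date(2023, 2, 1), date(2023, 2, 6)),
    (date(2023, 4, 4), date(2023, 4, 8)),
    (date(2023, 4, 10), date(2023, 4, 13)),
]

# The date the new practice was introduced (hypothetical).
change_date = date(2023, 3, 1)

def cycle_days(started, finished):
    """Elapsed calendar days from start to finish of one work item."""
    return (finished - started).days

before = [cycle_days(s, f) for s, f in items if f < change_date]
after = [cycle_days(s, f) for s, f in items if f >= change_date]

def median(xs):
    xs = sorted(xs)
    mid = len(xs) // 2
    return xs[mid] if len(xs) % 2 else (xs[mid - 1] + xs[mid]) / 2

baseline = median(before)
current = median(after)
print(f"baseline median cycle time: {baseline} days")
print(f"current median cycle time:  {current} days")
```

The point is not the arithmetic but the discipline: without the two-to-three months of pre-change data in `before`, the `after` numbers have nothing to be compared against.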
Difficult To Capture – Impact/Economic Improvements
Metrics in this group are challenging - not only to collect but also because using them to drive action challenges the way many organizations work. These are the metrics that force us to look at the question “Is this initiative having a positive economic impact on the organization?”
- Increase in sales conversion
- Cycle time reduction for a delivery with impact (not just delivery, but a delivery that mattered)
- Systematic cost reductions (not silo optimizations that may have detrimental effects in other areas)
- Savings resulting from killing bad product ideas early in the discovery/delivery cycle
Metrics like these can prove initiatives are having a positive impact on lifecycle profits. These metrics will be substantially harder to collect. We need to collect data for a much longer period of time. We need to align with the finance department in our organizations. And, we need whole product communities aligned around a shared understanding of what successful outcomes look like. In addition, we need to understand how to separate real signals of change from noise. (This post has more on that topic.)
Ultimately, this last category of metrics is what drives concrete decisions.
Maybe it is Simpler
Call it the hippie in us, but economic profits are not the primary motivation for many, including us.
Yes, these metrics are hard: hard to collect and hard to judge. They require us to admit, at times, that we are wrong.
What if it were easier? What if we simply asked, ‘Are we making it easier to build things people love?’ What questions would that drive us to ask, and what changes would we make? We imagine a world where people are connected to their users and genuinely want to help them (even without writing software!). A world where managers focus on making it easy for people to do their work and build things they love. A world where executives focus on truly serving rather than dictating from a distance, removed from reality. A simpler, more interesting world.
But that would be crazy, go make your velocity higher ;)