While reading Accelerate: Building and Scaling High Performing Technology Organizations, I came across an interesting fact.
“There is no trade off between improving performance and achieving higher levels of stability and quality,” the authors write.
That is, engineering teams can deliver software faster than other teams without sacrificing stability. And that’s exactly what high-performing teams do.
High-performing teams lead across all four measures of software delivery performance that the Accelerate authors identified:
1. Delivery lead time
2. Deployment frequency
3. Time to restore service
4. Change fail rate (what percentage of software changes fail and require immediate fixes or rollbacks)
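To make the four measures concrete, here is a minimal sketch of how a team might compute them from its own deployment log. The records below are entirely hypothetical, and real pipelines would pull these timestamps from version control and incident tooling:

```python
from datetime import datetime

# Hypothetical deployment log: when each change was committed, when it
# was deployed, whether it failed in production, and how long it took
# to restore service if it did.
deploys = [
    {"committed": datetime(2024, 1, 1, 9),  "deployed": datetime(2024, 1, 2, 9),
     "failed": False, "restore_hours": 0},
    {"committed": datetime(2024, 1, 3, 10), "deployed": datetime(2024, 1, 3, 15),
     "failed": True,  "restore_hours": 2},
    {"committed": datetime(2024, 1, 5, 8),  "deployed": datetime(2024, 1, 6, 8),
     "failed": False, "restore_hours": 0},
]

# 1. Delivery lead time: average commit-to-deploy time, in hours.
lead_times = [(d["deployed"] - d["committed"]).total_seconds() / 3600 for d in deploys]
lead_time = sum(lead_times) / len(lead_times)

# 2. Deployment frequency: deploys per day over the observed window.
window_days = (deploys[-1]["deployed"] - deploys[0]["deployed"]).days or 1
frequency = len(deploys) / window_days

# 3. Time to restore service: average restore time across failed deploys.
failures = [d for d in deploys if d["failed"]]
time_to_restore = sum(d["restore_hours"] for d in failures) / len(failures)

# 4. Change fail rate: share of deploys that failed.
change_fail_rate = len(failures) / len(deploys)

print(f"Lead time: {lead_time:.1f} h, frequency: {frequency:.2f}/day, "
      f"restore: {time_to_restore:.1f} h, fail rate: {change_fail_rate:.0%}")
```

The point of tracking all four together, as the book argues, is that improving the first two (tempo) need not come at the expense of the last two (stability).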
And the highest performers are pulling away from the pack. Organizations with high-performing technology teams were consistently twice as likely as low performers to exceed their profitability, market share, and productivity goals. In an increasingly winner-take-all economy, it’s more important than ever to ensure your software teams are outperforming the competition.
So why does the myth that moving quickly hinders a team’s ability to achieve higher levels of stability and quality persist?
I think the idea that speed, stability, and quality are in tension originates from the cost-quality tradeoff triangle.
If you’re unfamiliar, it’s the idea that in any undertaking, you can maximize any one attribute only by sacrificing at least one of the other two. You can have low cost and high quality, as long as you’re willing to wait a long time. Or you can have something fast and cheap, but the quality will suffer. For the vast majority of undertakings, this is an ironclad rule.
But software is different.
In a blog post, Simon J. Ince describes how these measures of delivery performance interact with each other. He uses performance, scalability, and stability as a framework for thinking about the supposed tension, and these map well onto the Accelerate authors’ measures of delivery performance:
Performance and scalability = delivery lead time and deployment frequency
Stability = time to restore service and change fail rate
Ince writes that he often confuses performance, scalability, and stability. “Why? Well, I’m not really confusing them – it’s just that they’re so closely related I don’t think you can consider any one of them in isolation.”
Here’s how you can think about performance, scalability, and stability in software.
Ince also uses a triangle to illustrate the relationship between the three attributes, but his triangle makes the point that these factors are interdependent, not mutually exclusive.
In the cost-quality tradeoff triangle, doing things faster degrades their quality and increases their expense, all else equal.
In software, according to Ince, performance, scalability, and stability rise and fall together.
For example, performance increases stability. Running faster decreases the likelihood that multiple users will be trying to do the same thing at the same time. That decreases the likelihood that they’ll be fighting for scarce resources, like database access. And it means fewer locks, deadlocks, errors, and stability problems. Similarly, a system able to handle more simultaneous processes is faster and more stable by definition.
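One way to see the performance-stability link is through Little’s Law, which says average concurrency equals arrival rate times average latency: cut latency, and you directly cut the number of requests in flight competing for shared resources. A quick illustration with hypothetical numbers (Little’s Law is my framing here, not Ince’s):

```python
# Little's Law: average requests in flight L = arrival rate * average latency.
# Fewer requests in flight means less contention for shared resources
# like database locks, and so fewer deadlocks and errors.

arrival_rate = 50.0  # requests per second (hypothetical)

for latency_s in (2.0, 0.5, 0.1):
    in_flight = arrival_rate * latency_s
    print(f"latency {latency_s:>4}s -> ~{in_flight:.0f} requests in flight")
```

At 2-second latency, roughly 100 requests contend at once; at 100 ms, only about 5 do, so the faster system is also the calmer, more stable one.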
Accelerate offers another example. The authors asked engineers which of the following options best describes the change approval process at their organization:
1. All production changes must be approved by an external body
2. Only high-risk changes require approval
3. We rely on peer review to manage changes
4. We have no change approval process
They assumed organizations with more arduous approval processes would have longer lead times, less frequent deployments, and longer restore times, but that the tradeoff would be a lower change fail rate.
But as it turns out, teams that used peer review or no approval process at all had higher software delivery performance. “In short, approval by an external body simply doesn’t work to increase the stability of production systems.”
The faster and more robust a piece of software is, the more stable it is, and vice versa.
So how do you simultaneously improve performance, scalability, and stability? First, it’s helpful to know how the high-, medium-, and low-performing teams performed on each metric in 2017. Those figures can serve as goals or benchmarks for your team going forward.
Based on what high-performing teams do, the Accelerate authors recommend teams adopt continuous delivery practices.
“Teams should be monitoring delivery performance and helping teams improve it by implementing practices that are known to increase stability, quality, and speed, such as the continuous delivery and Lean management practices described in this book.”
Specifically, teams should adopt practices like comprehensive version control, continuous integration, trunk-based development, test automation, and deployment automation.
Teams that use continuous delivery also tended to identify more strongly with their organization, which is a predictor of organizational performance. And teams that use continuous delivery enjoy statistically significantly lower levels of unplanned work and rework. High performers reported spending 49% of their time on new work and 21% on unplanned work. Low performers spent 38% and 27%, respectively.
The first step to performing better is realizing it’s possible! The idea that performance, scalability, and stability are at odds is a pernicious myth. High-performing teams prove that working to improve any one area can actually improve all three.
Then, consider implementing continuous delivery. For more information, check out Accelerate: Building and Scaling High Performing Technology Organizations.