Performance in the Software Industry: Principles of Measurement and How to Optimize Your Business

July 25, 2019 · 11 min read

Recently, I read Accelerate: The Science of Lean Software and DevOps by Dr. Nicole Forsgren, Jez Humble, and Gene Kim, a book recommended by Martin Fowler on his blog. Nothing in it was really a revelation to me, but it put into words what I was already thinking about.

Through four years of groundbreaking research, Dr. Nicole Forsgren, Jez Humble, and Gene Kim set out to find what drives software delivery performance—and how to measure it—using rigorous statistical methods. Put simply, the book tries to explain what makes a software company perform efficiently and how that performance can be measured. As most of us in the field are well aware, performance in the software industry has always been a source of debate and contradictions. It’s refreshing to finally discover a method to evaluate performance in a way that makes sense and has a positive impact, and that uses actual data and measurement to validate our assumptions or disprove certain preconceived ideas.

Can we fully relate to the authors’ analysis? If you ask me, a software developer for 20+ years, the answer is yes—100%. The picture they paint is familiar: from the delivery pain and the long, inefficient development cycles, to the estimations that were required during most of my career as an employee, to the pressure of impossible deadlines and lousy performance measurement (number of lines of code or number of bugs per individual, anyone?). And I haven't even mentioned the environments of constant interruptions, or having to split one’s time as a developer among several different projects. I’ve seen all of that.

Now what Accelerate says is that software development could be more systematic, and the people involved don’t have to (literally) break their backs just to turn in what could be considered a ‘good’ performance.





Software is dominating the world


If you are a tech-enabled company and your software is inefficient, you are dead (or at least falling behind). No company can continue to do business as usual in that case. Enterprises, after all, have to continuously improve every aspect of their IT, because all their competitors are doing it. This is the first takeaway I got from the book. In fact, all the companies surveyed during the study introduced improvements in their IT initiatives.

But then again, there’s often a disconnect between executive and practitioner estimates of IT maturity and progress. Executives tend to overestimate the technological maturity of their company when in reality, down in the machine room, the practitioners are desperate for improvement. Consequently, it is crucial to communicate the reality of the situation up the hierarchy in a precise and measurable way.

Another thing going against the industry is that it’s plagued with misconceptions. One of them is the fallacy that says, “We missed the deadline because we are too slow.” The truth is that numerous variables affect that equation, and the most significant is that humans are pretty bad at predicting pretty much anything. So the assumption that we are ‘slow’ probably means the estimation was inaccurate in the first place.

One other common misconception is the idea that “doubling the people in the team will ensure that the work progress will be twice as fast.” This is simply not true. Experience tells us that adding more developers to a project that’s already falling behind may even be counterproductive and cause the work to get more delayed.

Estimations are useful primarily for making decisions about the path the development should take. Beyond that, for a very long time we have looked for an effective way to measure what went well and what went wrong during project development. Below we discuss these measurements.



Measuring performance


Measurement is often misunderstood and hard to implement because, in the software world, there is no fixed inventory of tasks or goods to work with. An initial inventory of tasks or development plan can quickly change and evolve for reasons that are often unpredictable, such as market changes.



So far, attempts to measure performance in the software industry have consistently failed because people focused on the output (e.g., worked hours matching the estimation, number of lines of code) instead of the outcome (i.e., working software), and on the individual instead of the team (e.g., flaws or shortcomings of each team member).

In autocratic (power-oriented) and bureaucratic (rule-oriented) companies, measurements often result in unfavorable outcomes, and so one needs to be careful with them—perhaps consider changes in the corporate culture before starting to calculate performances.

Also, it is easy for a developer to create a false impression of performance. Where there is fear, you'll be sure to find gamed metrics. To solve that problem, focus on good metrics. Bad metrics can be gamed to present good results even when the outcome didn't improve. Good metrics are ones that, even when gamed to show improvement, still lead to a positive outcome.

To get a better understanding of this, let’s first look at the four software delivery performance metrics described in the book, which it recommends the industry adopt:


1. Deployment frequency (number of deployments per cycle)


Define a cycle or period (for example, a sprint) and count the number of deployments you make during that time—the more deployments, the better. There are three aspects to this.

First, increasing the number of deployments per period forces the team to quickly deliver software that works at all times. To do that, past a certain point, the team will have to use automation to keep quality high and discover bugs early in the development life cycle.

Second, by making small, incremental improvements, we shorten the feedback loop. End users and customers can see continuous improvements on the platform, provide feedback, and have the assurance that the product is evolving.

Third, by decreasing the size of the release, we also minimize the impact of the changes made. Keeping everything small is a good way to stay in an environment that is humanly understandable and manageable.

For users, the worst thing is knowing that a bug in the software they’re using won’t be resolved for months to come. For developers, on the other hand, it is not knowing whether the six months invested in development time will actually work or make sense. Long cycles also destroy the client’s or end user’s trust in the company’s ability to deliver something that works properly and solves their problems.

For that reason, the old-school monolithic approach that leans toward heavy administrative and documenting processes is definitely not the industry standard anymore. We want to always move forward—and fast.
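Before moving on, here is a minimal sketch of the counting itself, assuming you keep a simple log of deployment dates; the data, the two-week sprint length, and the function name below are invented for illustration:

```python
from datetime import date, timedelta

# Hypothetical deployment log: one entry per production deployment.
deployments = [
    date(2019, 7, 1), date(2019, 7, 3), date(2019, 7, 10),
    date(2019, 7, 16), date(2019, 7, 22), date(2019, 7, 24),
]

def deployment_frequency(deploys, sprint_start, sprint_length):
    """Count deployments per sprint, indexed from the first sprint."""
    counts = {}
    for day in deploys:
        sprint = (day - sprint_start) // sprint_length  # 0-based sprint index
        counts[sprint] = counts.get(sprint, 0) + 1
    return counts

# Assuming two-week sprints starting July 1st.
print(deployment_frequency(deployments, date(2019, 7, 1), timedelta(days=14)))
# -> {0: 3, 1: 3}: three deployments in each of the two sprints
```

In practice, you would pull these timestamps from your CI/CD tool rather than hard-coding them.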


2. Lead time (time to deliver a requested feature)


Following what was said above, we want to release new features quickly so we can gather feedback from our customers just as quickly. To calculate this metric, you can use the creation date of the feature ticket and the date when it was marked released. Sometimes it’s better to use the date when the ticket was moved to an active sprint or selected for development.
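As a rough illustration, here is what that calculation could look like, assuming each ticket exposes its creation and release dates; the sample data is invented, and taking the median (rather than the mean) to blunt outliers is my choice, not the book’s prescription:

```python
from datetime import date
from statistics import median

# Hypothetical feature tickets: (created, released) date pairs.
tickets = [
    (date(2019, 6, 3), date(2019, 6, 12)),
    (date(2019, 6, 5), date(2019, 6, 28)),
    (date(2019, 6, 10), date(2019, 7, 1)),
]

# Lead time of each ticket, in days.
lead_times = [(released - created).days for created, released in tickets]

print(f"median lead time: {median(lead_times)} days")
# -> median lead time: 21 days
```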

3. Mean time to restore (time to fix a failure)


Following the two metrics above, we want to fix problems quickly when they arise. This metric gives you a good idea of how fast you’re able to fix a failure and release the fix. To calculate it, you can use the creation date of the bug ticket and the date when the fix was marked released to production.
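A sketch in the same spirit, with invented incident timestamps; here the mean is used, as the metric’s name suggests:

```python
from datetime import datetime

# Hypothetical incidents: (bug reported, fix released) timestamp pairs.
incidents = [
    (datetime(2019, 7, 2, 9, 30), datetime(2019, 7, 2, 14, 0)),
    (datetime(2019, 7, 8, 16, 0), datetime(2019, 7, 9, 10, 0)),
]

# Time to restore for each incident, in hours.
hours_to_restore = [(fixed - reported).total_seconds() / 3600
                    for reported, fixed in incidents]

mttr = sum(hours_to_restore) / len(hours_to_restore)
print(f"mean time to restore: {mttr:.1f} hours")
# -> mean time to restore: 11.2 hours
```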



4. Change fail percentage (proportion of changes that cause a failure)


Change fail percentage is the counterweight to the “fail fast” adage: we want to increase the deployment frequency, but we don’t want to increase the number of bugs or failures along with it. This metric forces the development team to adopt good, automated quality assurance practices such as automated build, testing, integration, and deployment.

To calculate this metric, you can simply count the number of bugs created after a release. Then you can refine the list as you go along by determining the cause of the bug(s).
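A minimal sketch, assuming you can tag each release with whether a failure was later traced back to it (the release log below is invented):

```python
# Hypothetical release log: for each deployment, whether a bug or rollback
# was later traced back to it.
releases = [
    {"version": "1.4.0", "caused_failure": False},
    {"version": "1.4.1", "caused_failure": True},
    {"version": "1.4.2", "caused_failure": False},
    {"version": "1.5.0", "caused_failure": False},
]

failures = sum(r["caused_failure"] for r in releases)
change_fail_percentage = 100 * failures / len(releases)

print(f"change fail percentage: {change_fail_percentage:.0f}%")
# -> change fail percentage: 25%
```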

The reason I think these metrics are sound is not that you can’t fake them (it’s easy to do more deployments than usual to improve the metric), but that the outcome of faking them is still positive—more deployments shorten the feedback loop. And those deployments cannot be of poor quality either; otherwise, the change fail percentage will rise accordingly. And nobody wants that.



“The Score Takes Care of Itself”


In his famous book “The Score Takes Care of Itself,” Bill Walsh explains that it’s more important to define the principles that drive the outcome than to try to predict or measure the score, because those principles will inevitably lead you to success.

For this concept to be relevant in our industry, you obviously have to know what those principles are first before you can apply them. And such knowledge only comes from experience and consulting with other experts.

I would like to emphasize this point, as I think that too often, leaders focus on measurement when they should actually focus on improvement. That’s especially true when you measure with flawed metrics, which, as explained above, can lead you in the wrong direction. You should also be careful with short-term improvements that could backfire in the long run.

All that said, let’s now take a look at the standards that your company can put in place or leverage (or that you can ask your software provider to implement) to ensure that performance improves and your development succeeds.



Principles for improving performance


If all the issues on performance improvement can be summed up in one sentence, it would be this:

"Strong DevOps drives successful software delivery."

So then, how do we determine what constitutes strong DevOps? The principles below (and the specific practices that define them) should serve as a valuable guide, to start with:

Continuous delivery

  • Version control. If you haven’t already jumped on board, let’s just say you are a bit late. Version control is non-negotiable. After all, GitHub was recently acquired by Microsoft for $7.5B for a pretty good reason.
  • Trunk-based development. Keep branches short-lived and merge them into the main branch frequently.
  • Automated deployment process. Automate the deployment process to avoid human error.
  • Continuous integration. Whenever a change is made, the application is automatically integrated and tested.
  • Test automation. Automate your tests and make them fast, reliable, and isolated, following the test pyramid (a small sketch follows this list).
  • Test data management. Have a set of up-to-date test data that cover a wide range of use cases.
  • Security from Day 1. Include security concerns starting from day 1, as later on they might become harder and more expensive to implement.
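To make “fast, reliable, and isolated” concrete, here is a minimal sketch of a unit test at the base of the pyramid, written with pytest; the discount rule under test is invented for illustration:

```python
# discount.py -- the (hypothetical) business rule under test
def discount(order_total: float) -> float:
    """Orders of 100 or more get 10% off."""
    return order_total * 0.9 if order_total >= 100 else order_total

# test_discount.py -- fast and isolated: no database, no network, no shared state
import pytest

@pytest.mark.parametrize("total, expected", [
    (50, 50),      # below the threshold: no discount
    (100, 90.0),   # at the threshold: 10% off
    (200, 180.0),  # above the threshold: 10% off
])
def test_discount(total, expected):
    assert discount(total) == expected
```

Tests like this run in milliseconds and touch nothing outside the function, which is what lets a large suite stay fast enough to run on every single change.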

Architecture

  • Loosely coupled architecture. This means a multi-tiered or microservices architecture, where components can be changed and deployed independently.
  • Empowered teams. The team members can decide for themselves what architecture/infrastructure they want to use.

Product and processes

  • Customer feedback. Ensure that you have fast and efficient feedback loops.
  • Visible and understandable business flow. This is especially meant for developers and is aimed at encouraging transparency. Often, business people try to translate their ideas into technical terms, which can end in confusion, as their technical understanding differs from that of the developers.
  • Small batches. The goal is to have small, incremental deliverables that can be quickly tested.
  • Experimentation. Experimenting with new ideas should be highly encouraged.

Management and monitoring

  • Lightweight change approval process. Cut the bureaucracy. Remove the blockers for better innovation and self-responsibility. This also helps keep the motivation high.
  • Application and infrastructure monitoring. A clear, accessible and transparent monitoring system is a good way to let everyone know about the current status of the application.
  • System health checks. Expose an endpoint that reports whether the application and its dependencies are up (see the sketch after this list).
  • Work-in-progress limits. Focus on one or a few tasks at a time, and finish them before you move on to something else. Avoid interruptions and multitasking. Do away with the ‘developer as a screwdriver’ fallacy.
  • Work-in-progress & quality visualization.
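As a tiny illustration of the health-check idea, here is a sketch using only Python’s standard library; the /health path and the placeholder database check are assumptions, not a prescribed convention:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def check_database() -> bool:
    # Placeholder: a real implementation would ping the database here.
    return True

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/health":
            self.send_response(404)
            self.end_headers()
            return
        healthy = check_database()
        body = json.dumps({"database": "up" if healthy else "down"}).encode()
        self.send_response(200 if healthy else 503)  # 503 lets load balancers pull the node
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), HealthHandler).serve_forever()
```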



Cultural

  • A generative culture. Promote a generative culture that emphasizes accomplishing the company’s goals and mission in a timely manner.
  • Learning & experience. Both of these should be used as foundations for improvement.
  • Collaboration. Facilitate collaboration in all stages of the development.
  • Apt appreciation. Meaningful work should be appreciated. Asking somebody to do something and then throwing away what they just did is probably the best way to show them their work is meaningless. It’s also the best way to lose them as faithful employees.
  • Transformational leadership. Demonstrate transformational leadership. Good leaders help people improve and learn as well as guide them toward better solutions.

As you can see, DevOps & Site Reliability Engineering processes are a big part of the deal. They shorten the feedback loop and increase quality by improving visibility and allowing more leeway for experimentation. They also diminish the fear of change (through automated tests and delivery), foster motivation, and encourage improvement and the learning of new technologies.

Time to look at your performance


With software development revolutionizing the tech and business world, IT companies are making significant strides to improve. The question is: Are you, too? It’s high time to take a long, close look at your own organization and see what changes you can make to take your performance to the next level. Moreover, understand the difference between output and outcome. The moment you stop putting too much weight on the former and start focusing on the latter, you will get better results. Be cautious with measurements. Make them positive so that they motivate instead of instilling fear. And remember to continuously revisit and refine your DevOps practices and company culture. They are the golden keys to improving performance.


Eric Jeker

Software Architect

Eric has been working as a software engineer for more than 20 years. As a senior architect for Arcanys, he works closely with the developers to instill the habit of learning, clean coding, re-usability and testing with the goal of increasing the overall quality of the products delivered by the teams.
