In a world where innovation reigns supreme, businesses are sprinting towards digital transformation, and the concept of “faster, better, stronger” is stitched into the very fabric of how we perceive success.

Enter DevOps, the dynamic force; the technical and cultural foundation needed to successfully undertake digital transformation. A promise of speed and agility, continuous integration and delivery, collaboration and communication, scalability and resilience, and a culture of perpetual improvement.

How do we fuel its fiery engine of productivity, while unlocking the full potential of automation and tooling? In this article, I share how organizations aspiring to achieve top-tier performance have no alternative but to embrace DevOps methodologies. How the tools you choose can strengthen or weaken your competitive advantage. And how value stream mapping and telemetry can help you take your DevOps to the next level.

What is DevOps?

DevOps is a combination of many things: cultural philosophies, practices, and tools that can increase an organization’s ability to deliver applications and services faster. But why should organizations adopt DevOps? How should they start? And how can they continuously improve?

How do elite DevOps performers do it?

Google’s Accelerate State of DevOps 2021 report by the DevOps Research and Assessment team (DORA) took data from more than 32,000 professionals worldwide and divided it into cohorts: low, medium, high, and elite. The results were clear; elite performers (those that excel in software delivery and performance) stand out across four key metrics: deployment frequency, lead time for changes, time to restore service, and change failure rate.

These organizations are able to deploy code significantly faster than low performers (972 times faster, in fact!), and recover from incidents in production thousands of times faster (6,570 times, to be precise).

So, what can your organization do to start the journey toward becoming an elite performer? For those familiar with the DevOps Handbook, it starts with three key principles:

  • Understand the flow by using value stream mapping (VSM) — which can assist in making work visible, and understanding the steps within the many DevOps phases.
  • Create a culture based on feedback by using metrics to get you started.
  • Support learning and experimentation to improve the outcome for all phases, and ensure continuous improvement of the flow.

How to improve your DevOps flow with automation and tooling

By using cutting-edge technology and robust tooling solutions, organizations can unlock the true potential of DevOps. Appfire, for example, has over 200 apps that will add efficiency and quality to your daily work.

The right tooling can help improve the efficiency of your DevOps flow while enabling your teams to focus on the organization’s core business. Appfire’s Webhook to Jenkins for Bitbucket, TFS4JIRA Azure DevOps Integration, and Yet Another Commit Checker are just a few of the many tools that can support your DevOps flow.

DevOps Tooling

Why not develop your own tooling?

You may be tempted to develop your own tooling to automate internal manual processes, but keep in mind — the necessary maintenance often pulls your development teams’ attention away from the core business. So, whenever you’re trying to improve the speed of a DevOps phase and you know what the issue is, check for available tooling.

When scaling, dealing with complex environments, or automating manual tasks, a tool that grows with your needs and offers enhanced security and reliable support is always your best bet.

To each their DevOps flow

Every organization has a development process, with its own particularities: how source control is managed, how automated the build process is, and whether the deployment process is integrated. Organizations are at different stages, implementing different processes based on their own phases and steps. For example, some have very sophisticated ways to monitor applications in production and how everything operates — sending feedback to the flow for continuous improvement — while others haven’t even started.

DevOps Flow

So, how do you go about improving the development flow, which includes all of the phases needed for the work to be completed, from planning, coding, building, and testing, all the way to production? First, you must understand the current flow by making all of the work visible for each development phase. An excellent way to accomplish this is to use value stream mapping.

What is value stream mapping?

What is value stream mapping, and why are so many organizations looking into implementing it today? Gartner published a paper called Market Guide for Value Stream Management Platforms, which states that by 2023, 70% of all organizations will be adopting VSM to improve flow in the DevOps pipeline for faster delivery of customer value.

Value stream mapping is not new. Some say VSM existed as early as 1918, and was later adopted by Toyota to identify and eliminate waste. Toyota’s triumphant implementation of lean manufacturing practices served as a cornerstone for many efficiency-driven methodologies, including Six Sigma, and proved fundamental to organizations looking to introduce efficiency into their day-to-day functions.

Value stream mapping – high-level, low-level


When we talk about value stream mapping for software, a high-level development lifecycle can be mapped using the phases of planning, building, integration, and deployment. 

At the next level, you identify all of the steps within those four phases. 

Value Stream Mapping

As an example here, I’ve defined code, build, test, and approval as steps for the Build phase. Your organization will have its own steps and phases, and even within the same organization, teams might have different value streams. Just remember that when building a value stream map, all parties should participate: product managers, operations managers, developers, testers, project managers, IT, security, and so on. This way, you’ll be able to understand and define the entire value stream chain by having subject matter experts make the work visible.

Once you’ve understood the phases and steps, you can then identify areas for improvement — which is where certain metrics come into play.

Important metrics to measure in VSM

Process time is the time needed to get the work done in a particular step. For example, how long it takes for a requirement to be done in code, or for a particular task to be tested.

%C/A (% Complete & accurate) measures the percentage of work that the next step can consume without rework. This metric reflects the quality of the work being done in a particular step (or, in other terms, how much work needs to be redone in the next step).

Wait time is the time where no work is being done, or where work sits still. Unfortunately, this time adds up quickly, and so, understanding why work is being delayed can drastically improve the efficiency of the flow.

These three simple metrics — Process time, Wait time, and % Complete & accurate — can give you a head start on value stream mapping, helping improve your DevOps flow by identifying areas of improvement.
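To make these definitions concrete, here’s a minimal Python sketch that totals the three metrics across the steps of a phase. The step names and numbers are made up for illustration; your value stream map supplies the real ones.

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One step in a value stream map (names and units are illustrative)."""
    name: str
    process_time: float  # days of active work
    wait_time: float     # days the work sits idle before/after the step
    pct_ca: float        # % Complete & accurate, 0-100

# Hypothetical steps within a single phase
steps = [
    Step("code", process_time=3, wait_time=2, pct_ca=80),
    Step("test", process_time=2, wait_time=4, pct_ca=70),
]

total_process = sum(s.process_time for s in steps)
total_wait = sum(s.wait_time for s in steps)
print(f"process time: {total_process} days, wait time: {total_wait} days")
```

Note how quickly wait time dominates: in this made-up example, work sits idle longer than it is actively worked on.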

Amazing finds you’ll get

Lead time is how long it takes from when code is committed to when it’s ready for production. It is the total time needed for the Build phase (in orange) and the CI/CD phase (in purple), calculated by adding the process time and wait time for both phases, and can help you understand how long it takes for the work to be ready.

Deployment frequency measures how often you’re able to successfully deploy value to production. Based on the metrics you already have, simply add the process time and wait time for all the steps within the phase to understand how often you can deploy to production.

Cycle time measures the total time for each phase, calculated by adding the process time and wait time for each phase.
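As a sketch, using illustrative per-phase numbers, the derived metrics follow directly from process time and wait time:

```python
# Illustrative per-phase totals (in days); your value stream map
# supplies the real numbers.
phases = {
    "build": {"process_time": 5, "wait_time": 6},
    "ci_cd": {"process_time": 2, "wait_time": 3},
}

# Cycle time per phase = process time + wait time.
cycle_times = {name: p["process_time"] + p["wait_time"] for name, p in phases.items()}

# Lead time spans the Build and CI/CD phases: commit to production-ready.
lead_time = cycle_times["build"] + cycle_times["ci_cd"]
print(f"cycle times: {cycle_times}, lead time: {lead_time} days")
```

With these invented numbers, the Build phase cycle time is 11 days, CI/CD is 5 days, and lead time is 16 days.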

How to accelerate delivery and improve code quality with value stream mapping — a use case


In our use case here, the development team is stressed. Why? Because they’re unable to deliver as fast as they need to, even when adding developers to the mix. Code quality is an issue — alongside other critical issues happening in production — and tech debt is growing. What should they do to improve? Where do they begin?

The goal for the use case is to show you how value stream mapping can help you identify a sequence of tasks that teams should undertake to deliver value. This holistic view will provide a strategic direction for improvements while making work visible as a means for data-driven, strategic decision making. The use case will also show you how monitoring three simple metrics can lead to the overall improvement of the DevOps flow, allowing the organization to deliver value ten days faster.

Identify teams and stakeholders

To set the stage, we first identified the dev team we would like to have as part of our DevOps project. We also identified all stakeholders and team members, including developers, operators, system admins, product managers, and subject matter experts.

It’s always a challenge to introduce new processes and practices; focus on starting small and choosing the right team. Once you’ve got your dream team, identify key stakeholders and subject matter experts, as well as every person responsible for the many phases of your development process, then ensure you have executive support.

Gather process time and %C/A

After some instrumentation, we were able to gather Process time and % Complete & accurate. During that same process, we also identified Wait time. And so, with these three key metrics, we’re on our way to great findings — starting with Cycle time (calculated by adding Process time and Wait time).

Value Stream Mapping

Likewise, we calculated the total process time and wait time for each phase, with the simple formulas discussed earlier. In our case, we calculated the cumulative % Complete & accurate for each phase by multiplying the percentages (80/100 * 70/100) from each step within that phase. Note that the cumulative % Complete & accurate across all phases is quite low here, which leads us to further analyze the existing metrics in that area — because we know code quality is an issue.
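The cumulative %C/A calculation above can be sketched in a few lines, using the 80% and 70% figures from the example:

```python
# Cumulative % Complete & accurate for a phase is the product of
# each step's %C/A (here, the 80% and 70% from the example above).
step_ca = [80, 70]

cumulative = 1.0
for pct in step_ca:
    cumulative *= pct / 100

print(f"cumulative %C/A: {cumulative * 100:.0f}%")
```

With 80% and 70%, the phase-level %C/A comes out to 56% — nearly half the work leaving the phase needs rework downstream, which is why the multiplication matters.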

Further analyze existing metrics

Let’s focus on the Build phase. Remember that the % Complete & accurate for the test step represents the percentage of work that the approval step can consume without rework. Which means that 40% of the work will bounce back from the approval step to testing — raising red flags that we need to investigate further.

We start with the basics: how many of the tests are automated, how environments are provisioned, the test coverage for functional testing and feature testing, the test coverage for recent code commits, and so on.

Through discovery, we found that test coverage was the culprit: the approval step was receiving committed code that had not been tested. And because the approval process was not catching all of the code commits lacking test coverage, issues ended up hitting production.

Value Stream Mapping

Now that we know tests are not covering committed code, we have a starting point for where to look. If tests are not being created, and developers are responsible for 1. writing the tests and 2. making sure committed code works, then we should look at what’s going on during the Code step. This is what we found out:

  • There’s no process to enforce reviewing the code to be committed, including test coverage with unit tests.
  • Builds are triggered manually, leaving it to the developer to start the build process and select the tests to run.
  • Code commits are difficult to line up with requirements.

Value Stream Mapping

We have a clear picture now, and we’ve identified areas we could improve in order to increase code quality, which could, in turn, improve the overall efficiency and total cycle time for the Build phase by reducing the amount of work to be redone. We could improve the process by:

  1. Requiring at least one reviewer prior to the code being committed
  2. Finding out a way to automate build triggering with a code commit/pull request
  3. Making the definition of a requirement ID obligatory when committing code
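The third improvement — requiring a requirement ID on every commit — is the kind of check a commit hook or an app like Yet Another Commit Checker enforces. As a rough sketch (the key pattern, function name, and messages here are my own illustration, not Appfire’s implementation), a git commit-msg hook could look like:

```python
#!/usr/bin/env python3
"""Sketch of a git commit-msg hook that rejects commits whose
message lacks a Jira-style requirement ID (e.g. PROJ-123)."""
import re
import sys

# Matches Jira-style issue keys such as PROJ-123 (illustrative pattern)
ISSUE_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def has_requirement_id(message: str) -> bool:
    """Return True if the commit message references an issue key."""
    return bool(ISSUE_KEY.search(message))

if __name__ == "__main__" and len(sys.argv) > 1:
    # git passes the path to the commit message file as the first argument
    with open(sys.argv[1], encoding="utf-8") as f:
        if not has_requirement_id(f.read()):
            sys.stderr.write("commit rejected: include a requirement ID (e.g. PROJ-123)\n")
            sys.exit(1)
```

A server-side merge check (as in the use case) is stronger than a local hook, since developers can bypass local hooks — but the validation logic is the same idea.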

Now that we’ve understood the pain points, what do we do? We look for tooling.

Research tooling

After doing research, we settled on additional settings in Bitbucket and two apps:

  1. We changed settings in Bitbucket to enforce reviewers.
  2. We set up Yet Another Commit Checker, enforcing that code commits include valid Jira issues and come from valid users.
  3. We also set up Webhook to Jenkins to trigger builds automatically based on specific pull requests and branches.

Since we’re monitoring and measuring each step, we understand the impact of the changes we’ve just implemented. Making reviews obligatory has led developers to produce better code. And because builds are now triggered automatically when pull requests are approved, the Build step is consistent and reliable — directly improving the test step by 25%. Gains achieved during earlier phases and steps have a cascading effect, meaning the results may represent efficiency gains for the entire DevOps flow.

The results

Thanks to three simple changes, we’ve improved the quality of the work being completed by 9%, our lead time by seven days, and the total cycle time by ten days. Which means we’re able to deliver value to our customers ten days faster!

Value Stream Mapping

Now that we’re able to deliver value to production faster, how do we know how the application is behaving in production? And how can we make sure that we’re getting fewer critical issues or outages? To answer questions like these, we have to look at metrics from the operations side of the DevOps flow — so far, we’ve only looked at the development side.

Telemetry is your best friend

Telemetry can help you identify areas of improvement by pointing out patterns and measurements you can act on, and by letting you check the results of your latest experiments.


For instance, Mean time to recovery is the time it takes to go from a problem happening in production to it getting fixed or operational again. Mean time to repair is the time from when repair begins to when the fix is operational in production. Mean time to respond is the time from when the alert gets triggered to when the problem gets fixed (or things are operational). You want all three to be as small as possible.

Then, there’s also Mean time to resolve, which is the time from when the problem happens to when a fix has been found and implemented in production. And Mean time between failures, which is the time from when a problem happens to when the next one happens. You want these two to be as big as possible, indicating that problems are not happening as often in production.
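As a sketch of how a couple of these might be computed from an incident log (the timestamps are invented, and I’m measuring recovery from detection to resolution — adjust to however your team defines the clock):

```python
from datetime import datetime, timedelta

# Illustrative incident log: (detected, resolved) timestamps.
incidents = [
    (datetime(2023, 7, 1, 10, 0), datetime(2023, 7, 1, 12, 0)),
    (datetime(2023, 7, 8, 9, 0), datetime(2023, 7, 8, 10, 0)),
]

# Mean time to recovery: average of (resolved - detected).
mttr = sum((end - start for start, end in incidents), timedelta()) / len(incidents)

# Mean time between failures: average gap between successive incidents.
gaps = [b[0] - a[0] for a, b in zip(incidents, incidents[1:])]
mtbf = sum(gaps, timedelta()) / len(gaps)

print(f"MTTR: {mttr}, MTBF: {mtbf}")
```

In practice your monitoring or incident-management tooling tracks these for you; the value is in watching the trend after each change you make to the flow.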

The right tools will improve your DevOps flow by helping you monitor apps in production and assist with operations.

Key takeaways


  1. Own your value stream mapping.
  2. You will find many items to work on; prioritize and go for quick wins.
  3. Don’t forget to make changes visible and show off the great results — including to the executive teams.
  4. The most important element required to continuously improve your DevOps flow is to keep monitoring, learning, and experimenting.
  5. Look for available solutions that can help improve your flow instead of trying to develop an internal solution.
  6. You’re likely to find several apps that can help you with each phase.

See how Appfire can help you unleash the full potential of DevOps — visit the website, or get in touch.

Last updated: 2023-07-31
