The DevOps Dilemma

Are we focusing on resource efficiency to the detriment of flow?

As many DevOps and Agile teams know all too well, teamwork makes the dream work when it comes to flow efficiency. There are few things more satisfying than an efficient DevOps operation and ticking items off the to-do list. But with today’s DevOps teams often stretched thin, it’s easy to focus on the wrong things and neglect the bigger picture. We’re talking about resource efficiency versus flow efficiency.

Prioritizing resource efficiency above flow efficiency could be holding teams back and undermining big-picture progress. In this article, we’ll discuss why examining and measuring how items flow through the system is just as important as assessing individual efficiency.

A better way to track teamwork

Focusing on the output of an individual contributor in a value stream could actually be harming the overall performance of the system. It might seem counterintuitive, but in essence, DevOps teams must begin to look at the bigger picture – in other words, the overall organizational efficiency – before highlighting and breaking down resource inefficiencies.

To achieve this, DevOps teams need to start monitoring the right things. For example, tracking time spent on coding projects is useful, but it only measures individual output, meaning you’re less likely to see the full impact of the collective group.

Instead of measuring individual resource input, managers could consider monitoring cycle time. Examining the duration from the start to the finish of each work item could give a better picture of flow efficiency before homing in on individual output. Aging is another metric worth exploring. Looking at how long an item has been held up at a given stage could allow managers to make better decisions and shift allocations accordingly.
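To make this concrete, here is a minimal sketch in Python of how cycle time and aging might be computed from the start and finish timestamps most work-tracking tools already record. The ticket IDs, dates, and field names are entirely hypothetical:

```python
from datetime import datetime, timezone

# Hypothetical work items: when each entered the value stream, when it
# finished (None if still in progress), and its current stage.
items = [
    {"id": "TKT-101", "started": datetime(2023, 6, 1, tzinfo=timezone.utc),
     "finished": datetime(2023, 6, 9, tzinfo=timezone.utc), "stage": "done"},
    {"id": "TKT-102", "started": datetime(2023, 6, 5, tzinfo=timezone.utc),
     "finished": None, "stage": "testing"},
]

def cycle_time_days(item):
    """Elapsed days from start to finish for a completed item."""
    return (item["finished"] - item["started"]).days

def age_days(item, now):
    """How long an unfinished item has been in the system so far."""
    return (now - item["started"]).days

now = datetime(2023, 6, 20, tzinfo=timezone.utc)
finished = [i for i in items if i["finished"]]
print("average cycle time:",
      sum(cycle_time_days(i) for i in finished) / len(finished), "days")
for i in items:
    if i["finished"] is None:
        print(i["id"], "aging in", i["stage"], "for", age_days(i, now), "days")
```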

Likewise, it could also be worth monitoring WIP (work-in-progress) levels at each stage and aiming to minimize them. Reducing batch sizes, both in story size and in the movement of items between stages of the value stream, could produce a steadier rate of progression. It’s also good practice to ensure that items progress all the way through the value stream to completion before allocating new tasks to the same team member.
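As a rough illustration, a per-stage WIP check can be as simple as counting in-flight items against a limit. The stage names, items, and limits below are made up for the example; real limits would be tuned per team:

```python
from collections import Counter

# Hypothetical snapshot of which stage each in-flight item currently occupies.
board = {
    "TKT-201": "development", "TKT-202": "development", "TKT-203": "development",
    "TKT-204": "testing", "TKT-205": "deployment",
}

# Illustrative WIP limits per stage.
wip_limits = {"development": 2, "testing": 3, "deployment": 2}

wip = Counter(board.values())
for stage, limit in wip_limits.items():
    count = wip.get(stage, 0)
    flag = "OVER LIMIT" if count > limit else "ok"
    print(f"{stage}: {count}/{limit} {flag}")
```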

Mitigating work starvation

One key challenge for DevOps teams arises when developers focus on completing their work in its entirety, holding off on releasing tasks to the next phase until everything on their list is finished. This creates bottlenecks downstream, wasting resources, time, and money.

Switching to smaller batch sizes could help mitigate this issue. Large batch sizes often lead to ‘starvation’ of work in the testing or implementation areas of the value stream and tend to increase cycle time, because the amount of work on someone’s plate at any given time can warp their sense of efficiency. Smaller batches enable faster feedback on smaller iterations of new features and updates, allowing the project to progress more quickly overall.
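Little’s Law from queueing theory makes the relationship explicit: average cycle time equals average WIP divided by average throughput. A toy calculation (illustrative numbers only) shows how larger batches, which keep more work in flight at once, stretch cycle time even when throughput stays constant:

```python
def avg_cycle_time(avg_wip, throughput_per_day):
    """Little's Law: average cycle time = average WIP / average throughput."""
    return avg_wip / throughput_per_day

# Same throughput, different batch sizes (illustrative numbers only):
# large batches keep more items in flight at once, so WIP is higher.
print(avg_cycle_time(avg_wip=20, throughput_per_day=2))  # 10.0 days
print(avg_cycle_time(avg_wip=6, throughput_per_day=2))   # 3.0 days
```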

Making visibility a priority

To truly eliminate (or at least reduce) work starvation and achieve a smoother collective effort, the whole team must gain better visibility into the entire process. A big-picture view of the progress and status of the full value stream is essential to streamlining the flow of tasks throughout the product team.

Implementing a value stream management platform can lead to much greater clarity, enabling better visibility and control over every team, tool, and pipeline throughout the organization.

With the right software delivery dashboards, managers can better examine the rate of value delivery against desired business outcomes. More specifically, being able to analyze value stream flow metrics means businesses can view their overall production through a wider lens, enabling better-informed, stronger decision making.

These flow metrics can also provide better insight into the organization’s workflows in general. Naturally, achieving better consistency is the ultimate goal. With tools such as Cumulative Flow Diagrams (CFDs), managers can see how efficiently any given task is progressing through the workflow.

Thanks to the clear way a CFD presents a project’s data, every team and individual member can see at a glance whether everything is flowing well, with no glitches, bottlenecks, or periods of work starvation. Likewise, bulges, inconsistencies, and discrepancies in the graph signal to managers that tasks are getting held up, not being completed, or not being passed on to the next phase.

Occasionally, managers may notice that bands in a CFD disappear altogether. That means someone is not getting work passed on from others, or one of the team members is holding on to their batch of work. Although progress will always be made – indicated by the cumulative lines never declining – managers will clearly see the areas where they need to focus on honing better flow efficiency. By looking at the whole value stream in this way, project managers can synchronize their team’s tasks effectively and allocate duties so that everyone is working in tandem.
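For illustration, here is a minimal sketch of the data behind a CFD, using made-up daily counts per stage. Each stacked band boundary is a running cumulative total from the ‘done’ band upward; a band whose width shrinks to zero (like testing on the last day below) marks a starved stage:

```python
from collections import Counter

# Hypothetical daily snapshots of how many items sit in each stage.
# A real pipeline would derive these counts from ticket history.
stages = ["done", "testing", "development", "backlog"]
daily_counts = {
    "2023-06-01": Counter(backlog=8, development=3, testing=1, done=0),
    "2023-06-02": Counter(backlog=7, development=4, testing=1, done=1),
    "2023-06-03": Counter(backlog=7, development=4, testing=0, done=3),
}

# Stack the bands: each boundary is the cumulative count of items at or
# past that stage. The gap between boundaries is the band's width.
for day, counts in daily_counts.items():
    cumulative, row = 0, []
    for stage in stages:
        cumulative += counts[stage]
        row.append(f"{stage}>={cumulative}")
    print(day, " ".join(row))
```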

Essentially, many developer teams are unwittingly damaging the business’s overall efficiency simply by not seeing the bigger picture and focusing on resource efficiency, which often leads to flow inefficiency.

At a time when software development is becoming increasingly competitive, Agile and DevOps professionals must move away from the individual approach to value delivery and switch to a more system-centric way of managing to better optimize long-term flow efficiency.

Bob Davis

Bob Davis, CMO at Plutora, has more than 30 years of engineering, marketing and sales management experience with high technology organisations from emerging start-ups to global 500 corporations. Before joining Plutora, Bob was the Chief Marketing Officer at Atlantis Computing, a provider of Software Defined and Hyper Converged solutions for enterprise customers. He has propelled company growth at data storage and IT management companies including Kaseya (co-founder, acquired by Insight Venture Partners), Sentilla, CA, Netreon (acquired by CA), Novell and Intel.
