3 Tips for Dissolving QA & Testing Bottlenecks on Agile Teams
The modern-day quality assurance and testing practitioner is inundated with tools, techniques and methodologies for achieving a higher level of software quality. Various best-in-class tools, such as Azure DevOps, have doubled down on the features and functionality available in their QA and testing modules. For instance, Azure DevOps now offers more advanced exploratory testing through a Chrome extension called Test & Feedback, which lets Scrum team members contribute to a sprint quickly by participating in its exploratory testing cycles.

As more organizations embrace the Agile mindset, with its faster feedback loops, minimum viable products and lean methodologies, the yesteryear processes of test design, approval and scheduling interwoven into a typical Waterfall release schedule have become less of a focal point. This does not mean that proper test planning in QA and testing teams isn’t happening; it’s just happening in a more ‘Agile’ way. Now that Scrum teams are fully responsible for the design, build, testing and deployment of software features, they have had to become both a jack and a master of all trades.

The old way of delivering software created silos, especially with disparate QA and testing teams designing manual and automated tests from outdated requirements. Or worse, QA teams were rushed into designing and executing tests because they lacked visibility into solution scoping activities. All of this led to inefficiency and oversights.

Today, Agile teams are running at breakneck speed, with everyone contributing to manual, automated and exploratory testing. In other words, quality is everyone’s responsibility and is typically not centralized in one specific QA team. The Agile and DevOps revolution, coupled with the advent of modern best-in-class QA technology such as the Test & Feedback Chrome extension for Azure DevOps, makes it possible for everyone to participate. This has improved the speed at which organizations can meet their quality standards.

Whether it’s time spent re-testing defect fixes, the manual steps involved in creating additional test data in lower environments to reproduce a bug, or the idle time that accrues when a test-case step is handed off between SMEs across different product lines, bottlenecks in QA continue to slow things down. Organizations and vendors alike continue to devise and implement new methodologies and tools to dissolve these bottlenecks. Too often, though, organizations treat the symptom of each bottleneck with a point solution. For example, there are many software add-ons that help with test data generation, along with collaboration tools that help reduce the idle time of the handoffs that often occur in lengthy User Acceptance Testing (UAT) cycles.

Organizations need a holistic methodology to oversee, measure and take decisive action across the entire QA value stream without being confined to a single platform. Take, for example, individuals who have been diagnosed with heart disease or a build-up of plaque and calcium in their arterial walls. Most doctors agree that heart disease is largely attributable to the accumulation of lifestyle choices that contribute to the clogging of the main arteries leading to an individual’s heart. The impeded flow of blood in and out of the heart chambers typically causes an individual to experience symptoms such as angina, decreased cardiovascular output and an overall sense of malaise. Modern medicine has taught many physicians in the United States to treat this disease by intervening with open heart surgery, stents and medication that allow blood to pass either through the existing arterial walls or around the blockages.

But what if doctors were able to prescribe a holistic treatment that dissolved the calcium and cholesterol buildup in an individual’s main arteries without intervening with stents or bypass surgery? Wouldn’t it make sense to identify, measure and treat the root cause of a bottleneck impeding blood flow by prescribing lifestyle and nutritional measures that could be adopted immediately? In fact, more physicians are now recognizing that their patients can reverse heart disease with nutritional and lifestyle guidance, provided it is caught early enough.

That is the operative phrase: if caught early enough. Luckily, advances in medical technology have introduced and popularized the heart CT scan, which is used to find calcium deposits in arterial plaque. According to many sources, it’s the most effective way to spot atherosclerosis (plaque build-up in the arteries) before symptoms occur, and it is saving thousands of lives every year.

Much as the heart CT scan has revolutionized the identification and measurement of calcium buildup in the arteries that feed the body’s most vital organ, new frameworks in software delivery such as the FlowFramework™ are designed to provide similar insight into the end-to-end software delivery process. The plaque that builds up in so many people’s arteries is analogous to the impediments that delay the delivery of valuable software to a community of end users. Just as your survival depends on the nutrients carried by blood to every corner of your body, your organization depends on a steady flow of value to its end users, and the first step is identifying where the bottlenecks occur.

With the FlowFramework, organizations can actually “see” where bottlenecks occur across their value stream networks – and many of them occur within software QA and testing. The issue that many companies face is that they don’t identify the slowdown, the plaque buildup, soon enough to take corrective action and keep the flow of software consistent.

Before long, a bottleneck in a specific area such as test data generation or handoffs for user acceptance testing becomes so severe that sprint cycles are inadvertently impacted. The capacity of Scrum teams to deliver user stories steadily declines, much like the cardiovascular output of an individual on a treadmill suffering from unseen blockages. The underlying premise of the FlowFramework is to first connect the tools used to deliver the software while incrementally developing your value stream network and flow metrics. You might think of ‘flow efficiency’, one of the FlowFramework’s metrics, as the equivalent of the calcium score from a heart CT scan. The lower the level of calcification in the arterial walls, the more efficiently blood flows through the arteries to and from the heart and other major organs. This is exactly what QA and testing practitioners need to focus on: how can we improve the efficiency of defect turnaround, test data generation and the handoffs between individuals involved in a user acceptance testing cycle?
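
To make the analogy concrete, here is a minimal sketch of how flow efficiency is commonly expressed (active time divided by total elapsed time). The function and numbers are illustrative assumptions, not Tasktop’s actual implementation:

```python
def flow_efficiency(active_days: float, waiting_days: float) -> float:
    """Flow efficiency as a percentage: time actively worked vs. total elapsed time."""
    total = active_days + waiting_days
    return round(100 * active_days / total, 1) if total else 0.0

# Example: a defect fix that was actively worked for 2 days but spent 8 days
# queued in statuses like 'awaiting re-test' flows at only 20% efficiency.
print(flow_efficiency(active_days=2, waiting_days=8))  # 20.0
```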

Here are three tips that will help you dissolve the most common QA and testing bottlenecks facing Scrum teams today:

  • 1. Create a Value Stream Architecture Diagram

    Even though you are practicing Scrum and using a tool like Atlassian Jira or Azure DevOps (formerly VSTS) to groom your backlog, plan sprints and write user stories, your business counterparts might be using tools like TargetProcess, Jama or BluePrint for solution scoping and requirement definition. Perhaps your QA team is also leveraging HP (now Micro Focus) Quality Center for test execution and defect tracking.

    As a QA manager, do you have access to Jama, BluePrint or TargetProcess to view the final requirement, including the associated data flow diagrams and suggested test data? Maybe you do, but does your entire QA team have early visibility into solution scoping efforts for all the product lines they support? If not, it would benefit you to identify how the business requirements are formed and how they flow from an upstream tool that the business is leveraging to a downstream tool where the Scrum team lives and breathes.

    Identifying bottlenecks or fragmentation between tools with a value stream architecture diagram is your first line of defense against the most common QA and testing bottlenecks. If you can pinpoint where communication breaks down upstream, you can either bridge the gap with integration or put a manual process and procedure in place for clear upstream visibility.
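
    The diagram does not require a specialized tool to get started. A minimal sketch like the one below, with hypothetical tool names and artifacts, is enough to list each handoff in the value stream and flag the ones that are not integrated – the usual bottleneck candidates:

    ```python
    # Hypothetical value stream: which artifact moves between which tools, and
    # whether the handoff is automated by an integration or performed manually.
    handoffs = [
        {"artifact": "requirement", "source": "Jama",           "target": "Jira",           "integrated": True},
        {"artifact": "user story",  "source": "Jira",           "target": "Quality Center", "integrated": False},
        {"artifact": "defect",      "source": "Quality Center", "target": "Jira",           "integrated": False},
    ]

    for hop in handoffs:
        if not hop["integrated"]:
            print(f"Manual handoff: {hop['artifact']} from {hop['source']} to {hop['target']}")
    ```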

  • 2. Granulate your testing layers, focus on the fundamentals and develop an overarching master testing strategy

    Too many organizations that dive head-first into Agile stop maintaining master strategy documentation for their QA function, often citing ‘working software over comprehensive documentation’ from the Agile Manifesto.

    Working software over comprehensive documentation is more relevant than ever and should remain a guiding principle for organizations adopting Agile, but too many QA practitioners are focused on the day-to-day minutiae without a solid grasp of how all the moving parts in the software delivery lifecycle fit together. According to the 2017-2018 World Quality Report, a large proportion of QA and testing managers lack skills in test strategy and design. The report noted that the share of QA and testing managers deficient in strategy and design skills had grown from roughly 22 percent to 32 percent in 2017, and higher still in 2018. A key recommendation of the World Quality Report is for organizations to define their QA analytics strategy at the enterprise level.

    In order to do that, QA and testing managers must create and maintain a master testing strategy (often referred to as an ‘MTS’) that clearly articulates how the various layers of testing fit together: unit, system, integration/interface, UAT and regression. It is the one strategy document that you cannot afford to overlook.

    Writing this strategy document will help you identify the handoffs that occur between testing layers – the prime location of bottlenecks. Consider how many organizations have wasted a tremendous number of cycles re-designing test cases from scratch when they could simply have pieced together a string of unit tests created much earlier in the cycle; this happens most often with system and user acceptance test design (see the sketch at the end of this tip). By understanding and documenting how the handoffs occur, you can eliminate waste between testing layers.

    Additionally, astute testing managers look for ways to gain efficiency in test data generation by reusing lower-level test environments and timing system refreshes so that they align with the upcoming testing cycles embedded within sprints. It will take time, effort and coordination to write an overarching master test strategy – but the process forces you to think through the various points in your value stream where bottlenecks will occur and slow down software delivery.
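
    The test-case reuse mentioned earlier in this tip can be as simple as sharing step functions between layers. Below is a minimal, hypothetical pytest-style sketch (the order domain and function names are invented for illustration) in which unit-level steps written early in the cycle are chained into a system-level test instead of being redesigned from scratch:

    ```python
    def build_order(sku="ABC-123", qty=2):
        """Unit-level step: construct a valid order payload (hypothetical domain)."""
        return {"sku": sku, "qty": qty, "status": "NEW"}

    def apply_discount(order, pct):
        """Unit-level step: pure pricing rule that is easy to test in isolation."""
        return {**order, "discount_pct": pct}

    def submit(order):
        """Unit-level step: stand-in for submitting the order to the real service."""
        return {**order, "status": "SUBMITTED"}

    # Unit test written early in the cycle.
    def test_discount_is_recorded():
        assert apply_discount(build_order(), 10)["discount_pct"] == 10

    # System-level test assembled later by chaining the same steps.
    def test_discounted_order_reaches_submission():
        order = submit(apply_discount(build_order(qty=5), 10))
        assert order["status"] == "SUBMITTED" and order["discount_pct"] == 10
    ```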

  • 3. Understand the lifecycle of a defect across product lines and business units

    Visibility into potential bottlenecks requires you to diagram the lifecycle of a defect from start to finish, along with where the inputs needed to fix and re-test it flow from. In medicine, experts often note that the chances of survival are much higher when they gain early insight into disease progression. In software, we need to connect the dots between the tools that feed into the resolution of a defect. Doing so will provide early insight into potential bottlenecks that could block the flow of value.

    Diagramming your defect workflow lifecycle in a tool like LucidChart or Microsoft Visio will help all teams understand the layers between ‘new’ and ‘closed.’ Organizations generally run ‘defect aging reports’ that track the time elapsed between ‘new’ and ‘closed’ – but they rarely take the time to track the aging of the in-between statuses, such as ‘awaiting re-test.’
    By diagramming the overall defect workflow lifecycle, you can begin to dissect ‘awaiting re-test’ into what it truly means and all the inputs (both tangible and intangible) behind it: test data generation, resource availability, system availability and so on. There are many hidden bottlenecks lurking within a single status in a defect workflow lifecycle, as the sketch below illustrates.
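
    As a minimal sketch, assuming you can export a defect’s status-change history from your tracker (the statuses and timestamps below are illustrative, not a specific tool’s API), the aging of each in-between status can be computed like this:

    ```python
    from collections import defaultdict
    from datetime import datetime

    # (defect id, new status, timestamp) in chronological order.
    events = [
        ("D-101", "new",              datetime(2019, 3, 1, 9, 0)),
        ("D-101", "in progress",      datetime(2019, 3, 1, 13, 0)),
        ("D-101", "awaiting re-test", datetime(2019, 3, 2, 10, 0)),
        ("D-101", "closed",           datetime(2019, 3, 6, 16, 0)),
    ]

    time_in_status = defaultdict(lambda: defaultdict(float))  # defect -> status -> days
    last_seen = {}                                            # defect -> (status, entered_at)

    for defect, status, ts in events:
        if defect in last_seen:
            prev_status, entered_at = last_seen[defect]
            time_in_status[defect][prev_status] += (ts - entered_at).total_seconds() / 86400
        last_seen[defect] = (status, ts)

    for defect, ages in time_in_status.items():
        for status, days in ages.items():
            print(f"{defect}: {days:.1f} days in '{status}'")
    # Here 'awaiting re-test' accounts for 4.2 of the roughly 5.3 elapsed days:
    # exactly the kind of hidden bottleneck a start-to-finish aging report misses.
    ```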

Over 1,660 executives from 32 countries contributed to the 9th edition of the World Quality Report, and the most common challenge cited in the report is the lack of early involvement of QA and testing teams in the inception phase or sprint planning. A key recommendation of the report, as noted above, is for organizations to define their QA analytics strategy at the enterprise level. This common challenge and key recommendation underscore the need for enterprises to adopt a holistic view of their quality assurance and testing departments through end-to-end value stream integration. By starting with a value stream architecture diagram, documenting an overarching master testing strategy and visualizing the lifecycle of a defect, QA leaders can detect the early onset of bottlenecks. Much like modern medicine’s call for early detection of disease with advancements such as the heart CT scan, methodologies like the FlowFramework™ help us see the forest for the trees.

About the Author

Matt Angerer – Sr. Solution Architect, Tasktop
Mr. Angerer is a Sr. Solution Architect for Tasktop Technologies, the industry leader in value stream integration. Prior to Tasktop, Matt served in a variety of QA and Testing Leadership roles. He has over 15 years of experience in enterprise, mission-critical system implementations, and support for global multinational companies. His editorial contributions can be read on LinkedIn. You can reach Matt directly at matt.angerer@tasktop.com.