Soon after we made the offer on our new house, we asked to get back inside with a tape measure. We wanted to plan where to put furniture, to ease decisions on move-in day; would the bookcases fit along that wall? Naturally, there were several measurements that we completely forgot (the distance from the kitchen entrance to the dining room door) or marked incompletely (was the master walk-in closet 6 feet wide or 6 feet deep?). While we did well overall, some of the house’s finer details came as a complete surprise. Not all of them were pleasant ones.

So it is with software testing. You only find the bugs you look for. In recording and tracking the development process, you can only detect trends in the things you measure.

Collecting lots of information “just in case” isn’t a solution, either. The more irrelevant information you require, the more the database fills up with junk, and the less willing people become to use the tool. So I asked developers and testers which information they find really worthwhile to track, and how they use their tools.

What’s There Works

When I started asking developers for examples of nonstandard items they track—things besides the mundanely obvious, such as date reported, priority, description—I usually got a blank stare. Relatively few people seem to add new fields to their bug-tracking tools, and they rely on whatever is built in.

At first, I thought I was asking the wrong questions. Developers tune everything, don’t they? And testers create all kinds of custom utilities! However, it turned out that I hadn’t happened upon a gaggle of uncommunicative or uncreative developers; I’d found a positive point. For the most part, the defect-tracking vendors are doing the right things—at least in this regard—by providing the right set of fields for most users’ needs. Few testers need to customize the database or their reports, and they get their work done, so I must conclude it’s a best practice of a sort. It’s certainly nice to discover an area of our industry where the vendors are doing an “unremarkably” good job.

However, I certainly found some tips and techniques worth sharing.

Track How Often A Bug Is Reopened

Sometimes, it isn’t the nature of a defect that’s worth investigating so much as the process by which it occurs. During a recent project, Darrell Grainger, a senior QA engineer at Quest Software, noticed a tendency for defects to regress, so he began to track how often a defect was reopened. “I could then show how many defects were filed for a given period,” he says, “and chart those opened once, twice and more than twice.”

Grainger discovered a clear trend for multiply submitted defects. As it turned out, Team A would fix defect #1; then Team B would fix defect #2, and regress defect #1. When defect #1 was reopened, Team A would fix it and regress defect #2. Aha! It’s a team problem, not a software problem!

Because Grainger tracked the regressed defects, the company was able to discover what was going on. As a result, company management encouraged the two teams to work more closely. “They also scaled back the scope of the project and added time to remove many of the dependencies and [to] refactor the code,” Grainger says.
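If your tracker can export its status-change history, a reopen chart like Grainger’s takes only a few lines of scripting. The Python sketch below is one rough way to derive it; the event export and its field names are assumptions for illustration, not any vendor’s schema.

```python
# A minimal sketch of a reopen metric, assuming a hypothetical export of
# status-change events from the defect tracker. Field names such as
# "defect_id" and "new_status" are illustrative only.
from collections import Counter

events = [
    {"defect_id": 1, "new_status": "Open"},
    {"defect_id": 1, "new_status": "Fixed"},
    {"defect_id": 1, "new_status": "Reopened"},
    {"defect_id": 2, "new_status": "Open"},
    {"defect_id": 2, "new_status": "Fixed"},
]

# Count how many times each defect entered the "Reopened" state.
reopens = Counter(
    e["defect_id"] for e in events if e["new_status"] == "Reopened"
)

# Bucket defects the way Grainger charted them: opened once (never
# reopened), twice (reopened once), or more than twice.
all_defects = {e["defect_id"] for e in events}
buckets = {"once": 0, "twice": 0, "more than twice": 0}
for d in all_defects:
    n = reopens.get(d, 0)
    if n == 0:
        buckets["once"] += 1
    elif n == 1:
        buckets["twice"] += 1
    else:
        buckets["more than twice"] += 1

print(buckets)  # e.g. {'once': 1, 'twice': 1, 'more than twice': 0}
```

From those bucketed counts, a cluster of multiply reopened defects stands out quickly on even the simplest bar chart.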

Ghulam Hasnain, a release manager in Utah, agrees on the importance of watching the multiple-submission issue, because he’s suffered its effects at several companies. As he explains, “Access to logging defects should be provided to all who may encounter a defect; however, a filter mechanism should keep the same defect from being logged again and again and skewing the metrics.” Where effective filtering of duplicate defects wasn’t in place at logging time, it had to be accomplished later, either through further analysis or through self-discipline. Says Hasnain, “This is a waste of effort, because the effort should go into figuring out the defective or qualitatively poor domains to which the metrics should point—areas that need to be redesigned or optimized.”
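The kind of filter Hasnain describes needn’t be elaborate. As a purely illustrative sketch, the Python below flags existing reports whose summaries closely resemble a new one before it’s logged; the similarity threshold and the in-memory list of summaries are assumptions for the example, not features of any particular tracker.

```python
# Flag likely duplicates before a new defect is logged, using a simple
# text-similarity check. The threshold and sample summaries are
# illustrative assumptions.
from difflib import SequenceMatcher

existing = [
    "Login page throws 500 error on empty password",
    "Report export truncates customer names",
]

def likely_duplicates(new_summary, existing_summaries, threshold=0.6):
    """Return existing summaries that closely match the new one."""
    matches = []
    for old in existing_summaries:
        ratio = SequenceMatcher(None, new_summary.lower(), old.lower()).ratio()
        if ratio >= threshold:
            matches.append((ratio, old))
    return sorted(matches, reverse=True)

print(likely_duplicates("Login page gives 500 error with blank password", existing))
```

A check like this only prompts the reporter; whether to log anyway remains a human call, which keeps the metrics honest without blocking legitimate reports.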

Track the Solution

It can sometimes help to record supporting evidence demonstrating that a defect was indeed fixed, or at least appeared to be. Suggests Alejandro Ramirez, QA analyst at Maritz in St. Louis, “When a defect is fixed, retested and marked as ‘passed,’ some companies like to capture a screen shot of the ‘successful’ moment that provides supporting evidence that it indeed worked. This may be included in the attachments section.”

Among the metrics that Ramirez tracks are the length of time it takes developers to fix issues, and how long it takes testers to retest issues. He also tracks defect reproducibility ratios—how many times a tester can reproduce the defect before he submits it.
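Those metrics fall out of timestamps most tools already record. Here’s a back-of-the-envelope Python sketch; the per-defect record and the reproducibility convention (successful repros out of attempts before submission) are assumptions for illustration, not Ramirez’s actual setup.

```python
# Compute fix turnaround, retest turnaround and a reproducibility ratio
# from a hypothetical per-defect record. Field names are illustrative.
from datetime import datetime

defects = [
    {
        "reported":  datetime(2007, 3, 1, 9, 0),
        "fixed":     datetime(2007, 3, 4, 17, 0),
        "retested":  datetime(2007, 3, 5, 11, 0),
        "repro_attempts": 5,
        "repro_successes": 4,
    },
]

for d in defects:
    time_to_fix = d["fixed"] - d["reported"]      # developer turnaround
    time_to_retest = d["retested"] - d["fixed"]   # tester turnaround
    repro_ratio = d["repro_successes"] / d["repro_attempts"]
    print(time_to_fix, time_to_retest, f"{repro_ratio:.0%}")
```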

Choose the Right Values

While most developers and testers rely on their tools’ predefined ranges (such as a 1-5 or 1-10 scale for defect severity), others feel that QA and development teams should define the values appropriate to the team and the project.

For instance, Daniel Suciu, a process specialist for a telco in Bucharest, Romania, had trouble using the Status field in his team’s defect-tracking system consistently. While “Submitted,” “Ready for Testing” and “Closed/Fixed” were clear, other values were subject to debate: what one person considers “Open” or “Assigned” may not match another’s view. Inconsistent states make reporting more difficult, or at least obscure management’s understanding of the reports.

It’s very important, Suciu says, to define the list of predefined values your team will use. That’s especially important for fields that can spark interdepartmental debate, such as “Resolution” or “Issue Type” (egos get involved when you call something a bug). Nor should one person call the shots, he believes. “The decisions should be agreed on by all people involved—business analysts, developers, testers—based on everybody’s needs, but I’d say that these should be put together by QA.” Everybody should contribute, he says; QA should set up a framework, and the project management team should customize the schema to suit each project’s needs.
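One lightweight way to make such an agreement stick is to write the agreed value lists down somewhere machine-readable, along with the transitions everyone accepts. The Python sketch below is illustrative only; the status names and workflow are examples, not a recommended scheme.

```python
# Encode the team's agreed status vocabulary and allowed transitions so
# every report draws from the same list. Values here are examples.
from enum import Enum

class Status(Enum):
    SUBMITTED = "Submitted"
    ASSIGNED = "Assigned"
    READY_FOR_TESTING = "Ready for Testing"
    CLOSED_FIXED = "Closed/Fixed"
    REOPENED = "Reopened"

# Agreed transitions, so "Open" vs. "Assigned" debates are settled once.
ALLOWED = {
    Status.SUBMITTED: {Status.ASSIGNED},
    Status.ASSIGNED: {Status.READY_FOR_TESTING},
    Status.READY_FOR_TESTING: {Status.CLOSED_FIXED, Status.REOPENED},
    Status.REOPENED: {Status.ASSIGNED},
    Status.CLOSED_FIXED: set(),
}

def can_move(current: Status, target: Status) -> bool:
    return target in ALLOWED[current]

print(can_move(Status.SUBMITTED, Status.CLOSED_FIXED))  # False
```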

Suciu suggests two fields to watch out for. One of them is the “Comment” field. Most tools have one, and it’s often used for information that should be tracked in other fields. Unfortunately, stuffing that data into an open-ended text box makes it impossible to track.

Another problem you may encounter is the vital “Functional Area” field; many teams try to use this as a closed list, Suciu says, but once the project is under way, the granularity is lost. (You might list “database,” for instance, but discover that most of the problems should be listed as “SQL Server.”) Other projects start with an open list, permitting the user to select from an existing value or create a new one. According to Suciu, that’s relevant enough for fixing defects—maybe even better, because of the granularity—but almost useless for metrics and reporting.

But don’t get too granular. For example, says Srinivas Sripathi, a tester at i-flex Solutions Ltd. in India, don’t scale bug severity or priority on a 1-10 scale. “It should be (L/M/H)…. For scaling, we should have proper documentation for categorizing the bug. I don’t think the senior-level people—Testing and Development—will have time to prepare this doc.”

If your defect tracking tool permits it, you may want to consider adding role-based fields.

According to Ainar, a test manager in Riga, Latvia, different development roles need different information. In the simplest form, a developer needs to know what to fix, a tester is concerned with which defects were already reported, and the project manager wants to know the number of current critical defects. You may want to ensure that your schema has fields to track information that each type of project member needs. Says Ainar, “We have ‘Defect Due Date’ and ‘Defect Status Due Date’ fields, we have fields like ‘Planned-for-Version,’ ‘Tested-in-Version,’ etc.” In fact, the system he developed tracks some functions down to the needs of a single person.
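If your tool’s records are accessible programmatically, role-based views can be prototyped before you commit to new schema fields. In the illustrative Python sketch below, the field names echo Ainar’s examples, but the role-to-field mapping is my own assumption.

```python
# Role-based reporting over a shared schema: every record carries the
# same fields, but each role's default view pulls out only what it
# needs. Field and role names are illustrative.
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class Defect:
    summary: str
    severity: str
    defect_due_date: date          # "Defect Due Date"
    status_due_date: date          # "Defect Status Due Date"
    planned_for_version: str       # "Planned-for-Version"
    tested_in_version: str         # "Tested-in-Version"

ROLE_VIEWS = {
    "developer": ["summary", "severity", "defect_due_date"],
    "tester": ["summary", "tested_in_version", "status_due_date"],
    "project_manager": ["severity", "planned_for_version", "defect_due_date"],
}

def view_for(role, defect):
    record = asdict(defect)
    return {field: record[field] for field in ROLE_VIEWS[role]}

d = Defect("Crash on export", "Critical", date(2007, 6, 1),
           date(2007, 5, 15), "2.1", "2.0")
print(view_for("project_manager", d))
```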


About the Author

Esther Schindler