We've been talking about using defect counts to predict future bugs on the software-testing Yahoo group.
I've been trying to stay out, but they keep pulling me back in. :-)
The basic idea is to count defects on past projects in order to predict the defects on the next one, which in turn lets you predict the test effort and know when you are "done." Here is my reply:
>There is the issue that software defects are not measured or reported the same way in every project.
>Not all defects are found, not all are categorized, and not all are fixed.
>Well, that and not all defects are created equal.
In order for defect count prediction to be accurate, defects need to be created equal -- or at least, they need to fall into a predictable "bell curve" distribution of severity.
Thus we can say that:
/IF/ we have a predictable curve of defect severity,
and /if/ the project team is roughly the same skill and ability as last time,
and /if/ the technology stack matches what we did on past projects,
and /if/ the problem domain is similar to previous projects,
and /if/ nothing else about the project changes from previous projects,
THEN /we hope/ that we can make meaningful predictions.
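To make that concrete, here is a minimal sketch in Python of what the prediction amounts to once all of those /if/s hold. The counts are hypothetical; the point is only that the arithmetic is trivial when the assumptions are true:

```python
# A minimal sketch of "predict from past counts," with made-up numbers.
# It only holds if every /if/ above is true: stable severity distribution,
# same team, same stack, same domain, nothing else changed.
from statistics import mean, stdev

past_defect_counts = [112, 98, 105, 120, 101]  # hypothetical prior projects

mu = mean(past_defect_counts)
sigma = stdev(past_defect_counts)

# Naive prediction: the next project looks like the average of the last
# five, give or take roughly two standard deviations.
print(f"Predicted defects: {mu:.0f} "
      f"(range {mu - 2 * sigma:.0f} to {mu + 2 * sigma:.0f})")
```

Everything interesting about the problem lives in the assumptions, not the math.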
In my life, I have come across a few projects like this.
Generally speaking, those projects don't offer a lot of value; almost by definition, they have to be cookie-cutter projects.
In most cases, the right solution to a cookie-cutter project is for a brilliant developer to write a code-generator that abstracts away the problem.
I am /*completely*/ serious; this is how Paul Graham made his millions on Yahoo Store.
Bottom line: I'm skeptical, and have the chops to explain why.
I would be excited by anyone who can demonstrate otherwise.
My old professor of software engineering had a picture to describe this kind of thinking: it's like driving a car by looking out the rear-view mirror, making sure you don't drift off the road to the right or left.
Everything works just fine until you meet unpredictable oncoming traffic.
No, really: he used to harvest corn on fall weekends when he came home from college, and one year some neighbor kids painted the truck window black as a Halloween prank. He couldn't finish harvesting without a new window, and he couldn't get a new window without going to town.
So he drove to town by looking out the rear-view mirror and making sure the truck stayed in the center of the road.
That worked fine because it was 1950-something in rural Illinois, and back then, there was nobody else on the road.
So this might work to create yet-another-report in COBOL using JCL and MVS - as long as the tools, people, and processes stay the same. But those are the technologies that offer the least value, at least in terms of differentiation and competition.
Even if you did try defect counting on a COBOL project, the approach assumes your team is not learning, improving, and working to prevent whole categories of defects in the future.
Now think for a moment about what would happen if you did defect prediction and the team did improve; if the developers handed fewer defects to the test team. What would happen to the metrics? To the schedule?
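Here is a toy continuation of the earlier sketch, again with made-up numbers, showing how a naive historical average behaves against a team that keeps improving:

```python
# Toy illustration (made-up numbers): a team that prevents a category of
# defects each project. The naive historical average keeps over-predicting,
# so test effort budgeted from it is increasingly wasted -- or, worse,
# "we found fewer bugs than predicted" gets read as "testing missed some."
from statistics import mean

actual_counts = [120, 105, 88, 70, 55]  # a steadily improving team

for i in range(2, len(actual_counts)):
    predicted = mean(actual_counts[:i])  # predict from all past projects
    actual = actual_counts[i]
    print(f"Project {i + 1}: predicted {predicted:.0f}, "
          f"actual {actual}, over-estimate {predicted - actual:.0f}")
```

Every project, the prediction misses in the same direction, and the miss gets bigger. A schedule built on those numbers budgets for bugs that never arrive.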
If anything, my original post was not worded strongly enough.
Notice that I'm not down on all forms of prediction - I'm a big fan of slip charts, risk estimation tools like Riskology by the Atlantic Systems Guild, and Evidence-Based Scheduling. All of those strike me as better ways to do test estimation than the traditional "add up a bunch of tasks" or "whatever the big boss says" methods of doing estimates.
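For contrast, here is a minimal sketch of the Monte Carlo idea behind Evidence-Based Scheduling, with hypothetical velocities and estimates. Velocity is estimate divided by actual, so sampling from your own history bakes your real track record into the prediction:

```python
# A minimal sketch of the Monte Carlo idea behind Evidence-Based Scheduling
# (all numbers are made up). To simulate one possible future, divide each
# remaining estimate by a randomly chosen historical velocity, then sum.
# Many rounds give a distribution of completion times, not a single date.
import random

past_velocities = [1.0, 0.8, 1.2, 0.5, 0.9]  # hypothetical estimate/actual ratios
remaining_estimates = [8, 16, 4, 12]         # hours of work still planned

simulated_totals = sorted(
    sum(est / random.choice(past_velocities) for est in remaining_estimates)
    for _ in range(10_000)
)

# Report the median and a pessimistic 90th-percentile total.
print(f"50% chance of finishing within {simulated_totals[5_000]:.0f} hours")
print(f"90% chance of finishing within {simulated_totals[9_000]:.0f} hours")
```

The honest part is the output: a range with probabilities attached, not a single "done" date.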
... But about those black swans
All of these techniques are also prone to black swans. That is to say, predicting the future based on the past leaves you open to change. Consider, for example, the time it took to cross a continent in 1890, versus how long it takes today.
Now consider how much more quickly software technology has changed, compared to transportation technology.
The point: planning tools will only get you so far.
Planning tools that actively push you toward ignoring black swans?
That's something to take with a serious grain of salt.
People who use those tools without realizing that is what they are doing?
I'm sorry. It's just very hard for me to take that seriously at all.
But I have been wrong before, and I'm open to dialogue.
Let's reason together.