Last time I talked about two Latin root words, Qual and Quant, and how
the human mind is capable of grasping quality through evaluation, while
the computer can only deal in quantities. I also pointed out that when a
neural net, a form of artificial intelligence, looks like it is
evaluating, all the computer is actually doing is trying to compare the
current input to decisions other humans made earlier.
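The point above can be made concrete with a toy sketch. Everything here is hypothetical -- the data, the labels, and the nearest-neighbor approach are my own illustration, not anything from a real system -- but it shows the shape of the argument: the machine never evaluates anything; it only measures distance to judgments humans recorded earlier.

```python
def classify(sample, labeled_examples):
    """Return the label of the closest human-labeled example
    (a 1-nearest-neighbor lookup -- pure quantity, no evaluation)."""
    def distance(a, b):
        # Squared Euclidean distance: the only thing the machine "knows"
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(labeled_examples, key=lambda ex: distance(sample, ex[0]))
    return closest[1]  # echo the decision a human made earlier

# Hypothetical prior human triage decisions:
# (severity score, reproducibility) -> label a human chose
human_decisions = [
    ((9.0, 1.0), "fix-now"),
    ((2.0, 0.2), "defer"),
    ((8.0, 0.9), "fix-now"),
    ((1.0, 0.5), "defer"),
]

# A new bug report comes in. The machine does not judge it;
# it repeats the nearest prior human judgment.
print(classify((8.5, 0.95), human_decisions))  # prints "fix-now"
```

All the "intelligence" lives in the labels the humans supplied; swap those labels and the same arithmetic produces opposite answers.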
The concern I tried to express before was an obsession with metrics,
with hard numbers, with analysis and formula over evaluation and
thinking. Read any critique of modern education, of the Master of
Business Administration, or of management consulting, and you'll likely
find this type of argument.
This discussion about an over-reliance on formula and method goes beyond
education; Paul Feyerabend applies it to the philosophy of science in
his book titled, yes, "Against Method". Other fields, like computer
science, also spend too much energy attempting to make their work
formulaic or predictable, like some of the older, more well-understood
branches of physics. A term for this is "physics envy."
In software testing, this shows up as pseudo-science: metrics like
defect count or developer/tester ratio used without context. Cem Kaner
has done some great work in that area; I don't feel a need to rehash it
here. Of course those measures are bogus -- I'm more interested in how
they gain popularity and what we can do to change it.
To do that, let's talk about a real example of physics envy in software
testing. The Google Test Automation Conference in 2009 had a panel
discussion on proving the value of testing, on ROI and metrics. At one
point, one of the panelists said that getting to hard numbers on testing
was sort of like searching for signs of life on Mars. There might never
have been life on Mars, goes the thinking -- or we might never find it.
"But we must do the search. Even the search has value."
Well, okay. Maybe. There might be some value there, sure.
But something really bugged me about that statement and the way it was
presented. I spent a fair amount of time thinking about it, and came to
one conclusion: the entire discussion was about quant. There was nothing
about qual; nothing about what we humans are good at.
I realized that a main endeavor of several of the folks at GTAC was to
do exactly that - /Automate/ /Testing/. To do that, you've got to pull
the entire 'quality' field out of Qual and put it into Quant.
To do that, you've got to completely ignore evaluation -- treat it like
it doesn't exist.
And if you do that, you need hard numbers, so you end up making silly
statements, such as the claim that getting hard numbers about our value
is like finding life on Mars.
Why do I think it's a silly statement?
The claim is that getting hard numbers is an imperative -- that it is
the only way to demonstrate the value of testing. To prove that is not
true, all we have to do is provide a single other way to do it.
More in part III.