I've heard this term lately: Risk-Based Testing. The idea is, essentially, to prioritize your tests by risk, and do the riskiest (and most painful if they fail) things first.
If you think about it, that means finding the tasks that have the highest bang for the buck - and doing them first.
Now isn't that just plain good testing?
Or, to put it a different way - can you think of a form of good testing that does not consider risk?
I brought the question to the twittersphere this morning and got some interesting replies. Ben Simo and Ron Jeffries pointed out that Acceptance Test Driven Development, and some implementations of TDD, often don't address risk.
Is it fair for me to call that "bad testing"?
Well ... maybe. It depends. It's probably time for me to introduce the Bowl of Fruit problem.
Imagine a Bowl of Fruit. It has a lot of things in it. It's got some bananas, some grapes, some oranges. We all like the bowl of fruit.
We go to Fruit conferences. We get up in front of people and talk about the Fruit. We argue a lot.
And, suddenly, I wake up one morning and realize that you are interested in grapes and I prefer bananas.
That is to say - we keep using this word 'test', but we get different value from it.
Some people value testing as a form of risk management - as an investigative activity to enable decision makers. Others are more interested in using tests for a different purpose.
For example, Acceptance-Test-Driven-Development folks might be more interested in exploring and communicating requirements than in critical investigation. Developers using TDD might be more interested in enabling refactoring, or in exploring the design or API of the software.
In both those cases, the person is talking about 'testing' but is not particularly interested in risk management. Oh, they might appreciate risk management as a side effect, but it's not at the top of the stack. They are interested in the grapes, not the bananas.
One way to tell is by the language used; inevitably you'll hear something like "... and it's not just testing, you also get (benefit x)."
Nothing's wrong with that, except perhaps using "just" as a pejorative, which minimizes its impact. I, personally, am interested in "just" testing - testing for its own sake - as a part of the value proposition of delivering working software, which is the super-goal. (Or making money, having a fulfilled life, and other meta-goals.)
But when we focus on other attributes of the bowl of fruit, we shouldn't be surprised that risk management isn't covered well. So you might say that aspect of software testing - the aspect I care the most about - is done poorly.

My take-aways
1) One thing I think the "risk based testing" movement /has/ done is move the conversation toward making explicit and conscious trade-offs about risk, instead of making them implicitly. Another is to provide tools to people who might not otherwise have them. In that, I think it's a good thing.
2) Instead of arguing about approaches or words, we can instead start by focusing on the goals of testing. If someone has different goals than I do - well - of course they'll come up with a different testing strategy. And that might be just fine.

Note 1:
Thanks to my colleague and friend Sean McMillan, who introduced me to the Bowl of Fruit problem with regard to software requirements. The original idea, as far as I can tell, came from Collaborative Game Design Theory.

Note 2:
Please don't misread this to mean "Heusser thinks ATDD or TDD are bad testing." When, as a developer, I've used TDD, a large portion of what I used it for was risk management. As a tester or PM, when we used ATDD, a large portion of what we used it for was risk management. But then again, I am actively interested in risk management. Some people have ... less interest.