It all starts in a bar in Germany.
No, really: I was at a bar in Germany last week, talking about the differences between "Agile Testing" and "Traditional Testing" with Xu Yi, Huib Schoots, and Pete Walen.
I was explaining the disappointment traditional testers feel with the Agile Testing book by Crispin and Gregory.
"Yes, yes, it explains how an Agile team is different," I argued. "Yes, it talks about unit and integration and system and acceptance, the four quadrants, and all the rest. But where in the book does it actually talk about testing?"
Where does it help me figure out what tests to run to decide if the software is good enough to ship?
Answer: It doesn't.
What is Going On Here?
Xu Yi suggested that no single book on testing is ever going to be enough. Real testers, skilled testers, will need to read from a variety of sources, one of which might be the Agile Testing book.
If that is true, then the Agile Testing book needs to be about how testing under Agile is different, not the stuff that is the same.
In which case, you wouldn't want a lot of material about how to, say, come up with test ideas on a specific piece of software, or how, once you ran the first test, you might adjust your strategy in real time. That sort of problem applies to any kind of testing. The argument is that these details need to be in the "Just Testing" book.
I want to acknowledge Xu's position -- the guy has a point.
At the same time, I think that something else is going on.
Imagine that it is 2002, and you are testing the eWidget 2.1 application. To do that, your boss drops a CD on your desk and points to some Word documents on the network. "We need you to test the subtotal function. It is the major new feature for the release."
What does "testing subtotals" mean?
You sit at your desk, read some documents, write some documents, and do some testing. You probably file some bugs or produce a report.
Now think of all the pressures on you. You have to find the important bugs fast. You have no communication tools, no connection to a developer or business person. Those of us who were smart and able, back in the day, would find a way to have lunch with the product owner, or drop by the programmers' cubicles with some excuse -- when we were actually on a fishing expedition. The team might have a "team status" meeting once a week with the project manager.
But by and large, we were by ourselves.
To cope with this, we came up with a bunch of skills: quick attacks, exploratory testing, test strategy models, domain tests, equivalence classes, boundaries, state transitions, decision tables -- we had a pile of tools.
We needed them.
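Those techniques translate directly into tests. As a sketch -- using a hypothetical subtotal function standing in for the eWidget feature above, not any real product's code -- equivalence classes and boundaries might generate checks like these:

```python
# A hypothetical subtotal function for illustration only.
# It sums (quantity, unit_price) line items.
def subtotal(items):
    # Work in integer cents to avoid floating-point drift.
    cents = sum(qty * round(price * 100) for qty, price in items)
    return cents / 100

# Equivalence classes: one representative test per class of input.
assert subtotal([(1, 9.99)]) == 9.99               # single line item
assert subtotal([(2, 9.99), (1, 0.50)]) == 20.48   # several line items

# Boundaries: the edges of each class, where bugs tend to hide.
assert subtotal([]) == 0             # empty order
assert subtotal([(0, 9.99)]) == 0    # zero quantity
assert subtotal([(3, 0.0)]) == 0     # free item
```

The point is not the code but the classes: empty, typical, zero, degenerate -- each is a distinct test idea a tester has to generate from somewhere.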
Now consider the "Agile" Team, all living in the same (open!) room, breathing the same air. Before the programmer starts coding, we get the key players together and talk about what could go wrong, and how bad that might be. When the programmers are coding, we talk to them about what they are doing, and, after they finish, what risks they see.
Compared to the "sit in my cube" model of testing, we need the techniques much less -- the risks jump out at us!
I hold that Agile Testing radically de-emphasizes the importance of traditional test techniques because "The happy path is pretty easy, and, hey man, if we just get everyone in a room and talk about it, the big risks become obvious."
Okay, they didn't actually say "hey man."
Still, I can get behind the idea of de-emphasizing the test techniques.
I'm just not so sure about the radically part.
I call this problem of what to test and how to test it the "Great Game of Testing." It is something I pursue, aggressively, as both hobby and profession. Yet I find it under-represented in the literature.
It is under-represented on the web.
It is under-represented at conferences.
As my friend James Bach puts it, if we hired people off the street as helicopter pilots, gave them no training, and expected them to fly aircraft, we would expect a lot of crashes. Yet that same naiveté -- the expectation that testing "should be easy," that "anyone can do it," the focus on the accidental elements of testing while ignoring the essence -- is everywhere.
Over the past few years, I have seen less and less focus on where test ideas come from on some teams, and more and more releases with buggy software.
Agile Software Development provides us with some techniques to close that gap, to make discovering what to test easier and more helpful.
But should traditional techniques go away?
I don't think so.
Some Good News
I have been trying to understand, for years, why the agile folks were so reluctant to talk about the “Great Game”, and why, when I brought it up, they yelled something about “Whole Team” at me.
“Whole Team” really can change the way we think about the work. With “whole team” we don’t expect a lone tester to figure it all out, then take the blame when a bug gets through. Instead, the whole team discusses the expected behavior and the business and technical risks.
If a bug gets through, the whole team failed, not one person.
This is downright healthy.
So there are plenty of good things here.
One place I think we can still contribute is more focus on test techniques and building skills -- spreading experience and knowledge to testers and, to a lesser extent, the whole team.
Between blogs, Twitter, books, training, conferences ... I think we have a fair chance of doing just that.
Agile Software Development changed the world.
Let’s go do it again.