First, the video:
Seriously, please, watch the video. It's only a minute. Then, and only then, scroll down.
(This space intentionally left blank.)
(This one too.)
(More intentional whitespace/delays for people whose eyes wander)
(You get it)
Ok. The video demonstrates that when we focus deeply on one subject, our brain has tunnel vision - we miss other things that may be happening in the application. The more scripted the work, the more likely we are to miss failures outside the specific parameters of the test case's "guidance."
Now for application. Let's consider a typical "test case" for a simple calculator: press 4, press plus, press 4, press equals, expect to see 8.
What if the screen layout is wrong? What if, after we press plus, the cancel button becomes disabled? What if, on the second display, one of the icons loses its transparency? What if we run it twice in a row and get "44" as an answer? What if anything else goes wrong besides the number eight appearing on the screen?
I submit that behind every test case is the hidden concluding sentence "... and nothing else went wrong."
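To make that concrete, here is roughly what the scripted check looks like once automated. (A minimal sketch in Python; the Calculator class is a toy stand-in for the application under test, not any real tool's API.)

```python
# A toy stand-in for the application under test - just enough to run.
class Calculator:
    def __init__(self):
        self.display = ""
        self._accumulator = 0

    def press(self, key):
        if key.isdigit():
            self.display += key
        elif key == "+":
            self._accumulator = int(self.display)
            self.display = ""
        elif key == "=":
            self.display = str(self._accumulator + int(self.display))

def test_four_plus_four():
    calc = Calculator()
    for key in ("4", "+", "4", "="):
        calc.press(key)
    # This assert is the entire oracle. Layout, button states, icon
    # transparency, a second run producing "44": none of it is checked.
    assert calc.display == "8"

test_four_plus_four()
```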
'Oh, that's obvious,' you say - yet every day I talk to people and organizations that certainly act as if it were not true.
The general conclusion for testing is an increasing skepticism of heavily scripted ('manual') tests, and I agree. We can write that test at many different levels of specificity:
- Click 4, click plus, click 4, expect 8
- Add four plus four, expect 8
- Add two whole numbers
- Add two numbers
- Experiment with adding
- Perform arithmetic operations
- Test the calculator
I think that's a good thing, and credit is due: I first heard of this video from Jon Bach at STAREast in 2007, and I understand Eric Peterson used this insight at Agile 2008. Still, I hadn't seen it on the web and thought it might be new for you.
But there's more.
Consider the two most common kinds of GUI-driving test automation: Record/Playback and Keyword-Driven.
Record/Playback has to see a successful test run in the first place in order to have something to compare to. After that, it will notice every change in the GUI - every single one. Resize your window, display a different day in the lower right-hand corner, move a button, and it'll throw an error, regardless of whether or not a real problem occurred.
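At its core, the playback comparison is an exact diff against the recording. (A sketch using the Pillow imaging library; the screenshot file names are hypothetical captures, not any real tool's output.)

```python
from PIL import Image, ImageChops  # Pillow imaging library

def playback_check(baseline_path, current_path):
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    # difference() requires same-size images, so a resized window
    # fails here before we even compare pixels.
    diff = ImageChops.difference(baseline, current)
    # getbbox() is None only when the screens are pixel-identical, so
    # any change at all - today's date in a corner, a button nudged
    # two pixels - raises an error, real problem or not.
    if diff.getbbox() is not None:
        raise AssertionError("screen differs from the recording")

# playback_check("baseline.png", "current.png")  # hypothetical files
```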
Keyword-Driven test automation has the opposite problem; it will only check to see that the result is eight. It would miss all of the examples above. Literally an automaton, it would concentrate all its effort on counting the balls in the example above and totally miss the moonwalking bear.
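A keyword-driven test is essentially a data table with one verification row. (A sketch of the idea, not any particular framework's syntax; the keyword names are invented, and it reuses the toy Calculator from the earlier sketch.)

```python
# Invented keyword names; one action keyword, one checking keyword.
TEST_TABLE = [
    ("press", "4"),
    ("press", "+"),
    ("press", "4"),
    ("press", "="),
    ("verify_display", "8"),
]

def run_keywords(driver, table):
    for keyword, arg in table:
        if keyword == "press":
            driver.press(arg)
        elif keyword == "verify_display":
            # The only checking keyword in the table; anything the
            # table doesn't name is invisible to the run.
            assert driver.display == arg, f"expected {arg!r}"

run_keywords(Calculator(), TEST_TABLE)  # Calculator from the sketch above
```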
That's the thing about test automation: it lacks intelligence.
James Bach covers this issue with his use of the term sapient testing, and Michael Bolton with his distinction between testing and checking. Yet I haven't seen this distinction applied much when it comes to browser-driving tests, say with Selenium. The discussions tend to imply that driving the GUI is 'just' developer-facing testing cranked up a notch. Yet with developer-facing tests, the developer controls the interaction and can examine every little bit that comes back from the function call.
Suddenly, with a GUI, a lot more can go wrong.
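In Selenium terms, the customer-facing 'check' often reduces to a single element lookup. (A sketch using Selenium's Python bindings; the URL and element ids are invented for illustration.)

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("http://example.com/calculator")  # hypothetical app
    # Invented element ids for the calculator keys.
    for key_id in ("key-4", "key-plus", "key-4", "key-equals"):
        driver.find_element(By.ID, key_id).click()
    # One element, one property. The rest of the page - layout,
    # disabled buttons, icon transparency - goes by unexamined.
    assert driver.find_element(By.ID, "display").text == "8"
finally:
    driver.quit()
```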
Let me ask this question about customer-facing 'test automation': how much of it is counting the balls that are passed around by the black team? How much of it is missing the moonwalking bear - and how can we compensate for that?
(Someone is going to point out that there are specific branches of the test automation literature that recommend adding randomization to tests, calling them in a random order, or otherwise attempting to introduce something closer to artificial intelligence. That's fair and important; it's a good thing. I can't help but ask, though: why is it that such ideas have had such narrow adoption and discussion?)
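For what it's worth, the simplest version of the random-order idea fits in a few lines. (A sketch: shuffle independent checks so order-dependent bugs, like the "44" above, get a chance to surface; the seed is printed so a surprising failure can be replayed.)

```python
import random

def run_in_random_order(checks, seed=None):
    if seed is None:
        seed = random.randrange(2**32)
    print(f"random seed: {seed}")  # rerun with this seed to reproduce
    rng = random.Random(seed)
    order = list(checks)
    rng.shuffle(order)
    for check in order:
        check()

# Running the same check twice back-to-back is exactly the kind of
# sequence a fixed script never tries.
run_in_random_order([test_four_plus_four, test_four_plus_four])
```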
More to come.