Three or four things came together recently that made me want to write this article. The examples below, as with most of this series, involve the browser, but they don't have to. These kinds of problems occur with Windows applications and even batch-oriented software.
What is "The Gap"?
I've been revisiting an old blog series called "When should a test run unattended". I closed the series with the idea that what browser-driving automation does is not what humans actually do; humans can notice, think, and respond in the moment. That difference in capability is something I will call "The Gap."
My friend, Smita Mishra, replied with this comment:
Tons of queries!! waiting for your guidelines post..putting one query here - wont it defeat one of the basic purposes of Automation if we say that Automated tests execution will also need to be monitored by humans during execution...probably examples will help - not that I disagree - but wish to understand more...
Reading her comment, I think she took away a meaning I did not intend. Given that writing offers no immediate feedback, and that we come from different cultures, both geographic and technological, that isn't a huge surprise -- so I want to be clear.
To speak plainly: No, I am not saying that all tests must be watched by a human, all the time. I am not saying that all GUI-driving tests have to be watched by a human, either. Certainly, many of my clients get a great deal of value by running a suite of checks after every build, automatically, and reporting green or red results.
I am, however, saying that those checks can't think. They follow simple decision rules, and the software could easily be broken in ways the checks are not looking for. They may be helpful, but they are not sufficient.
Or, at least, you take on risk by relying on those checks alone.
A Different Way To Look At It
It's possible that, to you, this risk is no big deal. You have millions of customers, and, if a few hundred see a problem and tell you, you can fix it, pushing out a new build in minutes, with only a modest hit to your reputation. Perhaps you are Twitter. In that case, automated checks before deploy might be enough - you might be able to live with the risk of bugs that automated checks can't catch, but a human could.
But those folks aren't really the primary audience of this blog.
A Second Data Point
Recently I was in a conversation with an Agile Software Expert. While she didn't sign the manifesto, she certainly has a reasonably large name in Agile circles -- larger than mine.
She asserted that automated checks, running after every build, can prove that the software works correctly. The word she used was "prove," and she used it repeatedly.
Even if there were no gap between the thinking human and the machine, there are some things a typical browser-based automation strategy just won't check. Here are a few examples:
- The physical printing of pages
- Tab order between form elements
- What happens when you quickly flip between browser tabs
- Hitting ENTER for submit instead of the submit button
- Google Maps Style Drag-And-Drop
- Are form values retained when you click previous->next->previous in the browser
- Changes in form elements due to browser resize
- Did anything look wrong - malformed tables, misaligned text?
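To make the gap concrete, here is a minimal sketch of the kind of check a browser-driving suite typically runs, written with Selenium WebDriver in Python. The URL and element IDs are made up for illustration. Notice that the scripted path clicks the submit button; the ENTER-key path from the list above is only covered if someone thinks to script it -- and most of the other items never get covered at all.

```python
# A minimal sketch of a typical browser-driving check (Selenium WebDriver, Python).
# The URL and element IDs below are hypothetical stand-ins, not from a real app.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
try:
    # The scripted path: type a term, click the submit button, assert on the result.
    driver.get("https://example.com/search")           # hypothetical page
    box = driver.find_element(By.ID, "q")              # hypothetical field id
    box.send_keys("red bicycle")
    driver.find_element(By.ID, "submit-btn").click()   # hypothetical button id
    assert "results" in driver.title.lower()

    # A path the check above never exercises: submitting the form with ENTER.
    # A human exploring the page would likely try this without being told to.
    driver.get("https://example.com/search")
    driver.find_element(By.ID, "q").send_keys("red bicycle" + Keys.RETURN)
    assert "results" in driver.title.lower()

    # Tab order, printing, window resize, and "does anything look wrong?"
    # still go unexamined -- that is the gap.
finally:
    driver.quit()
```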
What Most Companies Actually Do
At most companies, there is some sort of informal system to compensate for this risk. An analyst, a programmer, a manager, or a tester - someone - is exploring the software for these sorts of things, at least before major releases.
That testing is implicit. The company thinks it is free, and the folks who are doing it often feel bad because they are failing to live up to some ideal of "proper testing" that is 100% automated.
All I am saying is that it is not free; it should be explicit and accounted for. You might try to do it quickly and effectively -- I am all for that. Your company might train the staff on how to do it; one company I know of in West Michigan has something like thirty programmers, with one single exploratory tester.
But they do have an exploratory tester; they are not counting on getting that testing for free.
Mind the Gap
The term "mind the gap
" is a caution to train passengers, to be aware of the space between where a train ends and the train platform begins. That's my advice for today: Figure out the gap in automation capability and what actually needs to be tested - then make sure your test strategy explicitly minds the gap.
Here's one way to check: Bring up the subject with the whole team, not just the testers. Dig for the contrarian reaction.
There's gold in them thar conversations.