I've been building an answer to the question "When should a test run unattended?", choosing my words carefully, differentiating between checks that run automatically and greenbar unattended versus using tools to become a sort of "cyborg tester."
But there is another problem -- that pesky term "automation."
When we automate things, we generally assume that the automation is replacing the human, doing the exact same standard, repeatable things that the human does.
When we think of tests as cases, where you type "2" in one box, "2" in another, change the drop-down to "minus", click "submit", and see the number "0" in a specific element, then that sounds right.
Is that really how testers work?
In my experience, good testers go off script. They are curious. They notice things that go wrong that are not in the "expected results" list. They notice things that are not wrong, but lean in that direction -- they think, "Hmm ... that was a little slow. I wonder, if the query is a little more complex, can I make it too slow?"
Computers don't do any of those things.
For that matter, there are a huge number of curiosities, if not outright requirements, that a human tester might check (like tab order) that are seldom, if ever, tested in an automated fashion.
My colleague Michael Bolton uses the term "testing vs. checking" to describe this difference. He differentiates human thinking from a pass/fail result determined by a rule.
So when you are writing the automation, running it, watching it, and adjusting it, you are testing.
Once the automation is checked into a build and is running automatically without human oversight, it won't pick up anything outside of its original decision rules -- which, over time, tend to become invalid and out of date.
In Michael's words, a 'check' is:
- an observation that is linked to
- a decision rule (one that produces a bit -- yes/no, 1/0, pass/fail, true/false) such that
- the observation and the decision rule can be applied non-sapiently.
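To make that definition concrete, here is a minimal sketch (in Python, with hypothetical function names) of the calculator example from earlier expressed as a check: one observation, one decision rule, and a single bit out, all applied without any human judgment.

```python
def subtract(a, b):
    # Stand-in for the application under test (hypothetical).
    return a - b

def check_subtraction():
    # Observation: exercise the feature and capture the result.
    observed = subtract(2, 2)
    # Decision rule: produces a bit -- pass/fail, nothing more.
    return observed == 0

# Applied non-sapiently: no one has to think about the answer.
print("PASS" if check_subtraction() else "FAIL")
```

Note what the check cannot do: it will never notice that the call was "a little slow," or that the tab order is wrong. It only answers the one question its decision rule encodes.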
There are plenty of times when we may want to automate checks. Test-Driven Development, or TDD, is one situation where the programming community agrees that writing automated checks saves time and improves quality.
The downside of all this is that when we talk about "test automation", we are probably misleading people.
Which means the right question might not be "When should a test be automated?" but instead "When should a check run unattended?"
This is tough stuff. It's challenging. It's not trivial. The naive approaches are intuitive, obvious, simple ... and fail a lot.
I submit that those of us who try to do better might have some communication problems. We might need to educate the market. We might lose a few gigs or leads because we use funny words.
But the words we use will be accurate, and our work will be good, and smart people tend to turn out okay.
Maybe I'm tilting at windmills.
Then again, ten years ago, Cem Kaner and James Bach realized that the term "Best Practice" couldn't keep its promises, and started the context-driven testing movement. They seem to have done all right by that.
For now, when someone asks "When should a test be automated?", I won't have an answer.
But I will have a conversation, and I have some guidelines around that.
It's a start.
"... and Miles to go, before we sleep."
Alan Page just released a blog post that is a smashing good explanation of what I'm trying to say here. Page covered the same ground, did it better, and did it in a way I admire stylistically -- with more examples. Go check it out.