Quality can be defined as meeting the customers' needs. Juran offered this definition in 1988, defining quality as
"fitness for use."i Testers like this definition because it gives them a "north star" to guide their actions, and it
reinforces a role that most testers relish: that of customer advocate. However, Crosby's earlier definition of quality
as "conformance to requirements"ii still holds strong sway, and many test processes are guided by test cases correlated
directly to requirements. Requirements, however, are not the same as the customer. Here I will describe a more
customer-connected approach to software development and quality.
By requirements I refer to the classic list generated early in the project and then used to drive test case design in
a many-to-many traceability matrix during Requirements Based Software Testing. Capturing the customers' needs and
documenting a shared understanding of what is to be accomplished are good things, but requirements like these have
serious shortcomings:
- Requirements are hard: Many software professionals can
attest to this from experience. Matt Heusser makes this point number one in his Requirements Methodologyiii
and goes on to say that creating good requirements is a hard-learned craft. If the insufficiently skilled create
the requirements and they are bad, then any quality efforts derived from them will be equally bad.
- Requirements are often missing: This is so common that
the web is full of advice to testers on how to test without requirements; one such example is Daven Kruse's
recent article, "No Requirements? No Problem."iv It is telling that how to test without requirements is a
mainstay of software tester interviews.
- Requirements are fragile: The upfront requirements document
is not able to adapt as the project progresses and new information is learned.
- Requirements are not well understood: The goal is to create
shared knowledge across the team, but troubled projects are typified by developers who create features different
from what management wanted, and testers who disagree with developers on what was supposed to be done.
Should we dispense with requirements altogether? Agile methodologies are often mischaracterized as doing this, but in actuality
they implement processes to increase customer input and respond better to change through iteration. Whether you
use Agile or not, there are best practices you can employ to move beyond requirements and focus on customers' needs.
At the heart of this approach is the scenario, a narrative story told from the customer's point of view that explains their
situation and what they want to achieve. But the specifics of what constitutes a scenario are subject to wide interpretation,
so here I will describe specifically what constitutes a scenario in best practices employed at Microsoft.
To address the customer's needs, capturing their point of view is crucial. The scenario should tell a story about the
challenges a customer may be facing or the tasks that the customer needs to accomplish. It should tap into the emotional
experience, perhaps the frustration of the user at being unable to accomplish what they want, or the satisfaction they'll
experience when the feature or product in question is implemented. The scenario must tell their personal story.
Perhaps the hardest part of the customer connection is that the scenario must leverage deep insight about the customer needs.
It is by tapping into the sometimes unarticulated needs that the biggest product breakthroughs are made. Think about almost
any runaway hit product, like the Xbox 360 Kinect or the Apple iPad. They enabled users to do things not previously thought of,
and things that even the users would not have asked for prior to the introduction of the product.
But a good story can easily delve into the realm of fiction. In developing software for our customers we need to deal in fact,
so the scenario must be based on research. Start with just a few customers and talk to them. If your product has a wide market,
then talk to a representative cross-section of your potential customers (multiple personas). The goal here is to get a deep
understanding of who they are and what they need. During these customer interviews, respect their opinions and do not assume their
experience is the same as yours. Seek to understand how they make sense of their own world. Don't jump to conclusions; instead, gather as
much data as possible without attempting to synthesize insights during the interview. Look for those unarticulated needs that the
customer doesn't realize could be solved, or problems they work around on a regular basis and have given up
trying to solve. Based on this research, generate hypotheses and then test them to arrive at solid conclusions about
your customers and their needs. Such testing will generally be more statistically rigorous and involve large numbers of users compared
to the interview stage. Focus groups and polling can generate useful data on what customers think. For websites, instrumentation
can let us know what customers do (click, request pages, checkout a shopping cart) and when. Online experimentation (also known as A/B testing)
can use this data to test users in production and evaluate hypotheses.v
Since customers do not know they are in an experiment, we get a true measure of what they do, not what they say they would do.
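The mechanics of an online experiment can be sketched simply: each user is deterministically assigned to a variant, their behavior is logged, and conversion rates are compared across variants. Here is a minimal sketch in Python; the function names, experiment name, and event format are illustrative assumptions, not any particular experimentation platform's API:

```python
import hashlib

def assign_variant(user_id, experiment, variants=("control", "treatment")):
    """Deterministically bucket a user into a variant by hashing the
    user id together with the experiment name. The same user always
    lands in the same variant, without storing any assignment state."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def conversion_rates(events, experiment="new-checkout"):
    """Tally per-variant conversion rates from (user_id, converted)
    event logs, e.g. whether each user completed a shopping cart."""
    counts = {}  # variant -> [conversions, total users]
    for user_id, converted in events:
        stats = counts.setdefault(assign_variant(user_id, experiment), [0, 0])
        stats[0] += int(converted)
        stats[1] += 1
    return {variant: c / n for variant, (c, n) in counts.items()}
```

A real experiment would add statistical significance testing over much larger samples, as the Kohavi paper cited above describes; the sketch only shows why deterministic bucketing gives a true measure of behavior without the customer knowing they are being tested.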
Finally the scenario needs to be implementation-free. The focus is on the customer and their needs, not the solution. An implementation-free
scenario gives us the freedom in the next stage to come up with many ideas on how we may design solutions to satisfy the scenario. Not only
should multiple ideas be generated initially, but design also needs to be an iterative process so that new information about the customer, or
about the chosen solution, can be accommodated. Brainstorming is an effective way to generate the initial set of ideas, then to test these
ideas, it is effective to get them in front of real users as soon as possible. As mentioned previously, online experimentation is one way to
do this, but there are other ways to test in production. Early user testing involves exposing a limited set of users to an early version
(beta) of the product and collecting feedback. Microsoft has automated this process with its Internet Explorer and Office products by
supplying easy-to-use wizards, built into the beta, for providing feedback. Even earlier than beta, we can expose users to prototypes.
Besides being early and rough, a key aspect of a prototype is that it is not the code on which the final product will be built, which gives
developers the freedom to produce prototypes cheaply. They need to be neither well coded nor maintainable, as only the interface exposed to
the user counts; under the hood it can be hamsters spinning a wheel. Prototypes can even be produced without code at all, and
presented on whiteboards or with pen and paper to the end user. The user "clicks" an element and a presenter manually moves, erases, and
draws to represent the result.
The role of the tester threads through all of these stages. Activities like online experimentation and early user testing are test
activities where testers should play a role in developing the test scenarios and evaluating the results. Even during the research phase
it is important that testers be represented, so that they share the deep understanding of the customers' needs and can play a role in the
composition of the scenarios. But it is ultimately these scenarios where the tester finds their connection to the customer and can drive
the product quality most effectively. Instead of static requirements generating test cases, the scenario should drive a narrative test
approach. Narrative testing means testing the customer scenario end to end, and is therefore an adjunct to functional testing. Indeed, if
much of the functional testing can be covered by developers in unit tests, then it is with end-to-end narrative testing that a tester
can provide the highest added value and the least redundancy. All of the standard testing skills apply here; happy cases and negative cases
should be covered. Edge cases are also effective with narratives, where aspects of the scenario are exercised at plausible extremes. Hans
Buwalda coined the name Soap Opera Testing for such cases, calling them "tests... based on real life, exaggerated, and condensed."vi
Here is an example of a scenario that embodies the practices discussed here:
Danielle is an aspiring author. She composes her works, consisting of short form and long form fiction, directly on her
laptop computer. Her work is her livelihood so data loss is not acceptable. Since she is composing on the fly anything
that interrupts her flow of creativity frustrates her, so meta-tasks like spelling and formatting should be handled as
transparently as possible. She often shares unfinished versions of her work with peer reviewers for feedback, and ultimately
needs to share her finished works with editors and potential publishers.
Is the solution a word processor with auto-save feature, review tools, and a common file type? Or maybe a cloud storage solution,
leveraging existing word processing software with automatic cloud to PC synchronization, and cloud-based secure sharing and review
tools? The solution may be found in many implementations. The scenario is implementation free, but it captures the customer's needs
and provides a template for rich narrative testing.
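To show how such a scenario can drive narrative testing, here is a hedged sketch: an end-to-end test that walks Danielle's story against a tiny in-memory stand-in for whichever solution is ultimately built. Every name here (the `WritingService` class and its methods) is an illustrative assumption, not a real product; the point is that the test follows the narrative, not a requirements list:

```python
class WritingService:
    """Minimal in-memory stand-in for a solution to Danielle's scenario."""
    def __init__(self):
        self.saved = {}   # document name -> last autosaved text
        self.shared = {}  # document name -> list of reviewers

    def autosave(self, name, text):
        # "Data loss is not acceptable": every edit is persisted.
        self.saved[name] = text

    def share(self, name, reviewer):
        # Unfinished versions go out to peer reviewers for feedback.
        self.shared.setdefault(name, []).append(reviewer)

    def recover(self, name):
        # After a crash, the last autosaved text must survive.
        return self.saved.get(name)

def test_danielle_narrative():
    """Narrative test: walk the scenario end to end, in story order."""
    svc = WritingService()
    # Danielle composes directly on her laptop; autosave runs transparently.
    svc.autosave("novel-ch1", "It was a dark and stormy night.")
    # She shares an unfinished version with a peer reviewer.
    svc.share("novel-ch1", "peer-reviewer@example.com")
    # Soap-opera twist: her laptop dies mid-sentence. Nothing may be lost.
    assert svc.recover("novel-ch1") == "It was a dark and stormy night."
    assert "peer-reviewer@example.com" in svc.shared["novel-ch1"]

test_danielle_narrative()
```

Note that the same test would apply whether the team builds a word processor with auto-save or a cloud synchronization service; because the scenario is implementation-free, the narrative test is too.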
While scenarios are not a panacea for all software quality problems, they can be a powerful tool in software engineering. In a world
where requirements often are bad, go stale, or do not exist, scenarios offer an alternative that gets the software as well as all the
engineers building it closer to the customer. And if we can create software that taps into those latent, unarticulated customer needs,
then that software is very likely to be a success.
Seth Eliot will be presenting Testing in Production: Your Key to Engaging Customers
at the Software Test Professionals Conference 2011
Mr. Eliot has over 16 years of software engineering experience and is a Senior Test Manager at Microsoft, where he solves exabyte-scale storage
challenges for Bing. Visit him at: bit.ly/seth_qa
i J.M. Juran and Frank M. Gryna (1988). Juran's Quality Control Handbook. McGraw-Hill.
ii Philip Crosby (1979). Quality Is Free. New York: McGraw-Hill.
iii Matt Heusser (2007). Requirements Methodology.
iv Daven Kruse (2010). No Requirements? No Problem.
v Ronny Kohavi, Thomas Crook, and Roger Longbotham (2009). Online Experimentation at Microsoft.
vi Hans Buwalda (2004). Soap Opera Testing.