“Just back a little… bit… more…” Jenny struggled with her words, so much of her energy consumed in processing the image in her viewfinder. Her assistant Martin drew the disc-shaped diffuser back an amount so small the model could hardly notice. Back lights, diffused and reflected, flashed as Jenny took the picture. “OK, let’s see how that looks.” Jenny and Martin moved almost apprehensively to the computer screen to look at the photo that had been beamed wirelessly to the nearby laptop (yeah, the technology is cool and amazing). “Nope, not there yet. His face is still too dark; we’re not getting it.” Martin lunged to the reflector stand and turned it left, then right, while Jenny again focused on the image in the viewfinder. “Yes! That’s it! That’s got to be it.” Together with the art director, they again crowded around the laptop to see the image. “OK, we got it; now let’s work on his expression.” The model changed expression and froze, lights flashed, bits got beamed, images were scanned for any possible flaw, sighs of relief and expressions of joy and excitement followed, until finally the ‘right’ frame flashed into view on the screen. And the whole process was repeated until the scene was ‘complete’.

I’ll try not to romanticize photo shoots in this article, since I’m sure it’s not all glamorous, but there are many wondrous things about the process that are worth highlighting in the context of exploratory testing. Let’s start with the end in mind: what does the end of shooting a scene really mean? What does the end of an exploratory session mean? I like the thought that both are creative processes without a definitive ‘end’. Both require judgement and skill, sometimes of more than one person, to decide to move to the next target. The skills are multi-disciplinary, too. In testing, we investigate technical, functional, usability and aesthetic concerns simultaneously and independently, mixing and mashing them as we go until we believe we’ve captured the essence of the test target, that is, the feature we are investigating. In photography, lighting and elements of composition, among others that I won’t claim to understand, are used the same way to capture the essence of the subject. What I learn from this part of the analogy is that the “end” is really the beginning of someone else’s understanding of the subject, be it test target or subject of a photo.

With this mindset, you can see how important it is to show your work and communicate your observations. Communicating only conclusions such as “pass” or “fail” subverts your audience’s participation; it gives them a shortcut to a conclusion, but it also leaves them in the dark. That hardly describes making an informed decision.

The idea that exploratory testing shines light on both good and bad characteristics of a feature might be new to testers who are used to going bug hunting. Observing what works needs to be in your repertoire just as much as observing what doesn’t. In some situations, the age-old consulting adage of starting with the bad news and ending with the good news – the solution, the way through – is a useful style when talking to developers. It demonstrates that you have taken the time to understand their mental model of the way the feature works, and that gives your observations more credibility. In my own practice I’ve taken this one step further. My goal is to eliminate bug reports in favour of feature reports, a reporting style I’ve started to refer to as “feature advocacy”. My investigation results in a report, and that report might contain observations I found delightful, and it may contain observations that didn’t match my expectations, or what I believed to be the expectations of the stakeholders I am investigating on behalf of. It’s the feature that goes backwards in the workflow if the observations cause the decision-maker to reject it; it’s the feature that will be referenced in the user and support manuals; it’s the feature that will actually be used. So for now, it makes sense to me to enhance and share information on the same basis. Calling it a feature report also gives me the chance to lavish praise on those who build delightful features; praise is possible in a bug report too, but it feels incongruent there.
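To make “feature advocacy” a little more concrete, here is a minimal sketch of what a feature report might capture, written in Python. The structure and field names are my own illustration of the idea, not a prescribed format:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FeatureReport:
        """One report per feature: everything observed, good and bad."""
        feature: str                                              # the feature under investigation
        delights: List[str] = field(default_factory=list)         # observations that exceeded expectations
        concerns: List[str] = field(default_factory=list)         # observations that missed expectations
        open_questions: List[str] = field(default_factory=list)   # things still to investigate

        def summary(self) -> str:
            return (f"{self.feature}: {len(self.delights)} delight(s), "
                    f"{len(self.concerns)} concern(s), "
                    f"{len(self.open_questions)} open question(s)")

    # Hypothetical example, for illustration only.
    report = FeatureReport(
        feature="Export to PDF",
        delights=["Progress indicator keeps the user informed on large files"],
        concerns=["Fonts substituted without warning when unavailable"],
    )
    print(report.summary())

The point of the structure is simply that delights and concerns travel together; neither arrives alone the way a bug report does.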

Let’s move on to the exploration itself and more about making decisions. As investigators, we certainly make decisions. We decide when to end the investigation, as above, but along the way we also choose the path; some choices are drastic changes of direction, others slight course adjustments, depending on what we’ve found so far. I mentioned the “right” frame when describing the photo shoot. In testing, “right” is sometimes just as artful; that is, your characterization of the feature requires judgement and all the observational skill you can muster. In photography, automatic focus doesn’t take the place of a critical assessment of the resulting image. In fact, sometimes automatic focus has to be abandoned in favour of manual focus, depending on the lighting conditions and the requirements of the scene. You still go for the shot. In testing, automated test results are not our decision; they are input to it. They inform the test process, but do not define it. True liberation for the tester is to have a portfolio of styles, techniques, and heuristics to use as needed. The conditions might not permit automation, but you still go for the shot; that is, you still test, and you still decide your next step based on those results. The team might also have used unit-level and feature-level test automation to build the right thing the right way, but again you still have to evaluate and decide your next step based on those results. Ideally your next step is enhanced by that level of automation so that you can spend your energy on the really interesting points of investigation. As the photographer relies on her powers of observation, so does the exploratory tester. With sharpness taken care of, thoughts of composition can take over; similarly, with the right thing built the right way, thoughts of stakeholders, personas, value, and delight can take over.

So as we explore, we take notes and we describe what we find so that we and others can judge whether or not we know enough about that feature, or if we need to investigate further. As the photographer brings together elements of photography such as lighting and composition to create that perfect shot, an exploratory tester brings together elements of technology such as functionality, performance, and usability to characterize features. Composition in photography is arguably about five things: patterns, symmetry, texture, depth of field, and lines. To compose is to manipulate those elements to highlight the subject. What’s “composition” in testing? Are there only five? Are there even only five categories? Here are some examples to see the analogy through, but know that my message is for you to build your own set of heuristics, that is, your own way of highlighting what is good or bad about features.

  • Workiness. You’ve heard of truthiness. This is like that, but different. Seriously, whether you can or cannot complete the feature’s happy-path scenario repeatedly is part of your feature report.
  • Patterns. We use patterns as we observe things; they also help us communicate. On one project we extracted all the text from all the bug reports, regardless of current status, and created a word frequency chart using Wordle.net; we used this to highlight a hot spot in the application that wasn’t apparent before (a sketch of the counting step follows this list). See the attached image for an example of a word frequency chart created from bugs reported for Adobe Flash Player (generated from the first 1000 results from the public Adobe Bug and Issue Management System). Your feature report would identify whether a feature is part of a pattern or not.
  • Consistency. Highlight a feature that stands out from others, in good or bad ways. The feature may be knock-your-socks-off delightful, or it might be so different that it, uh, defies logic. I’ll never forget the first application one of our clients outsourced; the application did exactly what they said it needed to do, but there was no File menu, no Edit menu, and no tab order on any of the screens. You get the picture. We expected consistency with other applications that might be found on a typical desktop, even though that wasn’t in the requirements our client had sent to the supplier. That was a problem of external consistency, but internal consistency might just as easily be an element of a feature report.
  • Utility. How smooth might the user’s adoption of that feature be? Clearly related to workiness, yes, but there is more context that you can apply. I remember deciding in our test lab that a solution was ready for on-site user acceptance testing. Testing started on a cold winter day, and the room where the test machine was located was about 5 °C (40 °F), so our business tester wore gloves. Writing on paper wasn’t a problem with gloves. Typing and using a mouse was. We eventually did launch, but not with that feature in the workflow and not with the machines installed in that room.
  • Depth of Impact. Is the direct workflow affected, or downstream workflows, or both? Positively or negatively? You don’t have to work on a business intelligence team to know that data quality is a big deal for a lot of workgroups. Wouldn’t it be terrific to be able to report that a feature saves time downstream?
  • Pathways. Was the experience the same no matter how you accessed the feature?
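Following on from the Patterns item above, here is a minimal sketch in Python of the word frequency counting behind such a chart. The file name and the stop-word list are illustrative assumptions; it presumes the bug report text has already been exported to a plain-text file:

    import re
    from collections import Counter

    # Assumption: all bug report text, regardless of status, exported to one file.
    with open("bug_reports.txt", encoding="utf-8") as f:
        text = f.read().lower()

    # Common words that would otherwise drown out the interesting ones.
    stop_words = {"the", "a", "an", "and", "or", "to", "of", "in", "on",
                  "is", "it", "when", "this", "that", "with", "for"}

    words = re.findall(r"[a-z']+", text)
    counts = Counter(w for w in words if w not in stop_words)

    # The most frequent terms point at potential hot spots in the application.
    for word, count in counts.most_common(20):
        print(f"{word:20} {count}")

A tool like Wordle.net draws the picture, but the counting is really all there is to it; the judgement lies in reading the hot spots the chart reveals.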

Notice that I’ve not made my list about “composing” test cases, since I generally don’t do that anymore. For my exploration, composing is about describing the feature, and about continually improving at that. Shifting the focus from bugs to features is a first step toward guided exploratory testing, eloquently framed by Rob Lambert (http://pac-testing.blogspot.com/2009/03/normal-0-false-false-false-en-gb-x-none.html) and an idea that I’ve been promoting using checklists. It’s easy to think of elements that describe features in the negative sense, but I encourage you, as part of your exploration, to practice thinking about features in the positive sense too, so that you can characterize them in both negative and positive ways; this builds credibility with those writing the software, since it proves you have taken the time to understand what they’ve done.

In the end, it’s about helping the team and stakeholders to make informed decisions. Since they make decisions about features, it feels appropriate that I align my compositions with what they have already identified, described, discussed, demo’d and delivered. The enjoyable aspect of testing using an investigative, exploratory style is that nothing limits the way you do it. It’s your exploration and therefore your art. You just have to poke that box and do it.


About the Author

Adam Geras is a researcher, coach, and speaker for those who test software. He specializes in systems delivery methods, particularly methods that enhance communications within teams. Most recently Adam has been adapting Agile principles and methods to large-scale test management services for enterprise programs and projects. Adam has worked in IT for over 20 years, initially as a developer and architect, and over the last 8 years as a quality assistant, test manager and coach on large projects.