He’s worked at some of the most innovative companies in Silicon Valley, co-authored a best-selling test textbook, keynoted at every international test conference, and, yes, has primary credit for popularizing the term “exploratory testing.” Did I mention he’s a high school dropout who unschools his son? (For more on unschooling, see the article “How Children Learn (to Test)” on page 26.) Of course, I’m talking about James Bach, our invited interviewee for this, the exploratory test issue of STQA Magazine.

James Bach will now answer your questions.

Question: I’ve had a few people suggest to me that exploratory testing is just a ‘fancy name’ for ad-hoc testing. How would you suggest I respond? Peter Walen, Grand Rapids, Michigan

James: Yes, and “dance” is another way of saying “boogie down.” So what? They both refer to rhythmic movement. Yet there is a difference in connotation.

Many years ago I promoted the idea of sophisticated, professional ad hoc testing. But I could not get any traction with that. Most people don’t read dictionaries, I guess, and they kept thinking that “ad hoc” meant sloppy. It doesn’t mean sloppy. It means something prepared for a particular situation. I soon grew tired of fighting that battle and decided to embrace Cem Kaner’s term for the same kind of testing. ET had no baggage, so it was easier to get past the silly stuff and communicate the essence of the ideas behind it.

Exploratory and ad hoc are two different concepts, but they both are descriptive of non-scripted testing.

Question: In the environments where I’ve used it, ET has proven very useful. However, it has been suggested that large testing organizations, large both in number of testers and in distribution of locations, are not suitable for exploratory testing because of the extreme difficulty management faces in effectively controlling test efforts. Again, I am curious: how would you respond? Peter Walen, Grand Rapids, Michigan

James: I hear that a lot from people who mistake a particular form of exploratory testing for the whole enchilada. Look, exploratory testing means you think while you test, and that your thinking affects the design of your test as you go. Is there anyone who truly says that a large organization wants me to stop thinking? And if so, is it possible that whoever is saying that has already, um, stopped thinking before they started speaking? No organization that seeks to organize intellectual work can do that by telling its people to stop thinking. That’s just a silly idea on its face.

Imagine if someone claimed that allowing people to drive their own cars was unrealistic in a large city because traffic would get all snarled up. Well, obviously we do drive our own cars, and yes, traffic CAN get all snarled up. But the alternative requires a great deal of investment to obtain a result that is a great deal less flexible.

All testing is exploratory to some degree, or else it isn’t testing. The reason people get confused is that they think ET is a rejection of organization and structure, which of course it isn’t. We apply whatever structures we think will help us.

Question: Can you explain the focus and de-focus technique used in Exploratory Testing? I have read about it, but have not attended a session where it was demonstrated. Sherry Chupka, Pittsburgh, Pennsylvania

James: Focusing and defocusing is not really a technique, in itself. Focus is an attribute of techniques. Any given technique of testing focuses you in some ways and may also defocus you. To focus means to work within the constraints of a specific pattern of testing (hence, a test script is a focusing method); to defocus is to change your pattern. Focusing also means to restrict your attention to a smaller area more meticulously, while defocusing means covering something larger but with less fidelity. Generally speaking, focusing is for studying something specific (such as investigating a bug), and defocusing is for searching for something new to study (like finding a bug).

All of testing can be seen as a process of strategic and tactical focusing and defocusing.

A great example of the dynamic is what we do when tuning an old-fashioned analog radio. We spin the dial quickly, hearing static. Suddenly, as we speed by a radio station, we hear a moment of music. Then we slow down and go back, thus acquiring the station. Going fast was defocusing; going slow was focusing. If you tried focusing all the time as you tested, you would find few bugs. But if you never focused, you would not be able to claim that you had done a thorough job of testing, since you would not be able to relate testing to any specific model of coverage or risk.

Question: If I may, I have a question for “The Buccaneer Scholar”: What are the subjects/thinkers outside of testing and computer science or engineering you think are crucial for developing as a software tester? Curtis Stuehrenberg, Seattle, Washington

James: I suggest that you study the social sciences. Here are some specific social science fields that have inspired and helped me. Google these:

  • Grounded Theory
  • Naturalistic Inquiry
  • Situated Cognition
  • Philosophy of Science
  • General Systems Thinking

As for people, check out the work of Richard Feynman, the physicist, or Julian Baggini, who has written the most readable books on general philosophy I have seen. I think the scientific and philosophical illiteracy of our field explains the strange longevity of stupid ideas that don’t work, and have never worked. Our industry seems to be run by fairy tale logic.

Question: I do a fair amount of testing in an exploratory style and I am familiar with some of the basics – but what can I do to elevate myself from getting the job done to excellence? That is to say, I am interested in ways to increase the value I provide my team and customers. David Hoppe, Dorr, Michigan

James: Here are some ideas for you:

  • Can you do a stand-up test report on zero notice? What if I gave you five minutes to prepare one? Will your test report include all three levels (i.e. product status, testing activities, justification of testing activities)? Practice that.
  • If I were to ask you what your test methodology is, could you give me a five-minute chalk talk that looked good and also was true? You should have a mental picture of testing in your mind, and some way, under pressure, to access that model. I use guideword heuristics, for instance.
  • Learn session-based and thread-based test management.
  • Have you studied the ET skills and tactics list that my brother and I published? Ask yourself how you stand on each item.

You should also develop a colleague network and make use of it. For that matter, I do free coaching over Skype, as do Michael Bolton and Anne-Marie Charrett. Come see me for a session.

Question: How little can a user know about the application’s goal, purpose, and assumed workflow in order to do effective exploratory testing? Brian J. Noggle, Springfield, Missouri

James: Effective testing requires knowledge of the product. Exploratory testing is a way of testing whereby you learn as you test. Therefore, you can do effective ET with no knowledge (since it is effective to be learning) but the quality of your testing will not be at its best until after you’ve learned enough about the product to know how to observe it, control it, and recognize important problems in it.

Scripted testing is the same way. You can’t write great test scripts unless and until you know the product. That’s why most people create scripted tests by doing exploratory testing first.

Question: What are similarities and differences between exploratory testing and something like hacking, white hat penetration testing or security testing? Would it benefit these groups to interact more and learn from each other? Daw Cannan, Raleigh, North Carolina

James: Penetration testing is inherently exploratory. Hackers are inherently exploratory. I’ve dabbled in this quite a bit, myself. I would recommend that testers read 2600 magazine and books about hacking.

Question: Can you provide one or more specific examples of what it looks like to successfully integrate exploratory testing with other test approaches on a medium to large size test project? Sean Stolberg, Seattle, Washington

James: First, it’s already happening on every project that has ever been done. Exploratory testing is not some exotic weird thing. You do it whenever you investigate a bug, for instance. Geez. This is going on naturally. Have you ever been exploring a product and then decided to make an outline or test matrix to describe specific test conditions? If so, then you already know the answer to this question.

Mostly, when I’m asked this, the questioner turns out really to have been asking a different question, so I’ll answer that different question:

Q: Can you provide one or more specific examples of what it looks like to do great testing in a way that doesn’t worry and confuse busy-body managers who don’t understand testing or trust testers and therefore wish to apply silly management theories to testing such as “you should write down each test in a procedural scripted form?”

A: Yes, that’s why we created session-based test management. In this form of test management, unscripted testing is done in sessions. Sessions are time-boxes within which the testing occurs. Each session has a charter (a little mission) and results in a session report. We can create metrics from these which are reasonable and tend to give managers warm and fuzzy feelings inside. Managers get something to count, and testers remain relatively free to do their jobs. See more about it on my website.
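The session-report idea can be made concrete with a small sketch. This is not Bach’s actual SBTM template; the field names, the time-breakdown percentages, and the sample sessions below are illustrative assumptions only:

```python
from dataclasses import dataclass

@dataclass
class Session:
    charter: str     # the session's mission, in a sentence
    minutes: int     # length of the time-box
    test_pct: int    # % of time spent on on-charter testing
    bug_pct: int     # % of time investigating and reporting bugs
    setup_pct: int   # % of time lost to setup and obstacles
    bugs_found: int

# A hypothetical day's worth of session reports.
sessions = [
    Session("Explore the checkout flow", 90, 60, 25, 15, 3),
    Session("Probe login error handling", 60, 70, 10, 20, 1),
]

# The "something to count" for managers: simple roll-up metrics.
total = sum(s.minutes for s in sessions)
on_charter = sum(s.minutes * s.test_pct / 100 for s in sessions)
bugs = sum(s.bugs_found for s in sessions)
print(f"{len(sessions)} sessions, {total} min total, "
      f"{on_charter / total:.0%} on-charter test time, {bugs} bugs")
```

The point of the roll-up is exactly what the answer describes: managers get countable, comparable numbers per time-box, while what happens inside each time-box stays unscripted.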

Question: In your Rapid Software Testing training class and in your book The Buccaneer Scholar, you start off by saying, apparently proudly, that you’re a high school dropout. I think the readers would be interested to hear how that has helped your professional success? Bernie Berger, New York City, USA

James: It helps me and it hurts me. But the reason I tell people is that I like how it defines me. (And people with degrees can say exactly the same thing.)

The kind of people I want to work with are those who understand that a vigorous habit of self-education is absolutely necessary in order to stay on top of a technical craft. By disclosing my institutional educational status, I hope to frighten away the kind of clients or colleagues for whom education is about symbols of power rather than skills and good ideas. I also want to establish myself as a maverick. That I’m a dropout communicates that I have a long history of doing things that I think are right, even when they are unpopular.

As for how being a dropout helps me, specifically: well, since I have no ceremonies or rituals to rely on for my success, I rely instead upon the simple idea that, for me, success comes through actually knowing a lot about my craft and being really good at it. I’ve known all along that I need to struggle to succeed. Nothing is handed to me because of my social status. I think my educational history has made me perpetually hungry, in certain ways.

I’m not against universities, though. I admire many people who are highly educated by institutions. Dr. Cem Kaner, for instance, is my hero, and he has helped me elevate my standards of scholarship.

Question: As a programmer, what can I do to maximize the value of both my (unit) testing effort, and my tester’s ET effort? How can I avoid wasteful overlap without leaving holes? Sean McMillan, Kalamazoo, Michigan

James: When testers know testing and love testing there is rarely wasteful overlap. However, a simple thing you can do is encourage your tester to perform scenario testing, rather than merely simple functional testing. Scenario testing means creating complex and realistic situations with the product. Long flows of actions.

I also strongly suggest putting function-level logging into your system, so that you can automatically see what functions and features have been touched during the testing. This makes testing more playful, and yet still reliable.
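A minimal sketch of what such function-level logging might look like in Python, assuming a decorator-based approach; the feature names and functions here are hypothetical, and a real system would likely write to persistent logs rather than an in-memory set:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("touch-log")

# Record of (feature, function) pairs exercised during a test session.
touched = set()

def logged(feature):
    """Tag a function with a feature name and log every call to it."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            touched.add((feature, fn.__name__))
            log.info("touched %s.%s", feature, fn.__name__)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@logged("checkout")
def apply_discount(total, pct):
    return total * (1 - pct / 100)

@logged("checkout")
def add_tax(total, rate):
    return total * (1 + rate)

# Exercise the product, then review which functions were touched.
apply_discount(100.0, 10)
print(sorted(touched))
```

After a session, the `touched` record shows which features the testing actually reached, which is the "automatically see what has been touched" idea from the answer above.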

Question: I feel as though as a profession, testers have failed to communicate the value of testing in a way that businesses can understand. This is part of the reason that so much misunderstanding still exists about what software testers do. It is also why so many otherwise capable business people cannot tell the difference between checking and testing, or understand the importance. Do you believe it is important to advocate for good software testing even today? What is your approach? Do you expect the testers in the trenches to get further someday than the first wave of thoughtful exploratory testers did? Or do you still fight the good fight and hope that business people will someday appreciate what makes testing good and understand how to identify context-driven testing from other schools of thought? Have you given up on business people at this point? When you teach and create content, who is it for? Are there any areas you’ve given up on? Lanette Creamer, Seattle, Washington

James: These are a lot of questions. I can address a lot of it in this way: Yes, those of us who study testing must vigorously advocate it to people who don’t study testing. It will always be that way, because testing is an intangible craft. What would help a lot is if there were some really famous testers out there who were good role models; and some famous projects that were saved in part by good testing. In the same way celebrity chefs can make cooking skills cool, celebrity testers could do that for testing. I’m trying to do that. I’m also trying to help others to get there.

My approach to explaining testing varies with my mood. Sometimes I do a chalk talk, where I’ll draw a picture of my test model. Sometimes I start with an example of a big bug found at the last minute by good testing. Or perhaps I might do an exercise that demonstrates the difficulty of complete testing. My goal is always the same: to show that testing is a skilled and challenging craft.

When I teach and create content, I’m aiming it at the 5% of testers who truly want to become excellent, and their managers. No one else, really.

Question: Test automation seems to be gaining in popularity. For example, it seems like most job listings are for SDETs (Software Development Engineers in Test) and very few are for exploratory or manual testers. What are your thoughts about this trend? Adam Yuret, Seattle, Washington

James: SDETs are a fetish of Microsoft and Google, primarily. You are in Seattle, so maybe that’s what you are seeing. Still, test automation is important, in certain parts of the industry. Some technologies require heavy duty tool support. The challenge is the same for SDETs as for any other tester – can you get them to study their craft? I hear noises from Microsoft that they care about being good testers, but that’s not the impression I get from hearing the specifics of how the “Engineering Excellence” program works over at Microsoft. I’d like to see them really commit to testing excellence, and I believe they eventually will. When they do, they will recruit a variety of people into their test groups, because it takes a variety to build a powerful test team.

Question: Software testing is an ever-changing occupation. Testers who care about what they do for a living constantly study/build skills. But, the test managers may not necessarily be doing so. They may be spending way too much time in meetings with other managers and not enough time in the trenches to see that the light at the end of the tunnel is a train…

How would you approach a test manager about this? How would you bridge the gap between what were the testing “standards” and how changing them is essential? How would you suggest that management gets on board with the evolution of testing? Michele Smith, Maine

James: Is it changing so much? I think what we’re doing today is basically the same process as it was when Jerry Weinberg wrote his chapter on testing in the book Computer Programming Fundamentals, in 1961. Sure, the technology is more complicated, and more people are impacted by technology. I agree there have been changes on the surface. Ripples and waves, you could say.

But regardless of change, I agree that we have a big problem with middle and upper managers who don’t comprehend testing. My general recommendation is that companies urgently grow test leads who can act as “master sergeants” or “centurions.” These test leads would not be tied up with endless meetings, but would bridge the gap between testing supervision and communication to management. A big problem I see is that the connection between management and actual testing breaks down at the tactical leadership level first, but then really collapses at the next level up. Too many testers seem frightened to push their frustrations and ideas up the management chain. They also lack the skill of explaining testing to management even when they have the nerve to do it. Into this vacuum, management then pours their weird ideas about controlling and measuring testing.

That’s why test case metrics are super popular, despite being about as meaningless to test progress as, say, a pile of chicken bones. If we had better bridges to management, we would not have such struggles.


About the Author

Matt Heusser is a consulting software tester and software process naturalist who has spent the past 12 years or so developing, testing, and leading dev/test efforts on computer software. Beyond the work itself, Matt has had notable roles as a part-time instructor in Information Systems at Calvin College, a contributing editor to Software Test & Performance Magazine, the lead organizer for the 2008 Workshop On Technical Debt, and most recently as Senior Editor for the “How To Reduce The Cost Of Software Testing” book project.