No, not the singer. No, not the guy from Office Space, and no relation to any UN ambassador. Instead, Michael Bolton is a consulting software tester based in Toronto, Canada. After working as a programmer, trainer, and technical support manager, Michael held his first job titled “tester” at Quarterdeck in the 1990s. He then served as a program manager, primarily on a little piece of software called QEMM-386. You may recall that someone quipped that no one would ever need more than 640 kilobytes of memory. Well, QEMM-386 was the program that let people access more than 640 kilobytes of memory in MS-DOS, and it became, in its heyday, the world’s best-selling software product for that operating system.

In 1998 Michael went independent, launching his company DevelopSense to offer testing, training, and consulting services. In 2003 he began teaching James Bach’s Rapid Software Testing course, and he became its co-author in 2006.

Personally, I know Michael best for looking at simplistic solutions to testing problems, breaking them down, pointing out what they miss, and then reassembling a more complex – but better – approach to the problem. He’s a world-class consultant who has offered up his time to answer your questions for free; you get your 15 seconds of fame by having your name in ST&QA Magazine.

Michael Bolton will now answer your questions.

The “testing vs. checking” writings have generated a lot of discussion, ranging from the nature of automated testing to the importance (or lack thereof) of semantics. As you continue to engage with testers around the world, have you seen broad adoption of this distinction (whether literally or in principle)? Do you feel that the distinction has provided significant value to the testing community? – Alex Kell, Atlanta, Georgia

Michael: If my writing on testing vs. checking has generated a lot of discussion, then by a first-order measurement it has generated some value too. As I’ve said a number of times, I don’t care that much if people agree with me. That would be very nice, but my goal is for us to think about the issue.

I have seen a number of people that I respect adopt the distinction, and I feel pretty good about that. My intentions were to spark discussions and to be helpful. It’s very gratifying to watch the idea spreading.

There are people who say that semantics aren’t important. I don’t understand what they mean, but I guess that doesn’t matter to them.

Seriously, though, I’d remind people that semantics is a term that refers to meaning. When people say, “That’s a semantic distinction” or “You’re arguing about semantics,” I reply, “Yes, I am,” because I think it’s really important to figure out what we mean. I’ve noticed that, in general, people who object to a conversation by saying “You’re arguing about semantics” often really mean “You don’t understand what I’m saying,” “I don’t care about the distinctions you’re making,” or “I feel like you’re nitpicking and I don’t understand why you’d bother.” I’d encourage those people to see if there’s anything that they can do about those problems. Try starting from a sympathetic position. Assume that, for the other person, there’s real and legitimate significance that you’re not seeing, and then try to work things out collegially.

Here’s one I’m currently struggling with: what should a good test plan look like? This has multiple facets: the medium (Word doc, wiki page, generated from a test case management tool, etc.), the content (for example, should there be pages of preface containing standard definitions of the type of testing being planned? at what level of detail should the test cases be described? should expected inputs and outputs be listed?), and the minimum/maximum number of pages. Just to bound the question a bit, let’s take a functional test plan for a dynamic website like softwaretestpro.com as an example. – Gareth Bowles, USA

Michael: “What should a good story look like?” Does that sound like an odd question? It should, because when we pause to think of it, a story doesn’t look like anything. A book might look like something, a movie might look like something entirely different, and a radio play doesn’t look like anything at all to the listener. There’s a common problem in software development in general and in testing in particular: reification, which is the act of taking a construct, an idea, and turning it into a thing; or mixing up an idea with the thing that represents it. Like a story, a test plan isn’t a thing, but a set of ideas. A plan is a combination of strategy (the set of ideas that guide your test design) and logistics (the set of ideas that guide your application of resources). So a test plan isn’t marks on a page; it’s a set of ideas in your head.

The things that you’re talking about – artifacts – are representations (literally “re-presentations”) of some fraction of the test plan ideas in your head. So to answer your question: since it’s thought-stuff, a test plan should be completely invisible unless there’s a good reason to represent some part of it. As James Bach and I point out in the Rapid Software Testing course, people say “That should be documented”, when what they really mean is “That should be documented if and how and when it serves our purposes.”

So what’s your purpose? Is it to remember some things overnight that you don’t want to forget in the morning? In that case, what would be wrong with a list written on a paper restaurant napkin? Is it to illustrate ideas about your test strategy to the programmers on the team, so that you can refine your ideas about test coverage? In that case, could you have a conversation aided by a diagram on a whiteboard, followed by summary notes and a digital photo to record it? Do you need to present a record of test ideas, test data, and results to an FDA auditor? How about an Excel spreadsheet that contains a list of test ideas in the first column, test data in subsequent columns, and an account of your observations in the last column? If your answer is “The FDA would never accept that,” I’ll ask: aside from the fact that your organization may have been giving the FDA the same piles of stuff for years, how do you know the FDA would never accept it?

Notice that I didn’t provide answers to the situations that I proposed. Instead, I provided questions. My general questions with respect to test artifacts start with:

  • Do we need an artifact at all?
  • Would conversation and immediate interaction be just as effective, less expensive, or more valuable?
  • Do I need to extend a conversation with some text or some visual aid?

Then, if I’ve decided to produce something more than a conversation, I ask,

  • Is this document a tool (for me), or a product (for someone else)?
  • For what, and for whom, would we prepare this artifact?
  • What is the most concise way to convey all of the information that is genuinely necessary, in a format that is most clearly understandable, easily maintainable, and appropriately temporary?
  • What are the costs of this artifact (preparation cost, development cost, and maintenance cost)?
  • What is the burden to the future in having this artifact?
  • Do we have a means of deciding that we’re done with this artifact, and then getting rid of its maintenance burden?

If I were going to prepare a functional map of SoftwareTestPro.com, a few years ago I would have used a hierarchical list. These days, I’d more likely use a mind map that generally follows the structure of the user interface. But the point of preparing either the map or the list would most likely not be planning, but learning. I’d test and create a map at the same time. Combining the activities helps to structure my exploration and understanding of a product. Building the map helps me learn about the product; learning about the product helps me to build the map. I’d also decorate the map or the list with risk ideas, or I might create a separate running set of notes about risk.

I’d most probably use this kind of artifact to guide conversations with the web designer, the programmers, and the product owner. The primary purpose here would not be to use the map to control my testing. The purpose would be to develop my mental model of the product, and to compare the map to the understandings of the people on the project. They’ll usually disagree with my model, and with each other. Yet through those conversations, we’ll all gain a better understanding of the product and how we think about it. As we work stuff out, we’ll change our mental models, or the maps, or the product. To paraphrase Dwight Eisenhower, the map is nothing; the mapping is everything. I would not use test cases – in the form of a list of steps to follow and observations to record – either for myself or for other testers. These days, I’m preparing a lot of writing on the subject. For now, I’d suggest that people look at James Bach’s presentations on The Case Against Test Cases.

I’m quite convinced that the next great leap in testing will come not from the testing world, but from outside of it. If you had to give someone a map of where to look for the Next Great Big Thing, where do you think it would lead? – Adam Goucher, Ontario, Canada

Michael: Currently I’m intrigued by anthropology. For years, there’s been a naïve belief in testing that the really important stuff is the functional behavior of the program. We’ve paid all kinds of attention to understanding the products and the machines on which they run, but not so much to understanding people – what, how, and why they value the product; why those values differ from one place to another; and how people actually use the product, as opposed to how they say they use it. To me, really excellent testing is about the relationships between the product, the systems with which it interacts, and its users. Understanding those issues would require us to study people more carefully, so I think anthropological field work would be an interesting place to look for new ideas about testing. Jerry Weinberg provides some tantalizing hints in Quality Software Management Vol. 2: First Order Measurement. Brown and Duguid also talk about some of the benefits of anthropological approaches to observing people in The Social Life of Information. I’m currently browsing through Social and Cultural Anthropology: The Key Concepts, and plenty of stuff in there has a direct line back to testing.

I think most testers could learn a lot more about models, risks, and how to reduce the numbers and impacts of Black Swans. For that, I’d recommend The Black Swan, by Nassim Nicholas Taleb, and Jerry Weinberg’s An Introduction to General Systems Thinking and General Principles of Systems Design. As for reliable prediction and estimation (as Taleb himself would say), fuhgeddaboudit. Testing work always responds to some development work that’s been done, and by its nature, there’s plenty that you won’t know about that. You can allocate time for investigative work, but you can’t schedule it precisely, because new information is going to change the focus of your investigation. So emphasize preparation for testing, rather than planning it. Give testers opportunities to interact with the rest of the project community and with the testing community outside of the project. Instead of assigning testers to write overscripted test cases, do short bursts of testing, capture knowledge in outline form, and then write concise charters to guide future sessions. Use the time saved for reading, exercises, and skills development.

We need general systems thinking and the skills of anthropology to observe what actually happens on projects, so that we can see the futility in most approaches to “test estimation”, “standardization”, and “test planning”, and stop wasting time on them.

Do you have any guesses about how the field of software testing will change over the next 5 years? – James

Michael: I don’t really make guesses about stuff like that. To me, such things are kind of a waste of time. They’re guaranteed to be wrong, incomplete, or both. I’ve presented Two Futures of Software Testing, but neither future is a prediction; they’re proposals for the kinds of futures we could choose.

Either way, I doubt that things will change radically in the world of testing. I argue that the certificationists have done nothing to innovate in our craft, and that the certification movement has helped to commoditize testers. That’s bad. The focus in testing, as in so many other fields these days, has been on cutting cost rather than increasing value. If we’re focused on doing not-so-valuable work at lower cost, rather than on how to do excellent work with what we’ve got, that’s bad too. Very few managers understand testing really well, and that leads to lots of dysfunction. Hiring testers who aren’t very engaged in their jobs; measuring testing in very odd and unhelpful ways; confusing testing with quality assurance; and so forth – that’s bad. So we’re not only going to have to get better at doing testing. We’re going to have to get better at explaining it and at clarifying our missions with our clients. It’s very hard to say whether that will happen any time soon.

If there’s hope, it’s from an increasing number of people who are building communities online and in person, sharing ideas, writing about their ideas and their experiences. The Weekend Testers movement is a great example. They are people who are eager to try testing real software as a social exercise, building skill and community at the same time. It’s a lot like people who meet on the weekend for a bicycle ride. They’re getting in shape by challenging and inspiring each other. I think it’s fantastic. I’ve participated in several sessions and recommend them.

What are the top three challenges you have faced when leading a test team? – Anonymous, Denver, CO

Michael: Each team is different. Common patterns have been: 1) persuading testers that they’re not responsible for the quality of the product; 2) persuading everyone else that testers aren’t responsible for the quality of the product; and 3) persuading people that it’s okay to say “I don’t know” and to ask for help. In school, we learned that when we’re stuck, we have to tough things out and go it alone. We have to unlearn that kind of behavior.

Based on your experience, what are some of the ways in which the field of software testing has changed over the past 10 years? – James McCaffrey, Bellevue, WA

Michael: If you follow certain sources, testing has allegedly changed to adopt Agile principles, and Agility has changed testing. I’m more skeptical. Change has happened in some places, but I don’t think that it’s happened much. A lot of people who claim agility are only going through the motions. I’ve been to lots of morning Scrums that are really morning status meetings without the chairs – or sometimes with the chairs.

The Agile focus on confirmation rather than investigation means that instead of doing repetitive, low-value work manually, we’re doing repetitive, low-value work with machines. That’s progress, I suppose, but it’s still a pretty limited version of what testing could be. No wonder some testing work has been outsourced.

In the Agile world, I see a bunch of programmers who speak very enthusiastically about testing, but who seem disinclined to study the most important aspects of testing. I’ve seen the rise of so-called “Agile” testing that focuses on test tools and development processes, but pays little attention to the mindset and the skillset of the individual tester. Technical prowess is very important, but it’s a means to an end. I think that we need much more attention paid to thinking skills – systems thinking, critical thinking, social science thinking – the parts of testing that have to do with understanding complexity and not being fooled.

How do you choose what to read? Do you schedule time for reading? How do you retain what you’ve read? – Peter Haworth-Langford, Aylesbury, England

Michael: So many books, so little time! My system of choosing what to read includes my professional and social networks, certain periodicals, radio shows, and things I’ve read already. Some people are connection points for lots of ideas, like the hub figures in the Six Degrees of Kevin Bacon game. I notice them when I look at the footnotes and the indexes of books I enjoy. For a while there, practically everything I was reading mentioned either Herbert Simon or Daniel Kahneman or Thomas Kuhn (or all three) in the index. (Simon was an economics scholar; Kahneman is famous for his studies of heuristics and cognitive bias; Kuhn was a historian and a philosopher of science. Simon’s and Kahneman’s work centers around systems thinking, and they get connected to other ideas in books by Malcolm Gladwell, Gerd Gigerenzer, Dan Ariely, and Dan Gilbert.) Simon Winchester writes wonderful histories, especially about geological events and very eccentric people. The Economist, the New Yorker, and the Globe and Mail are great sources for reviews and citations for interesting stuff, and I’ve bought a ton of books by the New Yorker’s writers. Oliver Sacks, Jerome Groopman, and Atul Gawande all write great articles on medicine, often with the structure of detective stories. The Canadian Broadcasting Corporation’s Ideas program generally, and the How to Think About Science series specifically, point me to many interesting things. James Bach, Jon Bach, and Ben Simo have pointed me to terrific sources of information. And my in-laws consistently delight me with cool choices for Christmas books: The Brain that Changes Itself, The Omnivore’s Dilemma, The Stuff of Thought. Finally, I’m in airport bookstores a lot, and I’m a sucker for a good cover blurb.

I’m finding it harder and harder to read these days. I’ve been writing. I’ve been teaching, and going to conferences. I have a family that I love to spend time with. I’m finding it harder to allocate chunks of uninterrupted time to reading. Airplanes and airports are really good for that. A book that’s good enough stops all business while I gorge on it.

How do I retain what I’ve read? By sheer luck, I’ve always had a really good memory. Yet I think I remember things better because the reading I do from different disciplines is like a network of ideas with lots of connections between them, rather than a collection of segregated piles. As I said, index-surfing is a fun sport.

A past colleague from the help desk was in town, and she told me her help desk does most of the functionality and user testing prior to launch. Is the use of the help desk for software or application testing a common practice in most, many, or few organizations? And what can we learn from your answer? – Rich Hand, Denver, Colorado

Michael: We used this approach at Quarterdeck, which was the first place that I had a job called “tester”. When I started with the company in 1990, there was no independent test group. Everyone tested – the programmers, the technical support people, the documentation people, and even the sales people. In those days, in that place, our salespeople knew tons about our products and how they worked; that made them capable of explaining the products’ benefits. Sadly, that didn’t last as the company and its product lines grew larger.

I started in technical support myself. Quarterdeck’s first test group was staffed by people from support. Support people can bring a lot to the table – a strong customer focus, experience with the product, troubleshooting and problem-solving skill, and good ideas about things that might add to or threaten value. Often they’re a diverse group, too; they certainly were at Quarterdeck.

Some people who came from support had an affinity for the combination of technical and social science work that’s at the heart of testing; others not so much. Occasionally, when we got near the release date, we would recruit people from the help desk to pitch in. What we very quickly found was that people who had experience, training, and mentoring in testing tended to get good at it, such that trained people were typically around an order of magnitude more productive than those without training.

So if you are going to use help desk staff for testing, that’s fine, but by all means train them. Give them testing heuristics, give them practice in testing, give them testing exercises, give them rapid feedback from a skilled test manager or test lead.

I would argue that it’s not a good idea to give them scripts. They’re expensive and time-consuming to prepare, and they’re even a little insulting. Let support people combine their product knowledge with some exploratory learning and some training support, and you’ll get better results and more capable testers faster and less expensively.

Is using help desk people to test a common practice? I don’t think you should worry about that. Will it work for you? What are the factors that would help things along? What factors would present obstacles? How will you tell whether things are working for you or not? Where, and how often, would you look? How would you test to be more confident that you’re not missing something important?

When you do what other people are doing because other people are doing it, you’re likely missing out on why they’re doing it, and why it might fail or succeed in your shop. Treat any talk of “best practice” as a rumour that requires serious investigation before you consider doing anything like it.


About the Author

Matt Heusser is a consulting software tester and software process naturalist who has spent the past 12 years or so developing, testing, and leading the development and testing of computer software. Beyond the work itself, Matt has had notable roles as a part-time instructor in Information Systems at Calvin College, a contributing editor to Software Test & Performance Magazine, the lead organizer of the 2008 Workshop On Technical Debt, and most recently as Senior Editor for the “How To Reduce The Cost Of Software Testing” book project.