Imagine a CMMI Level 3 government project with big-name enterprise tools for requirements management and testing. Does that sound familiar? Have you worked on that project? It was one of the past stops in my testing career. In that environment, the test team had to concern itself with contractually mandated testing elements like requirements traceability, a step/expected result/actual result format for test documentation, and independent verification and validation (IV&V) cycles after releases. As someone who was new to the business domain, new to large government contracts, and new even to the particular software platform we were using, I had a great deal to learn. In this article I share my experience of using elements of session based test management (SBTM) to make my testing more effective while climbing the steep part of the learning curve.

The software we worked on was “wide enough” and “deep enough” that the training materials available only covered basic functionality and navigation. It had a significant number of highly complex business rules and data combinations, all of which were described separately across multiple requirements documents. To make matters worse for a new guy like me, those documents were very abstract and cross-referenced other abstract documents. This made learning the functionality and business domain almost impossible for me at times: the documents assumed a baseline understanding I lacked and often left me confused.

I had joined the team mid-release-cycle, so I relied heavily on exploratory testing for my first few assignments. Exploratory testing helped me learn how the requirements tied back to the software and start to get familiar with the business domain while still finding issues and logging defects. In addition to my preference for working in an exploratory fashion, the team had recently changed its approach and was no longer maintaining a large suite of scripted tests. Given the freedom to create my own test documentation, I decided to use SBTM to manage my work.

SBTM to Fit the Context
My challenge was to achieve the contractual obligations of requirements coverage traceability and test evidence while still getting the “goodness” I wanted from exploratory testing. I believed SBTM would enable me to do this, so I adapted some of its principles to my work. I knew that team-wide session creation, coverage tracking, and debriefs after finishing sessions were all off the table, but I decided to try anyway.

Once I received my test assignments for a release, I did what everyone else did at the start: researched the feature or fix with whatever documentation was available. I’d then talk to subject-matter experts and developers to get a better understanding of things. During those conversations, I came up with lists of risks and session ideas, adding to and re-prioritizing the lists as I went along.

Sometimes sessions mapped conveniently to specific requirements, but that wasn’t the focus of this initial effort. My goal during this phase was to develop session ideas that represented the depth and breadth of coverage I wanted to achieve. Then I rank-ordered the list by the level of risk I perceived at the time and looked for any dependency-based sequence I needed to follow (a rough sketch of such a prioritized list appears after the bullets below). The general goal I wanted to reach by the time code was delivered was to be ready to:

  • Use one or more sanity-testing sessions to make sure the software was in a reasonably testable state (most often uncovering deployment or configuration issues)
  • Run sessions where I felt like there was very high risk (driven by technical/feature factors, project factors, or threat-to-user factors)
  • Run sessions that covered critical up-front dependencies
  • Run sessions I created to cover areas where I didn’t feel like I really understood the scope and risks of the feature (which is in itself a risk)
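
To make the bookkeeping concrete, here is a minimal sketch in Python of how a prioritized, dependency-aware session list like the one described above might be represented. The charters, risk scores, and dependencies are entirely hypothetical; on the project itself this was simply a list I maintained by hand, not a script.

    from dataclasses import dataclass, field

    @dataclass
    class SessionIdea:
        charter: str                   # what the session will explore
        risk: int                      # perceived risk; higher means test it sooner
        depends_on: list = field(default_factory=list)  # charters that must run first

    # Hypothetical session ideas gathered from documents and SME interviews
    ideas = [
        SessionIdea("Sanity-check deployment/configuration of the new module", risk=9),
        SessionIdea("Explore eligibility rules for edge-case applicants", risk=8,
                    depends_on=["Sanity-check deployment/configuration of the new module"]),
        SessionIdea("Survey report exports (scope and risk still unclear to me)", risk=6),
        SessionIdea("Cover date-handling combinations in REQ-114", risk=4),
    ]

    def run_order(ideas):
        """Sort by perceived risk, then pull any dependencies ahead of their dependents."""
        by_risk = sorted(ideas, key=lambda s: s.risk, reverse=True)
        ordered, seen = [], set()

        def add(session):
            for name in session.depends_on:
                add(next(s for s in by_risk if s.charter == name))
            if session.charter not in seen:
                seen.add(session.charter)
                ordered.append(session)

        for s in by_risk:
            add(s)
        return ordered

    for s in run_order(ideas):
        print(f"risk {s.risk}: {s.charter}")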

The exact order of things was fluid by necessity, and varied from assignment to assignment. The fact that it could be fluid was one of its strengths. Regardless of order, I worked through these types of sessions before I started trying to achieve the requirement-specific coverage and to create the required test documentation and evidence. I feel that this approach gave me two big advantages:

  • It helped me find important problems or problem areas quickly, which meant giving the development team as much runway as possible to get things fixed and back into testing.
  • It gave me more information about what I was testing than my advance research could, and it provided quick feedback on my initial ideas about risk and coverage. If I had really missed the mark somewhere, I usually found out sooner rather than later, which saved time, and potentially defects, down the road.

Working this way gave me the freedom to break up work in ways that didn’t map neatly to a linear trip through the requirements (in whatever order they were written). This was an incredibly freeing experience. Working through requirements in a very linear fashion sometimes has a dulling effect on me. If I’m tired or losing focus, it can cause me to treat coverage as a measure of how far through X number of requirements I’ve gone, rather than focusing on covering the most important things earliest and achieving the coverage I’ve mapped out. Or worse, I might work linearly through the requirements and rush through what I consider lower-value testing.

After covering my up-front sessions, I shifted to addressing the contractual requirements of the testing. Ultimately I had to make sure all requirements were addressed by my sessions, and in a specific format. Though I wasn’t using pre-written test scripts, I could still fit the action/expected result/actual result format required of our test documentation. I wrote a requirement or portion of a requirement (and any required navigation) as the action, documented the expected result, and then recorded whatever I encountered as the actual result, just as I would in any session testing notes.

Instead of writing “as expected” or “pass,” my actual results could include detail and narrative of what I had found. I often ended up also recording:

  • what I thought of the result
  • issues and bugs
  • how sometimes the result seemed wrong, but consultation with a BA or developer showed that it was correct after all

Since the contractual requirement did not specify how many scripts were used to achieve coverage, I was free to slice and dice the requirements into however many sessions I felt they required, or into whatever sequence I wanted, so long as they were all covered. This meant I could deliver contractually obligated test documentation in the flow of my prioritized list of session ideas. (A rough sketch of that slicing and the evidence format follows.)
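
To illustrate that slicing and the evidence format, here is another small Python sketch. The requirement IDs, session names, and note text are made up, and the project actually tracked this in an enterprise test-management tool, so treat this purely as a picture of the paperwork rather than the tooling.

    # Hypothetical requirement IDs for a release
    requirements = {"REQ-101", "REQ-102", "REQ-114", "REQ-115"}

    # Each session covers whatever slice of requirements fits it,
    # regardless of the order the requirements were written in.
    sessions = {
        "Eligibility edge cases":       {"REQ-102", "REQ-114"},
        "Claims submission happy path": {"REQ-101"},
        "Date-handling combinations":   {"REQ-114", "REQ-115"},
    }

    # Traceability check: every requirement must appear in at least one session.
    covered = set().union(*sessions.values())
    missing = requirements - covered
    print("Uncovered requirements:", missing or "none")

    # One row of action/expected result/actual result evidence written during a session.
    # The actual result carries narrative, not just "pass" or "as expected".
    evidence_row = {
        "action":   "REQ-114: enter a claim dated before the policy start date",
        "expected": "The claim is rejected with a dated-before-coverage message",
        "actual":   ("Rejected as expected, but the message text differs from the spec; "
                     "a BA confirmed the new wording is correct."),
    }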

Not By the Book
I wasn’t able to use every aspect of SBTM that I would have liked. Some parts of SBTM as laid out by Jon and James Bach didn’t fit for me on that project. For example, I didn’t always keep sessions between 45 minutes and an hour and a half. After I got through the high-risk/high-priority test sessions where I found most defects, I’d often use much longer sessions to achieve requirements coverage.

While I would have liked to have “done it right” or “by the book,” I still feel that adapting key parts of SBTM to my context helped me add value and deliver good testing to the project. I missed a bug or two, but to my knowledge never a critical one. I eliminated the up-front work of test script creation and the downstream work of modifying test scripts when I misunderstood what software was being delivered. Most importantly, I had good success finding important issues earlier in each cycle. For this project, a partial implementation of SBTM was far better than none.

Some Background on ET and SBTM
Exploratory testing and session based test management have both been around for a while. If you haven’t run across them yet in your study, I encourage you to take a look at them now.

Exploratory testing is characterized by simultaneous learning, test design, and test execution. (See James Bach’s article, http://www.satisfice.com/articles/etarticle.pdf, for a great overview). Since I couldn’t glean the business context for everything I was testing from the documentation alone, exploratory testing really helped me get familiar with the software more effectively even without experience in that business domain.

Session based test management is a way to organize and manage exploratory testing by chartering specific, time-boxed sessions of work to explore particular features, functions, or areas of risk or required coverage. As you test, new session ideas come up, and the session list can be re-prioritized to keep the most important sessions at the top. The number of sessions remaining gives you a rough estimate of the test effort and/or time remaining, and looking at the kinds of sessions listed gives you insight into the coverage goals: not just as a number or a percentage (though you get that too), but in a way that helps you understand what was really tested, and with what kind of breadth and depth. Together, these qualities let SBTM extend exploratory testing in a way that makes it organized, traceable, and much more flexible for managing and understanding the coverage you’ve achieved and the risks you’ve addressed. I was fortunate enough to see Jon Bach present this topic once, but there are plenty of materials online that can give you more detail about SBTM. (If you’re new to SBTM, take a look at Jon Bach’s paper, http://www.satisfice.com/articles/sbtm.pdf, or the overview at http://www.satisfice.com/sbtm/.)
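
For readers who want to see the mechanics, here is one more small Python sketch of the bookkeeping just described: time-boxed charters, a list that gets re-prioritized as new session ideas appear, and a rough remaining-effort estimate taken from the sessions still open. The charters and numbers are invented, and the real session sheets and metrics described in Jon Bach’s paper are considerably richer than this.

    from dataclasses import dataclass

    @dataclass
    class Session:
        charter: str
        priority: int               # higher means run it sooner; revised as understanding grows
        timebox_minutes: int = 90   # classic SBTM sessions run roughly 45 to 90 minutes
        done: bool = False

    backlog = [
        Session("Sanity-check the new build and configuration", priority=10, done=True),
        Session("Explore premium-calculation rounding rules", priority=8),
        Session("Investigate report exports (scope unclear)", priority=6),
    ]

    # New session ideas appear while testing; the list is simply re-sorted.
    backlog.append(Session("Follow up on the rounding defect found earlier", priority=9))
    backlog.sort(key=lambda s: s.priority, reverse=True)

    # The sessions still open give a rough effort estimate, not a precise schedule.
    remaining = [s for s in backlog if not s.done]
    hours = sum(s.timebox_minutes for s in remaining) / 60
    print(f"{len(remaining)} sessions remaining, roughly {hours:.1f} hours of charter time")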


About the Author

Rick Grey
I’m the Director of QA at MoneyDesktop, a startup in the personal financial management (PFM) space. I’ve been testing software for nine years, with twelve years of experience in the software field overall. I’m a subscriber to the principles of the context-driven school of testing. (When it makes sense, in context.)