When STP announced the call for nominations for the Software Testing Luminary award, it got me thinking about how software testing came to be a role separate and distinct from software programming with unique and identifiable skills and responsibilities. It also got me thinking about the pioneers who blazed the trails that today’s luminaries strive to bring out of the shadows and into the light, sometimes revealing new trails along the way that improve software’s journey through what I like to call “testerland”.
“Those who cannot remember the past
are condemned to repeat it.”
– George Santayana
Lest we wish to be condemned, let’s take heed of George Santayana’s famous quote and start from the beginning. We frequently hear that software testing is a very young field, but I think we forget sometimes that it’s the software part that is young, not the testing part. In fact, I’d argue that virtually every deliberate invention since the beginning of time was the product of testing.
Once when asked if his work was yielding results, Thomas Edison famously said, “Results? Why, man, I have gotten a lot of results. I know several thousand things that won’t work.” Of course, the person asking the question was trying to determine if Edison had been successful in solving the problem at hand, while Edison replied with what was effectively a test report. We don’t often think of it that way, but how would Edison have found several thousand things that wouldn’t work if he weren’t testing them? It makes me wonder just how many tests were conducted before the wheel was invented, or the telescope, or the cotton gin, or hook-and-loop fasteners (a.k.a. Velcro®).
The word test has more than a few dictionary entries, mostly delineated by field. Stripping away the field-specific language and focusing on the commonalities across the definitions, I’ll summarize a test as “the means by which the presence, quality, genuineness, abilities, skills, capacities, or traits of a thing are determined, measured, and/or assessed.” To paraphrase, a test is how we evaluate the degree to which an object conforms to a premise. So every time someone “checks to see if…”, or “tries it to find out if…”, they are testing. I hesitate to think just how many tests, by that definition, have been conducted throughout history.
So if the act of testing is omnipresent throughout history, one could easily conclude that testing software should be no big deal. Clearly, we know that would be a faulty conclusion. What makes it faulty is the complexity introduced by this thing we call software. Even so, I believe we’d do well to pay attention to the lessons that have already been learned about testing and apply those lessons, if not the solutions stemming from them, to our testing of software. I’m referring to things such as the scientific method, systems thinking, and modeling. I know that studying those lessons has helped me in my career as a software tester every bit as much as studying the lessons taught by software testing’s pioneers and luminaries.
“There has to be this pioneer, the individual who has the courage, the ambition
to overcome the obstacles that always develop when one tries to do
something worthwhile, especially when it is new and different.”
– Alfred P. Sloan
Much like the pioneers who scouted out and populated previously unsettled lands, the pioneers of “testerland” had to blaze their own trails, frequently facing significant resistance from those tied to the established way of doing things. Some of those that I think of as pioneers in software testing didn’t actually spend the majority of their careers as software testers… in fact, quite a few considered themselves to be programmers or developers. But whatever their background, there could be no pioneers in “testerland” before the land of software was established, so let’s start there, shall we?
The first programming languages started to appear in the 1940s, but the term “software” wasn’t coined until 1953 and didn’t appear in print until 1958. During this period there was no “software testing” per se; programmers debugged their programs and that was that.
Enter the person I consider software testing’s first pioneer, Charles Baker. In 1957, Baker did something that seems incredibly simple yet was arguably the birth of our field: in a book review, he drew a distinction between program testing and debugging. I don’t know about you, but I find it somehow both appropriate and amusing that our field came about due to a book review, since what is a book review other than a report on one person’s testing of the content of a book?
Next on my list of software testing pioneers is Jerry Weinberg who, among other things, co-authored with Herbert D. Leeds Computer Programming Fundamentals (McGraw-Hill Inc., 1961), which contained the first published chapter dedicated to testing software. Once again, something that sounds simple on the surface, but it was this high-profile reference to software testing as a topic deserving independent treatment that seems to have validated Baker’s distinction and begun popularizing the concept. Weinberg, of course, has gone on to write and teach about software testing both directly and indirectly throughout his career, but to me those later works fit more into the luminary category than the pioneer category. I think of pioneers as the first to blaze a path and luminaries as the ones who light and clear the path to make it easier for the “common traveler” to traverse.
By the mid-1960s, the software testing movement was gaining momentum. There were high-profile calls for a disciplined approach to software testing, and a NATO report on Software Engineering directly addressed software testing and quality assurance. But this is a brief history of software testing. If you’re interested in more detail, I recommend the history and timeline sections of TestingReferences.com (http://www.testingreferences.com/), which I found extremely useful while composing this article.
In 1969, Edsger Dijkstra famously reported to the NATO Science Committee that “[software] testing shows the presence, not the absence of bugs.” This may be common sense today, but the concept that no matter how much testing we do we can never guarantee the absence of bugs was not only revolutionary, it bordered on heresy. Imagine what “testerland” would be like if those words had never been spoken. That thought makes me grateful for Dijkstra every single day.
Several pioneers blazed new trails across “testerland” during the 1970s, but three stand out in my mind. Bill Hetzel, who edited the first book devoted entirely to software test methods, Program Test Methods (Englewood Cliffs: Prentice Hall, 1973). Glenford Myers, who stated in his book Software Reliability: Principles and Practices (Wiley, 1976) that “The goal of the testers is to make the program fail” and then authored what is now considered the first book on “modern” software testing, The Art of Software Testing (John Wiley and Sons, 1979). And Tom Gilb, who authored Software Metrics (Winthrop Publishers, 1977), a book that is still frequently referenced and debated today. In many ways, these three pioneers blazed the trail leading to the first permanent settlement in “testerland”.
It was during the 1980s that software testing finally became widely recognized as a discipline separate and distinct from software programming. It was during this decade that the first conferences specifically targeting software quality and testers were founded, several companies specializing in software testing tools were launched, the IEEE 829 Standard for Software Test Documentation was created, the V-Model and the Spiral Model were published, the British Specialist Interest Group in Software Testing (SIGIST) was founded, and the Capability Maturity Model (CMM) was published. During this time, several individuals established themselves as leaders in the field, and they are still actively blazing new trails and illuminating old ones today. In many cases, I think it’s still too soon to tell which of the individuals who provided leadership during this time were truly pioneers. Be that as it may, I know for a fact that at least a few of them have been nominated for this year’s Software Testing Luminary award.
The years from 1990 to today are filled with people and organizations continuing to blaze new paths and illuminate existing ones in “testerland”. I am confident that future historical summaries of software testing will honor them accordingly, but as this is about the past, not the future, I’ve chosen to specifically call out only two individuals from this time period, both of whom, I believe, have completed their pioneering in “testerland”. The first is Boris Beizer. Beizer is most broadly known for the second edition of his book Software Testing Techniques (International Thomson Computer Press, 1990), in which he both published his oft-referenced bug taxonomy and coined the phrase “pesticide paradox” to highlight the risks of conducting the same, or very similar, tests repeatedly and exclusively. The second is Alberto Savoia. Savoia pioneered modern software performance testing, a region of “testerland” that before his trailblazing had been largely unexplored and unsettled. I debated leaving Savoia off this list because one could argue that software performance testing is distinct enough from the testing practiced by the other pioneers on this list to be a separate discipline altogether. In the end, I decided that as a performance tester myself, I simply couldn’t resist the opportunity to honor the man who blazed the path that I have built my career around.
“To design the future effectively, you must first let go of your past”
– Charles J. Givens
When I look at the evolution of software testing, I think the path we have followed from the beginning through approximately the turn of the century is pretty clear. From the time of our first pioneering moment in 1957 through the end of the 1970s, software testing was fighting for its independence from programming. During the 1980s and 1990s, software testing was dominated by significant advances around why we test, how we test, and the overall value that testing provides. After the turn of the century, the path gets distinctly less clear.
I am concerned that the last 10 years have been dominated by several divergent paths vying to become the path most traveled. One path seems to lead back to where we came from, with software testing once again being absorbed into software programming. Another path seems to be leading software testing even further away from software programming, and toward testing software using methods similar to how mass-produced products are tested as they move through an assembly line. The third path seems to be leading toward more or less independent advancements that are not particularly valued by those on either of the other paths.
I am concerned because I cannot come up with a good reason why those blazing and illuminating these paths should be competing rather than collaborating. When I think about software testing, it seems clear to me that there is value in programmers testing the software they write (e.g. Unit Testing or Test-Driven Development), in software-as-a-product level quality assurance (e.g. User Acceptance Testing), and in the software testing that happens in between (e.g. Scenario and Integration Testing). Further, I am unconvinced that the testing that can reasonably be conducted along any one or two of those paths alone can compete with the value of coordinated and collaborative testing along all three paths.
My hope is that when future generations look back at the period between 2010 and 2020, they see unifying pioneers who put the battles of “which path is best” behind them and instead collaboratively blaze a path that leads software testing in the direction of providing more value to more stakeholders more often.
About the Author
Scott Barber is viewed by many as the world’s most prominent thought-leader in the area of software system performance testing and as a respected leader in the advancement of the understanding and practice of testing software systems in general. Scott earned his reputation by, among other things, contributing to three books (co-author, Performance Testing Guidance for Web Applications, Microsoft Press, 2007; contributing author, Beautiful Testing, O’Reilly Media, 2009; contributing author, How to Reduce the Cost of Testing, Taylor & Francis, to be published Summer 2011), composing over 100 articles and papers, delivering keynote addresses on five continents, serving the testing community for four years as the Executive Director of the Association for Software Testing, and co-founding the Workshop On Performance and Reliability.
Today, Scott is applying and enhancing his thoughts on delivering world-class system performance in complex business and technical environments with a variety of clients and is actively building the foundation for his next project: driving the integration of testing commercial software systems with the core objectives of the businesses funding that testing.
When he’s not “being a geek”, as he says, Scott enjoys spending time with his partner Dawn, and his sons Nicholas and Taylor at home in central Florida and in other interesting places that his accumulated frequent flier miles enable them to explore.