Most of the functions of today’s vehicles are controlled by a computer. When your car needs an annual inspection or has a problem, your mechanic plugs in a diagnostic unit and obtains all manner of useful information on its inner workings. Now, why is it that a car that costs tens of thousands of dollars has testability built into it, yet software applications costing hundreds of thousands or even millions of dollars do not?

During my 20 years in software test automation, I have watched with mounting disbelief as each generation of software development technology has delivered applications faster and with more functionality, yet with less automated testability and lower quality. The annual cost to the U.S. economy of software failures is estimated to exceed $100 billion, and the internal cost of fixing those errors is put at $59.5 billion. As they say in Texas, this is real money.

Granted, quality has to be designed in, but for huge systems under constant maintenance, there is simply no substitute for comprehensive regression testing to discover unintended impact. The reality is that years of cumulative functionality cannot be thoroughly tested by hand within anything approaching a reasonable budget and schedule. And yet test automation is becoming increasingly complex. Let me explain why.

For an automated test to execute against an application, it must be able to perform its steps in a predictable and repeatable way. This means it must be capable of recognizing the application’s components—screens, windows or pages, and the fields or objects they contain—and of interacting with them. For example, to test a securities trading application, the test must be able to log into an account, navigate to the page for executing a trade, enter the stock symbol, the number of shares and the type of trade, press the button to execute the trade, and verify that it is confirmed.
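
To make this concrete, here is a minimal sketch of such a test using Selenium’s Python bindings. The URL, element IDs and confirmation text are hypothetical stand-ins for whatever the real application would expose.

    # A minimal sketch of the trading test described above, written with
    # Selenium's Python bindings. The URL, element IDs and confirmation
    # text are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://trading.example.com/login")

    # Log into an account.
    driver.find_element(By.ID, "account").send_keys("12345678")
    driver.find_element(By.ID, "password").send_keys("s3cret")
    driver.find_element(By.ID, "login").click()

    # Navigate to the trade page and enter the order.
    driver.find_element(By.LINK_TEXT, "Place Trade").click()
    driver.find_element(By.ID, "symbol").send_keys("IBM")
    driver.find_element(By.ID, "shares").send_keys("100")
    driver.find_element(By.ID, "trade_type").send_keys("Buy")
    driver.find_element(By.ID, "execute").click()

    # Verify the trade is confirmed.
    assert "Trade Confirmed" in driver.page_source
    driver.quit()

Every step depends on the script finding each page and field by some stable handle, and that assumption is exactly what fails next.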

It would be nice if the software you were testing gave meaningful titles to the pages you traverse, such as Account Login, Place Trade and Confirm Trade, and meaningful names to objects such as Account Number, Stock Symbol and Execute Trade. But alas, in this world of dynamically generated applications, where the user interface is served up on the fly as code snippets by servlets, portlets and such, your windows are likely to all bear the same helpful title, “default.asp,” and your objects are likely to carry equally meaningful names such as “Text1” or “Button2,” or something even easier on the eye like “AX@:00115b.”
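
In that world, the trade-entry step above degenerates into something like the following sketch. The names are invented, but representative.

    # The same trade-entry step against a dynamically generated page.
    # The generated names here are hypothetical. Every page carries the
    # same title, so the script cannot even confirm where it is.
    from selenium.webdriver.common.by import By

    def place_trade(driver):
        assert driver.title == "default.asp"  # true of every page in the app
        driver.find_element(By.NAME, "Text1").send_keys("IBM")  # stock symbol... probably
        driver.find_element(By.NAME, "Text2").send_keys("100")  # share count... we hope
        driver.find_element(By.NAME, "Button2").click()         # execute trade, or maybe not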

Aliasing these objects in your test scripts with meaningful names would be a good start, but it ultimately leads to a dead end: you discover that “Text1” morphs into “Text2” the next time around because another field appeared, and “Button2” becomes “Button1” because a button disappeared. So your account number winds up in the Name field, and pressing what your script believes is Execute Trade presses Cancel instead.
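
A sketch of that aliasing approach, and of how it fails, might look like this; the map entries are again hypothetical.

    # A hypothetical object map: alias the generated names to readable
    # ones so the tests themselves stay legible.
    from selenium.webdriver.common.by import By

    OBJECT_MAP = {
        "account_number": "Text1",
        "stock_symbol": "Text2",
        "execute_trade": "Button2",
    }

    def enter(driver, alias, value):
        driver.find_element(By.NAME, OBJECT_MAP[alias]).send_keys(value)

    def press(driver, alias):
        driver.find_element(By.NAME, OBJECT_MAP[alias]).click()

    # The script now reads well, but every alias is pinned to a positional,
    # generated name. Add one field and "Text1" becomes the Name field;
    # drop one button and "Button2" becomes Cancel. Nothing fails loudly:
    # the map is simply wrong on every page it touches.

The real maintenance cost is not in the script logic but in rediscovering, after every build, what the generated names now point to.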

It’s more work than it’s worth. Dynamically generated applications defy test automation, because you can’t read your tests, maintain them or reexecute them without constant, massive effort.

Why don’t companies appreciate the irony here? What good does it do to develop applications faster if, by doing so, we slow down testing by forcing it to be done manually—and thus degrade quality because we can’t get it all done by hand in our lifetime?

It seems to me we are at a crossroads. As an industry, we can either start actually behaving like engineers—which will mean requiring and enforcing the discipline needed to produce software that can be tested efficiently and comprehensively through automation—or we can continue to sacrifice quality for development expediency until the liability lawyers step in, or until some other part of the world takes the lead in the software industry the way they did with automobiles: by delivering quality products.

Let’s assume we choose the former option. If applications were actually written to be testable, the majority of testing would be automated instead of manual. Test automation scripts could be written concurrently with the software code. The effort now expended on convoluted workarounds and massive maintenance changes could be reallocated to extending test coverage. Quality would improve, and time-to-market could be reduced by orders of magnitude.

Test automation would shift from a struggle for viability to being an accepted discipline integral to assuring the timeliness and quality of the final product. We could rechannel the billions of dollars of costs arising from escaped defects into new products and functionality.

Unfortunately, if the past 20 years is any indication, we are firmly committed to the latter path. If this is true, it will only be a matter of time until the government steps in to regulate liability in the software industry à la Sarbanes-Oxley, or we are trampled by foreign competition, only to wonder again what hit us.


About the Author

Linda Hayes has founded three software companies, including AutoTester, which produced the first PC-based test automation tool. She is the founder of test automation tools maker WorkSoft and is currently an independent consultant.