Often when people contemplate the idea of automated software testing, they make the mental leap to what I think of as The Holy Grail: automatic testing. In other words, not just the automated execution of predefined tests, but the automatic generation of the tests themselves. Not only hands-free, but brain-free.
This pilgrimage usually takes one of two forms: seeking to generate test cases from the requirements, or seeking to generate requirements from the software itself. The first is possible but not effortless; the second is not possible, and even if it were, it would be pointless.
Automated test cases
What a great idea: a tool that can generate all the necessary test cases from the requirements. Notice the word “necessary” as opposed to the word “possible.” If you generate all possible test cases, you will get trillions of them; a screen with just ten fields, each accepting ten meaningful values, already yields ten billion combinations, and the possibilities in even straightforward applications are effectively unbounded. Generating only the necessary test cases means you have selectively identified only those that each represent a unique case, which is the only way to achieve complete coverage with manageable volumes.
A commercial product called Caliber RBT actually can do this. Given the requirements, it will generate all of the necessary test cases to assure complete coverage. It employs a model known as cause-effect graphing, which has its roots in hardware testing and is a proven approach.
For this to work, however, you need to do two things. First, you must be able--and willing--to specify your requirements with sufficient detail and internal integrity to support the model. Said another way, this is as far as you can get from ambiguous requirements: they must be specified with mathematical precision. Second, you must still develop the test scenarios--the pathways through the application--that actually execute the test cases, with the steps needed to navigate the application, supply the inputs, and verify the outputs that comprise each case.
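To make the distinction concrete, here is a minimal sketch of the cause-effect idea in Python. The discount rule, the cause names, and the effect functions are all hypothetical, and a real cause-effect model (Caliber RBT’s included) is far richer; the point is only the principle of keeping the combinations that represent a distinct case.

```python
# A minimal sketch of cause-effect test design, not any vendor's engine:
# requirements are modeled as boolean "causes" wired to "effects", and test
# cases are derived from the combinations that matter, not all of them.
from itertools import product

# Hypothetical requirement: grant a discount if the customer is a member
# AND the order exceeds the threshold; flag for review if the order
# exceeds the threshold but the customer is NOT a member.
causes = ["is_member", "over_threshold"]

effects = {
    "grant_discount": lambda c: c["is_member"] and c["over_threshold"],
    "flag_for_review": lambda c: c["over_threshold"] and not c["is_member"],
}

def necessary_cases():
    """Yield only input combinations that produce a distinct effect signature."""
    seen = set()
    for values in product([True, False], repeat=len(causes)):
        combo = dict(zip(causes, values))
        signature = tuple(fn(combo) for fn in effects.values())
        if signature not in seen:          # a genuinely new case
            seen.add(signature)
            yield combo, dict(zip(effects, signature))

for inputs, outputs in necessary_cases():
    print(inputs, "->", outputs)
```

Of the four possible combinations here, only three produce distinct outcomes, so only three test cases are necessary. Scale the number of causes up and the gap between “possible” and “necessary” becomes the difference between trillions of cases and a manageable set.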
Although this is a worthy goal and actually achievable, it is not “automatic.”
Automatic requirements
What an even better idea: self-testing software that creates its own requirements and then generates the test cases accordingly. The notion is that a tool examines the software source code, divines what it is doing, and then generates the test cases necessary to exercise every pathway and every input and output, including boundaries, ranges, types, and even random values. If this sounds too good to be true, that’s because it is.
At the most basic level, requirements define what the application should do, not what it does. So if you derive the requirements from the software itself, you have a completely self-referencing model: the software should do what it does. Hmm. Obviously, this means that if the software is missing features or has implemented them improperly, then any self-generated requirements will likewise be incomplete or incorrect. Said another way, if students write their own test, they will certainly pass it--but that doesn’t mean they have mastered the subject.
Even if you dismiss this by saying that all you want to do is make sure the software does what it says without causing errors, then you have a different problem. The fact is that the vast majority of an application’s functionality does not exist in the source code as static information. Given that most modern applications rely heavily on multiple interfaces, components, and objects, and may take advantage of concepts like inheritance (where objects are reused but also extended or modified), it is impossible to determine what the actual behavior will be until execution occurs.
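A minimal sketch of why, with hypothetical class names: the call site below reads the same in the source either way, yet its behavior depends entirely on which object arrives at runtime.

```python
# The behavior of process() depends on which subclass shows up at runtime,
# possibly from a plugin or component a static analyzer never sees.
class PaymentHandler:
    def fee(self, amount: float) -> float:
        return amount * 0.02              # the base rule visible in the source

class PartnerHandler(PaymentHandler):
    def fee(self, amount: float) -> float:
        # inherited interface, modified behavior -- invisible to anyone
        # reading only the code that calls handler.fee()
        return 0.0 if amount > 100 else amount * 0.01

def process(handler: PaymentHandler, amount: float) -> float:
    return amount - handler.fee(amount)

# The identical call site produces different behavior per runtime object:
print(process(PaymentHandler(), 200.0))   # 196.0
print(process(PartnerHandler(), 200.0))   # 200.0
```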
And, although there are ways of tracing through the source code during execution, the point is that to effect execution you have to drive the system. This means you must supply some form of input actions or data. So you are back to the original problem--you must know what test cases you want to execute before you can trace the code to see what it does with them.
The most persistent advocates of this theory try to work around this problem by proposing that you just fire enough random events or data at the system to cause the different states and pathways to occur. This is akin to the idea that enough monkeys, banging on enough typewriters, given enough time, would eventually produce Shakespeare’s body of work.
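For illustration, this is roughly what that proposal amounts to; the action list and the driver stub are hypothetical stand-ins for a real GUI driver.

```python
# A minimal sketch of the "random events" approach: fire arbitrary actions
# at the system and record what happens.
import random

ACTIONS = ["click_ok", "click_cancel", "type_text", "select_item", "close_window"]

class StubApp:
    """Stand-in for a real application driver; a real run would wire this up."""
    def perform(self, action: str) -> str:
        return f"handled {action}"        # the system reacted... correctly?

def monkey_test(app, steps: int = 1000, seed: int = 42) -> list:
    random.seed(seed)                     # reproducible "randomness"
    log = []
    for _ in range(steps):
        action = random.choice(ACTIONS)
        log.append((action, app.perform(action)))
    return log                            # a pile of results, but no verdicts

print(monkey_test(StubApp(), steps=5))
```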
While I am highly skeptical that this can ever be done, given the complex interrelationships that govern behavior, let’s assume for the sake of analysis that it is possible. The real question is, what does that tell you? For any given set of actions, you have a set of results. Are those results valid or invalid? How do you know? If the actions are truly random, you will have to trace every action and reaction to decide whether each was handled correctly. And how do you know whether it was handled correctly? Gee, sounds like you might need some requirements to decide. So you are back where you started.
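Put in code: the only thing that can arbitrate those results is a requirement restated as an oracle. A minimal sketch, reusing the hypothetical discount rule from the earlier example:

```python
# Only a stated requirement can judge the random log's results.
def expected_discount(is_member: bool, total: float) -> bool:
    return is_member and total > 100.0    # the requirement, restated as code

def verify(observed: bool, is_member: bool, total: float) -> bool:
    return observed == expected_discount(is_member, total)

# False: the software granted a discount the requirement forbids.
print(verify(observed=True, is_member=False, total=250.0))
```

Take away expected_discount--that is, take away the requirement--and the random log is just noise.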
It’s the requirements, stupid
The bottom line is that testing requires not just effort but skill. While test automation tools can remove the drudgery of manually designing and executing tests, they cannot remove the need to understand what should be tested.
Gathering and defining requirements remains the single most challenging aspect of software development and testing--and it is one that cannot be automated or otherwise avoided.