I test embedded systems for a living, and I’m really good at it. I’ve been in the business for almost 30 years, and I’ve pretty much seen it all. I know the types of mistakes software developers make, whether due to inability, laziness or just human error. Some of the developers I’ve worked with refer to me as ‘evil’, ‘the devil incarnate’ or their ‘worst nightmare’. When I interview for a testing position, I let the prospective employer know that within a year or two, there’s a high probability that at least one of their software developers will quit and find work in another field, having realized that they aren’t anywhere near as good as they thought they were. I figure that’s the polite thing to do.

I generally take an analytical approach to testing. I evaluate the system under test and its requirements, develop an overall test strategy, then generate a set of test cases that fully exercise the system’s functional, non-functional and performance requirements. I also use ad hoc and exploratory testing when it’s appropriate. Together, those testing methodologies have served me well.

But I have a confession to make: part of my success as a tester comes not from being analytical, but from being clumsy. Over the years, I have refined my clumsiness and elevated it to a testing technique I call Purposeful Clumsiness.

Here’s an example: a long time ago, when embedded web interfaces were still a new thing, I was testing a device that allowed the user to download firmware updates via a web interface. While testing that feature, I accidentally selected the wrong file; rather than sending the device a valid firmware file, I sent it a JPEG file – a really big JPEG file. The device happily accepted the file in its entirety, automatically rebooted, then rolled over and died. It was, to use the common technical term, ‘bricked’. The software developer, who was a junior programmer, had never considered the possibility that someone might download the wrong type of file.

Another example: I was trying to reproduce a problem a customer had reported in their network of frequency-hopping radios – after running in the presence of a significant amount of interference, one of the radios would fail in a very odd way. I recreated the radio network on my test bench, using a signal generator to simulate the interference. After running the test for several weeks, I still hadn’t reproduced the problem. As I was checking the signal generator settings, my hand accidentally brushed one of its controls, disabling the interference signal. As soon as I realized what I’d done, I re-enabled the interference. At that moment, one of the radios under test began behaving oddly. It wasn’t the failure mode I was looking for, but I didn’t ignore what had happened. Following the clues took me down an interesting investigation path that eventually led to another obscure defect in the radio firmware – one that only occurred when the interference came in bursts.

Now, I make mistakes just like everyone else, and I always try to learn from mine. But after a few of these episodes of clumsiness bore fruit in the form of defect reports, I started thinking: could clumsiness somehow be an asset in testing or in test planning? So, I started documenting these situations, and Purposeful Clumsiness was born.

You might argue that being clumsy during testing is no different from testing error conditions. I would say that Purposeful Clumsiness is a really neat way of identifying error conditions you might not otherwise have thought to test.

Purposeful Clumsiness isn’t exploratory or ad hoc testing. Nor is it ‘monkey testing’ – banging randomly on the keyboard until one of Shakespeare’s works appears. It’s a well-thought-out plan to exercise those pesky error conditions that keep developers awake all night and provide job security for us testers.

As a tester, how do you go about being purposely clumsy? It’s pretty simple, actually: analyze the system-under-test’s external interfaces, and figure out how the entities that use those interfaces might do something unexpected, but reasonable, in the context of system operation. For example:

  1. When user input is required, type in something very close to, but not exactly the same as, what is expected (there’s a small sketch of this idea after the list). The same applies to mouse clicks – if you’re expected to click one of two buttons, find another button or click somewhere else on the screen.
  2. Follow a process or set of instructions, but execute them out of order.
  3. Hook up power and/or data cables incorrectly. If a connector isn’t keyed, try connecting it in different orientations.
  4. Turn off or briefly interrupt power to the system-under-test at the most inopportune time. Whenever I see a message like “Download in progress. Do not hit any key or remove power!” my hand starts to move on its own towards the keyboard or the power switch.
  5. Disconnect external I/O at the most inopportune moment.
  6. If an external system communicates with the system-under-test via a defined protocol (i.e. an expected sequence of back-and-forth messages), simulate that external system and change the sequence – see the second sketch after this list. After all, a human programmed that sequence into the external system, and they might have gotten it wrong.
  7. Do the unexpected…
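
To make the first item concrete, here’s a minimal Python sketch of a ‘near miss’ input generator. The example value (‘admin’) and the plain print loop are assumptions on my part – placeholders for however you actually drive input into your own system under test.

    # A minimal sketch of generating 'near miss' inputs (item 1 above).
    # The expected value and the print loop are placeholders; in a real test,
    # each variant would be fed to the input field, prompt or command line of
    # the system under test.

    def near_misses(expected: str) -> list[str]:
        """Return inputs that are close to, but not exactly, what is expected."""
        variants = [
            expected.upper(),                       # wrong case
            expected + " ",                         # trailing whitespace
            " " + expected,                         # leading whitespace
            expected[:-1],                          # one character short
            expected + expected[-1],                # one character too long
            expected.replace(expected[0], "0", 1),  # look-alike substitution
        ]
        # Drop any variant that happens to equal the expected input.
        return [v for v in variants if v != expected]

    if __name__ == "__main__":
        for clumsy in near_misses("admin"):
            print(repr(clumsy))  # each of these goes to the system under test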

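Likewise, here’s a rough sketch of the sixth item – simulating an external system and scrambling the sequence of messages it sends. The TCP transport, the address and port, and the message names (HELLO, START, DATA, STOP) are all assumptions standing in for whatever protocol your own device actually speaks.

    # A rough sketch of an external-system simulator that breaks the expected
    # message sequence (item 6 above). Host, port and message strings are
    # hypothetical; substitute the real protocol of your system under test.

    import socket

    EXPECTED_SEQUENCE = [b"HELLO\n", b"START\n", b"DATA 1\n", b"STOP\n"]
    CLUMSY_SEQUENCE = [b"DATA 1\n", b"HELLO\n", b"STOP\n", b"START\n"]  # out of order

    def run_session(host: str, port: int, sequence: list[bytes]) -> None:
        """Connect to the system under test and replay one message sequence."""
        with socket.create_connection((host, port), timeout=5) as sock:
            for message in sequence:
                sock.sendall(message)
                try:
                    reply = sock.recv(1024)  # how does the device react to each step?
                except socket.timeout:
                    reply = b"<no reply>"
                print(f"sent {message!r}, got {reply!r}")

    if __name__ == "__main__":
        # Run the well-behaved sequence first as a baseline, then the clumsy one.
        run_session("192.168.0.10", 5000, EXPECTED_SEQUENCE)
        run_session("192.168.0.10", 5000, CLUMSY_SEQUENCE)

Reordering is only one flavour of clumsiness here; repeating a message, dropping one, or injecting one that isn’t part of the protocol at all are equally worthwhile variations.
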
I don’t claim that Purposeful Clumsiness is ‘the’ silver bullet for testing. It is, however, a valuable tool in my testing toolbox that helps expose defects more traditional testing techniques might miss. Coming up with ‘clumsy’ test cases can be a lot of fun, too.

Remember, as a tester, your job is to find defects before the customer does, so it’s important that you try to second-guess what a customer, even a clumsy one, might do with your system.

If you’ve got any clumsiness-induced success stories from your testing, I’d like to hear from you!


About the Author

Bruce Butler, P.Eng., CSQE (a.k.a. a software developer’s worst nightmare) has 30 years of experience designing and testing embedded software systems in such diverse areas as mining automation, autonomous underwater vehicles, electronic charting, vessel surveillance, marine navigation and telecommunications. Bruce is registered as a Professional Engineer in British Columbia, Canada, is a Senior Member of the American Society for Quality and is certified by the ASQ as a Software Quality Engineer. When he’s not making life miserable for software developers, Bruce likes to participate in ultramarathons and ironman-distance triathlons.