"Given three numerical inputs, determine if the combination could be a triangle, and if it could be a triangle - what type. Scalene (all sides are different), Equilateral (all sides are the same), or Isosceles (Two sides are the same.)"
This is a classic problem in software testing; you can find it in a number of books on the subject and even more conference papers and blogs.
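Since the rest of this post pokes at implementations, here is a minimal sketch of one classifier, in Python. Treat it as a sketch under assumptions: the function name, the rejection of non-positive sides, and the strict triangle-inequality check are my choices, not part of the canonical problem statement.

    def classify_triangle(a, b, c):
        """Classify three side lengths as 'scalene', 'equilateral',
        'isosceles', or 'not a triangle'."""
        sides = sorted([a, b, c])
        # Reject non-positive sides and degenerate or impossible shapes:
        # the two shorter sides must sum to MORE than the longest.
        if sides[0] <= 0 or sides[0] + sides[1] <= sides[2]:
            return "not a triangle"
        if a == b == c:
            return "equilateral"
        if a == b or b == c or a == c:
            return "isosceles"
        return "scalene"

    print(classify_triangle(3, 4, 5))  # scalene
    print(classify_triangle(1, 1, 2))  # not a triangle (degenerate)

Even this little sketch hides decisions -- should the string "3" be accepted? What about 2.5, or a missing value? Keep those questions in mind; they come up again below.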
I've referenced it a few times myself, first to point to an on-line test about two years ago. The next day I did a follow-up post explaining how reducing the problem to "just" input/output of three numbers might cause a team to miss defects. Based on that, I am working on some theories of where errors come from ... but that's a different blog post.
For today, I would like to do two things. First, clarify that follow-up post, and second, maybe, try to provide a different way of looking at the triangle problem, one that might cause us to see the problem differently. Here goes ...
About "What's an SDET -- Part II"
I wrote "What's an SDET II" in March of 2009, coming from the perspective as a tester/developer and former developer. Since I wrote the article, a couple of people have raised some objections to this line:
In my experience, Developers tend to think in terms of automatable business process -- when exactly what needs to be done up front isn't clear, developers claim that the requirements are "inconsistent" and refuse to program.
Now, I meant this as a credit to programmers -- that if you don't understand a process, if the process isn't straightforward, well, then, you can't automate it. A few people thought I was saying that developers are sticks-in-the-mud who insist their jobs be easy -- or at least that I was portraying some sort of us-vs.-them mentality with regard to the programmers.
So yesterday I took a second shot at it with a posting to the agile-testing list. It's a complex subject, but I think this explains my position a little better:
When I was a programmer, people would often express 'requirements' that were not thought through. The problem with that is that code is very specific and exacting -- it needs to know what to /do/. You can't tell code "figure out what is appropriate to do and do it."
The other programmers I would respect, like me, would point out that an empirical process requiring discernment and judgment could not be automated -- we needed to make the process /*explicit*/ and /*defined*/.
In most cases, we would engage the customer in a conversation and figure out what the software needed to do. In some cases, we needed to bake in a 'human intervention step', where the code blocked and a human stepped in and made a decision. And, sometimes, the project would be scrapped because artificial intelligence just wasn't advanced enough yet. (Jerry Weinberg has a good example of this kind of problem: a program to create a list of all possible license plates, then remove every combination of letters and numbers that anyone could be offended by.)
The worst programmers -- or at least the naive ones -- would not notice that the work was undefined; they would code up something anyway ... and they would fail or, if they were lucky, get stuck early.
It turns out that software testing is not a straightforward, well-defined business process.
So, when a team tries to 'automate' [customer-facing] tests, if the programmers aren't throwing up their hands, engaging in conversation, and baking in human intervention steps, you gotta wonder: What's that all about?
Hopefully by now you folks see where I'm coming from -- even if more discussion is still needed.
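Before we get back to triangles, here is a sketch of what I mean by baking in a 'human intervention step'. The order-approval scenario, the threshold, and all the names here are invented for illustration; the point is only the shape of the code.

    import queue

    # Hypothetical rule: small orders are defined well enough to automate;
    # large ones require judgment nobody managed to make explicit.
    review_queue = queue.Queue()

    def process_order(order):
        if order["amount"] <= 1000:   # the explicit, defined path
            return "auto-approved"
        # The code cannot discern; block and hand the case to a person.
        review_queue.put(order)
        return "queued for human review"

    print(process_order({"id": 1, "amount": 250}))    # auto-approved
    print(process_order({"id": 2, "amount": 50000}))  # queued for human review

The interesting part is the explicit branch: the defined work runs unattended, and the undefined work is routed to a human instead of being guessed at.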
Now back to the triangle problem
So the web-based application that provides the triangle problem is called "Triangle Test."
Now we go meta.
You see, the app isn't just the triangle problem. Instead, the app has ten different kinds of "test cases" it expects you to run. The idea is you'll enter in a bunch of different values until you think you've done a good job "covering" the app, then click evaluate and see your 'score.'
Not only are the test cases limited, but the program has to try to /map/ your attempts onto one of those test cases. In my experience with the application, it is common for it to guess wrong: "Oh, I see you are testing for (a thing you were not testing for)."
Not only that, but if it decides you are testing for the same condition, it will tell you to stop wasting time because you already covered that case -- even if you were looking for a different failure mode.
Having written up a triangle determiner myself a time or two as a programming challenge, I know of at least two common failure modes, which I would probe with two different test attempts, that the software tells me are redundant.
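To make that concrete, here is one plausible buggy determiner and two test attempts I might run against it -- my own illustrative examples, not necessarily the cases the app has in mind:

    def buggy_classify(a, b, c):
        """A plausible buggy determiner: it compares sides for equality
        but never checks the triangle inequality at all."""
        if a == b == c:
            return "equilateral"
        if a == b or b == c or a == c:
            return "isosceles"
        return "scalene"

    # Two different attempts, hunting two different failure modes:
    print(buggy_classify(1, 2, 10))  # inequality grossly violated; wrongly "scalene"
    print(buggy_classify(2, 2, 4))   # degenerate "flat" triangle; wrongly "isosceles"

Both inputs are invalid triangles, so a grader keyed to categories could call the second attempt redundant -- even though it is hunting a different bug, and would catch a determiner that checks the inequality with < instead of <=.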
Now don't get me wrong: this is a nice piece of training software, especially for the price. I'm glad the gentleman created it and threw it out there; I've used it and encourage others to use it.
What I am proposing is that we wring even more value out of this problem by testing-the-test -- finding ways it could be improved, or test cases it did not consider.
More than that, it'll be fun.
What do you say? Can you find a bug in the triangle test?
(Aww, I know this audience. "Can you find /ten/ bugs?" -- that's more like it, right?)