Two weeks ago I published the blog entry "Are Testers Going Away?"
-- it's caused a fair amount of controversy.
Let me begin by saying that, while I'm proud of the article as published, these things can always be done better.
For example, a few people who read the article thought I was coming out against having programming skills, and others said "but of course, he isn't a programmer so he's down on it."
Neither of those is the case. First of all, yes, I think programming is a skill, and it certainly can be useful for testers. And, while I don't want to make a big deal of these things, I did get my Bachelor's degree in Math with a Computer Science concentration, and I earned a master's in CIS while programming during the day. I spent a decade as a developer, then project manager, before landing a position with 'tester' on my business card.
So now, I'm not against testers having development skills - I just think the two skills are different. Like target shooting and skiing, each skill can take a lifetime to master and improve, but you can find people who do both. I'm just not sure you want your entire test team staffed with these people.
Even if you do staff your team with all programmer-testers, that's not exactly the ideal that I take issue with.
I take issue with the ideal that wants to click a button, walk away, get a cup of coffee, come back, see a green bar, and say "It's tested -- ship it!" -- or, even more so, the continuous checkout-build-automated-test cycle: if it passes, ship it.
That's just not my ideal. I'm not sure it's a noble goal, or a worthy one. In practice, when I've seen it tried, I've seen organizations that had five full-time testers last year now have seven full-time (and more expensive!) SDETs this year, yet have the same complaints about the testing bottleneck or issues with quality or usability. (Or, it turns out, when you dig into it, they do employ traditional testers after all.)
Here are a few reasons I prefer a diverse team that sees testing as an investigation:
A) There's more to testing than conformance to spec.
How many of us have seen a spec that we thought really, truly captured the key issues of the project in a way that was easy to read, simple, and straightforward? And even for the few of us who have, did that spec cover all the possible defects?
I've seen or heard of a few that could - mostly in embedded systems - avionics, transmission controllers, and the like. The specs are generally expressed in something close to a symbolic notation. For the rest of us that have to deal with customers who are humans, and not output devices, well, it's going to be tough.
B) Automation tends to look at -- and mitigate -- certain categories of risk
But it doesn't look at all of them. So you do a great job eliminating certain kinds of defects, but the things that are hard to measure (like usability) tend to be ignored.
If setting up the software is a pain, but you automate away that pain, your test team won't feel it. So your product might be bug-free, but no one wants to use it.
C) Who is the target customer?
If your application is Microsoft Visual Studio, it might make sense to have all SDETs test it. After all, they are close to the target customer. But if it's amazon.com or wordpress or basecamp, I suspect you'll want some people who eat, drink, breathe, and think like the customer - and that means a diverse test team.
----> Now please allow me to clear up a few things.
None of this applies to unit, or developer-level, testing. I think TDD is the bee's knees, and will generally improve quality, and time-to-market, before the code gets to the testers. Nor am I precluding or being negative about what is commonly called Acceptance Testing or ATDD. I think a sprinkling of ATDD, as examples, can be a great way to drive development. Nor do I want to preclude model-driven testing, which I would put in a different category. I'm just saying those alone are not sufficient.
Do I see a place for a programmer skill-set on a test team? Absolutely. I've done a fair amount of SDET-like work, and in transactional and batch-oriented systems - insurance or banking, for example - where there is no graphical user interface, this kind of testing might be the way to go.
It's when we introduce the GUI, or any UI, that those humans get involved. Having tried a few ways, and studied a lot more, my approach to testing is, well, human-centered. So I might want to use some automation - like a chain saw - but I'll have a human being standing behind it, doing the driving.
The common pattern I see with SDETs is that companies with a strong development focus like to hire them as a sort of trial. If the tester does well, he can be promoted and become a programmer.
Thus, while we /say/ the SDET should have developer skills plus tester skills, in reality, we pay him less. So the SDETs you get are less passionate about testing and more passionate about getting a job in programming.
Only they aren't great programmers either.
You can see where I'm going with this; it leads us back to the seven SDETs who cost more than five traditional testers, with no great difference in output.
About a week ago I got a private message saying that the big advantage of SDETs is that the career path is clear: Test Management, Test Architect, or Development. Without a development option, the SDET 'career pyramid' looks pretty narrow.
Except I see a fourth and fifth option for the software tester.
Think about the skill set: Testers need a wide understanding of the application. They need to understand the business processes as a whole. They need to understand the customer. They need analysis skills, risk management skills, time management skills, critical thinking, the ability to argue effectively and influence others, and curiosity.
When you strip away the cultural attitudes about which job is superior to whose, I submit that, looked at objectively, testing is surprisingly good at developing the very skills that are desperately needed in general management and the executive suite.
That is, if you're into that kind of thing.