So last week I started the "This Week in Software Testing" podcast. With a name like that, you'd expect it to be, well ... weekly, right?
Then the screen went out on my MacBook. No, really, it's an Apple warranty issue. They are replacing it for free. No worries.
But until I get it back, I'm stuck on this old Dell with no good recording software. So no TWiST this week.
I can, however, recommend some good alternatives. My friend Željko Filipin (a walking internationalization test) is involved in TestingPodCast.com, a site that compiles and lists podcasts about testing.
I think the tag line they have for the site is just great:
Audio podcasts on software testing. All of them.
Of course, because they index all of them, the results are a mixed bag. At least most of them aren't boring. So you might listen to a podcast that you disagree with in order to understand the other side's position -- and, maybe, to deconstruct it.
If you listen at your computer, maybe over lunch, a video might be good instead, in which case I'd recommend Simon Stewart's latest talk, "Test Engineering at Google."
In the talk, Simon covers what Google calls SETs, Software Engineers in Test, which is what I've been calling SDETs on this blog.
I find Simon's perspective fascinating. According to Stewart, the SET is actually a programmer who spends most of his day, well ... programming. Not necessarily writing programs that test directly, though; the SET spends his time on test infrastructure, writing frameworks and tools to help the end developers at Google do their own test automation. Another big piece of the job is education and training on how to use those tools.
The ratio of dev to SET at Google is something like seven to one, so developers are expected to do more testing and test automation themselves. Instead of testers, this is a picture of SETs as accelerators.
To my knowledge, Google also employs a third (or possibly fourth) role of "Test Analyst" or "QA," most of whom are contractors who do what you and I would call "humans in a seat, using the software" testing. Google seems reluctant to talk about that.
Finally, my friend Alan Page at Microsoft has a blog post
where he claims that while SDETs at Microsoft need to have developer skills, that does not mean Microsoft has a vision where 100% of all tests are run by a computer unattended by a human being. Alan sees it as a continuum, with 100% on one end and 0% on the other, and most good testing falls somewhere in between. So instead of asking "test automation, yes or no?", we need to be asking where and how much.
The truth is that Alan Page and I agree on a whole lot; it's just that the things we disagree on are the most fun to debate.
So, sorry, no TWiST this week. But between Željko's podcast list, InfoQ's videos, and Alan's writing -- somehow, I don't think you'll starve.
More to come. Seriously this time! :-)