Right now I am in Dallas, Texas at the Software Test Professionals Conference.
It's been an amazing week, if a little exhausting.
I'm afraid I didn't get to many sessions, as I was generally either presenting or prepping. This week, Pete Walen and I presented an all-day tutorial re-introducing software testing, we did a track talk on Complete Testing, I did a little talk on Reducing Test Cost, and then, in the evening, I moderated a role-playing exercise called "Werewolf!"
Whew. The good news is that my responsibilities are over and I can spend a few hours enjoying the conference. I know, crazy.
... and there's video!
The people at uTest have been interviewing speakers all week, including Shawn Hudson, JoEllen Carter, Rob Walsh, Noah Sussman, Lanette Creamer, Karen Johnson, Justin Hunter, Pete Walen, Fiona Charles, Catherine Powell, Rob Walsh (again), Karen Johnson (again), Lanette Creamer, and Matt Johnston.
And there were talks!
Speaking of Matt Johnston, that guy gave an amazing keynote at the conference; it covered some serious testing issues in SoLoMo (Social, Local, Mobile) applications and the combinatorial test explosion of supporting dozens, if not hundreds, of devices in different locales. Matt talked about ways to augment your existing testing services with a mob of strangers, including ways that don't slow you down. You might, for example, have the mob run over a weekend to get test results when you come in Monday morning -- or throw the mob at the application as soon as it goes live to the public, to try to make sure you find the defects (and fix them) before your new customers do. (Matt's slides are available for free, for everyone, on SlideShare.net.)
What impressed me most about Matt is how he runs his business. This is a guy who has indicated that he'd like to work with me, who knows I've done some small contract projects for uTest, but never once, not one time, asked me for a public endorsement. This is extremely rare; so rare that I thought I'd give him one anyway.
There has been a lot of criticism of the uTest folks in the testing world -- at least, there certainly was a couple of years ago. Since then, I believe the uTest folks have demonstrated that they 'get' the Software Test community, that they are out to benefit it, and that they strive to be the kind of professionals I mentioned toward the end of my talk -- people who don't just take money, but instead build something of value to society, and are compensated for that work.
And there's more!
This morning started with the "Rapid Fire Challenge", a collection of five-minute talks by speakers and test luminaries. A few of my favorites:

Scott shared a rule of thumb he uses to decide what to test under schedule and time pressure. In his exercise, Scott hands out cards with these items listed:
Legal or Contract
The next thing Scott does is give each stakeholder five things to test (or some number of things to test), inspired by these cards. Each stakeholder writes each item they need tested on the back of a card -- that's it. If the stakeholders want more than five, they have to negotiate with each other. This forces a specific understanding that testing is limited in time and resources. Scott collects the cards and then, wham, that basically generates your high-level test plan or strategy.
After Scott, Lanette Creamer talked about free test tools, including SoapUI (a web services testing tool), Jing (a screen capture tool), Skype (for asking for help from friends), and Adam Goucher (a friend worth tapping into). Yes, Lanette called Adam a tool -- but she meant it in the nicest way.
Dawn Haynes also did a five-minute session, where she talked about putting on different modes of thinking as a tester -- thinking like a casual user, a power user, a newbie, or her natural state, which is a sort of Tasmanian Devil.
After the Rapid Fire session, Fiona Charles gave a wonderful keynote on managing testing -- what is actually happening -- vs. managing the "test process": creating vacuous documentation and spreadsheets because we are asked to.
Stuff I did not expect
So I went to Shaun Bradshaw's session on Managing with Metrics. I expected to heckle, and I was pleasantly surprised. Shaun started and ended by acknowledging the criticism of software metrics used for control purposes, and the problem of construct validity. (For example, not all test cases are equal, so adding them up and taking averages of them might not mean what you think it means.)
Then, instead of giving us a bunch of things to measure with no context, he dove deep into one specific metric, telling a story about its use. Now, I do not recommend counting test cases -- I just don't. Yet after Shaun's talk, I can reluctantly admit that some of his tools will help you do a better job if, for some reason, measuring test cases is what you are doing.
Likewise, I was pleasantly surprised by Bradley Baird's talk "QA Needed: Testers Need Not Apply".

Again, I expected to heckle. Bradley surprised me by asking for criticism from the beginning. Not just acknowledging it, but requesting it. You gotta admire that.

Second, although the title was a bit extreme, I found that I agreed a great deal with Bradley's premise that testing alone does not provide QA, and that the role (whatever its title) can provide input that improves the product -- not just checking conformance to specification, but advising decision makers on fitness for use. We may have slight differences in rhetoric, but I see where the guy is coming from, and he does a good job dealing with competing themes and issues in our work.
Whew. Here I am typing up my conclusions. In a couple more hours, I'll be getting on a plane heading back to the midwest.
It was a pleasure.
If I missed you, well, I missed you.
See you in New Orleans in March?