
Published

Monday November 12th 2012 8am

Call For Volunteers

Software Testing Test and QA
I'm fond of saying that This Week In Software Testing (TWiST) is the longest continually-running professionally produced test podcast in North America.

The "continually running", "professionally produced" and "North America" bits are hedging.  Last time I checked, there was one podcast in South America that was longer than us, and who knows if some person has a garage podcast that runs every day that no one listens to?

At this point, I think TWiST may be the only consistently produced software testing podcast.  There are plenty of consistently produced podcasts that involve some testing (Software Engineering Radio, for example) and there are plenty of testing podcasts that are produced on occasion, but if you want week-in and week-out testing, you are probably coming to TWiST.

And we could use your help.

Call For Volunteers

A lot has changed since I started TWiST, and since Michael Larsen came on board about two years ago.  I went independent, my involvement with the magazine has increased, my travel schedule has increased, and my free time has disappeared.  If you know Michael at all, you probably know that his situation is like mine, but even more intense.

TWiST can use another show producer - or two.  We currently have enough material; the challenge is getting the raw .mp3 files edited each week.

The new producer will be contributing as part of a multi-skilled, globally distributed team.  They will be helping to advance the practice of software testing -- and have an ear to hear the raw conversations (and, sometimes, arguments) between people who are serious about this stuff.  TWiST volunteers hear about all the news, from conferences to who is hiring to what the new trends are.

The primary application I (Matt) use to edit audio is Audacity, a free tool that works on Mac, PC, and Linux.  You can learn enough to be useful with Audacity in ten minutes, and enough to do interesting magic tricks in an hour or two.  We don't really care about process as much as results, so if you want to edit in GarageBand or some other tool, well, that's great.

As for the work: we have a distributed team, so it would likely be an hour or two of editing every other week, perhaps a little less.

If you are interested, have the time, energy, and attention, please drop me an email:

Matt.Heusser@gmail.com.

This is a volunteer position.  I'm afraid we do not have the budget to compensate the role at this time.  My hope is that the value of the work will be its own reward.

That decision, of course, is up to you, and I don't blame you for turning it down.

As for me, I started blogging for free on my graduate school web server around 2000, and that turned into xndev.com around 2003, and xndev.blogspot.com around 2006.  A few years later, I started blogging for Software Test Professionals.

I would say it worked for me.

But again, it is your call.  

A few thousand people might read this.

We only need one or two.


Published

Monday November 5th 2012 12pm

Two Ways to Deal with Bias

Testing Software
Isaac Asimov, the grandfather of science fiction, once said the essence of sci-fi is this idea that things could be different.  Maybe this way, maybe that way - but different.  Maybe, just maybe, even better.

I'm going to describe a test strategy today.  It is working for a professional services firm actively deploying commercial software.  It is very different from the test strategy of my typical audience -- but I hope you'll play along.

Maybe things could be different.

Let me tell you about it.

The Strategy

Today I listened to a wonderful seminar by Pradeep Soundararajan, the managing director of Moolya software testing services in Bangalore, India.

Pradeep talked about how they use mind maps at Moolya to examine the product, to show the customer all the risk areas the team has identified, to ask what they have missed, and then to have the customer prioritize those risks.  So, for example, one generic starting point the folks at Moolya use to kick off test ideas is the COP WHO FUNG GUN mnemonic; here it is presented as a mind map:



(Like the idea?  Find free web-based mind mapping tools at bubbl.us.  For a native operating system tool, consider XMind.)

After the mindmap is customized and prioritized, it looks a whole lot like a test plan to burn down a cycle of testing.

Each node becomes a charter - a twenty minute to one hour block of time to explore the application. 

For each session, the tester produces another mindmap - a sort of lightweight documentation of what he tested, new risks exposed, and bugs found.

At the end of the test cycle, the group has a nice pile of reports they can use to create a master report, or to file bugs against, or to examine for newly emergent risks.
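If it helps to picture those artifacts, here is a minimal sketch in TypeScript of how the pieces might relate - a charter per mind-map node, a lightweight report per session, and a roll-up at the end of the cycle.  The names and fields below are my own illustration, not Moolya's actual tooling:

    // Illustrative only - a charter is a time-boxed exploration of one mind-map node.
    interface Charter {
      node: string;            // the mind-map node this charter explores
      timeboxMinutes: number;  // twenty minutes to one hour
      priority: number;        // set when the customer prioritizes the map
    }

    // Each session produces a lightweight report: what was tested, new risks, bugs found.
    interface SessionReport {
      charter: Charter;
      tested: string[];
      newRisks: string[];
      bugs: string[];
    }

    // At the end of the cycle, roll the session reports up into a master report.
    function summarizeCycle(reports: SessionReport[]) {
      return {
        sessions: reports.length,
        coverage: reports.map(r => r.charter.node),
        bugs: reports.flatMap(r => r.bugs),
        emergentRisks: reports.flatMap(r => r.newRisks),
      };
    }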

This is an extremely manual-intense test strategy for one test cycle.

But that's not what I want to write about today.

I want to write about what happens next, after the test report goes out.

The folks at Moolya are likely to get another build.

What next?

Two Possibilities

We still have our original mindmap to define a cycle.  Add a few sessions for this or that, perhaps subtract a few, as the risks on the project (and the customer priorities) change, and you've got the cycle two project plan.

But now there's a problem.  If we want repeatability, we can go re-use the old session reports and re-run them - but now we've defeated the point of exploratory testing and turned ourselves into scripted testers.  Yuck!  (For more on why I am not excited about this, see James Bach's work on the Minefield Analogy.)

The way Pradeep explained it, there are at least two possibilities.  One option is to rotate a fresh tester onto the project, give him the same charter, and allow that tester to start with a beginner's mind.

But what do you do on cycle five when you have no fresh testers?   Or on a small project?

You could try to forget all that you learned on the project and start fresh.

On the other hand, if you aren't ready to go there yet ...

You can embrace the old exploratory test report.  Call it version 1.0.  Re-save it as version 2.0.  "Re-run" the mind map, see if the old bugs are fixed, if any new ones are created along that path.  That won't take you as long as the original session.

So if you have fifty-minute sessions, you'll finish the re-run in thirty, and can take the learnings from that session and try new techniques, expanding the version 2.0 map.

That won't be all the testing; the team will create new exploratory testing charters to cover any new risks that emerge.

I told you, maybe this way, maybe that way, but different.

Thank you for playing along.

If you picked up an idea or two along the way, hey, that's just gravy.


Published

Tuesday October 30th 2012 6am

99 Agile-Testing Problems, Scrummerfall is one

Life Cycle Software Testing Agile
Someone wrote this question on the Agile-Testing list yesterday:

Hey All,

OK, so for those of you that have successfully worked as agile testers, or been on a team that made the shift, I need your help.

I've got a team with an agile tester; he's great. Very smart, very driven, good sense of humor. We hired him this summer into a team that's been around for a while with a couple of mature production apps.

The main issue is, we're in scrummerfall; testing is still last, and it's not uncommon for several programmers to deploy to QA within a day or two and thus the team's work bottlenecks. This also has a tendency to happen near the end of a sprint, or when a release is being prepared.

How do I "test-infect" the thinking of the programmers, and how do I 
"develop-infect" the tester? I'd like to even more away from such role labels as in my mind they tend to reinforce this arbitrary break down.

I've been reading up on this, though while it's clear this is a cultural shift, finding actionable approaches has been elusive.

Some in progress items that might be of interest:

- the team as a whole has asked for some leadership here from me, and tasked me with bringing ideas to them in the next couple weeks
- the tester does not yet have a functioning local environment to really dig into the automated tests, or be able to write them. Remedying this has been pushed out many, many times and is now coming back into a high-priority scope.
- we are, among other things, attempting to practice BDD, though again while the concept is clear, actual implementation is more elusive
- we have a new product owner as well and we've been spending more time on user stories up front to have them be a more concise slice of work. We have not yet gotten the tester involved in these early conversations, although we have identified it as valuable and something to do.

I would very much appreciate any thoughts and experiences on this, and thanks in advance for taking the time!!!


I thought this was a reasonably framed question.  The author gave the specifics of the situation - the who, what, where, when, and how -- and why -- along with some of his attempts at a solution: blend roles, do BDD, and so on.  I thought I had enough information for a real reply.

This is what I wrote:

Hello Kevin -

I think we agree on substance, there are just a few rhetorical things in your message that give me a little pause.

Or, perhaps, alternatively, I might be looking to solve the same problems, but in a different way.

Granted, I only know what you wrote.  I think you did a good job describing the system forces, and I am willing to problem solve here, even without a perfect picture of the situation. That said ...

If the team feels like scrummerfall, I would consider two things - (1) reducing work-in-progress inventory to move toward one-piece flow, and (2) requiring QA to be involved in defining 'done' for the story before the programmers start writing code. As a manager, I might ask that the developers /demonstrate/ done, preferably in an automated way -- but, realistically, I might defer that a bit, maybe give them some build automation stories.

If you make those small system changes, they might encourage the proper behavior where merely 'encouraging' pairing of dev and QA and blurring of roles might struggle.

But that's just me talkin'. Do you have any questions?

regards,

--heusser

Giving advice over the internet in bite-sized nuggets is really hard.

My question for this audience:  What do you think of my advice, and why?

(Also, knowing me, you probably noticed other things I did, like entirely ignoring the discussion of BDD.  What do you think of that, and why? :-)

More to come.


Published

Monday October 22nd 2012 8am

Lessons from the STPCon 2012 Test Competition!

Software Testing
At STPCon Miami last week, I organized a test competition as a double-track session.  My friends Andy Tinkham and Scott Barber helped out as judges.  Petteri Lyytinen wrote some software to test, and Matt Johnston of uTest provided a reference website to test that had just had a rewrite -- that will be important later.

It is also important to note that, while this may be a new idea at commercial test conferences, it is certainly not my idea; James Bach organized a test competition at CAST, the Conference for the Association for Software Testing, in 2011, which to a great extent inspired this format.

Here are the final scores:


A Little Bit About the Rules

To begin the contest, we handed out a single sheet of paper with the rules of engagement.  Teams had two and a half hours to test four different websites.  The final judged deliverables were the bug reports and a final test status report.  At the beginning of the contest the judges announced three major prizes: best bugs submitted, best status report, and best overall score.  The program brochure made it clear that best overall score would include interacting with the judges and team coordination.  At 5:30, we stopped the clock to hold a fifteen-minute retrospective.

Observations

The first thing I noticed was the three uTesters in the room.  When I asked them to sign up, they were reluctant. "We are in marketing," they explained. "We are here to blog the proceedings, not participate."

In order to understand what was happening, they had to test, and once they were testing ... they were hooked.  Granted, testing is just part of the culture at uTest, but if we can get a marketer to test, then perhaps we can get many people to test come crunch time.

The second thing you'll notice is that the uTest scores weren't terrible -- and I think I know why.

The uTest folks might not know much about software testing, but they were aware of that weakness.  So they asked for guidance and talked together to try to figure it out.  This gave them points for interacting with the judges (the "product owners") and for teamwork and strategy.  Another thing the uTesters did know, which, again, is probably cultural, is how to write a reproducible bug report under time and information pressure.

The need for guidance was an explicit, planned part of the competition.  

We handed the teams software and said "test this", then told them we were product owners.

Our intent was to simulate the majority of the projects we, the judges, had worked on.   And, just like the majority of the projects I have worked on, we (the judges) expected certain things from the teams, but they had to ask.

Team uTest was the only team that asked what we wanted to see in a status report.  

This turned out to be very, very important.


It All Came Down To One Word

If you look at the scores across the board, team Lemon PoppySeed Chicken (LPC) is in the lead.  When we periodically looked up and observed, we saw the team talking to each other; when we asked them which websites they tested, they had specific reasons.  LPC was also one of two teams that actually asked the judges which of the websites were more important and valuable to test.  The bugs they reported were more important to the decision makers, and Andy was able to reproduce more of their defects than any other team.

It all came down to the status report.  

Remember, uTest was the only team that asked what the status report should look like.

Team LPC produced a list of bugs.  Personally, I would not call that a status report; it is more like a bug report.

As product owners, our goal was to figure out how close we were to shipping, and, if we had work to do, to use the status report to triage work and assign fixing resources. With the LPC report, we were overwhelmed.  Was it good enough?  Could we ship?  What bugs were critical?  We did not know.

This is where team Highlander shined.  Highlander delivered a test status report that gave high-level insight to decision makers, which is what we were looking for.  This was probably more a result of habit (more about that later) than insight; team Highlander didn't ask for direction either.

Lessons

The conditions of the test competition were a little different than your classic project.  For example, when we are back home, we generally have some idea of project context.  We are in the status meetings and standups, we talk to the developers and decision makers, and we know what matters when it comes time to ship.  Thus we develop habits of "just testing" the software.  

When it comes time for a test competition, we do not "rise to the occasion"; instead we fall back on our habits and training.  This worked for team Highlander; I think it is what tripped up LPC, Red Square, and Team Dragon.

Dragon's case was interesting; they found a large number of Android device defects.  At least one member of the team tests native Android apps for a living, so it makes sense, for him, to test Android out of habit.  If the team had asked, we'd have suggested our main concern was not mobile devices, but instead browser compatibility for popular browsers on laptops and desktops.

One of the products Dragon tested is generally fit for use on the desktop, but has significant problems under Android.  This led to a dichotomy on the final status report, where the testers wrote this literal text: "Due to the good quality of the website we think the website has yellow status."

If I were a product owner, I'm not sure what I would do with that.

The biggest lesson I took away from this is about project context: things that might make a team radically successful in one environment might fail in another.  The project itself was probably closest to a uTest project, which, combined with their asking for help, may explain why team uTest did so well. Yes, Matt Johnston was on the team, and he did provide me with the software, but it had just had a major site redesign, and the members of the team clearly had not been testing it before.  I chose to believe them and let them compete.  If they had won, we would have had an awkward conversation -- but it turned out not to be the case.

Finally there were some struggles with the test report, and that makes sense.  Most of the teams I work with lately are shipping often; they don't have a huge work-in-progress inventory that would justify a formal report.  Instead, the team discusses the known issues at a standup, in person.  Still, writing a status report is a valuable skill to have, something James observed at his testing competition in 2011.  I am interested in simulating standup meetings in future competitions, but I'm still working on it; we have room to grow.

Overall

Out of a possible 35 points, nine points were all that separated the top team from the bottom.  

So in one sense, the competition was close.

If you look a little more closely, you'll see that scores tend to average overall, but some teams did much better or worse in a given category.  This points me not to equality, but to a great variety of expertise.  You see this when you dig into each team. 

Team LPC did extremely well in execution, but struggled with the test report.  Highlander rocked the test report and also collaborated well, but we weren't excited about the bug reports they produced.  Red Square found important bugs we cared about, but seemed to struggle with coordination.  Team Dragon was the only team to decide a bug reporting format up front, but they fell back on habits of testing -- they did not clarify the mission of the test.  

(I do not mean to be overly critical above; every team did well.  It is just that by writing about this, there is an opportunity for people who did not attend the event to learn something.)

The next morning, Rich Hand, director of membership for STP, presented a $25 gift certificate to Nicole Rivera and Karl Kell, who formed team LPC.  Team Highlander, which included Brian Gerhardt of Liquidnet and Joseph Ours, a consultant at Cohesion, won best status report and best overall.  Team Dragon took home most entertaining bug report.  (Team Dragon found that the bug submission system itself contained a defect: if you wrote your email address in all capitals, a JavaScript error popup appeared saying that you must enter a valid email address!)
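For the curious, that kind of defect often comes down to a validation pattern that only matches lower-case letters.  Here is a hypothetical sketch in TypeScript of what such a check might look like; it is illustrative only, not the actual code behind the submission form:

    // A case-sensitive email pattern - note there is no /i flag and no A-Z range.
    const emailPattern = /^[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}$/;

    // Returns an error message for "invalid" input, null when the address is accepted.
    function validateEmail(input: string): string | null {
      return emailPattern.test(input) ? null : "You must enter a valid email address.";
    }

    console.log(validateEmail("tester@example.com")); // null - accepted
    console.log(validateEmail("TESTER@EXAMPLE.COM")); // rejected - the bug Team Dragon reported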

Keep in mind, the goal of the test competition was to learn things while having fun.  I believe that happened, mostly because, except for uTest, the teams consisted of people with mixed skills from different companies. There was a sort of cross-pollination that was a pleasure to watch.

Still, there was potential for more.  If we had run the event as multiple rounds, we could have held a mid-competition retrospective to discuss what makes a good reproducible bug report, how to get into the mind of the product owner, and so on, and, hopefully, seen teams improve in rounds two and three.

Tomorrow

I am seriously considering running a web-based test competition in December 2012.  Here's how it would work: a week in advance, I would create a chat room or Skype ID and broadcast it, along with the time of the competition.  At the exact right time, probably noon Eastern USA, a blog post appears here with the rules and how to submit bugs.  Three hours later, the test competition closes, and we start judging.

Of course there are timing problems, logistics problems, judging problems, and so on.  I believe those are all solvable problems.

If you'd like to be involved, as a judge, volunteer, or to assemble a team (of one to six people, on or off site) drop me a note.  We may just do this.

Otherwise, STPCon Spring 2013 will be in San Diego, California in April.  You can probably guess the type of session I just proposed ...  :-)








Published

Tuesday October 16th 2012 9am

Testing Lessons from the Princess Bride

Testing Software
I am at the Software Test Professionals Conference in Miami, Florida.  At lunch I will be presenting "Testing Lessons From the Princess Bride", based on the 1987 movie and the earlier book of the same name.

No really, it's a seven-minute talk, a little light-hearted, a little entertaining. Hey, I may even provide an actual idea or two.

Are you in Miami?

If not, how about I post my talk here?

Lessons Learned from the Princess Bride

First, let's meet five of the major characters -

1) Vizzini - The Sicilian, Vizzini will do anything for money.  He is full of plans but is not very bright; his catchphrase is "Inconceivable", even when it is conceivable.

2) Inigo Montoya - The Spaniard.  Inigo's father was killed by the Six-Fingered Man when Inigo was eleven years old.  From that day, Inigo dedicated his life to becoming an expert swordsman, in order to get revenge.

3) Fezzik - The Giant.  He is not known for his intelligence but, instead, for his strength - though he does like to do impromptu comedy with rhymes.  "Does anybody have a peanut?"

4) Princess Buttercup - The damsel in distress.

5) Westley - The Hero.  Westley is in true love with Buttercup and is to marry her, but he leaves to work on a ship to raise money for the wedding.  Along the way his ship is attacked by pirates and he "dies."
Of course he is not really dead; he comes back to save Buttercup five years later.

The Lessons

1) When they are climbing the Cliffs of Insanity, Fezzik is carrying the entire bad guy team (Vizzini, Inigo, and the captured Buttercup) on his back while climbing a rope.  They are being chased by Westley, who is gaining on them.  Vizzini yells at Fezzik to "hurry up."

Of course, Fezzik is going as fast as he can.  Vizzini's yelling doesn't actually help anything -- in fact, it just injects stress onto poor Fezzik.

Lesson: Don't be a Vizzini, the bad boss whose only advice is "work harder."  Conventional wisdom is that pressure works in the short term, but I am not so sure of even that.

Don't drive; lead instead.

Lesson: We all know pressure doesn't work long-term; of course we do.  But do we always act like it?

2) Fezzik, the Giant, quickly realizes that Vizzini, the Sicilian, is a "bad guy" who is leading them into bad choices, but he allows himself to be a follower because he believes the intimidation tactics of the Sicilian.  Fezzik thinks he can't find a better job and will wind up where he started: unemployed, in Greenland.

Lesson:  Given the job the giant actually has -- unemployed in Greenland is probably better.

3) Inigo Montoya has spent his entire adult life (and most of his youth) studying fencing, yet he is beaten by Westley.  For that matter, later in the movie, when Inigo wants to break into the castle, he says "I have no gift for strategy."

Lesson: Different problems require different skills; you never know exactly what skills a project will require. Long term, the multi-skilled person (strategy + fencing, wrestling + game theory) is going to be more valuable than the master of one technique.  If you don't have the skills, and, maybe, don't want to develop them, you can recruit someone else to form a team to storm the castle -- and you get points for that, too.

4) Buttercup gets captured and Westley has to rescue her.  She also (twice!) falls into a logical trap where she "has" to marry the prince.  This is not a universal idea; in "Ever After," the girl gets captured and rescues herself.

Lesson: Three strikes and you are not out!  There is always one more thing you can do to influence the outcome.  Becoming a victim is a choice -- and we don't have to choose it.

These are only a few of my ideas -- what are your favorite lessons from the Princess Bride, or any other popular movie or TV show?







