




Tuesday December 18th 2012 4am

‘Jiggling’ II - Getting It Wrong

Testing Software Development Test and QA Technology Teams Strategy

Last time I talked about situational awareness, and how a good consultant (scratch that, a good friend) can ‘jiggle’ the other into reconsidering the situation.  Sometimes that goes wrong; sometimes horribly wrong.  But even then, there is usually something I can learn from it.

An STPCon Story

Todd Schultz was presenting about Mobile Application testing at STPcon last fall. Toward the end he spent a few minutes on automating mobile tests.

Now creating a system to run a website through some exercises and come to conclusions is a tricky business.  Just like desktop systems, every moment spent writing (test) code is a moment not spent testing.  Also like desktop applications, there may be other client combinations (browser, OS) where the check fails - even though it works just fine in our sample.  With mobile applications, there are suddenly more operating system, make, model, and browser combinations than ever before.

Another distinction is the short timeline of some mobile applications -- many of them are built, shipped, and never modified.  Traditional, scripted automated checks can only be created and pass once the application ‘works’ for that feature; the only value they have is in regression.

If the Mobile App is going to be built in a month, deployed, and never touched again, that automated GUI check isn’t going to have a ton of value.

... And then Matt Jumps In

So, in my joking Matt Heusser style, when Todd is talking about how to slice up our time, I say “Why not automate all of the checks on all the devices?”

I’m trying to be tongue-in-cheek here, to give Todd a chance to explain that the time invested in building and supervising the test runs would exceed the economic opportunity for the project - that 100% automation of all configurations is not some ideal good we should automatically aspire to.

Instead, he takes the question seriously, and answers me “we didn’t have time to do that.”

The problem with “we didn’t have time to do that” is that it says Todd is failing; that he had to cut corners -- when I think he actually made a solid, reasoned, economic decision.

After the talk, someone approached me, more than a little worried that my attitude toward testing was so automation-centric.

Wait, what?

Clearly, we hadn’t met.

The trick I was trying to pull off was to expose the gap between the way things are supposed to be (“all tests should be automated”) and the consequences of that idea.

It wasn’t just Todd.  During the panel discussion, I asked a few questions like that, Dr. Phil style: “...well yes, because that always works.”

One of the participants said to me later “I didn’t realize until halfway through the session that you were joking, because your delivery was so deadpan.”

When I told the story to my friends who had been in the room at the time, they had no idea what I was talking about -- they had all picked up the signal.  They knew I was kidding; they were in on a little joke.

Me?  I learned to be aware when people didn’t know me well; to smile, nod, or wink, to indicate that I was joking.

I started out trying to jiggle others, but I learned a thing or two myself.

It’s funny how that works, ain’t it?





Saturday December 15th 2012 10am

Situational Awareness

Software Test and QA Teams Strategy Process Project Management Quality Assurance Requirements

Have you ever been in a place and realized, in a moment, that something was amiss?

It may have been a section of town you were unfamiliar with.  Perhaps you took a turn down an alley that wasn’t lighted well, or realized that you were alone.

Suddenly you realized that things were not quite right.

The earlier you have that moment, the better.  You’ll be in better shape - a more familiar area of town, with better lighting, more options.

Police and Military personnel call this situational awareness.

It is a skill, it takes focus, and it can be trained.

Back In the Software World

Have you ever seen someone walking around a project as if in a daze?  As if they are actively ignoring every kind of feedback in order to maintain a fantasy?

There are a lot of reasons for this -- one is the focus on goals in our culture.

The person may have a cash bonus to hit a ship date, or some other incentive.  At the very least they have a goal -- something decades of management literature has told us is a great idea.  Not only is it a goal, but it is a "good" goal.  In this case, the goal is Specific (you know what you have to do), Measurable (you either ship or not), probably Achievable/Realistic, and Time Boxed - S.M.A.R.T.

Now say I come to him with bad news about the quality, or the features, or the state of the build. 

Does He Even Want To Hear It?

Probably not.

After a year and a half of being independent, let me tell you, the appeal of remaining blissfully ignorant about a deal going forward is incredibly strong.  For me, at least, if the deal falls through, there is a negative consequence.

The dude with the perverse incentive still gets the bonus.

The Harvard Business Review put out a blog post this week stating that goal setting can cause “loss of focus” on anything except the goal.

In other words, it puts people in a fog.

That means they lose situational awareness.  If everything is going well, that isn’t a problem.  No one was practicing active risk management, but no risks emerged, either.  Everything is okay.

Sometimes, though, there really is a problem conspiring to get you.

There are exercises you can do to improve situational awareness, starting with trivial games like “I spy” that many of us remember from our childhood.  A significant amount of my writing is about project warning signs, and I am far from a lone voice in the wilderness.

But what if you are not the person who needs to notice and take action? What if it is someone else?  How can you get them to notice?

One way is something that consultant and author Jerry Weinberg calls “the jiggle.”

The Jiggle

Last week I was on Majesty of the Seas, watching my children climb the rock wall.  Behind me, two adults were playing shuffleboard.

One of the players, a gentleman in his middle years, kept sliding his foot past the end line, and I said “excuse me, but I couldn’t help but notice that your foot is going past the edge.  That’s a two point penalty.”

Then I quickly added, in a joking voice “I just made that last part up.”

Everybody laughed.  It was a little silly, a little fun, and no harm done.

But I did notice that the man corrected his feet.

When we execute a jiggle, that is our goal:  Changed outcomes.  We can’t force a different outcome, because the other person gets to decide what to do.  What we can do, or try to do, is get them to think from a different perspective, to refocus ... to have a little more situational awareness.

To do that, I want to make the other person laugh.  I want to intersect two ideas that do not match up in a way that elicits a grin, or a frown.  I want to do it in such a way that the other person is not embarrassed, so it should be in a small group.

Sometimes it is a question that calls for reflection: “yeah, ‘cuz that always works” and “How’s that working for you?” are two questions that do it - although tone of voice and delivery are very important.

Sometimes it works; sometimes, we fail.

Even when I fail at the ‘jiggle’, there is usually something I can learn.

More about that next time.





Tuesday December 4th 2012 2pm

Stop the Life Cycle, I want to get off

Agile Software Testing Process
I've never been a fan of the term "Software Development Life Cycle", or SDLC.  It feels vague, abstract even.

Several times when I've heard that term, it felt a bit like we were in sixth grade biology class learning about butterflies.  We weren't going to actually go observe the butterflies in nature, or grow them ourselves ... instead we were going to read about the butterfly in a book and memorize some theory.

Not every time, certainly, but some of the time.

This is something I've been mulling over in my head for a few years now; I've been blogging about it since as early as 2007.

Then two weeks ago in Germany, Scott Barber made my point for me.

The Ongoing Evolution of Agile Software Testing

Scott was giving a talk at the Agile Testing Days Conference in Potsdam, Germany.  His title was the "Ongoing Evolution of Testing in Agile Development."  He was kind enough to put his slides online and, yes, I did catch the audio to use later for the podcast.

About halfway through his talk, Scott put up a slide.

Then Scott started to talk about his actual process as a consultant.

According to Scott, a few years ago, he would lead with questions about the "lifecycle model" the team was using:  Were they using an incremental model, an iterative model, Waterfall, or Scrum, or Extreme Programming, perhaps?

Today, he asks about what the team does and how they do it.

It's not just Scott; I've experienced the same change in conversation.

Ten years ago, when I was interviewing project managers, part of the interview process was to ask what the project managers did at their previous or current job.

At some point I stopped caring about the question, because the answers were predictable. 

What Methodology Did You Use At (Company-Name-Here)?

I'd say that we always got the same answer, but there were really a small set of answers.

The most popular answer: "We invented our own methodology that blends (Scrum, XP, RUP, list of buzzwords here)."

The second answer: "What's a methodology?"

That's it.  Those were the only answers.

But say you manage to get beyond that hurdle.  You find a team that actually claims to be doing Extreme Programming, or perhaps Scrum.

You gotta watch for the sly grin, head bob, and downcast eyes, then press hard.  Because eventually the answer you get is something like "... But we're not doing it right."

Nobody Is Doing It Right

The vast majority of the folks I talk to have their own hybrid thing they do to develop software.  It's messy.  It has problems.  Decisions are inconsistent.

Yet somehow the team is shipping the software anyway.  The company is not closing its doors. 

Besides, most modern models require periodic refinement anyway - which means a year after your team starts doing Scrum, it is unlikely they are still doing "book" Scrum.  That would be stasis, and the heart of Scrum is inspect and adapt, right?

Since nobody is doing it 'right', conversations about process using labels tend to become arguments about terms and assumptions -- not discussions of the actual work.

Talking About the Actual Work

When I start talking to software teams today, I talk about practices, not labels, and typically start with this one simple question:

"How Often do you ship new builds to production?"

Based on that, I will ask a few more questions: 

How are the teams organized?  
What is a project team?  
Do you practice daily standups?  
Pair Programming?   (If yes, what does that mean to you? If "some", how much?)
How do you 'do' regression testing?

You can think of these as questions designed to help me understand the context.

If I'm tracking with someone, I may ask them to explain the software development practices in an organization - or a smaller component, like new feature testing, or the build/deploy process, the project portfolio, or how projects are sized and scoped.

If we are tracking well, I may ask them to explain the process standing on one foot.

When we are really in a groove, on a good day, when they describe the 'new approach', I can predict the outcome of the process - up and downside - without being told.

The conversation is imperfect.  We mean different things by using the same words, and sometimes we have conflict.

But I'll take that over SDLC labels any day.


After fifteen years of developing commercial software, when I have conversations with teams, they are, more and more, about the actual work itself, not a metaphor, analogy, or abstraction of the work.  "We did this on the last project and had that result, so we are thinking of tweaking this other thing."

Fewer labels, more specifics.

Software Delivery may not be grown up.  Hey, I'll be the first to admit it.

But you know what?

Software development, as an industry, might just finally be graduating from the sixth grade.

What's next?





Monday November 26th 2012 10am

The Divide Between Agile-Testing and Others

Agile Testing Test and QA

It all starts in a bar in Germany.

No, really, I was at a bar in Germany last week, talking about the differences between "Agile Testing" and "Traditional Testing", with Xu Yi, Huib Schoots, and Pete Walen.

I was explaining the traditional disappointment with the Agile Testing book by Crispin and Gregory.

"Yes, yes, it explains how Agile is different", I argued.  "Yes, it talks about unit and integration and system and acceptance, the four quadrants, and all the rest.  But where in the book does it actually talk about testing?"

Where does it help me figure out what tests to run to decide if the software is good enough to ship?

Answer: It doesn't.

What is Going On Here?

Xu Yi suggested that no single book on testing is ever going to be enough.  Real testers, skilled testers, will need to read from a variety of sources, one of which is the Agile-Testing book.

If that is true, then the Agile-Testing book needs to be about how testing under agile is different; not the stuff that is the same.  

In which case, you wouldn't want a lot of material about how to, say, come up with test ideas on a specific piece of software, or how, once you ran the first test, you might adjust your strategy in real time. That sort of problem applies to any kind of testing.  The argument is that these details need to be in the "Just Testing" book.

I want to acknowledge Xu's position -- the guy has a point.

At the same time, I think that something else is going on.

A Difference

Imagine that it is 2002, and you are testing the eWidget 2.1 application.  To do that, your boss drops a CD on your desk and points to some Word documents on the network.  "We need you to test the subtotal function.  It is the major new feature for the release."

What does "testing subtotals" mean?

You sit at your desk, read some documents, write some documents, and do some testing.  You probably file some bugs or produce a report.

That's it.

Now think of all the pressures on you.  You have to find the important bugs fast.  You have no communication tools, no connection to a developer or business person.  Now, those of us who were smart and able, back in the day, would find a way to have lunch with the product owner, or to drop by the programmers' cubicles with some excuse -- when we were actually on a fishing expedition.  The team might have a 'team status' meeting once a week with the project manager.

But by and large, we were by ourselves.

In order to cope with this we came up with a bunch of skills.  We had quick attacks, exploratory testing, test strategy models, domain tests, equivalence classes, boundaries, state transitions, decision tables -- we had a pile of tools.

We needed them.

Now consider the "Agile" Team, all living in the same (open!) room, breathing the same air.  Before the programmer starts coding, we get the key players together and talk about what could go wrong, and how bad that might be.  When the programmers are coding, we talk to them about what they are doing, and, after they finish, what risks they see.

Compared to the "sit in my cube" model of testing, we need the techniques much less -- the risks jump out at us!

I hold that Agile-Testing radically de-emphasizes the importance of traditional test techniques because "The happy path is pretty easy, and, hey man, if we just get everyone in a room and talk about it, the big risks become obvious."

Okay, they didn't actually say "hey man."

Still, I can get behind the idea of de-emphasizing the test techniques.

I'm just not so sure about the radically part.

A Problem

I call this problem of what to test and how to test it the "Great Game of Testing."  It is something I pursue, aggressively,  as both hobby and profession.   Yet I find it under-represented in the literature.

It is under-represented on the web.

It is under-represented at conferences.

As my friend James Bach puts it, if we hired people off the street as helicopter pilots, gave them no training, and expected them to fly aircraft, we would expect a lot of crashes.  Yet that simple naiveté, that expectation that testing "should be easy" and that "anyone can do it" - that focusing on the accidental elements of testing without talking about the essence -- we find it everywhere.

Over the past few years, I have seen less and less focus on where test ideas come from on some teams, and more and more releases with buggy software.

Agile Software Development provides us with some techniques to decrease the gap; to make discovering what to test easier and more helpful.

But should traditional techniques go away?  

I don't think so.

Some Good News

I have been trying to understand, for years, why the agile folks were so reluctant to talk about the “Great Game”, and why, when I brought it up, they yelled something about “Whole Team” to me.

“Whole Team” really can change the way we think about the work. With “whole team” we don’t expect a tester to figure it all out and throw down blame when the bug gets through.  Instead, the whole team discusses the expected behavior, business and technical risks.  

If a bug gets through, the whole team failed, not one person.  

This is downright healthy.

So there are plenty of good things here.

One place I think we have room to contribute is more focus on test techniques and building skills. We want to spread experience and knowledge to testers and, to a lesser extent, the whole team.

Between Blogs, Twitter, books, training, conferences ... I think we have a fair chance of doing just that.

Agile Software Development changed the world.

Let’s go do it again.





Tuesday November 20th 2012 2am

Lean Coffee

Software Agile Conference Presentations How To Process
Years ago, I was in charge of a quality assurance committee for the Software Engineering group for a modest IT organization -- we had perhaps 130 people in the group.  (Strictly speaking, I suppose we were a "Software Engineering Process Group", and if you looked around hard enough, you could find people who actually used that phrase.)

I remember in one meeting, a friend of mine, Paul, objected to what we were doing.  He said that without an agenda, given out in advance, he would not show up.

At the time I was a little hurt.  I mean, c'mon man, I have an idea of where we are driving to, but an Agenda isn't my style.

Ten years later, I understand where Paul was coming from.  Without some sort of Agenda, it becomes very hard for the group to actually get anything done.  Everyone has a different perspective, and there is a tendency to talk around issues, instead of driving to decisions.

But what if talking around issues is fine?

What if it is exactly what you want to do?

Friends often meet at a pub, or a dinner table, to get together.  I'm sure you are familiar with this - it is a part of friendship.  Sometimes we solve problems; sometimes we share horror stories over a beer.  When people do this more than once, it means they have decided the event is worth their time -- arguably the single most precious resource we have on this earth.

If you wanted to do that with tech people, what might that look like?

Well, you could just get your friends together and have a beer at the pub on Friday night, and that's fine -- but if these folks aren't your friends ... what then?  Everybody has a different expectation, everyone wants to talk about different things.

The Lean Coffee Format

LeanCoffee.org defines itself this way: 

"Lean Coffee is a structured, but agenda-less meeting.  Participants gather, build an agenda, and begin talking.  Conversations are directed and productive because the agenda for the meeting was democratically generated."

OK.  Now what the heck does that mean?

This week in Berlin, and two weeks ago in Sweden, I was at a conference, and Lisa Crispin ran a lean-coffee style meeting in the morning.  This is what it looked like.

Each morning, before the conference, people that actually wanted to attend - who were more motivated to meet than to sleep - came to a coffee shop to talk.

The meeting starts with the facilitator handing out markers and sticky notes; people write whatever they would like to talk about on the sticky-notes.

(Photo: Lisa Crispin, top-right.  Continuing clockwise, Dani Almog and Bart Knaack.)

Once we've written down all the topics, we do a dotting exercise, where every person dots, say, three cards - the ones that person is most interested in.

After that, we create three stacks - TODO, Work In Progress (WIP), and Done.  Then we sort the cards by the number of dots, moving the highest-voted card to the WIP stack, and the rest to TODO.

The facilitator uses a timepiece to give a specific amount of time to each card -- we finally settled on eight minutes.  After eight minutes, the group could vote - up, down, or neutral - to continue the discussion or talk about the next thing.
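For the process-minded, the prioritization step is simple enough to sketch in a few lines. Here is a toy Python illustration of the mechanics described above; the function and card names are my own invention, not from any real Lean Coffee tool:

```python
# Toy sketch of Lean Coffee prioritization: each card carries the dots
# it collected in the voting exercise, and the group discusses cards
# in order, most dots first.

def lean_coffee_order(dot_votes):
    """dot_votes maps a topic to its dot count; returns the discussion order."""
    return sorted(dot_votes, key=dot_votes.get, reverse=True)

# Example: three cards from a morning session (made-up dot counts)
cards = {"Crucial Conversations": 5,
         "True Collaboration": 3,
         "Healthy Eating for Serious Nerds": 7}

todo = lean_coffee_order(cards)
print(todo[0])  # prints "Healthy Eating for Serious Nerds" - it moves to WIP first
```

The timeboxing and thumbs-up/thumbs-down votes happen around this ordering; the sort just decides which card the group picks up next.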

What does this do?

It means what we talk about is emergent: based on what people want to talk about right now, decided democratically, and adjusted with real-time feedback.

In today's lean coffee, the highest voted card was "Rest, Exercise, and Healthy Eating for Serious Nerds."  We also voted on cards on "Crucial Conversations", "Struggling between independent test teams and embedded whole teams", and "True Collaboration" - (each of those could be a blog post if you are interested.)

Entire conferences are based on something like this, where the sessions have perhaps an hour for each card.

But, for now, if you are looking to improve something in your teams but don't know what - or to meet with software folks in your area but aren't quite sure about what -- why not let the whole team decide?

You might be surprised what they come up with.
