Known locally as a sort of ‘dynamic duo’, Pete and Kristin are, respectively, the Test Lead and Director of Software QA for ISD Corporation, a Zeeland, Michigan-based firm that develops and delivers electronic payment software solutions for retailers.

Kristin has been there since 1996, starting out as a programmer after graduating from Michigan Technological University with a degree in Computer Science. After cutting her teeth programming time-and-attendance systems and Telxon hand-held devices, she moved on to IBM 4690 POS Systems. Kristin migrated through a variety of roles, including Project Manager, Development Manager and Director of Store Systems, and was selected to create a QA and Testing Group at ISD.

As for Pete, he has spent an eclectic two and a half decades working for most major west Michigan employers, from Michigan State University to Grand Valley State University and most places in between, including Spartan Stores and a variety of other firms. He came to testing honestly, accepting a job at Michigan State to ‘just’ test an overnight warehouse refresh that took thirty-six hours on the first run… and never finished. Since then he’s been bitten by the testing bug and has recently started writing, speaking, and becoming more involved in the test community. (You probably saw him at CAST this year at the sign-in table, right?)

As a team, Kristin and Pete presented “No Box Mixes: Building a Test Group from Scratch” at the Software Test Professionals conference. Pete also presented “Test Process Improvement: Lessons Learned from the Trenches.” The two of them are always open to discussing the topic. Here is the abstract: Testers are under ever greater pressure to increase depth, coverage and quality of testing in shorter amounts of time. Test leads and managers will often turn to the process of testing itself to gain these improvements. Even when testing is “good,” the question becomes, “Is it good enough?” Can it be improved without disrupting what the test team is already doing well? What other factors may be impacting the testing effort? Assessing the current state of testing and identifying the reasons for improvement, while communicating with and gaining needed support from management and the test team, presents difficulties. We will look at what can be done to help the testers themselves, as well as what others may do to allow the testers to be more efficient. Using examples from the presenter’s experience, we will investigate what has and has not worked.

“When it comes to assessing teams, I think the challenge is to focus on the reality of what it is, not what we think it is or would like it to be.”

We asked our readers what they would like to know about this subject – Kristin Dukic and Pete Walen will now answer your questions.

Question: How do we assess our teams? Where do we start?

Pete: When it comes to assessing teams, I think the challenge is to focus on the reality of what it is, not what we think it is or would like it to be. All of us have been trained so that in interviews we frame weaknesses or areas that can be improved to really be strengths.

It seems to me that the hardest part is to strip that away. All of us can rattle off any number of things that we are good at. We can rattle off more things that our team is good at. The hard part is to be honest with ourselves first as individuals, then as a team, with what we don’t do as well as we would like, or as we could.

The approach I like to take is not to dictate the good points and bad points, but to get the team as a group to first recognize what they do well and then discuss and draw a consensus on what their weaknesses are and where they need improvement as a group. If you can get everyone to agree on at least some areas that need improvement, then you can move to the next step and look at ways to mitigate, minimize or correct those areas.

The hard part in many cases is keeping this effort independent of individual performance reviews. Everyone has strong points and weak points. Fostering an environment where both can be discussed candidly, without repercussions, will go a long way toward getting a real feel for what needs to be done.

Kristin: Generally, I agree with Pete’s comments. Sitting the group down for an open and frank discussion about their thoughts around their strengths and weaknesses as a team is a great way to obtain their support and discuss how to go about making improvements.

I do believe that there is another piece to assessing the team, and that is observation as a manager. Giving the team tasks and observing the outcome can provide insight to take into the meeting with the team. Once the team has discussed their strengths and weaknesses, if there are others you have observed that haven’t been raised, you can put them on the table and get the team’s feedback. The important thing is to make the team feel comfortable that this is an exercise aimed at team efficiency, not at gathering material for performance reviews.

Question: Participants will come hear your experiences. This ought to be the best way to learn. But, chances are, they will go back to work and learn all the same lessons the hard way (if they are lucky enough to learn them). Why is that? How can we help people learn from each other and not re-invent the wheel every time? Lisa Crispin, Denver CO

Kristin/Pete: That’s kind of the heart of the matter with all presentations and learning opportunities, isn’t it? You get ideas presented to you by people who are enthusiastic and knowledgeable on the subject. The blood is pumping and you are motivated and ready to take on the world. Then you get back to the reality of the office.

It seems that the answer is two-fold.

First, there are the bosses themselves. They spend money to send their staff to conferences and training with the hope that they will return with new ideas to spread throughout the team. These same bosses need to remember that what their staff learns may not be entirely lock-step with how the organization functions now. In order to try these new ideas and techniques, the environment must be cultivated so that the seeds carried by their staff are allowed to take root and grow. This takes time and effort that must be spent before gains can be realized and the money considered well-spent.

Second, we as presenters bear a heavy burden. We attend these conferences to share our successes in the hope that others can take our ideas back and apply them in their own environments. We remember hearing as children that “we learn from our mistakes.” Wouldn’t that apply today as well? We as humans aren’t perfect. We all make mistakes, so why not share them so others can learn from them too? After all, isn’t that how we were driven to our success? As presenters we should be open and honest, and make the message personal. We can share how we came to the realization of what it would take to be successful by using our mistakes as teachable moments.

Now, if our presentation does not convey our own passion for the topic, is it realistic to think that we are instilling the drive and passion people will need to go back to their offices and change the world? We have all heard far too many speakers with the enthusiasm of a gnat; how many of us learned anything from them, let alone were inspired?

When both of these things happen, there is a chance to avoid re-inventing the wheel each and every time.

Question: Your talk at the Software Test Professionals Conference addressed the pressures that testers are under to increase depth, coverage and quality of testing in shorter amounts of time. One of my favorite testing techniques that aims to address these challenges simultaneously is pairwise testing (and other related Design-of-Experiments-based test design approaches). Have you used these approaches to design your tests? If so, what have your experiences been? If not, why not? Justin Hunter, South Carolina

Pete: Great question, Justin. I’m kind of the dirty-hands tester in the pair, so I’m fielding this one. While our talk does not cover specific test techniques, pairwise/all-pairs is a good tool to have in the toolkit. I have used it in a couple of shops with good success, and a third time with results that were a little less than optimal, for reasons unrelated to the technique itself.

The first time was testing a large university student records system where the volume of data and sheer number of permutations were overwhelming. A colleague and I sat down and mapped out what we could reasonably do, designed our tests around that and went forward. It was some four or five years later that I even learned the technique had a name.
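To give a feel for the arithmetic, here is a minimal sketch of the all-pairs idea in Python. It is illustrative only, not from that project: the factors are invented, and a naive greedy loop stands in for dedicated tools such as Microsoft’s PICT or James Bach’s ALLPAIRS.

```python
from itertools import combinations, product

# Hypothetical factors; the real system's parameters are not described here.
factors = {
    "enrollment": ["full-time", "part-time", "non-degree"],
    "residency":  ["in-state", "out-of-state", "international"],
    "term":       ["fall", "spring", "summer"],
    "level":      ["undergrad", "graduate"],
}
names = list(factors)

def uncovered_pairs(cases):
    """Return every factor-value pair not yet covered by the chosen cases."""
    needed = {
        ((f1, v1), (f2, v2))
        for f1, f2 in combinations(names, 2)
        for v1 in factors[f1]
        for v2 in factors[f2]
    }
    for case in cases:
        for f1, f2 in combinations(names, 2):
            needed.discard(((f1, case[f1]), (f2, case[f2])))
    return needed

def greedy_pairwise():
    """Keep picking the full combination that covers the most uncovered pairs."""
    cases = []
    while (needed := uncovered_pairs(cases)):
        cases.append(max(
            (dict(zip(names, values)) for values in product(*factors.values())),
            key=lambda c: sum(
                ((f1, c[f1]), (f2, c[f2])) in needed
                for f1, f2 in combinations(names, 2)
            ),
        ))
    return cases

suite = greedy_pairwise()
exhaustive = 1
for values in factors.values():
    exhaustive *= len(values)
print(f"exhaustive: {exhaustive} cases; all-pairs: {len(suite)} cases")
# Prints something like: exhaustive: 54 cases; all-pairs: 9 cases
```

Real tools use smarter construction than this greedy loop, but the payoff is the same: every pair of values appears together in at least one test, at a small fraction of the exhaustive count.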

I experimented with it a bit at another shop, and while it worked fairly well, the Omega Tester syndrome kicked in and I found myself incredibly limited in what I could actually do. We had a lot of business experts doing testing on some of the larger projects. They caught on quickly, but each project meant training a new group from scratch. Given the time constraints, I eventually dropped the effort as a structured method, explained the basics in a “how to test” introductory session and let them have at the system. Not optimal, but it worked after a fashion.

I’ve done some on my own at the current shop and it has proven effective. One of the challenges with Design-of-Experiments-based testing is drawing other people on the team into the concept when they have never been exposed to these ideas before; I find that personally challenging to overcome. Lynn McKee had a blog post recently on how thinking “outside the box” depends on what box you are actually in.

Question: If someone is interested in getting into testing, where do you suggest they start? David Hoppe, Dorr, Michigan

Kristin/Pete: We would suggest first looking for a local testing group and attending a meeting. Get to know some of the participants and talk with them about what they do on a day-to-day basis and how they got started in the industry. If you have the means and there is a local or regional conference, sign up and go. There you will be able not only to attend sessions and learn about testing, but also to network with other attendees. There are many online resources that may be helpful; softwaretestprofessionals.com is just one example, and there are other online discussion forums around testing and QA as well.

Question: When you engage with a company to test, what makes a great environment? What are you looking for in terms of space, process, and work environment? Michael Swieton, Kentwood, Michigan

Pete: I don’t know if there is a single set of what makes for a great work environment.

For me, I find it most rewarding to work with smart people who are willing both to learn and to teach me. That counts for a lot. If people are willing to try things, experiment and support each other, then an awful lot of the other “stuff” becomes less important. I know some folks are concerned about casual dress and being comfortable in what they wear at work. No problem. I like not wearing a tie. I also don’t mind a tie (usually). Good equipment and workstations are important. Having a laptop instead of a conventional desktop as the “work machine” is a huge bonus, provided I can dock it and pair it with a large (or larger) monitor. That is becoming a bigger thing for me than it was a few years ago. I’ve worked in cubicle-ville and open offices. I don’t mind either one; both have points in their favor. Either way, I want a good chair, one with good lumbar support that I can make fit me.

Now for processes. I don’t have a favorite “best practice” process that I insist on. I want a process that makes sense for the work that needs to be done and the people who need to do it, one that allows them to be productive, happy about their work, and able to feel that at the end of the day they have done the best work they are capable of.

Kristin: I believe a great environment starts with a diverse team that works well together. Team members should be able, and allowed, to trade off ideas, roles and support with each other. One can be the lead on one project and a supporting tester on another. It is important for the team to be flexible. Team members should feel comfortable contributing ideas and suggestions for the team’s improvement, and should be open to, and participate in, constructive criticism.

It’s possible that the people who sign your paychecks insist that your work conform to an existing (ethical) process standard. In that case, you’ll want to follow the process. In many cases, the ‘right’ level of process is a strategy flexible enough to be modified or tweaked to fit the project.

Question: If I can follow up on that- When hiring a tester, how do you differentiate between the “I write and then execute a script over and over” testers and the best exploratory testers and test engineers? Michael Swieton, Kentwood, Michigan

Pete: That is a question that is almost begging for a one word answer: Carefully.

When I’m talking with people and trying to assess their skills, experience and views, I don’t look for cookie-cutter/rote answers. In fact, I rather dislike them. I’d rather hear considered answers, preferably drawn from their experience. If they have a different viewpoint than I do, then I look forward to learning about why they have that viewpoint and learning something.

When someone tells me that they run the same detailed scripts with the same data and the same key strokes, I ask them how often they find bugs. If they find a lot of bugs with every build, that says something interesting to me about the overall development processes. If they say they don’t find too many, or any, bugs, I ask if they thought about changing the values they enter in the script.

Kristin: When looking to add people to the team, you need to find a way to measure how open and flexible they are. Explicit scripted questions in an interview may not give you the information you are looking for. I find that I can gather that information over the course of the entire interview by asking open-ended questions that get the candidate talking about their specific project experiences. For example, ask how they went about testing a successful project, how they worked through a very difficult one, and what they might have done differently if the outcome they achieved was not acceptable. These questions give testers the opportunity to provide information about themselves, which can lead to an open conversation where you learn more about their actual experience and can dig deeper on the issues that will let you differentiate the candidates.

Question: How is regression analysis and related testing addressed in Agile? What is looked at, what is documented, how does it affect the backlog and sprint planning, and if the regression testing is found later to not have been effective how does that system problem get corrected? David Walker, Kalamazoo, Michigan

As Kristin has no real-world experience working with capital-A, ‘branded’ Agile Software Development™, Pete will field this one.

Pete: Regression testing is an attempt to solve a problem, right? It deals with a certain kind of risk: something worked yesterday and it doesn’t work today. I think it’s fair to say that the Agile “toolbox” offers you a number of ways to mitigate that risk. For example: How are the sprints structured? You could have a regression run at the end of every sprint, or a sort of regression-testing ‘cadence’ before deploying code. Or you might have a staging or beta environment before a full deployment. Are the programmers in the Agile team (all of whom are part of “Development”) focused on just delivering code, or are they involved in producing working code? The answers to these questions are part of the answer to your question.

Now, I’d like to turn to regression testing itself. I’m curious about the aspect you raise around regression testing not being effective. What is it that makes regression testing effective or ineffective? If testing does not find a lot of bugs and few bugs are found in production, was this successful or unsuccessful? If testing (of any flavor) finds bugs, and more are found in production, was testing successful? After all, there were bugs found.

When it comes to Agile, I hear and read many proponents advocate how successful Agile teams have good communication, with all the team members focused on producing a top-quality product and everyone taking responsibility for the quality of the product. I agree. Teams on successful projects in an Agile environment will often have those attributes. That does not mean those qualities are exclusive to successful Agile teams. I have seen teams on very successful “traditional” or “waterfall” projects with the same attributes. Then again, I have seen teams with the same attributes whose projects failed miserably. There is a difference between them, but exactly what that difference is, I’m not often able to ascertain.

I think the key to any effort is the sense of craftsmanship that each team member brings to it. When there is a high sense of craftsmanship, the odds of people passing problems off are reduced. When that is missing, you will see people looking to regression testing to clean up problems and catch the bugs the programmers hand over. This often comes from a shared attitude on the team that “quality” can be achieved by having lots of folks pushing to find as many bugs as possible. Rather than a focused effort, you see a shotgun effect.

Regression testing will often be unsuccessful, by any measure and in any environment, if you view it as based on a procedure, rather than based on an information mission. The bugs should not be there to begin with and the earlier the testers can get involved and get everyone in the group thinking about it, the better off the group and the project will be.

Question: How can professional testers more effectively guide their teams about the economic impact of specific problems with the software? Eric Willeke, Chicago

Kristin/Pete: The question around economic impact is partly based in the nature of the software being produced. A testing group focused on software to be used within the company will have a different driving force than a software publisher or SaaS shop.

For shops whose software product has no exposure outside the company itself, this can be challenging. Lost or decreased productivity within a group you may rarely work with can be hard to measure, let alone use to influence people’s behavior. If there is no obvious measurement that can demonstrate the scope of the problem, motivating the teams to produce higher-quality code is problematic. At one time, if internal software customers had problems or were dissatisfied with the products they were given, there was not much they could do. That has changed in the last several years, and it is now possible for many “internal customers” to contract work to outside service organizations. That may seem drastic or even impossible to some companies, yet it has happened to others.

For shops that produce software for use outside their organizations, serious or repeated problems carry the risk of customer dissatisfaction and potentially losing those customers because of the defects. If those customers deal with financial transactions, as one example, the risks may extend beyond simply losing a customer to also include certain liabilities for fees and penalties.

In either case, you need to make sure that everyone understands the potential risks and impact of failure for the project, not just the testing team. From there, you have to find the bugs – and lobby for their correction.

Question: And to follow up – how can professional testers inform and advise the team regarding the economic impact of the software’s current level of quality? I mean, how do you have the conversation? What do you say? Eric Willeke, Chicago

Kristin/Pete: Depending on which team is being addressed, the management team, the project team or the testing team, you may need to send completely different messages to make sure the information translates into action. At the same time, each may have different concerns and conceptions around the current quality of the product.

You can start by clarifying the role of the test group. Are they the Quality Police, the black knight in armor with drawn sword proclaiming that “None shall pass”? Or are they there to inform the other project participants and management of what they have found, the light cavalry scouting the landscape ahead of and around the General? Without a consensus on the precise role of the test team, it is nearly impossible for the test team to communicate in a way that will be understood by all parties.

If the testers are finding defects but failing to persuade others of their significance, they are only doing half the job. That means explaining the findings not just to the developers, but also to project managers and senior executives.

In our experience, communicating an overall analysis of the quality of the product involves meeting with project leadership and representatives from each department to tell the story of what has been tested, what areas are working as expected and where there are variances from the expected behavior. Then you need to discuss the variations from the norm, so that all stakeholders can understand the actual quality and reliability of the product. This can occur as often as you need during the project’s development life.

Question: Can you relate levels of test coverage metrics (e.g., statement, decision, decision outcomes, multiple condition) to kinds of faults likely to be revealed at those levels? Paul Jorgensen, Rockford, Michigan

Kristin/Pete: That is an interesting question. When Pete was a programmer, these were ideas that were looked at and considered for unit testing and for system-level integration testing. As it is, in our shop the questions around coverage are left in the hands of the code development groups, and the testing group focuses on black-box testing: user experience and integrations to non-core functions. Coverage metrics are not something we deal with on a regular, or even irregular, basis.

Pete: Part of the concern I have, and had even when working as a developer and looking at various metrics around coverage, is that sometimes people will focus on getting a desired percentage of testing coverage without combining it with other factors. It seems that some folks confuse “test coverage” for thorough testing. Cem Kaner, Brian Marick, Doug Hoffman and others have some interesting writing on this that pretty much solidified my thinking around coverage metrics in general.
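Still, a tiny, contrived illustration may help make the mapping Paul asks about concrete. The function and its seeded bug below are invented for the example; they are not from any project of ours.

```python
def apply_discount(price, is_member, has_coupon):
    # Intended rule: members OR coupon holders get 10% off.
    discount = 0.0
    if is_member:              # seeded fault: should be `is_member or has_coupon`
        discount = 0.10
    return price * (1 - discount)

# Statement coverage: this single case executes every statement, yet says
# nothing about the untaken branch or the dropped condition.
print(apply_discount(100, True, False))    # 90.0  -> looks fine

# Decision (branch) coverage: adds the False outcome of the decision. The
# buggy and intended versions agree on this input, so the fault still hides.
print(apply_discount(100, False, False))   # 100.0 -> still looks fine

# Multiple-condition coverage: forces the is_member/has_coupon combinations,
# including the one where only the dropped condition matters.
print(apply_discount(100, False, True))    # 100.0, but intended 90.0 -> fault revealed
```

Roughly: gaps in statement coverage hide faults in code that never ran at all; gaps in decision coverage hide faults on untaken branch outcomes; and only condition-level criteria tend to expose wrong or missing boolean sub-expressions like the one above.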

I generally look at coverage more broadly so that I define test coverage as: “The proportion of the system’s known attributes and functions that have been tested.” In talking about this with Michael Bolton in early January (prompted by a twitter conversation I was eavesdropping on) he added a thought that seems pretty good and completes what I was trying to get at: “…with respect to a particular set of models.”

I agree with that and have added it to my thoughts around coverage. The models/frames in testing will impact the paths you take. For me now, looking at “coverage” is less about the structure and more about intended function.


About the Author

Pete Walen has been in software development for over 25 years. After working many years as a programmer, he moved to software testing and QA. Following a brief foray into Project Management and Business Analysis, he returned to software testing. He has worked in the fields of Insurance and Finance, Manufacturing, Higher Education/Universities, Retail, Distribution and Point of Sale Systems. Pete is an active member of several testing associations and an active blogger on software testing.