Have you ever heard of the New York Times? How about iVillage, a division of NBC Universal? You have, eh? We suspected you might.

It turns out these large companies have websites that serve readers from around the world with constantly changing content, pulling data from hither and yon and back again. And they need to be up, like, you know: All the time. Specializing in this kind of work means dealing with incredible amounts of engineering intensity.

Allow me to introduce Perze Ababa, a test manager for iVillage with prior stints at, you guessed it, The New York Times as a test manager and, before that, a test do-er.

This month, we thought we’d ask Perze some tough questions about knowing the impossible under time pressure, with conflicting goals, competing projects, little budget, and all the other ugly stuff we have in the real world thrown in for fun.

Perze Ababa will now answer your questions.

Question: How large is the testing organization that you manage – and how large are the greater test and development organizations within the full company? Beyond that – what do you do to keep testers engaged and motivated? – Meredith Ryan, South West Michigan

Perze: We have three testers onshore, including myself, and four offshore. There are multiple divisions within NBCUniversal, and of the two that I know of, each is about three times larger than iVillage. With regards to keeping testers engaged and motivated, I believe that really started during the hiring process. I had the privilege of building a team of testers who genuinely love to test and continue to learn how to improve their testing skills and outlook. From a management perspective, I always try to keep the lines of communication open. That’s a typical manager-y response, but as a dad of two very young kids (two and six), I’ve learned that the only way we can engage each other effectively is if we give each other our full and undivided attention.

Question: What are some techniques to turn feature requests into both automated tests and living documentation for the whole organization, not just development? – Charles Hooks, South Bend, Indiana, USA

Perze: This is a difficult question to tackle because we live in that nether region of wanting to adopt agile but not quite being able to yet because of challenges in the current process. The short answer is no, we don’t have any techniques to accomplish this, but from a testing perspective we are just beginning to adopt Session-Based Test Management, which, if done effectively, will provide documentation that I think goes beyond traditional test planning. With regards to automation, we are just getting into the groove of writing automated scripts as we go through test sessions. It’s a fairly green process, but we do have a mentor in the person of Adam Goucher, who has been assisting and coaching our team to get to where we need to be when it comes to automation.
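
To make that idea of writing automated scripts during test sessions a bit more concrete, here is a minimal sketch of how a finding from an exploratory session might be distilled into a repeatable browser check. It assumes a Python and Selenium WebDriver stack with Firefox and geckodriver available; the URL and the specific assertions are purely illustrative, not iVillage’s actual tooling or code.

# A small, self-contained check distilled from notes taken during a test session.
# Assumes Python 3 with the "selenium" package installed and a local Firefox
# plus geckodriver on the PATH; the target URL and checks are illustrative only.
from selenium import webdriver
from selenium.webdriver.common.by import By

def session_derived_check(base_url="https://example.com"):
    """Re-run, as a quick automated check, a behavior first noticed in a session."""
    driver = webdriver.Firefox()
    try:
        driver.get(base_url)
        # During the session, an empty page title turned out to signal a bad deploy.
        assert driver.title, "Page loaded with an empty title"
        # A representative element the session charter cared about.
        assert driver.find_elements(By.TAG_NAME, "a"), "Expected at least one link"
    finally:
        driver.quit()

if __name__ == "__main__":
    session_derived_check()
    print("Session-derived check passed")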

Question: Do you view growing/creating leaders to be part of your management role? If yes, do you have suggestions or advice for other test managers? – Ajay Balamurugadas, Bengaluru, India

Perze: Yes, I personally think that one of the best legacies a manager can have is to have mentored a better manager out of his or her current team, whether through direct coaching or mentoring by example. One of the secondary things that I looked for when I was building this team was leadership. I encourage everyone on my team to mentor each other, and I think the ability to teach, and to understand how to teach another person, is a very good first step toward leadership. Management and testing require different skill sets, but whether you like it or not, the success you have in both careers is defined by the context you are in and how you react to black swans.

Question: Just how globally distributed is your organization? What does it look like? What are your top keys for success in making a distributed team successful? – Lisa Crispin, Colorado, United States

Perze: Our technology team spans multiple time zones here in the US, and we have team members from the UK, Spain, Ukraine, Canada, and India, to name a few. Despite being an Internet company that’s been around for over 17 years, we are a very young technology team where roughly 95% of team members have been with the company for less than 18 months. Constant communication and feedback are definitely among the keys that keep the team together. The other key that I think we have is a strong, cohesive leadership team that is willing to take a step back and plan accordingly.

Question: Managing testing resources (staff) effectively in a ‘standard’ project environment is difficult enough at the best of times. I’m wondering if you have any tips for how this can be managed when working in an environment that requires 24/7/365 critical testing attention? – David Greenlees, Adelaide, Australia

Perze: The beauty of my context is that since we are a very close and cohesive team, everybody tests. It’s true that we have members of the team who are officially designated as testers, but quality comes when everyone else pulls their weight. Indeed, managing resources is a pain, especially when there are more business requests to attend to than there are people available. We work closely with our Project Management team; they are the ones who prioritize our projects, and they should be given due credit. The other critical piece is our Systems Administration team, led by Christian Gils, who can identify in a heartbeat what the trouble spots are from an operations perspective. I must admit that this is a weak spot in our testing expertise, and I have made a few changes, starting very soon, so that my team will gain the exposure and knowledge that our Ops team has. With that said, one main tip is to never stop learning about what you are testing against. Testing doesn’t stop at a product’s functionality. Learn as much as you can about your context’s oracles and heuristics, so that if you ever encounter a behavior at any point in time, you can ask, “Is there a problem here?” and use that as a launch point for further exploration or for filing an issue against the anomaly.

Question: If I can follow-up on that – do you expect your staff to have any different skills, so they can cover for each other, or do you have them specialize in specific areas? – David Greenlees, Adelaide, Australia

Perze: My answer would be yes on both sides of the question. I do have specialists on my team who have really good technical acumen when it comes to our front-end or back-end architecture, and I encourage them to continuously hone that expertise. I also encourage them to cross-pollinate and be each other’s mentors on projects that have fairly low technical depth and impact; these are mostly projects that have already undergone multiple iterations. I also encourage testing hand-offs between onshore and offshore team members so they can effectively pick up where the other tester left off. This method has so far proven very helpful in establishing very healthy team dynamics, and everyone generally knows what everyone else on the team is working on despite being located in multiple parts of the world.

Question: When dealing with the creation of test scripts, there’s no doubt that the best testers will be the ones who have a thorough understanding of the underlying code base. With that in mind, how much of the testing should be handled by the test engineers and how much should be handled by the developers themselves? – Dan Chan, Behance, New York, USA

Perze: I always encourage the testers on my team to take a look at the code commits for any given bug fix or project, BUT I also know that we should not limit ourselves to a “thorough understanding” of the underlying codebase, because that codebase is the developer’s interpretation of the solution to the problem at hand. Unless the developer is also the product owner, we can only use it as a springboard from which to start testing. I am a proponent of, but not strictly limited to, Alister Scott’s software testing pyramid (http://watirmelon.com/2011/06/10/yet-another-software-testing-pyramid/), where the developers are focused on technology-facing checks (automated unit and integration checks, component and API checks) while the testers are focused on business-facing tests and checks (browser-based automation and manual exploratory tests). Hopefully that answers the question, but I’d like to stress the difference between checking and testing. Checking is verifying that something we already know about the product in question is still working the way it was. Testing, on the other hand, is an investigative process that we use to discover new behaviors and the relationship between the product under test and someone who matters.
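
As a rough illustration of the layering Perze describes, and not of his team’s actual code, the sketch below shows a technology-facing unit check of the kind a developer would own; the helper function it exercises is hypothetical. Business-facing checks, such as the browser-level sketch shown earlier, and manual exploratory testing would sit above it in the pyramid.

# A hypothetical, developer-owned, technology-facing check: fast, automated,
# and confirming something we already know about the code. Only the layering
# idea comes from the interview; the function under test is made up.
import unittest

def slugify(title):
    """Hypothetical production helper: turn an article title into a URL slug."""
    return "-".join(title.lower().split())

class TechnologyFacingChecks(unittest.TestCase):
    def test_slugify_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slugify("Healthy Dinner Ideas"), "healthy-dinner-ideas")

if __name__ == "__main__":
    unittest.main()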

Question: I hear a lot about ROI analysis for testing as if it means something. Is it possible to do ROI analysis for good/bad requirements, testing approaches, and agile development? If it is, how do you do it? What does a good ROI analysis look like? – Keith Hinkle, Grand Rapids, Michigan, USA

Perze: ROI analysis is a popular buzzword because of how seemingly easy it is to calculate. That might be true when you are investing hard currency and only looking at the cost of the investment in relation to its gain; software development, testing, and good management practices are so much more than that. I’m actually very happy that you brought up the notion of good and bad requirements because, to be honest with you, relying on requirements too much has its consequences. I’d rather ask the question from a team perspective. What makes a requirement bad or good? When do you make that determination of how bad or good a requirement is? If a requirement never changes, does that mean it’s good? Conversely, if the requirement always changes, does that mean the understanding of what the business or the product team needs is not as clear as it should be? All of the above questions also apply whether you are using a different approach or development process. In all honesty, from a testing perspective we should be asking the questions that matter most on software development teams, i.e., what information can I bring forward that gives more value to the team or the company as a whole? This is the reason I firmly believe in the tenets of the context-driven school of software testing. I don’t rely on misleading metrics that can easily be manipulated and don’t really provide information on the quality of the product under test. I don’t rely on best practices, because there are none; I think there are good practices that will work in your particular context while considering the team or company’s culture as a whole. As testers, or managers of test teams, we need to be wary of metrics such as ROI analysis, bug counts, or number of bugs fixed, since these types of measurements have no bearing on the real quality of your product.
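
For reference, the textbook calculation Perze is alluding to really is only a few lines; the figures below are invented, which is rather the point he is making.

# The standard ROI formula with made-up numbers, to show why it looks so easy.
cost_of_testing_effort = 50_000.0   # dollars invested (hypothetical)
gain_attributed_to_it = 65_000.0    # dollars of benefit someone claims it produced (hypothetical)
roi = (gain_attributed_to_it - cost_of_testing_effort) / cost_of_testing_effort
print(f"ROI = {roi:.0%}")  # prints "ROI = 30%" -- tidy, but it says nothing about product quality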

Question: What do you think works better – a larger programming team with a separate test team, or a smaller integrated team with a single tester or a small number of testers? And why? – Stephen Janaway, Fleet, Hampshire, United Kingdom

Perze: The answer really depends on the context of your organization. A separate test team has its benefits: you are forced to document properly, and you need a very good review process in place to minimize misunderstanding of the product, but this setup is not necessarily built for speed. If you work for a company that builds nuclear submarines, or if a human life depends on your software, a separate or independent test team might be more appropriate. In my experience, smaller integrated teams are built for speed. They are not necessarily built for precision, but they can definitely churn out products faster than you can say “I wish to wish the wish you wish to wish, but if you wish the wish the witch wishes, I won’t wish the wish you wish to wish.” Now say it three times. In all honesty, a typical testing team should be composed of quality advocates, bug hunters, and experts who can serve and provide value to your context. Being separate or integrated is a secondary and sometimes unnecessary question.

Question: What efficiency gains can be made by having teams in multiple time zones and available round-the-clock? Do the advantages of 24 hour testing out-weigh the disadvantages? – Stephen Janaway, Fleet, Hampshire, United Kingdom

Perze: I think only the business folks can answer this question correctly. I personally believe that the main attribute that defines the effectiveness of any given team is communication, and how well you enforce what was agreed on after communicating. I’ve been part of multiple teams where the distributed model worked better than having those people in the same building, because of certain personality issues. Having people in the same time zone is generally beneficial because it is easier to communicate, but that’s not always true, is it? People are still people and will continue to be, no matter what time zone you are in.


About the Author

Perze Ababa is a firm believer in the Context-Driven School of testing, with over 10 years of software testing experience. He primarily employs manual exploratory testing techniques with the help of automation tools.

He has test team management experience with distributed teams, including building teams from the ground up, mentoring, leading test teams, and organizational testing advocacy.