Toiling as a tech drone in one of the countless North American cubicle farms, you might be tempted to think that life must be better somewhere else. Overzealous vendors, endlessly incompatible code, unreasonable management expectations… sometimes it’s enough to make you ponder chucking it all for some moderately exotic locale.
How about Egypt? After all, you’ve always wanted to see the Sphinx and pyramids before you die. Heck, if you like it and want to extend your stay, you can even pick up some nuts-and-bolts programming work doing something like, say, test automation. Maybe the tech life in a slightly less mature market is—dare you think it—a little more fun?
If you’re enjoying the fantasy, stop reading and start Googling those Middle East travel Web sites. The word from the mouth of the Nile River is that headaches associated with automated testing are the same there as anywhere.
Meet Cairo-based Sameh Zeid, a senior consultant with ITSoft, which provides software and services to banks throughout the Gulf region. The company, he says, maintains a monolithic code base underpinning its entire product line. And it’s fair to assume from his comments that shying away from customization makes life easier in terms of configuration management and revision control, but harder when it comes to quickly addressing new opportunities or fixing old bugs.
“We needed to shorten the release cycle time by reducing the schedule and effort of system testing,” Zeid says, explaining ITSoft’s current test-automation project, which involves IBM’s Rational Robot. “The testing was incomplete and causing havoc during user acceptance.”
The regression testing that was part of every build seemed ripe for automation, like groves of date palms in the sweltering Middle East summer and early fall, when the trees are loaded with sweet fruit and ready for harvest. And capturing and consistently reusing the knowledge of analysts and testers seemed likely, Zeid says, to be “one of the biggest advantages” of the project.
But unfortunately, things start to get dicey here, in ways depressingly familiar to those involved in testing everywhere.
Scripts Don’t Write Themselves
First is the issue of assigning a veteran product analyst and tester to the automation team on a quasi-permanent basis, a shift in resources and headcount that resource-strapped managers everywhere, including those at ITSoft, can find prohibitively expensive. After all, automation is supposed to be labor-saving, especially for the highest-skilled workers, right?
Perhaps, but absent this support, Zeid says his team is left with the feeling that “we cannot trust the existing documented test cases. Unless we are able to automate what the veteran testers and analysts would do, we will end up with automation that is unreliable.”
Next is a related problem that’s entirely cultural. We’re not talking East versus West here—rather, coders and software engineers versus the testing and QA crowd. In Cairo as in California, developers too often consider testing, even the moderately sophisticated task of generating and maintaining test automation scripts, to be second-class work.
Then there are the potentially thorny and expensive technical issues, familiar to anyone who has tried to look beyond the unwaveringly optimistic marketing promises proffered by tool vendors.
ITSoft took advantage of what Zeid says was IBM’s special subsidized pricing for Egyptian companies to purchase what’s arguably one of the better-known automation tools on the market. Rational Robot is compatible with Centura 1.5, the semi-obscure language in which ITSoft’s application is written. But Zeid worries that very few IBM consultants have experience working with Centura, a concern since, sooner or later, he’ll likely need their help.
Finally comes the time-honored uneasiness about test automation’s return on investment. The return is calculated over some number of future releases using the tool, releases that may or may not happen, Zeid says. And he remains squeamish about the maintenance effort associated with test scripts, especially given the inevitable additions and changes from one release to the next.
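To see why that arithmetic makes people nervous, here is a minimal back-of-the-envelope sketch of the kind of ROI calculation Zeid describes. The cost figures and the helper function are hypothetical illustrations, not ITSoft’s actual numbers.

# Hypothetical sketch of test-automation ROI; every figure below is an
# illustrative assumption, not ITSoft's actual cost data.
def automation_roi(num_releases,
                   manual_test_cost_per_release=40_000,
                   automated_run_cost_per_release=5_000,
                   script_maintenance_per_release=8_000,
                   tool_and_setup_cost=60_000):
    """Return ROI as a fraction of the up-front tooling investment."""
    # Savings accrue once per release; maintaining the script portfolio
    # (the part Zeid is squeamish about) eats into those savings each cycle.
    savings = manual_test_cost_per_release - automated_run_cost_per_release
    net_benefit = num_releases * (savings - script_maintenance_per_release)
    return (net_benefit - tool_and_setup_cost) / tool_and_setup_cost

for n in (1, 2, 3, 5, 8):
    print(f"{n} releases: ROI = {automation_roi(n):+.0%}")

With these made-up figures, the investment turns positive only once a third release actually ships, which is precisely the “may or may not happen” that worries Zeid.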
Zeid’s laundry list of test automation concerns would sound familiar to the likes of Paul Grogan, a developer with Los Angeles–based CKE Restaurants, parent company of fast-food chains such as Carl’s Jr. and Hardee’s. Grogan, whose background includes Assembler, C, C++ and Java, supports corporate applications used by 120 company users.
Making money in fast food means more than foisting ever-larger servings on unsuspecting customers. These companies, CKE included, often find themselves with vast, disjointed real estate portfolios, management of which means using software to handle myriad leasing and profit-sharing agreements.
Grogan recently employed test automation in tweaking a COTS real estate–management application used by CKE. Part of his success was finding the right tool—his firm chose a product from Worksoft—that could handle testing even when the UI was a constantly moving target.
“Users are savvier and less likely just to accept what developers hack together as far as a UI,” said Grogan. “Since today, less is happening on the middleware and more on the screen, there are more opportunities for traditional auto-test tools to break.”
Grogan’s approach to the project reveals attempts to solve several of the problems outlined by Zeid. To ensure quality, Grogan insisted that subject-matter experts, not QA staff, generate the test cases recorded by the automated tool.
Additionally, Grogan realized that his automated test system would fulfill its promise to make it cheaper to fix bugs and faster to incorporate feedback only if it was fed properly by the development team. So the testing team received a series of bite-sized, modular releases—27 in all during the three-month project—rather than being forced to digest one super-sized chunk of code when the first round of programming was done.
Better Tech and People Skills
Bill Hayduk, founder and president of New York City–based RTTS, a testing-focused professional services organization, has seen firsthand the challenges that crop up when mostly nontechnical QA and test organizations deal with increasingly sophisticated testing regimes. Like Zeid, Hayduk insists that any automated testing project, which inevitably includes managing a growing and changing portfolio of test scripts, must be treated like a full-fledged development project.
Not that technical skills alone are enough. Hayduk says that all of his new programmers take 325 hours of training before being loosed on customer projects, which often involve test automation.
“The 325 hours are needed to teach graduates with programming skills how to best leverage these test tools, along with test methodology, project management concepts, software architecture and development of soft skills,” Hayduk adds.
One of these required soft skills, of course, is managing manager expectations. Reading that sunny marketing material, bosses may eagerly anticipate trimming the testing bill and shipping product faster, only to be disappointed with the final results.
Zeid says cryptically that this cropped up during his project and was addressed by way of ominous-sounding “awareness sessions,” which bring to mind teeth-grinding hours of meetings with bean counters who don’t know the difference between a test script and a movie script.
So if you want to head to Cairo to see one of the Seven Wonders of the Ancient World, by all means, start planning. Bring your camera, but be sure to leave any expectations of a frustration-free coding paradise at home.
About the Author
Geoff Koch