So why does automation seldom deliver the better, faster, cheaper and more effective testing that we expect?

A lot has been said and published about Test Automation, but we thought that if we showed you how not to automate, you might have more success: you can understand the issues, then plan and make provision to avoid and overcome them.

Why Does Test Automation Fail?

Before we can delve into why automation fails, we first need to define exactly what we mean by ‘fail’. We automate to save time, save costs, lessen the load on our resources and become more efficient and consistent in identifying defects. When this value is not delivered to the organization, automation has failed.

So why does it fail so regularly?

  • Unrealistic expectations – especially from users, managers and business. Many of these people have heard of automation and expect it to be the silver bullet that will save all testing efforts. They have the ‘push button concept’ in mind: just push the button before we leave the office tonight, testing will run overnight, and tomorrow when we get back all the results will be available for review. Right? Wrong!
  • Poor Planning – some teams don’t plan at all; they jump in at the deep end and start automating with no regard for how they should be doing it. Others spend so much time on planning that there is very little time left to do the actual automation.
  • Poor Design – I’ve been on more than one project where just a small change to the GUI rendered the whole automated pack invalid, making a test pack rewrite necessary. Real world applications change continually and automation needs to be done with scalability and maintainability in mind (a locator-isolation sketch follows this list).

Another cause of poor design is the lack of competent automation resources. I can work through the vendor tutorial or do a 3-day automation course and call myself an automator; that doesn’t mean that I know what I’m doing. Many organizations are not willing to pay for competent resources, or to pay for the training necessary to create competence.

  • Upfront Effort Underestimated – sales reps usually promise the world. Business managers just hear ‘record and playback’. Real life is not that easy: in many cases the creation of effective scripts needs a lot more coding effort than pure ‘record and playback’. They also tend to forget about the time, cost and resources needed to get the tool set up and configured to work with the company’s architecture.
  • Return On Investment – you need to factor in the cost of the tool, the cost of setting up the tool, the resources necessary to do that and the cost of training resources in the use of the tool. As a very simple example, imagine producing an automation script. The script takes 30 minutes to write; executing it saves you 1 minute off the time to perform the test manually. You need to be sure that you will be able to use this script at least 30 times just to break even on the effort expended in producing it – and that is ignoring the cost of the tool, the setup and the resources (a worked break-even sketch follows this list).
  • Resistance – business users/managers/subject matter experts might not always give their full co-operation to automation efforts due to a history of failed attempts in the organization. If they don’t trust in the process, their lack of enthusiasm can cause major delays.
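
To illustrate the design point above, here is a minimal sketch of isolating GUI locators behind a single page class, so that a renamed field means a one-line change rather than a pack rewrite. It assumes Python with Selenium WebDriver, and the login page, element IDs and credentials are purely hypothetical.

    # A minimal page-object sketch; Selenium WebDriver is assumed to be
    # installed, and the page, element IDs and credentials are hypothetical.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        # Every GUI locator lives here; when the GUI changes, this is the
        # only place the automation pack needs to be edited.
        USERNAME = (By.ID, "username")
        PASSWORD = (By.ID, "password")
        SUBMIT = (By.ID, "login-button")

        def __init__(self, driver):
            self.driver = driver

        def log_in(self, user, password):
            self.driver.find_element(*self.USERNAME).send_keys(user)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SUBMIT).click()

    # Test scripts talk to the page object, never to raw locators.
    driver = webdriver.Chrome()
    LoginPage(driver).log_in("test.user", "secret")

The same principle applies whatever tool you use: keep the knowledge of the GUI in one place and keep the test logic separate from it.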
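
And to make the return-on-investment arithmetic concrete, here is a worked break-even sketch. All figures are illustrative assumptions, not measurements from a real project.

    # Illustrative break-even calculation for a single automated script.
    script_build_minutes = 30      # effort to write the script
    manual_minutes_per_run = 5     # time to perform the test manually
    automated_minutes_per_run = 4  # execution plus checking the result

    saving_per_run = manual_minutes_per_run - automated_minutes_per_run  # 1 minute

    # Runs needed before the scripting effort alone pays for itself.
    break_even_runs = script_build_minutes / saving_per_run
    print(f"Break-even after {break_even_runs:.0f} runs")        # 30 runs

    # Now add tool licence, setup and training, expressed as minutes of effort.
    overhead_minutes = 2400  # assumed: roughly one person-week of setup and training
    break_even_total = (script_build_minutes + overhead_minutes) / saving_per_run
    print(f"Including overheads: {break_even_total:.0f} runs")   # 2430 runs

Plug in your own numbers before you commit; the point is that the script alone needs 30 executions just to break even, and the overheads push that figure far higher.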

How Not To Do Test Automation

Let us take a look at the most common pitfalls that find their way into almost every test automation effort. Follow the list below and you can be sure that your automation effort will fail.

  • Do Not Plan – this is not limited to a complete lack of planning; it also covers planning without understanding the objective of automation in your project. Jump right in and start scripting without assessing anything (such as the technical environment and how it will impact your solution). When short of information or resources, substitute your own assumed logical understanding and don’t bother verifying it.

If you do plan, do so without understanding your organization’s or client’s immediate and future development plans. After all, you are only there for the specified duration and by the next rollout you will probably have moved on. Better yet, try out fantastic new ideas that are not in line with what test automation is about: build a huge application and show off your technical skills. The end result, of course, will be a solution that only you can understand, maintain or even use – remember, jobs are scarce!

  • Do Not Involve Team Members – what you are doing is different and special and none of these guys know what your job is about and you love things that way.
  • Assume – assume you have a ‘one size fits all’ solution. Have an attitude of “I’ve seen this before and know exactly what the client needs”.
  • Solution – have a solution before understanding what the client’s challenges are. Instead of listening to what the client needs, try to sell them your solution.
  • Focus On The Tool – if your client/organization has a tool, focus on the tool and forget about the project’s requirements. Look at the cool things the tool can do (rather than the requirements the tool needs to meet in order to successfully achieve the project goals).
  • Quick Fix Solution – focus on an immediate quick-fix solution and base all your efforts on it. Assume there will be no further technological changes, either now or in future rollouts and environment upgrades. Don’t bother about the sustainability of your solution; you can just build another one from scratch.

Advice to ignore: quick-fix solutions are often where automation becomes a project liability. Even after implementation, more money and time have to be poured into automation to cope with slight environmental changes, such as Office suite or OS upgrades, whilst regression packs wait to be executed and go-live dates are shifted.

  • Do Not Treat It Like a System Development Exercise – have on-the-fly requirements that are only in line with your assumed solution. Don’t bother planning or designing your solution: have no requirements or any other supporting documentation. Have no tests for your solution; just script away and fix code defects as and when they occur, without bothering about the details – if it works, it works, and how you got to your solution doesn’t matter. Never have proof that your own code is solid enough to test someone else’s code. Forget that automation is also a development exercise!
  • Learning New Technologies – do not bother learning new technologies and how they might impact your career.
  • No Reviews – do not review or assess your work. Now that you have automated your tests, let them be, and when there are failures or defects just work around them. Who cares about improvements? When it works, it works: “if it ain’t broke, don’t fix it!”
  • No Communication – do not communicate with your team; after all, test automation is not the same as manual testing. Why should you be aware of how they are designing their manual test packs anyway?
  • Limit Efforts To The Tool – limit your automation efforts to whatever functionality is built into the tool. Two examples of how you can limit your effort (the correct mantra is “it can’t be done”): most tools have built-in data storage components, such as data tables, which often have a fixed capacity limit – do not ponder how you could deal with a 10 GB data file that exceeds that limit (a counter-example sketch follows this list). Or: the tool is fully capable of testing the application at the service layer, but it does not support the GUI technology used to build the application, and the GUI is how you normally automate…
  • Ignore Project History – you are in the present and that is all that matters. Re-learn lessons that have already been learned (and probably documented) in the past, costing the project more time and money.
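
As a counter-example to the “it can’t be done” mantra in the list above, here is a minimal sketch that streams a larger-than-limit data file from disk one row at a time, so nothing has to fit into the tool’s built-in data table. The file name, column names and the capture_trade stub are hypothetical; in practice each row would be handed to whatever keyword or driver routine your tool exposes.

    # Stream a large test-data file row by row instead of loading it into a
    # fixed-capacity data table. File name and column names are hypothetical.
    import csv

    def iter_trades(path):
        """Yield one trade per row; memory use stays constant regardless of file size."""
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                yield row

    def capture_trade(trade):
        # Placeholder: call your automation tool's trade-capture routine here.
        print("captured trade", trade.get("trade_id"))

    for trade in iter_trades("trades_10gb.csv"):
        capture_trade(trade)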

Should I Automate?

To automate or not to automate? A real modern-day Shakespearean dilemma! Let’s look at the questions that need to be answered before you decide to automate.

  • Why – we automate to save time and money. Can we add value to our test efforts by automating? Do we have a lot of repeatable tasks – for example, the capturing of hundreds of trades that we use for test data? It is important to define your objectives for automating right at the beginning.
  • What – what to automate? Focus on the critical areas of your system first. Make sure you automate end-to-end business processes, rather than individual pieces of functionality. Remember that 100% automation coverage is NOT needed, although it will probably be asked for. Partial automation is OK, and don’t let anyone tell you otherwise; just focus your efforts on the areas that will deliver the biggest initial return on investment. For example, automating the capturing of hundreds of trades, daily, to be used for test data is definitely worth it; automating the test for changing a customer’s address in static data is most definitely not worth the effort. So give it some thought and be smart about it! Automate useful tests. There is a saying that stuck with me – if you automate a test that achieves nothing, all you end up with is a test that achieves nothing faster.
  • When – you need a fairly stable system to automate against, but also enough builds/releases still to come to make the effort worthwhile. You need to find the balance between maintenance and return on investment. The sooner you automate, the more time you have in which to recover your costs (by using your scripts); however, early automation requires more maintenance effort. To sum up, automate whenever it will add value to your project.
  • How – focus on your long term objectives; think strategically. When it comes to tool selection, remember that expensive is not necessarily better. There are some great open source tools available. Every tool is the best according to the sales reps; establish exactly what you need from a tool and see which one will fit in with your organization’s vision and requirements. Keep sustainability and maintainability in mind when you automate, as well as ease of analysis. If your script fails, how long will it take you to figure out whether it was the system or the script that failed?
  • Where – there are two aspects to consider: physical proximity, and where to focus automation in the system under test. Your automation team should not be isolated, but should sit in close proximity to the rest of the project team: this improves communication and prevents a lot of misunderstanding. As for the system under test: try to automate at the service layer and not just on the GUI. This will help improve those very important automation factors, sustainability and maintainability (a minimal service-layer sketch follows this list).
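
To illustrate the service-layer point above, here is a minimal sketch of a check that exercises a business operation through its service interface rather than through the GUI. The endpoint, payload and use of Python’s requests library are assumptions for the sake of the example.

    # A minimal service-layer check; the endpoint and payload are hypothetical.
    import requests

    def test_capture_trade_via_service():
        payload = {"instrument": "ZAR/USD", "quantity": 1000, "price": 7.35}
        response = requests.post("https://test-env.example.com/api/trades",
                                 json=payload, timeout=10)
        # Assert on the service response, not on screen contents.
        assert response.status_code == 201, response.text
        assert response.json()["status"] == "CAPTURED"

    if __name__ == "__main__":
        test_capture_trade_via_service()
        print("Service-layer trade capture OK")

A service-layer check like this survives GUI redesigns untouched, which is exactly what sustainability and maintainability are about.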

Am I Successful In My Automation Endeavors?

There is no universal answer to this, but there are a few scientific ways of establishing whether or not you have been successful. Remember, this is a contextual exercise and much will influence how it should or could be carried out.

  • Objectives – a good place to start would be your test automation objectives. Did you define these clearly at project start? Did the objectives adhere to all the characteristics of a requirement – clear, concise, complete, quantifiable and measurable? Did all the project stakeholders (technical team, BAs, test team, PM, project sponsor) understand them and sign them off? Did your efforts meet these objectives and do you have artefacts to support this, e.g. design documents, tests and their results, and lessons learned? Are the objectives within the defined/planned conditions, e.g. budget, timelines, usability, portability and flexibility? Did maintenance form part of your objectives? Are the objectives in line with your organization’s future plans for your projects, immediate and long term?
  • System Development – beyond your planned success criteria, how has your effort influenced the overall system development effort/project? This is where communication and proper project analysis come to the fore, because you need to have observed the contexts both before and after the automation solution. Are the test analysts now getting more quality time for their test analysis work? If yes, how have you assessed this – by assumption, or by measurable facts such as the rate at which test cases are created and completed? Do they have more time to assess and review these now that they don’t need to focus as much on regression testing?

Does every stakeholder understand the value your effort is adding? If not, did you properly inform them of the value you planned to add to their projects? Was the inverse true: did they inform you of the value they expected you to add? If these expectations were set initially, does every stakeholder have proof of them being met by your exercise? Did you merely meet them, or did you exceed them?

  • Supporting Artefacts – does your exercise have supporting artefacts and are they up to date? Is your design document in line with the actual deliverable, e.g. test harness or framework? Have you checked all the requirements against your automation solution and do you have tests to support this?
  • Scope of Solution – is your solution specific to only the user acceptance test, or was it designed with the whole project in mind? Understanding immediate and future plans would be very handy: the company might have tools and technologies specific to one OS, yet be considering expanding to another OS client for various reasons. If this is the case, can your solution adapt to these changes as well? They might also have plans to upgrade to the latest technologies; how does your solution fare against that?

The information in this article was presented to the South African Special Interest Group In Software Testing (SIGIST) on 15 September 2010. Due to the positive reception the presentation received, we decided to publish it as an article.

This is by no means a comprehensive guide on how to avoid disastrous practices, but it always helps to have a benchmark.


About the Author

Lungelo Shembe has been involved in Software Testing since the beginning of his career in 2004. He also holds an ISEB Foundation certificate and is currently studying towards his Sun Certified Java Associate (SCJA), with the long term plan of becoming a Sun Certified Enterprise Architect (SCEA). Currently he is consulting as both a Test Automation and Performance Test Engineer in the Banking Industry, as part of CentricEdge, South Africa.