Tuesday January 17th 2012 12pm
Centralized Performance Testing
Maturing Large IT Organizations - Part I
Maturing the Performance Testing Process
Most companies are immature when it comes to performance testing their applications. Why? Because producing tests with accurate results normally requires someone with a niche skill set: some level of expertise in virtually every area of IT, including development, QA testing, infrastructure, database administration, and networking.
For the large enterprise with dozens or hundreds of projects per year, the need for performance testing is so great that even multiple full-time staff may not be able to keep up with the demand. Some departments may not even be aware that such personnel exist and may purchase their own testing tools and set aside additional resources for testing their specific project. In some companies, this reinvention of the wheel even plays out on a cyclical basis.
To gauge the maturity of performance testing in your company, find out why a test might be considered for a project today. Is testing implemented as a reactive measure to an unexpected problem? Are production service level agreements constantly in jeopardy of being broken with no known root cause? Is performance an afterthought until someone at the right management level starts complaining?
In this article, we will discuss the four general phases of maturity for the enterprise as they relate to performance testing, and what those phases look like at a high level.
What does a company do when it has tested in the lab but still has a failure after release? Usually the blame is put on the testing tool itself, causing management to question the validity of the entire testing process. In reality, when applications fail under load in production even after a testing process has been implemented, it is due to one or more of the following:
- The company has assigned a person to do performance testing without consideration of their skill set. Without the right skill set (and mindset) at the helm of the test methodology, a false sense of security can develop: test results are trusted even though the test conditions are invalid. This poses a much bigger risk to the company because C-level executives believe they are making informed business decisions.
- The wrong roles are doing the performance testing and trying to accomplish the wrong objectives. This could be functional QA testers who turn the performance test into a functional validation (how many defects per unit volume) when the infrastructure should be the focus. It could be developers who are concerned only with specific code components (perhaps their own unit of code) and do not test the application as an integrated product. Architects or other technical resources too close to the product can test according to their knowledge of the application, without business knowledge or a view from the end user's perspective.
- The test environment looks nothing like production, so the performance test results compare apples to oranges. Dangerous extrapolations can occur, like assuming that doubling the CPUs between environments will double performance (see the sketch after this list). Applications are then "thrown over the wall" to operations, who inherit the performance problems.
- Incomplete requirements, inaccurate business processes, and bad data are common roadblocks to performance testing. A test is only as good as the input at the front end of the process, and recreating reality is difficult without business process knowledge. This can produce test results that satisfy someone's personal agenda without actually revealing the real problems.
- Effective and controlled change management is not in place. Developers and other resources can make code or configuration changes to environments without documenting those changes, and the changes get moved to production without another performance testing cycle to validate them. This lets performance issues fall through the cracks of the software development lifecycle.
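As an illustration of why the environment extrapolation above is dangerous, here is a minimal sketch using Amdahl's law; the 20% serial fraction is a hypothetical example value, not a measurement from any real application.

```python
# Illustration of why doubling CPUs does not double performance.
# Amdahl's law: speedup = 1 / (serial + (1 - serial) / n)
# The serial fraction below is a hypothetical example value.

def amdahl_speedup(serial_fraction: float, cpus: int) -> float:
    """Theoretical speedup over a single CPU for a workload
    whose serial_fraction cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cpus)

serial = 0.20  # assume 20% of the workload is inherently serial

for n in (1, 2, 4, 8):
    print(f"{n} CPUs -> {amdahl_speedup(serial, n):.2f}x speedup")

# Output:
# 1 CPUs -> 1.00x speedup
# 2 CPUs -> 1.67x speedup   (not 2x)
# 4 CPUs -> 2.50x speedup   (not 4x)
# 8 CPUs -> 3.33x speedup   (not 8x)
```

Even with a modest serial fraction, throughput gains fall well short of the hardware multiplier, which is why extrapolating from a scaled-down lab to production is guesswork rather than testing.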
The project testing stage is defined by the ability to automate performance testing, but with the following limitations:
- Use of automation focused on one or a few projects
- Informal roles or part-time resources for test execution
- Ad hoc deliverables
- Non-repeatable processes
Testing is executed on a few key projects, but only just before the application rolls out to production. Because no change management is in place, there is no way to determine whether the application under test is exactly the same as what is rolled out to production.
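One lightweight way to close that gap is to fingerprint the tested build and compare it to what actually ships. Below is a minimal sketch; the artifact paths are hypothetical placeholders, and a real pipeline would record the digest in its deployment tooling.

```python
# Minimal sketch: fingerprint the build that was performance tested so it
# can be compared against what is actually deployed to production.
# The file paths below are hypothetical placeholders.
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a build artifact."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

tested = sha256_of("builds/app-under-test.war")      # hashed at test time
deployed = sha256_of("releases/app-production.war")  # hashed at release time

if tested != deployed:
    print("WARNING: production artifact differs from the tested build")
```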
Only transaction times are used as metrics for performance. Application, system, and network monitors are not correlated with these measurements to get a true end-to-end view.
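To make concrete what correlation adds, here is a minimal sketch that lines up transaction response times with CPU utilization samples by timestamp. All of the sample data and the simple nearest-sample matching are hypothetical stand-ins for real monitor output.

```python
# Sketch: correlate transaction response times with a system-level monitor
# (here, CPU utilization samples) by timestamp, so a slow transaction can
# be traced to resource behavior. All sample data below is hypothetical.

transactions = [  # (epoch seconds, transaction name, response time in s)
    (1000, "login",    0.8),
    (1030, "search",   4.9),
    (1060, "checkout", 1.1),
]
cpu_samples = [  # (epoch seconds, CPU utilization %)
    (995, 35), (1025, 97), (1055, 40),
]

def nearest_cpu(ts: int) -> int:
    """Return the CPU utilization sampled closest to timestamp ts."""
    return min(cpu_samples, key=lambda s: abs(s[0] - ts))[1]

for ts, name, resp in transactions:
    print(f"{name:8s} {resp:4.1f}s  CPU={nearest_cpu(ts)}%")
# The 4.9s 'search' lines up with the 97% CPU spike, which a
# transaction-times-only view would never reveal.
```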
Deliverables can differ for each project. If multiple groups do performance testing for their own projects, they do not communicate with each other. There is no standardization.
If the technology or platform changes (e.g., client-server to web), the processes are altered because they are done "on the fly". Resources must be retrained on their testing approach for each project because there is usually a time gap between tests.
Moving to a product-based function introduces two important capabilities:
- It allows testing to be done anytime, anywhere, by anyone with access
- It allows multiple tests to be run on centralized infrastructure
Resources (full- or part-time) are designated as users of the performance testing product. Documentation is standardized and reported to all stakeholders of a project. Scheduling use of the product is more formal and documented. As demand for testing grows, more resources and test execution infrastructure are acquired. Reorganization of resources into a centralized model begins at this stage. Other qualities include:
- 24x7 enterprise-wide testing infrastructures
- Management console and reports
- Resource scheduling
- Collaborative testing
- Development of best-in-class testing capabilities
At the service utility stage, what has worked in the past with a product utility is expanded to multiple lines of business. The centralized model is expanded with formal leadership and roles. The central group acts as an internal consulting group to the enterprise as needed. It also provides training and mentoring to local line-of-business resources so they can execute performance tests on smaller components of an application. Projects are tracked and scheduled according to a formalized process. Executive management is aware of application performance at a high level across all tested projects at any time, not just when there is a problem. This stage includes the following elements:
- Testing services across multiple lines of business
- Processes: known standards, collaborative workflows
- Project management
- Internal billing capabilities
- Dashboard: visibility into project status
- Web-enabled, collaborative testing
The performance authority stage represents the smallest percentage of companies that currently do performance testing throughout the software development lifecycle. Why? Because it usually takes years to develop and may require many organizational changes. It requires that a specific group within the company make final decisions about production implementation based on performance results. That demands results trustworthy enough to base business decisions on, which in turn indicates a very mature, organized, and thorough testing team. Performance is engineered into all products throughout the entire SDLC, and performance metrics are tied into and tightly integrated with the IT governance process. This stage includes the following characteristics:
- Standardized services and metrics based on best practices
- Real-time visibility and end-to-end traceability
- Knowledge / expertise sharing
- Centralized management and authority (approval)
- Testing community
When performance testing becomes an organizational push from the Quality Assurance organization, there is a natural tendency to push back. Many departments do not understand how they are affected by the change, so the immediate reaction is to resist. Security does not want to be told that access levels have to be accommodated. Development teams do not want to be locked out of a system for long periods of time. And just exactly who is responsible for creating thousands of test user accounts in preparation for a test?
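To make that last pain point concrete, here is a minimal sketch of bulk-generating test accounts for import by whoever owns identity provisioning. The naming scheme, account count, and CSV format are hypothetical examples, not a prescribed standard.

```python
# Sketch: bulk-generate test user accounts as a CSV that an identity
# admin (or a provisioning tool) can import before a load test.
# The naming scheme, count, and output file are hypothetical examples.
import csv
import secrets

COUNT = 5000

with open("load_test_users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["username", "password", "role"])
    for i in range(1, COUNT + 1):
        writer.writerow([f"perfuser{i:05d}", secrets.token_urlsafe(12), "tester"])

print(f"Wrote {COUNT} test accounts to load_test_users.csv")
```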
These are the new challenges that are revealed once enterprise-wide performance testing begins. We refer to them as "the monsters in the corner" because they can scare some companies away from a proper test methodology. A monster might be security concerns over the access needed to monitor systems during the testing period. It could be a maze of forms and paperwork that must be filled out and approved, drastically lengthening test engagements. It may be a lack of support from various groups in determining and fixing specific bottlenecks. Or it may be that the new Agile-driven development doesn't allow the time to do things the right way, at least in the mind of the VP of development.
When it comes to implementing a Performance COE across multiple lines of business, one of the biggest obstacles is getting past the organizational impacts. This is why it is important to bring in resources who are not only familiar with a performance testing product, but also understand the complexities of the organizational change that must happen to be successful.
Stay tuned for Part II, concluding with:
- The Performance Center of Excellence
- Documenting and Recognizing Value
- Achieving Excellence
Attend Scott Moore's session 203, Planning For The Perfect Storm: The Value Of Peak Testing, on Tuesday, March 27th at the Software Test Professionals Conference Spring 2012 in New Orleans. This session is part of the Performance Testing Track. Learn more at http://www.stpcon.com