Mobile and web applications are becoming incredibly complex. Their functionality is increasingly tied to interoperability with myriad third-party services and components, not to mention networks and platforms such as Java. The immediacy of the Internet is driving users to expect far more from their web and mobile applications than anyone could have predicted 20 or even 10 years ago.
Those three factors (application complexity, interoperability and rising user expectations) are intensifying the need for early, thorough and frequently automated application testing. Yet many companies continue to focus their quality efforts more heavily on the production environment.
Mitigating flaws in production, whether they preexist in the application or arise from changes to services, networks or other components of the transaction chain, has been the impetus for a whole ecosystem of “application intelligence” products that provide deeply granular, automated examination and analysis of transaction lifecycles, often from the end-user perspective. (The moniker given to this approach is application performance management, or APM.)
While we at Orasi agree these tools are valuable, we still believe the majority of defects can and should be caught during testing. We also believe that companies that adopt the core principles of APM in the testing lab, rather than applying them solely in the production environment, can achieve higher application quality, shorter development and testing cycles, and fewer defects in production.
APM in a Production Environment
APM is a methodology whereby sophisticated tools monitor, profile, analyze and report on myriad key aspects of production applications, with the goal of resolving performance issues more rapidly. As a consequence of intense market pressure for high-performing applications, APM adoption is accelerating, and statistics show it can have a dramatic impact on application downtime. Per AppDynamics, an APM vendor with which Orasi partners, a robust, highly investigative and intelligent APM solution can reduce application downtime and resource allocation by a factor of 10 compared with traditional application, transaction and business process troubleshooting.
Research firm Gartner has identified five dimensions in its Magic Quadrant for APM:
- End-user experience (EUE) monitoring
- Application runtime architecture discovery and modeling
- User-defined transaction profiling (aka business transaction management)
- Application component monitoring
- Reporting and application data analytics
APM Approaches in Testing
Many of the principles and approaches used in APM apply to testing, even if the specifics of their execution do not. Let’s consider end-user monitoring (the first item in Gartner’s list) as an example.
End-user experience monitoring is an important component of APM; it involves measuring and validating application performance for the end user. It generally consists of two components.
- Measurement of overall traffic to report on system availability and business transaction performance, even when transaction volume is low.
- Agent-based isolation of inconsistencies and latencies that occur as real users interact with applications and their services.
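The first of these components can be made concrete with a small sketch. The Python below is purely illustrative and hypothetical, not code from any APM product: a scripted check (here a stand-in lambda; a real probe would exercise a login or checkout flow) runs repeatedly and yields availability and latency figures even when no real users are on the system.

```python
import time

def probe(check, attempts=3):
    """Synthetic monitoring sketch: run a scripted check repeatedly and
    record availability and average latency, independent of real traffic."""
    results = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            ok = check()           # the scripted business transaction
        except Exception:
            ok = False             # any failure counts against availability
        results.append((ok, time.perf_counter() - start))
    up = sum(1 for ok, _ in results if ok)
    return {"availability": up / attempts,
            "avg_latency_s": sum(t for _, t in results) / attempts}

# Hypothetical check that always succeeds, for illustration only.
stats = probe(lambda: True)
print(stats)
```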
Both of these processes can provide significant insight into application behaviors and inconsistencies that testers can apply to their own efforts. Let’s compare transaction monitoring in a leading testing tool versus an APM-style product.
HP LoadRunner, a powerful performance and load testing tool, includes a built-in transaction monitor that provides metrics such as:
- Transaction Response Time
- Transactions per Second (Passed)
- Transactions per Second (Failed, Stopped)
- Total Transactions per Second (Passed)
These metrics help testers pinpoint transaction errors in lab testing. However, they do not highlight the location or cause of irregularities across the transaction lifecycle. An EUE tool, on the other hand, can provide much more intensive monitoring, analysis and reporting of transactions.
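To make the metrics above concrete, here is a minimal Python sketch that derives them from raw transaction records. The record format is invented for illustration and is not LoadRunner's actual log format.

```python
from collections import Counter

# Hypothetical transaction records a load test might emit:
# (transaction_name, start_time_s, end_time_s, status)
records = [
    ("login",    0.0, 1.2, "Pass"),
    ("login",    0.5, 1.9, "Pass"),
    ("checkout", 1.0, 4.1, "Fail"),
    ("checkout", 2.0, 3.4, "Pass"),
]

def summarize(records, duration_s):
    """Compute response-time and throughput metrics like those listed above."""
    times = [end - start for _, start, end, _ in records]
    by_status = Counter(status for *_, status in records)
    return {
        "avg_response_time_s": sum(times) / len(times),
        "tps_passed": by_status["Pass"] / duration_s,   # passed per second
        "tps_failed": by_status["Fail"] / duration_s,   # failed per second
        "total_tps": len(records) / duration_s,
    }

metrics = summarize(records, duration_s=5.0)
print(metrics)
```

Note that these figures describe whether and how fast transactions completed, but say nothing about where inside the transaction the time went.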
Some APM tools, for example, can examine how much time a transaction spends at each hop: on the server, in transit over the network, in back-end processing, and at other stops along the way, helping identify the cause and location of latency or time-outs. Monitoring at this level gives testers meaningful insight into activities that influence or disrupt the transaction lifecycle and helps expedite resolution of bottlenecks and breakdowns.
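The per-hop breakdown described above can be sketched as follows. The timestamps and hop names are invented for illustration; an APM agent would collect equivalent data automatically.

```python
# Hypothetical per-hop timestamps (ms) for one transaction, as an
# APM-style agent might record them along the transaction path.
hops = [
    ("browser -> web server",  0,   45),    # network transit
    ("web server processing",  45,  60),
    ("web server -> app tier", 60,  65),
    ("app tier processing",    65,  340),
    ("app tier -> database",   340, 345),
    ("database query",         345, 980),   # the hidden bottleneck
    ("return path",            980, 1010),
]

def slowest_segment(hops):
    """Break the end-to-end time into per-hop durations and flag the worst."""
    durations = [(name, end - start) for name, start, end in hops]
    name, ms = max(durations, key=lambda d: d[1])
    total = sum(ms for _, ms in durations)
    return name, ms, total

name, ms, total = slowest_segment(hops)
print(f"{name}: {ms} ms of {total} ms total")
```

A transaction monitor alone would report only the 1,010 ms total; the per-hop view shows that most of it was spent in a single database query.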
An Interesting Future
This example of how APM monitoring, analysis and other exploratory processes can provide value in testing only hints at their potential. Consider third-party services and APIs: monitoring those connections toward the end of testing can highlight conflicting, and therefore problematic, services or APIs. The reality is that inventive testers who take the time to explore the intriguing approaches being used in APM will find other testing scenarios where they are beneficial, as well.
About the Author
Jon Spencer, Director of Professional Services at Orasi Software, is a high-tech professional with experience and competency in a broad range of systems, methodologies and processes, including performance testing, load testing, systems, database and network architectures, software development, and configuration management. A dedicated automation advocate, Spencer understands when to use automation strategies and when manual testing is appropriate. Orasi Software is one of the largest and most successful HP Software sales and services partners in the quality assurance space. For more information, visit www.orasi.com.