Jonathan Lindo describes examples of automated test infrastructure utilizing both open source and traditional, independent-software-vendor-sourced software. In addition, he discusses new techniques for extending the value of automated testing by transforming the process from defect finding to defect resolution by reducing the effort required to document, reproduce, and troubleshoot the defects generated from automated tests.
Most organizations still use manual testing as their primary method for finding application bugs. It’s hard to imagine a software product that will not require manual testing at some level, even as we push for more and more automation in our software lifecycle. That said, the tools to help generate, load, and manage automated testing continue to improve, and the use of automated regression testing is now a critical part of many dev and QA processes. Beyond the obvious labor savings, there are a number of major trends that are pushing test automation:
- Agile development methodologies that increase the frequency of code releases and shrink testing windows
- Continuous integration that requires test automation in order to be effective
- Compliance and audit requirements for test documentation and repeatability
- Increasingly complicated and difficult-to-reproduce production environments such as multi-tier web applications and deployments
- An ever-changing mix of in-sourced and outsourced testing and development, most often geographically distributed
In addition, an important and related trend is DevOps. In a recently released survey on 2011 DevOps trends, 50 percent of respondents indicated that their organization uses some form of DevOps, 61 percent release major updates at least once per month, and 62 percent employ agile development. With this level of velocity in both code development and deployment, automated testing is an absolute must-have for achieving rapid and successful product releases.
The good news for those of us responsible for delivering software is that the basic test automation infrastructure is well understood and a typical framework can be assembled from open source components. Some examples of the powerful tools that exist include:
| Function | Product |
| --- | --- |
| Project Management | Maven |
| Build Automation Tool | Ant |
| Test Automation Infrastructure | iValidator |
| Automated Test Execution | Selenium |
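As a rough illustration of the last row in the table, here is a minimal automated test execution sketch: a Selenium WebDriver check written as a JUnit test case. The URL, page title, and browser choice are placeholder assumptions rather than anything from the article, and in practice a build tool such as Ant or Maven would launch the test rather than a developer running it by hand.

```java
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

import static org.junit.Assert.assertTrue;

public class LoginPageSmokeTest {
    private WebDriver driver;

    @Before
    public void setUp() {
        // Any WebDriver implementation works; Firefox is used here only as an example.
        driver = new FirefoxDriver();
    }

    @Test
    public void loginPageLoads() {
        // Hypothetical application URL -- substitute the system under test.
        driver.get("https://app.example.com/login");
        assertTrue("Login page title should mention 'Login'",
                   driver.getTitle().contains("Login"));
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}
```

Because the test is just a JUnit class, wiring it into an Ant junit task or the Maven Surefire plugin lets the nightly build run it as part of an automated regression pass.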
Once an organization has put these pieces together, the breadth of test coverage and use-case scenarios that can be exercised and verified in a twenty-four-hour period is significant. The organization can often accomplish multiple regression passes per day with a reasonable investment in hardware, virtualization, or cloud compute cycles. However, while this may shrink the detection part of defect management, challenges remain in the downstream process of finding the root cause of, and resolving, the issues the automation system detects.
IDC estimates that bugs consume 37 percent of a developer’s work week. At typical developers’ salaries, this translates into millions of dollars per year in large organizations. Here’s a breakdown of the time spent after a bug is found:
| Step | Estimated Proportion of Time Spent | Challenges |
| --- | --- | --- |
| | 20% | |
| | 20% | |
| | 30% | |
| | 30% | |
So, in addition to the out-of-pocket and opportunity costs it represents, the delay in resolving defects has the potential to significantly slow down releases and deployments.
Automating QA testing generally does not attack this part of the problem. In fact, automated testing at times exacerbates the developer’s challenge, because it is difficult to fully describe what caused the problem and the resources to replicate the test environment often are not available. Log files are of some help, but debug-level logging is rarely enabled because of the overhead it incurs and high output volume it generates. Capturing everything in a virtual machine simply pushes the problem to the developer, who must then replicate the tests with the hope of getting more debug information.
The key to leveraging current automation infrastructure to attack the find-and-fix problem is to extend the architecture outlined above with either existing or newly added functions. Examples of this include:
- Interfacing the automated test system with the organization’s defect-tracking system so that the developer assigned to the problem has as much information as possible without having to reconstruct the original test case. Products such as Selenium, JMeter, and QuickTest Pro provide interfaces to specify what kind of information should be collected when a failure occurs, and they can leverage the APIs of tools such as Bugzilla and JIRA to seamlessly package that data in an automatically generated trouble ticket (see the sketch following this list).
- Leveraging products like QuickTest Pro, Watir, and Selenium to continuously capture manual test cases for addition to the automated test library. When combined with an application-recording tool (see below), this will increase the confidence in product quality without overwhelming developers.
- Attaching an application-recording capability to the targets as they are tested to increase the level and precision of the collected defect data. Products such as CA Introscope can deliver transaction-level information, and more-precise recorders such as ReplayDIRECTOR for Java and Intellitrace for .NET can provide detailed defect information not available from the standard execution environment.
- Leveraging monitoring tools to add defect-tracking information. Selenium now integrates with application recorders that enable insertion of “markers” into the test recording at test start, test end, and when any out-of-bounds or unexpected condition occurs. Developers can then use these markers when they replay the test recording to pinpoint where the defect occurred at the source-code level.
- Using the test recordings as the defect “documentation of record.” Often, they will not only record the application as it is tested but also capture screen shots of the client activity that are associated with the defect. When detailed application recordings are included in the defect report, the screen scraping and “thirty-four steps to reproduce” that are often attached to the bug report are no longer required.
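To make the first item in the list above concrete, the following sketch shows one way a Selenium/JUnit suite might file a ticket automatically when a test fails. It uses a JUnit TestWatcher rule, the standard Java 11 HTTP client, and JIRA's REST issue-creation endpoint; the project key, server URL, and credential handling are assumptions chosen for illustration, not details from the article.

```java
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

import java.io.File;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DefectReportingWatcher extends TestWatcher {
    private final WebDriver driver;
    private final String jiraBaseUrl;   // e.g. "https://jira.example.com" (placeholder)
    private final String authHeader;

    public DefectReportingWatcher(WebDriver driver, String jiraBaseUrl,
                                  String user, String apiToken) {
        this.driver = driver;
        this.jiraBaseUrl = jiraBaseUrl;
        this.authHeader = "Basic " + Base64.getEncoder().encodeToString(
                (user + ":" + apiToken).getBytes(StandardCharsets.UTF_8));
    }

    @Override
    protected void failed(Throwable e, Description description) {
        // 1. Capture the browser state at the moment of failure.
        File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);

        // 2. File a trouble ticket so the developer starts with context,
        //    not a bare "test X failed" flag. "QA" is a placeholder project key,
        //    and the hand-built JSON assumes the summary contains no characters
        //    that need further escaping.
        String summary = "Automated test failure: " + description.getDisplayName();
        String body = String.format(
            "{\"fields\":{\"project\":{\"key\":\"QA\"}," +
            "\"summary\":\"%s\",\"description\":\"%s\\nScreenshot: %s\"," +
            "\"issuetype\":{\"name\":\"Bug\"}}}",
            summary, e.toString().replace("\"", "'"), screenshot.getAbsolutePath());

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(jiraBaseUrl + "/rest/api/2/issue"))
            .header("Authorization", authHeader)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
        try {
            HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        } catch (Exception sendError) {
            System.err.println("Could not file defect: " + sendError.getMessage());
        }
    }
}
```

A test class would declare this rule with @Rule and pass in its WebDriver, so every failure reaches the tracker with a screenshot path and stack trace already attached rather than a bare pass/fail result.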
Automated application testing has rapidly evolved into an essential function in today’s high-velocity development-and-deployment environment. Many products are now available to implement the automated test process, and more are arriving to reduce not only test time but also the developer’s find-and-fix process. Close integration with defect-tracking systems streamlines communication. Aggressively capturing manual test cases to add to the test library expands the reach of the automated test suite. And, the use of application-recording tools dramatically extends the value of automated testing by seamlessly capturing detailed debug information while delivering highly accurate defect documentation to automate much of the debug process.
User Comments
Nice article. Lots of good information and insight. One thing I would like to add: the work-effort breakdown table does not show defect resolution as an iterative process unto itself. You hint at it with the last statement in the last row (Multiple attempts at resolution). This is the one thing that is a killer on any project: rework. Rework can have a compounding effect on schedule and costs.
Also, this seems to be only from the developer/programmer view. We also need to consider and factor in other groups' involvement (time and money) in defect resolution and retesting. Again, those money factors (hard and soft dollars) can have a compounding effect. But this is a good start.
Thanks Jim, very good points. It's difficult to quantify the amount of iteration that often takes place, but teams I talk to constantly bring it up, and the team that I manage certainly feels that pain. The defect resolution workflow chart always has a few lines leading back to the top with a label like 'can't reproduce, collect more data...'.
Agile and sprint methodologies have provided a nice structure for reducing unnecessary iterations with my teams by tightening up communication between QA, dev, and ops, but sometimes there's no avoiding it. Refactoring/reworking definitely eats up a lot of time and budget, and not just for QA and dev.
Nice article that discusses incremental improvements, but it sticks *inside the box* with conventional practices that don't work very well, for example:
What is automated, verified and measured is barely addressed. There's room for a huge amount of improvement here, if one is ready to think about the problem in terms of business deliverables...
The inefficiency and formatting bloat of flight-recorder log streams is unnecessary. Logs also tend to report too little information or the wrong information, and they make automated parsing nearly impossible. Stick with pure data instead, and don't throw information away! ...
The persistent cost of replicating failed tests and maintaining automation is a huge drag on computationally driven software quality.
I addressed these problems and others (e.g. slow and flaky tests) with the MetaAutomation pattern language. I published on Amazon, and am presenting at STARWEST in September-October.