Getting the most out of automation is a matter of evaluating your test goals and matching the right tool to the job. Sometimes test automation solves problems, but often it creates more work than it alleviates. When does automation help, and when does it hurt? Here are a few ideas for evaluating your test goals and making sound decisions about whether or not to employ automation to meet them.
Test automation is a complex topic that has generated volumes of literature: Web searches turn up a myriad of information about commercial tools, theoretical design techniques, methodologies and best practices, automation consultants, and more. Suffice it to say that, with the mass of information available on test automation, the decision of whether or not to automate can be difficult.
Why is there so much involved with automation? The answer is straightforward: because it's difficult. Decisions about which tool(s) to use, how to architect and implement a test suite, and who will do the work are complicated. Then there are software design considerations, the supporting scripts necessary to run the automation, source control, and library construction. Add to that the complexity of managing a large automation suite, the continued maintenance of scripts, and the work of extending the suite to cover new functionality, and one can easily be left with the looming question, "Is this really going to be worth it?"
Here are three things to think about when making the decision to automate.
- Automate only that which needs automating. Once you have decided as a QA group that you need to implement test automation, it can be very tempting to automate as much as possible. This inclination can be especially acute when you have spent tens of thousands of dollars on test automation software, or have even hired a person to do test automation full time.
Unfortunately, test automation is not a magic bullet for achieving great test results. Software vendors will try to convince you that you can automate any and all testing your group does; this is not true. Remember, automation does not actually do the testing; it is a tool to help your test engineers test better. The time saved through test automation can easily be consumed again by maintaining tests, adding new test cases, removing obsolete ones, and improving the test architecture.
What does this mean for your automation effort? It means you should automate only the things that actually need to be automated. You can probably think of numerous candidates for test automation; of those, select the best fit and start on it first. Especially if this is your first test automation effort, shooting for the moon can easily backfire in terms of projected effort and cost. Going for the low-hanging fruit first lets you maximize the return in the short term, realizing visible gains soon after spending resources and money on tools and people.
Consider all the output of the development group in our company: 1) the core API-driven peer-to-peer technology, 2) a showcase Web site that demonstrates the technology, 3) a corporate site, and 4) some internal tools. We decided to automate the core technology first. Not only does it lend itself to reliable test automation (an API is almost always easier and more reliably automated than a GUI), but it also delivers the best bang for the buck for the department. Sure, it would be nice to automate the corporate Web site so that we could regression test it on those small, weekly pushes of new content, but why spend time and money automating something that takes two testers about an hour to test? It's not worth it.
- Design and build for maintainability. There are occasions for writing small, one-off scripts that either rely on record-and-playback or are hacked together with time, rather than quality, as the primary consideration. Designing and building your automated test suite is not one of those occasions. Approach your automation project that way and you are walking into a maintenance nightmare. The only way to make automation work is to do the up-front planning and commit to it for the long haul.
Test automation is software development, nothing less. We wouldn't accept sloppy or undocumented code from the development team; ideally, we would like to see code reviews, collaboration, and reliability, even if it takes a little longer. Our test automation projects should be held to the same high standards. Taking even a day or two at the outset of an automation project to plan and scope the effort will pay off in the building phase. Not only will it keep you focused as you proceed, it will also produce a well-organized suite that other engineers and testers can pick up easily and that can be maintained over time.
If you do not design for maintainability, you will spend so much time trying to fix your scripts that they will either be abandoned or rewritten completely. Either way, the goals of your test automation have failed. Architect the suite in a logical manner, comment and document the code you write, and hold peer reviews if possible so that the idiosyncrasies of your programming style are understood by others (especially if they will be helping you with maintenance or writing test cases of their own to add to the suite). Additionally, write flexible code that doesn't break with the slightest change to the product. Rather than constricting your code (and validation points) to a single point of failure, write test cases so that failures are simply reported and the test moves on, instead of exiting or breaking on the first failure. Good logging will also save time when diagnosing problems or bugs. I usually code with two levels of logging: 1) any and all logging I could possibly ever want to see, which I throw into an
if (bDEBUG == TRUE) {
    print("error information here…");
}
statement; and 2) general logging that I want written out during the normal course of execution. If there's a problem, I can always "turn on debugging" and get all the logging I would ever want. You can never have too much logging.
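As a concrete illustration of the report-and-continue and two-level-logging ideas above, here is a minimal Python sketch. The validation points and the product being checked are hypothetical placeholders, and the structure is just one way to arrange it, not a prescribed framework.

import logging

# Verbose detail is logged only when debugging is switched on (the
# equivalent of the bDEBUG flag above); general results are always logged.
DEBUG = False
logging.basicConfig(
    level=logging.DEBUG if DEBUG else logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("suite")

failures = []  # collect failures instead of aborting on the first one

def check(description, actual, expected):
    """A single validation point: report the result and keep going."""
    log.debug("checking %s: actual=%r expected=%r", description, actual, expected)
    if actual == expected:
        log.info("PASS: %s", description)
    else:
        failures.append(description)
        log.info("FAIL: %s (got %r, expected %r)", description, actual, expected)

# Hypothetical validation points; a real suite would pull these values
# from calls into the product under test.
check("peer count after connect", actual=3, expected=3)
check("transfer status", actual="error", expected="complete")

log.info("finished with %d failure(s)", len(failures))

Flipping DEBUG to True is the "turn on debugging" switch: the same run then emits every detail the debug-level calls produce, without touching the test cases themselves.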
- Whether or not to automate: rules of thumb. Here are some guidelines I have used in the past when evaluating whether or not to automate on a project. They have worked for me, and they may work for you as well. Evaluate them, adapt them to your own environment, and you may find they lead you to the right conclusion about whether or not to automate.
- GUIs are difficult to automate. Despite software vendors telling you how easy it is to use record-and-playback functionality, graphical interfaces are notoriously difficult to automate with any sort of maintainability. Application interfaces tend to become static more quickly than Web pages and are thus a bit easier to automate. Also, I have found that using Windows hooks is more reliable than going through the DOM interface. The key things to look for when deciding whether to automate a GUI are how static it is (the less it changes, the easier it will be to automate) and how closely the application sticks to the standard Windows libraries (custom objects can be difficult to automate).
- If possible, automate at the command-line or API level. Taking the GUI out of the equation dramatically improves the reliability of test scripts. If the application has a command-line interface, not only does it lend itself to reliable automation, but it is also easy to drive in a data-driven way, another green light to go forward with automation (the command-line sketch after this list illustrates the idea).
- Automate those things that a human cannot do. If you suspect that a certain operation causes a memory leak but can't seem to reproduce it in a reasonable amount of time, automate it. Time-sensitive actions (requiring precise timing to capture a state change, for example) and very rapid actions (loading a component with a hundred operations a second, say) are also particularly good candidates (the memory-leak sketch after this list shows this kind of repetition).
- Stick with the golden rule of automation: do one thing, and do it well. A robust test case that performs a single operation will pay off more than a test case that covers more ground but requires heavy maintenance. If you design your test cases (or, preferably, library functions) to do single actions, and you write them robustly, you can soon chain them together to perform the broad scenarios you would otherwise have crammed into a single test case (the composition sketch after this list shows how this plays out).
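To illustrate the command-line point, here is a minimal, data-driven Python sketch. The tool name, its arguments, and the expected output strings are hypothetical placeholders, not a real interface; the point is only that each test case is a row of data fed through the same small loop.

import subprocess

# Hypothetical command lines and expected output for the product under test;
# substitute the real tool name, flags, and expectations.
CASES = [
    (["p2ptool", "--connect", "peer1"], "connected"),
    (["p2ptool", "--send", "file.txt"], "transfer complete"),
]

for args, expected in CASES:
    result = subprocess.run(args, capture_output=True, text=True)
    status = "PASS" if expected in result.stdout else "FAIL"
    print(f"{status}: {' '.join(args)}")

Adding a new test case is then a matter of adding a row of data rather than writing new code.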
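For the memory-leak example, a sketch along these lines repeats the suspect operation far faster and far longer than a tester could by hand, while sampling the process's memory use. The suspect_operation function is a placeholder, and the psutil library is an assumption; any process-memory probe would do.

import time
import psutil  # assumed available; any way of sampling process memory would do

proc = psutil.Process()  # samples this script; in practice, point it at the product's process ID

def suspect_operation():
    # Placeholder for the product action suspected of leaking memory.
    pass

for i in range(100_000):
    suspect_operation()
    if i % 10_000 == 0:
        rss_mb = proc.memory_info().rss / (1024 * 1024)
        print(f"iteration {i}: resident memory {rss_mb:.1f} MB")
    time.sleep(0.01)  # roughly a hundred operations per second

If the reported number climbs steadily over tens of thousands of iterations, the leak is reproduced, and the log shows roughly how quickly it grows.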
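And to illustrate the do-one-thing rule, here is a sketch of single-action library functions composed into a broader test case. The client object and its methods are hypothetical stand-ins for whatever interface the product exposes.

# Each library function performs exactly one action and returns its result;
# broader scenarios are just sequences of these calls.
def connect(client, peer):
    return client.connect(peer)

def send_file(client, path):
    return client.send(path)

def disconnect(client):
    return client.disconnect()

def test_send_then_disconnect(client):
    # A "broad" test case assembled from single-action building blocks.
    assert connect(client, "peer1")
    assert send_file(client, "file.txt")
    assert disconnect(client)

When one action's behavior changes, only its single-action function needs maintenance; every broad scenario built on top of it picks up the fix for free.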
I hope the points above help you decide whether to use test automation on your projects. Whether you are just beginning to explore test automation or already have an automation suite in place and are considering expanding it, it always helps to keep evaluating how your test automation serves your testing goals. Test automation can stagnate, so there is a constant need to keep it fresh and producing good results. I am confident that by choosing carefully the things you automate, and building with maintainability in mind, you will get the most out of your test automation.