People-driven Test Automation

Summary:

So much of test automation focuses on getting those dirty humans out of the process, but the reality is that humans have to write and maintain software test infrastructure. In this article, Markus Gärtner covers some common pitfalls and how to avoid them.

Successful test automation depends on a variety of factors. The technical aspects have been well understood for more than a decade [1, 2, 3], but the human aspects have received far less attention. For testers moving into automation, overcoming an approach they may have practiced for decades can be a difficult task.

Technical Aspects
In order to understand the human factors in test automation, we have to revisit the technical factors. The most basic point is that software test automation is, in fact, software development: to automate the steps in a test, software is developed that needs to be maintained alongside the production code it is testing.

In general, automated tests consist of two parts: the test data and the code that drives the application under test. Test data is usually maintained in a separate format or even a separate repository. The automation code may build on a public framework such as FitNesse or Robot Framework, or it may be based on a framework grown in-house.

Test Data
The maintenance of test data is a critical part of software test automation. In the worst case, the test data initially written down has to be adapted to every change in the software, so that a second system, made up of automated tests, is built and maintained alongside the first. The result is called the second system effect in software test automation [4]. The most common cause is test data written in terms of how the system achieves a particular functionality. For example, a test for a login page on a website may be expressed in test data as “open browser FireFox,” “load login page,” and “enter ‘user’ in the first text field.” This ties the test data to the implementation details of the UI, with the effect that whenever the user interface changes, the tests need to change as well.
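To make the problem concrete, here is a minimal sketch of such a UI-coupled test, written in Python against the Selenium WebDriver API. The URL and the element locators are illustrative assumptions about the current implementation, which is exactly the problem:

    # A UI-coupled login test: every line encodes an implementation detail.
    # The URL and the locators ("username", "password", "submit") are
    # assumptions about how the page happens to be built today.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_login_ui_coupled():
        driver = webdriver.Firefox()                 # "open browser FireFox"
        try:
            driver.get("https://example.com/login")  # "load login page"
            driver.find_element(By.NAME, "username").send_keys("user")
            driver.find_element(By.NAME, "password").send_keys("very secret")
            driver.find_element(By.ID, "submit").click()
            assert "Welcome" in driver.page_source
        finally:
            driver.quit()

Rename the username field or replace the submit button, and this test breaks, even though the login rule itself has not changed.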

Use case writers and requirements analysts know a simple technique to avoid the second system effect. Use case descriptions and requirements documents focus on a user goal or functional requirement in terms of what will be achieved, rather than prescribing a particular implementation [5]. The solution to the second system effect, therefore, is to write down the test data in terms of the user goal that is exercised. In the above example, this could be noted as “login as user ‘user’ with password ‘very secret.’” The test will then be independent of changes in the user interface. The dependency on UI details is thereby moved into the executable code that knows how to exercise the application under test.
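Expressed in executable form, the same test might shrink to the sketch below. Application, login_as, and is_logged_in_as are hypothetical names for the automation layer discussed in the next section; the point is that no browser names or locators appear in the test itself:

    # The test now states only the user goal; all UI knowledge lives
    # in the (hypothetical) automation layer behind Application.
    from automation.application import Application

    def test_login():
        app = Application()
        app.login_as("user", "very secret")
        assert app.is_logged_in_as("user")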

Automation Code
Automation code is code that brings together the test data and the application under test. This code may be built with the help of a public framework or by growing your own. Most available frameworks additionally run the tests and report the results.

As noted above, the automation code is heavily dependent on the application it is testing. Therefore, it should be developed with the same development methods as the application itself. Ideally, the application under test and the automation code share the same source code repository, so that changes to application classes are also reflected in the automation code base. Because the automation code may become rather complex over time, it should be documented and tested.
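Continuing the hypothetical login example, the automation layer behind login_as could look like the following sketch: documented, typed, and written so that all UI knowledge sits in one place. Accepting an injected driver is a deliberate design choice that makes the layer itself testable without a real browser:

    # automation/application.py -- a hypothetical automation layer.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class Application:
        """Facade that translates user goals into UI interactions.

        All knowledge about pages, locators, and navigation lives here,
        so the test data stays stable when the user interface changes.
        """

        def __init__(self, driver=None, base_url: str = "https://example.com"):
            # An injected driver keeps this layer testable without a browser.
            self.driver = driver or webdriver.Firefox()
            self.base_url = base_url

        def login_as(self, user: str, password: str) -> None:
            """Log in via the login page; raise if the login visibly fails."""
            self.driver.get(f"{self.base_url}/login")
            self.driver.find_element(By.NAME, "username").send_keys(user)
            self.driver.find_element(By.NAME, "password").send_keys(password)
            self.driver.find_element(By.ID, "submit").click()
            # The error text is an assumption about the application under test.
            if "Invalid credentials" in self.driver.page_source:
                raise AssertionError(f"login failed for user {user!r}")

        def is_logged_in_as(self, user: str) -> bool:
            """Heuristic check of the logged-in state via the page content."""
            return f"Welcome, {user}" in self.driver.page_source

When the login page changes, only this class has to change; every test written against the user goal keeps running unmodified.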

Human Aspects
In a 2005 interview, Elfriede Dustin observed that “50 percent of my audience had test automation tools that have become ‘shelfware’” [6]. In order to prevent this shelfware effect, software test automation needs to deal with the human aspects as well as the technical ones.

Knowledge and Skill
Because test automation shifts part of the tester’s work toward programming, the knowledge and skill of the individual tester are critical. Some testers are not confident in their programming abilities, may need training, or may not be familiar with the particular programming language used. Some companies therefore bring in a software developer to write the test automation code; others ask their software developers to write the automation code themselves. In either case, the developers of the automation code should make use of reusable code where appropriate.

On the negative side, the automation code developer may introduce bugs in the automation code, leading to failing tests that should pass and, worse, passing tests that should fail. Therefore, testing the automation itself is crucial. Additionally, by bringing in a developer responsible for the automation code, the tester may feel devalued, reduced to simply writing down test data. This feeling may be overcome by team collaboration.
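One way to guard against such false results is to unit-test the automation layer against a fake driver. Below is a minimal pytest-style sketch, reusing the hypothetical Application facade from above, that checks login_as really does fail when the application rejects the credentials:

    # test_automation_layer.py -- testing the test infrastructure itself.
    import pytest
    from automation.application import Application  # hypothetical module

    class FakeElement:
        def send_keys(self, text): pass
        def click(self): pass

    class FakeDriver:
        """Stub driver that simulates a rejected login, no browser needed."""
        page_source = "Invalid credentials"
        def get(self, url): pass
        def find_element(self, by, value): return FakeElement()

    def test_login_as_reports_rejected_credentials():
        app = Application(driver=FakeDriver())
        # If login_as swallowed the rejection, this test would expose the
        # bug that turns failing application tests into silent passes.
        with pytest.raises(AssertionError):
            app.login_as("user", "wrong password")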

Collaboration
Collaboration with developers and business experts is probably the most difficult frontier. Testers may be accustomed to working with business experts on defining test cases for their particular project, but years of calls for independent testing, interpreted as “testers must not talk to developers,” may have drawn clear borders between programmers and testers. You need to overcome these virtual (or maybe even literal) walls.

As Peter M. Senge points out in The Fifth Discipline [7], a shared mental model needs to be developed early on. This means that the programmers, together with database experts, business analysts, and testers, need to sit together and develop a commonly shared model of the business rules of the project. Testers may not be used to, or comfortable with, being involved this early and sitting in a requirements discussion with stakeholders and customers. So, while their contributions at this early stage may be few in number, they are vital to developing a commonly shared mental model of the project.

This mental model, however, is unlikely to stay fixed over the course of the project. Whenever requirements change, the team needs either to refine the existing mental model or to develop a new one. Colocating the whole team, whether in a single room or on a single floor, helps to maintain this mental model over the course of the project. Because explicit and overheard communication helps to refine the existing model of the project, locating independent testers beside programmers helps to improve your test automation.

Focus on the What
Focusing the test data on the what of the business use case (i.e., what it does, not how it does it) may sound like easy advice. Unfortunately, for testers new to test automation, it may mean overcoming long-standing work habits. In manual testing, testers need to know both what to test and how to do it. A tester who has worked this way for decades will likely produce test data that simply reflects the manual testing style she is used to. This results in high test maintenance costs, since the test data has to change along with the application; for every bugfix in the application, all the tests need to be inspected and some may need to be adapted.

Focusing on the business rule may therefore be a hurdle for testers who have spent years in manual testing. For successful software test automation, these testers need to get out of their comfort zones, and they need clear guidance on how to focus the test data. Beyond reading the great books written on the topic [8, 9], bringing in a consultant might be the only option available.

Unambiguous Test Results
Finally, test results have to be unambiguous. When a test fails, it should be clear to the tester what went wrong. Gerald Weinberg points out in Perfect Software ... and Other Illusions About Testing [10] that pinpointing the underlying error is a responsibility better fulfilled by the individual developer. Therefore, test results need to be reliable in this regard.

To reach unambiguous test results, the tests themselves must be written clearly, focusing on the underlying business rule. One way to achieve this is to express what the software should fulfill in terms of user goals. When a test composed of several business rules fails, the tester looking at the result has to dive deeper into it in order to understand whether a single business rule was violated, multiple rules were violated, or something in the automation code went wrong. With a test suite of thousands of tests and maybe hundreds of failing ones, the responsible testers will spend most of their time hunting down problems in the automation. That is precious time spent analyzing results rather than testing. In the long run, this vicious cycle leads to test automation shelfware.
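A sketch of the difference, again using the hypothetical Application facade (change_password and logout are assumed, illustrative keywords): the first test tangles several business rules together, so a red result is ambiguous, while the rewritten tests check one rule each and name it:

    import pytest
    from automation.application import Application  # hypothetical module

    @pytest.fixture
    def app():
        return Application()

    # Ambiguous: several business rules tangled into one test. When it
    # fails, which rule was violated: login, password change, or logout?
    def test_login_everything(app):
        app.login_as("user", "very secret")
        assert app.is_logged_in_as("user")
        app.change_password("very secret", "even more secret")
        app.logout()
        app.login_as("user", "even more secret")
        assert app.is_logged_in_as("user")

    # Unambiguous: one business rule per test, named after the rule.
    def test_valid_credentials_log_the_user_in(app):
        app.login_as("user", "very secret")
        assert app.is_logged_in_as("user"), "valid login must succeed"

    def test_changed_password_takes_effect_on_next_login(app):
        app.login_as("user", "very secret")
        app.change_password("very secret", "even more secret")
        app.logout()
        app.login_as("user", "even more secret")
        assert app.is_logged_in_as("user"), "new password must be accepted"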

Therefore, automation testers need to be trained in how to express tests that focus on a single business rule. Because ambiguous test results will likely end in test automation shelfware, automation success relies heavily on the test-writing abilities of the testers.

Successful Test Automation
Successful test automation uses good tests that provide clear test results. A pass or fail verdict must be trustworthy. By focusing on what the system should do rather than how it does it, this can be achieved without breaking existing test cases whenever the implementation of the system changes. To build a clear understanding of the system, testers need to collaborate with customers, business analysts, and maybe database experts and programmers. On top of that, test automators need to be skilled and have the right knowledge about their job to achieve personal mastery.

All these points can be achieved by helping testers overcome barriers built up over time. Getting out of their cubicles and into a shared office is just the beginning. Successful test automation calls for testers who continuously involve themselves in the discussions right from the project’s start. By working side by side with programmers and other team members, they help bring the whole project to a successful end instead of a never-ending death march. 

References

  1. Bret Pettichord, “Seven Steps to Test Automation Success,” 2001
  2. Elfriede Dustin, “Lessons in Test Automation,” 1999
  3. Elisabeth Hendrickson, “The Difference between Test Automation Failure and Success,” 1998
  4. Markus Gärtner, “The Second System Effect in Software Test Automation,” 2010
  5. Alistair Cockburn, Writing Effective Use Cases, Addison-Wesley, 2000
  6. Elfriede Dustin, interview, 2005
  7. Peter M. Senge, The Fifth Discipline, Broadway Business, 2006
  8. Rick Mugridge and Ward Cunningham, Fit for Developing Software: Framework for Integrated Tests, Prentice Hall, 2005
  9. Gojko Adzic, Bridging the Communication Gap, Neuri Limited, 2009
  10. Gerald M. Weinberg, Perfect Software ... and Other Illusions about Testing, Dorset House, 2008