A Project Manager Described His Recent Agile Project in This Way:
Agile practices helped us know where we were the whole way through the project, but we had a side benefit I hadn't anticipated. The testers drove the project. In every other project I've managed, the testers were downtrodden, always complaining about the time they had left to test. But here, the testers defined the acceptance tests and helped the customer define her requirements. A couple of times the testers even explained risks to the customer. In one case, the testers told the customer that postponing a specific user story was tantamount to dropping it from this release.
I wasn't surprised. When I've seen testers fully participate on any project, they shine. Agile projects push testers into leadership roles. Projects that don't require tests before the code exists can easily push the testers into a background role. If you've ever been on a project where the developers didn't quite meet their deadlines but the schedule didn't slip to preserve the testers' time, you were on a project where the testers were pushed into the background. On those projects, it doesn't really matter what information the testers provide; there isn't enough time in the schedule for anyone to hear or act on it anyway.
Contrast those projects with agile projects. On an ideal agile project, the project team discusses the user stories to roughly estimate the time it will take to implement each one. The customer ranks the user stories (1, 2, 3, 4, and so on) to determine which user stories the project team will approach in this iteration. (If you don't have access to real users, use a surrogate, such as a product manager.) Then, before a single line of code is written, the testers write acceptance tests with the customer.
The developers can discuss how they might implement the user story, and they may even prototype to discover any system-level "gotchas" before implementation, but they can't write any production code until the acceptance tests are complete. Testers write system-level acceptance tests to drive the product development. The developers implement only enough code to pass the tests.
Some people get stuck here. "What do you mean we write tests before the product exists? How the heck do we do that?" One technique is to think of concrete examples of how the system will work. Say you're testing an elevator for a four-story building. The user story says the elevator comes when the user presses the button. Rather than "elevator response time shall be such-and-so," you think of one or several worst cases for response time (say, every floor calls the elevator while the car is parked at the farthest floor).
Then you define the response time for that worst-case scenario. This scenario leads to questions about other scenarios, and therefore to more tests. Those tests are easy to write before the implementation, and they tend to be written in essential terminology, which clarifies the business domain in people's minds. The tests help the developers start on their design, so that they create a product that meets your specifications.
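To make this concrete, here is a sketch of what such a worst-case acceptance test might look like, written in Python. The Elevator class is a toy stand-in; its method names and the thirty-second limit are invented for illustration. On a real project the test would drive the actual system through its published interface, using whatever limit you agreed on with the customer.

    # A worst-case acceptance test, sketched in pytest style.
    # Elevator is a stand-in simulation, not a real product API.

    class Elevator:
        """Toy model: one car, four floors, a fixed travel time per floor."""
        def __init__(self, start_floor, seconds_per_floor=5.0):
            self.position = start_floor
            self.seconds_per_floor = seconds_per_floor

        def seconds_to_answer(self, call_floor):
            # Response time here is just distance times per-floor travel time.
            return abs(self.position - call_floor) * self.seconds_per_floor

    def test_worst_case_call_from_lobby():
        # Worst case: the car sits on the fourth floor and someone on the
        # first floor presses the call button.
        elevator = Elevator(start_floor=4)
        assert elevator.seconds_to_answer(call_floor=1) <= 30.0

The test pins down one concrete scenario and one agreed number; arguing about that number with the customer is exactly the conversation that surfaces the other scenarios.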
One major side effect of this write-the-test-first doctrine is that the tests are as easy to automate and maintain as the code. The project staff and the customer start discussing the domain model in the code. The testers (with the customer) write the tests in domain terms; the developers write the code, draw mockups as GUI specifications, write the GUI code to the mockups, and then test the GUIs manually. (Some people do develop GUIs test-first, though, instead of mockup-GUI-manual.) Because the domain model is the common terminology, testers can write automation at the command level or against the published API. In either case, the testers define the contract between the product code and the test code, which is not the norm in traditional projects.
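As an illustration, here is what a test written in domain terms against the published API, below the GUI, might look like. Account, transfer, and InsufficientFunds are hypothetical names standing in for whatever your project's domain model provides:

    # A domain-level acceptance test. Account and its methods are invented
    # stand-ins for a real project's published API.

    class InsufficientFunds(Exception):
        pass

    class Account:
        def __init__(self, balance):
            self.balance = balance

        def transfer(self, amount, to):
            if amount > self.balance:
                raise InsufficientFunds()
            self.balance -= amount
            to.balance += amount

    def test_transfer_moves_money_between_accounts():
        checking, savings = Account(100), Account(0)
        checking.transfer(40, to=savings)
        assert (checking.balance, savings.balance) == (60, 40)

    def test_transfer_rejects_overdraft():
        checking, savings = Account(100), Account(0)
        try:
            checking.transfer(200, to=savings)
            assert False, "expected InsufficientFunds"
        except InsufficientFunds:
            assert checking.balance == 100  # nothing moved

Because the tests speak the business language instead of clicking through screens, they stay readable to the customer and survive GUI rework.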
Another side effect is that performance, reliability, and other system attributes (what many people call non-functional requirements) can be more easily defined with a user story at the beginning of the project. The testers and the developers can both keep measuring those system attributes, making sure new code doesn't slow down the system or make it less reliable. If you've ever tried to improve performance or reliability late in a project, you know how difficult late improvements are.
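One way to keep measuring is to fold a performance check into the automated suite, so any change that slows the system fails a test right away. This is a minimal sketch; process_batch and the one-second budget are placeholders for whatever operation and figure your user story specifies:

    import time

    def process_batch(records):
        # Stand-in for the operation whose speed the user story constrains.
        return [record * 2 for record in records]

    def test_batch_processing_stays_within_budget():
        records = list(range(100_000))
        start = time.perf_counter()
        process_batch(records)
        elapsed = time.perf_counter() - start
        # The one-second budget comes from the user story; running this on
        # every build catches regressions while they're still cheap to fix.
        assert elapsed < 1.0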
To fully participate in agile projects, the testers need to be able to translate user stories into acceptance tests. That may mean that the testers need help writing automated tests, or that the testers write the automation themselves. If you can't write code or develop automated tests in some fashion, it's worth your time to invest in training so that you can be a fully participating member of an agile project. Even if your project doesn't become fully agile, the idea of ranking requirements and working on the most important ones first, and of always having a fully functioning system, is too attractive to ignore.
If you're a tester, review your testing skills. Make sure you know how to test in a variety of ways, so that you can test requirements, performance, reliability, security, and any other system attributes a user story might contain. Think about how you could create automated tests that help the developers know when they have implemented only what they need for the user stories.
As you're reviewing your skills, take a look at your communication skills. Agile practices don't usually demand a lot of written documentation, although your adaptation of agile might. If you communicate verbally, make sure your verbal skills are up to the potential debates with customers and developers. When you're working on an agile project, you're all working for the good of the eventual user, not for any one group.
Agile projects require testers who have adaptable testing skills and excellent communication skills. Those who do will shine on…
Acknowledgements
I thank Dwayne Phillips and Brian Marick for their review.