Exploratory testing—questioning and learning about the product as you design and execute tests rather than slavishly following predefined scripts—makes sense for many projects. But does it make sense for agile projects? In this column, Johanna Rothman examines how exploratory testing might work on an agile project.
It's easy to see how exploratory testing works on waterfall projects. In fact, it makes a ton of sense on waterfall projects to do at least a little exploratory testing before you decide where to focus your efforts, design your tests, or consider any automation. That's because on a waterfall project, testers are much more likely to be crunched for time at the end of the project. (For ideas on how to use exploratory testing on a waterfall project where you don't have enough time, see my previous column, "So Many Tests, So Little Time.")
But does exploratory testing make sense on an agile project? Of course. Yet agile exploratory testing doesn't look the same as it does on a waterfall project. There is less of a need for exploratory testing after the product is created and a greater role for exploratory testing as the team defines and develops the product.
Michael Bolton says, "When people talk about exploratory testing, they often mean the process of test execution. But the three pillars of exploratory testing are test design, test execution, and learning." In waterfall projects, the execution is most visible. But on agile projects, the feedback loop between test design, test execution, and learning is quite short—and agile projects require all three pieces.
On agile teams, there is little role for a manual, black-box tester who waits for the requirements, the design, or the code and finally tests according to a predefined script. That's because there's no waiting for requirements, design, or code. The product owner "defines" a feature or chunk of a feature, but the definition is more like a promise to keep discussing that feature than a firm commitment—"This is what the feature will do."
If the developers are using test-driven development, the tests help define the design. Even if the developers aren't using test-driven development, they tend to work in small chunks—no more than a day or so long—which decreases the waiting time for code.
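To make that rhythm concrete, here's a minimal sketch of test-driven work on a day-sized chunk, written in Python with pytest. The validate_transfer_amount function and its rules are hypothetical, invented only for illustration; the point is that the tests come first and pin down what "done" means for the chunk.

```python
# A hypothetical day-sized chunk: validating a transfer amount.
# In test-driven style, these tests are written before the code
# and define the chunk's behavior.

import pytest

def test_rejects_zero_and_negative_amounts():
    with pytest.raises(ValueError):
        validate_transfer_amount(0)
    with pytest.raises(ValueError):
        validate_transfer_amount(-50)

def test_accepts_a_positive_amount():
    assert validate_transfer_amount(100) == 100

# The simplest code that makes the tests above pass:
def validate_transfer_amount(amount: int) -> int:
    if amount <= 0:
        raise ValueError("transfer amount must be positive")
    return amount
```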
There is still a role for testers who can turn their keen observations into ways to quickly test an area in depth. But those testers are even more valuable if they can develop tests and testing ideas—both manual and automated—before the feature is completely coded.
Roles for Agile Testers
- Testers can help the whole team work in a test-driven way by:
  - Assisting with defining requirements in the form of user stories and generating system-level tests from those requirements. Think of this as a questioning, test-driven approach to development. (A sketch of one such story-driven test follows this list.)
  - Assisting with modeling the requirements (user stories)—by asking questions—and generating system-level tests from these stories.
- Testers provide information about each chunk as the developer delivers it. Sometimes that information is confirmation that the code does what the user story says, does not have side effects, and is release-able at the end of each iteration. Sometimes that information is extra information about the product and other paths the testing could take.
- Finally, testers can perform system-level regression testing during an iteration. (To be honest, on well-established agile teams, the developers typically run the low-level regression tests, allowing the testers to do what they do best: explore, question, and learn.) If the organization is new to agile, the testers might run system-level regression testing at the end of the project.
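Here's one way a user story might turn into a system-level test. This is a sketch in Python with pytest; the story, the Bank class, and its methods are hypothetical stand-ins for whatever the team actually builds.

```python
# Hypothetical story: "As an account holder, I can transfer money
# between my own accounts, and both balances reflect the transfer
# immediately." The test below is a system-level check derived from
# that story.

class Bank:
    def __init__(self):
        self._balances = {}

    def open_account(self, number: str, balance: int) -> None:
        self._balances[number] = balance

    def transfer(self, source: str, dest: str, amount: int) -> None:
        if self._balances[source] < amount:
            raise ValueError("insufficient funds")
        self._balances[source] -= amount
        self._balances[dest] += amount

    def balance(self, number: str) -> int:
        return self._balances[number]

def test_transfer_between_own_accounts_updates_both_balances():
    bank = Bank()                     # Given two accounts...
    bank.open_account("111", 500)
    bank.open_account("222", 0)
    bank.transfer("111", "222", 200)  # ...when money moves between them...
    assert bank.balance("111") == 300 # ...then both balances agree.
    assert bank.balance("222") == 200
```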
Where Does Exploratory Testing Fit During an Agile Project?
Exploratory testing is helpful when working with the product owner and the project team to define user stories. Imagine you're working on a banking system and want to allow between-account transfers. The developers have a scheme to use checksums to verify the From and To account numbers. You may have heard about the Fossbakk case, an example of insufficient input checking and confirmation failure. If exploratory testers had been involved at the time of user story definition, one of them might have asked, "What happens if we put in a longer account number than the system expects?" Or, during the modeling stage (which I would certainly expect on an agile project building a financial transaction system on top of databases), an exploratory tester could ask questions such as, "How many ways can we make the transaction fail?" Testers who ask questions like that all the time, and who explore the answers before the code is written, may help the developers write better code. And they'll have the basis for some great, nasty tests to see what the system really does. The questions lead to the test design, which leads to the test execution, which provides learning for everyone on the project.
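Questions like these translate almost directly into tests. Here's a sketch in Python with pytest; the 11-digit account format and the toy checksum are assumptions made only to make the questions concrete, not the actual scheme from the Fossbakk case.

```python
# A hypothetical validator: accept exactly 11 digits whose last digit
# matches a toy checksum (sum of the first ten digits, modulo 10).

import pytest

def validate_account_number(number: str) -> bool:
    if len(number) != 11 or not number.isdigit():
        return False
    return int(number[-1]) == sum(int(d) for d in number[:10]) % 10

# "What happens if we put in a longer account number than the system
# expects?" and "How many ways can we make the transaction fail?"
@pytest.mark.parametrize("candidate", [
    "123456789012",   # one digit too long (the Fossbakk-style slip)
    "1234567890",     # one digit too short
    "1234567890x",    # non-digit character
    "",               # empty input
    "12345678904",    # right length, wrong check digit
])
def test_malformed_account_numbers_are_rejected(candidate):
    assert not validate_account_number(candidate)

def test_well_formed_account_number_is_accepted():
    # digits 1234567890 sum to 45, so the check digit is 5
    assert validate_account_number("12345678905")
```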
With these kinds of questions, testers can use exploratory approaches as a first cut at defining tests for chunks of code. Or, testers can use exploratory techniques after verifying that their automated tests are working and providing information about the product in its current state.
You can see that each question leads to learning or test design. In the same way, each test design leads to more questions or learning. The three pillars—test design, test execution, and learning—reinforce each other. You don't need to differentiate among the activities; which activity a tester performs is not relevant. What is relevant is that the tester performs all of them, and feeds back information to the rest of the project team.
How Does Exploratory Testing Fit at the End of an Agile Project?
The goal on an agile project is to have a release-able product at the end of each iteration. For me, that product includes the testing required for a release-able product. I prefer to do this with test-driven development, and the developers aren't the only ones who should be writing those tests. The entire project team (and especially the testers) needs to explore the product via questioning, test design and execution, and learning.
If developers build features in small chunks, the amount of exploratory test execution needed at the end of the coding for a chunk, or for the whole product, decreases significantly. Sure, developers are still going to make mistakes and cause side effects—that's why exploratory testing is helpful. But manual, black-box exploratory test execution that skips the test design and ignores the learning is not adequate once the developers implement by feature rather than by architectural piece.
Why Does Agile Change How Exploratory Testing Works?
Developers in waterfall projects tend to implement across the architecture. There's a group of developers writing the GUI, some others writing the middleware, still others managing the platform interactions. In this situation, there is no guarantee that the feature will work as designed, because the developers have no idea what side effects they've inserted into the code.
Conscientious developers do test as they develop. They may even mock up stubs to test their "features" inside their architectural layer. But the middleware people don't know the exact details of what the app layer is doing. And the platform people don't know the details of the middleware implementation. Remember, developers make tradeoffs every day in the form of small design decisions. They can't see all the implications of those decisions, which is why exploratory system-level testing is beneficial on waterfall projects.
In contrast, on many agile projects, developers implement by feature, writing all the code for a feature from the GUI through the middleware and the platform. Within a timebox, developers don't always complete an entire feature or feature set, but they do complete a coherent, testable feature or chunk of a large feature. The developers and testers have deep feature-based knowledge of the code.
Contrast the lack of feature-based knowledge in a waterfall project with the focus on delivering features in an agile project, and you'll find that implementing by feature may require much less exploratory execution after coding, because the exploration needs to happen before coding starts. The exploration is best when defining user stories and also works well when developing system-level tests from the user stories. You can imagine the questions everyone can develop, the test design that follows, and the learning from both, all before the code is even written.
On waterfall projects, exploratory system-level testing exposes architectural mistakes and side effects. But on agile projects, where the developers write architecturally coherent features, those waterfall kinds of errors tend to be exposed much earlier—or don't even exist.
What Kind of Testers Does an Agile Project Need?
I prefer agile testers who can script and code because the problems testers need to discover tend to be more complex: they cross features but don't necessarily cross the architecture. Finding those kinds of errors requires critical-thinking skills and the skill to get a computer to help explore the feature or feature set. Manual, black-box testers who don't understand the requirements, the design, or the code cannot do this kind of work.
Do I think all testers need to write code? In my perfect world, the answer would be yes, so that testers would always know when to explore with questions, with a keyboard, and with code. (My perfect developers use test-driven development and continuous integration, besides being angels.) But I know many valuable testers who don't write code yet understand the requirements, the system's architecture, and how the features are supposed to work. These testers can explore a feature manually or with help from someone who can write scripts or code. The key is that these testers are generalists.
Agile projects require true generalists as testers: people who have requirements-, design-, and code-understanding skills. Without those skills, they can't think critically enough about the product under development, and they might not be able to create a sufficient variety of tests. If they understand the requirements, design, and code, they can turn that understanding into crafty tests. Some of those tests will be exploratory, and some of the exploratory tests will need to be automated so they can be repeated. I've seen great testers on agile projects who can quickly create automated tests to do some of their exploration.
Agile projects require testers who can develop tests quickly and know which of those tests will need to be repeated. Testers should also be able to create those easily repeatable tests. I prefer tests that the testers can give to the developers and say, "Hey, run this and make sure you don't have those icky problems we discussed when we were talking about this user story."
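As a sketch of that handoff, here's how an exploratory finding might be captured as a small, repeatable pytest check a developer can run on every build. The same-account rule and the transfer function are hypothetical, invented for illustration.

```python
# An "icky problem" found while exploring, pinned down as a
# repeatable test: transferring from an account to itself must
# fail loudly, not silently succeed.

import pytest

def transfer(balances: dict, source: str, dest: str, amount: int) -> None:
    if source == dest:
        raise ValueError("source and destination must differ")
    if balances[source] < amount:
        raise ValueError("insufficient funds")
    balances[source] -= amount
    balances[dest] += amount

def test_transfer_to_same_account_is_rejected():
    balances = {"111": 500}
    with pytest.raises(ValueError):
        transfer(balances, "111", "111", 100)
    assert balances["111"] == 500  # balance untouched after the failure
```

Once a test like this exists, the exploratory session that found the problem pays off on every subsequent build, not just once.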
Exploratory testing doesn't vanish on an agile project, but it can't occur only at the end of a feature, an iteration, or a project. Exploratory testing must occur at the beginning, as part of the idea that the tests drive the system's design and code; in the middle, as developers sort through what they are writing; and at the end, to continue learning about the system.
Testers, you need to know that using manual exploratory testing only for test execution doesn't provide the speed (because you're missing the test design) or the depth of testing (because you're missing the questioning) that an agile team needs. Yes, exploring the issues around the user stories and the design is helpful, but that alone isn't enough testing, and isn't fast enough, for an agile team. Agile teams require exploratory testers, and exploratory testing in this sense does look different than it does on a waterfall project.
Acknowledgements
I thank Michael Bolton and Don Gray for their review of this column.