As a testing consultant, when I'm hired to test an application under development, I am aware of the looming testing tsunami. Unfortunately, many organizations are not aware of this phenomenon, so they don't plan for it, which puts their projects at risk.
What is the testing tsunami? A tsunami is a tidal wave. The testing tsunami is the tidal wave of testing work that occurs at the end of development. Consider the workload curves in software development. The developers' workload starts high and progressively decreases until all work is completed. I will ignore feature creep here, because it does not change the fundamental curve. Ditto for those who might say, "Doesn't it spike here and there?" Fluctuations are not important; only the overall trend matters.
This is just the opposite of the testers' workload. As more code is completed, the testers' workload increases. New features must be tested. Old features must be retested. A full regression test should be performed before each release into production. Integration testing is required if the application interfaces with other systems. Specialized testing such as load, stress, performance, and security testing usually begins toward the end of development. Making matters worse, the largest amount of testing happens just as the delivery deadline approaches and the budget runs out. Armed with this knowledge, what should an organization do to mitigate the effects of the testing tsunami?
The way to lessen the effects of the testing tsunami is to move testing activities up as soon as possible. This means hiring testers early on in the project. While this advice is by no means new, few companies follow this practice (I may have a skewed view of this, since I work as a consultant and tend to be hired late in the project's lifecycle or on projects already in trouble). Make sure testers are budgeted for starting from day one of the project.
What would testers do this early in development, when the requirements haven't even been gathered? Set up the testing infrastructure. Software development is a chaotic process. Testing requires a controlled environment, which translates into a large investment in testing infrastructure. The infrastructure needs will vary depending on the company's testing maturity. Are defect tracking processes, dedicated test servers, testers' workstations, test tool licenses, test databases, and development processes already in place? Project archeology (digging up and understanding old artifacts) may be needed if this is another attempt at a failed project or a new version of an old system.
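To make "set up the testing infrastructure" concrete, here is a minimal sketch of a day-one infrastructure smoke check, written in Python using only the standard library. Every hostname, port, and URL in it is a placeholder for your environment's real values, not a reference to any particular product:

```python
# Hypothetical infrastructure smoke check -- all hostnames, ports, and
# URLs below are placeholders; substitute your environment's values.
import socket
import urllib.error
import urllib.request

TCP_CHECKS = [
    # (description, host, port) -- TCP reachability checks
    ("dedicated test server", "test-server.example.com", 22),
    ("test database",         "test-db.example.com", 5432),
]

URL_CHECKS = [
    # (description, url) -- HTTP reachability checks
    ("defect tracker", "https://tracker.example.com"),
]

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_reachable(url: str, timeout: float = 3.0) -> bool:
    """Return True if the URL answers with any HTTP status at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # the server answered, even if with an error status
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    for name, host, port in TCP_CHECKS:
        print(f"{name:24s} {'OK' if tcp_reachable(host, port) else 'MISSING'}")
    for name, url in URL_CHECKS:
        print(f"{name:24s} {'OK' if http_reachable(url) else 'MISSING'}")
```

A tester can run this kind of script long before any application code exists; every "MISSING" line it prints is infrastructure work that can be done now instead of during the tsunami.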
Testers should be involved in requirements gathering meetings, but as observers. During this time, they can get familiar with the application and its end-users and begin forming test case ideas. Requirements and specifications are usually the source material for test cases, but the information in these documents is usually sparse and must be distilled into test cases. These documents usually focus on positive scenarios. For each positive scenario, there are usually many negative scenarios that must be developed and tested.
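As a hypothetical illustration of that positive-to-negative expansion, the sketch below takes one positive requirement ("a user can register with a valid email address") and fans it out into negative test cases using pytest. The validate_email function is a toy stand-in for whatever the application under test would actually provide:

```python
# Hypothetical example: one positive requirement from a spec expanded
# into many negative test cases. validate_email is a toy stand-in for
# the real application logic under test.
import re
import pytest

def validate_email(address: str) -> bool:
    """Toy validator standing in for the application under test."""
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address))

def test_positive_scenario_from_the_spec():
    # The one scenario the requirements document actually describes.
    assert validate_email("user@example.com")

@pytest.mark.parametrize("bad_address", [
    "",                   # empty input
    "userexample.com",    # missing @
    "user@",              # missing domain
    "user@example",       # missing top-level domain
    "us er@example.com",  # embedded whitespace
    "user@@example.com",  # doubled @
])
def test_negative_scenarios_implied_by_the_spec(bad_address):
    # None of these appear in the spec, yet all must be tested.
    assert not validate_email(bad_address)
```

Run with pytest, the single positive scenario from the spec yields half a dozen negative tests, which is exactly the multiplication the documents themselves rarely spell out.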
A Proof of Concept (PoC) is another best practice for reducing the testing tsunami, especially if specs are weak or non-existent. The sooner the testers get their hands on the application under test (AUT), the better. If test automation tools are being used, the PoC becomes key, as these tools should be evaluated against it. If the tool has already been purchased, you will get a good idea of how well it understands the AUT. What will you do if your test automation tool doesn't work with your application? Will you select a different tool, change your application to increase its testability, or abandon test automation?
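As one way such an evaluation might look, the sketch below uses Selenium WebDriver as the automation tool under evaluation. The PoC URL and element locators are placeholders invented for illustration; the goal is simply to learn, early, whether the tool can recognize the AUT's controls:

```python
# Tool-evaluation sketch using Selenium WebDriver as one possible
# automation tool. The PoC URL and element locators are placeholders;
# the point is to confirm the tool can recognize the AUT's controls.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

POC_URL = "http://localhost:8080/login"  # placeholder PoC address
LOCATORS = [
    (By.ID, "username"),
    (By.ID, "password"),
    (By.ID, "submit"),
]

driver = webdriver.Chrome()  # assumes a local chromedriver is installed
try:
    driver.get(POC_URL)
    for how, what in LOCATORS:
        try:
            driver.find_element(how, what)
            print(f"tool recognizes {how}={what}")
        except NoSuchElementException:
            # A miss here is exactly what you want to learn now,
            # not after the budget for automation has been spent.
            print(f"tool CANNOT find {how}={what}")
finally:
    driver.quit()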
Daily builds are another best practice, especially automated daily builds. Developers check in their code whenever they finish it, and each night a build is made. This has many advantages. First, developers get used to performing builds (or having builds performed); builds are no longer a big deal, fraught with errors. Second, testers can log bugs against new features earlier rather than later. Why wait for five features to be completed before building and releasing into test? Why create a mini-tsunami? Get it into test ASAP. Finally, daily builds give faster feedback to the developers, which in turn gives management an accurate status of the project. It's frightening to hear about testing organizations that will not accept code to test until the application is complete, or that stop testing as soon as they find a bug. Both of these are real examples I've encountered.
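A minimal sketch of such an automated nightly build, assuming placeholder make targets (make build, make test) standing in for your project's real build and test commands, might look like this:

```python
# Sketch of an automated nightly build-and-test run, meant to be
# scheduled by cron or a CI server. The build and test commands are
# placeholders for your project's real ones.
import subprocess
import sys
from datetime import datetime

STEPS = [
    ("update sources", ["git", "pull", "--ff-only"]),
    ("build",          ["make", "build"]),  # placeholder build command
    ("run tests",      ["make", "test"]),   # placeholder test command
]

def main() -> int:
    print(f"nightly build started {datetime.now():%Y-%m-%d %H:%M}")
    for name, cmd in STEPS:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Fail fast and loudly: a broken build found tonight is
            # cheaper than one found at the end of the project.
            print(f"FAILED at step: {name}")
            return result.returncode
    print("nightly build passed; ready to release into test")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Scheduled to run every night, a script like this turns "is the project on track?" from a guess into a daily answer.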
Instead of viewing testing as a quality function, view it as a project management function. Testing is a tool that provides a true, up-to-date status of the project. When wondering, "When should we start testing?" just ask yourself "When would I like to find out something is wrong or incomplete?"
Be aware of the massive workload that awaits testers at the end of the project, and do everything possible to move testing activities up as early as possible. This will lessen, although not eliminate, the testing tsunami.