In many IT organizations, Quality Assurance (QA) staff are not dedicated to projects, but are "shared resources" supporting many projects simultaneously. Vast armies of QA staff execute defined scripts to test and certify an application once development is complete. Because they lack application familiarity and test only at the end of the development lifecycle, QA staff require significant execution support, and the feedback they provide is late in coming and often inaccurate. By comparison, on Agile projects, QA staff are dedicated team members. Testers are co-located with business and development staff. Because they collaborate with the development team on formulating acceptance criteria, and engage in testing continuously through development, QA feedback is timely and relevant. In the Agile approach, QA is less of an encumbrance and more a partner in delivery, increasing the efficiency of the software development process and the effectiveness of solutions produced.
The Brute Force Approach to Testing
A number of factors conspire against the development of a robust QA function. First, QA staff are perceived not as active producers but as passive reviewers of IT solutions. As a result, QA does not attract the same level of funding as other IT functions, such as application development or infrastructure. Second, few IT or business leaders have risen from the ranks of QA. Third, there is a dearth of QA leaders in the job market at large, and most organizations do not invest (and often do not know how to invest) in the professional development of QA people. Finally, automated functional testing tools are seen as instruments to replace testers, but tests automated through the user interface have historically been fragile and thus high-maintenance. Together, these factors relegate QA to second-class status in many IT organizations, a situation further amplified by the fact that IT is itself a second-class citizen in the overall business.
Execution models have arisen in response, but not in opposition, to these headwinds. Lacking the peer status of other IT departments such as infrastructure and application development, and lacking enough leaders to bring to bear on every project, QA assumes the role of "solution auditor." Its mission is not, "how can we contribute to the technical quality and functional fitness of an application being developed?" It is instead, "how can we prevent a technical problem or functional misfit from escaping to a production environment?" The auditor role requires less depth of application familiarity, so QA assigns testers and leads to work on multiple projects at the same time.
To execute to any degree of success as a shared service, the onus is on QA leads to find ways to leverage their time. In artifact-happy IT organizations, this leads to the creation of large volumes of test scripts. The intent is to write scripts that exercise the functionality of the application, and to write them in such a way that just about anybody can execute them: press buttons, navigate screens, and compare results returned by the software to those prescribed by the script, passing or failing a script at any step of the way. The expectation is that QA leads shift from project to project writing test scripts, while the full force of QA testers can be brought to bear "on demand" to execute those test cases. When all test cases pass, the application is certified.
There are many operational risks with this approach. It assumes that the test scripts are of high quality, and that feedback is timely and actionable. These are unwarranted assumptions. Like any IT artifact, test scripts may be of poor technical construction (ambiguous or confusing to testers) or of poor functional construction (they don't test what needs to be tested). QA leads, shared across several applications, are more prone to error in writing scripts. Being part-time on every project, QA is forced to work independently of the development team. Test script production is consequently disconnected from the rest of the project. Scripts are written to abstract specifications in the early stages (that is, before software is ready to be tested), and executed in much later stages of a project (e.g., once development is complete). They are written neither in conjunction with the evolution of the software, nor in full collaboration with development. Development is often not made aware of specific acceptance criteria, nor does it receive testing feedback, until the very late stages of a project.
There is financial risk as well. This approach emphasizes unit-cost efficiency of test execution over a holistic approach to quality assurance. On a test-cases-executed-per-person basis this model looks attractive, but it is cost-effective only if the overhead of execution is low. The greater the effort required to stage testing activity (e.g., with data or instructions), or to interpret the results of testing performed, the greater the cost of execution.
This is an especially important consideration. This approach assumes that the result of a test case (passed or failed) provides meaningful feedback about the quality of the application. It may not. In the hands of inexpert testers, a failed test case may indicate nothing more than that the test could not be passed at the moment it was executed. That could be for reasons of environment (integrated components may be unavailable), test data (an incorrect combination of attributes), misinterpretation of the results (it works, but the tester did not recognize success), or a defect in the test script itself (it incorrectly defines a business scenario). Inexpert testing cannot distinguish among these causes. It can only compare results on screen to those on paper. "On demand" involvement with an application means that QA testers have little fluency with it. They are limited to doing only the work they are explicitly told to do, lacking the capability to solve problems and interpret results.
What this approach to QA lacks in finesse, it tries to make up for in volume: application familiarity is devalued in favor of fielding a large number of testers who can hammer transactions against an application. This is testing by brute force. Staffing greater numbers of testers only aggravates the problem, as QA leads quickly become over-committed. The most capable QA people have less time to spend on the most complex application problems. They are forced instead to dedicate their time to managing the vast army of test executors. Because the effort required to stage test execution and disposition test results rises directly with application complexity, there is little value delivered.
This brings into question the role of QA: as auditor, QA is less a partner and more a gatekeeper in solution delivery. A partner works in unity with a team to achieve a goal, collaborating to successfully deliver an application. The auditor takes a confrontational posture, preventing development from delivering a mistake. A confrontational participant that performs acts akin to blunt force trauma to certify application quality is more an obstacle than an enabler of solution delivery. Instead of being a center of excellence that partners in delivery of high-quality solutions, QA works from the sidelines, running the risk of devolving into an under-funded, under-staffed and under-achieving function that depresses returns on IT investments.
The Agile Approach: QA as a Value-Added Partner
The Agile approach to QA is significantly different. QA people are dedicated, full-time participants in an Agile development team. They are co-located and work collaboratively with both the business and developers. Acceptance criteria are defined in collaboration with the business and integrated with the requirements themselves. This provides a significant advantage over the "brute force" approach to testing, both within projects and across the application portfolio.
The unit of work in an Agile team, the Agile Story, is a small statement of need expressed from the business perspective. A properly formed Story includes explicit acceptance criteria: each Story concludes with a clause that begins, "I will know this Story is complete when..." Capturing acceptance criteria with every requirement gives a development team a more complete understanding of the requirement itself. This leads to higher-quality artifacts, both code and tests: test scripts, for example, are more likely to reflect real business scenarios. This reduces the risk that development or testing effort is misdirected toward creating a poor or inaccurate solution.
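To make this concrete, here is a minimal sketch of how a Story's acceptance clause can be encoded as an executable check the whole team agrees on up front. The Story wording, the transfer_funds function, and its parameters are invented for illustration; they are not from any particular project, and the test assumes a runner such as pytest.

```python
# Hypothetical Story:
#   "As an account holder, I want to transfer funds between my accounts.
#    I will know this Story is complete when a transfer debits the source
#    account and credits the destination account by the same amount."

def transfer_funds(accounts, source, destination, amount):
    """Illustrative implementation: move `amount` between two accounts."""
    if accounts[source] < amount:
        raise ValueError("insufficient funds")
    accounts[source] -= amount
    accounts[destination] += amount

def test_transfer_debits_source_and_credits_destination():
    # The Story's acceptance clause, expressed as an assertion rather
    # than a paper script for someone else to execute by hand.
    accounts = {"checking": 100, "savings": 0}
    transfer_funds(accounts, "checking", "savings", 40)
    assert accounts["checking"] == 60
    assert accounts["savings"] == 40
```

Because the criterion is captured alongside the requirement, business, development, and QA all work from the same definition of "done."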
During development, an Agile team releases code frequently to a QA environment. This allows QA to engage in continuous testing cycles, eliminating the latency of test feedback to development. QA also works in step with development: the most current knowledge of the solution is incorporated in tests, and test execution is performed against a current copy of the code base. This reduces the risk of "false positive" failure reports, as tests are aligned with and executed against current code. Finally, because testing starts earlier in the lifecycle, there is less risk that a fatal flaw will be exposed late in the development process. Continuous testing allows QA to incrementally develop and execute a library of regression tests that help maintain product quality over time. Altogether, Agile QA is an active and integrated partner from the earliest stages of development.
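As a rough sketch of how such a library accumulates, tests written for Stories in earlier iterations stay in the suite and run against every code drop, so a regression surfaces immediately rather than at end-of-project certification. The apply_interest function, the file name, and the iteration labels below are illustrative assumptions, not a prescribed structure.

```python
# test_regression.py -- an illustrative regression suite that grows
# with the product and runs against each release to the QA environment,
# e.g.:  pytest test_regression.py
from decimal import Decimal

def apply_interest(balance, annual_rate):
    """Illustrative function under test."""
    return balance + balance * annual_rate

# Retained from an earlier iteration: protects behavior already delivered.
def test_interest_is_credited():
    assert apply_interest(Decimal("100"), Decimal("0.05")) == Decimal("105")

# Added in the current iteration for the Story just completed.
def test_zero_rate_leaves_balance_unchanged():
    assert apply_interest(Decimal("250"), Decimal("0")) == Decimal("250")
```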
The Agile approach to QA provides more than operational benefits. It strengthens both a delivery team and the QA function overall. Being embedded with and dedicated to a development team, QA people are immersed in the business problem as well as the solution while it is being developed. Being fluent in the application, a tester is more likely to recognize the difference between an application failure and an environmental or situational blocker, and is therefore better prepared to correct the situation independently. This means testers are less likely to submit false defect reports, which create "noise" that masks the state of application quality and impair team efficiency by wasting the time of the people who must disposition them.
This approach also incubates QA leaders. When able to work only part-time in any given domain, QA can do little more than produce testing artifacts and perform tactical execution, whereas working in full collaboration with a development team allows QA staff to gain deep business knowledge. QA staff who are immersed in the problem domain are uniquely positioned to be IT knowledge workers. This makes them better business problem solvers, and stronger participants in IT solution delivery overall.
Engaged Participant Versus Disengaged Auditor
The intended deliverable of any IT project is a technically sound, functionally fit business solution. This is achieved through the engaged participation of all IT disciplines, including infrastructure, development, and QA. By only playing the role of auditor, QA is a disengaged member of a solution team, substituting brute force for finesse to certify an application as production-ready. Alternatively, by collaborating from the early stages of the lifecycle, and executing continuously throughout, Agile QA works as a value-added partner that directly contributes to an increased understanding and gradual evolution of a business solution. Ultimately, this approach increases both the value of IT and the return on IT investments.
About the Author
Ross Pettit has 15 years' experience as a developer, project manager, and program manager working on enterprise applications. A former COO, Managing Director, and CTO, he also brings extensive experience managing distributed development operations and global consulting companies. His industry background includes investment and retail banking, insurance, manufacturing, distribution, media, utilities, market research and government. He has most recently consulted to global financial services and media companies on Agile transformation programs, with an emphasis on metrics and measurement. He is a frequent speaker and active blogger on topics of Agile management, governance and innovation. He is currently a Client Principal with ThoughtWorks.