In more than ten years of leading test and quality assurance efforts at companies that had no formal testing, one thing I have noticed is how difficult it is to persuade many developers and their managers that quality assurance standards and practices exist, and that the company would benefit from adopting those tools and methods.
It’s a constant challenge to dispel the notion that testing is a financial drain and a time bottleneck that threatens product delivery. My counterargument has always been that post-release bug fixes for poorly tested (or even untested) software are more expensive and more damaging to the company’s reputation, and that properly planned and executed testing does not cause delays or excess costs.
Most of the developers at my new company had never worked with a full-time tester and had no knowledge of software testing processes. “Testing” meant another developer on the team briefly looking at newly developed features before pronouncing them fit for release to the client. There were no test plans, test cases, test reports, or any other written test artifacts.
The reason for bringing me on as an independent QA person was to let the developers concentrate on coding. Many developers had the impression that I was there to quickly review new websites and features before deployment to production. The project managers’ and developers’ initial idea of testing was to give me a few hours "to check if anything's broken" before handing the software to the client.
Although it was a step in the right direction, creating any test documentation was a gamble: the company had multiple teams working simultaneously on several projects, so it was difficult to predict the completion status of any piece of software amid sudden and often undocumented changes. My pleas to be included earlier in the development process and to build automated regression suites were regarded as a good idea, but in practice they never went beyond that point.
My presence in the company did improve software quality, but the "over the wall" philosophy still prevalent at the time prevented more significant quality improvements. Luckily, new additions to the company and a switch to DevOps created an opportunity to shift the testing process far to the left.
So now I’m embarking on a new endeavor: introducing my company's teams of talented developers to the concept of continuous testing and implementing it within the DevOps framework.
Planning for Quality
A fortunate convergence of a newly created DevOps team and newly hired lead developers was the main catalyst for a shift in our thinking about who was responsible for quality, at what point quality would become a concern, and what practices would be used to ensure quality from the inception of any project.
As the quality assurance specialist, I was tasked by the new DevOps team and new lead developers with helping to create a process for testing at each stage of development. My goal was to have testing and development work in parallel and act in conjunction for their mutual benefit.
Instead of the usual practice of banging out code for a list of features every sprint, we now record a prioritized list of features, in descending order of importance, in a TESTME.md file: a short Markdown file that uses the Gherkin syntax. This list focuses both the developer and the tester on coding and test creation for each feature simultaneously, in order of importance.
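As a hypothetical illustration (the sprint number and feature names below are invented, not taken from a real project), a sprint's TESTME.md might look something like this:

```markdown
# TESTME.md - Sprint 14 (features listed in descending priority)

Feature: Document library search
  Scenario: A visitor finds a document by keyword
    Given the document library contains published documents
    When the visitor searches for "annual report"
    Then matching documents are listed, most relevant first

Feature: Saved search filters
  Scenario: A returning user reapplies a saved filter
    Given the user has saved a filter named "2023 PDFs"
    When the user selects that filter
    Then only documents matching the filter are shown
```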
At this point, my job as the QA specialist is to concentrate on writing the project's test strategy, the sprint test plan (usually no more than two pages long), functional test cases, and test results.
Unit and Integration Tests
Instead of relegating the task of "checking to see if anything's broken" to the tester, developers now prioritize the features within sprints very early in the project, practice test-driven and behavior-driven development, and verify functionality with unit testing and behavior-driven test automation frameworks.
Although it is not a written policy, many developers have seen the advantage of test-driven development and, on their own, have begun writing unit tests first and then writing the code to make those tests pass.
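A minimal sketch of that test-first workflow, assuming a hypothetical cart module that does not exist yet when the test is written (the module and its API are illustrative, not our actual codebase):

```python
# test_cart.py - a unit test written before the production code exists (TDD).
# The cart module, Cart class, and its methods are hypothetical examples.
import pytest

from cart import Cart


def test_empty_cart_total_is_zero():
    assert Cart().total() == 0


def test_adding_an_item_increases_the_total():
    cart = Cart()
    cart.add_item(sku="DOC-101", price=19.99, quantity=2)
    assert cart.total() == pytest.approx(39.98)
```

The developer then writes just enough of the Cart class for these tests to pass before moving on to the next prioritized feature.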
The same is true for integration tests. The developers do not draw a hard dividing line between what defines a unit test versus an integration test; they generally agree to just write the test and, for organizational purposes, label it an integration test if it involves the interaction between two independent classes.
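Under that working definition, an integration test simply exercises two classes together rather than one in isolation. A sketch with hypothetical classes (the document store and search service below are invented for illustration):

```python
# test_search_integration.py - labeled an integration test because it exercises
# two independent classes together. Both classes are hypothetical examples.
from document_store import DocumentStore
from search_service import SearchService


def test_search_returns_documents_stored_in_the_library():
    store = DocumentStore()
    store.add(title="Annual Report 2023", body="Revenue grew this year.")

    search = SearchService(store)  # real collaborator, no mocks
    results = search.query("annual report")

    assert any(doc.title == "Annual Report 2023" for doc in results)
```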
After executing the unit and integration tests, the developer moves on to API tests, which are automated with a tool chosen for the particular project.
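The tooling varies by project; one common approach is plain pytest with the requests library pointed at a test environment. The base URL, endpoint, and response shape in this sketch are placeholders, not a real service:

```python
# test_documents_api.py - an automated API test against a staging environment.
# The URL, endpoint, and JSON structure are hypothetical placeholders.
import requests

BASE_URL = "https://staging.example.com/api"


def test_document_search_endpoint_returns_a_result_list():
    response = requests.get(
        f"{BASE_URL}/documents",
        params={"q": "annual report"},
        timeout=10,
    )

    assert response.status_code == 200
    payload = response.json()
    assert isinstance(payload["results"], list)
```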
Functional Tests
By this point I’ve taken the time and opportunity to review the creative briefs, requirements, and design documents, as well as the TESTME.md list of prioritized features. Each sprint has also been planned and features for development and testing have been defined, giving me time to create tests for the sprint. I subject the tests to informal review to get more ideas about test priorities, general tips, and what to test in detail. This is an example of development and testing being done in parallel for the mutual benefit of each process.
I also create end-to-end tests, such as a user logging into a site, searching for a product, adding it to the cart, and going through the purchase process, or searching a large document library for a specific document and then downloading it. I review and execute the tests to gauge how well the newly developed features work with the existing features.
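These end-to-end flows map naturally onto the same Gherkin syntax used in the TESTME.md file, so a purchase scenario might be sketched like this (the product and site details are illustrative):

```gherkin
Feature: Online purchase

  Scenario: A shopper buys a product found through search
    Given a registered shopper is logged in
    When the shopper searches for "wireless keyboard"
    And adds the first result to the cart
    And completes checkout with a saved payment method
    Then an order confirmation page is shown
    And a confirmation email is queued for the shopper
```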
The test results are collected and made available as screenshots to everyone on the project, as well as to the client. An important point in testing is transparency, and value is added to the project when weak points and unstable areas are found and fixed. The purpose of the test results is to show which areas were initially unstable, what kinds of issues were found, and that those issues no longer exist in the applications under test.
Regression Test Suite
Once it is confirmed that the functional tests have passed, I create a regression set automated with a behavior-driven test tool. Experience has shown that features that previously passed and run reliably in production can fail after new features are added, so the tests that go into our automated regression sets confirm that features that previously passed in both the manual and automated functional sets still pass. For example, passing manual and automated functional tests for a document library search go into a regression set to verify that the search still works.
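With a Gherkin-based tool, promoting a passing functional scenario into the regression set can be as simple as tagging it; the tag name and scenario below are illustrative:

```gherkin
@regression
Feature: Document library search

  Scenario: A visitor finds a previously published document
    Given the document library contains the document "Annual Report 2023"
    When the visitor searches for "annual report"
    Then "Annual Report 2023" appears in the search results
```

A tag filter (for example, Cucumber's --tags option) then lets the pipeline run only the scenarios carrying the regression tag after each build.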
At this point, all tests can be executed in the pipeline.
The Changing Face of QA
One observation I have made throughout this process is that my place as the QA specialist has changed. My initial role was at the bottom rung of the development process, where testing simply "checks if anything is broken"; now I float above the process to verify that the correct quality practices are followed by development.
Even in DevOps, testers should still create and execute manual tests. Automated behavior-driven tests early in the process cannot catch every bug, because their emphasis is on feature behavior.
At my company, a focus on software quality now permeates every project. The changes we enacted have shown that we have a faster testing process when the software is tested throughout the development lifecycle, by developers, testers, and automation alike. DevOps requires continuous testing, so we need a constant focus on quality.
User Comments
Wonderful article, Anastasios! Your transition from the bottom to the top on the priority order is a reflection of the journey that QA has traversed so far. Both developers and IT heads have started to understand the value of testing early. Instead of shifting QA to the left, right, or center, it should be present throughout the SDLC.