Test creation is perhaps the most important aspect of software testing. Though most of our focus goes to test execution and report generation, because those are testing's most visible outputs, the backbone of the entire process lies in creating accurate and meaningful tests.
Many teams struggle with test creation during their agile journey, and they often seem to be doing a lot of guessing during test design or creation. This can be due to miscommunication or a lack of requirements, testers not being present during design phases or discussions, a shortage of time, incomplete information about which test environment or tools to use, or unknowns such as relevant positive and negative scenarios to test or overall purpose and scope of the application.
Hurried guesswork during test creation leads to bigger problems later in the project and poses a risk to the overall quality of the product.
Say your team has to test a webpage. Apart from some functional specifications about controls on the page, they do not have much information. But they have so many things to decide on before starting to test, such as what browsers to test on, which tool to use for automation based on the types of controls to be identified, what workflows to execute as part of integration testing of this page with the entire website, and so on.
It is also very important that those functional specifications about the webpage, its controls, and the validations on each control are complete and thorough. When the team is designing and creating the test plans and test scenarios, ambiguous or missing requirements lead them to assume many parameters and guess at cases based on personal perception. For instance, a tester may assume Chrome is the web browser under test and start using their locally installed version as the default, without asking about customer preferences and all the versions that need support.
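One way to remove that particular guess is to make the supported browser matrix an explicit, reviewed artifact rather than a personal default. The following is a minimal sketch using pytest and Selenium WebDriver; the browser list, the login page URL, and the title check are hypothetical placeholders, and the real list must come from the customer's stated support requirements.

    # A minimal sketch: the browser matrix is explicit, agreed test data,
    # not an assumption. Browsers and the URL below are hypothetical.
    import pytest
    from selenium import webdriver

    SUPPORTED_BROWSERS = {
        "chrome": webdriver.Chrome,
        "firefox": webdriver.Firefox,
        "edge": webdriver.Edge,
    }

    @pytest.fixture(params=sorted(SUPPORTED_BROWSERS))
    def browser(request):
        driver = SUPPORTED_BROWSERS[request.param]()  # launch the agreed browser
        yield driver
        driver.quit()

    def test_login_page_loads(browser):
        browser.get("https://example.com/login")  # hypothetical page under test
        assert "Login" in browser.title

Running the suite then exercises every browser the customer actually cares about, and adding or dropping a browser is a visible change to the list instead of a silent assumption.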
Suppose the specification for a username field on the webpage does not state its upper limit or which characters are allowed. The tester may assume a standard ten-character limit with only letters allowed and start testing accordingly, without asking relevant questions about regional and domain-specific requirements. If the website has French users whose names contain special characters, or Indian users whose names frequently exceed ten characters, those valid inputs fall outside the tester's assumed limits, and the guesswork applied during testing leaves them uncovered.
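Once the real limit and character rules have been clarified with the product owner, they can be captured as explicit test data instead of silent assumptions. Here is a minimal pytest sketch; the 50-character limit and the validate_username() helper are hypothetical stand-ins for whatever the requirements and the application actually define.

    import pytest

    MAX_LEN = 50  # assumed limit, confirmed with the requirements owner, not guessed

    def validate_username(name: str) -> bool:
        """Hypothetical stand-in for the application's validation logic."""
        return 0 < len(name) <= MAX_LEN

    @pytest.mark.parametrize("name, expected", [
        ("Ravi", True),                  # short name, well inside the limit
        ("François", True),              # French name with an accented character
        ("Lakshminarayanan", True),      # Indian name well past ten characters
        ("", False),                     # empty input is rejected
        ("x" * (MAX_LEN + 1), False),    # just over the agreed limit
    ])
    def test_username_validation(name, expected):
        assert validate_username(name) is expected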
Another problem teams may face is guessing test data. Are you using proper techniques to derive test data values, or just feeding in random values? Suppose you are testing an addition function. Are you randomly sending number inputs and verifying the output? We should be using proper design techniques such as equivalence class partitioning or boundary value analysis to derive a proper set of test values.
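As a sketch, boundary value analysis for that addition function might look like the following. It assumes, purely for illustration, that valid inputs are 32-bit signed integers, and the add() function is a hypothetical stand-in for the code under test.

    import pytest

    INT_MIN, INT_MAX = -2**31, 2**31 - 1  # assumed valid input range, for illustration

    def add(a: int, b: int) -> int:
        """Hypothetical function under test."""
        return a + b

    # Boundary values of the valid partition rather than random picks:
    # the minimum, just above the minimum, a nominal value, just below
    # the maximum, and the maximum.
    @pytest.mark.parametrize("a, b, expected", [
        (INT_MIN, 0, INT_MIN),
        (INT_MIN + 1, -1, INT_MIN),
        (0, 0, 0),
        (INT_MAX - 1, 1, INT_MAX),
        (INT_MAX, 0, INT_MAX),
    ])
    def test_add_at_boundaries(a, b, expected):
        assert add(a, b) == expected

The point is not the specific numbers but that each value is chosen from a partition or boundary, so anyone reviewing the tests can see why it is there.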
This is how poor communication or a lack of direction results in bad test design. We need to minimize this guesswork and make informed decisions before we begin the actual testing.
Getting Testers Involved in the Requirements Phases
My team was quick to realize this when we faced missing and vague requirements in one of our agile projects. We noticed that, due to many of the factors mentioned above, we were subconsciously assuming things when designing tests and later realizing our assumptions were incorrect; many of our defects were marked invalid, and we had to redo a lot of work. We decided to fight this problem.
The testing team started to request that we be allowed to participate during the requirements phases, in user story creation as well as design discussions. This gave us better clarity on many of the requirements and functionalities, and it also ended up helping the entire team, because we raised questions about the requirements and designs from a testing perspective and helped produce better documents from the user stories, which later formed the basis of our tests.
The role of communication cannot be stressed enough here. We encouraged our teammates to ask the development team for more information—what to test, which environments to use, what could impact our testing, etc.—and our tests based on these solid facts and details were much more useful and successful.
Another good idea was involving the entire testing team, instead of having one or two team members create tests while the others just worked on executing them. We shared the details of the functionality to be tested with all the testers involved, and everyone collaborated on creating tests. With everybody giving their perspectives and opinions and challenging each other’s assumptions, our tests were bound to be more elaborate and exhaustive.
Reviews were another important way to broaden the scope of our tests. We got peer testers, developers, test managers, and product owners to do buddy reviews of our tests to ensure that we were not limited by our own imagination and perspective. This opened up a lot of untouched areas and left no room for guesswork! It also gave us confidence that everybody had signed off on the designed tests, so it was unlikely that our defects would later be marked invalid.
If you don’t have enough information or communication to create your tests, that doesn’t mean you should turn to guesswork. Your tests will suffer in quality and completeness. We must always strive to get the necessary details and only then create our tests on solid ground. This will eventually result in a lot less rework and much more valuable testing.
User Comments
This, so very many times this. As a requirements engineer, I always ask the question "how would you test this?" or, even more generally, "what evidence would you need to see to show that this works / delivers as promised?" Getting testers involved early on means that these or similar questions will be asked.