In this StickyToolLook interview, Seapine Software's Paula Rome takes a closer look at the relationships among requirements, test cases, and traceability in a test plan.
StickyToolLook: Would you describe the relationship between requirements and test cases as you see it?
Paula Rome: Requirements come in all shapes and sizes, from formal, functional, or business requirements to agile user stories to technical specifications. Some sources define a requirement as a statement of the attributes, capabilities, or qualities a product must have to meet the specific needs of a user. Even though requirements vary widely in what they look like and in the methodology behind their creation, I still like to describe a requirement as "what" the product should be or is supposed to do, and a test case as the plan that verifies what you promise to ship.
Every test case should be related to at least one requirement. Without a corresponding requirement, there's no point or purpose to a test case. If a test case isn't verifying a requirement, you have to ask yourself, "Why would I run this test?" You would be wasting time and money testing functionality that your customers or stakeholders don't want.
The flip side is important, too. A requirement needs at least one test case to verify that it actually makes it into the final release. This gets complicated when there is a hierarchy of requirements: a high-level business requirement might be broken down into several lower-level functional and technical requirements. Your test plan should spell out the types and levels of requirements for which you will generate test cases.
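To make those two rules concrete, here is a minimal sketch in Python. It is not tied to TestTrack or any other tool; the Requirement and TestCase classes and the audit function are hypothetical names invented for illustration. It flags test cases that verify no requirement and leaf-level requirements that no test case covers, treating a high-level requirement as verified through its lowest-level children.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    title: str
    children: list["Requirement"] = field(default_factory=list)

@dataclass
class TestCase:
    tc_id: str
    title: str
    requirement_ids: set[str] = field(default_factory=set)

def leaf_requirements(req: Requirement) -> list[Requirement]:
    """A high-level requirement is verified through its lowest-level children."""
    if not req.children:
        return [req]
    leaves: list[Requirement] = []
    for child in req.children:
        leaves.extend(leaf_requirements(child))
    return leaves

def audit(roots: list[Requirement], test_cases: list[TestCase]) -> None:
    leaves = [leaf for root in roots for leaf in leaf_requirements(root)]
    leaf_ids = {req.req_id for req in leaves}
    covered = {rid for tc in test_cases for rid in tc.requirement_ids}

    # Rule 1: a test case that verifies no requirement is wasted time and money.
    for tc in test_cases:
        if not tc.requirement_ids & leaf_ids:
            print(f"{tc.tc_id} verifies no requirement; why run this test?")

    # Rule 2: a requirement with no test case may slip out of the release unverified.
    for req in leaves:
        if req.req_id not in covered:
            print(f"{req.req_id} ({req.title}) has no test case")

# Hypothetical example data:
checkout = Requirement("REQ-1", "Customer checkout", children=[
    Requirement("REQ-1.1", "Accept credit cards"),
    Requirement("REQ-1.2", "Email order confirmation"),
])
audit([checkout], [TestCase("TC-7", "Pay with Visa", {"REQ-1.1"})])
# -> REQ-1.2 (Email order confirmation) has no test case
```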
StickyToolLook: How does traceability play into that relationship?
Paula Rome: Traceability relies on the relationships between project artifacts, like requirements and test cases, to give you better visibility and insight into your project status. When you are doing a traceability task, such as checking your test coverage or performing an impact analysis, you are essentially asking questions about the relationships between the items in your project:
- Is there a test case for every requirement?
- If I change this requirement, which test cases are affected?
- How is testing going for these requirements?
Maintaining information on the relationships between project items provides the necessary data to answer these types of traceability questions.
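To show how mechanical those three questions become once the relationships are recorded, here is a hedged sketch in Python; the links, results, and requirements structures below are hypothetical stand-ins for whatever your tools store, not any product's actual API.

```python
from collections import defaultdict

# Hypothetical recorded links: test case ID -> requirement IDs it verifies,
# plus the latest run result for each test case.
links = {
    "TC-1": {"REQ-1"},
    "TC-2": {"REQ-1", "REQ-2"},
    "TC-3": {"REQ-3"},
}
results = {"TC-1": "pass", "TC-2": "fail", "TC-3": "not run"}
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Invert the links so each requirement lists the test cases that verify it.
by_requirement = defaultdict(set)
for tc, reqs in links.items():
    for req in reqs:
        by_requirement[req].add(tc)

# Q1: Is there a test case for every requirement?
print("Untested:", sorted(requirements - by_requirement.keys()))   # ['REQ-4']

# Q2: If I change REQ-1, which test cases are affected?
print("Impacted:", sorted(by_requirement["REQ-1"]))                # ['TC-1', 'TC-2']

# Q3: How is testing going for these requirements?
for req in sorted(requirements):
    statuses = [results[tc] for tc in sorted(by_requirement[req])]
    print(f"{req}: {', '.join(statuses) or 'no coverage'}")
```

The point is not the code itself but that each traceability question reduces to a simple lookup once the relationships are maintained; without them, answering the same questions means combing through documents by hand.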
StickyToolLook: What are some of the obstacles that have kept traceability from reaching wider acceptance by test teams?
Paula Rome: Traceability is not a hard concept. What's difficult for many teams is keeping track of the relationships between project artifacts throughout the course of the project.
In many companies, each group within the project team has its own tools, each with a separate storage location for documents or databases.
Analysts and project managers might keep requirements in Microsoft Word documents on one server, while developers use separate standalone systems for issues and source code, and the test team maintains test cases in spreadsheets on yet another server.
Unless you have a common repository for the items you want to trace (like our TestTrack product, which tracks related requirements, test cases, and issues), it is very difficult to understand and maintain the relationships between those items without a lot of manual effort. And it's that manual effort that kills most traceability initiatives. Teams that have adopted an integrated solution can afford to implement traceability best practices for their projects.
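As one illustration of how little effort is left once everything lives in a single store, this hypothetical sketch (the items list stands in for an export from any shared repository; no real tool's format is implied) turns recorded links into a simple requirements-to-test-cases traceability matrix:

```python
import csv
import sys

# Hypothetical export from a single shared repository of work items.
items = [
    {"id": "REQ-1", "type": "requirement", "links": []},
    {"id": "REQ-2", "type": "requirement", "links": []},
    {"id": "TC-1",  "type": "test case",  "links": ["REQ-1"]},
    {"id": "TC-2",  "type": "test case",  "links": ["REQ-1"]},
]

# One row per requirement, listing the test cases that link back to it.
writer = csv.writer(sys.stdout)
writer.writerow(["Requirement", "Test cases"])
for req in (i for i in items if i["type"] == "requirement"):
    linked = [i["id"] for i in items if req["id"] in i["links"]]
    writer.writerow([req["id"], "; ".join(linked) or "NONE"])
# Requirement,Test cases
# REQ-1,TC-1; TC-2
# REQ-2,NONE
```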