The rapid pace of application development today does not always leave a project time for lengthy requirements assessment and testing; in most cases, time is of the essence. Identifying a simple process for gathering and assessing Requirements makes the testing of applications easier and lessens the risk of delivering inadequately tested software. This paper provides the steps necessary to implement that process.
Overview
Much has been written on the importance of Requirements in the testing arena. Much more will be written in the future. In some IS organizations the word “Requirements” is taboo. As a colleague so artfully put it to me, "you are using the 'R' word". But as all Testers know, it is imperative to obtain documented Requirements in order to perform adequate testing of software applications. "Quality takes time," and time is not a luxury that Testers have. However, the few minutes it takes to document and review Requirements are well worth the effort.
In most cases, Information Services organizations document Requirements for new system development projects either formally or informally, or use Requirements-gathering tools to collect them. Conversely, in some IS organizations, procuring Requirements for testing enhancements to existing systems and bug fixes poses difficulty. One of the main reasons is that the time it takes to formally document the Requirements for an enhancement cuts into the project schedule. For bug fixes, the testing period is shorter, as the fix needs to be tested and released within a short duration, leaving no time to document Requirements. Despite project time constraints, documentation and review of Requirements can be accomplished quickly and efficiently by determining the sources from which the Requirements are obtained, identifying the Requirement type(s) for the project, documenting the Requirements, and reviewing the documented information to ensure accuracy.
Purpose
This paper’s purpose is to introduce the concept of gathering and assessing Requirements as a simple process for those individuals who are having difficulty with Requirements. The major steps in this process are:
- Gathering the Requirements
- Identifying the Requirement Type(s)
- Determining the test stage applicable to the Requirements
- Building a Requirement Matrix (used to trace the Requirement to the test case/script)
- Reviewing the Requirements
1) Gathering the Requirements
Requirements may be gathered and assessed through different sources. Some of the most standard sources are listed below:
- Discussion with Users and Developers. In some organizations, Requirements are formally documented in these discussions; in others they are not. When no formal documentation process is involved, the notes taken from the meetings should be used to document the items that will be tested for the project. These documented items can then be reviewed for accuracy with the appropriate Project Team personnel associated with the project.
- Business Scenarios documented by Users. This is a valuable source of Requirements for the Tester. The User performs the functions of the application and has a thorough understanding of how each feature works. Since most Users are not technical writers, they can document, in their own words, how the business function operates; this information can then be extracted, transposed into Requirement form, and reviewed for accuracy with the Users.
- Determining Testability. This is another form of gathering and assessing Requirements. Testability is the degree to which a Requirement is stated in terms that permit the establishment of test criteria and the determination of whether those criteria have been met. Asking questions about non-testable items results in determining actual Requirements that can be tested.
Listed below are examples of some additional Requirement-gathering sources. The terms used may differ within organizations:
- Request for Service Forms
- Change Control Process documentation
- Joint Application Development (JAD) sessions, where Requirement updates are documented
- Vendor supplied documentation (brochures, user manuals, release notes)
- System architecture diagrams
2) Identifying the Requirement Type(s)
Once the Requirements have been gathered and assessed, each requirement type should be compared against the feature(s) to determine if it is applicable. This process provides the means to establish the test stage that will be performed for the project. Requirement Type(s) may vary in description at different organizations; the following are some of the typical Requirement types:
- Functional–the objectives of the system: functionality, including navigation, data entry, processing, and retrieval, and the proper implementation of the business rules.
- System Performance–how well a function is accomplished in quantitative terms.
- System Load/Stress–the performance of the target-of-test under the stress of many clients requesting data at the same time.
- Business Scenario–specific steps that the user(s) will perform using the system.
- Security–application level security, including access to data or business functions and system-level security.
- Usability–the ability of the end-user to utilize the system as defined.
- Conversion–the conversion of one or more characters of data between the target-of-test and a device.
3) Determining the Test Stage applicable to the Requirements
Based on the Requirement Type(s) identified, the test stage can be determined. For example, the Requirements for the project may be provided in the form of a contractual agreement between an organization and a Vendor, where the Vendor’s software is expected to perform transactions within a stipulated period of time. The test stage in this instance would be Performance testing; the stage(s) of the Performance test can be determined based on the Requirements defined in the agreement. Some of the industry-standard test stages are:
- Unit Testing
- Integration Testing
- System Testing
- System Performance
- System Load/Stress
- System Security/Access Control
- Functional Testing
- User Acceptance Testing
Defining the test stages that apply to the various architectures within an organization will assist in determining the appropriate test stage for the Requirements gathered.
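The mapping from Requirement Type to test stage can be captured in a simple lookup table. The sketch below is illustrative only; it uses the type and stage names from this paper, and the pairings shown are examples that an organization would replace with its own definitions.

```python
# Illustrative mapping of Requirement Types (section 2) to candidate
# test stages (section 3). The pairings are examples, not a standard.
REQUIREMENT_TYPE_TO_STAGES = {
    "Functional": ["Functional Testing", "System Testing", "User Acceptance Testing"],
    "System Performance": ["System Performance"],
    "System Load/Stress": ["System Load/Stress"],
    "Security": ["System Security/Access Control"],
    "Business Scenario": ["User Acceptance Testing"],
}

def stages_for(requirement_type):
    """Return the candidate test stages for a given Requirement Type."""
    return REQUIREMENT_TYPE_TO_STAGES.get(requirement_type, [])
```

In the contractual example above, a Requirement of type "System Performance" would map directly to the System Performance stage.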
4) Building a Requirements Matrix
The Requirements Matrix is a simple way of tabulating the project Requirements. It is especially useful for small enhancement projects comprised of one component and for bug fixes with a short test duration.
Data gathered from sources such as Change Control Process documents, JAD sessions, and Vendor-supplied documentation can be extracted and placed in a Requirements Matrix to streamline the list of items to be tested for a project.
Requirement descriptions should be written in a modular fashion and assigned identifiers. The identifiers should be used to cross-reference test cases (scripts). This provides a means for determining when there are enough test cases (scripts) to cover all Requirements. When there is a change in Requirements or design, the identifiers and cross-reference provide the means to quickly determine which test cases (scripts) need to be changed and which are re-usable.
The fields to include in a Requirements Matrix are listed below:
- Requirements ID (used to trace the requirement to the script id)
- Requirement Description
- Test Stage/Type
- Test Case (Script) ID
- Pass/Fail
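A minimal sketch of such a matrix, using the fields listed above, follows; the Requirement IDs, descriptions, and script IDs are hypothetical, and the coverage check illustrates how the identifier cross-reference described earlier reveals whether there are enough test cases to cover all Requirements.

```python
# Hypothetical Requirements Matrix rows using the fields listed above.
matrix = [
    {"req_id": "REQ-001",
     "description": "User can log in with valid credentials",
     "stage": "Functional Testing",
     "script_id": "TC-01",
     "result": "Pass"},
    {"req_id": "REQ-002",
     "description": "Report completes within the stipulated time",
     "stage": "System Performance",
     "script_id": None,
     "result": None},
]

def uncovered_requirements(rows):
    """Requirement IDs that no test case (script) cross-references yet."""
    return [row["req_id"] for row in rows if not row["script_id"]]
```

Running uncovered_requirements(matrix) flags REQ-002, showing a Requirement that still lacks a test case; when Requirements or design change, the same IDs identify which scripts must be updated and which are re-usable.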
5) Reviewing the Requirements
Once the Requirements have been gathered and assessed, regardless of the sources used, the documented information, including the Requirements Matrix, should be reviewed with Developers and Users or the appropriate Project Team personnel to ensure an understanding of what the feature is supposed to do. This can be accomplished by:
- Conducting formal reviews with Users and Developers or the appropriate Project Team members.
- Forwarding the documented notes taken from discussions to the Project Team personnel, denoting the understanding of the Requirements as outlined in those discussions. This allows verification of the data and initiation of corrections prior to testing.