One of the major challenges in software development is ensuring that all the software components needed to do integration and end-to-end testing are available in the test environment. Implementing service virtualization can remove environment setup as a blocking condition—and enable project teams to release better software, faster.
One of the major challenges I have experienced in software development is ensuring that all the software components we need to do integration and end-to-end testing are available in our test environment. Some of these components, such as services, data sets, and APIs, may not yet be available at all, may be undergoing maintenance, or may be in place but lack the right test data to perform the desired test cases.
As a result, test cycles take up too much time or can’t be completed, and test coverage suffers. In turn, this leads to lower product quality and a longer time to market. In voke inc.’s 2015 Market Snapshot Report on Service Virtualization, the more than five hundred survey respondents reported that, before using service virtualization, developers and testers waited an average of thirty-two days for everything needed to move forward with their work. This shows that the problem affects the whole software development cycle, not just the test team.
In this article, I’ll use a business case for a project I have been working on recently to describe how implementing service virtualization can remove environment setup as a blocking condition—and how this enables project teams to release better software, faster.
Service Virtualization
Service virtualization is the simulation of the behavior of software components that are unavailable or otherwise restricted during the preproduction stage of the software development lifecycle. These component simulators, also known as virtual assets, reflect the real software components’ behavior as closely as the tests require, both functionally (for example, by returning representative test data) and nonfunctionally (for example, by simulating the response times of the original software component).
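To make this concrete, here is a minimal sketch of a virtual asset built with the open source WireMock library for Java, one of many tools that can play this role. The endpoint, port, and payload are invented for illustration. The stub returns a representative response body (functional behavior) and adds a fixed delay to mimic the real component’s response time (nonfunctional behavior).

    import com.github.tomakehurst.wiremock.WireMockServer;

    import static com.github.tomakehurst.wiremock.client.WireMock.*;
    import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig;

    public class VirtualAssetSketch {

        public static void main(String[] args) {
            // Run the virtual asset on a fixed port and point the system
            // under test at this port instead of at the real component.
            WireMockServer virtualAsset = new WireMockServer(wireMockConfig().port(8090));
            virtualAsset.start();

            // Functional behavior: a representative response body.
            // Nonfunctional behavior: a fixed delay mimicking the real
            // component's typical response time.
            virtualAsset.stubFor(get(urlPathEqualTo("/customers/12345"))
                .willReturn(aResponse()
                    .withStatus(200)
                    .withHeader("Content-Type", "application/json")
                    .withBody("{\"customerId\": \"12345\", \"status\": \"ACTIVE\"}")
                    .withFixedDelay(800))); // simulated response time in milliseconds

            // The server keeps running, serving stubbed responses until stopped.
        }
    }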
As I write this, almost all major vendors in the application lifecycle management domain offer a service virtualization solution as part of their product portfolios.
The Case Study
I recently helped implement service virtualization at a large telecommunications company to remove some major constraints the test team experienced in the test environment. To provide a bit of background: The test team was responsible for testing the order management application (the “order manager”) that handled the various business processes related to both taking the order and servicing it (“provisioning”) for new and existing customers. During order provisioning, the order manager needed to request data from and provide data to a number of adjacent systems. A typical order required communication with around ten of these dependencies in order to be successfully provisioned.
The bottleneck was one of the backend systems, as it required manual configuration for every order that was created in the order manager. This backend system was hosted off site, and configuration for a single order could take up to a week, depending on the availability of staff. This caused long setup times for test cases and test data for the test team. As a result, test cycles were long (up to six weeks), and automated end-to-end testing was virtually impossible.
Implementing Service Virtualization
1. Service virtualization removes constraints with regard to dependency availability in test environments.
The first step in the implementation of service virtualization was to create a virtual asset that simulated the behavior of the backend system responsible for the delay. Essentially, the virtual asset behaved as if the required order data had already been configured in the backend system, sending back the appropriate synchronous and asynchronous response messages to the order manager. Upon receiving these messages, which indicated that everything was OK from the virtual backend’s point of view, the order manager automatically continued the provisioning process without further manual intervention.
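Our actual implementation is not shown here, but the synchronous half of this behavior can be sketched with WireMock, reusing the server setup and static imports from the earlier sketch. The path and message body are invented, and the asynchronous response messages, which would require WireMock’s webhooks extension or a different tool, are omitted for brevity.

    // Whatever order the order manager submits, reply as if the order
    // data had already been configured in the backend, so provisioning
    // continues without manual intervention.
    virtualBackend.stubFor(post(urlEqualTo("/backend/orders"))
        .willReturn(aResponse()
            .withStatus(200)
            .withHeader("Content-Type", "application/json")
            .withBody("{\"result\": \"OK\", \"orderConfigured\": true}")));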
As a result, the average time required for the provisioning of an order was reduced from around a week to just two or three minutes. Furthermore, after the initial order creation—which still needed to be done by hand but only took half a minute—no more manual intervention was required for the order to be fully provisioned and ready for further testing purposes.
2. Data-driven virtual assets allow test teams to easily manage their test data and increase their test coverage.
One of the main tasks of the now-virtualized backend dependency was returning the availability of certain products and services for a given ZIP code and house number. Before the test team started using virtual assets, they had to rely on a small set of available test data, meaning they were only able to test a restricted number of combinations of available ZIP codes and products and services.
By making the virtual asset data-driven, the test team was able to simulate all possible combinations of addresses and availability of products and services. It even allowed them to simulate situations that normally would not occur in a production environment. This enabled them to greatly increase their test coverage, especially when it came to testing the edge cases.
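As an illustration of what “data-driven” might look like in practice, one stub can be generated per row of a test data file. The file format, field names, and endpoint below are invented, and the helper assumes the WireMock setup and static imports from the earlier sketches.

    // Hypothetical helper: one availability stub per row of a test data file.
    // Each row: zip;houseNumber;availableProducts, e.g. "1234AB;42;FIBER,TV".
    // Rows for combinations that never occur in production are just more
    // rows in the file, which makes edge cases cheap to cover.
    static void loadAvailabilityStubs(WireMockServer virtualBackend) throws java.io.IOException {
        for (String row : java.nio.file.Files.readAllLines(java.nio.file.Path.of("availability.csv"))) {
            String[] fields = row.split(";");
            virtualBackend.stubFor(get(urlPathEqualTo("/availability"))
                .withQueryParam("zip", equalTo(fields[0]))
                .withQueryParam("houseNumber", equalTo(fields[1]))
                .willReturn(aResponse()
                    .withStatus(200)
                    .withHeader("Content-Type", "application/json")
                    .withBody("{\"products\": \"" + fields[2] + "\"}")));
        }
    }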
3. Service virtualization can enable or enlarge the scope of automated testing.
Before the introduction of service virtualization, the scope of automated tests was limited to unit and low-level integration tests, so test automation was mostly done by developers. System and end-to-end tests had to be performed by hand, because every test case touched the order-provisioning process. By reducing the time needed to provision an order, we also enabled the test team to implement end-to-end test automation.
After the successful introduction of service virtualization, the order-provisioning process could be set up, executed, and verified entirely in code, with no humans involved. Because we no longer had to wait for manual backend configuration, automation suddenly became feasible. This greatly reduced the time required for repetitive regression testing, which was an important part of the overall testing process.
In a later stage, we took the integration between test automation and service virtualization to the next level by setting up the virtual assets in such a way that their behavior could be altered dynamically in the setup stage of the automated tests. This enabled the team to perform both happy flow tests (the virtual asset returns “OK”) and negative flow tests (the virtual asset returns an error, simulating data that is not present), all without the need for manual configuration.
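The sketch below shows how such per-test steering of a virtual asset might look with JUnit 5 and WireMock. The endpoints, payloads, and test names are invented for illustration, and the calls to the actual order manager are left as comments, since they depend on the system under test.

    import org.junit.jupiter.api.*;

    import com.github.tomakehurst.wiremock.WireMockServer;

    import static com.github.tomakehurst.wiremock.client.WireMock.*;
    import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig;

    class OrderProvisioningTest {

        static WireMockServer virtualBackend = new WireMockServer(wireMockConfig().port(8090));

        @BeforeAll
        static void startVirtualBackend() {
            virtualBackend.start();
        }

        @AfterAll
        static void stopVirtualBackend() {
            virtualBackend.stop();
        }

        @BeforeEach
        void resetVirtualBackend() {
            // Every test starts from a clean slate and installs exactly the
            // backend behavior it needs; no manual configuration required.
            virtualBackend.resetAll();
        }

        @Test
        void orderIsProvisionedWhenBackendReturnsOk() {
            // Happy flow: the virtual backend reports the order data as configured.
            virtualBackend.stubFor(post(urlEqualTo("/backend/orders"))
                .willReturn(aResponse()
                    .withStatus(200)
                    .withBody("{\"result\": \"OK\"}")));

            // ... create an order in the order manager and assert that it
            // ends up fully provisioned (system-under-test calls omitted).
        }

        @Test
        void orderIsRejectedWhenBackendDataIsMissing() {
            // Negative flow: the virtual backend behaves as if no data is present.
            virtualBackend.stubFor(post(urlEqualTo("/backend/orders"))
                .willReturn(aResponse()
                    .withStatus(404)
                    .withBody("{\"error\": \"NO_DATA\"}")));

            // ... create an order and assert that the order manager handles
            // the error gracefully (system-under-test calls omitted).
        }
    }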
4. Testers are not the only ones who may benefit from using service virtualization.
The testers weren’t the only ones who profited from service virtualization. Before, developers could only do unit and integration testing using static mocks and stubs. By making the virtual assets available to the developers as well and giving them their own test data set to use, they could perform better and more comprehensive tests before releasing software to the test environment. This phenomenon is generally known as a “shift left” in the software development lifecycle, and it has led to defects being detected and resolved earlier—and, as a result, to fewer defects in the test environment.
Perhaps the biggest benefit for the organization as a whole, however, has been the reduction in time needed to execute a complete test cycle—from around four to six weeks to a day or two at most. This enabled the teams to move from a waterfall development method to agile, allowing the organization to release its software much more frequently, which gave it a true competitive advantage in the fast-paced world of telecommunications.
In other words: Testing drove development, which drove delivery—with service virtualization.
It worked for us. If you are having similar problems, you might want to try it.