Controlling Performance Testing in an Uncontrolled World
Think about it: you are responsible for performance testing a system that serves more than 5 billion searchable documents to an active user base of 2.6 million users, and you are expected to report sub-second changes in release response times and to certify extremely high reliability and availability. Your n-tier architecture consists of numerous mainframes and large-scale UNIX servers as well as Intel processor-based servers. The test environment is distributed across large numbers of servers that perform shared functions for a variety of products, all competing for test time and resources during aggressive release cycles. Because it is impractical and too costly to fully isolate systems at this scale, capacity and performance test engineers produce high-quality benchmarks and stress tests on an ongoing basis. James Robinson shares the top factors that allow LexisNexis to control performance testing for their very large, complex, and highly reliable system. Whether you have a large or small system to performance test, you will benefit from the lessons LexisNexis has learned over the years.
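Reporting sub-second changes in release response implies comparing latency percentiles between a baseline build and a candidate build. The sketch below is not LexisNexis's tooling; it is a minimal illustration of that idea under stated assumptions. The query set, the host names (baseline.test.example, candidate.test.example), and the 250 ms drift budget are all hypothetical placeholders.

```python
"""Minimal sketch of a release-over-release response-time check.

Replays a fixed set of search queries against a baseline build and a
candidate build, then flags percentile drift beyond an assumed budget.
All endpoints, queries, and thresholds are illustrative assumptions.
"""
import statistics
import time
import urllib.parse
import urllib.request

QUERIES = ["contract dispute", "patent infringement", "habeas corpus"]  # sample workload
DRIFT_THRESHOLD_S = 0.25  # assumed budget: flag regressions larger than 250 ms


def measure(base_url: str, runs: int = 20) -> dict:
    """Replay the query set and return latency percentiles in seconds."""
    samples = []
    for _ in range(runs):
        for q in QUERIES:
            url = f"{base_url}/search?q={urllib.parse.quote(q)}"  # hypothetical endpoint
            start = time.perf_counter()
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read()
            samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "p50": statistics.median(samples),
        "p95": samples[int(len(samples) * 0.95) - 1],
    }


def compare(baseline: dict, candidate: dict) -> None:
    """Report per-percentile drift and flag anything over the budget."""
    for pct in ("p50", "p95"):
        drift = candidate[pct] - baseline[pct]
        status = "REGRESSION" if drift > DRIFT_THRESHOLD_S else "ok"
        print(f"{pct}: {baseline[pct]:.3f}s -> {candidate[pct]:.3f}s ({drift:+.3f}s) {status}")


if __name__ == "__main__":
    # Hypothetical environment URLs; substitute real baseline/candidate hosts.
    compare(measure("http://baseline.test.example"),
            measure("http://candidate.test.example"))
```

Running a comparison like this on every build is one way a team can notice small response-time shifts early, even when the shared test environment cannot be fully isolated.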