Conference Presentations

Quantifying the Cost of Test Escapes

Creating an effective test strategy is an expensive undertaking for complex software applications. While the money invested in the process is relatively easy to measure, the return on that investment (ROI) is much harder to quantify. Duncan Lane discusses an objective, metrics-driven approach his organization employs to evaluate the financial benefits of testing and to assess the right level of investment in testing. Find out how to identify the best areas for future investment in testing at your company. Use a quantitative approach to analyze your test processes, and identify the improvements needed to produce the product quality your organization expects.

  • Determine the ROI of testing in your organization
  • How to develop and use bug impact and bug cost metrics
  • Methods for quantifying hard and soft cost factors for defects (see the sketch below)
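
As a rough illustration of the bug-cost arithmetic this session describes, here is a minimal Python sketch; the rates and figures are hypothetical, not Duncan's data.

    # Hard costs (hours, support calls) are directly measurable; soft costs
    # (lost revenue, goodwill) must be estimated. All figures are made up.
    HOURLY_RATE = 75  # assumed fully loaded cost per engineering hour

    def cost_of_escape(triage_hours, fix_hours, retest_hours,
                       support_calls, cost_per_call, est_lost_revenue):
        """Estimate the total cost of one defect that escaped to the field."""
        hard = ((triage_hours + fix_hours + retest_hours) * HOURLY_RATE
                + support_calls * cost_per_call)
        soft = est_lost_revenue  # churn, goodwill, etc.: a rough estimate
        return hard + soft

    # One escaped defect: 4 h triage, 16 h fix, 8 h retest, 30 support
    # calls at $20 each, and an estimated $5,000 in lost revenue.
    print(cost_of_escape(4, 16, 8, 30, 20, 5000))  # -> 7700

Summing such estimates across escaped defects, and comparing the total with the cost of the testing that would have caught them, is one concrete route to the ROI figure.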
Duncan Lane, Hewlett-Packard
Objective Measures from Model-Based Testing

Many businesses are looking for the right project measures for project planning, scheduling, and performance. Mark Blackburn gives guidance on defining, collecting, and analyzing measures derived from a model-based testing method. These measures and their use are described in terms of an information model adapted from ISO/IEC 15939, Software Engineering: Software Measurement Process. The model-based method associated with these measures involves modeling requirements and mapping modeled requirement variables to the interfaces of the target test system; these mappings are referred to as object mappings (a sketch follows the list below).

  • Fundamental units of measure derived from model-based artifacts
  • Graphical representations of measures and how to use them to estimate project duration
  • Real-time project data used to predict the completion of an ongoing project
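
To make the idea of object mappings concrete, here is a hypothetical Python sketch that binds modeled requirement variables to the interfaces of a system under test; the variable names, stub, and toy rule are illustrative, not from the Consortium's tooling.

    class StubSUT:
        """Stand-in for the target test system so the sketch runs."""
        def __init__(self):
            self.inputs = {}
        def set_input(self, name, value):
            self.inputs[name] = value
        def read_output(self, name):
            # Toy rule: warn while the gear is up below 1000 ft.
            return (self.inputs.get("ALT", 0) < 1000
                    and self.inputs.get("GEAR_CMD") == "UP")

    input_mappings = {                  # model variable -> SUT input
        "altitude_ft": lambda sut, v: sut.set_input("ALT", v),
        "gear_lever":  lambda sut, v: sut.set_input("GEAR_CMD", v),
    }
    output_mappings = {                 # model variable -> SUT output
        "warning_light": lambda sut: sut.read_output("WARN_LAMP"),
    }

    def run_test_vector(sut, inputs, expected):
        """Drive one model-derived test vector; compare actual to expected."""
        for var, value in inputs.items():
            input_mappings[var](sut, value)
        actual = {var: read(sut) for var, read in output_mappings.items()}
        return actual == expected

    print(run_test_vector(StubSUT(),
                          {"altitude_ft": 500, "gear_lever": "UP"},
                          {"warning_light": True}))  # -> True

Counting modeled variables, mappings, and generated test vectors over time yields exactly the kind of base measures the session builds on.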
Mark Blackburn, Software Productivity Consortium
Evaluating Test Plans Using Rubrics

The phrase "test plan" means different things to different people. There is even more disagreement about what makes one test plan better than another one. Bernie Berger makes the case for using multi-dimensional measurements to evaluate the goodness of test plans. Walk away with a practical technique to systematically evaluate any complex structure such as a test plan. Learn how to qualitatively measure multiple dimensions of test planning and gain a context-neutral framework for ranking each dimension. You'll also find out why measurement of staff technical performance is often worse than no measurement at all and how to use this technique as an alternative approach to traditional practices. [This presentation is based on work at Software Test Managers Roundtable (STMR) #8 held in conjunction with the STAR conference.]

  • Qualitatively evaluate complex structures, like test plans (see the sketch below)
  • Ten dimensions of test planning
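
For a flavor of what a multi-dimensional evaluation can look like, here is a small Python sketch; the dimension names and the 0-3 ordinal scale are hypothetical stand-ins, not the ten STMR dimensions themselves.

    # Each dimension gets its own ordinal rating. There is deliberately no
    # single total: collapsing the dimensions into one number would hide
    # exactly what the rubric is meant to show.
    RUBRIC_LEVELS = {0: "absent", 1: "weak", 2: "adequate", 3: "strong"}

    def summarize(scores):
        """Print each dimension's rating on its own line."""
        for dimension, level in sorted(scores.items()):
            print(f"{dimension:<22} {level}  ({RUBRIC_LEVELS[level]})")

    summarize({
        "coverage rationale": 2,
        "risk analysis":      3,
        "schedule realism":   1,
        "traceability":       2,
    })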
Bernie Berger, Test Assured Inc.
Test Outsourcing Approach and Implementation

If your organization is considering outsourcing some parts of its testing activities, you'll learn valuable lessons from UBS Investment Bank's approach to and implementation of test outsourcing. By using a component-based operational model for test execution, automation, and environment management, UBS was able to outsource many low-level test functions and focus its smaller in-house test groups on strategy and process issues. Keith Klain discusses the vendor selection process (RFI, RFP, and reviews of outsourcing vendors), service agreements, knowledge management issues, training on in-house technologies, specific testing delivery and management processes, and lessons learned.

  • Strategies for qualifying your organization for outsourcing testing activities
  • A risk-based approach to test execution outsourcing
  • An organizational road map to outsource testing
Keith Klain, UBS Investment Bank
A Survey of Test Automation Architectures

How are you going to develop and run 1,000 test cases automatically and unattended? Commercial test automation tools often get a bad rap because many organizations never get past the record/playback/fail cycle of frustration. These tools, however, can contribute to your testing needs, but first you have to understand what has to be done to make them work. Jamie Mitchell outlines several automation architectures that are being used successfully today and discusses the pros and cons of each. Find out which framework, or combination of frameworks, will be successful in your environment.

  • Review of commercial test automation tool categories for functional testing
  • Automation frameworks to support different testing needs (one common style is sketched below)
  • What it takes for automated test cases to run robustly
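
One widely used architecture beyond record/playback is the keyword-driven framework: test cases are data, and the framework maps each keyword to action code. A minimal Python sketch, with hypothetical keywords and a stub application driver:

    def open_login_page(app):             app.goto("/login")
    def enter_credentials(app, user, pw): app.login(user, pw)
    def verify_title(app, expected):      assert app.title() == expected

    KEYWORDS = {                # keyword -> action function
        "open_login_page":   open_login_page,
        "enter_credentials": enter_credentials,
        "verify_title":      verify_title,
    }

    # A test case is just a table of steps: easy to write, review, and scale.
    test_case = [
        ("open_login_page",),
        ("enter_credentials", "alice", "secret"),
        ("verify_title", "Dashboard"),
    ]

    def run(app, steps):
        """Execute one table-driven test case, step by step."""
        for keyword, *args in steps:
            KEYWORDS[keyword](app, *args)

    class StubApp:
        """Stand-in application driver so the sketch runs."""
        def goto(self, path):      self.path = path
        def login(self, user, pw): self._title = "Dashboard"
        def title(self):           return self._title

    run(StubApp(), test_case)  # passes silently; a failing step raises

Because the steps are data, hundreds of cases can be generated, reviewed, and run unattended without touching the framework code.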
Jamie Mitchell, Test & Automation Consulting LLC
Cross-Organizational Change Management

Federico Pacquing, Jr., Getty Images, Inc.
Building an Independent Test Group

Are you attempting to start an independent test group or increase the scope and value of your present group? After building a highly effective thirty-person test group, Scott Eder reflects on the three major areas where he focused and the challenges he faced along the way. Take away sample work scope and purpose statements for your test group, and learn how to set realistic expectations at all levels within your organization. Find out the key processes that Scott implemented immediately to get his team off to a good start.

  • The foundations of an independent test group that is valued by your organization
  • Ways to build relationships with key stakeholders in order to foster a supportive environment for test and quality
  • How to create a sense of identity around which your test team can rally
Scott Eder, Catalina Marketing
Assuring Testable Requirements

One strategy for assuring testable software is to assure testable requirements, i.e., requirements that are clearly and precisely specified and cost-effectively checkable. David Gelperin describes two specification techniques, action contracts and Planguage quality specs, both of which support testable requirements. Functionality can be precisely defined with pre- and post-conditions using action contracts; the measurement of nonfunctional characteristics can be precisely specified with Planguage specs. Both techniques are illustrated with examples and short exercises (a sketch follows the list below).

  • An application information architecture that assures testability
  • How to specify functions with action contracts
  • How to specify measures for nonfunctional characteristics
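
A minimal Python sketch of the two ideas, assuming nothing about Gelperin's exact notation: an action contract expressed as pre- and postcondition assertions, and a Planguage-style quality spec shown as comments with hypothetical figures.

    def withdraw(balance, amount):
        # Precondition: the request is well formed and covered by funds.
        assert amount > 0 and amount <= balance, "precondition violated"
        new_balance = balance - amount
        # Postcondition: the balance decreases by exactly the amount
        # withdrawn and never goes negative.
        assert new_balance == balance - amount and new_balance >= 0
        return new_balance

    print(withdraw(100, 30))  # -> 70

    # A Planguage-style quality spec is declarative; a rough flavor:
    #   Tag:   Login.Responsiveness
    #   Scale: seconds from submit to dashboard render
    #   Meter: median of 100 scripted logins on the reference test rig
    #   Must:  3        Plan: 1

Each clause is checkable, which is precisely what makes the requirement testable.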
David Gelperin, LiveSpecs Software
Test Metrics: A Practical Approach to Tracking and Interpretation

You can improve the overall quality of a software project through the use of test metrics. Test metrics track and measure the efficiency, effectiveness, and success or shortcomings of the various activities of a software development project. While it is important to recognize the value of gathering test metrics data, it is the interpretation of that data that makes the metrics meaningful. Shaun Bradshaw describes the metrics he tracks during a test effort and explains how to interpret them so they are meaningful to the project and its team members.

  • What types of test metrics should be tracked
  • How to track and interpret test metrics
  • The two categories of test metrics: base and calculated (illustrated below)
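
A small Python sketch of the base/calculated split, with hypothetical counts:

    # Base metrics are raw counts collected during the test effort;
    # calculated metrics are derived from them for interpretation.
    base = {
        "tests_planned":  400,
        "tests_executed": 320,
        "tests_passed":   288,
        "defects_found":  45,
        "defects_fixed":  36,
    }

    calculated = {
        "execution_progress": base["tests_executed"] / base["tests_planned"],
        "pass_rate":          base["tests_passed"] / base["tests_executed"],
        "fix_rate":           base["defects_fixed"] / base["defects_found"],
    }

    for name, value in calculated.items():
        print(f"{name}: {value:.0%}")  # e.g. execution_progress: 80%

The interpretation step is where the value lies: an 80% fix rate means one thing at mid-cycle and quite another at code freeze.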
Shaun Bradshaw, Questcon Technologies, Inc.
Contrasting White-Box and Black-Box Performance Testing

What exactly do people mean when they say they are going to run a "black-box performance test"? And why would they choose that strategy over a potentially more revealing approach such as white-box performance testing? Steve Splaine answers these and other performance testing questions by comparing and contrasting the two techniques, focusing on test design, test execution, and test results. In this session you'll discover which approach will work best for you, or whether a combination of both makes more sense in the context of your own projects. For some the answer may be black or white, but for others it may be a shade of gray.

  • The pros and cons of white-box and black-box performance testing techniques (contrasted in the sketch below)
  • What is meant by the term "gray-box performance testing"
  • Examples of post-testing performance improvements
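
The contrast in miniature, as a Python sketch; target() is a hypothetical stand-in for the operation under test.

    import time
    import cProfile

    def target():
        return sum(i * i for i in range(200_000))  # stand-in workload

    # Black-box: observe externally. One end-to-end number, no insight
    # into where the time goes.
    start = time.perf_counter()
    target()
    print(f"end-to-end: {time.perf_counter() - start:.3f}s")

    # White-box: instrument internally. Per-function timings reveal the
    # hot spots behind that number.
    cProfile.run("target()")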
Steve Splaine, Nielsen Media Research
