Conference Presentations

Risk-Based Testing in Practice

The testing community has been talking about risk-based testing for quite a while, and most projects now apply some sort of implicit risk-based testing approach. However, risk-based testing should be more than just brainstorming within the test team; it should be based on business drivers and business value. The test team is not the risk owner; the product's stakeholders are. It is our job to inform the stakeholders about risk-based decisions and provide visibility into product risk status. Erik discusses a real-world method for structured risk-based testing that is applicable to most software projects. He describes how risk identification and analysis can be carried out in close cooperation with stakeholders. Join Erik to learn how the outcome of the risk analysis can, and should, be used in test projects in terms of differentiated test approaches.
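
As a concrete illustration of the kind of output such a risk analysis might produce, consider the following minimal sketch in Python. The rating scales, risk items, and depth thresholds are invented for illustration; this is not Erik's actual method.

    # Illustrative sketch of product-risk scoring for risk-based testing.
    # Scales, items, and thresholds are assumptions, not a prescribed method.

    def risk_exposure(likelihood: int, impact: int) -> int:
        """Combine stakeholder ratings (1-5 scales) into a single risk score."""
        return likelihood * impact

    # Hypothetical risk items rated jointly with the stakeholders.
    risks = [
        {"area": "payment processing", "likelihood": 4, "impact": 5},
        {"area": "report layout", "likelihood": 3, "impact": 2},
        {"area": "user login", "likelihood": 2, "impact": 5},
    ]

    for r in sorted(risks, key=lambda r: risk_exposure(r["likelihood"], r["impact"]), reverse=True):
        score = risk_exposure(r["likelihood"], r["impact"])
        # Higher exposure drives a more thorough test approach for that area.
        depth = "thorough" if score >= 15 else "standard" if score >= 8 else "light"
        print(f'{r["area"]}: exposure={score}, test depth={depth}')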

Erik van Veenendaal, Improve Quality Services BV
A Balanced Scorecard Approach for Assessing Test Value and Success

Internal test metrics (test progress, defect density, and TPI/TMM measures of process improvement) do not reveal the complete picture of test value and success. By comparing common test metrics with those found in the Balanced Business Scorecard (financial, customer, internal, and learning/innovation metrics), we see the need to also report financial and customer measures. Some of these measures are quantitative (such as profits), and others are more qualitative (for example, customer satisfaction). Learn to measure the financial impact of testing through productivity metrics and measures of how testing affects the total cost of quality. Include in your reporting qualitative assessments such as the customers' perception of the usefulness of testing, the visibility of testing on projects, acceptability measures, and estimation accuracy.

  • Set measures for all viewpoints of testing's value and success
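
As a rough illustration of the cost-of-quality measure mentioned above, the sketch below frames testing's financial impact as the balance between appraisal spending and the failure costs it helps avoid. All figures are invented, and the breakdown is a common textbook categorization rather than Isabel's model.

    # Sketch of a total cost of quality breakdown; all figures are invented.
    prevention = 40_000        # training, process improvement
    appraisal = 120_000        # test design, execution, tooling
    internal_failure = 60_000  # rework on defects found before release
    external_failure = 30_000  # support, patches, customer impact after release

    total_cost_of_quality = prevention + appraisal + internal_failure + external_failure
    failure_share = (internal_failure + external_failure) / total_cost_of_quality

    print(f"Total cost of quality: ${total_cost_of_quality:,}")
    print(f"Failure costs as share of total: {failure_share:.0%}")
    # A failure share that falls across successive releases is one quantitative
    # signal that the investment in testing (appraisal) is paying off.
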
Isabel Evans, Testing Solutions Group Ltd
Progressive Performance Testing: Adapting to Changing Conditions

An inflexible approach to performance testing is a prelude to disaster. "What you see at the start isn't always what you get in the end," says Jeff Jewell. Drawing on his experience performance testing applications on numerous consulting projects, Jeff demonstrates the challenges you may face testing your applications and how to overcome these obstacles. His examples show how changing project conditions and the information discovered in early tests caused the testing approach to change dramatically. Find out how hardware configuration, hardware performance, script variations, bandwidth, monitoring, and randomness can all affect the measurement of performance.

Jeff Jewell, ProtoTest LLC
Test Metrics in a CMMI Level 5 Organization

As a CMMI® Level 5 company, Motorola Global Software Group is heavily involved in software verification and validation activities. Shalini Aiyaroo, senior software engineer at Motorola, shows how specific testing metrics can serve as key indicators of the health of testing and how tracking them can improve your testing practices. Find out how to track and measure phase screening effectiveness, fault density, and test execution productivity. Shalini describes the use of Software Reliability Engineering (SRE) and fault prediction models to measure test effectiveness and take corrective actions. By performing orthogonal defect classification (ODC) and escaped defect analysis, the group has found ways to improve test coverage.

CMMI® is a registered trademark of Carnegie Mellon University.

  • A structured approach to outsourced testing
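
For readers unfamiliar with the metrics named above, here is a minimal sketch of fault density and phase screening effectiveness. The formulas follow common industry definitions, and the sample figures are invented; they are not Motorola's data.

    # Sketch of two common test metrics; sample numbers are invented.

    def fault_density(defects_found: int, size_kloc: float) -> float:
        """Defects per thousand lines of code (KLOC)."""
        return defects_found / size_kloc

    def phase_screening_effectiveness(found_in_phase: int, escaped_from_phase: int) -> float:
        """Fraction of a phase's defects caught before they escaped downstream."""
        return found_in_phase / (found_in_phase + escaped_from_phase)

    print(f"Fault density: {fault_density(42, 10.5):.2f} defects/KLOC")
    print(f"System-test screening effectiveness: "
          f"{phase_screening_effectiveness(found_in_phase=38, escaped_from_phase=4):.0%}")
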
Shalini Aiyaroo, Motorola Malaysia Sdn. Bhd.
Apprenticeships: A Forgotten Concept in Testing

The system of apprenticeship was first developed in the late Middle Ages, when the uneducated and inexperienced were employed by a master craftsman in exchange for formal training in a particular craft. So why does apprenticeship seldom happen within software testing? Do we subconsciously believe that just about anyone can test software? Join Lloyd Roden to discover what apprenticeship training is and, even more importantly, what it is not. Learn how this practice can be easily adapted to suit software testing. Find out about the advantages and disadvantages of several apprenticeship models: Chief Tester, Hierarchical, Buddy, and Coterie. Drawing on personal experiences, Lloyd shows how projects will benefit immediately from the rebirth of the apprenticeship system in your test team.

  • Four apprenticeship models that can apply to software testers
  • Measures of the benefits and return on investment of apprenticeships
Lloyd Roden, Grove Consultants
Using Production Failures to Jump Start Performance Test Plans

Learning from a production system failure is not a model MassMutual Financial Group would have chosen. However, when one of their key applications failed under load in production, they turned on a dime and changed their performance testing approach, focus, and capabilities. Let's set the scene: they ran large numbers of transactions through a performance test tool and then went live with a new application that was to be used by all their key users. Within hours, the application had ground to a virtual halt under normal production load. What went wrong? Join Sandra Bourgeois to find out not only what went wrong but also what they learned from the failure and how they set about improving their knowledge, skills, and tools. This is your chance to learn from their mistakes and avoid repeating them in your organization.

  • Lessons learned from the performance failure of a mission-critical application
Sandra Bourgeois, Massachusetts Mutual Life Insurance Company
Hallmarks of a Great Tester

As a manager, you want to select and develop people with the talents to become great testers, the ability to learn the skills of great testers, and the willingness to work hard in order to become great testers. As an individual, you aspire to become a great tester. So, what does it take? Michael Hunter reveals his twenty hallmarks of a great tester, from personality traits (curiosity, courage, and honesty) to skills (knowing where to find more bugs, writing precise bug reports, and setting appropriate test scope). Measure yourself and your team against other great testers, and find out how to achieve greatness in each area. Learn how to identify the great testers you don't know that you already know!

  • The personality traits a person needs to become a great tester
  • The talents a person needs to become a great tester
  • The skills you need to develop to become a great tester
Michael Hunter, Microsoft Corporation
Trends, Innovations, and Blind Alleys in Performance Testing

Join experts Scott Barber and Ross Collard for a lively discussion/debate on leading-edge performance testing tools and methods. Do you agree with Scott that performance testing is poised for a great leap forward, or with Ross, who believes that these "silver bullets" will not make much difference in resolving the difficulties performance testing poses? Scott and Ross will square off on topics including commercial vs. open source tools; compatibility and integration of test and live environments; design for performance testability; early performance testing during design; test case reuse; test load design; statistical methods; knowledge and skills of performance testers; predicting operational behavior and scalability limits; and much more. Deepen your understanding of the new technology in performance testing, its promises, and its limitations.

  • The latest tools and methods for performance testing
Scott Barber, PerfTestPlus, and Ross Collard, Collard & Company
Diagnosing Performance Problems in Web Server Applications

Many application performance failures are episodic, leading to frustrated users calling help desks, frantic troubleshooting of production systems, and re-booting systems. Often these failures are a result of subtle interactions between code and the configuration of multiple servers. On the other hand, well-designed applications should demonstrate gradual performance degradation and advanced warning of the need to add hardware capacity. Join Ron Bodkin as he discusses the patterns of application failure, some common examples, and testing techniques to help reduce the likelihood of episodic failures in production. Learn about the tools and techniques needed to instrument the application, monitor the infrastructure, collect systems data, analyze it, and offer insight for corrective actions.
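
As one generic illustration of the instrumentation Ron describes (a sketch, not Glassbox's product), an application can record per-request timings and flag gradual degradation before it becomes an outright failure. The decorator, workload, and alert threshold below are all assumptions.

    # Generic sketch of response-time instrumentation; thresholds are invented.
    import time
    from statistics import quantiles

    timings = []  # wall-clock duration of each handled request, in seconds

    def timed(fn):
        """Decorator that records each call's duration."""
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                timings.append(time.perf_counter() - start)
        return wrapper

    @timed
    def handle_request():
        time.sleep(0.01)  # stand-in for real request-handling work

    for _ in range(100):
        handle_request()

    p95 = quantiles(timings, n=20)[-1]  # 95th-percentile latency
    if p95 > 0.5:  # alert threshold chosen arbitrarily for the sketch
        print(f"WARNING: p95 latency {p95:.3f}s is degrading; investigate now")
    else:
        print(f"p95 latency {p95:.3f}s within normal range")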

Ron Bodkin, Glassbox software
Performance Testing Early in Development Iterations

When the software architecture is emerging and many features are not yet ready, performance testing is a challenge. However, waiting until the software is almost finished is too risky. What to do? Neill McCarthy explores how performance testing can be made more agile and run from the early iterations of development. Learn how to implement early performance automation with appropriate tools in build tests, and what is required to performance test user stories early. Neill presents lessons learned from the "coal face" of performance testing on Agile projects and shares ideas on how you can add more agility to your performance testing.

Neill McCarthy, BJSS
