Conference Presentations

Choosing Effective Test Metrics

Every software project can benefit from some sort of metrics, but industry studies show that 80 percent of software metrics initiatives fail. How do you know if you've selected the right set of test metrics and whether they support your organizational goals? Alan Page offers methods for determining effective and useful test metrics for software quality and individual effectiveness and presents new studies showing correlation between certain metrics and post-ship quality. Alan provides examples of how commonly used metrics can be easily misused and offers helpful tips for implementing the right test metrics for your project and organization. Find out what can cause metrics projects to fail and what you can do to avoid being part of the 80 percent failure statistic.

Alan Page, Microsoft Corporation

Journey to Test Automation Maturity

Organizations that want to automate their testing generally go through a number of stages before they reach maturity. Whether you are about to begin your journey or are well under way, it is important to know where you are going and where you could go. In automating test execution, many organizations stop short of achieving their maximum benefits. This presentation looks at six levels of maturity in test automation and includes a self-assessment test to see where you are. It is important to have good objectives and realistic plans to achieve them; in test automation, however, objectives and plans often seem plausible at first and later prove to be poorly expressed or unrealistic. This presentation covers typical problems and examples of unrealistic automation plans and objectives. Leave with advice to help you make a successful journey to test automation maturity.

Dorothy Graham, Grove Consultants

Defect Prediction with Reliability Growth Modeling

Although typically used at places like NASA, reliability growth modeling can also be applied to everyday business and financial applications. Reliability growth modeling predicts how many defects will be in your release, how quickly your testing should uncover them, and how many will still be present after delivery. You can use this information to make ship/no-ship decisions, manage test coverage, and appropriately staff the help desk. Using CASRE (Computer Aided Software Reliability Estimation), a free, downloadable tool from the Jet Propulsion Laboratory, Michael Allegra outlines a step-by-step process to produce and interpret dynamic reliability growth models. Michael walks you through an example project, explains how to get started with modeling, and demonstrates that you don't have to be a rocket scientist to use it.
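
To make the idea concrete: one common reliability growth model is the Goel-Okumoto model, which assumes the expected number of defects found by test time t follows m(t) = a(1 - e^(-bt)), where a is the estimated total number of defects and b is the detection rate. The sketch below fits that model to cumulative defect counts with Python and SciPy. It illustrates the general technique, not CASRE itself, and the weekly defect data and starting parameters are invented for the example.

    # A minimal sketch of reliability growth modeling, assuming the
    # Goel-Okumoto model; the defect counts below are hypothetical.
    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        # Expected cumulative defects found by time t:
        # a = estimated total defects, b = per-week detection rate.
        return a * (1.0 - np.exp(-b * t))

    weeks = np.arange(1, 11)                                    # test weeks 1..10
    found = np.array([12, 22, 30, 37, 42, 46, 49, 51, 53, 54])  # cumulative defects

    (a, b), _ = curve_fit(goel_okumoto, weeks, found, p0=(60.0, 0.2))

    print(f"Estimated total defects:        {a:.1f}")
    print(f"Estimated defects still latent: {a - found[-1]:.1f}")
    print(f"Expected found by week 15:      {goel_okumoto(15, a, b):.1f}")

Fed with real test data, estimates like these are what support the ship/no-ship, test coverage, and help desk staffing decisions described above.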

Michael Allegra, GSX

You've Just Been Named Manager of Software Process Improvement

We have all heard of the accidental project manager, but how about the accidental process improvement manager? If you have fallen into the role of process improvement, there are tips and tricks to help you navigate through sometimes treacherous waters. As a former quality assurance manager, Sandi Oswalt returned from maternity leave to be told she had a new job: process improvement. In response to her question about the goals for process improvement, her boss simply said, "Hmmm? . . . Just improve." Fortunately, things got better from there for Sandi and eventually for the organization. Sandi shares some important insights from her first year: trust is key, so sanitize information and ask permission; data speaks volumes, so back up conjecture with basic metrics; let people find their own solutions; change is hard and takes time; all the mandates come from management; and finally, don't give up!

Sandi Oswalt, First American Credco

Establishing a CMMI-Compliant Metrics Program

Implementing a useful measurement program that also addresses the measurement requirements of the SEI CMMI® process areas can be a daunting task. Organizations pursuing Level 2 and Level 3 maturity have difficult decisions to make when designing a measurement program that is both practical and fully compliant. Steve Lett describes the measurement requirements contained within the CMMI® Maturity Level 2 and 3 Process Areas (PAs). He then recommends a common-sense approach and a set of measurements that will address all of the PA requirements. See examples of measurement charts and learn how they can become highly valued improvement tools within your organization. Find out about the prerequisites for success and the steps necessary to establish a measurement program.

Steven Lett, The David Consulting Group

Tips for Performing a Test Process Assessment

Looking for a systematic model to help improve testing practices within your team, department, or enterprise? Recently, Lee Copeland has led several major test process assessment projects for both small and large test organizations. Whether you are chosen to lead an assessment project within your organization or just want to get better at testing, join Lee as he shares the insights he has gained, beginning with the importance of using a proven assessment model. Lee discusses the pre-assessment preparation required, including reviewing documentation and choosing interview candidates; tips for interviewing using a questionnaire; analyzing the data you gather; writing an assessment report; and delivering your findings in a way that will be understood and acted upon.

Lee Copeland, Software Quality Engineering

Software Quality Metrics as Agents for Change

What is the purpose of software quality metrics, and what value do they provide to the organization? Which metrics not only report on but also help drive changes and improvements in software quality? Based on his work at EMC, Jim Bampos discusses the metrics EMC uses to predict software quality at ship time and the key quality questions to ask customers after ship. Find out what it takes to roll out a successful metrics program and the results you can expect, including quality ownership across the organization and improved customer satisfaction. Watch out for unintended consequences and wrong behaviors that can result from a metrics program. Learn from Jim the key steps to ensure that your organization adopts the metrics program and that people are held accountable for the data and results.

James Bampos, EMC Corporation

Leading Cultural Change When Implementing Process Improvements

When we are part of an improvement initiative such as CMMI®, Six Sigma, or Agile practices, we often focus on the technical aspects and pay little attention to the people and cultural issues. Major change produces a significant disruption of expectations whether the change is perceived as positive or negative. So, you need a defined process to help ensure that your improvement initiative achieves its goals. Jennifer Bonine presents the Organizational Change Management (OCM) process to help you manage the human aspects of implementing major, complex changes. She describes eight human risk factors that can sabotage process improvement programs. Learn from Jennifer how OCM can help you deal with people’s reactions to change and provide you with a change implementation architecture.

Jennifer Bonine, Express Scripts

Politics and Polemics in a Corporate Measurement System

Long, long ago in a company far, far away, Bill Curtis designed one of the largest software measurement systems ever deployed. Ultimately, the measures it reported to senior management foretold the loss of one-half of the fifth-largest company on earth. Covering all of the corporation's far-flung businesses, this measurement system highlighted the four critical factors that determine a project's fate and provided categories for normalizing and classifying the company's software. It showed that many executives do not understand normal variation, and others do not want to admit what they do know. Through Bill's fascinating and sometimes bizarre story, you'll be convinced that system measurement is not only a technical undertaking but also a political one. In addition, you will learn about measurement standards for successful systems, standards you may be able to apply in your organization.

Bill Curtis, Borland Software Corporation

Peanuts and Crackerjacks: What Baseball Taught Me about Metrics

Because people can easily relate to a familiar paradigm, analogies are an excellent way to communicate complex data. Rob Sabourin uses baseball as an analogy to set up a series of status reports to manage test projects, share results with stakeholders, and measure test effectiveness. For test status, different audiences (test engineers, test leads and managers, development managers, customers, and senior management) need different information, different levels of detail, and different ways of looking at data. So, what "stats" would you put on the back of Testing Bubble Gum Cards?

Robert Sabourin, AmiBug.com Inc
