Conference Presentations

Using Source Code Metrics to Guide Testing

Source code metrics are frequently used to evaluate software quality and identify risky code that requires focused testing. Paul Anderson surveys common source code metrics including Cyclomatic Complexity, Halstead Complexity, and additional metrics aimed at improving security. Using a NASA project as well as data from several recent studies, Paul explores the question of how effective these metrics are at identifying the portions of the software that are the most error-prone. He presents new metrics targeted at identifying integration problems. While most metrics to date have focused on calculating properties of individual procedures, newer metrics look at relationships between procedures or components to provide added guidance. Learn about newer metrics that employ data mining techniques implemented with open source machine-learning packages.

  • Common code metrics and what they mean
Paul Anderson, GrammaTech, Inc.
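
As a rough illustration of the kind of per-procedure metric discussed above, the sketch below approximates cyclomatic complexity for Python functions by counting decision points with the standard ast module. It is a minimal, hypothetical example, not GrammaTech's tooling, and the set of node types counted as decision points is an assumption; real analyzers differ in what they count.

```python
# Minimal sketch: approximate cyclomatic complexity per function,
# counted as 1 + number of decision points. Illustration only.
import ast

# Node types treated as decision points (an assumption; real tools differ).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp,
                  ast.ExceptHandler, ast.With, ast.Assert, ast.BoolOp)

def cyclomatic_complexity(func_node: ast.AST) -> int:
    """Return 1 + the count of decision points inside a function."""
    complexity = 1
    for node in ast.walk(func_node):
        if isinstance(node, DECISION_NODES):
            # `a and b and c` adds one branch per extra operand.
            if isinstance(node, ast.BoolOp):
                complexity += len(node.values) - 1
            else:
                complexity += 1
    return complexity

def report(source: str) -> None:
    """Print the complexity of every function found in a source string."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            print(f"{node.name}: {cyclomatic_complexity(node)}")

if __name__ == "__main__":
    report("""
def triage(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            return i
    return "none"
""")
```
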
Measure Quality on the Way In - Not on the Way Out

If you have been a test manager for longer than a week, you have probably experienced pressure from management to offshore some test activities to save money. However, most test professionals are unaware of the financial details surrounding offshoring and are only anecdotally aware of factors that should be considered before outsourcing. Jim Olsen shares his experiences and details about the total cost structures of offshoring test activities. He describes how to evaluate the maturity of your own test process and compute the true costs and potential savings of offshore testing. Learn what is needed to coordinate test practices at home with common offshore practices, how to measure and report progress, and when to escalate problems. Jim shares the practices Dell uses for staffing and retention, including assessing cultural nuances and understanding foreign educational systems.

Jan Fish, Lifeline Systems
The Impact of Poor Estimating - And How to Fix It

The team, running Scrum by the book for three months, was continually failing to meet its delivery dates. As a result, trust between the business managers and the team degraded almost to the point of no return. The team, which held bi-weekly retrospectives, could not pinpoint the problems causing its inability to ship. Mitch Lacey was asked to assist the team in finding the root cause of its problems. He analyzed multiple aspects of the project, from individual work items to planning meetings. While multiple issues were identified, one thing stood out above all others: the estimation process the team used had caused it to miss its deadlines. Mitch discusses the estimation problems the team was having, how they were discovered and fixed, and the resulting improvements in financial results and customer satisfaction.

Mitch Lacey, Ascentium Corporation
Agile Development Practices 2007: Agile Development and Its Impact on Productivity

An agile approach can deliver recognizable value to organizations. Using examples from recent projects, David Garmus demonstrates that software development projects can benefit from an agile methodology when it is appropriate. Delivering a project with an agile implementation methodology can be faster and more productive than delivering the same project with a traditional waterfall approach. David provides actual delivery (project performance) data from these projects to support his conclusions.

David Garmus, David Consulting Group
Empirical Studies of Agile Practices

Gone are the religious wars of plan-driven vs. agile software development methodologies and practices. Recent surveys indicate that agile practices are being adopted in many software development organizations, with others seriously contemplating making the switch. For most organizations, the question has now turned from plan-driven vs. agile to which agile practices they should use in their development process to most improve their business results. The experiences of other organizations that have adopted agile practices help answer this question. Based on her empirical studies of Extreme Programming teams and other agile practices, Laurie Williams shares her views on which agile methods have been most beneficial to industrial teams practicing agile development. Armed with real data, you can implement changes with more confidence that they will add value in your organization.

Laurie Williams, North Carolina State University
Test Metrics: The Good, the Bad, and the Ugly

Appropriate metrics used correctly can play a vital role in software testing. We use metrics to track progress, assess situations, predict events, and more. However, measuring often creates "people issues" which, if ignored, become obstacles to success and can even destroy a metrics program, a project, or an entire team. When being measured, people can react with creative, sophisticated, and unexpected behaviors, so our well-intentioned efforts may have a counterproductive effect on individuals and the organization as a whole. Metrics programs can also be distorted by the way metrics are depicted and communicated, and the ugly side of metrics appears when people manipulate them outright. In this interactive session, John Fodeh invites you to explore the good, the bad, and the ugly sides of test metrics, and he shows how to identify and use metrics for assessing the state and quality of the system under test.

John Fodeh, HP Software
Measures and Metrics for Your Biggest Testing Challenges

Over the course of many STAR conferences, Ed Weller has collected a list of your biggest challenges in testing: lack of time, unrealistic deadlines, lack of resources, inadequate requirements, last-minute changes, knowing when to stop testing, and poor quality code from development. Using this list and Victor Basili's Goal-Question-Metric approach to measurement, Ed identifies the measurements and metrics that will help test managers and engineers objectively evaluate and analyze their biggest problems. By doing so, you can map out improvement options and make a strong business case for the resources and funding you need. By providing management with objective evidence rather than subjective opinions - which managers too often dismiss as "whining" - you will improve your chances for success. Just as importantly, you will be able to use these measurements to guide and communicate your progress with meaningful data.

Edward Weller, Integrated Productivity Solutions, LLC
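
To make the Goal-Question-Metric idea above concrete, here is a small, hypothetical sketch of how one of the listed challenges ("knowing when to stop testing") might be broken down into questions and measurable metrics. The specific questions and metric names are illustrative assumptions, not Ed's recommended set.

```python
# A minimal, illustrative GQM breakdown for one testing challenge.
# Goal -> Questions -> Metrics; the entries are assumptions for
# demonstration, not a prescribed measurement set.
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    metrics: list[str] = field(default_factory=list)

@dataclass
class Goal:
    statement: str
    questions: list[Question] = field(default_factory=list)

stop_testing = Goal(
    statement="Decide objectively when to stop testing the release",
    questions=[
        Question("Is defect discovery slowing down?",
                 ["new defects found per week", "defect arrival trend"]),
        Question("Have we covered the riskiest areas?",
                 ["requirements covered by executed tests (%)",
                  "open defects by severity"]),
        Question("Is the remaining risk acceptable to the business?",
                 ["estimated residual defects",
                  "customer-impacting defects still open"]),
    ],
)

def print_gqm(goal: Goal) -> None:
    """Render the GQM tree as indented text for a status report."""
    print(f"GOAL: {goal.statement}")
    for q in goal.questions:
        print(f"  QUESTION: {q.text}")
        for m in q.metrics:
            print(f"    METRIC: {m}")

print_gqm(stop_testing)
```
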
When Good Numbers Go Bad

All measurement numbers begin life with the objective of being good and useful tools. Too often, a combination of mistakes, misunderstandings, organizational politics, and poor usage intersects to make these "Good Numbers Go Bad." Valuable measurements act as a nexus that focuses the members of an organization on its goals. Such measurements are relevant to the organization, predictive in that they provide foreknowledge of events, and broad enough to be useful in more than one situation. Whether you are a software manager, project manager, or a measurement guru, one of your roles is to act as the keeper of the numbers and the steward of useful information. Thomas Cagley illustrates the unfortunate realities of how good numbers can go bad and offers suggestions for making measurement a positive force in your development organization.

  • Common measurement mistakes that many organizations make
Thomas Cagley, The David Consulting Group
We Need It by October: What's Your Estimate?

Letting good estimates made by smart people be overwhelmed by the strong desires of powerful people is a cardinal sin of project management. Accurate estimates are the foundation of all the critical project decisions regarding staffing, functionality, delivery date, and budget. How do we properly estimate in a world where tradition declares that the deadline is set before the requirements are even known? Tim Lister offers practical advice on dealing with this thorny issue. He presents strategies and tactics for project estimating and describes his favorite estimating metric, the Estimating Quality Factor (EQF). By thinking of your project this way - goals are important and so are good estimates - you will be on the road to better quality and better projects. If you can learn to start the project and estimate continuously as events unfold, your goals and estimates will eventually converge.

Tim Lister, Atlantic Systems Guild, Inc.
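
The abstract above does not define EQF, so the following is a minimal sketch assuming Tom DeMarco's commonly cited formulation: the area under the actual result over the project's duration, divided by the accumulated absolute estimation error over time (higher is better). The estimate history and numbers below are invented for illustration.

```python
# Minimal sketch of an Estimating Quality Factor (EQF) calculation,
# assuming DeMarco's formulation: (actual value x duration) divided by
# the accumulated absolute estimation error over time.

def eqf(estimates, actual, duration):
    """
    estimates: list of (time, estimated_value) pairs, ordered by time;
               each estimate holds until the next one (step function).
    actual:    the value finally delivered (e.g., effort in staff-months).
    duration:  total elapsed project time in the same time units.
    """
    error_area = 0.0
    for i, (t, est) in enumerate(estimates):
        # Each estimate stays in force until the next re-estimate (or the end).
        t_next = estimates[i + 1][0] if i + 1 < len(estimates) else duration
        error_area += abs(est - actual) * (t_next - t)
    if error_area == 0:
        return float("inf")  # perfect estimating
    return (actual * duration) / error_area

# Hypothetical 12-month project, final effort 100 staff-months,
# with re-estimates made as events unfolded (month, estimate).
history = [(0, 60), (3, 80), (6, 95), (9, 100)]
print(f"EQF = {eqf(history, actual=100, duration=12):.1f}")
```

Re-estimating continuously, as the abstract suggests, is what makes the error area shrink and the EQF climb over the life of the project.
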
Measuring and Monitoring the End Game of a Software Project: Part Deux

How do you know when a product is ready to ship? QA managers have faced this question for years. Mike Ennis shares a process he uses to take the guesswork out of when to ship a product, replacing it with key metrics that help you rationally make the right ship decision. Learn how to estimate, predict, and manage your software project as it gets closer to its release date. Mike shows you which metrics to track and how to collect them without undue overhead on the project. Define a ratings scale for each metric you collect, and create a spider chart indicating whether the product is ready - or not. Mike's presentation is a must for individuals and organizations that are serious about releasing their software products when they are ready - and not before - and knowing in advance when the software will be ready.

  • Manage release risks in any software project
Mike Ennis, Accenture
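
The sketch below shows one way the rating-scale idea described above might look in practice: each tracked metric gets a 1-5 rating and a per-metric release threshold, and the same ratings can feed a spider (radar) chart. The metric names, ratings, and thresholds are hypothetical, not Mike's actual criteria.

```python
# Hypothetical ship-readiness check: rate each release metric on a 1-5
# scale and compare against per-metric thresholds. The metrics and
# numbers are invented for illustration; substitute your own.

# metric name -> (current rating, minimum rating required to ship)
ratings = {
    "test pass rate":           (5, 4),
    "open severity-1 defects":  (4, 5),
    "defect arrival trend":     (4, 3),
    "requirements coverage":    (3, 3),
    "performance vs. targets":  (4, 4),
}

def ship_readiness(ratings):
    """Return (ready, blockers): blockers are metrics below threshold."""
    blockers = [name for name, (score, needed) in ratings.items()
                if score < needed]
    return (not blockers), blockers

ready, blockers = ship_readiness(ratings)
print("READY TO SHIP" if ready else f"NOT READY - below threshold: {blockers}")

# The same (metric, rating) pairs can be plotted on a spider/radar chart,
# for example with matplotlib's polar axes, to show readiness at a glance.
```
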
