The Joy of Legacy Code

Even though the code may have been written only five years ago, there it is: a sprawling, unintelligible mess that nobody wants to touch. For most people and teams, this reality is a cause for fear and loathing, something we want to sweep under the rug. We can, however, choose to see "bad" code as a challenge to restructure and refactor into a maintainable design that serves the business for years to come. Although legacy code places many constraints on design choices, it also offers the opportunity for incremental improvement. Michael Feathers shows you how to practice design within the boundaries of what some see as unintelligible code and explains ways to make the improvement process manageable. See how you can escape the fear that holds you back from productive action. Find out how to start with what you have now and progress toward a structure that supports the work at hand and immediately adds value.

Michael Feathers, Object Mentor

Harmful Project and Management Patterns Revealed

Every organization has its own project patterns. Some management teams take a long time to start a project. Others interrupt and divert project teams once they've started. Some project teams never finish because the product must be "perfect." Still others believe there is a single solution to all their problems, such as "Two weeks of overtime and we'll be caught up!", "If the testers just tested more," or "We just need some more time designing." Project managers fall into patterns, too: overly optimistic, pessimistic, risk-encouraging, risk-averse, and so on. Johanna Rothman explores different project and management patterns to help you understand which patterns are working for you and which are harmful.

Johanna Rothman, Rothman Consulting Group, Inc.

The Elusive Tester-Developer Ratio

Perhaps the most sought-after and least understood metric in software testing is the ratio of testers to developers. Many people want to learn the standard industry ratio so that they can determine the proper size of their test organization. Randy Rice presents the results of his recent research on this metric and explores the wide range of tester-developer ratios in organizations worldwide. Learn why this metric is rarely the best way to determine your test organization's staffing levels and how to understand and apply it in more helpful ways. Find out how different tester-developer ratios relate to test effectiveness. Take away a new appreciation of your own tester-developer ratio and ways to convey this metric meaningfully to management to help right-size your test team and improve the ROI of testing. Determine the "right ratio" of testers to developers in your team and company.

Randy Rice, Rice Consulting Services
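
The headline metric itself is simple division, which is exactly why it invites misuse. As a trivial illustration (hypothetical headcounts, not drawn from Randy's research), two organizations can report the identical ratio while differing in everything that actually determines test effectiveness:

```python
# Hypothetical headcounts. The ratio alone says nothing about automation,
# product risk, or tester skill, which is why identical ratios can hide
# very different realities.
teams = {
    "Team A": {"testers": 5, "developers": 20},   # 1:4
    "Team B": {"testers": 10, "developers": 40},  # also 1:4
}

for name, headcount in teams.items():
    ratio = headcount["developers"] / headcount["testers"]
    print(f"{name}: 1 tester per {ratio:.0f} developers")
```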

Communicating the Meaning Inside the Metrics

Measurement data is supposed to help you make better decisions; yet the information provided under the term "metrics" is often confusing, obscure, or irrelevant to those who need it most. Those providing measurement data frequently produce charts, graphs, and reports that fail to illuminate significant conditions and leave decision makers clueless. The solution is to understand essential models of decision making and to communicate in the language of the decision maker, not in technological lingo. Terry Vogt explains how to anticipate the informational needs of the measurement user and how to translate those needs into meaningful, actionable measurement information. He illustrates his discussion with examples of both good and poor measurement information.

Terry Vogt, Booz Allen Hamilton

Creating a 'Digital Cockpit' for Software Delivery

In many organizations, developing and delivering software has long been described as a "black box": requests go in and, many months later, something comes out. But is it what was needed? Did it provide value to the organization? Was it a quality product? In many software projects, managers are flying blind, with little meaningful or accurate data to guide their work. Nicole Bryan introduces the software delivery cockpit and explores the practical, pragmatic instruments and indicators (metrics and measurements) it should include. She focuses on both leading and lagging metrics and indicators that apply regardless of the development methodology you use. Nicole introduces a core set of metrics focused on the critical aspects of software delivery: code integrity compliance, product quality, business alignment, and efficiency.

Nicole Bryan, Borland Software Corporation

Quality Metrics for Testers: Evaluating Our Products, Evaluating Ourselves

As testers, we focus our efforts on measuring the quality of our organization's products. We count defects and list them by severity; we compute defect density; we examine the changes in those metrics over time for trends; and we chart customer satisfaction. While these are important, Lee Copeland suggests that to reach a higher level of testing maturity, we must apply similar measurements to ourselves. He suggests you count the number of defects in your own test cases and the time needed to find and fix them; compute test coverage, the measure of how much of the software you have actually exercised under test conditions; and determine Defect Removal Effectiveness, the ratio of the defects you actually found to the total number you should have found. These and other metrics will help you evaluate and then improve the effectiveness and efficiency of your testing process.

Lee Copeland, Software Quality Engineering
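
A minimal sketch of two of the metrics Lee names, with made-up numbers. The "total number you should have found" is not directly knowable; a common approximation, assumed here rather than prescribed by the session, is defects found in testing plus defects that escaped to the field:

```python
# Illustrative numbers only; approximating "should have found" as
# (found in test + found after release) is one common convention.

def defect_density(defects: int, kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / kloc

def defect_removal_effectiveness(found_in_test: int, escaped: int) -> float:
    """Defects found in testing as a fraction of all defects that existed."""
    return found_in_test / (found_in_test + escaped)

print(defect_density(120, 80.0))             # 1.5 defects per KLOC
print(defect_removal_effectiveness(90, 10))  # 0.9, i.e., 90% effective
```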

When to Ship? Choosing Quality Metrics

It's time to ship your product, and you're looking at pages of data about the testing work you've done over the last year. How well does this data prepare you to recommend shipping the product or delaying it, perhaps once again? Do you rely primarily on the data, or do you fall back on "gut feel" and intuition to make your decision? In this highly interactive session, Alan Page discusses how common measurements, such as code coverage, bug counts, and test pass rates, are often misused, misinterpreted, and poor predictors of software quality. Learn how to select both quantitative and qualitative metrics that evaluate your progress and help you make important decisions and recommendations for your product. Share your own ideas for test and quality metrics, and learn how to evaluate those metrics to ensure that they are accurately answering the questions you need them to answer.

Alan Page, Microsoft
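
As a small illustration of the kind of misreading Alan describes (invented numbers, not from the session): a pass rate is just tests passed over tests executed, and the headline figure hides which tests failed.

```python
# Test pass rate = passed / executed. Illustrative numbers only.
passed, failed = 970, 30
pass_rate = passed / (passed + failed)
print(f"pass rate: {pass_rate:.1%}")  # 97.0%

# The same 30 failures could be cosmetic glitches in a settings dialog or
# 30 crashes in the payment path; the single number cannot tell you which.
```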

Deception and Estimation: How We Fool Ourselves

Cognitive scientists tell us that we are hardwired for deception. It seems we are overly optimistic and, in fact, we wouldn't have survived without this trait. With this built-in bias as a starting point, it's almost impossible for us to estimate accurately. That doesn't mean all is lost. We must simply accept that our estimates are best guesses and continually re-evaluate as we go, which is, of course, the agile approach to managing change. Linda Rising has been part of many plan-driven development projects where sincere, honest people with integrity wanted to make the best estimates possible and used many "scientific" approaches to make it happen, all for naught.

Linda Rising, Independent Consultant

Function Point Analysis: A Quick and Easy Primer

The function point metric is used by many organizations worldwide to size systems more accurately. Knowing the size of a system allows developers to better meet customer demands for functionality within time and budget and to communicate about these issues with the system "owners." Based on the latest version of the International Function Point Users Group (IFPUG) Counting Practices Manual, David Garmus and David Herron provide a detailed explanation of the rules engineers must follow to count function points accurately. Join them to learn the value and use of function points within an overall software measurement program and the basics of how and when to use function point analysis (FPA). Examine real-world examples of software to see how to identify the different functional components according to IFPUG's FPA standards.

David Garmus, David Consulting Group
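
To give a flavor of the arithmetic, here is a sketch of an unadjusted function point (UFP) count using the standard IFPUG complexity weights. The component counts below are invented for illustration; classifying each component's type and complexity is exactly where the Counting Practices Manual rules come in.

```python
# Standard IFPUG complexity weights for the five functional component types.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4,  "high": 6},   # External Inputs
    "EO":  {"low": 4, "average": 5,  "high": 7},   # External Outputs
    "EQ":  {"low": 3, "average": 4,  "high": 6},   # External Inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # Internal Logical Files
    "EIF": {"low": 5, "average": 7,  "high": 10},  # External Interface Files
}

counts = {  # hypothetical system, counts invented for illustration
    "EI":  {"low": 4, "average": 2, "high": 1},
    "EO":  {"low": 2, "average": 3, "high": 0},
    "EQ":  {"low": 3, "average": 1, "high": 0},
    "ILF": {"low": 1, "average": 2, "high": 0},
    "EIF": {"low": 1, "average": 0, "high": 0},
}

ufp = sum(WEIGHTS[t][c] * n
          for t, by_complexity in counts.items()
          for c, n in by_complexity.items())
print(f"Unadjusted function points: {ufp}")  # 94 for these counts
```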

The Uncertainty Surrounding the Cone of Uncertainty

Barry Boehm first defined the "Cone of Uncertainty" of software estimation more than twenty-five years ago. The fundamental idea of the cone is quite intuitive: project uncertainty decreases as you discover more during the project. Todd Little takes an in-depth look at some of the dynamics of software estimation and questions some of the more common interpretations of the meaning of the "cone." Todd presents surprising data from more than one hundred "for market" software projects developed by a market-leading software company and compares it with other published industry data. Discover the patterns of software estimation accuracy Todd found, some of which go against common industry beliefs. Understanding the bounds of uncertainty and the patterns from past projects helps us plan for and manage the uncertainties we are sure to encounter.

Todd Little, Landmark Graphics Corporation
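
For context, here is a sketch of the common interpretation Todd's data challenges: the widely cited cone multipliers, as tabulated by Steve McConnell from Boehm's work. These are the published reference values, not results from Todd's study.

```python
# Commonly cited Cone of Uncertainty multipliers: roughly how far off an
# estimate made at each milestone might be, per McConnell's tabulation of
# Boehm's data. Not results from Todd Little's dataset.
CONE = {
    "Initial concept":             (0.25, 4.00),
    "Approved product definition": (0.50, 2.00),
    "Requirements complete":       (0.67, 1.50),
    "UI design complete":          (0.80, 1.25),
    "Detailed design complete":    (0.90, 1.10),
}

nominal_months = 12  # hypothetical point estimate
for milestone, (low, high) in CONE.items():
    print(f"{milestone:28s}: {nominal_months * low:4.1f} "
          f"to {nominal_months * high:4.1f} months")
```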