Measurement in the CMM

[article]
Summary:

A recent question on the Quality Engineering-Metrics message board (on StickyMinds.com) asked about measurement at the different levels of the Software Capability Maturity Model. This article begins a series that will highlight the measurement requirements at each CMM level, starting with Level 2. Measurement in the CMM is often misunderstood when people focus only on the "Measurement and Analysis" sections of the model; this article looks at the measurement that is actually woven throughout it.

To appreciate the scope of the measurement activities needed to satisfy the CMM, you have to look at the entire model, from Commitments and Abilities through all the activities to Measurements and Verifying Implementation. If these terms (Commitments, Abilities, etc.) sound like arcane bits of CMM-ese, read chapters 3 and 4 of The Capability Maturity Model: Guidelines for Improving the Software Process [1] to understand these terms and their relationship. The terms are not as important as recognizing that measurement is spread throughout the CMM.

Another important issue is the goal of the measurement. If you are looking for a checklist of what you need in order to achieve "Level 2," you will miss the value of measurement. Instead, you should approach measurement from a business-value perspective: "What is important to the management of software development in my business?" From this perspective, identify the data you need to run your business, and translate that into measures. Starting with Level 2, the most significant measures are size, effort, and schedule. Also part of Level 2 measurement are critical computer resources (think CPU, disk, memory, and other performance measures), the rate of requirements change, configuration management measures, and quality assurance measures.

In this series of articles, the word "measure" will mean "a unit of measurement (such as source lines of code or document pages of design)." Level 3 adds quality measures and changes the way measures are used. Level 4 uses the same basic measures but again changes the way they are used. Level 5 changes the way measures are used once more, as a foundation for continuous improvement. What follows is a straightforward list of what needs to be estimated, collected, analyzed, and saved for future use at Level 2. I've avoided justifications and explanations of usage, as these are well documented in many software engineering books.

Level 2 Measurement Basics
Level 2 is about commitment to deliver products in terms of content, cost, and schedule. Thus, the measures should support the project manager and project team in meeting this goal. Any element that is estimated is also tracked for actual expenditures and is then available to improve estimating accuracy in the next cycle. The following measures support the Project Planning and Project Tracking and Oversight Key Process Areas (KPAs).

1) Size
Size is a fundamental measure, since it can be used to estimate effort and defects. Whatever dimension is used to measure size, it should provide a normalizable unit of measure (a volume or count) that can be used when estimating effort and defects. If you have 100 units of size, that should allow you to determine there will be n units of effort, or m defects. If you have 500, 1,000, or 10,000 units of size, effort and defects should scale from the smaller number in some predictable way. This is the basis for most estimating models (SLIM, COCOMO, etc.). There may be lower or upper sizes where the relationship breaks down (in the original COCOMO model, 2,000 lines of source code was the recommended lower bound). In many cases, the size measure can be a count: for instance, the number of problem reports per week, the number of enhancement requests, or the number of help desk calls per day, all of which can be translated into effort.
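
The article doesn't prescribe a particular model, but a minimal Python sketch of the kind of power-law size-to-effort relationship described above might look like the following. The coefficients are the published Basic COCOMO organic-mode values, used purely as an illustration; a real organization would calibrate a and b from its own historical data.

    # Illustrative size-to-effort relationship (a power law), as used by
    # COCOMO-style models. Coefficients are Basic COCOMO organic-mode values;
    # calibrate them from your own project history before relying on them.
    def estimate_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
        """Estimated effort in person-months for `kloc` thousand lines of code."""
        return a * kloc ** b

    for size in (2, 10, 50, 100):
        print(f"{size:>4} KLOC -> {estimate_effort(size):6.1f} person-months")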

Size estimating at Level 2 puts you into the mode of estimating a target, measuring against the target, and taking mid-course corrections. Think of the fire control on World War I battleships: they estimated range, wind, and so on, then fired, corrected, and fired again. This is Level 2. Level 3 adds radar control. Level 4 adds computer-controlled fire control. Note that the size-to-defect relationship is a Level 4 Software Quality Measurement capability [2]; however, when defining and collecting measures at Levels 2 and 3, you should look ahead to see what you will need in the future.

2) Effort
Effort may be generated via an estimating model that uses size as an input (among other parameters), it may be gathered from individual tasks in a work breakdown structure, or it may be a combination of both. In some cases an effort-based estimate makes more sense than a size-based estimate, especially in a legacy product where enhancements are small incremental additions to the product. Even for small incremental changes, size is still a useful measure to track, as it provides input to the coding effort, inspection effort, and defect estimates, and actual size can be measured by the configuration management system.

When dealing with corrective maintenance (defect fixing), you should be using the number of changes or defects, distributed by severity. This is still a volume or count and can be translated into lines of code when useful, but typically the count of fixes will be more useful. When individual work breakdown structure tasks exceed forty hours, the usefulness of effort-based estimating declines: the larger tasks start to become "fuzzy" in their definition, and the risk of omissions or oversights increases. I have always thought one of the side benefits of size-based estimating is that it shows the correlation between functions in the requirements statement and the effort required to implement them, at a level of granularity that is harder to dispute than large-effort estimates.
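
As a concrete illustration of tracking effort estimates against actuals so that the variance can feed the next estimating cycle, here is a minimal sketch; the task names and hours are invented, not taken from the article.

    # Hypothetical work-breakdown-structure roll-up: estimated vs. actual hours.
    # The variance becomes input to the next round of estimating.
    wbs = [
        # (task, estimated hours, actual hours)
        ("update parser for new field", 24, 31),
        ("regression tests",            16, 14),
        ("inspection and rework",        8, 12),
    ]

    estimated = sum(est for _, est, _ in wbs)
    actual = sum(act for _, _, act in wbs)
    print(f"estimated {estimated} h, actual {actual} h, "
          f"variance {100 * (actual - estimated) / estimated:+.0f}%")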

3) Cost
Cost data not related directly to effort should be estimated and tracked. This includes travel, equipment, overhead, and other non-labor expenses.

4) Schedule
Schedule data should include start and end dates, as well as intermediate key milestones and activities in the project plan. The schedule should relate to the effort and cost estimates.

5) Critical Computer Resources
If performance or capacity is an issue for the installed software, then estimate and track these measures. Merely saying, "Performance is not an issue because we buy more hardware," is not a good answer here. If more resources are required, someone somewhere will need advance notice, as purchasing cycles do take time.
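
If critical computer resources do apply, a minimal sketch of the estimate-and-track idea might look like this; the capacity, threshold, and monthly figures are invented for illustration.

    # Hypothetical tracking of a critical computer resource (disk, in GB)
    # against planned capacity, flagging when utilization crosses a threshold
    # early enough to allow for the purchasing cycle.
    capacity_gb = 500
    threshold = 0.80                          # flag at 80% to leave procurement lead time
    monthly_usage_gb = [310, 345, 380, 415]   # tracked actuals

    for month, used in enumerate(monthly_usage_gb, start=1):
        utilization = used / capacity_gb
        flag = "  <-- start the purchasing cycle" if utilization >= threshold else ""
        print(f"month {month}: {utilization:.0%}{flag}")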

6) Engineering Facilities and Support Tools
Development and test hardware and support software need to be estimated and tracked.

The elements above are those required by the Project Planning and Project Tracking and Oversight Key Process Areas. Estimates, actuals, re-estimates, and replans all need to be captured, along with the reasons for changes, as part of improving the database used to develop future estimates.

7) Other Level 2 Measurements
Outside the Project Planning and Tracking KPAs, additional Level 2 measures are needed.

Requirements Management. This KPA requires that measurements be made to determine the status of the requirements management process. Typically, the status of each requirement is known, as well as the number of open, closed, and in-process modifications to requirements, since these changes directly affect the effort and schedule commitments the project has made. As an example, at the end of a project you would want a sum of new and changed requirements and their impact assessments in order to evaluate the accuracy of the original project estimate. Hitting your effort or schedule estimate does not mean much in terms of estimating accuracy if you had a 50 percent increase (or decrease) in requirements that was not tracked and evaluated during the project's life. If you know the ratio of new or changed requirements to the original count, over time you develop a baseline that says, "We have identified n requirements, and we know this typically grows by 30 percent, so we should estimate for the larger number."
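
A minimal sketch of the requirements-volatility arithmetic described above, with invented counts, might look like this:

    # Hypothetical requirements-volatility calculation: new and changed
    # requirements relative to the original baseline count. Tracked across
    # projects, the ratio becomes a planning factor for the next estimate.
    baseline = 120   # requirements identified at project start
    added    = 22    # new requirements accepted during the project
    changed  = 14    # existing requirements modified during the project

    volatility = (added + changed) / baseline
    print(f"requirements volatility: {volatility:.0%}")
    print(f"plan the next similar project for about "
          f"{baseline * (1 + volatility):.0f} requirements")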

Subcontract Management. Subcontractor cost, staffing, and schedule performance should be tracked, which suggests that the same measures tracked in Project Tracking and Oversight are tracked by the contracting organization. Critical computer resources, as allocated to the subcontractor, are tracked. The cost of managing the subcontractor, and the subcontractor's delivery dates, should be compared to the estimates in the plan.

Software Quality Assurance. The cost and effort spent on quality assurance activities should be measured and compared to the plans, as should the number of audits or reviews performed and the completion of SQA tasks. These measures are one indicator of whether "real" SQA activity is occurring; the actual effort expended is one way to determine whether there is an effective SQA organization. A second measure is the number of nonconformance reports and their status (e.g., days open, number open or in different stages of resolution).
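
A minimal sketch of that second measure, summarizing nonconformance reports by status and age, might look like the following; the report IDs, statuses, and dates are invented.

    # Hypothetical nonconformance-report summary: count reports by status and
    # show the age (days open) of each report that is still open.
    from collections import Counter
    from datetime import date

    ncrs = [
        # (id, status, date opened)
        ("NCR-101", "open",   date(2001, 3, 12)),
        ("NCR-102", "closed", date(2001, 3, 20)),
        ("NCR-103", "open",   date(2001, 4, 2)),
    ]
    as_of = date(2001, 4, 30)

    print(Counter(status for _, status, _ in ncrs))
    for ncr_id, status, opened in ncrs:
        if status == "open":
            print(f"{ncr_id}: open {(as_of - opened).days} days")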

Configuration Management. The cost and effort spent on configuration management activities should be measured and compared to plans. The number of change requests, trouble reports, change/problem summaries (including severity or priority of the change or problem), revision history, change in size of products, and results of the various configuration audits are all measures of configuration management activities.
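
One of the configuration management measures listed above, the change in size of products, can be derived from revision history in the configuration management system (which, as noted earlier, is also where actual size is measured). A minimal sketch, with invented module names and line counts:

    # Hypothetical "change in size of products" derived from revision history:
    # lines of code for each module at successive baselines.
    revisions = {
        "parser.c": [1200, 1240, 1310],
        "report.c": [800, 790, 805],
    }

    for module, sizes in revisions.items():
        delta = sizes[-1] - sizes[0]
        print(f"{module}: {sizes[0]} -> {sizes[-1]} lines ({delta:+d})")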

Closing
I've presented a brief overview of the Level 2 measurement requirements of the Software Capability Maturity Model. These measurements should be aligned with your business needs. If some of the items mentioned above truly have no business value to you, then don't collect them. However, "I don't have time," "It's too hard," or "We're different" are excuses, not business reasons. If you are not collecting and using the measures suggested by the CMM, there ought to be a reasonable business case for that position. Remember that the model was built on good development practices observed across a wide range of companies, and that collective wisdom shouldn't be arbitrarily ignored.

End Notes
[1] Paulk, et al. The Capability Maturity Model: Guidelines for Improving the Software Process. Addison-Wesley, 1995, pp. 29-79.
[2] Ibid., p. 359.

Copyright © 2001, E.F. Weller
® CMM, Capability Maturity Model, Capability Maturity Modeling, and Carnegie Mellon are registered in the U.S. Patent and Trademark Office
