With Agile Development, Quality Is Built In, Not Bolted On!

Summary:

As Agile software development practices mature and move into the mainstream, it is vital that organizations understand how those practices can help them deliver higher-quality software. Agile is a software development methodology that promotes iterative development, open collaboration, and adaptability throughout the project life cycle. Today, the measures within many Agile projects focus on the successful delivery of software; we refer to these as process measures. Software is the end product, and while process measures track progress through delivery, there are other critical measures that need to be assessed. We refer to this collection of measurements as results measures. One critical results measure that is often overlooked is stability. The true measure of quality cannot be taken until the project is done and the software is in production, and I am not talking about improving defect density.

 

Software or system stability refers to the consistency of the systems or software that support a process or function. It examines the ability to process continuously, without manual intervention caused by process problems or coding errors. Integrity refers to the operational availability of software or systems without disruption, error, or difficulty requiring manual intervention, and is measured in hours per day. Stability is a state of quality that represents consistency in the processing of information supporting a business or technical function.

The absence of stability often results in poor user acceptance, inefficiencies, errors, and an inability to deliver repeatable results over time. All of these can have extremely negative implications for customer satisfaction and for financials, on both the cost and the value side.

 

Operational Integrity Measures

 

One key measure of stability is the technical intervention level of effort. This measure combines several metrics to evaluate the stability of software over time: the number of technical events requiring human intervention, the technical staff hours needed to correct problems, the hours of inoperability, and the rate of error (the number of errors per thousand transactions). It is important to note that user errors are not included in this calculation.
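
As a rough illustration, the sketch below (Python, with record fields I have invented for the example, not taken from any particular tool) shows how these metrics might be rolled up for a single reporting period.

from dataclasses import dataclass

@dataclass
class OperationalEvent:
    """One technical event that required human intervention.
    Field names are illustrative assumptions, not from the article."""
    staff_hours_to_correct: float   # technical staff hours spent on the fix
    hours_inoperable: float         # hours the system could not process work

def intervention_level_of_effort(events, transactions_processed, error_count):
    """Combine the metrics the article lists into one summary for a period:
    event count, staff hours, hours of inoperability, and errors per
    thousand transactions. `error_count` should exclude user errors."""
    return {
        "event_count": len(events),
        "staff_hours": sum(e.staff_hours_to_correct for e in events),
        "hours_inoperable": sum(e.hours_inoperable for e in events),
        "errors_per_thousand_txn": (
            (error_count / transactions_processed) * 1000
            if transactions_processed else 0.0
        ),
    }

Tracking this summary period over period is what lets you see whether stability is improving or degrading after go-live.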

Operational Event is defined as any abnormal processing or system function that results in an error or disrupts the ability of the system to complete a process or transaction.

Operational Intervention is defined as technical support performed in response to an unanticipated process, system, or software operational event.

Operational Hours are defined as the actual number of hours a system operates or is available for operation of the delivered functionality in a given period of time. For example, a system whose operational envelope is defined as 24 x 7 x 365 has 8,760 annual operational hours (24 hours a day for 365 days).

Operational Integrity Measurement (OIM) = the total number of operational hours without technical intervention, measured over a specific period of time. OIM should trend upward over the first few weeks following implementation into the production environment.
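
One way to compute and track OIM, assuming the hours affected by technical intervention are captured separately for each period (the subtraction below is my reading of "operational hours without technical intervention," not a formula given in the article):

def operational_integrity_measurement(operational_hours, intervention_hours):
    """OIM for a reporting period: operational hours in the period that
    did not require technical intervention.

    operational_hours  -- hours the system was operating or available in the period
    intervention_hours -- hours within the period affected by unanticipated
                          technical intervention (assumed to be tracked per period)
    """
    return operational_hours - intervention_hours

# Example: a 24-hour-a-day system over four consecutive weeks (168 hours each),
# with intervention time shrinking after go-live, produces the upward trend
# you want to see: [146, 154, 162, 166]
weekly_oim = [operational_integrity_measurement(168, h) for h in (22, 14, 6, 2)]
print(weekly_oim)

Plotting these period values is what produces the kind of trend chart discussed below.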

 

In the month of June there are 720 hours of operation (30 days at 24 hours per day). The chart above illustrates actual OIM measures for the month of June on a project. As you can see, technical difficulties were experienced early in the implementation and began to smooth out later in the month. Given the scale of the software development effort, the amount of disruption was considered minor and expected. However, operational integrity hit a low on the first of July. This was due to a significant number of issues uncovered in two areas: month-end processing and roll-ups. An investigation found that very little time had been spent defining the requirements and testing the functionality in these two areas. This clearly illustrates the point that you cannot fully measure the success of a project until it has passed through all of its business cycles in operation.

All too often a project is deemed done or complete and closed out upon transition into an operational environment. We have a term for this: the D&H factor (Dumpage and Heavage). Dumpage occurs when the project team delivers the software, basically dumps it on the operations team, and provides little support or assistance. Heavage is when the project team knows about issues but heaves them over the wall to operations and lets the operations team worry about fixing them. While this seems symptomatic of many Agile efforts, it applies to all software development efforts. Quality measures for any project involving software must extend beyond delivery and into operations. One cannot truly assess the value of a delivered project until it is operating. Post-turnover support from the project team, as well as measurement of the operational integrity of the delivered software, must be built into the measurement system and the criteria for evaluating project success.

 


About the Author

Kevin G. Coleman is a seasoned management consultant with nearly two decades of experience. He brings a unique perspective on management, technology, and the global risk environment. He was the Chief Strategist for Netscape and has worked for leading consulting organizations such as Deloitte & Touche and CSC Consulting. During his career he has consulted in dozens of countries and more than thirty states. He has personally briefed fifteen executives from the Global 100 and nearly one hundred CEOs worldwide, and he has testified before Congress on technology policy.

 
