We are on the brink of a massive shift in CM, both philosophically and technologically. How we operate, and how we are perceived, must change with it. The changes are incredibly exciting and offer enormous opportunity for the discipline. We're being backed against the wall in many organizations, but we could be climbing on top of that wall like Berliners at Checkpoint Charlie when the Wall fell, moving our CM nation to a much better place.
Many of the tools we have play nicely in our sandbox. But the changes we make going forward must be geared toward the organizational management level. We're all so busy controlling the color of each pixel that we forget senior management rarely sees the pixels, only the picture. One hundred new features may go into a release. How many of them does the executive level care about? Maybe one or two. The rest is just the expected improvement to match the competition, or fixes to problems from previous releases. All the CCB work, the file-by-file management, and the myriad test cases and results don't mean diddly up the chain. The questions are always simple and almost always the same: When will "it" be done? How much will "it" cost to produce?
All the grunt work on our end becomes static outside our circle. That means we need to do much better packaging of our "exports." We don't just export software products; we export information about how well we operate. Are systems running effectively? Are we continually introducing problems as we fix things? Are defects at an all-time low because requirements are better written than ever and we've improved the training cycles for software developers? Is there a disparity between internal and external defect rates? The tools and processes we use control the SDLC; they also have to be able to show our successes and failings. Why shouldn't the senior level be directly pulling reports showing the number of open CRs, maintenance releases, and other activity for major systems to understand system health? Many in IT complain when the order comes down to push a dying system well past its viable state instead of spending those resources on the next generation. We complain when funding is cut because "they just don't understand." Our tools are not built with the executive in mind, yet executives are among the most important customers we have. The company is spending hundreds of thousands, if not millions, of dollars on various efforts, yet most places depend on status reports passed by word of mouth, with all their intonations, shades, and opportunistic wording. That's a huge shortfall for our tools. Our tools are about exactness, not interpretation. This isn't just configuration management at a micro level. We all want to provide better solutions at a macro level. Lifecycle management is what we ultimately want to achieve. If we want those better solutions, we can't just buy them; we have to set the expectations for ourselves.
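As a sketch of what such a report could look like, here is a minimal example that counts open change requests per major system. The records and field names (`system`, `status`) are hypothetical stand-ins for whatever export a real CM tool provides:

```python
from collections import Counter

# Hypothetical change-request records exported from a CM tool.
change_requests = [
    {"system": "Billing", "status": "open"},
    {"system": "Billing", "status": "closed"},
    {"system": "Payroll", "status": "open"},
    {"system": "Payroll", "status": "open"},
]

# Count open CRs per major system -- the kind of one-glance
# health figure an executive could pull directly.
open_by_system = Counter(
    cr["system"] for cr in change_requests if cr["status"] == "open"
)

for system, count in sorted(open_by_system.items()):
    print(f"{system}: {count} open CR(s)")
```

The point isn't the ten lines of code; it's that this summary could be generated on demand from the metadata we already keep, rather than relayed by word of mouth.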
It's one thing to hold yourself to a standard of activity. It's something else to hold yourself up for others to see, especially when you are setting your own goals. That visibility raises the competence of everyone involved. That push for excellence is the kind of behavior that improves quality at every step. Athletes often compete mostly against themselves: can they shave off that extra second, throw more accurately, or hang on just ten more seconds? In our case, it's a function of doing the work right across functional areas. Are we capturing and delivering baselines correctly? Are we leaving no issue abandoned?
To some extent, this is what audit is about: show that we do what we say we expect of ourselves. When we fail, the discrepancy is written up. But written up to whom? If it never escapes the bubble, the pressure to improve is lessened. In many organizations, when it does escape, what percolates up is simply that someone violated a process rule. To those outside the bubble, that can look like red tape holding back the heroes, instead of a process preventing the hero-wannabes from costing the company more in the long run. We need to be held visibly accountable and not sweep our repeated indiscretions under the rug. Hiding them makes us less of an organization.
That means allowing the light to pervade all areas of our lifecycle and all projects. Accountability is the evil twin of quality. If we build our processes to reflect that attitude of excellence and clear responsibility, and then build the tools around them, we can also export that data to show those outside the bubble that we consistently run efficiently, and so justify our requests for resources. Integrating CM with tools beyond software development proper is where we need to go in order to cross the bubble's membrane and provide the greatest benefit. And we don't have to reinvent the wheel to do that. CM tool writers can create an add-on interface, or perhaps simply an open standard, that management and executive tools can tie into. It's simple vertical integration. So we should be building interfaces to the server support systems and tying in the maintenance efforts. The SDLC does not end when a product goes to production but when it's retired from production.
From an external view, we're so busy perfecting our tools for our own benefit that we aren't serving the real customer, and for that we face uphill battles for new or better tools. The client is not our customer; the client is our customer's customer. Senior management is our customer. Having awesome tools that completely control the SDLC, but that few beyond our perimeter know or understand, is too much like being the proverbial black hole they pour money into. If you are not part of the visible product, you are just part of the bill of materials, where the only focus is cutting cost. Perceived value is as important as real value. Honestly, many of us are comfortable in IT because we rarely have the interpersonal skills for marketing. It's not how we're geared mentally. But that's exactly where we need to take this discipline. CM is incredibly valuable to an organization. Why not demonstrate it? Why not hand them easy tools for understanding the issues we so desperately need them to know? A little self-promotion isn't such a bad thing.
One of the most consistent gripes in the IT organizations I've been involved with, and one frequently commented on in journals, is the lack of requirement stability, or simply the lack of requirements to start with. As heroes we try to take what the client wants and turn it into code, even when it's vague, a little or a lot. Then we as a project team get hammered because defect rates skyrocket, testing takes longer than expected, and delivery dates aren't met. We have to provide better metrics for our own safety and, more importantly, for the customer's expectation of quality. What percentage of defects is being dispositioned as misinterpretations of requirements? That is indicative of the quality of the requirements and design documents, an internal IT issue. By comparison, how many change requests are coming through? A high volume makes it obvious that the customer didn't know what they wanted but that IT has been working hard to meet their needs. Those are things we can do today, but how do we push that up the chain? The tools need to be geared both to collecting that kind of information and to making it easily presentable across project databases. Then we can get those traditional issues fixed more readily.
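The requirements metric above is trivial to compute once defects carry a disposition field. A minimal sketch, assuming a defect-tracking export with a hypothetical `disposition` value for each record:

```python
# Hypothetical defect records as they might come out of a
# defect-tracking database; the disposition values are assumptions.
defects = [
    {"id": 1, "disposition": "requirements-misinterpretation"},
    {"id": 2, "disposition": "coding-error"},
    {"id": 3, "disposition": "requirements-misinterpretation"},
    {"id": 4, "disposition": "environment"},
]

# Share of defects traced back to misread requirements -- a direct
# measure of requirements/design document quality.
misread = sum(
    1 for d in defects
    if d["disposition"] == "requirements-misinterpretation"
)
pct = 100.0 * misread / len(defects)
print(f"{pct:.0f}% of defects trace back to requirement misreads")
```

The hard part isn't the arithmetic; it's disciplining the disposition data and making the result visible above the project level.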
And that is the real issue. We have a TON of data. What went into which build, what were the issues, and how often was it released? Our tools hold massive amounts of metadata. But too few tools have the ability, without proprietary scripting at individual companies, to provide the bigger picture and demonstrate the real issues that affect our daily work and rework, and, most importantly, the cost to the company's bottom line. How often do you see data tying help desk call volume to release management, and further back to user training? We see the same mistakes project after project with the same or new user groups, but there is little to show the costs of poor management. It's our worst repeatable process. Pushing those kinds of behaviors up to the level where something can and will be done about them is the kind of incremental improvement we need within the bubble to make it possible to cross the membrane and be heard.
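Tying help desk volume to release management is mostly a matter of joining two data sets we already have. A crude sketch, with entirely hypothetical release dates and call logs, that attributes each call to the most recent prior release so spikes after a rollout become visible:

```python
from datetime import date

# Hypothetical release schedule and help desk call log.
releases = [("R1.0", date(2007, 1, 15)), ("R1.1", date(2007, 3, 1))]
calls = [date(2007, 1, 16), date(2007, 1, 20), date(2007, 3, 2)]

# Attribute each call to the most recent release that preceded it.
volume = {name: 0 for name, _ in releases}
for when in calls:
    owner = None
    for name, released in releases:
        if released <= when:
            owner = name
    if owner:
        volume[owner] += 1

for name, n in volume.items():
    print(f"{name}: {n} help desk call(s) since release")
```

A real correlation would also fold in training records and defect data, but even this two-way join is more than most organizations produce today without custom scripting.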
Back inside the bubble we still have opportunities to make significant changes. Why can't I drag and drop a baseline label onto a build icon and expect a complete object? Or drag two release labels onto a merge tool and let it create a new branch or integrate into one of the existing branches? Why don't I have a graphical presentation showing where all the defect reports point in the code, so I can see visually where the problems hit most? Maybe there's something the company can fix, or even buy, that will do what our problem code does, but cheaper and better. That's not to put ourselves out of business
but if you aren't providing the best value for the money or at least a competitive advantage, there are no ruby red heels to click together.
Our tools have so much to offer: requirements management, version control, test execution and recording, build management. But we aren't correlating that with all the relevant information possible. It's like having a library of all the knowledge of a business that requires a 'key' to open it. What's most important is that we can't yet imagine where all this could take us. Think about 25 years ago, the early '80s. What kind of person would you have needed to create a web page, manage thousands of songs, handle bank transactions electronically, and book a vacation? Today, you can literally take anyone off the street to do it. MySpace, the iPod, Orbitz, and virtually any bank's online account management make this viable. Does that mean we've dumbed our systems down? Not at all. We've simply put functionality ahead of logic, market need ahead of Orwellian regimentation. That's the kind of thinking we need to make the next leap. Linking all the functional areas, past production implementation, for the true SDLC, across tools that are not traditionally CM concerns, is the comprehensive approach that will keep us relevant and effective in the future.
Managing the fine detail is the core of what we do, but we simply cannot allow it to be the only thing we do. Version control, branching and merging, and all the other aspects of the discipline of CM are part of the whole cycle of organizational life. We are effectively clinging to the coral while the pH of the sea changes around us and kills the reef. We have to develop our own ability to alter the pH of the sea in which we live. The only way we can do that is to get outside our bubble.
Randy Wagner is a Contributing Editor for CM Crossroads and a software configuration management consultant. His experience ranges from major financial institutions to multimedia multinationals to the Federal government. Working in small to large project efforts has given him a unique perspective on balancing the discipline of SCM and enterprise change management with the resources and willpower each organization brings to the table. You can reach Randy by email at [email protected].