In his Behaviorally Speaking series, Bob Aiello discusses hands-on software configuration management best practices within the context of organizational and group behavior.
Bob Aiello explains that software engineers and architects do an amazing job designing a system’s architecture that fully represents all of the parts of the system that are created during the development lifecycle. However, one of the biggest challenges is understanding how each part of the system depends upon the others.
Today, computer systems have reached a level of complexity that is truly amazing. We expect websites and packaged software to have an incredible number of features, and we expect systems to practically anticipate our every need and response. However, creating feature-rich systems is not an easy job and neither is writing the deployment infrastructure that empowers an organization to continuously deliver new features while maintaining a high level of reliability and quality.
Software engineers and architects do an amazing job designing a system’s architecture that fully represents all of the parts of the system that are created during the development lifecycle. That being said, one of the biggest challenges is understanding how each part of the system depends upon the others.
Software today is often designed and implemented as components that fit together and run seamlessly as a complete system. One of the biggest challenges we face is updating one or more components without any risk of a downstream impact on the other parts of the system.
There is a lot of complexity involved in creating a structure that allows one component of the system to be updated without any chance of a downstream impact on the other components. How do we go about understanding and managing this complexity?
The first step is to create a logical model of the system to help all of the stakeholders understand how the different parts of the system are assembled and work together. In my work, I often find that we have many specialists but very few people who understand the entire system end-to-end.
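As a minimal sketch of what such a logical model can look like, consider representing components and their dependencies as a simple graph and asking which components are downstream of a change. The component names below are hypothetical placeholders, not part of any real system.

```python
# A minimal logical model: each component maps to the components it depends on.
# The names here are hypothetical, chosen only for illustration.
DEPENDS_ON = {
    "web-ui":            ["order-service", "auth-service"],
    "order-service":     ["inventory-service", "auth-service"],
    "inventory-service": [],
    "auth-service":      [],
}

def downstream_of(component, graph=DEPENDS_ON):
    """Return every component that directly or indirectly depends on `component`."""
    impacted = set()
    changed = True
    while changed:
        changed = False
        for name, deps in graph.items():
            if name in impacted:
                continue
            if component in deps or impacted.intersection(deps):
                impacted.add(name)
                changed = True
    return impacted

# Updating auth-service potentially impacts the services that call it and
# anything built on top of them.
print(downstream_of("auth-service"))   # e.g. {'order-service', 'web-ui'}
```

Even a picture this simple gives every stakeholder the same answer to the question "if we touch this component, what else could break?"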
In deployment engineering, I often hear from developers who are concerned about having to deploy the entire release just to fix a specific bug. I understand their concern, but truthfully, managing patches to a release can actually be a lot more complicated than deploying a full release. The next thing that I often hear is that deploying a full release requires that you test the entire system.
The truth is that you have to retest the entire system even if you just deploy a patch, unless you fully understand how that patch impacts the other components of the system. The point here is that managing component dependencies is essential, and it is not a trivial task. I recommend that organizations develop their software to be discoverable by embedding immutable version IDs and by having a formal way to represent component dependencies, such as descriptive XML files shipped with the code, that help explain how each part of the system depends upon the others. The only time you will be able to understand and document these dependencies is during the software and system development lifecycle.
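One way to make a component discoverable, sketched below, is to embed an immutable version ID directly in the code and to ship a small descriptor that tooling can read at deploy time. The file layout, element names, and version numbers here are assumptions for illustration, not a standard format.

```python
import xml.etree.ElementTree as ET

# Immutable version ID embedded directly in the component's code so that a
# deployed artifact can always report what it is.
__version__ = "2.4.1"   # hypothetical version for this sketch

# A hypothetical descriptor; in practice this would live in a file such as
# component.xml shipped next to the deployed artifact.
MANIFEST = """
<component name="order-service" version="2.4.1">
  <dependency name="inventory-service" version="1.7.x"/>
  <dependency name="auth-service" version="3.2.x"/>
</component>
"""

def read_dependencies(xml_text):
    """Parse a descriptor and return {dependency name: required version}."""
    root = ET.fromstring(xml_text)
    return {d.get("name"): d.get("version") for d in root.findall("dependency")}

print(read_dependencies(MANIFEST))
# {'inventory-service': '1.7.x', 'auth-service': '3.2.x'}
```

Because the descriptor travels with the code, anyone who picks up the component later can discover its dependencies without tracking down the original authors.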
Many systems are developed by teams of highly qualified consultants who work under extreme pressure to deliver feature-rich software in a very short period of time. Once these technology experts are done and move on to the next project, you may find that you no longer have anyone from the original development team who really understands all of the internal component dependencies. While the software is being written, you have a unique opportunity to document dependencies and design a strategy for managing patches or fully baselined releases in an automated way. This is exactly the same challenge that quality engineers face when they develop robust automated tests, including service virtualization testing, which is becoming a popular practice within continuous testing. Systems have to be designed to ensure that we can manage complexity.
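Service virtualization can start as something as simple as a throwaway stand-in for a dependency that is not available in the test environment. The sketch below spins up a local HTTP stub (the endpoint and payload are invented for illustration) so a component can be exercised without its real downstream service.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryStub(BaseHTTPRequestHandler):
    """A stand-in for a hypothetical inventory-service; responses are canned."""
    def do_GET(self):
        body = json.dumps({"sku": "ABC-123", "on_hand": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), InventoryStub)   # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The component under test would be pointed at the stub's address instead of
# the real service; here we simply call it to show the round trip.
url = f"http://127.0.0.1:{server.server_port}/inventory/ABC-123"
with urllib.request.urlopen(url) as resp:
    print(json.loads(resp.read()))   # {'sku': 'ABC-123', 'on_hand': 42}

server.shutdown()
```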
Complexity is not bad in itself, but we need strategies to understand and manage the complexity inherent in writing large software systems. The first step is to design systems to be fully verifiable using automated test harnesses. We can use this same approach to understand and document component dependencies, and then develop strategies to reliably update software via patches (or verifiable, fully baselined releases) while ensuring that we fully understand component dependencies. Designing a logical model is an important part of this effort, but having some mechanism, such as a descriptive XML file, is a must-have for documenting and managing component dependencies. There is no magic here, and very few technologies allow you to reverse engineer component dependencies after the fact. You need to design your systems to have these capabilities.
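Building on the descriptor sketched earlier, a deployment pipeline can refuse to promote a patch whose declared dependencies are not satisfied by what is already running. The check below is a sketch under the assumption of a simple "major.minor.x" version convention; the declared and deployed versions would come from your own descriptors and environment inventory.

```python
def unsatisfied_dependencies(declared, deployed):
    """Compare declared dependency versions (from the descriptor) against the
    versions actually deployed; return any mismatches."""
    problems = {}
    for name, required in declared.items():
        actual = deployed.get(name)
        # Treat a trailing "x" as "any patch level of this major.minor".
        prefix = required[:-1] if required.endswith("x") else required
        if actual is None or not actual.startswith(prefix):
            problems[name] = (required, actual)
    return problems

declared = {"inventory-service": "1.7.x", "auth-service": "3.2.x"}  # from the descriptor
deployed = {"inventory-service": "1.7.4", "auth-service": "3.1.9"}  # from environment inventory

issues = unsatisfied_dependencies(declared, deployed)
if issues:
    # auth-service 3.1.9 does not satisfy 3.2.x, so this patch should not ship
    # until the dependency question is answered.
    print("Do not promote this patch:", issues)
```

A gate like this is only possible because the dependencies were documented while the software was being written, which is exactly the point.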
The good news is that if you do this right, you will find that your systems are easier to test and update. More importantly, you will be able to continuously deliver new features to your customers while maintaining a high level of system reliability. How do you manage component dependencies? Drop me a line and share your best practices!