Continuous Integration for Component-Based Development
In small team development, the practice of continuous integration [2] is an effective technique for keeping everyone on the team coordinated with the latest results of all changes. Practicing continuous update [3] and private build [4] in one's private workspace [5] as part of a two-phased codeline commit strategy helps ensure that workspaces and work tasks stay in sync. Integration builds ensure that the team's codeline remains stable, consistent, coherent, and correct.
Any built version of a component that needs to be accessed by internal stakeholders (such as a QA/V&V group) needs to be identified using a label/tag. This ensures that anyone who needs to look at it, even after it is no longer the latest and greatest, can easily do so. They can then also see which version of the component and its corresponding source code they are viewing.
In an ideal world, we could build the entire system directly from the sources in a one-step process, for everyone working on any component. Ideally, we would have the storage capacity, network bandwidth, processing power, and load distribution necessary to build the whole system. At the very least, we would want the ability to incrementally build the whole system every time, before a developer commits their changes to the codeline.
Sometimes, for reasons of build-cycle time, network resource load, or schedule coordination (e.g., multiple time zones, or interdependent delivery schedules of components), this is not feasible. What happens when there are multiple teams and components, each with its own integration schedule or rhythm, that need to coordinate with a larger-grained system integration strategy? There are many dependent factors to consider, including:
- Relationships between sub-teams, and their respective integration rhythms
- Build-time dependencies between components (e.g., libraries and APIs)
- Geographically dispersed teams and team-members and differences in time zones
- Repository size and performance
- System-build performance and available network resources
System Integration for Multiple Component Teams
Many large projects and systems require multiple teams of people to work together. In component-based development, it is common practice to see a system partitioned into multiple subsystems and/or components, with a team allocated to each component of the larger system (a component team [1]). Each component team develops a separately buildable part of the overall system, and typically develops and modifies only the source for their component. The component team then takes ownership of the component's source code.
- Each component may be used or reused in one or more products of the overall delivered system (or within a product-family).
- Some components may require delivery and distribution of their source code in order to build other components, while others may require delivery and distribution of only binaries, or binaries and interface definitions (e.g., header files in C/C++).
- The resulting delivered component versions can then be assembled together into the final system (subject to system and integration testing of course).
- Each team should be responsible for ensuring that it delivers working code and executables to other internal stakeholders. When a two-phased codeline commit protocol is used, each task-level commit to the codeline is essentially a statement that the developer has successfully compiled and tested the entire component with their changes incorporated (a sketch of such a check follows this list).
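As a concrete illustration of that last point, here is a minimal sketch of what a phase-one pre-commit check might look like, assuming a hypothetical component whose update, build, and test steps happen to be driven by svn and make; the actual commands would be whatever your VCS and build tool use:

```python
#!/usr/bin/env python3
"""Illustrative phase-one gate for a two-phased codeline commit.

Phase one: update the private workspace, build the whole component, and run
its tests.  Only if all of that succeeds does the developer proceed to phase
two, the actual commit to the codeline.  The commands below are placeholders.
"""
import subprocess
import sys

# Hypothetical commands; substitute your own VCS and build tool invocations.
STEPS = [
    ["svn", "update"],         # continuous update: pull the latest codeline
    ["make", "clean", "all"],  # private build of the entire component
    ["make", "test"],          # run the component's test suite
]

def main() -> int:
    for step in STEPS:
        print("running:", " ".join(step))
        if subprocess.call(step) != 0:
            print("FAILED:", " ".join(step))
            print("Fix the problem before committing to the codeline.")
            return 1
    print("Private build and test passed -- OK to commit (phase two).")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```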
If the team is large or dispersed enough to actually warrant sub-teams, it may become necessary for each component team to deliver tested, sealed, signed, and versioned binary libraries to the rest of the component teams. If the repository contains several components, and people build only their own component to test their changes (rather than the entire system), then they are ensuring they have tested, sealed, and signed deliverables only for that one specific component!
Using a Staging Area
A common best practice used to coordinate cross-component build dependencies is the staging area or staging environment. A staging environment is like a sandbox or workspace reserved for sharing build/test-dependent artifacts (headers, libraries, executables, etc.). It works something like this (note that this is not specific to XP/agile development):
- Each sub-team does its own thing, building and compiling its code as it should, then commits changes to its repository in the usual fashion.
- At agreed-upon points in time, the sub-team delivers any artifacts that other teams require in order to build (headers, APIs, libraries, configurations, etc.) into the staging area and, ideally, some additional level of build/test is done (a sketch of such a delivery appears after this list).
- When building in one's private developer workspace, or even in a sub-team's integration workspace/machine, the compilers, linkers, etc., point to the official staging area for the artifacts the sub-team does not own but needs in order to build its component.
- If it is necessary to version what is in the staging area, this is typically handled in one of two ways: a staging directory within the repository, or a separate staging repository; both are described below.
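The delivery step in the list above might look something like the following minimal sketch, assuming a hypothetical component named "billing" whose headers and libraries are copied from a local build directory into a shared staging location (all paths and names here are illustrative):

```python
#!/usr/bin/env python3
"""Illustrative delivery of a sub-team's build artifacts into a staging area."""
import shutil
from pathlib import Path

STAGING_ROOT = Path("/shared/staging")   # assumed shared staging location
COMPONENT = "billing"                    # hypothetical component name
ARTIFACTS = ["include", "lib"]           # what other teams need in order to build

def deliver(build_dir: Path) -> None:
    """Copy the build-dependent artifacts into this component's staging slot."""
    dest = STAGING_ROOT / COMPONENT
    dest.mkdir(parents=True, exist_ok=True)
    for name in ARTIFACTS:
        target = dest / name
        if target.exists():
            shutil.rmtree(target)        # replace the previous delivery
        shutil.copytree(build_dir / name, target)

if __name__ == "__main__":
    deliver(Path("build"))               # deliver from the sub-team's build output
```

Any additional level of build/test done at this point would then run against the staging area's contents rather than each sub-team's private build output.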
Staging Area Implementations and Versions
The issue of versioning comes into play if it is necessary to know which version of a component is currently in the staging area. If so, then when a sub-team delivers its build's executables to the staging area, it also creates a corresponding tag/label, and perhaps writes it to a file (e.g., a README) for that component in the staging area (a simplified form of a "version description document," or VDD).
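Extending the delivery sketch above, the "simplified VDD" could be as little as a generated README recording the component, the source tag that produced the deliverables, and what was delivered; the names, paths, and tag below are assumptions for illustration only:

```python
#!/usr/bin/env python3
"""Illustrative 'simplified VDD' written alongside a staged delivery."""
from datetime import date
from pathlib import Path

def write_vdd(staging_dir: Path, component: str, tag: str) -> None:
    """Record which source version produced the staged deliverables."""
    delivered = sorted(p.name for p in staging_dir.iterdir() if p.name != "README")
    lines = [
        f"Component: {component}",
        f"Source tag/label: {tag}",
        f"Delivered on: {date.today().isoformat()}",
        "Contents:",
        *(f"  - {name}" for name in delivered),
    ]
    (staging_dir / "README").write_text("\n".join(lines) + "\n")

if __name__ == "__main__":
    # Hypothetical staging slot and tag for the example.
    write_vdd(Path("/shared/staging/billing"), "billing", "BILLING_2_3_0")
```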
- Staging directory: A separate directory tree in the repository is used to house the staged artifacts. Developers typically check out the staging area plus the top-level directory for their own component into their sandbox, and don't extract/check out anything else unless and until they need to view the source for something outside the component they own (see the sketch after this list).
- Staging repository: A separate repository is used to house all staged artifacts. It can therefore accommodate its own separate sets of versions and tags/labels (which has both advantages and drawbacks).
- Sometimes the granularity of access control, administration, or mirroring/synchronization will determine which of the two approaches above is best. If each component is already large enough to warrant its own separate repository, then a separate staging repository is typically used.
- Sometimes, for local performance, the staging area might be mirrored or replicated to local sites/storage to cut down on the network bandwidth consumed during the build cycle.
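To make the "own component from the sandbox, everything else from staging" arrangement concrete, here is a minimal sketch of how a developer's build might assemble its include paths; the component names, sandbox location, and staging location are all hypothetical:

```python
#!/usr/bin/env python3
"""Illustrative resolution of build paths in a developer sandbox.

The developer checks out only their own component; headers (and libraries)
for every other component come from the staging area.
"""
from pathlib import Path

SANDBOX = Path.home() / "sandbox"        # hypothetical developer workspace
STAGING = Path("/shared/staging")        # hypothetical staging area
OWNED_COMPONENT = "billing"
ALL_COMPONENTS = ["billing", "pricing", "reporting"]

def include_paths():
    paths = [SANDBOX / OWNED_COMPONENT / "include"]       # own sources, editable
    paths += [STAGING / c / "include"                      # everyone else's, staged
              for c in ALL_COMPONENTS if c != OWNED_COMPONENT]
    return paths

def compiler_flags():
    return [f"-I{p}" for p in include_paths()]

if __name__ == "__main__":
    print(" ".join(compiler_flags()))
```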
Making It "Agile"
The staging area is a specific technique for separating a sub-team's build-dependency interface from its implementation, for the benefit of the rest of the team(s). A staging environment is the component-version mediator (coordinator, really) that houses the common interface and the artifacts necessary to satisfy build/test dependencies across sub-teams.
How might we apply an agile adaptation of it? The simple case is when no separate staging area is needed because the whole team can peacefully coexist in one repository, each person working at a sustainable pace without unduly impacting the others. There is then little need to think about subparts and sub-teams, and it is easier to focus on the whole.
At other times, factors of scale may rear their ugly heads. These may be issues of system/build scale, organization and organizational process, ownership of computing resources, etc. Perhaps not all the sub-teams are using agile methods, and some of them can't tolerate such high-frequency changes/deliveries from the agile teams into their own part of the repository.
One of the key problems to solve is when, and how often, a sub-team should do a signed-and-sealed delivery into the staging area. If every commit to the codeline is too frequent for a staging delivery, then an arrangement must be negotiated with the other sub-teams. This is where some agile methods try to scale by using a "team of teams" to manage the staging frequency and coordination.
Scaling Continuous Integration up to Continuous Staging
If it is necessary to scale up my build process and resources to use a staging environment, how might I scale up a practice like continuous integration to approximate continuous staging into the staging area? Doing so would avoid, or at least minimize, the need to tag/label every delivery into the staging area, and would minimize the need to manage build-version dependencies between components and the sub-teams that work on them. Even if it were no longer practical to use a single repository or a full system build for every commit across the whole team, it might be feasible to do the following (a sketch follows the list):
- Have every commit (or even just a once- or twice-daily schedule) trigger a delivery to the staging area.
- Have another trigger (or perhaps the same one) detect an update to the staging area and perform the next level of build/link/test, automated of course, using the current set of items in the staging area.
- If the build breaks, take the appropriate course of action and send the appropriate notifications, just as one does for continuous integration at the smaller scale.
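A minimal sketch of the second and third bullets, assuming the delivery-record (README) convention from the earlier sketches and a hypothetical system-level build command; a real setup would more likely hang this off the CI or version-control tooling than a polling loop:

```python
#!/usr/bin/env python3
"""Illustrative 'continuous staging' trigger.

Polls the staging area; whenever any component's README (the delivery record)
changes, kick off the next-level system build/test against the current
contents of the staging area and report the result.
"""
import subprocess
import time
from pathlib import Path

STAGING = Path("/shared/staging")                                      # assumed location
SYSTEM_BUILD = ["make", "-C", "/shared/system-build", "all", "test"]   # hypothetical command
POLL_SECONDS = 300

def staging_fingerprint() -> tuple:
    """Snapshot of the delivery records for every staged component."""
    return tuple(sorted((str(p), p.stat().st_mtime)
                        for p in STAGING.glob("*/README")))

def notify(message: str) -> None:
    print(message)  # stand-in for a mail/chat notification

def main() -> None:
    last = staging_fingerprint()
    while True:
        time.sleep(POLL_SECONDS)
        current = staging_fingerprint()
        if current != last:
            last = current
            ok = subprocess.call(SYSTEM_BUILD) == 0
            notify("staging build OK" if ok else
                   "staging build BROKEN -- latest delivery needs attention")

if __name__ == "__main__":
    main()
```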
Even in those cases where one might still need to version the repository and component for what was delivered to the staging area, the staging area itself can be used to manage the current latest-and-greatest set of system-build-worthy components and their versions, both source and binaries.
Component Versioning and Releasing
If the development of components results in one coordinated release of a single application or system, then it may be best to version the source files, rather than the compiled libraries. Even when versions are associated with what gets delivered to the staging area, they typically refer to versions of the source that produced the staged deliverables.
If, however, your result is really an overall product-family or product-line of multiple components that feed into multiple products for multiple deliverable systems, then the component/library reuse and independent component release schedules may make it necessary to version the binary/library releases.
In the latter case of a product family, each component release is essentially a release of a third-party component to each of the other component teams. The vendor release in this case originates from elsewhere within the organization rather than from an external supplier. The underlying business model for reuse and release of components matches that of a third-party vendor/supplier, albeit an internal one.
There are truly external vendor and third-party deliverables, and then there are items that may be internal to your organization but should be regarded as internally vendor/third-party supplied to your particular product and team. In those cases, versioning the delivered binaries is recommended.
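One lightweight way to treat such internally supplied components like third-party releases is for each consuming product to pin the exact released binary versions it builds against; the component names, versions, and release location below are made up purely for illustration:

```python
#!/usr/bin/env python3
"""Illustrative manifest pinning the versioned binary releases a product consumes."""

PINNED_RELEASES = {
    "billing": "2.3.0",    # internal component, released like a vendor drop
    "pricing": "1.8.2",    # another internal "third-party" component
    "zlib":    "1.2.3",    # genuinely external third-party library
}

def release_path(component: str, repo_root: str = "/releases") -> str:
    """Where the versioned binary release would be fetched from."""
    return f"{repo_root}/{component}/{PINNED_RELEASES[component]}"

if __name__ == "__main__":
    for name in PINNED_RELEASES:
        print(name, "->", release_path(name))
```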
Some shops use a separate third-party repository for such purposes. One reason is that the supplier and release schedule are independent of the rest of the application. Another is that if most of the elements are binary in nature, it is often desirable to have a distinct storage area with more efficient storage parameters/capacity. Sometimes the repository can be configured so it is tuned for performance based on knowledge of the kinds of elements it will predominantly store.
Of course, if you get code delivered from any of those third parties, you would most likely want to version it along with the delivered binaries unless the binaries can be reproduced from the code you were given. Sometimes the source delivered is insufficient for that.
If the third-party code needs to be modified for your own custom, value-added purposes, it may be best to use the third-party codeline pattern. It may also be helpful to ask the vendor to incorporate your changes, unless your organization deems them proprietary and is unwilling to submit them back to the vendor.
References
[1] Object Solutions: Managing the Object-Oriented Project, by Grady Booch; Addison-Wesley, October 1995
[2] Continuous Integration - Just Another Buzzword?, by Steve Konieczka, Steve Berczuk and Brad Appleton; CM Crossroads Newsletter, September 2003 (Vol. 2, No. 9)
[3] Codeline Merging and Locking: Continuous Updates and Two-Phased Commits, by Brad Appleton, Steve Konieczka and Steve Berczuk; CM Crossroads Newsletter, October 2003 (Vol. 2, No. 10)
[4] Build Management for the Agile Team, by Steve Berczuk, Steve Konieczka and Brad Appleton; CM Crossroads Newsletter, November 2003 (Vol. 2, No. 11)
[5] Software Configuration Management Patterns: Effective Teamwork, Practical Integration, by Stephen P. Berczuk and Brad Appleton; Addison-Wesley, November 2002
Acknowledgements
Jeff Grigg, from the extremeprogramming mailing list.