“We have pipelines.” That is what I always hear when working with organizations that claim to use DevOps and continuous delivery methods.
The claim resonates because pretty much every article you read about DevOps mentions “the pipeline.” Graphic depictions of DevOps and continuous delivery are almost always a pipeline of some kind, showing a flow of software from development through various stages of testing and finally to release.
Continuous delivery is not really about the pipeline, however. In fact, in one instance, I worked with a team that had no pipeline but nevertheless delivered continuously, and I feel that the absence of a pipeline actually improved the behavior of the team.
I would claim, in fact, that the pipeline concept is a red herring. Continuous delivery is really about two things: testing strategy and branching strategy.
We’ve Seen This Before
If you think about it, a pipeline is an awful lot like a waterfall process, just sped up. Worse, the pipeline job is really a reinvention of 1980s batch processing: You make some code changes, submit your job, and wait in line for it to execute so that you can obtain your results as a report (the pipeline tool’s console log and the JUnit test report). Is that progress?
It is not. The only real difference is that today’s pipeline doesn’t take punched cards, and the output reports are accessible via a browser instead of a printout.
Consider what a team might have to do if they did not have a pipeline:
- Deploy locally, or remotely using a script
- Run integration tests locally, or remotely
- Merge changes into the master branch, but only after local integration tests pass
- Deploy via script to a production segment that receives a small percentage of user traffic, and gradually scale up
There’s no mention of a build pipeline anywhere.
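As a concrete illustration, here is a minimal sketch of what that workflow might look like when scripted. The project layout, the Docker Compose deployment, and the pytest integration suite are all assumptions, not a prescription; the point is only that every step runs on the developer’s own machine or in a private environment, with no pipeline involved.

```python
#!/usr/bin/env python3
"""Hypothetical pre-merge script: deploy locally, run integration tests,
and only then allow a merge. Assumes Docker Compose for local deployment
and pytest for the integration suite; adapt to your own tooling."""

import subprocess
import sys


def run(cmd: list[str]) -> None:
    # Fail fast: any non-zero exit aborts the workflow.
    print(f"+ {' '.join(cmd)}")
    subprocess.run(cmd, check=True)


def main() -> int:
    try:
        # 1. Deploy the system under test locally (or to a private VM).
        run(["docker", "compose", "up", "-d", "--build"])
        # 2. Run the integration tests against that private deployment.
        run(["pytest", "tests/integration"])
    except subprocess.CalledProcessError:
        print("Integration tests failed; do not merge.")
        return 1
    finally:
        # Tear the private environment down either way.
        subprocess.run(["docker", "compose", "down"], check=False)
    print("All local integration tests passed; safe to merge into master.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```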
So what is the pipeline? Don’t we need it?
Yes, but not in the way that it is usually portrayed. And organizations that use it the way it is usually portrayed are using it wrong.
The true role of the pipeline, which is realized by a build orchestration tool such as Jenkins or Azure DevOps Services and the tests that it runs, is to run the tests that cannot be run locally and to rerun all tests as a regression suite. It is a policeman: the tests are supposed to “stay green.”
But if the team has the practice of running those same tests locally, or in isolation, before they merge their code (merging is what exposes their changes to other team members), then when the merge does occur, all tests should pass. The pipeline would be green.
Isolation Is Key
The key element, then, is running tests before merging. You don’t need a pipeline to do that.
Notice also that in order to run tests before merging, you need a private place to deploy the code under test. That can be your laptop, or it can be a private area, such as a virtual machine in a cloud account. You must be able to deploy the system under test somewhere it will not replace the components that other team members are testing. In other words, you need to deploy in isolation.
Isolation is key for testing. Once isolated integration tests pass and you merge changes into the shared development branches, then—and only then—you are ready to deploy to a shared test environment. Thus, the real integration testing should happen before code changes reach the pipeline.
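One simple way to get that isolation is to give every developer a private environment name and point both the deployment and the tests at it. The sketch below assumes a hypothetical deploy.sh script and a TEST_BASE_URL variable that the integration tests read; the naming convention is the point, not the specific tools.

```python
"""Sketch of per-developer isolation: derive a private environment name so
one person's test deployment cannot collide with anyone else's."""

import getpass
import os
import subprocess


def private_env_name() -> str:
    # e.g. "test-alice": one throwaway environment per developer.
    return f"test-{getpass.getuser()}"


def deploy_in_isolation(env: str) -> None:
    # Stand-in for whatever your deploy script or IaC tooling does;
    # the important part is that the target is unique to you.
    subprocess.run(["./deploy.sh", "--env", env], check=True)


def run_integration_tests(env: str) -> None:
    # The tests read TEST_BASE_URL (an assumed convention) so they hit
    # the private deployment rather than a shared environment.
    test_env = {**os.environ, "TEST_BASE_URL": f"https://{env}.internal.example"}
    subprocess.run(["pytest", "tests/integration"], check=True, env=test_env)


if __name__ == "__main__":
    env = private_env_name()
    deploy_in_isolation(env)
    run_integration_tests(env)
```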
Some tests cannot be run locally, but they can still be run in isolation in a cloud account or data center cluster. Tests that often cannot be run locally include behavioral tests in a true production-like environment, network failure mode tests in which network anomalies are created, soak tests that run the application for a long time, and performance tests that stress the application.
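For instance, a soak test is conceptually just a long-running loop of requests against a production-like deployment, with the error rate tracked over time. The endpoint, duration, and pacing below are illustrative assumptions; a real soak test would run far longer and in a dedicated environment.

```python
"""Minimal soak-test sketch: hit one endpoint continuously for a fixed
duration and track the error rate."""

import time
import urllib.error
import urllib.request

BASE_URL = "https://staging.internal.example/health"  # hypothetical endpoint
DURATION_SECONDS = 10 * 60  # shortened here; real soak tests run for hours


def soak() -> None:
    requests = errors = 0
    deadline = time.monotonic() + DURATION_SECONDS
    while time.monotonic() < deadline:
        requests += 1
        try:
            urllib.request.urlopen(BASE_URL, timeout=5).read()
        except (urllib.error.URLError, TimeoutError):
            errors += 1
        time.sleep(1)  # steady, modest load; shorten the pause to stress the system
    print(f"{requests} requests, {errors} errors ({errors / requests:.1%} error rate)")


if __name__ == "__main__":
    soak()
```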
The pipeline is a set of automated quality gates. However, if you are doing things right, you should have found most functional defects before code hits the pipeline. You do that by running integration tests, along with quality checks such as security scans, locally. This is known as shift-left testing, and it is how advanced DevOps organizations do things. If you are debugging functional errors in your pipeline, you are doing 1980s-era batch programming, and you are doing DevOps wrong.
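A shift-left setup can be as simple as a script that runs the same gates locally that the pipeline enforces. The specific tools below (ruff for linting, bandit for security scanning, pytest for tests) and the directory layout are assumptions; substitute whatever your pipeline actually runs.

```python
"""Hypothetical shift-left check list: the same gates the pipeline enforces,
runnable on a developer's machine before merging."""

import subprocess
import sys

# Each entry mirrors one stage of the pipeline.
CHECKS = [
    ["ruff", "check", "."],           # static analysis / lint
    ["bandit", "-r", "src"],          # security scan
    ["pytest", "tests/unit"],         # fast unit tests
    ["pytest", "tests/integration"],  # integration tests against a private deployment
]


def main() -> int:
    for cmd in CHECKS:
        print(f"+ {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print("Check failed; fix it here, not in the pipeline.")
            return 1
    print("All local gates passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```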
The pipeline is important; it is an integral part of DevOps. However, it is not the central element. The central element is the practice of testing continually using automated tests.
This enables programmers to have a “red-green” feedback loop in which they find defects as soon as possible—ideally, on their own workstation and before they merge their changes into the shared codebase—instead of downstream, where defects affect everyone else’s changes and diagnosing problems is difficult.
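In practice, that red-green loop is nothing more than running a fast, local test immediately after every small change. The function and test below are hypothetical; what matters is that a defect shows up on the programmer’s workstation within seconds.

```python
# pricing.py -- a trivial, hypothetical function under test.
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)


# test_pricing.py -- run "pytest" after every small change; a defect turns
# this test red on your own workstation, long before the pipeline sees it.
def test_discount_is_applied():
    assert apply_discount(price=100.0, percent=10) == 90.0
```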
The core of DevOps is the set of practices that make this shift-left approach possible. These include practices for branching and merging, as well as setting things up so that many kinds of integration tests can be run locally on programmers’ laptops, or in cloud accounts to which they have direct access, so that a programmer can initiate an integration test run in isolation from all other programmers.
DevOps is a shift-left approach. The pipeline is important, but it is not the central paradigm.