October 05, 2015

8 Ways to Keep Your Continuous Integration and Deployment (CI/CD) Pipeline Working for You

What is a CI/CD Pipeline?

A continuous integration and deployment (CI/CD) pipeline is a vital part of a software project. It saves a ton of manual, error-prone deployment work, and it yields higher-quality software through continuous integration, automated tests, and code metrics. Ultimately this means better software that can be released more frequently.

With a CI/CD pipeline, every time the software’s code changes, it is built and tested automatically and code analysis is run against it. If it passes the quality-control gates and all the tests pass, it is automatically deployed, and automated acceptance tests run against the deployed build. This kind of quality control and automation is increasingly necessary in today’s fast-paced, software-centric environment, where companies have to release stable software rapidly to keep up.
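As a rough illustration of those stages (build, test, analysis, deploy, acceptance tests), here is a minimal sketch in Python. The commands and gates are placeholders for whatever tooling your project actually uses, not any particular product’s configuration.

```python
import subprocess
import sys

# Ordered pipeline stages; each command is a placeholder for your project's
# real build, test, analysis, deploy, and acceptance-test tooling.
STAGES = [
    ("build", ["./gradlew", "assemble"]),
    ("unit tests", ["./gradlew", "test"]),
    ("code analysis", ["./gradlew", "check"]),
    ("deploy to staging", ["./deploy.sh", "staging"]),
    ("acceptance tests", ["./run_aats.sh", "staging"]),
]

def run_pipeline():
    for name, command in STAGES:
        print(f"--- {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Any failing stage stops the pipeline and marks the build red.
            print(f"Stage '{name}' failed; aborting pipeline.")
            sys.exit(result.returncode)
    print("Pipeline green: change built, tested, analyzed, and deployed.")

if __name__ == "__main__":
    run_pipeline()
```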

CI/CD products come in many shapes and sizes: some run in the cloud as software as a service (SaaS), others are self-hosted. Examples of hosted solutions are CodeShip.io and TravisCI; self-hosted examples include TeamCity, Jenkins, and Thoughtworks Go.

Despite these benefits, I have been on multiple projects where one of the steps in the continuous integration process was red, often for days at a time or even perpetually. Much of the benefit a build pipeline provides is lost when this is allowed to happen: bugs are not caught by automated tests, additional tests break without being fixed because no one notices, the culture of keeping a green pipeline erodes, and faith in everything from the pipeline itself to the automated tests diminishes. Developers learn bad habits.

One way I have tried to combat this is to institute some specific process around the build pipeline. I found it became more of an issue as team size grew; we couldn’t just let the teams organically manage the CI/CD pipeline when many of them were contributing to the codebase. Without specific processes and task delegation, the pipeline did not keep itself green. It is often as projects grow that the pipeline becomes less stable, and explicit processes can be a big help.

Here are Eight CI/CD Pipeline Processes and Approaches:

We’ve all likely heard the Agile Manifesto’s statement that we should value individuals and interactions over processes and tools, but that does not mean we should have zero processes. Here are a few processes, as well as general approaches, that you can implement to help keep your build pipeline fully working and fully useful. Some I have implemented as developer norms, and some I delegate as tasks to the lead developers on each team.

1. No check-ins when the build is broken

If the build is broken, the teams should make sure it is being fixed ASAP. It’s harder to revert bad code if there are check-ins on top of the one that broke the build. We have a norm not to check code in while the build is broken. This also organically urges the person who broke it to hurry up and fix it, or to try harder not to break it in the first place, since they’ll be holding up every team working on the codebase.
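One lightweight way to support this norm is a client-side guard that refuses to push while the build is red. The sketch below is a Python pre-push hook that queries a hypothetical build-status endpoint; the URL and JSON shape are assumptions for illustration, not any particular CI server’s API.

```python
#!/usr/bin/env python3
"""Git pre-push hook: block pushes while the CI build is red.

Save as .git/hooks/pre-push and make it executable. The status URL and
JSON shape below are assumptions, not a real CI server's API.
"""
import json
import sys
import urllib.request

STATUS_URL = "https://ci.example.com/api/builds/main/latest"  # hypothetical endpoint

def build_is_green() -> bool:
    try:
        with urllib.request.urlopen(STATUS_URL, timeout=5) as response:
            status = json.load(response)
    except OSError:
        # If the CI server is unreachable, don't block the push.
        return True
    return status.get("result") == "SUCCESS"

if __name__ == "__main__":
    if not build_is_green():
        print("CI build is red: fix (or help fix) the build before pushing.")
        sys.exit(1)  # a non-zero exit code aborts the push
    sys.exit(0)
```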

2. Run tests and code analysis before check-in

Developers should run the same build and automated testing processes that run on the build server locally, before they check their code in. This makes it much less likely that the build will break.
This is problematic for automated acceptance tests (AATs) because they take so long to run; these call for a different approach, which I will discuss a bit later.
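A simple way to make this habitual is a pre-commit hook that runs the same fast checks the build server runs. Here is a minimal Python sketch; pytest and flake8 are assumptions standing in for your project’s own test runner and analysis tools.

```python
#!/usr/bin/env python3
"""Git pre-commit hook: run the fast checks the CI server runs.

Save as .git/hooks/pre-commit and make it executable. pytest and flake8
are stand-ins; substitute your project's own test and analysis tools.
"""
import subprocess
import sys

CHECKS = [
    ["pytest", "-q", "--maxfail=1", "tests/unit"],  # fast unit tests only
    ["flake8", "src"],                              # static code analysis
]

def main() -> int:
    for command in CHECKS:
        print("Running:", " ".join(command))
        if subprocess.run(command).returncode != 0:
            print("Check failed; commit aborted. Fix it before checking in.")
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```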

3. Refresh the database for data-driven tests

This usually applies to integration tests and automated acceptance tests. If you refresh the database they use before each run, you ensure the data is in a known state and that previous runs haven’t mutated it, so you don’t have to worry about its state or clean it up afterward.
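With pytest, for example, a fixture can rebuild the schema and reseed known data for every test. The sketch below uses an in-memory SQLite database and a toy schema purely as stand-ins for your real database tooling.

```python
import sqlite3
import pytest

SCHEMA = "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);"
SEED_ROWS = [(1, "Ada"), (2, "Grace")]  # known starting data for every run

@pytest.fixture()
def db():
    """Hand each test a freshly built database in a known state."""
    conn = sqlite3.connect(":memory:")  # stand-in for your real test database
    conn.execute(SCHEMA)
    conn.executemany("INSERT INTO customers VALUES (?, ?)", SEED_ROWS)
    conn.commit()
    yield conn                          # the test runs against known data
    conn.close()                        # nothing leaks into the next test

def test_customer_count(db):
    count = db.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
    assert count == 2                   # state is predictable on every run
```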

4. Automated Acceptance Tests (AATs)

Automated acceptance tests benefit greatly from a specific process for handling failures, because they sometimes fail falsely and take a long time to run. Sometimes a test can’t be fixed immediately because of changes in progress, but it can’t be left failing either, since that taints the status of the build pipeline.

In this case, the AAT should be ignored. However, it should carry a link to the story that is causing it to break, so it is sure to be addressed when appropriate. I have seen tests stay ignored forever because there was no process to track them. A process laying out when to ignore tests has helped the larger teams I’ve been on immensely in keeping this step passing.

Tests that are failing falsely can also be moved into an ‘unstable’ run. They keep executing, but that pipeline step is configured not to fail the build. This way, flaky AATs don’t pollute the main pipeline yet still run, so while you are troubleshooting you can watch them (this is especially useful when a test passes locally but not on the build server and you need to find out why).
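With pytest-style tests, for instance, the “ignore with a link to the story” and the separate unstable run might look like the sketch below; the story URL and the `unstable` marker name are illustrative conventions, not a standard.

```python
import pytest

# Ignored because story ABC-123 is changing the checkout flow this test covers.
# The tracker link ensures the test is revisited when the story is done.
@pytest.mark.skip(reason="Blocked by https://tracker.example.com/browse/ABC-123")
def test_checkout_applies_discount():
    ...

# Falsely failing (flaky) tests move to the 'unstable' run: they keep executing
# so they can be troubleshot, but that step is configured not to fail the build.
@pytest.mark.unstable
def test_search_returns_results_under_load():
    ...
```

The main AAT step then runs something like `pytest -m "not unstable"` and fails the build on any error, while a separate, non-gating step runs `pytest -m unstable` purely for information (registering the `unstable` marker in pytest.ini keeps pytest from warning about it).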

5. Delegate enforcement to each team

No matter how many processes, norms, or statements you make about the practices developers should follow, one person usually can’t enforce it all once the team is larger than about four people. Lay out the responsibilities of lead developers to all of your team leads, including the practices mentioned here. Have each one take responsibility for enforcing them on their team, and review the process and its status periodically.

6. Report metrics visibly to all teams and management

Continuous integration pipelines usually run automated unit, integration, and acceptance tests, and they can generate metrics beyond simple pass/fail results. Code coverage, code duplication, and static code analysis findings are some of the metrics that can be generated, and they should be looked at! Report the metrics regularly at a status meeting, or email a report to the powers that be. A particularly interesting metric is code that is not covered by tests and also has high cyclomatic complexity; cover that code first, especially if it is part of important functionality.
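As a rough sketch of that last idea, the script below cross-references a coverage.py JSON report with radon’s cyclomatic-complexity JSON output to flag poorly covered, highly complex files. It assumes both tools are installed, that the reports were produced with `coverage json` and `radon cc -j -O radon.json src`, and that the two reports use matching file paths; the thresholds are arbitrary.

```python
import json

COVERAGE_REPORT = "coverage.json"   # produced by: coverage json
COMPLEXITY_REPORT = "radon.json"    # produced by: radon cc -j -O radon.json src

COVERAGE_THRESHOLD = 60.0   # flag files below this percent covered...
COMPLEXITY_THRESHOLD = 10   # ...whose worst function is at least this complex

def main():
    with open(COVERAGE_REPORT) as f:
        coverage = json.load(f)["files"]   # {path: {"summary": {...}}}
    with open(COMPLEXITY_REPORT) as f:
        complexity = json.load(f)          # {path: [{"complexity": int, ...}]}

    for path, blocks in complexity.items():
        if not isinstance(blocks, list):   # skip files radon could not parse
            continue
        worst = max((b.get("complexity", 0) for b in blocks), default=0)
        percent = coverage.get(path, {}).get("summary", {}).get("percent_covered", 0.0)
        if worst >= COMPLEXITY_THRESHOLD and percent < COVERAGE_THRESHOLD:
            # High complexity plus low coverage: cover this code first.
            print(f"{path}: complexity {worst}, coverage {percent:.0f}%")

if __name__ == "__main__":
    main()
```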

7. Require pull requests

This isn’t always necessary, but it is often very helpful to require developers to use pull requests to merge new code into the codebase. This lets someone review the code first. The code in question can also be run through the pipeline from its branch and only merged if that run passes.
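On GitHub, for example, “only merge if the branch passes” is usually enforced with branch protection rules; the sketch below just illustrates the underlying check by asking the combined commit-status API whether a branch is green. The repository names and the `GITHUB_TOKEN` environment variable are placeholders.

```python
import json
import os
import urllib.request

OWNER, REPO, BRANCH = "example-org", "example-repo", "feature/my-change"  # placeholders

def branch_pipeline_passed() -> bool:
    """Ask GitHub's combined commit-status API whether the branch is green."""
    url = f"https://api.github.com/repos/{OWNER}/{REPO}/commits/{BRANCH}/status"
    request = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        combined = json.load(response)
    return combined.get("state") == "success"

if __name__ == "__main__":
    print("OK to merge" if branch_pipeline_passed() else "Pipeline not green; don't merge yet")
```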

8. Peer code review each story

For a piece of functionality (i.e., a user story) to be considered complete, require the developer to have another developer conduct a code review. Group code reviews are great, but they cannot scale when many developers are contributing a lot of code; peer code reviews can. The team leads will have to ensure this happens. Sometimes I ask teams to task out stories and include a code review as one of the tasks.

Not only do code reviews help catch bugs before they creep into production, they also allow the developers to learn from each other, and importantly, they reduce the chance of breaking the build or a test.

Summary

I hope this gives you some ideas to improve the effectiveness of your software pipeline on your team, and on future teams. These are all things I have tried successfully in the real world. I am always looking for better ways to get software into the hands of grateful users, and these have been very important to me in that pursuit!
