

March 25, 2013

What Are The Hurdles To Implementing Continuous Delivery In A Large Enterprise?

6 mins read

I am constantly recommending that the project and development teams I support implement continuous delivery to maximize the efficiency of their software release processes. The concept is still fairly new to most IT organizations. The benefits seem plainly obvious: imagine the cost savings and improvements in delivery speed if you script and automate everything related to testing and deploying your applications. Yet organizations still face significant hurdles, and a big shift in mindset, in moving to a model like this.

What is Continuous Delivery?

I should probably start by defining continuous delivery: it is the practice of constructing build pipelines so that when a developer commits a line of code to a version control repository, a suite of automated jobs kicks off to compile and package the code, run tests against it, deploy the packaged code to development and test servers, and store the package so it can be deployed to a production environment with the click of a button.
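To make the idea concrete, here’s a minimal sketch of such a pipeline as a Python runner. The stage names, the Maven commands, and the deploy/archive scripts are all illustrative assumptions, not a prescription for any particular toolchain:

```python
import subprocess

# Illustrative stages only -- the real commands depend on your build tool
# (Maven shown here) and your own deployment scripts, which are hypothetical.
PIPELINE = [
    ("compile, test, and package", ["mvn", "clean", "package"]),
    ("deploy to test server", ["./deploy.sh", "test"]),             # hypothetical script
    ("archive the artifact", ["./archive.sh", "target/app.war"]),   # hypothetical script
]

def run_pipeline():
    for stage, command in PIPELINE:
        print(f"--- {stage} ---")
        # Fail fast: if any stage breaks, the pipeline stops immediately.
        subprocess.run(command, check=True)

if __name__ == "__main__":
    run_pipeline()
```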

It saves time. It saves money. Everyone wins.

Sounds great, right? It is. But getting there can be a challenging road, especially in larger organizations. Here are some of the most common hurdles we’ve seen larger project teams face.

Importance of Continuous Integration

There can be no continuous delivery without continuous integration (CI). We define CI like this: whenever a developer commits code to version control, a job on the CI server kicks off a build script to compile, test, and package the code. This should be happening throughout the day. The important distinction here is that the process kicks off whenever code is committed. We frequently see project teams use a CI server like Jenkins to schedule builds once or several times a day. That’s better than nothing, but it’s hardly continuous.

The CI Server

One of the strong points of CI servers like Jenkins is the ease of setup and use. Any developer can install and run a CI server on their workstation. But as CI matures in a large organization, it’s no longer adequate for every development team to run a CI server on a developer workstation or a spare machine in the office. What if you have hundreds of developers touching a code base?

Large organizations like this typically want to move to a centralized control model of CI, where system admins install the CI servers on dedicated build boxes, probably with master-slave instances and LDAP integration for user access. That all sounds great, but it introduces a need for process definition around CI, which takes time to implement in a large organization. How do developers request access to the CI server once security is turned on? Who has authority to create and configure new build jobs? Which CI server plugins are installed, and how do you request new ones? How are credentials stored so the CI server can access the source code version control repository?

Fortunately, CI servers like Jenkins store their configuration data in easily accessible files, and the creation/configuration of new jobs can be easily scripted. There are also various plugins available to securely store and encrypt any sensitive credentials. Tools such as Puppet and Chef can also help with scripting the provisioning of new environments and standing up additional CI server instances.
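For example, because a Jenkins job is just an XML configuration file, creating one can be scripted against Jenkins’ REST API. A minimal sketch, assuming a hypothetical Jenkins URL, placeholder credentials, and a config.xml template you maintain yourself:

```python
import requests  # third-party: pip install requests

JENKINS_URL = "https://jenkins.example.com"  # hypothetical server
AUTH = ("ci-admin", "api-token")             # placeholder credentials

def create_job(name: str, config_xml_path: str) -> None:
    """Create a Jenkins job by POSTing its XML config to the createItem endpoint."""
    with open(config_xml_path, "rb") as f:
        config_xml = f.read()
    response = requests.post(
        f"{JENKINS_URL}/createItem",
        params={"name": name},
        data=config_xml,
        headers={"Content-Type": "application/xml"},
        auth=AUTH,  # newer Jenkins versions may also require a CSRF crumb
    )
    response.raise_for_status()

create_job("my-app-build", "templates/maven-build-config.xml")
```

Because the job definition lives in a file, the same template can be reviewed, versioned, and stamped out for every new project rather than hand-configured through the UI.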

The CI Infrastructure – Version Control Systems

If you think this post is beginning to sound more like a list of hurdles to implementing continuous integration than continuous delivery, you’re absolutely right. That’s how important a solid continuous integration base is to achieving the goal of continuous delivery. We frequently see the same CI model that works so well in small organizations fail miserably when scaled to a large enterprise.

A common infrastructure hurdle we see with project teams is the integration between the CI server and version control. A CI server checks for code changes in version control, then downloads that code to kick off a build. By far the most common setup is for the CI server to “poll” the version control system for code changes at a pre-configured interval.

Initiating the poll from the CI server causes an unnecessary delay between the time code is committed to version control and when it’s actually downloaded to the CI server and built. For a very large version control repository, each polling check can take a long time and introduce significant network latency and unnecessary traffic (especially if no code changes are found). In the worst case, you could bring down your version control system entirely and halt your CI builds.

The answer most frequently offered for this problem is to introduce post-commit hooks into your version control system, so that whenever developers commit code, a notification is pushed from version control to the CI server. The CI server then downloads the specific version it was notified about without having to check the full repository for updates.

The post-commit solution should significantly reduce unnecessary network traffic and strain on your version control system. However, there is a tradeoff. Depending on the number of code commits expected throughout the day, care must be taken to ensure the CI server won’t be overwhelmed with commit notifications. And if your CI server is secured and won’t allow anonymous access, the notification from version control will need to supply security credentials; how you store and configure those credentials in the version control system is another factor to take into account.
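As a sketch of what such a hook might look like, the script below uses Jenkins’ remote build trigger, where a per-job authentication token stands in for full user credentials. The server, job name, and token are all hypothetical placeholders:

```python
#!/usr/bin/env python3
# Post-commit hook: push a notification to the CI server instead of
# having it poll the repository on an interval.
import urllib.parse
import urllib.request

JENKINS_URL = "https://jenkins.example.com"  # hypothetical server
JOB_NAME = "my-app-build"                    # hypothetical job
TRIGGER_TOKEN = "s3cret-trigger-token"       # per-job token -- this is the
                                             # credential the hook must store

# Jenkins allows a job to be triggered remotely via its build URL plus
# an authentication token configured on the job itself.
url = (f"{JENKINS_URL}/job/{urllib.parse.quote(JOB_NAME)}/build?"
       + urllib.parse.urlencode({"token": TRIGGER_TOKEN}))
urllib.request.urlopen(urllib.request.Request(url, method="POST"))
```

Keeping the token scoped to a single job limits the blast radius if the hook’s stored credential ever leaks.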

Automated Testing

Similar to CI, there can be no continuous delivery without significant automated testing. This starts with thorough unit tests run against the code on every commit. However, continuous delivery teaches us that unit tests alone are not enough. If you’re going to automate deployments of an application to a development, test, or production server, you need a thorough suite of automated integration and acceptance tests. The key is that automated tests also need to run against the live application after it’s been deployed. I have helped teams bridge this gap through practices like Specification by Example and tools like Cucumber and Concordion. We use these tools to write tests that verify application functionality beneath the user interface layer, directly accessing an application’s APIs or URLs to pass in data and interrogate responses.
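As a sketch of what a below-the-UI test can look like, here’s a minimal example using Python’s standard unittest module against a hypothetical REST endpoint on a deployed test server. Tools like Cucumber and Concordion layer a business-readable specification on top of checks like these:

```python
import json
import unittest
import urllib.request

BASE_URL = "https://test-server.example.com"  # hypothetical deployed environment

class OrderApiAcceptanceTest(unittest.TestCase):
    """Runs against the live, freshly deployed application, not in-process code."""

    def test_order_lookup_returns_shipped_status(self):
        # Call the application's API directly, beneath the UI layer.
        with urllib.request.urlopen(f"{BASE_URL}/api/orders/1234") as response:
            self.assertEqual(response.status, 200)
            body = json.load(response)
        # Interrogate the response instead of scraping rendered HTML.
        self.assertEqual(body["status"], "SHIPPED")

if __name__ == "__main__":
    unittest.main()
```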

Organizational Policies

We’d be naïve to think all large organizations, especially major government agencies, can simply switch to automating their entire build pipelines from version control commits straight into production. Many of these organizations are subject to rigid regulations such as Sarbanes–Oxley, or to internal policies that require multiple manual approvals and documentation before software is deployed into production. We’ve worked with some organizations that use commercial tools to “stage” their build artifacts before a system admin manually pulls the files and deploys them into a production environment.

In these situations, it’s important to remember that scripting as much of the application deployment as possible is still a good thing. Scripts can be versioned and reused across projects, and they quickly reduce the risks that come with manual deployments.
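Even when policy requires a person to press the final button, that button can run a script. A minimal sketch of a versioned deployment script follows; the hosts, paths, and artifact names are illustrative placeholders:

```python
#!/usr/bin/env python3
# deploy.py -- versioned alongside the application and reused across
# environments; every name below is an illustrative placeholder.
import argparse
import subprocess

ENVIRONMENTS = {
    "test": "test-server.example.com",
    "prod": "prod-server.example.com",
}

def deploy(environment: str, artifact: str) -> None:
    host = ENVIRONMENTS[environment]
    # The same reviewed, version-controlled steps run every time,
    # removing the variability of a hand-typed deployment.
    subprocess.run(["scp", artifact, f"deployer@{host}:/opt/app/releases/"], check=True)
    subprocess.run(["ssh", f"deployer@{host}", "/opt/app/bin/restart.sh"], check=True)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("environment", choices=ENVIRONMENTS)
    parser.add_argument("artifact")
    args = parser.parse_args()
    deploy(args.environment, args.artifact)
```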

Are you implementing continuous delivery? What types of hurdles have you overcome?
