You’re thinking about containers and what they mean for your organization; you want all the benefits containerization brings: quality, time-to-market, cost and reliability. To do this, you’ll have to get past the myths that have built up around containers since their inception over ten years ago. Here, we’ll address four common misconceptions about containers, so you can unlock their true potential for your teams and your organization.
Even if all you do is take a legacy application running on a single server and wrap it in a single container, you’ll see significant benefits. Developers can include the application’s configuration, dependencies and code in a single image. This makes every environment the image passes through (development, quality assurance, performance testing, production, etc.) consistent, reducing the errors and surprises that come from configuration differences. The differences that remain can be passed as runtime arguments, giving you more control as the application moves across environments.
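As a minimal sketch of this idea (the application, base image and variable names here are illustrative, not from any particular project), a Dockerfile bakes code and dependencies into one immutable image, while environment-specific values are deliberately left out and supplied at runtime:

```dockerfile
# One immutable image: code plus dependencies, identical in every environment.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Environment-specific values (database hosts, feature flags) are NOT
# baked in here; they are injected when the container starts.
CMD ["python", "app.py"]
```

The same image then runs everywhere, with only the runtime arguments changing, e.g. `docker run -e DB_HOST=qa-db.internal myapp:1.4` in QA versus `docker run -e DB_HOST=prod-db.internal myapp:1.4` in production.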
If your application was tied to a highly specialized “snowflake” server, you can now host it on a more generic Virtual Machine (VM) and you can update that VM’s configuration without affecting your application. This keeps your container hosts aligned with your organization’s operations and security policies. When you need to upgrade the Operating System (OS) and packages in your container, you can pick the right time to do it, rebuild the image on the newer version and validate your application’s performance before rolling it out. Configuration changes to your “snowflake” will be traceable, easier to comprehend and tracked in source control.
You can increase utilization by putting multiple applications on a single host. You can use your container platform—Docker is a common one—to manage port mapping, operating system versions and other configurations. That reduces the number of VMs you have to manage and pay for.
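A quick sketch of how port mapping lets two applications share one host (the image and container names are hypothetical): both containers can listen on the same internal port, and the host disambiguates.

```shell
# Two containers on one VM, each mapped to a different host port.
docker run -d --name billing -p 8080:80 billing-app:2.1
docker run -d --name reports -p 8081:80 reports-app:1.7
# Both listen on port 80 inside their containers; clients reach them
# on host ports 8080 and 8081 respectively.
```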
Finally, you can use containers to improve deployments, reducing risk and downtime. Containers allow Blue/Green deployments where a new version of your application runs alongside the old. You can deploy with zero downtime and, with a container registry, instantly roll back to a known good version in case of a problem. When deployments are containerized, moving through lower environments becomes a series of rehearsals for your production deployment; the only differences are configuration values. This dramatically reduces the number of things that can go wrong in a production deployment.
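One way a Blue/Green rollout with a registry might look (registry address, image names and tags here are all illustrative): because the old image tag stays in the registry, rollback is just a matter of pointing traffic back.

```shell
# Pull and start the new "green" version next to the running "blue" one.
docker pull registry.internal/shop:2.0
docker run -d --name shop-green -p 8081:80 registry.internal/shop:2.0

# ...switch the load balancer from :8080 (blue) to :8081 (green)...

# Rollback is instant: route traffic back to blue, or re-run the old tag,
# which is still in the registry as a known good version.
docker run -d --name shop-blue -p 8080:80 registry.internal/shop:1.9
```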
Container runtimes like Docker and orchestration technologies like Kubernetes and Docker Swarm can reduce your maintenance burden regardless of where the physical hardware is located. Container registries (such as Nexus, Artifactory or Docker Registry) are straightforward to use. They allow you to pull publicly available container images and securely build upon them within your own network boundaries. So, you can still take advantage of what containers have to offer even if you have sensitive data that must be kept within your physical location.
Even elasticity—scaling up capacity when demand is high and scaling it back down when it falls—is possible in-house. Kubernetes and Docker Swarm have well-established practices for operating on bare metal. If your operations team can provide the necessary physical capacity, these orchestration tools will allow you to provide elasticity and will prepare your team for future hybrid/cloud-native growth.
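For a concrete sense of what this elasticity looks like in Kubernetes (deployment and metric values here are hypothetical), a HorizontalPodAutoscaler grows and shrinks a workload within limits you set, on bare metal just as in the cloud:

```yaml
# Illustrative HorizontalPodAutoscaler: scale the "web" Deployment
# between 2 and 10 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Scaling up still depends on your operations team providing enough underlying capacity; the autoscaler only distributes work across what exists.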
Many existing technologies in enterprise data-center stacks can be used to build out robust and resilient container orchestration environments. Block and object storage systems can provide volumes to containers. VM hypervisors can provide physical elasticity to container orchestration engines for scaling and resiliency. Enterprise networking appliances can be dynamically configured by Kubernetes to distribute load and route traffic to ephemeral services and deployments. If you hesitate to use public cloud resources, many of the same services and solutions can be implemented in-house with little to no development effort.
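As a sketch of that dynamic configuration (names and ports are illustrative), a Kubernetes Service of type `LoadBalancer` asks the environment's load-balancing integration, whether a cloud provider or an on-premises appliance with a Kubernetes integration, to route traffic to whatever pods currently match:

```yaml
# Illustrative Service: Kubernetes programs the external load balancer
# to forward traffic to the ephemeral pods behind the "web" selector.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```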
To take advantage of all the benefits of containers, you need to be mindful of the processes and practices that support the containerization workflow. Some application components, like web front ends, will scale out easily. Others, such as relational databases, will be more difficult. Maintain separation between these components to create opportunities for straightforward scaling. A container ecosystem benefits from concepts like “convention over configuration”: let the environment provide the configuration rather than setting explicit configurations in the applications directly. Architecting with this concept in mind yields containers that are lighter weight, more portable and more flexible.
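A minimal sketch of “let the environment provide the configuration” (the variable names and defaults are illustrative): the application reads its settings from environment variables, so the same image runs unchanged in every environment.

```python
import os

def load_config() -> dict:
    """Read settings from the environment, with safe defaults.

    The image stays identical across environments; only the
    variables injected at container startup change.
    """
    return {
        "db_host": os.environ.get("DB_HOST", "localhost"),
        "db_port": int(os.environ.get("DB_PORT", "5432")),
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }

if __name__ == "__main__":
    # Simulate a value injected by the environment (e.g. docker run -e).
    os.environ["DB_HOST"] = "qa-db.internal"
    print(load_config()["db_host"])
```

This is the same pattern the twelve-factor app methodology describes: configuration lives in the environment, not in the artifact.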
Microservice architecture is a popular model that thrives in containerized environments. Instead of a monolithic application that builds all components and logic into a single deliverable, microservices are discrete functional components that interact over established APIs. This modularity is extremely well-suited for containers, as it extends the inherent benefits of containerization into the application architecture. Container orchestrators support this design pattern and can create traceable and sustainable application infrastructures. Small teams can focus on and take ownership of discrete pieces of functionality, allowing them to make changes independently and accelerating the delivery of value.
A well-architected and containerized application brings environments and code closer together. This makes it significantly easier to create and maintain consistent environments—so much so that you’ll want to revisit your assumptions about how many environments you need and how long they live. For example, teams frequently queue up and wait to use specialized test environments. But if you have a well-containerized application, spinning up a new environment for testing can be quick, easy and reliable. No one has to wait! When the need for the environment goes away, it can be torn down. Disposability means reduced overhead for system maintenance.
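An ephemeral test environment of this kind might look like the following sketch (the project and service names are hypothetical), using Docker Compose project names to give each tester an isolated copy:

```shell
# Spin up a disposable, isolated environment per tester, run the
# tests, then tear everything down, volumes included.
docker compose -p "test-$USER" up -d
docker compose -p "test-$USER" exec web pytest
docker compose -p "test-$USER" down -v   # nothing left to maintain
```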
This is not simply a change in tooling. Containerizing a legacy platform or using containers in a conventional development environment can deliver some benefit, but adapting your application model and your development model to make effective use of containers is the best way to unlock their enormous benefits for cycle time, time-to-market and reliability.
Containers can dramatically change how an organization develops, tests, deploys and operates applications. The technologies themselves, as well as the innovative methods of structuring them to produce business value, are constantly evolving; containers allow you to keep abreast of these developments. The benefits you can achieve with containerization right now are great, but it’s important to keep an eye on long-term goals. Implementation is just the first step. Improving the application and its infrastructure used to be costly and infrequent. With containerization, you can make improvements as part of your regular process of development, allowing you to continually improve resiliency and reliability.
Seriously consider engaging outside firms whose consultants and engineers have practical experience with containerization. They can guide you, both in your approach and in your knowledge acquisition. This will help ensure your organization thrives in its new ecosystem.
Moving your applications to containers and modifying your infrastructure to support them can lead to enormous benefits to your teams, your culture and your bottom line. By being mindful of what containers can and can’t do, you can effectively introduce them to your organization and capitalize on their benefits. Avoiding these misconceptions is a good first step.