Containers have rapidly emerged as yet another option for enterprises seeking to modernize applications. For CIOs still getting their arms around DevOps, PaaS and microservices, the topic of containers is creating even more questions about where and how to modernize the portfolio.
To better understand containers and their pros and cons, it’s helpful to quickly compare them to virtualization.
Virtualization hypervisors enable a single physical host to run multiple application instances on independent virtual machines (VMs). Each VM on the hypervisor has its own dedicated operating system (OS); in fact, individual VMs on the same host can even run different OS platforms (for example, Linux vs. Windows). More on this later.
The good news? Each VM can be managed as if it were an independent physical server with corresponding management and security benefits.
The bad news? On start-up, each VM must boot a full OS and load its associated system resources and libraries, which adds significant processing overhead to a virtualized environment.
Instead of virtualizing the physical hardware, containers virtualize the operating system. Rather than virtualizing and sharing the hardware via a hypervisor, containers share a common OS instance. In the case of the popular Docker platform, the containers are Linux-based, allowing application instances to share the kernel, system resources, and libraries. Interestingly, Docker isn’t, in fact, the only container technology out there: FreeBSD Jails and Solaris Zones have been around for over a decade.
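A quick way to see this sharing in practice: a container reports the host's kernel version, because it never boots a kernel of its own. Here is a minimal sketch using the Docker SDK for Python (our choice of client is an assumption; any Docker client would show the same thing):

```python
# Minimal sketch using the Docker SDK for Python (pip install docker);
# assumes a local Docker daemon and the public "alpine" image.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# The container reports the *host's* kernel version, because containers
# share the host OS kernel rather than booting their own.
host_kernel = client.containers.run("alpine", "uname -r", remove=True)
print(host_kernel.decode().strip())
```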
The ability to have multiple application containers running within a single Linux instance has a couple of big benefits.
- First, the number of application instances on a given server can be significantly increased, allowing organizations to get more out of their infrastructure.
- Second, because a new container doesn’t require the full boot of a new OS, the startup time for new application instances can be reduced to a fraction of a second (see the timing sketch after this list).
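To make the second point concrete, here is a rough timing sketch, again using the Docker SDK for Python. The exact numbers will vary by host, and the measurement includes the container's run and cleanup as well as its start, so if anything it overstates pure startup time:

```python
# Illustrative timing sketch (Docker SDK for Python). Container start-up
# is typically sub-second because no OS boot is involved.
import time
import docker

client = docker.from_env()
client.images.pull("alpine")  # pull the image once so the download isn't timed

start = time.perf_counter()
client.containers.run("alpine", "true", remove=True)  # start, run, exit, remove
print(f"container ran in {time.perf_counter() - start:.2f}s")
```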
These factors are what make containers so attractive in web-scale environments like Google, which fires up 3,300 new containers every second to support all of its major services including search and Gmail.
So as enterprises think about how to leverage containers in their environment, what do they need to know?
1. Portability is a plus.
One of the big attractions of Docker containers is that they can run anywhere. Since Docker runs on all major versions of Linux, containers can run on effectively any physical or virtual machine that runs Linux. So not only can Docker containers run on physical infrastructure, they can also run in major IaaS and even PaaS environments such as Amazon AWS, Google Compute Engine (GCE), and Cloud Foundry.
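As an illustration, the very same image reference can be pulled and run unchanged on any Linux Docker host, whether that's a laptop, an on-premises server, or a cloud VM. The image tag below is just an example:

```python
# Portability sketch (Docker SDK for Python): the same public image runs
# unchanged on any Linux Docker host.
import docker

client = docker.from_env()
image = client.images.pull("python", tag="3-alpine")  # same image everywhere
output = client.containers.run(image, ["python", "-c", "print('hello')"], remove=True)
print(output.decode().strip())
```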
2. Not everything can be containerized… yet.
Because they share a virtualized OS, all containers on a given host must run on the same operating system kernel. So while enterprises get full portability for containerized Linux applications with Docker, other OS platforms are a different story. Containers for Windows don’t exist today, though both Microsoft and Docker insist they’re on the way.
3. New management and orchestration is required.
Easily spinning up hundreds of application instances using containers brings a different set of management and orchestration problems that require new tooling and infrastructure. While Google created Kubernetes, an open-source container and cluster management system, to help address orchestration issues, management gaps still exist (e.g., logging).
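For a flavor of what that new tooling looks like, here is a minimal sketch using the official Kubernetes Python client. It asks Kubernetes to keep three replicas of a hypothetical "web" container running, and assumes a reachable cluster configured in ~/.kube/config:

```python
# Minimal orchestration sketch with the official Kubernetes Python client
# (pip install kubernetes). Names like "web" and "nginx:alpine" are examples.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig for cluster access
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three container replicas running
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:alpine")]
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```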
4. Containers are not a replacement for PaaS.
While a lightweight Docker approach makes sense for many apps and environments, others still need the service provisioning, performance monitoring, logging, and versioning that PaaS provides. Depending on the application, architecture, and use case, either containers or PaaS could be the best answer; in some cases it may even make sense to use them together. Viewing them as substitutes for each other is a mistake. At the end of the day, the two are likely to converge.
5. Security is a non-trivial issue.
As containers share CPU, RAM, and other system resources and also require root access, the segregation they provide is not as robust as that provided by hypervisors and VMs. While progress is being made in addressing some of the known security issues with Docker and containers, there are still some significant unknowns.
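That said, a container's privileges can be narrowed considerably. A sketch with the Docker SDK for Python: run as a non-root user, drop all Linux capabilities, and mount the root filesystem read-only. This shrinks, but does not close, the gap with VM-level isolation:

```python
# Hardening sketch (Docker SDK for Python): reduce a container's privileges.
import docker

client = docker.from_env()
output = client.containers.run(
    "alpine",
    "id",
    user="1000",        # run as a non-root user inside the container
    cap_drop=["ALL"],   # drop all Linux capabilities
    read_only=True,     # mount the root filesystem read-only
    remove=True,
)
print(output.decode().strip())  # e.g. "uid=1000 gid=0(root) ..."
```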
6. Containers and DevOps are a natural fit.
Docker is proving popular with DevOps practitioners because it makes it easy for developers to package the runtimes and libraries needed to develop, test, and run an application in a standard way. In addition, existing tools like Puppet and Chef can be used alongside Docker to manage Docker installations, orchestrate containers, or build images.
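The packaging step itself is straightforward. In the sketch below, a minimal (hypothetical) Dockerfile declaring the runtime is built into an image programmatically; in practice, the same Dockerfile would live in the application's repo and be built by CI tooling:

```python
# Packaging sketch (Docker SDK for Python): declare the runtime once in a
# Dockerfile and build it into an image that runs the same way in dev,
# test, and production. The tag "myapp:dev" is just an example.
import io
import docker

dockerfile = b"""
FROM python:3-alpine
CMD ["python", "-c", "print('hello from a packaged runtime')"]
"""

client = docker.from_env()
image, _logs = client.images.build(fileobj=io.BytesIO(dockerfile), tag="myapp:dev")
print(image.tags)  # e.g. ['myapp:dev']
```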
7. Enterprise adoption is still emerging.
As is true with DevOps, cloud, and other next-generation IT models, early adoption has been driven primarily by startups and web-scale enterprises. Within enterprises, adoption is still driven primarily by developers in dev/test environments; given the security and management issues described above, examples of enterprise workloads running in Docker in production remain limited.
Containers are not a magic bullet on their own…
…but instead will be one of the options for enterprises seeking to modernize their application portfolios. As with PaaS, DevOps, and microservices, the trick will be figuring out where and how to use them.