5 Reasons Why Containers are All the Rage

If we’re playing a word association game and someone says, “cloud native,” a common response is likely to be “containers.” Linux containers are all the rage because they are considered to be one of the building blocks of cloud native network functions (CNFs) and applications. This post explains why.


Containers leverage a long-standing partitioning mechanism in the Linux kernel known as “namespaces,” which gives each partition its own isolated view of resources such as processes, filesystems and network stacks. A container is a secure, namespace-based partition in which one or more Linux processes run, supported by the Linux kernel installed on the host system.
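To make the namespace idea concrete, the snippet below (a minimal sketch assuming a Linux host; the helper function name is our invention, not from this post) lists the namespaces the current process belongs to by reading the `/proc/self/ns` pseudo-files:

```python
import os

def process_namespaces(ns_dir="/proc/self/ns"):
    """Map namespace type (e.g. 'net', 'mnt', 'pid') to its identifier.

    On Linux, each entry in /proc/self/ns is a symlink such as
    'net:[4026531840]'; two processes share a namespace exactly when
    their identifiers match. Returns {} on non-Linux systems.
    """
    if not os.path.isdir(ns_dir):
        return {}
    return {
        name: os.readlink(os.path.join(ns_dir, name))
        for name in sorted(os.listdir(ns_dir))
    }

# Example: show which namespaces this (uncontainerized) process is in.
for ns_type, ns_id in process_namespaces().items():
    print(ns_type, "->", ns_id)
```

A process inside a container would report different identifiers from one running directly on the host, which is exactly the separation described above.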

The main difference between a container and a virtual machine (VM) is that a VM needs a complete operating system (OS) installed inside it to support the application, whereas a container packages only the application software, optionally together with any application-specific OS dependencies, and leverages the kernel already running on the host. Containers rely only on the Linux kernel application programming interface (API), which is extremely stable and consistent across Linux distributions. This is what makes containers so portable.
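As an illustration of packaging only the application plus its OS-level dependencies, a container image definition can start from a slim base and add just the application. The Dockerfile below is a hypothetical minimal sketch (the file names are our invention, not from this post):

```dockerfile
# Start from a slim base image rather than a full OS install
FROM python:3.12-slim

WORKDIR /app

# Application-specific dependencies only; no guest kernel, no full distro
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# The application itself
COPY app.py .

# The container shares the host kernel; this is just the entry process
CMD ["python", "app.py"]
```

Everything below the application layer, including the kernel, comes from the host, which is why such images stay small and portable.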

Here are the advantages of containers compared to VMs:

1. Lower Overhead

Because they do not (in most cases) contain complete OS images, containers have a far smaller memory footprint than VMs and therefore consume considerably fewer hardware resources. Their small footprint may make it feasible to deploy a separate instance of the software for each tenant of some kinds of services, which can simplify the design of the software considerably.

2. Faster Startup Speed

VM images are large because they include a complete guest OS, so the time needed to start a new VM is largely dictated by how long it takes to copy its image to the target host, which may be many seconds or even minutes. By contrast, container images tend to be very small, and containers can often start in under 50 ms. This lets cloud native applications scale and heal extremely quickly, and it also enables new approaches to system design in which containers are spawned to process individual transactions and disposed of as soon as the transaction completes (a.k.a. the “serverless” approach).

3. Reduced Maintenance

VMs contain guest operating systems that must be maintained, for example to apply security patches against newly discovered vulnerabilities. Containers require far less of this maintenance: patching a container typically means rebuilding its image on an updated base and redeploying it.

4. Easier to Deploy

Containers provide a high degree of portability across operating environments, making it easy to move a containerized application from development through testing into production without having to make any changes along the way. Furthermore, containers allow workloads to be moved easily between private and public cloud environments. Being much more straightforward to deploy in the cloud than VMs, they are also much easier to orchestrate.

5. More Portable

Applications packaged as containers are highly portable, both across development, testing and production environments and between different private and public cloud environments. This massively simplifies and speeds up onboarding of applications compared with VM-based software. It also makes it easy to put Continuous Integration / Continuous Deployment (CI/CD) pipelines in place to accelerate innovation, and to leverage public cloud services for testing, prototyping, capacity bursting and disaster recovery, offering significant capex savings and encouraging experimentation.

Of course, not every network function will be packaged in a container, managed by Kubernetes on a cloud infrastructure such as Red Hat OpenShift or VMware PKS, and run on bare metal. That is the ideal scenario. Realistically, cloud infrastructure will have to support legacy virtual network functions (VNFs), packaged in VMs, alongside CNFs in containers.

For more on the intricacies of cloud native network functions and why they matter for Communications Service Providers (CSPs), please download our recent white paper.