4 Reasons Why Containers are Cool for NFV
Containers are all the rage in network virtualization these days. They’re not new to virtualized IT and Web-scale networks; indeed, containers have been around since the days when “docker” only meant someone who worked on a dock. But they are new to the communications networking scene. Linux containers are gaining ground in Communications Service Provider (CSP) plans for network virtualization, especially in preparing for 5G, because they offer many advantages over virtual machines (VMs).
As we describe in our definitive white paper on cloud native design principles, containers leverage a long-standing method for partitioning in Linux known as “namespaces,” which provides separation of different processes, filesystems and network stacks. A container is a secure partition based on namespaces in which one or more Linux processes run, supported by the Linux kernel installed on the host system.
One way to think about the difference between VMs and containers is that a VM is hardware virtualization whereas a container is operating system (OS) virtualization. Both allow multiple workloads to be deployed on the same host. But containers provide an isolation capability that allows multiple apps to share the same host OS without the need for a separate guest OS for each app.
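As a rough illustration (not from the original post), the namespace memberships that underpin this isolation are visible under `/proc` on any Linux host: every process belongs to a set of kernel namespaces (pid, net, mnt, uts and so on), and two processes in the same container share the same set. A minimal Python sketch, assuming a Linux host with `/proc` mounted:

```python
import os

def list_namespaces(pid="self"):
    """Return the namespace types (pid, net, mnt, uts, ...) that the
    given process is a member of, read from /proc/<pid>/ns."""
    ns_dir = f"/proc/{pid}/ns"
    if not os.path.isdir(ns_dir):  # non-Linux hosts expose no /proc/<pid>/ns
        return []
    return sorted(os.listdir(ns_dir))

if __name__ == "__main__":
    # Processes in the same container see identical namespace handles here;
    # processes in different containers see different ones.
    print(list_namespaces())
```

A container runtime creates fresh namespaces for each container it launches, which is why containerized processes get their own process tree and network stack while still running directly on the host kernel.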
This creates a number of benefits for network virtualization and makes it a key ingredient for realizing the promise of Network Functions Virtualization (NFV). Here’s why containers are so cool for NFV:
- Lower overhead. Because they do not (in most cases) contain complete operating system images, containers have a far smaller memory footprint than virtual machines, and therefore consume considerably fewer hardware resources. Their small footprint may make it feasible to deploy instances of software to serve single tenants for some kinds of services, and this could significantly simplify the design of the software.
- Startup speed. Virtual machine images are large because they include a complete guest operating system. As a result, the time taken to start a new VM is largely dictated by the time taken to copy its image to the host on which it is to run, which may take many seconds. By contrast, container images tend to be very small, and they can often start up in less than 50 ms. This enables cloud native applications to scale and heal extremely quickly, and also allows for new approaches to system design in which containers are spawned to process individual transactions, and are disposed of as soon as the transaction is complete.
- Reduced maintenance. Virtual machines contain guest operating systems, and these must be maintained, for example to apply security patches to protect against recently discovered vulnerabilities. Containers share the host's kernel, so there is no per-workload guest OS requiring this kind of maintenance.
- Ease of deployment. Containers provide a high degree of portability across operating environments, making it easy to move a containerized application from development through testing into production without having to make changes along the way. Furthermore, containers allow workloads to be moved easily between private and public cloud environments. Being much more straightforward to deploy in the cloud than virtual machines, they are also much easier to orchestrate.
All of these benefits translate into cost savings, operational efficiency and service agility for CSPs.
If you need to deploy an application that was originally designed to run on a dedicated server into a cloud environment, chances are you will need to deploy it in a virtual machine because of operating system or hardware dependencies. But if you are writing new software to run in a cloud environment (in other words, cloud native software), then it’s very easy to do so in a container-friendly way. In the realm of Web-scale companies today, most cloud native software is deployed in containers.
Metaswitch is the first company to deliver a complete VoLTE offering built using cloud native software methodologies that can be deployed in public, private or hybrid cloud environments using lightweight containers. We’ve shown that, at a ridiculously low cost, we can spin up an entire one-million-subscriber VoLTE infrastructure on a private cloud using Kubernetes, while employing AWS to provide 100 percent service redundancy from practically a standing start.
For more on containers, check out these previous posts:
Simon is the Director of Technical Marketing and a man of few words.