Everything You Need To Know About Containers
The pace of technology change can be grueling. Many of us in the communications sector have certainly felt this keenly over the last five or so years as we’ve been getting to grips with the softwarization, virtualization and cloudification of networks. Indeed, as soon as the industry figures out virtualization and learns all the terminology around virtual machines (VMs), along comes the concept of cloud native with a host of new vernacular like microservices and containers.
Containers and microservices are getting a lot of attention lately among telcos, as they are foundational to cloud native software design, especially as the new 5G Core develops. Here, we’re devoting some time to containers. To keep up with the latest terminology, we’ve compiled a handy list of terms that will help to demystify what containers are all about.
What is a container?
A container is a lightweight unit of software: one or more Linux processes running in an isolated partition of the host, created using kernel namespaces (and typically cgroups for resource limits) provided by the Linux kernel installed on the host system. The difference between a container and a virtual machine is that a virtual machine needs a complete guest operating system installed in it to support the application, whereas a container only packages up the application software, together with any application-specific OS dependencies, and leverages the operating system kernel already running on the host. So, containers don’t need a separate guest OS for each application, which is what makes them so much more lightweight than VMs.
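To make that concrete, here is a minimal sketch of a Dockerfile for a small Python service. It is a hypothetical example (the image, file and app names are illustrative): note that it packages only the application and its dependencies, not a guest operating system kernel.

```dockerfile
# Hypothetical example; image and file names are illustrative.
# The base layer provides userspace libraries only; the kernel comes from the host.
FROM python:3.12-slim
WORKDIR /app
# Install only this application's own dependencies.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
# The container runs a single application process, not an init system or full OS.
CMD ["python", "app.py"]
```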
Why use containers instead of VMs?
Containers have cost saving advantages. Compared to VMs, containers use fewer hardware resources because they don’t carry a complete OS, they have faster startup times, they require less maintenance, and they are very portable: a container image can run on any Linux host with a compatible kernel and CPU architecture. So, a container app can be written just once and then deployed anywhere.
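The build-once, run-anywhere workflow can be sketched with a few typical Docker CLI commands. This is an illustrative session, not a definitive recipe; it assumes Docker is installed on the host, and the image name myapp is hypothetical.

```shell
# Build the image once, from a Dockerfile in the current directory.
docker build -t myapp:1.0 .

# Run it on any compatible Linux host with a container engine.
# There is no guest OS to boot, which is why startup is so fast.
docker run --rm myapp:1.0

# Containers share the host kernel: this prints the host's kernel version.
docker run --rm myapp:1.0 uname -r
```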
What if my software was originally designed for dedicated hardware?
If you need to deploy an application that was originally designed to run on a dedicated server into a cloud environment, chances are you will need to deploy it in a virtual machine because of operating system or hardware dependencies. But if you are writing new software to run in a cloud environment (in other words, cloud native software), then it’s very easy to do so in a container-friendly way. In the realm of Web-scale companies today, most cloud native software is deployed in containers.
The following is a starter selection of container terms that you’ve probably heard a lot lately and that are most relevant to telco implementations:
Container Image: A file, built using platforms such as Docker or rkt, that packages an application’s code together with its dependencies; a container is a running instance of an image.
Container Engine: Software that handles user requests, retrieves images and runs containers from them, providing user interfaces and APIs. Examples include Docker and rkt.
Pod: A group of one or more containers with shared storage and networking, together with instructions for how to run those containers; the Pod is the smallest deployable unit in Kubernetes.
Kubernetes: An open source platform for managing and orchestrating containerized workloads and services.
Grafana: An open source tool that provides metric analytics and visualization – a dashboard – for monitoring containerized apps.
Helm: An open source package manager for Kubernetes that makes it easier to define, install and upgrade applications.
Prometheus: An open source monitoring and alerting toolkit, often used alongside Grafana.
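To tie a few of these terms together, here is a minimal sketch of a Kubernetes Pod manifest. The names and image are hypothetical, and the port is an assumption; applying a file like this (for example with kubectl apply -f pod.yaml) asks Kubernetes to schedule and run the Pod on the cluster.

```yaml
# Hypothetical example; names, image and port are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: myapp:1.0        # a container image pulled by the container engine
    ports:
    - containerPort: 8080   # port the application is assumed to listen on
```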
That’s just the beginning in the wonderful world of containers. As recognized pioneers in cloudification, Metaswitch has a deep understanding of what it takes to deliver cloud network functions (CNFs) with superior performance, scalability and resiliency. For more on containers and cloud native design, please download our white paper.