4 Key Features of Cloud Native App Architecture
Virtual Network Functions (VNFs) are out, Cloud Native Network Functions (CNFs) are in. But what’s the difference? For anyone not steeped in the latest best practices in software design and cloud networking, the realm of cloud native might look very confusing, even mysterious. To dispel any lingering uncertainty, this blog highlights the most important features of cloud native application architecture.
Some CSPs will certainly be wondering: how is cloud native different from what we’ve been doing with Network Functions Virtualization (NFV), and why do we need it? The simple answer is that cloud native methodologies are the only way to realize the full cost savings and agility benefits that NFV promised.
There are four key features that distinguish the architecture of CNFs from VNFs, as follows:
1. Stateless Processing
The single most important concept in cloud native architecture is stateless processing, because it enables the kind of massive scalability achieved by hyperscale companies, and does so in a way that is inherently fault tolerant.
The best way to describe stateless processing is this: A transaction processing system is divided into two tiers. One tier comprises a variable number of identical transaction processing elements that do not store any long-lasting state. The other tier comprises a scalable storage system based on a variable number of elements that store state information securely and redundantly. The transaction processing elements read relevant state information from the state store as required to process any given transaction, and if any state information is updated in the course of processing that transaction, they write the updated state back to the store.
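The two-tier pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not a real deployment: the "state store" here is an in-memory dict standing in for a redundant, distributed store (e.g. Redis or etcd), and the account/balance example is purely hypothetical.

```python
# Tier 2: durable, shared state (an in-memory dict standing in for a
# redundant distributed store -- illustrative only).
state_store = {}

def process_transaction(account_id: str, amount: int) -> int:
    """Tier 1: a stateless transaction processing element.

    It reads the relevant state from the store, applies the transaction,
    and writes the updated state back -- nothing is retained in the
    worker itself between calls.
    """
    balance = state_store.get(account_id, 0)  # read state as required
    balance += amount                         # process the transaction
    state_store[account_id] = balance         # write updated state back
    return balance

# Because no worker holds state, any identical worker -- including a
# fresh replacement after a failure -- can handle the next transaction:
process_transaction("acct-1", 100)
process_transaction("acct-1", -30)
print(state_store["acct-1"])  # 70
```

The key property is that the processing tier can be scaled up, scaled down, or replaced at will, since every piece of long-lasting state lives in the storage tier.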
2. Microservices
Microservices is a software architecture style in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are highly decoupled and focus on doing a single small task well, facilitating a modular approach to system-building.
The modularity of microservices design enables reusability and composability, efficient scaling, ease of development and deployment, and technology heterogeneity.
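As a concrete sketch of one such service, the snippet below runs a tiny, single-purpose process behind a language-agnostic JSON-over-HTTP API using only the Python standard library. The service name, data, and endpoint are hypothetical, chosen only to illustrate the pattern.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# This microservice does exactly one small task: price lookup.
PRICES = {"widget": 250, "gadget": 990}

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        item = self.path.lstrip("/")
        body = json.dumps({"item": item, "price": PRICES.get(item)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), PriceHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any other service, written in any language, consumes it over HTTP:
with urlopen(f"http://127.0.0.1:{port}/widget") as resp:
    result = json.loads(resp.read())
print(result)  # {'item': 'widget', 'price': 250}

server.shutdown()
```

Because the interface is plain HTTP and JSON, the consuming service can be written in a different language and scaled independently, which is exactly the decoupling the microservices style aims for.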
3. Containers
Containers are now considered indispensable for cloud native applications. Containers leverage a long-standing method for partitioning in Linux known as “namespaces,” which provides separation of different processes, filesystems and network stacks. A container is a secure partition based on namespaces in which one or more Linux processes run, supported by the Linux kernel installed on the host system.
The main difference between a container and a virtual machine (VM) is that a virtual machine needs a complete operating system installed in it to support the application, whereas a container only needs to package up the application software, with the optional addition of any application-specific OS dependencies, and leverages the operating system kernel running on the host.
Compared to VMs, containers consume fewer hardware resources, start faster, require less maintenance, and are more portable and easier to deploy.
4. Design for Automation
Cloud native applications are invariably orchestrated in some way to automate the deployment process. Likewise, orchestration is needed to automate operations such as scaling the different microservices and healing failed instances, because these tasks would be too complex and onerous to perform manually.
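The scaling and healing behavior described above is typically driven by a reconciliation loop: the orchestrator compares desired state with observed state and acts on the difference. Below is a toy sketch of that idea; the function and instance names are illustrative and not taken from any particular orchestrator.

```python
def reconcile(desired_replicas: int, running: list) -> list:
    """Return the actions needed to converge the set of running
    instances toward the desired replica count."""
    actions = []
    if len(running) < desired_replicas:
        # Scale out (or heal a failed instance) by starting more.
        for i in range(desired_replicas - len(running)):
            actions.append(f"start instance-{len(running) + i}")
    elif len(running) > desired_replicas:
        # Scale in by stopping the surplus instances.
        for name in running[desired_replicas:]:
            actions.append(f"stop {name}")
    return actions

# Scale out: two instances running, three desired.
print(reconcile(3, ["instance-0", "instance-1"]))
# ['start instance-2']

# Heal: an instance has failed and disappeared, so the loop restarts it.
print(reconcile(2, ["instance-0"]))
# ['start instance-1']
```

An orchestrator such as Kubernetes runs loops of this kind continuously, which is why failed instances are replaced and scale changes happen without manual intervention.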
For another perspective on cloud native principles, the Cloud Native Computing Foundation (CNCF) has published a Cloud Native Trail Map that is a useful reference.
And for a fuller explanation of the key features of cloud native and more, please download our white paper, Cloud Native Network Functions: Design, Architecture and Technology Landscape.