Building Cloud Native to deliver on the promise of NFV

The primary objective of Network Functions Virtualization, laid out in a white paper developed by a consortium of network operators nearly 5 years ago, was quite clear: to significantly reduce the capital and operational costs associated with deploying communications services. While the NFV specifications have since progressed at breakneck speed, much of the effort has focused on management and orchestration. Meanwhile, the very essence of NFV -- the act of virtualizing network functions -- has floundered, with most suppliers taking the view that simply porting their existing product codebases onto Virtual Machines (VMs) is enough to meet those objectives. It simply isn’t.

This article was first published in CBR.


The appeal of Virtual Machines lies in the fact that they mask the inadequacies of legacy communications software components for emerging NFV infrastructures. Moreover, these legacy appliances, historically built on proprietary hardware, can be stood up within a common virtualization environment with little effort. These hypervisor-based hardware virtualization techniques, however, carry extremely high overheads and lack the speed to protect against infrastructure failures without costly redundancy that practically mirrors today’s non-virtualized implementations. The result is a solution that fails to deliver the cost savings demanded by communications service providers, who must change almost every aspect of their business to support this new model.

Cloud native Virtualized Network Functions (VNFs) are built from scratch using highly scalable, web-centric design patterns and practices. Given the unique demands of telco services and infrastructures, however, these approaches must be extended to support distributed state and asynchronous message handling. With this foundation, cloud native VNFs are not limited to hardware-based Virtual Machine deployments within private clouds. Leveraging highly efficient, lightweight containers that instantiate in a fraction of the time a VM takes, they can be deployed in public clouds or across hybrid public/private cloud environments, providing capacity on demand and failover elements only when they are required. Together with highly automated commercial container cluster orchestration, the resulting solution represents millions of dollars in capital and operational cost savings over early NFV approaches, while dramatically increasing overall service agility.

The move towards cloud native network services should be an opportunity to completely abandon the monolithic software architectures of old for an approach based on microservices. In NFV parlance, microservices are small, autonomous components of a larger VNF. They can be developed independently of the larger network function, or supplied by a different vendor altogether, to ultimately deliver a best-of-breed solution. Plus, with granular forwarding graphs or service chains, they can be reused across many distinct network functions, eliminating the repetitive implementation of common network element features. Microservices are essential to implementing a scale-out approach to meeting customer and traffic demands. Furthermore, they are fundamental to adopting lean DevOps methodologies for service enhancements and upgrades.

Put it all together -- cloud native virtualized network functions built using microservices methodologies and deployed within container environments -- and you can achieve theoretically limitless and cost-effective scale. Services can start up more quickly and there's better fault isolation when one part of a service fails. Also, microservices can be added as needed to handle increased capacity or backup needs.

So which services can benefit first from solutions delivered using cloud native VNFs? Network operators are already working on the deployment and delivery of 5G mobile connectivity and services. This next generation of mobile cannot run on today’s network functions, which are typically inflexible, expensive to scale and costly to upgrade. According to industry bodies like ETSI, 5G networks need to be highly scalable, deliver ultra-low latency connectivity and support a huge number of concurrent sessions. Further, network performance must be reliable and underpinned by a robust security strategy. Essentially, 5G isn’t just about RAN upgrades; it also requires a new kind of core network to deliver on the service, scale, security and quality-of-experience demands.

Whilst 5G will require substantial changes to the way networks are built – driving the need for scalable, configurable cloud native VNFs – the growth of LTE-based services is already pushing networks to their limits. With an escalating number of IoT and smart home applications now appearing in the market, there is a pressing need for network functions that are designed to fully leverage the cloud, not simply reside in it. Cloud native IMS cores and session border controllers can bring immediate value to operators looking to expand their IMS capabilities into the enterprise, or to slice parts of their existing networks for specific, quality-dependent applications such as the connected car.

As even more mobile endpoints come to market and more IoT-driven experiences arrive, the range of services that consumers demand from network operators will expand beyond traditional boundaries. We are entering a new era of communications, one where the scale and cost-efficiencies demanded by 5G dictate that NFV implementations cannot afford to be built on anything but cloud native VNFs, because only cloud native virtualized network functions can truly deliver on the promise of NFV.