Taking Out the Trash: The Decomposition of Virtualized Network Functions

Bin packing. My favorite technical term of all time. There. I said it. The only purpose for writing this post is so I could use that term, so I just got it in early to save us all the trouble of going any further. Oh -- hold on, I’m being told that 40 words do not constitute a blog post. OK, let me explain further then.

[Image: do-not-litter sign]

Network functions virtualization (NFV) initiatives have been progressing at breakneck speed, with specifications and proofs of concept that demonstrate the telecommunications industry’s desire to move from bloated, decades-long standardization initiatives to a modus operandi that favors rough consensus and running code. With classic network operators facing competitive pressures never seen before, from new breeds of service providers that know only agile development methods, this shift could not have come soon enough. But in the rush to resolve critical NFV infrastructure (NFVI) issues, have we lost sight of some important deployment characteristics originally conceived for the actual virtualized network functions (VNFs) themselves?

It is generally understood that VNFs like our very own Project Clearwater, built from scratch using Web development methods and techniques, will perform better in the cloud than software ported haphazardly from alternative platforms or -- worse still -- proprietary hardware. Building cloud-native in this way provides the software resiliency required in non-five-nines hardware environments while enabling the Web scale demanded of consumer communications services. From the first ETSI white paper, though, it was clear the network operators expected more.

The concept of a virtualized network function component (VNFC) was prominent in early ETSI draft specifications. As the name suggests, these were elements of a “larger” VNF that were deployed separately but tied together as described in the VNF descriptor (VNFD), with their connectivity captured in a forwarding graph (VNFFG). (Pause to recover from acronym overload.) The final versions of the Phase 1 ETSI specifications, most notably from the software architecture (SWA) working group, dig deep into the deployment options for VNFCs, but I can’t help thinking the original VNFC goal has been diluted. You see, in today’s NFV, a VNFC exposing a standardized interface is called a VNF.

Therefore, even though you need both of the distinct components we have built as part of our Perimeta Session Border Controller (enabling them to scale independently and be instantiated in disparate locations) to deliver a complete SBC solution, they are thought of as distinct VNFs in today’s NFV parlance. Only components with vendor-specific, proprietary interfaces can be called VNFCs, which weakens the VNFC brand somewhat and, of late, appears to be marginalizing the VNFC’s power.

The tide, though, appears to be turning. The lowly VNFC has acquired a fancy brand name, “Microservice,” which will look great in data sheets and therefore, I feel, more than makes up for the loss of the original nomenclature. It also has a nicer ring to it than decomposition, which sounds like an affliction associated with zombies in The Walking Dead. Moreover, at a recent NFV gathering, a senior executive from a large U.S. telco (who also works on various NFV initiatives) noted that if the industry can’t take advantage of the ability to decompose VNFs, then the industry has failed. And yes -- I’m holding back this individual’s name to save them the embarrassment of being associated with me or this post in any way.

There are three very distinct advantages to building a VNF from smaller components. The first has to do with bin packing (yay!), which in this case refers to the ability to best utilize the individual CPU cores within a host machine. Simply put, I can dramatically reduce stranded processing capacity by deploying smaller VM instances. Finding room for a 2 x Core flavor on a host is much easier than for one requiring 8 x Cores. Too many large VM instances could easily result in a large amount of unused CPU capacity, which could end up costing network operators a substantial amount of money. Naturally, it also takes great software engineering skill to ensure you ‘pack’ a CPU core as well as possible -- but that’s a story for a future post. Anyhow, we can think about fine-grained componentization here with this example of a decomposed router.


An example of a decomposed router.
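To put some rough numbers behind that bin-packing claim, here is a minimal sketch -- purely illustrative, and emphatically not Metaswitch code. It runs a simple first-fit placement of VM flavors onto 10-core hosts, comparing the cores left stranded by 8-core monoliths against the same workload split into 2-core components. The host size, flavor sizes and workload are all assumptions made up for the example.

```python
# A minimal sketch of why smaller flavors pack better. We place VM instances
# onto hosts with a simple first-fit heuristic and count the CPU cores left
# stranded. All sizes and counts are purely illustrative.

def first_fit(instance_cores, host_cores=10):
    """Place each instance on the first host with room, opening new hosts as needed.
    Returns (hosts_used, stranded_cores)."""
    hosts = []  # free cores remaining on each open host
    for cores in instance_cores:
        for i, free in enumerate(hosts):
            if free >= cores:
                hosts[i] -= cores
                break
        else:
            hosts.append(host_cores - cores)  # no room anywhere: open a new host
    return len(hosts), sum(hosts)

# 80 cores of workload, expressed as ten 8-core monoliths vs forty 2-core components.
monolithic = [8] * 10
decomposed = [2] * 40

for label, flavors in (("8-core VNFs", monolithic), ("2-core VNFCs", decomposed)):
    hosts, stranded = first_fit(flavors)
    print(f"{label}: {hosts} hosts, {stranded} cores stranded")

# On 10-core hosts the 8-core flavors strand 2 cores per host (20 in total),
# while the 2-core flavors fill every host completely.
```

Real cloud schedulers use far more sophisticated placement logic than this, but the arithmetic points the same way: smaller pieces leave fewer cores stranded.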

As mentioned previously, building a VNF from decomposed elements also enables them to scale independently. Orchestrators can spin up just the media component of the Metaswitch SBC when more active calls are in progress, again saving CPU cycles by using these resources only on demand. Coarsely componentized functions, such as our SBC or a media resource function (MRF), are good examples of functions that can benefit from this feature.


Selective scaling of components handling disparate traffic streams.
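As a rough sketch of what “scale just the media component” might look like from an orchestrator’s point of view, here is a toy sizing function. The per-instance capacity, the component names and the call counts are all hypothetical; this is not how Perimeta or any particular orchestrator is actually implemented.

```python
# A toy illustration of scaling one VNFC independently: only the media
# component is resized as active calls rise and fall, while the signaling
# component stays put. Capacity figures are assumptions for the example.

import math

CALLS_PER_MEDIA_INSTANCE = 500   # hypothetical capacity of one media VNFC

def desired_media_instances(active_calls, minimum=1):
    """Return how many media instances to run for the current call load."""
    return max(minimum, math.ceil(active_calls / CALLS_PER_MEDIA_INSTANCE))

deployment = {"signaling": 2, "media": 1}

for active_calls in (120, 1800, 4200, 600):
    deployment["media"] = desired_media_instances(active_calls)
    print(f"{active_calls:>5} active calls -> {deployment}")

# The media count steps 1 -> 4 -> 9 -> 2 while signaling stays at 2: CPU
# cycles are spent on media processing only when the traffic demands it.
```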

Lastly, but really personifying the microservice moniker, are functional components of a VNF -- or simply VNFCs -- that can be reused by other VNFs. Take voice or video transcoding: required of practically all multimedia communications functions in some form or another, it’s ripe for reuse. This gives network operators the opportunity to pick one (software) transcoding engine and leverage it across their entire rich communications services infrastructure, thereby eliminating the need to buy that component from each individual multimedia communications VNF vendor. Naturally, realizing this capex saving would mean network operators would have to identify the cost of such functionality, which requires them to be ruthless to the Walmart degree.

What do I mean by that? Well, Walmart is famous for knowing the cost of the components that make up the products they sell and having vendors adjust their cost (downwards, of course) accordingly. Their buyer’s knowledge is as granular as the chemicals that are used to make Tupperware. When they see the cost of that chemical decrease, they expect the savings to be passed along.

In the case of transcoding, the potential savings go much further. With proprietary CODECs standard options in voice and video, each vendor must individually pass its licensing costs on to the operator. With licensing structures tiered and priced based on quantity, in step functions or system-wide, it’s typical for carriers to pay different prices for the same CODEC license -- or worse, pay a per-license price for one function when they already have an unlimited, system-wide license on another. A shared transcoder component eliminates that waste and can also increase the velocity of new applications, reducing development times for new services or enabling the rapid addition of new features to existing offerings.
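To make the reuse argument concrete, here is a minimal sketch, with entirely hypothetical class names and codec pairs, of one shared transcoding component being consumed by both an SBC and an MRF, rather than each VNF bundling and licensing its own transcoder.

```python
# A hedged sketch of the "reusable VNFC" idea: one shared transcoding service
# exposed through a plain interface, consumed by several VNFs instead of each
# bundling (and licensing) its own transcoder. Names and codecs are illustrative.

class TranscodingService:
    """Stand-in for a shared, independently scalable transcoding VNFC."""

    SUPPORTED = {("AMR-WB", "OPUS"), ("OPUS", "AMR-WB"), ("G.711", "OPUS")}

    def transcode(self, stream_id, src_codec, dst_codec):
        if (src_codec, dst_codec) not in self.SUPPORTED:
            raise ValueError(f"no transcoding path {src_codec} -> {dst_codec}")
        # Real media processing would happen here; we just report the request.
        return f"stream {stream_id}: {src_codec} -> {dst_codec}"

# Two different VNFs reuse the same component rather than shipping their own.
shared_transcoder = TranscodingService()

class SessionBorderController:
    def __init__(self, transcoder):
        self.transcoder = transcoder

    def bridge_call(self, stream_id):
        return self.transcoder.transcode(stream_id, "AMR-WB", "OPUS")

class MediaResourceFunction:
    def __init__(self, transcoder):
        self.transcoder = transcoder

    def play_announcement(self, stream_id):
        return self.transcoder.transcode(stream_id, "G.711", "OPUS")

print(SessionBorderController(shared_transcoder).bridge_call("sbc-001"))
print(MediaResourceFunction(shared_transcoder).play_announcement("mrf-042"))
```

The point is simply that both consumers talk to one transcoding engine, so the operator licenses, scales and pays for it once.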

So: decomposition, microservices or VNFCs. Call them what you like; while they admittedly increase the complexity of orchestrating NFV dramatically, they are a good idea. Now that we have the core infrastructure on the road to ratification, it’s time they got some more attention, I think. But then I could be talking garbage. Check out www.projectclearwater.org to see a great example of a decomposed VNF.