5G Multi-Access Edge Computing with cloudlets in fog creating mist, and why the hell does every networking journal read like a London weather report these days?

It was the mighty Babylonians, way back in 650 BC, who used cloud patterns -- along with a sprinkling of astronomy -- in the first attempt to forecast the weather. Your mother could probably tell you they were absolutely right, for there is a ring of truth around the sayings “The higher the clouds, the finer the weather” and “When clouds appear like towers, the earth is refreshed by frequent showers.” It has also been proven, however, that “When bunions flare, the weather won’t be fair,” so it seems that the greatest empire in history need not have looked any further than Grandma’s extraneous flare-ups to determine whether to pack an umbrella for an evening stroll along the Euphrates.


We now have far more technical ways to predict weather patterns -- most of which involve crunching large amounts of data using compute resources we now, somewhat ironically, refer to as clouds. With the term used both generically and specifically, however, depending on where it’s used and who is using it, things are getting a little confusing. Making matters worse, the industry at large has extended the analogy to include cloudlets, fog and mist, while others have steered clear of such designations in their edge and access-layer interpretations, choosing instead to use compute terminology. This shows a complete lack of imagination, quite frankly, as there are still many more forms of atmospheric phenomena they could employ. Time to step up your game, ETSI.

Fluffy definitions

The U.S. National Institute of Standards and Technology (NIST) defines cloud computing1 as having five essential characteristics, namely: (a) broad network access to (b) pooled, multi-tenant resources which a user can (c) automatically provision and which (d) scale elastically, with (e) fully transparent measuring capabilities. There are three “as a service” models, generally identified by the portion of the stack managed by the cloud service provider. Infrastructure as a Service (IaaS) provides the most flexibility to run any operating system, platform and application while placing the largest management burden on the consumer. Platform as a Service (PaaS) offers the foundation for users to run their own applications within a managed environment, while cloud providers delivering Software as a Service (SaaS) allow individuals or enterprises to consume an application without worrying about any of the deployment attributes.

NIST also outlines four deployment models, adding the concept of a community cloud to the familiar public, private and hybrid variants. Community clouds are built and managed by different organizations that share common requirements (a community of interest) around security, policy, performance and compliance, so examples might include healthcare, finance and telecom.

Things get a little foggy and misty-eyed at the edge

These deployment models typically refer to core (centralized) cloud implementations; however, they are also applicable to cloud implementations that are decidedly decentralized. In a prescient move, somewhat prior to the now-fanatical IoT handwaving around billions upon billions of endpoints, the concept of fog computing was first introduced by Cisco circa 20112 under the premise of enabling data processing on networking nodes closer to end users, consumers and sensors. The obvious application of fog computing is the reduction of latency when acting on information from IoT endpoints.

In contrast to cloud computing, this processing must be performed on platforms with only moderate power and limited -- or periodically even no -- connectivity to core Internet cloud resources. Fog nodes may even reside within vehicles (when acting as IoT gateways), compiling and crunching individual sensor information locally before forwarding data upstream. That data may then be processed by other fog nodes in the access or edge infrastructure. These nodes may be part of horizontally integrated fog architectures that help deliver on the data-driven needs of all operators and users, rather than serving the individual needs of a single vertical, like automotive. For example, a fog node might compress all GPS data prior to forwarding it to the centralized compute cloud.
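To make that GPS example a little more concrete, here is a minimal sketch in Python of the kind of local reduction a fog node acting as an IoT gateway might perform before forwarding data upstream. Every name and threshold below is my own invention for illustration, not anything from a fog specification.

import json
import math
from dataclasses import dataclass
from typing import List

@dataclass
class GpsFix:
    ts: float   # epoch seconds
    lat: float  # degrees
    lon: float  # degrees

def haversine_m(a: GpsFix, b: GpsFix) -> float:
    # Approximate great-circle distance between two fixes, in metres.
    r = 6371000.0
    p1, p2 = math.radians(a.lat), math.radians(b.lat)
    dlat = p2 - p1
    dlon = math.radians(b.lon - a.lon)
    h = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(h))

def thin_track(fixes: List[GpsFix], min_move_m: float = 25.0) -> List[GpsFix]:
    # Keep only fixes that have moved at least min_move_m since the last kept fix.
    if not fixes:
        return []
    kept = [fixes[0]]
    for fix in fixes[1:]:
        if haversine_m(kept[-1], fix) >= min_move_m:
            kept.append(fix)
    return kept

def upstream_payload(fixes: List[GpsFix]) -> bytes:
    # Serialize the reduced track for the hop toward the centralized cloud.
    thinned = thin_track(fixes)
    return json.dumps([{"ts": f.ts, "lat": f.lat, "lon": f.lon} for f in thinned]).encode()

The 25-metre threshold and JSON framing are arbitrary; the point is simply that a little local processing lets the node forward a fraction of the raw sensor stream.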

Led by the usual players with way too much time on their hands for such matters, the OpenFog Consortium3 is now driving the industry toward standardizing on this hierarchical approach to cloud computing. With even low-power chips able to perform increasingly high-powered computation, mist computing takes fog further still by assuming that the IoT endpoints (i.e. the sensors) themselves can perform the sort of data normalization I previously suggested might be carried out by a fog node. The concept of mist is so vague that it even lacks a Wikipedia entry [gasps], but I’m going to credit the Tallinn University of Technology with originating the concept in a 2015 IEEE paper4 and look no further for historical references. Consider the Tallinn University of Technology an Estonian Johnny Depp: finally winning the Oscar for Best Actor after years of tireless service to our industry.5


THE MULTI-CLOUD

Mist compute clouds are typically self-organizing, in that they are instantiated in a more ad-hoc manner, driven by the individual elements’ desire to interoperate or operate as a single entity. This is contrary to fog computing infrastructures, which follow the cloud deployment models detailed previously, under the ultimate ownership of a public or private cloud service provider. Indeed, as of this post, NIST has a draft definition in circulation outlining these variants and introducing mist into its vocabulary.6 Some fog nodes, meanwhile, are characterized by higher compute capabilities and higher-bandwidth connections to the compute cloud than I’ve alluded to in the description so far. For example, they may reside within highly distributed and resilient data centers (perhaps within wireline central offices). In such cases, they are referred to as cloudlets -- a concept and term coined by Carnegie Mellon University boffins7 that creates a clean three-tier cloud architecture and therefore gives us telco guys a welcome sense of order amid this complete cloud chaos.

Continued carrier cloudification

Fog computing nodes can be integrated into the radio access network, reducing latency and backhaul by providing packet processing services in close proximity to endpoints, but they are generally viewed as being operated by the cloud service provider rather than the network service provider. Naturally, though, these entities may ultimately be one and the same, which is further complicated by the fact that this arrangement is sometimes referred to as a Fog Radio Access Network, or F-RAN. That is, strictly speaking, inaccurate or confusing (at best), as it has little to do with the Cloud (centralized) RAN (C-RAN) initiatives that are instrumental to delivering on 5G architectural promises. C-RAN proposed the centralization of base station functions, but that put us back to square one with regard to latency and backhaul. This led to the introduction of Mobile Edge Computing (MEC) by an ETSI Industry Specification Group (ISG), subsequently renamed Multi-access Edge Computing in an effort to broaden the appeal while maintaining the all-important acronym.

MEC (or a MEC-RAN) complements C-RAN rather than displacing a valuable architectural option. Nor is MEC limited to 5G: ETSI proposes a jump-start on this next G by instantiating virtualized EPC (vEPC) elements -- specifically the user-plane-centric S/PGW -- at the network edge. This supports the Control and User Plane Separation (CUPS) initiatives being pushed by the carrier community as a stopgap between 4G and a full 5G core, as 5G New Radio (5G NR) threatens to flood backhaul and overwhelm centralized functions with early, low-latency IoT applications or even more requests for streaming cat videos.

The service-based architecture (SBA) of a 5G core demands that multi-access edge compute capabilities be resident in any number of physical localities -- down to the base station itself. The 5G User Plane Function (UPF) must therefore be highly distributable. It must also adapt easily to these distinct environments and to the diversity of applications it might be exposed to, both as a whole and within individual network slices. An enhanced Mobile Broadband (eMBB) network slice, for example, might demand a more comprehensive set of UPF packet processing pipelines than an ultra-reliable low-latency communications (URLLC) slice.
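To illustrate that last point, here is a minimal Python sketch of per-slice pipeline selection. The stage names and slice profiles are my own illustrative inventions, not taken from any 3GPP or ETSI document.

from typing import Callable, Dict, List

Packet = bytes
Stage = Callable[[Packet], Packet]

def gtp_u_decap(pkt: Packet) -> Packet:
    return pkt  # placeholder: strip the GTP-U header

def flow_lookup(pkt: Packet) -> Packet:
    return pkt  # placeholder: map the packet to a service data flow

def per_flow_qos(pkt: Packet) -> Packet:
    return pkt  # placeholder: police/shape according to the flow's QoS profile

def charging(pkt: Packet) -> Packet:
    return pkt  # placeholder: update usage counters for this flow

def fast_forward(pkt: Packet) -> Packet:
    return pkt  # placeholder: minimal-touch forwarding for latency-critical traffic

PIPELINES: Dict[str, List[Stage]] = {
    # An eMBB slice gets the richer, heavier pipeline.
    "eMBB": [gtp_u_decap, flow_lookup, per_flow_qos, charging],
    # A URLLC slice keeps the per-packet path as short as possible.
    "URLLC": [gtp_u_decap, flow_lookup, fast_forward],
}

def process(slice_type: str, pkt: Packet) -> Packet:
    # Run the packet through the pipeline configured for its slice.
    for stage in PIPELINES[slice_type]:
        pkt = stage(pkt)
    return pkt

A real UPF would, of course, implement these stages in a high-throughput data plane rather than Python; the sketch only shows how the set of stages might differ per slice.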


THE MULTI-RAN

In classic cloud form, multiple individual instances of a UPF must be dynamically deployable, on demand and almost instantaneously, within the C-RAN, the MEC-RAN or somewhere in the fog in between. Implementing this degree of cloud-nativeness is nothing new for app-server-like network functions handling control traffic. Well, not if you are a pioneer in such things, like Metaswitch.8 Data-plane-centric network functions such as the UPF, however, need high packet throughput, which is currently the domain of dedicated (physical) switching and routing platforms built on highly specialized hardware.

Furthermore, while the 3GPP specifications define the basic functional requirements of a 5G core UPF,9 implementations will require the flexibility to customize individual instances as the operator, user endpoints, tenants of network slices or individual service slices demand. The UPF must also be able to adapt to its new multi-tenant Network Functions Virtualization Infrastructure (NFVI) and to the network virtualization overlay of choice -- be it Layer 2 VLAN or Layer 3 VXLAN, MPLS or segment routing -- along with tunneling and mobility techniques beyond just GTP, such as SRv6, NSH, the Locator/ID Separation Protocol data plane (LISP-DP) and Identifier-Locator Addressing (ILA).
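Purely to illustrate the shape of that per-instance customization, one could imagine each UPF instance being stamped out from a descriptor along these lines. Nothing here comes from a real product or specification; the field names and option lists are assumptions of mine.

from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List

class Overlay(Enum):
    VLAN = auto()             # Layer 2
    VXLAN = auto()            # Layer 3
    MPLS = auto()
    SEGMENT_ROUTING = auto()

class Tunnel(Enum):
    GTP_U = auto()
    SRV6 = auto()
    NSH = auto()
    LISP_DP = auto()
    ILA = auto()

@dataclass
class UpfInstanceDescriptor:
    # One descriptor per deployed UPF instance, scoped to a slice and a tenant.
    slice_id: str
    tenant: str
    overlay: Overlay = Overlay.VXLAN
    tunnel: Tunnel = Tunnel.GTP_U
    pipeline_stages: List[str] = field(
        default_factory=lambda: ["decap", "flow_lookup", "qos", "charging"]
    )

# A hypothetical URLLC instance might trade features for latency, and GTP for SRv6:
urllc_upf = UpfInstanceDescriptor(
    slice_id="urllc-factory-01",
    tenant="operator-a",
    overlay=Overlay.SEGMENT_ROUTING,
    tunnel=Tunnel.SRV6,
    pipeline_stages=["decap", "flow_lookup", "fast_forward"],
)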

A shameless plug

I have written about the types of cloud-native methodologies, programmable packet processing pipelines, composable networking, data plane acceleration and VPN technologies that could enable a UPF to be successfully deployed in 5G core cloud infrastructures. These are, however, simply starting points for a supplier considering delivering on such a proposition. Indeed, many would need to be not just implemented, augmented and integrated but, in some cases, replaced with alternatives in order to meet the demands placed on such an element in the unforgiving multi-tenant, multi-service, multi-cloud, multi-RAN environment.

The fog may well be clearing, revealing those with a real opportunity to provide genuine solutions in this area. I, for one, couldn’t be happier. I hate the rain.

 

1. https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf

2. https://www.sigmobile.org/mobicom/2011/vanet2011/program.html

3. https://www.openfogconsortium.org/

4. https://ieeexplore.ieee.org/document/7163242/

5. Fun fact: There’s a nightclub in Tartu, Estonia, called “Who wouldn’t like Johnny Depp,” which, based on that sentence construction, apparently shares the same marketing agency as most telecommunications vendors.

6. https://csrc.nist.gov/csrc/media/publications/sp/800-191/draft/documents/sp800-191-draft.pdf

7. http://elijah.cs.cmu.edu/

8. Who also happen to graciously pay my monthly bills.

9. Such as packet forwarding, charging, access control, GTP-U tunnel encap/decap, bearer lookup, service data flow mapping, per-flow QoS, etc., etc.