Taking Extreme Optimization to the Edge
Initiatives like Multi-Access Edge Computing (MEC) are working to provide a roadmap to the edge of the network, while 5G use cases that depend on low latency will make edge computing imperative. But the cloud environments at the network edge will be very different from centralized core network clouds, and they will handle different types of workloads. In 5G networks, these differences will have major implications for how core network functions are designed – namely, edge clouds will require extreme network optimization and very high performing packet processing.
Core cloud environments and edge clouds are different in terms of scale and workloads. Centralized core clouds serve tens of millions of subscribers and take up anywhere from 20 to 80 server racks in a data center. They primarily handle control plane applications, such as IP Multimedia Subsystem (IMS), Telephony Application Servers (TAS) or, for the 5G Core, the Access and Mobility Management Function (AMF) and Session Management Function (SMF).
These workloads are compute-intensive in terms of CPU resource consumption and therefore require a traditional cloud architecture.
Edge clouds are smaller, serving up to a couple of million subscribers and occupying up to three server racks. The workloads are mainly user plane functions, such as the 5G User Plane Function (UPF), deep packet inspection (DPI), content caching or video optimization.
Centralized control plane functions are easier to manage and more cost-efficient to scale. But service providers do not want to backhaul huge amounts of user plane traffic to a central location – rather, they want to process the data closer to users at the edge. So, user plane functions will increasingly reside in physically separate locations and in a different kind of cloud compared to control plane functions.
5G use cases present various user plane challenges. Enhanced mobile broadband, for example, requires a dramatic reduction in cost per bit. Low-latency applications face not only the cost-per-bit challenge but also the task of delivering low latency over a highly distributed user plane, which requires a small footprint. And of course, these functions need to be software-based, not physical appliances.
That calls for aggressive optimizations to reduce costs and increase efficiency. There are three areas where extreme optimization in the user plane can be achieved:
- Get packet processing software close to the hardware. This involves eliminating multiple layers of software that may have been introduced with traditional container approaches as well as leveraging SR-IOV network interface cards (NICs).
- Flatten the stack. The network stack at the edge can’t have the same complex tunnel-based overlay that is typical in large centralized data centers. We need to eliminate the large overheads and additional encoding and decoding associated with tunnel-based overlays.
- Co-locate all user plane packet processing tasks. We want to touch the packet as few times as possible at the edge before sending it on its way. That means having the UPF, traffic detection function (TDF), firewall and network address translation (NAT) all in one piece of software, and implementing the GiLAN service chain within a container pod.
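The co-location idea can be illustrated with a minimal, hypothetical sketch: a single function parses each packet header once and then applies TDF classification, a firewall check and NAT rewrite in the same pass, rather than handing the packet to three separate components that each re-parse it. All rule tables, addresses and field names below are illustrative, not part of any real UPF implementation.

```python
# Illustrative single-pass user plane pipeline: one parse, then
# firewall, TDF and NAT applied in sequence without re-parsing.
# All addresses and rules are made-up examples.

FIREWALL_BLOCKLIST = {"203.0.113.9"}            # blocked source IPs
NAT_POOL = {"10.0.0.5": "198.51.100.7"}         # private -> public mapping
TDF_APPS = {443: "tls", 53: "dns"}              # port-based app detection

def process_packet(pkt):
    """Filter, classify and translate in one pass over the parsed header."""
    if pkt["src_ip"] in FIREWALL_BLOCKLIST:
        return None                              # dropped by firewall step
    pkt["app"] = TDF_APPS.get(pkt["dst_port"], "unknown")  # TDF step
    if pkt["src_ip"] in NAT_POOL:                # NAT step, same pass
        pkt["src_ip"] = NAT_POOL[pkt["src_ip"]]
    return pkt

out = process_packet({"src_ip": "10.0.0.5", "dst_port": 443})
# out["src_ip"] == "198.51.100.7", out["app"] == "tls"
```

The point of the sketch is structural: each function sees the already-parsed header, so adding a service to the chain costs a table lookup, not another trip through a parser or a kernel network stack.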
The key to delivering this optimization is a packet processing engine, like Metaswitch’s Composable Network Application Processor (CNAP), which provides the foundation for virtual network functions that handle user plane traffic.
VNFs deployed at the edge place different demands on cloud infrastructure. For user plane traffic, the network stack must be flattened; the solution should be engineered so that most packets are touched just once; and the software handling user plane packets must itself be highly optimized and flexible enough to cope with evolving user plane requirements.
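To see why flattening the stack matters, the per-packet cost of a typical tunnel-based overlay can be quantified. A VXLAN overlay, for instance, prepends an outer Ethernet header (14 bytes), outer IPv4 header (20 bytes), outer UDP header (8 bytes) and VXLAN header (8 bytes) to every frame, per the standard frame format. The back-of-the-envelope calculation below is a sketch of that overhead; the small-frame size chosen is just an example.

```python
# Per-packet byte overhead of a VXLAN tunnel-based overlay
# (standard header sizes; frame sizes below are illustrative).
OUTER_ETH, OUTER_IPV4, OUTER_UDP, VXLAN = 14, 20, 8, 8
OVERHEAD = OUTER_ETH + OUTER_IPV4 + OUTER_UDP + VXLAN   # 50 bytes

def overlay_overhead_pct(frame_bytes):
    """Extra bytes on the wire as a percentage of the original frame."""
    return 100.0 * OVERHEAD / frame_bytes

print(f"{OVERHEAD} bytes added per packet")
print(f"64B frame:   {overlay_overhead_pct(64):.0f}% overhead")    # 78%
print(f"1500B frame: {overlay_overhead_pct(1500):.1f}% overhead")  # 3.3%
```

For small packets, which dominate many mobile user plane workloads, the overlay can cost well over half the frame size again in bandwidth, on top of the CPU cycles spent encapsulating and decapsulating – hence the push to eliminate it at the edge.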
For more on Metaswitch’s 5G core user plane solutions and edge deployment strategies, check out our previous posts: