Does 5G finally mean curtains for TCP/IP?

In the realm of protocol suites, TCP/IP is the proverbial cockroach. I mean that in a nice way, of course. Well, as nice as you can be comparing something to an animal recognized as a perennial pest capable of burrowing into human ears.1 No--you can’t unread that. In this instance, however, I am genuinely being complimentary, given that TCP’s robustness and ubiquity have made it difficult to eradicate. That’s not surprising, given that TCP/IP was deployed on the early-1980s ARPANET, a network long rumored to have been designed to survive a nuclear attack. While the Internet Society has long denied that legend, crediting the tale’s origin to a RAND study, those same long beards have admitted to planning the ARPANET with a view to “robustness and survivability, including the capability to withstand losses of large portions of the underlying networks.”2 So... errr... much like would happen in the event of a Cold War-esque nuclear attack, then. Which makes TCP/IP the quintessential cockroach.


Moving beyond the Blattodea3 references, it’s not exactly news that TCP/IP has somewhat overstayed its welcome. Being on the downward slope of the 80-year story arc myself, I have a deep respect for anything still strutting its stuff after 40 years, but TCP does now have to recognize that times have changed. The protocol suite was, after all, developed in a time when information was inherently centralized, and everyone could easily point to the one or two places where the data they wanted was stored. No one cared much about latency, but memory was a scarce resource, resulting in stateless intermediary elements (i.e. routers) with little or no per-flow information held by the network.4 TCP’s continued success could be said to stem from the fact that its reliability grows as the network expands. Why, then, would we break up with something that (to all intents and purposes) has increasingly less chance of breaking?

It’s not you, TCP--it’s me

We are now predominantly mobile, and we are consuming an increasing amount of broadcast(able) content. Right there--that’s two things that this transport protocol simply can’t stand. While we no longer have memory issues, the prospect of billions of IoT devices being responsible for directing our every move has us paranoid about latency. Plus, with the propensity toward self-publication, we are now consuming copious amounts of content from every imaginable element or endpoint. TCP/IP likes order: static addresses that can be used to tag individual packets for routing to well-known destinations. We, however, live in an online world of complete chaos. Who knows where anything is anymore? It’s sorta like my house, really.

On top of all that, we are still practicing nothing more than Security Theatre--the physical world’s term for the act of giving the impression that we are all totally protected.5 Every time you fly, you are participating in this particular form of performance art. When it comes to TCP, we lock down the connection quite adequately using SSL, but this does nothing to protect me from what’s inside the payload I’m receiving or from whoever is sending it.

An over-reliance on overlays

So, while the vast majority of Internet traffic is the acquisition of named chunks of data, such as Web pages and videos, we still request that data using completely nondescript numbering schemes, which are then used to steer traffic across a network. In essence, this differs very little from the telephone network and its numbering scheme. In the same way we employed telephone books to equate a nondescript, pseudo-random number with a real person or endpoint (business), we use the global Domain Name System (DNS) to perform a similar role in the Internet realm. While IP addressing gives the illusion of being dynamic, nothing is further from the truth. Neither the DNS nor the route discovery and propagation mechanisms respond well to changes. Indeed, any time an IP address changes anywhere, it’s a complete kerfuffle. The Internet was fundamentally architected for fixed networking.

That’s probably why we barely route anything at all in a carrier infrastructure. We employ layer 2 or layer 3 overlay techniques (e.g. GTP) and default routes to make it appear that we have fluid connectivity to the Web from wherever we are, when what we are really doing is just tunneling to the edge of the Internet. Not only is this cumbersome, it’s prohibitively expensive for network operators that are desperate to shave recurring (per-subscriber) infrastructure costs.

The propensity toward self-publication within this subscriber community is a trend that will continue to add to the mobility of consumable content. More popular pieces (which unfortunately but not surprisingly don’t include my live webcam events) either are streamed individually or employ an overlay multicast infrastructure. The former is totally inefficient, while the latter is almost completely ineffective. Let’s face it--we’ve tinkered with MCAST for as long as I can recall6 and the fact is that, unless streams are well known and synchronized, multicast as we know it is out.

A little delay goes a long way

5G architects are getting fanatical about latency. The hundred and forty-seven billion trillion gazillion connected IoT endpoints forecast for 2020 demand the return of critical telemetry information, such as location, in a fraction of a second to prevent mass chaos on the roads and rioting in the streets. Or so the marketing material would have us believe. Ultra-reliable and low-latency communication (URLLC) is one of the three broad services defined in 5G NR (New Radio) and will ‘own’ a network slice in order to fulfill its goal. The other two service categories (massive Machine Type Communications (mMTC) and enhanced Mobile Broadband (eMBB)), however, include applications such as sensor grids and real-time multimedia, which also require us to keep delays in check.

There’s a lot of talk about multi-access edge computing playing a pivotal role in this low-latency race. Like NFV, MEC has found a home within ETSI, which sees the ultimate goal as extending public, private or hybrid cloud architectures to the mobile (or fixed-line, if anyone still cares7) edge. MEC delivers on both the requirement to virtualize network functions, in support of network slicing, and to add local, low-delay processing for frequently used applications. This platform serves not only the network operator but also any trusted service provider (i.e. through a network slice) or application developer.

The need for new protocols

Another recent(ish) addition to the ETSI Industry Specification Group (ISG) family covers Next Generation Protocols. Formed in 2016, the NGP ISG is tasked with taking a step back and looking at all the protocols used within today’s mobile infrastructure--both the control and data planes--to determine whether alternatives should be proposed or adopted. As if piling on to the anti-TCP wave, the ISG’s charter pretty much opens with a statement that reads: “We have identified a number of technical issues with the current (TCP/IP-based) technology which prevent it delivering the required levels of service without excessive complexity or, in some cases, at all.”8 Ouch. Our internetworking hero is quickly becoming positively villainous in a 5G world.

Not only is the 5G genesis TCP’s ruin; it is also enabling the downfall. The fundamental ability to totally segment (slice) end-to-end networks at the drop of a hat (or, more precisely, at the automated whim of some artificial intelligence) means operators and service providers are no longer beholden to a single suite of protocols. The network slice is completely isolated from the core network and other slices, eliminating the need for a core protocol that transcends (overlays or underlays) all otherwise distinct networks. A slice can easily live isolated with its own RF, RAN, edge and core infrastructure and (virtualized) supporting network functions, and therefore its very own set of protocols. A key contender for this overhaul has got to be content-centric networking.

An information centric network

While it doesn’t have quite the 30-year saga of the aforementioned IP Multicast, the concept of content-centric networking (CCN) is no spring chicken, either. But if you think this is just a new proposition put forward by someone hell-bent on dismissing the legacy for the benefit of their own self-promotion, think again. In a dramatic twist to this tale, CCN was introduced by none other than the mighty Van Jacobson, one of the main contributors to TCP/IP. Indeed, Van Jacobson’s contributions to TCP are credited with saving it (and--dramatically--the entire Internet) from impending collapse in the late ’80s and early ’90s. It’s like our cockroach going on to invent Raid. Totally ruthless, but proof of the need to sacrifice your own offspring, when the time comes.

The precise chain of custody is a little confusing… or just too boring for me to attempt to untangle, if I’m honest. Basically, information-centric networking, which comprises CCN, emerged from the research swamp around 2006, the result of a number of concurrent U.S. and European projects, including the Data-Oriented Network Architecture (DONA) and various (mostly) EU Framework 7-funded programs. Having given us a teaser during a Google Tech Talk in ’06, Van Jacobson, together with a handful of his PARC colleagues, formally introduced the world to their proposal for Networking Named Content in 2009.9

Information-centric networking now falls under the auspices of the Internet Research Task Force’s (IRTF) ICN Research Group (ICNRG). The group started publishing informational RFCs on the topic in 2015, and while you could argue things started off slowly, there are now 27 active Internet Drafts on the topic--eight of them new this year (2018) and all of them updated in the last four months.10

ICN is built around the simple premise that there are “content consumers”--or those with an interest in a piece of data--and “producers,” or those with that data-of-interest. Both the producer and the consumer can be anyone or anything, anywhere in a network, on the move or in a fixed location. However, the reason interest in ICN has increased over the last year or so is not simply because the notion of network slicing is so close we can smell it; it’s because the network functions supporting those slices will be built fundamentally differently from those in the past. Thanks to ever-faster CPUs, increasing core counts and new data plane acceleration techniques that can move even the gnarliest traffic in seriously short order, general-purpose compute platforms are now being collared to perform the sort of packet processing previously reserved for custom ASICs within dedicated switching and routing boxes.

By their very nature, these new network functions inherently have something those devices have lacked and that IP routing was specifically designed to do without: a mass of memory and a surplus of storage capacity. Plus, more specifically, they are built with a platform mentality rather than as single-use boxes. They don’t just do one thing well. They can do multiple things well, and those things can vary depending on where they are deployed and what infrastructure or services they are supporting. All this explains why the Linux Foundation’s Fast Data Project (FD.io) has taken the “running code” component of the ICN initiative (CCNx) under its wing.11

Evolving the communications model

CCN operates on the idea that a consumer broadcasts its interest across all available connections. Any network node hearing that interest and holding the data can then respond. That node need not be the original producer, as ICN characteristics include the ability for any intermediary to cache content. Naming is hierarchical, in the same efficient way IP addressing is, so data is held in the named tree specified by the consumer. These names may call for static or dynamically generated content and may be contextual in nature: e.g., /local/traffic could allow a consumer or producer to stream or receive traffic information in their local area without additional qualification or specificity.
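To make the hierarchical-naming idea concrete, here is a minimal, purely illustrative sketch (not the CCNx wire format or API--the names and helper functions are hypothetical) showing how a contextual name can be resolved by longest-prefix match, much like an IP route lookup:

```python
# Hypothetical sketch of CCN-style hierarchical name matching.
# Names are slash-delimited, e.g. /local/traffic/I-95.

def components(name: str) -> list[str]:
    """Split a CCN-style name into its path components."""
    return [c for c in name.split("/") if c]

def longest_prefix_match(name: str, table: dict):
    """Return the table entry whose prefix matches the most leading
    components of `name`, mimicking an IP longest-match lookup."""
    parts = components(name)
    for length in range(len(parts), 0, -1):
        candidate = "/" + "/".join(parts[:length])
        if candidate in table:
            return candidate, table[candidate]
    return None, None

# A specific request like /local/traffic/I-95 matches the broader
# /local/traffic entry without any additional qualification.
fib = {"/local/traffic": ["face0", "face2"], "/videos": ["face1"]}
prefix, faces = longest_prefix_match("/local/traffic/I-95", fib)
```

The point of the sketch: a consumer never needs to know where the content lives; the name itself is enough to steer the request toward any node advertising that prefix.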

[Figure: CCN packet structure and forwarding processes]

There are just two packet types, supporting the request and the response. Since both packets clearly identify the content being exchanged, multiple interested parties can share the same content. Unlike overlay broadcast techniques, however, the streams need not be synchronized. As content is consumed, the associated data is pushed closer to the edge of the network, naturally reducing latency and the need to backhaul large amounts of traffic or depend on Internet routing. And because the architecture is natively broadcast, load balancing is inherent to CCN.

A CCN node operates in a similar manner to a classic IP router. A longest-match lookup is first performed on the name in the incoming packet, and a pipeline action is initiated accordingly. Unlike classic IP, which fundamentally prevents such shenanigans, the CCN Forwarding Information Base (FIB) allows a list of outgoing interfaces, rather than just a single one. Interfaces, whether virtual or physical, are referred to as “Faces” in CCN parlance. This means CCN is not constrained by classic Spanning Tree Protocols (STP), allowing for multiple data sources that can all be queried in parallel. The Content Store also has no real equivalent in an IP router: CCN data is not simply forgotten the minute it’s processed. As this self-identified and authenticated (secure) information does not change, it is stored for as long as possible so that others may consume it.

The Pending Interest Table (PIT) tracks interests that have been forwarded upstream, enabling matching data to be sent back downstream to a consumer. As interest packets travel upstream, they leave a trail of digital bread crumbs that the matching data follows back to the requestor. This symmetry lends CCN its predictability, and load balancing across multiple faces is, once again, inbuilt.
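The Content Store, FIB and PIT interplay described above can be sketched as a toy forwarding pipeline. This is an assumption-laden illustration (the class, face names and return values are invented for the example, not taken from any CCNx implementation):

```python
# Toy model of the CCN forwarding pipeline: Content Store (cache),
# Pending Interest Table (PIT) and a multi-face FIB. Hypothetical
# names/API -- for illustration only, not the CCNx codebase.

from collections import defaultdict

class CcnNode:
    def __init__(self, fib):
        self.fib = fib              # name prefix -> list of upstream faces
        self.content_store = {}     # name -> data, kept as long as possible
        self.pit = defaultdict(set) # name -> downstream faces awaiting data

    def on_interest(self, name, in_face):
        # 1. Content Store hit: answer straight from cache.
        if name in self.content_store:
            return ("data", self.content_store[name], {in_face})
        # 2. PIT hit: same content already requested; aggregate the interest.
        if name in self.pit:
            self.pit[name].add(in_face)
            return ("aggregated", None, set())
        # 3. FIB longest-prefix match; forward on ALL matching faces in parallel.
        parts = [c for c in name.split("/") if c]
        for n in range(len(parts), 0, -1):
            prefix = "/" + "/".join(parts[:n])
            if prefix in self.fib:
                self.pit[name].add(in_face)  # leave a bread crumb
                return ("forward", None, set(self.fib[prefix]))
        return ("drop", None, set())

    def on_data(self, name, data):
        # Data follows the PIT bread crumbs back downstream and is cached
        # so later consumers can be served from this node.
        faces = self.pit.pop(name, set())
        self.content_store[name] = data
        return faces
```

Note how interest aggregation gives the synchronization-free multicast behavior described earlier: a second consumer asking for in-flight content adds a face to the PIT rather than generating a second upstream request, and the returning data fans out to every waiting face at once.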

Composing new networks

Name-driven content-centric networking communications are self-securing, flexible, resilient, scalable and highly programmable. They are sufficiently compatible with existing IP conventions to be incrementally deployable--even down to their ability to reuse unmodified interior gateway routing protocols such as IS-IS and OSPF--but are revolutionary enough to justify the infrastructure overhaul that 5G mandates. ICN provides the perfect foundation for low-latency, massively scaled IoT12 and fits well within virtualization frameworks for MEC and beyond.13

Information-centric networking may ultimately be both the enabler and driver for moving beyond simply new radio and deploying a complete 5G infrastructure. Drawing parallels with highly successful open-compute server models, composable networking is a software architecture philosophy that allows network architects and operators to choose the right combination of packet processing platforms, pipelines, network operating system and individual control-plane networking applications for each specific use case. Leveraging general-purpose compute platforms, composable networking reduces both capital and operational expenditures. It increases network agility, accelerates the velocity of innovation, eliminates vendor sales and support lock-in, and allows operators to exploit new, more powerful and feature-rich switching platforms and silicon ahead of typical technology adoption cycles.

Not only do composable networking methodologies make the migration to 5G possible; they also simplify the introduction of revolutionary technologies like content-centric networking. 5G is the nuclear event, but CCN ensures even the cockroaches don’t survive and we get to build everything from scratch, without having to keep making concessions in the area of interworking: a fresh start--free of pests. So, the curtain may well be slowly closing on TCP/IP, but it’s the potential composability of 5G that will make the migration manageable.


4. Obviously the TCP endpoints implement a finite state machine, but the routers are blissfully unaware.
6. In all fairness, that’s not very long at my age, but standards hark back to around 1986, the same year I got into this business, and we are not exactly awash in multicast.
7. MEC was formerly known as mobile edge computing but was renamed to include fixed line. Hah--like anyone still cares about that.