Pixie Dust and Unicorn Stuff - The Magic Behind SDN

PCE: The magic behind tomorrow’s software defined networks

Although Metaswitch has been supplying a path computation element (PCE) function to network equipment providers for quite some time, I first took an interest when it was incorporated into our own little SDN controller. While that interest grew to something akin to Internet stalking, the lawsuits have now been settled and the daily meditation is working, so I’m better now. With that behind me, I feel I can share my admiration for this little-known SDN function in a calm and rational manner. Well, calm and rational-ish.

Why the hysteria? Well, proving it was more than mythology, I began to see the path computation element as an integral component in many evolving architectures from Flex-grid optical transport to application-based network operations (ABNO); from data center interconnect to the Internet of Things (IoT). Oh, and lest I forget, there’s also a play for the PCE in NFV service function chaining (SFC).

Like most fairy tales, the PCE had humble origins. One of its initial drivers was to decouple the increasing complexities surrounding MPLS and GMPLS traffic engineering. A Constrained Shortest Path First (CSPF) process, spinning cycles independently in each network switch, was no longer cutting it. Increasingly complex infrastructures demanded holistic topology and state information, plus progressively more powerful algorithms, to make more intelligent traffic engineering decisions than those afforded by lightly pruned (constrained) Dijkstra-based link state (shortest path first) routing protocols (i.e., OSPF and IS-IS). Such protocols also suffer from bin-packing [Yay! Bin packing!] issues: they tend to favor, and therefore overload, specific links rather than striking a balance between individual link utilization and the utilization of the network as a whole.
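For the uninitiated, "lightly pruned Dijkstra" is less scary than it sounds. Here's a minimal CSPF sketch in Python -- entirely illustrative, mind you; the topology format and the single bandwidth constraint are my own invention, not anything a real router runs:

```python
import heapq

def cspf(links, src, dst, required_bw):
    """Constrained Shortest Path First: prune links that fail the
    constraint, then run plain Dijkstra on whatever is left."""
    # links: {(a, b): {"cost": int, "avail_bw": int}}, bidirectional
    adj = {}
    for (a, b), attrs in links.items():
        if attrs["avail_bw"] >= required_bw:   # the "pruning" step
            adj.setdefault(a, []).append((b, attrs["cost"]))
            adj.setdefault(b, []).append((a, attrs["cost"]))
    # standard Dijkstra over the pruned topology
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, link_cost in adj.get(node, []):
            if nbr not in seen:
                heapq.heappush(heap, (cost + link_cost, nbr, path + [nbr]))
    return None  # no path satisfies the constraint
```

The bin-packing complaint falls straight out of this: each node prunes and computes for its own demand in isolation, so everyone piles onto the same cheap links.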

In a world that was looking toward forwarding and control plane separation -- even in the mid 2000s -- integrating more processor-intensive functionality into each individual network node to cope with the additional metrics required to intelligently forward traffic was not only senseless from a cost perspective but also far too restrictive for network operators looking to take control of their networks from the oppressive equipment vendor ogres. A distinct PCE enabled carriers to apply path computation algorithms with their own unique policies developed by their preferred supplier any time they wanted and host them within cost-effective general compute clouds, rather than vendor-specific central office tin.

PCE-goals-2.png

Network operators want to reduce costs and improve efficiencies

Note: I will ignore the fact that the PCE could be and indeed was integrated into a switch in the early days… primarily because I’m somehow able to get these posts through without fact checking of any kind, which you have no doubt already gathered.

And so it came to pass, in August 2006, nigh half a decade before software defined networking was even born -- let alone reaching the industry fever pitch it is today -- the PCE Architecture was quietly hatched as an Informational RFC (4655). In the years that followed, the architecture was enhanced to address multi-layer networking (RFC 5623 | September 2009) and then the need for hierarchical implementations across administrative domains (RFC 6805 | November 2012).

Architecturally, the PCE described in RFC 4655 is very simple, comprising just three functional blocks: (1) the computation component itself; (2) a traffic engineering database (TED) on which the PCE acts, populated by whatever means, but typically a routing protocol foundation; (3) the signaling engine. RFC 5623 adds a virtual network topology manager that essentially maps and tracks the distinct network layers (i.e. optical / GMPLS and layer 2.5 MPLS).

The signaling mechanism in question here is the PCE Communication Protocol (PCEP), introduced as a standards track RFC (5440 | March 2009), which continues to evolve today. In this world of separate control (PCE) and forwarding (switch) planes, PCEP is the primary communications mechanism between the two. The PCE client on the switch is, not surprisingly, called a path computation client (PCC). PCEP is also employed between PCE hierarchies (RFC 6805), where there is both PCE and PCC functionality.
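To see the shape of that PCC-to-PCE conversation, here's a toy request/reply exchange. Real PCEP (RFC 5440) is a binary protocol carried over TCP, with far richer objects than this; the class and field names below are simplified stand-ins of my own devising, not the actual wire format:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class PCReq:
    """Toy stand-in for a PCEP path computation request message."""
    request_id: int
    src: str
    dst: str
    required_bw: int = 0  # one sample constraint

@dataclass
class PCRep:
    """Toy stand-in for a PCEP path computation reply message."""
    request_id: int
    path: Optional[List[str]]  # the computed route, or None (no path)

class ToyPCE:
    """A stateless PCE: answer each request from its TED, then forget it."""
    def __init__(self, compute_fn: Callable):
        self.compute_fn = compute_fn  # e.g. a CSPF run over the TED

    def handle(self, req: PCReq) -> PCRep:
        path = self.compute_fn(req.src, req.dst, req.required_bw)
        return PCRep(req.request_id, path)
```

The PCC builds a PCReq, ships it to the PCE, and signals whatever route comes back in the PCRep -- which is really all the architecture asks of the poor switch.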

PCE-archictecture.png

A path computation element architecture (RFC 4655)

Interestingly, it’s only recently that measured results of a simple, single-domain, centralized implementation of CSPF using a PCE, versus a distributed (per-switch) implementation of CSPF, have been publicized. Even more interesting is the fact that the same case study from Cox Communications, first presented at SCTE 2014, has been referenced by more than one major router vendor. The MSO demonstrated that using a path computation element resulted in up to a 15 percent reduction in RSVP reserved bandwidth.

2014-cox-case-study-central-cspf-pce-vs-distributed-cspf.png

Cox Communications Case Study: Centralized (PCE) vs. Distributed (Online) CSPF computation

The advantages of PCE increase when multiple administrative domains are required to complete an end-to-end path, the rationale behind the hierarchical PCE described in the aforementioned RFC 6805. NTT Network Innovation Labs demonstrated this way back in 2011 and published the results in a research paper that described the simulated 1,000-node network and the dramatically reduced signaling times achieved when employing PCE (proposed method) over classic distributed computation (conventional method). This example also highlights the use of backward-recursive PCE-based computation, or BRPC (RFC 5441 | April 2009), which I mention not to show off but to note that it preserves confidentiality when the domains in question are managed by different service providers. OK -- so maybe I am showing off a little bit.*
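For the curious, the "backward-recursive" part of BRPC can be sketched in a few lines. This is a hand-wavy simplification of my own: the real procedure (RFC 5441) passes a Virtual Shortest Path Tree between the domain PCEs, whereas here each domain simply reports the cost of reaching the destination from each of its entry border nodes, and the upstream domain stitches that onto its own local paths. All node and domain names are invented:

```python
def brpc(domains, src, dst):
    """domains: ordered source-to-destination; each domain is a dict of
    (entry_node, exit_node) -> local path cost, where a domain's exit
    nodes are the next domain's entry (border) nodes."""
    costs = {dst: 0}  # cost from each border node onward to dst
    for domain in reversed(domains):  # start in the destination domain
        new_costs = {}
        for (entry, exit_), local_cost in domain.items():
            if exit_ in costs:
                total = local_cost + costs[exit_]
                if total < new_costs.get(entry, float("inf")):
                    new_costs[entry] = total
        costs = new_costs  # hand the shrunken cost map upstream
    return costs.get(src)  # best end-to-end cost, or None if unreachable
```

Note that each domain only ever sees border-node costs, never the internals of its neighbor's topology -- that's the confidentiality angle (with the Path-Key caveat in the footnote below).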

ntt-h-pce.jpg

NTT Labs: The signaling efficiencies of H-PCE (proposed) vs. distributed (conventional) CSPF

Naturally, though, with the PCE function increasingly being integrated into the heart of a larger SDN controller, PCEP is not the only option. Paths can be configured by point configuration (i.e., node-by-node/hop-by-hop) using NETCONF, ForCES or OpenFlow. In-band control protocols like RSVP can also be employed, enabling new edge switches to signal traffic engineered paths across legacy core devices.

Like all emerging technologies, our lowly PCE advanced, over time, to meet new demands... or to deliver on its original promise. Originally stateless in nature (i.e., calculating, then forgetting paths), the PCE became stateful, keeping an up-to-date record not only of the paths it had computed and successfully reserved, but also of those under construction, thereby reducing contention problems, or “glare.” It’s the stateful PCE that makes segment routing (SPRING) possible for service function chaining (SFC) and therefore something of great interest for anyone into network functions virtualization (NFV).
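The stateless-versus-stateful distinction is easy to sketch. In this toy Python PCE (illustrative names and structures only, not the real stateful PCEP extensions), bandwidth held by established and still-pending paths alike is counted before admitting the next request -- which is precisely how the glare problem gets squeezed out:

```python
class StatefulPCE:
    """Toy stateful PCE: keeps an LSP database of computed paths,
    including those still being set up, so concurrent requests can't
    both grab the last of a link's bandwidth (i.e., "glare")."""

    def __init__(self, link_capacity):
        self.capacity = dict(link_capacity)  # link -> total bandwidth
        self.lsp_db = {}  # lsp_id -> (links, bw, state)

    def available(self, link):
        # Count bandwidth held by "up" AND "pending" LSPs alike.
        used = sum(bw for links, bw, state in self.lsp_db.values()
                   if link in links)
        return self.capacity[link] - used

    def reserve(self, lsp_id, links, bw):
        if all(self.available(l) >= bw for l in links):
            self.lsp_db[lsp_id] = (links, bw, "pending")
            return True
        return False  # a stateless PCE would have happily said yes

    def confirm(self, lsp_id):
        links, bw, _ = self.lsp_db[lsp_id]
        self.lsp_db[lsp_id] = (links, bw, "up")
```

A stateless PCE, having forgotten its first answer, would cheerfully hand out the same bandwidth twice and let the signaling layer sort out the collision.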

The PCE is also evolving from a passive device, waiting for explicit requests from the network, to an active device that can make recommendations to the network when more optimal paths for existing connections are found. Active and stateful PCEs are smart-asses who should, under no circumstances, be invited to your social gatherings or parties. They should, however, be part of your intelligent network infrastructure, which is exactly why they have been invited to play in carrier software defined network (SDN) controllers.

With much excitement, a PCE was included as a service function (together with a PCEP plug-in) in the OpenDaylight Helium release.** With this release, ODL features pretty much all the functionality of a PCE-centric multilayer SDN controller. A generic reference architecture for such an implementation is outlined within RFC 7491: “A PCE Architecture for Application-Based Network Operations,” or ABNO for short. This is a good time to recognize the RFC authors as originators of the metaphorical Unicorn, which I am blatantly plagiarizing for a cheap laugh in the title of this post.

Adoption of an ABNO philosophy affords multi-domain and multi-layer coordination, interworking and policy control, while taking into account the evolving nature of infrastructures as they move gracefully (i.e., without the help of a forklift) toward SDN.

generic-abno-architecture.png

A generic ABNO architecture per RFC 7491

The fundamental process of path establishment in an ABNO implementation is straightforward. The OSS requests a path, which is validated by the policy manager. The ABNO controller makes a request of the Unicorn... errr… PCE for a path. Once found, the provisioning manager configures either just an endpoint or both endpoints and intermediate nodes, depending on the protocol(s) employed. Closing the loop, the OSS is notified once provisioning is complete.
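Reduced to a toy pipeline, that closed loop looks something like this. Every component below is a stand-in callable of my own invention -- real ABNO components (RFC 7491) are full network elements, not lambdas:

```python
def abno_path_request(request, policy_ok, compute_path, provision, notify_oss):
    """Toy ABNO closed loop: policy check, PCE computation,
    provisioning, then notification back to the OSS."""
    if not policy_ok(request):                 # policy manager validates
        notify_oss(request, status="rejected")
        return
    path = compute_path(request)               # ABNO controller asks the PCE
    if path is None:
        notify_oss(request, status="no-path")
        return
    provision(path)                            # provisioning manager configures
    notify_oss(request, status="complete")     # close the loop to the OSS
```

The point of the loop is that the OSS always hears back -- success, policy rejection, or no-path -- rather than firing a request into the void.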

Now active and stateful, the PCE of the future could also become context-aware, adding and subsequently removing network bandwidth between specific carrier points-of-presence and application or Internet service provider domains based on historical network modeling that takes into account traffic or search profiles in real time or based on time of day and day of week. The local team playing an away game could well dynamically prompt the PCE to increase bandwidth and QoS profiles to streaming sports services. Conversely, as away fans stream out of a live game, searches for “restaurant” could result in an increase of bandwidth to review databases and even the restaurant point-of-sale support systems themselves. The guys then file back to their hotel rooms dotted around the host city and search for, or fire up, certain “on-demand video services,” which can again trigger a dynamic reconfiguration of network resources by the PCE. Nope -- you can’t un-imagine that. You’re welcome.

That might be blue-sky thinking -- and a little seedy -- but it provides me with a nice pivot from what has been predominantly a “what” discussion (i.e., managing connections to something) to a “where” discussion, in which we manage connections to a location. In its most granular context, we can call this the Internet of Things (IoT), and the mighty PCE has a play there as well.

In an ironic twist, IoT objects will employ time division multiple access (TDMA) techniques to form their sensor meshes. By amending the MAC portion of the IEEE 802.15.4 specification, IEEE 802.15.4e facilitates time-slotted channel hopping (TSCH), which essentially enables IoT endpoints to communicate using a unit of bandwidth (a cell) allocated on a specific schedule. In these low-power, lossy networks (LLNs), TSCH reduces power draw by eliminating the need for constrained nodes to continuously and idly “listen” for data. This TDM technique also provides predictable delay and strict QoS characteristics, critical for the many types of sensor meshes IoT will enable.

By implementing a queue-based algorithm, IoT infrastructures can employ a centralized path computation element to build and maintain the TSCH schedule, dynamically modifying it based on the changing demands of the nodes within the LLN sensor meshes it is serving. PCEP, being sufficiently lightweight, continues to be the signaling protocol used from the IoT nodes, although its current dependency on TCP, rather than the more nimble UDP employed by the Constrained Application Protocol (CoAP), perhaps warrants further discussion. By people who actually know what they are talking about, which naturally excludes me.
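To make that scheduling role concrete, here's a toy centralized TSCH scheduler. The slotframe length and channel count are arbitrary illustrative values, and a real scheduler would also avoid handing the same timeslot to links that share a node -- this sketch just shows the PCE doling out (timeslot, channel-offset) cells in proportion to queue demand:

```python
def build_tsch_schedule(demands, slots=101, channels=16):
    """demands: list of (link, cells_needed) sorted by priority.
    Returns {link: [(slot, channel_offset), ...]}; no cell is
    assigned twice within the slotframe."""
    schedule = {}
    # All cells in the slotframe, ordered slot-by-slot.
    free = [(s, c) for s in range(slots) for c in range(channels)]
    for link, needed in demands:
        # Hand out the next free cells; run dry gracefully if oversubscribed.
        schedule[link] = [free.pop(0) for _ in range(min(needed, len(free)))]
    return schedule
```

Each node wakes only for its own cells and sleeps through the rest of the slotframe, which is where the power savings over always-on listening come from.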

pce-in-iot.png

The IoT PCE responds to scheduling requests from inside the mesh plus external elements

So, being of relatively sound mind, I can attest that, far from being mythology, the path computation element is actually very real and can genuinely change the way a network operates in magical ways. Want to learn more? Check out:

 

* After posting this, our *real* PCE expert (and IETF PCE WG co-chair) Jon Hardwick informed me that you actually need to add a Path-Key [RFC 5520] to BRPC in order to preserve confidentiality, which was a slapping-down I thoroughly deserved, after that comment.

** The recent roll-up of ON.Lab's ONOS into the Linux Foundation also solidifies ODL's role in carrier SDN.