COTS SBC vs Proprietary: More Than Just Hearsay

Most SBCs today are based on purpose-built proprietary hardware.  Supporters of this approach argue that it’s just not possible to implement some of the key functions that are essential to SBCs at any reasonable scale without purpose-built hardware.  They identify four functions in particular that they say you can’t do on COTS hardware: discarding signaling packets from malicious sources, for example in a distributed denial-of-service attack; performing media relay with rate policing and QoS measurement; handling cryptographic functions used to secure signaling and media; and performing transcoding.

Five years ago, they would have been right.  COTS hardware simply didn’t offer the processing power or packet-handling throughput to build large-scale SBCs.  But today they are wrong.  And as Moore’s Law rolls inexorably on, they are becoming more wrong with every hardware generation.

So what’s happened over the last five years to change things?  Firstly, general-purpose CPUs have far more cores than they used to.  Eight-core CPUs are commonplace now, and 12-core CPUs are expected to ship in 2013.  Lots of cores means lots of processing power, but that only translates into lots of SBC performance if the software is properly architected to operate in a massively parallel environment.  With general-purpose CPUs you also get far more processing power per dollar than with proprietary hardware – and that can translate into a lot more headroom, for example to handle unexpected peaks in signaling load.
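
As one example of what “properly architected” means in practice, here is a minimal sketch (in C; the names and constants are illustrative, not taken from any particular product) of a common approach: hash each SIP dialog’s Call-ID to pick a worker, so every message for a given dialog is handled on the same core and per-dialog state never needs cross-core locking.

    /* Illustrative sketch only: shard SIP dialogs across worker cores by
     * hashing the Call-ID.  NUM_WORKERS and worker_for_dialog() are
     * hypothetical names used purely for illustration. */
    #include <stdint.h>

    #define NUM_WORKERS 8   /* e.g. one worker thread pinned per CPU core */

    /* FNV-1a hash of the Call-ID header value */
    static uint32_t call_id_hash(const char *call_id)
    {
        uint32_t h = 2166136261u;
        for (; *call_id; call_id++) {
            h ^= (uint8_t)*call_id;
            h *= 16777619u;
        }
        return h;
    }

    /* All messages for the same dialog land on the same worker, so dialog
     * state can live in per-core memory with no locks or cache bouncing. */
    static unsigned worker_for_dialog(const char *call_id)
    {
        return call_id_hash(call_id) % NUM_WORKERS;
    }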

Secondly, general-purpose CPUs have lots of on-chip cache memory – up to 20MB in the current generation.  One of the challenges of building SBCs is that you need to store flow descriptors for tens of thousands of RTP media flows, and look up the relevant flow descriptor for each and every incoming RTP packet – perhaps as often as 4 or 5 million times a second.  SBCs based on purpose-built hardware typically use a TCAM (ternary content-addressable memory) chip to perform this flow descriptor lookup.  But with 20MB of on-chip cache memory, general-purpose CPUs have both the capacity and the speed to perform flow descriptor lookups at the required rate.  It’s worth pointing out that general-purpose operating systems don’t provide direct control over what’s kept in cache memory and what’s not, so the SBC software has to be pretty smart to make sure that the flow descriptors remain in cache at all times.
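
To make that concrete, here is a hedged sketch of the kind of cache-friendly flow descriptor table this implies (the struct fields and sizing are assumptions for illustration, not any vendor’s actual design): a fixed-size, open-addressed hash table packed into one contiguous array, so a lookup costs a hash plus a few sequential probes, with no pointer chasing, and the hot entries naturally stay resident in the last-level cache.

    /* Illustrative sketch only: a fixed-size, open-addressed flow table.
     * At roughly 28 bytes per entry, 65,536 slots occupy about 1.8MB, so the
     * whole table fits comfortably within a 20MB last-level cache. */
    #include <stdint.h>
    #include <stddef.h>

    struct flow_key {
        uint32_t src_ip, dst_ip;
        uint16_t src_port, dst_port;
    };

    struct flow_desc {
        struct flow_key key;
        uint8_t  in_use;
        uint8_t  dscp_out;        /* QoS marking to apply on egress          */
        uint32_t peak_rate_bps;   /* rate-policing limit for this flow       */
        uint32_t relay_ip;        /* where to relay this media flow          */
        uint16_t relay_port;
    };

    #define TABLE_SIZE (1u << 16)             /* power of two for cheap masking */
    static struct flow_desc flows[TABLE_SIZE];

    static uint32_t key_hash(const struct flow_key *k)
    {
        /* simple mixing hash; a production table would use something stronger */
        uint32_t h = k->src_ip * 2654435761u;
        h ^= k->dst_ip + (uint32_t)k->src_port * 40503u + k->dst_port;
        return h;
    }

    /* Linear-probe lookup: each probe touches one small, contiguous entry. */
    static struct flow_desc *flow_lookup(const struct flow_key *k)
    {
        uint32_t idx = key_hash(k) & (TABLE_SIZE - 1);
        for (unsigned probes = 0; probes < TABLE_SIZE; probes++) {
            struct flow_desc *d = &flows[idx];
            if (!d->in_use)
                return NULL;   /* empty slot reached: flow not present */
            if (d->key.src_ip == k->src_ip && d->key.dst_ip == k->dst_ip &&
                d->key.src_port == k->src_port && d->key.dst_port == k->dst_port)
                return d;
            idx = (idx + 1) & (TABLE_SIZE - 1);
        }
        return NULL;
    }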

Thirdly, general-purpose CPUs have built-in hardware acceleration for cryptographic functions.  The encryption algorithms used to secure signaling and media in SIP-based VoIP networks belong to the same family as those used to secure Web sessions.  Large-scale use of HTTPS on the Web has driven the designers of general-purpose CPUs to add special instructions – such as Intel’s AES-NI – specifically to speed up processing of these encryption algorithms.  Those same instructions can be harnessed by SBC software to accelerate SIP over TLS, Secure RTP, IMS AKA authentication and so on.
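
As an illustration of how little SBC-specific code this requires, here is a hedged sketch using OpenSSL’s EVP interface to encrypt an RTP payload with AES-128-GCM, one of the cipher suites defined for Secure RTP.  OpenSSL detects the CPU’s AES instructions at runtime and uses them automatically.  The function name and parameters are illustrative, and real SRTP also involves key derivation, IV construction rules and replay protection, none of which is shown.

    /* Sketch of SRTP-style authenticated encryption with AES-128-GCM via
     * OpenSSL's EVP API.  The RTP header is authenticated but left in the
     * clear (AAD); the payload is encrypted.  Compile with -lcrypto. */
    #include <openssl/evp.h>

    int encrypt_rtp_payload(const unsigned char key[16],
                            const unsigned char iv[12],
                            const unsigned char *rtp_hdr, int hdr_len,
                            const unsigned char *payload, int payload_len,
                            unsigned char *out, unsigned char tag[16])
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        int len = 0, clen = -1;

        if (!ctx)
            return -1;
        if (EVP_EncryptInit_ex(ctx, EVP_aes_128_gcm(), NULL, key, iv) == 1 &&
            EVP_EncryptUpdate(ctx, NULL, &len, rtp_hdr, hdr_len) == 1 &&  /* AAD */
            EVP_EncryptUpdate(ctx, out, &len, payload, payload_len) == 1) {
            clen = len;
            if (EVP_EncryptFinal_ex(ctx, out + clen, &len) == 1 &&
                EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag) == 1)
                clen += len;
            else
                clen = -1;
        }
        EVP_CIPHER_CTX_free(ctx);
        return clen;   /* ciphertext length, or -1 on error */
    }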

And finally, general-purpose CPUs come with software toolkits designed to massively improve the speed and efficiency of packet processing.  For example, Intel has claimed that with its Data Plane Development Kit (DPDK), a standard COTS server today can forward 80 million packets per second.  That’s an order of magnitude more than a typical large-scale SBC needs to handle.  Again, it takes considerable software skill to take advantage of this kind of throughput.  For example, when implementing the function that discards incoming packets from blacklisted IP addresses, it’s vital to ensure that packets don’t percolate up the operating system’s entire IP stack, because that would eat up far too many CPU cycles.  With the right kernel know-how, packets can be selectively discarded in software at extremely high rates with minimal consumption of CPU resources.
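
The sketch below shows the shape of that early-discard path using a DPDK-style poll-mode receive loop.  It is an assumption-laden illustration rather than a reference implementation: the API names follow recent DPDK releases (rte_eth_rx_burst, struct rte_ipv4_hdr), EAL and port initialization are assumed to have happened elsewhere, the traffic is assumed to be plain IPv4 over Ethernet, and the linear-scan blacklist stands in for the hash table or bitmap a real SBC would use.  Because the NIC is polled directly from user space, packets from blacklisted sources are freed after nothing more than a header inspection and never reach the IP stack or the SIP application above it.

    /* Illustrative early-discard loop in the style of a DPDK poll-mode driver. */
    #include <stdint.h>
    #include <stddef.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_ether.h>
    #include <rte_ip.h>

    #define BURST         32
    #define MAX_BLACKLIST 1024

    /* Hypothetical blacklist, populated elsewhere (addresses in network byte
     * order).  A linear scan keeps the sketch self-contained; a real SBC
     * would use a hash table or bitmap sized for thousands of entries. */
    static uint32_t blacklist[MAX_BLACKLIST];
    static size_t   blacklist_len;

    static int src_is_blacklisted(uint32_t src_addr)
    {
        for (size_t i = 0; i < blacklist_len; i++)
            if (blacklist[i] == src_addr)
                return 1;
        return 0;
    }

    static void rx_loop(uint16_t port, uint16_t queue)
    {
        struct rte_mbuf *bufs[BURST];

        for (;;) {
            uint16_t n = rte_eth_rx_burst(port, queue, bufs, BURST);
            for (uint16_t i = 0; i < n; i++) {
                struct rte_ether_hdr *eth =
                    rte_pktmbuf_mtod(bufs[i], struct rte_ether_hdr *);
                struct rte_ipv4_hdr *ip = (struct rte_ipv4_hdr *)(eth + 1);

                if (src_is_blacklisted(ip->src_addr)) {
                    rte_pktmbuf_free(bufs[i]);   /* cheap, early discard */
                    continue;
                }
                /* hand legitimate packets on to SIP/RTP processing here */
            }
        }
    }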

Transcoding is the one function where purpose-built hardware can do a significantly better job than general-purpose CPUs.  That’s not to say you can’t do transcoding on COTS hardware; it’s just that this particular function can be implemented at considerably lower hardware cost using digital signal processors (DSPs).  For applications that need high-capacity transcoding of compute-intensive codecs like AMR-WB, DSP-based platforms will undoubtedly cost less from a pure hardware standpoint.  But once you factor in SBC software licensing costs, the difference ends up being pretty marginal.  It’s also worth remembering that an investment in DSPs in an SBC can only ever be used for transcoding, whereas general-purpose CPUs used for transcoding (especially in a virtualized environment) can be redeployed flexibly for other tasks as demand for transcoding in the network varies over time.

Network operators are well aware of the potential of COTS hardware to implement the essential functions on which their networks are built.  Over the last couple of years, some leading network operators have been carrying out their own experiments, for example building routers in pure software on COTS servers.  The results of these experiments have been so promising that 13 of the largest network operators in the world came together in October 2012 and launched an initiative called “Network Functions Virtualization” (NFV).  This envisages all kinds of network functions, including SBCs, being implemented as software appliances running in a virtualized environment on COTS servers connected together by generic Ethernet switches.  Network operators cite a wide range of compelling benefits arising from NFV, including reduced Capex and Opex, more rapid service innovation and deployment, lower barriers to entry for innovative new vendors of network functions, and elastic network scalability.  The NFV initiative is now being progressed by an ETSI working group with very broad support from both network operators and network equipment vendors.

Just five days before the NFV announcement, one well-known SBC vendor announced a major new purpose-built SBC hardware platform.  If you’ve always built your products on purpose-built hardware, it’s perhaps hard to admit that there’s another way.  But the NFV initiative has made it very clear indeed where the future lies.  And if an SBC vendor tells you that some essential function of an SBC just can’t be done with software on COTS hardware, then here’s what they’re really telling you: “we don’t know how”.