Reality Check: Virtualization Doesn’t Have to Be So Difficult

At the annual Great Telco Debate in London, attendees debated the motion that virtualization is a poisoned chalice. The event is known for stimulating meaningful discussion with provocative positions. Once all the arguments were heard, the motion was rejected. So virtualization may not be considered a poisoned chalice by the industry's great and good at the event, but the discussion at least revealed just how difficult it has been for telcos so far.


Metaswitch CTO Martin Taylor was there to lend his insight into the current state of virtualization. He presented the case that virtualization is a poisoned chalice only for traditional, big equipment vendors. From the big vendors' point of view, virtualization means that their long-time customers no longer want to buy their hardware, just the software, so they can run it on another vendor's hardware. And those customers will want to negotiate the price of that software down to an unsustainable level.

“For a legacy equipment vendor, virtualization is a world of hurt,” he said. “And it’s not terribly surprising that they’ve been slow-rolling and not really moving as quickly as they could.”

Most vendors have offered virtualized versions of their physical network functions, but they have merely ported the old software into a virtualized environment with minimal changes. As Taylor pointed out, this software often doesn't perform as well as the box it came from, because it has to run on industry-standard hardware.

“When telcos virtualize this way, they’ve found that it hasn’t worked out terribly well for them,” he said. “They haven’t been able to save any money, they haven’t been able to automate operations in any meaningful way, and so virtualization has started to get a bad name.”

Now, as everyone has learned, the business case for virtualization must be built on operational cost savings. Operations have to be automated to the hilt. But that’s not easily done on legacy equipment, even if it is virtualized. The software must be fundamentally re-architected so that it’s possible to automate operations fully.

And that’s where the imperative for cloud native software design comes into force. The network software needs to be built from scratch to run in cloud environments with all the cloud native characteristics of dynamic scalability and orchestration, as well as containers for cloud portability, Taylor explained. And that takes time.

Metaswitch has done it. Taylor said it took about five years to build a cloud native IP Multimedia Subsystem (IMS), called Clearwater, from the ground up. Cloud native Clearwater now powers Sprint's VoWiFi and VoLTE services.

But Taylor also noted that many telcos have made virtualization more difficult for themselves. “It’s because they’re engineers and they like to roll up their sleeves and do stuff,” he said. A prime example is all the enhancements and additions telcos have made to OpenStack, rather than just accepting the limitations of off-the-shelf technology and trusting that it will get better over time. “They’ve thrown a lot of money at it but they’ve ended up with something that’s just a mess,” he added.

Taylor had much more to say about the state of virtualization and how many telcos haven't gotten it right yet. In case you missed it, Taylor's presentation at the Great Telco Debate is well worth viewing. In the video from the event, his talk starts at around the 28:00 mark.