The central role of the server in open networking

Open networking is a hot topic these days. When we read about open networking products and initiatives, the emphasis is on network switches more often than not. But server-based networking has also proceeded along an increasingly open path, and in many ways it set the stage for the opening of switch technology.

Network switches like top-of-rack (TOR) switches have traditionally been closed – they come from specific vendors with proprietary software. Networking in commercial off-the-shelf (COTS) servers, by contrast, has been open for several years, thanks to the proliferation of Linux server operating systems (OSs) and networking technologies like Open vSwitch (OVS). The networking industry wants the switch world to follow the servers’ successful path; hence the birth and popularity of the term “open networking.”

Open switch evolution

Switches have traditionally been closed – the network operating systems and protocols that run on them have been proprietary, could not be disaggregated from the hardware and were not open source. At first, switches were fully closed: the switch ASIC, the software and the switch box all came from a single vendor and were proprietary. Then switches were disaggregated a bit when switch vendors adopted ASICs from merchant silicon vendors like Broadcom. Next came OpenFlow and OpenFlow-based SDN controllers like Floodlight, which proposed that the switch control-plane protocols be removed from the switch and placed in an open source controller. This, in some ways, disaggregated the OS from the switch box.

Subsequently, switch operating systems like Cumulus Linux came into the market. These are disaggregated because they can install and run on merchant switch ASIC-based switch boxes from multiple vendors like Quanta and Dell. But such disaggregated switch OSes are not necessarily open source.

More recently, open source switch operating systems like SONiC and Open Network Linux have been in the news. The open source controller ecosystem has further evolved as well, focusing on feature completeness and carrier-grade reliability (e.g. OpenDaylight and ONOS).

All in all, the significant action and news in open networking have related to switches, helping the industry manage the switch supply chain more effectively and deploy more efficiently, similar to the COTS server model.

Figure 1: Switch disaggregation follows server model

Open networking on servers

What seems to get overlooked in these discussions about open networking is the all-important precursor to this movement – open networking on servers. Most important is how open networking on servers (or server-based open networking) has evolved and enabled open networking on switches.

Over the last several years, TOR switches have become simpler because data centre traffic patterns have changed and networking infrastructure efficiency requirements have increased. When using leaf (TOR) and spine switches, the imperative has shifted to moving east-west traffic most efficiently, which requires more bandwidth, more ports and lower latency. As a result, the feature requirements in hardware and software in leaf and spine switches have been reduced to a simpler set. This has made open networking in switches easier to implement and deploy.

However, the smarts of networking did not disappear – they simply moved to the server, where they are implemented using a virtual switch – preferably an open one such as OVS – and other Linux networking features like iptables. Many new networking features related to network security and load balancing have been added to OVS.
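As a sketch of what these server-side smarts look like in practice, the commands below create an OVS bridge, attach an uplink and a VM-facing port, and apply a simple security rule both in OVS and via iptables. The bridge, interface and port names (br0, eth1, vnet0) are placeholders for illustration, not anything prescribed by a particular deployment:

```shell
# Create an OVS bridge and attach a physical uplink plus a VM tap port
# (br0, eth1 and vnet0 are placeholder names for this sketch)
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth1
ovs-vsctl add-port br0 vnet0

# Security smarts in the virtual switch: drop inbound telnet
# to the VMs with an OpenFlow rule
ovs-ofctl add-flow br0 "priority=100,tcp,tp_dst=23,actions=drop"

# A complementary Linux networking feature: iptables filtering on the host
iptables -A FORWARD -p tcp --dport 23 -j DROP
```

This kind of per-host, software-defined policy is exactly the flexibility that would be hard to achieve inside a closed TOR switch.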

OpenStack, an open source and centralized cloud orchestration tool, has rapidly come to prominence, and more than 60% of OpenStack networking deployments today use OVS (with OpenStack Neutron). Server-based open networking has evolved relatively quietly compared with open networking in switches, but it has made major contributions to deployment efficiency and flexibility.

Today, in many high-growth cloud, SDN and NFV applications, server-based open networking is running into server sprawl and related TCO challenges. As network bandwidth increases and the number of VMs per server proliferates, OVS processing consumes an increasingly large share of CPU cycles, limiting the cycles available for applications and VMs.

Data centre operators cannot economically scale their server-based networking using traditional software-based virtual switches. Implementing server-based networking in x86 architectures and software is thus a double whammy: it increases costs because too many CPU cores are consumed, and it lowers performance because applications are starved of resources.

Offloading network processing to networking hardware is an option that has worked well in the past. However, software-defined and open source networking is evolving at a rapid pace, and that innovation stops the moment data centre operators turn to inflexible networking hardware for performance and scale.

Figure 2: Networking smarts moving to servers

SmartNICs: The programmable option

The solution to this challenge is to offload OVS processing to a SmartNIC. A SmartNIC handles I/O functions and incorporates a programmable network processor that can run OVS and other software. With a SmartNIC handling OVS processing, performance is boosted by up to 5X, and the data centre operator frees as many as 11 CPU cores from network-related processing, enabling greater VM scalability and lower costs. Because it is programmable, a SmartNIC can evolve rapidly with new features, preserving the pace of innovation.
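As one illustration of how such an offload can be switched on, OVS supports a hardware-offload configuration flag that pushes flow processing down to capable NICs. This is a config sketch, assuming a SmartNIC, driver and kernel that support OVS hardware offload; exact capabilities and service names vary by vendor and distribution:

```shell
# Tell OVS to offload flow processing to the NIC hardware where possible
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true

# Restart the OVS service for the setting to take effect
# (the service name may differ by distribution, e.g. openvswitch-switch)
systemctl restart openvswitch
```

Because the flows themselves are still defined in OVS software, operators keep the programmability of open networking while the SmartNIC carries the packet-processing load.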

Although server-based networking by itself can cause server sprawl, SmartNICs are making the case for efficient and flexible open networking from the COTS server side.

Figure 3: A SmartNIC offloads networking from servers
