How are faster networks advancing the next generation of data centres?
We are witnessing a significant uplift in the data transmission speeds offered by network connectivity providers. Service providers now promise speeds of hundreds of megabits to gigabits per second – enough, for instance, to stream Blu-ray-quality video without any buffering.
Such network speeds will open up many new technology possibilities. Businesses cannot afford to fall behind, as they must take into account new technologies being widely adopted across a competitive market landscape. The focus of businesses has therefore become clear and narrow: to constantly satisfy customer demands with attractive digital offerings and push ahead to gain competitive advantage.
To align with this trend, businesses have already started to optimise and redesign their data centres to handle the vast amount of data generated by a growing number of consumer devices. Such a transformation typically involves:
- Virtual network functions (VNFs), which replace dedicated server hardware with software packages performing specific network tasks – network function virtualisation (NFV)
- Software-defined networking (SDN), which gives central control of the network through a core framework that lets administrators define network operations and security policies
- Seamless orchestration across network components using frameworks such as ONAP, ETSI OSM, and Cloudify
- Workload (VM and container) and data centre management by implementing platforms such as OpenStack, Azure Stack, Amazon S3, CloudStack, and Kubernetes. Containers are being widely adopted thanks to features like faster instantiation, integration, scaling, security, and ease of management
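The SDN idea above – one central framework from which admins define network and security policies – can be sketched in a few lines. Everything here (the `Controller` and `Switch` classes, the rule format) is illustrative, not the API of any real SDN framework:

```python
# Minimal sketch of SDN-style central control: the controller holds the
# desired policy set and pushes flow rules to every switch it manages.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_rules = []

    def install_rule(self, rule):
        self.flow_rules.append(rule)

class Controller:
    """Central point where admins define network and security policies."""
    def __init__(self):
        self.switches = []
        self.policies = []

    def register(self, switch):
        self.switches.append(switch)
        for policy in self.policies:        # bring a new switch up to date
            switch.install_rule(policy)

    def define_policy(self, action, match):
        rule = {"action": action, "match": match}
        self.policies.append(rule)
        for switch in self.switches:        # one change, fleet-wide effect
            switch.install_rule(rule)

ctrl = Controller()
leaf1, leaf2 = Switch("leaf1"), Switch("leaf2")
ctrl.register(leaf1)
ctrl.register(leaf2)
ctrl.define_policy("deny", {"dst_port": 23})   # e.g. block telnet everywhere
print(leaf1.flow_rules == leaf2.flow_rules)    # True: one policy, all switches
```

The point of the sketch is the shape of the control plane: policy lives in one place, and the devices only receive derived rules.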
The next thing to disrupt the data centre is the adoption of edge architecture. Edge computing brings a mini data centre closer to where data is generated – by smartphones, industrial instruments, and other IoT devices. This adds more endpoints before data is gathered by the central data centre, but the advantage is that most computing is done at the edge, which reduces the load on network transmission resources. In addition, hyperconvergence can be used at edge nodes to simplify the mini data centre they require.
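The load reduction described above comes from aggregating at the edge and shipping only summaries upstream. A minimal sketch of the idea, with illustrative names and toy sensor data:

```python
# Instead of sending every raw sensor reading to the central data centre,
# an edge node reduces them locally and forwards only a compact summary.

def edge_aggregate(readings):
    """Do the heavy computing at the edge: reduce raw readings to a summary."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

raw = [21.0, 21.4, 22.1, 21.8, 35.9, 21.2]   # e.g. temperature samples
summary = edge_aggregate(raw)

# What crosses the network: four numbers instead of len(raw) readings.
print(summary["count"], summary["mean"])      # prints: 6 23.9
```

The same pattern scales from six readings to millions: the upstream traffic stays a fixed-size summary regardless of how much the edge node ingests.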
Mobile edge computing (MEC), a core project maintained by ETSI, has emerged as the edge computing model for telecom operators to follow. ETSI continues to work on innovations that improve the delivery of core network functionalities using MEC, while guiding vendors and service providers.
Aside from edge computing, network slicing is a new architecture introduced in 5G that will influence how data centres are designed for particular premises and dedicated to specific use cases such as industrial IoT, transportation, and sports stadia.
Data centre performance for high-speed networks
In this age of transformation, large amounts of data will move between devices and the data centre, as well as between data centres. Because new use cases demand low latency and high bandwidth, a higher level of performance must be obtained from the data centre. Such performance cannot be achieved with legacy techniques or simply by adding more capacity to data centres.
With the ‘data tsunami’ of recent years, data centre technology vendors have come up with new inventions, and communities have formed to address the performance issues raised by different types of workloads. One technique used heavily in new-age data centres is offloading some CPU tasks to the switches and routers that interconnect networks and servers. Take the network interface card (NIC): used to connect servers to the network components of the data centre, it has evolved into the SmartNIC, offloading processing tasks that the system CPU would normally handle. SmartNICs can perform network-intensive functions such as encryption/decryption, firewalling, and TCP/IP and HTTP processing.
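The offload decision itself is simple to picture: a dispatcher routes network-intensive work to the NIC and keeps everything else on the host CPU. The sketch below simulates this in software (a real SmartNIC does the work in NIC hardware or firmware), and the XOR "encryption" is a deliberate toy stand-in, not real cryptography:

```python
# Illustrative offload dispatcher: network-intensive tasks go to the
# (simulated) SmartNIC handler; everything else stays on the host CPU.

NIC_OFFLOADABLE = {"encrypt", "decrypt", "checksum"}

def smartnic_handle(task, payload):
    """Stand-in for work done on the NIC instead of the system CPU."""
    if task in ("encrypt", "decrypt"):
        return bytes(b ^ 0x5A for b in payload)   # toy cipher: XOR round-trips
    if task == "checksum":
        return sum(payload) & 0xFFFF

def cpu_handle(task, payload):
    return ("cpu", task, payload)                 # application logic path

def dispatch(task, payload):
    if task in NIC_OFFLOADABLE:
        return smartnic_handle(task, payload)     # offloaded path
    return cpu_handle(task, payload)              # stays on the host CPU

ciphertext = dispatch("encrypt", b"hello")
print(dispatch("decrypt", ciphertext) == b"hello")   # True: XOR round-trips
```

The CPU saving in a real system comes from the fact that `smartnic_handle` would run on the NIC's own processors, freeing host cycles for applications.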
Analyst firm Futuriom conducted a Data Centre Network Efficiency survey of IT professionals on their perceptions and views of data centres and networks. Apart from virtualising network resources and workloads, SmartNIC usage and process-offload techniques emerged as the top interest among IT professionals for efficient data processing on high-speed networks. This shows how much businesses are relying on smart techniques that save costs while delivering notable data centre performance improvements for faster networks.
Workload accelerators such as GPUs, FPGAs, and SmartNICs are widely used in current enterprise and hyperscale data centres to improve data-processing performance. These accelerators work alongside CPUs to process data faster, and they require very low latency when transmitting data back and forth to the CPU.
Most recently, to address the high-speed, low-latency requirements between workload accelerators and CPUs, Intel – along with companies including Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, and Huawei – has introduced an interconnect technology called Compute Express Link (CXL), which aims to improve performance and remove bottlenecks in computation-intensive workloads for CPUs and purpose-built accelerators. CXL focuses on creating a high-speed, low-latency interconnect between the CPU and workload accelerators, while maintaining memory coherency between the CPU memory space and memory on attached devices. This allows resource sharing for higher performance, reduced software-stack complexity, and lower overall system cost.
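The coherency benefit is easiest to see by contrast with copying. The sketch below is only an analogy – Python's `memoryview` gives a zero-copy window onto a buffer, loosely mirroring how a CXL-attached device can operate on memory the CPU also sees, without shuttling copies back and forth. It is not how CXL is actually programmed:

```python
# Analogy for CXL-style memory coherency: the "accelerator" works on the
# same buffer the "CPU" owns, so its writes are visible without a copy-back.

cpu_memory = bytearray(8)            # host-owned buffer, initially zeros
shared = memoryview(cpu_memory)      # accelerator's zero-copy view of it

def accelerator_fill(view, value):
    for i in range(len(view)):       # "device" writes in place
        view[i] = value

accelerator_fill(shared, 7)
print(cpu_memory == bytearray([7] * 8))   # True: host sees device writes
```

In a copy-based model, the device would work on its own buffer and the result would have to be transferred back; coherent shared memory removes that round trip, which is the performance and complexity win CXL targets.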
NVMe is another interface, introduced by the NVM Express community: a storage protocol used to accelerate access to SSDs in a server. NVMe frees CPU cycles for applications and handles enormous workloads with a smaller infrastructure footprint. It has emerged as a key storage technology with great impact on businesses dealing with vast amounts of fast data, particularly data generated by real-time analytics and emerging applications.
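One reason NVMe handles enormous workloads well is its queue model: many deep submission/completion queue pairs, so cores can issue I/O in parallel rather than contending for a single queue. A pure-software illustration of that idea (per-core queues; this is not an NVMe driver, and the request tuples are made up):

```python
# Sketch of NVMe's parallel-queue idea: each core submits I/O to its own
# queue, avoiding the single shared queue older protocols serialised on.

from collections import deque

NUM_QUEUES = 4                        # NVMe allows up to 64K queue pairs

queues = [deque() for _ in range(NUM_QUEUES)]

def submit(core_id, lba, length):
    """Each core submits to its own queue - no shared lock needed."""
    queues[core_id % NUM_QUEUES].append(("read", lba, length))

for core in range(8):                 # 8 cores issuing one read each
    submit(core, lba=core * 8, length=8)

print([len(q) for q in queues])       # prints: [2, 2, 2, 2]
```

The even spread is the point: no queue becomes a bottleneck, so throughput scales with cores and with the SSD's internal parallelism.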
Automation and AI
Agile 5G networks will drive the growth of edge compute nodes in the network architecture to process data closer to endpoints. These edge nodes, or mini data centres, will sync with a central data centre as well as interconnect with each other.
For operators, manually setting up many edge nodes will be a daunting task. Edge nodes will regularly need initial deployment, configuration, software maintenance, and upgrades. With network slicing, there may also be a need to install or update VNFs for particular tasks on devices within a slice. Doing all of this manually is not feasible. This is where automation comes into the picture: operators need a central dashboard at the data centre from which to design and deploy configuration to edge nodes.
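The central-dashboard pattern usually boils down to declaring one desired configuration and reconciling every node against it. A minimal sketch, with invented node names and config keys:

```python
# Centrally driven edge automation: one desired configuration is declared
# at the central data centre and applied to every edge node idempotently.

desired_config = {"vnf_firewall": "2.1", "telemetry": "enabled"}

edge_nodes = {
    "edge-site-a": {"vnf_firewall": "1.9"},                        # stale VNF
    "edge-site-b": {"vnf_firewall": "2.1", "telemetry": "enabled"},  # in sync
}

def reconcile(node_state, desired):
    """Return only the changes needed to bring one node to the desired state."""
    return {k: v for k, v in desired.items() if node_state.get(k) != v}

for name, state in edge_nodes.items():
    changes = reconcile(state, desired_config)
    state.update(changes)             # the "push" from the central dashboard
    print(name, changes)              # in-sync nodes get an empty change set
```

Because `reconcile` computes a diff rather than reapplying everything, the same loop is safe to run repeatedly – the idempotence that makes fleet-wide automation practical.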
Technology businesses are already demonstrating or implementing AI and machine learning at the application level to enable auto-responsiveness – for instance, chatbots on a website. Much of this AI is applied to data lakes, generating insights from self-learning AI-based systems. The data centre will require these same kinds of autonomous capabilities.
AI systems will be used to monitor server operations – tracking activity in order to self-scale on sudden demand for compute or storage capacity, self-heal after breakdowns, and run end-to-end testing of operations. Tech businesses have already started offering solutions for each of these use cases; one example is the joint AI-based integrated infrastructure offering from Dell EMC Isilon and NVIDIA DGX-1 for self-scaling at the data centre level.
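At its core, self-scaling is a control loop over a utilisation metric. The sketch below shows only that loop shape with simple fixed thresholds – a production system would replace them with learned forecasts – and all thresholds and node counts are illustrative:

```python
# Minimal self-scaling control loop: add or remove capacity when a
# utilisation metric leaves the target band.

def autoscale(current_nodes, cpu_util, low=0.30, high=0.75,
              min_nodes=1, max_nodes=16):
    """Return the new node count for one control-loop tick."""
    if cpu_util > high and current_nodes < max_nodes:
        return current_nodes + 1      # scale out under pressure
    if cpu_util < low and current_nodes > min_nodes:
        return current_nodes - 1      # scale in when idle
    return current_nodes              # within band: hold steady

nodes = 4
for util in [0.82, 0.88, 0.60, 0.20]:   # simulated demand spike, then lull
    nodes = autoscale(nodes, util)
print(nodes)                             # 4 -> 5 -> 6 -> 6 -> 5, prints 5
```

Self-healing follows the same pattern with a health signal instead of a utilisation one: observe, compare to the desired state, act.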
New architectures and technologies are being introduced along with the revolution in the network. Much of this infrastructure has become software-centric in response to the growing number of devices and higher bandwidth. Providing low latency – down to 10 microseconds – is a new challenge for operators seeking to enable new technologies in the market. For this to happen, data centres need to complement the higher-bandwidth network; they will form the base on which further digital innovation occurs.
Editor’s note: Download the eBook ‘5G Architecture: Convergence of NFV & SDN Networking Technologies’ to learn more about the technologies behind 5G and the status of adoption, along with key insights into the market