OpenStack and NVMe-over-Fabrics: Getting higher performance for network-connected SSDs
The evolution of the NVMe interface protocol has been a boon to SSD-based storage arrays. It enables SSDs (solid state drives) to deliver higher performance and lower latency for data access. The NVMe over Fabrics network protocol extends these benefits further, retaining NVMe's characteristics across a network fabric while the storage array is accessed remotely. Let's understand how.
While the NVMe protocol works well with storage arrays built from high-speed NAND and SSDs, latency appeared when NVMe-based storage arrays were accessed through shared storage or storage area networks (SAN). In a SAN, data must be transferred between the host (initiator) and the NVMe-enabled storage array (target) over Ethernet, RDMA technologies (iWARP/RoCE), or Fibre Channel. The latency was caused by the translation of SCSI commands into NVMe commands during data transport.
To address this bottleneck, NVM Express introduced the NVMe over Fabrics (NVMe-oF) protocol as a replacement for iSCSI as the storage networking protocol. With this, the benefits of NVMe are carried onto network fabrics in a SAN-like architecture, giving a complete end-to-end NVMe-based storage model that is highly efficient for modern workloads. NVMe-oF supports all available network fabric technologies, such as RDMA (RoCE, iWARP), Fibre Channel (FC-NVMe), InfiniBand, future fabrics, and the Intel Omni-Path architecture.
NVMe over Fabrics and OpenStack
As we know, OpenStack consists of a library of open source projects for the centralised management of data centre operations. OpenStack provides an ideal environment in which to implement an efficient NVMe-based storage model for high throughput. The proposed NVMe-oF solution for OpenStack uses the Nova and Cinder components: it consists of creating a Cinder NVMe-oF target driver and integrating it with OpenStack Nova.
OpenStack Cinder is the block storage service for OpenStack deployments, mainly used to provide persistent storage to cloud-based applications. It offers APIs that let users access storage resources without disclosing where the storage is located.
OpenStack Nova is the component within OpenStack that provides on-demand access to compute resources like virtual machines, containers, and bare metal services. In the NVMe-oF solution, Nova attaches NVMe volumes to VMs.
Support for NVMe-oF in OpenStack is available from the 'Rocky' release. The proposed solution requires RDMA NICs and supports the kernel initiator and kernel target.
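As a rough sketch of what this looks like in practice, a Cinder LVM backend can be pointed at the kernel NVMe-oF target instead of an iSCSI helper. The option names below reflect the NVMe-oF target support added around the Rocky release; treat the section name and addresses as placeholders and verify the exact options against your release's Cinder configuration reference:

```ini
[lvm-nvmet]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
# Use the kernel NVMe-oF target (nvmet) rather than an iSCSI helper
target_helper = nvmet
# RDMA transport; requires RDMA-capable NICs on target and initiator
target_protocol = nvmet_rdma
# 4420 is the standard NVMe-oF port
target_port = 4420
target_ip_address = 10.0.0.1
```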
NVMe-oF targets supported
Based on the proposed solution above, there are two choices for implementing NVMe-oF with OpenStack: first, the kernel NVMe-oF target driver, supported as of the OpenStack 'Rocky' release; and second, Intel's SPDK (Storage Performance Development Kit) based implementation, consisting of the SPDK NVMe-oF target driver and the SPDK LVOL (logical volume) backend, which is anticipated in the OpenStack 'Stein' release.
Kernel NVMe-oF target: This implementation supports the kernel target and kernel initiator. However, the kernel-based NVMe-oF target has limitations in the number of IOPS it can deliver per CPU core. It also suffers latency from CPU interrupts, the many system calls needed to read data, and the time taken to transfer data between threads.
SPDK NVMe-oF target: Why SPDK? The SPDK architecture achieves high performance for NVMe-oF with OpenStack by moving all the necessary drivers into user space (out of the kernel), operating in polled mode instead of interrupt mode, and processing locklessly (avoiding CPU cycles spent synchronising data between threads).
Let’s understand what it means.
In the SPDK implementation, the storage drivers used for operations like storing, updating and deleting data run in user space, isolated from the kernel space where general-purpose processes run. This isolation avoids the time spent on kernel processing and lets CPU cycles go towards executing the storage drivers in user space, free from interrupts and from lock contention with other general-purpose drivers in kernel space.
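The lockless idea can be illustrated with a small hypothetical sketch (this is not SPDK code): instead of many threads updating one lock-protected structure, each worker owns its own queue and result slot, mirroring SPDK's one-thread-per-core, share-nothing design.

```python
from threading import Thread

def worker(io_requests, results, idx):
    # Each worker writes only to its own slot: no lock, no contention.
    completed = 0
    for _ in io_requests:
        completed += 1          # stand-in for handling one I/O request
    results[idx] = completed

def run(per_core_queues):
    # One thread per "core", each with its own private queue of requests.
    results = [0] * len(per_core_queues)
    threads = [Thread(target=worker, args=(q, results, i))
               for i, q in enumerate(per_core_queues)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)

print(run([range(1000)] * 4))   # → 4000
```

Because no state is shared while the workers run, no cycles are spent acquiring locks or synchronising data between threads.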
In a typical I/O model, an application requests a read/write and then waits, blocked, until the I/O cycle completes. In polled mode, once the application submits a request for data access, it carries on with other work and comes back periodically to check whether the earlier request has completed. This reduces latency and processing overhead, and further improves the efficiency of I/O operations.
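The contrast between the two models can be sketched as follows (a hypothetical illustration, not SPDK code): the blocking version sleeps until the I/O is done, while the polled version keeps doing useful work between cheap completion checks.

```python
import time

class FakeDevice:
    """Fake async device: an I/O submitted now completes DELAY seconds later."""
    DELAY = 0.01

    def submit(self):
        self._done_at = time.monotonic() + self.DELAY

    def is_complete(self):
        # Cheap status check, like reading a completion-queue entry.
        return time.monotonic() >= self._done_at

def blocking_read(dev):
    # Interrupt-style model: submit, then block until completion; the
    # core sits idle for the whole I/O.
    dev.submit()
    time.sleep(dev.DELAY)
    return dev.is_complete()

def polled_read(dev, do_other_work):
    # Polled mode: submit, keep executing other work, re-check periodically.
    dev.submit()
    work_done = 0
    while not dev.is_complete():
        do_other_work()             # CPU stays busy instead of idling
        work_done += 1
    return work_done

dev = FakeDevice()
busy_iterations = polled_read(dev, do_other_work=lambda: None)
print(busy_iterations > 0)   # the core did work while the I/O was in flight
```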
To summarise, SPDK is specially designed to extract performance from non-volatile media. It contains tools and libraries for building scalable, efficient storage applications from user-space, polled-mode components that enable millions of I/Os per core. SPDK is a set of open source, BSD-licensed building blocks optimised for bringing out high throughput from the latest generation of CPUs and SSDs.
Why SPDK NVMe-oF target?
According to the SPDK NVMe-oF performance benchmarking report:
- Throughput scales up and latency decreases almost linearly with the scaling of SPDK NVMe-oF target and initiator I/O cores
- SPDK NVMe-oF target performed up to 7.3x better in IOPS/core than the Linux kernel NVMe-oF target while running a 4KB 100% random write workload with an increasing number of connections (16) per NVMe-oF subsystem
- SPDK NVMe-oF initiator is 3x faster than Kernel NVMe-oF initiator with null bdev-based backend
- SPDK reduces NVMe-oF software overheads by up to 10x
- SPDK saturates 8 NVMe SSDs with a single CPU core
Fig – SPDK vs. Kernel NVMe-oF I/O Efficiency
SPDK NVMe-oF implementation
This is the first implementation of NVMe-oF integrated with OpenStack (Cinder and Nova), leveraging the SPDK NVMe-oF target driver and the SPDK LVOL (logical volume)-based SDS storage backend. It provides a high-performance alternative to the kernel LVM and kernel NVMe-oF target.
Fig – SPDK Based NVMe-oF + OpenStack Implementation
The implementation was demonstrated at OpenStack Summit 2018 Vancouver.
Compared with the kernel-based implementation, SPDK reduces NVMe-oF software overheads and yields higher throughput and performance. Let's see how this gets added in the upcoming OpenStack 'Stein' release.
This article is based on a session at OpenStack Summit 2018 Vancouver – OpenStack and NVMe-over-Fabrics – Network connected SSDs with local performance. The session was presented by Tushar Gohad (Intel), Moshe Levi (Mellanox) and Ivan Kolodyazhny (Mirantis).
The post OpenStack and NVMe-over-Fabrics – Getting High Performance for Network Connected SSDs appeared first on Calsoft Inc. Blog.