Why NVMe protocols are important for new data centre workloads
Today, data is the new fuel for business. New age technologies such as artificial intelligence, the Internet of Things, blockchain, and machine learning all need data to be stored, processed, and analysed. The volume of data generated has grown exponentially with the rise in internet users over the past several years: according to 'Data Never Sleeps', a report from Domo, 2.5 quintillion bytes of data are generated every day.
This data tsunami challenges IT infrastructure to deliver low latency and higher storage performance, as many enterprises need real-time data processing and faster access to stored data. Accessing high performance SSDs through legacy storage protocols such as SATA or SAS is not enough: those protocols still impose higher latency, lower performance, and quality issues.
NVMe-enabled storage infrastructure
NVMe is a high performance, scalable host controller interface protocol designed to access high performance storage media such as SSDs over the PCIe bus. NVMe is the next generation technology replacing the SATA and SAS protocols, and it offers the features required by enterprises that process high volumes of real-time data.
The main differentiator between NVMe, SATA, and SAS is the number of commands supported in a single queue. SATA devices support 32 commands per queue and SAS supports 256, while NVMe supports up to 64K commands per queue across up to 64K queues. These queues are designed to take advantage of the parallel processing capabilities of multi-core processors.
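The difference in scale between these queue models can be made concrete with a little arithmetic. The sketch below multiplies queues by queue depth for each protocol, using the limits cited above; it is illustrative arithmetic only, not a benchmark, and "64K" is interpreted here as 65,536.

```python
# Theoretical maximum outstanding commands per protocol, from the
# queue limits cited above (illustrative arithmetic, not a benchmark).
PROTOCOL_LIMITS = {
    # protocol: (number of queues, commands per queue)
    "SATA": (1, 32),
    "SAS": (1, 256),
    "NVMe": (64 * 1024, 64 * 1024),  # "64K" interpreted as 65,536
}

def max_outstanding(protocol: str) -> int:
    """Theoretical maximum commands in flight for a protocol."""
    queues, depth = PROTOCOL_LIMITS[protocol]
    return queues * depth

for name in PROTOCOL_LIMITS:
    print(f"{name}: {max_outstanding(name):,} outstanding commands")
```

Even allowing for the idealised numbers, the gap of several orders of magnitude is what lets NVMe keep every core of a modern CPU busy with I/O.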
A defining characteristic of NVMe is that it accelerates existing applications and enables real-time workload processing within NVMe-enabled infrastructure, whether that infrastructure sits in a legacy data centre or at the edge. This performance is achieved because NVMe consumes significantly fewer CPU cycles per I/O than SATA or SAS, where CPU consumption runs much higher. That efficiency allows businesses to get maximum returns from their existing IT infrastructure.
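One reason for that CPU efficiency is NVMe's per-core queue-pair design: each core submits to its own queue, so no cross-core lock is contended on the I/O path. The following is a hypothetical sketch of that idea using plain Python threads and queues; the names (`worker`, `submission_queues`) and counts are invented for illustration and do not model real driver behaviour.

```python
import queue
import threading

# Sketch of NVMe's per-core queue pairs: each worker thread submits
# commands to its own private submission queue, avoiding the single
# shared queue (and lock contention) typical of SATA/SAS-era stacks.
NUM_CORES = 4
COMMANDS_PER_CORE = 1000

def worker(sq: queue.Queue, core_id: int) -> None:
    # Submit commands to this core's private queue -- no cross-core
    # locking is needed, mirroring NVMe's one-queue-per-core design.
    for i in range(COMMANDS_PER_CORE):
        sq.put((core_id, i))

submission_queues = [queue.Queue() for _ in range(NUM_CORES)]
threads = [
    threading.Thread(target=worker, args=(sq, core))
    for core, sq in enumerate(submission_queues)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(sq.qsize() for sq in submission_queues)
print(f"Submitted {total} commands across {NUM_CORES} per-core queues")
```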
NVMe-based infrastructure for IoT workloads
NVMe-based systems will be the key element in processing IoT and machine learning workloads.
Fleets of sensors streaming data at high rates into databases require high bandwidth; the ingested data then needs to be processed and analysed at a matching compute rate, with the results returned to the devices. This entire pipeline needs a high performance, low latency network, plus a storage ecosystem that can respond at the same rate as the network. NVMe over Fabrics (NVMe-oF) suits such IoT use cases: it uses message-based commands to transfer data between a host system and a target SSD or system over a network (Ethernet, Fibre Channel, or InfiniBand).
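The "message-based" model can be sketched in a few lines: instead of writing to memory-mapped doorbell registers on a local PCIe device, the host packs each command into a capsule and sends it to the target over a transport. The capsule layout below is invented for illustration and is not the real NVMe-oF wire format; a `socketpair` stands in for the fabric transport.

```python
import socket
import struct

# Hypothetical command capsule: opcode, flags, command id, length, LBA.
# (Simplified layout for illustration -- not the NVMe-oF spec format.)
CAPSULE = struct.Struct("<BBHIQ")

def make_read_capsule(command_id: int, lba: int, length: int) -> bytes:
    OPCODE_READ = 0x02  # NVMe read opcode; framing here is simplified
    return CAPSULE.pack(OPCODE_READ, 0, command_id, length, lba)

# A socketpair stands in for a fabric transport (Ethernet/FC/InfiniBand).
host, target = socket.socketpair()
host.sendall(make_read_capsule(command_id=1, lba=4096, length=8))

raw = target.recv(CAPSULE.size)
opcode, flags, cid, length, lba = CAPSULE.unpack(raw)
print(f"target received: opcode=0x{opcode:02x} cid={cid} lba={lba} len={length}")
```

The point of the sketch is the shape of the interaction: commands travel as self-contained messages, which is what lets the same NVMe command set run over a network instead of a local PCIe bus.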
Any enterprise using SSDs will benefit from adopting NVMe. NVMe-based infrastructure is ideal for use cases such as SQL/NoSQL databases, real-time analytics, and high performance computing (HPC). NVMe also enables new applications in machine learning, IoT databases, and analytics, as well as real-time application performance monitoring and security audits. NVMe offers scalable performance and low latency options that optimise the storage stack, and it is architected to take full advantage of multi-core CPUs, which will drive rapid technology advancement in the coming years.
The post Why NVMe is Important for New Age Data Center Workloads? appeared first on Calsoft Inc. Blog.