By Jeramiah Dooley, Cloud Architect at SolidFire
Public cloud services have put huge pressure on enterprise IT to compete in a more agile way. When it can take days or even weeks for IT departments to procure and manually set up the networking and storage hardware needed to support new applications, why wouldn’t employees turn to providers who can meet their needs within minutes?
To meet these demands, the hardware infrastructure needs to be more than fast; it needs to be flexible and scalable, with rapid automation to meet the needs of its users. Storage has a key role to play here: with a sophisticated management layer that separates performance from capacity – independently controlling access speeds and available drive space on an app-by-app basis – it is now possible to virtualise performance independently of capacity.
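The idea of decoupling performance from capacity can be sketched in a few lines. The class and field names below are purely illustrative (not any vendor's actual API): each volume carries a capacity figure and a separate QoS policy, so a small, hot volume can be granted more performance than a much larger cold one.

```python
# Hypothetical sketch: volumes whose QoS (performance) settings are
# provisioned independently of their capacity. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class QoSPolicy:
    min_iops: int    # guaranteed floor, even under contention
    max_iops: int    # sustained ceiling
    burst_iops: int  # short-term ceiling for spiky workloads

@dataclass
class Volume:
    name: str
    capacity_gb: int
    qos: QoSPolicy   # performance is set per volume, not derived from size

# A small database volume gets far more performance than a much larger
# archive volume: capacity and performance are decoupled.
db_vol = Volume("oltp-db", capacity_gb=200,
                qos=QoSPolicy(min_iops=5000, max_iops=15000, burst_iops=20000))
archive_vol = Volume("cold-archive", capacity_gb=4000,
                     qos=QoSPolicy(min_iops=100, max_iops=500, burst_iops=1000))
```

In a legacy array, the larger volume would typically command more spindles and therefore more performance by default; here the two dials turn independently.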
Indeed, data centre storage is going through an interesting time, with $1.2bn of publicly disclosed funding pouring into storage start-ups in the last year alone. However, it’s not flash storage alone that’s responsible for the flurry of activity. It’s the functionality that storage vendors are wrapping around flash technology, enabling true next generation data centres, that has people excited.
Across the industry we can see a perfect storm of data centre technologies forming, with virtualisation and software-defined networking both reaching maturity. But storage is the one area still dominated by legacy technology and thinking.
With IDC predicting that the global market for flash storage will continue its growth to $4.5bn in 2015, flash becoming the de facto standard is almost inevitable. But merely retrofitting complex legacy storage systems to incorporate flash is insufficient in the face of current market dynamics, which demand rapid application deployment, dramatic system scalability and the ease of end-user self-service.
Of course, the performance of flash has become table stakes in the storage race. Next generation enterprise storage arrays will instead be measured on simplicity of operation, deep API automation, thoughtful integrations with cloud management platforms like VMware and OpenStack, rich data services, broad protocol support and fine-grained Quality of Service (QoS) controls. Any array using flash is expected to be fast; it’s the rest of the services that will differentiate them.
So, the impending storage battle that will lay waste to the legacy data centre won’t be won by raw performance alone. Instead, it will be won by the rich features and functions above the flash layer, tailored for specific use cases or workloads, that drive significant capital and operational cost savings.
Storage for the next generation data centre
A world that relies increasingly on cloud services, or cloud-like internal IT services, is one that thrives on guaranteed performance. Cloud contracts are underpinned by all manner of “availability” SLAs. As cloud computing continues to gain in enterprise popularity, cloud providers are coming under increasing pressure to supply cast-iron guarantees that focus on application performance and predictability.
This level of “performance reliability” underpins the next generation data centre and is enabled by robust Quality of Service tools. It is the combination of Quality of Service, agile management and performance virtualisation capabilities that will define storage architectures within next generation data centres – to demand less will be to accept siloed computing and legacy IT thinking as the status quo.
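One common way a QoS layer can enforce a per-volume performance ceiling is a token bucket: tokens accrue at the volume's maximum IOPS rate, the bucket's depth provides a burst allowance, and each I/O consumes one token. This is a minimal sketch of the general technique, not a description of any particular array's data path.

```python
# Minimal token-bucket sketch of IOPS rate limiting: tokens refill at
# max_iops per second, and the bucket depth (burst_iops) bounds bursts.
# Purely illustrative; real arrays enforce QoS inside the data path.
class TokenBucket:
    def __init__(self, max_iops: float, burst_iops: float):
        self.rate = max_iops        # tokens added per second
        self.capacity = burst_iops  # bucket depth = burst allowance
        self.tokens = burst_iops    # start full
        self.last = 0.0

    def allow(self, now: float) -> bool:
        """Return True if one I/O may proceed at time `now` (seconds)."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(max_iops=1000, burst_iops=1500)
# An instantaneous flood of 2000 I/Os at t=0: only the burst depth gets through.
allowed = sum(bucket.allow(0.0) for _ in range(2000))
print(allowed)  # 1500 -- the burst allowance caps instantaneous I/O
```

Guaranteeing a performance *floor* (a minimum IOPS figure under contention) requires the scheduler to do more than throttle – it must also reserve capacity across tenants – but the ceiling-and-burst half of the contract reduces to exactly this kind of accounting.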
As the IT industry shifts away from the classic monolithic and static operational models into the new era of dynamic, agile workflows of cloud computing, it’s time to up the ante and look beyond traditional storage hardware architectures and consider products that are built from the ground up for the next generation of applications and operations.