Is hyperconvergence really key to your data centre cloud strategy?
Vendors often like to create a new name for an existing piece of technology that is essentially made up of the same components and fulfils the same functions. Competitive pressure to keep customers interested is one reason: application service provision is more commonly known today as the cloud, while converged infrastructure has given way to hyperconverged infrastructure.
Sometimes there are genuine technological differences between products, but not always. Once a technology has peaked, market interest can quickly fall away, so vendors have an incentive to rebrand. Their claims – and even media reports – should therefore be treated with a pinch of salt and some careful scrutiny.
For example, DABCC magazine (August 2017) highlighted: “Cloud is becoming a key plank in virtually every organisation’s technology strategy. The potential benefits are now widely understood, among them the ability to save money and reduce IT management overheads, meaning more resources can be ploughed into other parts of your business.”
The article, ‘Why Data Centre Hyperconvergence is Key to Your Cloud Strategy’, points out that moving to the cloud “…won’t necessarily deliver these benefits if done in isolation: organisations also need to look at their data centre operations and streamline how these are run. There’s a need to rationalise your data centre as you move to cloud.”
Cloud: Not for everyone
Let’s face it, the cloud isn’t for everyone, but it nevertheless has its merits. Before you invest in new technology or move to it, you should examine whether your existing infrastructure is sufficient to do the job you need it for. Ask yourself: in the hyperconvergence story, what’s really important?
In response to this, David Trossell, CEO and CTO of data acceleration vendor Bridgeworks, notes: “We’ve shouldered traditional system architecture for more than 50 years now”. He explains that there have only been a few significant changes along the way. Apart from the likes of IBM, which has traditionally provided a one-stop shop, most companies still purchase different parts of the system from different vendors.
This approach means customers can source each part from whichever vendor offers the most competitive price or the best solution. The downside, however, is the need to repeat the entire process of verifying compatibility, performance and so on for every combination.
“The other often unseen consequence is the time taken to learn new skill sets to manage and administer the varying products”, Trossell warns. Yet he points out that there is increasing pressure on organisations to spend less while achieving more for each pound or dollar – an expectation to deliver more performance and functionality from shrinking IT budgets.
Hyperconvergence, he suggests, eases this burden: “With its Lego-style building blocks, where you add the modules you require knowing everything is interoperable and auto-configuring, increasing resources in an area becomes a simple task. Another key benefit is the single point of administration, which dramatically reduces the administrative workload and [requires just] one product skill set.
“So, what about the cloud? Does this not simplify the equation even further?” he asks. With the cloud, he says there’s no need to “…invest in capital equipment anymore; you simply add or remove resources as you require them, and so we are constantly told this is the perfect solution.” To determine if it really is, there is a need to examine other aspects of a cloud-only strategy. The cloud may be just one of several approaches needed to run your systems.
Part of the story
Anjan Srinivas, senior director of product management at Nutanix – a company that claims to go beyond hyperconverged infrastructure – agrees that hyperconvergence is only part of the story. He explains the history that led to this technological creation. “The origins of the name were due to the servers’ form factor used for such appliances in the early days,” he says. “The story actually hinges upon the maturity of software to take upon itself the intelligence to perform the functions of the whole infrastructure stack, all the way from storage, compute, networking and virtualisation to operations management, in a fault tolerant fashion.”
He adds: “So, it is fundamentally about intelligent software enabling data centre infrastructure to be invisible. This allows companies to operate their environments with the same efficiency and simplicity of a cloud provider. Hyperconvergence then becomes strategic, as it can stitch together the public cloud and on-premise software-defined cloud, making the customer agile and well-positioned to select multiple consumption models.”
Trossell nevertheless believes that it’s important to consider the short-term and long-term costs of moving to the cloud: “You have to consider whether this is going to be a long-term or short-term process. This is about whether it is cheaper to rent or buy, and about which option is most beneficial.”
The problem is that although the cloud is often touted as being cheaper than a traditional in-house infrastructure, its utility rental model could make it far more expensive in the long term – more than the capital expenditure of owning and running your own systems.
“Sometimes, for example, it is cheaper to buy a car than to rent one”, he explains. The same principle applies to the cloud model. For this reason, it isn’t always the perfect solution. “Done correctly, hyper-convergence enables the data centre to build an IT infrastructure capable of matching public cloud services in terms of elements like on-demand scalability and ease of provisioning and management”, adds Srinivas.
“Compared to public cloud services, it can also provide a much more secure platform for business-critical applications, as well as address the issues of data sovereignty and compliance. A hyper-converged platform can also work out more economical than the cloud, especially for predictable workloads running over a period.”
“Not every cloud has a silver lining”, says Trossell. He argues that believing the hype about the cloud isn’t necessarily the way to go. “You have to consider a number of factors such as hybrid cloud, keeping your databases locally, the effect of latency and how you control and administer the systems.”
He believes that there is much uncertainty ahead, since the cloud computing industry expects the market to consolidate over the coming years, leaving very few cloud players. If this happens, cloud prices will rise and customers will lose the leverage to push them down. There are also issues to address, such as remote latency and the interaction of databases with other applications.
Impact of latency
Trossell explains: “If your application is in the cloud and you are accessing it constantly, then you must take into account the effect of latency on users’ productivity. If most of your users are within HQ, latency will affect them all; with geographically dispersed users, you don’t have to take this into account in the same way.
“If you have a database in the cloud and you are accessing it a lot, the latency will add up. It is sometimes better to hold your databases locally, while putting other applications into the cloud.
“Databases tend to access other databases, and so you have to look at the whole picture to take it all into account – including your network bandwidth to the cloud.”
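As a back-of-the-envelope illustration of how that latency “adds up”, the sketch below compares the time a chatty application spends waiting on the network when its database sits on the LAN versus in a remote cloud region. All figures here are hypothetical assumptions for illustration, not measurements from any particular environment.

```python
# Rough estimate of cumulative round-trip latency for a "chatty"
# application making many sequential queries to its database.
# All figures are hypothetical assumptions, not measured values.

def added_wait_seconds(round_trips: int, rtt_ms: float) -> float:
    """Total time per task spent waiting on network round trips."""
    return round_trips * rtt_ms / 1000.0

# A task that issues 200 sequential queries:
lan_wait = added_wait_seconds(200, rtt_ms=0.5)    # database on the LAN
cloud_wait = added_wait_seconds(200, rtt_ms=30.0)  # database in a remote region

print(f"LAN wait per task:   {lan_wait:.2f} s")    # 0.10 s
print(f"Cloud wait per task: {cloud_wait:.2f} s")  # 6.00 s
```

Even a modest 30 ms round trip, multiplied across hundreds of sequential queries, turns into seconds of user-visible delay per task – which is why Trossell suggests keeping chatty databases local while moving less latency-sensitive applications to the cloud.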
Your existing infrastructure, within your data centre and outside of it, therefore must be part of this ‘bigger picture’. So, with regards to whether hyperconvergence is the way to go, Trossell advises you to analyse whether you’re still able to gain a return on investment (ROI) from your existing infrastructure.
“Think about whether it has a role in your cloud strategy”, he advises, adding: “With a hybrid cloud strategy you can downsize your data centre, saving on maintenance charges too. If you are going to go hyperconverged, then some training will be required. If you are going to use your existing infrastructure, then you will already have some skillsets on-site.”
He adds: “If the licence and maintenance costs of the existing infrastructure outweigh the costs of hyperconvergence, then there is a good business case for installing a hyperconverged infrastructure. This will allow everything to be in one place – a single point of administration.”
There is still a need to consider data protection, which often gets lost in the balancing act. The cloud can nevertheless be used for backup as a service (BUaaS) and disaster recovery as a service (DRaaS), as part of a hybrid solution. Still, he stresses that you shouldn’t depend solely on the cloud and recommends storing data in multiple places.
This can be achieved, he claims, with a solution such as PORTrockIT: “If you decide to change over to the cloud, you need to be able to move your data around efficiently and at speed, as well as restore data if required. You need to keep it running to protect your business operations.”
Not just storage
Trossell and Srinivas agree that storage shouldn’t be your only consideration. “Storage is an important aspect, but that alone does not allow enterprises to become agile and provide their businesses with the competitive edge they expect from their IT”, says Srinivas. He argues that the advantage hyper-convergence offers is “the ability to replace complex and expensive SAN technology with efficient and highly available distributed storage, [which] is surely critical.
“What is critical is how storage becomes invisible and the data centre OS – such as that built by Nutanix – can not only intelligently provide the right storage for the right application, but also make the overall stack and its operation simple”, believes Srinivas.
“Consider backups, computing, networks - everything”, says Trossell before adding: “Many people say it’s about Amazon-type stuff, but it’s about simplifying the infrastructure. We’re now moving to IT-as-a-service, and so is hyper-convergence the way to go for that type of service?”
Technology will no doubt evolve and by then, hyper-convergence may have transformed into something else. This means that it remains an open question as to whether hyper-convergence is key to your data centre cloud strategy.
It may be now, but in the future it might not be. Nutanix is therefore wise to ensure that the Nutanix Enterprise Cloud “…goes beyond hyperconverged infrastructure.” It would also be wise to consider whether other options might serve your data centre needs better. Some healthy scepticism can help us find the right answers and solutions.
Top tips for assessing whether hyperconverged is for you
- Work out if there is any value in your existing infrastructure, and dispose of what no longer has any value – which may not be all of it.
- Calculate and be aware of the effects of latency on your users, including functionality and performance
- Run the costings of each solution out for three to five years, and examine the TCO for those periods to determine which solution is right for your business
- Weigh the savings on maintenance and licensing against the cost of moving to a hyper-converged infrastructure
- Consider a hybrid solution – and don’t lose sight of your data protection during this whole process
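The three-to-five-year costing exercise above – essentially Trossell’s rent-versus-buy car analogy – can be sketched in a few lines. The capex, opex and rental figures below are hypothetical placeholders, not vendor pricing; the point is the shape of the comparison, not the numbers.

```python
# Simple rent-vs-buy TCO comparison over a multi-year horizon.
# All cost figures are hypothetical placeholders, not vendor pricing.

def on_prem_tco(capex: float, annual_opex: float, years: int) -> float:
    """Buy: up-front capital cost plus yearly running costs."""
    return capex + annual_opex * years

def cloud_tco(monthly_rental: float, years: int) -> float:
    """Rent: a recurring utility charge with no capital outlay."""
    return monthly_rental * 12 * years

for years in (3, 5):
    own = on_prem_tco(capex=250_000, annual_opex=40_000, years=years)
    rent = cloud_tco(monthly_rental=12_000, years=years)
    cheaper = "on-premise" if own < rent else "cloud"
    print(f"{years} years: on-prem £{own:,.0f} vs cloud £{rent:,.0f} -> {cheaper}")
```

With these illustrative inputs the rental model overtakes the capital cost of ownership well inside the five-year window, which is exactly the long-term trap the article warns about; with different workloads or pricing, the comparison can just as easily go the other way – hence the advice to run the numbers for your own business.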