By Steve Davis, marketing director, NGD
We’ve seen it many times before: first-generation technology products create huge untapped marketplaces, only to be bettered eventually by their originators or by competitors. Think of VCRs and then CDRs, both usurped by DVRs and streaming, or the first mobile phones becoming the smartphones of today – the list goes on.
Cloud computing is no exception. The original ‘product’ concept remains very much in vogue, but the technology and infrastructure holding it together keep getting faster, more functional and more reliable – put simply, better. Growing maturity among users and cloud service providers is seeing to that. After ten years of cloud, the industry and users alike have learned valuable lessons about what does and doesn’t work. They still like it and want much more of it, but there’s no longer room for a one-size-fits-all approach.
With this evolution, cloud “1.0” has morphed into “2.0” in the past year or so; while the name has been around for a few years, 451 Research, among others, has recently put it back at the forefront. The two core varieties, public and private, have ‘cross-pollinated’ and given rise to hybrid, an increasingly ‘virulent’ strain. This is because companies are realising that they need many different types of cloud service in order to meet a growing list of customer needs.
Offering the best of both worlds, hybrid cloud combines a private cloud with public cloud services to create a unified, automated and well-managed computing environment.
Economics and speed are the two greatest issues driving this market change. Look at the numbers. According to RightScale’s 2016 State of the Cloud Report, hybrid cloud adoption rose from 58% in 2015 to 71% thanks to the increased adoption of private cloud computing, which rose to 77%. Synergy Research’s 2015 review of the global cloud market found public IaaS/PaaS services had the highest growth rate at 51%, followed by private and hybrid cloud infrastructure services at 45%.
It’s an incestuous business. Enterprises using public clouds to store non-sensitive data and for easy access to office applications and productivity tools automatically become hybrid cloud users as soon as they connect any of these elements with private clouds, and vice versa. Many still prefer the peace of mind of retaining private cloud infrastructure for managing core business applications, as well as embracing those still-valuable on-premise legacy systems and equipment that simply can’t be virtualised.
Equally, a company might want to use a public cloud development platform that sends data to a private cloud or a data centre-based application, or move data from a number of SaaS (Software as a Service) applications between private or data centre resources. A business process is therefore designed as a service so that it can connect with these environments as though they were one.
Hybrid and the data centre
So where do cloud 2.0 and the rise of hybrid leave the data centre? Clearly, the buck must still ultimately stop with the data centre provider, as it is the rock supporting any flavour of cloud – public, private or hybrid. Whether you are a service provider, systems integrator, reseller or end user, you will want to be sure the data centres involved have strong physical security, sufficient power on tap for the high-density racks that allow services to scale at will and, of course, diverse high-speed connectivity for reliable anyplace, anytime access.
But when implementing hybrid environments, the devil is in the detail. What often isn’t considered is how to connect the public and private clouds. And don’t forget that some applications may still remain outside cloud-type infrastructure altogether. There is not only the concern around latency between these three models; the cost of connectivity also needs to be built into the business plan.
The location of the public and private clouds is a primary concern and needs careful consideration. The time taken to cover large geographical distances must be factored in, so the closer the environments can be positioned, the better. The security of the connections and how they are routed also need to be examined. If the links between the two clouds were disrupted, how would this affect your organisation?
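To see why distance matters, a rough back-of-the-envelope sketch helps. Assuming a signal travels through fibre at roughly two-thirds the speed of light in a vacuum (and ignoring routing hops, equipment delay and congestion, all of which add more), the best-case round-trip time grows directly with distance:

```python
# Rough best-case round-trip propagation delay over fibre.
# Assumption: signal speed is ~2/3 the speed of light in a vacuum;
# real-world latency is always higher due to routing and equipment.

SPEED_OF_LIGHT_KM_S = 299_792            # km per second in a vacuum
FIBRE_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds for a one-way distance."""
    return (2 * distance_km / FIBRE_SPEED_KM_S) * 1000

for km in (10, 100, 1000):
    print(f"{km:>5} km: ~{round_trip_ms(km):.2f} ms round trip")
```

Roughly 1 ms of round-trip delay per 100 km of separation is a useful rule of thumb, and it compounds with every chatty request an application makes between the two clouds.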
Customers actively building hybrid solutions increasingly demand that their private clouds be as close to the public cloud as possible. Using public Internet connections to reach the public cloud can expose end users to congestion and latency, while direct connections do not come cheap. Sure, latency between private and public cloud can be reduced, but at a cost. Caching can sometimes help, and the use of traffic optimisation devices is well proven, but each adds more complexity and cost to what should be a relatively straightforward solution. Developers need to be conscious that moving large amounts of data between private and public cloud will incur latency, and applications will sometimes need to be redesigned purely to get around latency problems.
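The scale of the bulk-data problem is easy to underestimate. A naive estimate (assuming the link runs at its full stated bandwidth, with no protocol overhead or contention – real transfers are slower) shows why large moves between clouds need planning:

```python
# Naive transfer-time estimate for bulk data moves between clouds.
# Assumption: the link runs at full stated bandwidth with no protocol
# overhead or contention; actual transfers take longer.

def transfer_hours(size_tb: float, link_gbps: float) -> float:
    """Hours to move size_tb terabytes over a link_gbps link."""
    size_bits = size_tb * 1e12 * 8       # terabytes -> bits
    seconds = size_bits / (link_gbps * 1e9)
    return seconds / 3600

print(f"1 TB over 1 Gbps:   ~{transfer_hours(1, 1):.1f} hours")
print(f"10 TB over 1 Gbps:  ~{transfer_hours(10, 1):.1f} hours")
```

Even in this idealised case, a terabyte takes over two hours on a 1 Gbps link, which is why architectures that keep bulk data in one place and move only results tend to fare better than those that shuttle datasets back and forth.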
In a perfect world, a single facility would host both public and private cloud infrastructure, using the various backup solutions available for data stored in each. This would reduce latency and connectivity costs and give the end user a far higher level of control. Obviously, the location would have to be a scalable, highly secure data centre with good on-site engineering services available to provide remote hands as necessary. And thanks to the excellent quality of modern monitoring and diagnostics tools, much of the technical support can now be done remotely by the provider or the user.