Real use cases: Why 50% of enterprises are choosing hybrid cloud
Enterprises are shaping the cloud to fit their needs, and the result is overwhelmingly a hybrid of public, private, and on-premises clouds. Today, 19% of organizations manage hybrid clouds, and an additional 60% plan to deploy them. Gartner estimates that hybrid cloud adoption will near 50% by 2017.
Why are enterprises going hybrid? Below are four use cases that demonstrate how enterprises determine what, when and how to move to the cloud.
Case 1: Testing environments
Migrating testing/staging environments to the public cloud is a compelling business case for many enterprises, especially when business leaders are still skittish about trusting production environments to multi-tenant clouds. Enterprises can see a 20-30% cost reduction on non-critical infrastructure, and developers get the opportunity to become familiar with cloud architecture without the risk of an inexperienced cloud engineer bringing down their applications.
A large software service provider builds discrete environments for each of their clients. Their data is highly regulated, and many of their clients are not comfortable using the public cloud to host sensitive information. However, compelled by the cost savings of Amazon Web Services, they decided to host the development and staging environments for their customer-facing applications on AWS; these environments do not hold sensitive data. Low-latency connections over AWS Direct Connect between their AWS environment and their hosted private cloud allow them to maintain high deployment velocity. AWS Storage Gateway allows them to move data between clouds for a nominal fee when necessary, and AWS CodeDeploy allows them to coordinate code pushes to their production servers.
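To make the CodeDeploy piece concrete: a deployment is driven by an appspec.yml file packaged with the application revision. The sketch below is a minimal, hypothetical example; the paths and script names are illustrative assumptions, not details from this company's actual pipeline.

```yaml
version: 0.0
os: linux
files:
  - source: /build               # hypothetical build output in the revision bundle
    destination: /var/www/app
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 60
  AfterInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
  ValidateService:
    - location: scripts/validate.sh   # smoke test before traffic resumes
      timeout: 120
```

CodeDeploy runs the hook scripts in this lifecycle order on each target instance, which is what lets a single push be coordinated across staging and production fleets.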
The company has reduced their hosting costs, and the success of the project has led to greater department-wide acceptance of cloud hosting. Building on that success, they plan to migrate the production environment to AWS within the next 6-12 months.
Case 2: Disaster recovery
Enterprises spend millions maintaining backup systems that sit unused 95% of the year. The public cloud allows companies to pay for disaster recovery when they need it, and to stop paying when they don't. The public cloud also offers greater geographic diversity of data centers than even the largest enterprises can afford, and enterprises can cheaply ship and store machine images in AWS.
However, migrating backups to the cloud is not a simple proposition. Many enterprises have highly regulated backup procedures governing the location of backups, the length of data retention, and data security, and they often lack the internal experience with cloud databases and storage needed to meet those compliance standards.
A research company maintains critical intellectual property in on-premises and colocated data centers. Their business leaders are adamant about not hosting intellectual property in the public cloud, yet they want to explore the public cloud for cost savings. When evaluating their disaster recovery procedures, they realized that while their backups were geographically dispersed, every site sat in an earthquake-prone region.
The research company has decided to maintain backups in AWS, including vast quantities of data from research trials. This will allow them to repurpose hardware currently dedicated to backups for front-line data processing, saving on both disaster recovery and provisioning costs. They plan to use a pilot light disaster recovery model, maintaining a mirrored database server in AWS while keeping the other application and caching servers off. In the event of a disaster, they will be able to start those instances in under 30 minutes.
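The recovery runbook for a pilot light setup is usually scripted. The sketch below is a minimal, hypothetical planner (tier names, instance IDs, and start order are assumptions, not details from the research company's setup) that decides which stopped instances to start and in what order; in practice the returned IDs would be handed to an API call such as boto3's `ec2.start_instances`.

```python
# Hypothetical pilot-light failover planner. The mirrored DB is already
# running; only the stopped application and caching tiers need starting.

START_ORDER = ["cache", "app", "web"]  # assumed dependency order: caches first

def plan_failover(instances):
    """Return the IDs of stopped instances in the order they should start."""
    to_start = [i for i in instances if i["state"] == "stopped"]
    to_start.sort(key=lambda i: START_ORDER.index(i["tier"]))
    return [i["id"] for i in to_start]

# Illustrative fleet: IDs and tiers are invented for the example.
fleet = [
    {"id": "i-db-mirror", "tier": "db",    "state": "running"},  # pilot light
    {"id": "i-app-1",     "tier": "app",   "state": "stopped"},
    {"id": "i-cache-1",   "tier": "cache", "state": "stopped"},
]
print(plan_failover(fleet))  # → ['i-cache-1', 'i-app-1']
```

Keeping the planning logic separate from the AWS calls makes the failover order easy to test without touching live infrastructure.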
Case 3: Legacy systems
In many organizations, complex lines of application dependency mean that some components of an application can move to the cloud while others, usually those tied to legacy systems, cannot. These legacy systems are hosted on-premises or in private clouds while other components are moved to the public cloud.
A large SaaS provider maintains Oracle RAC as a major database system supporting a critical component of several application environments. Oracle RAC is a shared-cache clustered database architecture that uses Oracle Grid Infrastructure to share server and storage resources; automatic, near-instantaneous failover to other nodes gives it an extremely high degree of scalability, availability, and performance. The provider wanted to move other components of the application onto AWS, but neither Amazon EC2 nor RDS provides native support for RAC.
RAC's high-performance capabilities meant they did not want to seek an alternative on AWS. Instead, they decided to host RAC on bare-metal servers and use AWS Direct Connect to provide low-latency connections to the other application tiers on AWS. They successfully maintained the high performance of RAC while still gaining the scalability and low cost of AWS compute resources.
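For illustration, the application tiers on AWS can address the on-premises RAC cluster through an ordinary Oracle net service entry that resolves over the Direct Connect link. The hostname and service name in the tnsnames.ora sketch below are hypothetical placeholders, not the provider's actual configuration.

```
RAC_PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.corp.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = prod.example.com)
    )
  )
```

Because Direct Connect provides a private, consistent-latency path, the AWS-hosted tiers connect to the cluster's SCAN listener much as they would on a local network.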
Case 4: Cloud bursting
The idea of cloud bursting appeared several years ago, but the "own the base, rent the spike" model has never taken off in the enterprise. This is partly because bursting is technically difficult to accomplish and often requires that applications be built for cross-cloud interoperability from the get-go. Clouds often cannot talk to each other, and integrations frequently require handcrafting by developers and administrators. Building the automation scripts that perform scaling without human intervention is a challenge for even the most advanced automation engineers. Only in the last 6-12 months have tools appeared on the market that might facilitate bursting for enterprise-grade applications, such as Amazon resources now surfacing in VMware's vCenter management console.
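The scaling decision at the heart of "own the base, rent the spike" can be sketched in a few lines. The capacities and thresholds below are invented for illustration; a real implementation would feed this from monitoring data and drive the provisioning APIs of both clouds.

```python
# Hypothetical bursting policy: serve load from the owned base until a
# threshold, then rent just enough public-cloud instances for the overflow.

ONPREM_CAPACITY_RPS = 100   # assumed requests/sec the owned base can serve
BURST_THRESHOLD = 0.8       # start considering a burst at 80% of base capacity

def instances_to_rent(load_rps, per_instance_rps=25):
    """Return how many public-cloud instances to rent for the current load."""
    if load_rps <= ONPREM_CAPACITY_RPS * BURST_THRESHOLD:
        return 0
    overflow = load_rps - ONPREM_CAPACITY_RPS
    if overflow <= 0:
        return 0  # near capacity, but the owned base still covers the load
    return -(-overflow // per_instance_rps)  # integer ceiling division

print(instances_to_rent(70))   # → 0: the owned base handles it
print(instances_to_rent(160))  # → 3: an overflow of 60 rps needs 3 instances
```

The hard part in practice is not this arithmetic but everything around it: launching the instances in the second cloud, wiring them into load balancing, and tearing them down again without human intervention.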
We have yet to hear of a large-scale enterprise employing cloud bursting, and we expect it to become more common only as hybrid cloud tools mature. Vendors that claim built-in cloud bursting are often limited in other ways, such as breadth of services and compliance. Furthermore, enterprises are at far greater risk of vendor lock-in with these smaller clouds than with a system built modularly for scalability, as expert engineers can achieve in AWS.
In the next 12-24 months, many enterprises will be in the application evaluation and planning phase of their hybrid cloud deployments. Some will choose to experiment with the public cloud in highly controlled ways, as in Cases 1 and 2, while other enterprises, usually smaller ones, will take a more aggressive approach and migrate production applications, as in Cases 3 and 4. Although most hybrid deployments add complexity to enterprise infrastructure, the success of this planning phase will turn what could become a series of unwieldy mash-ups into the ultimate tool for business agility.
The post Why Are 50% of Enterprises Choosing Hybrid Cloud? Real Use-Cases appeared first on Gathering Clouds.