Some things should not go public: The risks of a single-cloud strategy
Earlier this year, both Amazon and Microsoft cloud customers suffered major service disruptions when unforeseen outages hit the AWS and Azure platforms. Customers lost access to vital applications and data, and some companies were cut off from crucial SaaS-based technologies, inhibiting business operations across the globe. The outages prompted a swift realisation: the public cloud may not always be the most appropriate solution for every workload, and keeping all your critical data in one location, reliant on a single cloud provider, isn’t always wise.
Downtime, however, is not the only reason to avoid putting all your workloads in one cloud. Cost and performance are also key factors in the day-to-day running of business operations.
The price of public cloud
Initially, public cloud may appear to offer considerable cost savings compared with private or on-premises alternatives. As many organisations discover, however, migrating applications to a public cloud platform can bring significant hidden costs. Auto-scaling features, however well intentioned, can cause costs to soar in line with demand for resources, making spending difficult to predict and budgets harder still to set. Private cloud environments are far better suited to forecasting precise needs and financial allocations, because integrated analytics can anticipate demand for additional capacity and performance and so avoid over-provisioning.
Like a spending spree, resource-intensive workloads and applications in the public cloud can add up to a hefty credit-card bill. Horror stories of surprise monthly charges for network bandwidth usage are no modern-day myth – the same traffic carries no metered charge when a workload sits in a company’s own data centre.
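To see how those bandwidth charges accumulate, a back-of-envelope calculation helps. The rate below is a hypothetical illustration, not any provider’s actual price sheet:

```python
def monthly_egress_cost(terabytes_out: float, usd_per_gb: float = 0.09) -> float:
    """Estimate a monthly data-egress bill.

    usd_per_gb is an assumed illustrative rate, not a real price list.
    """
    return terabytes_out * 1024 * usd_per_gb


# 50 TB of outbound traffic at the assumed rate
print(f"${monthly_egress_cost(50):,.0f}")  # → $4,608
```

Even at a modest per-gigabyte rate, a data-heavy workload can quietly generate thousands in monthly charges that simply do not appear on an on-premises bill.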
It’s all about performance
IT teams are held to high performance standards: 99.999% availability – “five nines” – is considered the bar for modern enterprises. To put that into perspective, five nines allows only around five minutes of downtime a year, so after a near four-hour outage it would take Amazon decades of uninterrupted service to average such numbers again.
Organisations that rely on data for mission-critical operations should be aware that the public cloud generally delivers a consistent three or four nines. Applications must provide their own availability and resiliency in the public cloud, rather than depending on the infrastructure to provide them, which makes uptime difficult to maintain.
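The gap between those availability levels is easy to quantify. As a rough illustration (assuming a 365.25-day year), the annual downtime budget for each tier works out as follows:

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # assumes a 365.25-day year


def downtime_budget_minutes(availability: float) -> float:
    """Maximum minutes of downtime per year allowed at a given availability."""
    return (1 - availability) * MINUTES_PER_YEAR


for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    print(f"{label}: {downtime_budget_minutes(availability):,.1f} min/year")
```

Three nines permits nearly nine hours of downtime a year; five nines permits barely five minutes – which is why a workload that genuinely needs five nines cannot simply inherit it from a three-nines platform.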
Ultimately, when deciding which workload is right for the public cloud, a company may find itself asking how much downtime can be afforded before it impacts the bottom line, or causes customers to complain.
Workloads that are genuinely business-critical are better suited to private and on-premises environments, where quality-of-service controls can eliminate resource conflicts and protect performance.
Find the right basket for each egg
It’s likely that many organisations have highly variable application needs. Yes, some applications may be right for the public cloud, but others are better suited to a company’s own data centre on grounds of cost and performance. As organisations shape a cloud strategy, it is vital to remember that not all workloads – or eggs – fit in the same basket.