Cutting the cost of running SQL Server in the cloud: A guide

David Bermingham is recognized within the technology community as a high availability expert and has been honored by his peers with election as a Microsoft MVP for the past 8 years: 6 years as a Cluster MVP and 2 years as a Cloud and Datacenter Management MVP. As director, Technical Evangelist at SIOS Technology Corp., David focuses on evangelizing Microsoft high availability and disaster recovery solutions as well as providing hands-on support, training, and professional services for cluster implementations. He holds numerous technical certifications and draws on more than thirty years of IT experience, including work in the finance, healthcare, and education fields, to help organizations design solutions that meet their high availability and disaster recovery needs. His recent work focuses on high availability and disaster recovery solutions for the public and private cloud.


Who knew that there would be so many different ways that cloud service providers could charge for services? You were expecting to pay for compute and storage resources, sure. And it’s not really surprising that if you’re configuring your SQL Server infrastructure for high availability (HA) in the cloud you’re going to pay more for backup VMs and storage in remote sites. But added costs for Enterprise editions of software whose services you don’t otherwise really want or need? Added costs for moving data between regions if the nodes of your failover cluster reside in separate regions? That just seems mean.

There are, however, ways to cut some of the costs associated with an HA SQL Server deployment in the cloud, and not by small amounts.

Consider your availability goals

To ensure HA you’re going to want to configure either a Failover Cluster Instance (FCI) or an Always On Availability Group (AG), and that is going to involve building out SQL Server deployments in at least two cloud data centers. If your primary instance of SQL Server fails for any reason, your databases will automatically fail over to the server running in a secondary data center and enable you to continue operations with minimal downtime.

If you want to guard against a disaster of epic proportions—think of the cycle of hurricanes that took out infrastructure across the entire island of Puerto Rico a few years ago—then you’ll want to deploy one of those backup instances of SQL Server in a geographically distinct region.

If you don’t feel the need to configure for disaster recovery (DR), though, be sure to deploy the nodes of your cluster in different data centers within the same region. Ensuring that secondary servers have access to the data the active server is using involves moving a lot of data across the network, and one of the gotchas in the cloud is that cloud service providers charge a fee for moving data between regions. They don’t charge extra fees for moving data between data centers within a single region, so if you don’t need DR, you can avoid this charge entirely. If you do need to move data between regions, consider using compression to minimize the volume of data you’re moving.
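To see why compression matters, the sketch below estimates monthly cross-region transfer charges with and without it. Both the per-GB rate and the 3:1 compression ratio are hypothetical placeholders, not any provider's actual pricing; check your provider's current rate card.

```python
# Back-of-the-envelope estimate of cross-region replication egress cost.
# The $0.02/GB rate and the 3:1 compression ratio are illustrative
# assumptions, not quoted prices.

def monthly_egress_cost(gb_per_day, rate_per_gb=0.02, compression_ratio=1.0):
    """Estimated monthly cost of replicating gb_per_day across regions."""
    gb_sent = gb_per_day / compression_ratio
    return gb_sent * rate_per_gb * 30

raw = monthly_egress_cost(500)                           # no compression
compressed = monthly_egress_cost(500, compression_ratio=3.0)
print(f"Uncompressed:   ${raw:,.2f}/month")              # $300.00/month
print(f"3:1 compressed: ${compressed:,.2f}/month")       # $100.00/month
```

At these assumed numbers, compressing the replication stream cuts the egress bill to a third; the same logic applies whatever the real rate turns out to be, since the charge scales linearly with bytes sent.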

Consider your replication mechanism

The mechanism you use to move data between primary and secondary servers may offer opportunities for significant cost savings. In the cloud, you can’t configure for HA using a shared storage area network (SAN)—which you can do in an on-premises configuration. Instead, each node in your HA configuration will have its own storage; nothing is shared. And because nothing is shared, you need to employ a mechanism to replicate the data from your active cluster node to each of the secondary nodes. They’re poised to take over if the primary infrastructure goes offline, but they can’t do anything unless they have local up-to-date copies of the data that your active server was using.

There are two basic ways to replicate SQL Server databases for HA. You can use the AG functionality that has been available in the Enterprise edition of SQL Server since 2012, or you can create an FCI using a SANless clustering solution from a third party. Both approaches can synchronize SQL Server data between your primary and secondary servers (synchronously or asynchronously), and both work closely with the HA solution to facilitate failover in an emergency.

Depending on what version of SQL Server you need, though, the costs associated with these approaches may differ dramatically. Using an AG to manage data replication will require you to license the Enterprise edition of SQL Server. On a per-core basis across all the VMs in your cluster, that can be a very pricey option, particularly if your workloads don’t themselves require the Enterprise edition. If you don’t need the Enterprise edition of SQL Server, creating an FCI that relies on a SANless clustering tool for replication can deliver huge savings. It provides the same HA guarantee, but it enables you to use the Standard edition of SQL Server.

How dramatic are the cost savings? On identically configured dual-node clusters with quad-core CPUs, the cost of implementing HA is 58% lower using the Standard edition of SQL Server and a SANless clustering tool. Bump those up to 24-core CPUs, and a SANless clustering approach can cut your HA costs by more than 70% compared to the cost of running Enterprise SQL Server and an AG.
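To show where figures like these come from, the sketch below compares licensing costs for the two approaches on a dual-node cluster. The per-core prices and the SANless tool's per-node cost are hypothetical placeholders, not quoted list prices; actual savings depend on current Microsoft and vendor pricing.

```python
# Rough licensing comparison for a dual-node SQL Server HA cluster.
# All three prices below are illustrative assumptions.

ENTERPRISE_PER_CORE = 7000    # assumed Enterprise edition price per core
STANDARD_PER_CORE = 1800      # assumed Standard edition price per core
SANLESS_TOOL_PER_NODE = 4500  # assumed per-node cost of a SANless tool

def ag_cost(cores, nodes=2):
    # AG replication requires Enterprise edition on every node's cores
    return ENTERPRISE_PER_CORE * cores * nodes

def sanless_fci_cost(cores, nodes=2):
    # An FCI built on a SANless clustering tool can run Standard edition
    return STANDARD_PER_CORE * cores * nodes + SANLESS_TOOL_PER_NODE * nodes

for cores in (4, 24):
    ag, fci = ag_cost(cores), sanless_fci_cost(cores)
    savings = 100 * (ag - fci) / ag
    print(f"{cores}-core nodes: AG ${ag:,} vs SANless FCI ${fci:,} "
          f"({savings:.0f}% lower)")
```

With these assumed prices, the savings land near the article's figures (about 58% at 4 cores per node, and over 70% at 24 cores), and the gap widens with core count because the Enterprise premium scales per core while the clustering tool's cost stays flat.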

So, yes, look at your HA needs. Consider whether you need the full feature set of Enterprise SQL Server. If you don’t, you may be able to cut costs dramatically by using the Standard edition of SQL Server in a SANless cluster configuration.


