Azure : Setting up a highly available SQL cluster with Standard Edition


It is important to know that you will only get an SLA (99.95%) from Azure when you have two machines deployed (within one availability set) that do the same thing. If this is not the case, Microsoft will not guarantee anything. Why is that? Because during service windows a machine can go down, and those service windows are quite broad in terms of timing; you will not be able to negotiate them or know the exact downtime in advance.

That being said… Setting up your own highly available SQL database is not that easy. There are several options, though it basically boils down to the following:

  • an AlwaysOn Availability Group setup
  • a Failover Cluster backed by SIOS DataKeeper

While I really like AlwaysOn, there are two downsides to that approach:

  • to really enjoy it, you need Enterprise Edition (which isn’t exactly cheap)
  • not all applications support AlwaysOn in their implementations

So a lot of organisations were stranded in terms of SQL when moving to Azure. Though, thank god, a third-party tool introduced itself: SIOS DataKeeper! With it we can build our traditional Failover Cluster on Azure.



Before we start, let’s delve into the design for our setup:



Base Ingredients

We’ll be using two virtual machines, each with a static IP address. Each host will run SIOS DataKeeper. The software will do a synchronous (or asynchronous) replication of the local volumes and present the replicated volume as a cluster-aware volume! Next up is to build a failover cluster and add the cluster-aware volume(s) as a cluster resource for our SQL failover cluster. The SQL part itself is still the same as it was before.
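As a sketch of that replication step: SIOS DataKeeper ships with a PowerShell module. The node IP addresses and the volume letter below are illustrative assumptions, not values from this setup:

```powershell
Import-Module DataKeeper

# Create a synchronous mirror of volume F: from the primary to the secondary node.
# and are placeholder addresses for the two SQL nodes.
New-DataKeeperMirror -SourceIP -SourceVolume F `
                     -TargetIP -TargetVolume F `
                     -SyncType Sync
```

Afterwards the mirrored volume gets registered as a DataKeeper Volume resource in the failover cluster (via the DataKeeper UI or cluster tooling), which is what makes it appear "cluster aware".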

In terms of networking, it becomes rather complex… We’re going to define a cluster IP address. Though Azure (currently) has no way to reserve “a floating IP”, so we’re going to hack our way around that by using an internal load balancer (iLB). The hack is that we configure the iLB with the same address we gave to our SQL cluster. This ensures that we can reach the cluster resource from anywhere in our network!

Sidenote: while I haven’t drawn it on the above topology… do deploy a cluster quorum witness. With two nodes, you do not want to end up in a split-brain scenario. When using Windows 2012 R2, use a file share witness for that. When using Windows 2016, use the cloud witness (an Azure Storage Account) for that!
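A minimal sketch of both witness options; the share path, storage account name, and key below are placeholders you would replace with your own:

```powershell
Import-Module FailoverClusters

# Windows Server 2012 R2: file share witness on a third machine outside the cluster.
Set-ClusterQuorum -NodeAndFileShareMajority "\\witnessserver\clusterwitness"

# Windows Server 2016: cloud witness backed by an Azure Storage Account instead.
Set-ClusterQuorum -CloudWitness -AccountName "mystorageacct" -AccessKey "<storage-account-key>"
```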


Setting up SIOS DataKeeper & MSSQL

Start off by deploying the SIOS DataKeeper template from the Azure Marketplace. You will use this template for the two SQL nodes of your cluster.

[Screenshot: SIOS DataKeeper Cluster Edition in the Azure Marketplace]

After deploying those, follow the steps from the following guide. Be aware that it is still based on “classic mode” (Azure Service Management) and not on ARM (Azure Resource Manager / the Ibiza portal), though the same logic applies…
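Before the DataKeeper and SQL steps from the guide, the failover cluster itself is formed along these lines (node names, cluster name, and static address are illustrative). On Azure you pass -NoStorage so the wizard doesn’t try to claim disks; the DataKeeper volumes are added as resources later:

```powershell
Import-Module FailoverClusters

# Validate the two nodes first; review the generated report before continuing.
Test-Cluster -Node "sqlnode1", "sqlnode2"

# Create the cluster without storage and with a static cluster IP address.
New-Cluster -Name "sqlcluster" -Node "sqlnode1", "sqlnode2" `
            -StaticAddress "" -NoStorage
```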


Getting the network stuff right!

Something I was struggling with was the network integration. It all seems quite straightforward, though the integration with the load balancer can be quite tedious. To help you out and avoid that trouble, take a good look at the following code (as provided in the guide mentioned above):

$ClusterNetworkName = "Cluster Network 1" # the cluster network name
$IPResourceName = "SQL IP Address 1 (sqlcluster)" # the IP address resource name
$CloudServiceIP = "" # IP address of your internal load balancer

Import-Module FailoverClusters

# On Windows 2012 or higher, use the Get-ClusterResource command below.
# On Windows 2008 R2, use the "cluster res" command which is commented out.
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$CloudServiceIP";"ProbePort"="59999";"SubnetMask"="";"Network"="$ClusterNetworkName";"OverrideAddressMatch"=1;"EnableDhcp"=0}
# cluster res $IPResourceName /priv enabledhcp=0 overrideaddressmatch=1 address=$CloudServiceIP probeport=59999 subnetmask=

Looking at that code in more detail, you’ll see that it sets up a probe on port 59999. This probe only answers on the node that currently hosts the SQL cluster instance (read: the active node). So the load balancer can use this port to know which host should receive the traffic!
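You can verify which node currently answers the probe from another machine in the VNet (node names are assumptions); only the node that owns the SQL cluster group should accept the connection:

```powershell
# Succeeds (TcpTestSucceeded = True) only on the active node.
Test-NetConnection -ComputerName "sqlnode1" -Port 59999
Test-NetConnection -ComputerName "sqlnode2" -Port 59999
```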

[Screenshot: internal load balancer settings in the Azure portal]

This is what your internal load balancer should look like. You should have one probe that checks your backend pool on port 59999 (TCP). The backend pool should consist of your cluster nodes. The IP address should be the one you configured as the cluster IP address for your SQL cluster, and the listening port (here 15000) should be the one you will be using for SQL.
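If you build the iLB with ARM PowerShell instead of the portal, the probe and rule look roughly like this. The names are assumptions, and $frontendIp / $backendPool are assumed to come from your existing iLB deployment; note -EnableFloatingIP, which is what lets the backend nodes answer on the cluster IP itself:

```powershell
# Health probe on the cluster probe port from the script above.
$probe = New-AzureRmLoadBalancerProbeConfig -Name "SQLProbe" -Protocol Tcp `
    -Port 59999 -IntervalInSeconds 5 -ProbeCount 2

# Rule forwarding the SQL listener port (15000 in this post) to the active node.
# Floating IP (Direct Server Return) is required for this cluster IP trick.
$rule = New-AzureRmLoadBalancerRuleConfig -Name "SQLRule" -Protocol Tcp `
    -FrontendPort 15000 -BackendPort 15000 -Probe $probe `
    -FrontendIpConfiguration $frontendIp -BackendAddressPool $backendPool `
    -EnableFloatingIP
```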

Sidenote: be aware that the normal default port for MSSQL is 1433. I changed it to 15000 here to mess around with the networking stuff a bit more.



  • You can set up a highly available SQL cluster using MSSQL Standard
  • Configure the load balancer correctly; you need a tight integration with the probe port
  • SIOS DataKeeper will help you create the shared storage needed for your cluster


5 thoughts on “Azure : Setting up a high available SQL cluster with standard edition”

  1. Nice Blog Karim.

    Out of interest, have you ever taken a look at how Zadara Storage integrates with Azure?

    Zadara does true SCSI-3 Persistent reservations as a native block device to the Azure compute and we use this as a witness for the quorum, thus reducing complexity.
    This also means that when it comes to support Microsoft merely sees a native block storage device that comes complete with a VSS hardware provider to provide application consistent snapshots at a storage level.
    You also get to select the storage that meets the application requirements, SSD Accelerated SAS or SATA or native high performance SSD. The ability to use the snapshots and clones for test and dev / service pack testing and also to replicate to another region or even cloud provider.

    Just another idea and we would be keen to take you through the technology if you have the time


    1. Did not know Zadara. After a quick search, it looks like a vSAN and/or a service. The latter is only available in two Azure regions. The former is something I would like to avoid from an architecture point of view.

      1. Our infrastructure resides close to Azure compute and is connected via Express Route. We can be connected wherever there is Express Route capability and we charge our fully managed service by the hour. There are pros and cons to this approach… On the upside the customer gets a dedicated Virtual Private Storage Array with dedicated cache, RAM, CPU and drives (so a known quality of service), but MS doesn’t see the revenue for the storage business. I understand that this can sometimes work against us, but on the flip side we help MS win business that they otherwise may have lost… Thanks for your reply anyway, and the offer is always open for a tech session at a later date if you would like.
