A while back I posted a blog post on how to set up a highly available SQL cluster on Azure using SIOS DataKeeper. As an avid believer in Storage Spaces, I was looking for an opportunity to test-drive “Storage Spaces Direct” on Azure. Today’s blog post will cover that journey…
UPDATE (01/02/2017) ; At this point, there is no official support for this solution. So do not implement it for production at this point. As soon as this changes, I’ll update this post accordingly!
UPDATE (08/02/2017) ; New official documentation has been released. Though I cannot find official support statements.
UPDATE (30/03/2017) ; A few days after the previous update, the following post was made => Deploying IaaS VM Guest Clusters in Microsoft Azure
What do we want to build today?
- A two-node cluster that will be used as a Failover Cluster Instance (FCI) for MSSQL.
- As quorum, we’ll be using the cloud witness feature of Windows Server 2016 in combination with an Azure storage account.
- In regards to storage, we’ll create a Scale-Out File Server (SOFS) setup which will leverage the local disks of the two servers via Storage Spaces Direct.
- To achieve a “floating IP”, we’ll be using an Azure Load Balancer setup (as we did in the last post).
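The cluster part of the plan above can be sketched in PowerShell. Note that the node names, cluster name and static IP below are hypothetical placeholders, not taken from an actual deployment:

```powershell
# Validate the intended configuration first, including the S2D-specific tests
Test-Cluster -Node "SQL01","SQL02" `
    -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

# Create the cluster without any shared storage (S2D will provide that later);
# the cluster name and static IP are assumptions for this example
New-Cluster -Name "SQLCLU" -Node "SQL01","SQL02" -NoStorage -StaticAddress "10.0.0.10"
```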
Create your network and add all systems to it…
As you can see here, I have three systems ;
- DC = Domain Controller
- SQL01 = first cluster node
- SQL02 = second cluster node
In addition, you can see that the end result shows two internal load balancer IPs. My end result had two instances, each running on a different port and with its own cluster IP. So yes… you can create your own database hotel on Azure!
Looking at each cluster node, you can see that I added two P20 (512 GB) disks per node.
Storage Spaces Direct
Before you do anything related to SQL, ensure that your Storage Spaces Direct (S2D) setup is done. An easy-to-follow tutorial can be found here. The end result should look like this (in PowerShell)…
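For reference, the core of such an S2D setup boils down to a few cmdlets. This is only a sketch; the volume name and size are assumptions:

```powershell
# Enable Storage Spaces Direct on the cluster; this claims the local P20 disks
# of both nodes and creates a single storage pool across them
Enable-ClusterStorageSpacesDirect

# Verify that the disks of *all* nodes show up (four P20s in this setup)
Get-PhysicalDisk | Select-Object FriendlyName, Size, CanPool

# Carve a mirrored, cluster-shared volume out of the auto-created pool
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "SQLData" `
    -FileSystem CSVFS_ReFS -Size 500GB
```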
and like this in your Failover Cluster Manager ;
The keen eye will probably have noticed that “Get-PhysicalDisk” showed four P20 disks instead of two. That is totally normal! S2D sees the disks of all nodes and takes them all into consideration!
Basically, now you just install a SQL cluster as you normally would. You’ll notice that the MSSQL nodes will leverage the SOFS deployment to store their data.
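If you want to script the SOFS part as well, a rough sketch could look like the following (role name, paths and accounts are hypothetical placeholders for your own environment):

```powershell
# Add the Scale-Out File Server role on top of the cluster
Add-ClusterScaleOutFileServerRole -Name "SOFS01"

# Create a continuously available SMB share on the CSV for the SQL data files;
# grant access to the cluster node computer accounts and the SQL service account
New-Item -Path "C:\ClusterStorage\SQLData\Data" -ItemType Directory
New-SmbShare -Name "SQLData" -Path "C:\ClusterStorage\SQLData\Data" `
    -FullAccess "CONTOSO\SQL01$","CONTOSO\SQL02$","CONTOSO\sqlsvc" `
    -ContinuouslyAvailable $true
```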
Do not forget your load balancer setup! You will need to create a health probe per instance and use the frontend IP of the internal load balancer as the cluster IP of the given instance.
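Binding the cluster IP resource to the load balancer probe is typically done along these lines. The resource names, IP and probe port below are placeholders; check Get-ClusterResource for the real names in your cluster:

```powershell
# Hypothetical names; replace with the values from your own cluster
$ClusterNetworkName = "Cluster Network 1"
$IPResourceName     = "SQL IP Address 1 (sqlclu)"
$ILBIP              = "10.0.0.20"   # frontend IP of the internal load balancer

# Tie the cluster IP to the probe port so the LB can detect the active node
Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{
    "Address"    = $ILBIP
    "ProbePort"  = 59999
    "SubnetMask" = "255.255.255.255"
    "Network"    = $ClusterNetworkName
    "EnableDhcp" = 0
}
```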
Now be sure not to forget your cloud witness! I really love this feature! This is truly an easy way to introduce quorum into your cluster. Just select the cloud witness option…
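If you prefer PowerShell over the GUI, the cloud witness can be configured with a single cmdlet (the storage account name and key are placeholders):

```powershell
# Point the cluster quorum at an Azure storage account acting as cloud witness
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" `
    -AccessKey "<storage-account-access-key>"
```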
Easy, cheap & reliable! What more could you ever want… 😉
Storage Performance Benchmark
For the storage performance, I’m relying on my trusty storage performance benchmarker again! I did four extended test runs ;
- 2x DS1V2, each with 2x P20
- 2x DS2V2, each with 2x P20
- 2x DS3V2, each with 2x P20
- 2x DS4V2, each with 2x P20
(Click on the links to see the full test results)
What can we conclude from these tests? If we check the limits of each VM size, then we have the following overview ;
In the benchmarks we can see that we reach the caps for each VM size … 32 MBps (DS1V2), 64 MBps (DS2V2), 128 MBps (DS3V2) & 256 MBps (DS4V2).
This might come as a surprise to some, as a P20 would normally have an individual limit of 2300 IOPS & 150 MBps. So, not counting the limit of the VM, you could expect something between 2x and 4x the performance of a single P20.
We see that the performance always reaches the cap of the VM size from the DS1V2 up to the DS3V2. Only with the DS4V2 do we see that not all performance tests reach the cap of the VM size. We notice that the SmallIO test is capped at ~4600 IOPS, which boils down to two times a P20 (2 x 2300 IOPS). So here we reach the cap of the individual disks and not that of the VM size.
MSSQL Performance Benchmark
In terms of MSSQL performance, I’ll be running a TPC-H benchmark by using Quest’s Benchmark Factory for Databases. This also provides us the means to compare the gathered data with our previous benchmarks. All tests were done with both nodes having the same VM size and having both the SOFS & MSSQL role on the same cluster node. The tests were launched after a reboot of the system (necessary for changing the VM size). Anyhow, let’s get to the results!
In our last tests the best result was ~3300, where an S3 scored around ~1400. So we can safely say that this setup outperforms our previous designs! Now if you want to see what your current solution can bring to the table… Do a TPC-H benchmark (scale factor = 1) with the tool from Quest against your own environment. This will provide you with ample information to compare environments.
As for the closing thoughts…
- If you aren’t familiar with a cloud witness, try that one out! Really!
- Using S2D gives you a Windows-native solution for a clustered volume. Put SOFS on top of that, with Continuous Availability (CA), and you have a supported solution for MSSQL. No more messing around with third-party software to get a fully native Microsoft stack working!
- Two P20 disks are sufficient for a setup up to a DS3V2. Once you go beyond that, you can think about adding more disks (for performance reasons).
- When comparing S2D to SIOS, I must say S2D outperforms SIOS. The biggest weakness SIOS had was synchronous writes; there the performance was seriously crippled, whereas S2D is stable in all areas.
- Always use a neutral benchmark tool to compare different environments. When migrating, I would also advise you to benchmark your source & target environment. This will enable you to make conclusions around performance based on metrics and not on gut feeling.
And one final note… Want to learn more about Storage Spaces Direct? Check out this post… 😉
One more thing… I hope you’ve found this post useful!