Introduction
Yesterday we talked about the combination of Azure + S2D + SOFS + MSSQL. There we had a cluster where each node had two P20 disks. What if, at a given point, we need more than 1 TB of disk space? Then we'll be extending the pool (and the virtual disk, partition and volume on top of it). So let's take a look at what that would look like.
Adding the disks
First part… Let's add the disks (note : adding entire hosts is possible too!). Browse to both VMs and press "attach new" in the disks section ;
Here we’ll be adding another P20 disk (512GB) ;
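If you would rather script this than click through the portal, a minimal sketch with the Az PowerShell module could look as follows (the resource group, VM and disk names are placeholders for your own environment) ;

```powershell
# Hypothetical resource group / VM names; adjust to your environment.
$rg = "myResourceGroup"
foreach ($vmName in "sql-node-01", "sql-node-02") {
    $vm = Get-AzVM -ResourceGroupName $rg -Name $vmName
    # Attach an empty 512 GB Premium_LRS managed disk (P20 tier) on a free LUN.
    $vm = Add-AzVMDataDisk -VM $vm -Name "$vmName-data3" -DiskSizeInGB 512 `
        -Lun 2 -CreateOption Empty -StorageAccountType Premium_LRS
    Update-AzVM -ResourceGroupName $rg -VM $vm
}
```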
Now let's take a look at what happens on the system… For this I'm using Show-PrettyPool.ps1, as mentioned in the Storage Spaces Direct deep dive.
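If you don't have that script at hand, a rough equivalent of its per-node disk overview can be pieced together from the built-in storage cmdlets ;

```powershell
# List every physical disk per cluster node, with its media type and size.
Get-StorageNode | ForEach-Object {
    $node = $_
    Get-PhysicalDisk -StorageNode $node -PhysicallyConnected |
        Select-Object @{ n = 'Node'; e = { $node.Name } },
                      FriendlyName, MediaType,
                      @{ n = 'Size(GB)'; e = { [math]::Round($_.Size / 1GB) } }
} | Format-Table -AutoSize
```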
So we started off with two P20 disks per node. Now let’s see what happens if we add another disk ;
It’s going to be … wait for it … legendary. Euhr, no, just being attached. 🙂
Now let’s run the script / commands again ;
We can see that the disks were added to each node AND that the disks were already imported into the storage pool.
This is done automatically, UNLESS there are multiple pools; in that case you need to add the disks to the pool yourself.
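Should you find yourself in that multiple-pool scenario, adding the fresh disks manually is a one-liner (the pool name below is an assumption; check yours with Get-StoragePool) ;

```powershell
# Grab all disks that are eligible for pooling and add them to our pool.
$newDisks = Get-PhysicalDisk -CanPool $true
Add-PhysicalDisk -StoragePoolFriendlyName "S2D on democluster" -PhysicalDisks $newDisks
```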
So we have a virtual disk on our storage pool with a given volume inside it. Let's get to work… First of all, we'll put the cluster shared volume into maintenance mode.
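In PowerShell terms that is a suspend of the cluster resource; a sketch, assuming your CSV resource carries the default naming ;

```powershell
# Putting a CSV into maintenance mode = suspending its cluster resource.
Suspend-ClusterResource -Name "Cluster Virtual Disk (VDisk01)"
```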
And then we'll run "Optimize-StoragePool". This will rebalance the storage pool and spread the data evenly across all disks. (Source : the image below was taken from the storage deep dive article)
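In its basic form that looks like this (pool name assumed again), and you can keep an eye on the rebalance with Get-StorageJob ;

```powershell
# Rebalance the pool so data is spread across all (old and new) disks.
Optimize-StoragePool -FriendlyName "S2D on democluster"

# The rebalance runs as a storage job; monitor it (e.g. from a second session).
Get-StorageJob | Select-Object Name, JobState, PercentComplete
```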
So while that is busy (and it will take a while), let's look at the pool from our cluster manager view. Here we can see that the pool is showing 3TB (raw), of which 2TB (raw) has been used.
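The same raw numbers can be pulled via PowerShell, if you prefer ;

```powershell
# Show pool capacity versus what has been allocated so far.
Get-StoragePool -IsPrimordial $false |
    Select-Object FriendlyName,
                  @{ n = 'Size(TB)'; e = { [math]::Round($_.Size / 1TB, 2) } },
                  @{ n = 'Allocated(TB)'; e = { [math]::Round($_.AllocatedSize / 1TB, 2) } }
```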
We can also see that the system is in maintenance mode. So if you are allergic to PowerShell (huh? why…), you can use this GUI option to enable/disable maintenance mode on the resource too.
… fast forward in time …
Now we see that the optimization of our storage pool has finished. The data has been spread across all disks, though we haven't been using our 1.5TB efficiently yet. So now we'll be extending the virtual disk…
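The extend itself follows the pattern from the Microsoft docs linked below; a sketch with an assumed friendly name ;

```powershell
# Grow the virtual disk to 1.5 TB.
Get-VirtualDisk -FriendlyName "VDisk01" | Resize-VirtualDisk -Size 1.5TB
```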
And after the resize, we can see that we're nearing full utilization. It seems I was a bit too cautious, and I still had about 62.5GB to hand out.
Now for the next step, we’ll also need to extend the partition ;
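Here partition 2 is typically the data partition on such a volume (partition 1 being the reserved one); again a sketch with the same assumed disk name ;

```powershell
# Find the partition on our virtual disk and grow it to the maximum supported size.
$partition = Get-VirtualDisk -FriendlyName "VDisk01" | Get-Disk |
    Get-Partition | Where-Object PartitionNumber -Eq 2
$partition | Resize-Partition -Size ($partition | Get-PartitionSupportedSize).SizeMax
```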
So in the end, we did the following ;
(Source : Extending volumes in Storage Spaces Direct)
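Pieced together (with the same assumed names as above), the whole operation boils down to this sequence ;

```powershell
# 1. Maintenance mode on
Suspend-ClusterResource -Name "Cluster Virtual Disk (VDisk01)"

# 2. Rebalance the pool across the newly added disks
Optimize-StoragePool -FriendlyName "S2D on democluster"

# 3. Extend the virtual disk
Get-VirtualDisk -FriendlyName "VDisk01" | Resize-VirtualDisk -Size 1.5TB

# 4. Extend the partition to fill the virtual disk
$partition = Get-VirtualDisk -FriendlyName "VDisk01" | Get-Disk |
    Get-Partition | Where-Object PartitionNumber -Eq 2
$partition | Resize-Partition -Size ($partition | Get-PartitionSupportedSize).SizeMax
```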
And to finish it, let’s get our cluster shared volume out of maintenance mode ;
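Which is the counterpart of the suspend we started with ;

```powershell
# Maintenance mode off; the CSV comes back online.
Resume-ClusterResource -Name "Cluster Virtual Disk (VDisk01)"
```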
Closing Thoughts
- Adding disks & nodes is surprisingly (?) easy in Storage Spaces Direct.
- Extending the SQL cluster went like a breeze, though it did imply downtime of the system to be sure we did not lose any data.
I just came across this and the previous article, and found them really useful as we start our own SQL migration to the cloud. Have you ever thought about following them up with how you can automate the creation of these systems in Azure with Terraform and PowerShell DSC?
By now, I see more value in first-party cloud services to do this job. There is no business value in managing that piece yourself, imho. You can then leverage Terraform to roll out a SQL DB/Pool/Managed Instance (for example), without the hassle of having to manage the entire operational aspect yourself.