Extending a Storage Spaces Direct pool on Azure

Introduction

Yesterday we talked about the combination of Azure + S2D + SOFS + MSSQL. There we had a cluster where each node had two P20 disks. What if, at a given point, we were to need more than 1 TB of disk space? We’d extend the pool (and the virtual disk, etc.). So let’s take a look at what that would look like.

 

Adding the disks

First part… let’s add the disks (note: even adding entire hosts is possible!). Browse to both VMs and press “Attach new” in the disks section:

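For reference, the same attach (and the subsequent pool and virtual disk extension inside the guest) can be sketched with the AzureRM PowerShell module of that era. All resource, disk, pool and virtual disk names below are hypothetical placeholders, not taken from the actual setup:

```powershell
# Attach an empty data disk to each cluster node (names, sizes and LUN
# numbers are illustrative assumptions).
$rg = "rg-s2d-demo"
foreach ($vmName in @("sql-node-1", "sql-node-2")) {
    $vm = Get-AzureRmVM -ResourceGroupName $rg -Name $vmName
    $vm = Add-AzureRmVMDataDisk -VM $vm -Name "$vmName-data3" `
            -DiskSizeInGB 512 -Lun 2 -CreateOption Empty -Caching None
    Update-AzureRmVM -ResourceGroupName $rg -VM $vm
}

# Afterwards, inside the guest: claim the new disks (if S2D has not already
# auto-claimed them) and grow the virtual disk plus its partition.
$pool = Get-StoragePool -IsPrimordial $false
Add-PhysicalDisk -StoragePoolFriendlyName $pool.FriendlyName `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
Resize-VirtualDisk -FriendlyName "SQLData" -Size 2TB   # vdisk name is an assumption
$part = Get-VirtualDisk -FriendlyName "SQLData" | Get-Disk |
    Get-Partition | Where-Object Type -eq "Basic"       # data partition
$part | Resize-Partition -Size ($part | Get-PartitionSupportedSize).SizeMax
```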


Ever tried the mix of Azure, SQL Server, Storage Spaces Direct & Scale Out File Server?

Introduction

A while back I posted a blog post on how to set up a highly available SQL cluster on Azure using SIOS DataKeeper. As I’m an avid believer in Storage Spaces, I was looking for an opportunity to test-drive “Storage Spaces Direct” on Azure. Today’s blog post will cover that journey…

UPDATE (01/02/2017) : At this point, there is no official support for this solution, so do not implement it in production for now. As soon as this changes, I’ll update this post accordingly!

UPDATE (08/02/2017) : New official documentation has been released, though I cannot find an official support statement yet.

 

Solution Blueprint

What do we want to build today?

  • A two-node cluster that will serve as a Failover Cluster Instance for MSSQL.
  • For the quorum, we’ll be using the cloud witness feature of Windows Server 2016 in combination with an Azure storage account.
  • In regards to storage, we’ll create a Scale-Out File Server setup that leverages the local disks of the two servers via Storage Spaces Direct.
  • To achieve a “floating IP”, we’ll be using an Azure load balancer setup (as we did in the last post).
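As a hedged sketch, the four bullets above roughly translate to the following Windows Server 2016 cmdlets. Node names, the cluster IP and the storage account details are placeholders of my own, not values from the actual deployment:

```powershell
# Build the two-node cluster without claiming any shared storage.
New-Cluster -Name "sqlfci-cl" -Node "sql-node-1", "sql-node-2" `
    -StaticAddress 10.0.0.10 -NoStorage

# Quorum: a cloud witness backed by an Azure storage account.
Set-ClusterQuorum -CloudWitness -AccountName "mystorageacct" `
    -AccessKey "<storage-account-key>"

# Pool the local disks of both nodes with Storage Spaces Direct.
Enable-ClusterStorageSpacesDirect

# Add the Scale-Out File Server role on top.
Add-ClusterScaleOutFileServerRole -Name "sqlsofs"
```

The Azure load balancer for the floating IP is configured on the Azure side, as described in the earlier SIOS DataKeeper post.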


 


Azure : Benchmarking SQL Database Setups – To measure is to know, and being able to improve…

Introduction

To measure is to know. If you cannot measure it, you cannot improve it!

Today’s post will go more in-depth on what performance to expect from different SQL implementations on Azure. We’ll be focusing on two kinds of benchmarks: the storage subsystem and an industry-standard SQL benchmark. That way we can compare the different scenarios to each other in the most neutral way possible.


Test Setup

As a test bed, I started from the setup of one of my previous posts.


The machines I used were DS1 v2 instances for the single-disk tests and DS2 v2 instances for the multiple-disk tests. In terms of OS, I used Windows Server 2012 R2, with MSSQL 2014 (12.0.4100.1) as the database.


Storage Spaces : Create a non-clustered storage pool on a cluster instance

Introduction

Today I ran into some issues when creating a Storage Spaces volume on a cluster instance. I wanted to use the performance benefit of joining multiple Azure storage disks by using Storage Spaces, and afterwards use the volume with SIOS DataKeeper. The issue at hand was that the newly created storage pool would auto-register with the cluster, which would then assume that the Azure disks were shared across the cluster.
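One possible way to sketch the workaround (pool and subsystem names are assumptions, and the exact fix in the full post may differ): create the pool as usual, then remove the pool resource that the cluster auto-registered, so the disks stay node-local:

```powershell
# Create a pool from the poolable Azure data disks on this node only.
$disks  = Get-PhysicalDisk -CanPool $true
$subsys = Get-StorageSubSystem -FriendlyName "*Storage Spaces*"
New-StoragePool -FriendlyName "LocalPool" `
    -StorageSubSystemFriendlyName $subsys.FriendlyName -PhysicalDisks $disks

# The cluster auto-registers the new pool; remove that resource so the pool
# is treated as local, non-shared storage (needed before handing the volume
# over to SIOS DataKeeper).
Get-ClusterResource |
    Where-Object ResourceType -eq "Storage Pool" |
    Remove-ClusterResource -Force
```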


Storage Performance Benchmarker 0.3 – DISKSPD option!

I’m happy to announce a new release of the “Storage Performance Benchmarker”! The previous version relied heavily on “SQLIO”, whereas this version offers you the ability to choose between “DISKSPD” (the default) and “SQLIO”. The output will still be aggregated in the same manner towards the backend web interface, though the local output will be in the format of the respective tool.


Parameters added:

  • -TestMethod : Either “DISKSPD” or “SQLIO”, depending on your preference.
  • -TestWarmup : The warm-up time used when you choose “DISKSPD”.
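A hypothetical invocation combining the new parameters (the script filename is derived from the repository name, so treat it as an assumption):

```powershell
# Run the benchmark with DISKSPD and a 10-second warm-up.
.\storage-performance-benchmarker.ps1 -TestMethod "DISKSPD" -TestWarmup 10
```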

If you have any suggestions/comments, feel free to let me know!


Windows Storage Performance Benchmarking : a predefined set of benchmarks & analytics!

Introduction
A while ago we were looking into a way to benchmark storage performance on Windows systems. This started out with the objective of seeing how Storage Spaces held up under certain configurations, and eventually moved towards benchmarking existing on-premises workloads against Azure deployments. For this we created a wrapper script for SQLIO that was heavily based upon previous work by both Jose Barreto & Mikael Nystrom. Adaptations were made to clean up the code a bit and to add a back-end for visualization purposes. At this point, I feel the tool has reached a level of maturity where it can be publicly shared for everyone to use.

Storage Performance Benchmarker Script
The first component is the “Storage Performance Benchmarker Script”, which you can download from the following location: https://bitbucket.org/kvaes/storage-performance-benchmarker

I won’t be quoting all the options/parameters, as the BitBucket page clearly describes them. By default, the script will do a “quick test” (-QuickTest true). This triggers one run (with 16 outstanding IOs) for four scenarios: LargeIO Read, SmallIO Read, LargeIO Write & SmallIO Write.

The difference between the “Read” & “Write” parts will be clear, I presume… 🙂 The difference between “LargeIO” & “SmallIO” resides in the block size (8 KByte for SmallIO, 512 KByte for LargeIO) and the access method (random for SmallIO, sequential for LargeIO). The tests are meant to mimic typical database behaviour (SmallIO) and a large datastore / backup workload (LargeIO). When doing an “extended test” (-QuickTest false), multiple runs are done to benchmark different “Outstanding IO” scenarios.
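For reference, the four quick-test scenarios roughly map to raw DISKSPD invocations like the following. The flag values are my illustration of the settings described above, not quoted from the script itself:

```powershell
# My sketch of the four quick-test scenarios as raw DISKSPD runs
# (add -c10G on the first run to create the test file).
diskspd.exe -b8K   -r -w0   -o16 -d60 C:\test\testfile.dat   # SmallIO Read : 8 KB random read
diskspd.exe -b8K   -r -w100 -o16 -d60 C:\test\testfile.dat   # SmallIO Write: 8 KB random write
diskspd.exe -b512K    -w0   -o16 -d60 C:\test\testfile.dat   # LargeIO Read : 512 KB sequential read
diskspd.exe -b512K    -w100 -o16 -d60 C:\test\testfile.dat   # LargeIO Write: 512 KB sequential write
```

Sequential access is DISKSPD’s default, which is why only the SmallIO runs carry the -r (random) flag.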

Website Backend
You can choose not to send the information to the backend server (-TestShareBenchmarks false). In that case you will only have the CSV output, as the backend system is what parses the information into charts for you; Example.


By default, your information will be shown publicly, though you can choose to have a private link (-Private true) and even have the link emailed to you (-Email you@domain.tld).

On the backend, you will have the option to see individual test scenarios (-TestScenario *identifying name*) and to compare all scenarios against each other.

For each benchmark scenario, you will see the following graphs ;

  • MB/s : The throughput measured in MB/s. This is often the metric people know… Though be aware that MB/s is obtained by multiplying IO/s by the block size. So the “SmallIO” test will show a lower throughput than “LargeIO”, even though the processing power (IOPS or IO/s) of “SmallIO” may sometimes be better on certain systems.
  • IO/s : The number of IOPS measured during the test. This gives you insight into the number of requests a system can handle concurrently. The higher the number, the better… To assist you, marker zones were added to indicate what other systems typically reach, so you have a frame of reference for what to expect.
  • Latency : The latency that was measured, in milliseconds. Marker zones are added to this chart to indicate what is to be considered a healthy, risky or bad zone.
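The MB/s versus IO/s relation from the first bullet is simple arithmetic; a quick sketch with purely illustrative numbers:

```powershell
# Throughput (MB/s) = IOPS x block size.
$smallIops = 5000                      # hypothetical 8 KB random result
$largeIops = 400                       # hypothetical 512 KB sequential result
$smallMBps = $smallIops * 8   / 1024   # 8 KB blocks   -> ~39 MB/s
$largeMBps = $largeIops * 512 / 1024   # 512 KB blocks -> 200 MB/s
"SmallIO: $smallMBps MB/s - LargeIO: $largeMBps MB/s"
```

So a system can post far more IO/s on SmallIO while still showing less MB/s than LargeIO, which is exactly the effect the bullet describes.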

The X-axis will show the difference between different “Outstanding IO” situations ;

Number of outstanding I/O requests per thread. When attempting to determine the capacity of a given volume or set of volumes, start with a reasonable number for this and increase until disk saturation is reached (that is, latency starts to increase without an additional increase in throughput or IOPs). Common values for this are 8, 16, 32, 64, and 128. Keep in mind that this setting is the number of outstanding I/Os per thread. (Source)
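In DISKSPD terms, that sweep over outstanding I/O depths could look like the following sketch (test file path, thread count and durations are placeholders):

```powershell
# Increase the outstanding-IO depth per thread until latency rises
# without a matching increase in IOPS (disk saturation).
foreach ($oio in 8, 16, 32, 64, 128) {
    diskspd.exe -b8K -r -w0 "-o$oio" -t2 -d30 -W5 C:\test\testfile.dat
}
```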

Microsoft Storage Spaces Direct – What is in it for me?

Introduction

Storage Spaces is a technology in Windows and Windows Server that enables you to virtualize storage by grouping industry-standard disks into storage pools, and then creating virtual disks called storage spaces from the available capacity in the storage pools.
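That definition maps onto just a couple of cmdlets; a minimal, hypothetical example on a stand-alone server (all friendly names are examples):

```powershell
# Group the poolable physical disks into a storage pool, then create a
# virtual disk ("storage space") from the pool's capacity.
$disks  = Get-PhysicalDisk -CanPool $true
$subsys = Get-StorageSubSystem -FriendlyName "*Storage Spaces*"
New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName $subsys.FriendlyName -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "Space01" `
    -ResiliencySettingName Mirror -UseMaximumSize
```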

For me, Storage Spaces is a disruptor in the enterprise landscape. After the mainframe, we went towards Intel hardware. The systems had their own disks, and we used them stand-alone or did some cloning/copying between them. As the data center grew, this became unmanageable and we turned towards SAN systems. Here we saw several challengers over the years, though the concepts remained the same. At a given point, I was hoping VMware would turn towards the concept that Nutanix eventually released upon the world: server hardware with direct attached storage, and replication handled via the virtualization host.

Anyhow, the prospect of using JBOD storage with basic enterprise SAN features excites me! This could give a lot of IT departments some budget room to maneuver again… Though a SAN is commonly still a single point of failure, so a scale-out concept (instead of the typical SAN scale-up!) would be great. And with Storage Spaces Direct, Microsoft has hit the nail dead on for me!

Hyper Convergence vs Convergence

With Storage Spaces in Windows 2012 we got basic NAS functionality: raid0/raid1/raid5, snapshots, pooling. With 2012 R2, the game was on: storage tiering, data deduplication, flexible resilience (“Dynamic Hot Spare”) & a persistent write-back cache. Microsoft had turned the vision of SMB as a file server towards a NAS (kind of like NetApp is for Linux/VMware). And with the Scale-Out File Server (SOFS), you pretty much had a basic SAN that covered the needs of the majority of the SME landscape.

Though, as you can see, the architecture was still comparable to the “SAN” architectures we see a lot in the field. The concept that Nutanix brought to life wasn’t there yet. Then, at Microsoft Ignite, Storage Spaces Direct (“S2D”) was announced, and along with it the possibility to go towards a hyper-converged architecture.

At that point, a lot of people tweeted: “Nutanix stock is worth less after Ignite”. And to be honest, there lies a lot of truth in those tweets… You are now able to build the same kind of hyper-converged architecture with Microsoft components. With S2D, you have two conceptual options:

  • Hyper Converged – “Nutanix Mode” – Scale out with storage & CPU power combined.
  • Converged / Disaggregated – “Traditional Mode” – Scale out with SOFS & Compute nodes separately.

For the entry-tier segment, this technology step is huge. Twitter has been buzzing about the fact that features like replication are part of the Datacenter edition. Though for me, the hyper-converged part solves this. And let’s be honest… we all know the R&D money needs to come from somewhere, and in a next edition it’ll trickle down to the Standard edition.

Storage Replica

So what drives the “Hyper Converged” engine? Storage replication… (Source: https://msdn.microsoft.com/en-us/library/mt126183.aspx) The replication comes in the two traditional modes:

  • Synchronous Replication – Synchronous replication guarantees that the application writes data to two locations at once before completion of the IO. This type of replication is more suitable for mission-critical data, as it requires network and storage investments and carries a risk of degraded application performance. Synchronous replication is suitable for both HA and DR solutions. When an application write occurs on the source copy, the originating storage does not acknowledge the IO immediately. Instead, the data changes replicate to the remote destination copy, which returns an acknowledgement; only then does the application receive the IO acknowledgement. This ensures constant synchronization of the remote site with the source site, in effect extending storage IOs across the network. In the event of a source site failure, applications can fail over to the remote site and resume operations with the assurance of zero data loss.
  • Asynchronous Replication – In contrast, asynchronous replication means that when the application writes data, that data replicates to the remote site without immediate acknowledgement guarantees. This mode allows a faster response time to the application, as well as a DR solution that works over geographical distances. When the application writes data, the replication engine captures the write and immediately acknowledges it to the application. The captured data then replicates to the remote location, where the remote node processes the copy and lazily acknowledges back to the source. Since replication is no longer in the application IO path, the remote site’s responsiveness and distance become less important factors. There is a risk of data loss if the source data is lost while the destination copy was still in a buffer. With its higher-than-zero RPO, asynchronous replication is less suitable for HA solutions like failover clusters, which are designed for continuous operation with redundancy and no data loss.
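With Storage Replica in Windows Server 2016, the mode is simply a parameter when creating the partnership. A hedged sketch with placeholder server, replication group and volume names:

```powershell
# Create a volume-to-volume replication partnership (all names are
# placeholders; each volume needs a matching log volume).
New-SRPartnership -SourceComputerName "srv-a" -SourceRGName "rg-a" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "srv-b" -DestinationRGName "rg-b" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" `
    -ReplicationMode Asynchronous    # or Synchronous for zero-data-loss HA
```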

In addition, TechNet states the following:

The Microsoft implementation of asynchronous replication is different from most. Most industry implementations of asynchronous replication rely on snapshot-based replication, where periodic differential transfers move to the other node and merge. SR asynchronous replication operates just like synchronous replication, except that it removes the requirement for a serialized synchronous acknowledgment from the destination. This means that SR theoretically has a lower RPO as it continuously replicates. However, this also means it relies on internal application consistency guarantees rather than using snapshots to force consistency in application files. SR guarantees crash consistency in all replication modes.

Reading the above, on a conceptual level this can be compared with NetApp’s “near online sync” implementation. Anyhow, very cool stuff from Microsoft, as it is really entering the SAN market space and understands the necessities that entails. Another important note:

The destination volume is not accessible while replicating. When you configure replication, the destination volume dismounts, making it inaccessible to any writes by users or visible in typical interfaces like File Explorer. Block-level replication technologies are incompatible with allowing access to the destination target’s mounted file system in a volume; NTFS and ReFS do not support users writing data to the volume while blocks change underneath them.

From a technical stance, this is completely understandable. Though, do not expect to have “local” access to all data when doing “hyper convergence”. So you will need a high-speed / low-latency network between your hyper-converged nodes! Think towards RDMA with InfiniBand/iWARP… Eager for more?

Or feel free to ping me!