Trying out the Azure Front Door Service

Introduction

With the tagline “Scalable and secure entry point for fast delivery of your global applications”, Azure Front Door (Service) saw the light of day! What features does it promise to bring to the table?

My first thought was… Does this fill the gap where I’m currently leveraging CloudFlare for my personal deployments? So in today’s post we’ll see what the process of creating the service looks like in real life!

 

Let’s take it for a spin!

After browsing through the marketplace, we select “Azure Front Door” and press the button to create one. I must say that the configuration flow is a pleasant surprise. First we enter the “basics”…

Continue reading “Trying out the Azure Front Door Service”

Azure Functions : Compiled or interpreted C#… What impact does it have on my performance?

Introduction

Last week I did a post about how to integrate compiled Azure Functions with VSTS… In the closing thoughts I mentioned my observation that compiled functions showed a performance improvement.

 

I should have known Nills would challenge me on that… 😉

 

So… #challengeaccepted

Continue reading “Azure Functions : Compiled or interpreted C#… What impact does it have on my performance?”

Serverless On-Demand Scaling : Pushing the pedal when you need it…

Introduction

A lot of workloads are driven by peak consumption. From my experience, workloads that have a constant performance need are in the minority. Now here comes the interesting opportunity when leveraging serverless architectures… Here you only pay for your actual consumption. So if you tweak your architecture to leverage this, you can get huge gains!

For today’s post, I’ll be using VMchooser once again as an example. A lot has changed since the last post on the anatomy of this application. Here is an updated drawing of the high level architecture ;

Underneath you can see the flow that’ll be used when doing a “Bulk Mapping” (aka “CSV Upload”). The webapp (“frontend”) will store the CSV as a blob on the storage account. Once a new blob arrives, a function will be triggered that will examine the CSV file and put every entry onto a queue. Once a message is published onto the queue, another function will start processing this message. By using this pattern, I’m transforming this job into a parallel processing job where each entry is handled (about) simultaneously. The downside of this is that there will be contention/competition for the back-end resources (being the data store). Luckily, CosmosDB can scale on the fly too… We can adapt the request units as needed; up or down! So let’s do a small PoC and see how this could work…
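To make that a bit more tangible, here is a minimal sketch of what the fan-out (“splitter”) function could look like. This is not the actual VMchooser code; the container name "csv-uploads", the queue name "vm-entries" and the class/function names are purely illustrative, and it assumes the v2 C# class-library programming model for Azure Functions.

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class BulkMappingSplitter
{
    // Fires when a new CSV blob lands on the storage account and pushes
    // every line onto a queue, so that a second (queue-triggered) function
    // can process all entries in parallel.
    [FunctionName("BulkMappingSplitter")]
    public static void Run(
        [BlobTrigger("csv-uploads/{name}")] Stream csvBlob,
        [Queue("vm-entries")] ICollector<string> entryQueue,
        string name,
        ILogger log)
    {
        using (var reader = new StreamReader(csvBlob))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                entryQueue.Add(line);
            }
        }

        log.LogInformation($"Queued all entries of {name} for processing.");
    }
}

And scaling CosmosDB comes down to adjusting the provisioned request units around the bulk run. A rough sketch with the Microsoft.Azure.Cosmos SDK (again, the database and container names are placeholders):

using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ThroughputScaler
{
    // Bump (or lower) the provisioned RU/s before/after a bulk mapping run.
    public static async Task ScaleAsync(string connectionString, int requestUnits)
    {
        using (var client = new CosmosClient(connectionString))
        {
            Container container = client.GetContainer("vmchooser", "entries");
            await container.ReplaceThroughputAsync(requestUnits);
        }
    }
}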

Continue reading “Serverless On-Demand Scaling : Pushing the pedal when you need it…”

Ever tried the mix of Azure, SQL Server, Storage Spaces Direct & Scale Out File Server?

Introduction

A while back I posted a blog post on how to set up a highly available SQL cluster on Azure using SIOS Datakeeper. As I’m an avid believer in Storage Spaces, I was looking for a moment to test drive “Storage Spaces Direct” on Azure. Today’s blog post will cover that journey…

UPDATE (01/02/2017) ; At this point there is no official support for this solution, so do not implement it in production yet. As soon as this changes, I’ll update this post accordingly!

UPDATE (08/02/2017) ; New official documentation has been released. Though I cannot find official support statements.

UPDATE (30/03/2017) ; A few days after the previous update, the following post was made => Deploying IaaS VM Guest Clusters in Microsoft Azure

 

Solution Blueprint

What do we want to build today?

  • A two-node cluster which will be used as a Failover Cluster Instance for MSSQL.
  • As a quorum, we’ll be using the cloud witness feature of Windows 2016 in combination with an Azure storage account.
  • In regards to storage, we’ll create a Scale Out File Server setup which will leverage the local disks of the two servers via Storage Spaces Direct.
  • To achieve a “floating IP”, we’ll be using the Azure LoadBalancer setup (as we did in the last post).

(Image: SQL cluster with Storage Spaces Direct & Scale-Out File Server on Azure)

 

Continue reading “Ever tried the mix of Azure, SQL Server, Storage Spaces Direct & Scale Out File Server?”

Azure : Benchmarking SQL Database Setups – To measure is to know, and being able to improve…

Introduction

To measure is to know. If you cannot measure it, you cannot improve it!

Today’s post will go more in-depth on what performance to expect from different SQL implementations in Azure. We’ll be focussing on two kinds of benchmarks ; the storage subsystem and an industry benchmark for SQL. This way we can compare the different scenarios to each other in the most neutral way possible.

(Image: To measure is to know - storage & database performance)

Test Setup

As a test bed I started from one of my previous posts

(Image: Azure SQL cluster with SIOS DataKeeper high availability)

The machines I used were DS1 v2 machines when using single disks and DS2 v2 machines when using multiple disks. In terms of OS, I’ll be using Windows 2012 R2, with MSSQL 2014 (12.0.4100.1) as the database.

Continue reading “Azure : Benchmarking SQL Database Setups – To measure is to know, and being able to improve…”

Azure : MSSQL – Impact of parameters on Backup Performance

A while ago we experienced an issue with slow backup performance on an Azure VM. In light of that experience, I went into my lab environment to test the impact of different parameters…

What is the test environment we’ll be using?

  • Virtual Machine : D2
  • Storage : Z-drive, based upon storage spaces, with one disk (100GB / LRS / Standard)
  • Paths : SQL Binaries, Data & Log files on Z-drive. Backup destination used was the temporary drive (D).
  • Database : AdventureWorks, increased several times up to 1.5GB (using a set of SQL commands by Jonathan Kehayias), compared to the original 250MB.

 

Test Run A
Command

BACKUP DATABASE [AdventureWorks2012]
TO DISK = N'd:\Backup\Test.bak' WITH NOFORMAT, NOINIT,
NAME = N'HC_TST-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10

Result

Processed 192888 pages for database ‘AdventureWorks2012’, file ‘AdventureWorks2012_Data’ on file 1.
Processed 405530 pages for database ‘AdventureWorks2012’, file ‘AdventureWorks2012_Log’ on file 1.
BACKUP DATABASE successfully processed 598418 pages in 211.441 seconds (22.110 MB/sec).

Comment

We started out using a basic backup command… Nothing fancy, and we got 22MB/s, so not that great either.

 

Test Run B
Command

BACKUP DATABASE [AdventureWorks2012]
TO DISK = N'd:\Backup\Test.bak' WITH INIT,
NAME = N'HC_TST-Full Database Backup', SKIP, COMPRESSION, NOREWIND, NOUNLOAD, STATS = 10

Result

Processed 192888 pages for database ‘AdventureWorks2012’, file ‘AdventureWorks2012_Data’ on file 1.
Processed 405535 pages for database ‘AdventureWorks2012’, file ‘AdventureWorks2012_Log’ on file 1.
BACKUP DATABASE successfully processed 598423 pages in 152.066 seconds (30.744 MB/sec).

Comment

Now we added compression, and the backup throughput went up by almost 40% (from 22.1 to 30.7 MB/sec).

 

Test Run C
Command

BACKUP DATABASE [AdventureWorks2012original]
TO DISK = N'd:\Backup\Test.bak' WITH INIT,
NAME = N'HC_TST-Full Database Backup', SKIP, COMPRESSION, NOREWIND, NOUNLOAD, STATS = 10

Result

Processed 24192 pages for database ‘AdventureWorks2012original’, file ‘AdventureWorks2012_Data’ on file 1.
Processed 2 pages for database ‘AdventureWorks2012original’, file ‘AdventureWorks2012_Log’ on file 1.
BACKUP DATABASE successfully processed 24194 pages in 6.776 seconds (27.894 MB/sec).

Comment

Running the same command on the original (smaller) database yielded throughput that was a tad lower.

 

Test Run D
Command

BACKUP DATABASE [AdventureWorks2012]
TO DISK = N'd:\Backup\Test.bak' WITH INIT,
NAME = N'HC_TST-Full Database Backup', SKIP, COMPRESSION, NOREWIND, NOUNLOAD, STATS = 10, BLOCKSIZE = 65536, MAXTRANSFERSIZE = 2097152

Result

Processed 192888 pages for database ‘AdventureWorks2012’, file ‘AdventureWorks2012_Data’ on file 1.
Processed 405539 pages for database ‘AdventureWorks2012’, file ‘AdventureWorks2012_Log’ on file 1.
BACKUP DATABASE successfully processed 598427 pages in 119.499 seconds (39.123 MB/sec).

Comment

Now we’ll add a fixed block size and max transfer size. Be aware that the default block size is 65536 (64KB) for tape devices and 512 bytes for anything else. So we beefed that up and got a bit less than double the throughput from where we started.

 

Test Run E
Command

BACKUP DATABASE [AdventureWorks2012]
TO DISK = N'd:\Backup\Test.bak' WITH INIT,
NAME = N'HC_TST-Full Database Backup', SKIP, COMPRESSION, NOREWIND, NOUNLOAD, STATS = 10, BUFFERCOUNT = 20, BLOCKSIZE = 65536, MAXTRANSFERSIZE = 2097152

Result

Processed 192888 pages for database ‘AdventureWorks2012’, file ‘AdventureWorks2012_Data’ on file 1.
Processed 405546 pages for database ‘AdventureWorks2012’, file ‘AdventureWorks2012_Log’ on file 1.
BACKUP DATABASE successfully processed 598434 pages in 100.559 seconds (46.492 MB/sec).

Comment

Now we’ll add a buffer count and things heat up even more.

 

Test Run F
Command

BACKUP DATABASE [AdventureWorks2012]
TO DISK = N'd:\Backup\Test.bak' WITH INIT,
NAME = N'HC_TST-Full Database Backup', SKIP, COMPRESSION, NOREWIND, NOUNLOAD, STATS = 10, BUFFERCOUNT = 100, BLOCKSIZE = 65536, MAXTRANSFERSIZE = 2097152

Result

Processed 192888 pages for database ‘AdventureWorks2012’, file ‘AdventureWorks2012_Data’ on file 1.
Processed 405550 pages for database ‘AdventureWorks2012’, file ‘AdventureWorks2012_Log’ on file 1.
BACKUP DATABASE successfully processed 598438 pages in 97.435 seconds (47.983 MB/sec).

Comment

And let’s pump up the buffer count… a tad better.

 

Test Run G
Command

BACKUP DATABASE [AdventureWorks2012original]
TO DISK = N'd:\Backup\Test.bak' WITH INIT,
NAME = N'HC_TST-Full Database Backup', SKIP, COMPRESSION, NOREWIND, NOUNLOAD, STATS = 10, BUFFERCOUNT = 100, BLOCKSIZE = 65536, MAXTRANSFERSIZE = 2097152

Result

Processed 24192 pages for database ‘AdventureWorks2012original’, file ‘AdventureWorks2012_Data’ on file 1.
Processed 3 pages for database ‘AdventureWorks2012original’, file ‘AdventureWorks2012_Log’ on file 1.
BACKUP DATABASE successfully processed 24195 pages in 4.437 seconds (42.600 MB/sec).

Comment

Doing the same with the smaller database again gives us lower throughput.

 

Test Run H
Command

BACKUP DATABASE [AdventureWorks2012]
TO DISK = N'd:\Backup\Test.bak' WITH INIT,
NAME = N'HC_TST-Full Database Backup', SKIP, COMPRESSION, NOREWIND, NOUNLOAD, STATS = 10, BUFFERCOUNT = 100, BLOCKSIZE = 65536, MAXTRANSFERSIZE = 2097152

Result

Processed 648352 pages for database ‘AdventureWorks2012’, file ‘AdventureWorks2012_Data’ on file 1.
Processed 1592447 pages for database ‘AdventureWorks2012’, file ‘AdventureWorks2012_Log’ on file 1.
BACKUP DATABASE successfully processed 2240799 pages in 260.300 seconds (67.254 MB/sec).

Comment

I ran the “increase” script a few more times to get a database of about 5.2GB. The performance is better than with the two smaller ones.

Conclusion

  • Parameters : Choosing your parameters matters A LOT. We had a case that went from 9MB/s to 90MB/s by using the same set of parameters as above… (A scripted version of the tuned command is shown below.)
  • Database Size : There is a ramp-up time involved… Small databases experience a penalty due to this.
  • Design : Be aware of your disk subsystem design. Using one standard disk will provide you with 500 IOPS. If you use this disk for both source & target, then you don’t need to be a professor to know that this won’t yield great performance.
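If you want to apply these tuned parameters from code (say, from a scheduled maintenance job) instead of typing them in SSMS, a minimal C# sketch could look like the following. The connection string and paths are placeholders; the progress messages produced by STATS = 10 are surfaced through the connection’s InfoMessage event.

using System;
using System.Data.SqlClient;

public static class TunedBackup
{
    public static void Run(string connectionString)
    {
        // Same parameter set as test run F: compression, a larger block size,
        // a larger transfer size and a higher buffer count.
        const string backupCommand = @"
            BACKUP DATABASE [AdventureWorks2012]
            TO DISK = N'd:\Backup\Test.bak' WITH INIT,
            NAME = N'HC_TST-Full Database Backup', SKIP, COMPRESSION,
            NOREWIND, NOUNLOAD, STATS = 10,
            BUFFERCOUNT = 100, BLOCKSIZE = 65536, MAXTRANSFERSIZE = 2097152";

        using (var connection = new SqlConnection(connectionString))
        {
            // Print the 10% progress messages that STATS = 10 generates.
            connection.InfoMessage += (sender, e) => Console.WriteLine(e.Message);
            connection.Open();

            using (var command = new SqlCommand(backupCommand, connection))
            {
                command.CommandTimeout = 0; // a backup easily outruns the 30 second default
                command.ExecuteNonQuery();
            }
        }
    }
}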