Azure: Performance limits when using MSSQL data files directly on a Storage Account

Introduction

In a previous post I explained how you can integrate MSSQL with Azure Storage by storing the data files directly on a storage account.

(Screenshot: SQL Server data files on the storage account, seen from the kvaessql21 VM)
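As a quick refresher, the setup boils down to a container-scoped SAS credential plus database files that point at blob URLs. The sketch below uses placeholder names and a placeholder SAS token, not the exact script from the previous post:

    -- The credential must be named after the container URL when a SAS token is used
    CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/data]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET   = 'sv=2015-04-05&sr=c&sig=...';   -- container-level SAS token (placeholder)
    GO

    -- The data and log files live directly on the storage account as page blobs
    CREATE DATABASE [DemoDb]
    ON (NAME = DemoDb_data,
        FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/DemoDb_data.mdf')
    LOG ON (NAME = DemoDb_log,
            FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/DemoDb_log.ldf');
    GO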

Now this made me wonder: what are the performance limitations of this setup? After doing some research, the basic rule is that the same logic applies to “virtual disks” as to the “data files”… Why is this? They are both “blobs”; the virtual disk is a blob of type “disk”, and the data files are stored as “page blobs”.

(Screenshot: Pricing - Cloud Storage | Microsoft Azure)


So what can you expect?

  • Standard Storage
    • 20k IOPS per storage account
    • Max. 60MB/s and 500 IOPS per blob (see the sketch below the list)
      (exception: basic-tier machines have a lower limit of 300 IOPS)
    • Max. size is 1TB (=1023GB)
    • GRS is supported, though not consistently across multiple blobs (geo-replication is asynchronous and per blob)
  • Premium Storage
    • Disk Level
      • P10 : 128GiB / 500 IOPS / 100MB/s
      • P20 : 512GiB / 2300 IOPS / 150MB/s
      • P30 : 1024GiB / 5000 IOPS / 200MB/s
    • Storage Account
      • Max. total size is 35TB
      • Max. 50 Gbps
    • GRS not Supported!

Source: Azure Storage Scalability and Performance Targets
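Because the per-blob numbers apply to every data file separately, spreading a database over several files (and thus several page blobs) lets the aggregate scale towards the storage-account limits instead of the single-blob limit. A minimal sketch, again with hypothetical names:

    -- Each data file is its own page blob and gets its own 500 IOPS / 60 MB/s budget
    -- on standard storage; the total is then capped by the account limits
    ALTER DATABASE [DemoDb]
    ADD FILE (NAME = DemoDb_data2,
              FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/DemoDb_data2.ndf',
              SIZE = 10GB,
              MAXSIZE = 1023GB);
    GO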


Experience from the field

A few points I want to share with you… When configuring the data files, set the maximum file size to 1TB (1023GB). This is the limit of your blob, and you do not want to get any funky error messages when a file tries to grow past it.

(Screenshot: maximum file size set to 1TB on the kvaessql21-4 VM)
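The same cap can be set with T-SQL instead of the GUI (database and file names are examples):

    -- 1023 GB is the page blob ceiling; capping MAXSIZE keeps autogrowth
    -- from running into it with an unclear error
    ALTER DATABASE [DemoDb]
    MODIFY FILE (NAME = DemoDb_data, MAXSIZE = 1023GB);
    GO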


You are able to connect to data files across regions. Be aware that traffic sent outside of the region of your storage account will be billable! This means that if you configure the SQL backup to go to a storage account, you will not be charged for the traffic, unless the origin server is also located in Azure and the backup target is in another region (ingress into Azure is free, but cross-region egress is billed).
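As an example, backing up to URL (placeholder names; this assumes a SAS credential exists for the backups container, created in the same way as above): keep that container in the same region as the VM and the traffic stays on the free side of the rule.

    -- SQL Server 2016 style backup to URL; the container-scoped credential for
    -- /backups is assumed to exist, and the container sits in the VM's own region
    BACKUP DATABASE [DemoDb]
    TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/DemoDb_full.bak'
    WITH COMPRESSION, STATS = 10;
    GO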

Last, but not least, do not forget the impact of the expiry date of your SAS token… If you used the script from my scriptbin, that has been set to 10 years. I can imagine that the server won't outlive that period, though you can argue that this is not good practice in terms of security.
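When the token does come close to its expiry date, the credential can be refreshed in place after generating a new SAS token on the storage account (a sketch with a placeholder secret):

    -- Swap in the new SAS secret before the old token expires; the 'se=' parameter
    -- in the token is the expiry date
    ALTER CREDENTIAL [https://mystorageaccount.blob.core.windows.net/data]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET   = 'sv=2015-04-05&sr=c&se=2026-05-01&sig=...';
    GO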


TL;DR

  • The same limits apply to data files as to disks, as they are both “blobs”.
  • Set the max. size of your blob to 1TB to avoid funky errors.
  • Do not forget the impact of the expiry date of your SAS token.
  • Be aware of the network billing rules when crossing region boundaries.
