Aggregating Metrics from multiple Azure Storage Accounts


When working at scale, the only way to properly handle true scale is to work with horizontal scaling options. Some services (like Cosmos DB, for instance) do this out of the box and abstract it away from the user/customer. Sometimes, though, this is something you need to facilitate yourself… In terms of Azure Storage, we’re very open in regards to our limitations. For example, at this point in time, we’re facing a maximum egress of 50 Gbps per storage account. While this is more than enough for a lot of customers, at times you need to scale beyond it. Here the solution at hand is to see the storage account as a “scale unit”, and use it for horizontal scaling. So if you need 200 Gbps, then you can partition your data across four storage accounts.
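The “scale unit” approach boils down to deterministically routing each blob to one of the partitioned accounts. A minimal sketch of such a sharding function, assuming hypothetical account names and blob names (a stable hash is used so the mapping survives process restarts):

```python
import hashlib

def pick_storage_account(blob_name: str, accounts: list) -> str:
    """Deterministically map a blob to one of the partitioned storage accounts."""
    # A stable hash (unlike Python's salted built-in hash) keeps the mapping consistent.
    digest = hashlib.sha256(blob_name.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(accounts)
    return accounts[index]

# Hypothetical scale units: four accounts of 50 Gbps each => ~200 Gbps aggregate egress.
accounts = ["mydatashard0", "mydatashard1", "mydatashard2", "mydatashard3"]
print(pick_storage_account("images/2019/photo-001.jpg", accounts))
```

Because the mapping is a pure function of the blob name, any client can locate the right account without a lookup service.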

In today’s post, we’re going to take a look at how you can aggregate these metrics into a single pane of glass. Because, at the end of the day, your operations team does not want to have a disaggregated view of all the components in play.


Important Note

All Azure teams are constantly looking to evolve their services. Please note that the limits mentioned in this post are linked to the point in time when the article was written. As many of you know, Azure keeps evolving at a vast pace, so the limits might already have changed. If you are wondering, always check the Azure service limits documentation for the most current limits that apply to GA (“Generally Available”) services!


Option A : Azure Monitor – Metrics

The first option is to use the “Metrics” blade as exposed via Azure Monitor. Just go to the “Monitoring” section and browse to “Metrics” ;

Here you can add the different storage accounts.

And you will get an aggregated view already.

This is a scenario that will work pretty well if you have a “modest amount” of storage accounts. Though if you’re looking at a double-digit number of resources, then you’ll notice that the capability was not designed for this. In my scenario, I was looking to add a bit less than 100 storage accounts, and my browser did not speak to me for about a week. 😉


Option B : Azure Monitor – Log Analytics

If you’re working at scale, you will want to aggregate the data into something “data lake”-like. In the case of Azure, the most logical choice for doing so is “Log Analytics”. It can be the sink for a lot of data sources, and Azure resources are also covered. Once you set up the integration, you can find the data under “LogManagement” and then “AzureMetrics” ;

If you click on the “eye” (preview) icon, a query that returns the 50 most recent metric records is already prepopulated.

Now let’s get a bit more creative…

AzureMetrics
| where MetricName == "Egress"
// TotalGbps: Total is bytes per minute => / 60 (minute to second), / 1000 three times (bytes to gigabytes), * 8 (bytes to bits)
| project TimeGenerated, Resource, TotalGbps = Total / 60 / 1000 / 1000 / 1000 * 8
| render areachart
Which will render this nice chart! 😉

Next up, we can pin this view to our Azure dashboard ;

Which will already give our operations team a much better insight into what is happening! In addition, do note that you can save your queries for easy reference by anyone in the team… 😉


Gotcha : Azure Monitor is in transition…

As some of you might have noticed, there are two “Monitoring” sections in the Azure portal. As mentioned, Azure keeps evolving! We’re currently in a transition between the “Classic” (previous) monitoring implementation and a new one.

Where both still show what you need to see…

Though the new one is currently still missing the “Diagnostic logs” pane.

And as you can notice, the option to send to Event Hubs & Log Analytics seems to have been removed? While the options are not visible in the portal anymore, you can still set them up via the command line / REST API.

(Note : The system is currently in transition. The objective of the team behind this is to ensure that feature parity is achieved as soon as possible.)


Setting up Log Analytics via the command line (PowerShell)

So what did I do to let the Azure Monitor metrics flow into Log Analytics ;

Get-AzureRmResource -ResourceType Microsoft.Storage/StorageAccounts -TagName "PocStorageMetrics" -TagValue "LogAnalytics" | ForEach-Object { $_.ResourceId }

Get-AzureRmResource -ResourceType Microsoft.Storage/StorageAccounts | Where-Object { $_.Location -Like "westeurope" -and $_.Name -Like "*kvaes" } | ForEach-Object { Set-AzureRmResource -Tag @{ PocStorageMetrics="LogAnalytics" } -ResourceId $_.ResourceId -Force }

Get-AzureRmResource -ResourceType Microsoft.Storage/StorageAccounts -TagName "PocStorageMetrics" -TagValue "LogAnalytics" | ForEach-Object { $_.ResourceId }

$loganalyticsid = $(Get-AzureRmResource -ResourceGroupName "loganalytics" -ResourceType Microsoft.OperationalInsights/workspaces).ResourceId

Get-AzureRmResource -ResourceType Microsoft.Storage/StorageAccounts -TagName "PocStorageMetrics" -TagValue "LogAnalytics" | ForEach-Object { Set-AzureRmDiagnosticSetting -ResourceId $_.ResourceId -WorkspaceId $loganalyticsid -Enabled $true }


In essence ;

  • I’m tagging the storage accounts I want to have monitored (Tag = PocStorageMetrics | Value = LogAnalytics), filtered to a specific location (“westeurope”) and to names ending in “kvaes”.
    (command #2)
  • Verifying that the accounts were tagged according to my expectations.
    (commands #1 & #3)
  • Retrieving the resource ID of my Log Analytics workspace.
    (command #4)
  • Enabling all the storage accounts (that were tagged according to my expectations) to send their data to the Log Analytics workspace.
    (command #5)


Why did I do it like this? This way of setting it up is “idempotent”: I can run it as an automation job to ensure that new storage accounts are added automatically. Next to that, it provides me with the flexibility to choose which storage accounts should be targeted.


Are there costs involved?

When using option A, there is no cost involved; this is part of the platform! When using option B, you are charged per GB for ingest & data retention. In addition, if you send data between regions, do note that bandwidth charges will also occur. That being said… the amount of data generated here typically fades away against the rest of the workload.

On a busy day I noticed that I was at about 20 MB/day for about 25 storage accounts. That would result in about 620 MB/month. Granted, it’s a cost. Though when you are sharding across XX storage accounts, I’m pretty sure this will not be the cost optimization you should be focusing on. 😉
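A quick back-of-envelope check of that estimate, extrapolated to the roughly 100-account scenario from earlier (the per-account volume is an assumption derived from the numbers above):

```python
# Observed in the post: ~20 MB/day across ~25 storage accounts.
mb_per_day_per_account = 20 / 25
accounts = 100   # extrapolating to the ~100-account scenario
days = 31

monthly_mb = mb_per_day_per_account * accounts * days
print(monthly_mb)                    # → 2480.0
print(round(monthly_mb / 1024, 2))   # → 2.42 (GB ingested per month)
```

Even at 100 accounts, the ingest stays in the single-digit-GB-per-month range.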


Closing Thoughts

Aggregating the metrics into Log Analytics is a great way to assist your operations team in the day-to-day business and allow them to have a single pane of glass on the Azure resources. Next to that, in my humble opinion, you should also integrate Application Insights into your code. There you can leverage custom metrics to include things like the time needed to retrieve/upload a blob to a storage account. You could even look at TrackDependency to do this! By doing so, you’re broadening the scope and providing insight into the full end-to-end picture.
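To illustrate the TrackDependency idea without tying it to a specific SDK, here is a minimal Python sketch that records how long a (stand-in) blob operation takes; the context-manager name and the in-memory sink are hypothetical, and in a real setup you would forward each record via the Application Insights client instead:

```python
import time
from contextlib import contextmanager

@contextmanager
def track_dependency(name: str, sink: list):
    """Sketch of a TrackDependency-style timer: record the duration and
    success of an operation (e.g. a blob upload) as a custom metric record."""
    start = time.perf_counter()
    success = True
    try:
        yield
    except Exception:
        success = False
        raise
    finally:
        sink.append({
            "name": name,
            "duration_s": time.perf_counter() - start,
            "success": success,
        })

telemetry = []
with track_dependency("blob-upload", telemetry):
    time.sleep(0.01)  # stand-in for the actual storage call

print(telemetry[0]["name"], telemetry[0]["success"])  # → blob-upload True
```

Because failures are recorded too (the `finally` block runs even when the wrapped call raises), the same records can drive success-rate dashboards alongside the latency ones.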

