Hardening your Azure Storage Account by using Service Endpoints


Earlier this week I received a twofold question: "Does a service endpoint go over the internet? Because when I block the storage account tags with an NSG, my connection towards the storage account stops." Let's look at the following illustration:


The first thing to mention here is that the storage account (at this time) always listens on a public IP address. The funky thing is that Azure has a capability called "Service Endpoint", which I already covered briefly in the past. For argument's sake, I've made a distinction in the above illustration between the "Azure Backbone" and the "Azure SDN". A more correct representation might have been "internal" & "external" Azure Backbone, in terms of the IP address space used. So see the "Azure Backbone" in the above drawing as the public IP address space, where all public addresses reside, and the "Azure SDN" as the one that covers the internal flows. Also be aware that an Azure VNET should only use address spaces as described by RFC 1918. So why did I depict it like this? To indicate that there are different flows:

  • Connections from outside of Azure (“internet”)
  • Connections from within the Microsoft backbone (“Azure Backbone”)
  • Connections by leveraging a service endpoint

So how does the service endpoint work?

So to answer the question stated above:

  • Q: Does a service endpoint go over "the internet"?
  • A: Define "internet"…
    • If you mean that it uses a public IP address instead of an internal one? Then yes.
    • If you mean that it leaves the Microsoft backbone? Then no.
    • If you mean that the service is accessible from the internet? Unless you open up the firewall, it won't be (by default, when a service endpoint is configured).


  • Q: When I block the storage tag in my network security group ("NSG"), the traffic stops. How come?
  • A: The NSG is active at the NIC level. The storage account, even when using a service endpoint, will still use its public IP. As this public IP is listed in the ranges configured in the service tag, you'll effectively be blocking the service. This might be your objective… Though if you do this to "lock down the internet flow", you won't achieve the result you wanted. You should leverage the firewall functionality of the service for which the service endpoint was used.
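To illustrate that answer, a deny rule on the Storage service tag can be sketched with the Azure CLI. The resource group and NSG names below are made up for this sketch; the point is that such a rule also kills the service-endpoint traffic, because that traffic still targets the storage account's public IP:

```shell
# Hypothetical names; adjust to your environment.
RG=my-lab-rg
NSG=vm-subnet-nsg

# Deny all outbound traffic to the Storage service tag. This also
# blocks traffic that would have used the service endpoint, since
# the destination is still a public IP covered by that tag.
az network nsg rule create \
  --resource-group "$RG" \
  --nsg-name "$NSG" \
  --name DenyStorageOutbound \
  --priority 100 \
  --direction Outbound \
  --access Deny \
  --protocol '*' \
  --destination-address-prefixes Storage \
  --destination-port-ranges '*'
```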


Deep Dive

As always, let's do a deep dive to experience this flow! I set up the above drawing in my personal lab:

So we have a VNET with two subnets. Each subnet has one VM inside of it.

We've set up a service endpoint for Azure Storage on the first subnet.

On the storage account, the firewall allows only connections from that first subnet.
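The lab above can be sketched with the Azure CLI. All names are made up for this sketch (the storage account name must be globally unique):

```shell
RG=my-lab-rg
LOCATION=westeurope

# VNET with two subnets; the service endpoint is enabled on SUBNET001 only.
az network vnet create -g "$RG" -n LAB-VNET -l "$LOCATION" \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name SUBNET001 --subnet-prefixes 10.0.1.0/24
az network vnet subnet update -g "$RG" --vnet-name LAB-VNET -n SUBNET001 \
  --service-endpoints Microsoft.Storage
az network vnet subnet create -g "$RG" --vnet-name LAB-VNET -n SUBNET002 \
  --address-prefixes 10.0.2.0/24

# Storage account: deny by default, then allow only SUBNET001.
az storage account create -g "$RG" -n mylabstorage001 -l "$LOCATION" \
  --sku Standard_LRS
az storage account update -g "$RG" -n mylabstorage001 --default-action Deny
az storage account network-rule add -g "$RG" --account-name mylabstorage001 \
  --vnet-name LAB-VNET --subnet SUBNET001
```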

(Note: The above highlighted check box was even disabled/unchecked before the tests.) Now let’s create a public container.

So if I want to browse "inside" of that container, I'll receive an "Access Denied". Which is logical, as the computer I was using to access the portal was not whitelisted.

Same for other endpoints…

Now if I add my client IP:
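Adding the client IP can be scripted as well; a sketch, reusing the made-up storage account name from above:

```shell
# Whitelist the public IP of the client machine on the storage firewall.
MY_IP=$(curl -s https://ifconfig.me)
az storage account network-rule add \
  --resource-group my-lab-rg \
  --account-name mylabstorage001 \
  --ip-address "$MY_IP"
```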

Then I’m able to access the container ;

So I’m uploading a file which we’ll be using in our tests later on ;

Once uploaded, let’s grab the url of this object ;

And I’m able to reach it from a browser on my client ;

Likewise if I would use the command line with “wget” ;
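The wget test looks roughly like this (the blob URL is an example; substitute your own):

```shell
# Download the test blob; this fails with an HTTP 403 when the
# client IP is not whitelisted and no service endpoint route applies.
wget https://mylabstorage001.blob.core.windows.net/public/testfile.txt
```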

Now let's remove my client IP from the whitelist…

And I’m not able to download the file anymore.


Now let's test from the VM in the 2nd subnet (SUBNET002, which does not have the service endpoint linked), and we'll see that we're unable to download the file from this subnet.

If we do the same thing from the VM in the 1st subnet (SUBNET001, which has the service endpoint), we're able to download the file!

Now let's take a look at the effective routes of that VM, and we'll notice routes that point towards the PIPs (Public IPs) of the storage service, with next hop type "VirtualNetworkServiceEndpoint".
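You can pull the same view with the CLI (the NIC name is made up for this sketch):

```shell
# Show the effective route table of the VM's NIC. With a service
# endpoint enabled, you'll see entries with next hop type
# "VirtualNetworkServiceEndpoint" covering the service's public ranges.
az network nic show-effective-route-table \
  --resource-group my-lab-rg \
  --name vm001-nic \
  --output table
```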

If we do the same on the 2nd VM, we'll notice that these routes are missing… So the 2nd VM will take the route via the public internet, where it'll be blocked by the firewall of the service at hand (in this case, Azure Storage).

Now let's add the PIP of the 2nd VM to the storage account:

And you'll notice that this is not possible. BAZINGA. All logic dictates that this should work…
(Note: This is actually a "gotcha" and "by design" in the case of Azure Storage: IP network rules don't apply to traffic originating from within the same Azure region, so whitelisting the VM's public IP gets you nowhere. We'll do the same test with Azure SQL later on, which does not have this "gotcha".)

Now if we open everything up on the firewall:

Then we’ll be able to access the file from our 2nd VM too.

Now let's harden our implementation again by using a service endpoint:
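Re-hardening is the same network-rule dance in reverse; a sketch with the made-up names from before:

```shell
# Lock the account down again and allow only the subnet that has
# the service endpoint; SUBNET002 then loses access once more.
az storage account update -g my-lab-rg -n mylabstorage001 \
  --default-action Deny
az storage account network-rule add -g my-lab-rg \
  --account-name mylabstorage001 \
  --vnet-name LAB-VNET --subnet SUBNET001
```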

And you'll notice that our 2nd VM is unable to access the file again:

Where the 1st VM is able to access the file ;

Now we're going to introduce the NSG that blocks the access from the VM to "Storage". (Note/Correction: The screenshot depicts "Allow", but the configuration was set to "Deny".)

And … boom … we’re waiting…

… and waiting…

So we're unable to access the file. As expected: we're blocking the connection earlier on, so the service endpoint isn't even reached, as the traffic is blocked at the NIC (Network Interface Card) by the NSG (Network Security Group).


Testing “the gotcha” with Azure SQL as an alternative

Now let’s do the same with Azure SQL. We’re setting up a service endpoint with the Azure SQL Server.
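For Azure SQL, the subnet gets the Microsoft.Sql service endpoint and the logical server gets a matching virtual network rule. A sketch, with a made-up server name:

```shell
# Enable the SQL service endpoint on the first subnet (alongside
# the existing Storage one)...
az network vnet subnet update -g my-lab-rg --vnet-name LAB-VNET -n SUBNET001 \
  --service-endpoints Microsoft.Storage Microsoft.Sql

# ...and create the matching virtual network rule on the SQL server.
az sql server vnet-rule create -g my-lab-rg --server mylabsqlsrv001 \
  -n AllowSubnet001 \
  --vnet-name LAB-VNET --subnet SUBNET001
```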

Now let’s test our 1st VM (from SUBNET001), and that works …

Where our 2nd VM (from SUBNET002) fails … (as expected)

Now let's add (whitelist) the PIP (Public IP) of this 2nd VM in our firewall:
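Unlike with Azure Storage, this works on Azure SQL via a plain firewall rule. A sketch; the VM's public IP below is an example value:

```shell
# Whitelist the 2nd VM's public IP on the logical SQL server.
# Azure SQL honours this even for same-region traffic, which is
# exactly the difference with the Azure Storage "gotcha" above.
az sql server firewall-rule create -g my-lab-rg --server mylabsqlsrv001 \
  -n AllowVm002Pip \
  --start-ip-address 20.50.1.10 --end-ip-address 20.50.1.10
```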

And now we’re able to reach the Azure SQL DB via the public route.


Closing Thoughts

  • By leveraging a "service endpoint", you can harden your service so that it only accepts connections from a specific subnet.
  • A "mixed" scenario where subnet X uses a service endpoint and subnet Y uses the public space is possible (unless you're linking it to Azure Storage => TIP: create multiple service endpoints then).
  • With a "service endpoint", the Azure service still leverages "public IP addresses" (for the endpoint of the service). It will not get a private IP from the subnet you linked.
  • Network access security is of course only one aspect of security. Also look at access control in terms of hardening: for Azure Storage, think in terms of AAD integration or SAS tokens; likewise for Azure SQL with AAD or SQL authentication.

4 thoughts on "Hardening your Azure Storage Account by using Service Endpoints"

  1. Problems that I found:

    Using VNET/subnet restrictions together with a SAS token is not working.

    With the same sample code, if I use a connection string with account name and key, it works well. If I change to a SAS token, it is blocked (HTTP 403).

    Did you find something like that?

  2. Really nice and clear article!
    If I have a web app and a SQL database, for example, and both are configured on the same VNET, do I also need (or gain anything from) service endpoints, or is it not relevant?

    1. If you have an App Service Environment and Azure SQL Managed Instance, then they are "injected" into the VNET, where they are part of your network just like a typical VM. So you will not need a service endpoint / private link at that time.
