Azure Networking ; Service Endpoints, Private Links and VNET Injection

Introduction

Today’s post is inspired by the combination of a Twitter thread from earlier today… and having had the same conversation with a customer earlier today too! Truth be told, there are a variety of networking options when integrating Azure services with your VNET (Virtual Network). So let us go over them!

 

The different options

When you want to integrate PaaS services, you have several options on how to hook them into your VNET ;

 

Now let us take a look at the differences by using the following schematic ;

 

You will see that both the “SQL Managed Instance” and “Azure Kubernetes Service” reside inside of the virtual network. This is what used to be called “VNET injection”, where you deploy the service directly into the VNET. Typically you give each of those services its own subnet, without any other resources inside of it, to avoid interoperability issues. From then on you can leverage private traffic within your VNET and have hybrid integration scenarios (aka “Connect to On-Premises”).

When looking towards the “Azure Storage”, you can see two colors ;

  • Purple indicates a “Private Link” & “Private Endpoint”. The private link is the line from the service to the dot, where the dot is actually the private endpoint, which will have a private IP belonging to the range of the subnet (within the VNET) it belongs to. This means that the private endpoint connects you privately and securely to a service powered by Azure Private Link. The nuanced difference between “VNET injection” and “Private Link” is that the first is used for resources dedicated to you (AKS workers, SQL Managed Instance, …) and the latter is used for services that share resources underneath (AKS master nodes, Azure SQL DB, Azure Storage, …). This will also allow you to connect to services in a hybrid integration scenario.
  • The orange link depicts the concept of a service endpoint. It extends your virtual network’s private address space to a shared service. The endpoints also extend the identity of your VNET to the Azure services over a direct connection. Endpoints allow you to secure your critical Azure service resources to only your virtual networks. Traffic from your VNET to the Azure service always remains on the Microsoft Azure backbone network. If you want to understand more of the mechanics underneath, check the following post from when I took my first glance at them. Important to know is that the service will still have its public IP, and that you will leverage that public IP to connect to it. It is used to connect from inside of the VNET to a public endpoint, while you can configure the firewall of the public service to filter on your private range. It cannot be used to connect to a service in a hybrid integration scenario. (The small sketch after this list illustrates the practical difference in which IP you end up connecting to.)
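To make that difference tangible, here is a minimal Python sketch (assuming a hypothetical storage account name, and that for the Private Endpoint scenario the matching privatelink DNS zone is linked to your VNET). Run from inside the VNET, it simply shows which IP your traffic would target ;

```python
import socket

# Hypothetical storage account; with a Private Endpoint plus the matching
# "privatelink.blob.core.windows.net" private DNS zone linked to the VNET,
# this name resolves to a private IP from your subnet. With a service
# endpoint (or no integration at all), it keeps resolving to a public IP.
STORAGE_FQDN = "mystorageaccount.blob.core.windows.net"

ip = socket.gethostbyname(STORAGE_FQDN)
print(f"{STORAGE_FQDN} resolves to {ip}")

# Crude RFC 1918 check to see which path is in play.
is_private = ip.startswith(("10.", "192.168.")) or ip.startswith(
    tuple(f"172.{i}." for i in range(16, 32))
)
print("Private Endpoint path" if is_private else "Public endpoint (service endpoint or direct)")
```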

 

Not all options work for all services!

Though be aware that not all services have all options available… Check the documentation of the service at hand against the above options. I know this makes it a bit complex at times, though the capabilities are constantly evolving! And sometimes network integration is only unlocked in premium editions of a service. For example, in the past you could only get a fully private scenario for App Service by leveraging the Isolated edition (or “App Service Environment”), which made it possible to inject the service into your VNET. Though it had a starting cost of about 1k USD… With the arrival of Private Link/Endpoint, this is not a requirement anymore. API Management, on the other hand, does still require either the Premium (production) or Developer (non-production) variant of the service to unlock VNET integration.

 

Closing Thoughts

Things might be confusing at times. Though I hope this brief post helps you position the different options you have in terms of network integration.

  • Service Endpoints ; Connect in a hardened way from a VNET to a shared service
  • Private Link ; Give a shared service a private endpoint in your VNET
  • Deploy inside of a VNET (“VNET Injection”) ; Deploy a service privately for you into your VNET

Azure : What do I put in front of my (web) application?

Introduction

In almost every design session, the following question pops up…

“What do I put in front of my web application to secure it?”

Today we will take a look at the different options at hand. Or better said … What is the difference between the following services?

Azure CDN vs Azure Front Door vs Azure Application Gateway (WAF) vs Azure Traffic Manager

 

TL;DR

If you do not want to go through the entire post, here is a brief summary of the different options ;

My gut feeling tells me this will already help out a lot! 😉

 

Web Application Firewall (WAF) : Azure Front Door vs Azure Application Gateway

Both Azure Front Door and Azure Application Gateway state that they can be configured to act as a Web Application Firewall. The key difference here is that Azure Application Gateway can do a “detection only” mode and that it supports CRS 2.2.9, 3.0, and 3.1. This means that there are out-of-the-box rules that provide a baseline security against most of the top-10 vulnerabilities that the Open Web Application Security Project (OWASP) identifies. With Azure Front Door, you need to configure the rules yourself to your liking.

In terms of networking, there also is a very important difference between the two. An Azure Application Gateway can be deployed (injected) in(to) an Azure VNET (Virtual Network). By doing so, the traffic between this gateway and your internal backend flows through a private network. With Azure Front Door, your backend is always a public endpoint. Though you can harden / lock down the connectivity between Azure Front Door and your application by restricting the ingress of the traffic, as the sketch below shows.
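One common hardening pattern (not the only one) is to restrict ingress to the AzureFrontDoor.Backend service tag and then verify, inside the application, that the request carries your Front Door instance ID. A minimal Flask sketch, assuming a hypothetical instance ID ;

```python
from flask import Flask, abort, request  # pip install flask

app = Flask(__name__)

# Hypothetical value; the real ID is shown on your Front Door resource.
EXPECTED_FDID = "00000000-0000-0000-0000-000000000000"

@app.before_request
def only_allow_front_door():
    # Front Door stamps every forwarded request with the X-Azure-FDID header.
    # Combined with an ingress restriction on the AzureFrontDoor.Backend
    # service tag, this ensures traffic really came through *your* profile.
    if request.headers.get("X-Azure-FDID") != EXPECTED_FDID:
        abort(403)

@app.route("/")
def index():
    return "Hello from behind Front Door"
```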

 

Geographic Redundancy : Azure Front Door vs Azure Traffic Manager

If you want to publish your web app across regions, then you can leverage Azure Front Door or Azure Traffic Manager to do so. The most important thing to know is that Azure Traffic Manager is DNS-based and serves as a redirection mechanism, where Azure Front Door serves as an entry point (think reverse proxy). So when using Traffic Manager, you will typically look at another reverse proxy / WAF (like Azure Application Gateway) to cover the need for a secure entry point for your application. Next to that, Azure Traffic Manager is more feature-rich in terms of routing mechanisms, though Azure Front Door (just like Azure Application Gateway) has the advantage of working at layer 7 (reference : OSI Model), and thus can also do header/path-based routing.
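To see the DNS-based nature of Traffic Manager for yourself, here is a tiny sketch (using the dnspython package and a hypothetical profile name). The answer you get back is the regional endpoint selected for you; your client then connects to it directly, without Traffic Manager ever touching the HTTP traffic.

```python
import dns.resolver  # pip install dnspython

PROFILE = "myapp.trafficmanager.net"  # hypothetical profile name

# Traffic Manager only answers DNS queries; it never proxies the traffic.
# The answer points at the regional endpoint that the configured routing
# method (performance, priority, weighted, geographic, ...) picked for us.
for rr in dns.resolver.resolve(PROFILE, "A"):
    print(f"{PROFILE} -> {rr.address}")
```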

 

Caching Content : Azure CDN vs Azure Front Door

The first thing to realize is what the original intent of both solutions was ;

  • Azure CDN is a Content Delivery Network and was designed to deliver content. Think in terms of media files (“video”) for instance.
  • Azure Front Door was built as a scalable entry point for applications (like Office 365).

Both will do caching (check the links above), though they differ in their sweet spots. Azure Front Door will cache files up to 8MB (as a side trait), where you should see this as mainly done to optimize the delivery of your “web app”. Azure CDN’s core function is caching… It was built to ensure that your files are distributed worldwide and delivered from a local POP (point of presence).

The funky thing here is that (if you ignore the Verizon and Akamai flavors of Azure CDN) both Azure CDN and Azure Front Door share the same POPs! Though the caching mechanics of both work differently.
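In both cases your origin can steer the caching behavior via standard response headers. A minimal Flask sketch of that idea (the route and file name are purely illustrative) ;

```python
from flask import Flask, Response  # pip install flask

app = Flask(__name__)

@app.route("/assets/logo.svg")
def logo():
    resp = Response(open("logo.svg").read(), mimetype="image/svg+xml")
    # Both Azure CDN and Azure Front Door honor origin Cache-Control headers
    # (when caching is enabled). Keep individual objects small (<= 8MB) if
    # Front Door does the caching; push big media files towards Azure CDN.
    resp.headers["Cache-Control"] = "public, max-age=86400"
    return resp
```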

 

DDoS

All services have DDoS protection (“Basic”), as this is native to any Azure service! It is actually a key component of the Microsoft backbone. Though you can augment the services that reside in your VNET with the “Standard” protection. That being said, do know that Azure Application Gateway does not scale as dynamically as the other services mentioned here (Azure Front Door and Azure Traffic Manager). So this could become a bottleneck for your design.

 

Closing Thoughts

With more than 100 services in Azure, you are bound to have services that overlap in terms of capabilities. I do hope this post provides you with some guidance towards the external-facing endpoints of your application! The above are the recurring patterns I see in discussions with organizations deploying web applications on Azure.

Opinion – Cloud Native, Cloud Native and Cloud Native? What I like about, and my two cents on, running Containers, Kubernetes and/or Serverless

Introduction

At the beginning of the month I posted about my experience of moving VMchooser from “Serverless” to “Containers”. As in, moving from one way of implementing a Cloud Native architecture to another… Since then, I have actually moved back to “Serverless”. Though the cogwheels in my head have been turning 24/7 on how to put everything around this into perspective. Yesterday Yves made a tweet (reply) that really made something click inside of my head…

In today’s post I’m going to try to do a “brain dump” of several thoughts that have been floating around in my mind, where I hope this will help you in your journey of “finding your perfect rock”. I will indicate what I like about the various options and what my typical advice would be to organizations looking at a given option.

Continue reading “Opinion – Cloud Native, Cloud Native and Cloud Native? What I like about, and my two cents on, running Containers, Kubernetes and/or Serverless”

Improving your code quality by linking Azure DevOps with SonarCloud

Introduction

In a customer workshop earlier this week, Hans mentioned a very nice tool (SonarCloud). He used it in his “previous life” and was very enthusiastic about it. So this immediately triggered my curiosity… 😉 As it is free for public projects, I investigated how easy it was to integrate into my existing pipelines. Which turned out to be quite easy! After browsing around a bit on how to integrate it into a YAML pipeline, I can proudly say that VMchooser is now fully hooked up with SonarCloud.

However, it did confirm my suspicion, that I am a lousy developer! 😉 Though better lousy code fulfilling a purpose than having no alternative at all?!?

Anyhow, today’s post is about the experience of moving existing pipelines to SonarCloud and investigating the results you get out of it.

Continue reading “Improving your code quality by linking Azure DevOps with SonarCloud”

Is Azure a tier 3 datacenter? And what about Service Levels in a broader sense…

Introduction

Everyone who has been working with cloud, and involved with tenders, has had the following question (in one form or another) ; “Has the cloud datacenter achieved a tier 3 (or higher) classification?” In today’s post we will delve into the specifics linked to this ask ; why do organizations ask the question, and how does it relate to cloud?

What is a “Tier 3 Datacenter”?

To better understand the concept of data-center tiers, it is important to understand that several organizations (like the Telecommunications Industry Association (TIA) and the Uptime Institute) have defined standards for data-centers.

Uptime Institute created the standard Tier Classification System as a means to effectively evaluate data center infrastructure in terms of a business’ requirements for system availability. The Tier Classification System provides the data center industry with a consistent method to compare typically unique, customized facilities based on expected site infrastructure performance, or uptime. Furthermore, Tiers enables companies to align their data center infrastructure investment with business goals specific to growth and technology strategies.
Source ; https://uptimeinstitute.com/tiers

Which typically consists of several tiers…

Four tiers are defined by the Uptime Institute :

  • Tier I : lacks redundant IT equipment, with 99.671% availability, maximum of 1729 minutes annual downtime
  • Tier II : adds redundant infrastructure, with 99.741% availability (1361 minutes)
  • Tier III : adds more data paths, duplicate equipment, and that all IT equipment must be dual-powered, with 99.982% availability (95 minutes)
  • Tier IV : all cooling equipment is independently dual-powered; adds Fault-tolerance, with 99.995% availability (26 minutes)

Source ; https://en.wikipedia.org/wiki/Data_center#Uptime_Institute_-_Data_Center_Tier_Standards 
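As a quick sanity check, those downtime figures follow directly from the availability percentages ;

```python
# Unavailability fraction times the number of minutes in a 365-day year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525600

for tier, availability in [("I", 99.671), ("II", 99.741), ("III", 99.982), ("IV", 99.995)]:
    downtime = (1 - availability / 100) * MINUTES_PER_YEAR
    print(f"Tier {tier}: {availability}% -> ~{downtime:.0f} minutes of downtime per year")
```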

So it is a classification for organizations to understand the quality of a data-center, and to be able to take a given availability into account. Though it is important to understand that this relates to “datacenter housing” (colocation) and not to the cloud service models! Why is this statement important? Because on top of that housing, additional services are delivered by cloud providers to achieve service models like IaaS, PaaS, SaaS, …

 

UPDATE ; Azure Datacenter Tier = higher than the Uptime Institute’s “Tier IV” DC Tier standard

In the following document the datacenter classifications have been documented ; https://gallery.technet.microsoft.com/Azure-Standard-Response-to-5de19cb6

From generation 1, the datacenters have been designed to meet customer SLAs and service needs of 99.999%. Given that a tier 4 datacenter is designed towards a customer SLA and service need of 99.995%, we can state that an Azure datacenter exceeds the expectations of a tier 4 datacenter.

 

Continue reading “Is Azure a tier 3 datacenter? And what about Service Levels in a broader sense…”

Azure DevOps : Operational validation with Approval Gates & Azure Monitor Alerts

Introduction

After having migrated VMchooser from a fully Serverless infrastructure to Containers, I am currently doing the opposite move, as I can start off the same code base to basically run different deployment options in Azure. I found that the serverless deployment added more value for me than the lower cost profile of the container-based alternative. That being said, one of the big learnings I had this week is that, even with an automated landscape in Terraform, some changes are rather intrusive… I should have checked the output of the terraform plan stage, but failed to do so, which resulted in downtime for VMchooser. So I was looking for a way to do operational validation in the least intrusive and most re-usable way. This led me to a solution where the Azure DevOps pipelines leverage the health-check used in the Traffic Manager deployment. This was already part of the deployment of course, and is a key aspect of understanding whether the deployment is healthy or not.

 

Gates

In order to add validation steps in our deployment process, we can leverage the concept of Gates in Azure DevOps ;

Gates allow automatic collection of health signals from external services, and then promote the release when all the signals are successful at the same time or stop the deployment on timeout. Typically, gates are used in connection with incident management, problem management, change management, monitoring, and external approval systems.

Most of the health parameters vary over time, regularly changing their status from healthy to unhealthy and back to healthy. To account for such variations, all the gates are periodically re-evaluated until all of them are successful at the same time. The release execution and deployment does not proceed if all gates do not succeed within the same interval and before the configured timeout. The following diagram illustrates the flow of gate evaluation where, after the initial stabilization delay period and three sampling intervals, the deployment is approved.
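The health signal itself can be as simple as the endpoint Traffic Manager already probes. Here is a minimal sketch of such an endpoint (the dependency checks are placeholders), which an “Invoke REST API” gate could then re-evaluate until it succeeds within the timeout ;

```python
from flask import Flask, jsonify  # pip install flask

app = Flask(__name__)

def check_database() -> bool:
    return True  # placeholder: replace with a real dependency probe

def check_storage() -> bool:
    return True  # placeholder: replace with a real dependency probe

@app.route("/health")
def health():
    # The same endpoint doubles as the Traffic Manager health-check and as
    # the signal for a deployment gate: 200 means healthy, 503 means not.
    checks = {"database": check_database(), "storage": check_storage()}
    status = 200 if all(checks.values()) else 503
    return jsonify(checks), status
```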

Continue reading “Azure DevOps : Operational validation with Approval Gates & Azure Monitor Alerts”

Leveraging Azure Tags and Azure Graph for deploying to your Blue/Green environments

Introduction

For this post I am assuming you are pretty familiar with the concept of deployment strategies (if not, check out this post by Etienne). Now these are typically seen from an application deployment level, where platforms (like for instance Kubernetes) typically have out-of-the-box mechanisms in place to do this. But what if you want to do this on an “infrastructure level”, like for instance the Kubernetes version of an Azure Kubernetes Service cluster? We could do an in-place upgrade, which will carefully cordon and drain the nodes. Though what if things go bad? We could do a Canary, Blue/Green, A/B, Shadow, … on cluster level too. Though how would we tackle the infrastructure point of view of this? That is the base for today’s post!

 

Architecture at hand

For today’s post we’ll leverage the following high level architecture ;

This project leverages Terraform under the hood. Things like DNS, Traffic Manager, Key Vault, CosmosDB, etc. are “stateful”, where their lifecycle is fully managed by Terraform. On the other hand, our Kubernetes clusters are “stateless” from an Infrastructure-as-Code point-of-view. We deploy them via Terraform, though do not keep track of them… All the lifecycle management is done by operating on the associated tags afterwards, as the sketch below illustrates.
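As a sketch of what that tag-based selection could look like, here is a hedged example using the Azure Resource Graph SDK for Python (the subscription ID and the “environment” tag are hypothetical ; adapt to your own tagging scheme) ;

```python
from azure.identity import DefaultAzureCredential         # pip install azure-identity
from azure.mgmt.resourcegraph import ResourceGraphClient  # pip install azure-mgmt-resourcegraph
from azure.mgmt.resourcegraph.models import QueryRequest, QueryRequestOptions

SUBSCRIPTION_ID = "<subscription-id>"  # placeholder
COLOR = "blue"  # hypothetical tag value marking the environment to target

client = ResourceGraphClient(DefaultAzureCredential())
query = QueryRequest(
    subscriptions=[SUBSCRIPTION_ID],
    query=f"""
    Resources
    | where type =~ 'microsoft.containerservice/managedclusters'
    | where tags['environment'] =~ '{COLOR}'
    | project name, resourceGroup
    """,
    options=QueryRequestOptions(result_format="objectArray"),
)

# Each row is a dict thanks to the objectArray result format.
for row in client.resources(query).data:
    print(f"Targeting cluster {row['name']} in resource group {row['resourceGroup']}")
```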

 

Community-Tool-of-the-day

The drawing above was not created in Visio for once. The above was made leveraging CloudSkew, which was created by Mithun Shanbhag. Always awesome to see community contributions, which we can only applaud!

Continue reading “Leveraging Azure Tags and Azure Graph for deploying to your Blue/Green environments”

Unified monitoring view in Kubernetes : Linking infrastructure monitoring with application monitoring

Introduction

Typically you notice that there are two dimensions / viewpoints when it comes to monitoring. On one side there is a team that wants to view everything related to the “infrastructure”, like for instance the Kubernetes cluster. On the other hand, there is the typical application performance monitoring that starts from the application side. Sadly enough, in a lot of cases, those two are separate islands… 😦

 

As you might know, on the Azure front you can do Application Performance Monitoring with Application Insights, and there is a really awesome integration with Azure Monitor (“Log Analytics”) from the container space (Kubernetes). Though I see you thinking it… Two separate solutions. What a lot of people forget, though, is that they are actually both using “Log Analytics” under the hood. And… that you can query across workspaces in Log Analytics! Which means that you can join the two and have an aggregated view that spans both worlds.
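Here is a hedged sketch of what such a cross-workspace query could look like from Python (the workspace/app names and the join logic are purely illustrative ; the key ingredients are the KQL workspace() and app() cross-resource functions) ;

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential  # pip install azure-identity
from azure.monitor.query import LogsQueryClient    # pip install azure-monitor-query

client = LogsQueryClient(DefaultAzureCredential())

# workspace() and app() are the KQL cross-resource functions that let a
# single query span the container-infrastructure and application islands.
QUERY = """
app('my-appinsights').requests
| summarize requests = count() by bin(timestamp, 5m)
| join kind=inner (
    workspace('my-k8s-workspace').Perf
    | where ObjectName == 'K8SContainer' and CounterName == 'cpuUsageNanoCores'
    | summarize cpu = avg(CounterValue) by bin(TimeGenerated, 5m)
) on $left.timestamp == $right.TimeGenerated
"""

response = client.query_workspace(
    workspace_id="<primary-workspace-guid>",  # placeholder
    query=QUERY,
    timespan=timedelta(hours=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```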

 

Let’s take a look!

For this test, I’ve created a k8s cluster which is linked to a separate Log Analytics workspace. Next to it, there is an application (an Azure Function inside of a Docker container) that is linked to Application Insights.

 

Continue reading “Unified monitoring view in Kubernetes : Linking infrastructure monitoring with application monitoring”

Test driving the newly announced Visual Studio Online

Introduction

This week Ignite kicked off with a series of announcements (as always). One of those was “Visual Studio Online”…

 

In my eagerness I wanted to test drive it and check out what the developer experience would be, and whether it could replace my development station in Azure. So let’s delve into it, shall we?

Continue reading “Test driving the newly announced Visual Studio Online”

Taking a look at GitHub Enterprise Server & GitHub Connect

Introduction

For today’s post we’re going to take a look at GitHub Connect … It’s the link between the On-Premises installation of GitHub Enterprise Server and the popular SaaS offering (as we all have come to love it) called GitHub. 😉

 

Installing GitHub Enterprise Server (on Azure)

So my journey for today started with registering for the GitHub Enterprise Trial, where I decided to install it on Azure… as my “On Premises” location.

Continue reading “Taking a look at Github Enterprise Server & Github Connect”