Azure DevOps : Operational validation with Approval Gates & Azure Monitor Alerts

Introduction

After having migrated VMchooser from a fully serverless infrastructure to containers, I am currently making the opposite move. Since I can start from the same code base, I can basically run different deployment options in Azure, and I found that the serverless deployment added more value for me than the lower cost profile of containers. That being said, one of the big lessons I learned this week is that even with a fully automated landscape in Terraform, some changes are rather intrusive… I should have checked the output of the terraform plan stage, but I failed to do so, which resulted in downtime for VMchooser. So I was looking for a way to do operational validation in the least intrusive and most re-usable way. This led me to a solution where the Azure DevOps pipelines leverage the health check already used in the Traffic Manager deployment. That health check was already part of the deployment, and it is a key indicator of whether the deployment is healthy or not.

 

Gates

In order to add validation steps to our deployment process, we can leverage the concept of Gates in Azure DevOps ;

Gates allow automatic collection of health signals from external services, and then promote the release when all the signals are successful at the same time or stop the deployment on timeout. Typically, gates are used in connection with incident management, problem management, change management, monitoring, and external approval systems.

Most of the health parameters vary over time, regularly changing their status from healthy to unhealthy and back to healthy. To account for such variations, all the gates are periodically re-evaluated until all of them are successful at the same time. The release execution and deployment do not proceed if all gates do not succeed within the same sampling interval and before the configured timeout. The following diagram illustrates the flow of gate evaluation where, after the initial stabilization delay period and three sampling intervals, the deployment is approved.
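
To make the concept a bit more tangible, here is a minimal sketch (in Python, purely illustrative and not an official gate implementation) of the same idea I ended up using: after a stabilization delay, poll the Traffic Manager health check at a fixed sampling interval and only let the release continue once a sample comes back healthy before the timeout. The endpoint URL and the timing values are placeholders.

```python
import time

import requests  # assumed available (pip install requests)

HEALTH_URL = "https://vmchooser.trafficmanager.net/api/health"  # hypothetical endpoint
STABILIZATION_DELAY = 60   # seconds to wait before the first sample
SAMPLING_INTERVAL = 30     # seconds between samples
TIMEOUT = 600              # give up after this many seconds


def wait_for_healthy_deployment() -> bool:
    """Return True once the health check succeeds, False if the timeout is hit."""
    time.sleep(STABILIZATION_DELAY)
    deadline = time.time() + TIMEOUT
    while time.time() < deadline:
        try:
            healthy = requests.get(HEALTH_URL, timeout=10).status_code == 200
        except requests.RequestException:
            healthy = False
        if healthy:
            return True
        time.sleep(SAMPLING_INTERVAL)
    return False


if __name__ == "__main__":
    # Non-zero exit code fails the validation step in the pipeline.
    raise SystemExit(0 if wait_for_healthy_deployment() else 1)
```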

Continue reading “Azure DevOps : Operational validation with Approval Gates & Azure Monitor Alerts”

Leveraging Azure Tags and Azure Graph for deploying to your Blue/Green environments

Introduction

For this post I am assuming you are pretty familiar with the concept of deployment strategies (if not, check out this post by Etienne). These are typically seen at the application deployment level, where platforms (like Kubernetes, for instance) have out-of-the-box mechanisms in place for them. Now what if you want to do this at an "infrastructure level", for instance for the Kubernetes version of Azure Kubernetes Service? We could do an in-place upgrade, which carefully cordons and drains the nodes. Though what if things go bad? Could we do a Canary, Blue/Green, A/B, Shadow, … deployment at the cluster level too? And how would we tackle the infrastructure point of view of this? That is the basis for today's post!

 

Architecture at hand

For today’s post we’ll leverage the following high level architecture ;

This project leverages Terraform under the hood. Things like DNS, Traffic Manager, Key Vault, Cosmos DB, etc. are "stateful": their lifecycle is fully managed by Terraform. On the other hand, our Kubernetes clusters are "stateless" from an Infrastructure-as-Code point of view. We deploy them via Terraform, though we do not keep track of them… All the lifecycle management afterwards is done by operating on the associated tags.
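
To give an idea of what that tag-driven lifecycle looks like in practice, here is a minimal sketch using the Azure Resource Graph SDK for Python to find the AKS cluster(s) currently tagged as the "blue" environment. The tag convention and the subscription id are assumptions for illustration; the actual pipeline may of course do this differently.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

credential = DefaultAzureCredential()
client = ResourceGraphClient(credential)

# Hypothetical tag convention: environment = blue | green
query = """
Resources
| where type =~ 'microsoft.containerservice/managedclusters'
| where tags['environment'] =~ 'blue'
| project name, resourceGroup, location, tags
"""

request = QueryRequest(subscriptions=["<subscription-id>"], query=query)
result = client.resources(request)

# Each item describes one cluster that currently carries the 'blue' tag.
for cluster in result.data:
    print(cluster)
```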

 

Community-Tool-of-the-day

For once, the drawing above was not created in Visio. It was made with CloudSkew, which was created by Mithun Shanbhag. Always awesome to see community contributions, which we can only applaud!

Continue reading “Leveraging Azure Tags and Azure Graph for deploying to your Blue/Green environments”

Unified monitoring view in Kubernetes : Linking infrastructure monitoring with application monitoring

Introduction

Typically there are two dimensions / viewpoints when it comes to monitoring. On one side, there is a team that wants to see everything related to the "infrastructure", like for instance the Kubernetes cluster. On the other side, there is the typical application performance monitoring that starts from the application. Sadly enough, in a lot of cases, those two are separate islands… 😦

 

As you might know, on the Azure front you can do application performance monitoring with Application Insights, and there is a really awesome integration with Azure Monitor ("Log Analytics") from the container space (Kubernetes). Though I see you thinking it… Two separate solutions. What a lot of people forget is that both are actually using "Log Analytics" under the hood. And… that you can query across workspaces in Log Analytics! Which means you can join the two and have an aggregated view that spans both worlds.
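
As a minimal sketch of that cross-resource querying (the resource names and the exact query are illustrative, and I'm assuming the azure-identity & azure-monitor-query packages), you could pull the Application Insights requests and the container inventory of the Kubernetes-linked workspace into one result set like this:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Hypothetical names: 'my-appinsights' is the Application Insights resource,
# and the workspace id below is the Log Analytics workspace linked to the AKS cluster.
query = """
union
  (app('my-appinsights').requests
   | summarize Failed = countif(success == false), Total = count()
   | extend Source = 'application'),
  (KubePodInventory
   | summarize Restarts = sum(PodRestartCount)
   | extend Source = 'infrastructure')
"""

response = client.query_workspace(
    workspace_id="<aks-workspace-id>",
    query=query,
    timespan=timedelta(hours=1),
)

for table in response.tables:
    for row in table.rows:
        print(list(row))
```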

 

Let’s take a look!

For this test, I've created a k8s cluster that is linked to a separate Log Analytics workspace. Next to it, there is an application (an Azure Function) inside a Docker container that is linked to Application Insights.

 

Continue reading “Unified monitoring view in Kubernetes : Linking infrastructure monitoring with application monitoring”

Test driving the newly announced Visual Studio Online

Introduction

This week Ignite kicked off with a series of announcements (as always). One of those was “Visual Studio Online“…

 

In my eagerness I wanted to test drive it and check out what the developer experience would be, and whether it could replace my development station in Azure. So let's delve into it, shall we?

Continue reading “Test driving the newly announced Visual Studio Online”

Taking a look at Github Enterprise Server & Github Connect

Introduction

For today’s post we’re going to take a look at GitHub Connect … It’s the link between the On-Premises installation of GitHub Enterprise Server and the popular SaaS offering (as we all have come to love it) called GitHub. 😉

 

Installing GitHub Enterprise Server (on Azure)

So my journey for today started with registering for the GitHub Enterprise Trial, where I decided to install it on Azure… as my “On Premises” location.

Continue reading “Taking a look at Github Enterprise Server & Github Connect”

Hardening your storage account with Private Link / Endpoint

Introduction

Earlier this week, a new capability called "Azure Private Link" (and also "Azure Private Endpoint") went into public preview. As a nice copy & paste from the documentation page ;

Azure Private Link enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer/partner services over a Private Endpoint in your virtual network. Traffic between your virtual network and the service traverses over the Microsoft backbone network, eliminating exposure from the public Internet. You can also create your own Private Link Service in your virtual network (VNet) and deliver it privately to your customers. The setup and consumption experience using Azure Private Link is consistent across Azure PaaS, customer-owned, and shared partner services.

As always, we'll take this one out for a spin! For this, we'll see whether we can access a storage account privately (from a virtual machine) over the VNet.
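
As a quick sanity check you could run from the virtual machine inside the VNet, here is a small sketch: if the private endpoint (and its private DNS zone) is wired up correctly, the public blob hostname should resolve to a private address from within the VNet. The storage account name is of course a placeholder.

```python
import ipaddress
import socket

# Hypothetical storage account; from inside the VNet this FQDN should resolve
# via privatelink.blob.core.windows.net to the private endpoint's IP.
host = "mystorageaccount.blob.core.windows.net"

ip = socket.gethostbyname(host)
print(f"{host} resolves to {ip}")
print("private address" if ipaddress.ip_address(ip).is_private else "public address")
```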

 

Continue reading “Hardening your storage account with Private Link / Endpoint”

Taking the user delegation SAS tokens for a spin

Introduction

A few days ago, the preview of the "User delegation SAS token" saw the light of day. In today's post, we'll take a first glance at this new capability! Why should we care about this feature? You can now create SAS tokens based on the scoped permissions of an AAD user, instead of linking them to the storage account key. From a security perspective this is REALLY awesome, because you can harden the scope of a possible breach even more.
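
As a minimal sketch of what this looks like with the Python storage SDK (the account, container and blob names are placeholders, and I'm assuming the azure-identity & azure-storage-blob packages): you first request a user delegation key with your AAD credential, and then sign the SAS with that key instead of with the account key.

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.storage.blob import (
    BlobSasPermissions,
    BlobServiceClient,
    generate_blob_sas,
)

account = "mystorageaccount"  # placeholder
service = BlobServiceClient(
    account_url=f"https://{account}.blob.core.windows.net",
    credential=DefaultAzureCredential(),  # an AAD identity, not the account key
)

start = datetime.now(timezone.utc)
expiry = start + timedelta(hours=1)

# The user delegation key is what the SAS gets signed with.
delegation_key = service.get_user_delegation_key(start, expiry)

sas = generate_blob_sas(
    account_name=account,
    container_name="demo",
    blob_name="secret.txt",
    user_delegation_key=delegation_key,
    permission=BlobSasPermissions(read=True),
    expiry=expiry,
)

print(f"https://{account}.blob.core.windows.net/demo/secret.txt?{sas}")
```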

 


Continue reading “Taking the user delegation SAS tokens for a spin”

Writing straight to the Azure Storage Access Tier you want (with AzCopy)

Introduction

With the "Set Blob Tier" operation you can set the access tier of a blob object in a storage account. At times you already know a certain object should go to a tier that's not your default access tier, or you want to write immediately to the archive tier. The cool thing is that AzCopy can assist you with this!

 

AzCopy Flags

Take a look at the AzCopy flags… Here you'll notice the "--block-blob-tier" flag. This is the one that'll help you write directly to the access tier you want.
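
As a rough sketch of how that would be used (wrapped in Python here for consistency with the other snippets; the file name and destination SAS URL are placeholders, and azcopy is assumed to be on the PATH), writing a blob straight into the Archive tier would look something like this:

```python
import subprocess

# Hypothetical source file and destination container SAS URL.
source = "./bigarchive.tar.gz"
destination = "https://mystorageaccount.blob.core.windows.net/backups?<sas-token>"

# --block-blob-tier tells AzCopy to write the block blob straight into the
# requested access tier (Hot, Cool or Archive) instead of the account default.
subprocess.run(
    ["azcopy", "copy", source, destination, "--block-blob-tier", "Archive"],
    check=True,
)
```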

Quick Demo

I’ve created a storage account in mint condition, where the default access tier is set to “Hot”.

Continue reading “Writing straight to the Azure Storage Access Tier you want (with AzCopy)”

Data Workflows in Azure : Taking an end-to-end look from ingest to reporting!

Introduction

There are a lot of scenarios where organizations are leveraging Azure to process their data at scale. In today's post I'm going to go through the various pieces that connect the puzzle in such a workflow, starting from ingesting the data into Azure and afterwards processing it in a scalable & sustainable manner.

 

High Level Architecture

As always, let's start with a high-level architecture of what we'll be discussing today ;

 

  • Ingest : The entire story starts here, where the data is ingested into Azure. This can be done via an offline transfer (Azure DataBox) or online (via Azure DataBox Edge/Gateway, the REST API, AzCopy, …).
  • Staging Area : No matter which ingestion method you're using, the data will end up in a storage location (which we'll now dub the "Staging Area"). From there on we'll be able to transfer it to its "final destination".
  • Processing Area : This is the "final destination" for the ingested content. Why does this differ from the staging area? Because there is a variety of reasons to put the data in another location, ranging from business rules and the linked conventions (like naming, folder structure, etc.) to more technical reasons like proximity to other systems or spreading the data across different storage accounts/locations.
  • Azure Data Factory : This service provides a low/no-code way of modelling your data workflows & an awesome way of following up on your jobs in operations. It'll serve as the key orchestrator for all your workflows.
  • Azure Functions : While there is already a good set of activities ("tasks") available in ADF (Azure Data Factory), the ability to link Functions into it extends the possibilities for your organization even more. Now you can plug your custom business logic right into the workflows.
  • Cosmos DB : As you probably want to keep some metadata on your data, we'll be using Cosmos DB for that. Functions will serve as the front-end API layer to connect to that data (a small sketch of writing such a metadata document follows right after this list).
  • Azure Batch / Databricks : Both Batch & Databricks can be called upon directly from ADF, providing key processing power in your workflows!
  • Azure Key Vault : Having secrets lying around & possibly being exposed is never a good idea. Therefore it's highly recommended to leverage the Key Vault integration for storing your secrets!
  • Azure DevOps : Next to the above, we'll be relying on Azure DevOps as our core CI/CD pipeline and trusted code repository. We can use it to build & deploy our Azure Functions & Batch applications, as well as to store our ADF templates & Databricks notebooks.
  • Application Insights : Key to any successful application is collecting the much-needed telemetry, and Application Insights is more than suited for this task.
  • Log Analytics : ADF provides native integration with Log Analytics. This will give us an awesome way to keep an eye on the status of our pipelines & activities.
  • Power BI : In terms of reporting, we'll be using Power BI to collect the data that was pumped into Log Analytics and join it with the metadata from Cosmos DB, thus providing us with live data on the status of our workflows!
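
As a small taste of that metadata angle, here is a minimal sketch (endpoint, key, database and container names are all placeholders) of the kind of tracking document the Functions layer could upsert into Cosmos DB for every ingested file:

```python
import uuid
from datetime import datetime, timezone

from azure.cosmos import CosmosClient

# Placeholders: endpoint, key, database and container names are illustrative.
client = CosmosClient(
    "https://mycosmos.documents.azure.com:443/", credential="<primary-key>"
)
container = client.get_database_client("datasets").get_container_client("ingestions")

# A tiny metadata document describing one ingested file and where it sits in the flow.
container.upsert_item({
    "id": str(uuid.uuid4()),
    "fileName": "sales-2019-10.csv",
    "stage": "staging",  # staging -> processing -> processed
    "ingestedAt": datetime.now(timezone.utc).isoformat(),
    "sizeInBytes": 1048576,
})
```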

 

Now let’s take a look at that End-to-End flow!

Continue reading “Data Workflows in Azure : Taking an end-to-end look from ingest to reporting!”

What the proxy?!? How to use a proxy with the typical Azure tools…

Introduction

Proxy servers are a very common thing in a lot of enterprises. They are used so that people cannot access the internet directly, and so that additional management capabilities can be added to the flow (logging, authentication, …). Now that sounds very dandy, though what about the non-browser-based tools? How can we ensure that tools like Azure CLI, Azure PowerShell & AzCopy work with our "beloved" enterprise proxy? That'll be the topic for today!

Test Setup

What will we be doing today? I've set up a proxy server in my own lab… basically a Squid proxy deployed by means of a container.

Next up, I'm going to use the three tools mentioned earlier on both Linux (WSL) & Windows, and see what needs to be done to get things working. In the following screenshots you'll typically see a "split screen": on the left is a tcpdump on the box running the proxy server, and on the right are the commands on the box running the tools. If you see a lot of mumbo jumbo (network packets) on the left, that means the proxy server was being used. Ready?!? Cool, let's go!
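
To give away a small part of the punch line already: most of these command-line tools honour the standard proxy environment variables. As a minimal sketch (the proxy address is a placeholder for my lab Squid container), pointing the Azure CLI at the proxy from a script could look like this:

```python
import os
import subprocess

# Hypothetical Squid proxy from the lab setup; replace with your own proxy address.
proxy = "http://10.0.0.4:3128"

env = os.environ.copy()
# Tools built on common HTTP stacks (the Azure CLI among them) honour these variables.
env["HTTP_PROXY"] = proxy
env["HTTPS_PROXY"] = proxy

# Run an Azure CLI command through the proxy; the tcpdump on the proxy box should light up.
subprocess.run(["az", "account", "show", "--output", "table"], env=env, check=True)
```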

Continue reading “What the proxy?!? How to use a proxy with the typical Azure tools…”