Data Workflows in Azure : Taking an end-to-end look from ingest to reporting!

Introduction

There are a lot of scenarios where organizations are leveraging Azure to process their data at scale. In today’s post I’m going to go through the various pieces that can connect the puzzle for you in such a workflow, starting from ingesting the data into Azure and then processing it in a scalable & sustainable manner.

 

High Level Architecture

As always, let’s start with a high level architecture to frame what we’ll be discussing today ;

 

  • Ingest : The entire story starts here, where the data is being ingested into Azure. This can be done via an offline transfer (Azure Data Box), or online (Azure Data Box Edge/Gateway, the REST API, AzCopy, …).
  • Staging Area : No matter what ingestion method you’re using, the data will end up in a storage location (which we’ll now dub the “Staging Area”). From there on we’ll be able to transfer it to its “final destination”.
  • Processing Area : This is the “final destination” for the ingested content. Why does this differ from the staging area? Because there are a variety of reasons to put data in another location, ranging from business rules and the linked conventions (like naming, folder structure, etc.) to more technical reasons like proximity to other systems or spreading the data across different storage accounts/locations.
  • Azure Data Factory : This service provides a low/no-code way of modelling your data workflows & an awesome way of following up on your jobs in operations. It’ll serve as the key orchestrator for all your workflows.
  • Azure Functions : While there is already a good set of activities (“tasks”) available in ADF (Azure Data Factory), the ability to link Functions into it extends the possibilities for your organization even more. Now you can plug your custom business logic right into the workflows.
  • Cosmos DB : As you probably want to keep some metadata on your data, we’ll be using Cosmos DB for that, with Functions serving as the front-end API layer to connect to that data (see the sketch right after this list).
  • Azure Batch & Azure Databricks : Both Batch & Databricks can be called upon directly from ADF, providing key processing power in your workflows!
  • Azure Key Vault : Having secrets lying around & possibly being exposed is never a good idea. Therefore it’s highly recommended to leverage the Key Vault integration for storing your secrets!
  • Azure DevOps : Next to the above, we’ll be relying on Azure DevOps as our core CI/CD pipeline and trusted code repository. We can use it to build & deploy our Azure Functions & Batch applications, as well as to store our ADF templates & Databricks notebooks.
  • Application Insights : Key to any successful application is collecting the much-needed telemetry, and Application Insights is more than suited for this task.
  • Log Analytics : ADF provides native integration with Log Analytics, which gives us an awesome way to take a look at the status of our pipelines & activities.
  • Power BI : In terms of reporting, we’ll be using Power BI to collect the data that was pumped into Log Analytics and join it with the metadata from Cosmos DB, thus providing us with live data on the status of our workflow!
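To make the Functions & Cosmos DB combo a bit more tangible, here’s a minimal sketch of what such a metadata API could look like. Note that the database/container names and the document shape are purely illustrative assumptions, not the actual implementation from this post ;

```python
import json
import os

import azure.functions as func           # pip install azure-functions
from azure.cosmos import CosmosClient    # pip install azure-cosmos

# Illustrative names; the real database/container layout is up to you.
client = CosmosClient(os.environ["COSMOS_URL"], credential=os.environ["COSMOS_KEY"])
container = client.get_database_client("ingest").get_container_client("filemetadata")


def main(req: func.HttpRequest) -> func.HttpResponse:
    """HTTP-triggered function (trigger binding defined in function.json) that upserts
    a metadata document for an ingested file. ADF can call this as an activity right
    after the copy from the staging area to the processing area."""
    doc = req.get_json()  # e.g. {"id": "file-001", "path": "processing/file-001.csv", "status": "ingested"}
    container.upsert_item(doc)
    return func.HttpResponse(json.dumps({"stored": doc["id"]}), mimetype="application/json")
```

In the workflow itself, ADF would then call this endpoint as one of its activities, with the function key ideally coming from Key Vault.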

 

Now let’s take a look at that End-to-End flow!

Continue reading “Data Workflows in Azure : Taking an end-to-end look from ingest to reporting!”

What the proxy?!? How to use a proxy with the typical Azure tools…

Introduction

Proxy servers are a very common thing in a lot of enterprises. They are used so that people cannot directly access the internet, and to add additional management capabilities to the flow (logging, authentication, …). Now that sounds very dandy, though what about those non-browser-based tools? How can we ensure that tools like Azure CLI, Azure PowerShell & AzCopy work with our “beloved” enterprise proxy? That’ll be the topic for today!

Test Setup

What will we be doing today? I’ve set up a proxy server in my own lab… Basically, I deployed a Squid proxy by means of a container.

Next up, I’m going to use the three tools mentioned earlier on both Linux (WSL) & Windows, and see what needs to be done to get things working. In the following screenshots you’ll typically see a “split screen”, where the left side is a “tcpdump” on the box running the proxy server and the right side shows the commands on the box running the tools. If you see a lot of mumbo jumbo (network packets) on the left, that means the proxy server was being used. Ready?!? Cool, let’s go!
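To give you an idea of the direction we’re heading, here’s a minimal sketch of one way that usually does the trick for Azure CLI & AzCopy : exporting the standard proxy environment variables. The proxy address below is just a placeholder for my lab’s Squid container, so see it as an assumption on your end ;

```python
import os
import subprocess

# Placeholder address of the Squid proxy container in my lab (assumption).
PROXY = "http://192.168.1.50:3128"

# Azure CLI and AzCopy generally honour the standard proxy environment variables.
env = os.environ.copy()
env["HTTP_PROXY"] = PROXY
env["HTTPS_PROXY"] = PROXY

# Quick sanity check : if the tcpdump on the proxy box lights up, the proxy is being used.
subprocess.run(["az", "group", "list", "-o", "table"], env=env, check=True)
```

Azure PowerShell is a slightly different beast, as it typically follows the system proxy settings rather than these variables.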

Continue reading “What the proxy?!? How to use a proxy with the typical Azure tools…”

Hardening your Azure Storage Account by using Service Endpoints

Introduction

Earlier this week I received a twofold question ; “Does a service endpoint go over the internet? Because when I block the storage account tags with an NSG, my connection towards the storage account stops.” Let’s look at the following illustration ;

 

The first thing to mention here is that the storage account (at this time) always listens on a public IP address. The funky thing is that in Azure you have a capability called “Service Endpoint”, which I already covered briefly in the past. For argument’s sake, I’ve made a distinction in the above illustration between the “Azure Backbone” and the “Azure SDN”. A more correct representation might have been to speak of the “internal” & “external” Azure backbone in terms of the IP address space used. So see the “Azure Backbone” in the above drawing as the public IP address space, where all public addresses reside, and the “Azure SDN” as the part that covers the internal flows. Also be aware that an Azure VNET can only have address spaces as described by RFC 1918. So why did I depict it like this? To indicate that there are different flows ;

  • Connections from outside of Azure (“internet”)
  • Connections from within the Microsoft backbone (“Azure Backbone”)
  • Connections by leveraging a service endpoint

So how does the service endpoint work?

So to answer the question stated above ;

  • Q: Does a service endpoint go over “the internet”?
  • A : Define “internet”…
    • If you mean that it uses a public IP address instead of an internal one? Then yes.
    • If you mean that it leaves the Microsoft backbone? Then no.
    • If you mean that the service is accessible from the internet? Unless you open up the firewall, it won’t be (by default, when a service endpoint is configured).

 

  • Q: When I block the storage tag in my network security group (“NSG”), then the traffic stops. How come?
  • A: The NSG is active at the NIC level. The storage account, even when using a service endpoint, will still use its public IP. As this public IP is listed in the ranges configured in the service tag, you’ll effectively be blocking the service. This might be your objective… Though if you do this to “lock down the internet flow”, you won’t achieve the result you wanted. Instead, you should leverage the firewall functionality of the service for which the service endpoint was used, as sketched below.
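To make that last point concrete, here’s a minimal sketch of the combination you’d typically use : enable the Microsoft.Storage service endpoint on the subnet, set the storage account firewall to deny by default and then explicitly allow that subnet. All names are placeholders from a lab setup, so treat them as assumptions ;

```python
import subprocess

# Placeholder names (assumptions) for the lab setup.
RG, VNET, SUBNET, STORAGE = "rg-lab", "vnet-lab", "snet-app", "stlabdata001"

def az(*args: str) -> None:
    """Small helper to shell out to the Azure CLI."""
    subprocess.run(["az", *args], check=True)

# 1. Enable the Microsoft.Storage service endpoint on the subnet.
az("network", "vnet", "subnet", "update",
   "--resource-group", RG, "--vnet-name", VNET, "--name", SUBNET,
   "--service-endpoints", "Microsoft.Storage")

# 2. Deny all traffic to the storage account by default (this closes the "internet" flow).
az("storage", "account", "update",
   "--resource-group", RG, "--name", STORAGE, "--default-action", "Deny")

# 3. Explicitly allow the subnet that carries the service endpoint.
az("storage", "account", "network-rule", "add",
   "--resource-group", RG, "--account-name", STORAGE,
   "--vnet-name", VNET, "--subnet", SUBNET)
```

Blocking the Storage service tag in the NSG, on the other hand, blocks those public IP ranges altogether, service endpoint traffic included.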

 

Deep Dive

As always, let’s do a deep dive to experience this flow! I’ve set up the above drawing in my personal lab ;

Continue reading “Hardening your Azure Storage Account by using Service Endpoints”

Aggregating Metrics from multiple Azure Storage Accounts

Introduction

When working at scale, the only way to properly handle true scale is to work with horizontal scaling options. Some services (like Cosmos DB, for instance) do this out of the box and abstract it away from the user/customer, though sometimes this is something you need to facilitate yourself… In terms of Azure Storage, we’re very open about our limits. For example, at this point in time, we’re facing a maximum egress of 50 Gbps per storage account. Where this is more than enough for a lot of customers, at times we need to scale beyond it. Here the solution at hand is to see the storage account as a “scale unit”, and use it for horizontal scaling. So if you need 200 Gbps, you can partition your data across four storage accounts.
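As a trivial illustration of the “storage account as a scale unit” idea, a deterministic hash on the object name is often all you need to spread the load evenly (the account names below are made up) ;

```python
import hashlib

# Four storage accounts acting as one logical pool of "scale units" (made-up names).
ACCOUNTS = ["stdata000", "stdata001", "stdata002", "stdata003"]

def pick_account(blob_name: str) -> str:
    """Deterministically map a blob to one of the accounts by hashing its name."""
    digest = hashlib.sha256(blob_name.encode("utf-8")).digest()
    return ACCOUNTS[digest[0] % len(ACCOUNTS)]

print(pick_account("ingest/2019/02/sensor-042.csv"))  # the same blob always lands on the same account
```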

In today’s post, we’re going to take a look at how you can aggregate these metrics into a single pane of glass. Because, at the end of the day, your operations team does not want to have a disaggregated view of all the components in play.
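As a teaser, one possible way to do this (and not necessarily the exact approach used further in this post) is to loop over the accounts and sum up the Azure Monitor metrics per account. The resource IDs below are placeholders ;

```python
import json
import subprocess

# Resource IDs of the storage accounts acting as scale units (placeholders).
ACCOUNT_IDS = [
    "/subscriptions/<sub-id>/resourceGroups/rg-lab/providers/Microsoft.Storage/storageAccounts/stdata000",
    "/subscriptions/<sub-id>/resourceGroups/rg-lab/providers/Microsoft.Storage/storageAccounts/stdata001",
]

def egress_bytes(resource_id: str) -> float:
    """Sum the 'Egress' metric (in bytes) for one storage account over the default time window."""
    raw = subprocess.run(
        ["az", "monitor", "metrics", "list",
         "--resource", resource_id, "--metric", "Egress",
         "--aggregation", "Total", "-o", "json"],
        capture_output=True, text=True, check=True).stdout
    series = json.loads(raw)["value"][0]["timeseries"][0]["data"]
    return sum(point.get("total") or 0 for point in series)

total = sum(egress_bytes(rid) for rid in ACCOUNT_IDS)
print(f"Aggregated egress across {len(ACCOUNT_IDS)} accounts : {total / 1e9:.2f} GB")
```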

 

Important Note

All Azure teams are constantly looking to evolve their services. Please note that the limits mentioned in this post are linked to the point in time when the article was written. As many of you know, Azure keeps evolving at a fast pace, so the limits might already have changed. If you are wondering, always check the following page for the most current limits linked to GA (“Generally Available”) services!

Continue reading “Aggregating Metrics from multiple Azure Storage Accounts”

Taking the Azure Data Box Gateway (preview) out for a spin!

Introduction

At the last Ignite conference, three new additions joined the Data Box family. In today’s post we’ll take one of those out for a spin, being the “Data Box Gateway”. This one comes as a virtual appliance that you can run on top of your own physical hardware.

 

So where does it fit into the picture?

  • Cloud archival – Copy hundreds of TBs of data to Azure Storage using Data Box Gateway in a secure and efficient manner. The data can be ingested one time or on an ongoing basis for archival scenarios.
  • Data aggregation – Aggregate data from multiple sources into a single location in Azure Storage for data processing and analytics.
  • Integration with on-premises workloads – Integrate with on-premises workloads such as backup and restore that use cloud storage and need local access for commonly used files.

 

Let’s take it for a spin!

So let’s make it a bit more tangible and see what the user experience is in setting it up & using it. Start by searching the Azure Marketplace for Data Box Gateway.

Continue reading “Taking the Azure Data Box Gateway (preview) out for a spin!”

Azure Data Lake Storage (Gen2) : Exploring AAD B2B & ACL hardening

Introduction

In the summer of 2018, the 2nd generation of Azure Data Lake Storage was announced. In today’s post, we’ll delve into the authentication & authorization part of this service. We’re going to see how we can leverage AAD to tighten security around our Data Lake.

 

Use Case

To help us in this storyline, we’ll be looking to solve the following use case. A customer has stored a lot of data in its Data Lake, and is looking to provide a “partner” access to a subset of that data. In this use case, what would we need to do to achieve this goal?

 

Azure Data Lake Storage : Access Control Model

The first part of our puzzle is looking at the “Access Control Model”… In essence there are four ways to provide access to the data lake ;

  • Shared Key ; The caller effectively gains ‘super-user’ access, meaning full access to all operations on all resources, including setting owner and changing ACLs
  • SAS Tokens ; The token includes the allowed permissions as part of the token. The permissions included in the SAS token are effectively applied to all authorization decisions, but no additional ACL checks are performed.
  • Azure RBAC ; Azure Role-based Access Control (RBAC) uses role assignments to effectively apply sets of permissions to users, groups, and service principals for Azure resources. Typically, those Azure resources are constrained to top-level resources (e.g., Azure Storage accounts). In the case of Azure Storage, and consequently Azure Data Lake Storage Gen2, this mechanism has been extended to the file system resource.
  • ACLs ; And last but not least, we have the access control lists, which we can apply at a more fine-grained level (a first sketch of this follows right after this list).
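To give a first taste of that last option, here’s a minimal sketch of setting an ACL on a folder with the Python SDK for Data Lake Storage Gen2. The account URL, file system, folder and the partner’s AAD object ID are all assumptions for illustration ;

```python
from azure.identity import DefaultAzureCredential             # pip install azure-identity
from azure.storage.filedatalake import DataLakeServiceClient  # pip install azure-storage-file-datalake

# Illustrative values; replace with your own account, file system, folder & guest object ID.
ACCOUNT_URL = "https://mydatalake.dfs.core.windows.net"
PARTNER_OBJECT_ID = "00000000-0000-0000-0000-000000000000"  # AAD B2B guest user (placeholder)

service = DataLakeServiceClient(account_url=ACCOUNT_URL, credential=DefaultAzureCredential())
directory = service.get_file_system_client("datalake").get_directory_client("partner-share")

# Grant the partner read & execute on this folder only, next to the usual owner/group/other entries.
directory.set_access_control(
    acl=f"user::rwx,group::r-x,other::---,user:{PARTNER_OBJECT_ID}:r-x"
)
```

Keep in mind that set_access_control replaces the folder’s ACL in one go, so always include the owner/group/other entries as shown.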

Continue reading “Azure Data Lake Storage (Gen2) : Exploring AAD B2B & ACL hardening”

Landscaping a Secure/Closed Loop Infrastructure in Azure with Terraform & Azure Devops

Introduction

Posts about security are always the ones that make everyone get really excited… Or maybe not everyone. 😉 Anyhow, what is typically the weakest link in any security design? Indeed, the human touch… The effects of this can range from exposed secrets to configuration drift (unwanted changes vs. the expected baseline). In today’s post, I’ll walk you through an example setup that aims to close some additional holes for you. How will we be doing this? By basically automating the entire infrastructure management with Azure DevOps & Terraform. Now you’ll probably think : what does that have to do with security? Good question! We’re going to reduce the points where human contact can interfere with our security measures, though we want to do this without putting our agility at risk!

 

Blueprint

For this exercise, we’re going to leverage this blueprint ;

Continue reading “Landscaping a Secure/Closed Loop Infrastructure in Azure with Terraform & Azure Devops”