Opinion – Cloud Native, Cloud Native and Cloud Native? What I like about, and my two cents on, running Containers, Kubernetes and/or Serverless

Introduction

At the beginning of the month I posted about my experience of moving VMchooser from “Serverless” to “Containers”. As in, moving from one way of implementing a Cloud Native architecture to another… Since then, I have actually moved back to “Serverless”. The cogwheels in my head have been turning 24/7 on how to put all of this into perspective, and yesterday Yves made a tweet (reply) that really made something click inside my head…

In today’s post I’m going to attempt a “brain dump” of several thoughts that have been floating around in my mind, which I hope will help you in your journey of “finding your perfect rock”. I will indicate what I like about the various options and what my typical advice would be to organizations considering each of them.

Continue reading “Opinion – Cloud Native, Cloud Native and Cloud Native? What I like about, and my two cents on, running Containers, Kubernetes and/or Serverless”

Improving your code quality by linking Azure DevOps with SonarCloud

Introduction

In a customer workshop earlier this week, Hans mentioned a very nice tool (SonarCloud). He had used it “in his previous life” and was very enthusiastic about it, which immediately triggered my curiosity… 😉 As it is free for public projects, I investigated how easy it was to integrate into my existing pipelines. It turned out to be quite easy! After browsing around a bit on how to integrate it into a YAML pipeline, I can proudly say that VMchooser is now fully hooked up with SonarCloud.

However, it did confirm my suspicion that I am a lousy developer! 😉 Though better lousy code fulfilling a purpose than no alternative at all?!?

Anyhow, today’s post is about the experience of moving existing pipelines to SonarCloud and investigating the results you get out of it.
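To give you an idea of what that looks like: below is a minimal sketch of the YAML integration, assuming the SonarCloud extension is installed from the marketplace and that the service connection, organization and project key (all illustrative names here) have been set up ;

```yaml
steps:
  # Prepare the SonarCloud analysis before the actual build runs
  - task: SonarCloudPrepare@1
    inputs:
      SonarCloud: 'SonarCloud'          # name of the service connection (illustrative)
      organization: 'my-organization'   # illustrative
      scannerMode: 'CLI'
      configMode: 'manual'
      cliProjectKey: 'vmchooser'        # illustrative project key
      cliProjectName: 'VMchooser'
      cliSources: '.'

  # ... your regular build & test steps go here ...

  # Run the analysis and publish the quality gate result to the pipeline
  - task: SonarCloudAnalyze@1
  - task: SonarCloudPublish@1
    inputs:
      pollingTimeoutSec: '300'
```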

Continue reading “Improving your code quality by linking Azure DevOps with SonarCloud”

Azure DevOps : Operational validation with Approval Gates & Azure Monitor Alerts

Introduction

After having migrated VMchooser from a fully Serverless infrastructure to Containers, I am currently doing the opposite move. As I can start off the same code base to basically run different deployment options in Azure, I found that the serverless deployment added more value for me than the lower cost profile of the containers. That being said, one of the big learnings I had this week is that even in a landscape automated with Terraform, some changes are rather intrusive… I should have checked the output of the terraform plan stage, but I failed to do so, which resulted in downtime for VMchooser. So I was looking for a way to do operational validation in the least intrusive and most re-usable way. This led me to a solution where the Azure DevOps pipelines leverage the health check used in the Traffic Manager deployment. That check was already part of the deployment, of course, and is a key element in understanding whether the deployment is healthy or not.

 

Gates

In order to add validation steps to our deployment process, we can leverage the concept of Gates in Azure DevOps ;

Gates allow automatic collection of health signals from external services, and then promote the release when all the signals are successful at the same time or stop the deployment on timeout. Typically, gates are used in connection with incident management, problem management, change management, monitoring, and external approval systems.

Most of the health parameters vary over time, regularly changing their status from healthy to unhealthy and back to healthy. To account for such variations, all the gates are periodically re-evaluated until all of them are successful at the same time. The release execution and deployment do not proceed if all gates do not succeed within the same interval and before the configured timeout. The following diagram illustrates the flow of gate evaluation where, after the initial stabilization delay period and three sampling intervals, the deployment is approved.
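Note that gates are configured on the release pipeline itself (as pre-deployment conditions) rather than authored in YAML. To make the idea of the health signal tangible, here is a minimal sketch of the same probe concept as an inline validation step, with a purely illustrative endpoint URL ;

```yaml
steps:
  # Probe the same health-check endpoint that Traffic Manager relies on,
  # and fail the stage when it does not report healthy.
  - script: |
      status=$(curl -s -o /dev/null -w "%{http_code}" "https://vmchooser.example.com/api/healthcheck")
      if [ "$status" -ne 200 ]; then
        echo "Health check returned HTTP $status"
        exit 1
      fi
    displayName: 'Operational validation : health check'
```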

Continue reading “Azure DevOps : Operational validation with Approval Gates & Azure Monitor Alerts”

Leveraging Azure Tags and Azure Graph for deploying to your Blue/Green environments

Introduction

For this post I am assuming you are pretty familiar with the concept of deployment strategies (if not, check out this post by Etienne). These are typically seen from an application deployment level, where platforms (like, for instance, Kubernetes) have out-of-the-box mechanisms in place for this. Now what if you wanted to do this on an “infrastructure level”, like, for instance, the Kubernetes version of Azure Kubernetes Service? We could do an in-place upgrade, which will carefully cordon and drain the nodes. Though what if things go bad? We could do a Canary, Blue/Green, A/B, Shadow, … on the cluster level too. But how would we tackle the infrastructure point of view of this? That is the basis for today’s post!

 

Architecture at hand

For today’s post we’ll leverage the following high-level architecture ;

This project leverages Terraform under the hood. Things like DNS, Traffic Manager, Key Vault, CosmosDB, etc. are “stateful”, as their lifecycle is fully managed by Terraform. On the other hand, our Kubernetes clusters are “stateless” from an Infrastructure-as-Code point of view. We deploy them via Terraform, though we do not keep track of them… All lifecycle management afterwards is done by operating on the associated tags.
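To make that tag-driven lifecycle a bit more tangible: a minimal sketch of how a pipeline step could ask Azure Resource Graph which cluster currently carries a given tag. The service connection name and the tag key/value are assumptions on my part ;

```yaml
steps:
  - task: AzureCLI@2
    displayName: 'Find the cluster currently tagged as blue'
    inputs:
      azureSubscription: 'my-service-connection'   # illustrative name
      scriptType: 'bash'
      scriptLocation: 'inlineScript'
      inlineScript: |
        # Requires the resource-graph extension: az extension add --name resource-graph
        az graph query -q "Resources
          | where type =~ 'microsoft.containerservice/managedclusters'
          | where tags['environment'] =~ 'blue'
          | project name, resourceGroup"
```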

 

Community-Tool-of-the-day

For once, the drawing above was not created in Visio. It was made with CloudSkew, which was created by Mithun Shanbhag. It is always awesome to see community contributions, which we can only applaud!

Continue reading “Leveraging Azure Tags and Azure Graph for deploying to your Blue/Green environments”

Improving the security & compatibility aspects of package management with native GitHub features

Introduction

Did you know that almost every piece of software depends on open source? Not sure… Which libraries is your software using? Bingo! 😉

Now we all know that package management can be a true hell. Tracking everything and ensuring you are up-to-date enough to achieve the needed security level is hard. Next to that, there is always the risk that your build will break when moving to a new library version.

What if we could enhance that flow a bit? You guessed it… Today’s post is about how we can leverage native GitHub features to help us in this area!
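As a small teaser of where we’re heading: opting in to automated dependency pull requests can be as simple as dropping a dependabot.yml file into the repository. A minimal sketch, where the ecosystem and schedule are assumptions for our npm-based sample ;

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"   # our sample repository is npm-based
    directory: "/"             # location of package.json
    schedule:
      interval: "weekly"
```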

 

Let’s hit the slopes!

For this walk-through, we’ll use the following ;

  • an existing code repository, where we’ve forked CoreUI’s VueJS repo
  • GitHub Actions to run a workflow on every pull request (see the sketch below)
  • GitHub’s automated security feature, which sends us pull requests when it detects security issues

Want to test this one out or follow along? Browse to the following sample repository ; https://github.com/beluxappdev/CoreUI-VueJS-GitHubSecurityDemo

So let’s fork this sample repository!
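Once forked, the workflow from the second bullet could look like the following minimal sketch, where the file name and Node version are illustrative ;

```yaml
# .github/workflows/pullrequest.yml
name: Validate pull request
on: pull_request

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v1
        with:
          node-version: '12'
      # Install exactly what the lockfile specifies, then run the test suite
      - run: npm ci
      - run: npm test
```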

Continue reading “Improving the security & compatibility aspects of package management with native GitHub features”

Taking a look at GitHub Enterprise Server & GitHub Connect

Introduction

For today’s post we’re going to take a look at GitHub Connect… It’s the link between an on-premises installation of GitHub Enterprise Server and the popular SaaS offering (as we have all come to love it) called GitHub. 😉

 

Installing GitHub Enterprise Server (on Azure)

So my journey for today started with registering for the GitHub Enterprise Trial, where I decided to install it on Azure… as my “On Premises” location.

Continue reading “Taking a look at GitHub Enterprise Server & GitHub Connect”

Data Workflows in Azure : Taking an end-to-end look from ingest to reporting!

Introduction

There are a lot of scenarios where organizations leverage Azure to process their data at scale. In today’s post I’m going to go through the various pieces that can connect the puzzle for you in such a workflow: starting with ingesting the data into Azure, and afterwards processing it in a scalable & sustainable manner.

 

High Level Architecture

As always, let’s start with a high-level architecture to frame what we’ll be discussing today ;

 

  • Ingest : The entire story starts here, where the data is ingested into Azure. This can be done via an offline transfer (Azure Data Box), or online (via Azure Data Box Edge/Gateway, the REST API, AzCopy, …).
  • Staging Area : No matter which ingestion method you’re using, the data will end up in a storage location (which we’ll now dub the “Staging Area”). From there on, we’ll be able to transfer it to its “final destination”.
  • Processing Area : This is the “final destination” for the ingested content. Why does this differ from the staging area? Because there is a variety of reasons to put data in another location, ranging from business rules and the linked conventions (like naming, folder structure, etc.) to more technical reasons like proximity to other systems or spreading the data across different storage accounts/locations.
  • Azure Data Factory : This service provides a low/no-code way of modelling your data workflow and an awesome way of following up on your jobs in operations. It’ll serve as the key orchestrator for all your workflows.
  • Azure Functions : While there is already a good set of activities (“tasks”) available in ADF (Azure Data Factory), the ability to link Functions into it extends the possibilities for your organization even more. Now you can link your custom business logic right into the workflows.
  • Cosmos DB : As you probably want to keep some metadata on your data, we’ll be using Cosmos DB for that one, with Functions serving as the front-end API layer to connect to that data.
  • Azure Batch & Databricks : Both Batch & Databricks can be called upon directly from ADF, providing key processing power in your workflows!
  • Azure Key Vault : Having secrets lying around & possibly being exposed is never a good idea. Therefore it’s highly recommended to leverage the Key Vault integration for storing your secrets!
  • Azure DevOps : Next to the above, we’ll be relying on Azure DevOps as our core CI/CD pipeline and trusted code repository. We can use it to build & deploy our Azure Functions & Batch applications, as well as to store our ADF templates & Databricks notebooks (a minimal deployment sketch follows below this list).
  • Application Insights : Key to any successful application is collecting the much-needed telemetry, and Application Insights is more than suited for this task.
  • Log Analytics : ADF provides native integration with Log Analytics, which gives us an awesome way to keep an eye on the status of our pipelines & activities.
  • Power BI : In terms of reporting, we’ll be using Power BI to collect the data that was pumped into Log Analytics and join it with the metadata from Cosmos DB, thus providing us with live data on the status of our workflow!
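To illustrate that Azure DevOps point: deploying an exported ADF ARM template from a pipeline could look like the following minimal sketch, where the service connection, resource group and file paths are illustrative assumptions ;

```yaml
steps:
  - task: AzureResourceManagerTemplateDeployment@3
    displayName: 'Deploy the Data Factory ARM template'
    inputs:
      deploymentScope: 'Resource Group'
      azureResourceManagerConnection: 'my-service-connection'   # illustrative
      subscriptionId: '$(subscriptionId)'
      resourceGroupName: 'rg-dataworkflow'                      # illustrative
      location: 'West Europe'
      templateLocation: 'Linked artifact'
      csmFile: 'adf/ARMTemplateForFactory.json'                 # illustrative path
      csmParametersFile: 'adf/ARMTemplateParametersForFactory.json'
      deploymentMode: 'Incremental'
```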

 

Now let’s take a look at that End-to-End flow!

Continue reading “Data Workflows in Azure : Taking an end-to-end look from ingest to reporting!”

Landscaping a Secure/Closed Loop Infrastructure in Azure with Terraform & Azure DevOps

Introduction

Posts about security are always the ones that get everyone really excited… Or maybe not everyone. 😉 Anyhow, what is typically the weakest link in any security design? Indeed, the human touch… The effects of this can range from exposed secrets to drift (unwanted changes versus the expected baseline). In today’s post, I’ll walk you through an example setup that aims to close some additional holes for you. How will we be doing this? By basically automating the entire infrastructure management with Azure DevOps & Terraform. Now you’ll probably think: what does that have to do with security? Good question! We’re going to reduce the points where human contact can interfere with our security measures, though without putting our agility at risk!
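To sketch the core idea before we dive in: a pipeline where only the service principal, and never a human, touches the infrastructure. A minimal sketch, assuming the ARM_* credentials are injected as secret pipeline variables ;

```yaml
trigger:
  branches:
    include: [ master ]

steps:
  # Terraform authenticates via ARM_CLIENT_ID / ARM_CLIENT_SECRET /
  # ARM_SUBSCRIPTION_ID / ARM_TENANT_ID, provided as secret variables.
  - script: terraform init
    displayName: 'Terraform init'
  - script: terraform plan -out=tfplan
    displayName: 'Terraform plan'
  # Applying the saved plan guarantees we roll out exactly what was reviewed
  - script: terraform apply tfplan
    displayName: 'Terraform apply'
```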

 

Blueprint

For this exercise, we’re going to leverage this blueprint ;

Continue reading “Landscaping a Secure/Closed Loop Infrastructure in Azure with Terraform & Azure DevOps”

From Cloud Dev Station to Terraform landscaping in Azure

Introduction

A lot of people keep telling me that they love Azure’s Cloud Shell. Oddly enough, I use it only occasionally and find myself using WSL (Windows Subsystem for Linux) more. If I analyze it a bit, I reckon it’s because I want to easily edit & use files with the Azure CLI (etc.). Now, the Azure Cloud Shell has a way to persist files! Therefore I embarked on a small test to see what kind of workflow would work when authoring Terraform configurations and leveraging the Cloud Shell to apply them.

 

Basic Workflow

So what did I come up with? As you know, I’m running my development workstation in the cloud. In addition, I’ve mounted the CloudDrive onto my workstation and cloned my GitHub repo to that location. Next up, I can author my files locally and afterwards push them to my repository. As the local files are synced with the CloudDrive, they’ll immediately pop up in my Cloud Shell too, so I can apply them there…

Sounds great? Let’s take it for a spin!

Continue reading “From Cloud Dev Station to Terraform landscaping in Azure”

Using Azure DevOps to deploy your static webpage (SPA) to Azure Storage

Introduction

To, without shame, grab the introduction of the “Static website hosting in Azure Storage” page ;

 

Azure Storage now offers static website hosting, enabling you to deploy cost-effective and scalable modern web applications on Azure. On a static website, webpages contain static content and JavaScript or other client-side code. By contrast, dynamic websites depend on server-side code, and can be hosted using Azure Web Apps.

As deployments shift toward elastic, cost-effective models, the ability to deliver web content without the need for server management is critical. The introduction of static website hosting in Azure Storage makes this possible, enabling rich backend capabilities with serverless architectures leveraging Azure Functions and other PaaS services.

 

Which, to me, sounds great! One of my projects (VMchooser) is actually a static site (a VueJS-based Single Page App) that could just as well run on Azure Storage (thus reducing my cost footprint). So today we’re going to test that out, and afterwards integrate it into our existing CI/CD pipeline (powered by Azure DevOps).
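As a preview of where we’ll end up: the deployment part of such a pipeline could look like the following minimal sketch, where the storage account name and service connection are illustrative ;

```yaml
steps:
  - script: |
      npm ci
      npm run build
    displayName: 'Build the VueJS SPA'

  - task: AzureCLI@2
    displayName: 'Publish the build output to the $web container'
    inputs:
      azureSubscription: 'my-service-connection'   # illustrative
      scriptType: 'bash'
      scriptLocation: 'inlineScript'
      inlineScript: |
        # Static website hosting serves content from the special $web container
        az storage blob upload-batch \
          --account-name vmchooser \
          --destination '$web' \
          --source dist
```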

 

Continue reading “Using Azure DevOps to deploy your static webpage (SPA) to Azure Storage”