At the beginning of the month, I posted about my experience of moving VMchooser from “Serverless” to “Containers”. As in, moving from one way of implementing a Cloud Native architecture to another… Since then, I have actually moved back to “Serverless”. Though the cogwheels in my head have been turning 24/7 on how to put everything around this into perspective. Yesterday Yves made a tweet (reply) that really made something click inside my head…
In today’s post I’m going to attempt a “brain dump” of several thoughts that have been floating around in my mind, in the hope that this will help you in your journey of “finding your perfect rock”. I will indicate what I like about each of the options and what my typical advice would be to organizations looking to adopt a given one.
Continue reading “Opinion – Cloud Native, Cloud Native and Cloud Native? What I like about, and my two cents on, running Containers, Kubernetes and/or Serverless”
For this post I am assuming you are pretty familiar with the concept of deployment strategies (if not, check out this post by Etienne). These are typically seen from an application-deployment level, where platforms (like, for instance, Kubernetes) typically have out-of-the-box mechanisms in place to do this. Now what if you would want to do this at an “infrastructure level”, like for instance the Kubernetes version of Azure Kubernetes Service? We could do an in-place upgrade, which will carefully cordon and drain the nodes. Though what if things go bad? Could we do a Canary, Blue/Green, A/B, Shadow, … at the cluster level too? And how would we tackle the infrastructure point of view of this? That is the basis for today’s post!
Architecture at hand
For today’s post, we’ll leverage the following high-level architecture ;
This project leverages Terraform under the hood. Things like DNS, Traffic Manager, Key Vault, CosmosDB, etc. are “stateful”, with their lifecycle fully managed by Terraform. On the other hand, our Kubernetes clusters are “stateless” from an Infrastructure-as-Code point of view. We deploy them via Terraform, though we do not keep track of them… All lifecycle management afterwards is done by operating on the associated tags.
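To give you a feel for that tag-driven approach, here is a minimal sketch of how you could look up the clusters of a given “ring” via Azure Graph from the CLI. Note that the tag name (“environment”) and its values (“blue”/“green”) are assumptions on my part; adapt them to your own tagging scheme ;

```bash
# Requires the Azure CLI with the resource-graph extension installed.
az extension add --name resource-graph

# List all AKS clusters currently tagged as the "blue" environment
# (tag name/values are hypothetical; use whatever your pipeline stamps on the clusters)
az graph query -q "Resources
  | where type =~ 'microsoft.containerservice/managedclusters'
  | where tags['environment'] =~ 'blue'
  | project name, resourceGroup, location, tags"
```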
For once, the drawing above was not created in Visio. It was made with CloudSkew, which was created by Mithun Shanbhag. Always awesome to see community contributions, which we can only applaud!
Continue reading “Leveraging Azure Tags and Azure Graph for deploying to your Blue/Green environments”
Typically you notice that there are two dimensions / viewpoints when it comes to monitoring. On one side, there is a team that wants to view everything related to the “infrastructure”, like for instance the Kubernetes cluster. On the other hand, there is the typical application performance monitoring that starts from the application side. Sadly enough, in a lot of cases, those two are separate islands… 😦
As you might know, on the Azure front you can do Application Performance Monitoring with Application Insights, and there is a really awesome integration with Azure Monitor (“Log Analytics”) from the container space (Kubernetes). Though I see you thinking it… Two separate solutions. What a lot of people forget, though, is that both are actually using “Log Analytics” under the hood. And… that you can query across workspaces in Log Analytics! Which means you can join the two and have an aggregated view that spans both worlds.
Let’s take a look!
For this test, I’ve created a k8s cluster which is linked to a separate Log Analytics workspace. Next to it, there is an application (an Azure Function) inside a Docker container that is linked to Application Insights.
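To give you an idea of where this is going, here is a minimal sketch of such a cross-resource query, unioning the container logs with the application requests. The workspace and Application Insights resource names are hypothetical; you can also paste the inner query straight into the Log Analytics query editor ;

```bash
# Hypothetical resource names; assumes the az CLI log-analytics extension is available.
az monitor log-analytics query \
  --workspace "<k8s-workspace-guid>" \
  --analytics-query '
    union
      (app("vmchooser-appinsights").requests
        | project TimeGenerated = timestamp, Source = "application", Detail = name),
      (workspace("k8s-workspace").ContainerLog
        | project TimeGenerated, Source = "infrastructure", Detail = LogEntry)
    | order by TimeGenerated desc'
```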
Continue reading “Unified monitoring view in Kubernetes : Linking infrastructure monitoring with application monitoring”
In today’s post we’ll go through the steps to get Azure Active Directory (AAD) integrated into Red Hat’s OpenShift, so that we can use the AAD identity we all love in OpenShift too.
For the next steps, I’m assuming you already have an OpenShift deployment up & running. If not, check out this repository!
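As a preview of where we’re heading: the integration boils down to adding an OpenID identity provider to the master configuration. A minimal sketch for OpenShift Origin 3.x follows, where the tenant ID, client ID and secret are placeholders you’ll obtain from your AAD app registration ;

```yaml
# Fragment to merge into /etc/origin/master/master-config.yaml
# (placeholders in <angle brackets> come from your AAD app registration)
oauthConfig:
  identityProviders:
  - name: AzureAD
    challenge: false
    login: true
    mappingMethod: claim
    provider:
      apiVersion: v1
      kind: OpenIDIdentityProvider
      clientID: <aad-app-registration-client-id>
      clientSecret: <aad-app-registration-secret>
      claims:
        id:
        - sub
        preferredUsername:
        - unique_name
        name:
        - name
        email:
        - email
      urls:
        authorize: https://login.microsoftonline.com/<tenant-id>/oauth2/authorize
        token: https://login.microsoftonline.com/<tenant-id>/oauth2/token
```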
Continue reading “Enabling Azure Active Directory support in OpenShift (Origin)”
Today’s post will be the backend tour of “Frietjes-of-Niet” (translated from Dutch : “Fries-or-Not?”). A big part of the mission of Azure is about democratizing technology so it becomes accessible to organizations, enabling them to achieve more. AI (Artificial Intelligence) is a key part of that vision.
What will be the flow for today?
- We’ll train a model to recognize fries
- Next, we’ll export that model to be used as a container
- Afterwards, we’ll build that container
- We’ll end by deploying (and testing) it on AKS
Sound cool? Let’s get to it!
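To give a feel for steps two and three already: once Custom Vision exports the model as a Dockerfile bundle, building and testing it locally looks roughly like this. The file and registry names are hypothetical ;

```bash
# Hypothetical file/registry names; the Custom Vision "Dockerfile" export is a zip
# containing the model, a small scoring app and a ready-made Dockerfile.
unzip fries-classifier.zip -d fries-classifier && cd fries-classifier
docker build -t kvaesregistry.azurecr.io/fries-classifier:v1 .
docker run -d -p 8080:80 kvaesregistry.azurecr.io/fries-classifier:v1

# The exported container exposes a scoring endpoint; test it with a sample image
curl -X POST http://localhost:8080/image -F imageData=@fries.jpg
```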
Continue reading “Azure Custom Vision AI : From training to deploying the container export on the Azure Kubernetes Service (AKS)”
Phew, it’s odd to admit that it has been a while since I’ve posted about Rancher. Though today is as good a day as any to pick up that thread… So today we’ll go through more or less the same objective as in the past, where we’ll notice that the integration has improved significantly with the arrival of AKS! Let’s get today’s post underway and deploy AKS from our Rancher control plane.
Before getting started, I already had the following things ready ;
Continue reading “Taking a glance at Rancher’s ability to manage the Azure Kubernetes Service (AKS)”
A while ago I talked about “FaaS/Serverless” in relation to vendor lock-in. Today we’ll be continuing down that road with a small proof-of-concept (PoC). In this PoC, we’ll be replatforming existing Azure Functions code into an Azure Functions container!
Things to know
Since Azure Functions 2.0 (in preview at the time of writing this post), you are able to leverage containers. Be aware, though, that there are several known issues. Do check them out first before embarking on your journey!
So first, we’ll start off by testing the Azure Functions Core Tools! If you’re looking to follow this guide, be sure to have the Azure Functions Core Tools installed, which also depend on .NET Core 2.0 and Node.js. Once you have those installed, do a “func --help” and you’ll see what capabilities are at hand…
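For reference, a minimal sketch of the commands involved; the project and image names below are mine, so substitute your own ;

```bash
func --help

# Scaffold a function app including a Dockerfile (v2 core tools)
func init MyFunctionApp --docker
cd MyFunctionApp
func new --language JavaScript --template "HTTP trigger" --name HttpTriggerJS

# Build and run the function app as a container
docker build -t kvaes/myfunctionapp:v1 .
docker run -d -p 8080:80 kvaes/myfunctionapp:v1
```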
Continue reading “Replatforming Azure Functions into an Azure Functions Container”
Today’s post is conceptually a rather simple one… Let’s see how we can go from this ;
To here ;
By using a CI/CD pipeline.
Flow of the day
What will we be doing today?
- Kick off a VSTS build once a change has been made to our GitHub repo
- Build a container via VSTS
- Publish the container to an ACR (Azure Container Registry)
- Kick off a VSTS release once the build succeeds
- Use an ARM template to deploy an ACI (Azure Container Instance) with our Docker container underneath
Sound cool? Let’s get to it!
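That release step essentially boils down to an ARM deployment; from the CLI it would look something like this. The resource group, template file and parameter names are assumptions on my part, matching whatever your template defines ;

```bash
# Hypothetical resource group / template / parameter names
az group deployment create \
  --resource-group rg-aci-demo \
  --template-file azuredeploy.json \
  --parameters containerImage=kvaesregistry.azurecr.io/demo:latest \
               registryServer=kvaesregistry.azurecr.io \
               registryUsername=$ACR_USER \
               registryPassword=$ACR_PASS
```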
Continue reading “From GitHub to ACI – A tale of how to use Visual Studio Team Services & Azure Container Registry for Container CI/CD”
In the past I’ve already done several posts about containers, using various orchestrators & workflow-management tools. Today’s post will be about deploying a Linux container with Service Fabric… The main goal is to provide you with the look & feel of the initial steps. In future posts, I’ll delve into the more advanced stuff (like data persistence & inter-container connectivity).
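As a teaser, this is essentially the fragment of the ServiceManifest.xml that turns a service into a container host; the image name below is a placeholder ;

```xml
<!-- Minimal sketch; the image name is a placeholder for your own container image -->
<CodePackage Name="Code" Version="1.0.0">
  <EntryPoint>
    <ContainerHost>
      <ImageName>kvaes/demo-container:latest</ImageName>
    </ContainerHost>
  </EntryPoint>
</CodePackage>
```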
Continue reading “Azure Service Fabric : Deploying your first container…”
When you are deploying an image hosted on a private registry to a Kubernetes (k8s) cluster with Windows nodes, you might get the following error ;
Failed to pull image "kvaes.azurecr.io/kvaes2017:v1": rpc error: code = 2 desc = unknown blob
Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with ErrImagePull: "rpc error: code = 2 desc = unknown blob"
So what did my setup look like?
- Orchestrator : Kubernetes for Windows (Azure Container Service)
- Registry : Private (Azure Container Registry)
- Image : Windows Nano Based
Let’s deploy two pods…
The first one I’ll deploy via YAML; it is basically the example from the Kubernetes docs on pulling an image from a private registry…
Now the second one is an adaptation of the example flow from the Azure Container Service documentation ;
Now let’s see how that one went…
The first one failed, and the second one passed! What was the difference?
Apparently this one forces the switch to “Windows container mode” (or something along those lines…), as it seems very similar to the following thread…
When deploying Windows containers to a Kubernetes cluster, be sure to set the “nodeSelector”, or you might end up with errors when pulling the image.
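For reference, a minimal sketch of what that looks like in practice. The image name is taken from the error above, the pull-secret name is an assumption, and the label value was the convention for Windows nodes at the time ;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg-container
spec:
  nodeSelector:
    beta.kubernetes.io/os: windows   # schedule this pod onto the Windows nodes
  containers:
  - name: private-reg-container
    image: kvaes.azurecr.io/kvaes2017:v1
  imagePullSecrets:
  - name: acr-secret   # assumed name of your docker-registry secret for ACR
```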