A few weeks ago, “HA Ports” finally saw the light of day (in public preview)! I’m truly excited about this one, as it had become a bit of a “unicorn” for me over the past few years.
Why am I so excited about this one? It unlocks a range of advanced networking patterns, starting with a truly HA setup for Network Virtual Appliances (NVAs). In the past, we needed to rely on “workarounds” that would switch the UDR to point to the surviving node. That was great for its time, but let’s be honest… it shouldn’t have been like that.
Another use case is the scenario where an application needs to connect to dynamic port ranges (as with SQL Server). I’ve seen several deployments hampered by this requirement, which then forced people to create a lot of load balancing rules. This can now be avoided by allowing the entire port range, and just hardening it with a “Network Security Group” or firewall rule base.
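As a minimal sketch of how this looks with the Azure CLI: an HA Ports rule on an internal Standard Load Balancer is expressed as protocol “All” with frontend and backend port 0. All resource names below (`myRG`, `myILB`, `myVnet`, etc.) are placeholders, not values from this post.

```shell
# Create an internal Standard LB (HA Ports requires the Standard SKU)
az network lb create --resource-group myRG --name myILB --sku Standard \
  --vnet-name myVnet --subnet mySubnet \
  --frontend-ip-name nvaFrontEnd --backend-pool-name nvaBackEnd

# The HA Ports rule: protocol "All", port 0 on both sides
# balances every TCP/UDP flow on every port to the NVA pool
az network lb rule create --resource-group myRG --lb-name myILB \
  --name haPortsRule --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name nvaFrontEnd --backend-pool-name nvaBackEnd
```

With this single rule in place, the NVAs behind the pool receive all traffic, and you scope it down afterwards with NSG or firewall rules rather than per-port load balancing rules.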
Continue reading “Azure Star / Any Load Balancer or … like we would like to call it “HA Ports””
When moving to the cloud, some form of network integration is hard to avoid. Taking a look at “Infrastructure-as-a-Service”, there are several common patterns utilized by enterprises. Today we’ll discuss these patterns…
Typical Network Maturity Models
Embarking on a cloud journey? You’ll typically go through the following patterns depending on your “maturity level” in working with the cloud ;
- “Island” : The first approach is typically “the island”. The VMs reside in a VNET that is not connected/integrated with any other networks, except for (maybe) the internet.
- “Forced Tunneling” : The first step towards integration is “forced tunneling”. Here you want to access “On Premises” resources, though the volume of resources on Azure does not justify the investment in a “Network Virtual Appliance” (AKA firewall). Here you set up a “UDR” (User Defined Route, AKA static route), where you force all traffic to go back to the “On Premises” network.
- “Single VNET with DMZ” : One step beyond “forced tunneling” is moving towards the typical DMZ-like pattern, where you set up an HA pair of “Network Virtual Appliances” and segregate network zones.
- “Hub & Spoke”-model : Growing even further, you’ll have multiple subscriptions. Setting up “NVAs” in all of those can be quite expensive. In terms of governance, this is also a nice model, where you can consolidate all network integration into a segregated subscription/VNET.
The advantage of these patterns is that you can evolve into another pattern without breaking anything in terms of design.
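To make the “forced tunneling” step concrete, here is a minimal Azure CLI sketch of the UDR involved: a default route (0.0.0.0/0) pointing at the virtual network gateway, attached to a subnet. The resource names (`myRG`, `toOnPrem`, `myVnet`, `mySubnet`) are placeholders.

```shell
# Create a route table holding the forced-tunneling route
az network route-table create --resource-group myRG --name toOnPrem

# Default route: send all traffic back "On Premises" via the VPN/ER gateway
az network route-table route create --resource-group myRG \
  --route-table-name toOnPrem --name default-to-onprem \
  --address-prefix 0.0.0.0/0 --next-hop-type VirtualNetworkGateway

# Associate the route table with the workload subnet
az network vnet subnet update --resource-group myRG \
  --vnet-name myVnet --name mySubnet --route-table toOnPrem
```

Moving to the “Single VNET with DMZ” pattern later is mostly a matter of repointing this route (next hop type `VirtualAppliance` plus the NVA’s IP), which is why these patterns evolve without breaking the design.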
Continue reading “Azure Networking : Blueprint patterns for enterprises”
A topic that’s often discussed in workshops is “Availability Sets”. And during that topic, a question/comment pops up every time ; “Can I schedule the maintenance for my VMs, because…”. Today we’ll delve into that part.
Why do we need maintenance?
For some this might seem like a very odd question to pose, as maintenance is a given fact of life. Though some organisations live by the mantra “if it ain’t broke, don’t fix it”, and once a system gets deployed, they’ll (try to) never touch it again…
Continue reading “Azure : How to prepare for maintenance impacting your Virtual Machines”
There are several questions I’m often posed that relate to availability on Azure. In today’s post, we’ll take a look at the different availability patterns. I hope this will answer a big portion of the questions you might have about availability on Azure. The main focus of this post is the “IaaS” chunk of Azure services. Services like Azure SQL, Web Apps, etc. may take a totally different approach. But then again, you are not responsible for designing (and thus do not need to worry about) the availability aspect of those services.
Continue reading “Azure : Availability Patterns for IaaS – Can I do multiple regions?”
Today I was setting up a Traffic Manager deployment in Resource Manager. I wanted a rather “simple” failover scenario where my secondary site would only take over when my primary site was down. As you might know, there are several routing methods, where “failover” is one ;
Failover: Select Failover when you have endpoints in the same or different Azure datacenters (known as regions in the Azure classic portal) and want to use a primary endpoint for all traffic, but provide backups in case the primary or the backup endpoints are unavailable.
Though I was surprised that the naming between the “classic mode” (“the old portal“) and “resource manager” (“the new portal“) were different!
“Classic Mode” / Service Management
So when taking a look at “classic mode”, we see three methods ;
They are described fairly in-depth on the documentation page, though in short ;
- Performance : You’ll be redirected to the closest endpoint (based on network response in ms)
- Round Robin : The load will be distributed between all nodes. Depending on the weight of a node, one might get more or less requests.
- Failover : A pecking order will be in place. The highest-ranking endpoint that is alive will receive the requests.
“New Portal” / Resource Manager
When taking a look at “Resource Manager”, we’ll see (again) three methods ;
Though the naming differs… When going into the technical details, it’s more a naming thing than a technical thing. The functionality is (give or take) the same. Where “Round Robin” had the option of weights (1-1000) before, this is now a focal point. Where “Failover” was working with a list (visualization), you can now directly alter the “priority” (1-1000) of each endpoint.
The info when checking out the routing method from within the portal ;
- Performance: Use this method when your endpoints are deployed in different geographic locations, and you want to use the one with the lowest latency.
- Priority: Use this method when you want to select an endpoint which has highest priority and is available.
- Weighted: Use this method when you want to distribute traffic across a set of endpoints as per the weights provided.
Where the naming differs between the two stacks, the functionality remains the same ;
- Performance didn’t get renamed
- Round Robin became “Weighted”
- Failover became “Priority”
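For the failover scenario that started this post, here’s a minimal Azure CLI sketch using the Resource Manager (“Priority”) naming. The profile/endpoint names, DNS label, and the two `$..._ID` resource IDs are placeholders for your own web app or public IP resources.

```shell
# Priority-routed profile: the Resource Manager name for "Failover"
az network traffic-manager profile create --resource-group myRG \
  --name myProfile --routing-method Priority \
  --unique-dns-name myapp-tm --ttl 30 \
  --protocol HTTPS --port 443 --path /health

# Priority 1 = primary site; all traffic goes here while it is healthy
az network traffic-manager endpoint create --resource-group myRG \
  --profile-name myProfile --name primary --type azureEndpoints \
  --target-resource-id "$PRIMARY_SITE_ID" --priority 1

# Priority 2 = secondary site; only receives traffic when primary is down
az network traffic-manager endpoint create --resource-group myRG \
  --profile-name myProfile --name secondary --type azureEndpoints \
  --target-resource-id "$SECONDARY_SITE_ID" --priority 2
```

Swapping `--routing-method` to `Weighted` or `Performance` (and `--priority` to `--weight` for the former) gives you the other two methods discussed above.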
It is important to know that you will only get an SLA (99.95%) with Azure when you have two machines deployed (within one availability set) that do the same thing. If this is not the case, then Microsoft will not guarantee anything. Why is that? Because during service windows, a machine can go down. Those service windows are quite broad in terms of time, and you cannot negotiate them or know the exact downtime.
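As a minimal sketch of that prerequisite in the Azure CLI: two VMs placed in the same availability set, which spreads them across fault and update domains so a service window never takes both down at once. Names, image, and credentials are placeholders.

```shell
# Availability set: spreads members over fault + update domains
az vm availability-set create --resource-group myRG --name sqlAvSet \
  --platform-fault-domain-count 2 --platform-update-domain-count 5

# Two identical SQL nodes in the same availability set -> 99.95% SLA
for node in sqlNode1 sqlNode2; do
  az vm create --resource-group myRG --name "$node" \
    --availability-set sqlAvSet \
    --image Win2016Datacenter \
    --admin-username azadmin --admin-password "$ADMIN_PASSWORD"
done
```

Note that the availability set must be chosen at VM creation time; you cannot move an existing VM into one without redeploying it.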
That being said… Setting up your own highly available SQL database is not that easy. There are several options, though it basically boils down to the following ;
- an AlwaysOn Availability Groups setup
- a Failover Cluster backed by SIOS DataKeeper
While I really like AlwaysOn, there are two downsides to that approach ;
- to really enjoy it, you need the enterprise edition (which isn’t exactly cheap)
- not all applications support AlwaysOn with their implementations
So a lot of organisations were stranded when it came to moving SQL to Azure. Though, thank god, a third-party tool came to the rescue ; SIOS DataKeeper ! Now we can build our traditional Failover Cluster on Azure.
Before we start, let’s delve into the design for our setup ;
Continue reading “Azure : Setting up a high available SQL cluster with standard edition”
One of the sensitive areas when it comes to Docker is persistent storage… A typical service upgrade involves shutting down the “V1” container and pulling/starting the “V2” container. If no actions are taken, then all your data will be wiped… This is not really the scenario we want, of course!
So today we’ll go over several variants when it comes down to data persistence ;
- Default : No Data Persistence
- Data Volumes : Container Persistence
- Data Only Container : Container Persistence
- Host Mapped Volume : Container Persistence
- Host Mapped Volume, backed by Shared Storage : Host Persistence
- Convoy Volume Plugin : Host Persistence
What do I mean by the different (self-invented) persistence levels ;
- Container : An upgrade of the container will not wipe the data
- Host : A host failure will not result in data loss
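To preview the difference between the first few variants, here is a short Docker CLI sketch of the “V1 to V2” upgrade with and without a data volume. The image (`nginx`) and the paths are illustrative placeholders, not the setup used later in the post.

```shell
# Default: data lives in the container's writable layer.
# Removing the container for an upgrade wipes it.
docker run -d --name web-v1 nginx:1.10

# Data volume (container persistence): the named volume outlives
# the container, so the "V2" container sees the same data.
docker volume create appdata
docker run -d --name web-v1 -v appdata:/usr/share/nginx/html nginx:1.10
docker rm -f web-v1
docker run -d --name web-v2 -v appdata:/usr/share/nginx/html nginx:1.11

# Host mapped volume: bind-mount a host directory instead.
# Survives container upgrades, but still dies with the host
# unless the directory sits on shared storage.
docker run -d --name web -v /srv/appdata:/usr/share/nginx/html nginx:1.11
```

The last two variants in the list above (shared storage and the Convoy volume plugin) build on the same `-v` mechanics but move the backing store off the host, which is what buys you host-level persistence.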
So let’s go through the different variants, shall we?
Continue reading “Docker : Storage Patterns for Persistence”