How to try out the experimental Windows 2016 support in the Rancher 1.3.0 release candidate?

Introduction

Yesterday Rancher commented on my GitHub request for Windows support ;

Tested with rancher-server version – v1.3.0-rc1 with catalog “library” set to vnext branch in https://github.com/rancher/rancher-catalog

Able to add “Windows Server 2016 Standard Evaluation” hosts successfully to rancher environment with orchestration set to “windows”.

Able to launch containers in “nat” network and “transparent” network.

@kvaes , Windows 2016 support will be available as experimental feature in rancher-server 1.3.0 release.

Great news! Let’s take it out for a spin… 😀

 

Rancher Host

Installing the host(s) is the same as any other time…  Though the host will still be a Linux machine, of course ;

sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:v1.3.0-rc1

Though notice that I specified the v1.3.0-rc1 tag… And let the system do its magic!
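If you want to watch that magic happen, the standard Docker commands suffice ; nothing Rancher-specific is needed here (the container id below is of course a placeholder) :

  # look up the container id, then tail the server logs until the UI is up
  sudo docker ps
  sudo docker logs -f <container-id>

Once the log output settles down, browse to http://<your-host>:8080 to reach the Rancher UI.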

(Update : For the stable release, you can omit the -rc1 part!)

Note ; Be aware that this is an early release candidate. Do not use this in production! There is, for instance, a bug in the GUI where the “Auto” theme malfunctions ; switch to the light or dark theme to work around it. 😉

Continue reading “How to try out the experimental Windows 2016 support in the Rancher 1.3.0 release candidate?”

Wow… I got awarded with the Microsoft MVP Award!

Today I received the very exciting news that I got my first “Microsoft MVP Award”! I’m now one of the (eight?) MVPs in Belgium for Azure! Of course I’m very happy to receive this award for my Azure contributions over the last year. This is an additional incentive to keep giving back to the community!


For those who are unaware of the Microsoft MVP Awards, here is a small description ;

Microsoft Most Valuable Professionals, or MVPs, are technology experts who passionately share their knowledge with the community. They are always on the “bleeding edge” and have an unstoppable urge to get their hands on new, exciting technologies. They have very deep knowledge of Microsoft products and services, while also being able to bring together diverse platforms, products and solutions, to solve real world problems. MVPs are driven by their passion, community spirit and their quest for knowledge. Above all and in addition to their amazing technical abilities, MVPs are always willing to help others – that’s what sets them apart.

Deploying OMS for Docker via Rancher

Introduction
Today we’ll be deploying Microsoft Operations Management Suite (OMS) for Docker via Rancher… Sound cool? It is! Basically, we’re going to walk through the following guide and add Rancher as a twist.

For those unfamiliar with the Microsoft offering and more at home in the OSS community : imagine OMS as the Microsoft counterpart of a typical ELK stack. The advantage is that it’s managed and that a lot of integrations are already available.
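To give you a taste of where we’re heading ; stripped of the Rancher twist, the agent deployment from that guide boils down to one container per host, along the lines of the sketch below (the microsoft/oms image and the WSID/KEY variables are the ones from the Microsoft guide ; substitute your own OMS workspace id and key) :

  # run the OMS agent, letting it read the local Docker daemon via the socket
  sudo docker run --privileged -d \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e WSID="<your-workspace-id>" -e KEY="<your-workspace-key>" \
    --name=omsagent --restart=always microsoft/oms

The Rancher part of the exercise is then mainly about packaging this container as a service, so it automatically lands on every host in your environment.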

Continue reading “Deploying OMS for Docker via Rancher”

Azure : Benchmarking SQL Database Setups – To measure is to know, and being able to improve…

Introduction

To measure is to know. If you cannot measure it, you cannot improve it!

Today’s post will go more in-depth on what performance to expect from different SQL implementations in Azure. We’ll be focusing on two kinds of benchmarks ; the storage subsystem and an industry benchmark for SQL. This way we can compare the different scenarios to each other in the most neutral way possible.
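For the storage subsystem side, think of a tool like Microsoft’s DiskSpd. As an illustration (these parameter values are just an example, not necessarily the exact ones from my test runs) :

  # 60-second random IO run ; 30% writes, 4 threads, 8 outstanding IOs, 8K blocks, caching disabled
  diskspd.exe -c10G -d60 -r -w30 -t4 -o8 -b8K -Sh -L E:\testfile.dat

The 8K block size mimics SQL Server’s page size ; -L adds latency statistics to the output.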


Test Setup

As a test bed, I started from the setup of one of my previous posts ;

(Diagram ; the highly available SQL cluster with SIOS DataKeeper on Azure)

The machines I used were DS1 v2 machines when using a single disk and DS2 v2 machines when using multiple disks. In terms of OS, I’ll be using Windows 2012 R2, with MSSQL 2014 (12.0.4100.1) as the database.

Continue reading “Azure : Benchmarking SQL Database Setups – To measure is to know, and being able to improve…”

Azure : Performance limits when using MSSQL datafiles directly on a Storage Account

Introduction

In a previous post I explained how you are able to integrate MSSQL with Azure Storage by storing the data files directly on the storage account.


Now this made me wonder what the performance limitations of this setup would be. After doing some research, the basic rule is that the same logic applies to the “virtual disks” as to the “data files”… Why is this? They are both “blobs” ; the virtual disk is a blob of the “disk” kind, and the data files are “page blobs”.
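To make that concrete ; at the time of writing, a single standard page blob targets roughly 500 IOPS and 60 MB/s, which is exactly the target of a single standard data disk. So a database with, say, four data files on four blobs can reach roughly 4 x 500 = 2,000 IOPS, just like a four-disk stripe set would, until you run into the overall limits of the storage account.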


Continue reading “Azure : Performance limits when using MSSQL datafiles directly on a Storage Account”

Azure : Setting up a highly available SQL cluster with standard edition

Introduction

It is important to know that you will only get an SLA (99.95%) with Azure when you have two machines deployed (within one availability set) that do the same thing. If that is not the case, Microsoft will not guarantee anything. Why is that? Because a machine can go down during a service window. Those service windows are quite broad in terms of timing, and you will not be able to negotiate or know the exact downtime. (And even 99.95% still leaves room for roughly 22 minutes of downtime per month.)

That being said… Setting up your own highly available SQL database is not that easy. There are several options, though it basically boils down to the following ;

  • an AlwaysOn Availability Groups setup
  • a Failover Cluster backed by SIOS DataKeeper

While I really like AlwaysOn, there are two downsides to that approach ;

  • to really enjoy it, you need the Enterprise edition (which isn’t exactly cheap)
  • not all applications support AlwaysOn in their implementations

So a lot of organisations were stranded when it came to moving their SQL workloads to Azure. Though, thank god, a third-party tool came to the rescue ; SIOS DataKeeper! Now we can build our traditional Failover Cluster on Azure.
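To give you an idea ; the Windows side of such a build is plain failover clustering. A minimal sketch (with placeholder names and addresses ; the DataKeeper replicated volume is configured separately through its own console and then shows up in the cluster as a regular disk resource) :

  # on both nodes ; install the failover clustering feature
  Install-WindowsFeature Failover-Clustering -IncludeManagementTools
  # form the cluster without shared storage ; DataKeeper provides the replicated volume
  New-Cluster -Name KVAES-SQL-CL -Node KVAESSQL1,KVAESSQL2 -StaticAddress 10.0.0.10 -NoStorage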

 

Design

Before we start, let’s delve into the design for our setup ;

(Diagram ; the highly available SQL cluster design with SIOS DataKeeper on Azure)

Continue reading “Azure : Setting up a highly available SQL cluster with standard edition”

Network virtualization ; Do I go for NVGRE or VXLAN with vNext?

With Windows 2016 / vNext, network virtualization has undergone a fundamental change… In 2012, Microsoft only supported its own NVGRE protocol. With the upcoming release, Microsoft will support both NVGRE and VXLAN! It even goes so far that VXLAN will be the default protocol. So in VHS-versus-Betamax terms, we can conclude that Microsoft has decided that the market is more in favor of VXLAN.
  
What does that mean for organizations that have already implemented NVGRE? Both protocols will be supported by the typical Mellanox or Chelsio adapters in terms of translations. The HNV (Hyper-V Network Virtualization) stack will also support both simultaneously. Though if you are looking to start a new implementation, it is best to go for VXLAN now!

Microsoft Storage Spaces Direct – What is in it for me?

Introduction

Storage Spaces is a technology in Windows and Windows Server that enables you to virtualize storage by grouping industry-standard disks into storage pools, and then creating virtual disks called storage spaces from the available capacity in the storage pools.
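In PowerShell terms, that grouping looks like the sketch below (pool and disk names are placeholders) :

  # group all disks that are eligible for pooling into a storage pool
  $disks = Get-PhysicalDisk -CanPool $true
  New-StoragePool -FriendlyName Pool01 -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks
  # carve a mirrored storage space (virtual disk) out of that pool
  New-VirtualDisk -StoragePoolFriendlyName Pool01 -FriendlyName Space01 -ResiliencySettingName Mirror -Size 100GB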

For me… Storage Spaces is a disruptor in the enterprise landscape. After the mainframe, we went towards Intel hardware. The systems had their own disks, and we used them stand-alone or did some cloning/copying between them. As the data center grew, this became unmanageable, and we turned towards SAN systems. Here we had several challengers over the years, though the concepts remained the same. At a given point, I was hoping VMware would turn towards the concept that Nutanix eventually released upon the world ; server hardware with direct attached storage, and replication via the virtualization host.

Anyhow, the aspect of using JBOD storage with basic enterprise SAN features excites me! This could provide a lot of IT departments with some budget space to maneuver again… Though a SAN is commonly still a single point of failure, so a kind of scale-out concept (instead of the typical SAN scale-up!) would be great. And with Storage Spaces Direct, Microsoft has hit the nail dead on for me!

Hyper Convergence vs Convergence

With Storage Spaces in Windows 2012, we got basic NAS functionality ; raid0/raid1/raid5, snapshots, pooling. With 2012 R2, the game was on… Storage Tiering, Data Deduplication, flexible resilience (“Dynamic Hot Spare”) & a persistent write-back cache. Microsoft had turned the vision of SMB as a file server towards a NAS (kinda like NetApp for Linux/VMware). And with the Scale-Out File Server (SOFS), you pretty much had a basic SAN that covered the basic needs of the majority of the SME landscape.

Though, as you can see, the architecture was still comparable to the “SAN” architectures we see a lot in the field. The concept that Nutanix brought to life wasn’t there yet. Then, at Microsoft Ignite, Storage Spaces Direct (“S2D”) was announced. And along with it, the possibility to go towards a hyper-converged architecture. At that point, a lot of people tweeted ; “Nutanix their stock is worth less after Ignite”. And to be honest, there lies a lot of truth in those tweets… You are now able to build the same kind of hyper-converged architecture with Microsoft components.

With S2D, you have two conceptual options ;

  • Hyper Converged – “Nutanix Mode” – Scale out with storage & CPU power combined.
  • Converged / Disaggregated – “Traditional Mode” – Scale out with SOFS & Compute nodes separately.
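To give you a feel for the hyper-converged flow ; a sketch based on the Technical Preview cmdlets (node, cluster, and volume names are placeholders) :

  # form a cluster from the nodes with local disks, then light up S2D on top of it
  New-Cluster -Name S2D-CL -Node N1,N2,N3,N4 -NoStorage
  Enable-ClusterStorageSpacesDirect
  # carve a cluster shared volume out of the automatically created pool
  New-Volume -StoragePoolFriendlyName S2D* -FriendlyName VD01 -FileSystem CSVFS_ReFS -Size 1TB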

For the entry-tier segment, this technology step is huge. Twitter has been buzzing about the fact that features like replication are part of the Datacenter edition. Though for me, the hyper-converged part solves this. And let’s be honest… we all know the R&D money needs to come from somewhere, and in the next edition it’ll go towards the Standard edition.

Storage Replica

So what drives the “Hyper Converged” engine? Storage replication…

Source : https://msdn.microsoft.com/en-us/library/mt126183.aspx

The replication comes in the two traditional modes ;

  • Synchronous Replication – Synchronous replication guarantees that the application writes data to two locations at once before completion of the IO. This replication is more suitable for mission-critical data, as it requires network and storage investments and carries a risk of degraded application performance. Synchronous replication is suitable for both HA and DR solutions. When application writes occur on the source data copy, the originating storage does not acknowledge the IO immediately. Instead, those data changes replicate to the remote destination copy and return an acknowledgement. Only then does the application receive the IO acknowledgment. This ensures constant synchronization of the remote site with the source site, in effect extending storage IOs across the network. In the event of a source site failure, applications can fail over to the remote site and resume their operations with assurance of zero data loss.
  • Asynchronous Replication – Conversely, asynchronous replication means that when the application writes data, that data replicates to the remote site without immediate acknowledgment guarantees. This mode allows a faster response time to the application, as well as a DR solution that works geographically. When the application writes data, the replication engine captures the write and immediately acknowledges it to the application. The captured data then replicates to the remote location. The remote node processes the copy of the data and lazily acknowledges back to the source copy. Since replication performance is no longer in the application IO path, the remote site’s responsiveness and distance are less important factors. There is a risk of data loss if the source data is lost while the destination copy of the data was still in buffer, without having left the source. With its higher-than-zero RPO, asynchronous replication is less suitable for HA solutions like failover clusters, as they are designed for continuous operation with redundancy and no loss of data.
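Choosing between those two modes is literally one parameter on the replication partnership you create. A sketch (server, replication-group, and volume names are placeholders ; each side needs a data volume and a log volume) :

  # replicate E: from SRV01 to SRV02 synchronously, with F: as the log volume on each side
  New-SRPartnership -SourceComputerName SR-SRV01 -SourceRGName RG01 `
    -SourceVolumeName E: -SourceLogVolumeName F: `
    -DestinationComputerName SR-SRV02 -DestinationRGName RG02 `
    -DestinationVolumeName E: -DestinationLogVolumeName F: `
    -ReplicationMode Synchronous   # or Asynchronous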

In addition, TechNet states the following ;

The Microsoft implementation of asynchronous replication is different from most. Most industry implementations of asynchronous replication rely on snapshot-based replication, where periodic differential transfers move to the other node and merge. SR asynchronous replication operates just like synchronous replication, except that it removes the requirement for a serialized synchronous acknowledgment from the destination. This means that SR theoretically has a lower RPO as it continuously replicates. However, this also means it relies on internal application consistency guarantees rather than using snapshots to force consistency in application files. SR guarantees crash consistency in all replication modes.

Reading the above, on a conceptual level this can be compared to NetApp’s “near online sync” implementation. Anyhow, very cool stuff from Microsoft, as it is really entering the SAN market space and understanding the necessities that entails. Another important note ;

The destination volume is not accessible while replicating. When you configure replication, the destination volume dismounts, making it inaccessible to any writes by users or visible in typical interfaces like File Explorer. Block-level replication technologies are incompatible with allowing access to the destination target’s mounted file system in a volume; NTFS and ReFS do not support users writing data to the volume while blocks change underneath them.

From a technical stance, this is completely understandable. Though, do not expect to have “local” access to all data when doing “hyper convergence”. So you will need a high-speed / low-latency network between your hyper-converged nodes! Think RDMA with InfiniBand/iWARP…
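A quick way to check whether your current adapters can play that game (in-box cmdlets, nothing extra to install) :

  # list your adapters and whether RDMA is enabled on them
  Get-NetAdapterRdma
  # the SMB view ; shows which interfaces SMB Direct considers RDMA-capable
  Get-SmbClientNetworkInterface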

Eager for more? Feel free to ping me!

Microsoft Azure : Windows Azure Pack versus Azure Stack

During Ignite, we got a first glimpse of Azure Stack… A bit after that, Kristian Nese also posted some insights about the upcoming Azure Stack.

Though earlier this week, I was corrected when mentioning that “Azure Stack” is the new version of the “Windows Azure Pack”. While it might be seen as the third iteration, a lot has changed under the hood. I did some digging, and the infographic below is the result of my research.

(Infographic ; Windows Azure Pack versus Azure Stack)

Though if you are wondering what to do when looking to move towards a private cloud… Make your first transition to WAP now, as it’s still supported till 2017! Azure Stack is part of the Windows 2016 release momentum, so it’s not available yet (and you should not expect it next week). Though be aware that you should set it as your target once it has been released. It will be an experience that’s more aligned with the Azure public cloud, and it even extends the private cloud experience further towards true hybrid cloud capabilities.