Taking a glance at Rancher’s ability to manage the Azure Kubernetes Service (AKS)

Introduction

Phew, it’s odd to admit that it has been a while since I’ve posted about Rancher. Though today is as good a day as any to pick up that thread… So today we’ll work through more or less the same objective as in the past, where we’ll notice that the integration has improved significantly with the arrival of native AKS support! Let’s get today’s post underway and deploy AKS from our Rancher control plane.

Preparation

Before getting started with the steps below, I already had the following things ready ;

 

Creating our AKS from Rancher

Go to the clusters, and select “Add Cluster”.

Here you can see AKS ;

Do notice the following…

Now we’ll need to enter some information to get the Azure integration operational.

 

Getting the info for our Azure Integration

If you’re stuck at the previous screen, here is a summary of where you can find the information ;

  • Subscription ID : Subscriptions => select your subscription => copy the Subscription ID
  • Tenant ID : Azure Active Directory => Properties
  • Client ID/Secret : Azure Active Directory => App registrations => copy the “Application ID” for the “Client ID”, and generate a key to use as the “Client Secret”
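If you prefer the command line, the same information can be gathered (and the service principal created) with the Azure CLI instead of clicking through the portal. A quick sketch, where the service principal name “rancher-aks” is just an example:

```shell
# Subscription ID & Tenant ID of the currently selected subscription
az account show --query id -o tsv
az account show --query tenantId -o tsv

# Create a service principal for Rancher; the output contains
# "appId" (= Client ID) and "password" (= Client Secret)
az ad sp create-for-rbac --name rancher-aks
```

This requires an authenticated `az login` session, and the service principal needs sufficient rights on the subscription to create the AKS resources.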

Continue along the path to create the cluster / AKS

If you want, you can dive into the more advanced networking settings too…

Though I kept it to the default settings for now ;

Let’s press “Create” and see what happens.

And we’ll see things being added in the Azure portal ;

 

Where the AKS object has been created ;

 

Browsing through some additional settings

In terms of authentication, do note that you can set up AD, AAD & ADFS as options for your authentication… 😉

In addition, I’ve enabled “Helm Stable” as an additional catalog option ;

 

Taking a glance at the cluster

Let’s take a look at the cluster, shall we? Our cluster is up & running. The errors seem “normal” when taking a look at the known issues on GitHub.

Checking out the nodes, we can see that they are nicely tagged and running version 1.8.11, which is the current default when provisioning AKS ;

 

If we launch kubectl from the cluster view, then we can see the same info.
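For reference, the same node details can be pulled with a couple of kubectl commands; a minimal sketch, assuming your kubeconfig points at this cluster:

```shell
# Show the nodes with their Kubernetes version and internal/external IPs
kubectl get nodes -o wide

# Show the labels ("tags") that AKS has put on the nodes
kubectl get nodes --show-labels
```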

 

Upgrading AKS

From the Azure portal, let’s upgrade the Kubernetes stack (going from 1.8.11 to 1.9.10) ;

 

We’ll notice that the node is being “cordoned” and that a new node with the 1.9.10 version has been added ;

Once that was done, and after a bit of cool down to let everything settle in…

I went on to 1.10.7 ;

  

Which did the same thing as with the previous upgrade ;

Do note that the reported cluster version remains at the initial version. This seems to be a known issue, again judging by the issues on GitHub.
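The same upgrade can also be driven from the Azure CLI instead of the portal. A sketch, where the resource group “my-rg” and cluster name “my-aks” are just example names:

```shell
# Which versions can we upgrade to from the current one?
az aks get-upgrades --resource-group my-rg --name my-aks -o table

# Upgrade the control plane and nodes to 1.9.10;
# nodes get cordoned, drained and replaced one by one
az aks upgrade --resource-group my-rg --name my-aks --kubernetes-version 1.9.10
```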

 

 

Azure Storage Integration

Another very good thing to notice is that the Azure storage classes have been natively integrated!

 

Likewise for the persistent volumes ;
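To give an idea of how that integration is consumed, claiming an Azure-backed disk is just a matter of referencing one of those storage classes in a PersistentVolumeClaim. A hypothetical sketch, using the “default” (azure-disk backed) class that AKS provisions out of the box:

```yaml
# Hypothetical PVC; "default" is the azure-disk backed storage class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-azure-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: default
  resources:
    requests:
      storage: 5Gi
```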

 

Cattle in AKS

Cattle is Rancher’s own orchestration engine. It’s been set up nicely in the System project, with its own namespace ;

 

Deploying from the Catalog

Now that our system matches our expectations, we’ll start adding workloads.

Let’s take Drupal for instance, where it’s being sourced from the Helm Stable catalog ;

Just like in the past, you can tweak the settings ;

And launch the deployment.
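For comparison, this is roughly what the catalog is doing under the hood via Helm. A sketch using the (v2-era) Helm CLI, where the release name “my-drupal” is just an example:

```shell
# Install Drupal from the stable chart repository
helm install stable/drupal --name my-drupal

# Check what the release deployed
helm status my-drupal
```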

By default it’ll get its own namespace…

and you can see it’s being activated ;

Now I went a bit “berserk” and created some additional workloads from the catalog too…

Now let’s see what the “Load Balancing” says ;

Now if we go to the three dots and select “View in API” ;

Then we can see the public endpoints ;

And the workload is accessible on this endpoint ;

Likewise for another workload ;

And if we take a look at the backend Azure objects, then we can see that the NSGs have been set correctly ;

 

And that the load balancer has been updated accordingly ;
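Behind the scenes, it’s a Service of type LoadBalancer that makes the Azure cloud provider wire up those load balancer rules and NSG entries. A hypothetical minimal manifest:

```yaml
# Hypothetical service; the Azure cloud provider watches for
# type: LoadBalancer and creates the matching LB + NSG rules
apiVersion: v1
kind: Service
metadata:
  name: my-workload
spec:
  type: LoadBalancer
  selector:
    app: my-workload
  ports:
    - port: 80
      targetPort: 8080
```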

 

Azure integrated Volumes

When looking at the volumes…

…we notice the same level of integration ;

 

Deploy your own workload

Using the catalog is great, though let’s see if we can deploy our own little container. Press “Deploy” under “Workloads” ;

And enter all the information needed ;

Here I learned that caps aren’t appreciated… 😉

But that was quickly fixed!
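The “caps” complaint comes from Kubernetes itself: object names must be valid DNS-1123 labels, i.e. lowercase alphanumerics and “-”, starting and ending with an alphanumeric. A quick local check of a candidate name, where `is_valid_name` is just a helper written for this sketch:

```shell
# DNS-1123 label check: lowercase alphanumerics and '-',
# must start and end with an alphanumeric character
is_valid_name() {
  echo "$1" | grep -Eq '^[a-z0-9]([-a-z0-9]*[a-z0-9])?$'
}

is_valid_name "MyWorkload" && echo "valid" || echo "invalid"   # invalid: capitals
is_valid_name "my-workload" && echo "valid" || echo "invalid"  # valid
```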

And now we have our little workload present.

Where we can still alter (“upgrade”) it if needed, just like in the past ;  

And inside the details of our pod … 

… we can see the status of the deployment, and also what events occurred.

 

 

 

Closing Thoughts

Rancher has evolved nicely with the 2.0 version, which natively supports AKS. Looking at the combination of Rancher & AKS, I must admit that I’m excited! Where AKS provides a stable, flexible and scalable platform, Rancher can provide workflow orchestration on top of it. In addition, typical Kubernetes deployments are far from user-friendly, and Rancher greatly reduces that knowledge / entry barrier for stepping into Kubernetes.
