Azure & Cross-Host Container Networking using Rancher

Introduction

Today we’ll try to understand a bit more about the Rancher cross-host networking capabilities.

Networking

Rancher supports cross-host container communication by implementing a simple and secure overlay network using IPsec tunneling. To leverage this capability, a container launched through Rancher must select "Managed" for its network mode or, if launched through Docker, provide an extra label "--label io.rancher.container.network=true". Most of Rancher's network features, such as load balancer or DNS service, require the container to be in the managed network.

Under Rancher’s network, a container will be assigned both a Docker bridge IP (172.17.0.0/16) and a Rancher managed IP (10.42.0.0/16) on the default docker0 bridge. Containers within the same environment are then routable and reachable via the managed network.

Note: The Rancher managed IP address will not be present in Docker metadata and as such will not appear in the result of a Docker "inspect." This sometimes causes incompatibilities with certain tools that require a Docker bridge IP. We are already working with the Docker community to make sure a future version of Docker can handle overlay networks more cleanly.

Source: http://docs.rancher.com/rancher/concepts/#networking

So in short… you can create a virtual network spanning across all hosts using Rancher. At the time of writing, this is still based upon an IPsec VPN implementation underneath, while RancherLabs is looking to adopt the "new" native Docker overlay networking. Be aware that Weave is also pretty well known, and used, within the community. Though at this point I want to keep it as simple as possible…
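To make this concrete, here is a minimal sketch of attaching a plain Docker container to the managed network, and of what Docker itself will (and won't) tell you about its addresses. The image name and container ID are placeholders; adapt them to your own setup.

```shell
# Launch a container through plain Docker, opting into Rancher's
# managed network via the label from the docs quoted above.
# ("nginx" is just an example image.)
docker run -d --label io.rancher.container.network=true nginx

# Docker only knows about the docker0 bridge address (172.17.0.0/16);
# as noted above, the Rancher-managed 10.42.0.0/16 address will NOT
# show up here:
docker inspect -f '{{ .NetworkSettings.IPAddress }}' <container-id>
```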

High Level Setup

Anyhow, let's look at our lab for the day…

[Diagram: high-level setup of the Docker hosts and Network Agent containers]

We'll be setting up some Docker hosts in a private 10.0.0.0/24 network. In addition, we'll add them to Rancher and deploy some "Managed" services upon them. This will trigger the deployment of the "Network Agent", which will act as the router/switch for all containers on one host. The containers that are "managed" will receive an IP from the 10.42.0.0/16 range. As long as the "Network Agent" containers are able to connect to each other, they will be able to set up their IPsec VPN tunnels. Once that is done, you can easily connect between services across hosts.

The Actual Lab Setup

First off, I've deployed some virtual machines into a private network. Two will function as Docker hosts, and one as the Rancher management host.

[Screenshot: the virtual machines in the private network]

Next up, I’ve added the Docker hosts to my Rancher setup. At this point, when adding the Rancher agent, be sure to provide the environment variable “CATTLE_AGENT_IP” with the internal IP of the Docker host!

[Screenshot: adding a Docker host to Rancher]

sudo docker run -e "CATTLE_AGENT_IP=10.0.0.5" -d …
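For reference, the full agent bootstrap command generated by the Rancher UI looks roughly like the sketch below. The agent version tag, server address and registration token are placeholders; always copy the exact command from your own Rancher UI and only add the CATTLE_AGENT_IP variable.

```shell
# Hypothetical shape of the agent bootstrap command; <version>,
# <rancher-server> and <registration-token> are placeholders.
sudo docker run -d --privileged \
  -e "CATTLE_AGENT_IP=10.0.0.5" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:<version> \
  http://<rancher-server>:8080/v1/scripts/<registration-token>
```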

This is of the utmost importance in Azure! For the simple reason that the Rancher implementation expects (UDP) ports 500 & 4500 to be accessible on all hosts. The Azure implementation (in the service management model) provides you, by default, with one external/public IP, to which NAT applies. This will screw up your network implementation… What is the downside of this? Virtual networks across private networks will not be possible, unless you set up each host as a public system with its own public IP. Sadly, there is no way to provide custom ports for the IPsec implementation, otherwise this would have been possible!

Next up, I've deployed the "GlusterFS" service from the catalog to trigger the creation of the "Network Agent" containers. These will be set up as "standalone" containers on each host.

[Screenshot: the Network Agent containers on the hosts]

Notice that each "Network Agent" container, and all containers using the "Managed Network", are given addresses from the 10.42.0.0/16 address space. This is how you know that the containers will be able to benefit from the cross-host networking capabilities.

So now let's do a simple test… Connect to the shell of one of the "Network Agent" containers and ping the other "Network Agent" container.

[Screenshot: ping between the Network Agent containers]

As you can see, that works pretty well. If you are unable to do this, then the Network Agents are not able to communicate with each other on UDP ports 500 and 4500!
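If the ping fails, you can probe those two UDP ports directly from one Docker host towards the other. A minimal netcat sketch, assuming the peer host sits at 10.0.0.6 (substitute your own private IP); keep in mind that UDP is connectionless, so a "success" here merely means no ICMP port-unreachable came back:

```shell
# Probe a UDP port on a remote host with a zero-byte packet (-z),
# in UDP mode (-u), giving up after 2 seconds (-w 2). Returns
# non-zero if an ICMP port-unreachable comes back.
check_udp() {
  nc -z -u -w 2 "$1" "$2"
}

# Usage, assuming the other Docker host sits at 10.0.0.6:
#   check_udp 10.0.0.6 500 && check_udp 10.0.0.6 4500 \
#     && echo "IPsec ports look reachable"
```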


TL;DR

So today we learned…

  • Rancher has its own implementation of cross-host networking. At the moment this is based upon an IPsec VPN, but they are working to move towards the native Docker implementation.
  • Connectivity works or breaks depending on the accessibility of the hosts on ports 500 & 4500 (UDP).
  • Deploy your agent with the environment variable CATTLE_AGENT_IP set to the internal IP of the host.
