Introduction
Any additional layer is often considered “overhead” that decreases performance. We have all heard the statements that a database should run on physical hardware, and so on… So let’s put the pedal to the metal and do a very quick & dirty performance test!
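For the “quick & dirty” part, a tiny CPU micro-benchmark like the sketch below is enough: run it once on the bare host and once inside a container, and compare the elapsed times. The iteration count and the hashing workload are arbitrary choices for illustration, not a rigorous methodology.

```shell
# Quick & dirty CPU micro-benchmark (a sketch, not a scientific test).
# It times 500 rounds of piping a string through sha256sum.
start=$(date +%s%N)
i=0
while [ "$i" -lt 500 ]; do
  echo "payload $i" | sha256sum > /dev/null
  i=$((i + 1))
done
end=$(date +%s%N)
# Nanoseconds to milliseconds.
echo "elapsed_ms=$(( (end - start) / 1000000 ))"
```

To get the container side of the comparison, save it as a script and run the same file inside a container, e.g. `docker run --rm -v "$PWD":/bench ubuntu bash /bench/bench.sh` (the image and paths are just examples).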
Docker allows you to package an application with all of its dependencies into a standardized unit for software development.
Or, in a few more words:
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.
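To make that “complete filesystem” idea concrete, here is a minimal, hypothetical Dockerfile; the base image, script name, and dependency are all made up for the example:

```dockerfile
# Hypothetical example: package a small Python app with its runtime.
# The base image brings the runtime and system libraries.
FROM python:3-slim
# An example dependency; pin exact versions in real life.
RUN pip install requests
# Your code.
COPY app.py /app/app.py
# The same entry point runs identically on every docker host.
CMD ["python", "/app/app.py"]
```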
So what is the difference compared to plain virtualization? You strip out the hypervisor “emulation” layer and do not deploy an individual operating system per container…
Virtualization
Docker
From an operations/infrastructure point of view it is a bit of a mix between application virtualization & virtual machines. The technical base is “containerization”, an operating-system-level virtualization. This is nothing new in IT land; as I recall, it was how virtualization was done in the hosting world before VMware (and its equivalents) rose to prominence. So where is the difference? For me… in the ecosystem that was created around the technology!
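You can see the “no individual operating system” point for yourself: a container reports the same kernel as its host, because nothing boots a second OS. A hypothetical sanity check (it skips the container side if docker isn’t installed):

```shell
# Containers share the host kernel -- there is no guest OS to boot.
host_kernel=$(uname -r)
echo "host kernel: $host_kernel"
if command -v docker >/dev/null 2>&1; then
  # The same kernel version string comes back from inside the container.
  echo "container kernel: $(docker run --rm alpine uname -r)"
else
  echo "docker not available here; skipping the container side"
fi
```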
Datadog is one of the more popular cloud monitoring tools in the DevOps community. Its APM suite is not as strong as New Relic’s, though it is a very nice product. The reason I’m trying it today is that I love my Raspberry Pis… and New Relic does not support ARM. 😦
For this walkthrough I’ll be using 4 x Azure A0 machines with Ubuntu 14.04 LTS on them. Three of those will serve as docker hosts and one will run my Rancher management tooling. The docker hosts will be put into a swarm. For easy reference (and as a basic enterprise simulation), I’ve set up my docker hosts in a separate subnet from the Rancher host.
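As a rough sketch, forming the three-node swarm looks like the commands below. Note this uses the modern built-in “swarm mode”; the classic standalone swarm of the Docker 1.x / Ubuntu 14.04 era was set up differently, so treat this purely as an illustration of the shape of the process.

```shell
# Sketch only -- placeholders like <manager-ip> are not real values.
# On the first docker host (the manager):
docker swarm init --advertise-addr <manager-ip>
# The init output prints a join token; run the join on the other two hosts:
docker swarm join --token <token> <manager-ip>:2377
# Back on the manager, verify that all three nodes are in:
docker node ls
```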
Introduction
The concept of comparing servers to cattle or pets isn’t something new… A presentation by Gavin McCance already covered the subject three years ago. Looking at a server farm as cattle is one of the major philosophies of the DevOps culture.
Pets
We are fond of our pets. They are given cute names (grumpycat.kvaes.be). We hand raise them and care for them. When they get ill, we’ll do everything we can to nurse them back to health!
Cattle
Cattle are given numbers (web007f01.kvaes.be). They are all (give or take) identical. If one gets ill, we just replace it…
Herding
In the end, everyone knows that herding is for cattle and not for pets… Or am I wrong about that?
Just kidding… The herding boils down to things like “configuration management” (think Puppet, Chef, Ansible, Tower, Salt, …) and to “automation” & “orchestration”. But why should we herd? It’s a matter of scaling out… not up. Being able to grow in numbers makes it easy to scale, and in the end this agility is what makes our business run better.
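The core idea behind all of those configuration-management tools fits in a few lines of shell: declare the desired state, compare it to reality, and only act on a mismatch, so running it twice is safe (idempotent). The file path and contents below are hypothetical stand-ins:

```shell
# Configuration management in miniature: desired state, check, apply.
STATE_FILE="${STATE_FILE:-/tmp/desired-state-demo}"  # hypothetical target
DESIRED="hello from web007f01"
if [ "$(cat "$STATE_FILE" 2>/dev/null)" = "$DESIRED" ]; then
  # Reality already matches the declaration: do nothing.
  echo "ok (already in desired state)"
else
  # Converge: write the declared state.
  echo "$DESIRED" > "$STATE_FILE"
  echo "changed (state applied)"
fi
```

On a fresh machine the first run reports “changed” and every run after that “ok” — which is exactly how Puppet or Ansible report their runs, just at the scale of whole servers.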
Anyhow, “ops” should see the infrastructure as code. One should be able to put it under version control and “compile” (rebuild/deploy) it at will. Systems should be regarded as volatile and rebuilt without a second thought. Docker might seem an odd concept to some… though the overlays it uses are really about combining libraries & software versions into “infrastructure”. I’m not saying Docker is the way to go, just pointing out that the concept can be found all over the place. Just ask yourself… do you consider your servers pets… or cattle? Have fun petting / herding!