Docker: Storage Patterns for Persistence

Introduction

One of the sensitive areas when it comes to Docker is persistent storage… A typical service upgrade involves shutting down the “V1” container and pulling/starting the “V2” container. If no precautions are taken, all your data will be wiped… This is not really the scenario we want, of course!

[Diagram: container service upgrade]

So today we’ll go over several variants when it comes to data persistence:

  • Default: No Data Persistence
  • Data Volumes: Container Persistence
  • Data-Only Container: Container Persistence
  • Host Mapped Volume: Container Persistence
  • Host Mapped Volume, backed by Shared Storage: Host Persistence
  • Convoy Volume Plugin: Host Persistence

What do I mean by these (self-invented) persistence levels?

  • Container: an upgrade of the container will not wipe the data
  • Host: a host failure will not result in data loss

So let’s go through the different variants, shall we?

 

Default

The most basic implementation… We create our container without any notion of volumes; the data resides within the container. As we mentioned in the introduction, we’ll suffer data loss during an upgrade… While this may not be an issue for some containers, there are stateful implementations that do want to keep their data (for instance: databases, …).

[Diagram: container only]
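
A quick way to see this behaviour for yourself; a minimal sketch using a stock alpine image (the file name is just an example):

    # Write a file inside a container that has no volumes defined
    docker run --name demo alpine sh -c 'echo "important" > /data.txt'

    # "Upgrade" by removing the container and starting a fresh one
    docker rm demo
    docker run --rm alpine cat /data.txt
    # This fails: the file only existed inside the removed container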

 

Data Volumes

One step up from the “default” is to add a volume to your container implementation. This ensures that a given directory inside the container is mapped to a data volume. These volumes reside on the host system, managed by Docker, and will remain untouched during service upgrades.

[Diagram: volume only]
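
As an illustrative sketch of such an upgrade (the volume name, the path and the myapp image are placeholders):

    # Create a named volume and attach it to the "V1" container
    docker volume create appdata
    docker run -d --name app-v1 -v appdata:/var/lib/data myapp:1.0

    # Upgrade: remove V1 and start V2 against the same volume
    docker rm -f app-v1
    docker run -d --name app-v2 -v appdata:/var/lib/data myapp:2.0
    # Everything under /var/lib/data survives the upgrade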

 

Data-Only Container

A slight variation on the typical data volume is to use a data-only container. Here you’ll create a container (typically from a small base image such as busybox or alpine) that declares the volumes as you would have in the “data volume” variation. When starting our main container, we’ll use the “--volumes-from” parameter to ensure that all the volumes from our data-only container are mapped into our main container. So this pattern is a typical “sidekick” implementation.

[Diagram: volumes from data container]
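
Sketched out, under the same assumptions as before (busybox as the data-only base, myapp as a placeholder image):

    # The data-only container never runs; it just owns the volume
    docker create -v /var/lib/data --name app-data busybox

    # The main container mounts all volumes from the sidekick
    docker run -d --name app-v1 --volumes-from app-data myapp:1.0

    # Upgrade: the data-only container (and its volume) stays in place
    docker rm -f app-v1
    docker run -d --name app-v2 --volumes-from app-data myapp:2.0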

 

Host Mapped Volume

Another variation on the “data volume” pattern is mapping a volume to a directory at host level. With the “data volume”, the data physically resides under Docker’s default volume location on the host. With the “host mapped volume“, you do a direct mapping between the directory (volume) at container level and a directory at host level. In essence, you have the same advantages as with the data volume, while more hybrid scenarios become possible… The main disadvantage here is mapping the rights (uid/gid) between container and host level.

[Diagram: host mapping]
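
A minimal sketch (the paths and the myapp image are placeholders), including the uid/gid caveat:

    # Bind-mount a host directory straight into the container
    mkdir -p /srv/app/data
    docker run -d --name app -v /srv/app/data:/var/lib/data myapp:1.0

    # The uid/gid caveat: files are owned by whatever numeric IDs the
    # container process uses, which may not map to any user on the host
    ls -ln /srv/app/data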

 

Host Mapped Volume, backed by Shared Storage

We can crank it up a notch… and use a folder that is backed by shared storage. Think NFS, Gluster, … whatever works for you. The main advantage here is that you will not suffer any data loss in case of a host-level failure.

[Diagram: shared storage host mapping]
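
Roughly, with NFS as the example (the server name, export and paths are hypothetical):

    # On every Docker host: mount the shared storage
    mkdir -p /mnt/appdata
    mount -t nfs nfs-server:/export/appdata /mnt/appdata

    # Then bind-mount it into the container as before; if this host
    # dies, another host with the same mount can start the container
    docker run -d --name app -v /mnt/appdata:/var/lib/data myapp:1.0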

 

Convoy Volume Plugin

Mapping to host level still feels a bit “static”. You have to align your hosts in a uniform manner, or you’ll hit a wall at some point. Another implementation is “Convoy“… Put very irreverently, Convoy runs as a Docker volume plugin and behaves as an intermediary that ensures the link to your shared storage. At the moment, the main implementations are NFS & Gluster, while others have been touted (over a beer, too) as coming “in the near future”…

[Diagram: convoy]
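
For illustration, roughly following Convoy’s quick start documentation at the time (the vfs.path, the volume name and the myapp image are examples):

    # Register Convoy as a Docker volume plugin
    sudo mkdir -p /etc/docker/plugins
    sudo bash -c 'echo "unix:///var/run/convoy/convoy.sock" > /etc/docker/plugins/convoy.spec'

    # Start the Convoy daemon, pointing its VFS driver at a directory
    # that is backed by shared storage (e.g. an NFS mount)
    sudo convoy daemon --drivers vfs --driver-opts vfs.path=/mnt/appdata &

    # Let Docker create and use volumes through Convoy
    docker run -d --name app -v appdata:/var/lib/data --volume-driver=convoy myapp:1.0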

 

Others?

There are probably even more patterns… If you know of any, feel free to give me a ping!

Flocker

I’m aware of Flocker, though I must admit I have not experimented with it yet. The concept looks very nice, though!

[Diagram: Flocker]

 

Conclusion

  • Data persistence is possible with Docker!
  • Be aware that there are different implementations, each with its own advantages & disadvantages.
  • Pro tip: always test your deployments! Double-check all aspects related to resilience & performance.

 

 

20 thoughts on “Docker: Storage Patterns for Persistence”

  1. Karim, great article! This is a fantastic overview for anyone getting into storage persistence for containers, which is a problem we are actively solving. Check out REX-Ray. It’s more than just a Docker volume driver: it provides storage persistence for running containers with AWS EBS, GCE block storage, Cinder, a multitude of EMC platforms, as well as VirtualBox. VirtualBox allows anyone to run containers with persistent storage in a development environment and have the same experience when that container image needs to go into production. Pretty awesome. https://github.com/emccode/rexray

    1. I should also mention that our solution is easier to implement than Flocker, because it uses the native Docker volume driver and is a single Go binary that’s installed via curl | bash. Enjoy!

  2. Hey kcoleman, I will admit the single Go binary is convenient 😉 Both Flocker and REX-Ray work with the native Docker volume driver. Unless you mean something else and I’m misinterpreting you, I just want to make sure readers are informed correctly 🙂

    Great post Karim, thanks for including Flocker; please give it a try. Flocker does run as a cluster, which means there are a few extra steps for installing the system, but this allows for better resiliency. We also integrate with any of your favorite orchestration frameworks (Swarm, Mesos, Kubernetes).

    Feel free to check out our new docs! https://docs.clusterhq.com/en/latest/

  3. What about using the “default” pattern and dealing with persistence at the application level? With databases, you usually want to set up replication anyway, so one way to go from V1 to V2 is to set up a V2 slave of your V1 and then do a switchover to make the V2 your master. There could be performance downsides, like the time to do the replication and possibly cache warming issues. But it is the most environment-independent approach.

    1. In that case, I would suggest a data container that holds the persistent data, so that you can update/upgrade the process container without losing the data.

      Such an approach is commonly deployed, e.g. with Gluster, Syncthing, etc.

      1. I think you are missing my point. As long as you have application level data replication, the “default” approach simply does not have the show-stopping problem of data loss or the impossibility of upgrade. Not saying a shared file system wouldn’t buy you anything, but there are downsides (complexity to set it up, worries about increased latency, filesystem semantics not what you expect, etc.).

      2. I do get what you mean. Though the application-level replication will also take time, and having the data (volume) already in place will speed up that process. For 10 MB, have fun… for 10 TB, good luck!

      3. OK, we’re on the same page. As I understand it, though, if the replication time gets to be onerous, you could just use normal local docker volumes, and upgrade on the same node, just using the switchover technique if you have to switch nodes (which on Azure you might not normally have to do because you can change the size of a node). I don’t know if this strategy would have the best cost-benefit tradeoff or not though.

  4. Using a data container is a method of upgrading with data persistence. Example: http://docs.rancher.com/rancher/upgrading/

    If you have no volumes set, then you will lose your data and will need to sync again (in reference to your use case).
    I myself would use the “data container” pattern, though I’m unsure whether the basic “volume” pattern would also cover your need here.

    A “data container” is not safe from the failure of the host on which it resides. For that you need either shared storage or application-level replication.

    In regards to Azure, you are able to scale up a host. Though be aware that the Azure SLA (at the time of writing) is only 99.95%, and only when you have two VMs capable of delivering the same output in an availability set. (Simply put: you need to provide your own “HA”, as Microsoft may take your server down during service windows.)

  5. Hello,

    Thank you for the many options you present. Which variant would you recommend if I wanted to persist all the data on a docker image? By that I mean that I want to use the docker image as a persistent VM where all changes are preserved across reboots, just as with a physical machine or a Parallels VM. Before everyone calls me a heretic, I think that wiping all the data is great for many use cases, including live deployments and when developing one app. However I do also sometimes go on exploratory meanders where I really don’t care in the least about replicating an environment. I want to start in some known state – a docker image is ideal – and then make changes that are preserved for weeks or months and that are preserved across reboots.

    Many thanks again for your article.

    1. First things first: the VM way of thinking does not apply to Docker. If you are looking for VMs, stay with VMs… 😉

      Like with everything in consultancy, what is recommended depends on a lot of factors. As a general rule of thumb, I recommend using the native Docker volumes. If you want persistence across hosts, look towards a Docker volume plugin that can deliver that in your environment.

      Using a host mapping towards a shared volume (like NFS) is also a possibility. This is fine for small environments, or those where you can control the desired state of the hosts to ensure that all settings are the same everywhere (read: all hosts are identical).

    1. Good question. I must say that I don’t use them much personally, though for completeness, it’s a pattern that does exist. I guess, if you have a workload that doesn’t need high resiliency in terms of data, but you still want a kind of “basic” data persistence, then data containers could be a good fit. An example where I used this is the Rancher deployment in my personal lab. There I want to test upgrades, but in the end, if the shit hits the fan, I don’t care that much.

  6. Hi, this is a great article, thanks. But can you tell me how to mount the storage in the container, not on the Docker host? I tried “mount 10.1.123.123:/xbbl47nvol1 /tmp” and this gives an error. Kindly suggest.

    1. Be sure that the ACLs on your NFS host are set accordingly. Depending on the way you have set up your container networking, your mounts may originate from one source IP address or another.

      That being said, it’s my personal opinion that doing a direct mount from within a container is a bad idea. Let the host/orchestrator do the magic in that area. That will probably give you a better performance & security profile.
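
      For reference, a hypothetical export on the NFS server side; the subnet is an example and would need to match whatever source IP your hosts/containers actually present:

        # /etc/exports on the NFS server (example values)
        /xbbl47nvol1  10.1.0.0/16(rw,sync,no_subtree_check)

        # Reload the export table
        exportfs -ra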
