Docker Swarm Cluster Improvements

This post is part of a series.

  1. Docker Swarm with Ceph for cross-server files
  2. Upgrading Ceph in Docker Swarm
  3. Docker Swarm Cluster Improvements (This Post)

Since my previous posts about running docker-swarm with ceph, I’ve been using this setup fairly extensively in production, and have made some changes that follow on from the previous posts.

Upgrading Ceph in Docker Swarm

This post is part of a series.

  1. Docker Swarm with Ceph for cross-server files
  2. Upgrading Ceph in Docker Swarm (This Post)
  3. Docker Swarm Cluster Improvements

This post is a follow-up to an earlier blog post about setting up a docker-swarm cluster with ceph.

I’ve been running this cluster quite happily for a while now. However, since setting it up a new version of ceph has been released - nautilus - so now it’s time for some upgrades.

Note: This post is out of date now.

I would suggest looking at this post and using the docker-compose-based upgrade workflow instead, up to the housekeeping part.

I’ve mostly followed https://docs.ceph.com/docs/master/releases/nautilus/#upgrading-from-mimic-or-luminous but adapted it for the fact that we’re running everything in docker. I recommend that you have a read through this yourself first to get an idea of what we are doing and why.

(It’s worth noting at this point that this guide was mostly written after the fact based on command history, so I may have missed something. It’s always a good idea to do this on a test cluster first, or in a maintenance window!)
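
In docker terms, the per-daemon upgrade steps from that guide mostly reduce to recreating each ceph container from a Nautilus image, one node at a time, waiting for the cluster to settle in between. As a rough sketch of the monitor step (the image tag, container name, and environment values here are placeholders, not taken from this cluster):

```shell
# Pull the Nautilus image first so the daemon is only down briefly.
docker pull ceph/daemon:latest-nautilus

# Recreate the mon container from the new image, reusing the existing
# config and data directories. MON_IP and CEPH_PUBLIC_NETWORK are
# placeholder values - substitute your own.
docker stop ceph-mon && docker rm ceph-mon
docker run -d --net=host --name=ceph-mon \
  -v /etc/ceph:/etc/ceph \
  -v /var/lib/ceph:/var/lib/ceph \
  -e MON_IP=192.168.0.10 \
  -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
  ceph/daemon:latest-nautilus mon

# Check the mon has rejoined quorum before moving to the next node.
docker exec ceph-mon ceph mon stat
```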

Docker Swarm with Ceph for cross-server files

This post is part of a series.

  1. Docker Swarm with Ceph for cross-server files (This Post)
  2. Upgrading Ceph in Docker Swarm
  3. Docker Swarm Cluster Improvements

I’ve been wanting to play with Docker Swarm for hosting containers for a while now, and finally sat down this weekend to do it.

Something that has always stopped me before now was that I wanted to have some kind of cross-site storage, but I don’t have any kind of SAN storage available to me, just standalone hosts. I’ve been able to work around this using ceph on the nodes.
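
As a minimal sketch of the idea (not the exact commands from this post - the monitor address, paths, and service here are placeholders): cephfs gets mounted at the same path on every node, and swarm services then use ordinary bind mounts into that path, so a task sees the same files whichever node it lands on.

```shell
# Mount cephfs at a common path on every swarm node. The monitor
# address and secret file are placeholders for your own cluster's.
mkdir -p /var/data
mount -t ceph 192.168.0.10:6789:/ /var/data \
  -o name=admin,secretfile=/etc/ceph/admin.secret

# A service can then bind-mount the shared path; every replica sees
# the same files regardless of placement.
docker service create --name web \
  --mount type=bind,source=/var/data/web,target=/usr/share/nginx/html \
  nginx
```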

Note: I’ve never used ceph before and I don’t really know what I’m doing with it, so this is all a bit of guesswork. I used Funky Penguin’s Geek Cookbook as a basis for some of this, though some things have changed since then, and I’m using base CentOS rather than Atomic Host (I tried Atomic Host, but wanted a newer version of docker, so I switched away).

All my physical servers run Proxmox, and this is no exception. On 3 of these host nodes I created a new VM (1 per node) to be part of the cluster. These all have 3 disks: 1 for the base OS, 1 for Ceph, and 1 for cloud-init (the non-cloud-init disks are all SCSI with individual iothreads).
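
For reference, creating a VM like that from the Proxmox CLI looks roughly like the sketch below. The VM id, name, storage, and sizes are placeholders, and I’m guessing at the cloud-init disk’s bus; the virtio-scsi-single controller is what gives each SCSI disk its own iothread.

```shell
# Placeholder VM id, name, storage and sizes - adjust to suit.
qm create 201 --name swarm-node1 --memory 8192 --cores 4 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-single \
  --scsi0 local-lvm:40,iothread=1 \
  --scsi1 local-lvm:100,iothread=1 \
  --ide2 local-lvm:cloudinit   # cloud-init disk; the bus is an assumption
```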