Since my previous posts about running docker-swarm with ceph, I’ve been using this setup
fairly extensively in production and have made some changes that follow on from those posts.
This post is a follow-up to an earlier blog post about
setting up a docker-swarm cluster with ceph.
I’ve been running this cluster for a while now quite happily; however, since setting it up a new version of
ceph has been released - nautilus - so now it’s time for some upgrades.
Note: This post is out of date now.
I would suggest looking at this post and using
the docker-compose based upgrade workflow instead, up to the housekeeping part.
(It’s worth noting at this point that this guide was mostly written after the fact based on command history,
so I may have missed something. It’s always a good idea to do this on a test cluster first, or in a maintenance
window!)
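Before starting an upgrade like this, it’s worth recording what the cluster is actually running so you know what you’re upgrading from. Something along these lines (run from a node with the admin keyring; your output will obviously differ) shows overall health plus the version each daemon reports:

```
# Quick pre-upgrade sanity check: overall cluster health,
# then the ceph version reported by every running daemon.
ceph -s
ceph versions
```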
I’ve been wanting to play with Docker Swarm for a while now for hosting containers, and finally sat down
this weekend to do it.
Something that has always stopped me before now was that I wanted some kind of cross-site storage,
but I don’t have any SAN storage available to me, just standalone hosts. I’ve been able to work around
this using ceph on the nodes.
Note: I’ve never used ceph before and I don’t really know what I’m doing with ceph, so this is
all a bit of guesswork. I used Funky Penguin’s Geek
Cookbook as a basis for some of this, though some things have changed since then, and I’m using base CentOS
rather than AtomicHost (I tried AtomicHost, but wanted a newer version of docker so switched away).
All my physical servers run Proxmox, and this is no exception. On
3 of these host nodes I created a new VM (1 per node) to be part of the cluster. These all have 3 disks: 1 for
the base OS, 1 for Ceph, and 1 for cloud-init (the non-cloud-init disks are all SCSI with individual
iothreads).
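For reference, one of these VMs could be built from the Proxmox command line roughly like this. This is only a sketch of the layout described above: the VM ID, name, storage pool (local-lvm), disk sizes, memory, cores and bridge are all placeholders, so substitute whatever suits your environment.

```
# Create the VM shell - virtio-scsi-single is what allows per-disk iothreads
qm create 201 --name swarm-node-1 --memory 8192 --cores 4 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-single

# Base OS disk and the dedicated Ceph disk, each SCSI with its own iothread
qm set 201 --scsi0 local-lvm:32,iothread=1
qm set 201 --scsi1 local-lvm:100,iothread=1

# The third "disk" is the cloud-init drive (not SCSI)
qm set 201 --ide2 local-lvm:cloudinit

# Boot from the OS disk
qm set 201 --boot c --bootdisk scsi0
```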