Since my previous posts about running docker-swarm with ceph, I’ve been using this
fairly extensively in production and have made some changes to the setup that follow on from the previous
posts.
This post is a followup to an earlier blog post regarding
setting up a docker-swarm cluster with ceph.
I’ve been running this cluster quite happily for a while now; however, since setting it up a new version of
ceph has been released - nautilus - so now it’s time for some upgrades.
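Before touching anything, it’s worth knowing where the cluster is starting from. A minimal pre-flight sketch, assuming the standard ceph CLI on one of the nodes:

```shell
# Report the version each running daemon is on, to confirm the starting point
ceph versions

# Stop ceph from rebalancing data while daemons restart during the upgrade
ceph osd set noout

# ...and once everything is upgraded and healthy again:
# ceph osd unset noout
```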
Note: This post is out of date now.
I would suggest looking at this post and using
the docker-compose-based upgrade workflow instead, up to the housekeeping part.
(It’s worth noting at this point that this guide was mostly written after the fact based on command history,
so I may have missed something. It’s always a good idea to do this on a test cluster first, or in a maintenance
window!)
I’ve been wanting to play with Docker Swarm for a while now for hosting containers, and finally sat down
this weekend to do it.
Something that has always stopped me before now was that I wanted to have some kind of cross-site storage,
but I don’t have any SAN storage available to me, just standalone hosts. I’ve been able to work around
this using ceph on the nodes.
Note: I’ve never used ceph before, and I don’t really know what I’m doing with it, so this is
all a bit of guesswork. I used Funky Penguin’s Geek
Cookbook as a basis for some of this, though some things have changed since then, and I’m using base CentOS
rather than AtomicHost (I tried AtomicHost, but wanted a newer version of docker so switched away).
All my physical servers run Proxmox, and this is no exception. On
3 of these host nodes I created a new VM (1 per node) to be part of the cluster. These all have 3 disks: 1 for
the base OS, 1 for Ceph, and 1 for cloud-init (the non-cloud-init disks are all SCSI with individual
iothreads).
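For illustration, creating one of these VMs from the Proxmox command line could look roughly like this; the VM ID, name, disk sizes and the local-lvm storage are all placeholder assumptions for the sketch:

```shell
# Create the VM itself (ID, name and sizing here are illustrative)
qm create 201 --name swarm-node-1 --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0

# Per-disk iothreads require the virtio-scsi-single controller
qm set 201 --scsihw virtio-scsi-single

# Disk 1: base OS, Disk 2: dedicated ceph disk - both SCSI with iothread enabled
qm set 201 --scsi0 local-lvm:32,iothread=1
qm set 201 --scsi1 local-lvm:100,iothread=1

# Disk 3: the cloud-init config drive
qm set 201 --ide2 local-lvm:cloudinit
```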
In previous years, Chris and I have been fairly informally trying to
see who was able to produce the fastest code (me in PHP, Chris in Python). In the final week of last year, to
assist with this, we both made our repos
run in Docker and produce time output for each day.
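In practice that just meant each repo could be built and run like any other image, with the timing done inside the container; something along these lines (the image tag and day-number argument are illustrative, not the actual repos):

```shell
# Build an image containing the solutions and all their dependencies
docker build -t aoc-php .

# Run day 1; the container times the solution itself and prints the result,
# so the numbers are comparable no matter whose machine it runs on
docker run --rm aoc-php 1
```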
This allowed us to run each other’s code locally to compare fairly without needing to install the other’s
dev environment, and made the testing a bit fairer as it was no longer dependent on who had the faster CPU when
running their own solution. For the rest of the year this was fine and we carried on as normal. As we got to
the end I remarked it would be fun to have a web interface that automatically dealt with it and showed us the
scores, but there was obviously no point in doing that once the year was over. Maybe in a future year…
Fast forward to this year. Chris and I (and ChrisN) coded up our Day 1
solutions as normal, and then some other friends started doing it for the first time. I remembered my plans from the
previous year and suggested everyone should also docker-ify their repos… and so they agreed…
DNS Hosting - Part 3: Putting it all together (This Post)
In my previous posts I
discussed the history leading up to, and the eventual rewrite of, my DNS hosting solution. So this post will
(finally) talk briefly about how it all runs in production on MyDNSHost.
Shortly before the whole rewrite I’d found myself playing around a bit with Docker for another
project, so I decided early on to make use of Docker for the main bulk of the setup. This meant I didn’t
need to worry about incompatibilities between different parts of the stack that needed different
versions of things, and I could update different bits at different times.
The system is split up into a number of containers (and could probably be split up into more).
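As a rough sketch of what that kind of split looks like in practice (the container names and images below are purely illustrative, not the actual MyDNSHost services):

```shell
# One container per concern, each upgradeable independently of the others
docker network create mydnshost

docker run -d --name mydnshost-db   --network mydnshost mariadb:10
docker run -d --name mydnshost-api  --network mydnshost mydnshost/api
docker run -d --name mydnshost-web  --network mydnshost -p 80:80 mydnshost/web
docker run -d --name mydnshost-bind --network mydnshost -p 53:53 -p 53:53/udp mydnshost/bind
```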