Converting LXC Containers to full VMs


I have been a long-time user of Proxmox for my virtualisation needs, starting back in the days of Proxmox 3 (prior to that I was running Ubuntu with OpenVZ rolled in by hand). Back then I only had a single server and resources were tight, so I deployed a lot of my workloads as OpenVZ containers.

As time went on, Proxmox switched to LXC, and I dutifully converted all my containers to LXC and kept on going.

More time went on and I added more servers, eventually clustering them for ease of management and then replacing them all with more powerful nodes, so resources were no longer a concern. I also eventually added Ceph into the mix (using Proxmox’s built-in support for Ceph) and 10G networking so that I had shared storage for VMs and could then start doing VM migrations between nodes quickly.

But LXC containers have flaws - live migration only works for full VMs; LXC containers have to reboot to actually move onto and start running on the new node. For some workloads this isn’t really noticeable or a problem - but for others this is quite bad.

Also, as more and more of what I run involves Docker, it’s a lot easier/nicer/safer to run these workloads in actual VMs rather than LXC containers.

But installing and configuring full VMs was a chore. With LXC containers you could be up and running in minutes by just deploying a template, whereas full VMs required a full installation to an empty disk. This could be automated using kickstart/preseeding etc. (and I wrote a tool to help manage PXE-boot environments for this purpose), but over time this has become trivial as well - cloud-init is now supported directly within Proxmox and all the major OSes provide cloud-init-compatible disk images, so getting a fresh VM is a matter of cloning a template, updating some cloud-init settings and starting the VM, roughly as sketched below.
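For illustration, a minimal sketch of that workflow on a Proxmox node, assuming a cloud-init-ready template already exists as VMID 9000 (the VMIDs, name and settings here are placeholders, not my actual configuration):

# Clone a cloud-init-ready template (VMID 9000 is an assumed placeholder)
qm clone 9000 120 --name new-vm --full
# Point cloud-init at the right user, SSH key and network config
qm set 120 --ciuser myuser --sshkeys ~/.ssh/id_ed25519.pub --ipconfig0 ip=dhcp
# Start the VM - cloud-init handles the rest on first boot
qm start 120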

Due to all of this, almost all of the VMs I create these days are full VMs. Anything new - it’s a VM, which gives me all-the-good-stuff™.

But I still have a lot of legacy LXC containers. These all end up suffering any time I do hardware/software maintenance on the host nodes, or if I have any problems that require putting a host into maintenance mode.


It’s time to fix this.


Remote LVM-on-LUKS (via iSCSI) with automatic decrypt on boot


I have recently added some iSCSI-backed storage to my Proxmox-based server environment, primarily as an off-server location to store backup data.

For a multitude of reasons, such as the sensitive nature of the data, the fact that the physical storage lies outside of my control, and just good security hygiene - I wanted to ensure that the data is all encrypted at rest.

I wanted to be able to use this iSCSI storage as a storage target for Proxmox, allowing me to just add the volumes to VMs (and enabling HA), and I didn’t want to have to do encryption inside every VM in case I accidentally forgot to enable it for one of them (remember, the storage is hosted externally to me, so I have no control over physical access to it). To do this I have made use of LUKS encryption on the iSCSI block device that I am presented with, and then I run LVM over the top of this (LVM-on-LUKS, as opposed to LUKS-on-LVM). A rough sketch of the layering is below.
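As an illustration only - the device name (/dev/sdb), volume group name and keyfile path are assumptions, not my actual configuration:

# Encrypt the iSCSI block device, unlock it via a keyfile, then layer LVM on top
cryptsetup luksFormat /dev/sdb
cryptsetup luksAddKey /dev/sdb /root/iscsi.key
cryptsetup open /dev/sdb iscsi_crypt --key-file /root/iscsi.key
pvcreate /dev/mapper/iscsi_crypt
vgcreate vg_iscsi /dev/mapper/iscsi_crypt

# /etc/crypttab entry so the device unlocks automatically at boot
# (_netdev because the underlying disk is network-backed)
iscsi_crypt  /dev/sdb  /root/iscsi.key  luks,_netdev

Proxmox can then be pointed at vg_iscsi as an LVM storage target, and every volume created on it is encrypted at rest without the individual VMs needing to do anything.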

Updated Theme


Once again, I have replaced the theme of this blog.

Unlike the previous theme, this one is mostly one I ended up designing myself rather than just finding one I mostly liked and running with it. Given that, I figured I’d talk a little bit about the thoughts behind it, how it came to be, and what else I’ve done behind the scenes.

Posted on February 8, 2022 General

DNS Hosting - Part 3: Putting it all together


This post is part of a series.

  1. DNS Hosting - Part 1: History
  2. DNS Hosting - Part 2: The rewrite
  3. DNS Hosting - Part 3: Putting it all together (This Post)

In my previous posts I discussed the history leading up to, and the eventual rewrite of, my DNS hosting solution. So this post will (finally) talk briefly about how it all runs in production on MyDNSHost.

Shortly before the whole rewrite I’d found myself playing around a bit with Docker for another project, so I decided early on that I was going to make use of Docker for the main bulk of the setup. This lets me avoid worrying about incompatibilities between different parts of the stack that need different versions of things, and lets me update different bits at different times.

The system is split up into a number of containers (and could probably be split up into more), roughly along the lines sketched below.
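Purely as an illustration of that kind of split - the container names and images here are assumptions for the sketch, not the actual MyDNSHost containers:

# Illustrative only - names, images and passwords are placeholders
docker network create mydnshost
docker run -d --name mydnshost-db --network mydnshost \
    -v dbdata:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=changeme mariadb:10
docker run -d --name mydnshost-web --network mydnshost -p 80:80 example/mydnshost-web
docker run -d --name mydnshost-bind --network mydnshost -p 53:53/udp -p 53:53 example/mydnshost-bind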

DNS Hosting - Part 2: The rewrite


This post is part of a series.

  1. DNS Hosting - Part 1: History
  2. DNS Hosting - Part 2: The rewrite (This Post)
  3. DNS Hosting - Part 3: Putting it all together

In my previous post about DNS Hosting I discussed the history leading up to when I decided I needed a better personal DNS hosting solution. I decided to code one myself to replace what I had been using previously.

I decided there were a few things that were needed:

  • Fully-Featured API
    • I wanted full control over the zone data programmatically, everything should be possible via the API.
    • The API should be fully documented.
  • Fully-Featured default web interface.
    • There should be a web interface that fully implements the API. Just because there is an API shouldn’t mean it has to be used to get full functionality.
    • There should exist nothing that only the default web ui can do that can’t be done via the API as well.
  • Multi-User support
    • I also host some DNS for people who aren’t me, they should be able to manage their own DNS.
  • Domains should be shareable between users
    • Every user should be able to have their own account
    • User accounts should be able to be granted access to domains that they need to be able to access
      • Different users should have different access levels:
        • Some just need to see the zone data
        • Some need to be able to edit it
        • Some need to be able to grant other users access
  • Backend Agnostic
    • The authoritative data for the zones should be stored independently from the software used to serve it to allow changing it easily in future

These were the basic criteria and what I started off with when I designed MyDNSHost.
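Just to illustrate what that first criterion implies in practice, something along these lines should be possible (this is a hypothetical request shape for illustration, not MyDNSHost’s actual API):

# Hypothetical example - the endpoint and payload shape are made up for illustration
curl -X POST https://dns.example.com/api/domains/example.org/records \
     -H "Authorization: Bearer ${API_TOKEN}" \
     -H "Content-Type: application/json" \
     -d '{"name": "www", "type": "A", "content": "198.51.100.10", "ttl": 3600}'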

Hugo PPA


I run Ubuntu on my servers and, since moving to Hugo, I wanted to make sure I was using the latest version available.

The Ubuntu repos currently contain Hugo version 0.15 in Xenial and 0.25.1 in Artful (and the next version, Bionic, only contains 0.26). The latest version of Hugo (as of today) is 0.32.2 - so the main repos are quite a bit out of date.

So to work around this, I’ve set up an apt repo that tracks the latest release of Hugo, which can be installed and used like so:

sudo wget http://packages.dataforce.org.uk/packages.dataforce.org.uk_hugo.list -O /etc/apt/sources.list.d/packages.dataforce.org.uk_hugo.list
wget -qO- http://packages.dataforce.org.uk/pubkey.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install hugo

This repo tracks the latest Hugo debs in all 4 of the supported architectures (amd64, i386, armhf and arm64) and should stay automatically up to date with the latest version.

Posted on January 8, 2018 General

DNS Hosting - Part 1: History


This post is part of a series.

  1. DNS Hosting - Part 1: History (This Post)
  2. DNS Hosting - Part 2: The rewrite
  3. DNS Hosting - Part 3: Putting it all together

For as long as I can remember I’ve hosted my own DNS. Originally this was via cPanel on a single server that I owned, and then after a while I moved to a new server away from cPanel and started doing everything myself (web, email, DNS) - hand-editing zone files and serving them via BIND.

This was great for learning and worked well for a while, but eventually I ended up with more servers and a lot more domains. Manually editing DNS zone files and reloading BIND was somewhat of a chore any time I was developing things, spinning up new services or moving things between servers - I wanted something web-based.

There weren’t many free/cheap providers that did what I wanted (this was long before Cloudflare or Route 53), so around 2007 I did what any (in)sane person would do - I wrote a custom control panel for managing domains. And email. And it was multi-user. And it had billing support. And a half-baked ticket system… ok, so I went a bit overboard with the plans for it. But mainly it controlled DNS, and throughout its lifetime that was the only bit that was “completed” and fully functional.

Moving to Hugo


For a while now I’ve been thinking of moving this blog to a statically generated site rather than using WordPress.

There are a number of reasons for this:

  1. I can version-control the content rather than relying on WordPress database backups.
  2. It renders quicker.
  3. I don’t actually use any of the WordPress features, so it’s just bloat and a potential security hole.

I’ve seen Chris successfully use Hugo for his site and it seems to do exactly what I want, so I took the opportunity during the Christmas break to spend some time converting my blog from WordPress to Hugo.

Actually doing this was a multi-stage process.

Posted on December 30, 2017 General

It did not help.

It’s been a while…

It’s been a while since I updated this. Not through a lack of wanting to, more a combination of things - lack of time, lack of anything worth writing, hating the old blog theme…

So I’ve replaced the theme!

Maybe that’ll help.

It probably won’t.