Converting LXC Containers to full VMs

I have been a long-time user of Proxmox for my virtualisation needs, starting back in the days of Proxmox 3 (prior to that I was running Ubuntu with OpenVZ rolled in by hand). Back then I only had a single server and resources were tight, so I deployed a lot of my workloads as OpenVZ containers.

As time went on, Proxmox switched to LXC, and I dutifully converted all my containers to LXC and kept on going.

More time went on and I added more servers, and ended up clustering them for ease of management, then eventually replacing them all with more powerful nodes, so resources were no longer a concern. I also eventually added Ceph into the mix (using Proxmox’s built-in support for Ceph) and 10G networking, so that I had shared storage for VMs and could then start doing VM migrations between nodes quickly.

But LXC containers have flaws - live migration only works for full VMs; LXC containers have to reboot to actually move to and start running on the new node. For some workloads this isn’t really noticeable or a problem - but for others it’s quite bad.

Also, as more and more of what I run involves Docker, it’s a lot easier/nicer/safer to run these workloads in actual VMs rather than LXC containers.

But installing and configuring full VMs was a chore. With LXC containers you could be up and running in minutes by just deploying a template; full VMs required a full installation to an empty disk. This could be automated using kickstart/preseeding etc. (and I wrote a tool to help manage a PXE-boot environment for this purpose). But over time this has become trivial as well - cloud-init is now supported directly within Proxmox and all the major OSes provide cloud-init-compatible disk images, so getting a fresh VM is a matter of cloning a template, updating some cloud-init settings and starting the VM.
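As a rough illustration of how little work that now is, the whole flow is a handful of `qm` commands (a sketch only - the template/VM IDs, username, disk name and network settings below are made-up placeholders, not values from my setup):

```bash
# Clone a cloud-init-enabled template (ID 9000 here) into a new VM.
qm clone 9000 120 --name my-new-vm --full

# Point cloud-init at the right user, SSH key and network settings.
qm set 120 --ciuser admin \
           --sshkeys ~/.ssh/id_ed25519.pub \
           --ipconfig0 ip=192.168.1.120/24,gw=192.168.1.1

# Grow the disk beyond the tiny cloud-image default (assuming scsi0), then boot.
qm resize 120 scsi0 +20G
qm start 120
```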

Due to all of this, almost all of the VMs I create these days are full VMs. Anything new - it’s a VM, which gives me all-the-good-stuff™.

But I still have a lot of legacy LXC containers. These all end up suffering any time I do hardware/software maintenance on the host nodes, or if I have any problems that require putting a host into maintenance mode.


It’s time to fix this.


Remote LVM-on-LUKS (via iSCSI) with automatic decrypt on boot

I have recently added some iSCSI-backed storage to my Proxmox-based server environment, primarily as an off-server location to store backup data.

For a multitude of reasons, such as the sensitive nature of the data, the fact that the physical storage lies outside of my control, and just good security hygiene - I wanted to ensure that the data is all encrypted at rest.

I wanted to be able to use this iSCSI device as a storage target for Proxmox, allowing me to just add volumes to VMs and keep HA, and I didn’t want to have to do encryption inside every VM in case I accidentally forgot to enable it for one of them (remember, the storage is hosted external to me, so I have no control over physical access to it). To do this I have made use of LUKS encryption on the iSCSI block device that I am presented with, and I then run LVM over the top of this (LVM-on-LUKS, as opposed to LUKS-on-LVM).
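In outline, that setup looks something like the following (a hedged sketch rather than my exact commands - /dev/sdx, the keyfile path and the volume-group name are placeholders, and the real device name depends on how the iSCSI session is presented):

```bash
# Encrypt the raw iSCSI block device (placeholder /dev/sdx).
cryptsetup luksFormat /dev/sdx

# Add a keyfile so the volume can be unlocked without a passphrase at boot.
dd if=/dev/urandom of=/root/iscsi.key bs=512 count=8
chmod 0400 /root/iscsi.key
cryptsetup luksAddKey /dev/sdx /root/iscsi.key

# Open the LUKS container and build LVM on top of the mapped device.
cryptsetup open /dev/sdx iscsi_crypt --key-file /root/iscsi.key
pvcreate /dev/mapper/iscsi_crypt
vgcreate vg_iscsi /dev/mapper/iscsi_crypt

# /etc/crypttab entry for automatic unlock at boot (use the LUKS device UUID;
# _netdev tells the boot process this device only appears once the network is up):
# iscsi_crypt  UUID=<luks-uuid>  /root/iscsi.key  luks,_netdev
```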

mdadm RAID with Proxmox

I recently acquired a new server with 2 drives that I intended to use as RAID1 for a virtualisation host for various things.

My hypervisor of choice is Proxmox (for a few reasons: support for KVM and LXC primarily, but the fact it’s Debian-based is a nice bonus, and I really dislike the occasionally-braindead networking implementation from VMware, which rules out ESXi).

This particular server does not have a RAID card, so I needed to use a software RAID implementation. Out of the box, RAID1 on Proxmox means using ZFS; however, to keep this box similar to others I have, I wanted to use ext4 and mdadm. So we’re going to have to do a bit of manual poking to get this how we need it.

This post is mostly an aide-memoire for myself for the future.
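For reference, the mdadm side of it boils down to something like this on Debian (a hedged sketch of the array creation only - the partition names are illustrative and depend on the actual disk layout):

```bash
# Mirror two partitions into /dev/md0 (placeholder partitions shown).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# Put ext4 on the array, then persist the array definition so it
# assembles automatically at boot.
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```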