Advent of Code Benchmarking

For a few years now I’ve been enjoying Eric Wastl’s Advent of Code. For those unaware, every year since 2015 Advent of Code has provided a 2-part coding challenge each day from December 1st to December 25th.

In previous years, Chris and I have fairly informally been trying to see who could produce the fastest code (me in PHP, Chris in Python). In the final week of last year, to assist with this, we both made our repos run in Docker and produce time output for each day.

This allowed us to run each other’s code locally to compare fairly without needing to install the other’s dev environment, and made the testing a bit fairer as it was no longer dependent on who had the faster CPU when running their own solution. For the rest of the year this was fine and we carried on as normal. As we got to the end, I remarked that it would be fun to have a web interface that automatically dealt with it and showed us the scores, but there was obviously no point in doing that once the year was over. Maybe in a future year…
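
As a rough sketch of the idea (the image name and run script here are made up, not from either of our actual repos), each repo just needed to build into an image that could be timed from outside:

# Illustrative only - build the repo into an image, then time a single day
docker build -t aoc-solutions .
time docker run --rm aoc-solutions ./run.sh 1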

Fast forward to this year. Chris and I (and ChrisN) coded up our Day 1 solutions as normal, and then some other friends started doing it for the first time. I remembered my plans from the previous year and suggested everyone should also docker-ify their repos… and so they agreed.

mdadm RAID with Proxmox

I recently acquired a new server with 2 drives that I intended to run in RAID1, as a virtualisation host for various things.

My hypervisor of choice is Proxmox (for a few reasons: support for KVM and LXC primarily, but the fact that it’s Debian-based is a nice bonus, and I really dislike the occasionally-braindead networking implementation from VMware, which rules out ESXi).

This particular server does not have a RAID card, so I needed to use a software RAID implementation. Out of the box, RAID1 on Proxmox requires ZFS; however, to keep this box similar to others I have, I wanted to use ext4 and mdadm. So we’re going to have to do a bit of manual poking to get this how we need it.
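
For reference, the core of it looks something like this (a sketch only - the partition names here are assumptions, and fitting this around the Proxmox installer is the fiddly part):

# Sketch only - assumes the mirror halves are /dev/sda1 and /dev/sdb1
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0
# Persist the array config and rebuild the initramfs so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u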

This post is mostly an aide-memoire for myself for the future.

DNS Hosting - Part 3: Putting it all together

This post is part of a series.

  1. DNS Hosting - Part 1: History
  2. DNS Hosting - Part 2: The rewrite
  3. DNS Hosting - Part 3: Putting it all together (This Post)

In my previous posts I discussed the history leading up to, and the eventual rewrite of, my DNS hosting solution. So this post will (finally) talk briefly about how it all runs in production on MyDNSHost.

Shortly before the whole rewrite I’d found myself playing around a bit with Docker for another project, so I decided early on that I was going to make use of Docker for the main bulk of the setup. This meant I didn’t need to worry about incompatibilities between different parts of the stack that needed different versions of things, and could update different bits at different times.

The system is split up into a number of containers (and could probably be split up into more).
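
As a sketch of the shape of it (the container names and images here are hypothetical, not the actual MyDNSHost layout - think one container each for the database, the API, the web frontend and the nameserver):

# Hypothetical container split - names and images are illustrative
docker network create mydnshost
docker run -d --name db   --network mydnshost -e MYSQL_RANDOM_ROOT_PASSWORD=yes mariadb
docker run -d --name api  --network mydnshost mydnshost/api
docker run -d --name web  --network mydnshost -p 80:80 mydnshost/web
docker run -d --name bind --network mydnshost -p 53:53/udp -p 53:53/tcp mydnshost/bind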

DNS Hosting - Part 2: The rewrite

This post is part of a series.

  1. DNS Hosting - Part 1: History
  2. DNS Hosting - Part 2: The rewrite (This Post)
  3. DNS Hosting - Part 3: Putting it all together

In my previous post about DNS Hosting I discussed the history leading up to when I decided I needed a better personal DNS hosting solution. I decided to code one myself to replace what I had been using previously.

I decided there were a few things that were needed:

  • Fully-Featured API
    • I wanted full control over the zone data programmatically, everything should be possible via the API.
    • The API should be fully documented.
  • Fully-Featured default web interface.
    • There should be a web interface that fully implements the API. Just because there is an API shouldn’t mean it has to be used to get full functionality.
    • Nothing should be possible via the default web UI that can’t also be done via the API.
  • Multi-User support
    • I also host some DNS for people who aren’t me, they should be able to manage their own DNS.
  • Domains should be shareable between users
    • Every user should be able to have their own account
    • User accounts should be able to be granted access to domains that they need to be able to access
      • Different users should have different access levels:
        • Some just need to see the zone data
        • Some need to be able to edit it
        • Some need to be able to grant other users access
  • Backend Agnostic
    • The authoritative data for the zones should be stored independently of the software used to serve it, to allow changing it easily in future

These were the basic criteria I started off with when I designed MyDNSHost.
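
To give a flavour of the first criterion: the aim was that anything the web interface does should boil down to a plain API call, something like this (the endpoint and payload here are hypothetical, not the documented MyDNSHost API):

# Hypothetical API call - URL, path and fields are illustrative
curl -X POST "https://dns.example.org/api/1.0/domains/example.com/records" \
     -H "Authorization: Bearer $TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"name": "www", "type": "A", "content": "203.0.113.10"}'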

Hugo PPA

I run Ubuntu on my servers, and since moving to Hugo, I wanted to make sure I was using the latest version available.

The Ubuntu repos currently contain Hugo version 0.15 in Xenial and 0.25.1 in Artful (and the next version, Bionic, only contains 0.26). The latest version of Hugo (as of today) is 0.32.2 - so the main repos are quite a bit out of date.

So to work around this, I’ve set up an apt repo that tracks the latest Hugo release, which can be installed and used like so:

sudo wget http://packages.dataforce.org.uk/packages.dataforce.org.uk_hugo.list -O /etc/apt/sources.list.d/packages.dataforce.org.uk_hugo.list
wget -qO- http://packages.dataforce.org.uk/pubkey.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install hugo

This repo tracks the latest Hugo debs in all four supported architectures (amd64, i386, armhf and arm64) and should stay automatically up to date with the latest version.

DNS Hosting - Part 1: History

This post is part of a series.

  1. DNS Hosting - Part 1: History (This Post)
  2. DNS Hosting - Part 2: The rewrite
  3. DNS Hosting - Part 3: Putting it all together

For as long as I can remember I’ve hosted my own DNS. Originally this was via cPanel on a single server that I owned, then after a while I moved to a new server, away from cPanel, and started doing everything myself (web, email, DNS) - hand-editing zone files and serving them via BIND.

This was great for learning, and it worked well for a while, but eventually I ended up with more servers and a lot more domains. Manually editing DNS zone files and reloading BIND was somewhat of a chore any time I was developing things, spinning up new services, or moving things between servers - I wanted something web-based.
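
To give a sense of the chore (the paths and record here are illustrative), every change meant editing a file, remembering to bump the zone’s serial number, and reloading:

# Illustrative only - append a record, then reload the zone in BIND
echo 'www    IN  A    203.0.113.10' | sudo tee -a /etc/bind/zones/db.example.com
# (after also bumping the serial number in the SOA record by hand)
sudo rndc reload example.com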

There weren’t many free/cheap providers that did what I wanted (this was long before Cloudflare or Route 53), so around 2007 I did what any (in)sane person would do - I wrote a custom control panel for managing domains. And email. And it was multi-user. And it had billing support. And a half-baked ticket system… OK, so I went a bit overboard with the plans for it. But mainly it controlled DNS, and throughout its lifetime that was the only bit that was “completed” and fully functional.

Moving to Hugo

For a while now I’ve been thinking of moving this blog to a statically generated site rather than using WordPress.

There are a number of reasons for this:

  1. I can version-control the content rather than relying on WordPress database backups.
  2. It renders quicker.
  3. I don’t actually use any of the WordPress features, so it’s just bloat, and a potential security hole.

I’ve seen Chris successfully use Hugo for his site and it seems to do exactly what I want, so I took the opportunity during the Christmas break to spend some time converting my blog from WordPress to Hugo.

Actually doing this was a multi-stage process.

It’s been a while…

It’s been a while since I updated this. Not through a lack of wanting to, more a combination of things - lack of time, lack of anything worth writing, hating the old blog theme…

So I’ve replaced the theme!

Maybe that’ll help.

It probably won’t.

It did not help.

Limiting the effectiveness of DNS Amplification

I recently had the misfortune of having a server I am responsible for abused in a DNS amplification attack, and thought I’d share how I countered this. (Whilst this was effective for me, your mileage may vary - but if this actually helps someone then it’s worth posting about.)

This particular server was the main recursor for the site it was located at (and this was correctly limited so as not to allow open recursion), but was also authoritative for a small selection of domains. (Yes, I know mixing recursive and authoritative nameservers is bad.)

The problem only came about when I needed to relocate the server to another site. In order to ensure continuity of service whilst the nameserver IP change propagated, I added some port-forwards at the old site that redirected DNS traffic to the new site. This, however, meant that all DNS traffic going towards the server came from an IP that was trusted for recursion. Oops.
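
For illustration, assuming the forward was plain NAT (a guess at the mechanism - the addresses are placeholders), it’s the masquerade rule that rewrites the source address, so everything arrives at the new site from the old server’s trusted IP:

# Sketch of the old-site port-forward - addresses are placeholders
iptables -t nat -A PREROUTING  -p udp --dport 53 -j DNAT --to-destination 198.51.100.53
iptables -t nat -A POSTROUTING -p udp -d 198.51.100.53 --dport 53 -j MASQUERADE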