DNS Hosting - Part 3: Putting it all together

In my previous posts I discussed the history leading up to my DNS hosting solution and its eventual rewrite. So this post will (finally) talk briefly about how it all runs in production on MyDNSHost.

Shortly before the whole rewrite I'd found myself playing around a bit with Docker for another project, so I decided early on that I was going to use Docker for the main bulk of the setup. This meant I didn't need to worry about incompatibilities between different parts of the stack that needed different versions of things, and I could update different bits at different times.

The system is split up into a number of containers (and could probably be split up into more).

To start with, I had the following containers:

  • API Container - Deals with all the backend interactions
  • WEB Container - Runs the main frontend that people see. Interacts with the API Container to actually do anything.
  • DB Container - Holds all the data used by the API
  • BIND Container - Runs an instance of BIND to handle DNSSEC signing and to distribute DNS zones to the public-facing servers.
  • CRON Container - Runs a bunch of maintenance scripts to keep things tidy, initiate DNSSEC signing, etc.

The tasks in the CRON container could probably be split up more, but for now I'm OK with having them in one container.

This worked well; however, I found that redeploying the API or WEB containers would log me out of the frontend, so another container was soon added:

  • MEMCACHED Container - Stores session data from the API and WEB containers to allow for horizontal scaling and restarting of containers.

In the first instance, the API Container was also responsible for interactions with the BIND container: it would generate zone files on demand when users made changes, and then poke BIND to load them. However, this was eventually split out further, and three more containers were added:

  • GEARMAN Container - Runs an instance of Gearman for the API container to push jobs to.
  • REDIS Container - Holds the job data for GEARMAN.
  • WORKER Container - Runs a bunch of worker scripts to do the tasks the API Container previously did for generating/updating zone files and pushing to BIND.

Splitting these tasks out into the WORKER container made the frontend feel faster, as it no longer needed to wait for things to happen and could just fire the jobs off into GEARMAN and let it worry about them. I also get some extra logging from this, as the worker scripts can be a lot more verbose. In addition, if a worker can't handle a job it can be rescheduled to try again, and the workers can (in theory) be scaled out horizontally a bit more if needed.
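
To make the flow concrete, here's a minimal sketch of what pushing and handling a job looks like with PHP's Gearman extension. The job name, payload and hostnames are illustrative assumptions, not the actual MyDNSHost code:

    <?php
    // API side: queue a "zone changed" job and return immediately.
    // The job name, payload and hostname are assumptions for illustration.
    $client = new GearmanClient();
    $client->addServer('gearman', 4730);   // the GEARMAN container
    $client->doBackground('zone_changed', json_encode(['domain' => 'example.org']));

    // WORKER side: a long-running process that picks those jobs up.
    $worker = new GearmanWorker();
    $worker->addServer('gearman', 4730);
    $worker->addFunction('zone_changed', function (GearmanJob $job) {
        $data = json_decode($job->workload(), true);
        // Regenerate the zone file for $data['domain'] and poke BIND to reload it.
        // (This is where the work the API used to do directly now lives.)
    });
    while ($worker->work());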

There were some initial challenges with this - the main one being around how the database interaction worked, as the workers would fail after periods of inactivity, get automatically restarted, and then work again immediately. This turned out to be mainly due to how I'd pulled the code out of the API into the workers. Whereas the scripts in the API run in the traditional way - the script gets called, does its thing (including setup), then dies - the WORKER scripts were long-running processes, so the DB connections were eventually timing out and the code was not designed to handle this.
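
A minimal sketch of the kind of guard that deals with this, assuming mysqli (the actual code and connection details may well differ): ping the connection before each job and reconnect if the server has gone away.

    <?php
    // Long-running workers can't assume the DB connection opened at startup is
    // still alive hours later. A simple guard (assuming mysqli) is to ping the
    // connection before each job and reconnect if it has timed out.
    function getDB(?mysqli &$db): mysqli {
        if ($db === null || !@$db->ping()) {
            $db = new mysqli('db', 'mydnshost', 'secret', 'mydnshost'); // illustrative credentials
        }
        return $db;
    }

    $db = null;
    // ... inside the worker's job callback:
    $result = getDB($db)->query('SELECT COUNT(*) FROM records');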

Finally, and more recently, I added statistical information about domains and servers, which required another two containers:

  • INFLUXDB Container - Runs InfluxDB to store time-series data and provide a nice way to query it for graphing.
  • CHRONOGRAF Container - Runs Chronograf to allow me to easily pull out data from INFLUXDB for testing.

That’s quite a few containers to manage. To actually run them all, I primarily make use of Docker Compose (to set up the various networks, volumes, and containers). This works well for the most part, but there are a few limitations around how it deals with restarting containers that cause fairly substantial downtime when upgrading WEB or API. To get around this I wrote a small bit of orchestration scripting that uses docker-compose to scale the WEB and API services up to two instances (letting docker-compose do the actual creation of the new container), then manually kills off the older container and scales back down to one. This seems to behave well.
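
The real orchestration script is a little more involved, but the core idea looks roughly like this - a sketch that shells out to docker-compose, where the service names, command syntax and oldest-container detection are assumptions rather than the actual script:

    <?php
    // Rough sketch of the rolling-restart idea: scale to two copies, remove the
    // oldest, then scale back down. Service names and commands are assumptions.
    function rollingRestart(string $service): void {
        // Ask docker-compose to run a second copy of the service.
        passthru("docker-compose scale {$service}=2");

        // Find the containers for this service and sort them oldest-first.
        exec("docker-compose ps -q {$service}", $ids);
        usort($ids, function (string $a, string $b): int {
            $createdA = trim(shell_exec("docker inspect -f '{{.Created}}' {$a}"));
            $createdB = trim(shell_exec("docker inspect -f '{{.Created}}' {$b}"));
            return strcmp($createdA, $createdB);
        });

        // Remove the oldest container, leaving the freshly created one in service...
        passthru('docker rm -f ' . escapeshellarg($ids[0]));

        // ...and tell docker-compose we're back to a single copy.
        passthru("docker-compose scale {$service}=1");
    }

    rollingRestart('web');
    rollingRestart('api');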

So with all these containers hanging around, I needed a way to expose them to the web and to automate the process of ensuring they had SSL certificates (using Let’s Encrypt). Fortunately, Chris Smith had already solved this problem for the most part in a way that worked for what I needed. In a blog post he describes a set of docker containers he created that automatically run nginx to proxy requests to other internal containers, and obtain appropriate SSL certificates using DNS challenges. For the most part all that was required was running this and adding some labels to my existing containers, and that was that…

Except this didn’t quite work initially: I couldn’t do the required DNS challenges unless I hosted my DNS somewhere else. So I ended up adding support for HTTP challenges, and then I was able to use this without needing to host DNS elsewhere. (In return, Chris has added support for using MyDNSHost for the DNS challenges, so it’s a win-win.) My orchestration script also handles setting up and running the automatic nginx proxy containers.

This brings me to the public-facing DNS servers. These are currently the only bit not running in Docker (though they could be). They run on some standard Ubuntu 16.04 VMs with a small setup script that installs BIND and an extra service to automatically add/remove zones based on a special “catalog zone”, since the versions of BIND currently in use don’t yet support catalog zones natively. Zones are transferred from the backend to the public servers using standard DNS NOTIFY and AXFR. DNSSEC is handled by the backend pre-signing the zones before sending them to the public servers, which never see the signing keys.

By splitting the jobs up this way, it should in theory be possible in future (if needed) to move away from BIND to alternatives (such as PowerDNS).

As well as the public service that I’m running, all of the code involved (all the containers and all the orchestration) is available on GitHub under the MIT License. Documentation is a little light (read: pretty much non-existent), but it’s all there for anyone else to use/improve/etc.

DNS Hosting - Part 2: The rewrite

In my previous post about DNS Hosting I discussed the history leading up to when I decided I needed a better personal DNS hosting solution. I decided to code one myself to replace what I had been using previously.

I decided there were a few things that were needed:

  • Fully-Featured API
    • I wanted full control over the zone data programmatically, everything should be possible via the API.
    • The API should be fully documented.
  • Fully-Featured default web interface.
    • There should be a web interface that fully implements the API. Just because there is an API shouldn’t mean it has to be used to get full functionality.
    • Nothing should be possible via the default web UI that can’t also be done via the API.
  • Multi-User support
    • I also host some DNS for people who aren’t me, they should be able to manage their own DNS.
  • Domains should be shareable between users
    • Every user should be able to have their own account
    • User accounts should be able to be granted access to domains that they need to be able to access
      • Different users should have different access levels:
        • Some just need to see the zone data
        • Some need to be able to edit it
        • Some need to be able to grant other users access
  • Backend Agnostic
    • The authoritative data for the zones should be stored independently of the software used to serve it, to allow changing that software easily in future

These were the basic criteria and what I started off with when I designed MyDNSHost.

MyDNSHost Homepage

Now that I had the basic criteria, I started off by coming up with a database structure for storing the data that I thought would suit my plans, and a basic framework for the API backend so that I could start creating some initial API endpoints. With these in place I created the database and pre-seeded it with some test data, which would allow me to test the API as I created it.

I use Chrome, so for testing the API I use the Restlet Client extension.

Armed with a database structure, a basic API framework, and some test data - I was ready to code!

Except I wasn’t.

Before I could start properly coding the API I needed to think about what endpoints I wanted and how the interactions would work. I wanted the API to make sense, so I wanted to get this all planned first so that I knew what I was aiming for.

I decided pretty early on that I was going to version the API - that way, if I messed it all up I could redo it without needing to worry about backwards compatibility, so for the time being everything would exist under the /1.0/ directory. I came up with the following basic idea for endpoints:

MyDNSHost LoggedIn Homepage
  • Domains
    • GET /domains - List domains the current user has access to
    • GET /domains/<domain> - Get information about <domain>
    • POST /domains/<domain> - Update domain
    • DELETE /domains/<domain> - Delete domain
    • GET /domains/<domain>/records - Get records for <domain>
    • POST /domains/<domain>/records - Update records for <domain>
    • DELETE /domains/<domain>/records - Delete records for <domain>
    • GET /domains/<domain>/records/<recordid> - Get specific record for <domain>
    • POST /domains/<domain>/records/<recordid> - Update specific record for <domain>
    • DELETE /domains/<domain>/records/<recordid> - Delete specific record for <domain>
  • Users
    • GET /users - Get a list of users (non-admin users should only see themselves)
    • GET /users/(<userid>|self) - Get information about a specific user (or the current user)
    • POST /users/(<userid>|self) - Update information about a specific user (or the current user)
    • DELETE /users/<userid> - Delete a specific user (or the current user)
    • GET /users/(<userid>|self)/domains - Get a list of domains for the given user (or the current user)
  • General
    • GET /ping - Check that the API is responding
    • GET /version - Get version info for the API
    • GET /userdata - Get information about the current login (user, access-level, etc)

This looked sane so I set about with the actual coding!

Rather than messing around with OAuth tokens and the like, I decided that every request to the API should be authenticated - initially using basic-auth with a username/password, and eventually also using API keys. This made things fairly simple whilst testing, and made interacting with the API via scripts quite straightforward (no need to grab a token first and then do things).
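
As a rough illustration of how simple that makes scripted use, here's a minimal sketch of calling the API from PHP with basic-auth. The base URL, credentials and response handling are placeholders for illustration, not the real values:

    <?php
    // Minimal sketch of a scripted API call using basic-auth on every request.
    // The base URL and credentials are placeholders for illustration only.
    function apiGet(string $endpoint): array {
        $ch = curl_init('https://api.mydnshost.co.uk/1.0' . $endpoint);
        curl_setopt_array($ch, [
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_HTTPAUTH => CURLAUTH_BASIC,
            CURLOPT_USERPWD => 'user@example.org:mypassword',
        ]);
        $response = curl_exec($ch);
        curl_close($ch);
        return json_decode($response, true);
    }

    // Check the API is up, then list the domains this user has access to.
    var_dump(apiGet('/ping'));
    var_dump(apiGet('/domains'));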

The initial implementation of the API, with domain/user editing functionality and API key support, was completed within a day, and then followed a week of evenings tweaking and adding functionality that would be needed later - such as internal “hook” points for when certain actions happened (changing records, etc.) so that I could add code to actually push these changes to a DNS server. As I was developing the API, I also made sure to document it using API Blueprint and Aglio - it was easier to keep the documentation up to date as I went than to write it all after the fact.
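
Conceptually those hook points are just callbacks keyed by an event name. A minimal sketch of the idea (the class, event names and payload are illustrative, not the actual MyDNSHost code):

    <?php
    // Minimal sketch of an internal "hook point": handlers register for an event,
    // and the API fires the event when the corresponding action happens.
    class Hooks {
        private static array $hooks = [];

        public static function add(string $event, callable $callback): void {
            self::$hooks[$event][] = $callback;
        }

        public static function run(string $event, array $args = []): void {
            foreach (self::$hooks[$event] ?? [] as $callback) {
                $callback($args);
            }
        }
    }

    // A handler might, for example, queue a job to regenerate the zone...
    Hooks::add('records_changed', function (array $args): void {
        error_log('Records changed for ' . $args['domain']);
    });

    // ...and the record-editing endpoint fires the hook after a successful update.
    Hooks::run('records_changed', ['domain' => 'example.org']);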

Once I was happy with the basic API functionality and knew from my (manual) testing that it functioned as desired, I set about the web UI. I knew I was going to use Bootstrap for this, because I am very much not a UI person and Bootstrap helps make my stuff look less awful.

MyDNSHost Records View

Now, I should point out here that I’m not a developer for my day job - most of what I write, I write for myself to “scratch an itch”, so to speak. I don’t keep up with all the latest frameworks and best practices and all that. It’s only in the last year that I switched away from hand-managing project dependencies in Java to using Gradle and letting it do it for me.

So for the web UI I decided to experiment and try to do things “properly”. I used Composer for dependency management for the first time, a third-party request router (Bramus/Router) for handling how pages are loaded, and Twig for templating. (At this point the API code was all hand-coded with no third-party dependencies. However, my experiment with the front end was successful, and the API code has since changed to also make use of Composer and some third-party dependencies for some functionality.)
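
To give a feel for how little glue that front end needs, here's a rough sketch of the Bramus/Router and Twig combination. The route paths, template names and the API helper are illustrative assumptions:

    <?php
    // Rough sketch of routing a front-end page with Bramus/Router and rendering
    // it with Twig. Route paths, template names and apiGet() are illustrative.
    require __DIR__ . '/vendor/autoload.php';

    $twig = new \Twig\Environment(new \Twig\Loader\FilesystemLoader(__DIR__ . '/templates'));

    $router = new \Bramus\Router\Router();

    // Show the records for a domain: the heavy lifting is done by the API,
    // so the UI just fetches the data and renders a template.
    $router->get('/domain/([^/]+)', function (string $domain) use ($twig) {
        $records = apiGet('/domains/' . $domain . '/records'); // see the earlier API sketch
        echo $twig->render('domain_records.twig', ['domain' => $domain, 'records' => $records]);
    });

    $router->run();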

The UI was much quicker to get to an initial usable state - as all the heavy lifting was already handled by the backend API code, the UI just had to display this nicely.

I then spent a few more evenings and weekends fleshing things out a bit more, and adding in things that I’d missed in my initial design and implementations. I also wrote some of the internal “hooks” that were needed to make the API able to interact with BIND and PowerDNS for actually serving DNS Data.

As this went on, the API layout I’d planned stayed mostly static (apart from a bunch more routes being added), but I did end up revisiting some of my initial decisions:

  • I moved from a level-based access system for separating users and admins to an entirely role-based system.
    • Different users can be granted access to do different things (eg manage users, impersonate users, manage all domains, create domains, etc)
  • I made domains entirely user-agnostic
    • Initially each domain had an “owner” user, but this was changed so that ownership over a domain is treated the same as any other level of access on the domain.
    • This means that domains can technically be owned by multiple people (Though in normal practice an “owner” can’t add another user as an “owner” - only users with “Manage all domains” permission can add users at the “owner” level)
    • This also allows domain-level API keys that can be used to make changes to a certain domain only, rather than to all domains a user has access to.

Eventually I had a UI and API system that seemed to do what I needed and I could look at actually putting this live and starting to use it (which I’ll talk about in the next post).

After the system went live I also added a few more features, requested by users, that weren’t part of my initial requirements, such as:

  • TOTP 2FA Support
    • With “remember this device” option rather than needing to enter a code every time you log in.
  • DNSSEC Support
  • EMAIL Notifications when certain important actions occur
    • User API Key added/changed
    • User 2FA Key added/changed
  • Webhooks whenever zone data is changed
  • Ability to use your own servers for hosting the zone data rather than mine
    • The live system automatically allows AXFR for a zone from any server listed as an NS on the domain and sends appropriate notifies.
  • Domain Statistics (such as queries per server, per record type, per domain etc)
  • IDN Support
  • Ability to import and export raw BIND data.
    • This makes it easier for people to move to/from the system without needing any interaction with admin users or needing to write any code to deal with zone files.
    • Ability to import Cloudflare-style zone exports.
      • These look like BIND zone files but are slightly invalid; this lets users import from Cloudflare without needing to manually fix up the zones.
  • Support for “Exotic” record types: CAA, SSHFP, TLSA etc.
  • Don’t allow domains to be added to accounts if they are sub-domains of an already-known-about domain.
    • As a result of this, also don’t allow people to add obviously-invalid domains or whole TLDs etc.

DNS Hosting - Part 1: History

For as long as I can remember I’ve hosted my own DNS. Originally this was via cPanel on a single server that I owned; after a while I moved to a new server away from cPanel and started doing everything myself (web, email, DNS) - hand-editing zone files and serving them via BIND.

This was great for learning and it worked well for a while, but eventually I ended up with more servers and a lot more domains. Manually editing DNS zone files and reloading BIND was somewhat of a chore any time I was developing things, spinning up new services, or moving things between servers - I wanted something web-based.

There weren’t many free/cheap providers that did what I wanted (this was long before Cloudflare or Route 53), so around 2007 I did what any (in)sane person would do - I wrote a custom control panel for managing domains. and email. and it was multi-user. and it had billing support. and a half-baked ticket-system… ok, so I went a bit overboard with the plans for it. But mainly it controlled DNS, and throughout its lifetime that was the only bit that was “completed” and fully functional.

The DNS editing was simple: it parsed BIND zone files, presented them in a table of text input fields, and let me make changes. (This is an ever-so-slight upgrade over “hand-edit zone files”.)

For security reasons the webserver couldn’t write to the BIND zone file directory, so the editor made use of temporary files that the webserver could write to, and a cron script made these temporary files live by moving them into the BIND zone file directory and reloading the zone with rndc reload example.org. When reading zone data, the editor would look for the zone in the temporary directory first before falling back to the BIND directory, so that any pending edits didn’t get lost before the cronjob ran.

After I had the editor working I wanted redundancy. I made use of my other servers and added secondary name servers that synced the zones from the master. There was a cronjob on the master server to build a list of valid zones, and separate cronjobs on the secondary servers that synced this list of zones every few hours. Zone data came in via AXFR from the master to the secondaries.

I even added an API of sorts for changing zone data. It wasn’t good (GET requests to crafted URLs such as GET /api/userapi-key/dns/key=domain-api-key/type=A/setrecord=somerecord.domain.com/ttl=3600/data=1.1.1.1 or so), but it let me automate DNS changes, which I used for automated website failover.

DNS Control Panel

This all kinda worked. There were a bunch of delays waiting for cronjobs to do things (creating a new zone needed a cronjob to run before the zone could be edited, and before it existed on the secondary servers; editing a zone then needed to wait for another cronjob to make the changes live, etc.), but ultimately it did what I needed, and the delays weren’t really a problem. The cronjobs on the master server ran every minute, and the ones on the secondary servers ran every 6 hours. Things worked, DNS got served, I could edit the records - job done?

A few years later (2010) I realised that the DNS editing part of the control panel was the only bit worth keeping, so I made plans to rip it out and make it a separate standalone service. The plan was to get rid of BIND and zone-file parsing, and move to PowerDNS, which has a MySQL backend that I could just edit directly. The secondary servers would then run with the main server configured as a supermaster, removing the need for cronjobs to sync the list of zones.

So I bought a generic domain mydnshost.co.uk (You can never have too many domain names!) and changed all the nameservers at my domain registrar to point to this generic name… and then did absolutely nothing else with it (except for minor tweaks to the control panel to add new record types) for a further 7 years. Over the years I’d toy again with doing something, but this ultimately never panned out.

Whilst working on another project that was using Let’s Encrypt for SSL certificates, I found myself needing DNS-based challenges for the domain verification rather than HTTP-based verification. At first I did this using my old DNS system. I was using dehydrated for getting the certificates, so I was able to use a custom shell script that handled updating the DNS records (and sleeping for the appropriate length of time to ensure that the cronjob had run before letting the Let’s Encrypt server check for the challenge) - but this felt dirty. I had to add support for removing records to my API (in 10 years I’d never needed it, as I always just changed where records pointed) and it just felt wrong. The old API wasn’t really very useful for anything other than very specific use cases, and the code was nasty.

So, in March 2017 I finally decided to make use of the mydnshost.co.uk domain and actually build a proper DNS system, which I (appropriately) called MyDNSHost - I’ll go into more detail about that in part 2 of this series of posts.