DNS Hosting - Part 2: The rewrite


In my previous post about DNS hosting I discussed the history leading up to the point where I decided I needed a better personal DNS hosting solution, and why I chose to write one myself to replace what I had been using previously.

I decided there were a few things that were needed:

  • Fully-Featured API
    • I wanted full control over the zone data programmatically, everything should be possible via the API.
    • The API should be fully documented.
  • Fully-Featured default web interface.
    • There should be a web interface that fully implements the API. Just because there is an API shouldn’t mean it has to be used to get full functionality.
    • There should be nothing that the default web UI can do that can’t also be done via the API.
  • Multi-User support
    • I also host some DNS for people who aren’t me, they should be able to manage their own DNS.
  • Domains should be shareable between users
    • Every user should be able to have their own account
    • User accounts should be able to be granted access to domains that they need to be able to access
      • Different users should have different access levels:
        • Some just need to see the zone data
        • Some need to be able to edit it
        • Some need to be able to grant other users access
  • Backend Agnostic
    • The authoritative data for the zones should be stored independently from the software used to serve it to allow changing it easily in future

These were the basic criteria and what I started off with when I designed MyDNSHost.

MyDNSHost Homepage

Now that I had the basic criteria, I started by coming up with a database structure for storing the data that I thought would suit my plans, and a basic framework for the API backend so that I could start creating some initial API endpoints. With this in place I created the database and pre-seeded it with some test data, which would allow me to test the API as I created it.

I use Chrome, so for testing the API I use the Restlet Client extension.

Armed with a database structure, a basic API framework, and some test data - I was ready to code!

Except I wasn’t.

Before I could start properly coding the API I needed to think about what endpoints I wanted and how the interactions would work. I wanted the API to make sense, so I planned all of this out first so that I knew what I was aiming for.

I decided pretty early on that I was going to version the API - that way, if I messed it all up, I could redo it without needing to worry about backwards compatibility - so for the time being everything would exist under the /1.0/ directory. I came up with the following basic idea for endpoints:

MyDNSHost LoggedIn Homepage
  • Domains
    • GET /domains - List domains the current user has access to
    • GET /domains/<domain> - Get information about the domain
    • POST /domains/<domain> - Update the domain
    • DELETE /domains/<domain> - Delete the domain
    • GET /domains/<domain>/records - Get records for the domain
    • POST /domains/<domain>/records - Update records for the domain
    • DELETE /domains/<domain>/records - Delete records for the domain
    • GET /domains/<domain>/records/<recordid> - Get a specific record for the domain
    • POST /domains/<domain>/records/<recordid> - Update a specific record for the domain
    • DELETE /domains/<domain>/records/<recordid> - Delete a specific record for the domain
  • Users
    • GET /users - Get a list of users (non-admin users should only see themselves)
    • GET /users/(<userid>|self) - Get information about a specific user (or the current user)
    • POST /users/(<userid>|self) - Update information about a specific user (or the current user)
    • DELETE /users/<userid> - Delete a specific user (or the current user)
    • GET /users/(<userid>|self)/domains - Get a list of domains for the given user (or the current user)
  • General
    • GET /ping - Check that the API is responding
    • GET /version - Get version info for the API
    • GET /userdata - Get information about the current login (user, access-level, etc)

This looked sane so I set about with the actual coding!

Rather than messing around with OAuth tokens and the like, I decided that every request to the API should be authenticated - initially using basic-auth with a username/password, and eventually also using API keys. This kept things fairly simple whilst testing, and made interacting with the API via scripts quite straightforward (no need to grab a token first and then do things).
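
As a rough illustration of how simple that makes scripting against it, requests look something like this (the hostname and the API-key header names here are placeholders for illustration, not necessarily what MyDNSHost actually uses):

# List the domains the authenticated user has access to, using basic-auth
curl -u 'user@example.org:password' https://api.mydnshost.example/1.0/domains

# Fetch the records for one domain using an API key instead
# ("X-API-User"/"X-API-Key" are hypothetical header names)
curl -H 'X-API-User: user@example.org' -H 'X-API-Key: 1234-abcd' \
     https://api.mydnshost.example/1.0/domains/example.org/records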

The initial implementation of the API, with domain/user editing functionality and API key support, was completed within a day, followed by a week of evenings tweaking and adding functionality that would be needed later - such as internal “hook” points for when certain actions happen (changing records, etc.) so that I could add code to actually push these changes to a DNS server. As I was developing the API, I also made sure to document it using API Blueprint and Aglio - it was easier to keep it up to date as I went than to write it all after the fact.
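
For reference, turning an API Blueprint file into browsable HTML documentation with Aglio is pretty much a one-liner - something like this (the .apib filename is just an example):

npm install -g aglio
aglio -i api.apib -o docs/index.html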

Once I was happy with the basic API functionality and knew from my (manual) testing that it functioned as desired, I set about the web UI. I knew I was going to use Bootstrap for this, because I am very much not a UI person and Bootstrap helps make my stuff look less awful.

MyDNSHost Records View

Now, I should point out here that I’m not a developer for my day job; most of what I write, I write for myself to “scratch an itch”, so to speak. I don’t keep up with all the latest frameworks and best practices. Only in the last year did I switch away from hand-managing project dependencies in Java to using Gradle and letting it do it for me.

So for the web UI I decided to experiment and try to do things “properly”. I used Composer for dependency management for the first time, a third-party request router (Bramus/Router) to handle how pages are loaded, and Twig for templating. (At this point the API code was all hand-written with no third-party dependencies. However, the front-end experiment was a success, and the API code has since been changed to also use Composer and some third-party dependencies for some functionality.)
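
Pulling those dependencies in with Composer is just a couple of commands (these are the package names as published on Packagist):

composer require bramus/router
composer require twig/twig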

The UI was much quicker to get to an initial usable state - as all the heavy lifting was already handled by the backend API code, the UI just had to display this nicely.

I then spent a few more evenings and weekends fleshing things out a bit more, and adding in things that I’d missed in my initial design and implementations. I also wrote some of the internal “hooks” that were needed to make the API able to interact with BIND and PowerDNS for actually serving DNS Data.
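
The hooks themselves live inside the API code, but conceptually what they end up triggering on the DNS side is just the standard tooling - a sketch of the idea rather than the actual hook code (the zone name is a placeholder):

# BIND: after rewriting the zone file, reload that zone so the change is served
rndc reload example.org

# PowerDNS: after updating the backend database, bump the serial and notify secondaries
pdnsutil increase-serial example.org
pdns_control notify example.org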

As this went on, the API layout I had planned stayed mostly static (apart from a bunch of extra routes being added), but I did end up revisiting some of my initial decisions:

  • I moved from a level-based access system for separating users and admins to an entirely role-based system.
    • Different users can be granted access to do different things (eg manage users, impersonate users, manage all domains, create domains, etc)
  • I made domains entirely user-agnostic
    • Initially each domain had an “owner” user, but this was changed so that ownership over a domain is treated the same as any other level of access on the domain.
    • This means that domains can technically be owned by multiple people (Though in normal practice an “owner” can’t add another user as an “owner” - only users with “Manage all domains” permission can add users at the “owner” level)
    • This also allows domain-level API keys that can be used to make changes to only a specific domain, rather than all domains a user has access to.

Eventually I had a UI and API system that seemed to do what I needed and I could look at actually putting this live and starting to use it (which I’ll talk about in the next post).

After the system went live I also added a few more features that were requested by users that weren’t part of my initial requirements, such as:

  • TOTP 2FA Support
    • With “remember this device” option rather than needing to enter a code every time you log in.
  • DNSSEC Support
  • Email notifications when certain important actions occur
    • User API Key added/changed
    • User 2FA Key added/changed
  • Webhooks whenever zone data is changed
  • Ability to use your own servers for hosting the zone data rather than mine
    • The live system automatically allows AXFR for a zone from any server listed as an NS on the domain, and sends appropriate notifies (see the dig sketch after this list).
  • Domain Statistics (such as queries per server, per record type, per domain etc)
  • IDN Support
  • Ability to import and export raw BIND data.
    • This makes it easier for people to move to/from the system without needing any interaction with admin users or needing to write any code to deal with zone files.
    • Ability to import Cloudflare-style zone exports.
      • These look like BIND zone files but are slightly invalid; this lets users import straight from Cloudflare without needing to manually fix up the zones.
  • Support for “Exotic” record types: CAA, SSHFP, TLSA etc.
  • Don’t allow domains to be added to accounts if they are sub-domains of a domain the system already knows about.
    • As a result of this, also don’t allow people to add obviously-invalid domains or whole TLDs, etc.
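
For the zone-transfer point above: checking that a server listed in the NS records can actually pull the zone is a one-liner with dig (hostnames here are placeholders):

dig @ns1.example.net example.org AXFR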


Hugo apt repo

I run Ubuntu on my servers, and since moving to Hugo I wanted to make sure I was using the latest version available.

The Ubuntu repos currently contain Hugo version 0.15 in Xenial and 0.25.1 in Artful (and the next version, Bionic, only contains 0.26). The latest version of Hugo (as of today) is 0.32.2 - so the main repos are quite a bit out of date.

So to work around this, I’ve set up an apt repo that tracks the latest release of Hugo, which can be installed and used like so:

sudo wget http://packages.dataforce.org.uk/packages.dataforce.org.uk_hugo.list -O /etc/apt/sources.list.d/packages.dataforce.org.uk_hugo.list
wget -qO- http://packages.dataforce.org.uk/pubkey.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install hugo

This repo tracks the latest Hugo debs in all four of the supported architectures - amd64, i386, armhf and arm64 - and should stay automatically up to date with the latest version.

DNS Hosting - Part 1: History


For as long as I can remember I’ve hosted my own DNS. Originally this was via cPanel on a single server that I owned; after a while I moved to a new server away from cPanel and started doing everything myself (web, email, DNS) - hand-editing zone files and serving them via BIND.

This was great for learning, and it worked well for a while, but eventually I ended up with more servers and a lot more domains. Manually editing DNS zone files and reloading BIND was somewhat of a chore any time I was developing things, spinning up new services or moving things between servers - I wanted something web-based.

There weren’t many free/cheap providers that did what I wanted (this was long before Cloudflare or Route 53), so around 2007 I did what any (in)sane person would do - I wrote a custom control panel for managing domains. And email. And it was multi-user. And it had billing support. And a half-baked ticket system… OK, so I went a bit overboard with the plans for it. But mainly it controlled DNS, and throughout its lifetime that was the only bit that was “completed” and fully functional.

The DNS editing was simple: it parsed BIND zone files, presented them in a table of text input fields, and let me make changes. (This is an ever-so-slight upgrade over “hand-edit zone files”.)

For security reasons the webserver couldn’t write to the BIND zone file directory, so it made use of temporary files that the webserver could write to, and a cron script then made these temporary files live by moving them into the BIND zone file directory and reloading the zone with rndc reload example.org. Reading of zone data in the editor would look for the zone in the temporary directory first before falling back to the BIND directory, so that any pending edits didn’t get lost before the cronjob ran.
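
A minimal sketch of that cron script, with placeholder paths rather than the real ones:

#!/bin/bash
# Promote pending zone edits from the web-writable directory into the live
# BIND zone directory, then reload each affected zone.
PENDING=/var/spool/dns-pending
LIVE=/etc/bind/zones

for file in "$PENDING"/*.db; do
    [ -e "$file" ] || continue
    zone=$(basename "$file" .db)
    mv "$file" "$LIVE/$zone.db"
    rndc reload "$zone"
done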

After I had the editor working I wanted redundancy. I made use of my other servers and added secondary name servers that synced the zones from the master. There was a cronjob on the master server to build a list of valid zones, and separate cronjobs on the secondary servers that synced this list of zones every few hours. Zone data came in via AXFR from the master to the secondaries.
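
The secondary-side cronjob really only had to turn that list of zones into slave-zone definitions and let BIND’s normal AXFR handling do the rest - roughly like this (the master IP, list URL and paths are placeholders):

#!/bin/bash
# Fetch the zone list from the master, generate slave zone stanzas for BIND,
# then reconfigure so any new zones get transferred in via AXFR.
MASTER=192.0.2.1
ZONELIST_URL=http://master.example.org/zones.txt
CONF=/etc/bind/slave-zones.conf

wget -qO- "$ZONELIST_URL" | while read -r zone; do
    [ -n "$zone" ] || continue
    echo "zone \"$zone\" { type slave; masters { $MASTER; }; file \"/var/cache/bind/$zone.db\"; };"
done > "$CONF"

rndc reconfig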

I even added an API of sorts for changing zone data. It wasn’t good (GET requests to crafted URLs such as GET /api/userapi-key/dns/key=domain-api-key/type=A/setrecord=somerecord.domain.com/ttl=3600/data= or so), but it let me automate DNS changes, which I used for automated website failover.

DNS Control Panel

This all kinda worked. There were a bunch of delays waiting for cronjobs to do things (creating new zones needed a cronjob to run before the zones could be edited - and before they existed on the secondary servers - and editing zones then needed to wait for another cronjob to make the changes live), but ultimately it did what I needed, and the delays weren’t really a problem. The cronjobs on the master server ran every minute, and the secondary servers synced every 6 hours. Things worked, DNS got served, I could edit the records - job done?

A few years later (2010) I realised that the DNS editing part of the control panel was the only bit worth keeping, so I made plans to rip it out and make it a standalone service. The plan was to get rid of BIND and zone-file parsing and move to PowerDNS, which had a MySQL backend that I could just edit directly. The secondary servers would then run with the main server configured as a supermaster, to remove the need for cronjobs to sync the list of zones.

So I bought a generic domain, mydnshost.co.uk (you can never have too many domain names!), and changed all the nameservers at my domain registrar to point to this generic name… and then did absolutely nothing else with it (except for minor tweaks to the control panel to add new record types) for a further 7 years. Over the years I’d occasionally toy with doing something about it, but this ultimately never panned out.

Whilst working on another project that was using Let's Encrypt for SSL certificates, I found myself needing to use DNS-based challenges for the domain verification rather than HTTP-based verification. At first I did this using my old DNS system. I was using dehydrated for getting the certificates, so I was able to use a custom shell script that handled updating the DNS records (and sleeping for the appropriate length of time to ensure that the cronjob had run before letting the Let's Encrypt server check for the challenge) - but this felt dirty. I had to add support for removing records to my API (in 10 years I’d never needed it, as I always just changed where records pointed), and it just felt wrong. The old API wasn’t really very useful for anything other than very specific use cases, and the code was nasty.
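
A dehydrated DNS-01 hook along those lines boils down to handling the deploy_challenge and clean_challenge calls - a stripped-down sketch of the shape of it (the API URL, credentials and record paths here are placeholders, not my actual endpoints):

#!/bin/bash
# dehydrated invokes the hook as: hook.sh <operation> <domain> <token_filename> <token_value>
OP="$1"; DOMAIN="$2"; TOKEN_VALUE="$4"

case "$OP" in
    deploy_challenge)
        # Publish the challenge as a TXT record via the (placeholder) DNS API
        curl -s -u "$DNS_API_USER:$DNS_API_KEY" -X POST \
             -d "name=_acme-challenge.$DOMAIN&type=TXT&content=$TOKEN_VALUE" \
             "https://dns.example.org/api/records/$DOMAIN"
        # Wait long enough for the cronjob to make the record live before validation
        sleep 120
        ;;
    clean_challenge)
        curl -s -u "$DNS_API_USER:$DNS_API_KEY" -X DELETE \
             "https://dns.example.org/api/records/_acme-challenge.$DOMAIN"
        ;;
esac

exit 0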

So, in March 2017 I finally decided to make use of the mydnshost.co.uk domain and actually build a proper DNS system which I (appropriately) called MyDNSHost - and I’ll go into more detail about that specifically in part 2 of this series of posts.

Moving to Hugo


For a while now I’ve been thinking of moving this blog to a statically generated site rather than using WordPress.

There are a number of reasons for this:

  1. I can version-control the content rather than relying on WordPress database backups.
  2. It renders quicker.
  3. I don’t actually use any of the WordPress features, so it’s just bloat and a potential security hole.

I’ve seen Chris successfully use Hugo for his site and it seems to do exactly what I want, so I took the opportunity during the Christmas break to spend some time converting my blog from WordPress to Hugo.

Actually doing this was a multi-stage process.

Stage 1 - Exporting the old content

This was mostly achieved using the wordpress-to-hugo-exporter.

Once the content was exported, I went through and removed any posts that were not marked as Published. (This blog has technically been running for a long time under various guises. Some of the very old content is “my-first-blog”-style garbage, and over the years I’ve marked these as not-published within WordPress. I’ll keep hold of these posts separately from the main site git repository.)

Stage 2 - Converting the theme

The next stage was converting the old WordPress theme I was using. It had taken me a long time to decide on a theme I even somewhat liked in the first place, so for now I figured I’d keep the same one.

Thankfully Hugo themes are pretty straightforward, and the conversion process was pretty painless once I got to grips with the Hugo syntax.

Stage 3 - Fixing the exported content

The wordpress-to-hugo-exporter plugin did a pretty decent job of exporting the old content, but it wasn’t perfect.

When exporting, the content gets run through the WordPress the_content filter so that third-party plugins get a chance to modify it. Sometimes the generated HTML confused the converter and resulted in sub-optimal (broken) markdown output.

Thankfully I don’t have a huge amount of content, and Hugo lets you run a debugging server using hugo server that automatically refreshes pages as you save them. This allowed me to fix the content and see the fixes in real time.

Stage 4 - Publishing

This was the easy stage. I created a new GitHub repo that stores the site content.

This is then checked out on the webserver, and hugo is run in the directory to generate the final content into the /public directory, which is then served by my webserver.

To automate the deployment process, there is also a script in the repo (github-webhook-deploy.php) that I run on the server under a different domain and that gets poked by GitHub any time I push to the repo. This script handles pulling the updated version of the site and running hugo on it.
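
The deploy step it performs is essentially just this (with a placeholder path for where the repo is checked out):

cd /var/www/blog   # placeholder path to the checked-out site repository
git pull
hugo               # regenerate the site into ./public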

Final Thoughts

  • Hugo’s theming is nice and simple so at some point I may redo the theme. The content is totally agnostic to the theme, so this should be pretty seamless.

  • The new blog no longer has comments. I don’t think this is a big loss - there are plenty of ways for people to contact me. I may look into something like Disqus in future.

  • I don’t know if this will actually make me more likely to update the site or write more, but it’s worth a try.

  • Sorry to RSS Readers, the change is likely going to spam you about all the posts again.

It’s been a while…

It’s been a while since I updated this. Not through a lack of wanting to, more a combination of things - lack of time, lack of anything worth writing, hating the old blog theme…

So I’ve replaced the theme!

Maybe that’ll help.

It probably won’t.

Limiting the effectiveness of DNS Amplification

I recently had the misfortune of having a server I am responsible for used as a target for DNS Amplification, and thought I’d share how I countered this. (Whilst this was effective for me, your mileage may vary, but if this actually helps someone then it’s worth posting about.)

This particular server was the main recursor for the site it was located at (and this was correctly limited so as not to allow open recursion), but it was also authoritative for a small selection of domains. (Yes, I know mixing recursive and authoritative servers is bad.)

The problem only came about when I needed to relocate the server to another site. In order to ensure continuity of service whilst the nameserver IP change propagated, I added some port-forwards at the old site that redirected DNS traffic to the new site. This however meant that all DNS traffic going towards the server came from an IP that was trusted for recursion. Oops.
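
The port-forward itself was just NAT of this general shape (assuming iptables; addresses are placeholders) - and it’s the source NAT on the way out that made every forwarded query arrive at the new server from the old site’s single, recursion-trusted address:

# Forward DNS arriving at the old site on to the server at its new address
iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to-destination 198.51.100.10
iptables -t nat -A PREROUTING -p tcp --dport 53 -j DNAT --to-destination 198.51.100.10
# Masquerade so replies route back via the old site - this is what makes all
# forwarded queries appear to come from one trusted source IP
iptables -t nat -A POSTROUTING -p udp --dport 53 -d 198.51.100.10 -j MASQUERADE
iptables -t nat -A POSTROUTING -p tcp --dport 53 -d 198.51.100.10 -j MASQUERADE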

After adding the port-forwards, but before updating the nameservers, I got distracted and ended up forgetting about this little hack - until the other day, when I suddenly noticed that both sites were suffering under large numbers of packets. (It’s worth noting that in this case both sites were actually on standard ADSL connections, so there was not a whole lot of upload bandwidth available!)

After using tcpdump it became apparent quite quickly what was going on, and it reminded me that I hadn’t actually made the nameserver change yet. This left me in a situation where the server was being abused, but I wasn’t in a position to just remove the port forward without causing a loss of service.

I was, however, able to add a selection of iptables rules to the firewall at the first site (the one doing the forwarding) in order to limit the effectiveness of the attack. These should be self-explanatory (along with the comments):

# Create a chain to store block rules in
iptables -N BADDNS

# Match all "IN ANY" DNS Queries, and run them past the BADDNS chain.
iptables -A INPUT -p udp --dport 53 -m string --hex-string "|00 00 ff 00 01|" --to 255 --algo bm -m comment --comment "IN ANY?" -j BADDNS
iptables -A FORWARD -p udp --dport 53 -m string --hex-string "|00 00 ff 00 01|" --to 255 --algo bm -m comment --comment "IN ANY?" -j BADDNS

# Block domains that are being used for DNS Amplification...
iptables -A BADDNS -m string --hex-string "|04 72 69 70 65 03 6e 65 74 00|" --algo bm -j DROP --to 255 -m comment --comment "ripe.net"
iptables -A BADDNS -m string --hex-string "|03 69 73 63 03 6f 72 67 00|" --algo bm -j DROP --to 255 -m comment --comment "isc.org"
iptables -A BADDNS -m string --hex-string "|04 73 65 6d 61 02 63 7a 00|" --algo bm -j DROP --to 255 -m comment --comment "sema.cz"
iptables -A BADDNS -m string --hex-string "|09 68 69 7a 62 75 6c 6c 61 68 02 6d 65 00|" --algo bm -j DROP --to 255 -m comment --comment "hizbullah.me"

# Rate limit the rest.
iptables -A BADDNS -m recent --set --name DNSQF --rsource
iptables -A BADDNS -m recent --update --seconds 10 --hitcount 5 --name DNSQF --rsource -j DROP

This flat-out blocks the DNS queries for domains that I am not authoritative for that were being used in the attack, but I didn’t want to entirely block all “IN ANY” queries, so it rate-limits the rest. This was pretty effective at stopping the ongoing abuse.

Of course, this only works if the same set of IPs is repeatedly being targeted (remember, these are generally spoofed source IPs that are actually the real target). Once the same target has been spoofed enough times it gets blocked, and no more DNS packets will be sent to it, thus limiting the effectiveness of the attack (how much it limits it depends on how many packets would otherwise have been aimed at the unsuspecting target).

Here is my iptables output as of right now, considering the counters were cleared Friday morning:

root@rakku:~ # iptables -vnx --list BADDNS
Chain BADDNS (2 references)
    pkts      bytes target     prot opt in     out     source               destination
  458939 29831035 DROP       all  --  *      *             STRING match "|0472697065036e657400|" ALGO name bm TO 255 /* ripe.net */
 2215367 141783488 DROP       all  --  *      *             STRING match "|0473656d6102637a00|" ALGO name bm TO 255 /* sema.cz */
       0        0 DROP       all  --  *      *             STRING match "|0968697a62756c6c6168026d6500|" ALGO name bm TO 255 /* hizbullah.me */
       1     2248 DROP       all  --  *      *             STRING match "|03697363036f726700|" ALGO name bm TO 255 /* isc.org */
    5571   385042            all  --  *      *             recent: SET name: DNSQF side: source
    5542   374343 DROP       all  --  *      *             recent: UPDATE seconds: 10 hit_count: 5 name: DNSQF side: source
root@rakku:~ #

Interestingly, the usual amplification target, isc.org, wasn’t really used this time.

As soon as the nameserver IP updated (it seems the attackers were using DNS to find which server to attack), the packets started arriving directly at the new site, no longer matched the recursion-allowed subnets, and the attack stopped being effective (and then eventually stopped altogether once I removed the port-forward, which also stopped the first site responding recursively).

In my case I applied this where I was doing the forwarding, as the attack was only actually a problem if the query ended up at that site and I wanted to limit the outbound packets being forwarded; however, this would work just fine if implemented directly on the server ultimately being attacked.

Website Reshuffle

Over the past few weekends I’ve been (slowly) working on moving my websites around a bit so that things are all in one place, and in the case of this blog, no longer hosted on my home ADSL connection.

At the moment, all non-existent pages on http://home.dataforce.org.uk/, http://dataforce.org.uk/ and http://shanemcc.co.uk/ will redirect to the equivalent link on http://blog.dataforce.org.uk/. Over time I will work on moving all public content from these sites over to here (there isn’t much - they’ve mostly been used as dumping grounds!).

After this http://home.dataforce.org.uk/ will be primarily for private things and http://dataforce.org.uk/ and http://shanemcc.co.uk/ will simply redirect here. Eventually I may look to transition this blog over to one of the raw domains (probably dataforce.org.uk). Ultimately I’m trying to do this without breaking any links that may exist to files/etc on these domains.

So if stuff breaks over the next few weeks, that’s why. Feel free to leave a comment if you notice anything or something goes missing.

GMail – apply labels to email from group members – Redux

A while ago I posted a python script that allowed automatically adding labels to GMail messages based on contact groups.

Unfortunately, a side effect of this script was that Google occasionally would lock an account out for “suspicious activity”, and for this reason I stopped using the script.

However, I recently looked at Google Apps Script to see if it would allow me to recreate this using Google-approved APIs, and the good news is: yes, it does.

The script (linked below) implements the same behaviour as the old python script. It checks every thread from the past 2 dates (so today and yesterday), and then for each message in the thread it gets the list of groups the sender is in (if the sender is a contact and in any groups) and checks to see if there are labels that match the same names; if so, it applies them to the message.

To get this running, create a new project on the Google Apps script page, then paste the code in.

Modify scheduledProcessInbox and processInboxAll to include a label prefix if desired (eg contacts/) and then enable the desired schedule (click on the clock icon in the toolbar). Once this has been scheduled you can run an initial pass over the inbox using processInboxAll() - however this is limited to the last 500 threads.

The code can now be found here on GitHub.

Any questions/comments/bugs please leave them here or on github.

Microsoft Lync on Linux


Update: This post still gets a lot of search traffic, but it is now over a year old, and I no longer have a need to use Lync, so I haven’t needed to keep this working.

I believe that the Ubuntu repos now contain new enough versions of SIPE that the deb mentioned here shouldn’t be needed any more, but that the rest of the instructions should still be valid.

Update 2: I need to use Lync again. Pidgin from the default Ubuntu repos does indeed now appear to work just fine with a custom user agent. In addition, I’ve also had some success with “WYNC”, which works pretty well but has a few minor issues of its own.

Recently at work we have started using Lync internally. Whilst this is great for the Windows and Mac users among us, not so much for those of us running on Linux.

However, it turns out that it is possible to get basic Lync support working quite easily. I can see people, talk to people, people can talk to me - I can send files to people, but people can’t send files to me. I’ve not tried any video/voice stuff, but I suspect it doesn’t work.

It’s done using “sipe” - basically an open-source implementation of the extended SIP/SIMPLE protocol that Lync uses for chat.

The basic steps on Ubuntu are:

  • Install the latest pidgin from the pidgin devs PPA (apt-get install pidgin pidgin-sipe)
  • Download sipe
  • Compile it
  • Connect to Lync.

The compiling step is required because we use Office 365 for Lync, which needs the latest version of SIPE, for which a deb does not yet appear to exist. However, I have uploaded my compiled deb, which can be found below.

Instructions for Ubuntu (using a pre-compiled deb I’ve uploaded):

sudo apt-add-repository ppa:pidgin-developers/ppa
sudo apt-get update
sudo apt-get install pidgin
wget http://www.myfileservice.net/pidgin-sipe_1.13.1-2_i386.deb
sudo dpkg -i pidgin-sipe_1.13.1-2_i386.deb

Once this is done you can then open pidgin, and add an “Office Communicator” account, using the following settings:

First tab (Basic)
Login: email address
Username: email address
Password: password

Second tab (Advanced)
Server/port: blank
Connection Type: Auto
User Agent: UCCAPI/4.0.7577.314 OC/4.0.7577.314
Auth Scheme: TLS-DSK

Un-tick “Use single sign on”, leave everything below it blank

Ignore the other 2 tabs

Done. Connect, see buddies :)

Amusingly, at home I’ve actually had more success on Linux than Windows! On my Windows machine, opening LyncSetup.exe seems to do nothing at all; the process appears to be running, but no setup window appears.

Issues encountered:

  • The version of pidgin-sipe currently in Ubuntu repos is too old to work with Office 365 (needs 1.13.0, hence compiling myself)
  • The version of pidgin in Ubuntu was old, so I installed a new version to be sure
  • A colleague of mine seems to have had no success with these steps - pidgin seems to crash immediately after trying to connect

The pidgin-sipe deb above also builds the required “telepathy” binaries – so I’m going to have a go at getting it working with KDE’s native messaging client rather than pidgin, but for now, for IM at least, pidgin is quite usable. (As I no longer need to use Lync, I never did get round to trying this.)