Docker Swarm with Ceph for cross-server files

I’ve been wanting to play with Docker Swarm for a while now for hosting containers, and finally sat down this weekend to do it.

Something that has always stopped me before now was that I wanted to have some kind of cross-site storage, but I don’t have any SAN storage available to me, just standalone hosts. I’ve been able to work around this using Ceph on the nodes.

Note: I’ve never used Ceph before and I don’t really know what I’m doing with it, so this is all a bit of guesswork. I used Funky Penguin’s Geek Cookbook as a basis for some of this, though some things have changed since then, and I’m using base CentOS rather than Atomic Host (I tried Atomic Host, but wanted a newer version of Docker so switched away).

All my physical servers run Proxmox, and this is no exception. On 3 of these host nodes I created a new VM (1 per node) to be part of the cluster. These all have 3 disks: 1 for the base OS, 1 for Ceph, and 1 for cloud-init (the non-cloud-init disks are all SCSI with individual iothreads).

CentOS provide a cloud-image-compatible disk here that I use as the base OS. I created a disk in Proxmox, then detached it, overwrote it with the CentOS-provided image and re-attached it. I could have used an Ubuntu cloud image instead.

I now had 3 empty CentOS VMs ready to go.

First thing to do is get the nodes ready for Docker:

curl https://download.docker.com/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo
mkdir /etc/docker
echo '{"storage-driver": "overlay2"}' > /etc/docker/daemon.json
yum install docker-ce
systemctl start chronyd
systemctl enable chronyd
systemctl start docker
systemctl enable docker

And build our swarm cluster.

On the first node:

docker swarm init
docker swarm join-token manager

And then on the other 2 nodes, copy and paste the output from the last command to join them to the cluster. This joins all 3 nodes as managers, and you can confirm the cluster is working like so:

[root@ds-2 ~]# docker node ls
ID                            HOSTNAME                         STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
fo6paibeunoo9sulaiqu3iuqu     ds-1.dev.shanemcc.net            Ready               Active              Leader              18.09.1
phoy6ju7ait1aew7yifiemaob *   ds-2.dev.shanemcc.net            Ready               Active              Reachable           18.09.1
eexahtaiza1saibeishu8quie     ds-3.dev.shanemcc.net            Ready               Active              Reachable           18.09.1
[root@ds-2 ~]#

Even though we will be running Ceph within Docker containers, I’ve also installed the ceph tools on the nodes for convenience:

rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -Uvh https://download.ceph.com/rpm-luminous/el7/noarch/ceph-release-1-1.el7.noarch.rpm
yum install ceph

And all 3 host nodes have SSH keys generated (ssh-keygen -t ed25519) and set up within /root/.ssh/authorized_keys on each node so that I can SSH between them.
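
For completeness, a rough sketch of that key exchange, assuming ssh-copy-id is available (the ds-* hostnames are the ones that appear later in this post):

# on ds-1 (repeat the equivalent on ds-2 and ds-3):
ssh-keygen -t ed25519
ssh-copy-id root@ds-2.dev.shanemcc.net
ssh-copy-id root@ds-3.dev.shanemcc.net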

Now we can start setting up ceph.

Remove any old ceph that may be lying around:

rm -Rfv /etc/ceph
rm -Rfv /var/lib/ceph
mkdir /etc/ceph
mkdir /var/lib/ceph
chcon -Rt svirt_sandbox_file_t /etc/ceph
chcon -Rt svirt_sandbox_file_t /var/lib/ceph

On the first node, initialise a ceph monitor:

docker run -d --net=host --restart always -v /etc/ceph:/etc/ceph -v /var/lib/ceph/:/var/lib/ceph/ \
-e MON_IP=$(ip addr show dev eth0 | grep "inet " | head -n 1 | awk '{print $2}' | awk -F/ '{print $1}') \
-e CEPH_PUBLIC_NETWORK=$(ip route show dev eth0 | grep link | grep -v 169.254.0.0 | awk '{print $1}') \
--name="ceph-mon" ceph/daemon mon

And then copy the generated data over to the other 2 nodes:

scp -r /etc/ceph/* ds-2:/etc/ceph/
scp -r /etc/ceph/* ds-3:/etc/ceph/

And start the monitor on those nodes too, using the same command as before.

Now, on all 3 nodes we can start a manager:

docker run -d --net=host --privileged=true --pid=host -v /etc/ceph:/etc/ceph -v /var/lib/ceph/:/var/lib/ceph/ --name="ceph-mgr" --restart=always ceph/daemon mgr

And create the OSDs on all 3 nodes. (This will remove all the data from the disk provided (/dev/sdb), so be careful. The disk is given twice here: once to zap it and once to create the OSD on it.)

ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring
docker run --rm --privileged=true -v /dev/:/dev/ -e OSD_DEVICE=/dev/sdb ceph/daemon zap_device
docker run -d --net=host --privileged=true --pid=host -v /etc/ceph:/etc/ceph -v /var/lib/ceph/:/var/lib/ceph/ -v /dev/:/dev/ -e OSD_DEVICE=/dev/sdb -e OSD_TYPE=disk --name="ceph-osd" --restart=always ceph/daemon osd

Once the OSDs are finished initialising on each node (watch docker logs -f ceph-osd), we can create the MDSs on each node:

docker run -d --net=host --name ceph-mds --restart always -v /var/lib/ceph/:/var/lib/ceph/ -v /etc/ceph:/etc/ceph -e CEPHFS_CREATE=1 -e CEPHFS_DATA_POOL_PG=128 -e CEPHFS_METADATA_POOL_PG=128 ceph/daemon mds

And then once these are created, let’s tell Ceph how many copies of things to keep:

ceph osd pool set cephfs_data size 2
ceph osd pool set cephfs_metadata size 2

And there’s no point scrubbing on VM disks:

ceph osd set noscrub
ceph osd set nodeep-scrub
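
At this point it’s worth sanity-checking the cluster before going any further. These are standard ceph commands, nothing specific to this setup:

# run on any node:
ceph -s                  # overall health; should show 3 mons and 3 osds
ceph osd tree            # confirm all 3 OSDs are up and in
ceph osd pool ls detail  # confirm the cephfs pools and their size setting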

Now, we have a 3-node ceph cluster set up and we can mount it into the hosts. Each host will mount from itself:

mkdir /var/data
ceph auth get-or-create client.dockerswarm osd 'allow rw' mon 'allow r' mds 'allow' > /etc/ceph/keyring.dockerswarm
echo "$(hostname -s):6789:/      /var/data/      ceph      name=dockerswarm,secret=$(ceph-authtool /etc/ceph/keyring.dockerswarm -p -n client.dockerswarm),noatime,_netdev,context=system_u:object_r:svirt_sandbox_file_t:s0 0 2" >> /etc/fstab
mount -a

All 3 hosts should now have a /var/data directory and files that are created on one should appear automatically on the others.
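
A quick way to convince yourself of that (purely illustrative):

# on ds-1:
touch /var/data/hello
# on ds-2 or ds-3, the file should show up straight away:
ls -l /var/data/hello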

For my use-case so far, this is sufficient. I’m using files/directories within /var/data as bind mounts (not volumes) in my docker containers currently and it seems to be working. I’m planning on playing about more with this in the coming weeks to see how well it works with more real-world usage.
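
As an example of what I mean by that, a service created along these lines (the service name, image and paths are just placeholders, not from my actual setup) can be scheduled on any of the 3 nodes and still see the same files:

mkdir -p /var/data/web   # created once, visible on all nodes via cephfs
docker service create --name web --replicas 2 \
  --mount type=bind,source=/var/data/web,target=/usr/share/nginx/html \
  nginx:alpine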

Advent of Code Benchmarking

For a few years now I’ve been enjoying Eric Wastl’s Advent of Code. For those unaware, each year since 2015 Advent of Code has provided a 2-part coding challenge every day from December 1st to December 25th.

In previous years, Chris and I have been fairly informally trying to see who was able to produce the fastest code (me in PHP, Chris in Python). To assist with this, in the final week of last year we both made our repos run in Docker and produce time output for each day.

This allowed us to run each other’s code locally to compare fairly without needing to install the other’s dev environment, and made the testing a bit fairer as it was no longer dependent on who had the faster CPU when running their own solution. For the rest of the year this was fine and we carried on as normal. As we got to the end I remarked that it would be fun to have a web interface that automatically dealt with it and showed us the scores, but there was obviously no point in doing that once the year was over. Maybe in a future year…

Fast forward to this year. Chris, ChrisN and I coded up our Day 1 solutions as normal, and then some other friends started doing it for the first time. I remembered my plans from the previous year and suggested everyone should also docker-ify their repos… and so they agreed.

Now, I’m not one who is lacking in side-projects, but with everyone making their code able to run with a reasonably-similar docker interface, and the first couple of days not yet fully scratching the coding-itch, I set about writing what I now call AoCBench.

The idea was simple:

  • Check out (or update) code
  • Build docker container
  • Run each day multiple times and store time output
  • Show fastest time for each person/day in a table.

And the initial version did exactly that. So I fired up an LXC container on one of my servers and set it off to start running benchmarks and things were good.
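
To give an idea of what that first version amounted to, the core of it was essentially a loop like this (a simplified sketch, not the actual AoCBench code; the repo layout and paths are made up):

for repo in repos/*; do
  name=$(basename "$repo")
  (cd "$repo" && git pull --quiet)
  docker build -q -t "aocbench-$name" "$repo"
  for day in $(seq 1 25); do
    for run in $(seq 1 10); do
      # each repo's container prints its own timing output for the given day
      docker run --rm "aocbench-$name" "$day" >> "results/$name-day$day.log"
    done
  done
done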

AoCBench Main Page

Pretty quickly the first problem became obvious: it was running everything every time, which, as I added more people, really slowed things down. So the next stage was to make it only run when code changed.

In the initial version, the fastest time from 10 runs was the time that was used for the benchmark. But some solutions had wildly-varying times and sometimes “got lucky” with a fast run, which unfairly skewed the results. We tried using mean times. Then we tried running the benchmarks more often to see if this resulted in more consistent times. I even tried making it ignore the 5 slowest times and then taking the mean of the rest. These still didn’t really produce a fair result, as there was still a lot of variance. Eventually we all agreed that the median time was probably the fairest given the variance in some of the solutions.
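
For reference, the median here is just the middle value of the sorted run times; something along these lines would compute it (a throwaway sketch, not how AoCBench actually does it):

# one run time (in seconds) per line in times.txt
sort -n times.txt | awk '{ a[NR] = $1 }
  END { m = (NR % 2) ? a[(NR + 1) / 2] : (a[NR / 2] + a[NR / 2 + 1]) / 2; print m }'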

But this irked me somewhat; there was no obvious reason some of the solutions should vary so much.

It seemed like it was mostly the PHP solutions that had the variance; even after switching my container to Alpine (which did result in quite a speed improvement over the non-Alpine one) I was still seeing variance.

I was beginning to wonder if the host node was too busy. It didn’t look too busy, but it seemed like the only explanation. Moving the benchmarking container to a different host node (that was otherwise empty) seemed to confirm this somewhat. After doing that (and moving it back) I looked some more at the host node. I found an errant fail2ban process sitting using 200% CPU, and killing this did make some improvement (Though the node has 24 cores, so this shouldn’t really have mattered too much. If it wasn’t for AoCBench I wouldn’t even have noticed that!). But the variance remained, so I just let it be. Somewhat irked, but oh well.

AoCBench Matrix Page

We spent the next few evenings all optimising our solutions some more, vying for the fastest code. To level the playing field some more, I even started feeding everyone the same input, to counter the fact that some inputs were just fundamentally quicker than others. After ensuring that everyone was using the same input, the next step was to ensure that everyone gave the right answer, and removing them from the table if they didn’t (this caught out a few “optimisations” that optimised away the right answer by mistake!). I also added support for running each solution against everyone else’s input files and displaying this in a grid, to ensure that everyone’s code worked for all inputs, not just their own (or the normalised input that was being fed to them all).

After all this, the variance problem was still nagging away. One day in particular resulted in huge variances in some solutions (from less than 1s up to more than 15s at times). Something wasn’t right.

I’d already ruled out CPU usage as being at fault because the CPU just wasn’t being taxed. I’d added a sleep delay between each run of the code in case the host node scheduler was penalising us for using a lot of CPU in quick succession. I’d even tried running all the containers from a tmpfs RAM disk in case the delay was being caused by reading in the input data, but nothing seemed to help.

With my own solution, I was able to reproduce the variance on my own local machine, so it wasn’t just the chosen host node at fault. But why did it work so much better with no variance on the idle host node? And what made the code for this day so much worse than the surrounding days?

I began to wonder if it was memory related. Neither the host node nor my local machine was particularly starved for memory, but I’d ruled out CPU and disk I/O at this stage. I changed my code for Day 3 to use SplFixedArray and pre-allocated the whole array at start up before then interacting with it. And suddenly the variance was all but gone. The new solution was slow-as-heck comparatively, but there was no more variance!

So now that I knew what the problem was (Presumably the memory on the busy host node and my local machine is quite fragmented) I wondered how to fix it. Pre-allocating memory in code wasn’t an option with PHP so I couldn’t do that, and I also couldn’t pre-reserve a block of memory within each Docker container before running the solutions. But I could change the benchmarking container from running as an LXC Container to a full KVM VM. That would give me a reserved block of memory that wasn’t being fragmented by the host node and the other containers. Would this solve the problem?

Yes. It did. The extreme-variance went away entirely, without needing any changes to any code. I re-ran all the benchmarks for every person on every day and the levels of variance were within acceptable range for each one.

AoCBench Podium Mode

The next major change came about after Chris got annoyed by Python (even under PyPy) being unable to compete with the speed improvements that PHP7 has made, and switched to using Nim. Suddenly most of the competition was gone. The compiled code wins every time. Every. Time. (Obviously.) So Podium Mode was added to allow for competing for the top 3 spaces on each day.

Finally, after a lot of confusion around implementations for Day 7, and how some inputs behaved differently than others in different ways in different code, the input matrix code was extended to allow feeding custom inputs to solutions, to weed out bad assumptions and see how they respond to input that isn’t quite so carefully crafted.

If anyone wants to follow along, I have AoCBench running here, and I have also documented here the requirements for making a repo AoCBench compatible. The code for AoCBench is fully open source under the MIT License and available on GitHub.

Happy Advent of Code all!

DNS Hosting - Part 3: Putting it all together

In my previous posts I discussed the history leading up to, and the eventual rewrite of my DNS hosting solution. So this post will (finally) talk briefly about how it all runs in production on MyDNSHost.

Shortly before the whole rewrite I’d found myself playing around a bit with Docker for another project, so I decided early on that I was going to make use of Docker for the main bulk of the setup to allow me to not need to worry about incompatibilities between different parts of the stack that needed different versions of things, and to update different bits at different times.

The system is split up into a number of containers (and could probably be split up into more).

To start with, I had the following containers:

  • API Container - Deals with all the backend interactions
  • WEB Container - Runs the main frontend that people see. Interacts with the API Container to actually do anything.
  • DB Container - Holds all the data used by the API
  • BIND Container - Runs an instance of bind to handle DNSSEC signing and distributing DNS Zones to the public-facing servers.
  • CRON Container - This container runs a bunch of maintenance scripts to keep things tidy and initiate DNSSEC signing etc.

The tasks in the CRON container could probably be split up more, but for now I’m ok with having them in 1 container.

This worked well; however, I found some annoyances where redeploying the API or WEB containers caused me to be logged out from the frontend, so another container was soon added:

  • MEMCACHED Container - Stores session data from the API and FRONTEND containers to allow for horizontal scaling and restarting of containers.

In the first instance, the API Container was also responsible for interactions with the BIND container. It would generate zone files on-demand when users made changes, and then poke BIND to load them. However this was eventually split out further, and another 3 containers were added:

  • GEARMAN Container - Runs an instance of Gearman for the API container to push jobs to.
  • REDIS Container - Holds the job data for GEARMAN.
  • WORKER Container - Runs a bunch of worker scripts to do the tasks the API Container previously did for generating/updating zone files and pushing to BIND.

Splitting these tasks out into the WORKER container made the frontend feel faster as it no longer needed to wait for things to happen and could just fire the jobs off into GEARMAN and let it worry about them. I also get some extra logging from this as the scripts can be a lot more verbose. In addition, if a worker can’t handle a job it can be rescheduled to try again and the workers can (in theory) be scaled out horizontally a bit more if needed.

There were some initial challenges with this - the main one being around how the database interaction worked, as the workers would fail after periods of inactivity and then get auto-restarted and work immediately. This turned out to be mainly due to how I’d pulled out the code from the API into the workers. Whereas the scripts in the API run using the traditional method where the script gets called, does its thing (including setup), then dies, the WORKER scripts were long-running processes, so the DB connections were eventually timing out and the code was not designed to handle this.

Finally, more recently I added statistical information about domains and servers, which required another 2 containers:

  • INFLUXDB Container - Runs InfluxDB to store time-series data and provide a nice way to query it for graphing.
  • CHRONOGRAF Container - Runs Chronograf to allow me to easily pull out data from INFLUXDB for testing.

That’s quite a few containers to manage. To actually manage running them, I primarily make use of Docker-Compose (to set up the various networks, volumes, containers, etc.). This works well for the most part, but there are a few limitations around how it deals with restarting containers that cause fairly substantial downtime when upgrading WEB or API. To get around this I wrote a small bit of orchestration scripting that uses docker-compose to scale the WEB and API containers up to 2 (letting docker-compose do the actual creation of the new container), then manually kills off the older container and then scales them back down to 1. This seems to behave well.
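
The rough shape of that dance, if you wanted to replicate it, is something like the following (a simplified sketch of the idea rather than my actual orchestration script; exact flags vary between docker-compose versions):

docker-compose up -d --no-recreate --scale api=2 api   # bring up a second, newer api container
OLD=$(docker ps --filter name=api --format '{{.ID}} {{.CreatedAt}}' | sort -k2 | head -n1 | cut -d' ' -f1)
docker rm -f "$OLD"                                    # kill the older of the two
docker-compose up -d --scale api=1 api                 # settle back down to a single container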

So with all these containers hanging around, I needed a way to deal with exposing them to the web, and automating the process of ensuring they had SSL certificates (using Let’s Encrypt). Fortunately Chris Smith has already solved this problem for the most part in a way that worked for what I needed. In a blog post he describes a set of docker containers he created that automatically run nginx to proxy towards other internal containers and obtain appropriate SSL certificates using DNS challenges. For the most part all that was required was running this and adding some labels to my existing containers, and that was that…

Except this didn’t quite work initially, as I couldn’t do the required DNS challenges unless I hosted my DNS somewhere else, so I ended up adding support for HTTP Challenges and then I was able to use this without needing to host DNS elsewhere. (And in return Chris has added support for using MyDNSHost for the DNS Challenges, so it’s a win-win). My orchestration script also handles setting up and running the automatic nginx proxy containers.

This brings me to the public-facing DNS Servers. These are currently the only bit not running in Docker (though they could). These run on some standard Ubuntu 16.04 VMs with a small setup script that installs bind and an extra service to handle automatically adding/removing zones based on a special “catalog zone” due to the versions of bind currently in use not yet supporting them natively. The transferring of zones between the frontend and the public servers is done using standard DNS Notify and AXFR. DNSSEC is handled by the backend server pre-signing the zones before sending them to the public servers, which never see the signing keys.
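
If you want to check a setup like this from the outside, standard dig queries against one of the public servers are enough (hostnames here are placeholders):

dig @ns1.example.org example.com SOA +short      # serial should match what the backend generated
dig @ns1.example.org example.com AXFR            # should only work from hosts allowed to transfer
dig @ns1.example.org example.com DNSKEY +dnssec  # RRSIGs present confirms the pre-signed zone arrived intact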

By splitting jobs up this way, in theory it should be possible in future (if needed) to move away from BIND to an alternative (such as PowerDNS).

As well as the public service that I’m running, all of the code involved (all the containers and all the orchestration) is available on GitHub under the MIT License. Documentation is a little light (read: pretty non-existent), but it’s all there for anyone else to use/improve/etc.

DNS Hosting - Part 2: The rewrite

In my previous post about DNS Hosting I discussed the history leading up to when I decided I needed a better personal DNS hosting solution. I decided to code one myself to replace what I had been using previously.

I decided there were a few things that were needed:

  • Fully-Featured API
    • I wanted full control over the zone data programmatically, everything should be possible via the API.
    • The API should be fully documented.
  • Fully-Featured default web interface.
    • There should be a web interface that fully implements the API. Just because there is an API shouldn’t mean it has to be used to get full functionality.
    • There should exist nothing that only the default web ui can do that can’t be done via the API as well.
  • Multi-User support
    • I also host some DNS for people who aren’t me, they should be able to manage their own DNS.
  • Domains should be shareable between users
    • Every user should be able to have their own account
    • User accounts should be able to be granted access to domains that they need to be able to access
      • Different users should have different access levels:
        • Some just need to see the zone data
        • Some need to be able to edit it
        • Some need to be able to grant other users access
  • Backend Agnostic
    • The authoritative data for the zones should be stored independently from the software used to serve it to allow changing it easily in future

These were the basic criteria and what I started off with when I designed MyDNSHost.

MyDNSHost Homepage

Now that I had the basic criteria, I started off by coming up with a basic database structure for storing the data that I thought would suit my plans, and a basic framework for the API backend so that I could start creating some initial API endpoints. With this in place I was able to create the database structure, and pre-seed it with some test data. This would allow me to test the API as I created it.

I use Chrome, so for testing the API I use the Restlet Client extension.

Armed with a database structure, a basic API framework, and some test data - I was ready to code!

Except I wasn’t.

Before I could start properly coding the API I needed to think of what endpoints I wanted, and how the interactions would work. I wanted the API to make sense, so wanted to get this all planned first so that I knew what I was aiming for.

I decided pretty early on that I was going to version the API - that way if I messed it all up I could redo it and not need to worry about backwards compatibility, so for the time being, everything would exist under the /1.0/ directory. I came up with the following basic idea for endpoints:

MyDNSHost LoggedIn Homepage
  • Domains
    • GET /domains - List domains the current user has access to
    • GET /domains/<domain> - Get information about <domain>
    • POST /domains/<domain> - Update domain
    • DELETE /domains/<domain> - Delete domain
    • GET /domains/<domain>/records - Get records for <domain>
    • POST /domains/<domain>/records - Update records for <domain>
    • DELETE /domains/<domain>/records - Delete records for <domain>
    • GET /domains/<domain>/records/<recordid> - Get specific record for <domain>
    • POST /domains/<domain>/records/<recordid> - Update specific record for <domain>
    • DELETE /domains/<domain>/records/<recordid> - Delete specific record for <domain>
  • Users
    • GET /users - Get a list of users (non-admin users should only see themselves)
    • GET /users/(<userid>|self) - Get information about a specific user (or the current user)
    • POST /users/(<userid>|self) - Update information about a specific user (or the current user)
    • DELETE /users/<userid> - Delete a specific user (or the current user)
    • GET /users/(<userid>|self)/domains - Get a list of domains for the given user (or the current user)
  • General
    • GET /ping - Check that the API is responding
    • GET /version - Get version info for the API
    • GET /userdata - Get information about the current login (user, access-level, etc)

This looked sane so I set about with the actual coding!

Rather than messing around with OAuth tokens and the like, I decided that every request to the API should be authenticated - initially using basic auth with a username/password, and eventually also using API keys. This made things fairly simple whilst testing, and made interacting with the API via scripts quite straightforward (no need to grab a token first and then do things).
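
That means a quick test from the command line looks roughly like this (the hostname is a placeholder, and the API-key header names are an assumption rather than something documented here):

# basic auth:
curl -u 'user@example.com:password' https://dns.example.org/1.0/domains
# or an API key, sent as headers instead of a password:
curl -H 'X-API-User: user@example.com' -H 'X-API-Key: abc123' https://dns.example.org/1.0/domains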

The initial implementation of the API with domain/user editing functionality and API Key support was completed within a day, and then followed a week of evenings tweaking and adding functionality that would be needed later - such as internal “hook” points for when certain actions happened (changing records etc) so that I could add code to actually push these changes to a DNS Server. As I was developing the API, I also made sure to document it using API Blueprint and Aglio - it was easier to keep it up to date as I went, than to write it all after-the-fact.

Once I was happy with the basic API functionality and knew from my (manual) testing that it functioned as desired, I set about on the Web UI. I knew I was going to use Bootstrap for this because I am very much not a UI person and bootstrap helps make my stuff look less awful.

MyDNSHost Records View

Now, I should point out here, I’m not a developer for my day job, most of what I write I write for myself to “scratch an itch” so to speak. I don’t keep up with all the latest frameworks and best practices and all that. I only recently in the last year switched away from hand-managing project dependencies in Java to using gradle and letting it do it for me.

So for the web UI I decided to experiment and try and do things “properly”. I decided to use composer for dependency management for the first time and then used a 3rd party request-router Bramus/Router for handling how pages are loaded and used Twig for templating. (At this point, the API code was all hand-coded with no 3rd party dependencies. However my experiment with the front end was successful and the API Code has since changed to also make use of composer and some 3rd party dependencies for some functionality.)

The UI was much quicker to get to an initial usable state - as all the heavy lifting was already handled by the backend API code, the UI just had to display this nicely.

I then spent a few more evenings and weekends fleshing things out a bit more, and adding in things that I’d missed in my initial design and implementations. I also wrote some of the internal “hooks” that were needed to make the API able to interact with BIND and PowerDNS for actually serving DNS Data.

As this went on, whilst the API Layout I planned stayed mostly static except with a bunch more routes added, I did end up revisiting some of my initial decisions:

  • I moved from a level-based user-access to the system for separating users and admins, to an entirely role-based system.
    • Different users can be granted access to do different things (eg manage users, impersonate users, manage all domains, create domains, etc)
  • I made domains entirely user-agnostic
    • Initially each domain had an “owner” user, but this was changed so that ownership over a domain is treated the same as any other level of access on the domain.
    • This means that domains can technically be owned by multiple people (Though in normal practice an “owner” can’t add another user as an “owner” - only users with “Manage all domains” permission can add users at the “owner” level)
    • This also allows domain-level API Keys that can be used to only make changes to a certain domain not all domains a user has access to.

Eventually I had a UI and API system that seemed to do what I needed and I could look at actually putting this live and starting to use it (which I’ll talk about in the next post).

After the system went live I also added a few more features that were requested by users that weren’t part of my initial requirements, such as:

  • TOTP 2FA Support
    • With “remember this device” option rather than needing to enter a code every time you log in.
  • DNSSEC Support
  • EMAIL Notifications when certain important actions occur
    • User API Key added/changed
    • User 2FA Key added/changed
  • WebHooks when ever zone data is changed
  • Ability to use your own servers for hosting the zone data not mine
    • The live system automatically allows AXFR for a zone from any server listed as an NS on the domain and sends appropriate notifies.
  • Domain Statistics (such as queries per server, per record type, per domain etc)
  • IDN Support
  • Ability to import and export raw BIND data.
    • This makes it easier for people to move to/from the system without needing any interaction with admin users or needing to write any code to deal with zone files.
    • Ability to import Cloudflare-style zone exports.
      • These look like BIND zone files but are slightly invalid; this lets users import straight from Cloudflare without needing to manually fix up the zones.
  • Support for “Exotic” record types: CAA, SSHFP, TLSA etc.
  • Don’t allow domains to be added to accounts if they are sub-domains of an already-known about domain.
    • As a result of this, also don’t allow people to add obviously-invalid domains or whole TLDs etc.

GMail – apply labels to email from group members – Redux

A while ago I posted a python script that allowed automatically adding labels to GMail messages based on contact groups.

Unfortunately, a side effect of this script was that Google occasionally would lock an account out for “suspicious activity”, and for this reason I stopped using the script.

However recently I looked at Google Apps Scripts to see if this would allow me to recreate this using Google-Approved APIs, and the good news is, yes it does.

The following script implements the same behaviour as the old python script. It checks every thread from the past 2 dates (so today and yesterday), and for each message in the thread it gets the list of groups the sender is in (if the sender is a contact and in any groups), then checks to see if there are labels that match the same names; if so, it applies them to the message.

To get this running, create a new project on the Google Apps script page, then paste the code in.

Modify scheduledProcessInbox and processInboxAll to include a label prefix if desired (e.g. contacts/) and then enable the desired schedule (click on the clock icon in the toolbar). Once this has been scheduled you can run an initial pass over the inbox using processInboxAll() - however this is limited to the last 500 threads.

The code can now be found here on GitHub.

Any questions/comments/bugs please leave them here or on github.

IPv6 with Endian Community Firewall (EFW) 2.4.0

First post in over a year! Oops.

For a while now, my home ADSL provider (EntaNET) has provided me with an IPv6 allocation, but I’ve never really used it (it’s been on my to-do list for some time), primarily due to the fact that it is unsupported by Endian, which I use for my home router/firewall.

However, the other day, after being asked about IPv6 at my day job, I decided I wanted to get this working, and to document it here in case it can assist anyone else in future. (I also finally got round to completing the Hurricane Electric IPv6 Certification up to Sage level.)

There’s a few things worth noting before we continue here.

  1. I use a Draytek Vigor 120 for my adsl modem - this is a PPPoA to PPPoE bridge. This means that my Endian box uses PPPoE to get its Internet connection, and directly receives an IPv4 address via the PPP session. There is no “PPP Half-Bridge” tricks here (such as where Modem does authentication, then DHCPs the address to Endian).
  2. Due to Endian lacking support for IPv6 you will need to use SSH to configure this, and any Endian upgrades will probably reverse a fair chunk of it. (Also, some reconfigurations may also undo things) - so with this in mind the rest of this guide assumes you are familiar with SSH and have successfully logged in as root to the Endian box (SSH can be enabled under the “System” section and “SSH Access”).
  3. Due to previous requirements, my Endian server is not “pure” in that I have additional packages installed that made this easier. Notably, a complete build environment. This won’t be needed here.
  4. This was all done without writing it down, so this documentation is based on my recollection and attempts at replicating various parts on a VirtualBox VM (which can’t do PPPoE…). If I’ve missed anything, please let me know in the comments.
  5. This was done with EFW 2.4.0 and may not work in the latest 2.5.1 version.
  6. I have only had this running for a few days, so there may be some unforeseen issues with this.

With this in mind, we continue to the actual important stuff!

The way EntaNET does IPv6, with a default setup you will get an IP address allocated over PPP that is in a /64, but you also get a /56 which is routed to you. We will use a /64 from the /56 as the address range for the LAN.

For the purposes of this, we are going to assume the following:

  • 2001:DB8:4D51:AA00::/56 - /56 range allocated to us by the ISP
  • 2001:DB8:4D51:AAFF::/64 - /64 range we are going to use internally.
  • 2001:DB8:4D51:FFFF::/64 - /64 range advertised across the PPP session.

The first thing to do, is to have Endian actually ask for IPv6 from the upstream provider at PPP time. This is easy:

echo "+ipv6" >> /etc/ppp/peers/defaults/pppd-pppoe

Assuming that the ADSL provider and modem both support IPv6, and you have been assigned an allocation you will see an IPv6 address attached to ppp0 once your session is active. This is from the PPP /64 and is not part of your /56 allocation.
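
A quick way to check this once the session is up (standard iproute2 commands):

ip -6 addr show dev ppp0     # should show a global 2001:... address
ip -6 route show dev ppp0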

So now that we know IPv6 works, we can disconnect the PPP session.

Now, we actually want to be able to do something with our allocation, so we will want to announce it to our network.

For this, we will need radvd, which will send the required RA packets out to the network. As Endian 2.4.0 is built on Fedora Core 3, we can use the existing package for this; these can currently be found here.

Unfortunately, Endian doesn’t quite provide a complete environment, so we will need to force the install to ignore dependencies (specifically, chkconfig and /sbin/service are missing).

rpm --nodeps -Uvh http://archives.fedoraproject.org/pub/archive/fedora/linux/core/3/i386/os/Fedora/RPMS/radvd-0.7.2-9.i386.rpm

We can now configure this to announce our prefix by editing /etc/radvd.conf to something like this:

interface br0
{
        AdvSendAdvert on;
        MinRtrAdvInterval 3;
        MaxRtrAdvInterval 10;
        AdvHomeAgentFlag off;
        prefix 2001:DB8:4D51:AAFF::/64
        {
                AdvOnLink on;
                AdvAutonomous on;
                AdvRouterAddr off;
        };
};

To trick radvd into starting, we also need to create a dummy file that exists in real RedHat-esque distros that Endian doesn’t provide:

echo "NETWORKING_IPV6=yes" >> /etc/sysconfig/network

We also need to enable IPv6 forwarding:

sysctl net.ipv6.conf.all.forwarding=1

and we should now be able to start radvd:

/etc/init.d/radvd start

Now, if we bring the ppp connection back up, you’ll notice that the ppp0 interface no longer gets allocated a routable IPv6 address from the PPP /64. This is because with ipv6 forwarding turned on, this host is now acting as an ipv6 router, and ipv6 routers ignore RA packets.

This isn’t a problem.

At this point, your LAN boxes will have IPv6 addresses, but the LAN boxes won’t be able to communicate with the internet yet.

To fix this, we need to tell the Endian box how to route traffic, specifically to both our LAN, and the default route:

route --inet6 add 2001:DB8:4D51:AAFF::/64 dev br0
route --inet6 add default dev ppp0

With this however, the Endian box itself won’t have IPv6 connectivity. If that is something that is required, we can do something like this instead:

ip -6 addr add 2001:DB8:4D51:AAFF::/64 dev br0
route --inet6 add default dev ppp0

But remember that any time Endian makes any changes to the network configuration, this will be lost.

Endian’s version of iputils is missing ping6 and traceroute6, but we can install these as follows:

cd /
curl http://archives.fedoraproject.org/pub/archive/fedora/linux/core/3/i386/os/Fedora/RPMS/iputils-20020927-16.i386.rpm > iputils.rpm
rpm2cpio iputils.rpm | cpio -ivd '*6'
rm iputils.rpm

This will give you ping6 etc to allow you to verify everything so far.
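
For example, something like this from the Endian box itself (any dual-stacked host will do as a target):

ping6 -c 3 ipv6.google.com
traceroute6 ipv6.google.com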

The next thing to do then is firewalling. This is done with ip6tables, which again Endian doesn’t have; however, we can install it using the iptables-ipv6 package available in the RPM repo above:

rpm -Uvh http://archives.fedoraproject.org/pub/archive/fedora/linux/core/3/i386/os/Fedora/RPMS/iptables-ipv6-1.2.11-3.1.i386.rpm

Now you’ll be able to create firewall rules for your IPv6 connectivity. It’s worth noting though that this version of ip6tables doesn’t support some modules (comment and state, at least, from what I’ve seen so far). If you want these modules, then you’ll need to compile a newer version of iptables. (I’ve got a follow-up post with a guide for this.)

To help with this, I wrote a set of scripts for parsing "formatted-english" rules files into iptables rules, so let’s install that and configure some rules.

cd /root
wget https://github.com/ShaneMcC/Firewall-Rules/zipball/master -O fwrules.zip
unzip fwrules.zip
mv ShaneMcC-Firewall-Rules-* fwrules
cp fwrules/example.rules fwrules/rules.rules
chmod a+x fwrules/run.sh

Looking at fwrules/rules.rules should give you a good guide on how the rules work, and you can edit these to your needs.

Once you are happy, the rules can be installed by running:

./fwrules/run.sh

The last thing then is to make this all work automatically.

In theory we should be able to just drop some files into the subfolders of /etc/uplinksdaemon, or into ifup.d or ifdown.d folders inside /var/efw/uplinks/main/, but neither of these approaches works. So instead we will make a minor modification to /usr/lib/uplinks/generic/hookery.sh and then have a script of our own do it.

Firstly, the minor change and an empty file:

sed -ri 's#^(.*log_done "Notify uplinks.*)$#\1\n    /sbin/uplinkchanged.sh "$@" >/dev/null 2>&1#' /usr/lib/uplinks/generic/hookery.sh
touch /sbin/uplinkchanged.sh
chmod a+x /sbin/uplinkchanged.sh

Now we can put the following into /sbin/uplinkchanged.sh:

#!/bin/bash

OURPREFIX="2001:DB8:4D51:AAFF::/64"

UPLINK=${1}
STATUS=${2}

if [ "${STATUS}" = "" ]; then
	exit 0;
fi;

PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin

if [ "${UPLINK}" = "main" ]; then
	if [ "${STATUS}" = "OK" ]; then
		sysctl net.ipv6.conf.all.forwarding=1
		/etc/init.d/radvd start
		route --inet6 add ${OURPREFIX} dev br0 2>&1
		route --inet6 add default dev ppp0
		/root/fwrules/run.sh
	elif [ "${STATUS}" = "FAILED" ]; then
		route --inet6 del default dev ppp0

		ip6tables -F
		ip6tables -X
		ip6tables -P INPUT ACCEPT
		ip6tables -P OUTPUT ACCEPT
		ip6tables -P FORWARD ACCEPT
	fi;
fi;

Now when the state of the main uplink changes, the relevant ipv6-related commands will be run to ensure that connectivity remains.

And that’s it - you should now have native IPv6 connectivity combined with Endian. Feel free to leave any comments you have regarding this.

Ident Server

I recently encountered a problem on a server that I manage whereby the oidentd server didn’t seem to be working.

Manual tests worked, but connecting to IRC Servers didn’t.

I tried replacing oidentd with ident2 and had the same problem.

After switching back, and a bit of debugging later, it appeared that the problem was that the IRC servers were expecting spaces in the ident reply, whereas oidentd wasn’t giving them.
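
For illustration, you can poke an ident server by hand with netcat (the port numbers here are made up; the general reply format is per RFC 1413):

echo "51234, 6667" | nc localhost 113
# a reply with spaces, which the IRC servers were happy with:
#   51234 , 6667 : USERID : UNIX : shane
# versus the unspaced form that was being rejected:
#   51234,6667:USERID:UNIX:shane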

I then quickly threw together an xinetd-powered ident server with support for spoofing.

First, the xinetd config:

service ident
{
	disable = no
	socket_type = stream
	protocol = tcp
	wait = no
	user = root
	server = /root/identServer.php
	nice = 10
}

Unfortunately, yes, this does need to run as root, otherwise it is unable to see which process is listening on a socket. In future I plan to change it so it can run without needing to be root (by using sudo for the netstat part).
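
The root-only bit is just the lookup that maps a connection back to its owning user/process, which is roughly this (a sketch; the port is made up):

netstat -tnpe | grep ':51234 '
# with that single command wrapped in sudo, the rest of the script could run unprivileged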

Now for the code itself:

Edit: The code is now available on github: ShaneMcC/phpident

I welcome any comments about this, or any improvements and hope that it will be useful for someone else.

GitWeb Hacking.

Recently I set up gitweb on one of my servers to allow a web-based frontend to any git projects which the users of the server place in their ~/git/ directory.

After playing about with it, I noticed that it allows a README.html file to be placed in the git config directory so that extra info can be shown on the summary view. I managed to get it to pull the README.html file from the actual repository itself, and not the config directory, thus allowing the README.html to be versioned along with everything else, and not requiring the user to edit it on the server - they can just edit it locally and push it.

This is a simple change in /usr/lib/cgi-bin/gitweb.cgi:

From (line 3916 or so):

    if (-s "$projectroot/$project/README.html") {
        if (open my $fd, "$projectroot/$project/README.html") {
            print "<div class=\"title\">readme</div>\n" .
                  "<div class=\"readme\">\n";
            print $_ while (<$fd>);
            print "\n</div>\n"; # class="readme"
            close $fd;
        }
    }

To:

    if (my $readme_base = $hash_base || git_get_head_hash($project)) {
        if (my $readme_hash = git_get_hash_by_path($readme_base, "README.html", "blob")) {
            if (open my $fd, "-|", git_cmd(), "cat-file", "blob", $readme_hash) {
                print "<div class=\"title\">readme</div>\n";
                print "<div class=\"readme\">\n";

                print <$fd>;
                close $fd;
                print "\n</div>\n";
            }
        }
    }

I also added a second, slightly hacky change that uses Google’s code prettifier when displaying a file, and makes the line numbers separate from the code so they aren’t included when you copy the code.

From (line 2476 or so):

print "</head>\n" .
              "<body>\n";

To:

print qq(<link href="http://google-code-prettify.googlecode.com/svn/trunk/src/prettify.css" type="text/css" rel="stylesheet" />\n);
        print qq(<script src="http://google-code-prettify.googlecode.com/svn/trunk/src/prettify.js" type="text/javascript"></script>\n);

        print "</head>\n" .
              "<body onload=\"prettyPrint()\">\n";

and

From (line 4351 or so):

while (my $line = <$fd>) {
        chomp $line;
        $nr++;
        $line = untabify($line);
        printf "<div class=\"pre\"><a id=\"l%i\" href=\"#l%i\" class=\"linenr\">%4i</a> %s</div>\n",
               $nr, $nr, $nr, esc_html($line, -nbsp=>1);
    }

To:

print "<table><tr><td class=\"numbers\"><pre>";
    while (my $line = <$fd>) {
        chomp $line;
        $nr++;
        printf "<a id=\"l%i\" href=\"#l%i\" class=\"linenr\">%4i</a>\n", $nr, $nr, $nr;
    }
    print "</pre></td>";
    open my $fd2, "-|", git_cmd(), "cat-file", "blob", $hash;
    print "<td class=\"lines\"><pre class=\"prettyprints\">";
    while (my $line = <$fd2>) {
        chomp $line;
        $line = untabify($line);
        printf "%s\n", esc_html($line, -nbsp=>1)
    }
    print "</pre></td></tr></table>";
    close $fd2;

This could do with a quick clean up (reuse $fd rather than opening $fd2) but it works.

New Phone – T-Mobile G1.

Recently I acquired a T-Mobile G1 to replace my old T-Mobile MDA Vario 2 (HTC Hermes).

All I can say about this phone is that it is quite awesome. I no longer need to run an exchange server to keep my contacts/calendar synced somewhere as the G1 syncs everything to Google Mail/Calendar.

It’s a really good phone and I recommend it to anyone who is thinking of getting a new phone. The integration with Google is especially useful, and the full-HTML browser (including CSS and JavaScript) is very nice.

JDesktopPane Replacement

As I mentioned before, I’ve recently been converting an old project to Java.

This old project was an MDI application, and when creating the UI for the conversion, I found the default JDesktopPane to be rather crappy. Google revealed others thought the same, one of the results that turned up was: http://www.javaworld.com/javaworld/jw-05-2001/jw-0525-mdi.html

So, I created DFDesktopPane based on this code, with some extra changes:

  • Frames can’t end up with a negative x/y
  • Respond to resize events of the JViewport parent
  • Iconified icons move themselves to remain inside the desktop at all times.
  • Handles maximised frames correctly (desktop doesn’t scroll, option to hide/remove titlebar)

My modified JDesktopPane can be found here as part of my dflibs Google Code project.

Other useful things can be found there too - take a look and leave any feedback either here or on the project issue tracker.