DNS Router
Say you have a zone that does not fit in the memory of one machine. Who doesn’t have such zones nowadays? How would you solve such a problem? With a DNS router of course!
Dnsrouter is a small Go program I whipped together that acts as a DNS router. Clients register an <ip:port, regexp> combination and will then only receive queries that match that regular expression. The registration happens in etcd. Of course “dnsrouter” (I need a better name) has some features; it will:
- health check each server every 5 seconds over TCP, using an id.server. TXT CH query;
- set an etcd watch to get updates when a server is added or removed.
So it’s pretty dynamic, but the health checking could be better, as servers will never be re-added once removed.
Ldns actually has a utility to split a zone file into chunks (each with a new SOA), called ldns-zsplit, see http://git.nlnetlabs.nl/ldns/tree/examples/ldns-zsplit.1. In this case I just manually split a zone into 2 chunks: one with names starting with [ab] and another with [cd]. Of course the apex of the zone needs to go somewhere, so this has to be specified somewhere. See the examples later in this article.
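To give a rough idea of what the two chunks look like, each one is a complete zone file with its own copy of the apex. The SOA and the a/c TXT records below match what the dig queries later in this article return; the “bb”/“dd” values are made up for illustration and NS records and the rest of the zone data are omitted:

```
; chunk 1: apex plus names starting with [ab]
miek.nl.    43200 IN SOA linode.atoom.net. miek.miek.nl. 1282630056 14400 3600 604800 86400
a.miek.nl.  43200 IN TXT "aa"
b.miek.nl.  43200 IN TXT "bb"

; chunk 2: apex plus names starting with [cd]
miek.nl.    43200 IN SOA linode.atoom.net. miek.miek.nl. 1282630056 14400 3600 604800 86400
c.miek.nl.  43200 IN TXT "cc"
d.miek.nl.  43200 IN TXT "dd"
```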
For the purpose of this article I’ve created two docker images, each running BIND9 and serving one of the two (split) pieces of miek.nl. The whole “how do I prepare a docker image” part will be left out; there is plenty of documentation on the Net for this. After fiddling with docker I found the following command line would start my VMs OK:
docker run -p 5300:53/udp -p 5300:53 -d miek/bind:bind9a
And run the other docker container on a different port:
docker run -p 5301:53/udp -p 5301:53 -d miek/bind:bind9c
So, all a-b names are reachable on port 5300 and all c-d names can be found via port 5301.
Assuming we have an etcd running on our host, we register our two docker VMs with it and then start dnsrouter.
curl -L http://127.0.0.1:4001/v2/keys/dnsrouter/a1 -XPUT -d value="127.0.0.1:5300,^[ab]\.miek"
curl -L http://127.0.0.1:4001/v2/keys/dnsrouter/c1 -XPUT -d value="127.0.0.1:5301,^[cd]\.miek"
(Routes for the apex of the zone will be added later; when multiple routes match a name, dnsrouter will round robin between the matching servers.)
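Each etcd value is simply address,regexp. A minimal sketch of how such a value can be parsed into a route (the names here are mine; this is an illustration, not dnsrouter’s actual code):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// route pairs a backend server with the pattern of query names it serves.
type route struct {
	addr string         // "ip:port" of the backend
	re   *regexp.Regexp // query names matching this go to addr
}

// parseRoute splits an etcd value of the form "ip:port,regexp"
// and compiles the regular expression.
func parseRoute(value string) (route, error) {
	addr, pattern, found := strings.Cut(value, ",")
	if !found {
		return route{}, fmt.Errorf("unable to parse value %q", value)
	}
	re, err := regexp.Compile(pattern)
	if err != nil {
		return route{}, err
	}
	return route{addr: addr, re: re}, nil
}

func main() {
	r, err := parseRoute(`127.0.0.1:5300,^[ab]\.miek`)
	if err != nil {
		panic(err)
	}
	fmt.Println(r.addr, r.re.MatchString("a.miek.nl."))
}
```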
% DNS_ADDR=127.0.0.1:5299 ./dnsrouter
2014/05/17 10:54:38 enabling health checking
2014/05/17 10:54:38 setting watch
2014/05/17 10:54:38 getting initial list
2014/05/17 10:54:38 unable to parse node /dnsrouter with value # small bug I need to fix
2014/05/17 10:54:38 adding route ^[ab]\.miek for 127.0.0.1:5300
2014/05/17 10:54:38 adding route ^[cd]\.miek for 127.0.0.1:5301
2014/05/17 10:54:38 ready for queries
So dnsrouter is running on port 5299; let’s try some queries and check its logs.
% dig @localhost +noall +ans -p 5299 TXT a.miek.nl
a.miek.nl. 43200 IN TXT "aa"
% dig @localhost +noall +ans -p 5299 TXT c.miek.nl
c.miek.nl. 43200 IN TXT "cc"
And the logs from dnsrouter:
2014/05/17 11:04:07 routing a.miek.nl. to 127.0.0.1:5300
2014/05/17 11:04:12 routing c.miek.nl. to 127.0.0.1:5301
A request for the apex of the zone fails because we haven’t set up a route for it, so let’s add two:
2014/05/17 11:06:51 adding route ^miek for 127.0.0.1:5301
2014/05/17 11:06:58 adding route ^miek for 127.0.0.1:5300
And dig again:
% dig @localhost +noall +ans -p 5299 SOA miek.nl
miek.nl. 43200 IN SOA linode.atoom.net. miek.miek.nl. 1282630056 14400 3600 604800 86400
And we even see some round robin at work:
2014/05/17 11:07:23 routing miek.nl. to 127.0.0.1:5300
2014/05/17 11:07:23 routing miek.nl. to 127.0.0.1:5301
Let’s kill one of the docker VMs. Dnsrouter should detect this and disable that server. It does not automatically re-add it; for that you need to write to etcd again, which will then automatically be picked up by dnsrouter.
% docker stop 29fde54f64f8
...
2014/05/17 11:22:00 healthcheck failed for 127.0.0.1:5300
2014/05/17 11:22:00 removing 127.0.0.1:5300
And starting it again:
% docker start 29fde54f64f8
% curl -L http://127.0.0.1:4001/v2/keys/dnsrouter/a1 -XPUT -d value="127.0.0.1:5300,^[ab]\.miek"
...
2014/05/17 11:23:41 adding route ^[ab]\.miek for 127.0.0.1:5300
In an upcoming article I will describe how I got this running on CoreOS.