Boston Linux & UNIX was originally founded in 1994 as part of The Boston Computer Society. We meet on the third Wednesday of each month at the Massachusetts Institute of Technology, in Building E51.

BLU Discuss list archive


[Discuss] New document on Unbound caching DNS server

On Thu, Sep 13, 2018 at 07:36:26PM -0400, Steve Litt wrote:
> Hi all,
> The Unbound DNS server is the new kid on the block. A lot of admins are
> replacing BIND9 with Unbound, perhaps plus an authoritative DNS server
> for their domain. 

Why?  BIND9, for whatever flaws it may have, is robust,
well-understood software.  What advantages does Unbound offer that
outweigh the benefit of running well established code?

> More interesting still, a lot of laptop owners are installing Unbound
> to replace their old or per-accesspoint resolvers with a full
> caching DNS, which is more secure, faster, and makes for much faster
> browsing.

FWIW, this is often a bad idea.  On average, you will typically get
the best overall performance by using your ISP's DNS servers (unless
you know they're bad).  If you care about why, the short answer is
CDNs, but here's a somewhat lengthy explanation:

Most of the Internet's traffic is served by one CDN or another, and a
common trick is for them to route you to the most performant CDN
server based on where the DNS request came from, i.e. your local DNS
server, not your desktop.   The CDN has to route you to the servers
before it knows what your IP is, since it does that by responding to
DNS requests for the hosted domain, and those requests came from your
DNS server--not your web browser (or whatever).  So they do this under
the assumption that most people are using their ISP's DNS servers, and
that those servers are placed for optimal performance for their
customers (which may or may not actually be true, but usually is).
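To make the mechanism concrete, here's a minimal sketch of the server-side logic described above. Everything in it is an assumption for illustration: the prefix-to-edge mapping, the addresses (all from documentation ranges), and the function names are made up; real CDNs use far richer geolocation and latency data. The one point it demonstrates is that the answer is chosen from the *resolver's* IP, since that is the only address the authoritative server sees.

```python
import ipaddress

# Hypothetical mapping of resolver network prefixes to CDN edge servers
# (addresses are from RFC 5737 documentation ranges, not real CDN IPs).
POP_BY_PREFIX = {
    ipaddress.ip_network("203.0.113.0/24"): "198.51.100.10",  # edge near ISP A
    ipaddress.ip_network("192.0.2.0/24"): "198.51.100.20",    # edge near ISP B
}
DEFAULT_POP = "198.51.100.30"  # fallback edge for unknown resolvers

def answer_for(resolver_ip: str) -> str:
    """Return the CDN edge address to hand back in the DNS answer,
    chosen by the IP the query arrived from -- the resolver's address,
    not the end user's, because that's all the CDN's DNS server sees."""
    addr = ipaddress.ip_address(resolver_ip)
    for net, pop in POP_BY_PREFIX.items():
        if addr in net:
            return pop
    return DEFAULT_POP

# A query relayed through ISP A's resolver gets ISP A's nearby edge:
print(answer_for("203.0.113.53"))   # -> 198.51.100.10
# A query from a resolver the CDN can't place gets the default edge:
print(answer_for("8.8.8.8"))        # -> 198.51.100.30
```

So if your laptop's resolver sits in an address block the CDN can't place near you, you get the fallback answer, not the closest edge.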

A secondary effect of this is that since CDNs may want to reroute
traffic around flapping routes, down CDN servers, etc., DNS record
TTLs tend to be quite short for busy sites--and you want that, so
traffic can be rerouted quickly if the servers you're hitting go down,
get slow, etc.  But the side effect of this is that caching DNS
servers will either need to re-resolve them rather frequently, or
ignore the TTL.  Both things are bad, unless the resolver has low-latency,
low-loss connections to the top levels and the authoritative DNS
servers for all the domains you're visiting.  But with your ISP's
servers, they're likely to be busy enough that someone else's request
will re-prime the pump. Occasionally it will be one of your requests
that gets a cache miss that needs to do a full resolution, but
typically this will happen so infrequently that it won't ever seem
like there's a problem to you.  Hosted sites often have TTLs of under
5 minutes, and I've seen them as low as 1s at least once! ;-)
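The re-resolve-or-ignore trade-off above can be sketched in a few lines. This is a toy cache, not Unbound's implementation; the class name, the fake upstream resolver, and the 5-second TTL are all assumptions for the demo. It shows why a short-TTL (CDN-hosted) name keeps forcing full resolutions on a lightly-used cache:

```python
import time

class TTLCache:
    """Minimal caching-resolver sketch: honors record TTLs, so entries
    for short-TTL names expire quickly and the next lookup after expiry
    pays for a fresh, full resolution."""
    def __init__(self, resolve):
        self.resolve = resolve   # upstream full resolution (the slow path)
        self.cache = {}          # name -> (answer, expiry time)

    def lookup(self, name, now=None):
        now = time.monotonic() if now is None else now
        hit = self.cache.get(name)
        if hit and now < hit[1]:
            return hit[0], "cache hit"
        answer, ttl = self.resolve(name)       # slow: walk the hierarchy
        self.cache[name] = (answer, now + ttl)
        return answer, "full resolution"

# Fake upstream: a busy site whose records carry a 5-second TTL.
cache = TTLCache(lambda name: ("198.51.100.10", 5))
print(cache.lookup("cdn.example.com", now=0.0))  # -> full resolution
print(cache.lookup("cdn.example.com", now=2.0))  # -> cache hit
print(cache.lookup("cdn.example.com", now=6.0))  # TTL expired -> full resolution again
```

Stretch the expiry past the TTL and the slow path goes away, but so does the CDN's ability to reroute you--which is exactly the trade-off discussed below.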

You can, of course, run your own caching server, which will
provide the best locality for the CDNs.  But unless your home network
is busy (and depending on your caching policy), your ISP's DNS servers
are much more likely to already have a site you want to visit cached,
avoiding a full look-up for nearly every name resolution.  I ran my
own caching DNS server for a while, and I found that except for the
sites I used the most, it was significantly slower than just using my
ISP's DNS servers--and even then, when a TTL expired, the lookups
could be quite slow.  If, to circumvent that problem, you have your
DNS server cache records longer than their TTLs, then you may miss
when the CDN has rerouted traffic away from your network, causing you
to use poorly-performing or even broken servers.
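The "re-prime the pump" effect is really just arithmetic. Here's a deliberately crude model (an assumption for illustration, not measured data): within each TTL window, the first query for a name is a miss that pays for a full resolution, and every later query is a hit. The query rates below are made-up numbers.

```python
def approx_hit_rate(queries_per_ttl: float) -> float:
    """Toy model: one miss per TTL window (the first query), the rest
    are hits, so with q >= 1 queries per window the hit rate is roughly
    1 - 1/q; below one query per window, every lookup is a miss."""
    if queries_per_ttl <= 1:
        return 0.0
    return 1.0 - 1.0 / queries_per_ttl

# A busy ISP resolver might field ~200 queries for a popular name per
# 60s TTL window (made-up number): almost every query is a hit.
print(round(approx_hit_rate(200), 3))   # -> 0.995
# A single laptop querying the same name once every couple of windows:
print(approx_hit_rate(0.5))             # -> 0.0, every lookup is a full resolution
```

Crude as it is, it captures why the shared, busy resolver almost always answers from cache while a one-user cache almost never does for short-TTL names.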

Google's public DNS is probably somewhat immune to these effects,
since it uses anycast to advertise the same IPs for DNS servers in
multiple locations.  As long as there's one reasonably close to you,
in terms of network topology, it should have roughly the same effect
as using your ISP's DNS servers, AND benefit from frequent use keeping
the cache primed.  The down side is, Google will know everything
you're doing on the internet.  But then, using your ISP's servers has
the same down side.  

Currently (at work), I'm getting 14 hops and ~7ms to Google's public
DNS (YMMV).  Though that's almost 3x the latency to my local DNS
servers, this server would work fine for DNS resolution.  If this were
at home, I might consider switching to Google DNS if I found my ISP's
servers to be slow or unreliable.  However, for CDN routing purposes,
the 14 hops cross at least 3 provider boundaries (that I can identify
quickly), which could result in extremely non-optimal selections of
CDN servers.  If that turned out to be the case, I'd want to switch
back to using the ISP's servers.

Derek D. Martin   GPG Key ID: 0xDFBEAD02
This message is posted from an invalid address.  Replying to it will result in
undeliverable mail due to spam prevention.  Sorry for the inconvenience.

BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.

