
8.2 Adding More Name Servers

When you need to create new name servers for your zones, the simplest recourse is to add slaves. You already know how—we went over it in Chapter 4—and once you've set up one slave, cloning it is a piece of cake. But you can run into trouble by adding slaves indiscriminately.

If you run a large number of slave servers for a zone, the primary master name server can take quite a beating just keeping up with the slaves' polling to check that their zone data is current. There are several ways to address this problem:

  • Make more primary master name servers

  • Increase the refresh interval so that the slaves don't check so often

  • Direct some of the slave name servers to load from other slave name servers

  • Create caching-only name servers (described later)

  • Create "partial-slave" name servers (also described later)

8.2.1 Primary Master and Slave Servers

Creating more primaries means extra work for you, since you have to keep /etc/named.conf and the zone data files synchronized manually. Whether this is preferable to the other alternatives is your call. You can use tools like rdist or rsync[6] to simplify the process of distributing the files. A distfile[7] to synchronize files between primaries might be as simple as the following:

[6] rsync is a remote file synchronization program that transmits only the differences between files. You can find out more about it at http://rsync.samba.org.

[7] The file rdist reads to find out which files to update.

dup-primary:

# copy named.conf file to dup'd primary

/etc/named.conf  -> wormhole
    install ;

# copy contents of /var/named (zone data files, etc.) to dup'd primary

/var/named -> wormhole
    install ;

or for multiple primaries:

dup-primary:

primaries = ( wormhole carrie )
/etc/named.conf -> ${primaries}
    install ;

/var/named -> ${primaries}
    install ;

You can even have rdist trigger your name server's reload with its special option by adding lines like these:

special /var/named/* "ndc reload" ;
special /etc/named.conf "ndc reload" ;

These tell rdist to execute the quoted command if any of the files change.

Increasing your zone's refresh interval is another option. This slows down the propagation of new information, however. In some cases, this is not a problem. If you rebuild your zone data with h2n only once each day at 1 a.m. (run from cron) and then allow six hours for the data to distribute, all the slaves will be current by 7 a.m.[8] That may be acceptable to your user population. See Section 8.4.1 later in this chapter for more detail.

[8] And, of course, if you're using NOTIFY, they'll catch up much sooner than that.
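The refresh interval lives in the second numeric field of the zone's SOA record. As a sketch, a six-hour refresh might look like this (the serial number and the other timer values here are illustrative, not movie.edu's actual data):

movie.edu. IN SOA terminator.movie.edu. al.robocop.movie.edu. (
                2000102301  ; serial
                21600       ; refresh: slaves poll every 6 hours
                3600        ; retry: failed polls retried after 1 hour
                604800      ; expire: slaves discard the zone after 1 week
                86400 )     ; minimum TTL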

You can even have some of your slaves load zone data from other slaves instead of from the primary master name server. The slave name server can't tell whether it's loading from a primary or from another slave; it's important only that the name server serving the zone transfer is authoritative for the zone. There's no trick to configuring this: instead of specifying the primary's IP address in the slave's configuration file, you simply specify the IP address of another slave.

Here are the contents of the file named.conf:

// this slave updates from wormhole, another slave
zone "movie.edu" {
	type slave;
	masters { 192.249.249.1; };
	file "bak.movie.edu";
};

For a BIND 4 server, the named.boot file would look slightly different:

; this slave updates from wormhole, another slave
secondary   movie.edu   192.249.249.1   bak.movie.edu

When you go to this second level of distribution, though, it can take up to twice as long for the data to percolate from the primary master name server to all the slaves. Remember that the refresh interval is the period after which the slave name servers will check to make sure that their zone data is still current. Therefore, it can take the first-level slave servers the entire refresh interval before they get a new copy of the zone from the primary master server. Similarly, it can take the second-level slave servers the entire refresh interval to get a new copy of the zone from the first-level slave servers. The propagation time from the primary master server to all of the slave servers can therefore be twice the refresh interval.
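The arithmetic above can be sketched as follows (the three-hour refresh interval is a hypothetical value, not movie.edu's actual setting):

```python
# Without NOTIFY, each tier of slaves may wait a full refresh interval
# before polling the server above it, so the worst-case propagation
# delay is the refresh interval multiplied by the number of tiers.

def worst_case_propagation(refresh_seconds, tiers):
    """Seconds until every slave is guaranteed to have the new zone."""
    return refresh_seconds * tiers

refresh = 3 * 3600  # a hypothetical 3-hour refresh interval

# Slaves loading directly from the primary master: one refresh interval.
print(worst_case_propagation(refresh, 1) // 3600, "hours")  # 3 hours

# Second-level slaves loading from first-level slaves: twice as long.
print(worst_case_propagation(refresh, 2) // 3600, "hours")  # 6 hours
```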

One way to avoid this is to use the NOTIFY feature in BIND 8 and 9. This is on by default, and will trigger zone transfers soon after the zone is updated on the primary master. Unfortunately, it works only on Version 8 and 9 BIND slaves.[9] We'll discuss NOTIFY in more detail in Chapter 10.

[9] And, incidentally, on the Microsoft DNS Server.

If you decide to configure your network with two (or more) tiers of slave name servers, be careful to avoid updating loops. If we were to configure wormhole to update from diehard and then accidentally configure diehard to update from wormhole, neither would ever get data from the primary master. They would merely check their out-of-date serial numbers against each other and perpetually decide that they were both up to date.
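As a sketch of the misconfiguration to avoid (the addresses are assumptions: wormhole as 192.249.249.1 and diehard as 192.249.249.2), the two slaves' named.conf files would point at each other:

// on wormhole: updates from diehard
zone "movie.edu" {
	type slave;
	masters { 192.249.249.2; };  // diehard
	file "bak.movie.edu";
};

// on diehard: updates from wormhole -- neither slave
// ever transfers the zone from the primary master
zone "movie.edu" {
	type slave;
	masters { 192.249.249.1; };  // wormhole
	file "bak.movie.edu";
};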

8.2.2 Caching-Only Servers

Creating caching-only name servers is another alternative when you need more servers. Caching-only name servers are name servers that aren't authoritative for any zones (except 0.0.127.in-addr.arpa). The name doesn't imply that primary master and slave name servers don't cache—they do. The name implies that the only function this server performs is looking up data and caching it. As with primary master and slave name servers, a caching-only name server needs a root hints file and a db.127.0.0 file. The named.conf file for a caching-only server contains these lines:

options {
	directory "/var/named";  // or your data directory
};

zone "0.0.127.in-addr.arpa" {
	type master;
	file "db.127.0.0";
};

zone "." {
	type hint;
	file "db.cache";
};

On a BIND 4 server, the named.boot file looks like this:

directory /var/named  ; or your data directory

primary 0.0.127.in-addr.arpa  db.127.0.0  ; for loopback address
cache   .                     db.cache

A caching-only name server can look up domain names inside and outside your zone, as can primary master and slave name servers. The difference is that when a caching-only name server initially looks up a name within your zone, it ends up asking one of the primary master or slave name servers for your zone for the answer. A primary or slave would answer the same question out of its authoritative data. Which primary or slave does the caching-only server ask? As with name servers outside your zone, it finds out which name servers serve your zone from one of the name servers for your parent zone.

Is there any way to prime a caching-only name server's cache so it knows which hosts run primary master and slave name servers for your zone? No, there isn't. You can't use db.cache—the db.cache file is only for root name server hints. And actually, it's better that your caching-only name servers find out about your authoritative name servers from your parent zone's name servers: you keep your zone's delegation information up to date. If you hardwired a list of authoritative name servers on your caching-only name servers, you might forget to update it.

A caching-only name server's real value comes after it builds up its cache. Each time it queries an authoritative name server and receives an answer, it caches the records in the answer. Over time, the cache will grow to include the information most often requested by the resolvers querying the caching-only name server. And you avoid the overhead of zone transfers—a caching-only name server doesn't need to do them.

8.2.3 Partial-Slave Servers

In between a caching-only name server and a slave name server is another variation: a name server that is a slave for only a few of the local zones. We call this a partial-slave name server (and probably nobody else does). Suppose movie.edu had 20 /24-sized (old Class C) networks and 20 corresponding in-addr.arpa zones. Instead of creating a slave server for all 21 zones (all the in-addr.arpa subdomains plus movie.edu), we could create a partial-slave server for movie.edu and only those in-addr.arpa zones the host itself is in. If the host had two network interfaces, its name server would be a slave for three zones: movie.edu and the two in-addr.arpa zones.

Let's say we scare up the hardware for another name server. We'll call the new host zardoz.movie.edu, with IP addresses 192.249.249.9 and 192.253.253.9. We'll create a partial-slave name server on zardoz, with this named.conf file:

options {
	directory "/var/named";
};

zone "movie.edu" {
	type slave;
	masters { 192.249.249.3; };
	file "bak.movie.edu";
};

zone "249.249.192.in-addr.arpa" {
	type slave;
	masters { 192.249.249.3; };
	file "bak.192.249.249";
};

zone "253.253.192.in-addr.arpa" {
	type slave;
	masters { 192.249.249.3; };
	file "bak.192.253.253";
};

zone "0.0.127.in-addr.arpa" {
	type master;
	file "db.127.0.0";
};

zone "." {
	type hint;
	file "db.cache";
};

For a BIND 4 server, the named.boot file would look like this:

directory   /var/named
secondary   movie.edu                192.249.249.3 bak.movie.edu
secondary   249.249.192.in-addr.arpa 192.249.249.3 bak.192.249.249
secondary   253.253.192.in-addr.arpa 192.249.249.3 bak.192.253.253
primary     0.0.127.in-addr.arpa     db.127.0.0
cache       .                        db.cache

This server is a slave for movie.edu and only two of the 20 in-addr.arpa zones. A "full" slave would have 21 different zone statements in named.conf.

What's so useful about a partial-slave name server? They're not much work to administer because their named.conf files don't change much. On a name server authoritative for all the in-addr.arpa zones, we'd need to add and delete in-addr.arpa zones (and their corresponding entries in named.conf) as our network changed. That can be a surprising amount of work on a large network.

A partial slave can still answer most of the queries it receives, though. Most of these queries will be for data in movie.edu and the two in-addr.arpa zones. Why? Because most of the hosts querying the name server are on the two networks it's connected to, 192.249.249 and 192.253.253. And those hosts probably communicate primarily with other hosts on their own network. This generates queries for data within the in-addr.arpa zone that corresponds to the local network.
