[ntp:questions] Re: Questions and ruminations regarding NTPD 4 config and XP's bad behavior.

Richard B. Gilbert rgilbert88 at comcast.net
Tue Jan 18 01:00:01 UTC 2005

elickd at one.net wrote:

>Of all the questions facing me, the most vexing is how 170 odd XP boxes
>are showing up in the "ntpq -p" output of our stratum 3 servers (Linux,
>NTPv4); said boxes certainly are not enumerated in their "ntp.conf".
>The only difference between these XP machines and the hundreds of
>identical units that do not show up is that they're set to auto-sync
>their time once a week.
>I can't think of any other mechanism for their associations other than
>some weird interaction between them and the stratum 3 servers when they
>poll. Perhaps the fact that multicast is turned on (on the servers) is
>a factor?
>All our stratum 3 NTP4 servers have the following stanzas in their
>ntp.conf:
>multicastclient
>broadcastdelay	0.008
I missed this the first time around!  Why are your servers configured as 
multicast clients?!  That means they will listen to the lowest-stratum 
server that happens to be multicasting!  If the XP boxes are 
multicasting, you could be getting time from them instead of from your 
stratum two servers, especially if the stratum two servers are 
unavailable for some reason.

Rather than take time from random strangers it's better to configure 
your servers to get their time from specific lower stratum servers!


server server-a.mydomain.com  iburst
server server-b.mydomain.com  iburst
server server-c.mydomain.com  iburst
server server-d.mydomain.com  iburst
# Declare the local clock to be the clock of last resort.
# It will be used to serve time in the absence of any other.
server 127.127.1.0            # Local clock, unit 0
fudge  127.127.1.0 stratum 10

This way, you get time only from the designated servers and, if they are 
unavailable, you serve your local clock to your clients.  Assuming 
(dangerously) that the servers in question more or less agree as to what 
time it is and are synchronized and stable, your servers should 
initialize with eight queries at two-second intervals (that's the iburst 
part) and then continue at one query every 64 seconds.  As the 
synchronization gets tighter the poll interval will stretch to 128 
seconds, then 256, 512 and, ultimately, 1024 seconds.  You then live 
happily, not forever after, but until something happens to disrupt the 
stratum two servers or your network connection.
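That poll schedule is just successive powers of two between ntpd's default 
minpoll (6) and maxpoll (10) exponents; a small sketch of the progression 
(the exponent range is ntpd's documented default, not something stated in 
the posting):

```python
# ntpd polls every 2**exponent seconds; by default the exponent ranges
# from minpoll 6 (64 s) up to maxpoll 10 (1024 s) as the clock settles.
MINPOLL, MAXPOLL = 6, 10

def poll_schedule(minpoll=MINPOLL, maxpoll=MAXPOLL):
    """Sequence of poll intervals, in seconds, that ntpd steps through
    as synchronization tightens."""
    return [2 ** p for p in range(minpoll, maxpoll + 1)]

print(poll_schedule())  # [64, 128, 256, 512, 1024]
```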

In reality, the two stratum two servers will probably disagree as to what 
time it is and, since there are only two, there is no majority to out-vote 
whichever one is wrong; your stratum three servers will "clock hop" 
between them and you'll never get tight synch.  At least the mob will be 
headed in the same general direction at more or less the same speed.

If you have legal constraints for timestamping transactions, your setup 
is just asking for trouble.  I don't think that anybody has more than a 
slight clue as to what time it is, and if you need to be able to say that 
something happened at 19:16:10.37 on January 7, 2004, plus or minus ten 
milliseconds, you probably can't swear that the timestamp is correct.

If you just need to figure out a sequence of closely timed events at 
different locations, you probably can't do it with any certainty because 
the various clocks are almost certainly not closely synchronized.  With 
low network latency and a stable time source, it's possible to keep a 
large number of machines synchronized within, say, five or ten 
milliseconds.  With your setup, it wouldn't surprise me to find two 
different machines on your network whose clocks differed by at least 
several seconds or even minutes!
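To illustrate why event ordering breaks down (a toy sketch; the offsets 
and times are invented for illustration, not taken from the posting): if 
two machines' clocks differ by more than the spacing between events, the 
timestamps can reverse the true order.

```python
from datetime import datetime, timedelta

# Toy illustration: machine B's clock runs 3 seconds ahead of machine A's.
offset_b = timedelta(seconds=3)

true_time = datetime(2004, 1, 7, 19, 16, 10)
# The first event happens on machine B; the second happens on machine A
# one second later (in true time).
stamp_event1 = true_time + offset_b              # B stamps with its fast clock
stamp_event2 = true_time + timedelta(seconds=1)  # A stamps correctly

# The first event really happened first, but its timestamp is later.
print(stamp_event1 > stamp_event2)  # True: timestamps reverse the real order
```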

The first thing to do is to define your requirements for both accuracy 
and tightness of synchronization.  If you can't satisfy those 
requirements with your existing setup, and you have a business 
justification for them, decide what resources are necessary to meet 
current and anticipated future requirements and then request those 
resources.  If you must meet the requirements even when headquarters or 
your network connection is down, consider designing some redundancy into 
the system.
