[ntp:questions] "ntpd -q" is slow compared to ntpdate

David Woolley david at ex.djwhome.demon.co.uk.invalid
Thu Oct 16 22:12:04 UTC 2008

Richard B. Gilbert wrote:

> As I understand it, "root dispersion" is the difference between my clock 
> and the atomic clock at the root.  If my understanding is correct, I 

It's not the difference.  It is a somewhat worst-case estimate of the 
part of the difference due to the time elapsed since the root clock time 
was measured, plus certain other measurement uncertainties.

> think we do care about it.  If the absolute value is greater than about 
> 100 microseconds I would begin to be concerned.  Others might choose 
> some other value.

ntpd uses 1,000,000 microseconds (I can't remember whether root delay is 
included in root dispersion, or whether the limit is on the sum of the 
two).  A value of 100 microseconds would require an excessively high 
poll rate, especially for a high-stratum client.

By default, (root) dispersion grows at 15 microseconds per second, so 
one would need a root measurement less than 7 seconds old if this were 
the only term in root dispersion.  The standard minpoll is 64 seconds, 
of which only 1 in 8 samples may be effective, so a stratum 2 server at 
minpoll will already have accumulated up to 3840 microseconds.  At 
maxpoll, it will be more like 120 ms.  This ignores dispersion from the 
reference clock polling interval.

At stratum 15, and assuming the samples average halfway through the 
effective interval at each upstream server, I believe you will have used 
the full one-second budget!
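The arithmetic above can be sketched roughly as follows (a back-of-envelope illustration only: the 15 microseconds per second growth rate is the standard PHI constant, and the 1-in-8 effective sample and halfway-averaging assumptions are taken from the reasoning above, not from ntpd source):

```python
PHI = 15e-6          # dispersion growth rate, seconds per second
CLOCK_FILTER = 8     # assume only ~1 in 8 samples is effective

def avg_dispersion_us(poll_seconds):
    """Average dispersion (microseconds) accumulated per hop,
    assuming sample age averages half the effective interval."""
    effective_interval = CLOCK_FILTER * poll_seconds
    return PHI * (effective_interval / 2) * 1e6

per_hop_min = avg_dispersion_us(64)    # minpoll: 3840 us per hop
per_hop_max = avg_dispersion_us(1024)  # maxpoll: ~61 ms per hop

# A stratum-15 client sits behind 14 hops from the stratum-1 server:
print(f"per hop @ minpoll: {per_hop_min:.0f} us")
print(f"per hop @ maxpoll: {per_hop_max:.0f} us")
print(f"stratum 15 @ maxpoll: {14 * per_hop_max / 1e6:.2f} s")
```

At maxpoll this gives roughly 0.86 s across 14 hops, which is most of the one-second budget even before the other dispersion terms are counted.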
