[ntp:questions] offset > .5 second. What does this mean?

David Woolley david at djwhome.demon.co.uk
Mon Oct 30 22:08:42 UTC 2006


In article <1162197598.557296.181620 at m73g2000cwd.googlegroups.com>,
nicough at gmail.com wrote:

> (a) The difference in time between the incorrect time on a local
> server, and the correct time on a correct NTP server.
> or

It's closer to this one, but it is not quite this one.  In the peers
output, it is the difference between the estimated value of the software
clock on the local machine and the estimated value of the software clock
on the server in question, allowing for round trip delay, but not for
past history.
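
For reference, here is a minimal sketch (in Python, with made-up
timestamps) of the standard on-wire calculation that produces that
offset, together with the round trip delay; T1 and T4 are read from the
local clock, T2 and T3 from the server's clock:

    # Standard NTP on-wire calculation.
    # T1 = client transmit, T2 = server receive,
    # T3 = server transmit, T4 = client receive (seconds, illustrative).
    T1, T2, T3, T4 = 0.000, 0.610, 0.612, 0.020

    # Offset: estimated server clock minus local clock, assuming the
    # outbound and return network delays are equal.
    offset = ((T2 - T1) + (T3 - T4)) / 2.0

    # Delay: round trip time actually spent on the network.
    delay = (T4 - T1) - (T3 - T2)

    print("offset %.3f s, delay %.3f s" % (offset, delay))
    # offset ~0.6 s (i.e. over the .5 second in the subject), delay ~0.018 s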

As one normally expects to be able to read the local clock more
accurately than the server clock (NB this does not apply to W32Time in
the latest W2003 service pack acting as a client to the reference
implementation on Windows or many Unix-like boxes), the normal state is
that the local time is a better estimate of true time than the server
time.  The local time will only be the more "incorrect" of the two
during initial convergence (and even then there was a big debate, a
couple of weeks ago, as to whether ntpd could actually avoid this
situation in cases where it knew it to be true).  (Actually, the local
clock could also be wrong in phase due to asymmetric delays, but those
would not be visible in the reported offset.)

The offset actually used for disciplining the local clock is a
weighted average of the assumed best values from several of the
servers.
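
Illustratively (this is not ntpd's actual clock-combining algorithm,
just the general idea of weighting the better estimates more heavily):

    # Toy combination of offsets from several servers, weighting each
    # by the inverse of its estimated error.  Not ntpd's real algorithm.
    samples = [
        # (offset in s, estimated error in s) for each surviving server
        (0.012, 0.002),
        (0.015, 0.004),
        (0.009, 0.003),
    ]
    weights = [1.0 / err for _, err in samples]
    total = sum(off * w for (off, _), w in zip(samples, weights))
    combined = total / sum(weights)
    print("combined offset %.4f s" % combined)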

> (b) The time delay (caused by latency etc) to receive the correct time
> from an NTP server. Ie By the time the local server receives the
> "correct time", this "correct time" value is now slightly old.

Noting the above caveat that the local time is likely to be the more
correct of the two, the delay parameter gives half the maximum error due
to this effect, give or take the clock reading precisions.
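
As a rough sketch of that bound (my own illustration, not ntpd output):
if the round trip delay is 30 ms, the symmetric-delay assumption can be
wrong by at most about half of that:

    # Worst case: all of the round trip delay lies on one path, so the
    # error from assuming symmetric delays is at most delay / 2, give or
    # take the precision with which each clock can be read.
    delay = 0.030           # measured round trip delay, seconds
    precision = 0.000001    # illustrative clock-reading precision, seconds
    max_offset_error = delay / 2.0 + precision
    print("maximum offset error ~ %.6f s" % max_offset_error)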

(Noting that W32Time on anything except the current SP of W2K3 does not
implement NTP, the reason that the W32Time implementation of NTP may be
less correct than the server is that it only reads the software clock at
one-tick resolution, whereas the reference implementation interpolates
between ticks using the cycle counter on the CPU.)
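
The interpolation idea is roughly as follows (a sketch only, with
hypothetical names; the reference implementation's actual code is more
involved and hardware dependent):

    # The software clock only advances once per tick, but the CPU cycle
    # counter lets us estimate how far into the current tick we are.
    def read_interpolated_clock(last_tick_time, tsc_now, tsc_at_tick, tsc_hz):
        # Seconds elapsed since the software clock last ticked.
        since_tick = (tsc_now - tsc_at_tick) / float(tsc_hz)
        return last_tick_time + since_tick

    # e.g. 10 ms ticks, 2 GHz cycle counter, 7 million cycles into the tick
    print("%.4f" % read_interpolated_clock(1162246122.480,
                                           14000000, 7000000, 2000000000))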



