[ntp:questions] Server offset included in served time?
david at ex.djwhome.demon.co.uk.invalid
Tue Sep 16 22:04:33 UTC 2008
> David Woolley <david at ex.djwhome.demon.co.uk.invalid> writes:
>> But, if you use individual measurements, you will get a figure that,
>> most of the time, is several times the true error and not necessarily in
>> the right direction.
> No idea what that sentence means. Are you referring to the gps readings
> which will be at worst 2 orders of magnitude better than the offsets from
> a generic network?
I'm assuming a time discipline algorithm that is properly matched to the
system time noise. I tend to agree that NTP probably isn't, but in that
case one should be changing the algorithm to make it properly matched,
rather than trying to record how bad it is.
With such an algorithm, one would expect the measured offsets to be more
or less equally positive and negative and distributed fairly randomly.
That is the mathematical assumption that I believe is the basis of the
theoretical analysis of the behaviour of NTP. The various filters in
NTP will low pass filter this noise and considerably reduce it in
amplitude, resulting in the value in the system clock. As a result, the
jitter in the system clock should be a lot less than the measurement
jitter, and at any one instant the clock may deviate in the same
direction as the measured offset, reducing the resulting offset.
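To make the low-pass-filtering point concrete, here is a minimal sketch: a plain exponential filter (much simpler than NTP's actual clock filter and discipline loop, so treat the names and numbers as illustrative only) applied to simulated offset samples whose network noise dominates the true error.

```python
import random

# Illustrative sketch only: a first-order exponential low-pass filter,
# not NTP's actual clock filter. All constants are made-up examples.
random.seed(42)

true_offset = 0.0005          # constant true clock error, seconds
measurement_noise = 0.005     # std-dev of per-sample network noise, seconds
alpha = 0.05                  # filter gain; smaller = heavier smoothing

filtered = 0.0
raw_errors, filtered_errors = [], []
for _ in range(2000):
    sample = true_offset + random.gauss(0.0, measurement_noise)
    filtered += alpha * (sample - filtered)           # low-pass update
    raw_errors.append(abs(sample - true_offset))
    filtered_errors.append(abs(filtered - true_offset))

# Compare mean error after the filter has settled: the disciplined value
# shows far less jitter than the raw measurements.
raw_jitter = sum(raw_errors[500:]) / len(raw_errors[500:])
disciplined_jitter = sum(filtered_errors[500:]) / len(filtered_errors[500:])
print(raw_jitter, disciplined_jitter)
```

Under these (assumed) noise statistics the disciplined error comes out several times smaller than the raw measurement jitter, which is the sense in which the individual offset measurements overstate the true error.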
NTP also assumes that there will be a wander component, and tries to
increase the polling interval until the wander component begins to dominate.
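The poll-interval idea can be sketched as follows, under the simplifying assumption that white measurement jitter is constant per sample while wander accumulates roughly linearly with the interval; the function name and constants are hypothetical, and NTP's real poll-adjustment logic differs in detail.

```python
# Hypothetical sketch: double the polling interval while the (assumed
# linear) accumulated wander over one interval stays below the per-sample
# measurement jitter, i.e. until wander begins to dominate.
def choose_poll_interval(jitter, wander_per_sec, min_poll=16, max_poll=1024):
    """Return the largest poll interval (seconds) at which accumulated
    wander has not yet overtaken the measurement jitter."""
    poll = min_poll
    while poll < max_poll and wander_per_sec * poll < jitter:
        poll *= 2
    return poll

# Example: 1 ms jitter against 1 us/s wander lets the interval grow a lot;
# 100 us/s wander pins it at the minimum.
print(choose_poll_interval(jitter=1e-3, wander_per_sec=1e-6))
print(choose_poll_interval(jitter=1e-3, wander_per_sec=1e-4))
```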
Your position, and I tend to agree with it, is that there is another
noise component, which results in occasional fast
transients. But as I said, the correct approach to that is not to graph
the resulting failure, but to improve the algorithms to give the best
estimate of the time under those circumstances, which is, I believe,
what you think chrony does. My guess is that chrony also produces
offsets with a standard deviation that is significantly greater than the
typical system clock error.
>> What you can do, is to use some hindsight, and make a slightly better
>> estimate of the true time by combining offsets from both before and
>> after the time the local clock was read. That gives you an advantage
> Unfortunately those offsets are not very useful, because the clock that
> read them has had its offsets and rates changed (by ntp) since then. I.e. the
> measured offsets are not a good estimate of the offsets from "true time".
Ideally, those adjustments are compensating for parameter changes in the
local clock and making it keep closer to true time. With chrony, the
correction history is also largely measuring things like temperature,
rather than true changes in the clock rate.
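The "hindsight" estimate above can be sketched like this: combine offset samples taken both before and after the instant of interest, here with a least-squares straight line. The function name and the choice of a linear fit are mine for illustration; neither NTP nor chrony necessarily does it this way.

```python
# Hypothetical sketch: estimate the offset at time t from samples that
# bracket t, via an ordinary least-squares linear fit. Illustrative only.
def hindsight_offset(samples, t):
    """samples: list of (time, offset) pairs before and after t.
    Returns the fitted line evaluated at t."""
    n = len(samples)
    sum_t = sum(ti for ti, _ in samples)
    sum_o = sum(oi for _, oi in samples)
    sum_tt = sum(ti * ti for ti, _ in samples)
    sum_to = sum(ti * oi for ti, oi in samples)
    denom = n * sum_tt - sum_t * sum_t
    slope = (n * sum_to - sum_t * sum_o) / denom
    intercept = (sum_o - slope * sum_t) / n
    return intercept + slope * t

# Offsets drifting linearly at 1 us/s; the estimate at t=10 draws on
# measurements from both sides of that instant.
samples = [(0, 0.0), (5, 5e-6), (15, 15e-6), (20, 20e-6)]
print(hindsight_offset(samples, 10))
```

In steady state this two-sided estimate converges to the same thing the disciplined clock already tracks, which is the point being argued above: the advantage only shows up at startup and after transients.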
>> during startup and after transients, but, in the steady state, the local
>> clock time is going to be the same as such an enhanced statistic.
>> In the steady state, all you can really deduce from the offsets is the
>> amount of noise in the measurements. You can then expect the amount of
>> noise in the local clock time to be several times less.
> No, the noise in the local clock may well dominate those offsets. This is
> what happens for example to a system which is controlled by a hardware
In that case, you have a system that is not matched to the noise
characteristics and you need to improve the algorithms. Again, to the
extent that you can reliably measure the error from true time, you
should be correcting the local clock by that amount, not measuring it.
My basic point in this whole thread is that if you can measure the true
error, in real time, you can correct it in real time, resulting in an
error that is statistically zero, based on historic measurements.
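That closing argument can be sketched numerically, under assumed noise figures: if the error from true time is measurable each step, feeding a fraction of it straight back as a correction drives the historic mean of the residual error toward zero. Everything here (gain, noise level, step count) is an illustrative assumption.

```python
import random

# Illustrative feedback loop: measure the clock error (with some
# measurement noise), correct by a fraction of it each step, and check
# that the residual error averages out to roughly zero over the run.
random.seed(1)

clock_error = 2e-3      # start 2 ms away from true time
gain = 0.5              # fraction of the measured error corrected per step
residuals = []
for _ in range(1000):
    measured = clock_error + random.gauss(0.0, 1e-4)  # noisy measurement
    clock_error -= gain * measured                     # real-time correction
    residuals.append(clock_error)

mean_residual = sum(residuals) / len(residuals)
print(mean_residual)
```

The mean residual lands within a few microseconds of zero in this run, i.e. an error that is statistically zero when judged against the historic measurements, even though any single instant still carries measurement-driven jitter.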