[ntp:questions] Choice of local reference clock seems to affect synchronization on a leaf node

unruh unruh at invalid.ca
Tue Nov 8 00:15:00 UTC 2011


On 2011-11-07, Dave Hart <hart at ntp.org> wrote:
> On Mon, Nov 7, 2011 at 21:58, unruh <unruh at invalid.ca> wrote:
>> Actually, that is not the way that ntpd works. It has no concept of
>> "frequency error".
>
> Sure does, the frequency error is the frequency= value reported by
> ntpq, internally in ntpd stored in drift_comp, and persisted between
> runs in the driftfile.  Perhaps you were thinking of short-term
> frequency error due to temperature changes?

Yes, ntpd does remember the frequency from one invocation to the
next, but it does not use the current frequency for anything within the
clock discipline, except to change it. It does not try to determine the
error in the frequency and use that to discipline the clock. It assumes
that all changes to the frequency are driven by the offsets. It could
measure its frequency against the "true time" frequency and thus use the
error in the frequency to also discipline the clock, but it does not
(except, as you say, as a modern kludge to try to speed up ntpd's initial
convergence).
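
To make concrete what I mean by "driven by the offsets", here is a
minimal sketch in C (not ntpd's code; the loop gain, polling interval
and offset values are invented for illustration) of a discipline that
adjusts the frequency from each offset sample alone, with no separate
frequency-error measurement ever being made:

    /* Offset-driven discipline sketch: the frequency is nudged from each
     * offset sample; no frequency error is ever measured directly.       */
    #include <stdio.h>

    int main(void)
    {
        double freq = 0.0;           /* accumulated frequency correction, PPM */
        double poll = 64.0;          /* polling interval, seconds             */
        double offsets[] = { 0.050, 0.040, 0.028, 0.015, 0.005 }; /* seconds  */

        for (int i = 0; i < 5; i++) {
            /* PLL-style update: the frequency changes in proportion to the
             * offset; the offset itself is only removed indirectly,
             * through this frequency change.                               */
            freq += 1e6 * offsets[i] / (poll * 16.0);   /* toy loop gain */
            printf("offset=%+.3f s  freq=%+.3f PPM\n", offsets[i], freq);
        }
        return 0;
    }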

>
>> All it knows is the offset. It then changes the
>> frequency in order to correct the offset. It does not correct the offset
>> directly.
>
> The offset is corrected directly when it exceeds the step threshold,
> 128 msec by default.

Yes, and that is an error condition which should never happen (except
perhaps at startup). It is a response to an anomaly, and is outside the
theory of operation of the ntp discipline loop. It is a highly
non-linear response, and it violates almost all of the design principles
(e.g. that the clock should always advance, and that the frequency
correction should never be larger than 500 PPM).
It is a kludge.
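
For what it is worth, the decision being discussed boils down to
something like the following sketch (illustrative only, not ntpd's
actual code, which carries extra state such as a stepout timer; the
128 ms and 500 PPM values are the documented defaults):

    #include <math.h>
    #include <stdio.h>

    #define STEP_THRESHOLD 0.128    /* seconds: step if |offset| exceeds this */
    #define MAX_FREQ       500e-6   /* dimensionless: 500 PPM slew limit      */

    static void discipline(double offset)
    {
        if (fabs(offset) > STEP_THRESHOLD) {
            /* Non-linear branch: set the clock directly, possibly backwards. */
            printf("offset %+.3f s: step the clock\n", offset);
        } else {
            /* Linear branch: amortize the offset at no more than 500 PPM.    */
            printf("offset %+.3f s: slew, >= %.0f s to amortize\n",
                   offset, fabs(offset) / MAX_FREQ);
        }
    }

    int main(void)
    {
        discipline(0.010);      /* small offset: slewed            */
        discipline(-0.300);     /* exceeds the threshold: stepped  */
        return 0;
    }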


>
>> It never figures out what the frequency error is.
>
> Sure it does, when started without a driftfile.


Another kludge to speed up ntpd's initial convergence, which would
otherwise be abysmally slow. And then it is never done again.
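
The measurement itself is simple enough; something along these lines (a
sketch only, with invented numbers, not ntpd's code): hold the
frequency, watch how the offset drifts over a known interval, and
divide:

    #include <stdio.h>

    int main(void)
    {
        double offset_start = 0.000;   /* seconds, at start of the interval */
        double offset_end   = 0.045;   /* seconds, at end of the interval   */
        double interval     = 900.0;   /* seconds between the two samples   */

        /* The frequency error is just the offset change over the interval. */
        double freq_ppm = 1e6 * (offset_end - offset_start) / interval;
        printf("measured frequency error: %+.1f PPM\n", freq_ppm);
        return 0;
    }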



>
>> All it does
>> is "If offset is positive, speed up the clock, if negative slow it down"
>> ( where I am defining the offset at "'true' time- system clock time").
>> (There is lots that goes into ntp's best estimate of the 'true' time,
>> which is irrelevant to this discussion)
>
> Irrelevant if you want to paper over the minimum delay clock filter,
> which you love to disparage and I view as a key error reduction step.

In this context, how it finds the 'true' time is irrelevant (i.e., the
context of how ntpd drives the clock to track the true time).

I find this again to be a kludge. Yes, I do love to disparage it,
because it is horrendously wasteful of precious data (throwing away
almost 90% of it) without much of a demonstrated benefit. Note that in its
handling of the "deviations from the mean" in the refclock sections, about
40% of the data is thrown away (60% kept) to get rid of the popcorn spikes
etc. But in handling network delays, only 12% of the samples are kept. Why?
As far as I can see it is a kludgy attempt to "solve" the genuine problem
of network delays, and ntpd itself admits it is a kludge, since the
huff-n-puff filter was then introduced as a totally different solution to
the same problem. Which is better? Should the software itself not decide
which one should be used in various circumstances? After all, the designer
should surely know under what conditions one works better than the
other, and that knowledge should surely be put to use in the software.
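
For readers who have not looked at it, the filter under discussion
amounts to something like this sketch (in C, with invented sample
values, not ntpd's code): of the last eight (offset, delay) pairs, only
the offset belonging to the smallest round-trip delay is used, i.e. one
sample in eight:

    #include <stdio.h>

    struct sample { double offset; double delay; };   /* seconds */

    int main(void)
    {
        struct sample shiftreg[8] = {
            { 0.012, 0.031 }, { 0.004, 0.009 }, { 0.020, 0.055 },
            { 0.007, 0.015 }, { 0.015, 0.040 }, { 0.005, 0.011 },
            { 0.018, 0.048 }, { 0.009, 0.022 },
        };

        int best = 0;
        for (int i = 1; i < 8; i++)
            if (shiftreg[i].delay < shiftreg[best].delay)
                best = i;

        /* Only this one offset survives; the other seven are discarded. */
        printf("selected offset %+.3f s (delay %.3f s)\n",
               shiftreg[best].offset, shiftreg[best].delay);
        return 0;
    }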

But again this is all irrelevant to the issue of how ntpd uses the data
that it does collect and keep to discipline the clock. 

>
> Cheers,
> Dave Hart


