[ntp:questions] PSYCHO PC clock is advancing at 2 HR per second

Dennis Ferguson dennis.c.ferguson at gmail.com
Thu Mar 22 02:21:50 UTC 2012


On 21 Mar, 2012, at 11:36 , unruh wrote:
> On 2012-03-21, Ron Frazier (NTP) <timekeepingntplist at c3energy.com> wrote:
>> 
>> I noticed that Dave Hart later posted this reply to your question.  I'll 
>> reference that below.
>> 
>> NTP's jitter is root mean squares of offsets from the clock filter
>> register (last 8 responses, more or less).
> 
> Strange, because ntp then takes that entry of those 8 with the shortest
> roundtrip time and uses only it to drive the ntp algorithm. Thus on the
> one hand it is using it as a measure of jitter and on the other hand
> saying it does not trust most of those values, with a distrust so deep
> it throws them away. Why would you be reporting anything for a set of
> data you distrust so deeply?

I see you keep pointing this out in various ways, but I really don't
understand the point.  If you are measuring data with non-Gaussian,
non-zero-mean noise superimposed, you need to find a statistic which is
appropriate for the noise to produce the best noise-free estimate of the
quantity you are interested in measuring.  If someone takes `n' samples
with a (slightly different) non-Gaussian noise distribution and finds the
median of the `n' (which is an individual sample) to use for further
processing, would you really call that "throwing away `n-1' of the samples"
rather than just computing the measure of central tendency which is most
appropriate for the noise distribution?  And if he went back over the data
to compute a measure of variability (perhaps computing the median square
deviation would be more appropriate), would that really be reporting
something "for a set of data you distrust so deeply"?  Unless I'm missing
something, this seems like a rather bizarre point of view.
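To make that concrete, here is a rough sketch in C (mine, not anything
from ntpd; the sample values are invented) of what "use the median as the
estimate, and a median square deviation as the variability" looks like:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Compare doubles for qsort(). */
    static int cmp_double(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    /* Median of n samples: sort a scratch copy and take the middle
     * element (the lower of the two middles when n is even). */
    static double median(const double *v, size_t n, double *scratch)
    {
        memcpy(scratch, v, n * sizeof(*v));
        qsort(scratch, n, sizeof(*scratch), cmp_double);
        return scratch[n / 2];
    }

    int main(void)
    {
        /* Eight made-up offset samples (seconds) with a couple of
         * large positive outliers, the sort of asymmetric noise a
         * queued network path produces. */
        double off[8] = { 0.0012, 0.0010, 0.0251, 0.0011,
                          0.0013, 0.0482, 0.0009, 0.0012 };
        double scratch[8], dev2[8];
        size_t n = 8, i;

        double m = median(off, n, scratch);

        /* Median square deviation: a dispersion measure built from
         * the very samples the median "rejected". */
        for (i = 0; i < n; i++)
            dev2[i] = (off[i] - m) * (off[i] - m);
        double msd = median(dev2, n, scratch);

        printf("median offset = %.4f s, median sq. dev. = %g\n", m, msd);
        return 0;
    }

Note that all `n' samples contribute to both numbers; nothing has been
thrown away, even though only one sample is passed on for further
processing.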

If you are looking for something to complain about in this particular bit
of machinery in ntpd I think there are much more interesting aspects you
might consider.  The best might be that this code causes ntpd to sometimes
process offsets which are quite old, in that they were measured at a time
well into the past.  In theory, PLLs and other feedback control mechanisms
are unconditionally destabilized by any delay in the feedback path.
This is why ntpd makes no use of knowledge of when an offset was measured;
it is feeding those offsets to a PLL, and a PLL has no way to deal with data
measured at any time other than "right now".  In practice (as opposed to
theory), approximating stability while using data which is significantly
delayed requires making the time constant of the PLL large enough that
the delay in the feedback path can be assumed to be approximately zero
by comparison.  The time constant of ntpd's PLL, and hence the stately
pace at which it responds to errors, is therefore directly related to the
worst-case delay between the measurement and the processing of offset
data that this filter introduces.
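To illustrate what I mean, here is a toy version of the minimum-delay
selection and the age problem it creates (again mine, with invented
numbers and field names; it is not ntpd's actual implementation):

    #include <stdio.h>

    /* A toy stand-in for one clock-filter entry; the real ntpd
     * structure is different, and these names are mine. */
    struct sample {
        double offset; /* measured offset, seconds */
        double delay;  /* round-trip delay, seconds */
        double age;    /* how long ago it was measured, seconds */
    };

    int main(void)
    {
        /* Last 8 exchanges, newest first.  Ages are multiples of
         * a 64 s poll interval. */
        struct sample f[8] = {
            { 0.0031, 0.052,   0.0 }, { 0.0012, 0.048,  64.0 },
            { 0.0044, 0.061, 128.0 }, { 0.0010, 0.039, 192.0 },
            { 0.0028, 0.055, 256.0 }, { 0.0015, 0.047, 320.0 },
            { 0.0036, 0.058, 384.0 }, { 0.0022, 0.051, 448.0 },
        };
        int i, best = 0;

        /* Minimum-delay selection: the sample with the shortest
         * round trip is the one least distorted by queueing, so
         * it is the one fed onward to the loop filter. */
        for (i = 1; i < 8; i++)
            if (f[i].delay < f[best].delay)
                best = i;

        /* The winner here is 192 s old: the PLL receives an
         * offset measured three polls ago as if it were current,
         * which is exactly the feedback delay discussed above. */
        printf("using offset %.4f s (delay %.3f s, measured %.0f s ago)\n",
               f[best].offset, f[best].delay, f[best].age);
        return 0;
    }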

There are so many things to complain about that actually make sense
(in reality everything can be justified in terms of tradeoffs, but people
can differ about which tradeoffs produce the most attractive result) that
I don't see why you keep harping on something which seems more like
nonsense.

Dennis Ferguson

