[ntp:questions] Re: Does NTP model clock skew?

Kevin MacLeod kevin-usenet at horizon.com
Sun Feb 13 23:47:35 UTC 2005


In article <1108325658.913576.274150 at l41g2000cwc.googlegroups.com>,
Affan <affanahmed at gmail.com> wrote:
>>
>> Just to be clear, let me define "clock skew":
>> 	clock skew is the difference in clock signal arrival times caused
>> 	by different transmission paths.
>>
>> NTP doesn't "cater to" clock skew, it tries to eliminate it!
>
> I actually consider clock skew to be the frequency difference of the
> oscillator in any node from what it is supposed to be. So even if it is
> synchronized at one instant to UTC, it would later go off target
> because its local oscillator does not tick at the frequency at which it
> should.

I don't think this is a widely used definition, and I think you would
be understood much better if you referred to that as "frequency offset"
(if talking about the general case of the difference between two clocks)
or "frequency error" (if comparing a clock to a standard).

The term "clock skew" is a well-established piece of hardware terminology,
referring specifically to the consistent portion of phase differences.
(Short-term variations are known as "jitter".)

Also, "cater to" generally means "accomodate" or "tolerate" in a
permissive sense ("the staff were used to catering to Lord Highbottom's
eccentricities"), rather than in the sense of correcting or opposing.

Since NTP is intended to deliver UTC with bounded error, it must
therefore generate a time scale with zero long-term average frequency
error.  Since no real oscillator has zero error, any such time
distribution system must be able to tolerate some frequency error
in its local clock.

This can be done with a simple type-I phase-locked loop (adding a
correction to the local clock frequency proportional to the measured
time error), but that requires a steady-state phase error to correct
for a frequency offset.
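
A toy version of that, just to make the distinction concrete (this is
only an illustration, not the ntpd code):

/*
 * Type-I discipline sketch: the loop filter is a plain gain, so the
 * frequency correction is simply proportional to the measured time
 * (phase) error.  A constant oscillator frequency offset therefore
 * leaves a non-zero steady-state phase error.
 */
double type1_freq_adjust(double phase_error, double kp)
{
    /* frequency correction proportional to the measured time error */
    return kp * phase_error;
}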

NTP uses a type-II phase-locked loop, correcting both phase error
and frequency error, to try to reduce steady-state phase error to zero.
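
The corresponding proportional-integral version (again only a sketch,
not the actual ntpd discipline):

/*
 * Type-II discipline sketch: the integrator accumulates an estimate of
 * the oscillator's frequency error, so a constant frequency offset can
 * be taken out while the steady-state phase error goes to zero.
 */
static double freq_estimate = 0.0;    /* integrated frequency correction, s/s */

double type2_freq_adjust(double phase_error, double kp, double ki, double dt)
{
    freq_estimate += ki * phase_error * dt;   /* integral (frequency) term */
    return freq_estimate + kp * phase_error;  /* total frequency adjustment */
}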

It can do this only to the extent that frequency offset is constant,
of course.  Tracking a changing frequency offset would require
higher-order fitting, which gets into stability issues.  The folks who
wrangle cesium standards routinely use one more term in their fits,
but that's it.

> Yes, RBS is only for broadcast-enabled networks with near-simultaneous
> reception times. However, what they do is interesting in that they
> actually fit a line to the phase offset values between them and another
> node (for simplicity, consider that a server). The slope of this line
> (that is, the amount by which the difference increases) gives an
> indication of the skew of our clock with respect to the server clock.

NTP does something similar, but recursively and in real time as opposed
to processing an array of samples and correcting past timestamps.
Compare Kalman filtering with linear least squares parameter fitting.
They both compute the same answer, but the Kalman filter is designed
to support efficient addition of new observations to an existing
solution.

You might look up the terms "growing-memory filter" and "fading-memory
filter" to see how it's done.

The big problem is dealing with noise.  A straight least-squares fit
is extremely vulnerable to outliers, and most of NTP's engineering is
devoted to dealing with the extreme perversity of real-world network
timing.  You will need more robust regression techniques.
Try using a Poisson, exponential, or Cauchy error distribution instead
of a normal one and see how your algorithms do.
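
Here is a quick way to see the problem (a toy experiment, not anything
from NTP), using the inverse-CDF trick to generate Cauchy noise: the
sample mean of Cauchy noise is no better than a single sample, while
the median tightens up as you add data.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    enum { N = 1001 };
    double s[N], sum = 0.0;
    const double true_offset = 0.005;   /* 5 ms, an arbitrary "true" offset */

    for (int i = 0; i < N; i++) {
        double u = (rand() + 0.5) / ((double)RAND_MAX + 1.0);  /* u in (0,1) */
        s[i] = true_offset + 0.001 * tan(M_PI * (u - 0.5));    /* Cauchy noise, 1 ms scale */
        sum += s[i];
    }
    qsort(s, N, sizeof s[0], cmp_double);

    printf("mean   = %.6f s\n", sum / N);   /* no better than a single sample */
    printf("median = %.6f s\n", s[N / 2]);  /* concentrates near 0.005 */
    return 0;
}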


Having said that, high-speed timestamping is generally done in
two steps: first, capture a "raw" timestamp, and then convert
that to a standard timescale.  A system call like gettimeofday() or
clock_gettime() does both, but it is often useful to separate the
two steps.
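
For example, on a Linux-ish system the two steps might look like this
(the choice of CLOCK_MONOTONIC_RAW and the simple cached offset are
just illustrative assumptions):

#include <time.h>

static double raw_to_real_offset;   /* CLOCK_REALTIME - CLOCK_MONOTONIC_RAW, seconds */

static double ts_to_sec(struct timespec ts)
{
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Refresh the raw->real mapping (done occasionally, not per event). */
void refresh_mapping(void)
{
    struct timespec raw, real;
    clock_gettime(CLOCK_MONOTONIC_RAW, &raw);
    clock_gettime(CLOCK_REALTIME, &real);
    raw_to_real_offset = ts_to_sec(real) - ts_to_sec(raw);
}

/* Step 1: cheap raw capture at the moment of the event. */
double capture_raw(void)
{
    struct timespec raw;
    clock_gettime(CLOCK_MONOTONIC_RAW, &raw);
    return ts_to_sec(raw);
}

/* Step 2: convert to the standard timescale, possibly much later. */
double raw_to_realtime(double raw_seconds)
{
    return raw_seconds + raw_to_real_offset;
}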

Given that, it's quite possible to include post-observation time
corrections in the algorithm which converts raw timestamps to the
standard timescale.  A programming interface and algorithm for doing
that would be most interesting.
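
One possible shape for such an interface (purely hypothetical, not an
existing API) is an explicit (offset, rate) mapping that can be
re-applied after the fact once better estimates are available:

/*
 * Raw-to-UTC mapping as explicit data: re-running the conversion with a
 * later, better (offset, rate) pair corrects timestamps captured before
 * the better estimate existed.
 */
struct timescale_map {
    double ref_raw;   /* raw timestamp where the mapping is anchored */
    double offset;    /* UTC - raw at ref_raw, seconds */
    double rate;      /* residual frequency error, s/s */
};

double raw_to_utc(double raw, const struct timescale_map *m)
{
    double dt = raw - m->ref_raw;
    return raw + m->offset + m->rate * dt;
}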

For the highest-accuracy work, data is reduced once relative to a local
standard and then corrected 30-60 days later, when BIPM Circular T comes
out and gives the difference between that local standard and global UTC.

This could be done, but for computer purposes, it's probably easier to
map straight from raw timestamps to corrected time.


