[ntp:questions] Re: Does NTP model clock skew?
David L. Mills
mills at udel.edu
Mon Feb 14 01:28:08 UTC 2005
In the computer science theory community "skew" means frequency offset.
To a machinist or cosmologist it means something else. You may notice I
never use that term, preferring "frequency offset" or "frequency"
instead. I have tried to promote "jitter" as time variations and
"wander" as short-term frequency variations. Physicists used to
precision time and frequency sources have much more precise terminology
for these terms. Nobody outside the NTP community knows what
"dispersion" really means; heck, I'm not sure I know myself. It's the
growth of tolerance with time, something even the community outside NTP
could surely use.
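The "growth of tolerance with time" can be made concrete with a small sketch. This is an illustration, not ntpd code: PHI below is the 15 ppm frequency-tolerance constant from the NTP specification, while the starting dispersion and elapsed time are invented numbers.

```python
# Dispersion as "growth of tolerance with time": the error bound on a clock
# widens linearly between measurements. PHI is the 15 ppm tolerance constant
# from the NTP specification; the other numbers here are invented.
PHI = 15e-6   # maximum assumed frequency tolerance (s/s)

def dispersion(initial, elapsed):
    # error bound = bound at the last measurement + worst-case drift since
    return initial + PHI * elapsed

d = dispersion(0.001, 64.0)   # 1 ms bound, 64 s since the last update
```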
Kevin MacLeod wrote:
> In article <1108325658.913576.274150 at l41g2000cwc.googlegroups.com>,
> Affan <affanahmed at gmail.com> wrote:
>>>Just to be clear, let me define "clock skew":
>>> clock skew is the difference in clock signal arrival times caused
>>> by different transmission paths.
>>>NTP doesn't "cater to" clock skew, it tries to eliminate it!
>>I actually consider clock skew to be the frequency difference of the
>>oscillator in any node from what it is supposed to be. So even if it is
>>synchronized at one instant to UTC, it would later go off target
>>because its local oscillator does not tick at the frequency at which it
>>should.
> I don't think this is a widely used definition, and I think you would
> be understood much better if you referred to that as "frequency offset"
> (if talking about the general case of the difference between two clocks)
> or "frequency error" (if comparing a clock to a standard).
> The term "clock skew" is a well-established piece of hardware terminology,
> referring specifically to the consistent portion of phase differences.
> (Short-term variations are known as "jitter".)
> Also, "cater to" generally means "accommodate" or "tolerate" in a
> permissive sense ("the staff were used to catering to Lord Highbottom's
> eccentricities"), rather than in the sense of correcting or opposing.
> Since NTP is intended to deliver UTC with bounded error, it must
> therefore generate a time scale with zero long-term average frequency
> error. Since no real oscillator has zero error, any such time
> distribution system must be able to tolerate some frequency error
> in its local clock.
> This can be done with a simple type-I phase-locked loop (adding a
> correction to the local clock frequency proportional to the measured
> time error), but that requires a steady-state phase error to correct
> for a frequency offset.
> NTP uses a type-II phase-locked loop, correcting both phase error
> and frequency error, to try to reduce steady-state phase error to zero.
> It can do this only to the extent that frequency offset is constant,
> of course. Tracking a changing frequency offset would require
> higher-order fitting, which gets into stability issues. The folks who
> wrangle cesium standards routinely use one more term in their fits,
> but that's it.
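The difference between the two loop types is easy to see in a toy simulation. This is not the NTP discipline algorithm: the gains kp and ki, the 50 ppm frequency offset, and the step count are all arbitrary, and each iteration stands for one measurement interval.

```python
# Toy comparison of type-I and type-II loops disciplining a clock whose
# oscillator runs 50 ppm fast. Gains and step count are arbitrary; this
# illustrates the loop orders, not the actual NTP algorithm.
def simulate(order, freq_offset=50e-6, kp=0.1, ki=0.02, steps=2000):
    phase = 0.0   # accumulated time error of the disciplined clock (s)
    integ = 0.0   # integrator state (used by the type-II loop only)
    for _ in range(steps):
        phase += freq_offset          # drift over one measurement interval
        error = phase                 # measured time error
        corr = kp * error             # proportional (type-I) term
        if order == 2:
            integ += ki * error       # integral term accumulates toward
            corr += integ             # the actual frequency offset
        phase -= corr                 # apply the correction
    return error                      # last measured time error

e1 = simulate(1)   # settles at a nonzero error: freq_offset / kp
e2 = simulate(2)   # integrator drives the measured error to (near) zero
```

With only the proportional term, the measured error settles at freq_offset / kp; adding the integrator lets the loop learn the frequency offset itself, so the steady-state phase error goes to zero, subject to the constant-offset caveat above.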
>>Yes, RBS is only for broadcast-enabled networks with near-simultaneous
>>reception times. However, what they do is interesting in that they actually
>>fit a line to the phase offset values between them and another node (for
>>simplicity, consider that node a server). The slope of this line (that is,
>>the amount by which the difference increases) gives an indication of the
>>skew of our clock with respect to the server clock.
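The line-fitting idea described above can be sketched in a few lines of ordinary least squares. This is the batch fit being described, not what NTP itself does, and the sample values are invented.

```python
# Least-squares line fit of offset-vs-time samples, as in the RBS-style
# approach described above. The slope is the frequency offset ("skew") of
# our clock relative to the server; all sample values are invented.
def fit_skew(times, offsets):
    n = len(times)
    tbar = sum(times) / n
    obar = sum(offsets) / n
    num = sum((t - tbar) * (o - obar) for t, o in zip(times, offsets))
    den = sum((t - tbar) ** 2 for t in times)
    slope = num / den                  # s/s: frequency offset
    intercept = obar - slope * tbar    # s: phase offset at t = 0
    return slope, intercept

# A clock running 40 ppm fast with a 1 ms initial offset, sampled every 16 s:
times = [16.0 * k for k in range(8)]
offsets = [1e-3 + 40e-6 * t for t in times]
skew, phase0 = fit_skew(times, offsets)
```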
> NTP does something similar, but recursively and in real time as opposed
> to processing an array of samples and correcting past timestamps.
> Compare Kalman filtering with linear least squares parameter fitting.
> They both compute the same answer, but the Kalman filter is designed
> to support efficient addition of new observations to an existing estimate.
> You might look up the terms "growing-memory filter" and "fading-memory
> filter" to see how it's done.
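As a minimal illustration of the fading-memory idea (not how ntpd implements it): fold each new sample into a running estimate with an exponentially decaying weight, so old observations fade out and no sample array is kept. The weight w is an arbitrary choice.

```python
# Fading-memory filter sketch: each new sample is folded into a running
# estimate with weight w, so the influence of old samples decays
# geometrically and nothing but the current estimate is stored.
def make_fading_filter(w=0.25):   # w is an arbitrary choice here
    state = {"est": None}
    def update(sample):
        if state["est"] is None:
            state["est"] = sample     # first sample seeds the estimate
        else:
            state["est"] += w * (sample - state["est"])
        return state["est"]
    return update

update = make_fading_filter()
for sample in [10.0, 10.0, 14.0, 10.0]:   # invented samples with one jump
    est = update(sample)                  # est recovers toward 10 after 14
```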
> The big problem is dealing with noise. A straight least-squares fit
> is extremely vulnerable to outliers, and most of NTP's engineering is
> devoted to dealing with the extreme perversity of real-world network
> timing. You need to use more robust regression techniques.
> Try using a Poisson, exponential, or Cauchy error distribution instead
> of a normal one and see how your algorithms do.
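A minimal illustration of why outliers matter, using invented offset samples with one delay spike of the kind real networks produce. The median here stands in for robust techniques generally; NTP's actual filters are more elaborate.

```python
import statistics

# Five plausible ~2 ms offset samples plus one delay spike. The mean is
# dragged far off by the single outlier; the median barely moves.
samples = [0.0021, 0.0019, 0.0020, 0.0022, 0.0018, 0.2500]

mean = statistics.mean(samples)       # pulled to roughly 43 ms by the spike
median = statistics.median(samples)   # stays near the true 2 ms offset
```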
> Having said that, high-speed timestamping is generally done in
> two steps: first, capture a "raw" timestamp, and then convert
> that to a standard timescale. A system call like gettimeofday() or
> clock_gettime() does both, but it is often useful to separate the
> two steps.
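The two-step pattern can be sketched as follows, assuming Python's time.monotonic_ns() as a stand-in for a raw counter and an invented offset/rate calibration for the conversion step.

```python
import time

def capture_raw():
    # step 1: cheap raw timestamp; monotonic_ns() stands in for a raw counter
    return time.monotonic_ns()

def to_standard(raw_ns, offset_s, rate):
    # step 2: affine map from the raw timescale to the standard one; offset_s
    # and rate would come from whatever discipline algorithm is running
    return offset_s + rate * (raw_ns * 1e-9)

raw = capture_raw()                    # fast path: just grab the counter
t = to_standard(raw, offset_s=1_700_000_000.0, rate=1.0 + 40e-6)
```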
> Given that, it's quite possible to include post-observation time
> corrections in the algorithm which converts raw timestamps to the
> standard timescale. A programming interface and algorithm for doing
> that would be most interesting.
> For the highest-accuracy data, data is reduced once relative to a local
> standard and then corrected 30-60 days later when BIPM circular T comes
> out, giving the difference between the local standard and global UTC.
> This could be done, but for computer purposes, it's probably easier to
> map straight from raw timestamps to corrected time.
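Keeping the raw timestamps is what makes post-observation correction possible: the same samples can be re-reduced when a better calibration arrives, Circular-T style. All numbers in this sketch are invented.

```python
# Raw counter readings are kept, so they can be reduced twice: once against
# a preliminary calibration, and again when the refined one arrives.
raw_samples = [1_000, 2_000, 3_000]   # arbitrary raw units

def reduce_ts(raw, offset, rate):
    # affine reduction from the raw timescale to a corrected one
    return offset + rate * raw

preliminary = [reduce_ts(r, offset=0.0, rate=1.0) for r in raw_samples]
# ...30-60 days later, a refined calibration supersedes the first pass:
final = [reduce_ts(r, offset=0.5, rate=1.0 + 1e-6) for r in raw_samples]
```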