[ntp:questions] NTP on CubieBoard
elliott.ch at comcast.net
Tue Oct 14 10:50:23 UTC 2014
I don't understand this paragraph at all:
>The correct quality measure is jitter, rather than offset. offset
>varies from sample to sample but still doesn't tell you the systematic
>error in the time.
From Mills, D., et al. (2010), RFC 5905: Network Time Protocol Version 4:
Protocol and Algorithms Specification, Internet Engineering Task Force:
"The NTP performance model
includes four statistics that are updated each time a client makes a
measurement with a server. The offset (theta) represents the
maximum-likelihood time offset of the server clock relative to the
system clock. The delay (delta) represents the round-trip delay
between the client and server. The dispersion (epsilon) represents
the maximum error inherent in the measurement. It increases at a
rate equal to the maximum disciplined system clock frequency
tolerance (PHI), typically 15 ppm. The jitter (psi) is defined as
the root-mean-square (RMS) average of the most recent offset
differences, and it represents the nominal error in estimating the
clock offset."
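To make the dispersion statistic concrete, here is a short Python sketch (not
ntpd's actual code) of how epsilon grows at the maximum frequency tolerance
PHI = 15 ppm between measurements:

```python
PHI = 15e-6  # maximum disciplined clock frequency tolerance, in s/s (15 ppm)

def dispersion(epsilon0, elapsed_s):
    """Dispersion (seconds) after elapsed_s seconds, starting from epsilon0."""
    return epsilon0 + PHI * elapsed_s

# Over a typical 64 s poll interval, dispersion grows by about 0.96 ms.
print(dispersion(0.0, 64))
```

This is why a client that polls infrequently reports a larger maximum error
even if its measurements are individually excellent.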
In other words, offset is defined as an estimator of the difference between
the server clock (or a consensus of several server clocks if NTPD
believes it is in synchronization) and the client clock.
Jitter is an estimate of error in computing the offset.
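As a sketch of one plausible reading of the quoted definition (RMS of the
most recent offset differences; ntpd's clock-filter algorithm differs in
detail), with hypothetical sample values:

```python
import math

def jitter(offsets):
    """RMS average of differences between successive offset samples (seconds).
    A sketch of the RFC 5905 wording, not ntpd's actual clock filter."""
    diffs = [b - a for a, b in zip(offsets, offsets[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

samples = [0.0012, 0.0009, 0.0011, 0.0010]  # hypothetical offsets, seconds
print(jitter(samples))
```

Note that a constant systematic error cancels out of the differences, which
is exactly the point being made: jitter measures scatter, not bias.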
Do you have any quarrel with those two statements?
"The four most recent timestamps, T1 through T4, are used to compute
the offset of B relative to A
theta = T(B) - T(A) = 1/2 * [(T2-T1) + (T3-T4)]
and the round-trip delay
delta = T(ABA) = (T4-T1) - (T3-T2)." (Mills, et al., RFC 5905)
where T1 is the departure time of the NTP packet measured on the client
T2 is the packet arrival time measured on the server
T3 is the packet departure time measured on the server
T4 is the packet arrival time measured on the client computer.
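For concreteness, the two formulas above translate directly into code. The
timestamp values here are hypothetical (server clock 5 ms ahead, symmetric
20 ms round trip):

```python
def offset_and_delay(t1, t2, t3, t4):
    """theta and delta per RFC 5905, from the four timestamps described above.
    t1: client departure, t2: server arrival,
    t3: server departure, t4: client arrival (all in seconds)."""
    theta = ((t2 - t1) + (t3 - t4)) / 2.0
    delta = (t4 - t1) - (t3 - t2)
    return theta, delta

theta, delta = offset_and_delay(0.000, 0.015, 0.016, 0.021)
print(theta, delta)  # roughly 0.005 s offset, 0.020 s delay
```

Note the 1/2 in theta assumes the outbound and return path delays are equal;
any asymmetry becomes a systematic error that no amount of averaging removes.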
Since ultimately the system offset is a weighted average of the measured
offsets between the client and server clocks, how could it be anything but an
estimate of the time difference between the client computer's time and UTC
standard time?
On my system, there are two sources of systematic error in the time.
When the load is shed at 11:00 PM the delay immediately declines from
about 0.23 ms to about 0.19 ms. From 11:00 PM to about 5:00 AM, the
frequency offset changes from about -52.3 ppm to about -54.4 ppm, and
then rises slowly to about -52.3 by 10:00 AM, whereupon it
seems to vary randomly between -52.55 and -51.9 ppm until about 11:00 PM.
These values were read from NTP_Plotter displays.
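To put those frequency numbers in perspective, here is a quick conversion
sketch (my arithmetic, not from NTP_Plotter):

```python
def drift_per_day(ppm):
    """Seconds of clock drift per day for a given frequency offset in ppm."""
    return ppm * 1e-6 * 86400

# -52.3 ppm corresponds to roughly -4.5 s/day of raw drift, which ntpd
# continuously corrects; the ~2.1 ppm overnight swing is what shows up
# as systematic time error between corrections.
print(drift_per_day(-52.3))
```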
Do you know of any other sources of systematic error in NTPD's computation
of the offset?
> -----Original Message-----
> From: questions-bounces+elliott.ch=comcast.net at lists.ntp.org
> [mailto:questions-bounces+elliott.ch=comcast.net at lists.ntp.org] On
> Behalf Of David Woolley
> Sent: Monday, October 13, 2014 5:52 AM
> To: questions at lists.ntp.org
> Subject: Re: [ntp:questions] NTP on CubieBoard
> On 13/10/14 09:23, Rob wrote:
> > On the PC platform, with a recent development ntpd I can achieve
> > PPS sync with offset within a couple of us on systems in normal
> > environment, and well within 1us in a temperature conditioned room.
> The correct quality measure is jitter, rather than offset. offset
> varies from sample to sample but still doesn't tell you the systematic
> error in the time.