[ntp:questions] Why does GPS time diverge from system time?

Dave Hart hart at ntp.org
Fri Apr 13 23:20:34 UTC 2012


On Fri, Apr 13, 2012 at 19:01, Charles Elliott <elliott.ch at verizon.net> wrote:
> I just installed NTPD version ntp-4.2.7p265-win-x86-bin from Dave Hart's web
> site.  I am using a SiRF BU-353 (USB) GPS device.  The time2 fudge factor is
> 0.372258.  NTPD is always synced to the GPS device since all the remote
> clocks are marked noselect.  From clockstats.20120413, the first entry after
> the ntpd restart indicated a difference between the GPS and system time of
> 0:00:00.450 seconds.  Here is the clockstats line:
>
> 56030 48614.450 127.127.20.2
> $GPRMC,133014.000,A,3959.5981,N,07507.5565,W,0.51,40.43,130412,,,A*4F
>
> The last clockstats line indicates an offset between GPS and system time of
> 0:00:00.658 seconds:
>
> 56030 56152.658 127.127.20.2
> $GPRMC,153552.000,A,3959.5979,N,07507.5549,W,0.20,74.34,130412,,,A*46
>
> I have copied all the clockstats lines since the restart below, where all
> duplicate offsets have been deleted.  You can see that the difference
> between GPS and system time has been increasing almost monotonically.   The
> average offset since the restart from the offsets computed by NTPD and
> output in the loopstats file is -0.054193 (ms), while the stdDev is
> 1.507155.  The maximum and minimum offsets are about +5 and -5 ms,
> respectively, while the overwhelming majority of offsets are between +- 3
> ms.
>
> Why is the difference between GPS and system time diverging?  Shouldn't they
> be converging?

I'd like to see loopstats showing the problem, rather than clockstats.
With the NMEA driver as the only selectable source, loopstats will be
clearer, as it records the offset directly instead of requiring
subtraction.
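For what it's worth, the clockstats subtraction works like this: the
second field of each clockstats line is the system-clock receive
timestamp in seconds past UTC midnight, and the first $GPRMC field is
GPS time of day as hhmmss.sss, convertible to the same scale.  A
minimal sketch in C (the constants are just the first sample line
from your mail):

    #include <stdio.h>

    /* Sketch: recover the GPS-vs-system difference from one
     * clockstats line.  Field 2 of the line is the system-clock
     * receive timestamp (seconds past UTC midnight); the first
     * $GPRMC field is GPS time of day as hhmmss.sss. */
    int main(void)
    {
        double sys_tod = 48614.450;     /* clockstats timestamp */
        double gps_hms = 133014.000;    /* $GPRMC hhmmss.sss    */

        int hh = (int)(gps_hms / 10000.0);
        int mm = (int)(gps_hms / 100.0) % 100;
        double ss = gps_hms - hh * 10000 - mm * 100;
        double gps_tod = hh * 3600 + mm * 60 + ss;

        printf("GPS-to-system difference: %.3f s\n",
               sys_tod - gps_tod);      /* prints 0.450 */
        return 0;
    }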

As ever, it's known that the NMEA timing of some GPSes wanders
substantially; that may be part of the reason for what you're seeing.

> Also, why is the delay for all remote clocks always 0.977 in this version of
> NTPD, whereas normally it is about 0.230 ms?  In addition, why is the most
> common computed jitter equal to 0.977?

You didn't mention which version of Windows, but the 0.977 minimum
delay and jitter values tell me it's Vista or Win7, and that ntpd has
disabled interpolation: the native Windows clock is advancing once
per millisecond, too fast for ntpd to schedule enough samples at 1
msec scheduling precision to accurately correlate the performance
counter timescale with the system clock's.  As a result, ntpd is
using the system clock directly, as it will have reported in its
startup log messages, and as can be seen with ntpq -c "rv 0
precision", which will report -10, or ~1 msec.

With the local clock precise only to 1 msec, ntpd refuses to trust
apparent delays of less than 0.977 ms (2**-10 sec), which are mere
measurement noise, and instead floors delays at 0.977 ms.  Similarly,
given the low-precision clock, jitter figures below 0.977 ms would be
fiction: the clock can't measure differences smaller than that.
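To put a number on that floor: a precision of -10 means the clock
resolves 2**-10 s = 1/1024 s, about 0.000977 s, per step.  A sketch
of the flooring idea (illustrative only, not ntpd's actual code):

    #include <math.h>
    #include <stdio.h>

    /* Sketch (not ntpd's actual code): floor a measurement at the
     * resolution implied by ntpd's precision, 2**precision sec. */
    static double floor_at_precision(double seconds, int precision)
    {
        double res = ldexp(1.0, precision);  /* 2**precision */
        return seconds < res ? res : seconds;
    }

    int main(void)
    {
        printf("resolution: %.6f s\n", ldexp(1.0, -10)); /* 0.000977 */
        printf("0.2 ms delay -> %.6f s\n",
               floor_at_precision(0.0002, -10));  /* floored: 0.000977 */
        printf("2.0 ms delay -> %.6f s\n",
               floor_at_precision(0.0020, -10));  /* kept:    0.002000 */
        return 0;
    }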

I wish I could tell you what causes your system clock to switch from
its boot-time default of ticking every 15.6 msec to once every msec,
but so far the trigger(s) for that behavior by Vista and Win7 remain a
mystery.  The good news is Win8 provides an alternate high-precision
system clock, and the last few ntp-dev snapshots will use it.  On
Win8, ntpd should perform comparably to the many other systems with
high-precision system clocks.
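If you want to poke at that clock yourself, the Win8 interface is
GetSystemTimePreciseAsFileTime().  A minimal sketch of reading it
(requires a Windows 8 SDK and runtime):

    #define _WIN32_WINNT 0x0602   /* target Win8 for the declaration */
    #include <windows.h>
    #include <stdio.h>

    /* Sketch: read the Windows 8+ precise system clock.
     * GetSystemTimePreciseAsFileTime() returns system time with
     * sub-microsecond granularity, vs. the 1-15.6 msec tick behind
     * GetSystemTimeAsFileTime() on earlier Windows. */
    int main(void)
    {
        FILETIME ft;
        ULARGE_INTEGER t;

        GetSystemTimePreciseAsFileTime(&ft);
        t.LowPart = ft.dwLowDateTime;
        t.HighPart = ft.dwHighDateTime;
        printf("100 ns units since 1601-01-01 UTC: %llu\n",
               (unsigned long long)t.QuadPart);
        return 0;
    }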

Cheers,
Dave Hart

