[ntp:questions] Garmin LVC 18x jitter problem

Charles Elliott elliott.ch at comcast.net
Thu Jul 11 10:04:14 UTC 2019


Hello:
	This email actually identifies two
problems:

1.  The pictures show that the algorithm
stops working after a while.

2.  The control algorithm is not precise
enough to control the frequency effectively
when PPS is the driver.

I put print statements in the refclock_sample
routine and have watched problem 1 occur.
Part of this is documented in Mills' book
(Mills, D. L. (2011). Computer Network Time
Synchronization: The Network Time Protocol on
Earth and in Space (Second ed.). Boca Raton,
FL: CRC Press.), which says that if an error
is discovered in the NMEA output, the PPS is
turned off (and, allegedly, other external
clocks are used to set the time).  It is true
that the PPS is being turned off, but I have
not been able to discover who or what is
doing that.  It looks and acts like the NMEA
clock is being un-configured, but again I
can't find out how that is occurring.  The
symptom is that pp->io.fd is being changed
from a positive integer file descriptor to
-1.  In any case, instead of NTPD defaulting
to a different time server, it keeps changing
the frequency to eliminate the error in the
PPS signal, which has now become a constant.
As the pictures show, that is why the error
becomes larger and larger.  There is no point
in turning off the PPS signal if NTPD
discovers an error in one NMEA sentence.  If
anyone can figure out how this is occurring I
would appreciate hearing about it.

Another question is, "Why does the NMEA
driver think there is an error such that the
PPS needs to be turned off?"  The GPS clock
that I am using dumps about 1194 bytes into
the UART buffer every second; the buffer is
only 1024 bytes in length.  Changing the send
and receive buffer lengths to 1024+256 in
line 345 of termios.c seems, for now, to have
fixed this problem, though not always.
In any case, making the buffers larger does
no harm. 


Problem 2 is also alluded to in Mills' book,
which says that initially, right after
startup, NTPD makes large corrections in
frequency to try to bring the clock into
synchronization, but as time goes on the
correction factor is made smaller and smaller
until a constant is reached.  On my system
this constant is too large to make the tiny
corrections necessary at microsecond
accuracy.  So my system hunts back and
forth across zero when it is controlled by
the PPS input.  This is not PID (Proportional
+ Integral + Derivative) control, nor even PD
control (PID minus the integral term, which
is often dropped because it can cause a yo-yo
effect and is difficult to tune).  The gods
have asked me to put
fuzzy PD control into NTPD, but I simply
haven't had time to do it yet.  I built the
fuzzy PD control example of a truck backing
into a parking slot from an AI book several
years ago, and it works quite well, or at
least that example did.

Hope this helps.

Charles Elliott

-----Original Message-----
From: questions
[mailto:questions-bounces+elliott.ch=comcast.
net at lists.ntp.org] On Behalf Of David Taylor
Sent: Thursday, July 11, 2019 12:42 AM
To: questions at lists.ntp.org
Subject: Re: [ntp:questions] Garmin LVC 18x
jitter problem

On 10/07/2019 22:15, Michael Haardt wrote:
> I use a Garmin LVC 18x with a Raspberry Pi,
process NMEA with gpsd 
> (SHM driver) and access PPS by GPIO (kernel
PPS driver).  gpsd allows 
> monitoring and passing PPS right into ntpd
avoids gpsd conversion of PPS.
> 
> Sometimes this works, but then there are
times where both clocks are 
> thrown out as falsetickers.  I believe
that's due to the strange 
> jitter behaviour of the NMEA data.  If I
understood the clock 
> selection right, then basically the
measured jitter forms an interval 
> around the offset and the intersection
interval is the root 
> dispersion.  Should the NMEA clock have no
intersection with the used 
> PPS clock, it is a falseticker, but since
PPS depends on it, PPS is so as well.
> 
> I graphed offset and jitter from two days
peerstats where I was locked 
> on PPS (tos mindist 0.1) and the system ran
stable without the need to 
> adjust the crystal frequency at constant
environment temperature.
> The first day was mostly fine, but in the
middle of the second day, 
> the GPS jittered really badly and there are
many occasions where the 
> jitter interval did not intersect with 0.
The ntpd jitter estimation 
> works as expected for a normal
distribution, but the distribution is 
> clearly
> different:
> 
> http://www.moria.de/~michael/tmp/offset.svg
> http://www.moria.de/~michael/tmp/jitter.svg
> 
> Using the tos mindist extends the
intersection interval, but that 
> affects the root dispersion.  That's
correct if it is needed to have 
> an intersection between different clocks of
low jitter, but in my case 
> the problem is a wrong (too low) jitter
estimation of a clock I only 
> need as PPS reference.
> 
> Is there any way to specify the precision
manually, like fudge minjitter?
> Clearly the jitter suffices to keep the PPS
clock running and I would 
> like to have PPS determine the root
dispersion, because the PPS clock 
> has a jitter of 4 us.
> 
> This problem seems to have come up a number
of times in the past, but 
> I never saw the root dispersion impact of
tos mindist mentioned and I 
> suspect in a number of cases configuring a
minimal jitter would have 
> been a better solution.
> 
> Michael

Michael,

I'm not an expert in this, but the timing
seems to be the same in both graphs, the
problem occurring around 8000.  Could that be
due to someone with a GPS jammer parking
nearby?

I always try to have more than just the NMEA
source as a "prefer" 
server if possible, something from the
Internet will be much more reliable than the
serial data!  Try adding a "pool" directive
or some known good servers.

--
Cheers,
David
Web: http://www.satsignal.eu
