[ntp:questions] How to calculate shm + pps offsets?

Guido Gavilanes gavilanes at ismb.it
Tue Jul 10 11:28:28 UTC 2018



I am currently using a u-blox M8N both for positioning and for time
synchronization (as a stratum 0 source and a stratum 1 server to other
subsystems). To achieve this I have gpsd listening to the device on a
local port and publishing timing/positioning data in shared memory (to
be visible to NTP). I have also wired the PPS electrical signal from the
GNSS receiver to a GPIO handled by the pps-gpio Linux driver. That works
fine on my system.
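
For reference, on boards where pps-gpio is enabled through a device-tree
overlay the setup looks something like this (the pin number and serial
device path are illustrative examples, not my exact values):

# /boot/config.txt (device-tree overlay; GPIO 18 is only an example)
dtoverlay=pps-gpio,gpiopin=18

# start gpsd on the receiver's serial port (path is an example)
gpsd -n /dev/ttyAMA0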

PPS has been tested with the ppstest tool, and ntpshmmon shows
time/position exports to shared memory every second.
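
Concretely, the checks look like this (device paths assumed from my
setup; unit 1 of the ATOM driver in the config below reads /dev/pps1):

# pulse-per-second events visible at the kernel PPS device
sudo ppstest /dev/pps1

# gpsd exporting time to the NTP shared-memory segments
ntpshmmon -n 10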

The problem is that, up to now, I haven't found a way to accurately
calibrate the initial time offset estimates (the time1 fudge values)
used by ntp. I used a trial-and-error approach to arrive at a
combination of calibration offsets for [PPS, SHM] that keeps my local
clock synchronized for as long as possible. The longest I have achieved
with that combination is 300 seconds after an ntp service restart.
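
The closest thing to a method I have is averaging the offset column of
peerstats for each source and folding the result into the time1 fudge
values; a rough sketch (assuming the default peerstats format, where the
source address is the third field and the offset the fifth):

# mean offset of the SHM source over the logged period
awk '$3 == "127.127.28.0" { sum += $5; n++ } END { if (n) print sum / n }' \
    /netlab/log/ntpstats/peerstats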

After an ntp restart, ntpq shows:

root@hostname:~# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*SHM(0)          .UBL.            0 l    6   16    17    0.000   -5.601   7.147
oPPS(1)          .PPS.            0 l    5   16     7    0.000   -2.434   0.391

but after a (short) while, ntp marks both sources as falsetickers:

root@hostname:~# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
xSHM(0)          .UBL.            0 l    4   16   377    0.000  -11.125   5.132
xPPS(1)          .PPS.            0 l    3   16   377    0.000   -2.701   0.200

Sometimes it comes back into sync, and it goes back and forth like that.
For my purposes this might be acceptable (since the local clock does not
diverge much before the next ntp resync), but when I apply the same ntp
configuration to another system with exactly the same hardware/software
configuration (even the same GNSS antenna under open sky), the
synchronization state is not stable (ntpd declares falsetickers almost
all the time). This makes me think that the ntp configuration I have
arrived at is not optimal.
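
When it flips, the change is visible in the association listing, where
the condition column moves between sys.peer/pps.peer/candidate and
falsetick:

ntpq -c as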

The NTP configuration I have is this:

driftfile /netlab/log/ntp.drift
statsdir /netlab/log/ntpstats/
statistics loopstats peerstats clockstats
logfile /netlab/log/NTP/ntp.log
enable calibrate
filegen clockstats ntp.stats clockstats type day enable

# SHM driver doc:
# time1   Specifies the time offset calibration factor, in seconds and
#         fraction, with default 0.0.
# time2   Maximum allowed difference between remote and local clock, in
#         seconds. Values <1.0 or >86400.0 are ignored, and the default value
#         of 4hrs (14400s) is used instead. See also flag 1.
# stratum Specifies the driver stratum, in decimal from 0 to 15, with default 0.
# refid   Specifies the driver reference identifier, an ASCII string from one
#         to four characters, with default SHM.
# flag1 0 | 1  Skip the difference limit check if set. Useful for systems
#         where the RTC backup cannot keep the time over long periods without
#         power and the SHM clock must be able to force long-distance initial
#         jumps. Check the difference limit if cleared (default).
# flag2 0 | 1  Not used by this driver.
# flag3 0 | 1  Not used by this driver.
# flag4 0 | 1  If flag4 is set, clockstats records will be written when the
#         driver is polled.
server 127.127.28.0 mode 1 minpoll 4 maxpoll 6 prefer
fudge 127.127.28.0 time1 0.040 time2 1.00 stratum 0 flag1 1 flag4 1 refid UBL

# driver 22 (ATOM PPS)
# flag2  Specifies PPS capture on the rising (assert) pulse edge if 0
#        (default) or falling (clear) pulse edge if 1
# flag3  Controls the kernel PPS discipline: 0 for disable (default),
#        1 for enable
server 127.127.22.1 minpoll 4 maxpoll 4 version 4 prefer
# enable PPS API
fudge 127.127.22.1 flag2 0 time1 -0.040 stratum 0 flag3 1
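
After each change to time1 I restart the daemon and watch the offsets
settle (the service name may differ per distribution):

sudo systemctl restart ntp && watch -n 1 ntpq -pn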

(my apologies for the long intro)

Is there a way to calibrate the offsets systematically, so that
synchronization is maintained stably (obviously under optimal sky
visibility)?



