[ntp:questions] POSIX leap seconds versus the current NTP behaviour

Miroslav Lichvar mlichvar at redhat.com
Wed May 11 10:55:15 UTC 2011


On Fri, May 06, 2011 at 12:44:42PM +0800, Dennis Ferguson wrote:
> Of course I do want to adjust the system's clock too, I just can't
> do it the way the NTP code was doing it.  What could be done,
> however, was design a system call interface which allowed time and
> frequency adjustments (which are done solely with arithmetic as
> well), but did so in a way which returned enough information about
> what was done to allow one to precisely compute the time one would
> have gotten from the unadjusted clock as a function of the adjusted
> clock's timestamps.

This is very interesting. Do you have a description of the new
interface? I'd really like to see something like that supported in
stock kernels, although I'm primarily interested in Linux. I've
proposed to the subsystem's kernel maintainer an extension of the
adjtimex interface: a variant of the SINGLESHOT mode which would allow
slewing at nanosecond resolution and at a specified rate, and which
would report the exact timestamp at which the adjustment started. This
should allow us to accurately reconstruct any timestamp in history as
if the adjustment had never happened or had already completed.

Currently, we use an ugly combination of three different slewing
mechanisms, each with its own shortcomings. The first is a temporary
frequency/tick adjustment through adjtimex(): samples collected while
the adjustment is running are accurate, but the total adjustment is
not, so with each frequency change an error has to be estimated and
added to the dispersion of the old samples. The second is adjtime(),
which slews accurately at microsecond resolution, but the reported
remaining adjustment is updated only once per second, which means
samples collected while it's running have an error of up to 500 us,
and it's hard to determine when exactly the adjustment finished (or
started). The third is the PLL in FREQHOLD mode, which allows
nanosecond resolution, but it has the same problems as adjtime() and
its error is even harder to estimate. It's a horrible mess, but it
seems able to keep the clock stable to 200-300 nanoseconds at a 16 s
update interval, on a machine with a PPS refclock (1 us jitter) and an
ordinary clock oscillator (wander estimated at 1 ppb/s).

> And as for results transferring time from the card to the system
> clock, I have found that if it samples the offset 4 times per second
> and processes that data to determine time and frequency errors
> (using a least squares fit, after outlier filtering) then, if an
> adjustment is made only when it computes a time or frequency result
> which differs from the current clock setting at an 80% confidence
> level, it will typically end up making an adjustment roughly every
> 10 seconds or so with the time adjustments tending to be about 10
> nanoseconds in size and the frequency adjustments being very roughly
> on the order of 10^-9.

Impressive numbers. I'd expect larger offset corrections if the
frequency needs to be changed by 10 ppb every 10 seconds, though. I
assume you have an ordinary clock oscillator without any
stabilization.

How many samples do you use to make the fit? Is it fixed or variable?
We use the runs statistical test (on the number of runs of the
offset's signs) and keep the maximum number of samples that passes the
test. For best performance, I think it should correspond to the Allan
intercept.

Thanks,

-- 
Miroslav Lichvar


