[ntp:questions] POSIX leap seconds versus the current NTP behaviour

Dennis Ferguson dennis.c.ferguson at gmail.com
Fri May 6 04:44:42 UTC 2011


Dave,

Yes, that would be me.  Long time no talk.

POSIX time is also UTC, so that is in accord.  Moreover, POSIX "seconds since
an epoch" timestamps and NTP "seconds since an epoch" timestamps are "UTC" in
exactly the same way, in that they represent a "UTC" timescale where all days
are exactly 86400 seconds long.  NTP and POSIX seconds timestamps are therefore
precisely a constant apart (i.e. the 1970-1900 epoch difference) at all times
except, as I read it, while a leap second is being inserted (that is, during the
second starting at 23:59:60.00 UTC), when they diverge by an additional second.
They come back into alignment at 00:00:00.
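For concreteness, the epoch constant works out as below (a sketch; the
conversion is only meant for timestamps outside the leap insertion itself):

```python
# Offset between the NTP epoch (1900-01-01) and the POSIX epoch (1970-01-01):
# 70 years of days that are all exactly 86400 seconds long, 17 of those
# years being leap years.
NTP_TO_POSIX = (70 * 365 + 17) * 86400   # 2208988800 seconds

def ntp_to_posix(ntp_seconds):
    """Convert NTP seconds-since-1900 to POSIX seconds-since-1970."""
    return ntp_seconds - NTP_TO_POSIX
```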

So we're not talking about the difference between POSIX and UTC time; we're instead
talking about the difference between POSIX and NTP time at the point where both of the
latter timescales are going to have a discontinuity that UTC just doesn't have, and
hence where neither timescale can claim superior UTCness; they're just different.
And while the implementation of leap seconds may be much different from the code I
wrote a long time ago, the basic action taken is still the one I chose (or maybe it
was already chosen in the fuzzball code and I just wrote code which did the same
thing): it moves the NTP timescale back a second just after 23:59:60 UTC, while
POSIX moves it back just before 00:00:00 UTC.  These seem to me like equally fine
choices; I was mostly wondering whether there are reasons I'm missing why they
aren't equally fine.

While this may be trivia I guess it does inadvertently creep up on the more fundamental
issue I've recently been thinking about: In a situation where ntpd and the kernel don't
necessarily need to be in perfect agreement about the time, so that kernel timekeeping
policy could be chosen for the benefit of the kernel and its time consumers, rather than
ntpd, what would be done differently?  E.g. if those time-consuming applications really
want the POSIX version of the time, is there a reason why this isn't a good choice?

As for where this is coming from, I have a PCI-X board which allows the computer it is
plugged into to read an adjacent GPS receiver's time with a precision of about 3 ns, and
with an undetermined fixed-size sampling offset error that may (arguably) be as low as
+/-6 ns with respect to the PPS signal at the computer's end of the coax from the receiver.
In looking at how to transfer this time to the system clock, and now having a time source
capable of measuring things with this precision, I discovered that the time delivered by a
current NetBSD kernel (whose code seems to have been directly cloned from FreeBSD at some
point) jitters by on the order of hundreds of nanoseconds or more even when it is free-running
without adjustment, even using the 0.4 ns precision CPU cycle counter as the hardware
time source.  When I looked at why this was I came to the conclusion it was likely because
the manipulations being done at hardclock() interrupt time were unavoidably incorporating
(at least I couldn't figure out how to avoid it) the clock interrupt latency variations
into the system time.  The best way to fix that seemed to be to remove all clock-diddling
code from hardclock(), which took the ntp code with it, and to replace that with procedures
that were continuous with respect to a non-interrupting counter clock.  This, in turn,
constrains how the clock can be adjusted (though not the ultimate precision of an adjustment,
which can be excellent; only how you get there is constrained), leaving no good way to add
support for the ntp timex adjustment interface back in.  To avoid being stuck with nothing
better than adjtime() I had to design a new system call interface that wasn't the ntp one,
but I wanted to do it in a way which wouldn't change anything from ntpd's point of view.

To truncate a very long story, here's what I arrived at:  What the ntp code in the kernel
does is arithmetic, nothing more.  The same arithmetic can be done, taking care to accumulate
the results very precisely, without actually adjusting the system's clock.  While
this leaves the system with an unadjusted clock, it does allow ntpd to reliably convert the
unadjusted system clock timestamps into the timestamps ntpd would have gotten if the adjustments
had actually been done.  The math can be done with sub-nanosecond precision at not a lot of additional
cost (I do it with a 64-bit multiply and two adds), and all the computing can be done in user
space without any loss of precision since it will arrive at the same numbers no matter where it
does the math.  This leaves one with the quite useless result that the system's clock is
never adjusted, but otherwise leaves ntpd entirely as it was, seeing the world exactly as
it would have had the adjustments been done, both internally and on-the-wire, with just an
extra chunk of code operating where the system call interface would be to maintain the fiction.
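As a sketch of that arithmetic (the names and the exact fixed-point scaling here
are my own illustration, not the actual representation in my code), the user-space
reconstruction of an adjusted timestamp from an unadjusted one is an affine map:

```python
FRAC_BITS = 64  # carry the rate as a binary fraction scaled by 2**64

class ClockMap:
    """Map raw (unadjusted) clock nanoseconds onto the adjusted timescale.

    adjusted = base_adj + (((raw - base_raw) * rate) >> FRAC_BITS)

    i.e. one wide multiply and two adds per timestamp, with sub-nanosecond
    precision retained in the fixed-point rate.
    """
    def __init__(self, base_raw_ns, base_adj_ns, rate_frac):
        self.base_raw = base_raw_ns  # raw reading at the last (virtual) adjustment
        self.base_adj = base_adj_ns  # adjusted time at that same instant
        self.rate = rate_frac        # (1 + frequency correction) * 2**FRAC_BITS

    def raw_to_adjusted(self, raw_ns):
        return self.base_adj + (((raw_ns - self.base_raw) * self.rate) >> FRAC_BITS)
```

Because the map is pure arithmetic on the raw timestamps, it computes the same
numbers wherever it runs, which is why the kernel and user space need not do it
in the same place.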

Of course I do want to adjust the system's clock too, I just can't do it the way the
NTP code was doing it.  What could be done, however, was to design a system call interface which
allowed time and frequency adjustments (which are done solely with arithmetic as well),
but did so in a way which returned enough information about what was done to allow one
to precisely compute the time one would have gotten from the unadjusted clock as a function
of the adjusted clock's timestamps.  This is also just a math problem that can be solved quite
precisely by taking care when doing the arithmetic.  If I can arbitrarily adjust the system's
clock without losing track of the unadjusted time, and if I can use the unadjusted time
to determine the time ntpd's adjustments would have resulted in had it made them, then I
can have my full-precision cake and eat it too (and there are other interesting side effects,
like being able to compute the Allan deviation from data collected from a clock which is
being simultaneously adjusted since I can always precisely determine what the same measurements
would have been if the clock were left unadjusted).

So I'm at a point where I can define a policy for adjusting the system clock which is
independent of what ntpd would like the clock to do without any effect on ntpd, but which
can use ntpd's estimates of the time and frequency errors with respect to the unadjusted
clock to inform what it does.  The system's time is still determined by ntpd, but can
be way more loosely coupled than the current implementation has it.  What is the best
thing to do for the system time, now that it doesn't necessarily need to serve ntpd,
is something I'm still trying to figure out.

And as for the results of transferring time from the card to the system clock, I have found
that if it samples the offset 4 times per second and processes that data to determine time and
frequency errors (using a least squares fit, after outlier filtering) then, if an
adjustment is made only when it computes a time or frequency result which differs from
the current clock setting at an 80% confidence level, it will typically end up making
an adjustment roughly every 10 seconds or so with the time adjustments tending to be about
10 nanoseconds in size and the frequency adjustments being very roughly on the order of
10^-9.  If I haven't made a mistake I think those numbers (10 seconds and 10^-9) should
characterize the thing you call the Allan intercept, though I haven't calculated that yet.
I think it is possible to claim that this system's clock is typically within 20 ns of
the GPS receiver.  I'm also thinking that if an adjustment rate of once every 10 seconds is
all that is necessary to achieve this precision with this system's clock and this fine a
time source, then when you have the same system clock but a much sloppier time reference
source (e.g. time samples from the network) the adjustment rate justifiable by the
achievable timekeeping accuracy is going to be significantly lower (say once every
few hundred seconds, like the Allan intercept with a good NTP source).  This is a good
result, if it can be implemented this way, since being able to keep the clock as accurate
as it can be with a rate of adjustment which is typically quite small has some side
benefits with respect to the implementation of kernel timestamping of packets or other
events, or of system-call-free user space time stamping.
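The estimation step described above can be sketched roughly like this (outlier
filtering and the 80% confidence gate are omitted; this is just the least-squares
line fit that yields the time and frequency error estimates):

```python
def fit_time_freq(samples):
    """Least-squares line fit to (sample_time_s, offset_s) pairs.

    Returns (offset_now, freq_error): the estimated time offset at the
    time of the last sample, and the frequency error (the slope, in s/s).
    """
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_o = sum(o for _, o in samples) / n
    sxx = sum((t - mean_t) ** 2 for t, _ in samples)
    sxy = sum((t - mean_t) * (o - mean_o) for t, o in samples)
    freq = sxy / sxx                              # frequency error estimate
    t_last = samples[-1][0]
    offset_now = mean_o + freq * (t_last - mean_t)  # phase extrapolated to now
    return offset_now, freq
```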

Dennis Ferguson


On 5 May 2011, at 04:10 , David L. Mills wrote:
> Dennis,
> 
> Holy timewarp! Are you the same Dennis Ferguson that wrote much of the original xntpd code three decades ago? If so, your original leapseconds code has changed considerably, as documented in the white paper at www.eecis.udel.edu/~mills/leap.html. It does not speak POSIX, only UTC. This applies to both the daemon and the kernel.
> 
> Dave
> 
> Dennis Ferguson wrote:
> 
>> Hello,
>> 
>> A strict reading of the part of the POSIX standard defining "seconds
>> since the epoch" would seem to require that when a leap second is added the
>> clock should be stepped back at 00:00:01.  That is, the second which should
>> be replayed is the second whose "seconds since the epoch" representation is
>> an even multiple of 86400.  Right now the NTP implementation doesn't do that,
>> it instead steps the clock back at 00:00:00 and replays the second which is
>> one before the even multiple of 86400 in the "seconds since the epoch"
>> representation, to match what seems to be required for the NTP timescale.
>> 
>> For a new implementation of this is there any reason not to do the kernel
>> timekeeping the way POSIX seems to want it?  I thought I preferred the NTP
>> handling since it seemed to keep the leap problem on the correct day (for
>> an "all days have 86400 seconds" timescale, which describes both the NTP and
>> the POSIX timescales), but I've since decided that might not be all that
>> important and I appreciate the symmetry of the POSIX approach (leaps forward
>> occur at 23:59:59, leaps back at 00:00:01, and both leaps end up at 00:00:00)
>> as well as the fact that the POSIX approach yields a simple equation to determine
>> the conversion from time-of-day to seconds-since-the-epoch which is always
>> valid, even across a leap (and even if the inverse conversion is ambiguous)
>> while I'm having difficulty finding a similar description of NTP's behaviour.
>> 
>> Dennis Ferguson



