[ntp:hackers] smearing the leap second

Mike S mikes at flatsurface.com
Fri Jul 10 20:48:13 UTC 2015

On 7/10/2015 3:45 PM, Harlan Stenn wrote:
> What about the bigger picture, doing data analysis on timestamped data
> across multiple devices?  What about apps that will crash because of the
> backward step?
> Please offer *practical* answers here.

OK. NTP should have a new "on the wire" data structure that provides a 
true count of seconds since the beginning of an epoch (unlike what it 
currently does), and it should sync that count precisely and reliably. 
Doing so avoids
any need for NTP itself to be concerned with leap seconds in any way. 
That's its core function. Ancillary to that, it should also include 
information on upcoming leap second events, and the total number of 
accumulated leap seconds, to be used by hosts/OSs.
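As a rough illustration of the fields such a packet might carry (this is a sketch of the proposal, not any format ntpd actually defines; all names are invented):

```python
from dataclasses import dataclass

@dataclass
class MonotonicTimePacket:
    """Hypothetical wire structure: a true count of epoch seconds,
    plus advisory leap data for the host OS to apply locally."""
    epoch_seconds: int      # true seconds since the epoch, never stepped
    fraction: int           # sub-second fraction, as in current NTP
    leap_pending: int       # +1, 0, or -1: leap event due at next UTC month end
    accumulated_leaps: int  # total leap offset so far (TAI-UTC was 36 s in July 2015)
```

With that split, the server only ever ticks monotonic seconds; everything leap-related is advisory data for the receiving host.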

The interface between ntpd and the host OS (whether implemented as part 
of an OS specific ntpd implementation or externally) should do all 
timescale conversion and leap second handling.

If a host/OS wants to smear time, it can do so locally by slowly 
incrementing and tracking an offset from the NTP timescale, to be used 
by the conversion function. That would work bi-directionally - add the 
offset when converting to the host timescale, subtract it when 
converting to the NTP timescale. That recognizes the reality that NTP 
depends on the host to actually keep local time, and that host may wish 
to give up accuracy in order to honor a false definition of a fixed-length 
day. Local configuration can determine the time over which the offset is 
spread (or not spread, if it correctly implements UTC or uses a TAI-like 
timescale).

(reverse the signs in the above paragraph, depending on positive or 
negative leaps)
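A minimal sketch of that local smear, assuming a positive leap spread over a configurable window (the window length, function names, and the approximate inverse are all illustrative, not anything ntpd implements):

```python
SMEAR_WINDOW = 86_400.0  # local policy: spread the leap over 24 hours


def smear_offset(ntp_time, leap_start):
    """Fraction of the pending +1 s absorbed so far, in [0.0, 1.0]."""
    elapsed = ntp_time - leap_start
    if elapsed <= 0.0:
        return 0.0
    if elapsed >= SMEAR_WINDOW:
        return 1.0
    return elapsed / SMEAR_WINDOW


def ntp_to_host(ntp_time, leap_start):
    # add the offset when converting to the host timescale
    return ntp_time + smear_offset(ntp_time, leap_start)


def host_to_ntp(host_time, leap_start):
    # subtract it when converting back (approximate inverse; an exact
    # inverse would solve for ntp_time, but the error is sub-second)
    return host_time - smear_offset(host_time, leap_start)
```

Once the window has fully elapsed, the offset is a constant 1 s and simply folds into the accumulated leap count.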

It's definitely a *bad idea* to send smeared time over the wire. That's 
deliberately introducing an error when none is necessary. Whether a host 
wishes/needs to trade accuracy to accommodate an inconsistent local 
timescale is strictly a local decision. NTP itself should be concerned 
only with accurately, precisely, and reliably ticking off time.

>>> If they *do* properly handle the leap second, how do you get lots of
>>> instances of *applications* to properly handle what looks to them like a
>>> backward time step?
>> Ditto, except it is an NTP issue since it only looks like a backward
>> time step because the canonical implementation of NTP doesn't follow its
>> own RFC, and doesn't use a monotonic timescale. If NTP did it right,
>> there wouldn't be any issue.
> What are you talking about?

I'm talking about the NTP RFC stating that the NTP timescale is 
monotonic and counts time since the beginning of an epoch, while the 
implementation doesn't use a monotonic timescale: it steps backward at 
leap second events, just like POSIX. ntpd simply "forgets" leap seconds 
instead of counting the number of seconds in the epoch, as stated in the 
RFC (and elsewhere). It claims to count epoch seconds but uses 
fixed-length days, and the two are fundamentally incompatible. If that 
weren't true,
the conversion between the (as implemented) ntpd timescale and POSIX 
time would need to include historical leap seconds - it doesn't, it's a 
fixed offset.
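That fixed offset is the well-known 2,208,988,800 s between NTP era 0 (1900-01-01) and the Unix epoch (1970-01-01). A sketch of the point being made (the second function is hypothetical; ntpd does no such leap accounting):

```python
NTP_UNIX_OFFSET = 2_208_988_800  # seconds from 1900-01-01 to 1970-01-01


def ntp_to_posix(ntp_seconds):
    # ntpd's actual conversion: a fixed subtraction, with no table of
    # historical leap seconds consulted. This only works because both
    # timescales "forget" leaps the same way.
    return ntp_seconds - NTP_UNIX_OFFSET


def monotonic_to_posix(elapsed_seconds, accumulated_leaps):
    # What the conversion would look like if NTP truly counted every
    # elapsed second: the accumulated leap seconds at that instant
    # would have to be subtracted as well (hypothetical).
    return elapsed_seconds - NTP_UNIX_OFFSET - accumulated_leaps
```

The fact that the first form suffices in practice is exactly the evidence that the implemented timescale is not the monotonic count the RFC describes.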
