[ntp:questions] Leap second to be introduced in June
martin.burnicki at meinberg.de
Mon Jan 12 11:08:07 UTC 2015
> On Sun, Jan 11, 2015 at 11:34 PM, brian utterback <
> brian.utterback at oracle.com> wrote:
>> On 1/11/2015 10:40 PM, William Unruh wrote:
>>> Well, actually as I understand it, ntpd does stop the clock for that
>> That is not the case. That is the behavior that the kernel reference
>> code implements which is not part of ntpd.
> Presumably unruh at invalid read this description of NTP leap seconds:
>> There are three approaches to implementing a leap second. The first
>> approach is to increment the system clock during the leap second and
>> continue incrementing following the leap. The problem with this approach is
>> that conversion to UTC requires knowledge of all past leap seconds and
>> epoch of insertion. The second approach is to increment the system clock
>> during the leap second and step the clock backward one second at the end of
>> the leap second. This is the approach taken by the POSIX conventions. The
>> problem with this approach is that the resulting timescale is discontinuous
>> and ambiguous, since a reading during the leap is repeated one second
>> later. The third approach is to *freeze* the clock during the leap second
>> allowing the time to catch up at the end of the leap second. This is the
>> approach taken by the NTP conventions.
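The freeze approach quoted above can be illustrated with a minimal C sketch: during the leap second the returned time is clamped at the leap boundary, and afterwards the clock runs one second behind the raw counter. This is a toy model, not real kernel code; the function names, the nanosecond resolution, and the fixed boundary value are all invented for illustration.

```c
/* Toy model of the "freeze" approach: reads during the leap second are
 * clamped to the leap boundary; afterwards UTC trails the free-running
 * raw counter by one second. All names/values are invented. */

typedef long long ns_t;

static ns_t leap_start_ns = 60LL * 1000000000LL; /* assumed boundary  */
static ns_t raw_ns;                              /* raw counter value */
static int leap_pending = 1;

/* Return UTC nanoseconds, freezing the clock during the leap second. */
static ns_t read_clock(void)
{
    if (leap_pending && raw_ns >= leap_start_ns) {
        if (raw_ns < leap_start_ns + 1000000000LL)
            return leap_start_ns;   /* frozen during the leap second */
        leap_pending = 0;           /* leap over: resume, 1 s behind */
    }
    return leap_pending ? raw_ns : raw_ns - 1000000000LL;
}
```

Note that every reader during the frozen second sees the identical value, which is exactly the duplicate-timestamp problem the modified approach below addresses.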
A modified version of this approach, as proposed by Dave Mills, is to let
the system time increase by one LSB whenever it is read during the freeze
period, i.e. the leap second. This ensures that applications see the time
incrementing even during the leap second, so there are no duplicate
timestamps.
Even though this approach will probably yield the smallest time error
over a leap second, and ensures the system time is not stepped back, the
big disadvantage is that on every read the kernel has to check whether a
leap second is in progress and, if so, do some extra computation, which
costs execution time.
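The Mills variant can be sketched like this (again a toy model; the frozen value and the 1 ns LSB are assumptions):

```c
/* Toy model of the Mills variant: the clock is frozen at the leap
 * boundary, but every read bumps the returned value by one LSB
 * (1 ns here), so successive readers never see a duplicate timestamp. */
static long long frozen_ns = 60000000000LL; /* assumed frozen reading */
static long long last_ns;                   /* last value handed out  */

long long read_during_leap(void)
{
    long long t = frozen_ns;
    if (t <= last_ns)
        t = last_ns + 1;    /* strictly monotonic: advance one LSB */
    last_ns = t;
    return t;
}
```

The comparison and conditional bump is the per-read cost described above.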
It's much faster to compare the current time to a "leap second" time,
and subtract a second *once* if the leap second time has passed, so this
is currently the preferred implementation, AFAIK.
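One way to read this "subtract once" scheme is that the comparison lives in the timer tick rather than in the read path, and the adjustment is applied exactly once. A sketch under that assumption (names and the example boundary are invented):

```c
/* Sketch: each timer tick compares the clock against a pending leap
 * time and steps it back exactly once when the boundary is crossed,
 * so the read path stays a plain counter fetch. Names are invented. */
static long long clock_s;
static long long leap_at_s = 3124137600LL;  /* example leap boundary */
static int leap_done;

void timer_tick(void)
{
    clock_s++;
    if (!leap_done && clock_s >= leap_at_s) {
        clock_s--;          /* subtract the second *once* */
        leap_done = 1;
    }
}
```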
>> If the precision time kernel modifications have been implemented, the
>> kernel includes a state machine that implements the actions required by the
>> scenario. The state machine implemented in most recent Unix kernels is
>> described in the nanokernel
>> <sftp://www.eecis.udel.edu/%7Emills/nanokernel.tar.gz> software
>> distribution. At the first occurrence of second 3,124,137,600, the system
>> clock is stepped backward one second. The operating system kernel time
>> conversion routines can recognize this condition and show the leap second
>> as number 60.
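A conversion routine of that kind might look like the following sketch; the `in_leap` flag and the function name are assumptions, the point being that the raw count alone cannot produce a :60 — it needs the leap status as a separate input.

```c
#include <stdio.h>

/* Sketch: format the seconds field of a raw count; only with an extra
 * leap-in-progress flag can the routine display the inserted second
 * as :60 instead of repeating :59/:00. Names are invented. */
void format_seconds(long long raw_s, int in_leap, char out[8])
{
    int sec = (int)(raw_s % 60);
    if (in_leap)
        sec = 60;           /* render the leap second as :60 */
    snprintf(out, 8, ":%02d", sec);
}
```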
> The table presented on that page notes that the NTP timestamps for :60 (the
> leap second) and :00 are the same.
If you look at the time_t value returned by the kernel at the beginning
and end of the leap second, this is surely the case.
I'm not aware that the kernel time counts :59, :00 or :59, :60, :00.
Rather, there are some time conversion routines which can output a :60
*if* they can be made aware of the leap second status.
Looking at possible APIs to return the system time: there can be a fast
API which just returns seconds since an epoch plus fractions of that
second, and there can be an API which returns the raw time stamp plus
some status information, which can then be used by some function to
provide a nice output like :60 during the leap second. However, the
latter one probably takes longer to execute since some checks have to be
done for every call.
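The two API shapes could be sketched as follows; all names and the fixed sample values are assumptions for illustration, not a real kernel interface.

```c
#include <stdbool.h>

/* Fast path: return only raw seconds and fractions since the epoch.
 * Slower path: the same timestamp plus leap status, so a formatter
 * can render :60; the extra status check runs on every call. */
struct timestamp { long long sec; long long frac_ns; };

static long long raw_sec = 3124137599LL;   /* pretend current time  */
static bool leap_in_progress = true;       /* pretend kernel status */

struct timestamp get_time_fast(void)
{
    struct timestamp ts = { raw_sec, 0 };
    return ts;
}

struct timestamp get_time_with_status(bool *in_leap)
{
    *in_leap = leap_in_progress;   /* the extra per-call work */
    return get_time_fast();
}
```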