[ntp:hackers] smearing the leap second

Martin Burnicki martin.burnicki at burnicki.net
Fri Jun 19 15:37:24 UTC 2015


Miroslav Lichvar wrote:
> On Fri, Jun 19, 2015 at 02:52:58PM +0200, Martin Burnicki wrote:
>>> I would much prefer the rule to be "clients always get the smeared time
>>> (when the runtime switch is active)", while we always use the true
>>> (baseline UTC) time for requests we send off to other servers.
>>
>> Agreed. That's how I've implemented the smearing.
> 
> Good to see someone is working on an implementation for ntpd. Will
> peers get smeared time too, or only clients?

In the current implementation, only the clients. However, it would not be
hard to add the smear offset to the packets sent to peers as well, and of
course this could be made configurable.

As mentioned before, with the approach I'm using the internal system
time is still kept accurate, and the smear offset is only applied to the
network packets.

So if two peers were running this version of ntpd, they would talk to
each other using real UTC, just as they do now, but both could provide
their *clients* with the same smeared time, provided both have the same
smear interval configured.

>> leapsmearinterval 60000
>>
>> so 60000 seconds before the leap second event ntpd starts to compute a smear
>> offset, starting at 0 and ending at -1 s.
> 
> Will there be enough time to get the leap second status on the server?
> Or is it assumed there will always be a leapfile present?

I think this is a problem of the overall system design. If a leap second
file is available then everything is fine anyway. If there's no leap
second file but only a refclock then it depends on the type of refclock,
and which protocol is used to let ntpd talk to the refclock.

For example, the German long-wave transmitter DCF-77 starts to transmit
the leap second warning only 60 minutes before the leap second occurs.
This is much too short for NTP anyway if you have a hierarchy of
stratums, and of course a 1 hour smear interval would result in a much
larger frequency offset than a longer interval would.

If the NTP server is controlled by an IRIG time source it's even worse,
since an announcement is only transmitted via IEEE time codes, not via
the standard IRIG codes, and the announcement comes only a few seconds
or minutes in advance.

> If the leap
> smear started when the leap second is actually inserted, this wouldn't
> be a problem. Putting the leap second in the middle of the interval
> minimizes the average error, but I'm not sure what advantage ending
> the leap smear at the point of leap second insertion has over
> starting it there.

The main point is just that the time is correct again at the beginning
of the new UTC day. Otherwise it doesn't matter much, IMO.

>> This smear time is applied to
>> both the receive time stamp and the transmit time stamp when answering
>> client requests.
> 
> I think the correction should be applied also to the reference timestamp,
> otherwise the client could drop the packet if the transmit time was
> before reference time.

In fact I've also thought about this and have added such a comment to
the code, but it's not implemented yet.

Martin


