[ntp:questions] Server offset included in served time?

Martin Burnicki martin.burnicki at meinberg.de
Tue Sep 16 08:10:40 UTC 2008


David Woolley wrote:
> Martin Burnicki wrote:
>> But what about the behaviour shortly after startup? The NTP daemon tries
>> to determine the initial time offset from its upstream sources. Unless
>> that initial offset exceeds the 128 ms limit it starts to slew its system
>> time *very* slowly until the frequency drift has been compensated and the
>> estimated time offset has been minimized.
> 
> I've had some thoughts about this.  As I see it the problems are:
> 
> - ntpd doesn't have any persistent history of jitter, so has to start by
> assuming that the jitter is of the same order of magnitude as the offset
> (what people looking at offset often forget is that they have the
> benefit of hindsight).

Basically I agree. However, if the packet turnaround time is low, e.g. below
1 millisecond on a local net, and evaluation of the timestamps yields an
offset of about 120 ms (as in my example shortly after startup), then the
result clearly indicates that the local system time is off by 120 ms.
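
To make that concrete, here is a minimal sketch of the standard NTP on-wire
calculation (the RFC-style offset/delay formulas); the variable names and
example values are mine, not ntpd's. The key point: the offset estimate
cannot be wrong by more than half the round-trip delay, so a 120 ms offset
measured over a 1 ms turnaround is unambiguous.

  #include <stdio.h>

  int main(void)
  {
      /* Example values in seconds: client clock 120 ms behind,
       * network adding roughly 0.5 ms in each direction. */
      double t1 = 0.0000;   /* client transmit (client clock) */
      double t2 = 0.1205;   /* server receive  (server clock) */
      double t3 = 0.1206;   /* server transmit (server clock) */
      double t4 = 0.0011;   /* client receive  (client clock) */

      double offset = ((t2 - t1) + (t3 - t4)) / 2.0;   /* 0.1200 s */
      double delay  = (t4 - t1) - (t3 - t2);           /* 0.0010 s */

      printf("offset %.4f s, delay %.4f s\n", offset, delay);
      printf("offset uncertainty <= +/- %.4f s\n", delay / 2.0);
      return 0;
  }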

> - ntpd is already at the shortest permitted time constant, and going
> lower would require faster polling, or compromising the level of
> oversampling, or length of the initial best measurement filter.  It is
> this lower bound on the time constant that means that ntpd can get into
> a position where it should know that the time is wrong, but cannot fix
> it quickly.

Yes, that's what I consider a real limitation of ntpd. 
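
As a rough illustration of why the long loop time constant turns this into
hours: a first-order feedback loop removes an initial offset roughly as
offset(t) = offset0 * exp(-t / Tc). The time constant below is an assumed
round number for illustration only, not ntpd's exact value:

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      double offset0 = 0.120;   /* 120 ms initial offset            */
      double tc      = 2000.0;  /* assumed loop time constant, sec  */

      for (double t = 0.0; t <= 8000.0; t += 2000.0)
          printf("t=%5.0f s  offset=%6.1f ms\n",
                 t, 1000.0 * offset0 * exp(-t / tc));
      return 0;
  }

Even with this charitable guess at the time constant, more than two hours
pass before the residual offset drops to around 2 ms.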

Of course, if you rely on internet servers there may be rather large
turnaround times/delays. However, provided the upstream server sends correct
timestamps, the error due to packet delays on the network cannot exceed the
packet turnaround interval. So if the turnaround interval is low (e.g. < 1
ms) and the time offset is high (e.g. ~120 ms) then ntpd should indeed start
compensating the time offset much more quickly than it currently does.
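
The back-of-the-envelope arithmetic supports this. With a 1 ms turnaround
the true offset lies within 120 +/- 0.5 ms, and at the classic maximum slew
rate of 500 ppm (the traditional adjtime()/ntpd frequency limit) the whole
offset could be removed in about four minutes rather than hours:

  #include <stdio.h>

  int main(void)
  {
      double offset_s = 0.120;    /* measured offset, seconds       */
      double max_slew = 500e-6;   /* 500 ppm = 0.0005 s per second  */

      printf("minimum slew time: %.0f s\n",
             offset_s / max_slew);                     /* 240 s */
      return 0;
  }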

I often hear from NTP users (here in the NG, and from our customers) that
this is the expected behaviour.

> - the step limit is fixed at configuration time.
> 
> One could deal with the first by making the smoothed jitter be
> persistent.  That way ntpd can detect whether its offsets exceed
> reasonable jitter for the system, before it has enough history for the
> session to know the jitter from measurements just in the current session.
> 
> Once one knows that offsets are high compared with jitter, one can
> address the time constant issue.  Normally jitter << offset would tend
> to force the time constant down, but it has nowhere to go.  Maybe
> what is needed is to allow the degree of oversampling to be compromised
> until one first begins to get offsets of the same order as the jitter.
> Maybe also use fewer than 8 filter slots.

That's what I mean. If the measurement results undoubtedly indicate that the
time offset is real, then ntpd could immediately start to slew it away.
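
In pseudocode terms, the decision rule I have in mind might look like the
following. This is not how ntpd's clock discipline actually works, just a
sketch; the safety factor K and the use of a persisted jitter value are my
assumptions:

  #include <math.h>
  #include <stdbool.h>

  #define K 4.0   /* assumed safety factor */

  /* Fast-slew only when the measured offset is far outside what the
   * round-trip delay and the (persisted) jitter could explain. */
  bool should_fast_slew(double offset, double delay, double jitter)
  {
      double uncertainty = delay / 2.0 + K * jitter;
      return fabs(offset) > uncertainty;
  }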
 
> This may compromise the stability of downstream systems, so it may be
> necessary to stay in an alarm state until this stage of the process is
> complete.  This may be a problem for people who want a whole network to
> power up at the same time, and quickly.

If a complete network is powered up, including the time server, and the
initial time offset of the time server is 120 ms and is only slewed away
after hours, do you really think it's better to slew the time that slowly
on the time server and provide its clients with a time that is 120 ms off?

So the clients would start to synchronize to a reference time which is off
by 120 ms and then follow the slow corrections of the time server?

Alternatively, if the server quickly slewed away its initial time offset,
the clients would see a more accurate reference time from the start and
would thus have an accurate time themselves much earlier.
 
Martin
-- 
Martin Burnicki

Meinberg Funkuhren
Bad Pyrmont
Germany
