[ntp:questions] NTP makes a time jump

unruh unruh at invalid.ca
Wed Jul 10 00:05:21 UTC 2013

On 2013-07-09, Harlan Stenn <stenn at ntp.org> wrote:
> unruh writes:
>> On 2013-07-09, David Woolley <david at ex.djwhome.demon.invalid> wrote:
>> > On 09/07/13 09:07, Miroslav Lichvar wrote:
>> >
>> >>
>> >> I think the kernel would have to be recompiled with a smaller
>> >> MAXFREQ_SCALED constant or ntpd recompiled with smaller NTP_MAXFREQ if
>> >> the kernel discipline is disabled.
>> >>
>> >
>> > The kernel discipline will be disabled if he forces slewing.
>> That is of course another problem. Maybe he should switch to chrony
>> which does allow slewing even of large offsets, and does it via kernel
>> discipline. (Unfortunately it works only on Linux or BSD, so if he is
>> using windows, that suggestion does not work). Chrony will also allow
>> slewing faster than 500PPM so that problem does not occur (unless the
>> slew rate gets up to 100000PPM) 
> That would be fine if he was also OK with "blindly following the current
> leader" instead of "tracking the best source of time".

That was the model he had: a server which all of the clients were
supposed to stay within 50 ms of, no matter whether that server
was connected to a time source, and no matter if, after a long
disconnect, that server then reconnected to a source and found itself 3
min out.  It was of paramount importance to him that all clients and the
server be within 50 ms of each other. Whether they showed true time was
much less important. 

Of course, if his interest had been in each machine keeping the best
time, the rest of the clients be damned, then this would not be a good
procedure.

> Changing the slew rate affects the behavior of the system clock, and
> since the NTP algorithms have been thoroughly tuned and tested in a very
> wide range of conditions, changing the slew rate like this can (and
> likely will) introduce unstable oscillations in the networked collection
> of systems.

??? Why?
Yes, changing the slew rate does affect the behaviour, but it does not
make it unstable. As far as I can see, 500PPM was simply a figure Mills
grabbed as being sufficient for most purposes. That was then frozen into
the adjtimex kernel software. 
Since there are no loops in the client-server relationship, I cannot see
how instability could arise.

Note that I would not advise him to put his network of servers and
clients into the pool for others to use as a time source. 
