[ntp:questions] What level of timesynch error is typical on WinXP?

David J Taylor david-taylor at blueyonder.co.uk.invalid
Tue Oct 26 07:24:19 UTC 2010

"Miroslav Lichvar" <mlichvar at redhat.com> wrote in message 
news:20101025164632.GA1842 at localhost...
> On Fri, Oct 22, 2010 at 11:39:47AM +0100, David J Taylor wrote:
>> Thanks, Dave.  I may be missing something here, but it seems to me
>> that 4.2.7p58 still takes a number of hours to reach the accuracy
>> limits where thermal effects dominate.  It's that which matters to
>> me, rather than something in the first few minutes.  I agree the
>> graphs would not show such short time-scale initial disturbances.
> Did the clock frequency change before you started the new version?

There was no deliberate change, no, but I would expect the clock frequency 
to vary by small amounts due to temperature variations and other effects. 
The changeover took probably less than a minute.

> I played with the latest ntp-dev a bit and there indeed is an
> improvement on start, mainly when the initial offset is around
> 0.01-0.05s. But the frequency error has to be very small to make a
> difference, see these plots:
> http://mlichvar.fedorapeople.org/tmp/ntp_start_offset.png
> http://mlichvar.fedorapeople.org/tmp/ntp_start_freq.png
> Also, I've noticed when ntpd is started without driftfile and the
> initial offset is over 0.05 second, the overshoot can easily reach 100
> percent, is this expected?
> -- 
> Miroslav Lichvar

Most interesting plots, Miroslav.  On the system I tested the static 
frequency error is about 12 ppm.  On your graph vs. frequency offset, that 
suggests about 5000 s (1.4 hours) to reach your criterion.  The plot of 
reported frequency offset versus time, though, shows an initial value 
of -32 ppm, followed by an exponential rise to +12 ppm.  I don't know why 
the initial value is so wrong.  The drift file contains the correct value, 
but NTP doesn't report whether it has read and used that value or not.
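The arithmetic behind that time-scale is simple to sketch (my own 
back-of-envelope, not anything from the plots): a constant frequency error 
of f ppm accumulates offset at f microseconds per second of elapsed time.

```python
# Back-of-envelope relation between a constant frequency error (ppm)
# and the time offset it accumulates -- a sketch, not ntpd's algorithm.

def offset_after(freq_error_ppm: float, seconds: float) -> float:
    """Offset in seconds accumulated by a constant frequency error."""
    return freq_error_ppm * 1e-6 * seconds

def seconds_to_accumulate(offset_s: float, freq_error_ppm: float) -> float:
    """Time for a constant frequency error to accumulate a given offset."""
    return offset_s / (freq_error_ppm * 1e-6)

# A 12 ppm error accumulates about 60 ms over 5000 s ...
print(offset_after(12, 5000))            # ~0.06 s
# ... and needs roughly 3 hours to build up the 128 ms step threshold.
print(seconds_to_accumulate(0.128, 12))  # ~10667 s
```

So at the 12 ppm error above, an uncorrected clock would drift by tens of 
milliseconds over the hours the loop takes to settle.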

Your plot versus initial time offset is most interesting.  I'm guessing 
that the step improvement at time offsets greater than 128ms is due to NTP 
stepping the clock at startup.  The graph suggests that it would always be 
better for NTP to step the clock, but I'm not sure whether there is a 
startup option for that - i.e. whether you can have "step if over 128ms in 
normal use" as well as "always step at startup".

Thanks for the data.

I have seen NTP "going wild" in some circumstances where the drift file 
gets written with values near to +/-500 ppm, after which overshoots can 
occur, but I've not seen that recently.
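One way to catch that failure mode early is to watch for drift-file values 
pinned near the +/-500 ppm limit.  A minimal sketch (the margin and the 
example path are my own arbitrary choices, not anything ntpd defines):

```python
# Flag a driftfile value that is pinned near the +/-500 ppm limit.
# The 5 ppm margin and the example path are arbitrary illustration choices.

PPM_LIMIT = 500.0

def drift_suspicious(ppm: float, margin: float = 5.0) -> bool:
    """True if the stored frequency is within `margin` ppm of the limit."""
    return abs(ppm) >= PPM_LIMIT - margin

def check_driftfile(path: str = "/etc/ntp.drift") -> bool:
    """Read the first field of a driftfile and flag a pinned value."""
    with open(path) as f:
        return drift_suspicious(float(f.read().split()[0]))

print(drift_suspicious(12.3))    # False -- a normal value
print(drift_suspicious(-499.8))  # True  -- pinned near the limit
```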

