[ntp:questions] NTPD can take 10 hours to achieve stability

unruh unruh at wormhole.physics.ubc.ca
Tue Apr 19 00:28:57 UTC 2011


On 2011-04-18, Mike S <mikes at flatsurface.com> wrote:
> At 01:51 PM 4/18/2011, unruh wrote...
>>Since you can measure the time to usec, in 1 sec you can measure rate
>>offsets of 1PPM and offsets of 1usec.
>
> That's the thing. You can't do that.

You are confusing measurements with the effects of noise. 
>
> It's not a matter of time precision on a single machine. It's a matter 
> of comparing times on two or more machines. Network jitter is 
> unpredictable, especially if the hosts NTP is syncing with are remote, 
> so single readings can't be trusted to the us (a full size Ethernet 
> frame @ 1Gbps is ~12 us), or even the ms level. There can also be 
> jitter within the host you're syncing to (irq latency, etc.). Using a 
> longer time constant averages out the jitter. I'm sure there are papers 
> covering the math behind it, somewhere.

Sure, but that is a different question. What I demonstrated was that
with usec precision (and yes, a computer's internal jitter allows usec
precision; the jitter is not terribly large in general -- I have
measured it) it would take 1 sec to get down to 1 PPM. That is a factor
of 1000-100000 better than ntpd achieves. The network jitter would have
to be really, really horrible to make things that bad. And even over a
network I see about 20 usec of jitter (which means it would take 20 sec
to get down to 1 PPM).
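To make the arithmetic concrete, here is a back-of-envelope sketch
(Python; the function is mine, and the numbers are the ones above, not
new measurements):

    # Rate-offset resolution from comparing two timestamps: if each
    # reading is uncertain by `noise_s` seconds, then readings taken
    # `interval_s` seconds apart pin down the clock rate to roughly
    # noise_s / interval_s (dimensionless, here scaled to PPM).
    def rate_resolution_ppm(noise_s, interval_s):
        return noise_s / interval_s * 1e6

    print(rate_resolution_ppm(1e-6, 1.0))    # 1 usec noise, 1 sec   -> 1 PPM
    print(rate_resolution_ppm(20e-6, 20.0))  # 20 usec noise, 20 sec -> 1 PPM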

Your "explanation" of time-rate tradeoff is true, but so far off of waht
ntpd does that it is also irrelevant. 
It is also one of the reasons why chrony gets so much faster a
convergence after an error-- it operates much closer to that ideal.
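Roughly speaking, chrony fits a line through its recent (time, offset)
samples and reads the frequency error off the slope, rather than
feeding each offset through a long-time-constant loop. A toy sketch of
that idea (mine, not chrony's actual code):

    # Least-squares fit of measured offsets against time. The slope is
    # the frequency error; evaluating the line at the last sample gives
    # the current phase offset. The estimate tightens roughly as
    # jitter / (time span of the samples), per the arithmetic above.
    def fit_freq_and_offset(samples):
        """samples: list of (t, offset) pairs, both in seconds."""
        n = len(samples)
        mt = sum(t for t, _ in samples) / n
        mo = sum(o for _, o in samples) / n
        den = sum((t - mt) ** 2 for t, _ in samples)
        freq = sum((t - mt) * (o - mo) for t, o in samples) / den
        offset_now = mo + freq * (samples[-1][0] - mt)
        return freq, offset_now

    # Four samples, 1 sec apart, ~10 PPM drift plus usec-level noise:
    samples = [(0, 0e-6), (1, 11e-6), (2, 19e-6), (3, 31e-6)]
    freq, off = fit_freq_and_offset(samples)
    print(freq * 1e6, "PPM")  # recovers ~10 PPM from 3 sec of data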
