[ntp:questions] Why does ntp keep changing my conf file?

unruh unruh at wormhole.physics.ubc.ca
Mon Sep 20 15:17:54 UTC 2010

On 2010-09-17, Daniel Havey <dhavey at yahoo.com> wrote:
> Now that's a little more like it ;^) Some real information.  The experiments involve hundreds of packets per second, so 20,000 packets per hour is trivial.  Also, let's drop the public NTP servers stuff, since I'm not using them and not interested in doing so.
> Of course they are not qualified to give advice about keeping good network time, and neither am I.  We study wireless networking, not time keeping.  That is why I am asking questions here ;^)

And so why, then, do you reject the advice you are given here? 

> So, you think that a PC clock will drift 20-50ms in 5 seconds?  Seems like a lot, but whatever.  Let me see if I've got this right, you tell me I might get say synchronization of ~10ms with ntpd running on a lan with everybody on the same switch or perhaps one switch away?

> I would actually kind of like to test it myself ;^)  But measuring time skew between machines may not be as easy as it seems at first glance ;^)  Surely there must be a paper somewhere that already does this?

Go out and buy yourself a GPS18 receiver, and run its PPS output
line to your machines with a repeater. Get the machines to all be
within about 10 micro (not milli) seconds of each other. Far, far better
than ntpdate, or even ntpd.
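For reference, feeding that PPS line into ntpd is normally done with the type-22 (PPS) refclock. A minimal ntp.conf sketch follows; the device paths (/dev/pps0, /dev/gps0) and the use of the type-20 NMEA driver for numbering the seconds are assumptions about a typical GPS18 hookup, so adjust for your wiring:

```
# Sketch only -- adjust device paths and drivers to your setup.
# Type 22 (PPS) refclock; assumes the PPS pulse arrives on /dev/pps0.
server 127.127.22.0 minpoll 4 maxpoll 4
fudge  127.127.22.0 flag3 1        # use the kernel PPS discipline, if built in

# The PPS pulse only marks the top of the second; something else must
# number the seconds.  The GPS18's NMEA stream (type 20 driver, /dev/gps0)
# or any coarse network server can serve that role:
server 127.127.20.0 minpoll 4
```

The PPS source disciplines the sub-second phase, while the NMEA (or network) source resolves which second it is.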

> --- On Fri, 9/17/10, Chuck Swiger <cswiger at mac.com> wrote:
>> From: Chuck Swiger <cswiger at mac.com>
>> Subject: Re: [ntp:questions] Why does ntp keep changing my conf file?
>> To: dhavey at yahoo.com
>> Cc: questions at lists.ntp.org
>> Date: Friday, September 17, 2010, 12:04 PM
>> On Sep 17, 2010, at 10:58 AM, Daniel
>> Havey wrote:
>> > Hmmm, I'm not sure that I believe you guys ;^) 
>> So you've said before, and I've certainly gotten the
>> impression that you would prefer to make your own mistakes
>> rather than heed advice about best practices.
>> > This is a wireless emulator on a wired testbed, and
>> the packets record a start of transmission time on one
>> computer, and then a Start of Reception time (SoR) on
>> another computer. If the clocks have different times
>> then the calculation of noise caused by other packets will
>> get screwed up because the receiving computer will either
>> stay in RxPending too long, or not long enough.
>> > 
>> > I think that the slewing behavior is worse than the
>> ntpdate behavior of suddenly changing the time, because the
>> time will remain wrong for a longer period of time.
>> Running ntpdate -b causes the clock to be forcibly reset
>> after exchanging 8 NTP packets to try to estimate and take
>> into account round-trip time. However, the limited
>> scope of measurement involved is fairly susceptible to
>> network delays due to a momentary traffic peak, routing
>> latency, or other causes, and -b flag invokes settimeofday()
>> rather than the more graceful correction of the clock via
>> adjtime() which ntpd or even ntpdate without -b flag would
>> use.
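For what it's worth, the round-trip estimate Chuck describes is just the standard NTP on-wire arithmetic: from the four timestamps of one request/response exchange, the offset and delay fall out directly. A minimal sketch (this is the textbook formula, not ntpdate's actual source):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP offset/delay from one request/response exchange.

    t1: client transmit, t2: server receive,
    t3: server transmit, t4: client receive (all in seconds).
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # estimated clock difference
    delay = (t4 - t1) - (t3 - t2)           # round-trip network delay
    return offset, delay

# Example: client clock 100 ms behind the server, 30 ms round trip,
# server spends 1 ms turning the request around.
off, d = ntp_offset_delay(t1=0.000, t2=0.115, t3=0.116, t4=0.031)
# off is ~0.100 s, d is ~0.030 s
```

The offset estimate assumes the outbound and return paths are symmetric, which is exactly why a momentary traffic peak on one direction of the path skews a small sample of exchanges.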
>> Running ntpdate every second, or 3 times every five
>> seconds, would involve ~20,000 packets per machine per hour,
>> compared with the half-dozen or so needed for ntpd over that
>> same interval. I can't imagine why anyone would prefer
>> to generate nearly four orders of magnitude more network
>> traffic in order to keep significantly worse time than you
>> would by simply running ntpd with its default config.
>> As someone else just noted, the traffic volume generated by
>> that script would be considered abusive to public NTP
>> servers. If it truly was recommended by some hardware
>> manufacturer, whoever it was is simply not qualified to give
>> advice about keeping good network time.
>> The approach they've recommended is unlikely to keep clocks
>> synchronized closer than on the order of tens of
>> milliseconds, with 20-50ms jumps very likely happening every
>> few seconds. Running ntpd even with a single network
>> source is likely to achieve synchronization at the
>> milliseconds level of accuracy without abrupt changes, and
>> with even a bit more work, can provide ~1ms to
>> sub-millisecond accuracy across fleets of hundreds of
>> machines.
>> Feel free to measure both approaches yourself and
>> compare...
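One low-effort way to run that comparison is to log the offset column of `ntpq -p` on each box over time and plot it (the offsets in the billboard are in milliseconds). A hedged sketch of pulling those numbers out; the sample text below is made up, and real peer lines can vary in width:

```python
def parse_ntpq_offsets(text):
    """Pull (peer, offset_ms) pairs out of `ntpq -p` style output.

    Assumes the default billboard layout: two header lines, then one
    peer per line with offset in the next-to-last column (milliseconds).
    """
    results = []
    for line in text.splitlines()[2:]:           # skip the two header lines
        cols = line.split()
        if len(cols) < 10:
            continue
        peer = cols[0].lstrip('*+-#x')           # strip the tally code
        results.append((peer, float(cols[-2])))  # offset is next-to-last
    return results

# Fabricated sample for illustration only:
sample = """\
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*time.example.ne 10.0.0.1         2 u   32   64  377    1.234   -0.512   0.087
+ntp2.example.ne 10.0.0.2         2 u   12   64  377    2.001    0.330   0.120
"""
print(parse_ntpq_offsets(sample))
# → [('time.example.ne', -0.512), ('ntp2.example.ne', 0.33)]
```

Logging this once a minute from cron on every machine in the testbed, against a common reference, gives a direct picture of how far apart the clocks drift under each approach.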
>> Regards,
>> -- 
>> -Chuck
