[ntp:questions] NTP vs RADclock?

unruh unruh at invalid.ca
Fri Jun 8 19:15:24 UTC 2012

On 2012-06-08, Rick Jones <rick.jones2 at hp.com> wrote:
> unruh <unruh at invalid.ca> wrote:
>> On 2012-06-08, Ron Frazier (NTP) <timekeepingntplist at techstarship.com> wrote:
>> > I cannot speak to the advanced structure of clock algorithms, but,
>> > regarding wifi performance (of ntp), I can testify that it ranges
>> > from ok to terrible. I've seen machine to machine ping times on my
>> > wifi lan range from a few ms to almost a second. The worst is when
>> > 6 or so devices are all communicating on the same wifi channel and
>> > my wife's computer is connected to her remote access vpn tunnel
>> > for work. I have 3 machines acting as ntp clients and synching to
>> > a gps based server every 8 seconds. Those machines generally
>> > maintain accuracy of + / - 3 - 10 ms versus the server. That's
>> > plenty good for my purposes, but it's nowhere near as good as a
>> > gigabit hardwired lan running through a good switch. I could wire
>> > all these things up together, but for my purposes, it's not worth
>> > the trouble. I prefer the convenience of wifi. However, I'm not
>> > sure if it's even possible to do meaningful testing on wifi since
>> > the performance can be so variable.
>> Actually 100MB ethernet is probably better than gigabit. The gigabit
>> stuff has interrupt coalescing which adds random delays to packets,
>> which really messes up ntp. I used to have really stable 20us timing
>> across the network when I ran 100MB. Now that the switches are
>> gigabit and some of the cards are gigabit, I get really really
>> terrible behaviour ( sometimes 10ms round trip delays, when the
>> minimum is 120us) Gigabit is almost as bad as wireless for timing
>> purposes.
> In the Linux world at least, ethtool can be your friend, and be used
> to disable the interrupt coalescing.  I would expect other
> environments to have similar mechanisms.
> That said, 10ms for an interrupt coalescing-induced delay seems
> unlikely to me - it is possible that some driver/NIC combination has a

This is a rare thing (i.e. one in 100 packets has a round-trip delay of
10ms, vs 10us usually). I have no idea if this is interrupt coalescing
or something else. Since our network switched to gigabit switches and
some of my systems use gigabit cards, the timing behaviour has become
really bad.

It used to be that I got consistent 140us round-trip delays (with a
scatter of about 10-20us). Now, although that is still the minimum, all
machines show a scatter of about 50-100us, and some have a bimodal round
trip, clustering around either 150us or 300us. Some have a wildly
varying round trip (a few packets around 150us, most around 300us, with
scatter up to 10ms). It is a total mess. It really seems like the
gigabit switches are a disaster for timing. Of course running them at
100Mbit/s would not help; it is a problem with the switch design, I
believe.

I have no idea where this horror is taking place-- the NICs, the
switches, ....
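
One quick way to see this kind of scatter for yourself is to summarize
the round-trip times ping reports. A minimal sketch (the server name
"ntpserver" is a placeholder, and the exact ping output format varies a
little between systems):

```shell
#!/bin/sh
# Summarize round-trip times from ping output: print min / median / max
# so a tight distribution, a bimodal one, or occasional 10ms outliers
# all show up at a glance.
summarize_rtts() {
    # pull the "time=X ms" field out of each ping reply line,
    # sort numerically, and report min, median, max, and count
    awk -F'time=' '/time=/ { print $2 + 0 }' |
    sort -n |
    awk '{ v[NR] = $1 }
         END { printf "min=%s median=%s max=%s n=%d\n",
                      v[1], v[int((NR+1)/2)], v[NR], NR }'
}

# Usage on a live network (uncomment):
# ping -c 100 ntpserver | summarize_rtts
```

A histogram would show the bimodal shape more directly, but even
min/median/max makes the difference between 20us of scatter and 10ms
outliers obvious.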

> really bad setting that high, but it seems unlikely.  Again, if in the
> Linux world, ethtool can be your friend, this time showing you the
> interrupt coalescing settings for a given NIC.
> I doubt that gigabit switches do such coalescing.  And the coalescing
> (like JumboFrames) is an "after market" implementation detail (not
> part of the IEEE standard) thing.  As such, it might be more accurate
> to say that Gigabit Ethernet NICs can be almost as bad as wireless.
> It is entirely possible/likely that a "Gigabit" NIC, operating in 100
> Mbit/s mode, will still do the interrupt coalescing.

Yes, I agree. What I meant was full 100Mbit/s from the NICs, through the
switches, to the server.
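
For anyone who wants to try Rick's ethtool suggestion, a sketch of the
relevant commands on Linux ("eth0" is a placeholder device name, and the
coalescing parameters a driver accepts vary):

```shell
# Show the current interrupt-coalescing settings for the NIC:
ethtool -c eth0

# Disable coalescing so each received packet interrupts immediately
# (parameter names and supported values depend on the driver):
ethtool -C eth0 rx-usecs 0 rx-frames 1

# Or force the NIC itself down to 100 Mbit/s full duplex -- though, as
# noted above, a gigabit NIC in 100 Mbit/s mode may still coalesce:
ethtool -s eth0 speed 100 duplex full autoneg off
```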

> From what little I know about wireless, it will probably always be
> something of a mess, but bufferbloat doesn't make it any better.

> rick jones
