[ntp:questions] Detecting bufferbloat via ntp?

unruh unruh at wormhole.physics.ubc.ca
Tue Feb 8 23:51:48 UTC 2011


On 2011-02-08, Rick Jones <rick.jones2 at hp.com> wrote:
> Dave Täht <d at taht.net> wrote:
>> I've been racking my brain trying to come up with a good way of
>> semi-passively detecting bufferbloat at the datacenter. 
>
>> What would wild swings in latency on the order of seconds from a ntp
>> client register on a ntp server as?
>
> Trying to avoid ICMP fast paths?  Once everything is "stable" the
> polling interval is going to get pretty large (1024 seconds) - watch
> long enough and I suppose one will see buffer bloat in the stats, but
> it might take quite a while to "hit." You may need/want to look for it
> a bit more "actively."
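Probing "a bit more actively", as Rick suggests, could be sketched along
these lines -- a minimal Python illustration using hand-rolled SNTP client
packets, not anything from ntpd itself; the server name, sample count, and
interval are just placeholders:

```python
import socket
import time

def make_sntp_request():
    """Build a 48-byte SNTP client packet: LI=0, VN=3, Mode=3 (client)."""
    return b'\x1b' + 47 * b'\0'

def ntp_rtt(server="pool.ntp.org", port=123, timeout=2.0):
    """Send one SNTP request and return the round-trip time in seconds."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        t0 = time.monotonic()
        s.sendto(make_sntp_request(), (server, port))
        s.recvfrom(48)
        return time.monotonic() - t0

def rtt_spread(server="pool.ntp.org", samples=8, interval=1.0):
    """Sample the RTT repeatedly. A large max-min spread while the link
    is loaded suggests queueing delay (bufferbloat) on the path."""
    rtts = []
    for _ in range(samples):
        try:
            rtts.append(ntp_rtt(server))
        except OSError:
            pass  # drop timed-out or unreachable samples
        time.sleep(interval)
    return (min(rtts), max(rtts)) if rtts else (None, None)
```

The point of sampling every second or so, rather than at ntpd's backed-off
1024-second poll interval, is that a transient multi-second latency swing
has a chance of landing inside the sampling window.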

Not at all clear what buffer bloat is supposed to be. This seems to
imply that it is buffer leakage -- i.e., the buffer keeps growing because
stuff is not properly removed from it. The pointers on the web
seem to imply that the program allocates buffers which are far too large
and that therefore the buffers will need to get paged in, slowing
everything down. 

The network buffers in ntp are tiny. The datagram is far less than 1K. 
I have no idea what the OP is asking -- is he afraid that the writers of
ntpd were incompetent and wants to test for this particular form of
incompetence? Or has he seen evidence suggesting that ntpd suffers
from bufferbloat? 

If the latter, why not tell us the symptoms that make him suspect this. 
If the former, what makes him think that the programmers screwed up?

>
> rick jones
>
> keeps forgetting if any of the interface MIBs specify an outbound
> queue length statistic...
>



