[ntp:questions] What traffic from pool is normal?

Rick Jones rick.jones2 at hp.com
Tue Jun 21 20:22:02 UTC 2011


Condor <john at stz-bg.com> wrote:
> Hello people,
> may I ask what amount of traffic from the pool is normal?  I sometimes
> have problems ... I think I am getting too many queries.  The problem
> has existed for a long time, but each episode lasts only 30 minutes to
> an hour, and it usually happens when I'm not logged in to see what is
> going on.  Here is the error I get from the kernel:

> net_ratelimit: 686 callbacks suppressed
> nf_conntrack: table full, dropping packet.
> nf_conntrack: table full, dropping packet.
> nf_conntrack: table full, dropping packet.
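
Those "table full" messages mean just what they say: the connection-
tracking table filled up and netfilter started dropping packets.  A
quick way to gauge headroom is to compare the live entry count against
the limit.  A minimal sketch, assuming the standard Linux
/proc/sys/net/netfilter interface (not something from the original
post):

#!/usr/bin/env python
# Compare live conntrack entries against the table limit.  Assumes a
# Linux host with the nf_conntrack module loaded; the paths are the
# standard /proc/sys/net/netfilter interface.

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

count = read_int("/proc/sys/net/netfilter/nf_conntrack_count")
limit = read_int("/proc/sys/net/netfilter/nf_conntrack_max")

print("conntrack entries: %d / %d (%.0f%% full)"
      % (count, limit, 100.0 * count / limit))

On a busy pool server every client's UDP query creates a conntrack
entry, so the table can fill quickly during traffic spikes.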

> I use some optimization on tcp/ip network like:

> # increase TCP max buffer size settable using setsockopt()
> # 16 MB with a few parallel streams is recommended for most 10G paths
> # 32 MB might be needed for some very long end-to-end 10G or 40G paths
> net.core.rmem_max = 16777216 
> net.core.wmem_max = 16777216 
> # increase default values
> net.core.rmem_default = 16777216
> net.core.wmem_default = 16777216
> # increase Linux autotuning TCP buffer limits 
> # min, default, and max number of bytes to use
> # (only change the 3rd value, and make it 16 MB or more)
> net.ipv4.tcp_rmem = 4096 87380 16777216
> net.ipv4.tcp_wmem = 4096 65536 16777216
> # recommended to increase this for 10G NICs
> net.core.netdev_max_backlog = 10000
> net.ipv6.conf.all.forwarding = 1
> net.netfilter.nf_conntrack_tcp_timeout_established = 2000
> net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 2000

Sigh - as pointed out, none of that TCP nonsense applies to NTP, which
runs over UDP.  And frankly, even for TCP, a 16 MB net.core.[rw]mem_max
or tcp_[rw]mem is pointless.  It was (I suspect) merely shotgunning by
10 GbE NIC vendors.
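
To see why: an NTP exchange is a single 48-byte UDP datagram in each
direction, so no TCP window ever enters the picture.  A minimal SNTP
query sketch (the hostname and details are illustrative, not from the
original post):

#!/usr/bin/env python
# Minimal SNTP query: one 48-byte UDP datagram each way.  The pool
# hostname below is only an example.
import socket, struct, time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the 1900 and 1970 epochs

# First byte 0x1b: LI=0, VN=3, Mode=3 (client); the rest stays zero.
packet = b'\x1b' + 47 * b'\0'

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(5.0)
s.sendto(packet, ("pool.ntp.org", 123))
reply, _ = s.recvfrom(512)

# Transmit timestamp seconds live at offset 40, network byte order.
secs = struct.unpack("!I", reply[40:44])[0] - NTP_EPOCH_OFFSET
print("server time: " + time.ctime(secs))

One datagram out, one datagram back; rmem/wmem limits in the megabytes
are simply never touched.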

I will leave the math involving:

Throughput <= WindowSize/RoundTripTime

as an exercise for the reader, but unless one has a non-trivial
latency in their 10GbE LAN, 1MB or so should be more than sufficient
for "link-rate" TCP.  So, one might consider tweaking
net.core.[rw]mem_max to 1048576 or 2x that.  And since the default
(most of the time anyway) upper bound on tcp_[rw]mem is already 4MB,
that does not need to be changed.  I don't think I've ever changed
netdev_max_backlog in any of my netperf testing over 10GbE.
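
For anyone who wants the arithmetic spelled out, a short sketch (the
throughput and RTT figures are illustrative, not measurements):

#!/usr/bin/env python
# Bandwidth-delay product: Throughput <= WindowSize / RTT, so to keep
# the pipe full one needs WindowSize >= Throughput * RTT.

def window_bytes(gbit_per_sec, rtt_ms):
    return gbit_per_sec * 1e9 / 8 * (rtt_ms / 1000.0)

# A 10GbE LAN with a generous 0.5 ms RTT needs well under 1MB:
print("%.0f KB" % (window_bytes(10, 0.5) / 1024))        # ~610 KB

# Even a 1 ms RTT is covered by about 1.2MB:
print("%.2f MB" % (window_bytes(10, 1.0) / 1048576.0))   # ~1.19 MB

Only when the RTT climbs toward WAN territory do multi-megabyte
windows start to pay for themselves.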

rick jones
-- 
Process shall set you free from the need for rational thought. 
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...



