[Pool] Firewall recommendations for ntp server?

mrex at tranzeo.com
Fri May 8 02:16:55 UTC 2015


Hi all,

I keep running into a firewall/conntrack problem, and I'm curious whether
anyone would be willing to share a stable firewall recipe.

May  7 19:45:16 ntp2 kernel: nf_conntrack: table full, dropping packet
May  7 19:45:16 ntp2 kernel: nf_conntrack: table full, dropping packet
May  7 19:45:16 ntp2 kernel: nf_conntrack: table full, dropping packet

I have several NTP servers set up on VPSes to provide time (they shouldn't be
doing anything other than NTP), and they are part of the North American pool.
They have 512MB of RAM and run CentOS 7 with the latest NTP, 4.2.8p2.  They
have public IPs and have no need for NAT or connection tracking (AFAIK).
However, out of the box firewalld does use connection tracking, which causes
a problem: once the maximum number of tracked connections is reached, no new
connections can be made and the server appears unresponsive/offline.
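
For anyone curious, this is just how I've been watching the table fill up --
nothing fancy, and the watch interval is arbitrary:

  # compare the live entry count against the configured limit
  sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
  # or keep an eye on it while the spike is happening
  watch -n 5 'sysctl net.netfilter.nf_conntrack_count'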

Most of the time, the conntrack count (/sbin/sysctl
net.netfilter.nf_conntrack_count) is < 10,000 (anecdotally, from the corner
of my eye, mostly < 2000).  Periodically it spikes to well over 32768 for
minutes at a time.  The default conntrack limit (nf_conntrack_max) for 512MB
was around 16384, and that was pretty easy to hit.  I've bumped it up to
32768 and decreased many timeouts, and there are still several times a day
when the limit is reached.

The few times I was able to tcpdump the interface while the connection count
was high, I only saw NTP traffic; nothing looked like a DDoS or hacking (99%
of it was NTP client/server packets).  My guess is that something upstream
got rebooted and tons of devices are all hitting the box at once?  I'm not
sure; my debugging has been limited so far, and I haven't set anything up yet
to figure out how many clients are hitting the box at any given moment (rough
sketch below).  It looks like the server is receiving incoming packets but is
unable to send responses.  I'm also somewhat confused because the NTP traffic
is UDP, while the conntrack entries with the extremely high timeouts seemed
to be TCP-related rather than UDP...
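
If it's useful, this is the kind of thing I was planning to run the next time
the count spikes, to get a rough idea of how many distinct clients are hitting
the box at once.  eth0 is a placeholder for whatever the public interface is,
and the field-splitting only handles IPv4 addresses:

  # capture ~10 seconds of inbound NTP and count unique source IPs
  timeout 10 tcpdump -l -ni eth0 'udp and dst port 123' 2>/dev/null \
      | awk '{print $3}' | cut -d. -f1-4 | sort -u | wc -l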

It sounds like connection tracking is mostly for NAT, and that if you have a
public IP without masquerading it can/should be removed.  So far, my attempts
to remove it have left the firewall unable to start, so I'm looking for
recommendations on a decent firewall setup for a basic server with 512MB of
RAM.  I can:

1. Keep bumping the maximum conntrack limit, but I think the problem will
continue to happen, just less often, until RAM is used up and other problems
appear.
2. Decrease timeouts so that connections are freed much sooner, reducing how
often the limit is reached, maybe.
3. Disable connection tracking altogether (I think this is the best solution,
no? -- see the sketch after this list).
4. Get a bigger box with more RAM (more money, and it may not solve the
problem entirely?).
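
For option 3, what I have in mind (untested on my boxes, so treat the rule
syntax as a sketch) is to use firewalld's direct interface to add NOTRACK
rules in the raw table so NTP packets bypass conntrack entirely:

  firewall-cmd --permanent --direct --add-rule ipv4 raw PREROUTING 0 \
      -p udp --dport 123 -j NOTRACK
  firewall-cmd --permanent --direct --add-rule ipv4 raw OUTPUT 0 \
      -p udp --sport 123 -j NOTRACK
  # repeat with "ipv6" if the box answers over IPv6, then reload
  firewall-cmd --reload

My understanding is that untracked packets never match ESTABLISHED/RELATED
rules, so port 123/udp would still need to be explicitly allowed (e.g.
firewall-cmd --permanent --add-service=ntp).  I'd appreciate confirmation
from anyone who has actually run a pool server this way.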

Thanks in advance; any suggestions or advice are much appreciated!

Mike

Current conntrack timeouts, lower than defaults:

sysctl -a | grep conntrack | grep timeout
net.netfilter.nf_conntrack_events_retry_timeout = 15
net.netfilter.nf_conntrack_frag6_timeout = 60
net.netfilter.nf_conntrack_generic_timeout = 120
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_icmpv6_timeout = 30
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 3600
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 10
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 30
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
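
These are set at runtime right now; to make them (and the higher limit)
survive a reboot, I was planning to drop them into a sysctl.d file, something
like:

  # /etc/sysctl.d/90-conntrack.conf  (filename is arbitrary)
  net.netfilter.nf_conntrack_max = 32768
  net.netfilter.nf_conntrack_udp_timeout = 30
  net.netfilter.nf_conntrack_udp_timeout_stream = 180
  # ...plus the other timeouts above, then apply with:  sysctl --system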

Best reference so far:
http://www.pc-freak.net/blog/resolving-nf_conntrack-table-full-dropping-packet-flood-message-in-dmesg-linux-kernel-log/


