[Pool] Prevent packet storm

lst_hoe02 at 79365-rhs.de
Fri Mar 25 08:52:12 UTC 2011

Zitat von Andy Fletcher <andy at x31.com>:

> On 03/24/2011 02:46 PM, lst_hoe02 at 79365-rhs.de wrote:
>> Hello,
>> a few days ago I put a public NTP server into the pool. Today I
>> discovered that ntpd was using around 5% CPU and found a constant
>> packet flow of around 500..1000 packets per second from a single IP
>> address.
>> Any hints on how to deal with this, besides dropping the packets with
>> iptables?
> iptables is the way to go, but you don't need to hardcode their address.
> Use the recent module to drop any packets from offenders who exceed a
> given number of packets per second, averaged over a period. After a
> while they will give up and try a different server.
> This has the advantage that it resets itself once they get below the
> threshold. The two lines below will do this (adjust -i to match your
> interface):
> iptables -A INPUT -i eth0 -p udp -m udp --dport 123  \
> -m recent --set --name NTPTRAFFIC --rsource
> iptables -A INPUT -i eth0 -p udp -m udp --dport 123  \
> -m recent --update --seconds 60 --hitcount 7 \
> --name NTPTRAFFIC --rsource -j DROP
> You can view the connecting hosts by looking at the conntrack table:
> cat /proc/net/ip_conntrack | grep dport=123
> And you can see what sort of performance you are getting by looking at
> the iptables stats
> iptables -n -L -v  | grep 123
> I've been running it for a while on a server in Amsterdam and the
> abusive clients disappeared almost instantly. If I check now it shows
> very few attempts:
> iptables -n -L -v  | grep 123
> 1038K   79M DROP       udp  --  eth0   *   udp dpt:123 state NEW
>             recent: UPDATE seconds: 60 hit_count: 7 name: NTPTRAFFIC side: source
>   74M 5613M            udp  --  eth0   *   udp dpt:123 state NEW
>             recent: SET name: NTPTRAFFIC side: source
> But I'm serving a lot of ntp clients (over 5k in the last minute):
> cat /proc/net/ip_conntrack | grep dport=123 | wc -l
> 5615
> There is a balance between conntrack table size and the count period.
> A limit of 7 packets in one minute per client appears to work well and
> allows clients to use iburst without being dropped.
> I'd love to hear comments on this.
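The --seconds/--hitcount behaviour of the rules quoted above amounts to a per-source sliding window. Here is a minimal Python sketch of that logic (an illustration only, not the kernel module's actual code; the class and method names are my own invention):

```python
from collections import defaultdict, deque

class RecentMatch:
    """Sliding-window rate limit per source address, roughly like
    iptables -m recent --update --seconds 60 --hitcount 7."""

    def __init__(self, seconds=60, hitcount=7):
        self.seconds = seconds
        self.hitcount = hitcount
        self.seen = defaultdict(deque)  # source IP -> packet timestamps

    def accept(self, src, now):
        """Record a packet from src at time `now` (seconds). Return False
        if it would be dropped, i.e. this is the hitcount-th (or later)
        packet from src within the window."""
        q = self.seen[src]
        while q and now - q[0] > self.seconds:  # expire old entries
            q.popleft()
        q.append(now)
        return len(q) < self.hitcount
```

With the defaults, a client gets 6 packets per minute before being dropped, which matches the point about iburst surviving the limit while a 500+ packet/second abuser is cut off almost immediately.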

This reminds me that I already used ipt_recent some time ago to  
protect a mail server, but the module was not available inside the  
OpenVZ container I'm now using. I will recheck, thanks for the hint.

The offender is now gone, after some 16M packets (~1.2 GB of traffic!) dropped.

Chain INPUT (policy DROP 2213 packets, 132K bytes)
pkts bytes target     prot opt in     out     source        destination
16M  1209M DROP       all  --  *      *
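As a sanity check (the 16M figure is rounded), those counters are consistent with minimal NTP client requests: a 48-byte NTP payload plus 20 bytes of IPv4 header and 8 bytes of UDP header is 76 bytes per packet on the wire:

```python
# Rough sanity check on the dropped-traffic counters above.
packet_bytes = 48 + 20 + 8   # NTP payload + IPv4 header + UDP header
packets = 16_000_000         # "16M" from the iptables counter, rounded
total_bytes = packets * packet_bytes
print(total_bytes / 1e9)     # roughly 1.2 GB, matching the 1209M byte counter
```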

This clearly shows why the pool is needed ;-)


