[ntp:hackers] Blast attack at USNO

David Mills mills at udel.edu
Fri Apr 9 16:12:22 UTC 2010


Terje,

The NIST statement applies to all of their servers collectively, not 25 
million on a single one. As for hashing the MRU list, Dave Hart's 
latest code does that. The MRU list need not include all clients, only 
the ones likely to cause flooding. My scheme described earlier and in 
the paper I cited amounts to a probabilistic approach that was 
demonstrated to be quite effective in an earlier NIST flood.
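
For illustration only (this is a generic sketch of probabilistic 
dropping, not the scheme from the cited paper): drop each packet from 
a suspect source with a probability that grows as the source's 
smoothed arrival rate exceeds the tolerated rate. The names and the 
RATE_LIMIT value here are assumptions.

    #include <stdlib.h>

    #define RATE_LIMIT 2.0  /* tolerated packets per second (assumed) */

    /* Return nonzero if the packet should be dropped; 'avg_rate' is
     * the source's exponentially averaged packets-per-second. */
    int should_drop(double avg_rate)
    {
            double p;

            if (avg_rate <= RATE_LIMIT)
                    return 0;
            /* Drop probability rises toward 1 as the rate climbs. */
            p = 1.0 - RATE_LIMIT / avg_rate;
            return ((double)rand() / RAND_MAX) < p;
    }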

Dave

Terje Mathisen wrote:

> David Mills wrote:
>
>> Hal,
>>
>> Yes, my concern is that the source address is spoofed, which makes it a
>> likely reflector attack. Rich mentioned there were three attackers, two
>
>
> At this point the best we/ntpd can possibly do (on the fly) is to 
> rate-limit the responses, i.e. start discarding most of the packets 
> which seem to come from a single IP.
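>
> A minimal sketch of one way to do that (hypothetical, not current 
> ntpd code): a per-source token bucket that refills at the allowed 
> rate, with the FILL_RATE and BURST values pure assumptions:
>
>     /* Hypothetical per-source token bucket; names are illustrative. */
>     struct bucket {
>             double  tokens;     /* remaining send allowance */
>             double  last;       /* time of last update, in seconds */
>     };
>
>     #define FILL_RATE  1.0      /* allowed responses/second (assumed) */
>     #define BURST      8.0      /* brief burst allowance (assumed) */
>
>     /* Return nonzero if a response may be sent at time 'now'. */
>     int allow(struct bucket *b, double now)
>     {
>             b->tokens += (now - b->last) * FILL_RATE;
>             if (b->tokens > BURST)
>                     b->tokens = BURST;
>             b->last = now;
>             if (b->tokens < 1.0)
>                     return 0;       /* over the limit: drop */
>             b->tokens -= 1.0;
>             return 1;
>     }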
>
>> at the same time. I smell a bot. As for contacting the ISP, that's
>> simply not possible. NIST has reported over 25 million clients of their
>> servers. The primary concern is to protect the servers by intelligently
>
>
> If they really do have 25 M individual IP addresses contacting a 
> single server, then the MRU tables would need to be pretty big indeed...
>
> OTOH, if we compressed that table severely, we could fit a lot more 
> entries; see below:
>
>> deflecting traffic, not necessarily to stop the bots. And it's not only
>> the resources required; when the MRU list gets very long, the time to
>> search it can become significant.
>
>
> It seems obvious to me that the MRU list should be hashed. If 25M 
> entries is a useful target, we would need room for at least 30M 
> entries to keep the number of collisions very low.
>
> Each entry needs at least the (real IPv4 or real/compressed IPv6) 
> source address (6-16 bytes), port/version (4 bytes), count (4 bytes), 
> exp_avg interval (4 bytes) and time (in seconds, 4 bytes) of last 
> query, for a total of not more than 32 bytes.
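>
> To make that concrete, here is a hypothetical 32-byte layout (field 
> names are illustrative, not taken from the actual MRU code):
>
>     #include <stdint.h>
>
>     /* Hypothetical packed MRU entry: 32 bytes covering an IPv4 or a
>      * full/compressed IPv6 source address. */
>     struct mru_entry {
>             uint8_t  addr[16];  /* IPv4 uses 6 bytes, IPv6 up to 16 */
>             uint16_t port;      /* source port */
>             uint16_t version;   /* NTP version and mode bits */
>             uint32_t count;     /* packets seen from this source */
>             float    avg_ival;  /* exp. averaged inter-arrival time */
>             uint32_t last;      /* time of last query, in seconds */
>     };                          /* 16+2+2+4+4+4 = 32 bytes, no padding */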
>
> That works out to about a GB total (30M entries * 32 bytes = 960 MB) 
> for a NIST server with all those 25M individual source addresses, in 
> a table with room for 30M entries.
>
> Yeah, an extra GB is significant, but not really very expensive these 
> days, not for a dedicated S1/S2 server.
>
> Checking an incoming address would take on average less than two 
> accesses to the hash table. The real problem, of course, is that it 
> would make it effectively impossible to handle overflows by quickly 
> locating the least used entry and replacing it.
>
> I would instead size the initial hash lookup so as to generate chains 
> of 2-5 entries (when getting close to full), and then replace the 
> least used entry in the current chain.
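>
> Something like this (again purely illustrative, reusing the 
> hypothetical mru_entry above; not the actual implementation): probe 
> a short fixed window of slots, and when all are taken, overwrite the 
> one with the lowest count:
>
>     #include <string.h>
>
>     #define NSLOTS (30u * 1024u * 1024u)  /* ~30M slots (assumed) */
>     #define CHAIN  4u                     /* probe window, 2-5 slots */
>
>     static struct mru_entry table[NSLOTS]; /* ~960 MB at 32 B/entry */
>
>     /* Find or create the entry for 'addr'; the caller computes the
>      * hash from the address and updates count/timestamps on return. */
>     struct mru_entry *mru_find(const uint8_t addr[16], uint32_t hash)
>     {
>             uint32_t base = hash % NSLOTS;
>             struct mru_entry *victim = &table[base];
>             uint32_t i;
>
>             for (i = 0; i < CHAIN; i++) {
>                     struct mru_entry *e = &table[(base + i) % NSLOTS];
>                     if (memcmp(e->addr, addr, 16) == 0)
>                             return e;       /* hit */
>                     if (e->count == 0) {    /* empty slot: claim it */
>                             memcpy(e->addr, addr, 16);
>                             return e;
>                     }
>                     if (e->count < victim->count)
>                             victim = e;
>             }
>             /* Window full: replace the least used entry. */
>             memset(victim, 0, sizeof(*victim));
>             memcpy(victim->addr, addr, 16);
>             return victim;
>     }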
>
> I guess I'll have to take a look at the current MRU code to see how 
> you guys have done it! :-)
>
> Terje
>


