[ntp:questions] Testing throughput in NTP servers

Terje Mathisen "terje.mathisen at tmsw.no" at ntp.org
Mon Sep 17 16:34:13 UTC 2012

Ulf Samuelsson wrote:
>>> Instead the ntp code adds a delay to the incoming packet timestamp,
>>> and the FPGA H/W sends out the packet at the correct time.
>> OK, this still means that the host CPU must be involved in every packet.
> Yes, in the current implementation.
> ....
>>> Line speed is 10M+ packets/second.
>> That should be easy. (Famous last words!)
> SMOP = Small Matter Of Programming.

Yep, this is definitely a SMOP.

I don't know exactly how your intended 10GE NIC works, but I'll assume 
it has some form of bus-master interface, since anything else will 
definitely NOT run at wire speed, right?

I'll also assume that the NIC driver sets up the required input/output 
buffers, so that user-level processes can access them.

In that case my N-1 cores scenario maps nicely, even without kernel access:

In my cache-aligned round-robin shm buffer test code the writer 
thread/core managed to generate 14 M packets/second, and each of the 3 
reader threads/cores picked up pretty much all of them as they passed by.

Using fetch&add (LOCK XADD on x86) to atomically grab the next packet 
works nicely even when 3 cores are spinning in a tight loop doing 
nothing else.

As soon as we add some processing time (to fetch and interpolate the 
current timestamp), lock contention will drop.

We'll use the same algorithm to drop outgoing packets into the output 
buffer. (With proper packet scatter/gather bus master hw, it is probably 
possible to hand packets to the NIC in the form of a list of 
pointer/size pairs. If so, we can make the list entries fixed size (64 
or 128 bits) and reuse the same buffer for the actual incoming and 
outgoing data, avoiding packet copying.)

Most packets will be processed within a fraction of a microsecond of the 
time they are received, and for NTP this is good enough, even without HW 
timestamping/1588 hardware.

- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
