[ntp:questions] Effect of Gigabit Interrupt coalescence on ntp timing
unruh at invalid.ca
Thu Jun 14 15:44:02 UTC 2012
On 2012-06-14, Terje Mathisen <"terje.mathisen at tmsw.no"> wrote:
> unruh wrote:
>> On 2012-06-13, Terje Mathisen <"terje.mathisen at tmsw.no"> wrote:
>>> unruh wrote:
>>>> On 2012-06-12, David Malone <dwmalone at walton.maths.tcd.ie> wrote:
>>>>> some of these cards actually timestamped the frames when received
>>>>> and then the timestamp provided by SO_TIMESTAMP or similar could
>>>>> be corrected. It seems only a few cards can do this though.
>>>> It would be nice. But then what clock would they use to timestamp the
>>>> packets? If it is an onboard network card clock, one would then also have
>>>> to discipline that network card clock (or at least calibrate it, and as we
>>>> know that calibration drifts, so it would have to be another continuous
>>>> calibration of that clock).
>>> We would _NOT_ need to tune that clock, a static measurement of the rate
>>> is more than enough!
>> That rate changes over time.
> Doesn't matter as long as it stays within 1000 ppm, which is several
> orders of magnitude worse than any _timing_ crystal.
For ntp, the absolute value of the timestamp is also important, not just
the difference between two times. The round trip time is not really of
use to the algorithm.
>>> Assume a really bad on-NIC crystal, supposed to be 10 MHz, but actually
>>> off by 1000 ppm, i.e. an order of magnitude worse than most of the
>>> really cheap motherboard crystals:
>>> The maximum time from packet arrival until the interrupt service routine
>>> can grab it seems to be around a ms, while most packets are handled in a
>>> us or two, right?
>>> Since the NIC clock only needs to measure the time between packet
>>> arrival and ISR read, an error of 1000 ppm over a full ms interval
>>> corresponds to 1 us total, while for all packets that are serviced in
>>> less than 100 us, the measurement error will be 100 ns, since that is
>>> the resolution of the 10 MHz timer.
>> How would it time the receipt time to the interrupt processing time?
>> That latter is something that takes place on the cpu, not in the nic.
>> The onboard nic clock could timestamp the sendout and receipt of the
>> packets, but that would still mean that you would have to determine what
>> that timestamp means in terms of real time. Thus I agree that the
>> roundtrip time could be measured by a bad nic clock, but ntp needs to
>> know what the absolute timestamps are.
> This is the easy part:
> The interrupt handler code does exactly the same as today, i.e. it reads
> the local system time and stores that along with the packet: This is the
> packet reception time.
> In addition it also receives the on-NIC reception counter value from when
> the packet arrived, along with the current value of the same on-NIC counter.
> I.e. like this:
> while there are packets in the NIC buffer:
>     getpacket_with_nic_countervalue(&ntp_packet, &packet_counter)
>     ntp_packet.timestamp -= (ntptime) (nic_counter - packet_counter)
>                             * counter_to_ntp_scale_factor;
IF the nic gives that info, and if they actually put that
getsystemclock_withnsresolution(&current_time) call inside the loop, this
would be OK, since otherwise the system clock reading could be out by more
than a microsecond relative to when the actual packet comes in.
But yes, I agree, this would work.
Modulo the noise in system clock reading.
>>> BTW, all the 1588-capable NICs out there, and there must be quite a few
>>> of them now, will do more than this, and with a better on-board timer.
>> more than what?
> They will hardware timestamp both transmit and receive for all timing
> packets, enabling sub-us level transit time measurements.
Transit time is not really important to ntp.
> They can also automatically update an "in-transit" time which
> accumulates the time spent traversing any intermediate routers/switches.