[ntp:questions] A proposal to use NIC launch time support to improve NTP

Dennis Ferguson dennis.c.ferguson at gmail.com
Thu Dec 20 17:50:35 UTC 2012


On 19 Dec, 2012, at 11:05 , unruh <unruh at invalid.ca> wrote:
> On 2012-12-19, Hal Murray <hal-usenet at ip-64-139-1-69.sjc.megapath.net> wrote:
>> 
>> Doesn't the PPS signal to the kernel have to go over the same PCI bus?
>> 
>> I'd guess that you would get better results from a network card.
>> That's assuming it has a good clock.  All you have to do is read
>> a counter.  There is no interrupt latency.  You can also read it
>> several times and pick the best one.
> 
> Pick the best one? How would you know what the best one was?

If you can sample both the system clock and the network card clock
with a read of a counter, you can take samples with this instruction
sequence (though it may take some work to find the magic instructions
which do the operations in the order written):

    <read clock_A>
    <read clock_B>
    <read clock_A>

Better, less noisy samples will be associated with smaller values of
(A_after - A_before).  You can eliminate the source of the largest errors
in individual samples if you can disable interrupts on the processor core
while doing that.  You can also take independent samples as often as you
need to, if that improves the filtered result.  That is, you aren't limited
to one sample per second the way the PPS signal is; if 5 or 20 samples per
second improve the accuracy of the filtered result, you can just take that
many.
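
To make that concrete, here is a rough sketch in C of what the sampling
loop might look like.  read_system_clock() and read_nic_clock() are just
stand-ins for whatever actually reads the two counters (the TSC, a
memory-mapped NIC register, or similar); they are not a real API:

    #include <stdint.h>

    uint64_t read_system_clock(void);   /* stand-in: e.g. the TSC */
    uint64_t read_nic_clock(void);      /* stand-in: a memory-mapped NIC counter */

    struct clock_sample {
        uint64_t a_before;   /* system clock just before the NIC read */
        uint64_t b;          /* NIC clock */
        uint64_t a_after;    /* system clock just after the NIC read */
    };

    /*
     * Take n A/B/A sandwich samples and keep the one with the smallest
     * (a_after - a_before) window; the narrower the window, the less
     * room there is for the NIC read to have been delayed.
     */
    struct clock_sample
    best_sample(unsigned n)
    {
        struct clock_sample best = { 0, 0, UINT64_MAX };
        unsigned i;

        for (i = 0; i < n; i++) {
            struct clock_sample s;

            s.a_before = read_system_clock();
            s.b        = read_nic_clock();
            s.a_after  = read_system_clock();

            if (s.a_after - s.a_before < best.a_after - best.a_before)
                best = s;
        }
        return best;
    }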

The biggest problem with this isn't jitter, which can be mitigated by
taking and filtering additional samples.  It is instead the fact that the
minimal value of (A_after - A_before) is probably going to be on the order
of 100 ns if clock_B is a PCIe peripheral, and it will be difficult to know
exactly where in that interval clock_B was sampled, leading to a (constant)
ambiguity of +/- 50 ns.  When I did an FPGA implementation of this for
PCI-X I had the <read clock_B> part capture two timestamps instead: one
captured speculatively at the very start of the bus cycle, which was returned
as the result of the read, and one at the very end of the bus cycle, which
was stored so that it could be read later.  That reduced the ambiguity to
about +/- 7 ns for that card (though even +/- 50 ns is pretty good).
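
Given one such sample, the obvious thing to do is treat the midpoint of
[A_before, A_after] as the instant clock_B was captured, which is where
the +/- 50 ns figure above comes from.  A sketch, with made-up names and
everything in nanoseconds:

    #include <stdint.h>

    struct offset_estimate {
        int64_t  offset_ns;      /* estimated clock_B - clock_A */
        uint64_t ambiguity_ns;   /* +/- bound from not knowing the capture instant */
    };

    /* assume clock_B was captured somewhere inside [a_before, a_after] */
    static struct offset_estimate
    estimate_offset(uint64_t a_before_ns, uint64_t b_ns, uint64_t a_after_ns)
    {
        struct offset_estimate e;
        uint64_t a_mid = a_before_ns + (a_after_ns - a_before_ns) / 2;

        e.offset_ns    = (int64_t)(b_ns - a_mid);
        e.ambiguity_ns = (a_after_ns - a_before_ns) / 2;
        return e;
    }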

This is a very good way to do time transfers between a pair of clocks.  What
is needed is a system call interface which recognises that systems often
have more than one clock, only one of which will be the system clock, and
which provides an API to compare these clocks and independently adjust them.
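
Something along these lines, purely as an illustration of the shape such
an interface might take; none of these names or calls exist today:

    #include <stdint.h>

    typedef int hwclockid_t;    /* hypothetical handle: system clock, a NIC's clock, ... */

    struct clock_comparison {
        hwclockid_t clock_a;    /* reference, usually the system clock */
        hwclockid_t clock_b;    /* clock being measured, e.g. a NIC counter */
        int64_t     offset_ns;  /* B - A from one sandwich sample */
        uint64_t    window_ns;  /* A_after - A_before for that sample */
    };

    /* take one sandwich sample of clock_b against clock_a */
    int clock_compare(hwclockid_t clock_a, hwclockid_t clock_b,
                      struct clock_comparison *out);

    /* slew/step clock_b and trim its frequency without touching clock_a */
    int clock_adjust(hwclockid_t clock_b, int64_t step_ns, int32_t freq_ppb);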

> Not sure what you mean by a "good clock". It certainly will not be an
> accurate clock. It may be one whose drift rate is not too bad, although
> I suspect it will change with temperature. 

There are two contributors to the ultimate synchronisation quality of
a clock: the stability of the frequency source driving it, and the
speed and accuracy with which you can measure that frequency and adjust
for the errors.  Dr. Mills's book characterises the tradeoff between the
two as the "Allan intercept"; lower is better.  The frequency source on
the network card may be no better than your system clock, but the hardware
PPS timestamps allow a speedy and precise determination of its frequency,
which lets you adjust for changes in that frequency before a big time
offset accumulates (it moves the Allan intercept lower and to the left).
A clock which can be kept more accurate is a "good clock", whether that
accuracy is achieved by a more stable frequency source driving the clock
or by more accurate frequency measurements against a good reference.
Don't fixate on the quality of the crystal alone; there is more to it
than that.
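
If you want to see where a particular clock's Allan intercept sits, the
standard way is to compute the Allan deviation of its phase error over a
range of averaging intervals and look for the minimum; faster, cleaner
measurements push that minimum down and to shorter intervals.  A sketch
of the usual overlapping estimator (textbook formula, not NTP code):

    #include <math.h>
    #include <stddef.h>

    /*
     * phase[]: n phase-error samples (seconds), taken every tau0 seconds.
     * m:       averaging factor, so tau = m * tau0.
     * Returns the overlapping Allan deviation at that tau, or -1 if n is
     * too small.
     */
    double
    allan_deviation(const double *phase, size_t n, size_t m, double tau0)
    {
        double tau = m * tau0;
        double sum = 0.0;
        size_t i;

        if (n < 2 * m + 1)
            return -1.0;

        for (i = 0; i + 2 * m < n; i++) {
            double d = phase[i + 2 * m] - 2.0 * phase[i + m] + phase[i];
            sum += d * d;
        }
        return sqrt(sum / (2.0 * (double)(n - 2 * m) * tau * tau));
    }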

Dennis Ferguson


