[ntp:hackers] ntp and signalled IO
brian.utterback at sun.com
Tue Apr 18 15:13:15 UTC 2006
Poul-Henning Kamp wrote:
> In message <4444E6B1.4020904 at sun.com>, Brian Utterback writes:
>>> Not very. The numbers of file descriptors which NTPD must
>>> monitor is very small, we're talking ten or maybe twenty.
>> Alas, this is not true. I wish it were. Many of Sun's customers have
>> logical interfaces defined on systems up to the tens of thousands.
>> Since NTP insists on binding to all of them separately, it ends
>> up with the classic scale problems.
> This silliness in NTPD has to stop at some point soon anyway.
> Doesn't Solaris have IP_RECVDSTADDR ?
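The idea here is that with the destination address delivered as ancillary data, ntpd could bind one wildcard socket instead of one socket per logical interface. A minimal sketch of that pattern, using Linux's counterpart to the BSD/Solaris IP_RECVDSTADDR option (IP_PKTINFO; the names, option values, and the loopback test traffic below are illustrative assumptions, not anything from ntpd itself):

```python
import socket
import struct

# Linux's counterpart to IP_RECVDSTADDR is IP_PKTINFO; 8 is the Linux
# option value, used as a fallback where Python does not export the name.
IP_PKTINFO = getattr(socket, "IP_PKTINFO", 8)

# One wildcard socket instead of one socket per logical interface.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.IPPROTO_IP, IP_PKTINFO, 1)
rx.bind(("0.0.0.0", 0))
port = rx.getsockname()[1]

# Illustrative traffic: send ourselves a datagram via loopback.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"ntp?", ("127.0.0.1", port))

# struct in_pktinfo is { int ipi_ifindex; struct in_addr ipi_spec_dst;
# struct in_addr ipi_addr; }; ipi_addr is the packet's destination
# address, which is what a reply must use as its source.
data, ancdata, flags, addr = rx.recvmsg(1024, socket.CMSG_SPACE(12))
dst = None
for level, ctype, cdata in ancdata:
    if level == socket.IPPROTO_IP and ctype == IP_PKTINFO:
        ifindex, spec_dst, ipi_addr = struct.unpack("I4s4s", cdata[:12])
        dst = socket.inet_ntoa(ipi_addr)
print(dst)
```

The reply path then sets the source address explicitly (via IP_PKTINFO/IP_SENDSRCADDR ancillary data on sendmsg), so the peer still sees the address it originally sent to.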
>> On a related note, it occurred to me, that for systems that support
>> SO_TIMESTAMP and no serial based refclocks, or SO_TIMESTAMP and
>> serial refclocks with the tty_clk or parse drivers, we could greatly
>> simplify the i/o loop and get rid of recvbufs entirely.
> Once you put in ISC's eventlib, the central loop disappears into
> the eventlib, and exactly how the refclocks get the attention they
> want is entirely up to themselves (by means of eventlib facilities).
> In other words, the core NTPD would not know anything what goes
> on inside the refclock anymore. This is _the_ BIG improvement.
> The most tricky part in using eventlib is actually polling PPS-API,
> because for signals like DCF77, MSF and WWV you want to poll at
> certain times relative to UTC seconds (I'll elaborate on this in
> due time, for now just take it from me) but you want the
> eventlib engine to run on CLOCK_MONOTONIC so as to not be affected
> by the steps NTPD might subject the UTC clock to.
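The trick described above is to derive a monotonic deadline from a one-shot comparison of the two clocks, so that later steps to the UTC clock cannot move an already-armed timer. A small sketch of that conversion (the helper name and phase parameter are my own, not eventlib's API):

```python
import time

def next_utc_second_deadline(phase=0.0):
    """Return a CLOCK_MONOTONIC deadline for the next UTC second plus
    an optional phase offset. The two clocks are sampled back to back;
    once the deadline is computed, a step applied to the UTC clock no
    longer affects when the timer fires."""
    mono = time.clock_gettime(time.CLOCK_MONOTONIC)
    real = time.clock_gettime(time.CLOCK_REALTIME)
    wait = (1.0 - (real % 1.0)) + phase  # time until next UTC second
    return mono + wait
```

An event engine would re-derive the deadline each second, so even a stepped UTC clock only perturbs a single polling interval.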
I am not sure that this is the only tricky part. My point about the
recvbufs is that ntpd currently tries to read messages as quickly as
possible and timestamps them as they are read (using a single static
timestamp for each group of messages, which seems wrong to me), and
only processes the buffers when there is no more input to be read.
This matters because jitter is introduced by delays in reading a
message, but is immune to delays in processing it. So, is there a
way in eventlib to prioritize events in this manner? My point about
SO_TIMESTAMP and the parse driver is that with those in use, the
timestamp is taken when the data is received by the system, not when
it is read, so the jitter becomes immune to delays in reading. That
makes the timeliness of the read much less important, and we can
simply read, process, transmit, and repeat as necessary.
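The SO_TIMESTAMP behavior described above can be sketched as follows: the kernel records a struct timeval at packet arrival and delivers it as ancillary data with recvmsg, so a late read still gets the arrival time. (The option value fallback and the loopback test traffic are illustrative assumptions; on Linux, SCM_TIMESTAMP has the same value as SO_TIMESTAMP.)

```python
import socket
import struct

# Linux value of SO_TIMESTAMP (29), as a fallback where Python does
# not export the constant.
SO_TIMESTAMP = getattr(socket, "SO_TIMESTAMP", 29)

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, SO_TIMESTAMP, 1)
rx.bind(("127.0.0.1", 0))
port = rx.getsockname()[1]

# Illustrative traffic: send ourselves a datagram via loopback.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"ntp?", ("127.0.0.1", port))

# The kernel attaches a struct timeval (two native longs) taken when
# the datagram arrived, not when we got around to reading it.
tv_size = struct.calcsize("ll")
data, ancdata, flags, addr = rx.recvmsg(1024, socket.CMSG_SPACE(tv_size))
sec = usec = None
for level, ctype, cdata in ancdata:
    if level == socket.SOL_SOCKET and ctype == SO_TIMESTAMP:
        sec, usec = struct.unpack("ll", cdata[:tv_size])
```

With the arrival timestamp carried alongside the datagram, the event loop is free to process, transmit, and read in any order without adding jitter.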
Quidquid latine dictum sit, altum sonatur.
Brian Utterback - OP/N1 RPE, Sun Microsystems, Inc.