[ntp:questions] 4.2.6p1-RC6, new "pool" implementation, "restrict source" in 4.2.7p22

Dave Hart davehart at gmail.com
Fri Apr 2 04:44:59 UTC 2010


Thanks to major efforts by Dr. Mills and Harlan Stenn to overcome
tactical challenges with the integration of patches and rolling of
release tarballs, we now have new ntp-stable and ntp-dev tarballs.

The normal release process emails which trigger the tarball
announcement emails are being queued temporarily, but the new releases
can be found easily at:

http://www.ntp.org/downloads.html

ntp-4.2.6p1-RC6 will hopefully be the last release candidate before
4.2.6p1 in a week or so.  If you use ntp-stable releases, please
consider taking it for a spin as I believe it is nearly polished.

ntp-dev-4.2.7p22 has a large number of changes accumulated since early
February.  I'm including the ChangeLog snippet below for reference
since the normal announcement is delayed.  The bulk of these changes
relate to pool.ntp.org, with improvements intended for both pool
clients and servers.

On the client side, the ntp.conf "pool" directive operates as a first-
class ntpd server discovery scheme, mirroring the existing
manycastclient scheme closely.  Previously, each "pool" directive
resulted in spinning up as many associations as it found IP addresses
for the pool DNS name (such as us.pool.ntp.org).  In reaction to this,
the pool.ntp.org operators reduced the number of addresses in each
response from 5 to 3, with the objective of minimizing the number of
pool servers each client uses.  Importantly, the old "pool"
implementation resolved DNS once at startup and configured each
resulting IP address as a persistent server association, as if it had
been listed following "server".  If a server became unreachable or
stopped responding, no replacement was found; instead, ntpd bullheadedly
continued to poll the configured server.

As of 4.2.7p22, "pool" directives create a prototype association which
stores the DNS name and options (like iburst).  An async DNS query is
started to resolve the pool name to IP addresses.  When this
completes, as long as more pool servers are desired, ntpd spins up a
preemptible pool association for each IP in relatively short order.
Two knobs influence the decision to stop adding pool (or manycast)
associations:

tos minclock 3 maxclock 6

More servers are added when either there are fewer survivors than
minclock (tally codes +, *, or o in the ntpq peers billboard) or there
are fewer total associations than maxclock.  With the above tos
directive and:

pool us.pool.ntp.org. iburst

ntpd will immediately resolve three IP addresses from us.pool.ntp.org.
and spin associations for them.  The pool prototype (and any other)
association counts towards maxclock, so in this configuration 5
preemptible pool associations would tip maxclock.  After three
preemptibles are kicked off, ntpd will wait for us.pool.ntp.org to
resolve to IP addresses it's not already using, up to 270 seconds with
the current pool.ntp.org DNS TTL.  At least two more preemptible
associations will then be spun.  If the polling of existing
servers has resulted in at least 3 survivors, no more preemptible pool
associations will be started and one leftover IP address will be held
for later.  Otherwise, the third IP address will be used and ntpd will
wait up to 270 seconds for us.pool.ntp.org to resolve to a third set
of three IPs.
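
To make the arithmetic concrete, here is a rough timeline sketch for
the configuration above, assuming the prototype counts as one
association and each DNS response yields three fresh addresses:

t=0s     prototype + 3 preemptibles spun (4 associations, under maxclock)
t=270s   2 more preemptibles spun (6 associations, maxclock tipped);
         the third address is held back if at least minclock survive
later    the reserve address and fresh DNS results are used only while
         fewer than minclock (3) survivors remain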

Note that maxclock is a soft limit, then, which is exceeded freely
when fewer than minclock survivors remain.  This overshoot of maxclock
can be increased by using two, three, or four pool directives:

pool 0.us.pool.ntp.org. iburst
pool 1.us.pool.ntp.org. iburst
...

You may wish to increase maxclock by one for each additional pool
directive since they count against it.  With two pool directives, ntpd
would be able to spin up to 6 pool associations every 270 seconds,
given current pool.ntp.org practice.  With three directives, up to 9
per 270 seconds, and so on.
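
For example (an illustrative sketch, not a vetted recommendation), a
client using four pool names could raise maxclock from 6 to 9 to cover
the three extra prototype associations:

tos minclock 3 maxclock 9
pool 0.us.pool.ntp.org. iburst
pool 1.us.pool.ntp.org. iburst
pool 2.us.pool.ntp.org. iburst
pool 3.us.pool.ntp.org. iburst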

Which brings us to the other side of automatic maintenance of pool
(and manycast) servers:  How do they go away?  When there are more
than maxclock total associations, the hunt for a preemptible to spin
down (demobilize) is on.  After 10 consecutive polls without surviving
the clock selection, each such preemptible association is scored
against all preemptibles on a simple single-digit score:

1 point for good synch and stratum
1 point for at least one bit lit in the reach register (at least one
of the last 8 polls got a response)
1 point for no synchronization loop (such as syncing to us or our
source)
1 point for good root distance
1 point for surviving the clock selection
1 point for not being excess (a survivor not in the top maxclock
survivors)

If the 10-time non-survivor's score is equal to the lowest score among
all preemptible associations, it is demobilized.  So ntpd may
substantially overshoot maxclock initially depending on minclock and
the number of pool directives, but after 10 polling intervals it will
trim the least attractive down to maxclock total associations.
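
As a hypothetical illustration, suppose seven preemptibles remain with
maxclock 6, and assoc G has gone 10 polls without surviving selection
(scores invented for the example):

assoc A-E  score 6  (all six points)
assoc F    score 5  (a survivor, but excess)
assoc G    score 2  (reach bit lit and no loop, failing the other tests)

G's score of 2 equals the lowest among all preemptibles, so G is
demobilized, trimming the total back toward maxclock.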

In addition to the score-based culling, a preemptible association
which becomes unreachable should eventually be removed even if there
are maxclock or fewer associations.  Glancing at the code, that might
not be quite right yet; we may keep a preemptible association long
after it's unreachable so long as we have no more than maxclock
associations.  More testing is required.  The intention is that pool
servers which stop responding are removed, and if needed to reach
maxclock again, replacements are spun.

This email is gargantuan already, but there's another pool-friendly
addition as of 4.2.7p22:  restrict source

restrict default limited kod ignore
restrict source limited kod
restrict 127.0.0.1
restrict ::1

"restrict source" establishes a prototype restriction automatically
added for each association's IP address.  Previously, using the pool
interfered with some locked-down restriction scenarios because the IP
addresses of the pool servers used for a given run of ntpd were not
predictable, so the default restriction had to be loose enough to
allow retrieving time.  "restrict source" lets the operator configure
looser restrictions automatically applied to each association's
address alongside a tighter "restrict default".
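
Putting the two together, a tightly locked-down pool client might look
something like this (a sketch combining the examples above, not a
vetted template):

restrict default limited kod ignore
restrict source limited kod
restrict 127.0.0.1
restrict ::1
tos minclock 3 maxclock 6
pool us.pool.ntp.org. iburst

The "ignore" in the default restriction drops packets from unknown
addresses, while each server ntpd actually associates with is
automatically granted the looser "restrict source" treatment.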

Here's the p22 ChangeLog segment:

(4.2.7p22) 2010/04/02 Released by Harlan Stenn <stenn at ntp.org>
* [Bug 1432] Don't set inheritable flag for linux capabilities.
* [Bug 1465] Make sure time from TS2100 is not invalid.
* [Bug 1483] AI_NUMERICSERV undefined in 4.2.7p20.
* [Bug 1497] fudge is broken by getnetnum() change.
* [Bug 1503] Auto-enabling of monitor for "restrict ... limited"
  wrong.
* [Bug 1504] ntpdate tickles ntpd "discard minimum 1" rate limit if
  "restrict ... limited" is used.
* ntpdate: stop querying source after KoD packet response, log it.
* ntpdate: rate limit each server to 2s between packets.
* From J. N. Perlinger: avoid pointer wraparound warnings in
  dolfptoa(), printf format mismatches with 64-bit size_t.
* Broadcast client (ephemeral) associations should be demobilized only
  if they are not heard from for 10 consecutive polls, regardless of
  surviving the clock selection.  Fix from David Mills.
* Add "ntpq -c ifstats" similar to "ntpdc -c ifstats".
* Add "ntpq -c sysstats" similar to "ntpdc -c sysstats".
* Add "ntpq -c monstats" to show monlist knobs and stats.
* Add "ntpq -c mrulist" similar to "ntpdc -c monlist" but not
  limited to 600 rows, and with filtering and sorting options:
  ntpq -c "mrulist mincount=2 laddr=192.168.1.2 sort=-avgint"
  ntpq -c "mrulist sort=addr"
  ntpq -c "mrulist mincount=2 sort=count"
  ntpq -c "mrulist sort=-lstint"
* Modify internal representation of MRU list to use l_fp fixed-point
  NTP timestamps instead of seconds since startup.  This increases the
  resolution and substantially improves accuracy of sorts involving
  timestamps, at the cost of flushing all MRU entries when the clock
  is stepped, to ensure the timestamps can be compared with the current
  get_systime() results.
* Add ntp.conf "mru" directive to configure MRU parameters, such as
  "mru mindepth 600 maxage 64 maxdepth 5000 maxmem 1024" or
  "mru initalloc 0 initmem 16 incalloc 99 incmem 4".  Several pairs
are
  equivalent with one in units of MRU entries and its twin in units of
  kilobytes of memory, so the last one used in ntp.conf controls:
  maxdepth/maxmem, initalloc/initmem, incalloc/incmem.  With the above
  values, ntpd will preallocate 16kB worth of MRU entries, allocating
  4kB worth each time more are needed, with a hard limit of 1MB of MRU
  entries.  Until there are more than 600 entries none would be reused.
  Then only entries for addresses last seen 64 seconds or longer ago
  are reused.
* Limit "ntpdc -c monlist" response in ntpd to 600 entries, the
previous
  overall limit on the MRU list depth which was driven by the monlist
  implementation limit of one request with a single multipacket
  response.
* New "pool" directive implementation modeled on manycastclient.
* Do not abort on non-ASCII characters in ntp.conf, ignore them.
* ntpq: increase response reassembly limit from 24 to 32 packets, add
  discussion in comment regarding results with even larger MAXFRAGS.
* ntpq: handle "passwd MYPASSWORD" (without prompting) as with ntpdc.
* ntpdc: do not examine argument to "passwd" if not supplied.
* configure: remove check for pointer type used with qsort(), we
  require ANSI C which mandates void *.
* Reset sys_kodsent to 0 in proto_clr_stats().
* Add sptoa()/sockporttoa() similar to stoa()/socktoa() adding :port.
* Use memcpy() instead of memmove() when buffers can not overlap.
* Remove sockaddr_storage from our sockaddr_u union of sockaddr,
  sockaddr_in, and sockaddr_in6, shaving about 100 bytes from its size
  and substantially decreasing MRU entry memory consumption.
* Extend ntpq readvar (alias rv) to allow fetching up to three named
  variables in one operation:  ntpq -c "rv 0 version offset frequency".
* ntpq: use srchost variable to show .POOL. prototype associations'
  hostname instead of address 0.0.0.0.
* "restrict source ..." configures override restrictions for time
  sources, allows tight default restrictions to be used with the pool
  directive (where server addresses are not known in advance).
* Ignore "preempt" modifier on manycastclient and pool prototype
  associations.  The resulting associations are preemptible, but the
  prototype must not be.
* Maintain and use linked list of associations (struct peer) in ntpd,
  avoiding walking 128 hash table entries to iterate over peers.
* Remove more workarounds unneeded since we require ISO C90 AKA ANSI C:
  - remove fallback implementations for memmove(), memset, strstr().
  - do not test for atexit() or memcpy().
* Collapse a bunch of code duplication in ntpd/ntp_restrict.c added
  with support for IPv6.
* Correct some corner case failures in automatically enabling the MRU
  list if any "restrict ... limited" is in effect, and in disabling MRU
  maintenance. (ntp_monitor.c, ntp_restrict.c)
* Reverse the internal sort order of the address restriction lists, but
  preserve the same behavior.  This allows removal of special-case code
  related to the default restrictions and more straightforward lookups
  of restrictions for a given address (now, stop on first match).
* Move ntp_restrict.c MRU doubly-linked list maintenance code into
  ntp_lists.h macros, allowing more duplicated source excision.
* Repair ntpdate.c to no longer test HAVE_TIMER_SETTIME.
* Do not reference peer_node/unpeer_node after freeing when built with
  --disable-saveconfig and using DNS.

None of the new stuff is documented yet.  It will be before too long.
I've described much of it before here and left clues in the ChangeLog,
and questions are welcome.  Part of my reason for deferring
documentation is to avoid documenting something only to change it later
and have to rewrite the docs.

I will post separately about the improvements in 4.2.7p22 for high-
traffic server operators including pool server operators.

Cheers,
Dave Hart


