[ntp:questions] Improving ntpd rate controls for busier servers

Dave Hart davehart at gmail.com
Sun Feb 28 02:37:41 UTC 2010


Until now, ntpd's most-recently-used list of remote network addresses
has been limited to 600 entries.  This list is used to (attempt to)
enforce "restrict ... limited" (and the related "restrict ... kod").
If a server running ntpd receives substantial traffic, the 600 most
recently seen remote addresses may not provide enough history to
enforce the default 2s minimum between packets from one address, let
alone the 8s average minimum between packets over a longer period.

As far as I can tell, the main reason the limit has stayed as low as
600 is that the associated "ntpdc -c monlist" operation becomes
increasingly likely to fail as the response grows longer: the response
is carried in many small UDP packets, those packets can be lost, and
our management protocols don't recover from such loss.

I have experimented with changing the strategy used by ntpd and ntpdc.
For "ntpdc -c monlist" I punted and simply returned the most recent
600 entries.  I welcome others to experiment with that value (in ntpd/
ntp_request.c) and/or investigate ways to make the management protocol
more reliable.  One relatively easy fix that comes to mind is to read
all response packets into buffers before attempting to process/display
anything from them.
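
To make that suggestion concrete, here is a rough sketch of the idea
in C.  This is not ntpdc's actual code: the socket setup, request
transmission, mode 7 sequencing/reassembly and error handling are all
omitted or assumed.  It only shows "buffer every response datagram
first, parse and display afterward".

/* Sketch only: collect all response datagrams before processing any. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_FRAGS 128
#define FRAG_SIZE 512        /* mode 7 responses are small UDP packets */

struct frag {
    size_t len;
    char   data[FRAG_SIZE];
};

/* Read datagrams from fd until a one-second lull; return the count. */
static int collect_responses(int fd, struct frag *frags, int max_frags)
{
    int n = 0;

    while (n < max_frags) {
        fd_set rd;
        struct timeval tv = { 1, 0 };
        ssize_t got;

        FD_ZERO(&rd);
        FD_SET(fd, &rd);
        if (select(fd + 1, &rd, NULL, NULL, &tv) <= 0)
            break;                    /* timeout or error: stop reading */
        got = recv(fd, frags[n].data, FRAG_SIZE, 0);
        if (got <= 0)
            break;
        frags[n].len = (size_t)got;
        n++;
    }
    return n;
}

int main(void)
{
    struct frag *frags = calloc(MAX_FRAGS, sizeof(*frags));
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int count, i;

    if (frags == NULL || fd < 0)
        return 1;
    /* A real client would send the monlist request on fd here. */
    count = collect_responses(fd, frags, MAX_FRAGS);
    /* Only now, with everything buffered, process and display. */
    for (i = 0; i < count; i++)
        printf("response fragment %d: %zu bytes\n", i, frags[i].len);
    close(fd);
    free(frags);
    return 0;
}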

I changed the way the MRU list (monlist) is maintained internally:

 * The following ntp.conf "mru" knobs come into play determining
 * the depth (or count) of the MRU list:
 * - mru_mindepth ("mru mindepth") is a floor beneath which
 *   entries are kept without regard to their age.  The
 *   default is 600 which matches the longtime implementation
 *   limit on the total number of entries.
 * - mru_maxage ("mru maxage") is a ceiling on the age in
 *   seconds of entries.  Entries older than this are
 *   reclaimed once mru_mindepth is exceeded.  64s default.
 *   Note that entries older than this can easily survive
 *   as they are reclaimed only as needed.
 * - mru_maxdepth ("mru maxdepth") is a hard limit on the
 *   number of entries.
 * - "mru maxmem" sets mru_maxdepth to the number of entries
 *   which fit in the given number of kilobytes.  4096 default.
 * - mru_initalloc ("mru initalloc") sets the count of
 *   initial allocation of MRU entries.
 * - "mru initmem" sets mru_initalloc in units of kilobytes.
 *   The default is 16.
 * - mru_incalloc ("mru incalloc") sets the number of entries to
 *   allocate on-demand each time the free list is empty.
 * - "mru incmem" sets mru_incalloc in units of kilobytes.
 *   The default is 4.
 * Whichever of "mru maxmem" or "mru maxdepth" occurs last in
 * ntp.conf controls.  Similarly for "mru initalloc" and "mru
 * initmem", and for "mru incalloc" and "mru incmem".

I also exposed some of the variables used to implement this policy via
ntpq readvar:

ntpq -c "rv 0 mru_enabled mru_depth mru_deepest" -c "rv 0 mru_maxdepth
mru_mem mru_maxmem" -c "rv 0 mru_mindepth mru_maxage"
mru_enabled=0x3, mru_depth=19, mru_deepest=19
mru_maxdepth=24966, mru_mem=3, mru_maxmem=4096
mru_mindepth=600, mru_maxage=64

The proposed defaults would consume up to 4MB of memory allocated 4KB
at a time with a hard limit of around 15,000 entries on 64-bit
platforms and around 25,000 on 32-bit.  If the addresses seen in the
prior 64 seconds always fit in less memory than that, the full amount
is never allocated.
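
As a back-of-the-envelope check, here is where those limits come from.
The per-entry sizes are my assumptions, reverse-derived from the
figures above rather than taken from the source:

/* Back-of-the-envelope only: per-entry sizes below are assumed
 * (derived from the figures in this post), not read from ntpd. */
#include <stdio.h>

int main(void)
{
    const int maxmem_kb = 4096;      /* default "mru maxmem" */
    const int entry_bytes_32 = 168;  /* assumed 32-bit MRU entry size */
    const int entry_bytes_64 = 280;  /* assumed 64-bit MRU entry size */

    printf("32-bit cap: ~%d entries\n", maxmem_kb * 1024 / entry_bytes_32);
    printf("64-bit cap: ~%d entries\n", maxmem_kb * 1024 / entry_bytes_64);
    return 0;   /* prints ~24966 and ~14979, matching the counts above */
}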

Setting "mru initmem 4096" (matching the default maxmem) would
preallocate the full amount in one chunk rather than in 1024 pieces
over time, ensuring locality of reference for restrict operations.
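
In ntp.conf terms (same single-line-syntax assumption as in the
earlier example), that preallocation would be something like:

  # Hypothetical: preallocate the full default 4 MB in one chunk.
  mru initmem 4096 maxmem 4096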

5832793c1a95cf9508f1df96a81d1749  ntp-dev-4.2.7p20-poolmon-0228.tar.gz
b333935e146b37c0bc52b311f2c67737  ntp-dev-4.2.7p20-poolmon-0228-win-x86-bin.zip
4a89336bcff3d24cd1ecbc9c3c991cb7  ntp-dev-4.2.7p20-poolmon-0228-win-x86-debug-bin.zip

http://davehart.net/ntp/pool/
http://davehart.net/ntp/pool/ntp-dev-4.2.7p20-poolmon-0228.tar.gz
http://davehart.net/ntp/pool/ntp-dev-4.2.7p20-poolmon-0228-win-x86-bin.zip
http://davehart.net/ntp/pool/ntp-dev-4.2.7p20-poolmon-0228-win-x86-debug-bin.zip

I'm particularly interested in feedback from those operating
pool.ntp.org servers. [1]

Cheers,
Dave Hart

[1] http://www.pool.ntp.org/



