[ntp:questions] NTP, Unix and the 34-year itch (better read this)

David L. Mills mills at udel.edu
Thu Jan 22 05:40:03 UTC 2004


Folks,

Some of you have seen the NTP Timestamp Calculations page at the NTP
project site. That page contains serious errors, as the algorithms and
system clock interface have changed over the years. A corrected
plain-text version is below; the web page will be updated within three
days. Most Unix folks will not be affected, but makers of embedded
systems should take note.

Dave

...

The NTP timestamp format represents seconds and fraction as a 64-bit
unsigned fixed-point integer with decimal point to the left of bit 32
numbered from the left. The 32-bit seconds field spans about 136 years,
while the 32-bit fraction field precision is about 0.232 nanoseconds.
The various arithmetic operations on timestamps are carefully
constructed to avoid overflow while preserving precision. This page
considers important issues in these operations.
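
As a rough illustration of the format (not code from the distribution),
a C sketch might look like this; the names ntp_ts and NTP_FRAC are made
up for the example:

	#include <stdint.h>
	#include <stdio.h>

	typedef uint64_t ntp_ts;         /* 32-bit seconds . 32-bit fraction */

	#define NTP_FRAC 4294967296.0    /* 2^32 fraction units per second */

	/* Convert seconds expressed as a double to the fixed-point format. */
	static ntp_ts ntp_from_double(double sec)
	{
		return (ntp_ts)(sec * NTP_FRAC);
	}

	/* Convert the fixed-point format back to seconds. */
	static double ntp_to_double(ntp_ts ts)
	{
		return (double)ts / NTP_FRAC;
	}

	int main(void)
	{
		ntp_ts one = ntp_from_double(1.5);

		printf("1.5 s -> %llu units -> %.9f s\n",
		    (unsigned long long)one, ntp_to_double(one));
		printf("fraction resolution %.3e s\n", 1.0 / NTP_FRAC);  /* about 0.23 ns */
		printf("seconds span %.0f years\n",
		    4294967296.0 / (86400.0 * 365.25));                  /* about 136 years */
		return 0;
	}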

First, the only operation permitted on raw timestamps is subtraction.
This produces signed 64-bit timestamp differences from 68 years in the
past to 68 years in the future. As in the protocol specification, let T1
be the client timestamp on the request message, T2 the server timestamp
upon arrival, T3 the server timestamp upon departure of the reply
message and T4 the client timestamp upon arrival. NTP calculates the
clock offset

	[(T2 - T1) + (T3 - T4)] / 2

and roundtrip delay

	(T4 - T1) - (T3 - T2).

These calculations involve addition and subtraction of timestamp
differences. To avoid overflow in these calculations, timestamp
differences must not extend more than 34 years into the past or 34 years
into the future. This is a fundamental limit in these calculations.
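
As a hedged sketch (the variable names are illustrative, not taken from
the reference implementation), the calculations above might be written
in C as follows; the casts to a signed 64-bit type are where the
34-year limit on the differences arises:

	#include <stdint.h>

	typedef uint64_t ntp_ts;       /* raw 64-bit NTP timestamp */
	typedef int64_t  ntp_diff;     /* signed difference, +/- 68 years */

	/* offset = [(T2 - T1) + (T3 - T4)] / 2 */
	static ntp_diff ntp_offset(ntp_ts t1, ntp_ts t2, ntp_ts t3, ntp_ts t4)
	{
		ntp_diff a = (ntp_diff)(t2 - t1);    /* outbound difference */
		ntp_diff b = (ntp_diff)(t3 - t4);    /* inbound difference */
		return (a + b) / 2;     /* sum overflows if either exceeds 34 years */
	}

	/* delay = (T4 - T1) - (T3 - T2) */
	static ntp_diff ntp_delay(ntp_ts t1, ntp_ts t2, ntp_ts t3, ntp_ts t4)
	{
		return (ntp_diff)(t4 - t1) - (ntp_diff)(t3 - t2);
	}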

It might be possible to avoid overflow by applying right shifts to the
timestamp differences before calculating offset and delay, since this
would extend the allowable differences by a factor of two. However, this
would degrade the resulting precision by a corresponding degree. This
might not be a good idea, as computers are reaching speeds that may soon
challenge the precision of the NTP timestamp itself.
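
For illustration only (this is the idea, not something the protocol or
the distribution does), shifting each difference right by one bit
before the sum also folds in the divide by two:

	#include <stdint.h>

	typedef int64_t ntp_diff;    /* signed timestamp difference */

	/* Offset with each difference pre-shifted right one bit.  The sum
	   then tolerates differences up to 68 years either way, but the
	   result loses the low-order fraction bit, about 0.23 ns. */
	static ntp_diff ntp_offset_shifted(ntp_diff a, ntp_diff b)
	{
		return (a >> 1) + (b >> 1);    /* the shift already divides by 2 */
	}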

The fundamental formal correctness principles on which NTP is based
require all system clock operations to be additive; that is, the clock is
never set, only advanced and retarded from the given time. This leads to
the requirement that the system clock must always be set within 34 years
of valid UTC time. This is now and always has been a fundamental
property of the protocol design.
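
On a Unix host the additive principle can be illustrated with the
standard adjtime() call, which slews the clock by a signed offset
rather than setting it to an absolute time; this is only an
illustration of the principle, not the interface the daemon itself uses
to discipline the kernel clock:

	#include <stdio.h>
	#include <sys/time.h>

	int main(void)
	{
		struct timeval delta = { 0, -2500 };    /* retard the clock by 2.5 ms */

		if (adjtime(&delta, NULL) != 0)         /* needs appropriate privilege */
			perror("adjtime");
		return 0;
	}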

Almost all computers of today have some means, such as a time-of-year
(TOY) chip, to set the system clock at power up or reboot surely within
34 years, but some embedded systems do not. For embedded systems
without a TOY chip and running an embedded Unix kernel, the initial time
is usually the Unix base epoch 1 January 1970. Readers will quickly
realize that the time since then, now in 2004, exceeds the 34-year limit.
These systems have a problem unless something is done.

The obvious thing to do is to initialize the system clock to some epoch
closer to the present. For embedded Unix systems, this is simple and can
be done in the startup script. It is conceivable that an NTP command
could be added to do the same thing, but this seems duplicative and
further complicates an already complicated program.
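
For systems without a shell, a minimal C sketch of the same idea
follows, assuming POSIX clock_gettime()/clock_settime(); the base date
(1 January 2004 here) is only an example and would normally track the
firmware build date:

	#include <stdio.h>
	#include <time.h>

	#define BASE_EPOCH 1072915200L    /* 2004-01-01 00:00:00 UTC in Unix time */

	int main(void)
	{
		struct timespec ts;

		if (clock_gettime(CLOCK_REALTIME, &ts) == 0 &&
		    ts.tv_sec < BASE_EPOCH) {       /* clock still near the 1970 epoch */
			ts.tv_sec = BASE_EPOCH;
			ts.tv_nsec = 0;
			if (clock_settime(CLOCK_REALTIME, &ts) != 0)
				perror("clock_settime");    /* needs appropriate privilege */
		}
		return 0;
	}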

There is another thing we can do to "fix" the problem or at least to
increase the window from 34 years to 68 years. The first differences in
the offset and delay calculations have to be done on the raw timestamps
for the reasons above. However, the resulting differences are generally
very much smaller than the timestamps themselves and could well be done
in floating doubles like almost all the other operations on time values.
While this change may benefit present and future versions, it will
not of course benefit previous versions that may be etched in hardware
or firmware.
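
A sketch of that suggestion, again with illustrative names: take the
first differences on the raw 64-bit timestamps, then carry the much
smaller differences in double precision, so no 64-bit addition can
overflow and the window widens to 68 years.

	#include <stdint.h>

	#define FRAC 4294967296.0    /* 2^32 fraction units per second */

	typedef uint64_t ntp_ts;     /* raw 64-bit NTP timestamp */

	/* Signed difference of two raw timestamps, in floating seconds,
	   good for differences up to 68 years either way. */
	static double ntp_delta(ntp_ts a, ntp_ts b)
	{
		return (double)(int64_t)(a - b) / FRAC;
	}

	/* Offset and delay combined in doubles; nothing here can overflow. */
	static double ntp_offset(ntp_ts t1, ntp_ts t2, ntp_ts t3, ntp_ts t4)
	{
		return (ntp_delta(t2, t1) + ntp_delta(t3, t4)) / 2;
	}

	static double ntp_delay(ntp_ts t1, ntp_ts t2, ntp_ts t3, ntp_ts t4)
	{
		return ntp_delta(t4, t1) - ntp_delta(t3, t2);
	}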
