[ntp:questions] Re: ntpd transmit timestamp precision
David L. Mills
mills at udel.edu
Fri Feb 17 22:46:11 UTC 2006
The simulator does not use the fuzz because the simulated clock is exact.
However, as pointed out by others, a j-random system doesn't have to
fill all the bits, which is why the precision is measured directly.
But, I finally punctured my skull about the precision measurement
method, which until now fuzzed the measurement. This wouldn't matter for
an average, but does influence a bottom fisher. It was easy to fix that,
but that doesn't fix cases where the clock resolution is much worse than
microseconds/nanoseconds. Apparently some systems (Linux?) don't even
bother to interpolate the tick, so the precision measured by ntpd is
like -6. If that were the case and clock_gettime() were used, there would
be no fuzz at all.
So, the get_systime() routine in current ntp-dev has been changed to
fuzz the bits only after calibration and to fuzz all the nonsignificant
bits less than the measured precision. This has an interesting side
effect that may result in better time measurements, since it removes
systematic roundoff errors.
I have no machines left that have dirty rotten hardware or dirty rotten
kernels, so I can't calibrate how well or badly it works with such stuff.
Brian Utterback wrote:
> I think that this is Damion's point. If you look at the code itself,
> the fuzz code is not used if:
> 1. You are using clock_gettime.
> 2. You are using getclock.
> 3. You are using the simulator.
> So, the fact that the simulator is doing okay is irrelevant, since it
> does not use the fuzz code. But more to the point, the clock_gettime and
> getclock functions claim to return nanoseconds, so there are only two
> bits available to fuzz, so the code does not bother to fuzz those last
> two bits. Damion's point is that the actual precision of the clock
> on his system is much more coarse, so more bits are really
> non-significant and should be fuzzed, but they are not.
> I don't think he is actually commenting on the accuracy of the time
> derived from fuzzed values, just the fact that he is not seeing any
> fuzz at all.
> David L. Mills wrote:
>> The ntpd in ntp-dev has been run in simulation with tick = 10 ms and
>> done amazingly well. The low-order nonsignificant bits are set to a
>> random fuzz that apparently averages out just fine.
>> Damion de Soto wrote:
>>> Brian Utterback wrote:
>>>> Yes and no. If your system supports either clock_gettime or getclock,
>>>> then the code does not bother with the random bitstring, since there
>>>> are only two unused bits to set. Not worth the trouble.
>>> Thanks, but I have a system here that has very low resolution
>>> ntpd correctly detects this via default_get_precision() as:
>>> Feb 13 07:01:31 ntpd: precision = 10000.000 usec
>>> I have clock_gettime() available to me, but the nanosecond values
>>> will be mostly wrong, since 10 ms only gives me 7 bits of precision.
>>> This means all 64 bits of the fractional seconds in the Transmit
>>> Timestamp are nearly always the same.
>>> Has no-one else ever run into this before?
Brian Utterback wrote:
> Damion de Soto wrote:
>> I was wondering if anyone knew if ntpd contained code to do this (from
>> the NTP specification):
>> It is advisable to fill the non-significant low order bits of the
>> timestamp with a random, unbiased bitstring, both to avoid
>> systematic roundoff errors and as a means of loop detection and
>> replay detection (see below). One way of doing this is to generate
>> a random bitstring in a 64-bit word, then perform an arithmetic
>> right shift a number of bits equal to the number of significant
>> bits of the timestamp, then add the result to the original
>> timestamp.
>> The ntp packets from my platform all have the same fractional seconds,
>> so I'm guessing it does not. Is there any reason why not?
>> It seems a fairly trivial change in a couple of places in the code.
> Yes and no. If your system supports either clock_gettime or getclock,
> then the code does not bother with the random bitstring, since there
> are only two unused bits to set. Not worth the trouble.