[ntp:questions] Re: ntpd transmit timestamp precision
brian.utterback at sun.removeme.com
Tue Feb 14 14:24:49 UTC 2006
I think that this is Damion's point. If you look at the code itself,
the fuzz code is not used if:
1. You are using clock_gettime.
2. You are using getclock.
3. You are using the simulator.
So, the fact that the simulator is doing okay is irrelevant, since it
does not use the fuzz code. But more to the point, the clock_gettime and
getclock functions claim to return nanoseconds, which leaves only two
non-significant bits in the 32-bit timestamp fraction, so the code does
not bother to fuzz those last two bits. Damion's point is that the actual
precision of the clock on his system is much coarser, so many more bits
are non-significant and should be fuzzed, but they are not.
I don't think he is actually commenting on the accuracy of the time
derived from fuzzed values, just the fact that he is not seeing any
fuzz at all.
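To make the idea concrete, here is a minimal sketch of what fuzzing the
non-significant bits could look like. This is not the actual ntpd code;
the function name and interface are made up for illustration. It takes
the 32-bit NTP fraction and the log2 precision (e.g. -7 for a ~10 ms
tick) and replaces everything below the clock's resolution with random
bits:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sketch, not ntpd's implementation: replace the
 * non-significant low-order bits of a 32-bit NTP fraction with
 * random fuzz.  'precision' is log2 of the clock resolution in
 * seconds; for precision = -7 (~10 ms tick) the low 32 - 7 = 25
 * bits carry no real information and get fuzzed. */
static uint32_t fuzz_fraction(uint32_t frac, int precision)
{
        int nonsig = 32 + precision;    /* bits below the resolution */

        if (nonsig <= 0)
                return frac;            /* clock finer than the format */
        if (nonsig >= 32)
                nonsig = 31;            /* keep at least the top bit */

        uint32_t mask = ((uint32_t)1 << nonsig) - 1;
        uint32_t fuzz = (uint32_t)random() & mask;

        return (frac & ~mask) | fuzz;
}
```

With nanosecond clocks (clock_gettime/getclock) only the bottom two bits
would qualify, which is why the real code skips the exercise there.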
David L. Mills wrote:
> The ntpd in ntp-dev has been run in simulation with tick = 10 ms and
> done amazingly well. The low order nonsignificant bits are set to a
> random fuzz that apparently averages out just fine.
> Damion de Soto wrote:
>> Brian Utterback wrote:
>>> Yes and no. If your system supports either clock_gettime or getclock,
>>> then the code does not bother with the random bitstring, since there
>>> are only two unused bits to set. Not worth the trouble.
>> Thanks, but I have a system here with a very low resolution system
>> clock; ntpd correctly detects this via default_get_precision() as:
>> Feb 13 07:01:31 ntpd: precision = 10000.000 usec
>> I have clock_gettime() available to me, but the nanoseconds values
>> will be mostly wrong, since 10ms only gives me 7 bits of precision.
>> This means the low-order bits of the 32-bit fractional part of the
>> Transmit Timestamp are nearly always the same.
>> Has no-one else ever run into this before?
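Damion's "7 bits" figure is easy to check. A short sketch of how a tick
size maps to the log2 precision value ntpd reports (hypothetical helper,
not ntpd's actual default_get_precision(), which measures the clock
rather than taking the tick as an argument):

```c
/* Sketch: double the tick until it reaches one second, counting
 * down; the count is the log2 precision.  A 10 ms tick gives -7,
 * i.e. roughly 7 significant bits in the timestamp fraction, so
 * 32 - 7 = 25 bits of the fraction are non-significant. */
static int log2_precision(double tick)
{
        int prec = 0;

        while (tick < 1.0) {
                tick *= 2.0;
                prec--;
        }
        return prec;
}
```

So on his system only the top 7 bits of the fraction are meaningful, and
the remaining 25 should be fuzz rather than constant.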
Quidquid latine dictum sit, altum sonatur.
Brian Utterback - OP/N1 RPE, Sun Microsystems, Inc.