[ntp:questions] Re: ntpd transmit timestamp precision
David L. Mills
mills at udel.edu
Sun Feb 19 16:47:08 UTC 2006
I'm uneasy calling random() for every read of the clock, as it could be
expensive. It seems a bit of overkill to fuzz the nanobits when the caller
has a timespec and the clock is already fuzzed to 10 ms. Perhaps a test for
precision coarser than a microsecond is advised.
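A minimal sketch of that guard (not the actual ntpd code; it assumes
sys_precision is the measured clock precision as log2 seconds, negative
for sub-second resolution, as ntpd uses it):

```c
#include <stdbool.h>

/* Illustrative sketch only: decide whether fuzzing is worthwhile.
 * A microsecond is about 2^-20 s, so skip the random() call when the
 * measured precision is finer than that. */
static bool precision_needs_fuzz(int sys_precision)
{
    return sys_precision > -20;   /* coarser than ~1 us: fuzz */
}
```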
Harlan Stenn wrote:
> Your patch handles the fuzz for gettimeofday() but not the case where some
> OS implements getclock() or clock_gettime() badly.
> What would be bad about moving the fuzz code after the #endif that closes
> the "get time" routines and just fuzzing in all cases? If that is really
> overkill for high-res systems, change the test from:
> if (sys_precision != 0)
> to:
> if ((sys_precision != 0) && (sys_precision > -7))
> (for example).
>>>>In article <dt5jnh$aee$1 at dewey.udel.edu>, "David L. Mills" <mills at udel.edu> writes:
> David> But, I finally punctured my skull about the precision measurement
> David> method, which until now fuzzed the measurement. ...
> David> So, the get_systime() routine in current ntp-dev has been changed to
> David> fuzz the bits only after calibration and to fuzz all the
> David> nonsignificant bits less than the measured precision.
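The scheme described above, fuzzing only the bits below the measured
precision, might look roughly like this. This is an illustrative sketch
rather than the actual get_systime() code; it assumes the standard NTP
64-bit timestamp layout (32-bit seconds, 32-bit fraction) and the same
log2-seconds sys_precision convention:

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative sketch only: replace the nonsignificant low-order bits
 * of a 64-bit NTP timestamp with random bits.  sys_precision is the
 * measured precision as log2 seconds; e.g. -20 (~1 us) leaves the low
 * 12 fraction bits below the precision, and only those are fuzzed. */
static uint64_t fuzz_nonsignificant(uint64_t ntp_ts, int sys_precision)
{
    if (sys_precision >= 0 || sys_precision <= -32)
        return ntp_ts;                    /* no fraction bits to fuzz */

    int fuzzbits = 32 + sys_precision;    /* bits below the precision */
    uint64_t mask = ((uint64_t)1 << fuzzbits) - 1;

    return (ntp_ts & ~mask) | ((uint64_t)random() & mask);
}
```

Note that the significant bits are untouched, so the fuzz can never move
the timestamp by more than the clock's own resolution.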