[ntp:questions] TSC, default precision, FreeBSD
mlichvar at redhat.com
Wed Sep 9 10:48:12 UTC 2009
On Tue, Sep 08, 2009 at 10:40:05AM -0700, Dave Hart wrote:
> On Tue, Sep 8, 2009 at 12:40 PM, Miroslav Lichvar wrote:
> > BTW, on current hardware the precision seems to be below the 100ns
> > MINSTEP limit used in the default_get_precision routine. If the ntp
> > process weren't interrupted, the while loop might run forever.
> Dr. Mills explained to me recently that this was added to deal with
> one platform curiously returning the same time in successive calls
> occasionally. Miroslav, what value do you recommend for MINSTEP,
> which would be below the execution time of getclock() on any machine
> now or in the next few years?
I think 10 ns should be ok for a while. On a 2.0GHz CPU I'm seeing
30 ns for clock_gettime() and 80 ns for get_systime().
> > Also, the calculation doesn't work correctly if the precision is below
> > resolution. The result is just a random value close to 100 ns. Maybe
> > get_systime should be called multiple times before calculating the
> > difference.
> I've argued that it's also wrong on microsecond system clocks, where
> get_systime() fuzzes below the microsecond, and that fuzz will
> convince default_get_precision() the system clock ticks more often
> than once per microsecond. I believe both would be repaired by
> deferring the addition of fuzz in get_systime() until after
> default_get_precision() is done with it, which is to say, until after
> sys_precision is nonzero.
The addition of fuzz could be temporarily disabled by setting sys_tick
to 0. But the result would be resolution, not precision (as defined in