[ntp:questions] TSC, default precision, FreeBSD

Dave Hart davehart at gmail.com
Wed Sep 9 11:15:56 UTC 2009



On Wed, Sep 9, 2009 at 10:48 AM, Miroslav Lichvar wrote:
> On Tue, Sep 08, 2009 at 10:40:05AM -0700, Dave Hart wrote:
>> > Also, the calculation doesn't work correctly if the precision is below
>> > resolution. The result is just a random value close to 100 ns. Maybe
>> > get_systime should be called multiple times before calculating the
>> > difference.
>>
>> I've argued that it's also wrong on microsecond system clocks, where
>> get_systime() fuzzes below the microsecond, and that fuzz will
>> convince default_get_precision() the system clock ticks more often
>> than once per microsecond.  I believe both would be repaired by
>> deferring the addition of fuzz in get_systime() until after
>> default_get_precision() is done with it, which is to say, until after
>> sys_precision is nonzero.
>
> The addition of fuzz could be temporarily disabled by setting sys_tick
> to 0. But the result would be resolution, not precision (as defined in
> NTPv4 spec).

You're correct, of course.  default_get_precision() could save
sys_tick, temporarily set it to zero to disable fuzzing, then loop
calling get_systime() a handful of times to check whether the same
time is returned successively.  If it is, it could then spin on
get_systime() until the returned time changes twice; the number of
calls between the two changes indicates the execution time of
get_systime().  If identical times are not returned by successive
calls, the precision could be calculated as it is today, as the
minimum difference between two successive calls in the initial loop.
But I'm not sure this is useful: if the precision (time to read the
clock) is finer than the resolution, I believe the intention is to
use the resolution as the precision.
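
Something along these lines is what I have in mind (an untested
sketch, not ntpd code; clock_gettime() stands in for get_systime()
with sys_tick temporarily zeroed so no fuzz is added):

#include <stdio.h>
#include <math.h>
#include <time.h>

#define SAMPLES 10

static double read_clock(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_REALTIME, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
        double  prev, cur, min_delta = 1.0;
        int     i, repeats = 0, calls;

        /* initial loop: look for identical successive readings */
        prev = read_clock();
        for (i = 0; i < SAMPLES; i++) {
                cur = read_clock();
                if (cur == prev)
                        repeats++;
                else if (cur - prev < min_delta)
                        min_delta = cur - prev;
                prev = cur;
        }

        if (repeats > 0) {
                /*
                 * Clock is coarser than the read time: wait for the
                 * returned time to change twice and count the calls
                 * in between, which indicates the execution time of
                 * one read.
                 */
                double  t0, t1;

                t0 = read_clock();
                do
                        t1 = read_clock();
                while (t1 == t0);       /* first change */
                for (calls = 1; read_clock() == t1; calls++)
                        ;               /* second change */
                printf("resolution coarser than read time, "
                       "~%d reads per tick\n", calls);
        } else {
                /* precision as calculated today: min successive delta */
                printf("min delta %.0f ns, log2 ~ %d\n",
                       min_delta * 1e9, (int)floor(log2(min_delta)));
        }
        return 0;
}

(Compile with -lm.)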

I'm pretty sure the current precision calculation on microsecond
resolution system clocks is wrong due to the interference of fuzzing:
it results in a random value somewhere above 100 ns, regardless of
whether the intention is that precision be the execution time of
get_systime() or the resolution.
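
To illustrate with a toy simulation (again not ntpd code): take a
microsecond clock that never ticks during the measurement and add
uniform fuzz below the microsecond the way get_systime() does, and
the minimum positive delta over a handful of reads comes out as
random noise, typically somewhere above 100 ns:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        double  tick = 1e-6;            /* microsecond system clock */
        double  clock_val = 0.0;        /* underlying clock never ticks */
        double  prev = 0.0, cur, min_delta = 1.0;
        int     i;

        srand((unsigned)42);
        for (i = 0; i < 10; i++) {      /* a handful of reads */
                /* reading = clock value plus fuzz below the tick */
                cur = clock_val + tick * (rand() / (double)RAND_MAX);
                if (i > 0 && cur > prev && cur - prev < min_delta)
                        min_delta = cur - prev;
                prev = cur;
        }
        /* the "precision" measured this way is just fuzz noise */
        printf("min positive delta: %.0f ns\n", min_delta * 1e9);
        return 0;
}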

FYI, 4.2.4 differs.  It fuzzes only after default_get_precision() has
determined sys_precision, and fuzzes the bits below sys_precision.
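
For example (a sketch of the idea, not the actual 4.2.4 code), with
sys_precision = -20 (about 1 us) only the low 32 + sys_precision = 12
bits of the l_fp fraction would be randomized:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
        int      sys_precision = -20;   /* log2(seconds), ~1 us */
        uint32_t frac = 0x12345000;     /* example l_fp fraction */
        uint32_t mask = (1u << (32 + sys_precision)) - 1;  /* 0x00000fff */

        srand((unsigned)1);
        /* fuzz only the bits below sys_precision, leave the rest alone */
        frac = (frac & ~mask) | ((uint32_t)rand() & mask);
        printf("mask 0x%08x  fuzzed fraction 0x%08x\n", mask, frac);
        return 0;
}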

Cheers,
Dave Hart
