[ntp:questions] Why does NTPD precision decline when using the HPET vs. the Performance Counter?
martin.burnicki at meinberg.de
Mon Sep 15 14:06:04 UTC 2014
Brian Inglis wrote:
> On 2014-09-04 04:58, Charles Elliott wrote:
>> When the platform clock is changed from the
>> performance counter (freq: 3.554 MHz) to the
>> HPET (freq: 14.318 MHz) the NTP protocol
>> precision declines from -22 to -20. This
>> occurs in both versions 4.2.7p442@1.2483-o May 09 10:14:35.18
>> and 4.2.7p467@1.2483-o Aug 28 12:01:29.42.
>> The NTPD_USE_INTERP_DANGEROUS=1 environment
>> variable is set. Here are the relevant messages from
>> the Event Log:
>> 8/23/2014 3:34:59 PM Performance counter frequency 3.554 MHz
>> 8/23/2014 3:34:59 PM proto: precision = 0.200 usec (-22)
>> 8/23/2014 3:34:59 PM proto: fuzz beneath 0.100 usec
>> 8/23/2014 3:52:37 PM Performance counter frequency 14.318 MHz
>> 8/23/2014 3:52:38 PM proto: precision = 0.800 usec (-20)
>> → → No "proto: fuzz beneath ... " message ← ←
>> Is this the way it should be, that the protocol precision
>> declines when the clock is more than 4 times faster?
> Precision tells you how quickly ntpd can read the system timer,
> not the resolution of the system timer.
> See http://www.ntp.org/ntpfaq/NTP-s-sw-clocks-quality.htm
Especially for the Windows version of ntpd, the reported precision can
vary depending on:
- whether it runs on Windows 8 or similar, which provides the new precise
  time API
- whether timer tick interpolation is used or not, on Windows versions
  without the precise time API
- which timer is used to implement QPC, or whether the TSC is used
  directly, if interpolation is being used
More information about the questions