[ntp:questions] Packet timestamps when using Windows-7/Vista

Martin Burnicki martin.burnicki at meinberg.de
Mon Dec 14 11:40:02 UTC 2009

David J Taylor wrote:
> "Martin Burnicki" <martin.burnicki at meinberg.de> wrote in message
> news:s9u9v6-gj3.ln1 at gateway.py.meinberg.de...
>> Concerning the 1ms-to-15.6ms conversion mentioned above:
>> A *possible* reason I can imagine is that this depends on whether the clock
>> runs too fast or too slow at its nominal tick rate (i.e. the on-board xtal
>> is below or above its nominal frequency). In one case the frequency drift
>> compensation has to *add* an offset to the standard tick rate, in the other
>> case an offset needs to be subtracted. Depending on how the conversion has
>> been implemented in the Windows kernel, a positive offset may lead to
>> rounding errors whereas a negative one may not, or vice versa.
>> All the above are only assumptions.
> Thank you very much for your detailed and considered reply.  With the
> Windows-2000 and Windows-XP systems I am happy with the performance.  I
> was able to add the kernel-mode PPS serial routine to all the GPS/PPS
> systems, which does reduce the jitter reported by NTP slightly.  As you
> say, though, this doesn't help the precision in timestamping the NTP
> network packets.
> Yes, I am running Dave Hart's binaries with the interpolation disabled and
> the high-resolution timer enabled, so it just relies on the ~1 kHz clock.
> You make an interesting point about keeping the 1 ms and 15.6 ms timers in
> step - that had not occurred to me before!

The Windows API call used to slew the system time reports a standard tick
rate of 15.6001 ms on a Vista machine here, even though the time returned by
a loop of GetSystemTimeAsFiletime() calls increments in 1.000 ms steps.

So what happens if the default tick rate of 15.6001 ms is actually modified
to compensate for the clock drift, e.g. by +5 units -> 15.6006 ms to speed up
the system clock, or by -5 units -> 15.5996 ms to slow it down?

Can you check whether the frequency offset measured by ntpd on the system
which has a TX time before the RX time has a different sign than the
frequency offset measured on those systems which work "well"?

The log_adj utility I wrote some time ago also reports whether the
adjustment applied to the standard clock tick is positive or negative, and
what its magnitude is. This may be relevant to why the problem occurs.

> I'm quite happy to work with someone offline on this, and my test program
> is available.

If you send me that program (or a link to it) I can give it a try.

Martin Burnicki

Meinberg Funkuhren
Bad Pyrmont
