[ntp:questions] Re: Windows timekeeping - sudden degradation - why?

Martin Burnicki martin.burnicki at meinberg.de
Thu Dec 8 20:13:21 UTC 2005


David,

David J Taylor wrote:
> Just doing a little more work on this.  I wrote a program to display
> (approximately) the resolution of the timer (from a timeGetTime() call), and
> got the following results:
> 
> - QuickTime Player running (not even playing a video), timer resolution
> just under 1ms (about 960 us)

Normally 1 ms is the highest resolution you can set the MM timer to.
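
Just as an illustration (a rough sketch of my own, not your program): one
way to estimate that resolution is to busy-wait on timeGetTime() and record
the smallest step its return value makes. Link with winmm.lib; the loop
count is arbitrary.

#include <stdio.h>
#include <windows.h>
#include <mmsystem.h>   /* timeGetTime() */

int main(void)
{
    DWORD prev = timeGetTime();
    DWORD min_step = (DWORD) -1;
    int steps = 0;

    while (steps < 100) {        /* watch 100 transitions */
        DWORD now = timeGetTime();
        if (now != prev) {
            if (now - prev < min_step)
                min_step = now - prev;
            prev = now;
            steps++;
        }
    }
    printf("smallest observed timeGetTime() step: %lu ms\n",
           (unsigned long) min_step);
    return 0;
}

On the systems you describe I'd expect something like this to print 1 with
QuickTime running and something around 10 or 16 without it.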
 
> - QuickTime not running, timer resolution seems to step between 15.6ms
> (approx) and 10.5ms.
> 
> Now these are early results, and my program isn't highly accurate, but it
> suggests that the cause may not /only/ be whether the multimedia timer is
> running or not (or is it more accurate to say whether the system timer is
> being forced into a higher precision?), but also that something is
> changing the basic system clock from a 10ms step to a 15ms step?  I do
> recall that there are a number of different basic clock periods in
> Windows, different for NT 4.0, 2000 workstation and server, and XP.  Each
> is either (about) 10ms or 15ms.

AFAIK the timer tick interval has always been about 10 ms under WinNT. In
Win2k, the tick interval was also 10 ms on machines with slower CPUs, but
15.625 ms on machines with faster CPUs. I don't know whether newer Windows
versions would also tick at 10 ms intervals under certain circumstances; so
far I've always seen 15.625 ms with those. 15.625 ms * 64 = 1.000 s, BTW.
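
One way to see which increment the kernel actually adds per tick (again
just a sketch of mine, not anything from ntpd) is GetSystemTimeAdjustment(),
which reports that increment in 100 ns units, so 156250 corresponds to
15.625 ms:

#include <stdio.h>
#include <windows.h>

int main(void)
{
    DWORD adjustment, increment;
    BOOL disabled;

    if (GetSystemTimeAdjustment(&adjustment, &increment, &disabled))
        printf("tick increment: %lu * 100 ns = %.4f ms\n",
               (unsigned long) increment, increment / 10000.0);
    else
        printf("GetSystemTimeAdjustment failed, error %lu\n",
               (unsigned long) GetLastError());
    return 0;
}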

The following is based on my assumptions, derived from what I've observed so
far. The hardware timer normally generates IRQs at intervals of 10 or 15 ms,
and both the standard tick rate and the MM timer tick rate are derived from
that hardware tick rate.

If the MM timer resolution is set to 1 ms, then the timer hardware is
reconfigured to actually generate interrupts at the higher rate, i.e. at
1 ms intervals to match the MM tick intervals, and the standard ticks are
synthesized from that higher tick rate. This also explains the 1 ms jitter
I've observed on the standard ticks when the MM timer is set to high
resolution.
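
To check this on a given machine, something like the following sketch of
mine could be used; it measures the smallest step of the system time before
and after requesting 1 ms MM timer resolution with timeBeginPeriod(). What
it prints will of course differ between Windows versions and HALs. Link
with winmm.lib.

#include <stdio.h>
#include <windows.h>
#include <mmsystem.h>   /* timeBeginPeriod(), timeEndPeriod() */

/* smallest observed step of GetSystemTimeAsFileTime(), in ms */
static double measure_step_ms(void)
{
    FILETIME ft;
    ULONGLONG prev, now, min_step = (ULONGLONG) -1;
    int steps = 0;

    GetSystemTimeAsFileTime(&ft);
    prev = ((ULONGLONG) ft.dwHighDateTime << 32) | ft.dwLowDateTime;

    while (steps < 50) {
        GetSystemTimeAsFileTime(&ft);
        now = ((ULONGLONG) ft.dwHighDateTime << 32) | ft.dwLowDateTime;
        if (now != prev) {
            if (now - prev < min_step)
                min_step = now - prev;
            prev = now;
            steps++;
        }
    }
    return min_step / 10000.0;   /* 100 ns units -> ms */
}

int main(void)
{
    printf("default step:            %.3f ms\n", measure_step_ms());

    timeBeginPeriod(1);          /* request 1 ms MM timer resolution */
    printf("with timeBeginPeriod(1): %.3f ms\n", measure_step_ms());
    timeEndPeriod(1);

    return 0;
}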

If the sync mode is switched from the first to the second algorithm, then a
time offset of a few milliseconds is inserted, which is removed again when
the mode is switched back to the first (default) algorithm. This seems to
have been fixed in the kernel's timer handler of WinXP SP2: either the
hardware timer always keeps ticking at the same rate there, or switching
between the modes works better.
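
This is not ntpd's actual code, but a rough way to watch how far the two
timebases involved can drift apart is to sample the tick-based system time
and a time extrapolated from the performance counter, and print the
difference:

#include <stdio.h>
#include <windows.h>

/* current system time in 100 ns units */
static ULONGLONG filetime_now(void)
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    return ((ULONGLONG) ft.dwHighDateTime << 32) | ft.dwLowDateTime;
}

int main(void)
{
    LARGE_INTEGER freq, qpc0, qpc;
    ULONGLONG ft0, qpc_elapsed, ft_elapsed;
    int i;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&qpc0);
    ft0 = filetime_now();

    for (i = 1; i <= 10; i++) {
        Sleep(1000);
        QueryPerformanceCounter(&qpc);

        /* elapsed time in 100 ns units, according to each source */
        qpc_elapsed = (ULONGLONG)(qpc.QuadPart - qpc0.QuadPart)
                      * 10000000ULL / (ULONGLONG) freq.QuadPart;
        ft_elapsed  = filetime_now() - ft0;

        printf("after %2d s: tick clock - perf counter = %+.3f ms\n",
               i, (double)(LONGLONG)(ft_elapsed - qpc_elapsed) / 10000.0);
    }
    return 0;
}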

AFAIK the code which deals with the Windows performance counter is in the
HAL, which is different for uniprocessor and multiprocessor systems. I
assume the code for the timer ticks and MM timer is also in the HAL, since
the behaviour described above is still slightly different on uniprocessor
and multiprocessor systems.

> Anyone for or against that?  Any idea of which program might be doing
> this?  Perhaps it's just my code, and the 10/15ms switching isn't actually
> happening at all!

I think the 10/15 ms switching is just used to maintain a kind of
synchronization between the standard tick rate and the MM timer tick rate.

Martin
-- 
Martin Burnicki

Meinberg Funkuhren
Bad Pyrmont
Germany



