[ntp:questions] Re: Clock drift problems

hack hack at watson.ibm.com
Mon Jan 19 19:25:07 UTC 2004


In article <bugibv$7au$1 at freenet9.carleton.ca>,
Michael Black <et472 at FreeNet.Carleton.CA> wrote:
>Christopher Browne (cbbrowne at acm.org) writes:

>The Real Time Clock is of course counting interrupts.  Something with
>a presumably accurate timing sends an interrupt to the CPU on a regular
>basis, and software keeps track of those interrupts.

Why is this an "of course"?  It boggles my mind that many people still
seem to think it is.

For machines without access to a regular reasonably-high-resolution
continuously-running counter with a fixed frequency, an interrupt-driven
RTC is indeed a good option, perhaps the only reasonable one.  Therefore
I can understand that an OS that wants to be portable to many platforms,
including those without a usable CPU timer, would support a software
RTC based on counting interrupts.  But why make that the only choice,
given the possible pitfalls of missed interrupts, the complexity of
detecting and compensating for them, and the
reliance on an external source of interrupts that may be no more stable
than the CPU's own timebase?  Really old mainframes had mains-derived
counters; later ones had quartz-crystal high-resolution timers.  The
POWER line of computers and microprocessors had a CPU-based RTC.  The
PowerPC has a high-resolution timebase similar to a cycle counter. In
the Intel world, x86 (IA32) has had a program-readable cycle counter
since the 1st Pentium generation.  I'm sure other architectures have
similar capabilities; in fact, I'd like to know more about them.  In
all of these cases, high-resolution time is available either directly
(POWER, S/370 and descendants) or via linear interpolation (others).
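
To make the interpolation concrete, here is a minimal sketch of deriving
high-resolution time from the x86 cycle counter.  The names and the
calibration values (base_tsc, base_ns, tsc_hz) are hypothetical -- in
practice the OS would measure them against a reference clock at boot --
and the code assumes GCC or Clang on a Pentium-class x86:

    #include <stdint.h>

    /* Read the time-stamp counter (available since the first Pentium). */
    static uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    /* Hypothetical calibration state: counter reading and real time at a
     * common epoch, plus the measured counter frequency. */
    static uint64_t base_tsc;   /* counter reading at calibration   */
    static uint64_t base_ns;    /* real time (ns) at calibration    */
    static uint64_t tsc_hz;     /* measured counter frequency (Hz)  */

    /* Linear interpolation: current real time in nanoseconds. */
    static uint64_t hires_time_ns(void)
    {
        uint64_t delta = rdtsc() - base_tsc;

        /* Split into whole seconds and remainder so the arithmetic
         * stays within 64 bits even after long uptimes. */
        return base_ns
             + (delta / tsc_hz) * 1000000000u
             + (delta % tsc_hz) * 1000000000u / tsc_hz;
    }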

Mobile platforms may present a problem, since they often change CPU
clock frequency to save power.  With assistance from the
power-management software, and assuming fixed relations can be
maintained between the various clock speeds, interpolation parameters
could be kept in a program-visible place so that real-time
computation remains unaffected.  (Sometimes there are two separate
clock sources, and the one used for timing remains stable.) 
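
As a rough illustration of such program-visible parameters (the names
are hypothetical, and the locking a real kernel would need is omitted),
the power-management code could fold the time elapsed at the old rate
into the base point whenever it switches frequency, so interpolation
stays continuous across the change:

    #include <stdint.h>

    struct time_params {
        uint64_t base_cycles;     /* counter reading at the last change */
        uint64_t base_ns;         /* real time (ns) at that moment      */
        uint64_t cycles_per_sec;  /* counter rate at the current speed  */
    };

    static struct time_params tp;

    extern uint64_t read_cycle_counter(void);   /* e.g. rdtsc() above */

    /* Called just after a frequency switch: account for the interval run
     * at the old rate, then record the new rate. */
    void cycle_rate_changed(uint64_t new_cycles_per_sec)
    {
        uint64_t now = read_cycle_counter();
        uint64_t elapsed = now - tp.base_cycles;

        tp.base_ns += (elapsed / tp.cycles_per_sec) * 1000000000u
                    + (elapsed % tp.cycles_per_sec) * 1000000000u
                      / tp.cycles_per_sec;
        tp.base_cycles    = now;
        tp.cycles_per_sec = new_cycles_per_sec;
    }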

An OS may also need time-based interrupts for time slicing, and
not every CPU supports an appropriate timer (e.g. "decrementer" or
"CPU timer").  But there's nothing wrong with using a separate external
interrupt source for this -- it need not even be very precise.  If
the external interrupts are not used to construct time, it does not
matter whether a timeslice is 9.8 ms or 10.1 ms -- the accounting (if
there is such) can always be done using the actual time as maintained
by the OS, using available internal resources.  An added benefit arises
when the machine is idle, since it can then turn off timer interrupts
entirely, or adjust them to match the next scheduled event.  This would
be really nice for mobile platforms, which would then need to wake up
only when an actual key is hit (or a network packet arrives).
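
A sketch of what such tick-independent accounting might look like (the
task structure and names are hypothetical): charge each task the actual
high-resolution time it ran, read at context switch, instead of counting
nominal 10 ms tick interrupts:

    #include <stdint.h>

    extern uint64_t hires_time_ns(void);   /* interpolated clock, as above */

    struct task {
        uint64_t cpu_ns;                   /* accumulated CPU time */
    };

    static uint64_t slice_start_ns;

    /* At each context switch, charge the outgoing task the time it
     * actually used; whether the tick fired after 9.8 ms or 10.1 ms no
     * longer matters. */
    void account_switch(struct task *prev)
    {
        uint64_t now = hires_time_ns();

        prev->cpu_ns  += now - slice_start_ns;
        slice_start_ns = now;
    }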

On my Linux PC, cycle-timer-derived time is stable to better than 0.5 ppm,
whereas Linux time drifts at over 100 ppm.  (This can of course be dealt
with using NTP -- but my point is that the machine itself is capable of
much better time than the OS lets it have.)
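
A back-of-the-envelope comparison of the two clocks -- not how the
figures above were obtained -- can be made with a user-space sketch like
the one below: measure the cycle counter against gettimeofday() over a
long interval and print the implied counter frequency.  Run against an
NTP-disciplined system clock, the run-to-run variation of that number
shows how steady the counter is; run against an undisciplined clock, it
shows the OS clock drifting relative to the counter.

    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/time.h>

    /* Read the time-stamp counter, as in the earlier sketch. */
    static uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
        struct timeval tv0, tv1;
        uint64_t c0, c1;

        gettimeofday(&tv0, NULL);
        c0 = rdtsc();
        sleep(3600);               /* longer intervals resolve smaller drifts */
        gettimeofday(&tv1, NULL);
        c1 = rdtsc();

        double secs = (tv1.tv_sec - tv0.tv_sec)
                    + (tv1.tv_usec - tv0.tv_usec) / 1e6;

        /* Apparent counter frequency as seen by the OS clock; a relative
         * drift of 1 ppm changes this by about one part per million. */
        printf("apparent counter frequency: %.1f Hz over %.0f s\n",
               (double)(c1 - c0) / secs, secs);
        return 0;
    }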

Michel.


