[ntp:questions] Re: Post processing of NTP data...

Val Schmidt vschmidt at ccom.unh.edu
Thu Sep 29 13:36:39 UTC 2005


Thanks Danny.

So let me see if I can summarize what I think everyone has said in  
these past few posts and you all can grade how well I was paying  
attention.

1) When one talks about system clock accuracy, one should define how
it is to be measured. One might measure the offset between the local
system clock and UTC as the difference between the result of the
gettimeofday() function executed on the local system and a 1 PPS
reference signal (whose accuracy relative to UTC is in the tens of
nanoseconds).
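
As a sanity check on my own understanding, here is roughly what the
local-clock half of that measurement looks like in code; the comparison
against the PPS edge itself still has to happen elsewhere (in the
kernel or in dedicated hardware), so this is only a sketch:

/* Minimal sketch: read the system clock with microsecond resolution.
 * Comparing this to a 1 PPS reference requires timestamping the pulse
 * itself (e.g. in the kernel); this shows only the local-clock side. */
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval tv;

    if (gettimeofday(&tv, NULL) != 0) {
        perror("gettimeofday");
        return 1;
    }
    printf("local clock: %ld.%06ld s since the epoch\n",
           (long)tv.tv_sec, (long)tv.tv_usec);
    return 0;
}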

2) The measured offset between an ntp-trained system clock and the
reference standard is the sum of (at least) the actual time error
between the two clocks, the precision with which time is reported to
the OS by the hardware, the latency involved in executing the
gettimeofday() call, and the latency involved in communicating and
timestamping the reference signal. The latter three are OS- and
hardware-dependent and depend, in part, on the processor speed (how
quickly interrupts are serviced), the OS (how reliably interrupts are
serviced), and the coarseness of the system clock's tick rate, which
might be anywhere from ~60 Hz to 1000 Hz (i.e. ticks ~17 ms to 1 ms
apart).
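
If I only wanted to bound the call-latency term by itself, I assume
something like timing back-to-back gettimeofday() calls would give a
rough number (a sketch, not a rigorous benchmark; it ignores caching
and scheduling effects):

/* Rough sketch: estimate the cost of the gettimeofday() call itself by
 * timing back-to-back calls.  This bounds only one of the error terms
 * above (call latency), not interrupt- or tick-related errors. */
#include <stdio.h>
#include <sys/time.h>

#define ITERATIONS 100000

int main(void)
{
    struct timeval start, end;
    long elapsed_us;
    int i;

    gettimeofday(&start, NULL);
    for (i = 0; i < ITERATIONS; i++) {
        struct timeval scratch;
        gettimeofday(&scratch, NULL);
    }
    gettimeofday(&end, NULL);

    elapsed_us = (end.tv_sec - start.tv_sec) * 1000000L
               + (end.tv_usec - start.tv_usec);
    printf("average gettimeofday() cost: ~%.3f us\n",
           (double)elapsed_us / ITERATIONS);
    return 0;
}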

3) The tick rate can generally be thought of as the system's
metronome and limits the resolution at which the system can report
time. Some OSes do provide higher-resolution time stamps, presumably
by interpolating between ticks in software, and at least some hardware
has the ability to interpolate between ticks as well. Absent such
interpolation, the resolution of a time stamp is limited by the tick
rate.
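
On systems with the POSIX clock functions, the resolution the OS
claims for its clock can be queried directly, something like the
sketch below (the claimed resolution says nothing about accuracy, and
older systems may need -lrt at link time):

/* Sketch: ask the OS what resolution it claims for its clock, and take
 * a timestamp at that resolution.  The reported resolution reflects
 * the tick (or interpolated) granularity, not the accuracy of the
 * value returned. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res, now;

    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        printf("claimed resolution: %ld.%09ld s\n",
               (long)res.tv_sec, (long)res.tv_nsec);

    if (clock_gettime(CLOCK_REALTIME, &now) == 0)
        printf("current time:       %ld.%09ld s\n",
               (long)now.tv_sec, (long)now.tv_nsec);

    return 0;
}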

4) While a fast tick rate increases the theoretical resolution of the
system clock, the increased interrupt rate can cause OSes to skip
interrupts, which prevents algorithms like ntp from accurately
training the local clock and ultimately results in larger offsets.
Catching every tick reliably is better than getting ticks more often
but missing some.

5) To date, Sun and FreeBSD have proved very good at catching clock
interrupts reliably, while Linux and Windows often skip interrupts
when under system load. It is not clear to me whether Intel hardware
is part of the problem.

6) All of this said, ntp is designed to train the local oscillator to
a reference time standard, measured either 1) through an ntp driver
attached to a local reference, or 2) over the network from an upstream
(lower-stratum) server. Local oscillators drift for various reasons,
temperature chief among them, and ntp attempts to adjust their
frequency to minimize the clock offset and drift rate in a smooth
fashion. Changing system conditions (ambient temperature, system load,
etc.) and the ntp daemon's ability to reliably predict the
transmission delay of the reference signal (from whatever source)
contribute to changes in the local oscillator frequency and the
reported offset between the local system clock and the standard.
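
As a side note, on systems that expose the kernel NTP discipline
through ntp_adjtime() in <sys/timex.h>, the result of that training
can apparently be read back directly; a rough sketch (read-only, so it
changes nothing), which I haven't verified on every OS:

/* Sketch: read back what the kernel's NTP discipline currently
 * believes (frequency correction, estimated error) via ntp_adjtime().
 * With modes = 0 this is a read-only query and changes nothing.
 * Units: freq is in 2^-16 ppm ("scaled ppm"); offset and esterror are
 * in microseconds (nanoseconds if the STA_NANO status bit is set). */
#include <stdio.h>
#include <string.h>
#include <sys/timex.h>

int main(void)
{
    struct timex tx;
    int state;

    memset(&tx, 0, sizeof(tx));
    tx.modes = 0;                      /* read-only: change nothing */

    state = ntp_adjtime(&tx);
    if (state == -1) {
        perror("ntp_adjtime");
        return 1;
    }

    printf("clock state     : %d (0 = TIME_OK)\n", state);
    printf("offset          : %ld\n", (long)tx.offset);
    printf("freq correction : %.3f ppm\n", tx.freq / 65536.0);
    printf("estimated error : %ld us\n", (long)tx.esterror);
    return 0;
}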

7) The consensus seems to be that one should not attempt to increase
the accuracy of a system's time stamps by measuring the instantaneous
offset between the local time and the reference time and applying
that difference. The reason is that the error of any individual,
instantaneous measurement is potentially large, and any smoothing of
the measurements done by the ntp algorithm to reduce those errors
would effectively be negated. That is, one would be reintroducing
errors into the time stamps that ntp had worked hard to remove.

Follow-up Question:

1) From the discussion above, it is not at all clear how one measures
offsets of less than about 10 ms. How are measurements taken between a
GPS reference with a 1 PPS signal and the local system clock when
people report accuracies in the microseconds or even nanoseconds? Do I
understand correctly that the local system time typically can't be
reliably determined to better than tens of milliseconds?
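
My guess, from reading about the RFC 2783 kernel PPS interface, is
that the pulse edge is timestamped inside the kernel, so user-space
call latency drops out of the comparison; a sketch of what I mean is
below (the /dev/pps0 device path is just an example, and
<sys/timepps.h> is not available on every OS/kernel):

/* Sketch of the RFC 2783 kernel PPS interface: the kernel timestamps
 * the pulse edge itself, against the system clock, so user-space call
 * latency is not part of the measurement.  Device path and capture
 * mode are examples only; support varies by platform. */
#include <stdio.h>
#include <fcntl.h>
#include <sys/timepps.h>

int main(void)
{
    pps_handle_t handle;
    pps_params_t params;
    pps_info_t info;
    struct timespec timeout = { 3, 0 };   /* wait up to 3 s for a pulse */
    int fd;

    fd = open("/dev/pps0", O_RDWR);       /* example device node */
    if (fd < 0 || time_pps_create(fd, &handle) < 0) {
        perror("pps");
        return 1;
    }

    time_pps_getparams(handle, &params);
    params.mode |= PPS_CAPTUREASSERT;      /* timestamp the rising edge */
    time_pps_setparams(handle, &params);

    if (time_pps_fetch(handle, PPS_TSFMT_TSPEC, &info, &timeout) == 0)
        printf("PPS edge at %ld.%09ld (system-clock time)\n",
               (long)info.assert_timestamp.tv_sec,
               (long)info.assert_timestamp.tv_nsec);

    time_pps_destroy(handle);
    return 0;
}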

-Val



On Sep 28, 2005, at 11:14 PM, Danny Mayer wrote:

> Richard B. Gilbert wrote:
>
>> Val Schmidt wrote:
>>
>>>> At 5:08 PM -0400 2005-09-26, Val Schmidt wrote:
>>>>
>>>>
>>>>
>>>>>  I want to log several things with time stamps on the order of
>>>>>  ~.1 ms - maybe less.
>>>>>
>>>>>
>>>>
>>>>     Most modern OSes don't allow you to directly achieve better   
>>>> than 10-20ms accuracy at the level of an individual event.   
>>>> Some  real-time operating systems (RTOSes) may allow you to  
>>>> achieve finer  resolution at that level, but I don't know if any  
>>>> of them are going  to let you get down to the level you want.
>>>>
>>>
>>>
>>>
>>> Can you help me understand why?
>>>
>>>
>> Many operating systems update the clock at ten millisecond
>> intervals; e.g. the clock "ticks" at 100 Hz.  When queried as to
>> the time using O/S services, these systems respond with the
>> current value of the clock register.  The maximum error is thus
>> 9.99999... milliseconds and the typical error is 5 milliseconds.
>> Some very new hardware designs allow ntpd to interpolate between
>> "ticks" and yield a much more precise time, if, and only if, you
>> use NTP-supplied functions to get the time.
>>
>
> Well, what's really happening is that ntpd is keeping the clock really
> accurate over a long time period. However, when you ask for the time
> from the O/S it can only return a value accurate to the resolution
> allowed by the function, usually gettimeofday(). There are a number of
> sources of error that need to be considered.
>
> 1. The resolution of the function call. For example if you use
> gettimeofday(), it is capable of returning results to the microsecond.
> It does not, however, mean that the value returned is that accurate.
>
> 2. The resolution of the clock. Every clock is different so it's  
> hard to
> know what its resolution is unless you look at the specs. You'd  
> need to
> go to the manufacturer's site to find this out, if you're lucky enough
> to know who the manufacturer is. Even then, you don't know if your  
> clock
> is from a bad batch, a good batch or an indifferent batch.
>
> 3. The ability of the O/S and hardware to store a value accurately.  
> Even
> if you are able to obtain a value accurate to the microsecond, being
> able to adjust the clock to accept that value is limited by both the
> Operating System and the Hardware. You'd have a very hard time  
> figuring
> out that error estimate.
>
> 4. You are limited by the ability to get an accurate value from an
> external source, whether it be a Cesium clock, GPS or the Internet.
> This error can be hard to estimate, but NTP tries its best.
>
> 5. You can ask ntpd on the local machine for the most accurate time
> that it thinks it has, but like everything else it takes time to
> retrieve the number, by which time (no pun intended) the value will no
> longer be valid. Think Heisenberg's uncertainty principle: the act of
> taking a measurement causes an error in the result.
>
>> I seem to recall that Windows uses some really odd interval like 17
>> milliseconds between "ticks".
>>
>
> Using clockres from Sysinternals on Windows XP Pro on an Intel CPU  
> on a
> Laptop running on battery:
>
> The system clock interval is 10.014400 ms
>
>> Linux can optionally update the clock every millisecond (1 kHz tick
>> rate) but this doesn't work very well as the system tends to lose
>> clock interrupts when it gets busy.
>> Most applications simply do not require precise and accurate time  
>> and general purpose computers are generally not designed for  
>> precise and accurate timing.
>>
>
> If you do need that kind of accuracy you have to spend money on
> hardware, the cost of doing so increasing with the required accuracy.
> Even then you are limited by the sources of errors listed above.
>
> Danny
>
> _______________________________________________
> questions mailing list
> questions at lists.ntp.isc.org
> https://lists.ntp.isc.org/mailman/listinfo/questions
>

------------------------------------------------------
Val Schmidt
CCOM/JHC
University of New Hampshire
Chase Ocean Engineering Lab
24 Colovos Road
Durham, NH 03824
e: vschmidt [AT] ccom.unh.edu
m: 614.286.3726




