[ntp:questions] Post processing of NTP data...

Brad Knowles brad at stop.mail-abuse.org
Tue Sep 27 09:05:27 UTC 2005


At 5:08 PM -0400 2005-09-26, Val Schmidt wrote:

>  I want to log several things with time stamps on the order of ~ .1ms -
>  maybe less.

	Most modern OSes won't let you directly achieve better than 
about 10-20 ms of accuracy on an individual event.  Some real-time 
operating systems (RTOSes) may give you finer resolution at that 
level, but I don't know whether any of them will get you down to the 
level you want.
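
	If you want to see what your own machine actually delivers, a 
rough sketch like the following (assuming a POSIX system with 
clock_gettime(); compile with something like "cc -o clockres 
clockres.c -lrt", where the file name is just a placeholder) will 
print both the resolution the kernel claims and the smallest 
timestamp step you can actually observe.  Keep in mind that neither 
number says anything about how *accurately* those timestamps track 
the real wall-clock time.

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res, t1, t2;
    long step;

    /* What the kernel claims the clock can resolve. */
    if (clock_getres(CLOCK_REALTIME, &res) == 0)
        printf("reported resolution: %ld ns\n",
            (long)res.tv_sec * 1000000000L + res.tv_nsec);

    /* Smallest step you can actually observe between two reads. */
    clock_gettime(CLOCK_REALTIME, &t1);
    do {
        clock_gettime(CLOCK_REALTIME, &t2);
    } while (t1.tv_sec == t2.tv_sec && t1.tv_nsec == t2.tv_nsec);

    step = (long)(t2.tv_sec - t1.tv_sec) * 1000000000L
        + (t2.tv_nsec - t1.tv_nsec);
    printf("observed clock step: %ld ns\n", step);

    return 0;
}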

>  Although I've read posts in this list archive of others achieving these
>  results more or less reliably, in my own experience I have not seen ntp
>  regulate time of a stratum 2 server without occasional excursions in the
>  100's of ms range (this is for lots of reasons, I don't want to argue
>  with anyone about it here).

	A well-designed system, on good hardware with a well-configured 
OS and application, might be able to achieve long-term stability 
that is accurate to better than a millisecond, but that's accuracy 
(measured statistically over a long period of time), not precision 
(of any single timestamp taken over a very short period of time).

	See <http://ntp.isc.org/bin/view/Support/NTPRelatedDefinitions>.

>  So I thought, well if one can measure the offset between the local clock
>  and that of a stratum 1 server, why not log this information separately
>  from the other data streams and then after the fact, adjust the logging
>  time-stamps by this offset such that regardless of the drift of the
>  local clock with temperature, network jitter, server load, etc., the
>  logging time stamps will very nearly equal that of the stratum 1 server.

	I don't think you could log the time information with high 
enough precision for that kind of post-processing to be useful.
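
	For what it's worth, the mechanics of what you're proposing are 
simple enough.  A rough sketch (assuming ntpd is writing loopstats 
files, whose first three fields are the MJD day, seconds past 
midnight, and the estimated clock offset in seconds; compile with 
something like "cc -o fixup fixup.c -lm", file name again a 
placeholder) might look like this:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Find the loopstats sample nearest in time to a given event
 * (seconds past midnight, same day as the loopstats file) and apply
 * its offset to the event timestamp.  The sign convention assumed
 * here is that a positive offset means the local clock is behind the
 * reference, so the offset is added; check that against your setup.
 */
int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s loopstats event_seconds\n", argv[0]);
        return 1;
    }

    double event = atof(argv[2]);
    FILE *fp = fopen(argv[1], "r");
    if (fp == NULL) {
        perror(argv[1]);
        return 1;
    }

    char line[256];
    long mjd;
    double secs, offset, best_secs = -1.0, best_offset = 0.0;

    while (fgets(line, sizeof(line), fp) != NULL) {
        /* mjd is parsed only to validate the line; fields beyond the
           first three (drift, jitter, ...) are ignored. */
        if (sscanf(line, "%ld %lf %lf", &mjd, &secs, &offset) != 3)
            continue;
        if (best_secs < 0.0 ||
            fabs(secs - event) < fabs(best_secs - event)) {
            best_secs = secs;
            best_offset = offset;
        }
    }
    fclose(fp);

    printf("nearest offset sample: %+.6f s at %.3f\n",
        best_offset, best_secs);
    printf("corrected event time:  %.6f\n", event + best_offset);
    return 0;
}

	The arithmetic is trivial; the problem is that the offset you'd 
be applying is itself only a statistical estimate, with its own 
jitter, so the correction cannot restore precision that was never in 
the original timestamps.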

	I knew some guys who built an IBM 3081 emulator board for use at 
CERN to pre-process the data they were collecting from their 
particle accelerator, and they weren't using NTP to try to maintain 
a high-stability clock with good long-term accuracy.  They were using 
relative timing measurements that they could make very precisely, and 
it didn't really matter to them that this had no correlation 
whatsoever to whatever the wall clock said was the "correct" time. 
All that mattered was that they could tell very precisely that 
particle track X preceded or followed particle track Y by exactly a 
certain time interval Z.

	NTP is always trying to steer the system clock so that it more 
accurately tracks the "true" time, and it does that over long 
periods.  Yes, we may be able to get that accuracy down to a 
surprisingly low number, but that doesn't mean the result would have 
any value to someone trying to measure things at that scale.


	Even if you can accurately measure light years down to the 
attometer, you don't want to use a highly accurate ruler graduated 
in light years to try to measure distances of a few attometers.

	Using macroscopic techniques and facilities to measure 
microscopic values is an exercise in frustration, and a mistake 
waiting to happen.

	Unfortunately, you are not the first person to make this mistake.

-- 
Brad Knowles, <brad at stop.mail-abuse.org>

"Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety."

     -- Benjamin Franklin (1706-1790), reply of the Pennsylvania
     Assembly to the Governor, November 11, 1755

   SAGE member since 1995.  See <http://www.sage.org/> for more info.


