[ntp:questions] Re: fine granularity timing?
Brad Knowles
brad at stop.mail-abuse.org
Thu Apr 28 00:39:50 UTC 2005
At 9:57 PM +0000 2005-04-27, John Pettitt wrote:
> I'm doing that now - I can keep my machines (FreeBSD on Soekris 4801)
> within about 20-50us of each other this way using the PPS from the
> GPS. However, I think to get the 500ns the OP was looking for, he's
> going to need to build hardware. Other local machines stay within
> 200-300us of the refclocks when polling at 16s intervals over 100 Mbit
> network segments. Replacing the crappy oscillators in the PCs would
> help this a lot.
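For reference, a GPS+PPS refclock setup like the one described above is usually configured along these lines. This is only a sketch -- the device paths, fudge values, and driver choice all depend on the particular GPS receiver and OS:

```
# /etc/ntp.conf -- sketch of a GPS+PPS refclock setup (values illustrative)

# NMEA GPS driver (refclock type 20), GPS wired to the first serial port
server 127.127.20.0 mode 1 minpoll 4 prefer
fudge  127.127.20.0 flag1 1 time2 0.400   # flag1 1: process PPS; time2: serial-message offset

# Alternatively, the dedicated PPS driver (refclock type 22) on /dev/pps0
server 127.127.22.0 minpoll 4
fudge  127.127.22.0 flag3 1               # flag3 1: use the kernel PPS discipline
```

With the kernel PPS discipline enabled, the local clock is steered directly by the pulse edges rather than by ntpd's normal polling loop, which is what makes the few-tens-of-microseconds numbers above achievable.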
I've just done a quick survey of all the "open access" stratum 1
time servers listed at
<http://ntp.isc.org/bin/view/Servers/StratumOneTimeServers>. Of the
servers I tested, 41 gave no answer, four reported a bad stratum
value (something other than stratum 1), and 26 returned valid
responses.
Looking at the offsets in those responses, only one outlier
(otc1.psu.edu) reported an offset with a magnitude of 0.050 ms (50
microseconds) or greater; most were in the single-digit microsecond
range. Most responses also had single-digit-microsecond jitter values
(<0.010 ms), and almost all showed a stability of 0.004 ppm or less.
Note, though, that the time fields in the output of "ntpq -c rv" are
reported in milliseconds with only three decimal places, so the
resolution is itself single-digit microseconds.
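A survey like this is easy to script: run "ntpq -c rv" against each host and pull out the stratum, offset, and jitter fields. As a sketch, here is a small parser for the rv variable list; the field names follow the standard rv output format, but the sample response below is fabricated for illustration, not taken from a real server:

```python
import re

def parse_rv(text):
    """Parse the comma-separated "name=value" list printed by
    `ntpq -c rv` into a dict, e.g. {'stratum': '1', 'offset': '0.004'}."""
    fields = {}
    # rv output is name=value pairs separated by commas, possibly
    # spanning several lines; quoted values may contain commas.
    for m in re.finditer(r'(\w+)=("[^"]*"|[^,\s]+)', text):
        fields[m.group(1)] = m.group(2).strip('"')
    return fields

# Sample rv response (illustrative values only):
sample = ('status=0415 leap_none, sync_uhf_radio, 1 event, clock_sync,\n'
          'stratum=1, precision=-20, rootdelay=0.000, rootdispersion=0.382,\n'
          'offset=0.004, jitter=0.002, stability=0.003')

rv = parse_rv(sample)
is_stratum1 = rv.get('stratum') == '1'
offset_ms = float(rv['offset'])      # ntpq reports offset in milliseconds
print(is_stratum1, offset_ms)        # → True 0.004
```

Wrapping this around `subprocess.run(['ntpq', '-c', 'rv', host], ...)` for each host in the server list reproduces the survey; hosts that time out or return a stratum other than 1 get binned separately, as above.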
Even with a dedicated serial PPS distribution device feeding
multiple machines, and no latency-inducing switches or other
Ethernet-type network devices in the path, I think you're going to
have a hard time seeing accuracy much better than one microsecond.
Moreover, NTP was designed to maintain good long-term clock
accuracy, not nanosecond-by-nanosecond accuracy. Even if you had
cesium-beam atomic clocks and a reliable low-latency method of
distributing that time to all machines, I'm not sure that any
semi-standard computer hardware (or operating system) would be able
to keep time that accurately.
The more I think about this, the more it sounds to me like the OP
is going to need custom real-time hardware running a custom
real-time OS (or a suitable off-the-shelf hardware/OS combination),
and even then there's still a time-sync issue. You could use NTP to
handle long-term average clock accuracy, but I still question
whether you'd be able to communicate and coordinate that time
accurately enough to be useful for short-term,
nanosecond-by-nanosecond operation.
Oh, and I think the network probably has to change, too. When
building high-performance compute clusters, network latency is a
real killer, which is why such systems almost always use something
other than Ethernet, even Gig-E -- interconnects more like
Emulex/Giganet or Myrinet. See
<http://www1.us.dell.com/content/topics/global.aspx/power/en/ps4q01_ctcinter?c=us&cs=555&l=en&s=biz>
for one explanation of why latency makes such a huge difference.
Here's an interesting question -- how do deep-space astronomers
coordinate their observations at different facilities to make VLBI
(Very Long Baseline Interferometry) work? I heard one report that
they wrote everything to tape as it came in (with very accurate
timecodes), but then spent months or years afterwards poring over
the data, making sure all the phase shifts were precisely aligned.
Perhaps some of those VLBI techniques would also be useful in a
more real-time environment across a network?
--
Brad Knowles, <brad at stop.mail-abuse.org>
"Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety."
-- Benjamin Franklin (1706-1790), reply of the Pennsylvania
Assembly to the Governor, November 11, 1755
SAGE member since 1995. See <http://www.sage.org/> for more info.