[ntp:questions] Proposed NTP solution for a network

Jason bmwjason at bmwlt.com
Wed Mar 4 01:21:53 UTC 2009


Terje Mathisen wrote:
> Jason wrote:
>> The critical time-stamps of the transactions must be accurate to better than 100 us.
> 
> Sorry, no can do, except probabilistically: You can get X% of time 
> samples within 100 us, as measured by the loopstats file on each client 
> machine, but you cannot guarantee it up front.
> 
Which is what I've been pushing back to the software guys -- the 
requirement is too stringent. I have a glimmer of hope that this will be 
relaxed.

> OTOH, what you _can_ do is to generate a setup where you can, post 
> facto, calculate the worst case error for any given time stamp:
> 
> First you buy 3 or 6 Garmin 18 LVCs (or better: Synergy/Oncore timing 
> receivers set in Zero-D mode) and connect them directly to your primary 
> servers (using the appliance NTP servers as backup), then run both 
> clockstats and loopstats logging files.
> 
I'm pretty sure the appliance is more capable than the 18 LVC: same 
manufacturer as many gov't agencies use, better antenna, highest-grade 
coaxial cable from the antennas, etc. It has PPS and 10 MHz outputs. The 
problem is getting the PPS into the servers.
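
If we could get a PPS line into the servers, my understanding is that
Terje's setup would look roughly like this. A sketch only -- the device
units, the appliance hostname, and the fudge flag are my assumptions:

    # /etc/ntp.conf sketch: GPS time-of-day plus PPS, with stats logging
    server 127.127.20.0 minpoll 4 prefer    # type 20 NMEA driver (/dev/gps0)
    server 127.127.22.0 minpoll 4           # type 22 PPS driver (/dev/pps0)
    fudge  127.127.22.0 flag3 1             # assumed: kernel PPS discipline
    server ntp-appliance.example.com iburst maxpoll 6   # appliance as backup

    # the logs Terje wants for post-facto error bounds
    statsdir /var/log/ntpstats/
    statistics loopstats clockstats
    filegen loopstats  file loopstats  type day enable
    filegen clockstats file clockstats type day enable
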
> With loopstats logging on each client server you can go into these logs 
> and calculate the worst-case error for the sync periods just before and 
> just after the transaction timestamp you need to document. These values 
> are a pretty good estimate of the real quality of said timestamp.
> 
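For my own notes, here is roughly how I'd pull that worst-case bound out
of a loopstats file. A sketch, assuming the standard field layout (MJD,
seconds past midnight, offset in seconds, frequency in ppm, jitter in
seconds, ...), and using |offset| + jitter at the bracketing samples as
a rough bound rather than ntpd's own error budget:

    #!/usr/bin/env python3
    # loopstats_bound.py -- estimate the worst-case clock error around a
    # transaction timestamp from an ntpd loopstats file.
    import sys

    MJD_UNIX_EPOCH = 40587          # MJD of 1970-01-01

    def read_loopstats(path):
        samples = []                # (unix_time, offset, jitter)
        with open(path) as f:
            for line in f:
                fields = line.split()
                if len(fields) < 5:
                    continue        # skip malformed lines
                mjd, sec = int(fields[0]), float(fields[1])
                unix_time = (mjd - MJD_UNIX_EPOCH) * 86400 + sec
                samples.append((unix_time, float(fields[2]), float(fields[4])))
        return samples

    def worst_case_error(samples, when):
        """Bound the clock error at unix time 'when' by the larger of
        |offset| + jitter from the sync just before and just after it."""
        before = max((s for s in samples if s[0] <= when),
                     key=lambda s: s[0], default=None)
        after  = min((s for s in samples if s[0] > when),
                     key=lambda s: s[0], default=None)
        bounds = []
        for sample in (before, after):
            if sample is not None:
                _t, offset, jitter = sample
                bounds.append(abs(offset) + jitter)
        return max(bounds) if bounds else None   # None if no samples found

    if __name__ == "__main__":
        # usage: loopstats_bound.py <loopstats file> <unix timestamp>
        data = read_loopstats(sys.argv[1])
        print(worst_case_error(data, float(sys.argv[2])))
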
> Setting all servers to maxpoll=6 might be needed to follow fast 
> temperature fluctuations quickly enough, even though this results in 
> worse long-term frequency determination. (Your servers will make a lot 
> of micro-adjustments to the clock frequency in order to track as closely 
> as possible.)
> 
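(Concretely, I read that as server lines like the following, with a
placeholder hostname:

    server ntp1.example.com iburst minpoll 4 maxpoll 6

which caps the poll interval at 2^6 = 64 seconds.)
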
> Anyway, with all servers in a given location sitting on the same fast 
> switch, you might in fact be able to stay consistently below 100 us, but 
> it isn't easy!
> ...
> I've just checked my own primary NTP servers, of which I have 2 in each 
> of 3 locations, with a single GPS in each location:
> 
> In Oslo, where the secondary server is using 64 second poll interval 
> (maxpoll=6), the current offset from the primary server is 76 us, with 
> jitter = 89 us.
> 
> In Porsgrunn, with default (1024 seconds) poll interval, the offset is 
> 3.4 ms, with jitter varying between 0.5 ms and 34 us.
> 
> In Bergen the primary server went offline, so the secondary fell back on 
> the WAN connection to Porsgrunn and is running at 1024 seconds poll, 
> with current offset/jitter at 370/46 us.
> 
> In all these locations the two NTP servers are dedicated machines, 
> sitting on the same local switch.
> 
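(For reference, per-peer offset and jitter figures like these are what

    ntpq -p

reports on each server; the "offset" and "jitter" columns there are in
milliseconds.)
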
> I.e. you _might_ be able to get below 100 us _most_ of the time, but 
> there's no way you can guarantee it without scrapping your current blade 
> setup and going to servers with serial ports for PPS signals.
> 
Which I explored very briefly today. Let's just say, ain't gonna happen.
> With a low maxpoll value you can estimate timestamp offsets after the 
> fact.
> 
> Terje
> 
I think I'm getting the collective response that <100 us is unreachable 
given current hardware limitations, so we'll have to get as close as we 
can. This response is _not_ a bad thing; it gives me support to push 
back to the software side.

Jason.
