[ntp:questions] Getting NTP to correct only the clock skew

David Woolley david at djwhome.demon.co.uk
Sat Apr 14 21:19:35 UTC 2007


In article <PcOdnTiZDtzYp7zbnZ2dnUVZ_t7inZ2d at megapath.net>,
hal-usenet at ip-64-139-1-69.sjc.megapath.net (Hal Murray) wrote:

> Then your goal is not to synchronize B's clock to A's clock, you need
> to synchronize B's clock to the source of your CBR data.  How good is

No.  He wants to synchronize B's clock to that of the machine that is
adding time stamps to the data stream.  The basic problem is to re-time
the data so that the network jitter is removed from the stream being
re-broadcast by B.  Either A is the original source, or A is connected
to the original source over a low jitter path.  If A accurately time
stamps the packets and B re-transmits them at the time stamp time plus
at least the maximum network latency, the packets out of B will have the
same timing relationship as those out of A, subject only to clock errors.
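
To make that concrete, here is a minimal sketch of the re-timing loop in
Python.  It is my own illustration, not anything from the original posts:
FIXED_DELAY, transmit() and the (timestamp, payload) packet format are all
assumptions, and it assumes packets arrive in order.

    import time

    FIXED_DELAY = 0.250   # seconds; must exceed the worst-case network latency

    def transmit(payload):
        # stand-in for the real re-broadcast onto B's local network
        print(time.time(), payload)

    def retime(incoming):
        # incoming yields (origin_timestamp, payload) pairs in arrival order,
        # where origin_timestamp is the time A stamped the packet, on a clock
        # that B is synchronised to.  Each packet is released at
        # origin_timestamp + FIXED_DELAY, so the spacing out of B reproduces
        # the spacing at A, subject only to the A/B clock error.
        for origin_ts, payload in incoming:
            wait = (origin_ts + FIXED_DELAY) - time.time()
            if wait > 0:
                time.sleep(wait)
            # if wait <= 0 the latency bound was exceeded; send late rather
            # than drop (the underflow case discussed further down)
            transmit(payload)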

I'm afraid he has been failing to understand that his application is
rather unusual, and has therefore been failing to recognize when people
are solving the wrong problem and to explain more clearly what he is
trying to do.

> that clock?  Is it synchronized to UTC?  (synchronized in frequency,
> I don't think you care about the time)

I think the original presumption was that it was not.  More precisely,
unless he can get a low jitter UTC source at B, other than by synchronising
to A, it doesn't really matter whether or not A is on true time.

> In any case, you have to figure out what you are going to do
> if your buffer overflows or underflows.

What he will get when the system fails through network effects, other than
those affecting time synchronisation, is that a positive spike in network
latency will cause a packet to arrive after its due time for retransmission.
Either there is some theoretical reason why the network latency is bounded, in
which case he makes sure that he delays transmission by more than the 
maximum latency, or he is just going to have to retransmit the packet 
as soon as it arrives.  I think he is assuming that access latency to the
local network at B is negligible, so there will be no transmission queue.
That's the underflow case.  I imagine that it is easy to dimension the
system so that overflow is impossible.
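
The dimensioning itself is a back-of-envelope exercise.  A sketch with
entirely made-up numbers (the latency bounds and bit rate are assumptions,
not figures from the posts):

    # If the A-to-B latency is bounded, hold each packet for at least that
    # bound; the buffer then only has to cover the spread between minimum
    # and maximum latency at the stream's bit rate.

    MAX_LATENCY = 0.200        # s, assumed bound on network latency
    MIN_LATENCY = 0.020        # s
    BIT_RATE    = 2_000_000    # bit/s of the CBR stream

    holding_delay = MAX_LATENCY                  # retransmit at timestamp + this
    buffer_bits   = (MAX_LATENCY - MIN_LATENCY) * BIT_RATE

    print(f"hold each packet for {holding_delay * 1000:.0f} ms past its timestamp")
    print(f"buffer at least {buffer_bits / 8 / 1024:.1f} KiB")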

> What happens if B sends a little too fast for a while and then
> a little too slow for a while to get back in sync?  Something
> like that is going to happen.  You only get to discuss the
> value of "little".  How close do you have to get?

I think what he was actually asking for is that B should correct the
frequency error, dead beat, without overshooting to correct the phase
error.  Unfortunately, that ignores the fact that the NTP algorithm is a
phase-locked loop, so it tries to maintain frequency lock by maintaining
phase lock.  Most hardware solutions to locking two frequencies also do so
by locking the phases, as it is easier to measure phase error than to
measure frequency error directly.
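
As a toy illustration of that shape (this is not NTP's actual discipline
algorithm; the loop structure and gains are my own, purely to show a
phase-driven loop pulling frequency into lock):

    def discipline(offsets, kp=0.3, ki=0.05, interval=1.0):
        # offsets: successive measured phase errors (seconds) of the local
        # clock against the reference, one per polling interval.  Yields the
        # frequency adjustment (s/s) to apply over the next interval.
        # Frequency is never measured directly; frequency lock falls out of
        # continually driving the phase error towards zero.
        freq = 0.0
        for offset in offsets:
            freq += ki * offset / interval       # integral: accumulated frequency trim
            yield freq + kp * offset / interval  # proportional: immediate phase slew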

> If both clocks are reasonably stable (needs stable temperature) then
> you can probably build a pseudo clock on B by building a PLL on

Once you've introduced the phase lock, you've introduced part of the
problem with NTP!  NTP is going to have better-implemented phase locking
than a one-off implementation.  I suspect the changes he wants to NTP
are really:

1) set the step out limit based on the actual network jitter in his case;
2) when a step occurs, do it by adjusting a correction between the local
   clock and NTP's idea of the local clock, so that the actual local clock
   never steps (sketched below).
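
A minimal sketch of what I take (2) to mean, under my own assumptions (the
class, the 500 ppm slew rate and the use of a monotonic raw clock are mine,
not anything NTP currently provides):

    import time

    class CorrectedClock:
        # The raw clock is never stepped; what would have been a step goes
        # into a pending correction that is slewed into the reported time
        # at a bounded rate.
        def __init__(self, slew_rate=500e-6):    # 500 ppm, the classic adjtime() rate
            self.pending = 0.0                   # correction not yet applied (s)
            self.applied = 0.0                   # correction already folded in (s)
            self.slew_rate = slew_rate
            self.last = time.monotonic()

        def step(self, amount):
            # what NTP would have applied as a step becomes a pending slew
            self.pending += amount

        def now(self):
            t = time.monotonic()
            dt = t - self.last
            self.last = t
            # fold in at most slew_rate * dt of the pending correction
            chunk = max(-self.slew_rate * dt,
                        min(self.slew_rate * dt, self.pending))
            self.pending -= chunk
            self.applied += chunk
            return t + self.applied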

> the buffer fullness.  The time constant on the PLL needs to be

That does actually point out that you must use the full phase locking
and must over-correct the frequency.  Otherwise, over time, you could
suffer an underflow.
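
Here is a sketch of what I take Hal's buffer-fullness loop to be, with
invented gains and a half-full target (none of the numbers come from the
posts):

    TARGET_FILL = 0.5      # aim to keep the buffer half full

    def rate_adjust(fill_levels, nominal_rate, kp=1e-4, ki=1e-6):
        # fill_levels: successive buffer occupancies as a fraction 0..1.
        # Treat (fill - target) as the phase error and servo B's output
        # rate on it.  The integral term is the "over-correction" of
        # frequency: it pulls the fill level back to the target instead of
        # merely stopping its drift, which is what prevents a slow underflow.
        integral = 0.0
        for fill in fill_levels:
            error = fill - TARGET_FILL
            integral += error
            yield nominal_rate * (1.0 + kp * error + ki * integral)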

> slow enough to filter out the network jitter and fast enough
> to track the changes in the crystal drift due to temperature.

I think this is what is called the Allan intercept.
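
Very roughly, it is the averaging time at which the jitter-induced
instability (which falls as 1/tau) crosses the oscillator's own flicker
floor.  A back-of-envelope with invented figures (1 ms RMS jitter, 1 ppm
crystal floor), ignoring the exact noise-type prefactors:

    # white phase noise:  sigma_y(tau) ~ x_rms / tau   (falls with tau)
    # crystal flicker:    sigma_y(tau) ~ floor         (roughly flat)
    x_rms = 1e-3      # 1 ms RMS network jitter (assumed)
    floor = 1e-6      # 1 ppm flicker floor for a bare crystal (assumed)

    tau_intercept = x_rms / floor
    print(f"Allan intercept ~ {tau_intercept:.0f} s (~{tau_intercept / 60:.0f} min)")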

> An hour is probably the right ballpark.

I presume that maxpoll in NTP is based on empirical measurements, in
which case the figure you want is more like 20 minutes.  However, if
the latest NTP algorithms work as well as Dr Mills claims, their
adaptive time constants are going to do better than this (I'm still
not convinced that it really centres phase and frequency as fast as it
might when they both start outside their respective noise bands).

> You need to make the buffer big enough so that it doesn't
> over/underflow too often.  It doesn't have to never underflow,
> it just has to do that rarely relative to how often the network
> breaks.

I think he has a never underflow requirement.  Really, for anything like
this you should always specify a percentile compliance level, but managers
often don't understand that they are dealing with statistics.  I think there
is a good chance, here, that he can actually bound the jitter such that
underflow essentially never happens.  (He would be better off using ATM
reserved bandwidth than a statistical packet network, of course.)  We
don't really even know if this is a commercial or military application.
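
If he can collect latency measurements, putting a number on "essentially
never" is at least mechanical.  A sketch using a plain empirical percentile
(the function and the 99.999 figure are my own illustration):

    def holding_delay(latencies, percentile=99.999):
        # Smallest delay that covers `percentile` per cent of the observed
        # one-way latencies; anything later than that is the accepted
        # failure rate, to be traded off against how often the network
        # itself breaks.
        ordered = sorted(latencies)
        index = min(len(ordered) - 1, int(len(ordered) * percentile / 100.0))
        return ordered[index]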

> I'd probably start with a super big buffer and add lots of instrumentation
> until I was pretty sure I knew what was going on, say log the high/low
> every minute or hour

I don't think his problem is buffer size, it is accurately retiming the
transmissions.  If there is a buffer size problem it is in the final
consumers, and I think he only has a one way path to them.

Incidentally, I suspect he has a requirement to maintain timing to a higher
standard than the final consumer.  The final consumer may well be just
crystal stabilised, so he may be trying to improve that by a factor of
about 10.



