[ntp:questions] Trimble Resolution T

Hal Murray hal-usenet at ip-64-139-1-69.sjc.megapath.net
Tue Oct 18 09:35:52 UTC 2011


In article <78D1132E-F79C-4C3B-92EA-AB8861057423 at gmail.com>,
 Dennis Ferguson <dennis.c.ferguson at gmail.com> writes:
>
>On 17 Oct, 2011, at 15:14 , Hal Murray wrote:

>> It's not the delay that is the problem.  It's simple to correct for a
>> fixed delay.  (at least in theory)  The problem is variations in the delay.
>> 
>> Jitter can easily be caused by cache misses, by jitter in finishing the
>> processing of the current instruction, or by having interrupts disabled.
>> (There are probably other sources.)
>
>I actually would have said just the opposite.  Jitter is less of a
>problem because you can see it.  That is, if you take a series of
>samples you can see the variation in delay reflected in the variations
>of the offsets you compute, and you have some basis for filtering out
>the samples which are most severely affected (assuming, for example, that
>those with less delay, and hence a more positive offset, are "better" than
>those with more delay).  Even if you can't eliminate the effect of jitter
>entirely, you at least have the data to develop an expectation of the
>residual error it is causing.

The problem with trying to average out jitter is that you don't have
a good place to stand.  The raw data isn't Gaussian.  It might be if
you had lots of data, but for a number of samples small enough to be
useful for ntpd, it could easily be bad luck.  On an idle system,
the critical code might still be in the cache.  On an almost idle
system, some nasty user code could trash the cache without using
much CPU.  There is no way to tell those two cases apart unless
you know the time, and if you knew that, we wouldn't be discussing this.
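The delay-based filtering Dennis describes can be sketched in a few lines.
This is a toy illustration of the general idea (prefer the lowest-delay
sample in a small window), not ntpd's actual clock-filter code; the
function name and sample values are invented for the example.

```python
# Toy sketch of minimum-delay sample selection: among a small window
# of (delay, offset) measurements, keep the offset from the sample
# with the least round-trip delay, on the assumption that low-delay
# samples are the least corrupted by jitter.

def best_offset(samples):
    """samples: list of (delay, offset) pairs, in seconds.
    Returns the offset of the minimum-delay sample."""
    delay, offset = min(samples, key=lambda s: s[0])
    return offset

# Example window: three samples with varying delay (seconds).
samples = [(0.8e-3, 12e-6), (0.3e-3, 5e-6), (1.5e-3, 40e-6)]
print(best_offset(samples))  # the 0.3 ms sample's offset is chosen
```

Of course, as noted above, with only a handful of samples the winner may
still be a lucky or unlucky draw rather than a representative one.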

-- 
These are my opinions, not necessarily my employer's.  I hate spam.
