[ntp:questions] Has anyone thought about this?

Brian Utterback brian.utterback at oracle.com
Mon Apr 14 13:50:54 UTC 2014

I think you are missing a very important point. As the frequency 
correction gets closer to the true correction, the difference between 
the system clock timestamps and the actual timestamps gets smaller, 
meaning that the error introduced by the frequency correction likewise 
gets smaller. If we assume a situation where the offset actually 
reaches zero at the same time that the frequency correction becomes 
correct, then that situation is stable as long as nothing changes. Your 
idea of using the performance counter would then introduce errors at 
that point: either it would not be stable, or it might settle at some 
point other than zero offset and perfect correction. So the answer to 
your question is yes, there is a way that using the PC timestamps will 
not be more accurate. The PC is only more accurate as long as the 
difference between its calculated frequency and its actual frequency is 
smaller than the difference between the system clock's adjusted 
frequency and its actual frequency.
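To make the scaling concrete, here is a small back-of-the-envelope sketch (plain Python, not ntpd code; the numbers are illustrative): the error a residual frequency error contributes to an interval measurement is just the residual times the interval length, so it shrinks as the correction converges.

```python
# Illustrative only: error contributed to an interval measurement by a
# residual frequency error.  error = residual * interval, and conveniently
# ppm * seconds comes out directly in microseconds.

def interval_error_us(residual_ppm: float, interval_s: float) -> float:
    """Microseconds of error accumulated over interval_s by a clock whose
    rate is off by residual_ppm parts per million."""
    return residual_ppm * interval_s  # ppm * seconds = microseconds

# A ~0.3 ms round trip measured with various residual frequency errors:
for residual in (500.0, 28.0, 1.0):
    print(f"{residual:6.1f} ppm -> {interval_error_us(residual, 0.0003):.7f} us")
```

Even at the 500 ppm cap the relative error on a measured delta-T is only 5e-4, and at the single-ppm residuals typical once the loop has converged it is far below the measurement noise, which is the point about the stable zero-offset case.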

Brian Utterback

   On 4/14/2014 9:34 AM, Charles Elliott wrote:
> Ntpd on my system uses a frequency offset (according to NTP Plotter,
> thank you very much) of -26 to -28 ppm fairly consistently.  If I
> understand it correctly, that corresponds to a correction of 26 to
> 28 microseconds per second of elapsed time.  Is there any way that
> measuring t4p = PC(t4) - PC(t1) is not going to be more accurate than
> taking t1 and t4 from the system clock, given that the PC driven by
> the HPET has a resolution of ~70 ns?
> It would be easy to test.  Just record the PC value when p_org (= t1)
> is set, and record the PC again when p_rec (= t4) is saved.  Then
> send the difference between the new PC values (as t4p, say) to the
> rawstats log at the end of the line.  It would only take a few hours
> to see whether t4 - t1 differs from t4p.
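The proposed test can be mocked up outside ntpd. Here is a rough stand-alone sketch (Python's perf_counter standing in for the HPET-driven PC, a sleep standing in for the network round trip; the names mirror the t1/t4/t4p above, but nothing here is ntpd code):

```python
# Hypothetical mock-up of the experiment: capture a performance-counter
# reading alongside each system-clock timestamp and compare the two
# interval measurements.
import time

t1_sys = time.time()           # analogue of p_org (t1), system clock
t1_pc = time.perf_counter()    # analogue of PC(t1), high-resolution counter
time.sleep(0.05)               # stand-in for the round trip
t4_sys = time.time()           # analogue of p_rec (t4), system clock
t4_pc = time.perf_counter()    # analogue of PC(t4)

delta_sys = t4_sys - t1_sys    # interval from system-clock timestamps
t4p = t4_pc - t1_pc            # interval from the performance counter

# Any slew applied to the system clock during the interval shows up here:
print(f"system-clock interval: {delta_sys:.9f} s")
print(f"perf-counter interval: {t4p:.9f} s")
print(f"difference:            {delta_sys - t4p:+.9f} s")
```

In ntpd itself the equivalent change would be logging the extra counter delta next to each rawstats line, as described above; the sketch only shows the shape of the comparison.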
> My new DIY NAS has ntpd running on FreeBSD.  It keeps pretty good time.
> I recorded the ping time (pinging constantly for 5 minutes) from the
> NAS to two computers here.  Below is a table of average ping times and
> delay as computed by ntpd:
>               Avg. ping   ntpd
>   Computer    time (ms)   delay (ms)
>   1           0.264681    0.236       (Win 7, Core i7 3820, GA-X79-UD3 (rev 0) mobo, Gigabit Ethernet)
>   2           0.337169    0.321       (Win 8, Core i7 3820, GA-X79-UD3 (rev 1) mobo, Gigabit Ethernet)
> Note that they are different.
> Charles Elliott
> -----Original Message-----
> From: questions-bounces+elliott.ch=verizon.net at lists.ntp.org [mailto:questions-bounces+elliott.ch=verizon.net at lists.ntp.org] On Behalf Of Terje Mathisen
> Sent: Thursday, April 10, 2014 10:21 AM
> To: questions at lists.ntp.org
> Subject: Re: [ntp:questions] Has anyone thought about this?
> Brian Utterback wrote:
>> On 4/10/2014 3:22 AM, Terje Mathisen wrote:
>>> The maximum ntpd slew is ±500 ppm, which means that the absolute
>>> maximum possible slew between UTC and the local clock would be 1000
>>> ppm (i.e. the clock is maximally bad, at +500 ppm, and we are
>>> currently slewing at -500 ppm), in which case the maximum error
>>> component from this would be 1/1000th of the actual time delta. (In
>>> real operating systems the actual errors are several orders of
>>> magnitude less! Typical clock frequency adjustments due to
>>> temperature cycling are in the single ppm range, but even a few tens
>>> of ppm gives relative errors in the 1e-4 to 1e-5 range, which doesn't
>>> impact the control loop at all.)
>> I am pretty sure that the ±500 ppm is absolute and is already the sum
>> of the frequency correction and the current clock slewing. But one of
> Oh sure, that is why I wrote that this is the theoretical maximum possible, with real-life servers being at least an order of magnitude better behaved.
>> the reasons for having a maximum in the first place is to put a cap on
>> the error introduced because of the instantaneous frequency
>> corrections taking place at the time the timestamps are taken. This is
>> all covered in chapter 11, Analysis of Errors, in the first edition of
>> Das Buch (Computer Network Time Synchronization, Mills, 2006). I am
>> pretty sure that it is also in the 2nd ed, but I don't have access to that one.
> Neither do I, but I am absolutely sure Dr Mills included this error component in his stability and convergence calculations. :-)
> If you do allow far higher slew rates, like some other programs do, then you would indeed have to separate the offset slew from the frequency correction, and use the frequency clock only to measure delta-Ts.
> The easiest way to do this is of course to keep a software delta clock around: it would start out the same as the OS clock, but then include only frequency adjustments to its rate, not any slew adjustments. At that point either HPET or RDTSC could be used as the common frequency source for both clocks.
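A minimal sketch of that delta clock, under stated assumptions (the class name and interface are invented for illustration; the raw counter argument stands in for HPET/RDTSC ticks, and nothing here comes from any real NTP implementation):

```python
# Sketch of the "delta clock" idea: it is seeded once from the OS clock,
# advances from a shared raw counter (HPET/RDTSC stand-in), applies only
# frequency corrections to its rate, and deliberately ignores offset
# slews, so delta-T measurements are unaffected by slewing.

class DeltaClock:
    def __init__(self, start_time: float, start_raw: float = 0.0,
                 freq_correction_ppm: float = 0.0):
        self.now = start_time            # seeded from the OS clock once
        self.last_raw = start_raw        # last raw-counter reading seen
        self.freq = freq_correction_ppm  # rate correction, in ppm

    def adjust_frequency(self, ppm: float) -> None:
        # Frequency steering is applied; offset slews deliberately are not.
        self.freq = ppm

    def read(self, raw_counter: float) -> float:
        # Advance by the raw elapsed ticks, scaled by the corrected rate.
        elapsed = raw_counter - self.last_raw
        self.now += elapsed * (1.0 + self.freq * 1e-6)
        self.last_raw = raw_counter
        return self.now

clk = DeltaClock(start_time=1000.0, start_raw=0.0, freq_correction_ppm=-27.0)
t_a = clk.read(0.0)
t_b = clk.read(10.0)   # 10 raw seconds later
# The measured delta-T reflects only the -27 ppm rate correction,
# no matter what slew the OS clock is applying at the same time.
print(t_b - t_a)
```

The design point is the one made above: because slews never enter `read()`, two readings of this clock give a clean delta-T even while the OS clock is being slewed toward UTC.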
> Terje
> --
> - <Terje.Mathisen at tmsw.no>
> "almost all programming can be viewed as an exercise in caching"
> _______________________________________________
> questions mailing list
> questions at lists.ntp.org
> http://lists.ntp.org/listinfo/questions
