[ntp:questions] TSC, default precision, FreeBSD

Todd Glassey tglassey at earthlink.net
Sat Sep 5 19:44:44 UTC 2009


David Mills wrote:
> Dave
>
> I lobbied hard for NIST and USNO to use FreeBSD 
They do - they are Dell boxes running FreeBSD - just not with NTP.ORG's 
code base on them.
> rather than Linux and a 
> modern NTP release; USNO does, but NIST does not due to internal inertia 
> and the fear to change things. Also, NIST does not use the clock 
> discipline algorithm nor the ACTS driver, but rather another one called 
> lockclock. USNO uses a modern NTP release ex box including Autokey. NIST 
> has no agenda to lie or cheat, just to use whatever becomes public. It 
> happens the version they chose had the misinterpreted precision code, 
> but that would not change any selection algorithm distanced by a few 
> milliseconds.
>   
Wrong - the NIST servers are using ACTS...
> Minor nit. The system clock read routine does not fuzz below the 
> precision; it fuzzes below the resolution. The significant bits of the 
> clock are not affected. Most serious engineers would agree that the precision 
> as defined in NTP is a correct interpretation. It says the user must 
> expect that the error in reading the system clock will be degraded by the 
> time it takes to read it. There is no probability question here. You can argue 
> the actual error in reading the clock is somewhere within the latency 
> period, but you don't know exactly where. The precision statistic is an 
> upper bound.
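For concreteness, here is a minimal sketch of that idea - measure how long a
clock read takes and express it as a power of two, which is roughly what
ntpd's default_get_precision() does; the loop count and the use of
gettimeofday() are illustrative assumptions, not the actual ntpd code:

    #include <stdio.h>
    #include <sys/time.h>

    int
    main(void)
    {
        struct timeval t1, t2;
        double best = 1.0;      /* smallest observed read-to-read delta, seconds */
        double d;
        int i, prec;

        /* read the clock back to back and keep the smallest nonzero delta */
        for (i = 0; i < 1000; i++) {
            gettimeofday(&t1, NULL);
            gettimeofday(&t2, NULL);
            d = (t2.tv_sec - t1.tv_sec) + (t2.tv_usec - t1.tv_usec) / 1e6;
            if (d > 0 && d < best)
                best = d;
        }

        /* express the latency as 2^prec seconds (prec <= 0), as ntpq reports it */
        for (prec = 0; best < 1.0 && prec > -31; prec--)
            best *= 2.0;
        printf("precision = %d (about 2^%d s)\n", prec, prec);
        return (0);
    }

If a clock read takes on the order of a microsecond, this lands near -20,
i.e. about a microsecond, which then serves as the upper bound described above.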
>
> I don't know why Janusz is so stirred up about this other than to make 
> noise. I sure wish my comments made it to the newsgroup; it would avoid 
> a good deal of misinformation.
>
> Dave
>
> Dave Hart wrote:
>
>   
>> On Sep 5, 10:06 am, "Janusz U." wrote:
>>
>>>>>>>> -#if defined(__FreeBSD__) && __FreeBSD__ >= 3
>>>>>>>> -        u_long freq;
>>>>>>>> -        size_t j;
>>>>>>>> -
>>>>>>>> -        /* Try to see if we can find the frequency of the counter
>>>>>>>> -         * which drives our timekeeping
>>>>>>>> -         */
>>>>>>>> -        j = sizeof freq;
>>>>>>>> -        i = sysctlbyname("kern.timecounter.frequency", &freq, &j, 0, 0);
>>>>>>>> -        if (i)
>>>>>>>> -                i = sysctlbyname("machdep.tsc_freq", &freq, &j, 0, 0);
>>>>>>>> -        if (i)
>>>>>>>> -                i = sysctlbyname("machdep.i586_freq", &freq, &j, 0, 0);
>>>>>>>> -        if (i)
>>>>>>>> -                i = sysctlbyname("machdep.i8254_freq", &freq, &j, 0, 0);
>>>>>>>> -        if (!i) {
>>>>>>>> -                for (i = 1; freq; i--)
>>>>>>>> -                        freq >>= 1;
>>>>>>>> -                return (i);
>>>>>>>> -        }
>>>>>>>> -#endif
>>> seems to change it. It appeared and disappeared in ntp_proto - with no documented
>>> reason, and no record of who did it or when.
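For reference, a minimal sketch (not ntpd source) of what that removed loop
computes: it turns a counter frequency into a power-of-two exponent, i.e. it
reports the counter's resolution rather than the cost of reading the clock;
the 1 GHz figure below is only an illustrative assumption:

    #include <stdio.h>

    /* the same shift loop as in the removed FreeBSD block above */
    static int
    freq_to_exponent(unsigned long freq)
    {
        int i;

        for (i = 1; freq; i--)
            freq >>= 1;
        return (i);
    }

    int
    main(void)
    {
        unsigned long freq = 1000000000UL;  /* a hypothetical 1 GHz timecounter */

        /* 1 GHz needs 30 bits, so this prints -29; 2^-29 s is roughly 1.9 ns */
        printf("exponent = %d\n", freq_to_exponent(freq));
        return (0);
    }

A GHz-class counter therefore yields figures in the -29/-30 range, which ties
in with the precision=-29 value mentioned further down the thread.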
>>
>> It was removed from ntpd by Dr. Mills on 2001-10-08 in revision
>> 1.99.1.2 of ntp_proto.c, part of ChangeSet 1.706.1.10. Please see:
>>
>> http://ntp.bkbits.net:8080/ntp-dev/?PAGE=cset&REV=3bc262a7lht0pKJJOm0ZB12BsLruxw
>> http://tinyurl.com/l5vcjf
>>
>> It had been in ntpd since before the switch to BitKeeper near the end
>> of last century, and I don't know how to research changes before then.
>>
>>> Quote of Poul-Henning Kamp:
>>> "This is the correct way to get the precision on all FreeBSD versions
>>> after FreeBSD 3.
>>>
>>> I have no idea why that code was removed."
>>
>> That seems like a pretty clear hint Poul-Henning added it.
>>
>>> Quote of Dave Mills:
>>> "The FreeBSD precision code was removed because it was misguided[...].
>>> By definition in the spec and implementation the
>>> precision is defined as the latency to read the system clock. The
>>> FreeBSD code was an attempt to define the resolution, not the precision.
>>> Now you have an idea why the code and any other like it was removed."
>>>
>>> Summary: your choice? NO - "By definition in the spec and implementation the
>>> precision is defined as the latency to read the system clock"
>>>
>>> My short explanation:
>>> The key is the word "PRECISION";
>>> look at: http://en.wikipedia.org/wiki/Accuracy_and_precision and
>>> http://www.tutelman.com/golf/measure/precision.php
>>> look at: http://www-users.mat.uni.torun.pl/~gapinski/storage/NTP.pps and
>>> http://phk.freebsd.dk/pubs/timecounter.pdf
>>
>> These external references are absolutely irrelevant to the question at
>> hand.  This is not about what "precision" means in English, or in any
>> other scope than the NTP specification and reference implementation.
>> It is common practice to "hijack" an existing word and nail it down to
>> a particular meaning within an engineering specification.  Do not
>> confuse NTP's precision with any other uses of the word precision.
>> That would be imprecise, to say the least.
>>
>>> default_get_precision() measures the system precision according to the definition
>>> "the latency to read the system clock". So according to Poul-Henning Kamp it
>>> is a precision. According to e.g. the wiki it is an accuracy...
>>> Then, once ntpd has the measured "precision" variable, it is used as a random
>>> parameter (extra noise) in the time-reading function. So then "precision" really
>>> does mean precision.
>>
>> I am able to parse no meaningful content from this paragraph.  NTP
>> precision means what the NTP specification says it means.  "really"
>>
>>> It seems the precision may not be a good value. Where is the truth, I am
>>> asking again? Is the software or the hardware solution right?
>>
>> I claim the truth is ntpd 4.1.1 lies about its (NTP) precision on
>> FreeBSD thanks to code which was subsequently yanked, because it was
>> wrong.  I do not know why NIST opts to use such an old version of
>> ntpd, but I doubt it's because they are vain and like to see
>> precision=-29 from ntpq.  For clarity, that means 2 raised to the -29
>> power seconds.
>>
>> I will note that if PHK added the code to ntpd, his motives in doing
>> so appear questionable to me.  The code essentially gives FreeBSD an
>> unfair advantage over other OSes as seen through the lens of ntpq and
>> ntpd.  PHK is a prominent FreeBSD developer.  Draw your own
>> conclusion.
>>
>> Why, you might ask, is it wrong for ntpd to use the resolution of the
>> source underlying the system clock as the NTP precision of that
>> clock?  Simply because any (English) precision the hardware may have
>> internally that exceeds the NTP precision is unavailable to ntpd in
>> operation.  That is the reason the NTP precision is defined in terms
>> of the time to read the system clock, as ntpd reads it.  As you have
>> noted, ntpd "fuzzes" every timestamp read from the system clock by
>> using random bits instead of whatever the system clock returned for
>> bits beyond the measured precision.  This is clearly so that ntpd's
>> calculations, and the calculations of remote clients, are not
>> influenced by what amount to garbage bits returned by the system
>> clock.  Over repeated samples, the random bits will average to the
>> middle value, which is the most accurate representation of the clock
>> as readable by ntpd.
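A minimal sketch of that fuzzing idea (not the actual ntpd routine): the bits
of the timestamp fraction that lie below 2^precision are replaced with random
bits, so neither ntpd nor its clients are steered by bits the clock cannot
really resolve; the 32-bit fraction and the -20 precision below are
illustrative assumptions:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /*
     * frac is a 32-bit NTP fraction (1 unit = 2^-32 s); precision is the
     * NTP precision exponent, e.g. -20 for about a microsecond.
     */
    static uint32_t
    fuzz_fraction(uint32_t frac, int precision)
    {
        int fuzz_bits = 32 + precision; /* bit positions below the precision */
        uint32_t mask;

        if (fuzz_bits <= 0)
            return (frac);              /* nothing below the precision to fuzz */
        mask = (fuzz_bits >= 32) ? ~0U : ((1U << fuzz_bits) - 1);
        return ((frac & ~mask) | ((uint32_t)random() & mask));
    }

    int
    main(void)
    {
        srandom(1);
        printf("fuzzed fraction: %08x\n",
            (unsigned)fuzz_fraction(0x12345678, -20));
        return (0);
    }

Over many samples the random low-order bits average to the midpoint, which is
the point about the most accurate representation made above.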
>>
>> I hope this helps you understand NTP precision.  I also hope you will
>> cease attempting to conflate it with any other definitions of
>> "precision" or related terms.
>>
>> Cheers,
>> Dave Hart
>>




More information about the questions mailing list