[ntp:questions] Red Hat vote for chrony

Charles Swiger cswiger at mac.com
Fri Dec 5 20:35:42 UTC 2014


On Dec 5, 2014, at 11:47 AM, Paul <tik-tok at bodosom.net> wrote:
> On Fri, Dec 5, 2014 at 9:37 AM, Charles Swiger <cswiger at mac.com> wrote:
>> I also make sure that my
>> timeservers are running in temperature-controlled environments so that
>> such daily drifts you mention are minimized.
> 
> I'm starting to think that people answering questions are unsure of the
> real question so they make a number of assumptions.  If you care about
> sub-millisecond time then you need to say that and the question should be
> answered in that context.

Well, we do have time enthusiasts around who like to achieve the best
precision they can, regardless of whether there is a specific business
justification.  :-)

> I suspect most of the questions here refer to
> sub-second accuracy and most of the elaboration is unneeded.

True; if you just want accuracy to the nearest second, you don't need
to do anything elaborate to achieve that.

Absent any other requirements, I think it reasonable for folks to target
~1 millisecond level of accuracy, which is quite doable on many platforms,
even using remote timeservers over the WAN.
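As a reminder of where that millisecond comes from: NTP estimates the
local clock's offset from the four on-wire timestamps of a client/server
exchange (RFC 5905).  A minimal sketch, with made-up example numbers:

```python
# Sketch of the NTP on-wire offset/delay calculation (RFC 5905).
# t1: client transmit, t2: server receive, t3: server transmit,
# t4: client receive -- all in seconds.  The example values below
# are invented for illustration, not taken from a real exchange.

def ntp_offset_delay(t1, t2, t3, t4):
    """Return (offset, round-trip delay) of the local clock vs. the server."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Example: ~40 ms round trip, client clock 5 ms behind the server.
offset, delay = ntp_offset_delay(100.000, 100.025, 100.026, 100.041)
print(offset, delay)   # ~0.005 s offset, ~0.040 s delay
```

The offset estimate is only as good as the assumption that the network
path is symmetric, which is why a quiet WAN path can still deliver
~1 ms while a congested one cannot.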

> If all your external clocks fail I suspect the typical user can depend
> on the disciplined virtual clock for days.

For real hardware, sure-- once the intrinsic frequency drift has been
measured and compensated, you can free-run for days into weeks without
drifting too far.  Cell phone towers (especially CDMA) are a decent
example of such fault-tolerant systems.
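The back-of-envelope arithmetic here is simple: accumulated error is
just residual frequency error times elapsed time.  The residual values
below are illustrative assumptions, not measurements of any particular
hardware:

```python
# Accumulated time error of a free-running clock whose frequency
# has already been disciplined down to a small residual error.
# The ppm figures are assumed, typical-order-of-magnitude numbers.

def freerun_error(residual_ppm, seconds):
    """Accumulated error in seconds after free-running for `seconds`."""
    return residual_ppm * 1e-6 * seconds

week = 7 * 86400
# Undisciplined commodity crystal, ~50 ppm: ~30 s per week.
print(freerun_error(50, week))    # ~30.24 s
# After NTP has measured the drift, ~0.1 ppm residual: ~60 ms per week.
print(freerun_error(0.1, week))   # ~0.06 s
```

Which is why a well-disciplined clock can coast for days at sub-second
accuracy once its drift file is populated-- provided the temperature
stays put.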

>> For almost all of human history, the sun or the "fixed celestial heavens"
>> have provided the most accurate time reference available.  Even today,
>> we add (or subtract, in theory) leap seconds in order to keep UTC and UT1
>> aligned to better than a second courtesy of IERS.
>> 
>> Yes, the USNO, CERN, and so forth now do have sufficiently high quality
>> atomic clocks which have better timekeeping precision than celestial
>> observations.
>> 
> 
> I think there's some confusion here.  Search for BIPM paper clock or read <
> http://www.ggos-portal.org/lang_en/GGOS-Portal/EN/Topics/Services/BIPM/BIPM.html

What confusion?  Certainly it's a decent paper to read....

>> Such a point is orthogonal to the notion of how to measure a local clock
> 
> I think this is an interesting question.  How does one get high resolution
> measurements of the error in the virtual clock maintained with NTP (or
> Chrony)?  I thought it was done with purpose built systems.

Yes, you need to compare timestamps against a purpose-built reference
like a TCXO, cesium, or rubidium clock, ideally connected via a fast
interrupt-driven parallel, serial, or network port which also provides
hardware timestamping to minimize the processing latency.
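The usual trick is a PPS (pulse-per-second) signal: the reference emits
an edge on each whole second, the kernel timestamps it on interrupt, and
the local clock's error at that instant is its timestamp's distance from
the nearest whole second (assuming the error is well under half a
second).  A sketch of that post-processing step, with invented capture
timestamps:

```python
# Sketch: given system-clock timestamps captured at reference PPS
# edges (which land on exact whole seconds of the reference), the
# local clock error at each pulse is the distance to the nearest
# whole second.  Assumes the clock is already within +/- 0.5 s.
# The sample captures below are invented for illustration.

def pps_errors(timestamps):
    """Offset of each captured timestamp from the nearest whole second."""
    return [t - round(t) for t in timestamps]

captures = [1000.000012, 1001.000015, 1001.999998, 1003.000021]
print(pps_errors(captures))   # microsecond-scale offsets, sign included
```

The real work-- RFC 2783 PPS capture, interrupt latency, timestamp
resolution-- all lives below this step; this just shows how the error
series is derived once you have the captures.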

> I don't expect a random version of Linux on generic hardware to be able to
> maintain the clock at nanosecond scale.

True.  I don't expect any version of Linux to perform at nanosecond scale,
but that has as much to do with kernel bugs and the timekeeping compromises
that particular OS has chosen as with the hardware itself.

Even back in 2002 with very inexpensive commodity hardware, FreeBSD was able to
achieve accuracy measured to ~260 nanoseconds:

http://phk.freebsd.dk/soekris/pps/

...as measured against a Rb-based atomic clock.  This is the sort of
analysis I'd like to see for chrony.

Regards,
-- 
-Chuck


