[Pool] time reset

Chuck Swiger cswiger at mac.com
Fri Mar 25 18:31:19 UTC 2011


On Mar 25, 2011, at 10:41 AM, Ask Bjørn Hansen wrote:
>> Well, take a look at the peerstats -- especially offset and jitter -- and ntpd's rv assessment of the offset, jitter, and stability of the kernel clock it is adjusting.  The worst-case peer jitter is a factor of 4 worse in the VM (2.468 vs. 0.585); the best case is a factor of 7 difference (0.276 vs. 0.041); and the jitter for the stratum-1 source using PPS is also a factor of 7 worse for the VM (0.910 vs. 0.126).  You can repeat the same analysis with offset values and rv stats.
> 
> Yes, definitely -- the non virtualized hardware is better at keeping time (and for practically anyone who's nitpicky enough about this to be subscribed here, maybe even significantly).

In your case, the difference between a VM in good shape and real hardware was millisecond-level accuracy vs. ~100-200 microsecond-level accuracy; and people who really care about time can get results at the ~100 nanosecond level.
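As a sketch, the peerstats comparison above could be scripted along these lines.  This assumes the common eight-field peerstats layout (MJD, seconds past midnight, peer address, status word, offset, delay, dispersion, jitter, with offset and jitter in seconds); exact field positions vary across ntpd versions, so check your loopstats/peerstats documentation first:

```python
import statistics

def peer_stats(lines):
    """Summarize mean |offset| and mean jitter per peer, in milliseconds.

    Assumed peerstats layout (verify against your ntpd version):
      MJD  seconds  peer-address  status  offset  delay  dispersion  jitter
    """
    offsets, jitters = {}, {}
    for line in lines:
        fields = line.split()
        if len(fields) < 8:
            continue  # skip blank or truncated lines
        peer = fields[2]
        offsets.setdefault(peer, []).append(float(fields[4]))
        jitters.setdefault(peer, []).append(float(fields[7]))
    return {
        peer: {
            "offset_ms": 1000 * statistics.mean(map(abs, offs)),
            "jitter_ms": 1000 * statistics.mean(jitters[peer]),
        }
        for peer, offs in offsets.items()
    }
```

Run it once against the VM's peerstats and once against the bare-metal host's, and the factor-of-4-to-7 gap described above falls straight out of the per-peer jitter numbers.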

> However, I think the original assertions were:
> 
> 1) Running ntpd is the best/easiest way to keep time in most/some/popular virtualized environments.

Running ntpd is not easier than setting Xen's independent_wallclock = 0, or VMware's vmx option tools.syncTime = true.  As for best, well, I've yet to see data where that happens:

"6) I have yet to see an example where ntpd running in a DomU/guest kept
better time than using independent_wallclock = 0, tools.syncTime = true, etc
if the Dom0/host OS is sync'ed."
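For reference, the two host-sync settings mentioned above look roughly like this (paths and exact names vary by Xen kernel and VMware product version, so treat this as a sketch rather than gospel):

```
# Xen PV guest: let Dom0 drive the guest's wall clock
# (exposed as a sysctl on most PV-aware kernels)
xen.independent_wallclock = 0

# VMware: in the guest's .vmx file (or via the VMware Tools UI)
tools.syncTime = "TRUE"
```

Either one is a single line of configuration, versus installing, configuring, and monitoring ntpd in every guest.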

> 2) The ntpd will keep time "pretty good" -- and certainly good enough to participate in the pool.  (As the other thread that's going right now made clear; part of our job here is to just help deal with the junk and gazillion SNTP clients and really any sort of time that's stop-watch close to real time is plenty good for that).

Running ntpd in a VM can sometimes provide good enough time to participate in the pool.

On the other hand, the VM might be fine for a while, and then suddenly see significantly worse or more variable latency due to load from other VMs on the same host, leading to results like:

  http://www.pool.ntp.org/scores/128.177.28.170

...or a long thread on comp.protocols.time.ntp (starting from Message-id:
<568225d2-c7f0-4416-940b-9374da4d8003 at glegroupsg2000goo.googlegroups.com>) where a VM was seeing:

ntpq -p
    remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
rdns01.nexcess. 66.219.116.140   2 u   44   64  377   28.107  982913. 88317.2
mirror          204.9.54.119     2 u    6   64  377   28.574  1106774 124694.
clock.trit.net  192.43.244.18    2 u   46   64  337   10.098  1120629 154394.
mailserv1.phoen .LCL.            1 u   34   64  377   17.445  1127424 156997.
louie.udel.edu  128.175.60.175   2 u   57   64  377   40.732  1114835 156655.
ns.unc.edu      204.34.198.40    2 u   39   64  377   50.814  1020974 89826.4
ntp-3.cns.vt.ed 198.82.247.164   2 u    6   64  377   52.976  1143785 155905.
ntp-2.cns.vt.ed 198.82.247.164   2 u   18   64  377   38.473  1067816 102721.
clock.isc.org   .GPS.            1 u   34   64  377   18.472  1128019 158703.

With offsets above 1,100,000 ms -- more than 18 minutes off -- it's no surprise the clients stopped trusting that server.  If you have to enable "tinker panic 0" to keep ntpd from bailing, it shouldn't be surprising that it can sometimes end up in a very bad state....
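For anyone unfamiliar with the knob: "tinker panic 0" goes in ntp.conf and disables ntpd's panic threshold (normally about 1000 seconds), so ntpd keeps stepping or slewing arbitrarily large offsets instead of exiting.  A minimal fragment, often recommended in virtualization guides:

```
# /etc/ntp.conf
# Disable the panic threshold so ntpd survives large VM clock jumps.
# Side effect: the guest can drift minutes off without ntpd ever
# refusing to paper over it -- exactly the failure mode shown above.
tinker panic 0
```

In other words, the setting that keeps ntpd alive in a VM is the same one that lets it quietly serve very wrong time.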

Regards,
-- 
-Chuck


