[ntp:questions] Flash 400 on all peers; can't get ntpd to be happy

Chuck Swiger cswiger at mac.com
Wed Mar 9 20:24:39 UTC 2011


On Mar 9, 2011, at 3:36 AM, Miroslav Lichvar wrote:
> On Tue, Mar 08, 2011 at 03:26:34PM -0800, Chuck Swiger wrote:
>>>> You are better off running ntpdate (or sntp) periodically via cron in
>>>> the DomUs.
>>> 
>>> Perhaps in certain cases, but not across the board.
>> 
>> I'd be happy to review counterexamples to my generalization....
> 
> I'd say it depends on the VM.

OK.

> For instance, Fedora 14 running in kvm on Fedora 14. There are four
> clocksources available in the guest system: kvm-clock tsc hpet
> acpi_pm. With each of them the frequency seems to be stable, even when
> the host or guest CPU is heavily loaded. The kvm-clock and hpet
> clocks seem to be running at the same rate as the host's system clock, tsc
> at the real CPU's rate, and acpi_pm is off by a few tens of ppm.

I'm less familiar with linux-kvm than with some of the alternatives, but what
you've described here seems pretty reasonable.  For instance, there's only
one real CPU-- or maybe several real CPUs or CPU cores, depending on what's
in the box-- so RDTSC is going to return the same results in the host OS and
the guest.  (Again, depending on the availability of a P-state-invariant TSC,
multicore TSC synchronization, etc.)

Ditto for the real HPET timers, the ACPI PM timer, etc., if they are exposed to the guest.
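As an aside, you can see which of those clocksources a Linux guest kernel
found and which one it is actually using via sysfs; a quick sketch (the
sysfs path below is the standard one on reasonably modern kernels, with a
fallback in case it isn't there):

```shell
# List a Linux guest's available clocksources and the active one.
d=/sys/devices/system/clocksource/clocksource0
if [ -r "$d/current_clocksource" ]; then
    avail=$(cat "$d/available_clocksource")   # e.g. "kvm-clock tsc hpet acpi_pm"
    src=$(cat "$d/current_clocksource")       # e.g. "kvm-clock"
else
    avail=unknown; src=unknown                # very old kernel, or sysfs not mounted
fi
echo "available: $avail"
echo "current:   $src"
```

Writing a clocksource name back into current_clocksource (as root) switches
the guest to it, which is handy when comparing the behaviors Miroslav describes.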

However, note the caveats mentioned at http://www.linux-kvm.org/page/KVMClock

"Some basic mechanism to make sure it works:

* Try guests without kvm-clock too. Make sure they at least boot
* make sure successive gettimeofday calls never go backwards (testing this can take days)
* make sure that calls to different time sources (like gettimeofday and monotonic) do not deviate too much, nor go backwards."
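A quick (and far weaker) spot-check in the spirit of that list-- sample the
wall clock repeatedly and flag any backward step.  This assumes GNU date's
%N nanosecond format; as the KVMClock page says, a real test would have to
run for days, not seconds:

```shell
# Spot-check that successive wall-clock reads never go backwards.
prev=$(date +%s%N)
status=ok
i=0
while [ $i -lt 300 ]; do
    now=$(date +%s%N)
    [ "$now" -lt "$prev" ] && status="backwards: $prev -> $now"
    prev=$now
    i=$((i + 1))
done
echo "gettimeofday spot-check: $status"
```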

	-------

The first line quoted above is the last sentence in a paragraph; there are
prior assumptions which condition this statement.  Let me try to rephrase
more clearly:

1) Running ntpd in the Dom0/host ESX/host is very useful.  Keeping good time
there means that good time will be available to all of the VMs/guests via
independent_wallclock = 0, tools.syncTime = true, etc.
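To illustrate the Xen half of that: on a Xen PV guest kernel, clearing
independent_wallclock makes the guest track Dom0's clock.  A hedged sketch--
the knob only exists on Xen PV kernels, so it checks first:

```shell
# Make a Xen PV guest follow Dom0's clock (point 1 above).
f=/proc/sys/xen/independent_wallclock
if [ -w "$f" ]; then
    echo 0 > "$f"
    wallclock=dependent     # guest now tracks Dom0's clock
else
    wallclock=absent        # not a Xen PV kernel; nothing to do
fi
echo "independent_wallclock: $wallclock"
```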

2) Running ntpd in the Dom0/host ESX/host also means that the system/kernel
clock will (or ought to) be sync'ed, which means that the periodic updates
to the RTC/TOD/TOY clock are also good.

3) Running ntpd in a DomU/guest is possible, however a DomU/guest OS cannot
update the time seen in other DomUs/guests, and it cannot update the
RTC/TOD/TOY clock.

4) Furthermore, ntpd in a DomU/guest may experience large jumps in time
depending on the loading of the physical host, whether the VM is swapped
out or otherwise suspended for long periods of time, etc.

[ This is why the suggested ntp.conf's for DomU/guests use "tinker panic 0",
and recommend not using the undisciplined local clock.  This whole
thread started because Ralph, the OP, couldn't get ntpd to keep time
within a guest without that option. ]
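A minimal sketch of such a guest ntp.conf, with pool servers standing in for
whatever real peers one would actually use:

```
# DomU/guest ntp.conf sketch (server names are placeholders):
tinker panic 0                  # never exit on a large time step
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
# deliberately absent: "server 127.127.1.0" and its "fudge" line
# (the undisciplined local clock)
```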

5) I have yet to see an example where running ntpd in a DomU/guest kept
better time than ntpd running in Dom0/host OS.

6) I have yet to see an example where ntpd running in a DomU/guest kept
better time than using independent_wallclock = 0, tools.syncTime = true, etc
if the Dom0/host OS is sync'ed.

Because of the above, I've drawn the conclusion that "running ntpd's in the
other DomUs/guest VMs is almost entirely pointless".
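For completeness, the alternative I suggested at the top of the thread--
stepping the guest clock periodically from cron instead of running ntpd--
might look something like this (hypothetical entry; the server name and
interval are placeholders):

```
# /etc/crontab fragment for a DomU/guest:
*/15 * * * * root /usr/sbin/ntpdate -b -u 0.pool.ntp.org >/dev/null 2>&1
```

Here -b steps the clock rather than slewing it (large VM-induced offsets
make slewing impractical), and -u uses an unprivileged source port.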

Regards,
-- 
-Chuck
