[ntp:questions] Re: Frequency and leapseconds!
cii99ec7 at i.lth.se
Fri Mar 11 01:53:33 UTC 2005
Hmm, I don't think the phase adjustments can be done once every second. At least that wouldn't
explain the difference between the measured frequency and (nominal frequency)*(1+freq*1e-6).
If the phase adjustment were applied once every second, I would expect to sometimes
see it and sometimes not when I measure the frequency over fractions of a second.
This is not the case: even if I measure frequency over just 300 ms, the frequency is
consistent within the polling interval and doesn't change much until the next NTP packet.
I have really tried to look into this issue, and I think that the phase is adjusted
on every interrupt. On every interrupt, the clock advances by 'tick' microseconds. If 'tick' were
adjusted, that would explain what I see. However, the 'tick' I get from ntp_adjtime() is
always 10000 microseconds unless I change it myself. There must be something like a
variable tick_adjust somewhere, but I can't find it anywhere.
Is there any way to find out what the effective frequency (including both phase and frequency
adjustments) is in terms of real time?
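For the frequency half of that question: the Linux adjtimex(2) documentation describes the freq field as "scaled PPM", i.e. parts per million with a 16-bit fractional part. A minimal sketch of converting it to a real-time rate, under that scaling assumption (this doesn't account for any phase slew in progress, which is the part the kernel doesn't expose directly):

```python
# Assumption (from adjtimex(2)): the `freq` field returned by
# ntp_adjtime()/adjtimex() is in units of 2**-16 PPM.

SCALE = 1 << 16  # 65536

def effective_rate(nominal_hz, freq_scaled_ppm):
    """Clock rate implied by the kernel's frequency word alone
    (ignores any phase slew currently being applied)."""
    ppm = freq_scaled_ppm / SCALE
    return nominal_hz * (1.0 + ppm * 1e-6)

# Example: a +400 PPM skew, as set below with `ntptime -f 400`,
# expressed in scaled units:
freq = 400 * SCALE
print(effective_rate(1.0, freq))  # 1.0004
```

This also explains why the displayed 'tick' never moves: the frequency discipline lives in freq, not in tick, so tick stays at its nominal 10000 microseconds.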
Also, I have tried to include the line 'disable pll' in my ntp.conf and restarted ntpd, but
ntpd still seems to control my clock. Is it possible to disable local clock adjustments but keep
the stream of NTP packets?
A: Frequency in TSC cycles per gettimeofday() nanosecond (i.e. GHz), measured with
repeated gettimeofday() and TSC readings with sleeps in between.
B: Skew from nominal frequency, in PPM, as given by freq from ntp_adjtime().
C: Estimated real skew from nominal frequency, in PPM.
Nominal frequency: 1.6752428945 GHz
16 secs between rows
NTP polling rate 16 secs.
The "default skew" is about 27 PPM on this machine.
     A              B         C
1.6753108606    39.43089     41
1.6753107645    39.19969     40
1.6753106240    38.94756     40
*[ntptime -f 400 sets the skew to +400 PPM]*
1.6758960721   400.0000     401
1.6757398998   397.3787     297
1.6756331461   394.6032     234
1.6754730843   386.4843     137
1.6754097199   381.0005      99
1.6753613307   375.1088      71
1.6753249189   368.9386      49
1.6752478486   333.8764      29
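A minimal sketch of how a measurement like column A can be taken, with time.perf_counter_ns() standing in for a raw TSC read (reading the TSC itself needs inline assembly or a helper library; the stand-in and the function name are my assumptions, not the original measurement code):

```python
import time

def measure_ratio(duration_s=0.3):
    """System-clock nanoseconds per reference-counter nanosecond,
    measured over `duration_s` (cf. the 300 ms runs above).
    time.perf_counter_ns() stands in for the TSC here."""
    t0 = time.time_ns()          # NTP-disciplined system clock
    c0 = time.perf_counter_ns()  # free-running monotonic reference
    time.sleep(duration_s)
    t1 = time.time_ns()
    c1 = time.perf_counter_ns()
    return (t1 - t0) / (c1 - c0)

# With a real TSC read the ratio comes out near the CPU's GHz figure;
# with this stand-in it hovers near 1.0, offset by the clock skew.
print(measure_ratio())
```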
From: kevin-usenet at horizon.com [mailto:kevin-usenet at horizon.com]
Sent: Wed 3/9/2005 1:27 PM
To: erik corell
Cc: kevin-usenet at horizon.com
Subject: RE: [ntp:questions] Re: Frequency and leapseconds!
> Is there any difference between phase error and offset error? This phase
> correction works the same way as a tiny reset, doesn't it?
"Phase" and "offset" are synonymous. The word "phase" is more
frequently used with periodic signals (where there is an ambiguity
modulo the period), while "offset" is more often used when there is
not, but they mean the same thing here. It's just that we also
use "frequency", and "phase" is generally the word used with that.
(Like acceleration, velocity, and position.)
At the risk of confusing you further, any adjustment to a quantized clock
can be considered a small phase step, but no:
NTP implements the phase correction (unless it is very large) by
adding an appropriate value to the clock frequency for a while.
> Does NTP apply a phase correction once per second, if needed, or more
Actually, I'm not 100% sure. I think it's 1 second, but it might
be a small number of seconds. Anyway, assuming it's 1 second, the
algorithm is as follows:
Every poll interval:
    freq_correction  = (complicated computation)
    phase_correction = (complicated computation)
Every second:
    phase_correction_this_second = phase_correction / 64
    phase_correction = phase_correction - phase_correction_this_second
    freq_correction_this_second = freq_correction + phase_correction_this_second
(I seem to recall that constant of "64" from an NTP paper, but I didn't
look it up to confirm.)
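Taking that per-second recurrence at face value, the outstanding phase correction decays geometrically by a factor of 63/64 each second (a time constant of roughly 64 s). A quick simulation of the recurrence (my sketch, not NTP source code):

```python
def drain_phase(phase_correction, seconds):
    """Apply 1/64 of the outstanding phase correction each second,
    per the recurrence above; return (applied, remaining)."""
    applied = 0.0
    for _ in range(seconds):
        this_second = phase_correction / 64
        phase_correction -= this_second
        applied += this_second
    return applied, phase_correction

applied, remaining = drain_phase(1000.0, 16)
print(remaining / 1000.0)  # (63/64)**16, about 0.78
```

That roughly 0.78-per-16-seconds decay is at least the right shape for the table earlier in the thread, where the extra skew injected by `ntptime -f 400` dies away over a handful of 16-second poll intervals.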
> Is it possible to limit the phase correction or switch it off
> altogether? Is it wise to do so?
Not without hacking the code. And it's not wise. It would
destabilize the NTP PLL. If you wanted maximum frequency
stability at the expense of phase stability, you could move
the phase_correction_this_second accumulation out of the kernel
and into a software emulation layer inside NTP.
That is, lie to it when it calls gettimeofday() and tell it that its
phase corrections are being applied, but don't actually apply them.
That would maximize frequency stability, although the phase could
drift arbitrarily far off.
> I want to measure relative time in the range 0.2 ms-100 ms between two
> gettimeofday() calls (one in the kernel and one in userland); that is
> why I am fussy about frequency and possible resets or phase adjustments.
> I don't want to switch NTP off, though.
Note that a lot of modern processors change frequency on the fly, making
the TSC unusable for this purpose in the general case.
Actually, why not switch NTP off (or at least disable updates) for the
period of measurement?
The exact kernel frequency is actually irrelevant unless you're
comparing the results to an external time standard. If you're
only comparing two gettimeofday() implementations, as long as they
both compute based on the same frequency, it doesn't matter if
that frequency is somewhat fictitious.
(BTW, on many processors, you'll find a simple integer relationship
between the processor clock and, say, the 8254 PIT clock.
It's one crystal and PLLs.)
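As a concrete instance of that integer relationship (the 14.31818 MHz figure is the standard PC reference crystal; it is my addition, not from the post above):

```python
# Classic-PC example: a 14.31818 MHz crystal is the common reference.
# The 8254 PIT is clocked at crystal/12; CPU core clocks are derived
# from related references by PLL multiplication.
crystal_hz = 14_318_180
pit_hz = crystal_hz / 12
print(pit_hz)  # ~1193181.67 Hz, the familiar 1.193182 MHz PIT clock
```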
> One last tricky question:
> TSC is used to interpolate between two clock interrupts on x86 to get a
> higher resolution. Do you know if the frequency skewing affects the
> interpolation or only the size of the step between the interrupts?
I'd have to read the code. I think Linux implements the interpolation
without considering NTP's desired frequency correction, but FreeBSD
may behave differently.