[ntp:questions] Re: Frequency and leapseconds!
cii99ec7 at i.lth.se
Wed Mar 9 08:24:43 UTC 2005
I have been stuck with this problem too long, but now all the pieces are starting to fall into place.
Sorry for being vague. There is a Swedish expression saying "what is vaguely said is what is vaguely thought"...
About the TSC: I view that as an (almost) steady rate that I can compare the clock frequency adjustments against. Let's talk about true time instead (1 second/second).
I had no idea about this phase adjustment. Interesting, that would explain everything. You are absolutely right that the clock was SYNCHRONIZED when I made this sample. Only then does this additional adjustment occur, so NTP does have something to do with it.
Is there any difference between phase error and offset error? This phase correction works the same way as a tiny reset, doesn't it?
Does NTP apply a phase correction once per second, if needed, or more often?
Is it possible to limit the phase correction or switch it off altogether? Is it wise to do so?
I want to measure relative time in the range 0.2 ms-100 ms between two gettimeofday() calls (one in the kernel and one in userland); that is why I am fussy about frequency and possible resets or phase adjustments. I don't want to switch NTP off, though.
One last tricky question:
The TSC is used to interpolate between two clock interrupts on x86 to get a higher resolution. Do you know if the frequency skewing affects the interpolation, or only the size of the step between the interrupts?
Thanks again Kevin,
From: kevin-usenet at horizon.com [mailto:kevin-usenet at horizon.com]
Sent: Wed 3/9/2005 5:01 AM
To: erik corell
Cc: kevin-usenet at horizon.com
Subject: RE: [ntp:questions] Re: Frequency and leapseconds!
I have now done measurements on the computer frequency in terms of the TSC.
The clock frequency follows (nominal frequency)+(skew from ntp_adjtime)
exactly. However, when the clock is feeling all right, the skewing is
weird. If I or ntpd set the skew to 400 ppm, for example, this results
in a correct frequency change at first. After only a little while, though,
the measured frequency skew starts to move down much faster than the
skew from ntp_adjtime. See for yourself.
Note the difference between column B and C below.
> A: Frequency measured in terms of computer nanoseconds/TSC cycle with
> repeated gettimeofday() and TSC readings with sleeps in between.
> B: Skew from nominal frequency given by freq in ntp_adjtime()
> C: Estimated real skew from nominal frequency.
> Nominal frequency: 1.6752428945
> 16 secs between rows
> NTP polling rate 16 secs.
> The "default skew" is about 27 PPM on this machine.
> A B C
> 1.6753108606 39.43089 41
> 1.6753107645 39.19969 40
> 1.6753106240 38.94756 40
> *[ntptime -f 400 sets the skew to +400 PPM]*
> 1.6758960721 400.0000 401
> 1.6757398998 397.3787 297
> 1.6756331461 394.6032 234
> 1.6754730843 386.4843 137
> 1.6754097199 381.0005 99
> 1.6753613307 375.1088 71
> 1.6753249189 368.9386 49
> 1.6752478486 333.8764 29
This is weird. You say that the clock is unsynchronized, but
*something* is reprogramming the freq value. Are you *certain*
that ntpd is not adjusting the clock?
As for the measured frequency error (as I said, electrical engineers
use the word "skew" specifically and exclusively to refer to the
*time*, not frequency, difference between the arrival times of
two nominally simultaneous signals), it would make perfect sense
*if* ntpd were adjusting the clock.
ntpd adjusts the frequency of the clock in two ways: first, to
correct long-term frequency error. Second, to correct short-term
phase error. For example, it may decide that the clock frequency needs
to be raised by 10 ppm *and* it wants to add 5 us to the phase of the
clock this second. Thus, it will apply an overall correction
of +15 ppm to the clock frequency.
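To make the unit conversion in that example explicit (a sketch of the arithmetic, not ntpd's actual code):

```python
# Combining ntpd's two kinds of correction, both expressed in ppm.
freq_corr_ppm = 10.0   # long-term frequency correction: +10 ppm
phase_step_s = 5e-6    # phase to amortize over the next second: +5 us

# Slewing 5 us over one second is a temporary rate change of
# 5e-6 s / 1 s = 5 ppm, so the total slew applied this second is:
total_ppm = freq_corr_ppm + (phase_step_s / 1.0) * 1e6   # ~15 ppm
```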
If you manually increase the clock frequency and ntpd is watching, it
will notice that the reference clocks are falling behind by
361 us/second. It will slowly adjust the clock frequency
back to nominal, but within 128 seconds, that phase error will have
accumulated to 46.2 ms, and it will be applying a fraction of that
each second as a phase correction to try to null that out.
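Those numbers check out; a quick sanity check of the arithmetic (using the figures from the example above):

```python
# Manually set 400 ppm, while ntpd wanted about +39 ppm on this machine,
# leaves roughly 361 ppm of excess frequency error.
excess_ppm = 400 - 39
rate_error_s_per_s = excess_ppm * 1e-6   # 361 us of phase error per second
window_s = 128                           # measurement interval in the example
accumulated = rate_error_s_per_s * window_s   # ~0.0462 s, i.e. 46.2 ms
```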
Remember, even a type-I (phase-only) phase-locked loop can eliminate
frequency error. It will apply a frequency correction proportional
to the measured phase error, and will stabilize when the phase error
is just enough to produce the frequency correction necessary to match
the reference frequency exactly.
NTP adds an integral term to this controller to reduce the offset to
zero in the long term, but that's to improve the *offset* error; the
*frequency* error is taken care of already.
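A toy discrete-time simulation shows both behaviours (my sketch with arbitrary gains, not ntpd's actual loop code): a proportional-only (type-I) loop locks the frequency but settles at a nonzero phase offset, while adding the integral term drives the offset to zero as well.

```python
def run_loop(freq_error_ppm, kp, ki, steps):
    """Simulate a 1 Hz clock-discipline loop (toy model).

    freq_error_ppm: intrinsic frequency error of the local clock.
    kp, ki: proportional and integral gains (illustrative values).
    Returns (final phase error in seconds, final correction in ppm).
    """
    phase = 0.0   # accumulated phase error, seconds
    integ = 0.0   # integral of the phase error
    corr = 0.0    # frequency correction currently applied, ppm
    for _ in range(steps):
        # each second the clock drifts by the uncorrected frequency error
        phase += (freq_error_ppm - corr) * 1e-6
        integ += phase
        corr = kp * phase + ki * integ
    return phase, corr

# Type-I (proportional only): frequency locks, but the loop stabilizes
# with the phase error pinned at freq_error/kp, just as described above.
p1, c1 = run_loop(361.0, kp=1e4, ki=0.0, steps=2000)

# With the integral term, the phase error also goes to zero while the
# correction still converges to the full 361 ppm.
p2, c2 = run_loop(361.0, kp=1e4, ki=100.0, steps=5000)
```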
> Does anyone know what's wrong and/or how to get around it?
The only thing possibly wrong is that NTP is operating when you
don't want it to be. If that's not a problem, then nothing is wrong.
> I desperately want to know the clock frequency in terms of TSC cycles
> without measuring it all the time. I don't need rocket-science precision,
> only something like +/- 50 PPM.
NTP, by itself, will not help you. NTP knows precisely *NOTHING*
about the x86 Time Stamp Counter. Nada, zip, zero, zilch.
ntpd uses the operating system facilities (typically gettimeofday())
to get the current time. If, and only if, that time is derived
from the x86 TSC, or a clock synchronous to it, then NTP's measurement
of the rate of gettimeofday() can be used to indirectly measure
the TSC frequency.
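A rough sketch of that indirect measurement (Python here for illustration; a real implementation would read the TSC with RDTSC in C, and time.perf_counter_ns() merely stands in for the free-running counter):

```python
import time

def counter_rate_hz(duration_s=0.5):
    """Estimate a free-running counter's rate against the system clock.

    time.time() plays the role of gettimeofday(); the same idea applies
    to comparing RDTSC readings against the NTP-disciplined kernel clock.
    """
    t0 = time.time()              # NTP-disciplined system clock
    c0 = time.perf_counter_ns()   # free-running counter (TSC analogue)
    time.sleep(duration_s)
    c1 = time.perf_counter_ns()
    t1 = time.time()
    return (c1 - c0) / (t1 - t0)  # counter ticks per system-clock second

# perf_counter_ns() counts nanoseconds, so the estimate lands near 1e9;
# with a real TSC you would get the CPU's cycle frequency instead.
rate = counter_rate_hz(0.2)
```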
If you want to try to do that, the very first step is to figure
out how the operating system computes the time when applications
ask for it.
Note, however, that depending on what you mean by "without measuring it
all the time", this may not be what you want. NTP does its work by
measuring the clock all the time. Do you mean that you want to pass
the responsibility for measuring the frequency on to someone else
(doable), or do you mean that you don't want anyone doing it?
If the latter, then obviously NTP can't help much.