[ntp:questions] oscillations in ntp clock synchronization
ibuprofin at painkiller.example.tld
Sat Jan 19 01:03:05 UTC 2008
On Fri, 18 Jan 2008, in the Usenet newsgroup comp.protocols.time.ntp, in article
<byVjj.7241$yQ1.3876 at edtnps89>, Unruh wrote:
>ibuprofin at painkiller.example.tld (Moe Trin) writes:
>>But are you discussing the variation of an individual unit changing
>>up to 100 ppm, or that two (or more) units differ by up to 100 ppm.
>Sorry, no, that 0-100 is the range of rates of the various clocks,
>not the changes due to temp.
OK, that's to be expected. Most of the oscillators I see for PC use are
spec'ed at a 25C/Rated voltage accuracy of +/- 50 ppm. Then on top of
that, you add the effects of temperature, supply voltage, load,
shock/vibration and even physical orientation. That can total another
50 to 100 ppm.
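To put those ppm numbers in perspective, here's the back-of-the-envelope arithmetic as a sketch (1 ppm is a fractional frequency error of 1e-6):

```python
# Rough arithmetic: how much a constant frequency error (in ppm)
# shifts a clock over one day. 1 ppm = 1e-6 fractional error.
SECONDS_PER_DAY = 86_400

def drift_seconds_per_day(ppm: float) -> float:
    """Clock error accumulated in one day at a constant offset of `ppm`."""
    return ppm * 1e-6 * SECONDS_PER_DAY

# A +/-50 ppm crystal alone is about 4.3 s/day of drift:
print(drift_seconds_per_day(50))
# Add another 100 ppm of environmental effects and you're near 13 s/day:
print(drift_seconds_per_day(150))
```

That's why even a "good" commodity crystal needs continuous discipline rather than a one-time rate correction.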
>the period seems to remain roughly the same 24/7, although sometimes
>the oscillation will just cease.
Fixed period smells VERY strongly of feedback loop time constants.
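A toy illustration of that point (this is NOT ntpd's actual discipline loop, just the simplest loop that shows the effect): if frequency is steered purely in proportion to the accumulated phase error, the loop is a harmonic oscillator, and the loop gain alone fixes the ringing period, independent of any day/night temperature cycle.

```python
import math

# Toy feedback loop: steer frequency against phase error, accumulate
# frequency into phase. This is an undamped integrator pair, i.e. a
# harmonic oscillator with period 2*pi/sqrt(GAIN) steps.
DT = 1.0      # poll interval, arbitrary units
GAIN = 1e-4   # loop gain, arbitrary illustrative value

phase, freq = 1.0, 0.0   # start with a 1-unit phase error
trace = []
for _ in range(2000):
    freq -= GAIN * phase * DT   # steer frequency against phase error
    phase += freq * DT          # phase accumulates the frequency
    trace.append(phase)

# The error never damps out; it rings at a fixed period set entirely
# by the loop constant, much like the behaviour described above:
print(round(2 * math.pi / math.sqrt(GAIN)))  # -> 628 (steps per cycle)
```

A real discipline loop adds damping, but if the time constant is badly matched to the noise, the residual ringing still shows up at a period set by the loop, not by the environment.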
>>I'd be wary of the data anyway. The "Thermal Reference Byte" is
>>subject to _calibration_ errors, (zero and scale), while the
>>"lm_sensors" data is subject to the sensor error as well as the
>>errors in the circuit that is converting the voltage to a digital
>>representation (I've discussed this in the past - recall that this
>>is commodity gear, accurate to 5-10 percent AT BEST, never calibrated,
>>and tested by half-starved chimpanzees to a "yes it's working/no it's
>Sure, but it is the changes that are of interest. I.e. I do not care if
>it is 45.1 to 45.7 or really 39.6 to 40.2. It is the fluctuation and
>the correlation of the fluctuation with the rate that I am interested
>in, to see if it really is the temp which is causing the oscillations.
The Intel "Thermal Reference Byte" would be your best bet, but that
is showing the internal temperature of the CPU - with the data
conversion to digital done on-die. Trying to assume that the die temps
are the same as ambient kills this idea. The "lm_sensors" data probably
comes from an external (to the CPU) thermistor, and as we've discussed
in the past, you get into the problem of "where is it?" That's a
board manufacturer decision. But you have several sources of error:
1. The scale and zero are a function of the material that makes up
the thermistor. Commodity product - accuracy of 5-20%. Also, the
resistance of the thermistor assumes no power is being dissipated
in the thermistor. But as the ambient temperature changes, the
resistance changes, and that _can_ change the amount of power
dissipated which changes the resistance... lather, rinse, repeat.
2. The thermistor circuit is some form of a bridge - three fixed
resistors and the thermistor (think the uprights of the letter
"H" with a voltmeter replacing the horizontal line). The values
of those three resistors affect scale and zero as well. Those
resistors might be as good as 1%, but are far more likely to be
10% - and the circuit isn't calibrated.
3. It's rarely mentioned, but the act of soldering these components
can cause a permanent error. The amount of the error is dependent
on the component material, lead length, and length of time the lead
is at the elevated (solder melting) temperatures, and so on.
4. The stability of the "thing" that is making the voltage measurement
can be crucial. Again, the main source of error is temperature and
the supply voltage, but things like power supply bypassing can be
a large source of problems. It's also affected by circuit
impedances - the values of those resistors in the bridge.
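Points 1 and 2 can be sketched numerically. All part values below are hypothetical (a generic 10k NTC with a beta-model curve, three nominally equal bridge resistors) - the point is the structure, not the numbers:

```python
import math

# Sketch of points 1 and 2: an NTC thermistor in one leg of a resistor
# bridge, including the self-heating feedback (power dissipated in the
# thermistor raises its temperature, which changes its resistance,
# which changes the power... lather, rinse, repeat).
V_EXC = 3.3          # bridge excitation voltage (assumed)
R_FIXED = 10_000.0   # the three fixed resistors, nominal (assumed)
R25 = 10_000.0       # thermistor resistance at 25 C (assumed part)
BETA = 3950.0        # NTC beta constant, kelvin (assumed part)
DISSIPATION = 1e-3   # self-heating rise, watts per degree C (assumed)

def r_ntc(temp_c: float) -> float:
    """Beta-model NTC resistance at temp_c."""
    t_k = temp_c + 273.15
    return R25 * math.exp(BETA * (1.0 / t_k - 1.0 / 298.15))

def settled_thermistor_temp(ambient_c: float, iters: int = 50) -> float:
    """Iterate the self-heating loop (point 1) to its fixed point."""
    temp = ambient_c
    for _ in range(iters):
        r = r_ntc(temp)
        v_across = V_EXC * r / (R_FIXED + r)   # divider voltage
        power = v_across ** 2 / r              # P = V^2 / R
        temp = ambient_c + power / DISSIPATION
    return temp

def bridge_out(r_therm, r1=R_FIXED, r2=R_FIXED, r3=R_FIXED):
    """Differential bridge voltage (point 2): two dividers, meter across."""
    left = V_EXC * r_therm / (r1 + r_therm)
    right = V_EXC * r3 / (r2 + r3)
    return left - right

# The sensor settles above ambient, so it never reads true ambient:
print(settled_thermistor_temp(25.0))
# Perfect resistors balance the bridge at R25; swap in 10% resistor
# errors and the same thermistor reads off-balance - an uncalibrated
# zero error baked into the board:
print(bridge_out(r_ntc(25.0)))
print(bridge_out(r_ntc(25.0), r1=11_000.0, r3=9_000.0))
```

With these made-up values the self-heating rise is a fraction of a degree, but the same loop with a stiffer sense current or worse thermal coupling gets much larger - and the bridge zero error is whatever the resistor lottery handed that board.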
The first three errors are going to cause stable errors - 40C might read
36 or 44 - no big deal. They also will cause gradient errors, so a 5C change
may show as 4 or 6C - again, no big deal to your problem.
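A quick sketch of why those first three errors are tolerable for this purpose: if the reading is roughly "zero + scale * true", the shape and timing of the fluctuation survive, so the correlation with clock rate does too (all numbers below are made up for illustration):

```python
# A stable zero/scale error distorts the values but not the timing of
# a temperature swing, so correlation with clock rate is preserved.
true_temps = [40.0, 41.5, 43.0, 41.5, 40.0, 39.0]  # hypothetical swing

def miscalibrated(temps, zero=-4.0, scale=0.8):
    """Apply a fixed zero offset and scale error (illustrative numbers)."""
    return [zero + scale * t for t in temps]

readings = miscalibrated(true_temps)
# Absolute values are wrong (high 20s C instead of ~40 C), but the peak
# lands at the same sample as the true peak:
print(readings.index(max(readings)) == true_temps.index(max(true_temps)))  # -> True
```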
The last error is the killer, because that device could be waltzing
around, independently of what's happening in the actual sensing component.
This is something like trying to measure a distance using a ruler made
out of chewing gum.