# [ntp:questions] Any chance of getting bugs 2164 and 1577 moving?

unruh unruh at invalid.ca
Thu Mar 22 20:36:42 UTC 2012

```On 2012-03-22, Richard B. Gilbert <rgilbert88 at comcast.net> wrote:
> On 3/21/2012 7:39 PM, Alby VA wrote:
>> On Mar 21, 7:36 pm, unruh<un... at invalid.ca>  wrote:
>>> On 2012-03-21, Alby VA<alb... at empire.org>  wrote:
>>>
>>>> On Mar 21, 3:55 pm, unruh<un... at invalid.ca>  wrote:
>>>>> On 2012-03-21, David J Taylor<david-tay... at blueyonder.co.uk.invalid>  wrote:
>>>
>>>>>> "unruh"<un... at invalid.ca>  wrote in message
>>>>>> []
>>>>>>> But -19 is about 2 microseconds if I understand it correctly. That means
>>>>>>> that the clocks are incapable of delivering more than about 2
>>>>>>> microseconds of accuracy. My point is that the last decimal digit of
>>>>>>> accuracy in the offset is thus pure noise-- dominated by clock-reading
>>>>>>> noise. Why is it important for you then?
>>>
>>>>>> When I can see the decimal places, then I will know whether the precision
>>>>>> estimate is reasonable.  Just getting values such as -1, 0, 1 microseconds
>>>>>> is insufficient to make that call.
>>>
>>>>> And how will the extra decimals help? The -19 was determined by making
>>>>> successive calls to the clock and seeing how much it changed between
>>>>> successive readings. That gives a good estimate of how long it takes to
>>>>> make a call to the clock. Any precision in the answer beyond that is not
>>>>> accuracy. I could give you the time to 60000 decimal places, each one of
>>>>> them different, but with the last 59995 just being garbage (random numbers).
>>>>> Would that tell you anything?
>>>>> If for some reason you do not believe ntpd's estimation of your clock
>>>>> accuracy, develop a better algorithm for determining it. It is a bug if
>>>>> ntpd is reporting an accuracy much worse than it actually is.
>>>
>>>>> I.e., you have no data to make that call even if you get more digits.
>>>
>>>>>> David
>>>
>>>> unruh:
>>>
>>>>    My take is the precision output might say your device is -19, so you
>>>> know its accuracy is around 2 microseconds. But the offset to several
>>>> decimal places allows you to see its ever-changing accuracy within that
>>>> 2-microsecond band
>>>
>>> But that is not accuracy. That, presumably (if that -19 is accurate
>>> and not a bug), is simply noise. If your measurement technique is only
>>> good to 2us, then any additional precision is just noise. It may be fun
>>> to see the noise, but not terribly useful. If it is not noise, then that
>>> -19 is wrong, and one has a bug in the determination of the accuracy of
>>> the clock.
>>>
>>>> to a greater detail than just -1, 0, or 1 microseconds. I guess it's
>>>> just a matter of getting more granular details for cool MRTG
>>>> charting. :)
>>>
>>> It could well be that charting looks better with more than just bands
>>> on the page. But is it worth it if that detail is just junk? It
>>> certainly is not great art.
>>
>>
>>   Is there any good way to determine what is noise and what isn't?
>>
>>
>
> Try comparing against a known good source!  Time from sources on the
> internet is almost always noisy!  If you must use internet sources
> try to query them between 0200 and 0400 local time; the net will be as
> quiet as it's likely to get!

We are discussing GPS sources. The way GPS timing works is that the GPS
receiver emits a pulse whose leading edge is accurate to, say, 10 ns (if
you do not smear it out with TTL-to-RS232 converters). That pulse hits
some port on the computer-- serial or parallel. The circuitry in that
port then switches on an interrupt line (with possibly some smearing out
of the pulse due to that detection circuitry and interrupt triggering).
After some delay inside the computer, that interrupt trigger gets the
operating system to run some code which stops what one of the CPUs is
doing, pages out the code it is running, and pages in the interrupt
service routine. That routine then determines which interrupt was
actually triggered, looks in a table for all of the interrupt service
routines which have been registered for the interrupt, pages in the
appropriate routines, and sends a message-- i.e., branches to them-- to
each of those routines. Those routines decide if the interrupt is for
them. If not, they return immediately (one hopes). If it is for that
routine, and it is a routine to timestamp the interrupt, that routine
then makes a kernel system call to get the time. That kernel routine
reads the appropriate counter and the time at the last timer tick,
interpolates the current time, and returns that to the program, which
stores that time in some memory location. I.e., there is a lot that
happens between the GPS sending out its pulse and the time actually
getting stored, all of which can be delayed.

In tests I ran, I switched on one of the output pins of the parallel
port just after getting a timestamp; that pin was connected to the ACK
(interrupt) pin of the parallel port, and a little parallel port
interrupt service routine timestamped the receipt of that interrupt.
This gave me 1-3 us between the timestamping of the output pin and the
timestamping of the interrupt.

>
> Time/Frequency Standards are not cheap!  If you can afford one, the
> National Bureau of Standards, in the U.S., will be happy to calibrate it
> for you.  Governments of countries other than the U.S. should have
> similar facilities.

The timestamp of the GPS is more than adequate. The problem is not
having some event-- whether the output of a high-accuracy clock, or the
timestamp from a GPS-- whose time you know exactly. The problem is
using that event to discipline the clock within your computer. There are
two independent problems. One is that the clock in your computer is not
that great, although its rate and time can be disciplined so that it is
accurate to a few us; the other is getting that external time into the
computer to use it to discipline the clock. The latter is probably the
harder problem.


```
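The kernel path unruh describes ends with a system call that reads a counter and interpolates from the time recorded at the last timer tick. A minimal sketch of that interpolation step, with made-up names and frequencies (this is an illustration, not actual kernel code):

```python
# Sketch of tick + counter time interpolation, as in the kernel path
# described in the post. All names and rates here are hypothetical.

TICK_HZ = 100            # timer interrupt rate: one tick every 10 ms
COUNTER_HZ = 1_000_000   # free-running counter frequency (1 MHz here)

def interpolate_time(time_at_last_tick, counter_at_last_tick, counter_now):
    """Interpolate the current time: the time stored at the last timer
    tick plus the counter ticks elapsed since, scaled to seconds."""
    elapsed = (counter_now - counter_at_last_tick) / COUNTER_HZ
    return time_at_last_tick + elapsed

# Example: last tick recorded at t = 100.00 s, counter advanced 2500
# ticks (2.5 ms at 1 MHz) since then.
print(interpolate_time(100.00, 0, 2500))  # 100.0025
```

Real kernels use a hardware counter such as the TSC or HPET at much higher rates; the principle is the same.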
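The thread's "-19" precision figure comes from timing successive clock reads, as unruh explains. A rough sketch of that measurement in user space (not ntpd's actual code; the sample count and clock source are arbitrary choices):

```python
# Estimate clock-reading precision the way the thread describes: call
# the clock repeatedly, take the smallest nonzero difference between
# successive readings, and express it as a log2-seconds exponent (the
# form ntpd reports). Illustration only, not ntpd's implementation.
import math
import time

def estimate_precision(samples=1000):
    deltas = []
    prev = time.perf_counter_ns()
    for _ in range(samples):
        now = time.perf_counter_ns()
        if now != prev:
            deltas.append(now - prev)  # cost of one clock read, in ns
        prev = now
    smallest = min(deltas)             # best-case read-to-read delta
    return round(math.log2(smallest * 1e-9))

p = estimate_precision()
print(p)  # e.g. -19 would mean 2**-19 s, about 1.9 microseconds
```

Any offset digits finer than 2**p are dominated by this clock-reading noise, which is the point being argued in the thread.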
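The second of the two problems named at the end, disciplining the local clock from an accurate external event, can be simulated with a toy proportional-integral loop. The drift value and gains below are invented for illustration; ntpd's real discipline loop is considerably more elaborate:

```python
# Toy simulation of disciplining a drifting local clock against an
# accurate once-per-second pulse (e.g. GPS PPS) with a PI loop.
# All constants are made up; this is not ntpd's algorithm.

def discipline(drift_ppm=50.0, seconds=200, kp=0.1, ki=0.02):
    offset = 0.0       # local clock error vs. true time, in seconds
    freq_adj = 0.0     # accumulated frequency correction (fractional)
    for _ in range(seconds):            # one pulse comparison per second
        offset += drift_ppm * 1e-6 + freq_adj  # clock drifts each second
        measured = offset               # what the pulse comparison yields
        freq_adj -= ki * measured       # integral term: learn the drift
        offset -= kp * measured         # proportional term: slew toward 0
    return offset

final = discipline()
print(abs(final) < 1e-5)  # the residual offset converges toward zero
```

The integral term ends up holding minus the drift rate, which is why a disciplined clock can stay within a few microseconds even though its free-running oscillator is far worse.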