[ntp:questions] ntp server help

Unruh unruh-spam at physics.ubc.ca
Tue Aug 12 03:03:34 UTC 2008


"Maarten Wiltink" <maarten at kittensandcats.net> writes:

>"Mikel Jimenez" <mikel at irontec.com> wrote in message
>news:489DAFD4.8050009 at irontec.com...
>[...]
>> How can I configure the server so the cameras get the server time at
>> very short intervals? My objective is to keep the server and 4 cameras
>> synchronized, with no more than 0.01 s of desynchronization, taking
>> the server as the reference.

>One way would be to configure the server for broadcast mode, and the
>clients for listening to the broadcasts. Then the server would send
>out timestamps every 64 seconds (I think), which for NTP purposes
>qualifies as quite often.
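For reference, a broadcast setup along those lines can be expressed with the standard ntp.conf broadcast directives; a minimal sketch (the subnet address is invented, substitute your own):

```
# ntp.conf on the server: broadcast time to the local subnet
broadcast 192.168.1.255

# ntp.conf on each camera/client: listen for those broadcasts
broadcastclient
```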

>But that's not the normal way to have a small number of clients work.
>It is more common to run NTP on the clients and let it adjust the clock
>until it runs very nearly exactly right. That works much better than
>leaving the clock to run slow or fast and jolt it back or forward as
>required 'every very very short time'.

>Two things may be wrong with a clock: it may simply be off (reading
>for example ten past midnight at midnight), and it may be running
>fast or slow (say, advancing sixty-one minutes every hour). Most
>clocks suffer from both. Most people know no better than to set back
>that clock twenty-four minutes every day. Doing that more often will
>require smaller adjustments each time, and also have the clock being
>closer to real time on average.
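A quick check of the arithmetic in the paragraph above (the 61-minutes-per-hour figure is the deliberately exaggerated example from the text):

```python
# A clock advancing 61 minutes per real hour gains 1 minute per hour,
# hence 24 minutes per day -- the daily 24-minute step-back mentioned above.
gain_per_hour = 61 - 60                 # minutes gained per real hour
gain_per_day = gain_per_hour * 24       # minutes gained per day
print(gain_per_day)                     # 24

# Stepping more often means smaller steps and a smaller average error:
for steps_per_day in (1, 2, 4):
    step = gain_per_day / steps_per_day
    print(steps_per_day, step, step / 2)  # steps/day, step size, avg. error
```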

>But NTP can slow down or speed up the clock as well. It can really
>make the clock run at sixty minutes per hour[0]. And then you only
>need to make the rough adjustment once, if at all. After that, the
>clock is adjusted by making it run faster or slower (only a _veeery_
>little bit) when it needs it. Very soon, you get to the point where
>the time difference between client and server must be allowed to
>accumulate for quite a long time before you can even reliably see
>it, so you are correcting real error and not just measurement noise.
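A crude numerical sketch of the difference between stepping and slewing. This is not the real NTP discipline algorithm, just an invented toy loop that learns the rate error and slews the offset away:

```python
# The clock gains 'drift' seconds per second; each hour we fold half of the
# measured offset into a frequency correction and slew half of it away.
drift = 1 / 3600          # invented rate error: ~1 s gained per hour
freq_est = 0.0            # current frequency correction
offset = 0.0              # accumulated error, in seconds
for hour in range(24):
    offset += (drift - freq_est) * 3600   # error picked up this hour
    freq_est += 0.5 * offset / 3600       # learn part of the rate error
    offset *= 0.5                         # slew away part of the offset
print(f"offset after a day: {offset:+.4f} s")   # shrinks toward zero

# Compare: with no discipline at all the same clock would be off by
print(f"undisciplined:      {drift * 24 * 3600:+.1f} s")   # +24.0 s
```

The point of the toy loop is the one made above: once the rate is corrected, the residual error stays small instead of accumulating.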

The problem is that we have no idea how timekeeping on the camera is
actually done. It may just have a real-time clock -- no system time --
which it reads, and which has a one-second quantization, a la the PC
real-time clock. It may have no software on board to run a system clock
based on the processor, video, or whatever other frequency. Asking the
user to write the software to implement a system clock on board the
camera, with all of the discipline that the Linux or Unix kernel has, is
asking a bit much. After all, millions use NTP on Windows machines, which
have pretty poor machine time disciplining from all I have heard. And
very, very few people have rewritten Vista's system clock software to
make it behave itself better.

I.e., we really need to know what in the world he has -- hardware and
software -- on the camera before we can give much advice.



>NTP will start by polling every 64 seconds. When it is running well,
>it will poll less and less, until it stops at polling every 1024
>seconds (just over 17 minutes). And the offset will be not just
>under 0.01 seconds, it can be under 0.01 _milli_seconds. If it's
>running well.
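The poll schedule described above doubles each time; in NTP's power-of-two notation that is minpoll 6 up to the default maxpoll 10. As a quick illustration:

```python
# NTP poll intervals are powers of two seconds: 2**6 = 64 up to 2**10 = 1024.
intervals = [2 ** p for p in range(6, 11)]
print(intervals)            # [64, 128, 256, 512, 1024]
print(intervals[-1] / 60)   # ~17.07 -- the "just over 17 minutes" above
```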

It can be well under 0.1 ms with a reasonable network connection.
But that is only because the system software supports such behaviour. Even
NTP's software clock discipline would be dead in the water if the
per-second select() time fluctuated by 0.5 s from time to time. I.e., the
software and hardware on the camera system have to be designed properly.
We have no idea if they are.
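One way to get a rough feel for whether a system's timing wobbles that badly is to probe timer jitter directly; a simple sketch (thresholds and iteration count are arbitrary):

```python
import time

# Crude probe of timer jitter: request a 10 ms sleep repeatedly and record
# how far the actual elapsed time strays from the request.  A system whose
# timing fluctuates by half a second would show it immediately here.
request = 0.01
worst = 0.0
for _ in range(100):
    t0 = time.monotonic()
    time.sleep(request)
    elapsed = time.monotonic() - t0
    worst = max(worst, abs(elapsed - request))
print(f"worst deviation from a 10 ms sleep: {worst * 1000:.3f} ms")
```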
 

>Groetjes,
>Maarten Wiltink

>[0] These numbers are faked. A more realistic error is a tenth of
>    a second per hour.




