[ntp:questions] Getting precision when server side only offers seconds... Ideas?

sgb1010 at hotmail.com
Mon Feb 25 13:52:16 UTC 2008


Greetings,

I didn't know exactly where to post this, but while browsing around I
found this group, which I think might be able to throw some ideas at
my problem.

My issue is that I need to time-sync two machines. My first idea was,
of course, to use NTP. But the problem is that the hardware I have to
implement it on (the "server" side, so to speak), which has the
"right" time, doesn't offer me even millisecond precision. I only have
whole seconds available.

To explain it better: it is a kind of network switch to which I don't
have full access, and I can only talk to it through a protocol that
offers me a date in the format YYYY-MM-DD hh:mm:ss. Period.

On the "client", though, I have total hardware access to do what I
want (normal PC, linux, etc).
That being said, I immediately thought: I will have a maximum possible
error of 0.999... seconds:

Remote time
09:40:22.0000 (I would see 09:40:22 - error: 0.0000 s)
09:40:22.9999 (I would see 09:40:22 - error: 0.9999 s)
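
(To be clear about the model, I am assuming the switch truncates its
clock to whole seconds rather than rounding it. A tiny Python sketch
of what I mean:)

    import math

    def reported_time(true_seconds):
        # My assumption: the switch truncates to whole seconds (it
        # might round instead, which would halve the worst-case error).
        return math.floor(true_seconds)

    # 09:40:22.0000 and 09:40:22.9999 as seconds since midnight:
    for t in (34822.0000, 34822.9999):
        print(t, reported_time(t), t - reported_time(t))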

The precision I need is a MAXIMUM error of 1.0001 seconds. Given the
worst-case scenario above, this leaves me a margin of only
(1.0001 - 0.9999 =) 0.0002 s to "play" with. Given the network delay
(the switch is reached over a satellite connection, so the variation
in round-trip time between successive queries would have to be less
than that margin), this just seems impossible.
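
Just to spell out the arithmetic, here is a back-of-the-envelope
sketch in Python (the delay figures are my own rough numbers, and I am
assuming the standard NTP-style midpoint estimate, whose offset
uncertainty is bounded by half the round-trip time):

    # Back-of-the-envelope error budget (all numbers are my estimates).
    truncation_error = 0.9999    # worst case from whole-second stamps
    required_max_error = 1.0001  # the maximum error I can tolerate
    budget = required_max_error - truncation_error
    print(f"margin left after truncation: {budget * 1e3:.1f} ms")

    # An NTP-style midpoint estimate leaves an offset uncertainty of
    # up to half the round-trip time (the one-way delays can be
    # asymmetric), so over my satellite link:
    for rtt in (0.700, 1.200):   # my observed delay range, in seconds
        print(f"rtt = {rtt:.3f} s -> up to {rtt / 2 * 1e3:.0f} ms")

So even before the one-second truncation, the link alone can hide
hundreds of milliseconds of offset error.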

Am I right that the desired precision is impossible to get, or does
anyone have an idea of how I could accomplish this? Is it perhaps
possible to "guess" the milliseconds "behind" the seconds I receive?
The satellite delay is something on the order of 700-1200 ms.
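
The only approach I can imagine is to poll the switch in a loop and
catch the instant its seconds field rolls over; at that moment the
remote fractional part must be ~0, so the rollover pins the remote
second boundary to somewhere between two of my queries. A rough Python
sketch of the idea, where query_switch_time() is a stand-in for my
real protocol call (here it just simulates the switch with a made-up
offset):

    import time

    SIMULATED_OFFSET = 0.4321   # made-up remote clock offset, testing

    def query_switch_time():
        # Stand-in for my real protocol query. The switch returns
        # 'YYYY-MM-DD hh:mm:ss'; I simulate just the whole seconds.
        return int(time.time() + SIMULATED_OFFSET)

    def catch_rollover():
        # Poll until the switch's seconds value ticks over. The remote
        # clock read exactly <new value>.000 at some instant between
        # the two queries, so the midpoint gives an offset estimate
        # (ignoring the round-trip time of the queries themselves).
        t_prev = time.time()
        last = query_switch_time()
        while True:
            t_now = time.time()
            now = query_switch_time()
            if now != last:
                return now - (t_prev + t_now) / 2
            t_prev, last = t_now, now

    print(f"estimated offset: {catch_rollover():.4f} s")

But over the satellite each poll takes a full round trip of
0.7-1.2 s, so the rollover can only be pinned down to within that
whole window, and I cannot see how to squeeze that below my 0.2 ms
margin.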

Thank you for any ideas regarding this issue,

Soleth



