[ntp:questions] A question

David Woolley david at ex.djwhome.demon.co.uk.invalid
Wed Apr 30 22:44:10 UTC 2008

Bruno Cocciaro wrote:
> Yes, binary results may be good. But I am not able to get this result. I

Note that the ASCII protocol that does this is standard in every Unix, 
and might also be in Windows.  In Unix, it is normally implemented in 
inetd as an internal function.  However, it is now almost universally 
disabled and/or blocked by firewalls, on the basis of not opening 
any unnecessary services.  It has only one-second resolution.
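For what it's worth, querying that ASCII service (daytime, TCP port 13, 
RFC 867) is a one-line read.  A minimal sketch in Python, assuming you 
can find a host that still has the service enabled:

```python
import socket

DAYTIME_PORT = 13  # "daytime" service (RFC 867); one-second resolution

def parse_daytime(raw: bytes) -> str:
    """Decode a daytime reply and strip the trailing CR/LF."""
    return raw.decode("ascii", errors="replace").strip()

def query_daytime(host: str, timeout: float = 5.0) -> str:
    """Open a TCP connection to port 13 and read the one-line ASCII reply."""
    with socket.create_connection((host, DAYTIME_PORT), timeout=timeout) as s:
        return parse_daytime(s.recv(256))
```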

> installed any progs, for example Dimension 4, which connect to any server
> and synchronize my pc clock, but this is not what I want.

Getting the time at the time that the server constructed the return 
packet is a fairly trivial bit of programming; just formulate an NTP 
client packet, and read the response.  You ought to comply with the SNTP 
rules for clients, but they are fairly easy.
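To give an idea of the scale of the job: an SNTP exchange is a single 
48-byte UDP packet on port 123, and the server's transmit timestamp 
sits in bytes 40-47 of the reply.  A sketch, not a compliant client 
(it skips the SNTP rate and sanity rules mentioned above):

```python
import socket
import struct

NTP_PORT = 123
NTP_UNIX_EPOCH_DELTA = 2208988800  # seconds from 1900-01-01 to 1970-01-01

def build_client_packet() -> bytes:
    """48-byte request; first byte 0x23 = LI 0, version 4, mode 3 (client)."""
    return bytes([0x23]) + bytes(47)

def parse_transmit_time(reply: bytes) -> float:
    """Server transmit timestamp (bytes 40-47) converted to Unix time."""
    secs, frac = struct.unpack("!II", reply[40:48])
    return secs - NTP_UNIX_EPOCH_DELTA + frac / 2**32

def sntp_query(server: str, timeout: float = 5.0) -> float:
    """Send one client packet and return the server's transmit time."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_client_packet(), (server, NTP_PORT))
        reply, _addr = s.recvfrom(512)
        return parse_transmit_time(reply)
```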

However, it is unlikely that you actually want that time.  It is more 
likely that you want the time either at which you launched the enquiry, 
or at which the reply returned to you, and you want that even if you 
don't accurately know the network propagation delay.  Quite honestly, 
the easiest way of achieving that is to use a time synchronisation 
protocol, pure NTP or chrony with NTP wire formats, for example, to 
discipline the local software clock and then to read the local software 
clock.
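For reference, the reason a proper client can get by without knowing 
the propagation delay in advance is that NTP estimates it from the four 
timestamps of a single exchange, assuming a symmetric path.  The 
standard arithmetic, as a sketch:

```python
def offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Clock offset and round-trip delay from one NTP request/response.

    t1 = client send, t2 = server receive,
    t3 = server send,  t4 = client receive.
    Assumes the outbound and return paths take equal time.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay
```

For example, t1=0, t2=5, t3=6, t4=2 (client clock 4.5 s slow, 1 s of 
round-trip delay) yields exactly those figures back.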

Note, I wouldn't touch anything except the reference implementation of 
NTP and chrony.  If you use the full NTP algorithm, no third party 
program is going to do it better.  Chrony uses a different algorithm, 
but also one that has been thought out.  Any other programs either use a 
very cut down algorithm or offer no advantage over the reference NTP.

> My problem is this: I run a Labview program which repeats several
> measurements (for example 10^6 measurements, 0.1 sec for each one). Labview
> prog uses an internal clock which says each measurement is 0.100 s, but I
> need to know the instant at which prog performs the first measurement and I
> must check that after 10^6 measurements 1000??.??? seconds were spent. I am
> not sure of the fidelity of the internal clock used by Labview. My idea was
> that the fastest way to control the fidelity of the internal clock is to add
> a little part in my prog where Labview asks any server the time, for
> example by TCP or by any other way.
> Thank you very much to any user who answered.

More information about the questions mailing list