[ntp:questions] NTP "maximum error" on local network

ajit.warrier at gmail.com
Thu May 18 01:48:04 UTC 2006

I am currently involved in a research project requiring tight time
synchronization between nodes on an Ethernet LAN. We require pairwise
time offsets between nodes to be accurate to within 1 ms. As a simple
test scenario, we set up a Linux machine with a 2.6.16 kernel as our ntp
server and use the local clock as a reference:

------- /etc/ntp.conf at server -------------------
restrict default nomodify
driftfile /var/lib/ntp/ntp.drift
server 127.127.1.0 prefer
fudge 127.127.1.0 stratum 0 refid NIST

Two clients connect to this server through a 10Mbps hub, and get

------- on one of the clients -------
# ntptime
ntp_gettime() returns code 0 (OK)
  time c8164ed5.d3251000  Wed, May 17 2006 21:39:33.824, (.824784),
  maximum error 41499 us, estimated error 2 us
ntp_adjtime() returns code 0 (OK)
  modes 0x0 (),
  offset -1.000 us, frequency 212.242 ppm, interval 4 s,
  maximum error 41499 us, estimated error 2 us,
  status 0x1 (PLL),
  time constant 0, precision 1.000 us, tolerance 512 ppm,
  pps frequency 0.000 ppm, stability 512.000 ppm, jitter 200.000 us,
  intervals 0, jitter exceeded 0, stability exceeded 0, errors 0
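The "maximum error" above is a worst-case bound, not a measurement: between clock updates the kernel inflates it at the oscillator tolerance rate. A rough sketch of that growth, assuming (this is not stated in the post) that ntpd refreshes the kernel variables once per poll interval and that the kernel adds the tolerance (in us per second, i.e. ppm) every second in between:

```python
# Sketch: how the kernel's "maximum error" bound grows between clock updates.
# Assumptions, not taken from the post: ntpd updates the kernel once per poll
# interval, and time_maxerror grows by the tolerance (ppm = us/s) per second.

TOLERANCE_PPM = 512        # matches "tolerance 512 ppm" in the ntptime output
POLL_INTERVAL_S = 64       # assumed default ntpd poll interval

def max_error_us(base_error_us: float, seconds_since_update: float,
                 tolerance_ppm: float = TOLERANCE_PPM) -> float:
    """Worst-case error bound: last known error plus tolerance * elapsed time."""
    return base_error_us + tolerance_ppm * seconds_since_update

# Just before the next update, the bound has grown by roughly:
growth = max_error_us(0.0, POLL_INTERVAL_S)
print(f"growth over one poll interval: {growth:.0f} us")  # 32768 us, ~33 ms
```

With a 512 ppm tolerance and a 64 s poll interval this gives roughly 33 ms, the same order as the 41.5 ms reported, so a large "maximum error" can coexist with a tiny measured offset.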

Clearly, I am able to achieve an "offset" of 1 us and an "estimated
error" of 2 us, which looks extremely good. But strangely, the "maximum
error" field reports about 41.5 ms. When I compare timestamps
between the server and client (at the MAC layer, to cancel out the
effects of latency at higher layers), I see a gap of almost 60 ms. Has
anybody seen this kind of problem?
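One way to cross-check the MAC-layer measurement is to query the server directly with a raw SNTP packet and apply the standard NTP offset/delay formulas. This is a hypothetical probe, not something from the post; the host name "ntp-server" is a placeholder:

```python
# Minimal SNTP probe (RFC 4330 style). Sends a mode-3 client packet to the
# server and computes clock offset and round-trip delay from the four
# timestamps t1..t4. "ntp-server" below is a placeholder host name.
import socket
import struct
import time

NTP_EPOCH_DELTA = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP clock offset and round-trip delay from four timestamps."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

def sntp_query(host, timeout=2.0):
    packet = b"\x1b" + 47 * b"\x00"   # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        t1 = time.time()
        s.sendto(packet, (host, 123))
        data, _ = s.recvfrom(512)
        t4 = time.time()
    # Receive timestamp at bytes 32-39, transmit timestamp at bytes 40-47.
    def ts(i):
        sec, frac = struct.unpack("!II", data[i:i + 8])
        return sec - NTP_EPOCH_DELTA + frac / 2**32
    t2, t3 = ts(32), ts(40)
    return ntp_offset_delay(t1, t2, t3, t4)

if __name__ == "__main__":
    offset, delay = sntp_query("ntp-server")
    print(f"offset {offset * 1e3:.3f} ms, delay {delay * 1e3:.3f} ms")
```

If this on-the-wire offset agrees with the 60 ms MAC-layer gap rather than the 1 us kernel offset, the kernel statistics are describing the loop's internal state, not the true synchronization error.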


The thread above seems to indicate that the error is due to the
unstable CPU clock being used as a reference. But even so, a gap of
60 ms seems inordinately large.
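One way to test the unstable-clock hypothesis is to take repeated offset samples against the server and see how fast the offset wanders. A minimal sketch, assuming offsets are measured (by any method) at two known times:

```python
# Sketch: estimate the frequency error of a free-running clock from two
# offset samples taken at times t1 and t2 (all values in seconds).
def drift_ppm(offset1_s, t1_s, offset2_s, t2_s):
    """Frequency error implied by the change in offset, in parts per million."""
    return (offset2_s - offset1_s) / (t2_s - t1_s) * 1e6

# E.g. an offset growing by 6 ms over 100 s implies a 60 ppm frequency error:
print(drift_ppm(0.000, 0.0, 0.006, 100.0))  # 60.0
```

A drift well beyond ntpd's 500 ppm tolerance would support the unstable-CPU-clock explanation; a modest drift would point elsewhere, e.g. at how the timestamps are being compared.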

