[ntp:questions] NTP absolute accuracy?
jason at extremeoverclocking.com
Mon Jul 2 18:26:59 UTC 2007
Taken from RFC-1305:
Root Dispersion is the number indicating the maximum error relative to the
primary reference source at the root of the synchronization subnet, in
seconds.
So depending on your stratum, it *should* (from how I read it) add up all
the errors from your server on up the chain to the stratum 1 server.
I'll have to verify this when I get home tonight.
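As a rough sketch of that reading (my own assumption, not the RFC's exact
formula), each hop's dispersion would simply add along the path back to the
stratum 1 server:

```python
def root_dispersion_bound(hop_dispersions_ms):
    """Upper bound on error relative to the stratum 1 source, assuming
    per-hop dispersions simply accumulate up the chain (my reading of
    RFC-1305; the real statistic also folds in delay/skew terms)."""
    return sum(hop_dispersions_ms)

# Hypothetical stratum 3 client: 15 ms of dispersion to its stratum 2
# server, which itself reports 25 ms relative to its stratum 1 source.
print(root_dispersion_bound([15.0, 25.0]))  # 40.0
```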
One thing I'm not sure about is, for instance: if your root dispersion is
40ms, whether that means +/- 40ms (a total window of 80ms) or +/- 20ms (a
total of 40ms). Dr. Mills or someone on the NTP development team can
probably clear this up.
You can poll for root dispersion quite easily from another program, and if
it exceeds your bounds you can take whatever action you need.
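For example, a small sketch of that polling (assumes a local ntpd answering
`ntpq -c rv` queries; the variable is named "rootdisp" in newer ntpd and
"rootdispersion" in older versions, both reported in milliseconds):

```python
import re
import subprocess

def parse_rootdisp(rv_output):
    """Pull root dispersion (ms) out of `ntpq -c rv` output.

    Matches either "rootdisp" (newer ntpd) or "rootdispersion" (older).
    """
    m = re.search(r"rootdisp(?:ersion)?=([\d.]+)", rv_output)
    if m is None:
        raise ValueError("root dispersion not found in ntpq output")
    return float(m.group(1))

def root_dispersion_ms():
    """Query the local ntpd's system variables and return rootdisp in ms."""
    out = subprocess.run(["ntpq", "-c", "rv"],
                         capture_output=True, text=True, check=True).stdout
    return parse_rootdisp(out)

# e.g.: if root_dispersion_ms() > 40.0: alert, resync, whatever.
```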
>I guessed so since it wouldn't be possible under the network environments
>you pointed out.
>Then, my question is how to detect a case with absolute time inaccuracy
>beyond a certain limit? Is the Root Dispersion the best indicator? How
>about Offset and RTT? Also, I'm wondering whether there is any way to find
>out the status of the network connection, such as software and hardware
>delays---are there any parameters for them?
>Basically, I'm trying to find or combine parameters to detect a certain
>outage case where absolute time inaccuracy exceeds my limit. Let me know if
>anyone has experience on this.