[ntp:questions] How should an NTP server fail?

Miroslav Lichvar mlichvar at redhat.com
Wed Jun 9 19:36:06 UTC 2010

On Wed, Jun 09, 2010 at 07:41:28PM +0100, David L. Mills wrote:
> When a server loses all sources, its own indicators reveal that.
> However, the only way downstream clients see this is increasing
> dispersion. Depending on other available sources a client has no way to
> know (or care) about that other than increasing maximum error. If no
> other sources are available, a client may well cling to that server, as
> by design it *continues* to provide service within the maximum error
> statistic.

Continuing discussion from https://bugs.ntp.org/show_bug.cgi?id=1554

When a server loses connectivity to a source, why is that source
allowed to stay marked as the system peer?

Normally in such situations the source is deselected, which generates
a no_sys_peer event if it was the only source. But sometimes it stays
selected, which means the event is unreliable and the operator has to
use something else for monitoring, probably tracking the reach status
of each peer. Or is there something better?


Miroslav Lichvar
