[ntp:questions] How should an NTP server fail?

David L. Mills mills at udel.edu
Wed Jun 9 18:41:28 UTC 2010


David,

When a server loses all sources, its own indicators reveal that; the only
way downstream clients can see it is as increasing dispersion. Depending
on what other sources are available, a client has no way to know (or
care) about the loss other than the increasing maximum error. If no other
sources are available, a client may well cling to that server, since by
design it *continues* to provide service within the maximum error
statistic.
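
To illustrate (a rough sketch, not the actual ntpd code; the numbers and
the helper function are made up): maximum error is half the root delay
plus the root dispersion, and the dispersion term keeps growing at the
15 PPM tolerance once updates stop.

#include <stdio.h>

#define PHI 15e-6   /* assumed frequency tolerance, 15 PPM */

/* hypothetical helper: maximum error t seconds after the last update */
static double max_error(double root_delay, double root_disp, double t)
{
    return root_delay / 2 + root_disp + PHI * t;
}

int main(void)
{
    double delay = 0.020;   /* 20 ms round-trip delay to the root */
    double disp  = 0.005;   /* 5 ms dispersion at the last update */

    printf("after 1 hour: %.3f s\n", max_error(delay, disp, 3600.0));
    printf("after 1 day:  %.3f s\n", max_error(delay, disp, 86400.0));
    return 0;
}

After a day without sources the bound has grown past a second, yet the
server still answers and still advertises that growing bound, which is
exactly why a client with no alternative keeps using it.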

Dave

David Woolley wrote:

> David Mills wrote:
>
>> goofyzig,
>>
>> This issue is widely misunderstood; yours is the second such message 
>> to me today. So, please spread the word.
>>
>> When a server loses all sources it does not necessarily become 
>> unsuitable for downstream clients. Ordinarily, it inherits error 
>> statistics from upstream servers and provides them to downstream 
>> clients. Servers and clients use these statistics to calculate the 
>
>
> Whilst it may make sense to retain the system peer status, I don't see 
> that it makes sense to retain the selected status.  As system peer 
> currently also includes selected, maybe there is a need to split 
> system peer from its selected implication.
>
>> maximum error statistic which represents the maximum clock error 
>> relative to the primary reference clock. See the error budget called 
>> out in the specification. Once determined, the maximum error 
>> increases at a rate (15 PPM) determined as the maximum disciplined 
>> clock frequency error of the server clock. This increase continues 
>> indefinitely or until the sources are again found.
