[ntp:hackers] NTPv4 Brian Version

Brian Utterback brian.utterback at sun.com
Tue Aug 9 13:25:25 UTC 2005


Of course, this all depends on your definitions of good and bad guys.

Using only the data from the last 8 polls gives you a picture of the 
server over a window of anywhere from 7.5 minutes to 2.25 hours. Worse 
still, the available servers may all have different windows. I hate the 
idea of tossing a server that has behaved well for the last week but, 
because of a transient network problem, has looked bad for the last ten 
minutes. I understand the flip side, i.e., a server that served well 
for the last week but has permanently gone south being kept over one 
which is doing well now.
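The window arithmetic above can be checked directly, assuming the usual 
ntpd defaults of minpoll 6 (2^6 = 64 s), maxpoll 10 (2^10 = 1024 s), and 
the 8-sample clock filter:

```python
# Span of the 8-sample window at the usual ntpd poll-interval extremes
# (minpoll 6 = 64 s, maxpoll 10 = 1024 s). Counting the 7 intervals
# between the 8 samples gives ~7.5 minutes at the short end; counting 8
# full intervals at the long end gives ~2.25 hours, matching the figures
# quoted above.
SAMPLES = 8

for poll_exp in (6, 10):
    interval = 2 ** poll_exp                    # seconds between polls
    span_min = (SAMPLES - 1) * interval / 60.0  # minutes back to oldest sample
    print("poll 2^%d = %4d s -> window ~%.1f min" % (poll_exp, interval, span_min))
```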

Perhaps a hybrid approach, with rankings done both ways, tossing the 
worst real-time performer that is also in the lower half of the long 
term ranking. If nobody in the set qualifies, just add a new guy, reset 
the long term assessment, and then do the winnowing as in your proposal.
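To make the hybrid concrete, here is a minimal sketch; the server names, 
the scores, and the lower-score-is-better convention are all 
hypothetical, with `short` standing in for the current billboard metric 
and `long` for a week-scale average:

```python
# Hypothetical sketch of the hybrid toss rule: discard the worst
# short-term performer, but only if it is also in the worse half of the
# long-term ranking. Lower score = better (e.g. a jitter/offset figure).

def pick_victim(servers):
    """Return the name of the server to toss, or None if no server is
    bad on both time scales."""
    by_long = sorted(servers, key=lambda s: s["long"])
    # Worse half of the long-term ranking; the median stays protected.
    worse_half = {s["name"] for s in by_long[(len(by_long) + 1) // 2:]}
    candidates = [s for s in servers if s["name"] in worse_half]
    if not candidates:
        return None
    # Among long-term losers, toss the one looking worst right now.
    return max(candidates, key=lambda s: s["short"])["name"]

peers = [
    {"name": "host1", "short": 2.0, "long": 1.0},  # glitching now, good history
    {"name": "host2", "short": 1.5, "long": 9.0},  # bad on both time scales
    {"name": "host3", "short": 0.5, "long": 0.8},  # good on both
]
print(pick_victim(peers))  # -> host2
```

Note that host1, despite looking worst right now, survives because its 
week-long record is good; only host2, bad on both time scales, gets 
tossed.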

David L. Mills wrote:
> Brian,
> 
> The current code has nothing to do with billboards, ntpq or anything 
> like that. It has to do only with ntpd. It starts out with the 
> configured servers, then mitigates a la ntpd and casts out outliers 
> until ending with the pick of the litter. It then continues from there 
> absent refresh. The refresh is the issue, and that's what the 
> asynchronous resolver or equivalent is for. The scheme with multiple 
> pool-configured servers used to work, but now does not with ntpd. It 
> appears to work from what evidence I can collect with Solaris tools. 
> As for the period between refreshes, I would think something like one 
> hour.
> 
> Experience running it behind an ISDN line is that it can pick up some 
> somewhat suboptimal servers, but in each case the mitigation algorithms 
> did what they were told. Long-term ranking is a really dangerous game. 
> It can rank good guys bad or bad guys good over the long term. Behind 
> the ISDN line it seems better to go with what ntpd measures in the 
> short term.
> 
> Dave
> 
> Brian Utterback wrote:
> 
>> Very close to what I am looking for, Dave. The problem you are seeing 
>> is due to the caching in nscd. However, the default cache time for 
>> hosts is something like 5 minutes, so if you did what you suggested 
>> (i.e., configured all the addresses returned) and the period for 
>> revisiting is longer than the cache time, then you would be okay anyway.
>>
>> My only concern at this point would be the "revisit". I think that you 
>> need a longer-term criterion for judgment than just looking at the 
>> data in the billboard. A fairly short-term network glitch could cause 
>> you to lose your best server. That is the purpose of the rank 
>> variable: to provide a longer-term memory beyond ntpd's own view of 
>> which server is best.
>>
>> And, of course, this doesn't help an admin choose between non-preempt 
>> servers, although the admin might be able to use the preempt feature.
>>
>> So, if you have such a long-term criterion in mind, then set rank to 
>> that. If not, then might I suggest using rank as I have proposed it? 
>> And again, is there any downside? "rank business", ha.
>>
>> David L. Mills wrote:
>>
>>> Folks,
>>>
>>> The rank business scared me, as I had very different plans. Try the 
>>> ntp-dev version with the new server subcommand preempt:
>>>
>>>
>>> server host1 iburst preempt
>>> server host2 iburst preempt
>>> server host3 iburst preempt
>>> server host4 iburst preempt
>>> server host5 iburst preempt
>>> server host6 iburst preempt
>>> ...
>>> tos floor 2 ceiling 3 minclock 3 maxclock 4
>>>
>>> It will mobilize all the servers listed, then proceed to cast off all 
>>> but the best four servers at stratum 2, then trim those to a final 
>>> cut of 3. The four servers stay around, so the final cut can be 
>>> retrimmed. This is what I think Brian wants, but it doesn't require 
>>> manual intervention.
>>>
>>> Now, if an asynchronous resolver or equivalent were available, the 
>>> loser in the above step would be discarded periodically and 
>>> replenished from the available population. Manycast should work the 
>>> same way, but it does refresh periodically. I haven't tested the new 
>>> scheme with manycast yet. My intent is that the manycast and pool 
>>> schemes should work much the same way, although using different 
>>> discovery schemes.
>>>
>>> For some reason the pool is giving attitude. The resolver returns the 
>>> same server every time it is asked. The nslookup and dig utilities in 
>>> Solaris apparently return the right stuff, but the ntpd resolver 
>>> seems stuck. None of the region zones work except us and asia. It 
>>> would be nice if, by direction, the entire list returned in one 
>>> query could be lit up rather than calling many times.
>>>
>>> Dave
>>> _______________________________________________
>>> hackers mailing list
>>> hackers at support.ntp.org
>>> https://support.ntp.org/mailman/listinfo/hackers
>>
>>
> 


-- 
blu

Remember when SOX compliant meant they were both the same color?
----------------------------------------------------------------------
Brian Utterback - OP/N1 RPE, Sun Microsystems, Inc.
Ph:877-259-7345, Em:brian.utterback-at-ess-you-enn-dot-kom

