[ntp:questions] broadcast client

David L. Mills mills at udel.edu
Sun Nov 23 17:50:29 UTC 2008


Danny,

I came in the middle of this, but can see which way the ship is sailing. 
There is excellent evidence that the default lower and upper poll 
interval limits are completely justified for the public Internet and 
have been for the last two decades as the Internet has evolved. There is 
also excellent justification for other choices where indicated by 
analytical and mathematical considerations, especially the Allan 
intercept statistic. If Bill doesn't believe that or doesn't understand 
the oversampling, time constant and Nyquist issues, further discussion 
is pointless.

Dave

Danny Mayer wrote:

> Bill Unruh wrote:
>
>> On Sat, 22 Nov 2008, Danny Mayer wrote:
>>
>>> Normally I would not respond to a 2 month old message but I need to
>>> correct some things here that were written.
>>>
>>> Unruh wrote:
>>>
>>>>>> connected via 1 Gbps switch. The network is lightly loaded and I
>>>>>> configured the clients as such
>>>>>>
>>>>>> server ntp minpoll 4 maxpoll 4 iburst
>>>>>
>>>>> Dave Mills, please note, yet another non-believer in the NTP
>>>>> algorithms.
>>>>
>>>> What this has to do with not believing in the algorithm I have no
>>>> idea. If ntp runs from a refclock, that is EXACTLY the default
>>>> behaviour.
>>>
>>> This is NOT the default behavior. minpoll defaults to 4 and maxpoll
>>
>> minpoll defaults to 6 as far as I know.
>>
>
> Correct, that was my mistake when I looked it up.
>
>>> defaults to 10 and you should NOT change them unless you understand the
>>> discipline arguments and how these changes affect the discipline.
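For concreteness, a minimal ntp.conf sketch of the two configurations being
compared; the server names are placeholders, and the poll exponents shown
are the defaults mentioned above (minpoll 6, maxpoll 10, i.e. 64 s to 1024 s):

    # Default behaviour: ntpd manages the poll interval itself,
    # ranging between 2^6 = 64 s and 2^10 = 1024 s.
    server ntp1.example.org iburst

    # The LAN configuration under discussion: pin polling at 2^4 = 16 s.
    # Only reasonable against your own server on a private network.
    server ntp.example.lan minpoll 4 maxpoll 4 iburst
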
>>
>> Sure, and I gave my reasons below.
>>
>>>> Running on
>>>> a local private network where you are referencing your own server, that
>>>> behaviour is also fine. The reason for the backup to long poll
>>>> intervals is
>>>> a) to save the public servers from flooding, and b) to discipline the
>>>> local
>>>> clock's drift rate in case there are long periods of disconnection
>>>> from the
>>>> net. If you have constant connection and it is your own server,
>>>> neither of
>>>> those apply, and short polling is better.
>>>>
>>> a) This is not correct. It has nothing to do with public servers. In
>>> addition, I've conducted tests where I've fired hundreds of packets per
>>> second and not even noticed any effect on other work on the target
>>> server. In the case of your own servers you won't notice the traffic in
>>> any event.
>>
>> It is simply untrue that the large public servers would not notice if
>> everyone used minpoll 4. Many of the public servers get tens of thousands
>> of queries a second, and the load would be even higher if poll intervals
>> were shorter. Some were completely brought to their knees when the ntp
>> clients on routers were set to a poll interval of 0 (once per second).
>> Minimizing network traffic IS one of the considerations.
>
>
> I wasn't talking about public servers but even so that is not the reason
> for the default minimum poll. My comment about my testing was on a
> private server in any case.
>
>
>>> b) is also incorrect. The purpose of the backup is in the algorithms and
>>
>> ^^^^^^^^^^^^^^^^^^^^^^ Not clear what this
>> means.
>>
>
> I should have used the word backoff, meaning that you increase the poll
> interval. Again, this has to do with reducing the oversampling.
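
As a rough illustration of that backoff (not ntpd's actual clock-state
machine), the poll exponent is raised when recent samples agree well and
lowered when they do not, always clamped between minpoll and maxpoll; a
small Python sketch:

    # Illustrative sketch of poll-interval backoff, not ntpd's actual logic.
    MINPOLL, MAXPOLL = 6, 10  # default exponents: 2^6 = 64 s, 2^10 = 1024 s

    def adjust_poll(poll_exp, sample_ok):
        """Raise the poll exponent when recent samples agree well,
        lower it when they do not; always stay inside the limits."""
        if sample_ok:
            poll_exp += 1   # back off: poll less often
        else:
            poll_exp -= 1   # poll more often to re-acquire
        return max(MINPOLL, min(MAXPOLL, poll_exp))

    def poll_seconds(poll_exp):
        return 2 ** poll_exp  # NTP poll intervals are powers of two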
>
>>> has to do with oversampling. In this case you are making your local
>>> clock less stable rather than more stable. More is not necessarily
>>> better, and the sooner people understand this, the fewer problems
>>> they will have.
>>
>> The Allan minimum on a local
>> network is far less than David assumes in his model. The frequency shifts
>> are dominated by temperature variations, which typically change on an
>> hourly timescale for a computer that actually does work other than ntp.
>> This means that the long time intervals assumed by David are in fact
>> inappropriate for local networks. I.e., short intervals are better,
>> especially if you have constant and fast connectivity.
>>
>
> The poll interval min and max are compromise choices to provide the best
> results across the widest possible range of configurations and networks.
> To state that they are not appropriate for a local network is overstating
> the case. It's not as simple as that. Your local network is just as
> subject to network congestion and other issues, and it takes much more
> work to figure out the best set of minpoll and maxpoll.
>
>>>
>>>> I have no idea why you make the first claim. Yes, the rate will vary
>>>> as the
>>>> network rates vary, but who cares. The purpose is to discipline the
>>>> TIME,
>>>> not the rate.
>>>
>>> No. The purpose is to discipline both.
>>
>> No, the purpose of ntp is to discipline the time and it does this by
>> disciplining the rate. The rate discipline is there to discipline the
>> time.
>>
>
> The whole point is to correct the clock to the right time when it is
> first set and then keep it correct. You do that by making sure that the
> rate is disciplined, so that the clock does not drift off the correct
> time even in the absence of reliable incoming packets.
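
As a toy illustration of that feedback idea (a simple proportional-integral
loop, not ntpd's actual PLL/FLL discipline), each measured offset nudges
both the time and the estimated frequency, so the clock keeps tracking
between samples:

    # Toy sketch of disciplining time via rate: a proportional-integral loop.
    # This is only the general idea, not ntpd's implementation.
    class ToyClock:
        def __init__(self, kp=0.5, ki=0.05):
            self.freq_error = 0.0   # estimated frequency error (s/s)
            self.kp = kp            # proportional gain (time correction)
            self.ki = ki            # integral gain (frequency correction)

        def update(self, offset, interval):
            """offset: measured (server - local) time error in seconds;
            interval: seconds since the previous measurement."""
            self.freq_error += self.ki * offset / interval  # steer the rate
            phase_step = self.kp * offset                   # steer the time
            return phase_step, self.freq_error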
>
>>>> And short polls discipline the time better.
>>>
>>> It doesn't.
>>
>> Argument by blatant assertion?
>>
>
> No, it gets into the issues of stability. Dave, Stuart and I had a
> discussion about this around the time of the last IETF meeting back in
> August, but I don't have that in front of me. The bottom line, however,
> is that more is not better.
>
>>>> Rate discipline
>>>> is overrated because of the rate variations due to temperature changes
>>>> during the day anyway.
>>>>
>>> No, it is not overrated unless you are oversampling, as you are
>>> recommending.
>>
>> Yes, I am advocating "oversampling". You are claiming that it is not a
>> good thing to do. Please defend that position with a bit of maths, not
>> assertions.
>>
>
> I don't need to; Dave does an excellent job of that and is in a much
> better position to answer, having some 30 years of experiments
> to back it up.
>
>> Oversampling, as you call it, will discipline the time better, especially
>> with the Markovian feedback filter ntp uses. It will be worse at
>> disciplining the clock rate, which will fluctuate more when you
>> oversample. I.e., oversampling will make int (t(T)-T)^2 dT smaller, where
>> T is the true time and t(T) is the clock time at true time T. It will
>> however make int (dt/dT - 1)^2 dT larger. Now, for most people, it is the
>> time that is important, not dt/dT. If it is the rate that is important
>> to you (i.e. you must know short intervals of time accurately -- to
>> better than 1 PPM -- e.g. you are timing things that last one second and
>> you want to know them to 1 usec), then use longer poll intervals with the
>> ntp algorithm (or use a different algorithm and different hardware).
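
Those two integrals can be approximated directly from logged samples; a
small Python sketch of the discrete versions (RMS time error versus RMS
fractional-rate error), assuming the clock reading t[i] is recorded at
equally spaced true times T[i] = i*dt:

    # Discrete versions of the two error measures above, for equally
    # spaced samples: t[i] is the clock reading at true time T[i] = i*dt.
    import math

    def rms_time_error(t, dt):
        """Approximates int (t(T) - T)^2 dT, reported as an RMS in seconds."""
        return math.sqrt(sum((t[i] - i * dt) ** 2
                             for i in range(len(t))) / len(t))

    def rms_rate_error(t, dt):
        """Approximates int (dt/dT - 1)^2 dT, reported as an RMS rate error."""
        rates = [(t[i + 1] - t[i]) / dt - 1.0 for i in range(len(t) - 1)]
        return math.sqrt(sum(r * r for r in rates) / len(rates))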
>
>
> The simple answer is to talk about the proverbial spring yo-yoing around
> a fixed point. You don't want it to yo-yo; you want it to be damped
> so that the fluctuations around the mean get smaller. By oversampling
> you are perturbing the calculated time and frequency, and the increased
> perturbation means that you reduce the stability. This is what I call my
> pebble-in-the-pond analogy. Your discussion above suggests that it is
> okay to have wild fluctuations around the mean, even though it means that
> you may have taken a clock reading at one of the peaks of the
> fluctuation. You really need to damp the fluctuations as much as possible.
>
> Danny