[ntp:questions] The libntp resumee...
kayhayen at gmx.de
Sun Sep 7 06:15:59 UTC 2008
Hello Mr. Unruh,
> >What worried me more was how often we can query the local ntpd before it
> > will have an adverse effect. In the meantime I sought to convince myself
> > that ntpq requests are served at a different priority (other socket) than
> > regular NTP requests. I didn't find 2 sockets though.
> Depends on the system but thousands of times per second is not out of the
> ballpark. I assume you are not planning anything that severe.
> (Some servers bombarded by those idiotic people managed, I believe, those
> kinds of rates.)
No, not at all. We will only be targeting our local ntpd with ntpq requests
and then we will likely be able to use low rates.
As we are now going to monitor the offsets on our own, contacting the external
ntpd at a rate of our choosing, we will only need to know when ntpd is going
to contact an external server, and then possibly restrict that via another
ntpq request.
So being fast is no longer critical for any of that. Thank you for pounding on
me with that. :-)
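Such a low-rate monitor could, for example, parse the variable list that
`ntpq -c rv` prints. This is only a sketch: the parsing helper and the sample
response line are illustrative, and in a real monitor the text would come from
running ntpq against the local ntpd.

```python
import re

def parse_rv(text):
    """Turn the 'name=value' pairs of an 'ntpq -c rv' response
    into a dict of strings."""
    return dict(re.findall(r"(\w+)=([^,\s]+)", text))

# Sample fragment of an 'rv' response; in a real monitor this text
# would come from e.g. subprocess.run(["ntpq", "-c", "rv"], ...).
sample = "leap=00, stratum=3, rootdelay=12.552, offset=0.213"
rv_vars = parse_rv(sample)
print(float(rv_vars["offset"]))  # local clock offset in milliseconds
```

Run once every few minutes, this stays far below the query rates discussed
above.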
> >> Briefly, you use the defaults for MINPOLL and MAXPOLL. You may use the
> >> "iburst" keyword in a server statement for fast startup. You may use
> >> the "burst" keyword ONLY with the permission of the the server's owner.
> >> 99.99% of NTP installations will work very well using these rules". If
> >> yours does not, ask here for help!
> >Now speaking about our system, not the middleware, with connections as
> >External NTPs <-> 2 entry hosts <-> 8 other hosts.
> >And iburst and minpoll=maxpoll=5 to improve the results.
> On which? That should NOT be on the external NTPs unless you own them. That
> will not necessarily improve results-- depends on whether you want short
> term accuracy or long term (eg what happens if the connection with the
> outside world goes down for 3 days. Do you want to make sure your systems
> will keep good time during those three days? Are you willing to buy 25usec
> rather than 50usec short term accuracy for 10 sec drift over that 3 days?
If the NTP connections fail, we can accept a slow drift very well. But see my
last response to Richard B. Gilbert about why this is needed. We want the 8
other hosts to synchronize fast.
When they "iburst" none of the entry hosts may already have completed its own
startup, so they need to poll quickly even after the "iburst" or else
sychronization after reboot will take too long.
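For the topology described above, the configuration might look roughly like
this (a sketch only; the host names are placeholders, and whether the fixed
minpoll/maxpoll on the other hosts is a good idea is exactly the point under
discussion):

```
# On the two entry hosts: external servers, iburst only,
# default minpoll/maxpoll (6/10).
server ntp1.example.org iburst
server ntp2.example.org iburst

# On the eight other hosts: the entry hosts, iburst plus a
# fixed poll interval of 2^5 = 32 s for fast convergence.
server entry1.example.local iburst minpoll 5 maxpoll 5
server entry2.example.local iburst minpoll 5 maxpoll 5
```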
> >Currently we observe that both entry hosts can become restricted due to
> > large offsets seen from the other hosts, and that will make the software
> > refuse to go on. Ideally that would not happen.
> >I will try to formulate questions:
> >When the other hosts synchronize to the entry hosts of our system, don't
> > the other hosts' ntpd know when and how much these entry hosts changed
> > their time due to input?
> Yes, and no. On one level no-- they trust their sources. However part of
> the information they get is the dispersion. That gives some info about how
> well those servers are tracking the outside world.
But that would be more of a "no". All that the increased dispersion on the
"entry hosts" due to the required time shifting is going to give us is a
slowdown in the synchronization of the "other hosts".
> >Is the use of ntpdate before starting ntpd recommended and/or does the
> > iburst option replace it?
> Not recommended.
I sort of think that we can build something for the "other" hosts that makes
them wait for the "entry" hosts to be synchronized. See that response to
Richard B. Gilbert again.
Alternatively, we could change ntpd so that the iburst lasts until sufficient
synchronization has been achieved. But it appears to be simpler to delay the
iburst by delaying the ntpd start until sufficient conditions are met.
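Delaying the start could be done with a small wrapper that checks the entry
hosts before launching ntpd. A minimal sketch of the check, assuming the text
comes from `ntpq -c rv <host>` (the helper name and the sample responses are
illustrative):

```python
import re

def is_synchronized(rv_text):
    """Judge an 'ntpq -c rv' response: leap must not be the
    'alarm' value 11, and the stratum must be a real one (< 16)."""
    fields = dict(re.findall(r"(\w+)=([^,\s]+)", rv_text))
    return fields.get("leap") != "11" and int(fields.get("stratum", "16")) < 16

# Sample responses (an unsynchronized and a synchronized ntpd):
booting = "leap=11, stratum=16, offset=0.000"
running = "leap=00, stratum=3, offset=0.213"

print(is_synchronized(booting))  # False
print(is_synchronized(running))  # True
```

A startup wrapper on the "other" hosts could loop over the entry hosts with
this check, sleep a few seconds between attempts, and only then start ntpd.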
For the startup of our system, that could be a solution that removes the need
for permanently low poll intervals, which are really only needed initially.