[ntp:questions] problem with pool directive?
Mike Cook
mike.cook at orange.fr
Wed Nov 12 08:59:19 UTC 2014
Sorry if this is a dup; my first went out in a non-text format. I just upgraded to Yosemite.
> Le 12 nov. 2014 à 00:15, Brian Utterback <brian.utterback at oracle.com> a écrit :
>
> I believe that the number of pool servers used is determined by the minclock and maxclock parameters.
>
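(For anyone following along: those parameters are set with the tos directive in ntp.conf. A hedged example — the values below are illustrative, not my actual config:)

```conf
# /etc/ntp.conf — illustrative fragment only
pool 0.europe.pool.ntp.org iburst

# maxclock caps the total number of associations ntpd will maintain,
# so it bounds how many pool servers get mobilized; minclock is the
# minimum number of survivors the clustering algorithm keeps.
tos minclock 3 maxclock 5
```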
Hmmm. Good idea, but it looks like there may be work required. Results of a quick test on a Cubietruck with:
mike at cubieez:~$ ntpq --version
ntpq 4.2.7p452 at 1.2483 Fri Jul 18 15:35:11 UTC 2014 (2)
mike at cubieez:~$ ntpq -pn
remote refid st t when poll reach delay offset jitter
==============================================================================
+192.168.1.23 .GPS. 1 u 33 64 377 0.429 -0.031 0.005
0.europe.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.002
*192.168.1.4 .PPS1. 1 u 64 64 377 0.648 0.027 0.024
+192.168.1.15 .GPS1. 1 u 24 64 377 0.595 0.068 0.031
-149.210.163.34 80.94.65.10 2 u 1 128 377 22.070 1.290 0.013
-212.51.144.44 .PPS. 1 u 105 128 377 32.521 0.185 0.024
-212.70.148.17 109.160.0.154 3 u 78 128 377 51.914 -1.027 0.077
-46.4.205.42 192.53.103.108 2 u 104 128 377 33.453 -2.408 0.031
-91.240.0.5 212.82.32.15 2 u 36 128 377 25.943 5.495 0.106
Add our limiter (tos maxclock 5) to /etc/ntp.conf and restart ntpd:
mike at cubieez:~$ sudo vi /etc/ntp.conf
[sudo] password for mike:
mike at cubieez:~$ sudo /etc/init.d/ntp restart
[sudo] password for mike:
[ ok ] Stopping NTP server: ntpd.
[ ok ] Starting NTP server: ntpd.
mike at cubieez:~$ ntpq -pn
remote refid st t when poll reach delay offset jitter
==============================================================================
+192.168.1.23 .GPS. 1 u 6 16 37 0.193 -0.017 0.038
0.europe.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.002
*192.168.1.4 .PPS1. 1 u 7 16 37 0.498 -0.055 0.062
+192.168.1.15 .GPS1. 1 u 5 16 37 0.578 0.065 0.036
-62.75.236.38 192.53.103.108 2 u 2 64 3 26.994 -1.179 0.091
-193.225.118.163 121.131.112.137 2 u 56 64 1 43.189 -0.369 0.441
mike at cubieez:~$
So tos maxclock 5 limits the number of pool servers allocated, as Brian suggests. However, removing access to a local clock leads to instability from which the pool allocations do not recover.
mike at cubieez:~$ # pull a local clock
mike at cubieez:~$
mike at cubieez:~$ bin/ntpchk
NOTICE: using /usr/local/bin/ntpq to query ntpd.
ntp is up and running - checking peer status
Wed Nov 12 08:50:27 CET 2014
remote refid st t when poll reach delay offset jitter
==============================================================================
+192.168.1.23 .GPS. 1 u 19 32 377 0.450 0.008 0.022
0.europe.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.002
*192.168.1.4 .PPS1. 1 u 4 32 377 0.629 0.022 0.053
+192.168.1.15 .GPS1. 1 u 2 32 377 0.618 0.055 0.034
-62.75.236.38 192.53.103.108 2 u 23 64 177 27.043 -1.064 0.065
-193.225.118.163 165.94.197.40 2 u 24 64 177 43.191 -0.371 0.543
Wed Nov 12 08:51:32 CET 2014
remote refid st t when poll reach delay offset jitter
==============================================================================
+192.168.1.23 .GPS. 1 u 51 32 376 0.450 -0.044 0.045
0.europe.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.002
*192.168.1.4 .PPS1. 1 u 4 32 377 0.629 0.022 0.045
+192.168.1.15 .GPS1. 1 u 2 32 377 0.618 0.055 0.043
-62.75.236.38 192.53.103.108 2 u 20 64 377 27.030 -1.169 0.081
-193.225.118.163 165.94.197.40 2 u 22 64 377 43.173 -0.369 0.483
Now 192.168.1.23 goes down and becomes unreachable:
Wed Nov 12 08:59:01 CET 2014
remote refid st t when poll reach delay offset jitter
==============================================================================
192.168.1.23 .GPS. 1 u 500 64 0 0.450 -0.044 0.000
0.europe.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.002
*192.168.1.4 .PPS1. 1 u 3 32 377 0.626 0.007 0.008
+192.168.1.15 .GPS1. 1 u 30 32 377 0.613 0.003 0.015
+139.112.153.37 146.213.3.181 2 u 17 64 7 45.515 -3.319 0.214
-147.231.100.5 147.231.100.11 2 u 18 64 7 51.684 -10.521 0.300
^C
mike at cubieez:~$ # this appears stable
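A quick, hedged way to count the live associations in billboards like the ones above: skip the two header lines, and drop anything with a reach register of 0 (which also discards the .POOL. placeholder row):

```shell
# count_peers: read an `ntpq -pn` billboard on stdin and print the
# number of associations with a nonzero reach register (column 7).
count_peers() {
    awk 'NR > 2 && $7 != 0 { n++ } END { print n + 0 }'
}

# typical use, assuming ntpq is on PATH:
#   ntpq -pn | count_peers
```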
An extra pool server is not allocated to fill the hole. This may be as designed, since the association has not disappeared, but ideally one would like the hole filled with something that works.
But worse is to come.
Nov 12 09:06:33 cubieez ntpd[16279]: 147.231.100.5 local addr 192.168.1.124 -> <null>
Nov 12 09:12:19 cubieez ntpd[16279]: 139.112.153.37 local addr 192.168.1.124 -> <null>
Wed Nov 12 09:05:59 CET 2014
remote refid st t when poll reach delay offset jitter
==============================================================================
+192.168.1.23 .GPS. 1 u 3 64 377 0.298 0.019 0.072
0.europe.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.002
*192.168.1.4 .PPS1. 1 u 33 32 377 0.648 0.019 0.018
+192.168.1.15 .GPS1. 1 u 24 32 377 0.557 0.020 0.023
-139.112.153.37 146.213.3.181 2 u 27 64 377 45.530 -3.295 0.049
-147.231.100.5 147.231.100.11 2 u 32 64 377 52.166 -10.782 0.131
Wed Nov 12 09:07:03 CET 2014
remote refid st t when poll reach delay offset jitter
==============================================================================
+192.168.1.23 .GPS. 1 u 67 64 377 0.298 0.019 0.072
0.europe.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.002
*192.168.1.4 .PPS1. 1 u 64 64 377 0.648 0.019 0.028
+192.168.1.15 .GPS1. 1 u 56 64 377 0.557 0.020 0.023
-139.112.153.37 146.213.3.181 2 u 91 64 376 45.530 -3.295 0.049
Wed Nov 12 09:11:20 CET 2014
remote refid st t when poll reach delay offset jitter
==============================================================================
+192.168.1.23 .GPS. 1 u 63 64 377 0.446 -0.047 0.014
0.europe.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.002
*192.168.1.4 .PPS1. 1 u 59 64 377 0.647 0.005 0.031
+192.168.1.15 .GPS1. 1 u 52 64 377 0.578 0.037 0.026
-139.112.153.37 146.213.3.181 2 u 7 64 357 45.530 -3.295 0.079
Wed Nov 12 09:12:24 CET 2014
remote refid st t when poll reach delay offset jitter
==============================================================================
+192.168.1.23 .GPS. 1 u 63 64 377 0.450 -0.055 0.009
0.europe.pool.n .POOL. 16 p - 64 0 0.000 0.000 0.002
*192.168.1.4 .PPS1. 1 u 57 64 377 0.647 0.005 0.028
+192.168.1.15 .GPS1. 1 u 50 64 377 0.583 0.054 0.044
Now we have only the local servers, and we do not recover from this position. So it seems a client could end up with just two clocks, and we know how good that is.
Restarting the node restores a stable five-clock status.
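(Side note, hedged: if the failure mode to guard against is ntpd setting the clock from too few survivors, tos minsane looks like the relevant knob — it sets the minimum number of survivors required before ntpd will synchronize at all. The value below is illustrative:)

```conf
# illustrative: refuse to set the clock unless at least 3 sources survive
tos minsane 3
```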
> Brian Utterback.