
[questions] Re: NTP community feels broken

Jim Pennino wrote:
William Unruh <unruh@xxxxxxxxxx> wrote:
On 2022-06-19, David Woolley <david@ex.djwhome.demon.invalid> wrote:
On 19/06/2022 01:06, chris wrote:
In practice, that will be small, since the
data sheet figures for a typical max232 assume a 2.5nf capacitive
load on the output, whereas a few inches of wire into a rs232 line
receiver setup might be much faster.

As we are talking about compliant RS232, which is the only real world
reason for not just connecting TTL directly to the RS232 port, the
2.5nF condition is the maximum capacitance that can appear across the
driver output as the result of what it is driving.  That sets a minimum
possible slew rate.

However a compliant RS232 system also has a maximum permitted slew rate,
intended to minimise cross talk, and probably also to ensure that long
cables don't ring, as the initial transient reflects backwards and
forwards.  30V/µs is the maximum permitted slew rate for a compliant
system.  If your system exceeds that, even if it is using RS232 drivers,
it is not a compliant system.

Who cares if it is compliant, especially as most real RS232 hardware does
better? The RS232 standards are about 50 years old. They were still using
vacuum tubes:-)

Test your system to see what it can do.

Hand-wringing over the serial physical characteristics is pointless
unless you have a very long cable or are designing your own interface.

Commercial hardware is orders of magnitude better at both latency and
jitter than almost all computer hardware/software combinations, unless
you are using computer hardware specifically designed for real time
systems with a real time OS.

The limiting factor for PC's and things like the Pi will be the jitter
in processing the interrupt generated by the PPS signal.

This tends to be on the order of 1 microsecond at best on every PC and
SBC board I have used.

FYI the best I've seen so far is about 900 nanoseconds on a Pi4.

Back before OoO processors, spread-spectrum clocks and variable frequency boosts, i.e. in the Pentium days around 1994-1997, it was trivially easy to get far below your stated 1 us jitter.

It is indeed harder today!

A proper GPS clock these days needs to be inverted, i.e. the server asks the clock what time it is _now_, and gets back a timestamp.

So, by taking the local system clock, sending off this query, and then checking the local clock again, you can both measure the latency of this reading and get the official/GPS-based time.

In order to work this way, the GPS needs a way to use its local oscillator (typically running at 10 MHz) to count ticks since the last second transition. There has been at least one reference clock that worked this way.

Repeat this call maybe 5 times and pick the one with the lowest turnaround time.
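The query-measure-repeat scheme above can be sketched as follows. This is a hedged illustration, not any particular driver's code: `query_gps_time()` is a hypothetical stand-in for the real clock query (here simulated with a random delay), and the midpoint assumption is the usual symmetric-latency approximation.

```python
import time
import random

def query_gps_time():
    # Hypothetical stand-in for asking the GPS clock "what time is it now?".
    # Simulated here: we just sleep for a variable serial/USB delay and
    # return the system time as if it had come from the GPS.
    time.sleep(random.uniform(0.0005, 0.003))
    return time.time()

def best_of_n(n=5):
    # Query n times and keep the sample with the smallest round-trip,
    # since that one puts the tightest bound on the true offset.
    best = None
    for _ in range(n):
        t1 = time.time()        # local clock before the query
        gps = query_gps_time()  # timestamp from the (simulated) GPS
        t2 = time.time()        # local clock after the reply
        rtt = t2 - t1
        # Assume the GPS timestamp corresponds to the midpoint of the
        # round trip, so the offset is gps minus the midpoint of (t1, t2).
        offset = gps - (t1 + t2) / 2
        if best is None or rtt < best[0]:
            best = (rtt, offset)
    return best

rtt, offset = best_of_n()
print(f"best round-trip {rtt*1e3:.3f} ms, estimated offset {offset*1e6:+.1f} us")
```

The "pick the lowest turnaround" filter is the same idea NTP itself uses when selecting among samples: the shortest round trip has the least room for asymmetric queueing delay.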


- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
This is questions@xxxxxxxxxxxxx
Subscribe: questions+subscribe@xxxxxxxxxxxxx
Unsubscribe: questions+unsubscribe@xxxxxxxxxxxxx