[ntp:questions] Using NTP to calibrate sound app

no-one at no-place.org
Sun Jan 27 15:51:45 UTC 2013


Wow, 10 responses in one day.  Thanks to everyone.

@David Woolley:  Perhaps you are right about SNTP vs. NTP.  My app on
iPhones and Android devices cannot have access to kernel time
adjustments if it is to remain a well-behaved app.  And the crystal
that runs the system clock in a smartphone is usually not the same one
that drives the audio sampling clock.  So the audio sampling clock
needs to be compared against NTP time/frequency directly.
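
Roughly, the plan is to take an NTP-referenced timestamp at the start
and at the end of the run, count the audio samples delivered in
between, and divide.  A minimal sketch of that arithmetic in Python
(the function and parameter names are just illustrative; the real app
would of course get its sample count from the platform's native audio
API):

def measure_sample_rate(ntp_time_start, samples_start,
                        ntp_time_end, samples_end, nominal_rate=44100.0):
    """Derive the actual audio sampling rate from two NTP-referenced points."""
    elapsed = ntp_time_end - ntp_time_start            # seconds of NTP time
    actual_rate = (samples_end - samples_start) / elapsed
    ppm_error = (actual_rate / nominal_rate - 1.0) * 1e6
    return actual_rate, ppm_error

As a made-up example, 158,760,000 samples counted over 3600.05 NTP
seconds works out to about 44099.4 Hz, i.e. roughly -14 ppm relative
to a nominal 44100 Hz.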

@unruh: You are exactly right.  Whatever network jitter there is, I
just need to make the calibration run long enough that the derived
frequency is close enough.  Given infinite time I could make
arbitrarily precise frequency measurements.  Of course the limiting
factor is how long a calibration run the user is willing to put up
with.  I hope to mitigate that problem by the following:

1. This calibration is only needed once when the user installs my app.
2. The calibration, once initiated, will run unattended.  So the user
will be instructed to start the calibration and then leave the
smartphone alone.  During that time our app will have to be running
continuously, so we will recommend that the user leave the phone
plugged in to a charger, preferably overnight.  When he wakes in the
morning the calibration will have been completed.

Unfortunately the calibration run cannot survive being interrupted by
a phone call.  So if it is interrupted, the user will simply be
notified that the calibration was aborted because of the interruption
and will have to start the calibration run over again.  But I don't
see that as a big problem.  My current estimate of the required time
span, based on expected network jitter, is about one hour.  It is not
that hard to get one hour of uninterrupted time, especially if the
user starts the calibration run just before retiring for the night.
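
For a rough sense of the numbers (the 20 ms figure below is just my
assumption about the residual timestamp error left at each end after
averaging a burst of exchanges, not a measurement):

timestamp_error = 0.020    # assumed residual NTP timestamp error per end, seconds
run_length = 3600.0        # one-hour calibration run, seconds
worst_case_ppm = 2 * timestamp_error / run_length * 1e6
print(worst_case_ppm)      # about 11 ppm of frequency uncertainty

With a smaller residual error, or a longer run, the bound tightens
proportionally.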

@Maarten Wiltink and Maurice:  OK, I will read up on using
<vendor>.pool.ntp.org to select my time server.  But is there any
problem with trying to use the same time server (or servers) at the
beginning and the end of the calibration run?  I don't need to access
the time servers in the middle of the run, just at the beginning and
the end.  Or is there even any advantage to trying to use the same
servers on both ends?  After all, the network delays could change
quite a bit over the course of an hour, or even from one sample to the
next.  And the protocol is supposed to factor out symmetric delays
anyway.
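
For what it is worth, here is the sort of single SNTP exchange (RFC
4330 style) I have in mind for each end of the run.  The server name
is just a placeholder; the proper vendor zone would come from
registering with the pool project, and a real run would average
several such exchanges:

import socket, struct, time

NTP_EPOCH_OFFSET = 2208988800  # seconds from 1900-01-01 to 1970-01-01

def sntp_query(server="pool.ntp.org", timeout=2.0):
    """Return (clock offset, round-trip delay) in seconds, per RFC 4330."""
    packet = bytearray(48)
    packet[0] = (0 << 6) | (4 << 3) | 3          # LI=0, version 4, mode 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        t1 = time.time()                         # T1: client transmit time
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(512)
        t4 = time.time()                         # T4: client receive time
    def ntp_to_unix(seconds, fraction):
        return seconds - NTP_EPOCH_OFFSET + fraction / 2**32
    t2 = ntp_to_unix(*struct.unpack("!II", data[32:40]))   # T2: server receive
    t3 = ntp_to_unix(*struct.unpack("!II", data[40:48]))   # T3: server transmit
    offset = ((t2 - t1) + (t3 - t4)) / 2         # assumes symmetric path delay
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

The offset formula is exactly where the symmetric-delay assumption
comes in, and as I understand it any asymmetry that is the same at
both ends of the run would cancel out of the frequency measurement
anyway.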

@David Woolley:  Your comment about phone networks being digital is a
concern I have been wondering about too, especially regarding Skype.
Since Skype relies on indeterminately delayed Internet data, the
timing of the reproduced audio is bound to be a function of the local
computer's sound system oscillator, perhaps soft-locked to the
Internet data stream.  But calibrating for the reproduced sound
sampling rate in Skype is no easier than the problem I am facing.  So
I suspect that short-term frequency variations in Skype-reproduced
audio are inevitable.  With my old method of calibrating to NIST tones
over the phone, I advise my users to avoid any form of Internet phone
service.  But I think cell phones are probably OK because they can be
tightly locked to the cell towers' timing, which as you say is usually
very precise.  However, I have seen desktop computer sound cards whose
actual audio sampling rate is off from nominal by as much as 6 parts
per thousand, which is strange because I know that even the cheapest
quartz crystals are rated for better accuracy than that.

In case you are wondering, my app is a professional piano tuning app.
The standard in this industry is that tuning devices should be
accurate to 12 parts per million.  I know that is probably overkill
for tuning pianos, but that is what the professionals expect from
their equipment.
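
In case anyone wonders how small 12 ppm is in musical terms, a quick
conversion to cents (my own arithmetic, not part of any published
standard):

import math
ppm = 12e-6
cents = 1200 * math.log2(1 + ppm)   # cents = 1200 * log2(f_actual / f_nominal)
print(cents)                        # roughly 0.02 cents

which is consistent with calling it overkill for audible purposes.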

Robert Scott
Hopkins, MN


