[ntp:questions] Using NTP to calibrate sound app

no-one at no-place.org
Sat Jan 26 02:53:51 UTC 2013

I am an app developer who has a precision audio frequency app for
iPhone and Android devices.  For my app the nominal crystal oscillator
accuracy in these devices is not sufficient.  Up until now I have been
providing frequency calibration in my app by instructing the user to
call the telephone feed of WWV (NIST) audio (using a separate landline
phone) and let my app listen to the 500 Hz and 600 Hz tones.  By
analyzing the audio I can correct for the device's audio system clock
deviation.  Normally the user only needs to do this once, after the app
is installed, because these devices are stable enough once I store the
offset.
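For what it's worth, the tone-based approach can be sketched roughly
like this (a crude zero-crossing estimate, with hypothetical rates; a
real implementation would interpolate the crossings and average over a
long window to reach ppm-level accuracy):

```python
import math

def actual_rate_from_tone(recording, nominal_rate, reference_hz=500.0):
    """Estimate the device's true sample rate from a recording of a
    known reference tone (e.g. WWV's 500 Hz tone).  Counts the
    positive-going zero crossings as a coarse frequency measurement."""
    crossings = sum(1 for a, b in zip(recording, recording[1:]) if a < 0 <= b)
    duration_nominal = len(recording) / nominal_rate  # seconds per the nominal clock
    measured_hz = crossings / duration_nominal        # tone freq as the device sees it
    # A fast device clock makes the tone read low, so scale accordingly:
    return nominal_rate * reference_hz / measured_hz
```

The sign convention matters: if the device samples faster than its
nominal rate, the 500 Hz tone appears below 500 Hz when the recording
is interpreted at the nominal rate, hence the final ratio.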

Now I am considering an alternate means of performing this calibration
using NTP.  The iPhone and Android devices deliver audio to my app in
small packets.  A calibration run would consist of an initial NTP
synchronization with an audio packet, followed by a period of some
number of minutes during which I will just count audio packets,
followed by a final NTP synchronization with the last audio packet.
By knowing the time difference over some number of audio packets I
hope to calculate the actual audio clock frequency for that device.
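The arithmetic of that calibration run can be sketched as follows; the
nominal rate, packet size, and run numbers here are hypothetical:

```python
NOMINAL_RATE = 44100.0       # Hz; hypothetical nominal audio sample rate
SAMPLES_PER_PACKET = 1024    # hypothetical samples per audio packet

def actual_sample_rate(t_start, t_end, packet_count):
    """Estimate the true audio clock rate: samples counted between two
    NTP-derived timestamps, divided by the elapsed (true) time."""
    elapsed = t_end - t_start                    # seconds of NTP time
    samples = packet_count * SAMPLES_PER_PACKET  # samples actually delivered
    return samples / elapsed                     # true samples per second

# e.g. a 600-second run over which 25,840 packets arrived:
rate = actual_sample_rate(0.0, 600.0, 25840)
ppm_error = (rate - NOMINAL_RATE) / NOMINAL_RATE * 1e6
```

Note that only the *difference* between the two NTP readings matters,
so a constant offset in the NTP result cancels out; what limits the
accuracy is the jitter of each reading relative to the run length.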

My question is about the NTP procedure I should follow to do this.  I
obviously don't want to hard-code for a specific time server because
things could change after the user gets my app and it is unfair to
send a whole block of users to the same server.  The Server Pool looks
promising.  Does pool.ntp.org just behave like a Stratum 2 server, so
I could hard-code that hostname into my implementation of NTP in my
app?  I
would appreciate any observations on the promise of this approach.
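For concreteness, a minimal one-shot SNTP query against the pool might
look like the sketch below (RFC 4330 style).  This is only an
illustration: a production client should follow the pool's usage
guidelines, spread queries across the rotating pool addresses, and
correct for round-trip delay rather than taking a single raw
timestamp.

```python
import socket
import struct

NTP_EPOCH_OFFSET = 2208988800  # seconds from 1900-01-01 (NTP) to 1970-01-01 (Unix)

def parse_transmit_time(reply):
    """Extract the 64-bit Transmit Timestamp (bytes 40-47 of an SNTP
    reply) and convert it to a Unix timestamp (float seconds)."""
    secs, frac = struct.unpack("!II", reply[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

def sntp_time(server="pool.ntp.org", timeout=2.0):
    """One-shot SNTP query: send a 48-byte client request to UDP port
    123 and read the server's transmit time from the reply."""
    request = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, 123))
        reply, _ = sock.recvfrom(48)
    return parse_transmit_time(reply)
```

Since pool.ntp.org resolves to different volunteer servers on each DNS
lookup, the two readings bracketing a calibration run may come from
different machines, which is another reason to keep the run long
enough that per-reading jitter is small compared to the interval.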

Robert Scott
Hopkins, MN
