[ntp:questions] NIST vs. pool.ntp.org ?
no-one at notreal.invalid
Wed Mar 27 22:23:09 UTC 2013
On Wed, 27 Mar 2013 16:14:53 -0400, "Richard B. Gilbert"
<rgilbert88 at comcast.net> wrote:
>I think you need to start by defining the quality of time you need. Is
>4:13 PM Eastern Time good enough? Some people just want to "Get me to
>the church on time!" OR the bus, or the train or plane. A radio
>astronomer is almost certainly going to insist on time to the nearest
>Nano-seconds, "get me to the church/train/plane . . . . on time!" or
>Most of the range above is probably going to be "overkill" for most people.
>Please try to define your requirements a little more closely!
I did post my requirements a month ago in a thread about calibrating a
piano tuning app using timestamps, and it degenerated into a long
discussion on the accuracy of quartz crystals and the accuracy needed
for piano tuning, which eventually spilled over into comp.dsp. Please
let's not open that discussion again. But for those of you that may
have missed it, here are the main points.
I have an app I have been providing to professional piano tuners for
many years. My competition advertises an accuracy of 0.02 cents
(which is 11 ppm). So whether or not you think piano tuning standards
need to be that good, I need to meet that standard to be competitive.
For whatever reason, the manufacturers of smartphones (especially
Android devices) do not hold the audio sample rate of their devices to
that standard. So I need to provide a way for the user to perform an
initial calibration after the app is installed. Currently I am doing
that by having the user get a trusted audio frequency source and let
the app listen to it. One of my competitors actually sends each user
a calibrated tone source/metronome to use in calibrating the app.
This is sometimes difficult for non-tech-savvy piano tuners. I thought
it would be great if I could provide a means to calibrate the app
using network time servers.
The plan is to start up an audio input stream on the smartphone and
timestamp the blocks of data as they are received from the microphone.
The rate of arrival of these blocks of data is tied to the audio
sample rate I wish to determine. If I timestamp a block of audio
data, then count data blocks for about 3 hours, then timestamp another
block of data I will be able to calculate the audio sample rate.
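The block-counting arithmetic can be sketched as follows. This is only an illustration: the block size and the numbers are hypothetical, chosen to show the calculation, not taken from any real device.

```python
# Sketch of the sample-rate calibration arithmetic described above.
# FRAMES_PER_BLOCK is an assumed, hypothetical block size.

FRAMES_PER_BLOCK = 1024

def estimated_sample_rate(blocks_counted, t_start, t_end):
    """Estimate the actual audio sample rate (Hz) from two
    timestamps (seconds) and the number of audio blocks
    received between them."""
    frames = blocks_counted * FRAMES_PER_BLOCK
    return frames / (t_end - t_start)

# Hypothetical run: 465,000 blocks counted over a 3-hour
# (10,800 s) calibration window.
rate = estimated_sample_rate(465000, 0.0, 10800.0)
```

The result is the device's true sample rate, which the app can then compare against the nominal rate (e.g. 44100 Hz) to derive a correction factor.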
To achieve 11 ppm accuracy in frequency I need to have a calibration
time interval that is about 90,000 times as long as the timestamp
uncertainty. If the timestamp uncertainty is, say, 100 msec., the
calibration time period needs to be at least 2.5 hours. That's where
my figure of 3 hours comes from.
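That relationship can be written out directly. A minimal sketch, using the 100 msec uncertainty and 11 ppm target from the paragraph above:

```python
# Required calibration interval so that a given endpoint timestamp
# uncertainty stays below a target frequency error, assuming the
# timestamp uncertainty is the only significant error source.

def required_interval(timestamp_uncertainty_s, target_ppm):
    """Interval (seconds) long enough that a timestamp error of
    the given size contributes less than target_ppm of frequency
    error."""
    return timestamp_uncertainty_s / (target_ppm * 1e-6)

seconds = required_interval(0.1, 11)  # 100 msec uncertainty, 11 ppm
hours = seconds / 3600                # about 2.5 hours
```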
I don't think it will be difficult for a user of my app to perform this
calibration. All he has to do is to ensure Internet connectivity is
turned on (it could be cell or wi-fi), hit the calibrate button in my
app, and leave the phone on charge and go to bed. The app will only
hit the time servers a couple of times at the beginning of the
calibration period and a couple of times at the end. In between, it
will just be counting audio data blocks.
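Querying the servers a couple of times at each end and averaging the results would reduce the jitter of any single query. A minimal sketch of that averaging; the numbers are hypothetical, and in practice each server/local pair would come from an actual NTP query (e.g. via a library such as Python's third-party ntplib):

```python
# Average the (server_time - local_time) offset over several NTP
# queries taken close together, to reduce single-query jitter.

import statistics

def average_offset(samples):
    """samples: iterable of (server_time, local_time) pairs in
    seconds. Returns the mean clock offset."""
    return statistics.mean(server - local for server, local in samples)

# Hypothetical query results at one end of the calibration window.
samples = [(1000.040, 1000.000), (2000.060, 2000.000)]
offset = average_offset(samples)
```

Applying the averaged offset to the local timestamps at the start and end of the window gives the two calibrated timestamps the rate calculation needs.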