[ntp:questions] Re: Audio refclock+linux+soundcard = yuck
David L. Mills
mills at udel.edu
Sun Sep 28 21:44:17 UTC 2003
As long as the codec samples reliably at 125 us/sample and the timestamp
on the audio buffer is within a sample or two, it doesn't matter what
the playin/playout delays are, even as much as 600 ms, although that
sounds (sic) ridiculous. In fact, even if the timestamp is jittery, the
driver should still synch on the codec sample stream. I assume you have
turned on the sidetone fudge flag and listened to the audio and watched
the AGC in the clockstats trace to be sure the signal is good and the
amplitude in range.
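The claim that stamp jitter is harmless can be illustrated with a small back-of-envelope sketch (hypothetical buffer size and jitter figures, not the actual ntpd driver code): at 8 kHz the codec delivers one sample every 125 us, so the driver disciplines on the sample count and the buffer timestamps only set the overall offset, which averages out.

```python
import statistics

SAMPLE_PERIOD = 125e-6   # 8 kHz codec: one sample every 125 us
BLOCK = 320              # samples per audio buffer = 40 ms (assumed size)

# True arrival time of each buffer's first sample (codec clock is steady):
true = [k * BLOCK * SAMPLE_PERIOD for k in range(100)]

# Buffer timestamps as the OS reports them, with +/-2 ms of jitter
# (deterministic here so the example is repeatable):
jitter = [0.002 * (-1) ** k for k in range(100)]
stamped = [t + j for t, j in zip(true, jitter)]

# The driver syncs on the codec sample stream; each buffer timestamp
# only contributes one noisy estimate of the overall offset, so
# averaging over many buffers washes the stamp jitter out:
offsets = [s - k * BLOCK * SAMPLE_PERIOD for k, s in enumerate(stamped)]
print(abs(round(statistics.mean(offsets), 6)))   # -> 0.0
```

The codec's steady 125 us spacing is what carries the time; a 600 ms constant playout delay just shifts every stamp equally and calibrates out as a fudge.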
There is some rationale for setting the playout delay rather high, but
not the playin delay. In Solaris you can set the buffer size, which
determines the playin delay. I haven't tried this with FreeBSD, although
there are reports that at least one of the audio drivers works with
FreeBSD. I don't allow Linux within a picofarad of this place, so I
can't help there.
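The relation between capture buffer size and playin delay is simple arithmetic; a sketch with an assumed format (8000 Hz, mono, 16-bit linear, i.e. 2 bytes per sample):

```python
# Back-of-envelope: how the capture buffer size sets the playin delay.
# Assumed format: 8000 Hz, mono, 16-bit linear PCM (2 bytes/sample).
RATE = 8000            # samples per second
BYTES_PER_SAMPLE = 2

def playin_delay(buffer_bytes):
    """Seconds between a sample hitting the codec and the buffer
    holding it being handed to the reading process."""
    return buffer_bytes / (RATE * BYTES_PER_SAMPLE)

print(playin_delay(1024))   # 1 KiB buffer -> 0.064
print(playin_delay(8192))   # 8 KiB buffer -> 0.512
```

So shrinking the buffer the OS lets you set shrinks the playin delay proportionally, which is why it is the one worth keeping small.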
Watch out for the mixer. It could well be that the codec runs at 44 kHz
rain or shine and the mixer downsamples to 8 kHz, which is what I think is
going on in Solaris 8/9 and what has destroyed the once really low
jitter in prior versions. It's not clear whether the downsample is in
the codec chip itself or if some dirty rotten driver is doing it. Note
that this affects only the time; the sample stream is still golden.
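A rough sketch of why a block-based resampler hurts the time but not the samples (the block size here is an assumption for illustration, not a measured Solaris figure): if the mixer converts 44 kHz to 8 kHz one block at a time, the stamp on each delivered buffer can slip by up to one resampler block, while the 125 us spacing between the delivered samples stays exact.

```python
# Assumed mixer behavior: codec at 44 kHz, resampled to 8 kHz in
# fixed blocks. The buffer *timestamp* can be wrong by up to one
# block of codec samples; the sample spacing is untouched.
CODEC_RATE = 44000
RESAMPLER_BLOCK = 512            # codec samples per resampler pass (assumed)

worst_stamp_error = RESAMPLER_BLOCK / CODEC_RATE
print(round(worst_stamp_error * 1e3, 2), "ms")   # -> 11.64 ms
```

That kind of quantization would explain jitter appearing in the timestamps while the demodulated signal itself stays clean.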
Assuming you have set the WWV and WWVH propagation delays correctly, the
driver should switch from one to the other seamlessly.
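For reference, with the audio WWV/H driver (type 36) the two propagation delays go in as fudge time1 and time2; the delay values below are placeholders, not recommendations, and assume the fudge conventions described in the ntpd refclock documentation:

```
# ntp.conf fragment for the WWV/H audio demodulator (driver 36).
# time1 is the propagation delay from WWV (Fort Collins, CO) and
# time2 the delay from WWVH (Kauai, HI), both in seconds. The
# 8.0 ms / 20.7 ms figures are placeholders -- compute your own
# from the great-circle distance to your antenna.
server 127.127.36.0
fudge  127.127.36.0 time1 0.0080 time2 0.0207
```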
Tim Shoppa wrote:
[deleted due to fascist news system]