I am having a very strange problem when using LabVIEW to acquire audio
data via the Windows API from a Creative Professional E-MU 1616m sound
card. The goal is to acquire sound in 24-bit resolution.
When capturing sound in 16-bit mode (as set in the LabVIEW software),
the E-MU 1616m behaves as expected, with a 105dB SNR and approximately
-130dB noise floor after dithering. However, when switching to
24-bit capture mode, very severe truncation occurs, sending the
harmonic distortion and noise through the roof. After investigating
our LabVIEW code fairly thoroughly, I am still left wondering what
the problem might be.
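In case it helps with diagnosis: below is a minimal C sketch of how I
understand a 24-bit capture stream is normally opened through the
waveIn API. This is my own illustration, not our actual code (our
acquisition goes through LabVIEW, and I don't know exactly what format
structure it passes to the driver). Many drivers want
WAVE_FORMAT_EXTENSIBLE for anything above 16 bits; if only a plain
WAVEFORMATEX is passed, a driver may fall back to something other
than true 24-bit data.

    #include <windows.h>
    #include <mmreg.h>
    #include <initguid.h>
    #include <ks.h>
    #include <ksmedia.h>
    /* link with winmm.lib */

    /* Hypothetical helper: request 2-channel, 96kHz, 24-bit PCM capture. */
    HWAVEIN open_capture_24bit_96k(void)
    {
        WAVEFORMATEXTENSIBLE wfx = {0};
        wfx.Format.wFormatTag           = WAVE_FORMAT_EXTENSIBLE;
        wfx.Format.nChannels            = 2;
        wfx.Format.nSamplesPerSec       = 96000;
        wfx.Format.wBitsPerSample       = 24;  /* 3-byte container */
        wfx.Format.nBlockAlign          = wfx.Format.nChannels * wfx.Format.wBitsPerSample / 8;
        wfx.Format.nAvgBytesPerSec      = wfx.Format.nSamplesPerSec * wfx.Format.nBlockAlign;
        wfx.Format.cbSize               = sizeof(wfx) - sizeof(WAVEFORMATEX);
        wfx.Samples.wValidBitsPerSample = 24;  /* all 24 bits meaningful */
        wfx.dwChannelMask               = SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT;
        wfx.SubFormat                   = KSDATAFORMAT_SUBTYPE_PCM;

        HWAVEIN hwi = NULL;
        MMRESULT rc = waveInOpen(&hwi, WAVE_MAPPER, (WAVEFORMATEX *)&wfx,
                                 0, 0, CALLBACK_NULL);
        return (rc == MMSYSERR_NOERROR) ? hwi : NULL;
    }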
I have compiled a number of screenshots that show the problem in more detail.
Here is some background on the experiment:
For these tests, both the analog and digital audio were generated by
an Audio Precision System Two and passed directly into the respective
line-level or digital audio inputs. Digital audio was tested over
both coax and optical cables. In the sound card
control software, the audio was sent directly from the input channel
into the WAVE IN L/R (via the Windows API, I assume). The
sampling rate for the profile was 96kHz.
The sampling rate in all LabVIEW functions and in the AP digital generator was set to 96kHz.
ANALOG 20dBu 96kHz 16bit.jpg
In this test, everything looks fine. The audio input is at
full-scale for the E-MU's ADCs, and it exhibits the expected 16-bit
performance (with dithering).
ANALOG 20dBu 96kHz 24bit.jpg
Now we instruct the driver to capture sound in 24 bits. Notice
that the noise floor and THD+N go up considerably. Effects of
truncation become visible on the time-domain display.
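To be concrete about what "truncation" looks like here: the captured
waveform behaves as if the low-order bits of every sample had been
zeroed, something like the following (purely illustrative C, not a
claim about where in the signal chain it happens):

    #include <stdint.h>

    /* Zero the low bits of a signed 24-bit sample, keeping only the
     * top bits_kept bits. This reproduces the staircase shape and
     * raised THD+N we see in the 24-bit captures. */
    int32_t truncate_low_bits(int32_t s24, int bits_kept)
    {
        int drop = 24 - bits_kept;
        return (s24 >> drop) << drop;  /* arithmetic shift keeps the sign */
    }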
ANALOG -20dBu 96kHz 16bit.jpg
Now we drop the input level to -20dBu. The performance
starts to look a little messy but is still acceptable. Note,
however, the high peaks on the odd harmonics.
ANALOG -20dBu 96kHz 24bit.jpg
Now we try to capture at 24 bits. The effects of truncation are extreme at this low signal level.
ANALOG -60dBu 96kHz 16bit.jpg
Now we are at extremely low signal levels. Individual
quantization levels can be seen on the signal. Dither is also
present. Performance is still good.
ANALOG -60dBu 96kHz 24bit.jpg
However, when increasing the resolution to 24 bits (which should
multiply the number of quantization levels by 256), our signal is
reduced to a square wave. Obviously something is wrong.
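A quick back-of-the-envelope check shows why this points to severe
truncation. Assuming full scale sits at +20dBu, a -60dBu input is
80dB below full scale:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double rel = pow(10.0, -80.0 / 20.0);  /* 80dB below full scale */
        printf("16-bit: peak spans about %.1f codes\n", 32767.0 * rel);   /* ~3.3 */
        printf("24-bit: peak spans about %.1f codes\n", 8388607.0 * rel); /* ~839 */
        return 0;
    }

About 3 codes at 16 bits matches the visible quantization steps in the
previous screenshot; about 839 codes at 24 bits should look perfectly
smooth. Collapsing to a square wave means the effective step size is
hundreds of 24-bit codes wide, i.e. far fewer than 16 real bits are
surviving.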
DIGITAL 0dB 16bit 96kHz 16bit.jpg
Now on to the digital tests. We start with full-scale. We
used an AP outputting a properly dithered 16-bit signal over an optical
cable. The soundcard is instructed to receive in 16-bit mode. It looks good.
DIGITAL 0dB 16bit 96kHz 24bit.jpg
Using the same input, we change to 24-bit receive mode.
DIGITAL 0dB 24bit 96kHz 16bit.jpg
Now we set up the AP to output a properly dithered 24-bit signal at
full-scale. The dips in the frequency domain show us that
something is wrong.
DIGITAL 0dB 24bit 96kHz 24bit.jpg
Receiving in 24-bit mode. Same story as before.
DIGITAL -90dB 16bit 96kHz 16bit.jpg
Now we decrease the amplitude to a low level. Well-implemented dither is shown here clearly.
DIGITAL -90dB 16bit 96kHz 24bit.jpg
However, receiving in 24-bit mode reduces the signal to a dithered square wave.
DIGITAL -90dB 24bit 96kHz 16bit.jpg
Here is the low-level signal with the AP generating a 24-bit
signal. Dither is applied, but vanishes in the E-MU 1616m.
It seems the dither level has been changed; this is the cause of
our dips from before.
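For reference, by "dither level" I mean the amplitude of the dither
noise relative to the LSB of the target word length; standard TPDF
dither spans about +/-1 LSB. A sketch of the idea (my own
illustration, not the E-MU driver's actual algorithm):

    #include <stdlib.h>

    /* TPDF dither of +/-1 LSB at a given word length, for a signal on
     * a +/-1.0 full scale. An LSB at 16 bits is 256 times larger than
     * an LSB at 24 bits, so dither scaled for the wrong word length
     * either vanishes below the noise floor or swamps low-level detail. */
    double tpdf_dither(int bits)
    {
        double lsb = 2.0 / (double)((1u << bits) - 1);
        double r1 = (double)rand() / RAND_MAX;
        double r2 = (double)rand() / RAND_MAX;
        return (r1 - r2) * lsb;  /* triangular PDF, peaks at +/-1 LSB */
    }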
DIGITAL -90dB 24bit 96kHz 24bit.jpg
And finally, we transmit and receive in 24 bits. Here are the results.
We have achieved similar results using several of your breakout boxes and soundcards.
Attached are all screenshots, as well as the main VI (AP Test.vi) and
its dependent VIs. There are a number of SVT VIs in the
project, but they can be ignored since they are not related to the
problem.
Any help would be greatly appreciated.
Best Regards,
Brett Gildersleeve