
LabVIEW Sound and Vibration read write .WAV and gain handling

Solved!

Hi,

 

I am mid-way through developing an application to monitor and record dynamic (acoustic) pressure signals.  Development is going well, except that I am having a hard time understanding the best way to handle the gain associated with reading a .WAV file.  I am running LabVIEW 2018 SP1 Full Development System with the Sound and Vibration toolkit.  The built-in examples below illustrate what I am talking about:

 

Monitoring:

C:\Program Files\National Instruments\LabVIEW 2018\examples\Sound and Vibration\Frequency Analysis\Multifunction FFT (DAQmx)

 

Writing:

C:\Program Files\National Instruments\LabVIEW 2018\examples\Sound and Vibration\WAV\Write Waveforms (DAQmx to WAV File)

 

Reading:

C:\Program Files\National Instruments\LabVIEW 2018\examples\Sound and Vibration\Getting Started\Demonstration of Analysis VIs (Simulated)

 

TEST:
1. Perform a steady-state test with the Multifunction FFT (DAQmx) example to monitor the signal amplitudes in the frequency domain, and take screenshots of the display.
2. Stop the example VI.
3. Record the raw (unscaled) data using Write Waveforms (DAQmx to WAV File).
4. Read the files back with Demonstration of Analysis VIs (Simulated), applying the same sensor sensitivity used in the monitoring step.

 

I noticed a difference in signal level between monitoring with the FFT example and playing back the .WAV file I recorded, even though I am using the same sensor sensitivity and dB reference for both operations.

 

I posted a snippet of the write WAV VI to illustrate the coercion/gain applied to the signal before it is written (sorry for the broken wires, but this is just to show the signal handling).  I can't find anything in the read example VI that looks at the header data to interpret that gain.

 

My question is: why is there no gain handling in the read example, and what is the best way to implement it?  Or, alternatively, should I bail on using .WAV for storage and go with TDMS?  It may be a little late to do so, which is clearly my own fault for not fully researching the file format's limitations (RE: DBL full scale +1.0/-1.0).
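
To make the scaling issue concrete, here is a rough numeric sketch in Python (not LabVIEW, and the 10 Pa full-scale value is made up purely for illustration) of what the +/-1.0 full-scale limitation implies: engineering-unit data has to be divided down to fit the WAV range on write, and unless the same factor is multiplied back in on read, every level comes out low by a fixed number of dB.

    import numpy as np

    # Rough illustration (not the actual S&V example code): pressure data in Pa
    # must be divided by a full-scale factor to fit the +/-1.0 WAV range on
    # write, and multiplied by the same factor on read.

    FULL_SCALE_PA = 10.0   # made-up full-scale value chosen for the file

    def scale_for_wav(signal_pa):
        """Map pressure in Pa into the +/-1.0 WAV range."""
        return signal_pa / FULL_SCALE_PA

    def unscale_from_wav(wav_samples):
        """Recover pressure in Pa from +/-1.0 WAV samples."""
        return wav_samples * FULL_SCALE_PA

    # Example: a 1 kHz tone at roughly 94 dB SPL (1 Pa RMS, ~1.414 Pa peak).
    t = np.linspace(0.0, 1.0, 51200, endpoint=False)
    pressure = 1.414 * np.sin(2.0 * np.pi * 1000.0 * t)   # Pa

    wav_data = scale_for_wav(pressure)       # stays within about +/-0.15
    recovered = unscale_from_wav(wav_data)   # back to Pa

    # If the read side skips the multiply, every level reads low by
    # 20*log10(FULL_SCALE_PA) = 20 dB.
    print(np.allclose(pressure, recovered))  # True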

 

A little background is in order for those asking "Why do you care?"  I realize that ideally there would be an end-to-end calibration of the recording signal chain, but that is not possible in my application.  We are using brand-new calibrated hardware, amplifiers, and DAQ equipment.  The signal level needed to excite these sensors (microphones) is too great for a pistonphone, so I am stuck relying on the factory NIST-traceable calibration values.

 

Thank you for making it through this rambling post!

Solution (accepted by topic author billybaru13)

I answered my own question.  I am simply inverting the scaling that is applied right before "Sound File Write Simple.vi" when I read the file back for processing; in my case that happens in "State 0" of "Demonstration of Analysis VIs (Simulated).vi".  Good, but scary, to know that scaling was occurring without my immediately realizing it!  It just goes to show you need to know every block that is affecting your signal, example or not...  Hope this helps someone else down the road.
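
For reference, the text equivalent of the fix (again in Python rather than LabVIEW, with a hypothetical file name and assuming the full-scale factor used at write time is known) looks like this:

    import numpy as np
    from scipy.io import wavfile   # any WAV reader works; scipy is just for illustration

    # Undo the write-side scaling on read: multiply the +/-1.0 samples by the
    # same full-scale factor that was divided out before the file was written.

    FULL_SCALE_PA = 10.0                               # must match the write-side factor
    rate, samples = wavfile.read("recorded_run.wav")   # hypothetical file name

    if samples.dtype.kind == "i":
        # Integer WAV data: normalize to +/-1.0 before rescaling.
        samples = samples.astype(np.float64) / np.iinfo(samples.dtype).max

    pressure_pa = samples * FULL_SCALE_PA              # back to engineering units

    # From here the sensitivity / dB-reference handling matches the live
    # monitoring path, so the FFT levels agree with the DAQmx display.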
