LabVIEW


Trouble with accurate (ms) timing of Sound Output Write.vi

Hello all,

 

I am delivering a 10 ms, 1 kHz sine wave to a speaker via the Sound Output Write.vi. I need this stimulus to be delivered with millisecond precision. I am triggering it with a case structure gated by a millisecond timer within an independent while loop. I am measuring the stimulus delivery by recording the voltage across the terminals of the speaker with an AI port on a BNC-2090A board. The problem is that there is a delay of about 30-60 ms (it varies from trial to trial) between the trigger and the delivery of the stimulus.

 

Any ideas as to how I can increase the temporal accuracy of the stimulus delivery or why the delay is so variable?

 

I am preparing the waveform outside of the while loop with the Sine Waveform VI (5000 samples/s, 1 channel, 16-bit resolution). I'm preparing the sound task outside the while loop with Sound Output Configure (50 samples) and Sound Output Set Volume. I'm running LabVIEW 2012 SP1 64-bit on 64-bit Windows 7.
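For reference, the burst described above works out to just 50 samples. A minimal sketch of the equivalent waveform preparation (in Python rather than LabVIEW; the function name and amplitude are illustrative, the rates are taken from the post):

```python
import numpy as np

# Parameters from the post: 10 ms burst, 1 kHz tone, 5000 S/s, 16-bit.
FS = 5000        # sample rate (S/s)
F_TONE = 1000    # tone frequency (Hz)
DUR = 0.010      # burst duration (s)

def make_tone_burst(fs=FS, f=F_TONE, dur=DUR, amplitude=0.9):
    """Return a 16-bit sine burst like the one prepared outside the loop."""
    n = int(round(fs * dur))                 # 50 samples here
    t = np.arange(n) / fs
    wave = amplitude * np.sin(2 * np.pi * f * t)
    return (wave * 32767).astype(np.int16)

burst = make_tone_burst()
print(len(burst))   # 50 samples = 10 ms at 5 kS/s
```

Note that at 5 kS/s a 1 kHz tone has only five samples per cycle, which is above Nyquist but coarse; the sound card's reconstruction filter smooths it.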

 

I am using the same timer and case structure setup to trigger a 10 ms LED pulse in parallel and the LED always triggers perfectly. I get the impression the delay has something to do with the operating system interfering with the sound generation, but I could be wrong.

 

Any tips for precise timing would be greatly appreciated!

Thanks very much,

Josh

Message 1 of 5

Use a 555 multivibrator IC.

Message 2 of 5

I've considered it, but I would like to be able to control the volume from software.

Message 3 of 5

Without thinking about it too deeply, I expect there is some variability in getting the sound output task going, likely OS-related.  The only workaround I can think of is this:

 

1) Run a sound input and a sound output (in parallel) within the same while loop, at the same sample rate, as a continuous operation.  Why run the concurrent sound input, you ask?  Because it paces the loop perfectly (you have to wait for input samples), and prevents you from overstuffing (or starving) your generation of output samples. This is a very old trick, BTW.
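The payoff of this trick is that the tone can only ever start on a block boundary, so the trigger-to-tone delay becomes a fixed number of blocks instead of whatever Windows decides. A pure-logic simulation (Python, not LabVIEW; the blocking sound-input read that actually paces the loop is only indicated in a comment):

```python
import numpy as np

BLOCK = 50   # samples per loop iteration (the 50-sample output buffer)
FS = 5000    # sample rate (S/s), as in the original post

def paced_loop(n_iters, trigger_iter, tone_block):
    """Simulate the input-paced loop: each iteration would block on a
    sound-input read (which paces the loop in real life), then write one
    output block, substituting the tone burst on the trigger iteration."""
    silence = np.zeros(BLOCK, dtype=np.int16)
    blocks = []
    for i in range(n_iters):
        # real code: sound_input_read(BLOCK)  # blocks ~10 ms, paces loop
        blocks.append(tone_block if i == trigger_iter else silence)
    return np.concatenate(blocks)

tone = (0.9 * 32767 * np.cos(2 * np.pi * 1000 * np.arange(BLOCK) / FS)).astype(np.int16)
out = paced_loop(10, trigger_iter=3, tone_block=tone)
# The tone always lands exactly in block 3: a fixed 3 * 50 / 5000 = 30 ms delay.
```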

 

2) Have a means (a zero-delay notifier comes to mind) to tell that loop to substitute the tone pulse for the silence samples on the sound output task.

 

3) Measure and characterize the delay from your trigger/notifier (you mentioned your parallel LED control is low-latency) until the actual pulse appears, and compensate for it in your other processing.  A delay will still be there, of course, but I predict there will be almost no variability, since the output generation is continuous.
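Since you are already recording the speaker voltage on an AI channel, one simple way to measure that fixed delay is a threshold crossing on the recorded trace. A minimal sketch with synthetic data (Python; the function name and threshold are illustrative):

```python
import numpy as np

def onset_index(recorded, threshold):
    """Return the index of the first sample whose magnitude exceeds
    threshold, or -1 if the recording never crosses it."""
    idx = np.flatnonzero(np.abs(recorded) > threshold)
    return int(idx[0]) if idx.size else -1

# Synthetic recording: 30 ms of silence, then a 10 ms, 1 kHz cosine
# burst, sampled at 5 kS/s as in the original post.
fs = 5000
rec = np.zeros(500)
rec[150:200] = np.cos(2 * np.pi * 1000 * np.arange(50) / fs)

delay_ms = onset_index(rec, threshold=0.1) / fs * 1000
print(delay_ms)  # 30.0 -> subtract this constant from downstream timestamps
```

If the recording is noisy, averaging several trials or using a higher threshold relative to the noise floor makes the onset estimate more robust.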

 

Hope this gives you a way forward.

 

Dave

David Boyd
Sr. Test Engineer
Abbott Labs
(lapsed) Certified LabVIEW Developer
Message 4 of 5

@adam.douglass wrote:

I get the impression the delay has something to do with the operating system interfering with the sound generation, but I could be wrong.

Josh


Dear Adam/Josh,

 

     In this case, you are almost certainly correct.  While LabVIEW does have millisecond timers, when you run it in Windows, you need to remember that Windows takes precedence and doesn't know (or care) about LabVIEW's notion of "time".  In particular, you are asking Windows to "play the sound", and who knows what goes on behind the curtain.

 

     If accurate timing is important to you, you probably need to think about a DSP chip that LabVIEW can trigger through a DIO line.  Note that you'll still be going through Windows, but with a bit of luck, this will be more predictable.  If this is really important to you, consider moving the code to a Real-Time platform, something along the lines of a myRIO (if you are a student, these are not that expensive!).  I should also mention that (if you are a student) a myDAQ probably has the hardware you need on a non-Real-Time platform.

 

Bob Schor

Message 5 of 5