I'm working on a piano project and built the following VI for testing. When a falling edge is detected, two "Play Sound File" VIs play two different .wav files at the same time.
It turns out that when the case structure is activated, the two sound files are not synchronized; there is a tiny delay between them. However, if a falling edge is detected again before playback ends, the two sound files play in perfect sync.
Could anyone tell me what's going on? Both "Play Sound File" VIs are placed in the same case structure, so why is there a delay between the two?
Any help will be appreciated.
Assuming your source audio files have the same channel count, sampling rate, and bit depth (e.g. stereo / 44.1 kHz / 16-bit), you could load both files as waveforms and then add the two waveforms together. The resulting waveform can then be played using either the Sound Output Write VI or the Play Waveform express VI.
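To make the idea concrete, here is a small Python/NumPy sketch of what "adding the two waveforms" means (LabVIEW's own add node does the same elementwise sum). The `mix` function and the clipping bounds are my own illustration, assuming 16-bit samples at the same rate and channel count:

```python
import numpy as np

def mix(a, b):
    """Mix two equal-rate, equal-channel waveforms by summation.

    Pads the shorter one with silence, then clips to the 16-bit
    range so the sum doesn't overflow.
    """
    n = max(len(a), len(b))
    out = np.zeros(n, dtype=np.int32)      # widen to avoid overflow
    out[:len(a)] += a
    out[:len(b)] += b
    return np.clip(out, -32768, 32767).astype(np.int16)

a = np.array([1000, 2000, 3000], dtype=np.int16)
b = np.array([500, 500], dtype=np.int16)
print(mix(a, b))  # [1500 2500 3000]
```

Because both waveforms go out through a single playback call, they are sample-accurately aligned by construction.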
As a side note, that 0.1s delay probably won't be doing anything, as it's running in parallel with the Play Sound VIs.
Thank you for your reply.
Actually, I don't want to add the two waveforms. This is a piano project: when I press a piano key, a falling edge is detected and the VI plays the corresponding sound file.
It turns out that if I press two keys at the same time, the two sound files play separately, not fully synchronized. But if I press the two keys again before playback ends, the files are synchronized. So is there any way I could synchronize them from the start?
Change the VI and its subVIs to reentrant, and save them to your project folder with a different name.
When the two wave files are being added together, you're basically 'mixing' them. At a lower level the audio driver and audio hardware are doing the exact same thing.
Any sort of real-time audio processing and playback is about minimising latency. In your posted code the first thing I'd do is open a reference to your audio device, and keep it open for the duration of the VI. You want to open the audio device using the Continuous Samples option, so it's constantly streaming data to the audio device. The buffer size you use will depend on the audio sample size you wish to play back.
I would also pre-load all of the audio samples so they are available in memory and ready to play immediately, rather than reading from disk on every trigger. Then, when a trigger is detected, you simply write the waveform to the audio device and rely on it to do the mixing. Ensure there's a zero timeout when writing a waveform so you can immediately begin writing the second waveform.
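The architecture described above can be sketched in plain Python (no real audio driver; `FakeDevice` and `SampleBank` are illustrative names, not LabVIEW APIs): samples are loaded once up front, the "device" streams continuously, and each trigger is a non-blocking write that the device mixes into whatever is already playing.

```python
import numpy as np

class FakeDevice:
    """Stands in for a continuously streaming audio output."""
    def __init__(self, frames):
        self.buffer = np.zeros(frames, dtype=np.int32)
        self.pos = 0                      # current playback position
    def write(self, wave):
        """Non-blocking (zero-timeout) write: mix into the stream."""
        end = min(self.pos + len(wave), len(self.buffer))
        self.buffer[self.pos:end] += wave[: end - self.pos]

class SampleBank:
    def __init__(self, device):
        self.device = device
        self.samples = {}                 # key name -> waveform
    def load(self, key, wave):            # done once, up front
        self.samples[key] = np.asarray(wave, dtype=np.int32)
    def trigger(self, key):               # on falling edge
        self.device.write(self.samples[key])

dev = FakeDevice(4)
bank = SampleBank(dev)
bank.load("C4", [10, 10, 10])
bank.load("E4", [5, 5, 5])
bank.trigger("C4")    # both writes land at the same buffer position,
bank.trigger("E4")    # so the notes are mixed sample-accurately
print(dev.buffer)     # [15 15 15  0]
```

The key point is that back-to-back zero-timeout writes land at the same playback position, so the device hears both notes start together.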
This code is untested, but I'd imagine you'd have something similar.
Thank you so much! It works! The only problem is that I have to place 12 "Play Sound File" VIs, as I have 12 keys. Is there any way I could place just one in the loop but create 12 clones?
It sounds like you're on the right path. Ultimately you only need a single 'Sound Output Write', with a flag for each sample indicating whether to play it in that loop iteration. In this case the flag is your falling edge trigger. The last screenshot you posted is almost exactly what you'd need, but rather than an array of paths, you'd have an array of your audio samples. Keep in mind that a single audio file is already a 1D array of waveforms, so you'd need to build the 12 waveform arrays into either a 2D array or a cluster array.