03-15-2021 02:22 PM
Hi All,
I am using a cDAQ-9174 to read two waveforms on a 9205 module and to generate an output signal on a 9264 that is the product of the two input signals. The AI and AO operations should be synchronized, with minimal delay at the output (no more than a millisecond or so).
I built the code based on this post. The issue I am currently facing is a large lag at the output (on the order of a second). To measure the delay, I routed one of the input signals directly to the output (without the multiplication by the second signal) and captured the input and output on an oscilloscope (see attached). As you can see, there is a few-hundred-msec delay between them.
Can you please help me resolve this issue? I know there are quite a few posts and examples on this topic, but I am new to LabVIEW and, after going through many of them, I am still not able to figure it out myself. So your help with this specific example would be highly appreciated.
Thanks,
Hayk
03-15-2021 02:32 PM
Here are the VIs as well.
03-15-2021 08:01 PM
A few things first,
03-16-2021 04:07 AM
I was initially surprised that your picture didn't show considerably *more* latency. You have latency contributions from an AI task buffer, an AO task buffer, and a hardware output buffer (FIFO) on the cDAQ. I assumed each would be a significant contributor, but it turns out that only one of them is.
Your AI task is giving you 200 msec of latency to start with because you're reading 10k samples at a time from a 50 kHz task (10,000 samples / 50,000 samples/sec = 200 msec). The 1st sample in each set of 10k samples will always have happened at least 200 msec ago.
Your AO task buffer turns out to be very small. Its size is set by the # samples you write to the task prior to starting it, which is only 10. I'm kinda surprised that doesn't lead to chronic errors from buffer underflow. The driver can only move 10 samples at a time across USB, and must be managing to do that at a 5 kHz rate to keep up with your 50 kHz sample rate.
The good news of this small AO buffer is the minimal latency it adds -- about 0.2 msec.
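If it helps to see that implicit buffer-sizing mechanism outside of LabVIEW, here's a minimal sketch using NI's nidaqmx Python package (same DAQmx driver underneath as the LabVIEW VIs). The module name is a placeholder, not taken from your setup:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as ao:
    ao.ao_channels.add_ao_voltage_chan("cDAQ1Mod3/ao0")   # placeholder module name
    ao.timing.cfg_samp_clk_timing(50_000, sample_mode=AcquisitionType.CONTINUOUS)

    # With no explicit buffer configuration, DAQmx sizes the AO task buffer
    # from this pre-start write: 10 samples here, i.e. only ~0.2 msec at 50 kHz.
    ao.write([0.0] * 10, auto_start=False)
    print(ao.out_stream.output_buf_size)   # should report the pre-write size
```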
The cDAQ-9174 appears to have a 127-sample AO FIFO, so that'll tend to add about another 2.5 msec of latency. There *might* be some deep-down DAQmx properties that'd let you use less than the entire 127-sample FIFO to reduce latency there, but I've only explored such things with desktop devices (PCI, PCIe, PXI). Since the 127-sample FIFO is shared among all channels, a sneaky trick might be to write, say, 4 channels of AO. That'd cut the FIFO latency contribution to ~0.6 msec.
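Putting numbers on all three contributors for your current configuration (the FIFO figure assumes the 127-sample size above):

```python
rate = 50_000          # Hz, the common AI/AO sample rate

ai_read = 10_000       # samples per AI read in the posted VI
ao_prewrite = 10       # samples pre-written to the AO task
fifo = 127             # cDAQ-9174 AO FIFO, shared across channels

print(ai_read / rate * 1e3)      # 200.0  msec -- AI task buffer
print(ao_prewrite / rate * 1e3)  #   0.2  msec -- AO task buffer
print(fifo / rate * 1e3)         #   2.54 msec -- FIFO, 1 AO channel
print(fifo / 4 / rate * 1e3)     #  ~0.64 msec -- FIFO share with 4 AO channels
```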
So let's aim for around 10 msec of latency like this (a rough sketch follows below):
1. Set the AI task to read 400 samples at a time. That's 8 msec of latency.
2. Pre-write 50 samples to the AO task instead of 10 (it feels safer to me to increase the AO task buffer at least a little). That's another 1 msec.
3. Try the trick of writing 4 channels of AO to reduce the FIFO latency. The other 3 can be copies of the 1 you care about if they aren't physically wired to anything, or they can just be 0 values.
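LabVIEW diagrams don't paste well into a forum post, so here's the loop structure sketched in Python with nidaqmx (each call maps onto a DAQmx VI on your diagram). The module names, channel assignments, and start ordering are my assumptions, not taken from your VIs -- treat this as a sketch to adapt, not a drop-in:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, RegenerationMode

RATE = 50_000        # Hz, same as the original tasks
AI_READ = 400        # samples per read      -> ~8 msec of AI-buffer latency
AO_PREWRITE = 50     # pre-written samples   -> ~1 msec of AO-buffer latency
N_AO_CH = 4          # extra AO channels shrink each channel's FIFO share

with nidaqmx.Task() as ai, nidaqmx.Task() as ao:
    ai.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0:1")   # the two 9205 inputs
    ai.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)

    ao.ao_channels.add_ao_voltage_chan("cDAQ1Mod3/ao0:3")   # four 9264 outputs
    ao.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)
    # (For tighter sync you could source the AO sample clock from the AI
    # task's clock terminal, e.g. "/cDAQ1/ai/SampleClock" -- name assumed.)

    # Stream fresh data every loop; don't let DAQmx replay stale buffer contents.
    ao.out_stream.regen_mode = RegenerationMode.DONT_ALLOW_REGENERATION

    # Pre-start write of 50 zeros per channel; absent an explicit buffer
    # config, this also sizes the AO task buffer at 50 samples.
    ao.write([[0.0] * AO_PREWRITE for _ in range(N_AO_CH)], auto_start=False)

    ai.start()
    # Grab the first block before starting AO so the output never starves
    # while waiting for data.
    ch0, ch1 = ai.read(number_of_samples_per_channel=AI_READ)
    ao.start()

    while True:
        product = [a * b for a, b in zip(ch0, ch1)]
        # Channel 0 carries the real signal; the other 3 just occupy FIFO space.
        # With regeneration disallowed, this write blocks until buffer space
        # frees up, which paces the loop to the hardware rate.
        ao.write([product] * N_AO_CH)
        ch0, ch1 = ai.read(number_of_samples_per_channel=AI_READ)
```

The pacing comes for free: because regeneration is disallowed, the AO write only returns as the hardware consumes samples, so there's no software timing to tune.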
If you can get all that working and can confirm the ~10 msec latency, you'll now also know what you can try to tweak to change it. Just don't expect to get all the way down to 1 msec or less.
-Kevin P