Multifunction DAQ


Synchronous AO based on two AIs

Hi All,

 

I am using a cDAQ-9174 to read two waveforms on a 9205 module and to generate, on a 9264 module, an output signal that is the product of the two input signals. The AI and AO operations should be synchronized, with minimal delay at the output (no more than a millisecond or so).
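In rough pseudocode terms, this is what I'm after, sketched here with the nidaqmx Python API (the device/channel names and the rate and chunk values are placeholders; my actual code is the attached LabVIEW VIs, and clock/trigger sharing is left out of this sketch):

import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 50_000      # placeholder sample rate
CHUNK = 10_000     # samples read and written per loop iteration

with nidaqmx.Task() as ai, nidaqmx.Task() as ao:
    ai.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0:1")  # two inputs on the 9205
    ao.ao_channels.add_ao_voltage_chan("cDAQ1Mod2/ao0")    # one output on the 9264
    ai.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)
    ao.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)

    ao.write(np.zeros(CHUNK), auto_start=False)  # prime the AO buffer before starting
    ao.start()
    ai.start()
    while True:
        a, b = ai.read(number_of_samples_per_channel=CHUNK)
        ao.write(np.asarray(a) * np.asarray(b))  # output = product of the two inputs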

 

I built the code based on this post. The issue I am currently facing is a large lag at the output (on the order of a second). To measure the delay, I routed one of the input signals directly to the output (without the multiplication by the second signal) and captured the input and output on an oscilloscope (see attached). As you can see, there is a few-hundred-ms delay between them.

 

Can you please help with resolving this issue? I know there are quite a few posts and examples on this topic, but I am new to LabVIEW, and after going through many of them I am still not able to figure it out myself. Your help with this specific example would be highly appreciated.

 

Thanks,

Hayk

 

[Attachments: hihike_0-1615835269065.png, hihike_1-1615835458688.png]

 

 

 

Message 1 of 4

Here are the VIs as well.

Message 2 of 4

A few things first:

  • Any transaction that goes through a software layer is non-deterministic.
  • In your case, you read a chunk of samples (10k) and then write it to AO; along that path there are software overheads (OS + driver + hardware communication) that go unaccounted for.
  • If your concern is synchronization: the tasks are synchronized (data changes at the AO and AI happen on the same sample-clock edges), because you are using a Start Trigger and sharing the same sample clock (see the sketch just below this list).
  • The delay you are seeing is non-deterministic behaviour, which is expected, since the data handoff between AI and AO happens at the software layer rather than being looped back directly in hardware.
  • If you are looking for a system with a deterministic delay between AI and AO, you might need to look at using a cRIO instead of a cDAQ.
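To make that concrete, here is a minimal sketch of the same arrangement using the nidaqmx Python API (device, channel, and terminal names such as cDAQ1Mod1 and /cDAQ1/ai/StartTrigger are hypothetical; in LabVIEW the equivalent settings live on the DAQmx Timing and Trigger nodes):

import nidaqmx
from nidaqmx.constants import AcquisitionType

ai = nidaqmx.Task()
ao = nidaqmx.Task()
ai.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0:1")
ao.ao_channels.add_ao_voltage_chan("cDAQ1Mod2/ao0")

# Both tasks tick on the AI sample clock...
ai.timing.cfg_samp_clk_timing(50_000, sample_mode=AcquisitionType.CONTINUOUS)
ao.timing.cfg_samp_clk_timing(50_000, source="/cDAQ1/ai/SampleClock",
                              sample_mode=AcquisitionType.CONTINUOUS)
# ...and AO arms on the AI start trigger, so sample 0 lines up on both tasks.
ao.triggers.start_trigger.cfg_dig_edge_start_trig("/cDAQ1/ai/StartTrigger")

ao.write([0.0] * 100, auto_start=False)  # AO needs data in its buffer before starting
ao.start()  # armed; waits for the AI start trigger
ai.start()  # both tasks now start on the same edge and share one clock
# (The read/process/write loop is omitted; this only shows the synchronization.)
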
Santhosh
Soliton Technologies

Message 3 of 4

 

I was initially surprised that your picture didn't show considerably *more* latency. You have latency contributions from an AI task buffer, an AO task buffer, and a hardware output buffer (FIFO) on the cDAQ. I assumed each would be a significant contributor, but it turns out that only one of them is.

 

You will not be able to achieve a consistent 1 msec latency from a cDAQ system hanging off a USB connection to a Windows PC. You *might* be able to get down closer to 10 msec (just an educated guess), but it won't be entirely consistent.

Your AI task is giving you 200 msec of latency to start with because you're reading 10k samples at a time from a 50 kHz task: 10,000 / 50,000 = 0.2 s, so the 1st sample in each set of 10k will always have happened at least 200 msec ago.

 

Your AO task buffer turns out to be very small. Its size is set by the # of samples you write to the task prior to starting it, which is only 10. I'm kind of surprised that doesn't lead to chronic buffer-underflow errors: the driver can only move 10 samples at a time across USB, and must be managing to do that at a 5 kHz rate to keep up with your 50 kHz sample rate.

The good news about this small AO buffer is the minimal latency it adds: 10 samples / 50 kHz = about 0.2 msec.

 

The cDAQ-9174 appears to have a 127-sample AO FIFO, so that'll tend to add about another 2.5 msec of latency (127 / 50 kHz ≈ 2.5 msec). There *might* be some deep-down DAQmx properties that let you use less than the entire 127-sample FIFO to reduce latency there, but I've only explored such things with desktop devices (PCI, PCIe, PXI). Since the 127-sample FIFO is shared among all channels, a sneaky trick might be to write, say, 4 channels of AO. That'd cut the FIFO's latency contribution to roughly 0.6 msec (127 / 4 ≈ 31 samples per channel).
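Putting those three contributors side by side (plain arithmetic, using the numbers above):

rate = 50_000        # samples/s
ai_chunk = 10_000    # samples per AI read
ao_prewrite = 10     # samples written to AO before starting it
fifo = 127           # cDAQ-9174 AO FIFO depth, shared across channels

print(ai_chunk / rate)      # 0.2 s      -- AI task buffer: the dominant term
print(ao_prewrite / rate)   # 0.0002 s   -- AO task buffer
print(fifo / rate)          # ~0.0025 s  -- FIFO with 1 AO channel
print((fifo // 4) / rate)   # ~0.0006 s  -- FIFO share per channel with 4 AO channels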

 

So let's aim for around 10 msec of latency, like this: set the AI task to read 400 samples at a time; that's 8 msec of latency. Pre-write 50 samples to the AO task instead of 10 (it feels safer to me to increase the AO task buffer at least a little); that's another 1 msec of latency. And try the trick of writing 4 channels of AO to reduce the FIFO latency. The other 3 channels can be copies of the 1 you care about if they aren't physically wired to anything, or they can just be 0 values. A sketch of these settings follows below.
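Here's what those settings look like in one place, sketched with the nidaqmx Python API (hypothetical channel and terminal names; in LabVIEW the same knobs are the sample-clock source, the start trigger, the pre-write size, and the regeneration property). Disallowing regeneration is my own addition, so that an underflow errors out loudly instead of replaying stale data:

import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType, RegenerationMode

RATE = 50_000
READ_CHUNK = 400    # 400 / 50 kHz = 8 msec of AI latency
AO_PREWRITE = 50    # 50 / 50 kHz = 1 msec of AO buffer latency

ai = nidaqmx.Task()
ao = nidaqmx.Task()
ai.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0:1")
ao.ao_channels.add_ao_voltage_chan("cDAQ1Mod2/ao0:3")   # 4 channels to shrink each FIFO share
ai.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)
ao.timing.cfg_samp_clk_timing(RATE, source="/cDAQ1/ai/SampleClock",
                              sample_mode=AcquisitionType.CONTINUOUS)
ao.triggers.start_trigger.cfg_dig_edge_start_trig("/cDAQ1/ai/StartTrigger")
ao.out_stream.regen_mode = RegenerationMode.DONT_ALLOW_REGENERATION  # assumption: fail loudly on underflow

ao.write(np.zeros((4, AO_PREWRITE)), auto_start=False)  # the pre-write sizes the AO task buffer
ao.start()  # armed; waits for the AI start trigger
ai.start()
try:
    while True:
        a, b = ai.read(number_of_samples_per_channel=READ_CHUNK)
        out = np.asarray(a) * np.asarray(b)
        ao.write(np.tile(out, (4, 1)))  # ao0 carries the signal; ao1..ao3 are throwaway copies
finally:
    ai.close()
    ao.close()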

 

If you can get all that working and can confirm the ~10 msec latency, you'll now also know what you can try to tweak to change it.  Just don't expect to get all the way down to 1 msec or less.

 

 

-Kevin P

Message 4 of 4