Multifunction DAQ


Fastest Achievable PID Rates with cDAQ


All,

 

I currently have an 8-slot cDAQ-9189 with a variety of AI, AO, and other types of cards. I am implementing multiple PID controllers using feedback measured with NI 9205 cards and writing the controller output to a handful of analog output channels on an NI 9269.

As far as the software architecture is concerned, I have one producer loop that acquires the data via DAQmx using Continuous Sampling (Start and Stop are called once at the beginning and end of the program, not on each iteration), applies the relevant calibration curves, implements some basic controller logic in conjunction with some PID functions, and then writes the result to the AO channels. There are no other operations in this loop. I also have a handful of consumer loops running in parallel for streaming data to disk at ~3 MB/s and updating front panel graphs.

I can also sample and log the data pretty quickly at ~1 kHz by streaming the waveforms directly to disk using the waveform write subVI. I tried TDMS, but there appears to be a good bit of lag (~100 ms) every time you call the TDMS write block, so I have had much more success with the waveform write block.
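Since the actual code here is LabVIEW block diagrams, the producer loop's structure (read a chunk, calibrate, run PID, write the output) can only be sketched in text. Below is a minimal Python sketch of that one iteration; the `read_chunk` and `calibrate` callables are hypothetical stand-ins for the DAQmx Read call and the calibration curves, and the gains and output limits are placeholders, not values from the thread.

```python
class PID:
    """Minimal textbook PID; dt is the producer-loop period in seconds."""

    def __init__(self, kp, ki, kd, dt, out_min=-10.0, out_max=10.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(self.out_max, max(self.out_min, out))  # clamp to the AO range


def producer_iteration(pid, read_chunk, calibrate, setpoint):
    """One pass of the producer loop: read -> calibrate -> PID -> AO value."""
    raw = read_chunk()                     # stand-in for DAQmx Read (N samples)
    engineering = [calibrate(x) for x in raw]
    measurement = engineering[-1]          # control on the freshest sample
    return pid.update(setpoint, measurement)
```

One design note: controlling on the newest sample of each chunk keeps the loop simple, but the older samples in the chunk still matter for logging, which is why the producer hands the full chunk off to the consumer loops.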

So my question is this: if a PID has to be executed by the host, I have to sample, process, and then write the output in discrete chunks at some frequency, and hopefully not miss too many data points between AI read calls. What control rate is achievable with the above methodology on a cDAQ system? I can hit 30 Hz no problem, but as I increase the producer loop rate, my consumer loops have a hard time keeping up. Any thoughts?
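The relationship between chunked reads and control rate is just arithmetic, but it's worth making explicit: at a fixed hardware sample rate, the chunk size per read sets both the PID update rate and a floor on input latency. A small sketch (the numbers below are illustrative, matching the ~30 Hz figure from the post):

```python
def control_loop_budget(fs_hz, chunk_samples):
    """Relate chunk size to control rate and minimum input latency.

    fs_hz:         hardware AI sample-clock rate (e.g. the 9205 task rate)
    chunk_samples: samples pulled per DAQmx Read in the producer loop
    """
    control_rate_hz = fs_hz / chunk_samples   # one PID update per chunk
    min_latency_s = chunk_samples / fs_hz     # newest sample is at best this old
    return control_rate_hz, min_latency_s


# e.g. 1 kHz sampling read in 33-sample chunks -> ~30 Hz control rate,
# with at least ~33 ms of input-side latency before driver/bus overhead
rate_hz, latency_s = control_loop_budget(1000, 33)
```

Note this is only the buffering floor; actual loop latency also includes driver, bus, and OS scheduling delays on top of it.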

Message 1 of 5

I will also note that we deliberately avoided the cRIO route so we would not have to develop code for three separate targets instead of just one. That decision was made to minimize NRE, and we are currently exceeding project requirements as far as data rates are concerned.

I am just curious as to where the cDAQ architecture starts to break and forces you to go the cRIO route.

Message 2 of 5
Solution
Accepted by topic author Rockhound

No absolute answers or specs here, but the cDAQ breakdown will most likely be due to latency across the USB/Ethernet bus.

 

The DAQmx driver is more optimized for continuous throughput than for low latency, and I wouldn't count on finding a way to get an order of magnitude improvement over the 30 Hz you've already achieved.

 

Your fairly high sample rates will likely lead to some input-side latency as you read samples in chunks.  There will be some additional latency for your output signals, especially if you have buffered, hw-clocked output tasks.

 

Lack of control loop rate *determinism* is your other issue when running under Windows. Some processes are tolerant of potentially large timing fluctuations in control signal updates; many aren't.
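One cheap way to get a feel for this is to pace an empty loop at the target period and measure how far each iteration lands from its deadline. This is only a host-side sketch in Python (it sees OS scheduler jitter, not driver or bus latency), and the period used is just an example:

```python
import time


def measure_loop_jitter(period_s, iterations=200):
    """Pace a no-op loop and report the worst deviation from each deadline.

    Characterizes only the host OS scheduler; real control-loop jitter
    also includes DAQmx driver and Ethernet/USB bus latency.
    """
    worst = 0.0
    next_deadline = time.perf_counter() + period_s
    for _ in range(iterations):
        remaining = next_deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)  # Windows timer resolution limits accuracy here
        now = time.perf_counter()
        worst = max(worst, abs(now - next_deadline))
        next_deadline += period_s
    return worst
```

On a desktop OS it is normal to see occasional multi-millisecond spikes even when the average period looks clean, which is exactly the non-determinism being described.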

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 3 of 5

Thank you for the reply. I've managed to get some extra performance out of my code by removing all TDMS subVIs from the main run loops. Instead, I write everything as binary files and then convert all of the binary into TDMS when I stop the program, or later in post-processing. This has greatly increased the performance and 'pseudo-deterministic behavior' of the code.
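The pattern being described (flat binary writes in the hot loop, format conversion after the run) can be sketched outside LabVIEW as well. The Python below is a minimal illustration, not the poster's code; the single-channel float64 layout is an assumption for the demo:

```python
import io
import struct

SAMPLE_FMT = "<d"  # one little-endian float64 per sample (assumed layout)


def stream_chunk(fh, samples):
    """Hot path: append one chunk as raw bytes, no per-write indexing overhead."""
    fh.write(struct.pack(f"<{len(samples)}d", *samples))


def decode_stream(raw_bytes):
    """Post-run: recover the sample list for conversion to TDMS (or anything)."""
    n = len(raw_bytes) // struct.calcsize(SAMPLE_FMT)
    return list(struct.unpack(f"<{n}d", raw_bytes))


# in-memory round-trip demo; a real run would use an open("...", "ab") handle
_buf = io.BytesIO()
stream_chunk(_buf, [1.0, 2.0, 3.0])
recovered = decode_stream(_buf.getvalue())
```

The speedup makes sense in principle: a flat append defers all indexing and metadata work to post-processing, where timing no longer matters.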

 

I also realize that we run the risk of non-deterministic execution since we are not on an RT OS or FPGA, but given our setup I feel comfortable with what we have, especially since we have watchdogs running on our cDAQ-9189 that automatically safe the system if we run into any major lag issues.

 

I still hit a race condition very, very rarely: the stream to disk misses a write once every few hundred or even few thousand cycles, but it isn't anything major.

 

<bad dad joke> Thank you for the feedback. </bad dad joke>

Message 4 of 5

I'm not an expert at TDMS, but I'm surprised that you get such big gains from binary files.

 

As long as you're streaming everything to disk for the possibility of later recall, have you done things to reduce the data you handle and process in your other consumer loops?

 

How are you delivering the data to all the consumer loops?   If you have separate queues being fed from the producer, you're likely making multiple data copies.  When you have just one queue to feed, the consumer can retrieve the data without making copies.  Some kind of pointer magic happening under the hood.

So another thought is this: send DAQ data directly to only the file-streaming queue in your producer loop. Let the file-streaming loop then reduce the data to GUI-sized proportions and rates, and send copies off to your GUI consumers via their queues.

Or maybe the control loop needs to take precedence over the file streaming. But the main idea is to keep the producer very lean to avoid DAQ errors, send a lossless stream to your files, and greatly reduce the data bandwidth going to the GUI and other things that can be lossy.
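The fan-out being suggested (producer feeds exactly one lossless queue; the file loop logs everything and forwards a decimated copy to the GUI) can be sketched in Python. All names here are illustrative, and `log` stands in for the disk stream; LabVIEW queues would play the role of `queue.Queue`:

```python
from queue import Queue


def producer(file_q, chunks):
    """Producer stays lean: every acquired chunk goes to ONE lossless queue."""
    for chunk in chunks:
        file_q.put(chunk)
    file_q.put(None)  # sentinel: acquisition stopped


def file_loop(file_q, gui_q, log, decimate=10):
    """File loop: log everything losslessly, forward a reduced copy to the GUI."""
    while True:
        chunk = file_q.get()
        if chunk is None:
            gui_q.put(None)            # pass the shutdown sentinel downstream
            return
        log.extend(chunk)              # stand-in for the lossless disk stream
        gui_q.put(chunk[::decimate])   # GUI only needs every Nth point
```

The key property is that only one full-rate copy of the data exists; every further copy is made after decimation, so the lossy GUI path can never back-pressure the lossless file path.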

 

 

-Kevin P

Message 5 of 5