Real-Time Measurement and Control

Real time analog output update and synchronization with analog input.

Solved!

Hi

Thanks for the detailed explanation. I have a couple of questions:

 

 1. Just to make sure: is 'Samples to read' in the DAQ Assistant the 'buffered data points' you talked about?

 2. What is the difference between the following sampling configurations: 50 samples per loop at 25 kHz versus 200 samples per loop at 100 kHz? For my VI, the latter does not work, since the application cannot keep up with the hardware acquisition. (Both are given the same amount of time per loop to process the data.)

 

Sorry if I'm asking some silly questions. Thanks for your patience!

 

Message 11 of 18

Your acquisition mode should be set to "Continuous Samples" and not one of the other options.  If you instead have a finite acquisition configured, LabVIEW will create, configure, run and stop the task every time it is called, increasing overhead, and you will miss data between iterations.  That said, if you have it set to "Continuous Samples", then LabVIEW uses the "Samples to Read" and "Rate (Hz)" inputs to determine the input buffer size according to the table:

 

Sample Rate                Buffer Size
No rate specified          10 kS
0-100 S/s                  1 kS
101-10,000 S/s             10 kS
10,001-1,000,000 S/s       100 kS
>1,000,000 S/s             1 MS
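If you ever script the acquisition directly instead of using the DAQ Assistant, the same continuous-mode configuration looks roughly like the sketch below, written against the nidaqmx Python API (the device name "Dev1", the channel, and the rates are assumptions for illustration, not your exact setup):

```python
# Sketch: continuous buffered analog input via the nidaqmx Python API.
# "Dev1/ai0" and the rates are illustrative assumptions.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    # Continuous mode: the driver sizes the input buffer from the rate
    # (here 100 kS/s -> 100 kS buffer, per the table above).
    task.timing.cfg_samp_clk_timing(
        rate=100_000,
        sample_mode=AcquisitionType.CONTINUOUS,
    )
    task.start()
    for _ in range(10):
        # Each read drains one batch from the buffer; reading at least as
        # fast as data arrives (on average) avoids a buffer overrun.
        data = task.read(number_of_samples_per_channel=10_000)
        print(len(data), "samples read")
```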

 

Your loop rate needs to be fast enough to avoid a buffer overrun (i.e. read data at least as fast as it comes in, on average), while not being so fast that the unnecessary overhead of processing small data sets uses all of your CPU resources, which sounds like what is happening with your program.

I am curious to know what you are doing that requires output or processing updated at 1 kHz. Certainly, 1 ms is not perceptible by humans, and if you're dealing with some sort of esoteric research or control application, a real-time hardware target seems indicated. In any case, I would slow down your loop rates as much as you can; even 10 ms will be a huge improvement in processing overhead. Executing at a 10 ms loop rate, you would acquire ~100 samples every iteration (buffered acquisition at 10 kHz), and process them all as a batch before calling the DAQ Assistant VI again for more data.

Alternatively, you could separate the data acquisition and data processing loops in order to reduce overhead: call the DAQ Assistant VI less frequently (but read more samples on each call), and then run your processing more frequently on smaller batches in a separate parallel loop, using a queue to share the data.
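As a rough illustration of that producer/consumer pattern (in plain Python rather than LabVIEW, with a hypothetical read_batch() standing in for the DAQ Assistant read; all names and numbers are placeholders):

```python
# Sketch of the producer/consumer pattern described above.
# read_batch() is a hypothetical stand-in for a buffered DAQ read.
import queue
import threading
import time

data_q = queue.Queue()

def read_batch(n):
    """Hypothetical stand-in: pretend the hardware takes 100 ms to fill a batch."""
    time.sleep(0.1)
    return [0.0] * n

def producer():
    # Acquisition loop: call the (slow) read infrequently, large batch each time.
    for _ in range(10):
        data_q.put(read_batch(1000))
    data_q.put(None)          # sentinel tells the consumer to stop

def consumer():
    # Processing loop: work through smaller chunks, decoupled from acquisition.
    while True:
        batch = data_q.get()
        if batch is None:
            break
        for i in range(0, len(batch), 100):
            chunk = batch[i:i + 100]
            # ... peak detection / processing on `chunk` would go here ...

threading.Thread(target=producer).start()
consumer()
```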

Message 12 of 18

Hi 

Thanks for your reply.

 

The main function of the program is to detect a downward peak in the current signal and then reverse the voltage after a delay of a few ms. The peak contains information we want and has a duration of around 0.1 ms (the duration itself is part of the information). That is why we need such a high sampling frequency and loop update rate.
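In rough pseudocode, the detection step is something like the sketch below (the threshold and sample rate are just placeholders, not our real values):

```python
# Rough sketch of the detection step: find a downward peak (threshold
# crossing) in a current trace and measure its duration.
SAMPLE_RATE_HZ = 100_000          # 10 us per sample (illustrative)
THRESHOLD = -0.5                  # downward excursion threshold (illustrative)

def find_downward_peak(trace):
    """Return (start_index, duration_s) of the first sub-threshold dip, or None."""
    start = None
    for i, v in enumerate(trace):
        if v < THRESHOLD and start is None:
            start = i                              # dip begins
        elif v >= THRESHOLD and start is not None:
            duration = (i - start) / SAMPLE_RATE_HZ
            return start, duration                 # dip ended; report its width
    return None

# Example: a 0.1 ms dip (10 samples at 100 kS/s) embedded in a flat trace.
trace = [0.0] * 100 + [-1.0] * 10 + [0.0] * 100
print(find_downward_peak(trace))   # -> (100, 0.0001)
```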

 

Message 13 of 18

Sampling frequency is taken care of by your data acquisition hardware, which appears to be more than capable of the 20 kHz sampling rate you need to capture a 0.1 ms event. Unfortunately, it also appears that you are attempting to do software-driven real-time control at a comparable rate, which is not within the capabilities of your desktop PC unless you accept buffered data acquisition and batch processing at a somewhat slower rate. If updating your control outputs at that high rate is critical, you need to consider an FPGA-enabled real-time controller.

Message 14 of 18

Hi:

Thanks a lot! Now I understand my situation.

 

So I looked up FPGAs on Google, and I am overwhelmed by the variety; some of them are extremely expensive. If you are familiar with using such devices for analog reading and control, could you please give me some links to suitable products?

 

BTW, is it possible to do hardware-timed sampling while simultaneously processing the data and sending out control signals, using LabVIEW and the PCIe-6321?

 

Thanks.

 

 

Message 15 of 18
Solution
Accepted by Haisenberg

Although your DAQ device can acquire at 250 kS/s and update its analog output at 900 kS/s (single channel), those rates only apply to hardware-clocked tasks, such as writing out a finite waveform at the card's clock rate. You do not have direct access to the DAQ device's hardware clock to use as a clock source for, e.g., a timed loop in LabVIEW running on Windows. In fact, because the operating system is non-deterministic, timed loops in Windows code generally provide no advantage over simple while loops timed with the millisecond timer functions.

What you want to do is possible within that limitation: you can acquire your high-rate data losslessly (using buffered acquisition) and identify triggers within it or post-process it however you want, but you will need to benchmark that processing code to determine how fast it can execute, and implement it in a loop that runs no faster than that (and ideally slower, to cede CPU cycles to other processes). You can then write to the analog output(s) on the DAQ device, but no faster than your loop rate, which depends on how long your processing takes and in any case cannot be faster than 1 ms on the Windows platform, because that is the resolution of the clock available to LabVIEW. Your PC's processor has a faster hardware clock than the BIOS clock LabVIEW accesses, but you don't have direct access to it; only the operating system kernel does, and this is by design, as the operating system wouldn't work if applications could override the Windows scheduler.

If your processing code takes several ticks (milliseconds) to execute in LabVIEW, you may be able to optimize it by running multiple tasks in parallel, pipelining, etc., in order to achieve a single-tick (1 ms) loop rate. But if you need to update the AO faster than every 1 ms, your only choices are to trigger AO tasks on the DAQ card configured with a higher clock rate (using the DAQ device's clock), sending it predetermined arrays of data, or to switch to a real-time execution system, which will let you run your LabVIEW loops deterministically with access to the 1 MHz clock.

 

Consider a CompactRIO if that is the case.
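To illustrate the first option above (handing the card a predetermined array to output at its own clock rate), here is a rough sketch against the nidaqmx Python API; the device name "Dev1/ao0", the 100 kS/s rate, and the step waveform are assumptions for illustration:

```python
# Sketch: hardware-timed analog output of a precomputed waveform.
# The card's sample clock, not the software loop, paces the update rate.
# "Dev1/ao0" and the 100 kS/s rate are illustrative assumptions.
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

# Example waveform: hold +5 V, then reverse to -5 V (500 points each).
waveform = np.concatenate([np.full(500, 5.0), np.full(500, -5.0)])

with nidaqmx.Task() as ao_task:
    ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    # Finite, hardware-clocked output: 1000 points at 100 kS/s = 10 ms total.
    ao_task.timing.cfg_samp_clk_timing(
        rate=100_000,
        sample_mode=AcquisitionType.FINITE,
        samps_per_chan=len(waveform),
    )
    ao_task.write(waveform, auto_start=False)
    ao_task.start()              # software start here; a hardware trigger
    ao_task.wait_until_done()    # could also be configured to start the task
```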

Message 16 of 18

Appreciated! Your replies have been very useful!

 

Message 17 of 18

Just to clarify: If you choose to buy a real-time system, you will have access to a 1 MHz clock on the CPU of the RT target, and a 40 MHz clock on the FPGA.

Message 18 of 18