05-11-2018 12:48 PM - edited 05-11-2018 12:55 PM
No. Channel count has nothing to do with it. This is accounted for when you specify the queue element type (space allocated per element will resize according to the largest 1D array element enqueued). Queue size is set according to the number of elements of whatever data type you specify when you obtain the queue, so the number of elements in your queue will correspond to the number of 1D arrays (one per acquisition) stored in the buffer.

Also, your logging loop rate is not dependent at all on the acquisition rate. You can set it to anything you like. A faster rate will read less data on each iteration, but perform disk writes more often. A slower rate requires a larger buffer queue, but doesn't access the disk as often. The former uses more CPU and keeps your disk busy; the latter cedes CPU and disk access resources at the expense of memory.

You should set your queue size to the largest anticipated size, and unless you're dealing with huge data sets or are already low on available memory, you will experience very little impact from pre-allocating a large memory space for this purpose. If you acquire at 5 Hz and log to disk every 2 seconds, for example, you will nominally have 10 x 1D arrays in the buffer every time you read and process that data. In this case, I would set a queue size of 100 just to be safe (i.e. a maximum of 100 1D arrays, which in your case have an array length of 104). This would provide at best 20 seconds of buffering before you encounter an overrun, and I might even be inclined to go larger, because I have seen a Windows OS do some pretty frustrating things which interfere with executing code.
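Since LabVIEW code is graphical, here is a rough Python stand-in for the producer/consumer pattern described above: an acquisition loop enqueues one 1D array per iteration into a bounded buffer, and a slower logging loop drains it in batches. The names and sizes are illustrative only (the 104-sample array length and queue size of 100 come from the discussion above).

```python
import queue
import threading

SAMPLES_PER_ARRAY = 104  # array length from the thread above
QUEUE_SIZE = 100         # generous buffer: ~20 s of headroom at 5 Hz

buffer = queue.Queue(maxsize=QUEUE_SIZE)
logged = []

def acquire(n_arrays):
    # Producer: stands in for the acquisition loop (one 1D array per iteration).
    for i in range(n_arrays):
        sample = [float(i)] * SAMPLES_PER_ARRAY  # placeholder data
        buffer.put(sample)                       # blocks if the buffer is full (overrun)
    buffer.put(None)                             # sentinel: acquisition finished

def log_to_disk():
    # Consumer: stands in for the slower logging loop; it drains whatever has
    # accumulated since its last iteration and "writes" it in one go.
    while True:
        item = buffer.get()
        if item is None:
            break
        logged.append(item)  # a real logger would append to a file here

producer = threading.Thread(target=acquire, args=(50,))
consumer = threading.Thread(target=log_to_disk)
producer.start(); consumer.start()
producer.join(); consumer.join()
print(len(logged))  # prints 50
```

The key point the sketch illustrates: the logging side can run at any rate you like, as long as it empties the buffer faster, on average, than acquisition fills it.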
05-11-2018 01:28 PM
If I understand right, the queue cares about the data type, sampling rate, and logging rate to disk, but not the number of elements in the array. So in my case, I should set my queue size to 5 x 5 x 10, which is 250. Is that right, sir?
05-11-2018 01:46 PM
Minimum number of queue elements = ( Acquisition Rate [Hz] / Logging Rate [Hz] ) * Safety Factor.
In my example above, this corresponds to (5 / 0.5) * 10 = 100 elements.
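The sizing formula above is simple enough to capture directly. A minimal sketch (the function name is my own, not a LabVIEW API):

```python
def min_queue_elements(acq_rate_hz, logging_rate_hz, safety_factor):
    """Minimum queue size = (acquisition rate / logging rate) * safety factor."""
    return int((acq_rate_hz / logging_rate_hz) * safety_factor)

# Example from the post: 5 Hz acquisition, logging every 2 s (0.5 Hz), 10x safety.
print(min_queue_elements(5, 0.5, 10))  # prints 100
```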
05-11-2018 05:39 PM
I just realized that we're talking about a CompactRIO here, so I would do the acquisition using the FPGA, and transfer the data to the RT via a DMA channel. In a deterministic loop on the RT system, read from the DMA and put that data into an RT FIFO. In a parallel, lower-priority loop on the RT system, read from the RT FIFO and write to a network stream. On your host PC, read from the network stream and log your data there.
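The chain described above (FPGA → DMA channel → deterministic loop → RT FIFO → low-priority loop → network stream → host) can be sketched as a text-language analogy. This is not LabVIEW or an NI API; plain Python queues stand in for the DMA channel and RT FIFO, and a list stands in for the network stream, purely to show the data flow and the division of labor between loops.

```python
import queue

dma_channel = queue.Queue()   # stands in for the FPGA -> RT DMA transfer
rt_fifo = queue.Queue()       # deterministic loop -> low-priority loop
network_stream = []           # low-priority loop -> host PC

def fpga_acquire(samples):
    # Stands in for the FPGA side: samples land in the DMA channel.
    for s in samples:
        dma_channel.put(s)

def deterministic_loop():
    # High-priority RT loop: move data from the DMA read into the RT FIFO
    # as quickly as possible, doing no slow work here.
    while not dma_channel.empty():
        rt_fifo.put(dma_channel.get())

def low_priority_loop():
    # Lower-priority RT loop: drain the RT FIFO and push to the network stream.
    while not rt_fifo.empty():
        network_stream.append(rt_fifo.get())

fpga_acquire(range(10))
deterministic_loop()
low_priority_loop()
print(network_stream)  # data arrives at the "host" in order
```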
05-12-2018 09:13 AM
Can you give me some information about deterministic loops and low-priority loops? What is the difference? Is this the best way to take data from the RT and transfer it to the host? On the host side, after taking the data from the network stream, should I use the queue structures again for the best performance? Some of our tests last 200 hours, and I have to log data at 5 Hz, or in some tests at 10 Hz. For example, we will start a test that lasts 200 hours, and I will have to log the data at 10 Hz. I will use queue structures as you advised. Will there be a problem during the test? Or do I have to change the program as you describe above?
05-13-2018 01:41 PM
@Gilfoyle wrote:
Can you give me some information about deterministic loops and low-priority loops? What is the difference?
Ok, time to give you some homework:
05-14-2018 11:26 AM
Do you have an active license / service contract with National Instruments? You might have access to online training, in which case I would suggest you undertake the LabVIEW Real-Time 1, LabVIEW Real-Time 2, and LabVIEW FPGA online courses.
05-14-2018 12:23 PM
I will take the courses, sir, but now is not the time; I need to solve this problem quickly. Again, thank you for your help.
05-14-2018 12:53 PM
You appear to understand the basic idea. Just separate your time-critical code (i.e. control, data acquisition) from your non-critical code (signal processing, data logging, UI updates), and make sure that the former operates reliably at the desired timing. Improve performance by only running your loops at the necessary rates, and no faster. 100 ms is about as fast as you can resolve an indicator change, so you don't want to update graphs or any other front panel indicators faster than that, for example.

Latency (delay) doesn't matter for post-processing and data logging, provided you act quickly enough to keep up with the incoming data. If you need to preserve acquisition timing information, you can still write that data to disk late; you just need to use the waveform data type or independently log your timestamps in order to preserve the actual time of acquisition. Incoming data will be regular from the deterministic process (higher-priority loop). Removing that data will be sporadic at lower priority, but on average it will execute fast enough to empty the buffer faster than your acquisition loop fills it.
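The timestamp point above can be illustrated with a small sketch: tag each sample with its acquisition time when it enters the buffer, so a late logging pass still records when the data was actually taken. This is a generic Python illustration (LabVIEW's waveform data type does this for you); the function names are my own.

```python
import time
from collections import deque

buffer = deque()

def acquire(sample):
    # Tag each sample with its acquisition time at enqueue, so the
    # (possibly late) logging loop still records when it was taken.
    buffer.append((time.time(), sample))

def log_late():
    # Drain the buffer whenever the logging loop gets around to it;
    # the stored timestamps are unaffected by how late this runs.
    rows = []
    while buffer:
        t, sample = buffer.popleft()
        rows.append(f"{t:.3f},{sample}")  # a real logger would write to a file
    return rows

acquire(1.23)
acquire(4.56)
time.sleep(0.05)   # logging happens later; acquisition times are preserved
rows = log_late()
print(len(rows))   # prints 2
```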
05-14-2018 01:36 PM
I have learnt a lot from you, sir. Thank you again.
My LabVIEW projects usually involve third-party embedded systems. I communicate with them using communication protocols such as CAN bus, Modbus, or TCP/IP. Many of them send me data packets that have a packet header, data payload, start bit, stop bit, etc. To parse the relevant bytes in the packet reliably, I have to execute my loop at the desired rate. That can be 1 ms, 10 ms, or 100 ms; it depends on how frequently the embedded system sends the packet. If my loop does not execute as fast as the packets arrive, the packet shifts, and I have to do some manipulation to recover it.
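The packet-shift recovery described above usually amounts to resynchronizing on the header. A minimal Python sketch, assuming a made-up frame format (one 0xAA header byte followed by a fixed 3-byte payload; real protocols would also carry length and checksum fields):

```python
HEADER = 0xAA    # assumed sync byte, for illustration only
PAYLOAD_LEN = 3  # assumed fixed-length payload

def parse_packets(stream):
    """Scan a byte stream, resynchronizing on the header byte, and
    return the payloads of all complete packets found."""
    packets = []
    i = 0
    while i + 1 + PAYLOAD_LEN <= len(stream):
        if stream[i] != HEADER:
            i += 1  # shifted stream: skip bytes until the next header
            continue
        payload = stream[i + 1 : i + 1 + PAYLOAD_LEN]
        packets.append(bytes(payload))
        i += 1 + PAYLOAD_LEN
    return packets

# Garbage before the first header simulates a shifted stream.
data = bytes([0x01, 0x02, HEADER, 10, 20, 30, HEADER, 40, 50, 60])
print(len(parse_packets(data)))  # prints 2
```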
As a result, to handle these hard situations, I have to learn the fundamental logic of DMA, queues, RT FIFOs, FPGA, network streams, timed loops, etc. 🙂