Real-Time Measurement and Control


The Best way to transfer data from CompactRIO to Host.vi

Solved!

Sir, as I said before, I suspect my logging algorithm is causing the loop to slow down. Today I put "Write Delimited Spreadsheet.vi" and its reference outside the loop, and the result was better.

 

But approximately one hour later, it began to slow down again. I observed it in real time.

 

At the end of the test, I have to have a 2-D text file, so I insist on using Write Delimited Spreadsheet.

 

I attached a picture. It shows two options for the logging algorithm. Since my original project is not on this computer, I made a demo to approach the problem.

 

Which one of these options should I follow?

Message 11 of 32
Solution
Accepted by topic author Gilfoyle

Based on your posted image, I would suggest that your problem is due to dynamic memory allocation.  When you build an array within a loop, the LabVIEW memory manager has to keep allocating space for that array every time it grows, and eventually there is no contiguous memory block large enough to hold it.  For this reason, NI recommends that you pre-allocate your arrays to the largest anticipated size prior to entering the loop, and then use shift registers with, e.g., the Replace Array Subset function to change individual elements within the array.  Once the array is allocated initially, the memory manager never has to move it again or allocate additional memory.

Concatenating tunnels work exactly the same way as a Build Array function within your loop would: the concatenated array is built dynamically and retained in memory until your loop exits.
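Since LabVIEW is graphical, here is a rough Python analogy of the pre-allocate-then-replace idea (the sizes and sample values are illustrative, not from the original VI; note that Python lists hide the reallocation cost internally, so treat this as a sketch of the pattern, not a benchmark):

```python
N_SAMPLES = 10_000      # largest anticipated size (an assumption for this sketch)
N_CHANNELS = 4

# Pre-allocate the whole 2D buffer once, outside the loop -- analogous to
# initializing an array and carrying it through a shift register in LabVIEW.
buffer = [[0.0] * N_CHANNELS for _ in range(N_SAMPLES)]

for i in range(N_SAMPLES):
    sample = [float(i)] * N_CHANNELS      # stand-in for one acquired data point
    buffer[i] = sample                    # ~ Replace Array Subset: no growth

# Anti-pattern (what Build Array / a concatenating tunnel does): start empty
# and grow every iteration, forcing repeated reallocation as the array grows.
#   grown = []
#   for i in range(N_SAMPLES):
#       grown.append([float(i)] * N_CHANNELS)
```

The key point is that `buffer` never changes size inside the loop; only its contents are overwritten.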

 

What I would suggest is this:

 

Outside of and prior to entering your acquisition loop, you preallocate an array or obtain a queue with a predetermined size to serve as a data buffer.  Entering the acquisition loop, you don't do anything within it that would require dynamic memory allocation.  Just read your acquired data and enqueue it in your buffer, and run that loop at the appropriate rate for your data acquisition.

 

Outside of and prior to entering a separate, parallel loop, you open / create your data file, and pass the file reference in to the loop, as well as the queue reference from the data queue.  This loop will execute at a slower rate than your acquisition loop, but you will process more points at once.  In this loop, you read all available data from the queue, and then write it to file / append that data to your existing open data file.  This way, the file is getting updated continuously, so you aren't building anything in memory larger than the already-defined queue size, but you also aren't trying to execute disk writes continuously at the acquisition rate.  When you hit your stop button and exit both loops, you then add code to release your queue reference, and close your file.
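The two-loop producer/consumer design described above can be sketched in Python (LabVIEW itself is graphical; the thread functions, rates, queue size, and file name here are my own illustrative assumptions):

```python
import queue
import random
import threading
import time

data_q = queue.Queue(maxsize=50)   # bounded data buffer; 50 is an assumed size
stop = threading.Event()

def acquire():
    """Producer: acquire one data point per iteration and enqueue it (5 Hz here)."""
    while not stop.is_set():
        point = [random.random() for _ in range(4)]   # stand-in for a DAQ read
        data_q.put(point)
        time.sleep(0.2)

def log(path):
    """Consumer: runs slower, drains all available data, appends it to disk."""
    with open(path, "w") as f:                 # open the file once, before the loop
        while not stop.is_set() or not data_q.empty():
            time.sleep(1.0)                    # slower loop rate than acquisition
            while not data_q.empty():          # read ALL available data each pass
                row = data_q.get()
                f.write("\t".join(f"{v:.6f}" for v in row) + "\n")
    # leaving the with-block closes the file, after both loops have stopped

producer = threading.Thread(target=acquire)
consumer = threading.Thread(target=log, args=("demo_log.tsv",))
producer.start()
consumer.start()
time.sleep(3)        # let the loops run briefly for the demo
stop.set()
producer.join()
consumer.join()
```

The file grows continuously, nothing larger than the bounded queue accumulates in memory, and disk writes happen at the slower logging rate rather than the acquisition rate.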

Message 12 of 32

Sir, the demo I attached may be what you are referring to? That is what I understood from your suggestion. The difference is that I put the write-spreadsheet call outside the loop.

Message 13 of 32
Solution
Accepted by topic author Gilfoyle

@Gilfoyle wrote:

Differently, I put the write spreadsheet outside of the loop


Just stop using the Write To Delimited File.  That is made for logging short, quick data sets.  You are streaming data, so you have to be smarter.  Open the file before the loop, close it after, and write as much as you want inside of the loop.  This avoids huge arrays and constantly increasing memory.
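A minimal text-language sketch of "open before the loop, write inside it, close after" (the file name and data are made up for illustration; the comments map the steps to their approximate LabVIEW file-I/O equivalents):

```python
rows = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # stand-in for acquired data

with open("stream_demo.txt", "w") as f:        # ~ Open/Create/Replace File, before the loop
    for row in rows:                           # the loop
        f.write("\t".join(str(v) for v in row) + "\n")   # ~ Write to Text File, inside
# leaving the with-block closes the file       # ~ Close File, after the loop
```

Because each row is flushed to disk as it arrives, no growing array is held in memory.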


Message 14 of 32

Sir, I am sorry that I cannot understand your suggestions quickly; I am trying to improve. I can say that a big aviation company has been waiting for me to solve this problem. Thank you for everything.

 

I will run this program and observe the execution time of the loop. I will get back to you tomorrow. Thanks again for everything.

Message 15 of 32

Sir, I tried this code today. I ran data acquisition for 5 hours (exactly 18273 seconds at a 5 Hz sampling rate).

 

The execution time of the two loops did not change. At the end of the test, I had (18273 x 5) rows.

 

So the problem has been solved, thanks to you.

 

I will restructure all of the programs I have written to use this pattern.

 

The NI Forum saves lives 🙂

 

Thanks for everything.

Message 16 of 32

Just one caveat:  At your Obtain Queue function, make sure that you actually wire a size to the max queue size (-1, unlimited) input.  If you accept the default, there is no limit to how large your queue can grow, and if for any reason your data logging loop slows down with respect to your acquisition loop, your queue can grow unrestricted as elements are added, resulting in the same memory management problem and potential program freeze / crash.  Instead, give it a size limit, and then add code to detect and handle a buffer overrun if it occurs.
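In Python terms, a bounded buffer with overrun detection might look like this (the size limit of 20 and the 25 enqueue attempts are illustrative numbers, not from the original program):

```python
import queue

buf = queue.Queue(maxsize=20)   # a finite limit instead of the default -1 (unlimited)

overruns = 0
for i in range(25):             # try to enqueue more elements than the queue can hold
    try:
        buf.put_nowait(i)       # non-blocking enqueue, like Enqueue Element with timeout 0
    except queue.Full:
        overruns += 1           # detect and handle the overrun instead of growing forever

# 20 elements fit; the remaining 5 attempts are rejected and counted as overruns.
```

What you do on an overrun (drop the sample, flag an error, stop the test) is an application decision; the point is that the condition is detected rather than silently consuming memory.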

Message 17 of 32

So if I enqueue a 104-element array, should I set that size to 104, sir?

 

Also, I saw the Flush Queue function in the palette. Should I use it in the loop after the enqueue?

Message 18 of 32

No.  Where you obtain your queue, you set the element datatype.  In your case, this could be an array of doubles, for example.  You have N channels, but you are only acquiring a single data point on every iteration of the data acquisition loop.  That data point is a 1D array of doubles, size N.  The size of the queue defines how many elements the queue can contain, but you also define what constitutes an element.

The maximum queue size you choose will depend on the rate at which you are acquiring data versus the rate at which you are pulling data from the queue and logging it to disk.  For example, if you acquire and enqueue your data at 5 Hz (200 ms loop rate), but you are only pulling that data out of the queue and logging it to disk at 1 Hz (1000 ms loop rate) in the data logging loop, you will have at least 5 elements in the queue every time you read the data out.  Realistically, you need more space than the bare minimum, and I would suggest 4 to 10 times that figure (so 20 to 50 elements of type: array of doubles).  The idea is that if your disk write operation takes longer than you expect, due to e.g. your operating system doing something in the background, the acquisition loop continues to run and the buffer just grows in size.  Data is not lost - you have simply stored it in the buffer while you continue acquisition at the target rate.  The next time you read from the buffer in the data logging loop, you read ALL the data, so that you catch up and empty the buffer.  The queue size is just how much space you allow for those sorts of fluctuations before a buffer overrun occurs.
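The sizing arithmetic in that paragraph, written out explicitly (the rates are the ones used in the example, not measured values):

```python
# Rates from the example: producer at 5 Hz (200 ms), consumer at 1 Hz (1000 ms).
acq_rate_hz = 5.0
log_rate_hz = 1.0

min_depth = acq_rate_hz / log_rate_hz     # >= 5 elements queued per logging cycle
low, high = 4, 10                         # the suggested 4x-10x safety margin

queue_size_range = (int(min_depth * low), int(min_depth * high))
print(queue_size_range)                   # (20, 50) elements, each a 1D array of N doubles
```

Note that the channel count N sets the size of each *element*, not the number of elements the queue holds.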

 

The Flush Queue function allows you to dequeue all elements in the queue at once, and returns the array of such elements for processing.  This is probably what I would use in order to empty the queue to log the data, although you could also just dequeue one element at a time in a loop and process it that way.
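Python's `queue` module has no direct Flush Queue equivalent, but a small helper can approximate "dequeue everything at once" (the helper name is my own, not a library function):

```python
import queue

def flush_queue(q):
    """Dequeue every element currently in q at once, like LabVIEW's Flush Queue."""
    items = []
    while True:
        try:
            items.append(q.get_nowait())
        except queue.Empty:
            return items

q = queue.Queue()
for point in ([1.0, 2.0], [3.0, 4.0], [5.0, 6.0]):
    q.put(point)

batch = flush_queue(q)   # one call empties the buffer; write `batch` to disk in one go
```

Processing the returned batch in a single file write is what keeps the logging loop's disk activity infrequent.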

Message 19 of 32

So if my acquisition loop runs at 5 Hz, and my logging loop's rate depends on the enqueue loop's 5 Hz rate, and assuming there are 104 channels, the ideal maximum queue size should be between 104x4 and 104x10. If that is right, then for the best efficiency and the least allocation, should I choose the smallest factor in the 4-to-10 range, which is 4?

Message 20 of 32