How to efficiently collect data in while loop

As I have been reading various threads, different posters have mentioned that continuously building arrays within while loops is a memory hog.

 

In the applications where I have been using LabVIEW, I periodically collect some kind of data (typically from a DMM, usually multiplexed from different DUTs, etc.) and store it in a text file for import into Excel. Because the data collection typically runs for several hours, I have added on-screen graphs that update after every collection to give me (or a future user) a picture of what is going on, as well as to show that the program is running.

 

I developed my basic program that I have tweaked several times during the discussions of this thread:  http://forums.ni.com/t5/LabVIEW/How-To-Exit-While-Loop-with-Timing-Delay/td-p/2151844

 

As I have been using a producer-consumer architecture for my programs, I have not specified a max number of samples to collect - or therefore a max size for the array that is formed during the data collection. My data collection samples are taken on a frequency measurable in minutes, and the programs are not expected to run continuously for more than overnight. Should I be concerned with the memory hog issues? What could I or should I do after execution to ensure that the memory is released for future use?

 

Can someone recommend how I can more efficiently collect my data?

 

Thanks!

Jeff

Jeffrey Zola
Message 1 of 8

Producer/consumer loops. Collect with one and use the other to save the data to a TDMS file for later review or extraction. This minimizes the data stored in RAM, since you save it to a file as you go. If you can't save it fast enough, pray to the RAM gods 🙂

Message 2 of 8

It's the building of arrays inside a while loop that is inefficient.  If you are just acquiring data and putting it into a queue, there is no building of arrays there.  If the consumer loop is just taking the data and saving it to disk, there is no building of arrays there.  The only place you could get into trouble is the display.  If you use a chart, then I would say there's no worry.
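To make the point above concrete, here is a minimal text-language sketch (Python standing in for the LabVIEW diagram, purely illustrative): the producer never builds an array, each sample goes straight onto the queue, and the consumer streams it to disk, so memory use stays flat. The names `fake_dmm_read` and the log path are placeholders, not anything from the original programs.

```python
import os
import queue
import tempfile
import threading

def fake_dmm_read(i):
    """Placeholder for a real DMM read; returns one sample."""
    return 0.1 * i

def producer(q, n_samples):
    for i in range(n_samples):
        q.put(fake_dmm_read(i))   # one sample at a time, no array growth
    q.put(None)                   # sentinel: acquisition finished

def consumer(q, path):
    with open(path, "w") as f:
        while True:
            sample = q.get()
            if sample is None:    # sentinel seen, stop
                break
            f.write(f"{sample}\n")  # sample leaves memory once written

log_path = os.path.join(tempfile.gettempdir(), "demo_dmm_log.txt")
q = queue.Queue()
t1 = threading.Thread(target=producer, args=(q, 100))
t2 = threading.Thread(target=consumer, args=(q, log_path))
t1.start(); t2.start()
t1.join(); t2.join()
```

The only per-sample memory here is whatever is briefly in flight on the queue, which is the same property the queue-based LabVIEW pattern gives you.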


Message 3 of 8

Thanks for the responses.

 

My program "looks like" the attached diagram. If I understand correctly, my desire to show progress is what causes the potential problems: producing the graphs requires that the data be assembled into arrays, and assembling arrays that do not have a maximum dimension defined can cause memory problems.

 

I am currently using a producer/consumer architecture, with the producer providing start/stop indications and the data collected in the consumer. The suggestions are to use a producer/consumer architecture with the data provided by a producer to a consumer. Is it recommended form to embed a P/C within another P/C? If I go this route, should the new data P/C be contained entirely within the consumer while loop of the current consumer, or should the data producer be the current consumer while loop and the data consumer be a third top-level while loop?

 

I am envisioning that the data consumer would take data off a queue, update the TXT file, update the on-screen graphs, and possibly provide on-screen summary statistics. If the arrays that feed these on-screen graphs still do not have a maximum dimension defined, how does this solve my potential memory problem? Alternatively, I could (1) define a maximum dimension for the array and stop execution when it is reached, (2) shift data to always show the "last n" points in the graphs, or (3) something else. But these alternatives are "nice to haves" that would take development time away from the project of collecting and analyzing the data. If you say that the memory issue "goes away" after execution stops, and my programs are only intended to run for a maximum of overnight, then this discussion is only an intellectual exercise to further my understanding of LabVIEW. But I do appreciate the words of wisdom that the community has provided!
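Alternative (1) above can be sketched in a line or two of text-language pseudocode (Python, purely illustrative; `MAX_SAMPLES` is an arbitrary limit, not a figure from the thread): preallocate the array once at its maximum size and stop when it fills, so it never reallocates or grows.

```python
# Alternative (1): preallocate at a fixed maximum size, stop when full.
MAX_SAMPLES = 1000
data = [0.0] * MAX_SAMPLES      # allocated once, never grows

count = 0
while count < MAX_SAMPLES:      # stop execution when the array is full
    data[count] = 0.1 * count   # replace-element in place, no reallocation
    count += 1
```

In LabVIEW terms this corresponds to Initialize Array before the loop and Replace Array Subset inside it, rather than Build Array.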

 

Jeff

Message 4 of 8

If you log day in, day out, you'd better have a producer/consumer loop; RAM fills up quickly. There is a template for producer/consumer on your LabVIEW Getting Started page, under New>More...>, then it's under VI>From Template>Frameworks>Design Patterns>Producer/Consumer Design Pattern (Data).

 

P.S. Don't forget to have the consumer do the write to file. That way, each time you write a portion of the data to the file, it is removed from memory. That's why producer/consumer loops keep memory so well contained.

Message 5 of 8

I am leaning towards using the attached architecture, based on the design apok provided me in that previous thread and on this: https://decibel.ni.com/content/docs/DOC-15453

 

I will change the pseudo-timed sequence in my current consumer loop into what I believe is a state machine architecture. The data will be collected by this loop and processed by another loop.

 

The data processing consumer will append dequeued data to the text file(s). My line of thinking is to update the on-screen graphs by reading the text files anew for each update, in order to avoid the open-ended array issue. Does this make sense?
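The "re-read the file for each graph update" idea can be sketched like this (Python standing in for the LabVIEW diagram; the file name and the 500-point window are illustrative): the display step pulls only the last N samples back out of the log file, so nothing open-ended ever lives in memory.

```python
import os
import tempfile
from collections import deque

def last_n_points(path, n=500):
    """Read back at most the last n samples from the log file for display."""
    with open(path) as f:
        tail = deque(f, maxlen=n)      # deque keeps only the final n lines
    return [float(line) for line in tail]

# Demo log file with 1000 samples, standing in for the consumer's TXT file.
demo = os.path.join(tempfile.gettempdir(), "demo_graph_log.txt")
with open(demo, "w") as f:
    for i in range(1000):
        f.write(f"{i}\n")

recent = last_n_points(demo, n=500)
```

One caveat with this approach: the whole file still gets scanned on every update, so as the file grows over an overnight run each redraw gets slower. A bounded in-memory buffer (as suggested later in the thread) avoids that cost.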

 

I still need to figure out how exactly the data will be passed and processed, but I want to see if there are any other comments on this 40,000 foot view as I proceed.

 

Thanks!

Jeff

 

 

 

Message 6 of 8

You've got the right idea. However, you might want to use the architecture designed for data: the one you showed in the attachment is an event structure, which is mostly for interface commands (use a case structure with an enumerator or Boolean instead). Also, with this structure I would think it's safe to use arrays. The way I do it is to initialize an array of two points outside the while loop, feed it into a shift register, and keep appending elements inside the while loop. Just make sure that you only keep the last, say, 2000 points and leave the rest in the file. You have to pick some limit for your graph that will work nicely with the allotted memory. For example, in my current project I have 25 simultaneous graphs, so each of them only displays K sample points at a time (my sample points are slow, roughly 10 ms apart). However, to speed up the process, I only update the display every 50 ms; otherwise other processes slow down.
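The shift-register trick described above is, in effect, a fixed-capacity buffer that appends new samples and silently drops the oldest. A minimal text-language sketch (Python, purely illustrative; the 2000-point cap matches the figure suggested in the post):

```python
from collections import deque

# Plays the role of the bounded array in the shift register.
graph_buffer = deque(maxlen=2000)

for i in range(5000):         # simulate 5000 incoming samples
    graph_buffer.append(i)    # oldest points fall off the front automatically
```

The buffer can never grow past its limit no matter how long the acquisition runs, which is exactly the property that makes the open-ended-array worry go away.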

Also, to your question about state machines: yes, a producer/consumer loop can be turned into two state machines. I have done this before, where the producer had to program a device and collect output data while the consumer recorded the output data to a file.

Message 7 of 8

My sketch covered measurement state transitions primarily because I hadn't thought as much about what I was going to do with the data once I collected it.  😉

 

After working on it yesterday afternoon and this morning, I have the state transitions and the measurements working in a way that's making me happy. Now to focus on what to do with the data, which will consist of three groups of measurements that are collected sequentially.

 

Thinking aloud: each group will be assembled into an array by the FOR loop that scans the channels. The array should be bundled with time/iteration information and a label indicating which of the three measurements the data represents. The bundles are dropped onto a data queue as they are collected.
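The "bundle" described above could be modeled like this (Python standing in for the LabVIEW cluster; the field names, group labels, and channel count are all illustrative): a small record carrying the scanned samples, a timestamp and iteration count, and a label saying which of the three measurement groups it belongs to, dropped onto the data queue as it is produced.

```python
import queue
import time
from dataclasses import dataclass

@dataclass
class MeasurementBundle:
    label: str        # which of the three measurement groups
    iteration: int    # loop iteration when the group was taken
    timestamp: float  # acquisition time
    samples: list     # one scanned group of channel readings

data_queue = queue.Queue()

def scan_group(label, iteration, n_channels=4):
    """Stand-in for the FOR loop that scans the channels for one group."""
    samples = [0.0] * n_channels                    # placeholder readings
    bundle = MeasurementBundle(label, iteration, time.time(), samples)
    data_queue.put(bundle)                          # dropped onto the Data Queue

# Three sequential groups per collection cycle, three cycles.
for i in range(3):
    for name in ("groupA", "groupB", "groupC"):
        scan_group(name, i)
```

In LabVIEW the equivalent would be a cluster (or a type-defined cluster) wired into the queue element, so the consumer can unbundle the label and route the data.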

 

The analyzer takes data off the queue, identifies which of the three measurement types it represents, appends it to that type's file, and puts it into an array for on-screen graphing, using a shifting mechanism to limit the size of the array.
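Putting the analyzer step together in the same text-language sketch (Python, purely illustrative; the labels, file locations, and the 100-point graph limit are assumptions): dequeue an item, append it to the file for its measurement type, and push it into a bounded per-type buffer that plays the role of the shifting mechanism.

```python
import os
import queue
import tempfile
from collections import defaultdict, deque

data_q = queue.Queue()
graphs = defaultdict(lambda: deque(maxlen=100))  # bounded per-type graph data
logdir = tempfile.mkdtemp()                      # stand-in for the log folder

def analyze_once():
    """One pass of the analyzer: dequeue, file, and buffer one sample."""
    label, value = data_q.get()
    with open(os.path.join(logdir, f"{label}.txt"), "a") as f:
        f.write(f"{value}\n")       # append to that measurement type's file
    graphs[label].append(value)     # bounded array for on-screen graphing

# Simulate 250 interleaved samples from two measurement types.
for i in range(250):
    data_q.put(("meas1" if i % 2 == 0 else "meas2", float(i)))
for _ in range(250):
    analyze_once()
```

The files keep the complete record while each graph buffer holds only its most recent 100 points, which mirrors the "append to its file, shift the display array" division of labor described above.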

Message 8 of 8