
not enough memory to complete operation


I've looked through the forums for the "not enough memory to complete operation" error, and despite following the advice I found, the error still occurs.

 

I'm using LabVIEW 2012 to continuously monitor our system, recording temperature, power, etc. vs. time (values obtained from a USB-6008 DAQ).  Saving the data to file is done every 60 seconds using a small array (no problems here).  The typical run-time memory allocation for LabVIEW is about 180 MB (the computer has 4 GB of RAM).

 

I believe the issue is related to our wish to display this data on graphs for extended periods of time.  The current iteration of the code works as follows.

1) We have 2 XY graphs with 2 plots each.

2) For each plot, I am initializing a cluster of 2 arrays of 100,000 elements each (the XY pairs), wired to a shift register.  I know this is larger than can be displayed on a graph, but I am currently more concerned with reducing the number of data copies.

3) Every 10th data point is added to the arrays using an In Place Element Unbundle/Bundle along with a Replace Array Subset.  This works out to approximately 8,640 points per day.  (A single day is the shortest time span typically viewed.)  A text-language sketch of this pre-allocate-and-replace pattern appears at the end of this post.

4) For two plots on an XY graph, the two clusters are combined into an array (using Build Array).  I think this is my problem right here, since every time I update the graphs LabVIEW has to allocate memory for the 4 XY plots.  (Am I correct here?)

 

 

Decimating the data further when looking over multiple days would reduce the amount written to the plots.  However, this operation creates data copies.  Is it worthwhile in this case?

 

Instead of initializing 4 clusters (1 for each plot) and combining them into arrays later, would it be better to initialize an array of clusters of arrays (2 plots per graph) and update the data through an "index / unbundle / Replace Array Subset / bundle / Replace Array Subset" series of operations?
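
Since a block diagram can't be pasted as text, here's a rough Python/NumPy sketch of the pattern from steps 2 and 3 above (the buffer size and names are illustrative only, not the actual VI):

import numpy as np

# Pre-allocate the per-plot buffers once (the analogue of Initialize Array
# wired into a shift register).  Replacing elements in place avoids
# reallocating memory on every update.
N_POINTS = 100_000                    # illustrative size from step 2
x_buf = np.full(N_POINTS, np.nan)
y_buf = np.full(N_POINTS, np.nan)
write_idx = 0

def add_sample(t, value):
    """Analogue of Replace Array Subset: overwrite one element in place."""
    global write_idx
    x_buf[write_idx % N_POINTS] = t
    y_buf[write_idx % N_POINTS] = value
    write_idx += 1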

Message 1 of 11

You can significantly reduce the data complexity by using a plain complex 1D array (no pun intended).  I'm not sure if it will help with the memory issues, though.  Can you show us some simplified code?

Is the data spaced equally in x? If so, you don't need an xy graph at all.

 

Posting by phone, cannot test at the moment.

Message 2 of 11

Post what you can of your code.  Decimation sure sounds like it may help, but it might not solve all your issues.  Let's say your graph takes up the whole screen on a 1080p monitor.  That means you can display at most 1920 pixels from left to right, so loading up the graph with 100,000 points, or even 8,640, seems like overkill.

 

Pre-allocating the memory you need is another approach that usually helps with this type of issue and keeps an array bounded in size.
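
To put numbers on the pixel argument, here's a rough sketch of the decimation idea in Python/NumPy (illustrative only, obviously not LabVIEW): keep roughly one stored point per horizontal pixel before handing the data to the display.

import numpy as np

def decimate_for_display(x, y, max_points=1920):
    """A 1080p monitor is only 1920 pixels wide, so plotting more points
    than that per trace is wasted memory and redraw time."""
    if len(x) <= max_points:
        return x, y
    step = len(x) // max_points
    return x[::step], y[::step]

# Example: 100,000 stored samples reduced to under ~2,000 before plotting.
x = np.arange(100_000)
y = np.sin(x / 5000.0)
xd, yd = decimate_for_display(x, y)

A simple stride like this can hide short spikes; keeping a min/max pair per bucket preserves them at slightly more cost.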

Message 3 of 11

I'll work on writing up a simplified version of the code to post, but I'm not sure it will even show this error at all.  The current program can run for days without incident (the record is 9 days).

 

The x-axis is the timestamp of the data taken.  While I would like it to be every second, some other bits of code seem to be slowing it down (delta x generally ranges from 1 to 1.2 seconds).  I'm using XY graphs so that I can specify the array size and use Replace Array Subset.  It is my understanding that waveform charts continually grow their chart history, which would be bad for memory allocation in a program meant to run continuously, so I've elected to use XY graphs.  The end goal is to have this program running 24/7.

 

Altenbach, could you elaborate on the complex 1D array?  It was my understanding that in order to display more than one plot on an XY graph, you had to have an array of clusters.  If using a complex 1D array, how are multiple arrays wired for multiple plots?

Message 4 of 11

@pjr1121 wrote:

It is my understanding that waveform charts continually grow their chart history, which would be bad for memory allocation in a program meant to run continuously, so I've elected to use XY graphs.  The end goal is to have this program running 24/7.


This is not correct.  A Waveform Chart has a limited chart history; it only keeps so many data points and then throws away the oldest, like a circular buffer.  The default size is 1024 points, but it can be changed (just not at run time) by right-clicking the chart and selecting Chart History Length.
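
If it helps to see the behavior in a text language, the chart history acts like a circular buffer, roughly like this (Python sketch, names assumed):

from collections import deque

# A deque with a maximum length keeps only the newest N samples and
# silently discards the oldest, like a Waveform Chart's history buffer.
CHART_HISTORY_LENGTH = 1024           # the default mentioned above
history = deque(maxlen=CHART_HISTORY_LENGTH)

for sample in range(2000):
    history.append(sample)

print(len(history))   # 1024
print(history[0])     # 976 -- the oldest sample still retained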

Message 5 of 11

Here is a "gutted" version of the code.  The data acquisition subVIs are now just constants, and some of the other subVIs have had their code copied directly onto the block diagram (so I can post just a single VI).  Sorry for the larger-than-normal block diagram as a result.

 

There are 3 while loops (1 acquires data and passes it via a queue to the consumer loop; 1 handles user events and graph range updates; the largest places data in arrays/clusters for graphing and recording).

 

 

Message 6 of 11

Thanks for taking the time to comment up the code; it helps a lot.  If I were given a task like yours, I wouldn't try to keep the entire multi-day test in memory all at once.  Is the user constantly requesting the data from 5 days ago?  If they request it you could show it, but is it necessary to have it all available in memory all the time?

 

In the past I have written to a TDMS file, flushing the new data once every 60 seconds.  Then the graph can load a subset of that data.  Using the TDMS Read you can specify the length of the read and the offset, so you can read starting from the beginning or, with some math, read the last set of data written.  But don't get me wrong, TDMS is not the only way to do this; you could do something similar with your existing file structure.
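
The offset/length idea isn't TDMS-specific.  Here is a rough sketch of the same windowed read against a plain binary log in Python (the record layout is just an assumption for illustration, not the TDMS API):

import struct

RECORD = struct.Struct("<dd")         # one (timestamp, value) pair per record

def read_window(path, offset_records, count_records):
    # Read count_records records starting offset_records into the file --
    # the same offset/length idea as a TDMS Read.
    with open(path, "rb") as f:
        f.seek(offset_records * RECORD.size)
        data = f.read(count_records * RECORD.size)
    return [RECORD.unpack_from(data, i * RECORD.size)
            for i in range(len(data) // RECORD.size)]

# e.g. to show only the most recent day, compute the starting offset from
# the file size and read just that slice instead of the whole history.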

 

But what I think is more important is that your middle loop's arrays are unbounded in size.  Memory will continue to grow until it crashes.  You removed the write VI, but I'm guessing you are essentially overwriting the old file with all the same data plus 1 extra data point.  Why not just write that one extra data point by appending to the existing file?  Look at the Write To Spreadsheet File VI, which shows how to append to a file (appending is an optional input).

Message 7 of 11

@Hooovahh wrote:

But what I think is more important is that your middle loop's arrays are unbounded in size.  Memory will continue to grow until it crashes.  You removed the write VI, but I'm guessing you are essentially overwriting the old file with all the same data plus 1 extra data point.  Why not just write that one extra data point by appending to the existing file?  Look at the Write To Spreadsheet File VI, which shows how to append to a file (appending is an optional input).

If you'll notice, in the section where I comment that the save VI is deleted, there is an empty array wired to the shift register.  While this probably isn't best practice, the array builds up to 60 points, is appended to the text file, and is then overwritten with the empty array.  This is to avoid opening/closing the file every second.
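
In a text language, that buffer-for-a-minute-then-append pattern would look roughly like this (Python sketch; the file name and format are placeholders, not the actual VI):

samples = []                          # analogue of the shift-register array

def on_new_sample(timestamp, values, path="solar_log.txt"):
    """Collect one sample per second; flush to disk once per minute."""
    samples.append((timestamp, values))
    if len(samples) >= 60:
        with open(path, "a") as f:    # append rather than rewrite the file
            for t, v in samples:
                f.write(str(t) + "\t" + "\t".join(str(x) for x in v) + "\n")
        samples.clear()               # analogue of wiring the empty array back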

 

The other array there stores power and time for every data point while the sun is up for the day (probably near 50,000 data points).  This is done to calculate the day's insolation by integration.  This array could probably stand some improvement using an initialized array.  (I got tunnel vision on the other part of the code and missed this.)
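
For reference, the insolation calculation is just the time integral of power, e.g. with the trapezoidal rule (Python/NumPy sketch, names assumed):

import numpy as np

def daily_insolation(time_s, power_w):
    """Integrate power [W] over time [s] to get energy [J];
    divide by 3.6e6 for kWh."""
    return np.trapz(power_w, time_s)

# Example: a constant 500 W over one hour -> 1.8e6 J (0.5 kWh).
t = np.arange(0, 3601, 1.0)
p = np.full_like(t, 500.0)
print(daily_insolation(t, p) / 3.6e6, "kWh")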

 

In regard to the graphs containing multiple days' worth of data to be viewed at any time, yes, this is a requirement (the more the better).  This is for monitoring a solar array at our University and, once free of bugs, it will be linked to a web page using the Web Server, so individuals may view data from the past 1, 2, or even 3 weeks.  Normally, I would just have a separate VI for viewing data when desired, but 24/7 access to view the updated data is a requirement.

 

 

 

Am I correct that the Build Arrays (just prior to the graphs) make a data copy of each cluster?  Could this large data copy be the cause of the error?  My understanding from other posts is that this error is generally linked to LabVIEW not being able to allocate a contiguous block of memory for an array/cluster.

Message 8 of 11
Solution
Accepted by topic author pjr1121

Nothing obviously bad jumps out at me, but that could be why it sometimes runs for days before failing.  A few things to try:

 

1) The section where you overwrite the oldest 3600 data points and rotate the arrays has code that initializes a new array.  I'd move this Initialize Array outside the loop to make sure it only happens once.  You could always just replace the elements in the existing array with NaN.

2) I'd try getting rid of the build arrays like you highlighted.  Initialize these arrays outside the loop and replace elements as appropriate.

3) You build arrays of Irr and Time using Build Array and reset them once per day.  You could try initializing these outside the loop as well.

4) If something happens and the file write hangs, your queue can fill up.  The Dequeue Element already waits for data before it does anything, so I don't think you need the delay in the same loop; removing it might help the dequeue catch up if the queue gets big.  You can use the Get Queue Status VI to display the number of elements in the queue on screen.  As long as it isn't hidden by the error message, it might give you an idea whether the queue is causing your error.  You could also log the queue size to see if it grows.  (A text-language sketch of this follows after the list.)

5) If the error handling of your DAQ loop isn't good, an error there can cause that loop to run as fast as possible.  If you added the 1000 ms wait just to show that it isn't a greedy loop, then this could easily be the cause.  I don't think the USB DAQs are very reliable, so using one for days on end might run into this error.
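
To illustrate point 4, here's what the queue-depth check looks like in a text-language producer/consumer (Python sketch; in LabVIEW the equivalent is the Get Queue Status VI):

import queue
import threading
import time

data_q = queue.Queue()

def producer():
    while True:
        data_q.put(("timestamp", "sample"))   # stands in for the DAQ read
        time.sleep(1.0)

def consumer():
    while True:
        item = data_q.get()      # blocks until data arrives, so no extra wait
        # ... process / log `item` here ...
        backlog = data_q.qsize()              # analogue of Get Queue Status
        if backlog > 10:
            print("queue backlog:", backlog)  # or log it to spot slow growth

threading.Thread(target=producer, daemon=True).start()
threading.Thread(target=consumer, daemon=True).start()
time.sleep(5)    # let the sketch run briefly before exiting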

 

Hopefully this at least gets you pointed in the right direction.  My bet is on #5 🙂

Message 9 of 11

@Wart wrote:

Nothing obviously bad jumps out at me, but that could be why it sometimes runs for days before failing.  A few things to try:

 

5) If the error handling of your DAQ loop isn't good, an error there can cause that loop to run as fast as possible.  If you added the 1000 ms wait just to show that it isn't a greedy loop, then this could easily be the cause.  I don't think the USB DAQs are very reliable, so using one for days on end might run into this error.

 

Hopefully this at least gets you pointed in the right direction.  My bet is on #5 🙂


Since the data acquisition loop sends data to the processing loop via a queue, I can easily see the duration of each data acquisition.  In my experience, I have never noticed a string of measurements with very fast acquisition.  The 1000 ms Wait Until Next ms Multiple is there to slow down the acquisition, since the response time of the hardware being monitored is rather slow.

 

I will implement some of the other suggestions and watch for the error again, although it may be a couple of weeks before I am certain the error is gone.

Message 10 of 11