LabVIEW


How to correct "Not enough memory to complete this operation" error using profiler?

You'll need to use the chart properties multiplier and offset for the X-axis.  I would suggest testing for the system crash first before you invest time in making it work with DBLs to find it doesn't fix your problem.
Message 11 of 19
I have got the scaling done and it appears to work well. I still need help with the historical chart.
Message 12 of 19

Hi tahurst,

To address the potential "memory leak": my testing shows no memory leak. Once the waveform chart has been filled with data (500,000 points), its memory allocation can be seen to stop (in Task Manager -> VM Size). If the chart is then cleared, the allocated memory does not decrease, but it also never increases with continued data output to the waveform chart. I tested this with a chart history length of 100, with the rate set to 5,000 and samples to read set to 5,000. This is equivalent to the memory allocation for 1 Hz, 1 sample, with a 500,000 chart history length. All testing was done with only the top graph having data written to it.
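The equivalence claimed here can be checked with back-of-envelope arithmetic (a sketch using the numbers from this post, not a measurement of LabVIEW's actual allocation):

```python
# Point-count equivalence between the two chart configurations
# described above (numbers from the post; illustrative arithmetic only).

history_waveforms = 100       # chart history length (waveform data type)
samples_per_waveform = 5_000  # samples read per iteration at 5 kS/s

points_buffered = history_waveforms * samples_per_waveform
print(points_buffered)  # 500000, the same point count as 1 Hz / 1 sample
                        # with a 500,000-waveform chart history
```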

To address "not enough memory to complete this operation", I would first begin by testing the top graph alone (use a diagram disable or case structure to disable the bottom loop on your block diagram). My guess is that you will not run into any errors with your top graph (given that you have adequate memory on your computer). After you have logged plenty of data (five hours or so), try reading that data file into your bottom graph and see if that causes your error. You could speed that test up by increasing the sample rate and samples to read appropriately. I don't quite understand why you're continually re-reading the file for your bottom graph. This is quite inefficient, and depending on your requirements, you should look into a different approach.
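One way around rereading the whole file every iteration is to remember the read position and read only what was appended since the last pass. A minimal sketch in Python, assuming a flat binary file of little-endian DBLs (the actual VI writes TDMS, so this shows the idea, not a drop-in fix):

```python
import struct

def read_new_samples(path, last_offset):
    """Read only the samples appended since last_offset (in bytes).
    Assumes a flat binary file of little-endian float64 samples.
    Returns the new samples and the updated byte offset."""
    with open(path, "rb") as f:
        f.seek(last_offset)        # skip everything already read
        data = f.read()
    n = len(data) // 8             # whole 8-byte DBLs only
    samples = struct.unpack("<%dd" % n, data[:n * 8])
    return samples, last_offset + n * 8
```

Each loop iteration then passes the returned offset back in, so the cost per iteration stays proportional to the new data, not the whole log.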

I have attached an edit to your VI. I've fixed some obvious potential problems and added the diagram disable structure that I mentioned. The chart history length is set to 100 (for my testing at a rate of 5,000). How much memory does your computer have? What version of LabVIEW are you running? I look forward to hearing the results of your testing.

Message 13 of 19
Mike,

How is a chart history length of 100 the memory equivalent of 500,000?  If the chart history length is 100, then LabVIEW allocates memory for 100 samples (in this case, for 4 waveforms).  So, with your setup, when you plot your 5,000 samples, you only save the last 100 in the chart.  So a chart history length of 500,000 would allocate 5,000 times as much memory.  You can see that your modifications are not the same, since after 1 min 40 s you start losing data.  This does not happen in his VI, as he has almost 6 days' worth of history in his chart.
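The 1 min 40 s figure falls straight out of the numbers in the modified VI (a sketch, using the rates quoted in this thread):

```python
# A 100-waveform history at 5,000 samples per waveform, acquired at
# 5,000 S/s, holds exactly 100 seconds of data before old data
# starts falling off the chart.

history_waveforms = 100
samples_per_waveform = 5_000
sample_rate_hz = 5_000

seconds_held = history_waveforms * samples_per_waveform / sample_rate_hz
print(seconds_held)  # 100.0 seconds, i.e. 1 min 40 s
```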

If you run his original VI and yours and look at Windows Task Manager, you can see that his VI uses tremendously more memory than yours.

As to the "memory leak", you will see that the first time the chart is updated, LV takes a huge amount of memory, well more than one sample's worth, which I assume is preallocation for the chart.  But then, as the plotting continues, more and more memory is used until the history is filled up.  If you change the datatype to DBL, you don't see this.  To me, it appears that LV allocates memory for the waveforms, then doesn't use it and dynamically grabs more memory for each data point.

Regardless, your example does not mimic the memory usage of the chart.  I agree that the data file being created will have 500,000 samples in it, but that's not the source of his problem.
Message 14 of 19
After changing to a 2D DBL array and then converting to dynamic data for the TDMS write, it no longer gives the error, even at 20 Hz running for almost 24 hours. Now I just have a problem with putting absolute time on the live chart and getting the correct absolute time from the historical chart when the sample rate is anything but 1 Hz.
Message 15 of 19

Waveform charts and graphs assume a timebase of 1 by default, thus it appears that only a 1 Hz rate works.  That is because they only take in an array of Y values or a waveform with an inherent spacing between elements of 1.  Only an XY graph lets you define a Y vs. X relationship where the X values can be unequally spaced.

You can right-click the chart at edit time, open its properties, and set the X-scale multiplier there to something else, such as 0.001 for a 1 kHz rate.  Or you can write the value to the chart's XScale.Multiplier property node at run time.  The XScale.Offset property lets you define the time origin of the chart so that the data looks like it starts from the current time rather than from 0 or the year 1904 (a time constant of 0).
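Numerically, the chart maps each plotted index through the offset and multiplier. A sketch of that mapping (the function name here is illustrative, not a LabVIEW API; in LabVIEW you set the equivalent values through the XScale.Multiplier and XScale.Offset properties):

```python
# Each plotted point at index i is displayed at offset + i * multiplier.
# Setting the multiplier to the sample period (1/rate) and the offset
# to the acquisition start time (seconds since the 1904 LabVIEW epoch)
# makes the X axis read as absolute time.

def displayed_time(index, multiplier, offset):
    return offset + index * multiplier

rate_hz = 1_000             # 1 kHz acquisition
multiplier = 1 / rate_hz    # 0.001 s between points
offset = 3_277_000_000.0    # hypothetical start time, s since 1904

print(displayed_time(0, multiplier, offset))    # start of acquisition
print(displayed_time(500, multiplier, offset))  # half a second later
```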



Message Edited by Ravens Fan on 11-26-2007 10:41 PM
Message 16 of 19
Hi Matthew,
 

How is a chart history length of 100 the memory equivalent of 500,000?  If the chart history length is 100, then LabVIEW allocates memory for 100 samples (in this case, for 4 waveforms).  So, with your setup, when you plot your 5,000 samples, you only save the last 100 in the chart.

In the example I posted, the waveform chart was of the waveform data type, not the double data type. These two charts behave differently. With the waveform data type, you specify the "Number of waveforms in chart history buffer", as opposed to the "Number of data points in the chart history buffer" of the double data type. Therefore 100 waveforms of 5,000 points is the data-point equivalent of 500,000 data points. You are correct in stating that they are not the memory equivalent, however. Plotting 500,000 waveforms of 1 data point does seem to take more memory than 100 waveforms of 5,000 data points. This is likely because waveforms are clusters containing t0, dt, and an array of data. With the 1-sample waveforms you get 500,000 t0 and dt values, whereas you only get 100 t0 and dt values with the waveforms containing 5,000 samples.
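The overhead argument can be made concrete with rough numbers (a sketch; the 16-byte per-waveform figure for t0/dt is an illustrative assumption, not LabVIEW's actual layout, but the trend holds regardless of the exact constant):

```python
# Rough comparison of the two configurations discussed above: both
# buffer 500,000 DBL samples, but per-waveform cluster overhead
# (assumed 16 bytes for t0 + dt, 8 bytes per DBL sample) dominates
# when the waveforms are tiny.

def approx_bytes(n_waveforms, samples_each, overhead=16, sample_size=8):
    return n_waveforms * (overhead + samples_each * sample_size)

many_small = approx_bytes(500_000, 1)   # 500,000 one-sample waveforms
few_large  = approx_bytes(100, 5_000)   # 100 waveforms of 5,000 samples

print(many_small, few_large)  # same sample count, very different totals
```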


As to the "memory leak", you will see that the first time the chart is updated, LV takes a huge amount of memory, well more than one sample's worth, which I assume is preallocation for the chart.  But then, as the plotting continues, more and more memory is used until the history is filled up.  If you change the datatype to DBL, you don't see this.  To me, it appears that LV allocates memory for the waveforms, then doesn't use it and dynamically grabs more memory for each data point.

The ability to preallocate is heavily dependent on the data type. For the DBL datatype, it is possible to predict how much memory to allocate, because the chart is set to hold a specified number of points. For the waveform datatype, it is impossible to predict how many data points will be in each waveform, so LabVIEW cannot pre-allocate all of the necessary memory. This is why you see LabVIEW allocating more memory as more waveforms are charted, as opposed to doubles.
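A minimal model of the two allocation strategies described above (a Python sketch, not LabVIEW's internals): a DBL history of known length is a fixed ring buffer allocated once, while a waveform history has a fixed number of slots whose contents are sized only when each waveform arrives.

```python
from collections import deque

# DBL history: a fixed-size ring buffer, allocated up front.
dbl_history = deque([0.0] * 500_000, maxlen=500_000)

# Waveform history: 100 slots, but each slot's data array is sized
# only when that waveform arrives, so memory grows with the data.
wf_history = deque(maxlen=100)
for _ in range(150):                  # more writes than slots
    wf_history.append([0.0] * 5_000)  # oldest waveforms fall off

print(len(dbl_history), len(wf_history))  # 500000 100
```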

Your report of the clear-chart issue found here has been filed to R&D. I do admit that it seems a clear chart should not allocate any more memory, or at least should not operate differently from the button in your example. My initial testing shows it may be that the memory allocated by the clear chart before the chart history length is full is eventually used by the chart when displaying waveform data. R&D will have to look into this issue in greater detail. Thank you for pointing out this behavior.

Message 17 of 19
tahurst,
 
Please see this KB for more information about Ravens Fan's post. You can also see the image below for what this would look like in your code.
 


Message Edited by lion-o on 11-28-2007 09:44 AM
Message 18 of 19
Mike,

I guess this is where my confusion is.  I would have thought that a waveform chart with a history of 500,000 would plot 500,000 data points, regardless of how many points showed up in an individual waveform.  So, if I plotted 500,000 1-point waveforms, the history would be filled, and if I plotted 100 5,000-point waveforms, I would fill the history.  If the chart history is 500,000 waveforms, then I could plot 500,000 5,000-point waveforms or 500,000 1-point waveforms or a mixture of the two, and the chart could have a varying number of data points depending on how many points I collect in each waveform.
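The behavior being described here can be modeled in a few lines (a sketch of the counting semantics, not of LabVIEW itself): the history buffer counts waveforms, so the total number of buffered points varies with how many samples each waveform carries.

```python
from collections import deque

# Tiny history for illustration: 3 waveform slots.
history = deque(maxlen=3)
for n in (1, 5_000, 1, 5_000):      # mixed waveform sizes
    history.append([0.0] * n)       # oldest waveform falls off

total_points = sum(len(wf) for wf in history)
print(len(history), total_points)   # 3 waveforms, 10001 points
```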

It is a very interesting decision NI made to have it work this way, because, as you said, the memory usage is extremely non-deterministic.  I guess it seems non-intuitive to me.
Message 19 of 19