
LabVIEW memory is full


Hello folks,

 

I am working on a data acquisition project that acquires data from NI DAQ cards. I have 192 channels and a sampling interval of 1 ms. The application is viewed on 6 monitors with 32 graphs in total, and I need all 32 waveform graphs open at the same time for 600 seconds during the acquisition.

 

Now the problem I face: the moment I start acquiring data, the application's memory usage climbs, and once it reaches 2.5 GB (around 500 seconds in) I get the "LabVIEW: Memory is full" message, but I need to keep going for another 100 seconds. Task Manager shows only 60% of physical memory in use, and no other application is running during the acquisition.

 

My system configuration is 8 GB of RAM, a Xeon processor, and Windows 7 64-bit.

 

Is it the case that a LabVIEW application cannot use more than 2.5 GB of RAM? If so, what could be an alternative solution?

Your help will be truly appreciated.

CLAD
Passionate for LabVIEW
Message 1 of 20

I assume you are using LabVIEW 32-bit. Is that correct?

 

The maximum a 32-bit application can use is 4 GB (details). You have data in the wires as well as data in all the graphs, etc., so there are several data copies of everything floating around. Arrays need to be contiguous in memory, so you not only need sufficient RAM, you also need sufficient contiguous RAM for these data structures.
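For a rough analogy of the "several data copies" point in text-based terms (a NumPy sketch for illustration only; LabVIEW is graphical, so this is not LabVIEW code):

    import numpy as np

    # 600,000 DBL samples of one channel: ~4.6 MiB.
    a = np.random.randn(600_000)

    # Each operation below produces a full new copy, so the same data
    # can easily be alive several times at once -- the equivalent of
    # branched wires, graph terminals, and intermediate results.
    b = a * 2.0          # one copy
    c = np.sort(a)       # np.sort returns another, sorted copy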

 

Maybe you need to re-think your requirements to fit within the bounds of the system capabilities.

Message 2 of 20

Hello altenbach, 

 

Thanks for your reply,

 

Yes, I do use LabVIEW 32-bit.

 

As you mentioned, a 32-bit application can use up to 4 GB of memory, so what could be preventing my application from getting beyond 2.5 GB?

 

Thanks again.

CLAD
Passionate for LabVIEW
Message 3 of 20

I would recommend decimating the data that you plot. There is no reason to draw 1000 points for each second of acquired data on the graph. Take every 10th point when you plot; that will be plenty. Visually, the human eye probably won't be able to tell the difference anyway.
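As a sketch of the idea (NumPy array slicing standing in for the decimation step on the LabVIEW diagram; the numbers match the 1 ms / 600 s scenario):

    import numpy as np

    # One channel: 1000 samples/s for 600 s.
    samples = np.random.randn(600_000)

    # Plot only every 10th point; the full data set stays untouched.
    display = samples[::10]      # 60,000 points instead of 600,000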

Message 4 of 20

32-bit Windows systems can't allocate the entire 4 GB to a single task. There are tricks programmers can use to let a 32-bit application running in a 64-bit environment access up to 4 GB of memory, but that's a bit beyond your question. It really sounds like you need to display a subset of the data you've gathered. I'll leave it to your GUI experts to decide whether simultaneously displaying dozens of graphs is the best approach, but if it is, then some sort of decimation is the way to go.

 

Alas, we don't know the entire application, but from your description it sounds like you're only acquiring about 219 MB of actual data across all channels after 600 seconds. I believe the memory issue could be resolved without decimating the data (if for some reason you can't) by reducing the memory copies made in your code... note that each time you do 'stuff' to your data, you're potentially making a copy of it in memory. There's a way to show all buffer allocations (Tools»Advanced»Show Buffer Allocations) in LabVIEW... I recommend using that to see where the real memory shark is.
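For reference, a back-of-the-envelope check of that 219 MB figure, assuming it refers to unscaled I16 samples at 2 bytes each (an assumption; the thread does not state the sample format):

    channels = 192
    rate = 1000            # samples/s (1 ms sampling interval)
    duration = 600         # seconds
    bytes_per_sample = 2   # raw I16 samples -- an assumption

    total = channels * rate * duration * bytes_per_sample
    print(total / 2**20)   # ~219.7 MiB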

-John Sullivan
Problem Solver
Message 5 of 20
Solution
Accepted by topic author jatinpatel1489@gmail.com

I assume that the 32 waveform graphs on your 6 monitors should together display all 192 channels over the full 600 s. Correct?

 

So let's make a small projection.

Assuming that each data point is scaled, we are talking about double values => 8 bytes per sample.

192 (channels) * 8 bytes (per sample) * 1000 (samples/second) * 600 (seconds) = 921.6 MB (≈ 879 MiB)

So if you have all values in a single 2D array, the data alone requires a contiguous chunk of roughly 900 MB.

Splitting that across the waveform graphs, the graph indicators together hold roughly one more copy of the data (with 32 graphs, about 29 MB each, each chunk again contiguous). That totals about 1.8 GB even if there is no additional copy of the data.
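The same projection in code form (plain arithmetic, nothing LabVIEW-specific):

    channels, rate, duration = 192, 1000, 600
    bytes_per_dbl = 8

    data = channels * rate * duration * bytes_per_dbl   # one 2D DBL array
    graphs = data          # graph indicators hold roughly one more copy
    print(data / 1e6)      # ~921.6 MB of raw data
    print((data + graphs) / 1e9)   # ~1.84 GB total, before any extra copies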

 

The reason why you are running out of memory well before hitting the 4 GB limit is that arrays have to be stored contiguously in memory (as altenbach already told you), so you are simply running into memory fragmentation.

As Greg told you, reducing the data for display is a very clever and efficient way to cut overall memory consumption (nobody can see a single sample in a graph holding 600,000 values per channel... at least I haven't seen a monitor with more than 600,000 horizontal pixels).

That only helps if your display path does not force copies of the original data set, though: reducing the copies for display does not shrink the raw data pool (remember: close to 900 MB, contiguous!).

 

Honestly, I would stream the raw data to a file and keep only a reduced data set in memory for display (something like 1:500).
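A minimal sketch of that pattern in NumPy terms (illustrative only; acquire_one_second and the file name are hypothetical stand-ins for the real DAQ read and log file):

    import numpy as np

    CHANNELS, RATE, REDUCE = 192, 1000, 500       # 1:500 display reduction

    def acquire_one_second():
        # Placeholder for the real DAQ read; returns one second of
        # fake data shaped (CHANNELS, RATE).
        return np.random.randn(CHANNELS, RATE)

    display_chunks = []                           # reduced data kept in memory

    with open("raw_stream.bin", "wb") as f:       # hypothetical log file
        for _ in range(600):                      # one chunk per second
            chunk = acquire_one_second()
            chunk.tofile(f)                       # stream full-rate data to disk
            display_chunks.append(chunk[:, ::REDUCE])   # keep every 500th sample

    display = np.concatenate(display_chunks, axis=1)    # (192, 1200) for the graphs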

 

Norbert
----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 6 of 20

I second Norbert. One option we have used in the past for large data sets, if you need the data to stay in memory (for some calculation, say), is an array of data value references (DVRs), each holding one channel's array. Each DVR represents a channel, and you store just that channel's data in its respective DVR. This gives you many smaller contiguous chunks rather than one large one, which helps with the contiguous-memory issue.
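The effect, sketched with a list of NumPy arrays standing in for the array of DVRs (an analogy, not LabVIEW semantics):

    import numpy as np

    channels, samples = 192, 600_000

    # A single 2D DBL array needs one ~879 MiB contiguous block:
    #   big = np.zeros((channels, samples))   # may fail once memory is fragmented

    # Per-channel storage needs 192 separate ~4.6 MiB blocks instead,
    # which are far easier to place in a fragmented address space.
    per_channel = [np.zeros(samples) for _ in range(channels)]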

Message 7 of 20

You may want to read this white paper. It is old, but the techniques still apply. If you read up on the data value reference and the In Place Element structure and add that to the info in the white paper, you can solve most memory issues to one extent or another. The paper also includes a section, with code, on decimating for display without losing data/peak information.
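One common way to decimate for display without losing peaks (a sketch of the general technique, not the white paper's code) is to keep each bin's minimum and maximum:

    import numpy as np

    def minmax_decimate(x, factor):
        # Reduce x by `factor`, keeping each bin's min and max so that
        # narrow spikes still show up in the decimated trace.
        n = (len(x) // factor) * factor
        bins = x[:n].reshape(-1, factor)
        out = np.empty(2 * len(bins))
        out[0::2] = bins.min(axis=1)
        out[1::2] = bins.max(axis=1)
        return out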

Message 8 of 20

Yup, you can decimate by a factor of 100 and it's still more than the monitor can display. I had an app like this where we needed to display long data sets with the ability to zoom in and out. I stored several copies to disk at different decimation levels and displayed the appropriate one, which kept things running smoothly at all levels of zoom.
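A sketch of that multi-resolution idea (the decimation factors and pixel threshold are made-up values):

    import numpy as np

    def build_pyramid(x, factors=(10, 100, 1000)):
        # Precompute several decimation levels once (e.g. when writing
        # to disk); key 1 is the raw data.
        return {1: x, **{f: x[::f] for f in factors}}

    def pick_level(pyramid, visible_samples, screen_px=1920):
        # Use the coarsest level that still gives ~2 samples per pixel.
        for f in sorted(pyramid, reverse=True):
            if visible_samples // f >= 2 * screen_px:
                return f
        return 1    # fully zoomed in: fall back to the raw data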

Message 9 of 20

Thanks, Norbert B,

 

I was using one VI in 32 places to build the data; instead of only one instance staying in memory, all of them stayed in memory and caused most of the memory problems. After solving this issue I can run the application for more than 600 seconds, and decimating the data should improve memory usage by roughly a factor of 10 on top of that.

 

I would appreciate it if you could suggest some papers or links on VI memory usage and contiguous memory in LabVIEW.

 

Thanks again.

CLAD
Passionate for LabVIEW
Message 10 of 20