
LabVIEW


VI using too much memory

Solved!
Go to solution

Hi all,

I am trying to write code to peak-pick in two dimensions on a rather large data file (approximately 1000 x 72000 points). The result should be integer values where each peak is reduced to its maximum point for further statistical analysis (see attached code). The problem is that when this code runs, the VI very quickly eats through all of the available memory and crashes. I am running a system with 3 processors and 4 GB of RAM, and the program has crashed every time after using about 3.5 GB, so I need to find a way to reduce the memory usage of this VI. Attached are my VI and an example data file. Thanks.

<>< Eric

 

 

Message 1 of 5
Solution
Accepted by topic author Eric-APU

First, reading 1000 x 72000 x 8 bytes ≈ 600 MB for a single copy in memory is pushing it. Then you have additional data copies in both graphs.

 

All your values seem to be small integers, so a 2D array of U8 would work equally well using 8x less memory.
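The arithmetic behind that 8x saving is easy to verify outside of LabVIEW. Here is a small NumPy sketch (not the original VI; the column count is reduced so it runs quickly, and the array contents are placeholders):

```python
import numpy as np

# Stand-in for the thread's data set: DBL (float64) is 8 bytes per element,
# U8 (uint8) is 1 byte per element.
rows, cols = 1000, 7200  # reduced from 72000 for a quick demo

data_dbl = np.zeros((rows, cols), dtype=np.float64)
data_u8 = data_dbl.astype(np.uint8)  # safe here only because values fit in 0..255

print(data_dbl.nbytes)               # 57,600,000 bytes
print(data_dbl.nbytes // data_u8.nbytes)  # 8x smaller
```

The same ratio holds in LabVIEW: converting the 2D array from DBL to U8 before it is wired anywhere downstream cuts every subsequent copy by the same factor.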

 

Then you are sending these massive arrays to two intensity graphs. The number of pixels in your graph is a tiny fraction of the array size, so you need to reduce the data for display. Graphing 72000 columns on a display less than 1000 pixels wide is ridiculous.
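One simple way to reduce the data before display is plain decimation: keep only every Nth row and column so the array sent to the graph is no larger than the screen. A NumPy sketch of the idea (array shape and stride values are illustrative, chosen to mimic the thread's 1000 x 72000 case):

```python
import numpy as np

# Illustrative data set (smaller than the real 1000 x 72000 array).
data = np.random.randint(0, 10, size=(1000, 7200), dtype=np.uint8)

# Keep every 10th row and every 72nd column, leaving roughly one
# array element per screen pixel instead of hundreds.
preview = data[::10, ::72]
print(preview.shape)  # (100, 100)
```

In LabVIEW the equivalent is a small loop (or Decimate 1D Array applied per dimension) that builds the thumbnail array; the full-resolution data never needs to touch the graph.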

 

Message 3 of 5

Thank you very much. A combination of these two suggestions fixed the problem. Unfortunately, I have to process the whole data set later, so I can't cut any of it out; the display was just a check to make sure it was working properly. However, by switching to U16 integers, the memory overflow was averted. Thanks!

Message 4 of 5

You could easily make the intensity graphs 100x smaller in memory footprint, e.g. by remapping your original array into a 2D array where each dimension is 10x smaller and then setting the axis increment to 10. For example, take each 10x10 section, compute its max (or its average), and reduce it to a single element in the new array.
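The 10x10 block reduction described above can be sketched in NumPy: reshape the array so each 10x10 tile gets its own pair of axes, then collapse those axes with a max (or mean). Shapes and values here are placeholders standing in for the thread's data:

```python
import numpy as np

# Illustrative data (values fit in U16, as in the thread).
data = np.random.randint(0, 256, size=(1000, 7200), dtype=np.uint16)

block = 10
r, c = data.shape  # both assumed divisible by block for this sketch

# Reshape so each 10x10 tile occupies axes 1 and 3, then take the max
# over each tile; use .mean(axis=(1, 3)) instead for an average.
reduced = data.reshape(r // block, block, c // block, block).max(axis=(1, 3))
print(reduced.shape)  # (100, 720)
```

Using the max per tile is the natural choice here, since the goal of the original VI is to find peak maxima: decimating by max guarantees no peak disappears from the preview, whereas averaging would smear narrow peaks down.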

Message 5 of 5