
LabVIEW 64-bit crashes when using 16 GB of memory

Solved!

Hi

 

I am using the 64-bit version of LabVIEW to process and render a large 3D volumetric biological scan. The acquired data is saved in a series of .tdms files, which I read in a for loop to generate 2D projections while keeping each plane so I can render the final 3D volume. As I read the files, the memory LabVIEW uses increases with every plane that is concatenated per loop iteration to hold the 3D volume data in memory. When the memory in use exceeds 16 GB, LabVIEW crashes. I have attached the error messages as PNGs. When I attempt to debug the problem, LabVIEW reports an unhandled Win32 exception. I read online that LabVIEW 64-bit should support up to 16 TB of virtual memory:

 

https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000PAEOSA4&l=de-DE

 

Am I doing something wrong?

 

By the way, when I load the data into MATLAB by reading the TDMS files, I receive no error, but I would like to render the volumetric data in LabVIEW rather than MATLAB. Is this possible?

Message 1 of 10
Solution
Accepted by topic author zak9001

Let me guess: You append all the data to a single 3D array of double precision numbers?

 

If that is the case then yes, that is fully expected behaviour. LabVIEW multidimensional arrays are internally stored as one linear memory block whose element count is the product of all the dimension sizes. When your array reaches 16 GB it contains 2G double precision elements, which is the maximum a single LabVIEW handle can contain. This could have been 4G if the internal size parameter in a handle had been made an unsigned int, but that is not something that can be changed now without a lot of backwards compatibility headaches.
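
To put numbers on that, here is a plain C sketch (not LabVIEW code), assuming the element count is held in a signed 32-bit integer as described above:

#include <stdint.h>
#include <stdio.h>

/* Rough sketch of the arithmetic behind the crash, assuming a signed
   32-bit element count per array handle (as described above). */
int main(void)
{
    int64_t max_elements = INT32_MAX;                            /* ~2^31 - 1 elements */
    int64_t bytes_dbl = max_elements * (int64_t)sizeof(double);  /* 8 bytes each */
    int64_t bytes_sgl = max_elements * (int64_t)sizeof(float);   /* 4 bytes each */

    printf("max elements : %lld\n", (long long)max_elements);
    printf("as doubles   : %.1f GiB\n", bytes_dbl / (1024.0 * 1024.0 * 1024.0));
    printf("as singles   : %.1f GiB\n", bytes_sgl / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
/* Prints roughly 16 GiB for doubles and 8 GiB for singles: the element
   count stays the same, only the byte count halves. */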

 

Anyway, if this is your problem you have to rethink your approach. LabVIEW 64-bit can indeed handle whatever total memory the underlying OS can provide to a single process, up to 16 TB, which, by the way, is more than current Windows versions will actually give a process: Windows 10 limits this to 2 or 6 TB per process depending on the version you use, and to 128 GB on Windows 10 Home. Segmenting your array data into an array of clusters, each cluster containing a 2D array, might be a possibility. That way each individual array only holds the data for one 2D data set, and LabVIEW 64-bit can certainly handle that (within what your system can provide; going beyond what you have physically available in your computer is possible, but will certainly give you a painfully slow experience).
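
As a rough C analogy of that layout (not LabVIEW code; the sizes are placeholders): a small table of per-plane buffers instead of one giant contiguous block.

#include <stdlib.h>

/* C analogy of the "array of clusters of 2D arrays" layout suggested
   above: one modest pointer table, and one separately allocated buffer
   per plane, so no single 16 GB contiguous block is ever required. */
int main(void)
{
    const size_t planes = 200, rows = 1024, cols = 1024;  /* placeholder sizes */

    double **volume = malloc(planes * sizeof *volume);    /* small pointer table */
    if (!volume) return 1;

    for (size_t p = 0; p < planes; p++) {
        volume[p] = malloc(rows * cols * sizeof **volume); /* one plane, ~8 MB */
        if (!volume[p]) return 1;
    }

    /* ... fill and use volume[p][r * cols + c] ... */

    for (size_t p = 0; p < planes; p++) free(volume[p]);
    free(volume);
    return 0;
}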

Rolf Kalbermatter
My Blog
Message 2 of 10

Let me guess: You append all the data to a single 3D array of double precision numbers?

 

Yes, I was. I changed it to single precision and now it works. Thanks a lot!

Message 3 of 10

Hmmm, that feels weird. You still end up having the same 2G elements in that array. At best it buys you a little leeway but certainly can't solve the underlying problem.

Rolf Kalbermatter
My Blog
Message 4 of 10

I thought it is I32 per dimension.

 

In any case, appending is a bad idea because you constantly need to reallocate as the array grows, and arrays need to be contiguous in memory. Since you seem to know the final size, it would be significantly more efficient to allocate at the final size once and then replace elements with data.
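
In C terms (just an analogy, not LabVIEW code; the LabVIEW equivalent would be Initialize Array followed by Replace Array Subset instead of Build Array in a loop), the difference looks like this:

#include <stdlib.h>
#include <string.h>

/* Allocate once at the known final size, then copy each plane into its
   pre-reserved slot; no reallocation during the loop. Sizes are illustrative. */
int main(void)
{
    const size_t planes = 100, rows = 512, cols = 512;
    const size_t plane_elems = rows * cols;

    float *volume = malloc(planes * plane_elems * sizeof *volume);  /* one allocation */
    if (!volume) return 1;

    float *plane = malloc(plane_elems * sizeof *plane);             /* scratch buffer */
    if (!plane) return 1;

    for (size_t p = 0; p < planes; p++) {
        /* ... read/compute one plane into `plane` ... */
        memcpy(volume + p * plane_elems, plane, plane_elems * sizeof *plane);
    }

    free(plane);
    free(volume);
    return 0;
}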

Message 5 of 10

@altenbach wrote:

I thought it is I32 per dimension.


If that is the case then reshaping an array may not work; that doesn't seem right.

 

mcduff

Message 6 of 10

@mcduff wrote:

@altenbach wrote:

I thought it is I32 per dimension.


If that is the case then reshaping an array may not work; that doesn't seem right.


Yes, I agree that it is not entirely clear, but I don't have access to a machine with >16GB to do some tests.

Message 7 of 10

@mcduff wrote:

@altenbach wrote:

I thought it is I32 per dimension.


If that is the case then reshaping an array may not work; that doesn't seem right.

 

mcduff

It's definitely an I32 per dimension. But each resize needs to calculate the new size by multiplying the dimension sizes. This can go wrong if the sizes are not explicitly coerced/cast to an int64, or better yet a size_t, before the multiplication. The result then needs to be multiplied again by the sizeof() of the array element. My guess was that while the last multiplication is obviously done in 64-bit space for 64-bit LabVIEW (since sizeof() returns a value compatible with size_t according to the standard; otherwise you could not create a 16 GB array at all), the first multiplication likely isn't, which is a very easy mistake to make. C does not auto-promote variables to avoid such problems; all such widening has to be done explicitly.
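
A small C illustration of that kind of mistake (the dimension sizes here are hypothetical, not anything from LabVIEW's actual source):

#include <stdint.h>
#include <stdio.h>

/* Multiplying the dimension sizes in 32-bit arithmetic before (instead
   of after) widening them. */
int main(void)
{
    int32_t planes = 2048, rows = 1024, cols = 1024;   /* 2^31 elements total */

    /* Wrong: the product is computed as a 32-bit int and overflows
       (technically undefined behaviour; on common platforms it wraps to
       a negative value), and only then is the result widened. */
    int64_t bad  = (int64_t)(planes * rows * cols) * (int64_t)sizeof(double);

    /* Right: widen a factor first, then multiply in 64-bit space. */
    int64_t good = (int64_t)planes * rows * cols * (int64_t)sizeof(double);

    printf("without cast: %lld bytes\n", (long long)bad);
    printf("with cast   : %lld bytes\n", (long long)good);
    return 0;
}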

 

And changing the size of each dimension itself is not an option (except by adding new 64-bit type codes for strings and arrays). But I would claim that supporting 64-bit sized arrays, while nice, would be less important than a new Unicode string datatype.

The 32-bit dimension size as part of the handle has been fully documented since the earliest days, and changing it would break just about any DLL interface using LabVIEW native handles and screw up everything that assumes anything about the flattened format of LabVIEW data.
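
For reference, this is roughly what such a handle looks like on the C side of a DLL call (a sketch based on the documented external-code conventions; the real declarations and packing pragmas live in extcode.h):

#include <stdint.h>

/* Approximate shape of a LabVIEW 2D double array handle as seen by a DLL:
   the dimension sizes are 32-bit integers, which is exactly the
   compatibility constraint described above. */
typedef struct {
    int32_t dimSizes[2];   /* rows, columns: 32-bit each */
    double  elt[1];        /* data follows, rows * columns elements */
} TD2DArray;
typedef TD2DArray **TD2DArrayHdl;  /* LabVIEW passes a handle (pointer to pointer) */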

What makes me doubt my theory is the claim that moving to single precision for the array type would solve the crash.

Rolf Kalbermatter
My Blog
Message 8 of 10

So back to the original problem. I could not find where you say how much physical memory you actually have. Can you give us some numbers? Where does the excessive network usage of LabVIEW come from? Can't you close some of the other memory hogs, such as MATLAB?

 

Can you explain in a bit more detail what kind of operations you need to do on these 3D arrays? Just projections along the dimensions, or in arbitrary orientations?

 

As Rolf already suggested, a 1D array of clusters where each cluster contains a 2D array will get you around some of the array limitations. (It seems you are using LabVIEW 2017. For newer versions (2019+) you could also do a map with an I32 key defining the plane and the value a 2D array representing the plane data.)

 

What is the actual data range and quantization? If these are B&W images, maybe 8 or 16 bit integers are sufficient to represent the data if scaled properly.
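
For example, a sketch of that scaling step in plain C (assuming the data range is known up front):

#include <stdint.h>
#include <stddef.h>

/* Scale floating-point intensities into unsigned 16-bit values, cutting
   memory per element from 8 bytes (double) to 2. Assumes the data range
   [lo, hi] is known. */
static void quantize_u16(const double *src, uint16_t *dst, size_t n,
                         double lo, double hi)
{
    double scale = (hi > lo) ? 65535.0 / (hi - lo) : 0.0;
    for (size_t i = 0; i < n; i++) {
        double v = (src[i] - lo) * scale;
        if (v < 0.0)     v = 0.0;        /* clamp outliers */
        if (v > 65535.0) v = 65535.0;
        dst[i] = (uint16_t)(v + 0.5);    /* round to nearest */
    }
}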

 

As I already said, growing arrays can get expensive because of memory allocation issues, and it is much better to preallocate once at the final size. LabVIEW will preemptively overallocate when arrays seem to be growing, but eventually it will run out again and needs to find a new and larger contiguous stretch of memory, copy everything over, and continue there. Wash, rinse, repeat. You get more memory fragmentation, and you will run out of a sufficiently large stretch of contiguous memory long before you run out of free memory.
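
A rough C model of what that growth pattern costs (not LabVIEW's actual allocator, just the general append-and-reallocate idea):

#include <stdlib.h>
#include <string.h>

/* Capacity grows geometrically, but every growth step still needs a new
   contiguous block plus a full copy of everything stored so far. */
typedef struct {
    float  *data;
    size_t  len, cap;
} GrowArray;

static int grow_append(GrowArray *a, const float *plane, size_t n)
{
    if (a->len + n > a->cap) {
        size_t new_cap = a->cap ? a->cap * 2 : n;          /* over-allocate */
        while (new_cap < a->len + n) new_cap *= 2;
        float *p = realloc(a->data, new_cap * sizeof *p);  /* may copy it all */
        if (!p) return -1;                                 /* out of memory */
        a->data = p;
        a->cap  = new_cap;
    }
    memcpy(a->data + a->len, plane, n * sizeof *plane);
    a->len += n;
    return 0;
}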

 

What are the sizes in each of the three dimensions?

 

You also need to be careful to avoid unnecessary copies in memory. Don't shuffle these arrays across subVI boundaries, don't connect them to controls and indicators (especially on open front panels), and don't use local variables or value property nodes. Can you show us a simplified version of your program (using simulated data and much smaller 3D arrays) so we can see if optimizations are possible?

 

 

 

Message 9 of 10

Changing from double precision to single precision reduces the memory needed to keep concatenating a new layer into the 3D volume. So with single precision I do not exceed the 16 GB limit over my field of view, whereas with double precision I do and LabVIEW crashes.

 

And I am using LabVIEW 2017. Thanks again for the solution!

Message 10 of 10