02-14-2017 03:44 PM
I am fetching data from an RSA test instrument. I convert the data from little endian to big endian and plot it. I've minimized memory allocation as much as possible. 32-bit LabVIEW reports "Memory Full" (2-second acquisition), and from watching the Win7 Task Manager this happens when memory usage hits 2 GB. I recompiled and saved the project in 64-bit LabVIEW and the issue still persists: I can watch memory climb to 3 GB, and then 64-bit LabVIEW crashes. There is no "Out of Memory" error message, just a crash report. I'm fetching 200 MB. Has anyone else experienced this?
I'm wondering if anyone has a workaround to prevent the memory crash, or if this is fixed by some later release. I could remove some UI code, but why bother with 64-bit and 16 GB of RAM?
02-14-2017 04:34 PM
If you fill 2 GB with 200 MB of data, you have 10 copies of the data.
Do you work only with the fetched data, or do you accumulate data from multiple runs?
Can you step through the operations and figure out which portions of the code make the "memory go up"?
Do you have open subVIs that process this data? Each control/indicator on an open front panel means one more copy.
Do you have "retain wire values" turned on?
If you can build an exe, I would debug with small data and process the full data with the exe.
02-14-2017 10:39 PM
I'm not asking how to optimize my code; it's already optimized. I'm asking for 64-bit LabVIEW to handle memory allocations greater than X GB.
I'll give more insight to satisfy the curiosity. I'm fetching IQ data from the RSA. I convert the 200 MB of bytes from little endian to big endian (LabVIEW format) using (1) Reverse String, (2) String to Byte Array, and (3) Type Cast to double. The Type Cast allocates memory. I then reverse the array, which allocates memory. Then I split the interleaved array (Decimate 1D Array, which allocates) to get my I and Q data. Now I have two arrays that I have to Transpose (allocates) to show in a UI plot (LabVIEW graphs plot by column, as you may know). What the plot shows is identical to what is plotted on the RSA screen.
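(For readers without the VI in front of them, that chain amounts to roughly the following. This is only a Python/NumPy sketch of the steps as described, with "iq_capture.bin" standing in as a hypothetical source for the VISA payload, just to show where the buffer copies come from.)

```python
import numpy as np

# 'payload' stands in for the raw VISA read: little-endian float64, I/Q interleaved.
payload = open("iq_capture.bin", "rb").read()        # hypothetical source, ~200 MB in this case

rev_bytes = payload[::-1]                            # "Reverse String"                 -> copy 1
as_big    = np.frombuffer(rev_bytes, dtype=">f8")    # "Type Cast" to big-endian DBL (a view, no copy)
dbl       = as_big[::-1].astype("=f8")               # "Reverse 1D Array" + native order -> copy 2
i_data    = dbl[0::2].copy()                         # de-interleave: I samples          -> copy 3
q_data    = dbl[1::2].copy()                         # de-interleave: Q samples          -> copy 4
plot_data = np.column_stack((i_data, q_data))        # arrange by column for the graph   -> copy 5
```

With a 200 MB payload, those copies alone add roughly 800 MB on top of the original buffer, before the front-panel array indicators and the graph keep their own copies.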
You see, I don't want to conserve any more memory; I want to use the memory resources I have. I'm wondering if there is a fix for 64-bit LabVIEW 2015 (no SP)?
02-15-2017 12:28 AM - last edited on 03-12-2025 09:32 AM by Content Cleaner
Sorry for asking these silly questions:
- Are you sure you are running 64-bit LabVIEW?
- From your explanation below, I suspect you might still be running 32-bit LabVIEW.
How much memory does LabVIEW have access to?
A. An application can request memory, but it is up to the operating system to accept or deny that request based on what is available (either physical or virtual). LabVIEW 32-bit on Windows XP 32-bit can, by default, only use up to 2 GB of address space. There is a 3 GB boot option that can allow applications on Windows XP 32-bit to use up to 3 GB of address space. LabVIEW 32-bit running on Windows Vista 64-bit or Windows 7 64-bit can use up to 4 GB of address space. In any of these configurations you can still run into large buffers failing to allocate if enough contiguous memory is not available. LabVIEW 64-bit on a 64-bit operating system supports as much RAM as the operating system supports (theoretically, 16 exabytes). Currently, 64-bit Windows imposes a 16 TB limit.
- Do you have both versions installed (a quick way to check which one you actually launch is sketched after this list)?
The 32-bit version of LabVIEW is in: "C:\Program Files (x86)\National Instruments\LabVIEW 2015\LabVIEW.exe"
The 64-bit version of LabVIEW is in: "C:\Program Files\National Instruments\LabVIEW 2015\LabVIEW.exe"
- You can use the Profile Performance and Memory window to monitor the memory used by all VIs in memory and identify subVIs with performance issues. Check whether any particular VI is using more memory than expected.
- Do you have 64-bit support for all the toolkits and drivers you use? Ref: https://www.ni.com/en/support/documentation/compatibility/09/national-instruments-product-compatibil...
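Not a LabVIEW tool, but if in doubt, a small script that reads the PE header of the executable will tell you whether a given LabVIEW.exe is 32-bit or 64-bit. This is just a sketch; the path at the bottom is an example and should point at whichever LabVIEW.exe you actually launch.

```python
import struct

def exe_bitness(path):
    """Return '32-bit' or '64-bit' by reading the PE COFF machine field."""
    with open(path, "rb") as f:
        assert f.read(2) == b"MZ"                    # DOS header magic
        f.seek(0x3C)
        pe_offset = struct.unpack("<I", f.read(4))[0]
        f.seek(pe_offset)
        assert f.read(4) == b"PE\0\0"                # PE signature
        machine = struct.unpack("<H", f.read(2))[0]  # COFF machine field
    return {0x014C: "32-bit", 0x8664: "64-bit"}.get(machine, hex(machine))

# Example path only; adjust to your installation:
print(exe_bitness(r"C:\Program Files\National Instruments\LabVIEW 2015\LabVIEW.exe"))
```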
02-15-2017 08:44 AM
Hi Uday,
Oh, you're on point with the questions. I'm running 64-bit for sure; the icon says 64 and it's installed in the Win7 "Program Files" folder. It does appear to behave as if it were 32-bit, though, which is why I mentioned that I developed the VI in 32-bit and then recompiled and saved it in 64-bit. The only subVIs I have are VISA command/response over Ethernet. I checked your links for compatibility, and my VISA version 5.4 is 64-bit compatible. I'm fetching the data OK; it is the first thing to allocate memory. It's the processing, then storing to an array and plotting, that fails to allocate memory. I do believe the problem is finding contiguous memory space for the arrays... how do I fix it?
02-15-2017 11:18 AM
02-15-2017 01:09 PM
It appears to crash at a different location depending on how many memory allocations there are... reduce the memory allocations and it goes away. But I want to use more and more memory. I only have VISA TCP/IP on the block diagram and array-handling VIs from the LabVIEW palette.
I could make this VI a wrapper VI and grab the data in chunks, but why should I, since I have plenty of unallocated memory before array allocation becomes a problem? I noticed that VISA grabs the data in chunks anyway if the Bytes at Port reaches some limit (but the VISA parts work just fine).
This VI is just a driver file, straightforward and concise, to be used in upper-level LV project work. Once I use it as a wrapper subVI, the front panel is not shown, which will reduce memory allocation, but maybe the user will try to acquire 5 or 10+ seconds of data instead of 2 seconds... if it's 64-bit, this should work.
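(For what it's worth, the "grab the data in chunks" idea mentioned above would look roughly like this in Python; 'read_bytes' is a hypothetical stand-in for a VISA Read of N bytes, and only the preallocated output array stays large.)

```python
import numpy as np

CHUNK = 4 * 1024 * 1024          # 4 MB per read; an arbitrary choice

def fetch_in_chunks(read_bytes, total_bytes):
    """Accumulate the capture piece by piece instead of in one giant read."""
    out = np.empty(total_bytes // 8, dtype="<f8")   # preallocate once: the only big buffer
    filled = 0
    while filled * 8 < total_bytes:
        chunk = read_bytes(min(CHUNK, total_bytes - filled * 8))  # small, short-lived buffer
        vals = np.frombuffer(chunk, dtype="<f8")
        out[filled:filled + vals.size] = vals
        filled += vals.size
    return out
```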
02-16-2017 12:57 PM
Here is a code snippet with the "Show Buffer Allocations" dots shown. As I've mentioned, there is no particular step where the memory crash occurs. The array size varies and depends on the sample length: with a long sample length, memory allocation crashes. Crashes sometimes happen before plotting and sometimes at plotting (I've observed most of them prior to plotting).
Also, at the beginning of this VI I use property nodes to free up previously allocated memory by writing an empty/null value to the array indicator and the plot.
Rich J
02-16-2017 02:51 PM - edited 02-16-2017 02:51 PM
Hi!
If you hit 2 GB and then it crashes, then LV somehow thinks it has hit the memory ceiling.
Have you tried forcing LV to recompile your code by pressing Ctrl+Shift while clicking the Run arrow on your top-level VI?
Also, regarding your memory allocation, can't you do your array manipulations on the U8 source array and then cast to DBL? The inherent copies would then be of U8 values instead of DBL (so each memory copy is divided by 4...).
02-16-2017 02:59 PM - edited 02-16-2017 03:26 PM
The only inherent memory limit LabVIEW has is that none of its handles can contain more than 2 GB of memory. So if you end up appending data to an array and eventually hit the 2 GB mark, things indeed get hairy. But in 64-bit mode, LabVIEW can allocate as many handles of up to 2 GB each as the underlying OS is willing to support.
Remember that an array in LabVIEW is always one handle, even if it is multidimensional. So having, for instance, a 2D array with 2 x 200M samples of double-precision values is going to kill your app (2 * 200M * 8 bytes = 3.2 GB of memory).
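(A back-of-the-envelope check of that per-handle ceiling, with illustrative numbers only, not an official specification:)

```python
HANDLE_LIMIT = 2 * 1024**3            # 2 GiB per LabVIEW handle
BYTES_PER_DBL = 8

print(HANDLE_LIMIT // BYTES_PER_DBL)  # ~268 million DBL elements fit in one array handle

# The 2 x 200M DBL example above is a single handle of:
print(2 * 200_000_000 * BYTES_PER_DBL / 1e9, "GB")   # 3.2 GB -> well over the limit
```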
But your entire conversion routine looks pretty convoluted, for sure. First, LabVIEW does not use big endian internally, but whatever endianness is preferred on the platform it is running on; for x86 CPUs this is definitely always little endian. LabVIEW's default endianness when flattening data to a byte stream, for transferring it over TCP/IP or to a byte file, is however big endian, and the same is true for reading it back as data: LabVIEW will do the byte swapping on little-endian machines, since its preferred flattened byte-stream format is big endian.
Now, your file read function is fully capable of reading in such data directly, without the rather cumbersome string reversal and subsequent reshuffling to get the data back into the right order. The Read from Binary File node has a parameter where you can tell it the expected external endianness and let LabVIEW do the correct byte swapping for the platform it is currently running on.
This code should do pretty much what you are trying to do with all the string-to-array, typecast, decimate, and reshuffle-array code, just about 10 times faster and with a lot fewer intermediate memory buffers.
Oh well, sorry, I misread the original image: you are reading the data from a VISA instrument. That doesn't make a big difference, though. If you use Unflatten From Bytes instead of the Type Cast, you should be able to ditch just about all the code between the VISA Read and the unflatten function, and pass its result into the code shown in my image after the Read from Binary File.
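(In text form, the streamlined path boils down to interpreting the raw payload once, with the correct endianness, and then de-interleaving. Here is a Python/NumPy sketch of that idea; the little-endian float64 format of the RSA payload is an assumption, and the real fix would of course be wired in LabVIEW.)

```python
import numpy as np

def decode_iq(payload, little_endian=True):
    """Interpret the VISA byte payload directly as doubles and split into I and Q."""
    dtype = "<f8" if little_endian else ">f8"       # declare the source endianness explicitly
    samples = np.frombuffer(payload, dtype=dtype)   # one interpretation step, no string reversal
    return samples[0::2], samples[1::2]             # I, Q (views; copy only if needed downstream)

# i_data, q_data = decode_iq(visa_payload)
```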