64 bit Labview 2015 memory access

Read binary data from byte stream.png

You may have to play with the inputs to the Reshape Array function to get the data into the correct row/column organization, and possibly use a Transpose Array afterwards to suit your further needs, but used correctly this can also save you the Rube Goldberg code of decimating and recombining the arrays.
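Since the VI snippet above is an image, here is a rough NumPy sketch of the same Reshape-then-Transpose idea (illustrative only, not the actual LabVIEW code; the sample values are made up):

```python
import numpy as np

# Interleaved samples as they arrive from the instrument: I, Q, I, Q, ...
interleaved = np.array([1.0, 10.0, 2.0, 20.0, 3.0, 30.0], dtype=np.float32)

# Reshape into rows of [I, Q] pairs -- the analogue of Reshape Array
pairs = interleaved.reshape(-1, 2)

# Transpose so row 0 holds all I values and row 1 all Q values,
# the analogue of a Transpose Array after the reshape
iq = pairs.T

print(iq[0])  # I channel
print(iq[1])  # Q channel
```

In NumPy both the reshape and the transpose are views over the same buffer, which mirrors Rolf's later point that these operations need not copy the data.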

Rolf Kalbermatter
My Blog
Message 11 of 31

zyl7,

As I mentioned, I recompiled all the 32-bit code (shown in the pic) to 64-bit and saved it. The data from the RSA comes in as I,Q,I,Q..., so I need to convert it to single (the 32-bit real number format the instrument uses) first before parsing out I and Q. I haven't tried your suggestion; at this time it's just my gut feeling that it won't work.

 

rolfk,

I'm not appending any arrays; it's all array allocation. If you're saying an array is limited to 2GB, I'm not near that size yet. It's more like 200MB memory allocations at a time (1/10 the size of 2GB). I'm not sure what you mean by "But your entire conversion routine looks pretty convoluted for sure". If you're talking about the little-to-big-endian conversion, I use 3 palette functions to do it... that idea came from a post on this forum (kudos to whoever it was, I don't remember). If you're talking about the parsing of I and Q, I use 4 icons from the array palette and then plot. If you're talking about the WFPlot property, that's pretty straightforward looking at the property node on the block diagram.

 

Let me make it clear: I'm plotting I and Q vs. time. The WFGraph gets a 2D array (2 columns), an I and a Q, and the property node sets the time axis. I mentioned this in the initial post. I cannot see why this would hit a memory allocation limit yet.

 

OK let me add,

Rolf,

Your code will possibly eliminate 1 memory allocation, which occurs when I use the Type Cast. So this will reduce memory allocation, but it won't solve the problem of plotting the IQ data to a WFPlot... but maybe it will work; I'll try it and get back to you on this... wait, LabVIEW is big endian, so your code needs to convert little to big endian and then plot.

Message 12 of 31

The code I propose will definitely use fewer intermediate memory buffers, so it will be considerably more memory-optimized, despite your initial claim to have optimized everything already.

@richjoh wrote:

Your code will possibly eliminate 1 memory allocation, which occurs when I use the Type Cast. So this will reduce memory allocation, but it won't solve the problem of plotting the IQ data to a WFPlot... but maybe it will work; I'll try it and get back to you on this...

There are three buffers that the code will create, in addition to the buffers needed for any terminal you attach to any of the wires (yes, the string control you add to the wire of the GPIB Read also consumes an extra memory buffer, and so do the graph and the array output).

When I look at your code I see at least 6 buffer allocation dots, two of them for half the size of the full array, so still 5 times the original memory, plus the 3 buffers for the controls just mentioned. This quickly adds up, and unless your machine has 16GB of memory you might already be resource-constrained when LabVIEW starts up.

Those property nodes you mentioned earlier do absolutely nothing before you run your VI the first time, and still serve no purpose when you run the same VI over and over again. They can only alleviate the issue a little when you run different VIs alternately, and only once the previous one has gone idle (is not executing), since they tell LabVIEW to release memory it had kept reserved for the next run of that same VI.

Rolf Kalbermatter
My Blog
Message 13 of 31

Your code needs to do the following:

receive little endian (single format) data from the instrument.

parse the I,Q,I,Q alternating values, NOT a half-and-half IIIIIII QQQQQ array.

convert to big endian (for plotting) and output the array.

 

I'm not reading data from a file, your code has more icons from the pallete if I were to implement as is for the little to big endian conversion (is it correct conversion), so is it more optimized, I don't know for sure. The idea is to show what's on the instrument in a WFGraph and pass the array out of the VI.

Message 14 of 31

@richjoh wrote:

Your code needs to do the following:

receive little endian (single format) data from the instrument.

parse the I,Q,I,Q alternating values, NOT a half-and-half IIIIIII QQQQQ array.

convert to big endian (for plotting) and output the array.

 


The IQIQ format can easily be handled by exchanging the size inputs to the Reshape Array function, then maybe following it with a Transpose Array to get the rows and columns right.

 

Where does this sudden requirement to convert to big endian come from? Unless you execute this code on an old Mac Classic or one of the VxWorks cRIOs that use a PPC, LabVIEW will need little endian data to work with! The Typecast function assumes that the binary-data side is big endian and performs a swap on little endian machines, which you then have to undo before or afterwards; the Flatten and Unflatten functions, however, give you a choice of endianness for the data on the byte-stream side!
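The difference can be illustrated with Python's struct module (a text analogue: the fixed '>f' decode plays the role of Typecast's hard-wired big endian assumption, while choosing '<f' corresponds to setting the Unflatten endianness input):

```python
import struct

# Four bytes representing 1.0 as a little endian 32-bit float,
# i.e. what a little endian instrument puts on the wire
raw = struct.pack('<f', 1.0)

# A fixed big endian interpretation (what Typecast assumes) garbles the value
wrong = struct.unpack('>f', raw)[0]

# Selecting little endian (the Unflatten endianness choice) decodes correctly
right = struct.unpack('<f', raw)[0]

print(wrong, right)
```

The bytes are identical in both cases; only the declared endianness of the interpretation differs.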

 

Rolf Kalbermatter
My Blog
Message 15 of 31

After a second look at your sample code, it's no different in memory allocation from converting with the palette Type Cast (to single precision). Turn on Show Buffer Allocations.

 

The Unflatten From String function takes a "binary string" of flattened data of whatever type you wire to its type input. Since you need to plot, you ultimately need big endian, the LV format. For your code to work you would need to Flatten the little endian data from the instrument first, then Unflatten to big endian, and in that case that is 2 memory allocations. My code uses 1 memory allocation (only the Type Cast to single).

 

The instrument sends me data little endian, 1st value arriving first. Since I reverse the data while converting it, the single-precision values must be reversed again after the conversion or my plot shows up incorrectly (right to left). Your comment about parsing the I and Q again is NOT correct. The array values index by pairs: the I values are at the even indices 0, 2, 4 and the Q values at the odd indices 1, 3, 5. LabVIEW has an array function on the palette to perform exactly this.
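In text form, that even/odd pairing corresponds to stride slicing (a NumPy sketch of the Decimate 1D Array behavior, with made-up sample values, not the actual palette function):

```python
import numpy as np

# Interleaved I,Q,I,Q,... samples
samples = np.array([1.0, 10.0, 2.0, 20.0, 3.0, 30.0], dtype=np.float32)

# I values sit at even indices 0, 2, 4, ...; Q values at odd indices 1, 3, 5, ...
i_vals = samples[0::2]   # every second element starting at index 0
q_vals = samples[1::2]   # every second element starting at index 1

print(i_vals, q_vals)
```

In NumPy these strided slices are views and allocate no new data buffer, which foreshadows the subarray discussion later in the thread.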

 

I'm using a Tek 5160 RSA running the Windows XP OS; its data format is little endian, like all MS OSes.

 

As I said, the code is very straightforward and solid, but that is not what I'm posting here to debate (how much memory allocation I can reduce). Instead: how much memory can I use? OK, if it's 2GB for any array, I'm not anywhere near that limit.

Message 16 of 31

I put an In Place Element (IPE) structure around the Reverse 1D Array and it now works with a 200MB fetch array. The memory allocation failure previously happened at the Decimate 1D Array, prior to making this change. This LabVIEW 2015 64-bit run reached 4GB of memory according to the Win7 Task Manager.

 

It's a good idea to deallocate memory by clearing arrays and controls at the start of a VI to free up previously allocated memory. I didn't get a chance to test this with even larger sizes, but I'll do that in a few days and report back.

 

Even though the IPE still showed a memory allocation at the Reverse Array, using the IPE appears to make a big difference. I wish the IPE could handle a different representation on input and output, e.g. unsigned in to double out, instead of requiring the same representation into and out of the IPE.

 

Thanks for all the comments.

Message 17 of 31

@richjoh wrote:

 

The Unflatten From String function takes a "binary string" of flattened data of whatever type you wire to its type input. Since you need to plot, you ultimately need big endian, the LV format. For your code to work you would need to Flatten the little endian data from the instrument first, then Unflatten to big endian, and in that case that is 2 memory allocations. My code uses 1 memory allocation (only the Type Cast to single).

 


Please calm down. I started using LabVIEW in 1992 and have used the Flatten, Unflatten and Typecast functions many times in those 25 years, so you can safely assume that I know what I'm talking about.

It's not the LabVIEW graph that requires big endian data, but the Typecast function you insist on using. LabVIEW internally always uses the preferred endianness of the underlying CPU and OS architecture, and since all current desktop versions of LabVIEW run only on the Intel x86 architecture, they ALL use little endian format.

However, when converting to and from flattened data, LabVIEW by default assumes big endian (also often called network byte order) data on the byte-stream side. The Flatten and Unflatten functions received a selector input in LabVIEW 8.0 that lets you change this default; the Typecast never received such an upgrade and therefore still insists on big endian byte swapping when executed on a little endian system.

 Unflatten versus reverse string, typecast, unreverse.png

 

This VI snippet shows how a single Unflatten From String can replace your entire reverse string, Typecast and un-reverse floating point array chain, resulting not only in a cleaner but also a faster solution, since the Unflatten basically just converts the byte stream into the desired floating point array without doing any byte swapping at all.

While LabVIEW is smart enough not to copy the entire 200MB of data at the two Reverse String and Reverse Array functions (it instead remembers that the two arrays are to be considered reversed, and downstream functions often, but not always, can simply interpret the data accordingly), the Typecast definitely does a complete byte swap of every single 4-byte value in the array, resulting in slower execution than a mere Unflatten From String, especially for a 200MB array of data.
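As a text-readable analogue of the performance argument, NumPy offers the same two routes: interpreting the bytes directly as little endian, versus a big endian interpretation followed by an explicit per-element byte swap (a sketch with made-up data, not the VI itself):

```python
import numpy as np

# A little endian byte stream of four 32-bit floats
raw = np.arange(4, dtype='<f4').tobytes()

# Direct little endian interpretation: no swapping at all, analogous to
# Unflatten From String with the endianness input set to little endian
direct = np.frombuffer(raw, dtype='<f4')

# Big endian interpretation plus an explicit swap: every 4-byte value is
# rewritten, analogous to the reverse / Typecast / un-reverse chain
swapped = np.frombuffer(raw, dtype='>f4').byteswap()

print(np.array_equal(direct, swapped))  # same values, but more work done
```

Both routes yield identical values; the second simply touches every element twice for nothing on a little endian host.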

 

Now we have an array of interleaved single precision values that you would like to deinterleave correctly. Here you can of course Decimate the array to deinterleave it explicitly and then combine it back into a 2D array, or you can simply Reshape the array into a 2D array with the correct values wired to both dimension inputs. Yes, the array is then in the wrong row/column order to be displayed directly in a graph, but you can either explicitly Transpose the array on the diagram, or enable the graph option to view the data in transposed mode when drawing the plots.

correct deinterleaving of data.png

 

All of these changes will not only reduce the number of memory buffers LabVIEW needs to create and the CPU load from unnecessary memory copies, but will actually also declutter your diagram and make it easier to understand for anyone with a good understanding of LabVIEW functions!

Also note that the Transpose Array function, just like the Reverse String and Reverse Array nodes, is generally not a complete memory copy either, since LabVIEW can likewise remember that an array has been transposed without actually transposing the whole array. It will still show as a full memory allocation dot, since technically LabVIEW creates a so-called subarray: a data management structure that records that an array has been reversed, transposed or similar, and maintains a pointer to the original data.

Many functions that accept an array input can work with such a subarray directly and just interpret the original data accordingly. Some can't, and those invoke an implicit subarray-to-real-array conversion, which of course copies the entire array data anyhow.
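NumPy has a directly comparable concept: transposing or reversing an array returns a view that shares the original buffer, and a real copy only happens when something forces materialization. A small sketch (illustrative, not LabVIEW's internal mechanism):

```python
import numpy as np

a = np.arange(6, dtype=np.float32).reshape(2, 3)

t = a.T          # transpose: a view sharing memory with a, no copy
r = a[::-1]      # reversed rows: also just a view

# Both "subarrays" reference the original buffer
print(np.shares_memory(t, a), np.shares_memory(r, a))

# Forcing a contiguous layout materializes a real, independent copy,
# analogous to the implicit subarray-to-real-array conversion
c = np.ascontiguousarray(t)
print(np.shares_memory(c, a))
```

The copy happens lazily, only at the point where a consumer cannot work with the view's interpretation of the original data.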

 

Download these two VI snippets to your desktop, place them from there into an empty LabVIEW diagram, and you can experiment with them directly (just as you could with the previous snippets I included in my posts).

Rolf Kalbermatter
My Blog
Message 18 of 31

Depending on what you do with the data later, another possibility is to unflatten directly to a 1D array of complex singles (CSG). No reshape needed. All in one step!

 

Now you have the "I" in the real part and the "Q" in the imaginary part. Operating on IQ data is often easier with a complex datatype 😄 (e.g. to get the magnitude, just take the absolute value!).
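In NumPy terms (an illustrative sketch, not the LabVIEW snippet), this corresponds to interpreting the little endian byte stream directly as complex64, so each I,Q pair becomes one complex element:

```python
import struct
import numpy as np

# Two I,Q pairs, little endian single precision: (3,4) and (0,-1)
raw = struct.pack('<4f', 3.0, 4.0, 0.0, -1.0)

# '<c8' = little endian complex64: each element consumes one I,Q pair,
# so no reshape or decimate step is needed at all
iq = np.frombuffer(raw, dtype='<c8')

print(iq.real)     # I channel
print(iq.imag)     # Q channel
print(np.abs(iq))  # magnitude of each IQ sample
```

The magnitude of the first sample is sqrt(3² + 4²) = 5, computed in one call instead of a separate square/sum/root chain.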

 

 

Message 19 of 31

@richjoh wrote:

I'm not asking you to optimize my code; it's already optimized. I'm asking for 64-bit LabVIEW to handle memory allocations > X gigs.

 

I'll give more insight to kill curiosity. (Here I describe a lot of memory allocations to work on the same data, the opposite of optimization.)

 

You see, I don't want to conserve any more memory; I want to use memory resources. I'm wondering if there is a fix for 64-bit LabVIEW 2015 (no SP)?


There's a maximum amount of memory any application can handle. If your data set is only 200MB and you're growing beyond 2GB, you need to stop looking for ways to be inefficient more easily and look at optimization. You haven't optimized the code.

 

Did you see the crash in both 32-bit and 64-bit, or just 64-bit? With your non-optimized code, what's the maximum growth you expect to see? What other apps run on that PC? How much RAM do you have installed? What OS do you have? (I'm assuming 64-bit, since you installed 64-bit LV.)

Message 20 of 31