Preallocate array memory without initializing it

Solved!

Hello everybody,

I have an application where I receive acquisition data every second. I have to keep a buffer that accumulates each new iteration's data. This buffer is read or flushed by the program when certain conditions are met. It is a "2D" buffer: each data packet I receive is a 1D array, and stacking the packets forms a 2D array.

The problem is that I know the maximum array length, but the actual length won't always be the maximum, and the downstream code expects to receive a buffer of the correct length. So I can't initialize the array with placeholder values and then replace them, because the array length won't be fixed.

I have read that you can preallocate a queue by writing the needed number of elements and then flushing it, because the queue keeps its maximum size in memory. I have tried this with Initialize Array and Delete From Array, but it doesn't seem to work.

So, I have created a group of subVIs that keep the data in a preallocated queue of 1D arrays. This solves the problem of poor write performance in an uninitialized array, but reads are slow because LabVIEW returns a 1D array of clusters of 1D arrays, instead of a 2D array, when you flush or read the whole queue. When the number of 1D arrays to read is large, the time to convert the data format isn't negligible.

So, is there a way to do this with arrays? Can I preallocate the memory without having to initialize the array and fix it at its maximum size?

I have thought about keeping an index of the number of valid rows in the 2D array. This way, the 2D array always has the maximum size, but only the first valid rows are returned. The index would also indicate where to write the next value. But I don't know if there is an easier or smarter solution.
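For readers following along, the valid-row-index idea can be sketched in Python with NumPy (LabVIEW is graphical, so this is only the logic, not the implementation; the sizes and function names here are hypothetical):

```python
import numpy as np

MAX_ROWS, ROW_LEN = 1000, 64  # hypothetical maximum buffer dimensions

buf = np.empty((MAX_ROWS, ROW_LEN))  # preallocated once; contents undefined
n_valid = 0                          # number of valid rows written so far

def append_row(row):
    """Write one acquisition packet into the next free row."""
    global n_valid
    buf[n_valid, :] = row
    n_valid += 1

def read_valid():
    """Return only the rows that hold real data."""
    return buf[:n_valid]

append_row(np.ones(ROW_LEN))
append_row(np.zeros(ROW_LEN))
print(read_valid().shape)  # (2, 64)
```

The buffer is allocated once and never resized; only the index changes on each write, which is what makes writes cheap.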

 

Thanks for your time,

 

 

EMCCi

Message 1 of 36

Initialize a 2D array of the maximum size.

Fill the array with values, and keep an index of how many rows have been written.

When the index reaches the maximum size, either stop filling, or start again at index 0 and keep a flag to note that you've gone round at least once.

 

This is called a circular buffer. Searching for that term will probably give you more details.

 

When done right, you will get a 2D array...

 

BTW. This is usually either slow-ish when adding data, or slow-ish when reading data. Or faster and memory hungry.
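The steps described above amount to a circular buffer; a minimal Python/NumPy sketch of the same logic (small hypothetical sizes, names invented for illustration):

```python
import numpy as np

MAX_ROWS, ROW_LEN = 4, 8  # small hypothetical sizes for illustration

buf = np.zeros((MAX_ROWS, ROW_LEN))
write_idx = 0      # where the next row goes
wrapped = False    # True once we've gone round at least once

def push(row):
    global write_idx, wrapped
    buf[write_idx] = row
    write_idx = (write_idx + 1) % MAX_ROWS
    if write_idx == 0:
        wrapped = True

def snapshot():
    """Return the valid rows in arrival order."""
    if not wrapped:
        return buf[:write_idx].copy()
    # once wrapped, the oldest row sits at write_idx
    return np.vstack((buf[write_idx:], buf[:write_idx]))

for i in range(6):
    push(np.full(ROW_LEN, i))
print(snapshot()[:, 0])  # [2. 3. 4. 5.]
```

Note the trade-off mentioned above: `push` is cheap, but `snapshot` copies the whole buffer to restore chronological order.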

Message 2 of 36

What you *want* is something like a 2D array but where rows can have different lengths.

 

You cannot have that in LabVIEW, not directly.  You've already tried one option via queues -- each row becomes its own 1D array wrapped up inside a cluster, then an array of those clusters represents your "ragged" 2D array.   But you don't like that option.

 

Here's another, though I can't really say whether it'll work out better for you or not.  Maintain 2 distinct 1D arrays.  The data array simply keeps appending your 1D chunks into a larger 1D array.  The other array is populated with the lengths of each of those chunks.  The two together can be used to extract the data in their original chunk sizes, and your consumer code can do whatever it needs to do to handle these variable-length chunks.
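The two-array idea above can be sketched in Python (plain lists standing in for LabVIEW 1D arrays; function names are made up for illustration):

```python
data = []     # flat data; in LabVIEW this would be one growing 1D array
lengths = []  # length of each appended chunk, in order

def append_chunk(chunk):
    """Append one variable-length packet and record its size."""
    data.extend(chunk)
    lengths.append(len(chunk))

def extract_chunks():
    """Recover the original variable-length chunks from the two arrays."""
    out, start = [], 0
    for n in lengths:
        out.append(data[start:start + n])
        start += n
    return out

append_chunk([1.0, 2.0, 3.0])
append_chunk([4.0])
print(extract_chunks())  # [[1.0, 2.0, 3.0], [4.0]]
```

Both arrays stay flat and contiguous, which is the appeal of this layout; the cost is that random access to chunk *k* requires summing the first *k* lengths (or keeping cumulative offsets instead).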

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 3 of 36

@Kevin_Price wrote:

What you *want* is something like a 2D array but where rows can have different lengths.


This can also be done with Maps starting in LabVIEW 2019.  You can have the iteration index be the key and then the array is the data.
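In Python terms, a LabVIEW Map behaves roughly like a dict keyed on the iteration index (a loose analogy, not LabVIEW code):

```python
ragged = {}  # map: iteration index -> that iteration's 1D array

def store(iteration, packet):
    ragged[iteration] = packet

store(0, [1.0, 2.0, 3.0])
store(1, [4.0, 5.0])

# rows can have different lengths, and lookup by iteration is direct
print(ragged[1])  # [4.0, 5.0]
```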


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 4 of 36

@crossrulz wrote:

@Kevin_Price wrote:

What you *want* is something like a 2D array but where rows can have different lengths.


This can also be done with Maps starting in LabVIEW 2019.  You can have the iteration index be the key and then the array is the data.


An array of a cluster with an array does that, and although the map is fast, indexing the array is even faster.

Message 5 of 36

I'd also say that an array (1D) of clusters of arrays seems like the logical fit.

 

I haven't tested this, but if you're concerned about allocations, you might consider allocating an array of clusters of arrays (with filled elements, e.g. 0 or NaN or whatever) and then trying the In-Place Element Structure and the Swap Values node to switch the values.

 

You'll still end up allocating memory each time you receive new data, but perhaps LabVIEW will be able to optimise the code and reuse the space being swapped for the next acquisition? 

 

I'd suggest testing it if the simple array of cluster of array solution is insufficient.


Message 6 of 36

What about using a lossy queue of 1-D arrays?

Message 7 of 36

wiebe@CARYA wrote:
An array of a cluster with an array does that, and although the map is fast, indexing the array is even faster.

That would probably need some extensive testing. Plain arrays are contiguous in memory, but I don't know the allocation game of ragged arrays as described here. I am not sure about the overhead if the inner sizes constantly change. 

Message 8 of 36

@altenbach wrote:

wiebe@CARYA wrote:
An array of a cluster with an array does that, and although the map is fast, indexing the array is even faster.

That would probably need some extensive testing. Plain arrays are contiguous in memory, but I don't know the allocation game of ragged arrays as described here. I am not sure about the overhead if the inner sizes constantly change. 


I was thinking (both for the maps and the [cluster] approach) to preallocate the arrays.

 

I'm actually not sure what the problem is here anymore. If you can pull this trick for one channel (a 1D array), a copy would do it for two? Unless you've put it in an FGV, of course...

 

Initially I thought OP wanted to fill data to maximum, but the different channels fill the data at irregular speed. That can be fixed by preallocating a sufficiently sized 2D array. When filling it, simply keep a pointer for each channel.

 

Guess I could use a better explanation of the problem. I'm pretty sure we all can find several solutions if the problem is clear.

Message 9 of 36

To the OP: You are collecting a set of 1D arrays with varying lengths.  You talk about "downstream code" that needs "a buffer of the correct length".

 

1. What's your downstream code going to do with the data?  What's your idea of the right format, keeping in mind that LabVIEW has no such thing as a 2D array whose rows have different #'s of elements?   One way or the other, you will HAVE TO choose a different method for expressing this kind of "ragged" 2D array format.

 

2. Is there an *actual* problem with your queueing approach?  Or are you just working from a hunch that it's going to be inefficient?   How many 1D arrays might you need to buffer?  What range of sizes do you need to support?

    I kind of have a suspicion you're over-complicating this, working on optimizations that may not really be needed.

 

3. You've found that when you Flush the queue, you get an array of <clusters containing these varying-length 1D arrays>. Did you realize you could instead call Dequeue in a loop and retrieve the original 1D arrays one at a time, in order?   Could your "downstream code" work with the data this way instead?
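The dequeue-in-a-loop pattern from point 3 looks like this in Python, with `collections.deque` standing in for a LabVIEW queue of 1D arrays (just the logic, not LabVIEW code):

```python
from collections import deque

q = deque()  # stands in for a LabVIEW queue of 1D arrays

# producer side: enqueue variable-length packets as they arrive
q.append([1.0, 2.0, 3.0])
q.append([4.0, 5.0])

# consumer side: dequeue one packet at a time, in order,
# instead of flushing everything into one big structure
seen = []
while q:
    packet = q.popleft()
    seen.append(len(packet))  # handle each variable-length chunk here
print(seen)  # [3, 2]
```

Consuming packet by packet avoids ever building the array-of-clusters that the Flush operation returns.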

 

 

-Kevin P

Message 10 of 36