
Avoiding Usage of Array Subset

I've read many times (like in here, for instance) about how using Build Array in loops is a Very Bad Thing and that you should always start off by initializing an array once and replacing the values therein to improve performance.

My understanding is that using Array Subset in a loop is also bad, because it once again causes LabVIEW to reallocate memory for the array in each iteration.

My question is, how can I avoid using Array Subset? I need to do an operation on an array in my program, remove some points from the array, and then perform the operation again. While I can move the points to the end of the array, I can't simply replace them with zeroes or the mean or something else innocuous. Any suggestions?
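LabVIEW is graphical, so here is a hypothetical NumPy sketch (not the poster's actual code) of the memory idea behind the advice: growing an array every iteration forces repeated reallocation, like Build Array in a loop, while allocating once and overwriting in place mirrors Initialize Array + Replace Array Subset.

```python
import numpy as np

n = 1000

# Anti-pattern: growing the array each iteration reallocates the
# buffer every pass, like Build Array inside a loop.
grown = np.empty(0)
for i in range(n):
    grown = np.append(grown, i * 0.5)   # new allocation each time

# Preferred: allocate once, then overwrite elements in place,
# like Initialize Array + Replace Array Subset.
pre = np.empty(n)
for i in range(n):
    pre[i] = i * 0.5                    # no reallocation

assert np.array_equal(grown, pre)
```

Both loops produce the same values; only the allocation behavior differs.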
Message 1 of 8
If you know the index positions of the values you want to remove, you can use Delete From Array. But you have to sort the indices and delete them in descending order, because the remaining items shift position each time you delete. I'm not 100% sure whether Delete From Array reuses memory or allocates a new array.
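A quick sketch of why the indices need to be processed highest-first (plain Python lists, illustrative values): deleting an element shifts everything after it down by one, so deleting from the highest index downward keeps the not-yet-deleted indices valid.

```python
data = [10, 11, 12, 13, 14, 15]
to_remove = [1, 4, 2]          # indices in arbitrary order

# Delete highest index first so earlier deletions don't shift
# the positions of elements still waiting to be deleted.
for idx in sorted(to_remove, reverse=True):
    del data[idx]

# data is now [10, 13, 15]
```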
Message 2 of 8
I wouldn't say that using such techniques is a Bad Thing (TM).  Something to be avoided, yes, but bad?

There are circumstances where it is more effective to use "bad coding": things like having five minutes to get an answer for a program you'll never use again, or improving code readability when efficiency is not a concern, and so forth.

The best way I can think of is to plan out your code.  Think how it will work two, three, and ten steps down the line, as well as looking at it from inches, ten feet, 1000 feet, and 50,000 feet.  What do you want to get out of the VI?  Would using a technique here or there give you something you need later on?  What tools do you have and how do they work?  How does the system work?  How does the system interact with the systems around it?  My group has found that a large whiteboard and flow-charting skills may cost, oh, 20 man-hours, but will identify missing data, identify critical failures (some of which would have broken the machinery), illuminate interaction issues, and give everyone an idea of how they need to code to make the system work.  OK, that's more on a project level, but efficient code has to work with everything around it.  Besides, having the larger picture mapped out can give you better focus on the details.

Look at other people's code, especially programmers who you know are efficient.  Think not only how the code works but how the VIs work together.  Is there a way to improve the code?  Is there something that you could integrate into your code?  Is there someone who can look at your code and constructively criticise it?

The biggest thing I can think of to avoid inefficiencies and poor techniques is to write clean, readable code.  The code should fit on one monitor window, it should be obvious where wires run, comments should clarify how and why code works, VIs should have icons that make sense for their purpose, etc., etc.  The idea here is to make it easy to follow what is happening when you step back and look at how the code works.  From there you can identify where improvements can be made.
Message 3 of 8


kehander wrote:
...
My question is, how can I avoid using Array Subset? I need to do an operation on an array in my program, remove some points from the array, and then perform the operation again.
...


(1) Well, "array subset" only needs to allocate space for the new subset array, it does not touch the original array. How big are your "subsets"? How are you using "array subset" to delete certain elements from an array? Can you explain?

(2) What kind of "operations" are performed?

How big are your arrays? Can you give a typical example? How many elements are removed at each iteration? Are they randomly distributed over the entire array, or do they form a contiguous range?


 

Message 4 of 8
The array is between 50 and 100 elements, and I'm removing a couple of values from it each iteration. The values I want to keep are moved to the beginning of the array, so I use Array Subset to get a smaller array starting at index zero whose length is equal to the number of values I want to keep. Make sense?
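For what it's worth, that compaction pattern can be sketched in NumPy (names and values invented): survivors are written to the front of the same buffer, and a slice of length n_keep plays the role of Array Subset starting at index zero.

```python
import numpy as np

buf = np.array([3.0, 99.0, 4.0, 5.0, 98.0, 6.0])
threshold = 50.0                       # made-up rejection rule

n_keep = 0
for x in buf:
    if x < threshold:
        buf[n_keep] = x                # compact in place; the write
        n_keep += 1                    # index never passes the read index

active = buf[:n_keep]                  # the "Array Subset" step
# active is [3., 4., 5., 6.]
```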

The principal operation I am performing uses the Histogram function, which is sensitive to the presence of outliers. I suppose I could possibly rewrite (from scratch if necessary) the Histogram VI and force it to ignore all values beyond a range I know will never be reached by my data, but that could get ugly.
Message 5 of 8

For array sizes of 50-100, don't worry about memory issues! It's all peanuts. 😉

For huge arrays it might be worth coding your own histogram function and doing the rejection test in a loop, where you would skip bad values and increment the correct bin for good values.
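That rejection-in-the-binning-loop idea looks roughly like this (a Python sketch with made-up limits and data, not a LabVIEW VI): out-of-range values are skipped as they are encountered, so no separate delete/subset pass is ever needed.

```python
lo, hi, nbins = 0.0, 10.0, 5           # invented limits and bin count
counts = [0] * nbins
width = (hi - lo) / nbins

data = [1.2, 3.7, -5.0, 9.9, 42.0, 6.1]   # includes two outliers

dropped = 0
for x in data:
    if lo <= x < hi:
        counts[int((x - lo) / width)] += 1  # increment the correct bin
    else:
        dropped += 1                        # skip bad values

# counts == [1, 1, 0, 1, 1], dropped == 2
```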

Which histogram function do you currently use? There are at least three: Histogram, General Histogram, and Create Histogram (Express).

If you use General Histogram, you can specify the upper and lower limits, so outliers beyond those limits are dropped automatically. There is even an output with information about the dropped values.
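This is not LabVIEW's General Histogram itself, but NumPy's histogram has the same limit behavior and makes the idea concrete: values outside the given range are simply ignored, and the drop count can be recovered from the bin totals (data values here are invented).

```python
import numpy as np

data = np.array([1.2, 3.7, -5.0, 9.9, 42.0, 6.1])

# np.histogram ignores values outside `range`, analogous to
# General Histogram's upper/lower limits.
counts, edges = np.histogram(data, bins=5, range=(0.0, 10.0))

n_dropped = len(data) - counts.sum()   # like the dropped-values output
# counts == [1, 1, 0, 1, 1], n_dropped == 2
```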

Message 6 of 8

@altenbach wrote:

For array sizes of 50-100, don't worry about memory issues! It's all peanuts. 😉




Oh! Well, that's good to know.


Which histogram function do you currently use? There are at least three: Histogram, General Histogram, and Create Histogram (Express).




Blarg, I completely missed General Histogram. That looks like the ticket, all right.
Message 7 of 8

I'm curious,

If there is a definite need to do an operation on an array that requires only part of that array, then why not use the Array Subset primitive?  It seems like the proper tool for the job.  Delete From Array was the function I had chosen for an earlier program, until I did some timing statistics on just those two operations (speed only); I have since replaced ALL Delete From Array functions with Array Subset functions where applicable.  Please let me know if my judgment is in error.

There is a lot of discussion everywhere about data copies in memory (I am currently studying when, where, and how much).  I find my mind wandering to the premature conclusion that although it is necessary to minimize data copies, why not accept that they are going to happen and commit to programming around them, perhaps by chunking all data processing in much the same manner it was written by the acquisition device?  Why do I feel the need to completely read the file, manipulate the file, create a massive burden on system resources, and make the user interface and computer unresponsive, all to put it in a graph or indicator which it doesn't fit into nicely?  I know I am just a LabVIEW adolescent, but I want to learn.  The LabVIEW paper on managing large data sets is mandatory reading if one is concerned with memory issues.  If there are any more links someone thinks I should read, please let me know; I am but a baby in this world.
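A hedged micro-benchmark sketch of the comparison Chris describes, in Python rather than LabVIEW (sizes and data are invented; absolute times will vary): taking one contiguous subset is a single copy, while deleting elements one at a time from the front moves the remaining elements on every delete.

```python
import timeit

setup = "data = list(range(2_000))"

# One slice: a single bulk copy, like Array Subset.
t_subset = timeit.timeit("keep = data[1_000:]", setup=setup, number=100)

# Repeated single-element deletion from the front: each delete
# shifts everything after it, like chained Delete From Array calls.
t_delete = timeit.timeit(
    "d = data[:]\n"
    "for _ in range(1_000):\n"
    "    del d[0]",
    setup=setup,
    number=100,
)

print(f"subset: {t_subset:.5f}s  delete-loop: {t_delete:.5f}s")
```

The delete loop should come out markedly slower, which matches the timing result Chris reports.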

 

Chris Co

Message 8 of 8