LabVIEW


How can I make a simple 1D array filter?

When I acquire my data I get a 1D array with anywhere from 10-30 data points. There are always ten good data points; the rest are noise or false triggers. A good data point is in the range of 0.1-0.2, a bad point is 0.001-0.01. How can I filter out these bad points? I'm using LabVIEW 8.6. Also, I think this program will work if it could be back-converted from 9 to 8.6.

Message 1 of 10

This should do it, I think.

Edit: My first attempt had a broken wire. Sorry!

Message Edited by Coq Rouge on 06-08-2010 05:16 PM


Besides which, my opinion is that Express VIs Carthage must be destroyed deleted
(Sorry no Labview "brag list" so far)
Message 2 of 10

Hi -

Here is how I would do it.

[attached image: abcd.JPG]

From left to right:

1 - Get "Array Size" and connect that to the N of the For Loop.

2 - "Index Array", so you can compare values one at a time.

3 - "In Range and Coerce" (note: you need to wire values above and below the orange line shown for the range you'd like).

4 - Case structure: if "in range" is true, take the value and put it into a new array (this new array holds only the good values you want); the false case is blank.

 

good luck!
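Since LabVIEW is graphical, here is a hedged textual sketch of the dataflow those four steps describe, written in Python only for illustration. The bounds 0.05 and 0.25 are assumptions loosely based on the ranges in the original question:

```python
# Python sketch of the block diagram steps above (not LabVIEW code):
# loop over the array, test each value with an in-range check, and
# append passing values to a new array. Bounds are assumed values.
def filter_in_range(data, low=0.05, high=0.25):
    good = []                      # the "new array" built in the True case
    for value in data:             # steps 1-2: one element per iteration
        if low <= value <= high:   # step 3: "In Range and Coerce"
            good.append(value)     # step 4: True case appends; False case is blank
    return good

print(filter_in_range([0.15, 0.002, 0.12, 0.008, 0.18]))  # [0.15, 0.12, 0.18]
```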

Message 3 of 10

Dan K wrote:

hi -

here is how i would do it.

[attached image: abcd.JPG]

 


That is not good coding practice at all; it is LabVIEW NO-NO coding practice. But it is very typical coding practice for someone who has moved from C to LabVIEW. In LabVIEW a local variable is a copy of the data in the control/indicator, not a pointer. So when data is changed, every local (data copy) has to be updated. For large data sets this can slow down your program considerably, or make it unstable. When using global or local variables you can also experience something named race conditions, which can be very hard to debug in some cases. So in other words, you should be very careful when using this kind of variable. In most cases it is not needed at all, and it should certainly NOT be used as a tool for cleaning up diagrams.



Message 4 of 10

jbaack wrote:

When I acquire my data I get a 1D array with anywhere from 10-30 data points. There are always ten good data points; the rest are noise or false triggers. A good data point is in the range of 0.1-0.2, a bad point is 0.001-0.01. How can I filter out these bad points? I'm using LabVIEW 8.6. Also, I think this program will work if it could be back-converted from 9 to 8.6.


 

 
Define "filter out". If you drop values, the position of the rest of the data in the array changes, so I assume that the order of points is not important. Is this assumption correct?
 
If you want exactly 10 good data points and order does not matter (e.g. to take the average), just sort the data and take the 10 largest values.
 
If the position and order matters, you could zero all unimportant points, keeping the array at the same size.
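Both options above can be sketched in a few lines of Python (LabVIEW itself is graphical; this is only an illustrative analogue, with the ten-good-points count and the 0.1-0.2 range taken from the original question):

```python
# Option 1: order does not matter - sort descending, keep the ten largest.
def ten_largest(data):
    return sorted(data, reverse=True)[:10]

# Option 2: position matters - keep the array the same size and zero
# anything outside the assumed "good" range.
def zero_bad_points(data, low=0.1, high=0.2):
    return [x if low <= x <= high else 0.0 for x in data]

data = [0.15, 0.002, 0.12, 0.008, 0.18]
print(zero_bad_points(data))  # [0.15, 0.0, 0.12, 0.0, 0.18]
```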
 
Please be more specific on what you actually want. 
 
(And yes, please ignore the example of Dan K; it is overly complicated and inefficient and does not follow any established coding rules.)
Message 5 of 10

Dan K wrote: 

1 - get "array size" and connect that to the N of the for loop.

2 - "index array" so you can compare values one at a time.

3 - "in range and coerce" - (note: you need to wire values above and below the orange line shown for the range you'd like)

4 - case structure: if "in range" is true, then take the value and put it into a new array (this new array only has the good values you want) (false case is blank)


 

 
 NOOOOOO!!!!
 
Steps 1 and 2 are not necessary. Simply autoindex the original array at the loop boundary. Voila!
(An "Index Array" with the index wired to [i] is never needed. Autoindexing does the same thing and automatically determines the number of iterations, eliminating step 1.)
 
The second array is not initialized, meaning it will grow without bounds if you run the VI many times. Every time the loop runs, the array is resized, requiring a new buffer allocation. The original code is much better, because it preallocates the output array, replaces elements, then strips the extra length at the end. For large datasets, the original code will be orders of magnitude faster.
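The preallocate/replace/trim pattern described above can be sketched in Python (again only an analogue of the graphical code, with assumed 0.1-0.2 bounds): the output is allocated once at the maximum possible size, elements are replaced in place, and the unused tail is stripped once at the end, avoiding a reallocation on every kept element.

```python
# Python analogue of "Initialize Array" + "Replace Array Subset" +
# "Array Subset": preallocate, replace in place, trim once at the end.
def filter_preallocated(data, low=0.1, high=0.2):
    out = [0.0] * len(data)   # preallocate at the maximum possible size
    n = 0
    for x in data:
        if low <= x <= high:
            out[n] = x        # replace in place, no per-element reallocation
            n += 1
    return out[:n]            # strip the extra length once, at the end

print(filter_preallocated([0.15, 0.005, 0.12, 0.008]))  # [0.15, 0.12]
```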
Message 6 of 10

Quick question: how come the processed data includes the low range but not the high range?

Thank you for this by the way it was very helpful.  

Message 7 of 10

@Pacarranza9727 wrote:

Quick question: how come the processed data includes the low range but not the high range?

Thank you for this by the way it was very helpful.  


Because this is a super-common mistake.  The top diamond should be filled in.  (Right-click, select "Include upper limit.")

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 8 of 10

I guess using FP numbers adds another layer of complexity to the issue...
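The floating-point wrinkle is that a value which is mathematically on the boundary may not compare equal to it in binary floating point, so even an inclusive range test can drop it. A small Python illustration:

```python
# A computed value that is mathematically 0.3 is stored as
# 0.30000000000000004 in binary floating point, so a boundary test fails.
x = 0.1 + 0.2
print(x <= 0.3)        # False: the inclusive boundary test still rejects x

# A common workaround is to widen the bounds by a small tolerance:
eps = 1e-9
print(x <= 0.3 + eps)  # True
```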

Bill
Message 9 of 10

Also note that this is an old thread and many things have improved. For example, we now have conditional tunnels, eliminating the need for a case structure and explicit array operations.
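A conditional tunnel filters elements as they leave the loop, with no case structure or manual array building; its closest textual analogue is a comprehension with a condition (Python shown only for illustration, bounds assumed from the original question):

```python
# Rough Python analogue of an autoindexing loop with a conditional tunnel:
# elements pass out of the loop only when the condition is true.
data = [0.15, 0.002, 0.12, 0.008, 0.18]
good = [x for x in data if 0.1 <= x <= 0.2]
print(good)  # [0.15, 0.12, 0.18]
```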

Message 10 of 10