12-10-2024 10:17 AM
Hi everyone,
I'm working on a project where I receive an array of values ranging from 0.01 to 100,000. The issue is that the array sometimes contains some noise (electronic artifacts). My goal with this VI is to filter the array and create a new one that only contains values close to each other, based on their standard deviation.
However, for some reason, the result isn't what I expected. I've attached my VI to this post. Could someone help me figure out what I might be doing wrong or point out improvements I could make?
Here's a summary of what I'm trying to do:
If you need any additional information about my approach or the data I'm working with, feel free to ask. I'd appreciate any insights or suggestions!
Thanks in advance for your help!
12-10-2024 10:26 AM
You did not tell us what you expect.
Should the new array be shorter, missing the stray values?
Should the new array be the same size, but having stray value replaced based on neighbors?
There are many confusing points in your VI:
I'll try to figure it out...
12-10-2024 10:34 AM
For example, if you only want to keep values that are within 100 of the mean, here's one possibility:
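In text form, that filter is roughly the following Python sketch (the function name and the 100 threshold are just illustrative, not from the attached VI):

```python
def filter_near_mean(values, threshold=100.0):
    """Keep only values within `threshold` of the array's mean.

    The 100.0 default mirrors the "within 100 of the mean" example.
    """
    mean = sum(values) / len(values)
    return [v for v in values if abs(v - mean) <= threshold]
```

Note this only works when the mean itself lands near the cluster you want to keep.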
12-10-2024 10:36 AM
Thank you for your response, and I apologize for leaving things unclear.
What I want is for the array to be filtered so that only the values close to each other remain.
Here's an example of the type of array I receive: 0; 1; 0; 2; 0; 3; 0; 5; 0.1; 6; 0.1; 7; 0.11; 8; ...
And then it goes back down: 7; 0.2; 6; 0.1; 5; 0.1; 4; 0.1; 3; 0.2; 2.
What I want is for the resulting array to look like this: 1; 2; 3; 4; 5; 6; 7; 8; 7; 6; 5; 4; 3; 2; 1.
The issue is that I couldn't organize the VI in a way that accomplishes this properly. That's why I'm seeking your help. I'm trying to use standard deviation to identify and filter out the noise, but the logic in my VI isn't giving me the expected result.
Any guidance or suggestions to organize the VI better would be greatly appreciated. Thank you for your patience!
12-10-2024 10:38 AM
Thank you for your response, and I apologize for not being clear.
The main issue is that I receive a wide range of values, from 0.01 up to 10,000, but the array often includes problematic patterns like: 100; 0.1; 101, which causes significant errors when trying to organize the data properly.
What I'm trying to do is filter out the noise and get a clean array where only the relevant values remain, organized in ascending and descending order (depending on the incoming data).
For example, if the raw array looks like this: 100; 0.1; 101; 0.1; 102; ...,
I want the output to look like: 100; 101; 102; ...
I've been trying to use standard deviation to detect and remove outliers, but my VI doesn't seem to handle this correctly. I think there's a problem with how I've implemented the logic, and I'd really appreciate some help to fix it.
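The logic I'm attempting, expressed as a Python sketch (the function name and the cutoff `k` are placeholders, not part of my actual VI), is roughly:

```python
import statistics

def filter_by_stdev(values, k=1.0):
    """Keep values within k standard deviations of the mean.

    Caveat: when roughly half the samples are noise, both the mean
    and the standard deviation get skewed by the noise cluster, so
    nothing ends up outside the k*sd band.
    """
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [v for v in values if abs(v - mean) <= k * sd]
```

With a pattern like 100; 0.1; 101; 0.1; 102; 0.1, the noise inflates the standard deviation so much that nothing gets rejected, which matches the behavior I'm seeing.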
Thank you for your time and assistance!
12-10-2024 10:44 AM - edited 12-10-2024 10:49 AM
In all your examples, it seems that the elements with odd or even index are bad. If this is not true, please provide a more realistic example.
Else you could just decimate the array:
Please attach your VI making sure that the input array has typical default values. Also tell us what you expect as result.
12-10-2024 10:53 AM - edited 12-10-2024 10:54 AM
What I'm trying to do is filter out the noise and get a clean array where only the relevant values remain, organized in ascending and descending order (depending on the incoming data). For example, if the raw array looks like this:
100; 0.1; 101; 0.1; 102; ...,
I want the output to look like: 100; 101; 102; ... I've been trying to use standard deviation to detect and remove outliers, but my VI doesn't seem to handle this correctly
Standard Deviation alone can't get you there with that kind of data set. The bunch of values near 0.1 carry just as much weight as the bunch of values nearer 100. You aren't doing enough to *discriminate* between the reasonable and the unreasonable values.
Where's this data coming from? Why are you getting garbage values for (apparently) 1/2 the data? Can you fix this at the source end rather than the destination?
When you, as a human, look at your data set, *exactly* how do you think through which values are reasonable and which are unreasonable? Where do your expectations come from? Can you translate that thought process into a different algorithm than a mere standard deviation?
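As one illustration only (and this rests on an assumption, not on anything you've stated: that the garbage values are orders of magnitude smaller than the real ones, and that all values are strictly positive), such a thought process might translate to splitting the data at the largest gap on a log scale:

```python
import math

def split_on_log_gap(values):
    """Split positive values into two clusters at the largest gap on a
    log10 scale and return the upper cluster, preserving input order.

    Assumes noise and signal differ by orders of magnitude and that
    every value is > 0 (a literal 0 would need special handling).
    """
    if len(values) < 2:
        return list(values)
    logs = sorted(math.log10(v) for v in values)
    # The largest gap between consecutive sorted log-values marks the split.
    gap_index = max(range(len(logs) - 1), key=lambda i: logs[i + 1] - logs[i])
    cutoff = (logs[gap_index] + logs[gap_index + 1]) / 2
    return [v for v in values if math.log10(v) > cutoff]
```

Unlike a mean/stdev test, this discriminates on the structure a human actually sees: two well-separated magnitude clusters.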
-Kevin P
12-10-2024 11:15 AM
Thank you for your patience. I was finally able to set up the machine, and I've updated my VI to better illustrate the issue I'm facing.
The problem boils down to dealing with two arrays:
I've attempted to create a VI that uses standard deviation to filter the array, aiming to retain only the values that are close to each other while removing the extremely low noise values that are being transmitted alongside the desired data.
Despite my efforts, the result isn't working as expected. The noise values still persist in the array, and the logic doesn't seem to properly isolate the desired values.
I've attached my updated VI, where you can see:
If anyone can help me refine the VI or suggest a better way to tackle this, I'd be very grateful. Thank you for your time and support!
12-10-2024 11:36 AM
Your two arrays just differ by a factor of 50, which seems redundant.
As you can see, a simple decimation, keeping the even indices, is all you need.
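In text form, that decimation is just a stride-2 slice (Python sketch; the function name is illustrative, the LabVIEW equivalent being Decimate 1D Array):

```python
def decimate_even(values):
    """Keep only the elements at even indices (0, 2, 4, ...)."""
    return values[::2]
```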
12-10-2024 12:04 PM
Thank you so much for the help! It was really that simple... I can't believe I was struggling with something so straightforward, but your guidance made all the difference.
I've implemented the solution, and it's now working perfectly. I really appreciate the time and effort you all put into helping me out. This community is amazing!
Thanks again!