01-28-2015 03:38 AM
Hello,
I have a 2D array of floating-point values. Each row represents the profile of an object (its width in a given plane). Each object is sampled about 40 times, so 40 values per row, and we will end up with about 150 profiles. In other words, it's not exactly a small array.
I need to compare these profiles with an incoming profile and return the best match for these values. Note that the floating point values will never match exactly.
I need to perform this comparison about 3 times per second, so I'm looking for a fast yet reliable solution.
I have found a couple of posts here on the forum which suggest simply running through each and every value and comparing the deviation. Though I haven't benchmarked it, I fear it may be resource intensive (and I find it rather inelegant as well).
I have looked into CrossCorrelation.vi, which seems to be what I'm looking for. The problem there is that I'm not really sure what to make of the output from that function.
As far as I understand, a higher peak in the Rxy output means greater similarity between the inputs, not greater deviation. If that's correct, I could take the peak Rxy value for each comparison and pick the profile with the highest one.
What I'm looking for is thoughts and ideas in general about my cross correlation solution (would it work at all?), and about any alternative solutions you might know of.
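[Editor's note: the cross-correlation idea above can be sketched outside LabVIEW. This is a Python/NumPy illustration, not the CrossCorrelation.vi itself; the function name and array shapes are made up for the example. With normalized inputs, the Rxy peak approaches 1 for a near-identical profile, so the best match is the profile with the highest peak.]

```python
import numpy as np

def best_match_xcorr(profiles, incoming):
    """Pick the row of `profiles` whose normalized cross-correlation
    peak against `incoming` is highest (highest peak = best match)."""
    # Normalize the incoming profile so a perfect match peaks at 1.0.
    inc = (incoming - incoming.mean()) / (incoming.std() * len(incoming))
    best_idx, best_peak = -1, -np.inf
    for i, row in enumerate(profiles):
        r = (row - row.mean()) / row.std()      # zero-mean, unit-std row
        rxy = np.correlate(inc, r, mode="full") # Rxy over all lags
        peak = rxy.max()                        # similarity at best alignment
        if peak > best_peak:
            best_idx, best_peak = i, peak
    return best_idx
```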
Thank you,
-Tobias
01-28-2015 04:07 AM
Hi Tobias,
to find a "best match" you first have to define a criterion for your matches.
Commonly used criteria are:
- maximum deviation
- mean deviation
- root mean square deviation…
Once you have decided which criterion to use, calculate that value for each profile, then simply select the profile with the best value…
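[Editor's note: for reference, here is a minimal Python/NumPy sketch of those three criteria — not LabVIEW code; the function name is illustrative and the shapes follow Tobias's description (~150 profiles × ~40 samples). Each criterion is a per-row score; the lowest score wins.]

```python
import numpy as np

def best_match(profiles, incoming, criterion="rms"):
    """Return the row index of the profile closest to `incoming`.

    profiles:  2D array, one profile per row (e.g. 150 x 40)
    incoming:  1D array with the same number of samples
    criterion: "max", "mean", or "rms" deviation
    """
    diff = np.abs(profiles - incoming)        # row-wise broadcast difference
    if criterion == "max":
        score = diff.max(axis=1)              # worst-case deviation per profile
    elif criterion == "mean":
        score = diff.mean(axis=1)             # average deviation per profile
    else:
        score = np.sqrt((diff ** 2).mean(axis=1))  # root mean square deviation
    return int(np.argmin(score))              # lowest score = best match
```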
01-28-2015 04:33 AM
Hi GerdW,
Many thanks for your reply.
Do you have any idea which of these is the most precise, and which is the fastest?
And do you know whether people decide based on aspects other than these two?
Thanks for the fast reply, much obliged.
01-28-2015 04:36 AM
Hi Tobias,
calculating differences is rather fast, and summing or averaging the differences is fast too. A modern CPU should do all of this in a few µs for arrays of 1000 elements…
How fast does that calculation need to be?
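[Editor's note: GerdW's estimate is easy to check with a quick benchmark. A Python/NumPy sketch, assuming the ~150 × 40 shape from the original post; absolute timings depend on hardware, but even in interpreted Python one RMS-based search over this data is far faster than the 3-searches-per-second requirement.]

```python
import time
import numpy as np

def time_rms_match(n_profiles=150, n_samples=40, repeats=1000):
    """Roughly measure one RMS-based best-match search over random data.
    Returns (best_index, average_seconds_per_search)."""
    rng = np.random.default_rng(0)
    profiles = rng.random((n_profiles, n_samples))
    incoming = rng.random(n_samples)
    t0 = time.perf_counter()
    for _ in range(repeats):
        # RMS deviation of every profile against the incoming one
        score = np.sqrt(((profiles - incoming) ** 2).mean(axis=1))
        best = int(np.argmin(score))
    elapsed = (time.perf_counter() - t0) / repeats
    return best, elapsed
```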
01-28-2015 05:33 AM
Yeah, a bit of testing seems to show I'll have little trouble using the deviation VIs.
Thanks a bunch for your help 🙂