07-25-2007 05:05 PM
07-26-2007 04:57 AM - edited 07-26-2007 04:57 AM

Message edited by TiTou on 07-26-2007 11:58 AM
We have two ears and one mouth so that we can listen twice as much as we speak.
Epictetus
07-26-2007 11:26 AM
Hi TiTou,
Thank you very much for the posting. Yes, it seems to depend on a lot of things. One more interesting thing I have found is that it also depends on the content of the image. Meaning: if you keep the portion of the image where you would get a match score of 1000, delete the content of the other parts in Paint, and pass in that same image, it finds the match. Whereas if there are other pictures in the other parts of the image, the algorithm seems to get distracted. That surprises me because the match is obvious in my case: I am searching for something in an image taken with PRINTSCREEN, so the template is always a 100% pixel-for-pixel match.
I guess the algorithm is tuned to run fast and so it misses obvious matches, which may be good for time-critical applications. But I don't see any documentation or input parameters that would let me make it more robust by giving it more time.
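To make that concrete, here is a rough sketch in Python with OpenCV. This is not NI Vision's actual matcher and the file names are made up; it just contrasts an exhaustive normalized cross-correlation (which always scores a pixel-exact screenshot crop at the top) with a coarse, speed-oriented search that can skip past the peak when the rest of the image is busy:

```python
# Rough analogue only, NOT the NI Vision pattern matcher; file names are
# placeholders for a PRINTSCREEN capture and the learned template.
import cv2
import numpy as np

screen = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

# Exhaustive search: normalized cross-correlation at every position.
# A pixel-exact crop scores ~1.0 here (1000 on NI Vision's scale),
# no matter what else is in the image.
result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(result)
print(f"exhaustive: score={best_score:.3f} at {best_loc}")

# Speed-oriented search: only look at every 8th position, the way a
# fast matcher subsamples. On a cluttered image the true peak can fall
# between the sampled positions, so the "obvious" match comes back with
# a much lower score or is missed entirely.
step = 8
coarse = result[::step, ::step]
cy, cx = np.unravel_index(np.argmax(coarse), coarse.shape)
print(f"coarse:     score={coarse[cy, cx]:.3f} at ({cx * step}, {cy * step})")
```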
Can any NI folks or experts comment on this problem?
Thanks,
logic
07-27-2007 02:20 AM
We have two ears and one mouth so that we can listen twice as much as we speak.
Epictetus
07-27-2007 10:30 AM
Hi TiTou,
I take back my words. It is possible to make it work the way we want, and there is documentation available; we just need to spend some time searching for it and learning it.
I also found some top-level information in the "NI Vision for LabVIEW" manual, which gives a better understanding of how to set the parameters that increase the accuracy. Thank you very much for the help. I do appreciate it.
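For anyone else who lands here, the parameters I mean are things like the minimum match score and the number of matches requested. A minimal sketch of the idea, again in Python/OpenCV rather than NI Vision (the 0-1000 scale and the two constants are just stand-ins for the LabVIEW inputs, not the real API):

```python
# Sketch of the parameter idea only, not the NI Vision API; the constants
# mimic "Minimum Match Score" / "Number of Matches Requested" style inputs.
import cv2
import numpy as np

MIN_MATCH_SCORE = 800        # NI Vision reports scores on a 0-1000 scale
NUM_MATCHES_REQUESTED = 3

screen = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
scores = np.clip(result, 0, 1) * 1000        # map correlation onto a 0-1000 score

# Keep only candidates above the minimum score, best first. A real matcher
# would also suppress candidates that overlap each other.
ys, xs = np.where(scores >= MIN_MATCH_SCORE)
order = np.argsort(scores[ys, xs])[::-1][:NUM_MATCHES_REQUESTED]
for i in order:
    print(f"match at ({xs[i]}, {ys[i]}) with score {scores[ys[i], xs[i]]:.0f}")
```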
Thanks,
logic