
Machine Vision


Feature based geometric match stability issue

AndyN,

One word of advice: avoid templates that are "rotated" relative to the images in which you want to find them. In general, I have had bad experiences with Vision when trying to detect rotated shapes at angles of more than 5 degrees. So it might be worthwhile to analyze your image first and rotate it to a predefined orientation. Shift-based pattern matching works far better than rotation-based matching.
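To make the shift-based suggestion concrete, here is a minimal pure-NumPy sketch of the idea (illustrative only, not NI Vision code): once the image has been rotated to a predefined orientation, all that remains is a translation-only normalized cross-correlation search. The sizes and data below are made up for the example.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized patches (-1..1)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def match_shift_only(image, template):
    """Brute-force translation-only search; returns (row, col, score)."""
    th, tw = template.shape
    best = (0, 0, -1.0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            s = ncc(image[r:r + th, c:c + tw], template)
            if s > best[2]:
                best = (r, c, s)
    return best

# Assume the image has already been rotated to the predefined orientation;
# crop a template from a reference frame and search with shifts only.
rng = np.random.default_rng(0)
image = rng.random((40, 40))
template = image[10:18, 22:30].copy()
row, col, score = match_shift_only(image, template)
```

Because no rotation remains between template and image, the search space is two-dimensional and the peak score is unambiguous.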

www.xinstruments.com - Custom Software for Industrial Automation

www.hdrconverter.com - Picture processing made easy

Message 21 of 36

I have been in touch with our specialty product engineer on the topic, and though I have no big news, I have a couple of suggestions.  

 

His first suggestion on seeing the situation was to adjust the template. The settings in the Template Editor can greatly influence how stable the matching is. For example, making sure the edge detection settings are configured for the kinds of edges you are seeing can make a difference. If you use the Line Profile tool in Vision Assistant to examine the edges, you can see and evaluate them for this purpose.
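For anyone unfamiliar with the tool, a line profile is simply the pixel intensities sampled along a line, and the steepness of the transition indicates how sharp an edge is. A hypothetical NumPy equivalent of that inspection (synthetic data, not Vision Assistant output):

```python
import numpy as np

# Synthetic 8-bit image with a slightly soft vertical edge near column 10
image = np.zeros((20, 20), dtype=np.uint8)
image[:, 10:] = 200
image[:, 9] = 100               # one-pixel transition band softens the edge

profile = image[10, :].astype(float)           # intensities along a horizontal line
gradient = np.diff(profile)                    # slope between neighboring pixels
edge_pos = int(np.argmax(np.abs(gradient)))    # where the transition begins
edge_strength = float(np.abs(gradient).max())  # how steep the edge is
```

A steep, tall gradient peak suggests the edge detector's contrast and steepness settings can be strict; a shallow, wide one suggests they need to be relaxed.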

 

XInstruments has a suggestion worth considering, but it sounds like your application involves varying orientations and you ultimately have little control over the angle.  

 

I will be needed on another project and may not be able to follow up on this issue, but if that is the case I will see to it that another engineer is filled in and brought up to speed. The specialty product engineer will take a look at this, and it may merit further investigation.

Message 22 of 36

It turns out that, on closer inspection, the template was neither the same size nor the same bit depth as the images you were searching. We made a new template and are getting two nearly perfect matches. Let me know how this new template works on your system.
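A mismatch like this is easy to guard against programmatically before attempting a match. A hypothetical NumPy-style sanity check (not an NI Vision API; the function name and sizes are illustrative):

```python
import numpy as np

def check_template(image, template):
    """Reject template/image pairs that cannot be matched sensibly."""
    if template.dtype != image.dtype:
        raise ValueError(
            f"bit depth mismatch: template {template.dtype}, image {image.dtype}")
    if (template.shape[0] >= image.shape[0]
            or template.shape[1] >= image.shape[1]):
        raise ValueError("template must be strictly smaller than the search image")
    return True

image = np.zeros((500, 510), dtype=np.uint8)   # 8-bit search image
good = np.zeros((186, 225), dtype=np.uint8)    # same depth, smaller: accepted
bad = np.zeros((186, 225), dtype=np.uint16)    # 16-bit template: rejected
```

Running the check at load time turns a silent matching failure into an immediate, explainable error.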

Message 23 of 36

I am not sure if we are on the same page anymore.

 

Notfound.png 510x500 8b/pix

Found1.png   510x500 8b/pix

FTemplate.png 225x186 8b/pix

 

Both test images are the same resolution.

The template is smaller, as it should be.

All images, test and template, have the same bit depth.

 

We are still talking about those three PNGs, all acquired and saved through LabVIEW.

 

The image you used was intended for visual comparison and nothing else. It was put together with image editing software; I thought that was obvious.

 

 

 

Message 24 of 36

Hello AndyN,

 

I am another applications engineer who will be working with you to help resolve this issue.

 

As the geometric pattern we are matching increases in complexity, it makes sense that the number of false negatives would also increase. Simplifying the pattern, for example by selecting one of the interface surfaces and one section of that interface, may increase the robustness of our detection.

 

I built a template using a section of the lower surface that was able to detect the pattern in both images; however, I am unsure how well it would perform on other images.

 

Do you have any additional test images available? I would want to test several images before declaring any of the templates I have built robust, or before determining what increases or decreases the reliability.

 

Thanks,

 

- Joel

Message 25 of 36

Joel,

 

Both the edge-based and feature-based methods produce a correlation-based score.

Please correct me if I am wrong.

That usually means it is a similarity-based measure.

So C is similar to A at level X and to B at level Y.

 

What seems wrong here is that the method/math/algorithm does not return the similarity correctly: it says C is similar to B at level zero.

If that is true, then there is no point in using the method. It is effectively a binary method, and the correlation factor means nothing.

It does not matter what feature I use or how good the image is; your hope is as good a measure of robustness as the score.

You come up with a set of features on two almost identical images, and they will fail on the third; tune for three, and it will fail on the fourth, and so on.

NI usually does not believe in a problem until someone there can reproduce it 😉 Glad doctors and mechanics don't require that.
I have given you something you can reproduce. My question is: why does the method fail to deliver any similarity?

No more massaging the template or trying to convince me I did something wrong, filters, etc.

If the math is right, why does it return zero? If it is wrong, please file a CAR.
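To state the expectation concretely: a genuine correlation-style similarity should degrade gradually as the image degrades, rather than collapsing to zero. A toy NumPy illustration of that behavior (unrelated to NI's internal algorithm; data is synthetic):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized patches (-1..1)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

rng = np.random.default_rng(1)
template = rng.random((16, 16))
clean = template.copy()                                  # identical image
noisy = template + 0.2 * rng.standard_normal((16, 16))   # degraded image
unrelated = rng.random((16, 16))                         # different content

s_clean = ncc(clean, template)          # perfect match
s_noisy = ncc(noisy, template)          # lower, but far from zero
s_unrelated = ncc(unrelated, template)  # near zero
```

The noisy image scores well below the clean one but well above the unrelated one; a score of exactly zero for a visibly similar image is the behavior being disputed.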

 

 

Message 26 of 36

Hello AndyN,

 

I think that since the three methods are distinct, there is no real transitive property of correlation-based scores. If method A and method B detect a match, that does not prove that a separate method C has to detect a match at any level of certainty.

 

I am more than willing to admit that there may be an issue with the algorithm. Most of our advice was about how to work around this in order to get you up and running as successfully and quickly as possible. That may have come across as an attempt to persuade you that the algorithm was not broken, which was not our intent. The suggestions about modifying templates and implementation were merely about how, if this VI is not functioning correctly, we can still obtain the desired functionality.

 

I am filing a CAR on this VI, and I will be using these templates and images as an example of how it is not behaving as expected. However, the results of that investigation will not help this project unless you are able to wait for the next patch, assuming a fix is implemented that quickly.

 

If you take your car to a mechanic, tell them the engine is stuttering, and the issue can't be replicated while they are there, the mechanic would likely not know where to start. I am sure they would gladly take the car in, since they charge by the hour 😉. In order for us to investigate this issue and find out what is going wrong, we do need to be able to replicate it; otherwise we are left without any starting point.

 

Moving forward, making this project successful will require either massaging the template until we find a combination that proves robust, or migrating this inspection to an alternative VI or method for pattern matching.

 

Cheers,

 

Joel

Message 27 of 36

Thanks Joel,

I see that we are making great progress.

Is there a place I can see the CAR request and its description?

 

I appreciate all the suggestions. In my project I process 60 fps for up to several minutes at a time, so I need a very reliable method.

If matching fails, the task starts from the beginning.

If this weren't a real-time project involving quite a number of people, I would have a different perspective.
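To put rough numbers on that requirement (assuming three minutes for "several"): at 60 fps a run contains thousands of frames, and because one miss restarts the run, the per-frame match reliability has to be extreme.

```python
fps = 60
minutes = 3                                  # assumed value for "several minutes"
frames = fps * 60 * minutes                  # match attempts per run

run_success = 0.99                           # suppose 99% of runs must finish
per_frame = run_success ** (1.0 / frames)    # required per-frame reliability
print(frames, round(per_frame, 8))
```

Under those assumptions, a 99% run completion rate already demands better than 99.9999% per-frame reliability, which is why a method that occasionally returns a zero score on a good image is unusable here.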

 

Long weekend around the corner!

 

Message 28 of 36

Hello AndyN,

 

I am glad to hear that this application is making good progress. I can see how this application has a high reliability requirement: at 60 fps for a few minutes, a single run means examining thousands of images.

 

I reported this issue to our R&D team under CAR #384585; the title is "Feature based geometric match instability" and it is currently under investigation. Having example code and images greatly speeds up the investigation, since we have a known benchmark, and I would like to thank you for your work in producing specific examples that demonstrate this instability.

 

Thanks for your feedback, and have a great weekend!

 

Joel

 

Message 29 of 36

Was there any progress on this issue?

Message 30 of 36