

normalized cross correlation vs. pattern matching

Hi all,

Software used: NI LabVIEW 2012, NI Vision, normalized cross correlation, Match Pattern 2

What I'm trying to do:

- For a manufacturing inspection we want to compare beam images of an illumination device, i.e. we want to compare each beam image with the other images in the manufacturing lot to detect stability or variations.

- The exposure time is automatically optimised so that the 8-bit intensity range always ends up spanning 0 up to a maximum somewhere in the 240..255 range.

 

Solution approach:

- I compute a normalized cross correlation between each pair of images in one lot. For each correlation I take the maximum of the correlation image/map as the similarity measure, which gives a matrix in which every image is correlated with every other image (see the sketch after this list).

- To cross-check the concept I looked for a second way to do the same thing and found Match Pattern 2. I always use one image as the template and the respective other image as the search image, and I take the match score as the similarity measure. I use learn mode "all" and match mode "shift" (because we do not expect more than 4° of tilt).

- Beforehand all images are cropped to an equal ROI and downsampled to the same size to increase speed (because we are not interested in the high-frequency / small-scale features).
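
For reference, a minimal Python sketch of this pairwise approach (crop/downsample, then take the peak of the normalized cross correlation as the similarity). It uses skimage as a stand-in for the NI Vision VIs, and its match_template is a zero-normalized cross correlation, so the numbers won't match IMAQ exactly; the ROI and scale factor below are placeholders, not my actual settings.

import numpy as np
from skimage.feature import match_template
from skimage.transform import rescale

def preprocess(img, roi=(slice(0, 512), slice(0, 512)), scale=0.25):
    """Crop to a common ROI and downsample (ROI and scale are placeholders)."""
    return rescale(img[roi].astype(float), scale, anti_aliasing=True)

def ncc_peak(img_a, img_b):
    """Peak of the normalized cross-correlation map of two same-size images.
    pad_input=True lets the 'template' shift relative to the 'search' image."""
    return float(match_template(img_a, img_b, pad_input=True).max())

def similarity_matrix(images):
    """Symmetric matrix of pairwise NCC peaks over one manufacturing lot."""
    imgs = [preprocess(im) for im in images]
    n = len(imgs)
    sim = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            sim[i, j] = sim[j, i] = ncc_peak(imgs[i], imgs[j])
    return sim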

 

Questions:

1. The match score does not seem to be proportional to the correlation maximum. What could be the reason? Is it because xcorr is only shift invariant?

2. The match score seems to be much more sensitive than the xcorr. With both ranges normalized to 1, the results of one test run on the same images are:

xcorr range (0.90...0.988)

match score (0.6...0.96)

3. Does pattern matching work similarly to cross correlation? We are comparing images of the same size, whereas usually the learned pattern is smaller than the search image. Am I right to assume that in pattern matching the entire images are also cross correlated, so that comparing images of the same size directly with each other makes sense?

 

 

Attachments: the VI with the Vision xcorr and match pattern code.

Message 1 of 3

Hi,

 

IMAQ Match Pattern is designed to locate templates, not to score similarity. I have played around with the scores and have not yet found a direct mathematical relationship with localized correlation scores.

 

The function does a lot of internal optimization, not to mention that IMAQ FFT is kind of sloppy with non-even image sizes. There are a couple of different methods implemented in IMAQ Match Pattern; you are dealing with something based on normalized xcorr, but there are some nice proprietary ideas in the background (cough.. read NI's patents.. cough..)

 

I would solve this problem in two steps:

1) Locating the template
I don't know how your images vary, but I often use either correlation or IMAQ Match Pattern to locate templates. Looking at your image, I can imagine that the centroid / the centroid of a low-passed image could be an alternative that is more robust to aberrations / noise, but you need to make a call based on your imaging system and problem space (see the sketch after step 2).

2) Comparing the template
If you are interested in structural changes (shape - yes, intensity - no), IMAQ SSIM gives you a nice "difference image" and scores. Otherwise, Golden template matching is probably the way to go. Or write your own comparison algorithm. What kind of variations are you looking for?
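
To make the two steps concrete, here is a rough Python sketch using scipy/skimage as stand-ins for the IMAQ VIs (not their actual implementations); the blur sigma and the 8-bit data range are assumptions you would tune for your images.

import numpy as np
from scipy.ndimage import gaussian_filter, center_of_mass
from skimage.metrics import structural_similarity

def locate_beam(img, sigma=5.0):
    """Step 1: centroid of a low-passed image. The Gaussian blur suppresses
    noise/speckle so the centroid follows the gross beam shape."""
    smooth = gaussian_filter(img.astype(float), sigma=sigma)
    return center_of_mass(smooth)  # (row, col)

def compare_beams(img_a, img_b):
    """Step 2: SSIM score plus a per-pixel difference map, in the spirit of
    what IMAQ SSIM reports (8-bit images assumed via data_range=255)."""
    score, diff = structural_similarity(
        img_a.astype(float), img_b.astype(float), data_range=255, full=True)
    return score, diff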

 

Btw: Try to avoid using the full dynamic range (255) and aim for 250. Otherwise, you'll saturate the sensor and won't notice.
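
A quick way to catch this is to count near-clipped pixels before trusting a frame; a minimal sketch, where the 250 threshold follows the advice above and the allowed fraction is an arbitrary example:

import numpy as np

def is_saturated(img, limit=250, max_fraction=0.001):
    """Flag a frame whose fraction of near-clipped pixels exceeds max_fraction."""
    return np.mean(img >= limit) > max_fraction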

 

Cheers,
Birgit.

Message 2 of 3

Dear b.ploetzeneder,

Many thanks for the reply and suggestions! I'm going to have a look at the "cough" suggestion 😉

The location problem:

Because we have irregular beam shapes and intensities, the center of gravity won't work properly, and xcorr might be the best solution. The image in the VI is just one example and the variations are quite big. (In fact it is a multi-beam optic and we need to compare multiple beams, each affected in a different way by the manufacturing process.)

The comparison:

Shape: Because the intensity variations are also important, we unfortunately cannot use shape measurements alone.

Golden template: Currently we are at a stage of production where we don't have compensated optics yet. First we just need to stabilize or increase repeatability regardless of the shape, so it is not easy to define a "golden" template. We thought about optically simulating a golden template, but that is not possible at this stage of manufacturing (maybe later on). That is why I was looking for more general solutions, which led me to the xcorr and match pattern approaches. But I'm going to have a look at that; maybe we can build a mean golden template image from a set of measurement/production images (see the sketch below)! Thanks for that suggestion! 🙂
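
If it helps, something like the following Python sketch could be a starting point for such a mean template. It registers each image to the first one with a shift-only phase correlation (a stand-in for our NCC-peak alignment, matching the assumption that there is no significant rotation) and assumes the images are already cropped to the same ROI.

import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def mean_golden_template(images):
    """Pixel-wise mean of a set of production images, each shift-registered
    to the first image before averaging (translation only, no rotation)."""
    ref = images[0].astype(float)
    aligned = [ref]
    for img in images[1:]:
        offset, _, _ = phase_cross_correlation(ref, img.astype(float))
        aligned.append(nd_shift(img.astype(float), offset))
    return np.stack(aligned).mean(axis=0)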

 

Message 3 of 3