Machine Vision

How exactly does LabVIEW Vision achieve subpixel accuracy?

I'm working on a VI that tracks a particle via a microscope and a CCD. I need to measure the position of the particle with extreme accuracy. It has been done before using other software, and the key is to measure the particle position to subpixel accuracy. I've noticed that when I apply the centroid function (or any other position routine) to an image, the x and y positions are quoted to several decimal places, i.e. subpixel. How does LabVIEW do this? Do these values really mean what they claim?
Message 1 of 5
There are two methods I know of that Vision uses to get subpixel accuracy: the first is averaging, and the second is interpolation.

For the centroid, the standard Σ(A·x) / Σ(A) formula is used, where A is the intensity of each pixel and x is its coordinate. The result is the x coordinate of the centroid (balance point); the same calculation gives the y coordinate. This is essentially using the average value of a large number of pixels to give you subpixel accuracy. One warning about the centroid function: the object must be white on a black background to get useful results. The background is included in the calculation, so if your background is not black it will shift the centroid value.
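
To make the averaging idea concrete, here is a minimal Python/NumPy sketch of an intensity-weighted centroid (the function name and the toy image are my own illustration, not NI Vision code):

import numpy as np

def intensity_centroid(image):
    # Intensity-weighted centroid: sum(A*x)/sum(A) for each axis.
    # Assumes a bright particle on a near-zero (black) background.
    img = np.asarray(image, dtype=float)
    total = img.sum()
    ys, xs = np.indices(img.shape)       # row (y) and column (x) index grids
    cx = (img * xs).sum() / total        # subpixel x coordinate
    cy = (img * ys).sum() / total        # subpixel y coordinate
    return cx, cy

# Toy example: a 2x2 bright patch inside a dark frame
frame = np.zeros((8, 8))
frame[3:5, 4:6] = 100.0
print(intensity_centroid(frame))         # -> (4.5, 3.5), between pixel centers

Because every bright pixel contributes to the weighted sums, the result can fall between pixel centers, which is where the extra decimal places come from.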

Interpolation is mostly used when finding an edge. Based on the high and low grayscale values on either side of the edge, the intermediate grayscale values across the transition are interpolated to determine the edge location to subpixel accuracy.
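
Here is a minimal sketch of that interpolation, assuming a 1D intensity profile taken across the edge (just the principle, not the actual NI Vision edge routine):

import numpy as np

def subpixel_edge(profile, threshold):
    # Locate the first rising edge in a 1D intensity profile by linear
    # interpolation between the two pixels that bracket the threshold.
    p = np.asarray(profile, dtype=float)
    for i in range(len(p) - 1):
        if p[i] < threshold <= p[i + 1]:
            frac = (threshold - p[i]) / (p[i + 1] - p[i])
            return i + frac              # fractional pixel position
    return None                          # no rising edge found

profile = [10, 12, 11, 40, 90, 95]       # dark-to-bright transition
print(subpixel_edge(profile, 50))        # -> 3.2, between pixels 3 and 4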

Bruce
Bruce Ammons
Ammons Engineering
Message 2 of 5
Nearly all vision vendors use interpolation to achieve sub-pixel accuracy. A mathematical fit is used to determine the "true" edge of an object.
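One common choice of fit, purely as an illustration and not any particular vendor's implementation, is a parabola through the peak of the gradient magnitude along a scan line:

import numpy as np

def subpixel_peak(gradient):
    # Refine the strongest gradient sample by fitting a parabola through
    # it and its two neighbours (a common sub-pixel trick; not necessarily
    # what any particular vendor implements).
    g = np.asarray(gradient, dtype=float)
    i = int(np.argmax(g))
    if i == 0 or i == len(g) - 1:
        return float(i)                  # peak at the boundary, no refinement
    left, centre, right = g[i - 1], g[i], g[i + 1]
    offset = 0.5 * (left - right) / (left - 2.0 * centre + right)
    return i + offset                    # fractional index of the edge

edge_profile = np.array([10., 12., 30., 80., 95., 97.])
gradient = np.abs(np.diff(edge_profile))  # simple finite-difference gradient
print(subpixel_peak(gradient))            # ~1.98: refined peak position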

If you rely on sub-pixel accuracy, or even if you don't, it's best to test your vision system by conducting a gauge R&R (repeatability and reproducibility) study. You should be able to achieve consistent measurements even after you remove and replace the object being measured. Manufacturer claims of "1/10th pixel" or "1/60th pixel" accuracy are generally bogus and largely irrelevant to real-world applications; accuracy depends on your whole optical system.

In practice you might achieve 1/6th or 1/8th pixel repeatability (1 sigma) under good conditions. 1/4-pixel repeatability is more realistic.

If you rely on sub-pixel measurements, you'll need to calibrate your image. A nonlinear calibration technique that calibrates the entire 2D image would be best. In an uncalibrated image, for example, it's possible that two objects that measure 100.0 pixels apart when they appear centered in the image may measure 99.2 pixels apart when they are moved to a corner of the image.
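
As a rough sketch of what such a calibration can look like (the quadratic model and grid values below are illustrative assumptions, not NI Vision's calibration routines), you can fit a smooth pixel-to-real-world mapping from a grid of points with known positions:

import numpy as np

def fit_quadratic_map(pixel_pts, world_pts):
    # Fit x_world = f(px, py) and y_world = g(px, py) with 2D quadratics.
    # An illustrative stand-in for a proper distortion/perspective calibration.
    px, py = pixel_pts[:, 0], pixel_pts[:, 1]
    A = np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])
    coeffs_x, *_ = np.linalg.lstsq(A, world_pts[:, 0], rcond=None)
    coeffs_y, *_ = np.linalg.lstsq(A, world_pts[:, 1], rcond=None)
    return coeffs_x, coeffs_y

def apply_map(coeffs_x, coeffs_y, px, py):
    terms = np.array([1.0, px, py, px * py, px**2, py**2])
    return terms @ coeffs_x, terms @ coeffs_y

# Toy example: a 3x3 dot grid imaged at 0.01 mm per pixel (no distortion here)
pixel_pts = np.array([[x, y] for y in (0, 100, 200) for x in (0, 100, 200)], float)
world_pts = pixel_pts * 0.01
cx, cy = fit_quadratic_map(pixel_pts, world_pts)
print(apply_map(cx, cy, 150.0, 50.0))    # -> approximately (1.5, 0.5) mm

A real calibration would use a dot-grid or checkerboard target and a model that captures perspective and lens distortion, but the idea is the same: measure known points across the whole field and fit a mapping.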
Message 3 of 5
Good points, Rethunk!

Have you considered proper lens selection matched to your needed field of view, the CCD pixel size and number of pixels, proper transfer tubes/optics to relay the image to the camera focal plane, the depth of focus needed, and lighting? These are all important aspects of machine vision, and the weakest link in the chain will limit your accuracy. You may want to assess the quality of your acquired images with a USAF 1951 resolution target (Edmund Optics) and quantify the MTF (modulation transfer function) of your system. Machine vision is an art and a science!
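
As a very rough way to put numbers on those target images, you can compute the Michelson contrast of each line-pair group from its intensity profile (a simplified stand-in for a full MTF measurement; the sample values below are made up):

import numpy as np

def michelson_contrast(profile):
    # Contrast of a bar-pattern intensity profile: (Imax - Imin)/(Imax + Imin).
    # Measured per line-pair group, this gives a rough contrast-vs-frequency
    # curve related to the system MTF.
    p = np.asarray(profile, dtype=float)
    return (p.max() - p.min()) / (p.max() + p.min())

coarse_bars = [200, 200, 20, 20, 200, 200, 20, 20]   # well-resolved group
fine_bars = [130, 110, 130, 110, 130, 110]           # barely resolved group
print(michelson_contrast(coarse_bars))   # ~0.82
print(michelson_contrast(fine_bars))     # ~0.08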

Good Luck!
~~~~~~~~~~~~~~~~~~~~~~~~~~
"It’s the questions that drive us.”
~~~~~~~~~~~~~~~~~~~~~~~~~~
Message 4 of 5
I am working on the exact same problem.

Our tracker is up and working pretty well. It uses the pattern-matching VIs, which gives it advantages over the MATLAB and IDL programs written by Crocker and co.

If you go to our website
www.st-and.ac.uk/~gfm2/tracker.htm
you will be able to get the latest version of the VIs.

I am waiting for a piezo stage to calibrate the sub-pixel accuracy. Once that is done, the program should be pretty powerful.

Graham
Message 5 of 5