I'm working on a VI that tracks a particle via a microscope and CCD. I need to measure the particle's position to extreme accuracy... it has been done before using other software, and the key is to measure the particle to subpixel accuracy. I've noticed that when I apply the centroid function (or any of the other position routines) to an image, the x and y positions are quoted to several decimal places, i.e. subpixel. How does LabVIEW do this? Do these values really mean what they claim?
There are two methods I know of to get subpixel accuracy that are used by Vision. The first is averaging, and the second is interpolation.
For the centroid, the standard Σ(A·x)/Σ(A) formula is used, where A is the intensity of each pixel and x its coordinate. The result is the x coordinate of the centroid (the balance point); the y coordinate is computed the same way. This is essentially using the weighted average of a large number of pixels to give you subpixel accuracy. One warning about the centroid function: you need a white object on a black background to get useful results. The background is included in the calculation, so if your background is not black it will shift the centroid value.
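To make the averaging idea concrete, here is a minimal sketch of the intensity-weighted centroid in Python/NumPy (not LabVIEW's actual implementation; the toy image and values are invented for illustration):

```python
import numpy as np

def centroid(img):
    """Return the (x, y) intensity-weighted centroid, sum(A*x)/sum(A)."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    ys, xs = np.indices(img.shape)  # row (y) and column (x) index grids
    return (xs * img).sum() / total, (ys * img).sum() / total

# A uniform 2x2 bright blob on a black background: its true center sits
# on pixel boundaries, and the centroid recovers that subpixel position.
img = np.zeros((5, 5))
img[1:3, 2:4] = 100.0
x, y = centroid(img)  # x = 2.5, y = 1.5
```

Note that if you add a nonzero background (e.g. `img += 5`), the result shifts toward the image center, which is exactly the warning above about non-black backgrounds.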
Interpolation is mostly used when finding an edge. Based on the high and low grayscale values on either side of the edge, the intermediate grayscale values are interpolated to determine the edge location to subpixel accuracy.
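As a rough sketch of the idea (one simple linear-interpolation scheme, not necessarily what Vision does internally), you can locate where a 1-D intensity profile crosses a threshold between two pixels:

```python
def edge_position(profile, threshold):
    """Return the fractional index where the profile crosses threshold."""
    for i in range(len(profile) - 1):
        lo, hi = profile[i], profile[i + 1]
        if (lo < threshold <= hi) or (hi < threshold <= lo):
            # Linear interpolation between the two straddling pixels
            return i + (threshold - lo) / (hi - lo)
    return None  # no crossing found

# Dark-to-bright step sampled across a pixel row (made-up values):
row = [10, 12, 40, 88, 90]
pos = edge_position(row, 50)  # crossing lies between pixels 2 and 3
```

The crossing lands at 2 + (50 − 40)/(88 − 40) ≈ 2.208 pixels, i.e. a subpixel edge location even though the data is sampled only at integer pixels.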
Nearly all vision vendors use interpolation to achieve sub-pixel accuracy. A mathematical fit is used to determine the "true" edge of an object.
If you rely on sub-pixel accuracy, or even if you don't, it's best to test your vision system by conducting a gauge R&R test. You should be able to achieve consistent measurements even if you remove and then replace the object to be measured. Manufacturer claims of "1/10th pixel" or "1/60th pixel" accuracy are generally bogus, and quite irrelevant to real-world applications; accuracy depends on your optical system.
In practice you might achieve 1/6th or 1/8th pixel repeatability (1 sigma) under good conditions; 1/4-pixel repeatability is more realistic for typical setups.
If you rely on sub-pixel measurements, you'll need to calibrate your image. A nonlinear calibration technique that calibrates the entire 2D image would be best. In an uncalibrated image, for example, it's possible that two objects that measure 100.0 pixels apart when they appear centered in the image may measure 99.2 pixels apart when they are moved to a corner of the image.
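To illustrate what a nonlinear 2D calibration does (this is a hand-rolled sketch, not NI Vision's calibration VIs; the grid, distortion model, and tolerances are all invented), you can fit a low-order 2-D polynomial mapping pixel coordinates to real-world coordinates from a grid of known points:

```python
import numpy as np

def design(px, py):
    # Quadratic terms model smooth lens distortion across the image.
    return np.column_stack(
        [np.ones_like(px), px, py, px * py, px**2, py**2])

# Calibration grid: known world positions vs. measured pixel positions.
# The pixel coordinates below include a small fake distortion.
wx, wy = np.meshgrid(np.arange(5.0), np.arange(5.0))
wx, wy = wx.ravel(), wy.ravel()
px = 10 * wx + 0.02 * wx**2
py = 10 * wy + 0.02 * wx * wy

# Least-squares fit of the pixel-to-world mapping, one axis at a time.
A = design(px, py)
cx = np.linalg.lstsq(A, wx, rcond=None)[0]
cy = np.linalg.lstsq(A, wy, rcond=None)[0]

def pixel_to_world(x, y):
    row = design(np.array([x]), np.array([y]))
    return float(row @ cx), float(row @ cy)
```

Once fitted, `pixel_to_world` corrects the measurement wherever the object sits in the field, which is why the 100.0-vs-99.2-pixel discrepancy above disappears after calibration.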
Have you considered proper lens selection matched to your needed field of view, the CCD pixel size and pixel count, the relay tubes/optics that transfer the image to the camera focal plane, the depth of focus needed, and the lighting? These are all important aspects of machine vision, and the weakest link in the chain will limit your accuracy. You may want to assess the quality of your acquired images with a USAF 1951 resolution target (Edmund Optics) and quantify the MTF (modulation transfer function) of your system. Machine vision is an art and a science!
~~~~~~~~~~~~~~~~~~~~~~~~~~ "It's the questions that drive us." ~~~~~~~~~~~~~~~~~~~~~~~~~~