Machine Vision


Origin location in a corrected image (using calibration data)

Hi there,

I am using the vision toolkit to correct deformation in images.
I am not using the dot grid template to let the software generate the calibration map; instead, I use a set of points (which I can precisely define in the deformed image) and provide their respective "real world" coordinates. I can then apply this correction to other images.
Up to this point, everything is fine.
Now I want to overlay two sets of images that are deformed differently (for instance, they represent the same object but have been imaged under slightly different conditions). I can correct one set using the approach above, and the other set separately using its own correction, but I am stuck there.
Each calibration/correction is characterized by a scaling factor and an origin coordinate. I need both to be able to overlay the two sets of images.
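To make explicit why those two quantities are enough: if I knew, for each corrected set, the pixel position of real-world (0, 0) and the scale (mm per pixel), and assuming both corrected images share the same axis orientation, the overlay would reduce to a simple similarity warp. A rough sketch (Python/OpenCV rather than LabVIEW; the origins, scales and file names below are made up):

import cv2
import numpy as np

# Made-up values: pixel location of real-world (0, 0) in each corrected
# image, and each corrected image's scale in mm per pixel.
origin_a, scale_a = np.array([250.3, 310.8]), 0.10
origin_b, scale_b = np.array([198.1, 275.4]), 0.12

# A real-world point p lands at origin_a + p / scale_a in image A and at
# origin_b + p / scale_b in image B, so B-pixels map to A-pixels by:
s = scale_b / scale_a
t = origin_a - s * origin_b
M = np.float32([[s, 0.0, t[0]],
                [0.0, s, t[1]]])

img_a = cv2.imread("corrected_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder names
img_b = cv2.imread("corrected_b.png", cv2.IMREAD_GRAYSCALE)
img_b_on_a = cv2.warpAffine(img_b, M, (img_a.shape[1], img_a.shape[0]))
overlay = cv2.addWeighted(img_a, 0.5, img_b_on_a, 0.5, 0)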
I have chosen non-linear calibrations that preserve area, and I have figured out a way to recover the scaling factor: compute the area of each square formed by my set of calibration points (as measured in the original image), take the average, and then take the square root of this average area. This gives the spacing between equidistant points in the corrected image.
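In code form, that computation looks roughly like this (a Python/NumPy sketch rather than LabVIEW; square_corners is a placeholder for however the calibration squares are stored):

import numpy as np

def corrected_grid_spacing(square_corners):
    # square_corners: list of 4x2 arrays, the pixel corners (in the original,
    # deformed image) of each square formed by calibration points, ordered
    # around the square. Because the chosen calibration preserves area,
    # sqrt(mean area) is the spacing, in pixels, between equidistant points
    # in the corrected image.
    areas = []
    for quad in square_corners:
        x, y = np.asarray(quad, dtype=float).T
        # shoelace formula for the area of the quadrilateral
        areas.append(0.5 * abs(x @ np.roll(y, -1) - y @ np.roll(x, -1)))
    return np.sqrt(np.mean(areas))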
However, I can't figure out how to get the location of the origin in the corrected image. One of my calibration points is associated with position (0, 0) in the real world, but that is neither its position (in pixel units) in the original image nor its position in the corrected image. I need the latter. How can I obtain it simply? (I can think of sophisticated ways to obtain it, but I suspect there might be a trivial one.)

Thanks in advance for your answer.
Sincerely,
X.
Message 1 of 5

Dear X,

I'm not sure if this is the information you are looking for, but take a look at the Vision Concepts Manual, page 3-10:

start»programs»National Instruments»Vision»Documentation»NI Vision»Concepts_Manual.pdf
 
"If you specify a list of points instead of a grid for the calibration process,
the software defines a default coordinate system, as follows:
1. The origin is placed at the point in the list with the lowest x-coordinate
value and then the lowest y-coordinate value.
2. The angle is set to zero.
3. The axis direction is set to indirect."

If not, I don't know of a good way to recover the transformation between an original image and a corrected image for a specific pixel. It would be one thing if it were a simple linear calibration, but we don't have access to the algorithm used to apply the nonlinear correction, which changes based on the location of the distortions in the image.

What are you looking for in overlaying the images? Are you doing a comparison?

I don't know the constraints of your project, but one thing you could try is doing a pattern or geometric match for a unique feature in both of the corrected images that you want to overlay (one whose location you know, and that will serve to align the images properly), and then defining the coordinate system based on that match.
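For example, something along these lines (sketched with OpenCV rather than with the Vision functions, and with placeholder file names) is what I have in mind:

import cv2
import numpy as np

# The two corrected images to be overlaid (placeholder file names).
img_ref = cv2.imread("corrected_set1.png", cv2.IMREAD_GRAYSCALE)
img_mov = cv2.imread("corrected_set2.png", cv2.IMREAD_GRAYSCALE)

# Detect and match features present in both corrected images.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img_ref, None)
kp2, des2 = orb.detectAndCompute(img_mov, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:50]   # keep the best matches

pts_ref = np.float32([kp1[m.queryIdx].pt for m in matches])
pts_mov = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate translation + rotation + uniform scale mapping set 2 onto set 1,
# then resample so both corrected images share one coordinate system.
M, _ = cv2.estimateAffinePartial2D(pts_mov, pts_ref, method=cv2.RANSAC)
aligned = cv2.warpAffine(img_mov, M, (img_ref.shape[1], img_ref.shape[0]))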

 

Hope that helps!

 

~Nate 

Message 2 of 5
Would (0,0) -> Transform to space 1 -> Inverse Transform to space 2 work?
Message 3 of 5
Solution
Accepted by topic author X.

Hi,

 

I suppose you are referring to the "Convert to/from Real World" VIs. As far as I understand what these VIs do, they only refer to two "spaces": one is the original, deformed image (with coordinates in pixels), and the other is the ideal "real world" space from which this image has been generated, with its coordinates expressed in whatever units you choose. I am concerned with a third space, namely the corrected image generated by the "IMAQ Correct Calibrated Image" VI, and I want to know where the point whose real-world coordinates are (0, 0) is located in this image (if possible with fractional pixel coordinates, not just integer pixel coordinates).

I did not post my reply to Nate, but we basically agreed that what I am looking for is not available yet, and that I have some work to do to obtain this information (as I suspected). This is a shame, as the above VI obviously knows how to compute what I am looking for, since it maps the (possibly fractional) pixel coordinate where the real-world origin is located in the original image onto another (fractional) pixel coordinate in the corrected image.
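For what it is worth, one workaround sketch (outside LabVIEW, with correct_image as a hypothetical stand-in for whatever applies the "IMAQ Correct Calibrated Image" step): mark the original-image pixel of the real-world origin in a blank probe image, run the same correction on it, and read off where the marker lands:

import numpy as np

def find_origin_in_corrected(correct_image, image_shape, origin_px):
    # correct_image : callable applying the SAME correction used for the real
    #                 images (hypothetical stand-in for the IMAQ correction step)
    # image_shape   : (height, width) of the original images
    # origin_px     : (x, y) pixel position, in the original image, of the
    #                 calibration point whose real-world position is (0, 0)
    probe = np.zeros(image_shape, dtype=np.uint8)
    x, y = int(round(origin_px[0])), int(round(origin_px[1]))
    probe[y, x] = 255                      # single bright marker pixel
    corrected = correct_image(probe)       # marker may spread over a few pixels
    ys, xs = np.nonzero(corrected)
    w = corrected[ys, xs].astype(float)
    # intensity-weighted centroid gives a fractional-pixel estimate
    return (np.sum(xs * w) / w.sum(), np.sum(ys * w) / w.sum())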

This might be available in future releases, but for now, some work is needed from us...

X.

 

Message 4 of 5

This is indeed an unsatisfactory state of affairs! I have reached the same inescapable conclusion.

 

I use a grid target to get a calibration image, with a camera that is slightly off-axis to the target plane. This gives me a coordinate system so that I can convert a pixel position measured in the raw image into a mm value in the real world.

 

However, what I want to do is use the calibration image to un-warp the raw image, do some processing, and take some measurements. There is no way to go from a pixel position in the un-warped image to a real-world coordinate.

 

What the calibration package does not provide is the origin (in the un-warped image) and the scale of the un-warped image. Surely this is not too much to ask?
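With those two numbers, the conversion would be trivial; something like this (made-up numbers, assuming the un-warped image axes are aligned with the real-world axes):

# Made-up values, just to show what is missing: the pixel location of the
# real-world origin in the un-warped image and that image's scale.
origin_px = (412.7, 305.2)      # fractional pixels, assumed
mm_per_pixel = 0.083            # assumed

def unwarped_pixel_to_mm(u, v):
    # assumes the un-warped image axes are aligned with the real-world axes
    return ((u - origin_px[0]) * mm_per_pixel,
            (v - origin_px[1]) * mm_per_pixel)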

 

Very frustrating.

Message 5 of 5