Hello,
I have LabVIEW 7.1 and IMAQ Vision.
I have built an application that acquires a streamed image from a line-scan camera whose scanning is based on mirror movement along the x and y axes.
Now that I'm able to get the image, I would like a transformation from pixel coordinates on the acquired image to the mirror coordinates of the camera. The idea is that by clicking a location on the image, the application will be able to move the mirrors to the corresponding location.
Suppose I am able to obtain a number of points with their locations in both pixel coordinates and mirror coordinates. I have read that there are ways to use IMAQ's built-in calibration to transform pixels into real-world coordinates.
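For context, the math behind a reference-point calibration can be sketched outside LabVIEW. The snippet below is a minimal illustration in Python, not the IMAQ API: it assumes a purely affine pixel-to-mirror relationship and uses made-up reference points, fitting the six affine parameters by least squares and then mapping a clicked pixel to mirror coordinates.

```python
import numpy as np

# Hypothetical reference points (values are made up for illustration):
# each pixel location and the mirror position that produced it.
pixels  = np.array([[  0.0,   0.0],
                    [400.0,   0.0],
                    [  0.0, 300.0],
                    [400.0, 300.0]])
mirrors = np.array([[-1.0, -1.0],
                    [ 1.0, -1.0],
                    [-1.0,  1.0],
                    [ 1.0,  1.0]])

# Affine model: mirror = [px, py, 1] @ params, where params is 3x2.
ones = np.ones((pixels.shape[0], 1))
design = np.hstack([pixels, ones])

# Least-squares fit of the affine parameters from the point pairs.
params, *_ = np.linalg.lstsq(design, mirrors, rcond=None)

def pixel_to_mirror(px, py):
    """Map a clicked pixel location to mirror coordinates."""
    return np.array([px, py, 1.0]) @ params

print(pixel_to_mirror(0.0, 0.0))  # approximately [-1, -1]
```

With more than three point pairs, the least-squares fit also averages out measurement noise in the clicked positions; modeling perspective or lens distortion would require a more complex (nonlinear) model with additional parameters.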
My questions are these:
A) Is my approach of collecting reference points the best way to use calibration in my case? I've read that there are other methods as well, such as using a grid of circles, or calibration based on axes. Quite frankly, I have not fully understood how this axis calibration works and whether it is necessary.
I would like to emphasize that calibration based on a circle-grid pattern is not practical in my case, since the calibration is intended to be done by the user, who won't be able to remove the object being imaged by the camera in order to place the pattern.
Also, the borders of the image actually seen in the application are not necessarily the borders of the mirrors' range of movement. In other words, there is data loss on the way to the image matrix, so I suspect that axis calibration cannot be used in this case.
B) If reference points are indeed the right way to start, is there an example of how I can implement this in practice?
I'm not familiar with calibration, and the mass of parameters, in addition to the various calibration methods (simple, with distortion, without distortion, linear, nonlinear...), is quite confusing. So, can anyone please advise me on what I should start with, and hopefully link to an example?
Thanks.