12-16-2014 02:43 AM
I am wondering how the extrinsic parameters of a camera can be constant. I know that the rotation matrix aligns the world coordinate system axes with the camera coordinate system, and that the translation vector brings the origins on top of each other.
But how can those parameters be constant? Wouldn't I somehow need to know the orientation of the camera in world space, e.g. from an accelerometer or something? I hope someone can help me wrap my head around this.
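For reference, this is my understanding of what the extrinsics do, as a rough pure-Python sketch (the function names and the 90-degree rotation are made up by me, just for illustration): a world point is mapped into camera coordinates as p_cam = R * p_world + t.

```python
import math

def rotate_z(theta):
    """3x3 rotation matrix about the Z axis (nested lists)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def apply_extrinsics(R, t, p_world):
    """p_cam = R * p_world + t (the extrinsic transform)."""
    return [sum(R[i][j] * p_world[j] for j in range(3)) + t[i]
            for i in range(3)]

# Hypothetical extrinsics: camera rotated 90 degrees about Z,
# with its origin shifted 2 units along the world Z axis.
R = rotate_z(math.pi / 2)
t = [0.0, 0.0, 2.0]

p_world = [1.0, 0.0, 0.0]
p_cam = apply_extrinsics(R, t, p_world)
print(p_cam)  # the world X axis ends up along the camera Y axis, shifted by t
```

So my question is really about where R and t come from if the camera can move.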
12-16-2014 05:30 AM - edited 12-16-2014 05:31 AM
Hello,
I do not understand what you mean by "constant". The extrinsic parameters of a camera are not constant (they are only in the case where the setup remains exactly the same); they depend on the camera position. Extrinsic parameters relate the camera position and rotation to a known reference frame (calibration grid, calibration body, another camera...).
Please explain your problem more clearly.
Best regards,
K
12-16-2014 05:52 AM
I actually think you almost answered my question - that they are not constant, only in a constant setup. In my case I need the rotation matrix of a depth camera, so I can relate it to the color camera.
However, I am not sure I understand how the calibration (obtaining the rotation matrix) from a chessboard pattern shown at different angles will relate to another camera, since the pattern, as you say, is in fact not stationary?
12-16-2014 06:31 AM
Hello,
if you want to relate the depth camera to the rgb camera, you basically need to perform stereo calibration. Then you can map the depth pixels to the rgb pixels.
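The chain is: back-project the depth pixel to 3D, apply the rigid transform (R, t) from the stereo calibration, then project into the color image. A minimal pure-Python sketch of that chain (all intrinsic values and the 25 mm baseline below are invented placeholders - real values come out of your calibration):

```python
# Hypothetical pinhole intrinsics (focal lengths, principal points).
FX_D, FY_D, CX_D, CY_D = 580.0, 580.0, 320.0, 240.0   # depth camera
FX_C, FY_C, CX_C, CY_C = 525.0, 525.0, 320.0, 240.0   # color camera

# Extrinsics relating depth to color, as stereo calibration would give:
# here an identity rotation and a 25 mm baseline, purely illustrative.
R = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.025, 0.0, 0.0]  # metres

def depth_to_color(u, v, depth_m):
    """Map a depth pixel (u, v) with depth in metres to a color pixel."""
    # 1. Back-project the depth pixel to a 3D point in depth-camera coords.
    x = (u - CX_D) * depth_m / FX_D
    y = (v - CY_D) * depth_m / FY_D
    p_d = [x, y, depth_m]
    # 2. Rigid transform into color-camera coordinates: p_c = R*p_d + t.
    p_c = [sum(R[i][j] * p_d[j] for j in range(3)) + t[i] for i in range(3)]
    # 3. Project into the color image with the color intrinsics.
    u_c = FX_C * p_c[0] / p_c[2] + CX_C
    v_c = FY_C * p_c[1] / p_c[2] + CY_C
    return u_c, v_c

print(depth_to_color(320, 240, 1.0))
```

The Burrus page linked below works through exactly this mapping for the Kinect.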
See here for example:
http://nicolas.burrus.name/index.php/Research/KinectCalibration
Regarding the different positions of your calibration pattern - the number of corresponding points is what determines calibration accuracy. More points/features (corners, dots, etc.) generally mean a better calibration. So you either need a lot of points in each image or a large set of calibration images.
I have read somewhere that 25 images is a good minimum, while 100 image pairs yield the best calibration.
You could probably do your calibration fairly quickly in LabVIEW. LabVIEW has a stereo library, so you would only need to extract the corresponding points on the depth image(s) and the color image(s).
Best regards,
K
12-16-2014 06:48 AM
Yes, that's exactly what I am talking about.
However, the way I understand it, the rotation matrix can perform something like this:
[image of a projective transformation, source: http://qvision.sourceforge.net/group__qvprojectivegeometry.html]
However, that requires the setup to be stationary.
Where I am losing it is this: if the setup is not stationary while the calibration plate is being moved, how can the rotation be determined as something constant with respect to the other camera in the stereo setup?
P.S. Thanks for letting me steal your time 🙂
12-16-2014 07:06 AM - edited 12-16-2014 07:08 AM
Hello,
what you have shown in the images is called a plane-to-plane homography (you need at least four points for an exact solution, more for a least-squares minimization; the transformation is a 3x3 matrix with 8 DOF, and it is basically a correction of the perspective).
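For completeness, applying such a 3x3 homography to a pixel looks like this (the matrix entries below are arbitrary illustrative numbers; with the bottom-right entry fixed to 1, the remaining 8 entries are the 8 DOF):

```python
# A hypothetical homography; h22 fixed to 1.0 leaves 8 degrees of freedom.
H = [[1.2,   0.1,  5.0],
     [0.0,   1.1, -3.0],
     [0.001, 0.0,  1.0]]

def apply_homography(H, u, v):
    """Map pixel (u, v) through H using homogeneous coordinates."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # divide out the projective scale

print(apply_homography(H, 100.0, 50.0))
```

Note this is a 2D image-to-image mapping; it is not the same thing as the 3D rotation/translation between your two cameras.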
During the calibration, while the camera setup is constant, you are acquiring the grid from different positions in order to minimize the error of the extrinsic/intrinsic parameter estimation.
Also, you need a translation in addition to the rotation to align the images/coordinate systems.
YES, WHEN YOU CALIBRATE YOUR SYSTEM THE ROTATION MATRIX AND TRANSLATION VECTOR WILL REMAIN CONSTANT AS LONG AS THE CAMERAS ARE IN THE SAME RELATIVE POSITION TO EACH OTHER.
Consider your calibration as a stereo setup.
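To put the "constant" part in concrete terms (a pure-Python sketch with made-up poses, rotations only for brevity): if the whole rig moves rigidly, both cameras' world poses change, but the relative rotation between them does not.

```python
import math

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

# Hypothetical world-to-camera rotations of the two cameras.
R_depth = rot_z(0.30)
R_color = rot_z(0.10)

# Relative rotation color -> depth: R_rel = R_depth * R_color^T.
R_rel_before = mat_mul(R_depth, transpose(R_color))

# Rotate the whole rig by a common motion R_move; in the world-to-camera
# convention each matrix picks up R_move^T on the right.
R_move = rot_z(0.77)
R_depth2 = mat_mul(R_depth, transpose(R_move))
R_color2 = mat_mul(R_color, transpose(R_move))
R_rel_after = mat_mul(R_depth2, transpose(R_color2))

# R_move^T * R_move cancels, so the relative rotation is unchanged -
# which is why the inter-camera extrinsics stay constant for a rigid rig.
same = all(abs(R_rel_before[i][j] - R_rel_after[i][j]) < 1e-12
           for i in range(3) for j in range(3))
print(same)  # True
```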
12-16-2014 07:10 AM
Thank you, that's what a knucklehead like me needed to hear!
12-16-2014 07:10 AM - edited 12-16-2014 07:19 AM
No problem 🙂
BTW, it is good to recalibrate the extrinsic parameters from time to time for accurate measurements, especially if you transport the system a lot.