
Problems with camera calibration

Hi!
I am trying to create a project that implements stereo vision with two cameras using the VIs provided by LabVIEW. During development I ran into problems with the calibration of my camera(s).
 
My questions are:
I would like to calibrate the camera images with the IMAQ Learn Camera Model VI. For this purpose I have to show the camera a calibration image (containing a group of dots) at different angles. However, even after multiple shots the Insufficient Data element of the Internal Parameters output remains TRUE. I experienced the same thing while running the Stereo Vision Example.vi. In both cases I held the calibration image according to the directions. What does this VI need to finish its computation successfully? Is the use of other VIs necessary (apart from the IMAQ Calibration Target To Points - Circular Dots VI)?
Another question: if I pass TRUE to the Add Points And Learn input, will the VI use all of the previously accumulated data for the computation, or only the currently received values until I change the input back to FALSE and start accumulating data again?
I generate the data for the Reference Points input with the IMAQ Calibration Target To Points - Circular Dots VI. Are there other VIs I should use together with the Learn Camera Model VI?
Apart from the points above, a general description of this VI (and the other calibration VIs) would be very useful, and perhaps a simple example program that illustrates how the Learn Camera Model VI works.

 

Message 1 of 10

Hi nagy.peter.2060!

 

Sorry for the late reply. 

 

The Insufficient Data output returns TRUE because one of the following conditions is met: the number of different projection planes is fewer than 5, the angle difference between the projection planes is less than 20 degrees, or the lens in use is a telecentric lens. According to some internal resources I have read, the function needs at least 2 sets of reference points from different images to compute the internal parameters, and the added images must cover a 90-degree angle range.
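For comparison, the same multi-view requirement shows up in other calibration toolchains. Below is a minimal OpenCV/Python sketch, not the LabVIEW IMAQ API; the grid layout, dot spacing, and file names are assumptions. It accumulates reference points from several views of a circular-dot target and only computes the intrinsic parameters once at least five distinct views have been collected:

```python
# Minimal OpenCV/Python analogue of the multi-view requirement described above.
# NOT the LabVIEW IMAQ API -- grid layout, dot spacing, and file names are assumptions.
import glob
import cv2
import numpy as np

ROWS, COLS = 4, 11          # assumed dot-grid layout
SPACING_MM = 10.0           # assumed centre-to-centre dot spacing

# Real-world (reference) coordinates of the dots on the flat target, z = 0.
objp = np.zeros((ROWS * COLS, 3), np.float32)
objp[:, :2] = np.mgrid[0:COLS, 0:ROWS].T.reshape(-1, 2) * SPACING_MM

obj_points, img_points = [], []
image_size = None

for path in glob.glob("calib_view_*.png"):      # one file per target pose
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, centers = cv2.findCirclesGrid(gray, (COLS, ROWS),
                                         flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    if found:
        obj_points.append(objp)
        img_points.append(centers)

# Analogue of "Insufficient Data": too few distinct target poses.
if len(obj_points) < 5:
    print("Insufficient data: need at least 5 views of the target "
          "taken at clearly different angles.")
else:
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    print("RMS reprojection error:", rms)
    print("Intrinsic matrix:\n", K)
```

The point of the sketch is only to illustrate why a single pose of the target is never enough: the intrinsic parameters are solved from the relationship between several differently tilted views of the same planar grid.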

You can find more details about the VI in the help: http://zone.ni.com/reference/en-XX/help/370281P-01/imaqvision/imaq_learn_camera_model/

 

What grid image do you use for the calibration? (It can be any grid: Dot, Chessboard, or Line Grid.)

 

Could you share some details about the application?

 

You can find some additional resources related to the topic:

3D Imaging with NI LabVIEW: http://www.ni.com/white-paper/14103/en

Does NI Vision Support Stereo Vision or Depth Perception: http://digital.ni.com/public.nsf/allkb/86FEFA3CA7EA8B6B86256DB300719CE3?OpenDocument

 

BR,

CLA, CLED
Message 2 of 10

Hi!

 

I use the calibration grid provided in the Vision toolkit. And about my application... there isn't really much to tell. Right now I just want to understand how the different calibration VIs work, so I put them together in very simple VIs. The current application does nothing but take an image every 10 seconds and pass it to the Learn Camera Model VI. I also have an indicator to show whether there is any result (nothing so far).

 

What do you mean by "2 sets of images" being necessary? I have shown the grid in more than five different positions (by five I mean the positions shown in the Stereo Vision Example.vi).

 

Thank you very much for the help!

Message 3 of 10

Just one more thing: I have already read the resources you suggested 😄

Message 4 of 10

Hi nagy.peter.2060!

 

I found just a few hints and examples:

 

Learn Camera Model
  1. This VI has two image inputs: Calibration Template Image and Grid Image.
  2. When the first image arrives as input, wire that image to Calibration Template Image.
  3. The grid of the Calibration Template Image is learned only if no camera model is attached to the image.
  4. If no camera model is attached, learn the grid, calculate the camera model, and attach the calibration information to the Calibration Template Image.
  5. If both the Calibration Template Image and the Grid Image are present, learn the grid of the Grid Image and attach the calibration information to the Calibration Template Image (a rough sketch of this decision flow follows below).
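Putting those hints together, the decision flow reads roughly like the sketch below. The helper names (has_camera_model, learn_grid, compute_camera_model, attach_calibration_info) are hypothetical placeholders for the internal steps, not real IMAQ or LabVIEW calls:

```python
# Hypothetical sketch of the decision flow described in the list above;
# the helper functions are placeholders, not real IMAQ/LabVIEW calls.

def learn_camera_model(template_image, grid_image=None):
    if grid_image is None:
        # Only the Calibration Template Image is wired in.
        if not has_camera_model(template_image):
            # No camera model attached yet: learn the grid from the
            # template image, compute the model, and attach it.
            grid = learn_grid(template_image)
            model = compute_camera_model(grid)
            attach_calibration_info(template_image, model)
    else:
        # Both images are wired in: learn the grid of the Grid Image and
        # attach the resulting calibration info to the template image.
        grid = learn_grid(grid_image)
        model = compute_camera_model(grid)
        attach_calibration_info(template_image, model)
    return template_image
```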
 
I searched for examples related to camera calibration.
 
Example number 1666 illustrates how to use a calibration grid, either with a live acquisition or from a file, to correct for perspective distortion. The example documents the calibration process: you learn a grid, apply the calibration information to your image, and then either correct the entire image or convert individual pixels to real-world distances. Link: http://ftp.ni.com/pub/devzone/epd/1666.zip
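For reference, the two uses that example demonstrates (correcting the whole image versus converting an individual pixel to real-world coordinates) look roughly like the OpenCV/Python sketch below. It assumes the camera matrix K, the distortion coefficients dist, and the grid point correspondences (img_pts, world_pts) come from a calibration like the one sketched earlier; none of this is the LabVIEW API.

```python
# OpenCV/Python sketch of the two correction modes described above.
# K, dist, img_pts and world_pts are assumed outputs of a previous grid calibration.
import cv2
import numpy as np

def correct_and_measure(image, K, dist, img_pts, world_pts, pixel_uv):
    """Correct the whole image, then map one pixel to real-world planar
    coordinates (same units as the grid spacing)."""
    # 1) Correct the entire image for lens distortion.
    corrected = cv2.undistort(image, K, dist)

    # 2) Fit a homography between the detected grid points and their known
    #    real-world positions, then map an individual pixel through it.
    H, _ = cv2.findHomography(img_pts.reshape(-1, 2), world_pts[:, :2])
    src = np.array([[pixel_uv]], dtype=np.float32)
    world_xy = cv2.perspectiveTransform(src, H).ravel()
    return corrected, world_xy
```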
 
 
I found another example without a description, but it uses the Learn Distortion Model VI. Please find the image of the VI attached; I hope it will be useful.
CLA, CLED
Message 5 of 10

Hi BalazsNagy,

Check your signature.

 

 

---
Silver_Shaper | CLD
Message 6 of 10

Thanks 😉

CLA, CLED
Message 7 of 10

This tutorial helped me a lot in my project: https://www.youtube.com/watch?v=WvGcxJ1L3NM

You can refer to this; I hope it will help.

Message 8 of 10

The YouTube link you attached does not work. Please reply to me and send it again.

Message 9 of 10

You’ll have to do your own research. That user signed up that day to post this single message and hasn’t visited the forums since. It may even have been a spam link that was subsequently taken down by YouTube at some point.

Rolf Kalbermatter
My Blog
Message 10 of 10