07-02-2014 01:42 PM - edited 07-02-2014 01:44 PM
Hi,
I am very new to LabVIEW (in fact, to coding in general) and am helping my adviser get the 3D coordinates of a few reflective markers using two cameras. I am able to read the marker coordinates (x, y) from both cameras simultaneously, processing the data in real time with code generated from Vision Assistant. However, we want to get the depth position by triangulating the markers.
I have seen stereo vision do something similar, but I think stereo vision may not work with our calibration frame (markers), and we don't need the whole depth image, only the markers' z coordinates. I also want to use a Region of Interest to mask out other areas that are creating reflections, but I am not sure whether triangulation would still work if we select an ROI, since the origin of the image coordinates changes after the ROI is applied.
I saw this link http://kwon3d.com/theory/dlt/dlt.html#3d where they use the DLT (direct linear transformation) method, but it is too much to code from scratch. Is there a subVI in LabVIEW or some sort of prewritten code that can be customized? Can anyone please give me some advice on how to solve this problem?
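For reference, the reconstruction step of the DLT method on the kwon3d page is only a small linear least-squares solve once each camera's 11 DLT parameters are known (those come from calibrating against a frame of points with known 3D positions). LabVIEW itself is graphical, so the sketch below is in Python/NumPy purely to illustrate the math that would go into a MathScript or formula node; the function name and argument layout are made up for this example:

```python
import numpy as np

def dlt_reconstruct(L_cams, uv_obs):
    """Least-squares 3D reconstruction of one marker from two or more views.

    L_cams : list of length-11 arrays holding each camera's DLT parameters
             L1..L11 (obtained beforehand from a calibration frame).
    uv_obs : list of (u, v) image coordinates of the same marker, one per camera.
    """
    A, b = [], []
    for L, (u, v) in zip(L_cams, uv_obs):
        # Each view contributes two linear equations in the unknown (x, y, z):
        #   (L1 - u*L9)x + (L2 - u*L10)y + (L3 - u*L11)z = u - L4
        #   (L5 - v*L9)x + (L6 - v*L10)y + (L7 - v*L11)z = v - L8
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        b.extend([u - L[3], v - L[7]])
    xyz, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return xyz  # (x, y, z) of the marker
```

Regarding the ROI concern: if the ROI really does shift the reported pixel origin, adding the ROI's left/top offset back onto the measured (u, v) before the reconstruction keeps the coordinates in the full-frame system the calibration was done in.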
07-02-2014 03:49 PM
Well, in theory, if you know exactly where the cameras are pointed, how far apart they are, and how far the reflector images are above or below the horizon and to the right or left of the center line, a little simple math should give you the answer. Concerning the ROI, I would think all you need to know is where the ROI is relative to the horizon and center line. You could then calculate an absolute position from there, which would also give you the angles you need.
Unfortunately, I don't know of any readily available code, but I'm sure there is some! With the emphasis on FIRST robotics, I've got to believe that judging distances in 3D space is something for which there is a lot of code.
Mike...
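To make the "simple math" above concrete: if the two cameras are mounted level with parallel optical axes, the horizontal and vertical pixel offsets from each image center give the angles, and the baseline closes the triangle. A minimal sketch, again in Python for illustration only; the parallel-axis assumption and all names here are mine, and toed-in cameras would need the DLT approach or a proper stereo calibration instead:

```python
def triangulate_marker(uv_left, uv_right, image_center, f_px, baseline):
    """Triangulate one marker seen by two level cameras with parallel axes.

    uv_left, uv_right : (u, v) pixel coordinates of the marker in each image.
    image_center      : (cx, cy), the pixel on the optical axis; this is the
                        "horizon" / "center line" reference.
    f_px              : lens focal length expressed in pixels.
    baseline          : distance between the two lens centers (same unit as
                        the result, e.g. metres).

    Returns (X, Y, Z) in the left camera's frame: X right, Y up, Z forward.
    """
    cx, cy = image_center
    # Tangent of the angle right/left of each camera's center line
    t_left = (uv_left[0] - cx) / f_px
    t_right = (uv_right[0] - cx) / f_px
    # Tangent of the angle above/below the horizon (image y grows downward)
    t_up = (cy - uv_left[1]) / f_px

    Z = baseline / (t_left - t_right)  # depth from intersecting the two rays
    return Z * t_left, Z * t_up, Z
```

Any per-camera ROI offsets would be added back onto the pixel coordinates before this step, as noted earlier in the thread.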
07-02-2014 04:05 PM
From the FIRST Robotics community page:
https://decibel.ni.com/content/docs/DOC-20173
https://decibel.ni.com/content/docs/DOC-26318
These are all for high school students
07-07-2014 10:42 AM
Hi Mike,
Thank you for your reply. Could you please let me know whether I should use the shortest distance between the two cameras? Also, could you please explain how I calculate "how far the reflector images are above or below the horizon and to the right or left of the center line"?
07-07-2014 10:46 AM
Hi Omar,
Thank you for posting the links to the image processing tutorials.
@Omar_II wrote:
From the FIRST Robotics community page:
https://decibel.ni.com/content/docs/DOC-20173
https://decibel.ni.com/content/docs/DOC-26318
These are all for high school students
07-07-2014 11:14 AM
The distance between the centers of the lenses. That's the reference for the images.
Mike...
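For what it's worth, with parallel cameras that lens-center distance B is what sets the depth scale: Z = f·B / (uL − uR), where f is the focal length in pixels and (uL − uR) is the horizontal disparity between the two images. As a made-up example, f = 800 px, B = 0.30 m, and a 40 px disparity give Z = 800 × 0.30 / 40 = 6 m.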
07-09-2014 04:44 PM
Thank you for the reply. Can you please explain what is meant by "above or below the horizon and to the right or left of the center line"?
07-09-2014 09:16 PM