From Saturday, Nov 23rd 7:00 PM CST - Sunday, Nov 24th 7:45 AM CST, ni.com will undergo system upgrades that may result in temporary service interruption.
We appreciate your patience as we improve our online experience.
Use the FRC Vision examples to better understand how to use the camera and the Vision software to find and track objects. This tutorial demonstrates the components of the Vision Example project and shows how to incorporate them into your robot code.
Note: This example project assumes the following:
For the retroreflective portion, you have a ring light mounted to the camera.
For Axis cameras, you have used the camera setup utility to configure the camera.
For the RT portion, you have configured the laptop networking to a compatible IP address.
Part 1—Open the Example
First you must open the example you want to use. Complete the following steps to open the Vision Example.
To achieve a better distance estimate, you should correct for the distortion caused by the lens and the camera mounting angle. Complete the following steps to create a calibration file:
1. Run the Vision Calibration Training application located at Start>>National Instruments>>Vision>>Utilities. The Distortion Model (Grid) is an appropriate method.
2. Print the grid of dots located at C:\Users\Public\Documents\National Instruments\Vision\Documentation and attach it to your target. Mount the printed PDF so that it is flat (not warped) and held vertical, parallel to the object you are measuring to.
3. Acquire an image from your camera to begin calibration. You can acquire an image through Measurement and Automation Explorer. The calibration grid must be clearly visible, and the image must match the size of the images the camera returns.
4. Follow the remainder of the wizard directions and store the calibration file near the example code.
5. Update the code constants in the top left of the diagram to point to the calibration image.
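To see why the calibration step matters, consider how much an uncorrected lens shifts a target in the image. The following is a minimal Python sketch, not the NI Vision calibration itself: it models only a simple two-term radial ("barrel") distortion, and the coefficients and focal length are hypothetical values chosen for illustration.

```python
import math

def apply_radial_distortion(x, y, k1, k2):
    """Map ideal normalized image coordinates to distorted ones using a
    two-term radial model: x_d = x * (1 + k1*r^2 + k2*r^4), likewise y."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

def pixel_error(x, y, k1, k2, focal_px):
    """Pixel offset an uncorrected camera introduces at point (x, y)."""
    xd, yd = apply_radial_distortion(x, y, k1, k2)
    return focal_px * math.hypot(xd - x, yd - y)

# At the image center there is no radial error; toward the edge the target
# appears several pixels away from where it should be, which skews any
# distance estimate derived from its apparent size or position.
center_err = pixel_error(0.0, 0.0, -0.12, 0.02, 600.0)  # 0.0
edge_err = pixel_error(0.4, 0.3, -0.12, 0.02, 600.0)    # several pixels
```

An error of a few pixels near the edge of the frame is enough to noticeably bias distance estimates, which is why the calibration file should be regenerated whenever the camera or lens changes.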
There is an additional example in the My Computer section of the project that identifies the colored pattern on the lower area of the scoring tower. Note that these colors are not unique on the field, so it is important to use geometric information to ignore taped lines, bumpers, and other objects of the same color. Also remember that the color changes depending on which alliance your robot is scoring against.
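The idea of using geometry to reject same-colored objects can be sketched in plain Python (the actual example is a LabVIEW VI; the blob records, function name, and thresholds below are all hypothetical):

```python
def matches_tower_pattern(w, h, area,
                          min_area=200, ar_lo=0.8, ar_hi=1.25, fill_min=0.6):
    """Keep only color blobs whose geometry plausibly matches the roughly
    square tower pattern; reject long thin blobs (taped lines) and sparse
    regions (scattered same-colored pixels)."""
    if area < min_area:            # too small to be the pattern
        return False
    aspect = w / h
    if not (ar_lo <= aspect <= ar_hi):   # tape lines are long and thin
        return False
    fill = area / (w * h)          # fraction of the bounding box that is colored
    return fill >= fill_min

# Hypothetical candidate blobs: (width_px, height_px, colored_area_px)
candidates = [
    (30, 28, 700),    # compact, square-ish -> plausibly the pattern
    (200, 8, 1200),   # long and thin -> taped line, rejected
    (30, 30, 300),    # sparse fill -> rejected
]
hits = [c for c in candidates if matches_tower_pattern(*c)]
```

The same filtering idea applies whatever blob-analysis tool produces the candidates: color alone selects too much, and simple shape constraints prune the false positives.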
There is also an example that may assist with setting up the camera for use with a ring light. It reads from the camera and graphs statistical information about the pixels in the image. If calibrated with the LED color, it also displays the mask and indicates the saturation level of the masked color area.
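The masking-and-statistics step that the VI performs can be illustrated with a small Python sketch. This is not the example's code: the pixel values, hue band, and variable names below are hypothetical, and only the saturation statistic is shown.

```python
# Hypothetical HSV pixels, each (hue, saturation, value) on a 0-255 scale.
pixels = [
    (85, 200, 180), (88, 210, 190), (90, 190, 170),   # ring-light-lit target
    (10, 40, 60), (200, 30, 90),                      # background
]

# Assumed hue band found during calibration for a green ring light.
HUE_LO, HUE_HI = 80, 100

# Mask: keep only pixels whose hue falls in the calibrated band, then
# report the mean saturation of the masked region. A low mean suggests
# the exposure or ring light needs adjusting.
masked = [p for p in pixels if HUE_LO <= p[0] <= HUE_HI]
sats = [p[1] for p in masked]
mean_sat = sum(sats) / len(sats)
```

A strongly saturated, tightly clustered masked region indicates the ring light and camera exposure are set up well; a washed-out (low-saturation) result usually means the exposure is too high.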
Part 2—Run the VI on the RT roboRIO Target
Complete the following steps to run the vision example on the roboRIO.
To avoid window clutter, you may want to close the My Computer VIs that were opened. Do not save any changes that you made to the VI.
Part 3—Integrate the Example VI into a Robot Project
After you experiment with the example and see how it works, you can save a copy of the VI and all the files it depends on to a new location to use it in your robot project.
If you are using calibration, you will also want to ensure that your calibration file is deployed to your controller and loaded from the correct location. The code in the upper left of Vision.vi expects the calibration file(s) to be located in the data directory of your built robot program. First, locate the calibration files and drag and drop them beneath the roboRIO target of your project. Next, edit the project's Build Specifications: in the Source Files section, select the calibration files and click the arrows in the Always Include section to include them in the deployment. Also verify that in the Destination section, your Support Directory points to the natinst/bin/data directory.
If you would like to incorporate any of the vision processing into the dashboard, it is best to base it on the My Computer versions and merge the processing into Loop 2, the loop that retrieves images from the robot camera. It is also possible to perform calibration in the Dashboard while processing on the roboRIO. Details calculated on the dashboard can be communicated to the robot using the Network Table VIs, as described in the Dashboard customization tutorial.