FIRST Robotics Competition Documents


Tutorial 8—Integrating Vision into Robot Code

 

Use the FRC Vision examples to better understand how to use the camera and the Vision software to find and track objects. This tutorial demonstrates components of the Vision Example project and how to incorporate it into your robot code.

 

Note:  This example project assumes the following:

For the retroreflective portion, you have a ring light mounted to the camera.

For Axis cameras, you have used the camera setup utility to configure the camera.

For the RT portion, you have configured the laptop networking to a compatible IP address.

 

Part 1—Open the Example

First you must open the example you want to use. Complete the following steps to open the Vision Example.

 

  1. On the Getting Started window, go to the Support tab and click Find FRC Examples to display the NI Example Finder.
  2. Under the FRC Robotics folder, double-click the Vision folder to view Vision examples.
  3. Double-click this year's Vision Example.lvproj to open the project.
  4. In the My Computer section of the Project Explorer, double-click FRC Color Processing Example.vi to open the VI.

    Note: Notice that there are additional example VIs in the project, including Vision Example Robot Main VI in the RT roboRIO Target section. You will use that VI to deploy the code to your roboRIO in part 2, but right now you just want to run and experiment with the example on the host computer.

  5. Run the VI by clicking the Run button and use the provided images to explore manual and automatic color calibration, the range of scores, etc.
  6. Connect the USB or IP camera to your computer and switch the Source tab to Camera. Check the address used to communicate with your camera; common addresses include USB 0, USB 1, and axis-camera.local. You can also find your camera's name and address in Measurement and Automation Explorer (NI MAX), under Devices and Interfaces for USB cameras or Devices and Interfaces >> Network Devices for IP cameras.
  7. You should see the Original Image and possibly other fields update when your camera is working properly.
  8. Aim the camera at a target to verify that the Masked image identifies it and displays measurements of it. If necessary, you can adjust the LED Color fields to improve the mask. You can also auto-calibrate LED Color fields by clicking and drawing a line on a colored section of the image.
  9. Measure the distance to the target and compare it to the value reported in the Detected Targets section of the VI. The distance is computed using camera lens information. You may need to select a different camera, or measure the lens field of view and modify the distance calculation to use your specific value.
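The exact distance calculation lives on the example's block diagram and depends on its lens constants, but the relationship it relies on is the standard pinhole-camera model: the known physical width of the target is compared with the width it occupies in the image. As a rough illustrative sketch in Python (the example itself is LabVIEW, and the dimensions below are placeholders, not the field's actual target sizes):

```python
import math

def distance_to_target(target_width_ft, target_width_px,
                       image_width_px, horizontal_fov_deg):
    """Estimate distance to a target of known width using a
    pinhole-camera model."""
    fov_rad = math.radians(horizontal_fov_deg)
    # By similar triangles, the full camera view at the target's
    # distance is this many feet wide:
    view_width_ft = target_width_ft * image_width_px / target_width_px
    # view_width = 2 * distance * tan(fov / 2)  =>  solve for distance.
    return view_width_ft / (2.0 * math.tan(fov_rad / 2.0))

# Example: a 2 ft wide target spanning 160 of 640 pixels, seen
# through a lens with a 47 degree horizontal field of view.
d = distance_to_target(2.0, 160, 640, 47.0)  # roughly 9.2 ft
```

This is also why measuring your specific lens's field of view matters: an error of a few degrees in the FOV shifts every distance estimate proportionally.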

To achieve a better distance estimate, you should correct for the distortion caused by the lens and the camera mount angle. You can do this by running the Vision Calibration Training application, located at Start>>National Instruments>>Vision>>Utilities; the Distortion Model (Grid) is an appropriate method. Print the grid of dots located at C:\Users\Public\Documents\National Instruments\Vision\Documentation and attach it to your target, mounting the printed PDF so that it is flat (not warped) and held vertical, parallel to the object you are measuring to. To begin calibration, you must acquire an image from your camera, for example through Measurement and Automation Explorer, with the calibration grid clearly visible; the calibration image must match the size of the images your camera returns. Follow the remainder of the wizard directions and store the calibration file near the example code. You may also want to update the code constants in the top left of the diagram to point to the calibration image.

 

There is an additional example in the My Computer section of the project that identifies the colored pattern on the lower area of the scoring tower. Note that these colors are not unique on the field, so geometric information must be used to ignore taped lines, bumpers, and other objects of the same color. Also, don't forget that the color changes depending on which alliance your robot is scoring against.
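The examples rank color particles with geometric scores so that blobs of the right color but the wrong shape (tape lines, bumpers) can be rejected. The actual scoring runs in the LabVIEW VIs; a minimal Python sketch of one such test, using a hypothetical expected width-to-height ratio, looks like:

```python
def aspect_ratio_score(width_px, height_px, expected_ratio):
    """Score 0-100 for how closely a particle's bounding box matches
    the expected width/height ratio of the target. 100 is a perfect
    match; long thin tape lines or square bumper patches score low
    and can be filtered out with a threshold."""
    if height_px == 0 or expected_ratio == 0:
        return 0.0
    ratio = (width_px / height_px) / expected_ratio
    # Treat ratio and its reciprocal symmetrically, mapped onto 0-100.
    return max(0.0, 100.0 * min(ratio, 1.0 / ratio))

# A 40x10 px particle matches an expected 4:1 target perfectly;
# a 40x40 px square scores much lower.
good = aspect_ratio_score(40, 10, 4.0)   # 100.0
bad = aspect_ratio_score(40, 40, 4.0)    # 25.0
```

In practice several such scores (aspect ratio, area fraction, moments) are combined, and a particle is accepted only when all of them exceed a threshold.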

 

There is also an example that may assist with setting up the camera to use with a ring light. It reads from the camera and graphs statistical info about the pixels in the image. If calibrated with the LED color, it will also display the mask and indicate the saturation level of the masked color area.
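The LED color mask that these examples compute is, at its core, a per-pixel threshold on hue, saturation, and value. The real thresholds come from the LED Color controls you calibrated in Part 1; the Python sketch below uses placeholder values on a 0-255 hue scale and only illustrates the idea, including the hue wrap-around needed for colors like red:

```python
def in_led_mask(h, s, v, h_range=(100, 140), s_min=80, v_min=60):
    """Return True if an HSV pixel falls inside the calibrated LED
    color window. Thresholds here are illustrative placeholders.

    If the hue window wraps past the top of the 0-255 scale
    (e.g. (245, 10) for red), pixels on either side of the wrap
    are accepted."""
    h_lo, h_hi = h_range
    if h_lo <= h_hi:
        hue_ok = h_lo <= h <= h_hi
    else:  # window wraps around the hue scale
        hue_ok = h >= h_lo or h <= h_hi
    return hue_ok and s >= s_min and v >= v_min
```

Graphing the saturation of pixels that pass the hue test, as the setup example does, quickly shows whether the ring light is bright enough: washed-out reflections drop in saturation and fall out of the mask.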

 

Part 2—Run the VI on the RT roboRIO Target

Complete the following steps to run the vision example on the roboRIO.

 

To avoid window clutter, you may want to close the My Computer VIs that were opened. Do not save any changes that you made to the VI.

  1. In the Project Explorer, right-click RT roboRIO Target and select Properties from the shortcut menu. Change the IP Address/DNS Name to match your roboRIO and click OK.
  2. In the RT roboRIO Target section, double-click Vision Example Robot Main.vi to open the VI.
  3. Connect the roboRIO and computer using USB, Wi-Fi, or an Ethernet cable so they can communicate. Click the Run button to deploy and debug the VI on the RT target.
  4. From the block diagram of Vision Example Robot Main, double-click the Vision Processing VI, and then repeat the steps in part 1 to experiment with the VI when it runs on the RT target. Camera configuration takes place on the block diagram.
  5. Close all the VIs from the example Vision project. Do not save any of the changes you made.

Part 3—Integrate the Example VI into a Robot Project

After you experiment with the example and see how it works, you can save a copy of the VI and all the files it depends on to a new location to use it in your robot project.

 

  1. Open the Vision Example Robot Main VI from the RT roboRIO Target section in the Project Explorer, and open the Vision Processing VI.
  2. From the Vision Processing VI, select File»Save As and select the Duplicate hierarchy to new location option. Then click Continue.
  3. Navigate to your robot project directory and click the Current Folder button to save a copy of the example Vision VI and all of its dependency files to your robot project. 

    Note: After you click Current Folder, a dialog box appears warning you that you are overwriting two VIs in your robot project, the existing Vision Processing VI and the Robot Globals. This is correct. You do want to overwrite these two existing VIs with the example VIs you are saving.
  4. Run your robot project code.
  5. From the block diagram, open the Vision Processing VI and verify that it works as expected.

If you are using calibration, you will also want to ensure that your calibration file is deployed to your controller and loaded from the correct location. The code in the upper left of Vision.vi expects the calibration file(s) to be located in the data directory of your built robot program. First, locate the calibration files and drag and drop them under the roboRIO target in your project. Next, edit the project's Build Specifications. In the Source Files section, select the calibration files and click the arrows in the Always Include section to include them with a deployment. You may also want to ensure that, in the Destination section, your Support Directory points to the natinst/bin/data directory.

 

If you would like to incorporate any of the vision processing into the dashboard, it is best to base it on the My Computer versions and merge the processing into Loop 2, the loop that retrieves images from the robot camera. It is also possible to perform calibration on the Dashboard while processing on the roboRIO. Details calculated on the dashboard can be communicated to the robot using the Network Table VIs, as described in the Dashboard customization tutorial.

Austin
Staff Software Engineer
NI