Hi, there are 3 days left until the official tournament of the FRC robotics competition, and our new rookie team has been getting desperate because we can't figure out how to track a rectangle with the camera. There is a rectangle-processing project in the LabVIEW examples, but I don't know how to use it or even how to implement it. Please, we desperately need help.
Thank you very much!
The right example is the one in the FRC section of the examples. Have you tried running it? Note that you can run it either on the cRIO, or directly on your computer, depending on whether you run it from the cRIO target or the "My Computer" target in the project explorer. It's probably easiest to start by running it on your computer. If the camera is configured properly and you run the example you should see the camera image (you might need to enable the camera image). You'll need something that looks like the actual vision target in order to do something useful with the example. Try pointing the camera at the vision target and see if the code finds it properly.
There are a number of settings you can adjust. One is the color that the code uses to identify the border of the target; you can change this by clicking on the target. You can also change the mode: it can look either for a specific color, or it can run the camera in grayscale and look for particular shades of gray. I don't have the code in front of me, but if I remember correctly, you can adjust how close the color needs to be to be considered a match. There are other properties you can adjust as well, such as the aspect ratio (ratio of width to height) of the rectangle. I assume these are preset to match the real target, but you can play with them.
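Since a LabVIEW block diagram can't be pasted into a post, here is a rough Python sketch of what an aspect-ratio check like the one described above might look like. The function name and the falloff formula are my own illustration, not the example's actual code; the 24 in x 18 in nominal target size is an assumption about the real vision target.

```python
# Hypothetical aspect-ratio score: 100 means the particle's width/height
# ratio exactly matches the expected target ratio, lower means worse.
TARGET_ASPECT = 24.0 / 18.0  # assumed nominal target: 24 in wide x 18 in tall

def aspect_score(width_px, height_px, target=TARGET_ASPECT):
    """Score 0-100 for how closely a particle's bounding box matches
    the expected target aspect ratio."""
    if height_px == 0:
        return 0.0
    ratio = width_px / height_px
    # Symmetric falloff: a ratio 2x too wide scores the same as 2x too tall.
    return max(0.0, 100.0 * min(ratio / target, target / ratio))

print(aspect_score(240, 180))  # exact 4:3 match -> 100.0
print(aspect_score(240, 240))  # a square particle scores lower
```

A filter like this lets the code throw out particles (reflections, lights) whose shape is nowhere near the rectangle it is looking for.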
The display on the left shows you the raw camera image. The display on the right shows you the results of the processing (if you enable it). You'll get, I believe, a black and red image: the red is what is identified as the target and the black is everything else. The code identifies "particles" - regions of the image that match the vision pattern. Each one is assigned a score. The code displays information about these particles, as well as the calculated distance to them.
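The distance calculation is the part that tends to puzzle people, so here is a hedged sketch of the usual pinhole-camera approach: from the particle's apparent width in pixels and the known real width of the target, you can back out the range. All the constants below (resolution, field of view, target size) are assumptions for illustration, not values taken from the LabVIEW example.

```python
import math

# Assumed constants -- adjust to your actual camera and target.
IMAGE_WIDTH_PX = 320    # Axis camera at 320x240 resolution
HORIZ_FOV_DEG = 47.0    # approximate horizontal field of view
TARGET_WIDTH_FT = 2.0   # assumed 24 in outer width of the vision target

def distance_ft(particle_width_px):
    """Estimate distance to the target from its apparent width in pixels.

    At distance d the image plane spans 2*d*tan(fov/2) feet, and the
    target occupies particle_width_px / IMAGE_WIDTH_PX of that span.
    Solving for d gives the formula below.
    """
    fov = math.radians(HORIZ_FOV_DEG)
    return (TARGET_WIDTH_FT * IMAGE_WIDTH_PX) / (
        2.0 * particle_width_px * math.tan(fov / 2.0))
```

The key sanity check is that a smaller particle means a farther target: `distance_ft(80)` should come out larger than `distance_ft(160)`.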
I hope that's enough to get you started.
I would just like to add that we could not track the rectangle until we put some LED lights on the camera. The reflective tape will reflect light back in the direction it came from. When we tried ambient light (i.e. just the standard room lights) we got nothing; it wasn't until we added lights that it worked. We also had the best results with the "Intensity" tab in the example.
LockheedJoe, would you care to pass me your project? I really have no idea where to start or what to do. We bought the light and we just need the files. It would be forever remembered if you could pass me the project or tell me the steps you took to obtain a working one!
I tried merging the rectangle-processing VI with my main project and it failed...
Thank you very much!
I won't be near the code until later today. However, all we did was use the locate rectangle example (under Find Examples and Vision) and got that working. If I remember right, we then moved the Vision.vi (I think that was the name) into the robot project we were using, just using Windows Explorer. The next time we opened the robot project, the vision code was there.
So we really didn't modify anything, we just copied a vi.
Have you taken a look at the example for this? Are you having specific issues with the example? You should be able to just run the code out of the box to track the rectangle. The code will show you what it's tracking right on the front panel. You can also change the tuning parameters on the fly, which helps you adapt the code to your specific lighting conditions.
I managed to merge it with my main project, but now how do I make it detect the rectangle? On the processed image frame there's too much red, meaning it's detecting the wrong stuff!
How can I fix it?
The easiest way to "tune" it is to run it on your computer (not on the cRIO) as I suggested in my earlier reply. This makes it easy to experiment with the settings. Once you have settings that work well, transfer them to the robot code.
By the way, what is the difference between the Intensity and Intensity Open tabs? Which one should I use, given that my ring of LED lights is white?
Just use the Intensity tab. The example code has two windows, the live video and the processed video. The rectangles should be clearly visible in the live video, or you are not using your lights correctly - they need to be mounted near the camera lens. Read up on the section of the vision processing documentation that describes retro-reflection: when you illuminate the scene with your white LEDs, the white retro-reflective tape will send almost all of the light directly back to the source - in this case, your camera. When that happens, the rectangles will be many times brighter than the rest of the room.

You can help this process if you turn down the gain on your camera and set the limits on the intensity to be near the upper third of the scale, say 180 to 220, or maybe 150 to 200. Then the image processing will ignore the 20% gray world in the background and also any bright light fixtures in the ceiling or back of the room. Using this approach, I found that a small pocket-size LED flash lamp would give near-100% reliable results at distances up to 40 feet from the target.
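To make the intensity-band idea concrete, here is a minimal Python sketch of the thresholding step described above: keep only pixels whose brightness falls in an upper band (180 to 220, the values suggested in the post) so the retro-reflected tape survives while both the dim background and over-bright light fixtures are rejected. The function name and the toy image are my own illustration, not the LabVIEW example's code.

```python
def threshold_band(image, lo=180, hi=220):
    """Return a binary mask: 1 where lo <= pixel <= hi, else 0.

    Pixels below lo (dim background) and above hi (ceiling lights,
    windows) are both rejected; only the retro-reflective band survives.
    """
    return [[1 if lo <= px <= hi else 0 for px in row] for row in image]

# A toy 3x4 grayscale "image": dim background, one bright tape region,
# and a saturated ceiling light (255) that the upper limit rejects.
frame = [
    [ 40,  45, 200, 205],
    [ 42, 198, 210,  50],
    [255,  44,  41,  43],
]
mask = threshold_band(frame)
```

The two-sided band is the important part: a simple "brighter than X" test would also pick up light fixtures, which is exactly what the upper limit is there to avoid.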
Also, the field of view of the Axis camera is large so you will capture all 4 baskets in any image captured from at least 10 feet away from the goals. With the example code and the relatively simple structure, I found this to be the easiest application of video image processing ever in an FRC event.