I am in the project conceptualization stage and no hardware has been finalized yet. It's a vision control system. I have the following queries; it would be great if someone could help answer them.
Is it possible to interface a camera to a cRIO on the USB port and acquire images on the FPGA?
In the first stage, I would like to see these images on a Windows desktop computer, but later I would like to do some processing in RT. I came across references saying that images can be transferred from the FPGA to RT, but I wasn't sure if we can do that between the FPGA and a Windows PC. Can we directly transfer images from the FPGA to a Windows PC? Basically, in my FPGA palette I can only see the VIs/options for image transfer and Vision Assistant, but I couldn't locate VIs to grab images, initialize the camera, etc.
With cRIO, you can acquire images using the IMAQdx driver, but one thing to note is that the image acquisition is done on the processor of the cRIO. The images can be then transferred to the FPGA for processing and transferred back to the cRIO memory for display (or further processing on the processor). In other words, the FPGA is not on the data path.
You're right that the system can be broken down into three components: the Windows PC, the cRIO processor, and the cRIO FPGA.
Let me try to answer your second question about visualizing the images on the PC:
If you're familiar with the LabVIEW project, you know that you can run a VI on any of these three targets.
Here is what a typical architecture would look like:
The USB camera is connected to the cRIO.
Your host PC has LabVIEW, LabVIEW RT, and LabVIEW FPGA installed.
Your cRIO has LabVIEW RT installed (and the appropriate drivers to acquire the images and communicate with the FPGA).
1) You'll have one VI that runs on the cRIO chassis (processor). It acquires images, uses the transfer VIs available in the Vision palette to send each image to the FPGA, retrieves it from the FPGA, and displays it. Images are transferred back and forth with our transfer API using DMA FIFOs.
Because of this, you cannot transfer an image acquired in a VI running on the host PC directly to the cRIO FPGA, or vice versa. Images can only move between a VI running on the cRIO processor and a VI running on the FPGA (and back).
2) A second VI runs on the FPGA for the image processing. It retrieves the image sent by the first VI and sends the processed image back.
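In text form, the producer/consumer relationship between these two VIs can be sketched roughly as below. This is a conceptual illustration only: on the real target the dataflow is wired graphically in LabVIEW, and the two Python queues merely stand in for the host-to-target and target-to-host DMA FIFOs used by the transfer API; the "processing" step is a toy placeholder.

```python
import queue
import threading

# Stand-ins for the two DMA FIFOs used by the Vision transfer API
# (hypothetical names; in LabVIEW these are configured on the FPGA target).
to_fpga = queue.Queue()    # processor -> FPGA
from_fpga = queue.Queue()  # FPGA -> processor

def fpga_vi(n_frames):
    """Plays the role of the FPGA VI: read a frame, process it, send it back."""
    for _ in range(n_frames):
        frame = to_fpga.get()
        processed = [p ^ 0xFF for p in frame]  # toy 'processing': invert pixels
        from_fpga.put(processed)

def processor_vi(frames):
    """Plays the role of the cRIO processor VI: acquire, send, retrieve."""
    results = []
    worker = threading.Thread(target=fpga_vi, args=(len(frames),))
    worker.start()
    for frame in frames:
        to_fpga.put(frame)               # transfer to FPGA via DMA FIFO
        results.append(from_fpga.get())  # retrieve the processed frame
    worker.join()
    return results

frames = [[0, 128, 255], [10, 20, 30]]   # two tiny fake "images"
processed = processor_vi(frames)
```

The point of the sketch is the data path: frames never leave the processor/FPGA pair; the host PC is not on either queue.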
Now, you say that you want to visualize the image on the host PC. LabVIEW does that for you when you run the cRIO VI from the LabVIEW project on your host PC: you'll see the front panel of the VI running on the cRIO, which will show the image.
There are also other ways to visualize the image retrieved from the VI running on the cRIO:
a) Linux RT-based cRIOs have a feature called the embedded UI, which lets you visualize the UI of a VI running on the cRIO directly on the cRIO's video output. Furthermore, you can connect a touchscreen using both the display port and USB connections and interact with the front panel of the VI running on the cRIO.
b) LabVIEW RT features a web server, and you can publish the front panel of a VI running on the cRIO.
You'll be able to access the UI of that VI using a web browser on any PC connected to the same network.
c) You can write a monitoring application as a LabVIEW VI running on the PC. For a monitoring application, the easiest way to transfer images from the cRIO VI to the VI running on the PC is shared variables. You would create an image shared variable deployed on the cRIO, and you can access that variable from both the VI running on the cRIO and the VI running on the PC. The VI running on the cRIO would write it, and the monitoring VI running on the PC would read it and display the image.
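The key behavior of a shared variable, as opposed to a FIFO, is that it holds only the latest value: the writer overwrites it and the reader polls it. A minimal conceptual sketch of that semantics (the class name and fields are invented for illustration; a real network-published shared variable is configured in the LabVIEW project, not coded like this):

```python
import threading

class SharedImageVariable:
    """Toy stand-in for a network-published shared variable hosted on the
    cRIO. Unlike a FIFO, only the most recent value is kept: a slow reader
    drops frames rather than falling behind."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def write(self, image):   # called by the VI running on the cRIO
        with self._lock:
            self._value = image

    def read(self):           # called by the monitoring VI on the PC
        with self._lock:
            return self._value

var = SharedImageVariable()
var.write([1, 2, 3])   # cRIO VI publishes a frame
var.write([4, 5, 6])   # a newer frame overwrites the old one
latest = var.read()    # the monitoring VI only ever sees the latest frame
```

This "latest value wins" behavior is usually what you want for display, since the monitor doesn't need every frame, just a current one.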
I hope that clarifies things and gives you an idea of how things would work.
One last thing to consider: we have different models of cRIO featuring different processors, different FPGAs, and different slot availability.
Image processing is processor intensive. Make sure you pick the right one, based on your application.
For image processing on cRIO, if your budget permits, I would go for an Atom processor as opposed to ARM.
Similarly, depending on the amount of image processing that you want to do on the FPGA, consider an FPGA that is big enough.
I would choose Kintex-7 over Zynq, as our image processing API supports processing eight pixels per clock tick on Kintex (as opposed to one pixel per tick on Zynq). You'll be able to process the images at the USB bandwidth and will only be limited by the transfer time.
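As a rough back-of-the-envelope on why the eight-pixel-per-tick path keeps up with USB: assuming a 100 MHz FPGA clock and 8-bit monochrome pixels (both figures are assumptions for illustration, not from this thread), the sustained pixel throughput works out as follows. For reference, USB 2.0 tops out at 480 Mbit/s, i.e. roughly 60 MB/s of raw bus bandwidth.

```python
clock_hz = 100e6             # assumed FPGA clock rate (illustrative only)
bytes_per_pixel = 1          # assumed 8-bit mono pixels
pixels_per_tick_kintex = 8   # per the Kintex image processing path
pixels_per_tick_zynq = 1     # per the Zynq image processing path

kintex_mb_s = clock_hz * pixels_per_tick_kintex * bytes_per_pixel / 1e6
zynq_mb_s = clock_hz * pixels_per_tick_zynq * bytes_per_pixel / 1e6
# kintex_mb_s -> 800.0 MB/s, zynq_mb_s -> 100.0 MB/s; both exceed USB 2.0's
# ~60 MB/s, but the Kintex path leaves far more headroom for deeper pipelines.
```

Under these assumptions, either FPGA out-runs the camera link, which is why the limiting factor ends up being the transfer time rather than the processing itself.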
Hope this helps and I didn't confuse you too much!
Thank you for such an elaborate reply. Indeed, it helped me understand the problem in much more depth.
I tend to agree that using a shared variable between RT and the Windows PC is the easier option for image transfer. But will that be efficient? Will it be slow? How do we handle such a scenario?