
Image processing and triggering

Hi,

 

I have a high-speed camera which captures video and photos for my experiment. I am wondering whether I can use the images captured from the camera online, process them with LabVIEW, and output a signal based on the image processing result?

 

I have a PCIe-7842R FPGA card which I have used to process signals, but not images.

 

Is it possible to realize my idea with my FPGA card alone? Are there any LabVIEW examples similar to what I would like to achieve?

 

Thank you all!

 

Best,

Matthew

Message 1 of 8

Do you want to use your PCIe-7842R as a co-processor, i.e. stream (DMA) your image data to the board, process the images, and stream them back to your host for display?

 

What are the specifications of your video data:

- Frame rate?

- Frame size?

- Pixel format and depth (monochrome, color, number of bytes)?

- Altogether, what is your pixel rate?

 

PCIe should be fast enough to allow bi-directional streaming (unless your video is really big and/or really fast), but your board doesn't have DRAM and has only a limited amount of block RAM (BRAM). You probably won't be able to do inter-frame processing, but you should be able to process a limited number of consecutive lines (e.g. 2D filtering).

 

The LabVIEW FPGA loop rate may be your limiting factor: it can be difficult to compile processing code much faster than 100-125 MHz, so if your pixel rate is higher you may have to stream and process multiple pixels simultaneously, and that complicates coding significantly.
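For a rough feel of what that means in numbers, here is a back-of-the-envelope calculation (a minimal sketch; the 250 Mpixel/s rate is a made-up example, the 100-125 MHz range is from the paragraph above):

```python
import math

def pixels_per_clock(pixel_rate_hz: float, fpga_clock_hz: float) -> int:
    """Minimum number of pixels to process in parallel each FPGA clock cycle."""
    return math.ceil(pixel_rate_hz / fpga_clock_hz)

# Hypothetical 250 Mpixel/s stream against a 125 MHz single-cycle loop
print(pixels_per_clock(250e6, 125e6))  # -> 2 pixels per cycle
```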

 

Memory-wise, here is a numerical example. With 1.7 Mbit (about 216 kB) of BRAM, and assuming you need 64 kB for your two DMA FIFOs, you are left with 152 kB for line storage and other uses such as look-up tables. For a line length of 2048 pixels at 3 bytes per pixel, that leaves room to store 20 to 25 lines.
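A quick sanity check of that budget (all numbers taken from the paragraph above):

```python
bram_total_kb = 216          # ~1.7 Mbit of block RAM
dma_fifos_kb = 64            # assumed allocation for the two DMA FIFOs
line_bytes = 2048 * 3        # one line: 2048 pixels at 3 bytes each

remaining_kb = bram_total_kb - dma_fifos_kb        # 152 kB left over
lines = remaining_kb * 1024 // line_bytes          # whole lines that fit
print(remaining_kb, "kB ->", lines, "lines")       # 152 kB -> 25 lines
```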

 

Finally, you have 48 DSP48 processing blocks available, and that should allow a minimum of processing, such as one 'color' filtering operation with a 3x3 kernel (3 channels x 9 taps = 27 DSP48 blocks).
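To make that concrete, here is a minimal software model of the operation those 27 multipliers would implement (the kernel and test image are placeholders; on the FPGA the three most recent lines would sit in BRAM line buffers rather than a whole frame in memory):

```python
import numpy as np

def conv3x3_rgb(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Apply one 3x3 kernel to each color channel of an (H, W, 3) image.

    Each output pixel needs 9 multiplies per channel, 27 in total,
    which is what maps onto 27 DSP48 blocks on the FPGA.
    """
    h, w, _ = img.shape
    out = np.zeros((h - 2, w - 2, 3), dtype=np.float32)
    for c in range(3):                    # 3 color channels
        for dy in range(3):               # 3x3 neighborhood
            for dx in range(3):
                out[:, :, c] += kernel[dy, dx] * img[dy:dy + h - 2,
                                                     dx:dx + w - 2, c]
    return out

blur = np.full((3, 3), 1 / 9)             # placeholder averaging kernel
frame = np.random.randint(0, 256, (16, 16, 3), dtype=np.uint8)
print(conv3x3_rgb(frame, blur).shape)     # (14, 14, 3)
```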

Message 2 of 8

Do you want to use your PCIe-7842R as a co-processor, i.e. stream (DMA) your image data to the board, process the images, and stream them back to your host for display?

--- Yes, something similar. I want to stream and process the video to generate a trigger signal that controls another device.

 

What are the specifications of your video data:

- Frame rate?

--- The frame rate is around 1000 fps

- Frame size?

--- 1 frame

- Pixel format and depth (monochrome, color, number of bytes)?

--- Color, 8 bits

- Altogether, what is your pixel rate?

--- 1 million per second

 

The camera I am using is this one:

http://itg.beckman.illinois.edu/visualization_laboratory/equipment/image_capture/manuals/phantom_v91...

 

After rethinking my problem, I would like to process a single photo instead of a whole video. Would it be better, to save resources, to process only a defined pixel region of the photo, determine the size and outline of the object within it, and output a signal based on the image processing result?

 

Message 3 of 8

Your frame size is actually 1632 x 1200 ~= 2 Mpixels. It looks to me like the camera can stream up to 500 frames per second, and that is a lot of data to deal with, likely more than your FPGA board can handle.
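For a sense of scale (frame size and rate as above; 3 bytes per color pixel is my assumption):

```python
width, height = 1632, 1200       # frame size from the camera spec
fps = 500                        # maximum streaming rate
bytes_per_pixel = 3              # assumed 8-bit RGB

rate = width * height * fps * bytes_per_pixel
print(f"{rate / 1e9:.2f} GB/s")  # ~2.94 GB/s of raw pixel data
```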

 

Your single-image processing approach is, on the other hand, a lot easier, but are you sure it is worth the trouble of streaming and processing on the FPGA compared to doing all operations on your host machine? What are the speed and latency requirements for your trigger signal?

 

What type of processing do you plan to do on your image? FPGAs are great for simple, repeated operations like scaling or filtering, but more complex algorithms like outlining and measuring objects, though possible, are a lot more difficult to implement.

Message 4 of 8

Your single-image processing approach is, on the other hand, a lot easier, but are you sure it is worth the trouble of streaming and processing on the FPGA compared to doing all operations on your host machine? What are the speed and latency requirements for your trigger signal?

 

--- I plan to output a signal within milliseconds (or even less) after the image is processed. The signal will be held for milliseconds down to microseconds using the FPGA board.

 

What type of processing do you plan to do on your image? FPGAs are great for simple, repeated operations like scaling or filtering, but more complex algorithms like outlining and measuring objects, though possible, are a lot more difficult to implement.

 

--- The processing I plan to do is outlining and morphological recognition. For example, the algorithm should be able to distinguish the image of a triangle from that of a circle, and output the signal based on the shape.

The second idea would be to extract the RGB information of a region of interest in a photo and compare it against a pre-set threshold to govern the signal output.
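A minimal host-side sketch of that second idea (the ROI coordinates and threshold are placeholders; the mean is taken over all three channels, but a per-channel comparison would work the same way):

```python
import numpy as np

def roi_trigger(img: np.ndarray, roi: tuple, threshold: float) -> bool:
    """True if the mean RGB intensity inside the ROI exceeds the threshold.

    img: (H, W, 3) uint8 image; roi: (top, left, height, width).
    """
    top, left, h, w = roi
    patch = img[top:top + h, left:left + w, :]
    return patch.mean() > threshold

frame = np.random.randint(0, 256, (1200, 1632, 3), dtype=np.uint8)
fire = roi_trigger(frame, roi=(500, 700, 64, 64), threshold=128.0)
print("trigger" if fire else "no trigger")
```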

 

Do you think my PCIe-7842R is okay for this? I also noticed there are some FPGA boards from NI for image processing; would those be a better choice? Or GPU programming?

 

Message 5 of 8

It sounds like you'll have to try things out to find out whether your 7842R can do the job or whether you need to 'upgrade' to a more powerful FPGA board (one targeting image processing, with onboard DRAM).

 

Morphology may require you to stream the entire image to the FPGA before even starting your analysis. If that's the case, you are not only delaying your processing (just as you would on the host), you also risk hitting the BRAM limitation (no DRAM available), depending on your image size. You may be able to do some sort of inline monitoring during streaming so that you keep only the sub-region-of-interest that is relevant to you, reducing the size of the image to be processed. It all depends on your actual situation.
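In software terms, the inline-monitoring idea looks something like this (a sketch only; the toy image and ROI bounds are made up, and on the FPGA the equivalent is a pixel/line counter that discards out-of-ROI pixels as they arrive, so only the ROI ever touches BRAM):

```python
from typing import Iterable, Iterator, List

def crop_stream(rows: Iterable[List[int]], top: int, bottom: int,
                left: int, right: int) -> Iterator[List[int]]:
    """Keep only the region of interest while an image streams in row by row."""
    for y, row in enumerate(rows):
        if top <= y < bottom:          # row is inside the ROI vertically
            yield row[left:right]      # keep only the ROI columns
        # all other pixels are dropped on the fly, nothing is buffered

image = [[x + 100 * y for x in range(8)] for y in range(6)]  # toy 6-row image
roi = list(crop_stream(image, top=2, bottom=4, left=3, right=6))
print(roi)  # [[203, 204, 205], [303, 304, 305]]
```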

 

If your goal is to differentiate shapes from each other (like a triangle versus a circle), you may not need very high resolution. Something as small as 9x9 pixels per particle might be enough (it depends on your actual particle shapes and contrast), so you could low-pass filter and decimate your incoming image 'on the fly' (that is, during streaming) and significantly reduce the amount of data your FPGA analysis module has to handle.
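As a toy illustration of why such a coarse grid can be enough, consider classifying by the fill factor of the particle's bounding box: an ideal disc covers about pi/4 ~ 0.79 of its box, a right triangle about 0.5 (the masks and the 0.7 threshold below are made up for the example):

```python
import numpy as np

def fill_factor(mask: np.ndarray) -> float:
    """Particle area divided by the area of its bounding box."""
    ys, xs = np.nonzero(mask)
    box = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return mask.sum() / box

def classify(mask: np.ndarray) -> str:
    return "circle" if fill_factor(mask) > 0.7 else "triangle"

# Toy 9x9 binary masks (1 = particle pixel)
yy, xx = np.mgrid[0:9, 0:9]
circle = ((yy - 4) ** 2 + (xx - 4) ** 2 <= 20).astype(int)
triangle = (xx <= yy).astype(int)              # right triangle
print(classify(circle), classify(triangle))    # circle triangle
```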

 

Finally, I have to mention again that the overhead on the host side may overshadow the gain of processing on the FPGA compared to a pure host solution. So keep in mind to try the same algorithm on the host and benchmark the difference. Please keep us posted; this is the kind of evaluation we'd love to hear about.
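A simple pattern for that host-side benchmark (the lambda below is a stand-in for whatever processing you settle on):

```python
import time
import numpy as np

def benchmark(fn, frame: np.ndarray, runs: int = 100) -> float:
    """Average wall-clock time per call, in milliseconds."""
    start = time.perf_counter()
    for _ in range(runs):
        fn(frame)
    return (time.perf_counter() - start) / runs * 1e3

frame = np.random.randint(0, 256, (1200, 1632, 3), dtype=np.uint8)
print(f"{benchmark(lambda f: f.mean(), frame):.2f} ms per frame")
```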

Message 6 of 8

Do you know if there are any built-in examples for image processing that can be run on my FPGA board? Where can I locate the examples?

 

 

Message 7 of 8

I don't know of any image processing examples developed specifically for the 78xxR targets, but there are a few examples shipped with the Vision FPGA module. Check the Example Finder under Toolkits and Modules >> Vision FPGA. These examples can also be used as inspiration for how to pack pixels with line/frame info, as well as how to use VI-scoped FIFOs to move data between processing blocks.
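To illustrate the packing idea in the abstract (this is not the Vision FPGA module's actual format, just a sketch of the concept; the bit positions are arbitrary): bundling each pixel with start-of-frame and end-of-line flags in one integer lets downstream blocks track image geometry without any side channel.

```python
SOF = 1 << 25      # start-of-frame flag (illustrative bit position)
EOL = 1 << 24      # end-of-line flag

def pack(rgb: int, sof: bool = False, eol: bool = False) -> int:
    """Bundle a 24-bit RGB pixel with framing flags into one 32-bit word."""
    return rgb | (SOF if sof else 0) | (EOL if eol else 0)

def unpack(word: int) -> tuple:
    return word & 0xFFFFFF, bool(word & SOF), bool(word & EOL)

word = pack(0x10FF7F, sof=True)
print(hex(word), unpack(word))   # pixel value restored, SOF flag set
```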

There is also IPNet, where you may find useful sample code.

Finally, you could try posting on the Machine Vision forum.

Message 8 of 8