IMAQdx Calculate Frames per second

To put it straight: Your image extraction loop is slower than your camera produces images.

Message 11 of 17

@Quiztus2 wrote:

To put it straight: Your image extraction loop is slower than your camera produces images.


Yes, for sure, this is normal and expected in this case. The acquisition is running at 250 FPS as stated, and depending on the image size it is not always possible to perform "real-time" processing and encoding. That is why we have buffered acquisition: slower encoding will not lose frames, because they will be buffered (as long as the buffer is not filled, of course). For fast "burst" acquisitions this is a common practice. Theoretically, if the buffer is large enough to hold the whole sequence of images, then "parallel" processing is not ultimately required; we can start it after the acquisition. Sometimes this buffer is located inside the camera, sometimes on the frame grabber, but there is nothing fundamentally wrong with having this buffer in operational memory (I guess NI uses DMA to fill this buffer).
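A minimal sketch of this buffered producer/consumer idea in Python (all numbers are illustrative; in LabVIEW the same pattern is a queue between two while loops):

```python
import queue
import threading
import time

FPS = 250
N_FRAMES = 5 * FPS            # 1250 frames for a 5 s burst
BUFFER_DEPTH = 2000           # frames are lost only if this fills up

buf = queue.Queue(maxsize=BUFFER_DEPTH)

def acquire():
    """Producer: driver/DMA side, runs at full camera speed."""
    for i in range(N_FRAMES):
        try:
            buf.put_nowait(("frame", i))   # placeholder for real image data
        except queue.Full:
            print(f"frame {i} lost - buffer full")
    buf.put(None)                          # sentinel: acquisition finished

def encode():
    """Consumer: slower encoding loop, drains the buffer at its own pace."""
    while (item := buf.get()) is not None:
        time.sleep(0.002)                  # simulate encoding slower than 250 FPS
    print("encoding finished")

t = threading.Thread(target=acquire)
t.start()
encode()
t.join()
```

With BUFFER_DEPTH larger than the burst, nothing is lost even though the consumer is slower; shrink it below 1250 and the producer starts dropping frames, which is exactly the failure mode described above.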

Message 12 of 17

In the first place, I would question whether parallel AVI encoding is even necessary here. The OP shows screenshots with 5-second recordings. Is sequential encoding an option here?

Message 13 of 17

@Quiztus2 wrote:

In the first place, I would question whether parallel AVI encoding is even necessary here. The OP shows screenshots with 5-second recordings. Is sequential encoding an option here?


Yes, agreed: for just a 5-second acquisition I would do acquisition and encoding sequentially, not in parallel. Fully parallel encoding makes sense if we would like to save a little time, and it can also increase the overall maximum recording duration when memory can't hold the whole sequence. On the other hand, too-aggressive CPU consumption during encoding may cause frame loss at the network transport layer, because we usually have UDP behind the scenes (but I don't have much experience with GigE Vision).
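For scale, a quick back-of-envelope check of whether the whole 5 s burst fits in RAM (the resolution is an assumption; the thread doesn't state it):

```python
# Back-of-envelope buffer size for the 5 s burst at 250 FPS.
fps, duration_s = 250, 5
frames = fps * duration_s                  # 1250 frames
bytes_per_frame = 2048 * 2048 * 1          # assumed: 8-bit raw Bayer at 2048x2048
total_gib = frames * bytes_per_frame / 2**30
print(f"{frames} frames, ~{total_gib:.1f} GiB")   # ~4.9 GiB
```

A few gigabytes is well within reach of a desktop machine, which supports the point that for a 5 s burst you can buffer everything and encode afterwards.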

Message 14 of 17

I've actually tried the LabVIEW example - Acquire Every Image (Optimized Performance). It threw an error saying that for the ring acquisition to work, no processing may be done on the images before they're put into the buffer. I'm using a Lucid camera which has a color filter array in front of the sensor, so the color images need to be de-mosaiced to get them into the RGB (U32) format. That counts as processing, which rules out the ring acquisition part for me.
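For reference, the same de-mosaicing step in Python is a single call (a sketch assuming OpenCV and an RG Bayer layout; the actual pattern depends on the Lucid sensor):

```python
import cv2
import numpy as np

# Hypothetical 8-bit raw Bayer frame as delivered by the camera
raw = np.zeros((1024, 1024), dtype=np.uint8)

# De-mosaic the color filter array into a 3-channel RGB image
rgb = cv2.cvtColor(raw, cv2.COLOR_BayerRG2RGB)
```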

 

I'm actually completely fine with doing sequential processing. I just thought that, since I don't need to actually save the per-frame data, I could feed it directly into the AVI file and write the video. When I use the Get Image Data function and then convert the data into RGB frames, the program takes a while to finish writing all the data to the AVI file, which is completely acceptable for me. If the data gets too big to hold in memory, the plan is to save it as binary files, write the video later, and then delete the binary files. I'm sure there are more clever ways of getting this done as well.
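A sketch of that spool-to-disk-then-encode plan (file name, frame geometry, codec, and the acquire_frames() stub are all assumptions):

```python
import cv2
import numpy as np

H, W, FPS = 1024, 1024, 250    # hypothetical frame geometry and rate

def acquire_frames(n=1250):
    """Stub for the real acquisition loop - yields raw Bayer frames."""
    for _ in range(n):
        yield np.zeros((H, W), dtype=np.uint8)

# Pass 1: during acquisition, append each raw frame to a binary file
with open("frames.bin", "wb") as f:
    for frame in acquire_frames():
        f.write(frame.tobytes())

# Pass 2: after acquisition, read the frames back and encode the video
writer = cv2.VideoWriter("out.avi", cv2.VideoWriter_fourcc(*"MJPG"), FPS, (W, H))
for bayer in np.fromfile("frames.bin", dtype=np.uint8).reshape(-1, H, W):
    writer.write(cv2.cvtColor(bayer, cv2.COLOR_BayerRG2BGR))  # VideoWriter wants BGR
writer.release()
```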

 

However, this is only to record video. There's another version of this program that just saves the raw image data as binary files to be opened in Python/MATLAB later.
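Reading such a raw binary file back in Python is then trivial (dtype and geometry must match what was written; both are assumptions here):

```python
import numpy as np

H, W = 1024, 1024   # must match the recorded frame geometry
frames = np.memmap("frames.bin", dtype=np.uint8, mode="r").reshape(-1, H, W)
print(frames.shape)  # (n_frames, H, W), without loading everything into RAM
```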

 

Thank you all for your help. I can definitely say that I've learnt more about IMAQ.

Message 15 of 17

@NVVyjayanthi wrote:

... the color images need to be de-mosaiced to get them into the RGB (U32) format. That counts as processing, which rules out the ring acquisition part for me.


Glad to see that this was helpful for you. The only point is that I don't understand how de-mosaicing is related to the ring acquisition and why it rules it out. A ring acquisition will reuse the buffers again and again and will work as long as the processing keeps up (once it lags a whole ring behind, the newest images overwrite the oldest ones that haven't been processed yet, and the game is over). Acquisition is just acquisition, and processing can run in parallel. For example, in your particular case you can start three threads (while loops): one performs acquisition, the second de-mosaicing, and the last one encoding; then the de-mosaicing is parallelized with the encoding and you save a little overall processing time.

I had a very similar task last year: the images from four cameras needed to be stitched together (which takes a huge amount of time, because lens-distortion correction needs to be performed, plus fading on the overlaps, flat-field correction, sparkle-noise filtering and so on). As a result I had four acquisition threads for the four cams, then 16 threads for stitching and the last one for encoding, all running fully in parallel (I had a 26-core Xeon CPU). For me the parallel processing was very helpful, because in the worst case each camera can deliver up to 80000 images, each one 1024x1024 at 16 bit, and the encoding turned this into 80000 images at 2048x2048, written to an AVI "preview" and a 16-bit multipage BigTIFF, all together. The major problem was the sequencer with indexes, because the 16 processing threads run asynchronously, but the images need to be written sequentially.
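A sketch of that three-loop pipeline in Python with queues (grab_frame, debayer, and write_frame are hypothetical stubs; the reorder buffer in the encoder is the "sequencer with indexes" problem mentioned above, which matters once there are several processing workers):

```python
import heapq
import queue
import threading

def grab_frame(): return b"raw"    # stub: replace with camera read
def debayer(raw): return raw       # stub: replace with de-mosaic call
def write_frame(f): pass           # stub: replace with AVI/TIFF write

acq_q, enc_q = queue.Queue(100), queue.Queue(100)

def acquire(n):
    """Loop 1: acquisition only, tags every frame with its index."""
    for i in range(n):
        acq_q.put((i, grab_frame()))
    acq_q.put(None)

def demosaic():
    """Loop 2: de-mosaicing, runs in parallel with both neighbours."""
    while (item := acq_q.get()) is not None:
        i, raw = item
        enc_q.put((i, debayer(raw)))
    enc_q.put(None)

def encode():
    """Loop 3: encoding; holds out-of-order frames until their turn."""
    pending, next_idx = [], 0
    while (item := enc_q.get()) is not None:
        heapq.heappush(pending, item)
        while pending and pending[0][0] == next_idx:
            write_frame(heapq.heappop(pending)[1])
            next_idx += 1

threads = [threading.Thread(target=acquire, args=(1250,)),
           threading.Thread(target=demosaic),
           threading.Thread(target=encode)]
for t in threads: t.start()
for t in threads: t.join()
```

With a single de-mosaic worker the frames already arrive in order; the heap-based reordering only becomes necessary when several workers (like the 16 stitching threads described) finish asynchronously.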

Message 16 of 17

@NVVyjayanthi wrote:

I've actually tried the LabVIEW example - Acquire Every Image (Optimized Performance). It threw an error saying that for the ring acquisition to work, no processing may be done on the images before they're put into the buffer. I'm using a Lucid camera which has a color filter array in front of the sensor, so the color images need to be de-mosaiced to get them into the RGB (U32) format. That counts as processing, which rules out the ring acquisition part for me.

 

I'm actually completely fine with doing sequential processing. I just thought that, since I don't need to actually save the per-frame data, I could feed it directly into the AVI file and write the video. When I use the Get Image Data function and then convert the data into RGB frames,


Obviously, you can do the de-mosaicing after the recording is complete, or inside the AVI loop.

Message 17 of 17