
IMAQdx Calculate Frames per Second

Hello,

 

I'm using LabVIEW to record from a GigE camera at around 250 fps. The camera isn't in a triggered mode right now, so the rate varies between 247 and 252 fps. I'm trying to make sure that none of my frames are dropping and that I extract the entire buffer, but I'm having trouble understanding the "Images Behind" value from the Calculate Frames per Second VI. I've attached my code and included a photo showing the part that confuses me.

NVVyjayanthi_0-1710204284706.png

Buffer Number and Images Behind are outputs from the Calculate Frames per Second VI. Acq Loop is the while-loop count that I feed in as Buffer Number to the Get Image2 VI. When I end the acquisition, why doesn't Buffer Number go all the way up, with Images Behind going down to 0 (basically flushing the buffer)?
Is there something wrong in how I'm thinking about this?

 

Thanks

NV

Message 1 of 17

Unfortunately I don't have a suitable camera at hand at the moment, but the comment in IMAQdx Calculate Frames per Second.vi states that "Images Behind" is useful when acquiring every image: operations (e.g. display) can be skipped to help catch up and avoid missing images if the acquisition gets too far behind.

 

Technically, it is just the difference between Buffer Count and Buffer Number:

 

Screenshot 2024-03-12 10.40.48.png
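In plain-code terms, that relationship could be sketched like this (a Python illustration; the names mirror the VI outputs and are not an actual IMAQdx API):

```python
def images_behind(buffer_count, buffer_number):
    """How far the application has fallen behind the driver.

    buffer_count:  cumulative number of buffers the driver has acquired
    buffer_number: the buffer number the application last extracted
    """
    return buffer_count - buffer_number

# If the driver has acquired 6046 buffers but the loop has only
# extracted up to buffer 6040, six images are waiting behind:
print(images_behind(6046, 6040))  # -> 6
```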

 

If you are "trying to make sure that none of my frames are dropping", then the LostBufferCount property could be useful:

Screenshot 2024-03-12 10.42.55.png

 

and also this one — Detecting Packet Losses From a GigE Camera in LabVIEW

Message 2 of 17

Consider uploading a Snippet.png of your code, since I can't look inside your VI.

With IMAQdx you define the size of the local ring buffer on your PC, and Get Image2.vi then reads from that local buffer. If your camera acquires images faster than your PC can fetch them, your "Images Behind" count increases. If this delta gets bigger than your local buffer, the camera overwrites local buffers that haven't been read out yet. Since the local buffer size is restricted to a maximum of 250, your Images Behind counter can't go from 6040 to 0.
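To make the overwrite mechanics concrete, here is a minimal Python sketch of a ring buffer (purely illustrative; the real ring lives inside the IMAQdx driver): the camera writes into slot `frame mod ring_size`, so any frame more than `ring_size` behind the newest one has already been overwritten.

```python
RING_SIZE = 250          # the local buffer limit mentioned above

class Ring:
    def __init__(self, size):
        self.size = size
        self.slots = [None] * size
        self.written = 0     # frames the camera has produced so far

    def write(self, frame):
        # the camera always writes into slot (frame number mod ring size)
        self.slots[self.written % self.size] = frame
        self.written += 1

    def read(self, frame_number):
        """Return the frame if it is still in the ring, else None (overwritten)."""
        if self.written - frame_number > self.size:
            return None      # lost: the camera lapped the reader
        return self.slots[frame_number % self.size]

ring = Ring(RING_SIZE)
for n in range(1000):        # camera runs far ahead of the reader
    ring.write(f"frame {n}")

print(ring.read(999))   # -> frame 999 (recent, still in the ring)
print(ring.read(0))     # -> None (overwritten long ago)
```

This is why Images Behind can never exceed the ring size: anything further behind is no longer addressable.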

 

Also check whether you have an IncompleteBufferMode attribute and what it is set to. I think there is a Get Every Image example in LabVIEW.



Message 3 of 17

Thank you all for your replies. I'm still learning a lot about high speed acquisition and your suggestions definitely helped with that.

 

The Lost Packet Count shows up as 0, indicating that I'm not dropping any frames. When I started tracking the Processed Frames per Second value from CalculateFramesPerSecond.vi, it was processing at 90 Hz while acquiring at 250 Hz. This is also what led to the non-zero Images Behind values. That in itself isn't bad, but when I stop the acquisition, the frames remaining in the buffer aren't passed through the Channel Writer that I'm using to write to an AVI file. The VI snippet for this is below.

 

NVVyjayanthi_3-1710276557072.png

 

When I disabled my Channel Writer and stopped writing to the AVI file, the acquisition rate and the processing rate started matching. This snippet is below. So just writing to that Channel Writer increased the lag. This still doesn't make sense to me, seeing as this sort of producer-consumer architecture has worked in a lot of other posts I've seen in this community.

 

NVVyjayanthi_0-1710275821322.png

 

I used Get Image Data to save the data while acquiring it. This showed the same acquisition and processing frame rates.

 

NVVyjayanthi_1-1710276162247.png

 

With this method I have to add the extra steps of converting the data to an image -> demosaic -> write to AVI.

 

The Get Every Image (Optimized Performance) example didn't work with the camera that I have. I suspect it's due to the camera's output format, which is BayerBG8. Could the conversion from BayerBG8 to RGB be causing the delay when I try to save the image directly to the AVI?
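For context on why that conversion costs CPU time: BayerBG8 stores one 8-bit color sample per pixel on a 2x2 BG/GR mosaic, so every RGB output pixel has to be interpolated from its neighborhood, which is a full pass over the image for every frame at 250 fps. A toy nearest-neighbor "demosaic" of a single 2x2 cell (pure Python, purely illustrative; real demosaicing uses better interpolation):

```python
def demosaic_cell(cell):
    """cell = [[B, G1], [G2, R]] (one BayerBG 2x2 cell) -> one (R, G, B) pixel."""
    b, g1 = cell[0]          # top row:    blue, green
    g2, r = cell[1]          # bottom row: green, red
    return (r, (g1 + g2) // 2, b)   # average the two green samples

print(demosaic_cell([[10, 200], [220, 90]]))  # -> (90, 210, 10)
```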

 

Message 4 of 17

Just when you thought you understood how LabVIEW handles data and data flow, you start using LabVIEW Vision and find other "gotchas".

 

Let's start with the simple assumptions behind DAQmx acquisition and data saving.  Consider an N-channel A/D converter sampling at 1 kHz, 1000 points at a time.  You want to stream it to disk without losing any samples.  So you have a Producer While Loop that has a DAQmx Read, and every second outputs an N x 1000 array of data points.  You put this array into a Stream Channel and send it to the Consumer Loop, which writes the data to an already-opened file.  No problems, no particularly large or complex data structures.

 

Now go to IMAQdx.  It seems simple -- you generate "frames" at 30 Frames/second, put each Frame in a Stream Channel, pass them to the Consumer Loop, where they are unpacked and written to the (already-opened) AVI file.

 

But what is a Frame?  First, it comprises 640 x 480 "pixels" (that's a medium-size Frame) at 32 bits of RGB data per pixel, arriving every 1/30 of a second.  That's a lot of bytes to copy into the Stream, and unpack at the other end.  But that's not how Vision works -- the images are actually stored in a "Buffer", an area of the PC's memory (managed by the IMAQdx Driver) that holds the Frame, and all that is transferred by the Stream Channel is the address of the Buffer.  The Camera is assigned a pointer to this Buffer, and the hardware directly streams the image bits and bytes into this buffer.  The Producer passes the address of this Buffer to the Consumer, which uses this address to process the Image Data that it contains and write it (after suitable compression) to disk in AVI format.

 

The trick is that this takes time, often more than 1/30 of a second, as the AVI format includes image processing, compressing the image in three dimensions -- compressing each 2D image by encoding regions where the image doesn't vary point-to-point with fewer than 32 bits/pixel, and also processing across time, effectively encoding how much this part of the image changed from one frame to the next.

 

To be effective, you need multiple "frames" to chew on, which translates to allocating to IMAQdx a buffer size large enough for the AVI processing to be effective.  Suppose you specify 30 buffers.  This may require 40 MB, but that's pretty trivial for a PC.  You are streaming 30 addresses/second to the Consumer, and it manages to do its computation "in time" because you gave it a large memory area to work with.
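The arithmetic behind that "may require 40 MB" estimate, sketched in Python:

```python
# 30 buffers of 640 x 480 pixels at 32 bits (4 bytes) of RGB per pixel
width, height, bytes_per_pixel, buffers = 640, 480, 4, 30

frame_bytes = width * height * bytes_per_pixel   # one buffer
total_bytes = frame_bytes * buffers              # the whole allocation

print(frame_bytes)   # -> 1228800  (about 1.2 MB per frame)
print(total_bytes)   # -> 36864000 (about 37 MB for 30 buffers)
```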

 

Have you configured a reasonable Buffer Size when you configured your camera?

 

[When I was introduced to IMAQdx after 5 or so years of "ordinary LabVIEW", I didn't get it.  My colleague (who was using it) also didn't understand all the subtleties, but with the two of us reading the manuals and the LabVIEW Help, and writing little "demos" for ourselves, we had the "Ah Ha!" moment where we said "Let's allocate 10 buffers and see what happens ...".]

 

It would really help us to help you if you included more (all?) of your LabVIEW code.  Note that some of the more experienced LabVIEW users (such as I) don't have the latest versions, so it is a good idea to "Save for Previous Version" and specify, say, LabVIEW 2019 or 2021.  An entire Project, compressed by right-clicking the Project folder, choosing "Send To", "Compressed (zipped) folder", and attaching the resulting .zip folder, will be the most helpful (we can actually see how many buffers you specified with the IMAQdx Configure Acquisition VI).

 

Bob Schor

Message 5 of 17

Hello Bob,

 

Thank you for that answer! Things are starting to make some more sense once again.

 

When I allocate the buffer, I usually allocate 10k frames if I'm collecting ~5k frames; I roughly try to do 2x the number of frames I'm actually recording. The PC I'm using has 64 GB of RAM and the camera is connected via GigE.

 

I've attached the LabVIEW 2019 code where I'm using the Get Image2 VI and the Get Image Data VI.

 

A follow-up question: if I stop the acquisition and there are frames still in the buffer, do they get processed, i.e. do their addresses still get passed on to the consumer loop?

 

Thanks,

NV

Message 6 of 17

OK, that was a fast (and responsive!) response.  I've taken a quick look at Record Video Lucid Camera Channels, and find a number of features that I haven't encountered before (such as the Codec Property "RingText.Text" being wired in to the Codec input -- I always wired the Enum for the Codec there).

 

I would suggest using the "Next" Buffer Number Mode -- this makes the Buffers you have allocated into a large Ring Buffer, very efficient and fast, moving as little as needed.

 

If you stop the Acquisition and also stop the Channel Writer (which it looks like you do), the Consumer will continue to run, processing the Buffers it receives and writing AVI frames until it reads the last element and the "Last Element?" output goes True to stop the Consumer Loop.  Nothing should be lost (as long as you have enough memory for the Buffers and the Channel, which should be the case).

 

Bob Schor

Message 7 of 17

The Ring Text being connected to the codec was a leftover from when I was still learning about my codec options and running mini-experiments to check the speed and size of the AVI file.

 

I tried the "Next" mode and still got the same result. I now also get a non-zero Images Missed count.

 

Mode: Buffer Number

NVVyjayanthi_0-1710366569809.png

 

Mode: Every

NVVyjayanthi_1-1710366771030.png

 

Mode: Next

NVVyjayanthi_2-1710366785303.png

 

 

 

 

Message 8 of 17

I'd really like to try to help you, but you have been reluctant to provide us with information that can help us understand what you are doing.

 

Please provide the following information:

  • What camera are you using?  [You don't need to provide the manufacturer and model number, but you should at least note what its "parameters" are, including its frame rate and image size (in horizontal and vertical pixels, and how many bits/pixel, typically 8, 16, or 32)].
  • Specify the nature of the AVI, including FPS, number of frames (or, equivalently, time duration of the video).
  • Please provide (ideally in LabVIEW 2019 or 2021) the VI or VIs that involve (a) opening, acquiring, and "exporting" Images with your camera, and (b) opening an AVI file and importing and saving to the AVI file the images from the camera.  I presume you are using a Producer/Consumer Design Pattern -- please show us the structure and interactions that involve the images.  Ideally, we should be able to write our own test routines for your camera (or equivalent cameras to which we have access) for testing.

Bob Schor

Message 9 of 17

Just a few comments from my side.

First of all, keep in mind that IMAQ images are passed by reference, not by value!

The typical mistake most engineers make when they start working with IMAQ is something like this:

ref images.png

Here we have two loops: one is the "fast" acquisition loop and the other is a slow "processing" loop.

The problem is that all the images in the Channel's queue refer to the SAME image.

You need to allocate an array of images (a ring buffer), then transfer each image.
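That aliasing problem translates directly into any by-reference language. A Python sketch (a list stands in for the IMAQ image, a deque stands in for the Channel; all names are illustrative):

```python
from collections import deque

# WRONG: one shared image buffer, enqueued by reference three times
image = [0]                  # the single "IMAQ image"
channel = deque()
for frame in range(3):       # fast acquisition loop
    image[0] = frame         # camera overwrites the buffer in place
    channel.append(image)    # enqueues a reference, not a copy
print([img[0] for img in channel])   # -> [2, 2, 2]: every entry shows the last frame

# RIGHT: a ring of separate images, one per in-flight frame
ring = [[0] for _ in range(3)]
channel = deque()
for frame in range(3):
    ring[frame % len(ring)][0] = frame
    channel.append(ring[frame % len(ring)])
print([img[0] for img in channel])   # -> [0, 1, 2]: each entry is its own frame
```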

It seems you have found this already, because in the last VI you switched to LabVIEW arrays.

This eliminates the issue, because arrays are transferred by value, but doing it like this is very inefficient:

Screenshot 2024-03-14 09.52.32.png

If you need really fast processing, don't convert IMAQ images to LabVIEW arrays and vice versa.

In addition, converting a 1D array to 2D also incurs a penalty.

What I would recommend is a configured ring acquisition like this:

Screenshot 2024-03-14 10.58.01.png

This is actually based on the standard IMAQdx example:

Screenshot 2024-03-14 10.59.39.png

Now, if you still prefer to use the Channel Writer (personally I prefer old-school classical queues), then you have to limit the size of the queue to the size of the ring buffer (it makes no sense to have it unlimited, because unprocessed images get overwritten; they are in fact references):

Screenshot 2024-03-14 11.02.09.png

When the queue on the Channel Writer gets full, you will start to lose images, because they will be overwritten in the ring buffer, and Images Missed starts to increase.

In this pattern you will not have "Images Behind", because images are pushed to the queue fairly quickly, but you will see the same effect indirectly in the processing loop: the number of images sitting in the queue is the number of unprocessed images behind the acquisition point.
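A Python sketch of that bounded-queue behavior (illustrative only; the ring size of 10 and the 5:1 speed ratio are made-up numbers): once the queue holds as many references as the ring has slots, a new frame would overwrite an unprocessed one, so it is counted as missed instead.

```python
from collections import deque

RING_SIZE = 10        # queue limited to the ring-buffer size (made-up number)
channel = deque()
missed = 0

def produce(frame):
    """Producer: enqueue a frame reference, or count it as missed when full."""
    global missed
    if len(channel) >= RING_SIZE:
        missed += 1            # the ring slot would be overwritten
    else:
        channel.append(frame)

# The producer emits 25 frames; the consumer only drains one frame in five.
for frame in range(25):
    produce(frame)
    if frame % 5 == 0:
        channel.popleft()      # slow consumer

print(missed)   # -> 10: frames lost because the consumer fell behind
```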

Screenshot 2024-03-14 11.05.36.png

I would recommend reading the Ring Acquisitions KB article for a better understanding.

Here is a slightly modified version of the original NI example, where I added simple logging (the queue is guaranteed to be free of race conditions):

ring.png

And then you can see how the images are acquired and processed when the acquisition is faster than the processing:

 

Ring 0 acquired in buffer 0
Ring 0 processed
Ring 1 acquired in buffer 1
Ring 1 processed
Ring 2 acquired in buffer 2
Ring 3 acquired in buffer 3
Ring 4 acquired in buffer 4
Ring 2 processed
Ring 5 acquired in buffer 5
Ring 6 acquired in buffer 6
Ring 7 acquired in buffer 7
Ring 8 acquired in buffer 8
Ring 9 acquired in buffer 9
Ring 3 processed
Ring 0 acquired in buffer 10
Ring 1 acquired in buffer 11
Ring 2 acquired in buffer 12
Ring 3 acquired in buffer 13
Ring 4 acquired in buffer 14
Ring 4 processed
Ring 5 acquired in buffer 15
Ring 5 processed
Ring 6 acquired in buffer 16

 

 

And you can see here that in this extreme situation 10 images were acquired but only 3 were processed, and the image originally acquired into ring slot 4 (buffer 4) was overwritten by buffer 14 before it could be processed.

Hope this helps the understanding a little bit. A kind of "double buffering" is also possible in theory, if you prefer not to touch the ring acquisition buffer directly, but not the way you did it with arrays (in fact, the queue in the Channel Writer is your second buffer, just a very inefficient form of one).

Technically, your buffer should simply be large enough; how large depends on the speed of both loops and the overall acquisition time, of course.

Last year I finished a project where 10000 FPS acquisition was required without frame loss. That was done with cameras with onboard memory, so the images were first acquired into an internal 128 GB buffer and then "slowly" unloaded via a 10 Gb network. It was also required to encode the frames into video, and I used the OpenCV Video Writer, which is slightly more efficient (but the root cause for bringing OpenCV onboard was the codec and the ability to save video in an *.mp4 container rather than *.avi, not the performance). In some cases, such as a Camera Link grabber, you might have this ring buffer on board the frame grabber (but this is not your case, I guess).

Message 10 of 17