

IMAQdx Grab images using buffer in memory

Let me explain one reason (there are several) why sending an error as soon as a little more than 50% of the buffers are filled is not good:
In one example, I acquire synchronised images from 2 cameras.
Each camera takes 40 images with a hardware trigger. To be as fast as possible, the acquisition is continuous.
If I declare 40 buffers for the driver and also 40 user buffers for each camera, then even if I start processing my buffers while they are being acquired, as soon as a little more than 50% are filled I get an overwrite error from the driver.
This is NOT logical, because even if the image processing is "late", the driver should send this error only if we try to acquire a 41st buffer while the first buffer has not been processed.

To avoid this problem, I have to allocate more than 80 buffers for the driver for every camera!
I think this is not a "vague" example...

Message 21 of 32

Hi CAPTIC_LA,

 

I think you misunderstand how the internal buffer list works. It is not just a FIFO for the user's buffer data. For a continuous acquisition, it is designed to cover two separate needs:

- Prevent hardware DMA buffers from underflowing, by ensuring there are always buffers ready to receive data when it comes in (most hardware needs these queued ahead of time)

- Allow variability in user processing, by giving access to older buffers and not just the latest

 

Since the buffer list is circular in nature, you can see that these two needs conflict with each other. After a buffer is acquired by the hardware, it is free for the user to access. At some point it will need to be recycled and re-queued to the hardware to receive a new image. IMAQdx uses the 50% heuristic to try to balance both sides. It could instead prefer to keep buffers unqueued from the hardware if they haven't been processed yet by the user, but then the hardware would underrun whenever the user's processing code gets far enough behind. Since a hardware underrun has many more downsides, the driver prefers the alternative.
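
To make the 50% heuristic concrete, here is a small stand-alone C sketch (an illustration only, not IMAQdx code; the 40-buffer list matches the example above, and the 25-frame processing lag is an invented assumption):

```c
#include <stdio.h>

#define NUM_BUFFERS    40   /* size of the driver's internal buffer list               */
#define PROCESSING_LAG 25   /* assumed: the application runs 25 frames behind the camera */

int main(void)
{
    int oldest_user_buffer = 0;   /* oldest filled buffer not yet re-queued to hardware */

    for (int frame = 0; frame < NUM_BUFFERS; frame++) {
        int user_side = frame - oldest_user_buffer + 1;   /* filled buffers on the user side */
        if (user_side > NUM_BUFFERS / 2) {
            int last_processed = frame - PROCESSING_LAG;  /* highest frame the app has consumed */
            printf("frame %2d: re-queue buffer %2d%s\n", frame, oldest_user_buffer,
                   oldest_user_buffer > last_processed ? "  <-- not processed yet: overwrite" : "");
            oldest_user_buffer++;                          /* buffer handed back to the hardware */
        }
    }
    return 0;
}
```

Running this, the first re-queue happens right after buffer 20 of 40 is filled, which is the behaviour CAPTIC_LA observed when processing lags by more than half the list.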

 

With the example you gave, it sounds like you should instead be using a non-continuous acquisition. There is no speed difference; the driver just doesn't recycle the buffer list to make room for new images. You would set it for 40 images if that is all you expect to acquire. Furthermore, if you only need access to the images in a linear fashion, there is no reason to use 40 user image buffers (the ones passed to Get Image), since the driver will never overwrite the underlying data buffers and you can access them at whatever speed your processing runs at. You can simply re-use the same image buffer again and again in that case.
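
A rough sketch of that one-shot pattern against the NI-IMAQdx C API (error handling omitted; the camera name "cam0", the buffer size, and the exact function signatures are assumptions from memory, so check them against niimaqdx.h):

```c
#include <stdlib.h>
#include <NIIMAQdx.h>

#define NUM_IMAGES 40

int main(void)
{
    IMAQdxSession session;
    uInt32 actualBuffer;
    uInt32 bufSize = 1280 * 1024;              /* placeholder: width * height * bytes per pixel */
    unsigned char *pixels = malloc(bufSize);   /* single reusable destination buffer */

    IMAQdxOpenCamera("cam0", IMAQdxCameraControlModeController, &session);

    /* One-shot acquisition: 40 buffers, NOT continuous, so the driver never
     * recycles them and the 50%/overwrite behaviour never comes into play. */
    IMAQdxConfigureAcquisition(session, 0 /* one-shot */, NUM_IMAGES);
    IMAQdxStartAcquisition(session);

    for (uInt32 i = 0; i < NUM_IMAGES; i++) {
        /* Copy buffer i out at whatever pace processing allows; the same
         * destination can be reused because nothing is overwritten. */
        IMAQdxGetImageData(session, pixels, bufSize,
                           IMAQdxBufferNumberModeBufferNumber, i, &actualBuffer);
        /* ... process pixels ... */
    }

    IMAQdxStopAcquisition(session);
    IMAQdxUnconfigureAcquisition(session);
    IMAQdxCloseCamera(session);
    free(pixels);
    return 0;
}
```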

 

Eric

Message 22 of 32

Ok Eric, thank you for these explanations.

But in my specific case the problem is that if my acquisition is not continuous, and I wait for a "start" signal on my PC (a digital input, for example) to start a one-shot acquisition on each camera, I might miss the first triggers...

Message 23 of 32
Hi,

I'm not sure I completely understand your application. Do you currently just have the camera free-running and then discard all images before some software-based trigger condition? Do you actually need any pre-trigger data?

Assuming I understand your application, a better option might be to simply start the camera on demand when you get the trigger condition. This could be done by splitting the heavyweight Configure Acquisition from Acquisition Start, or by putting the camera in a multi-shot mode that is triggered by a software trigger. Both of these are low latency (at least comparable to just software-timed image discarding).
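
A minimal sketch of the first option, under the same assumptions about the NI-IMAQdx C API as above; wait_for_start_signal() is a hypothetical placeholder for reading your digital input:

```c
#include <NIIMAQdx.h>

extern void wait_for_start_signal(void);   /* hypothetical: block on the PC digital input */

void acquire_on_demand(const char *cameraName, uInt32 numImages)
{
    IMAQdxSession session;

    IMAQdxOpenCamera(cameraName, IMAQdxCameraControlModeController, &session);

    /* Do the heavyweight setup ahead of time... */
    IMAQdxConfigureAcquisition(session, 0 /* one-shot */, numImages);

    /* ...so only the cheap start happens after the trigger condition. */
    wait_for_start_signal();
    IMAQdxStartAcquisition(session);

    /* Get Image / Get Image Data calls follow as usual. */
}
```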
Message 24 of 32

Eric,

 

I read your comment with great interest.

Nice to hear this 50% limit actually exists. Where did you learn that? As far as I know it is not in the help files, so it seems everybody needs to discover this for themselves, or read this thread first.

I have thousands of frames coming in at 1000 FPS. Assigning thousands of buffers to this ring buffer uses all the memory I have. It is absurd that half of these buffers are not going to be used at all.

Breaking the process while half of the memory is still unused makes no sense to me.

I do not fully grasp this risk of DMA buffers underflowing, but as my frames come in at a steady pace, I do not see why the margin should be related to the total number of buffers instead of to the FPS and frame size.

Why don't we have a property that can be changed from 50% to, say, 10%?

You are probably right: I misunderstand how the internal buffer list works. I don't even know what an internal buffer list is. You continue by telling us what it is not. Maybe a topology drawing would help?

 

Best regards, Paul

 

Message 25 of 32

Hi Paul,

 

You can imagine the internal buffer ring as simply a FIFO that is filled in order by the hardware, with filled buffers de-queued for some time (during which the user can access them) and eventually queued back in after the 50% threshold of de-queued buffers is reached. The user can only access de-queued buffers via Get Image, as the ones that are queued are "owned" by the hardware and can be in the process of being filled at any point (so their contents are volatile and cannot be used by the user until the hardware has told us it is safe). It is not as though 50% of the buffers are never used; it is just that only 50% may be accessed at any specific point (when in a continuous/circular grabbing mode).

 

The reason the margin is related to a percentage and not to FPS/frame size is that in many cases the driver has very little knowledge of what the actual frame rate will be. The camera may be triggered, or may have non-standard free-run modes that don't explicitly tell us how fast data will come. We could use the camera's connection speed as some sort of proxy, but this may not be available or accurate in all cases, or could lead to completely unreasonable allocations if there is a large mismatch between actual speed and maximum.

 

I definitely appreciate the problem you are seeing, and unfortunately it stems from you wanting to do something a bit outside the normal application of either a single-shot acquisition (where this behavior does not occur) or a normal continuous acquisition (where you need to keep getting new data at all times). We are definitely considering some options to allow a user to adjust this 50% threshold, but there are some idiosyncrasies in how to expose this cleanly for all situations. We will definitely try to see how to better solve this use case.

 

In the meantime, one option you may wish to consider is to simply move the buffer list outside of IMAQdx and into your acquisition code. You could set up IMAQdx to use a minimal buffer list (enough to protect against overflows) and call Get Image into a circular list of images your application maintains. This would give you explicit control over how you recycle your buffers while wasting a minimum of memory.
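
A sketch of that idea, under the same hedged assumptions about the NI-IMAQdx C API as earlier in the thread; DRIVER_BUFFERS, USER_BUFFERS, and imageBytes are placeholder sizes you would tune for your own frames:

```c
#include <stdlib.h>
#include <NIIMAQdx.h>

#define DRIVER_BUFFERS 10      /* small list: just enough headroom for the driver   */
#define USER_BUFFERS   2000    /* your own ring, sized for the frames you must keep */

typedef struct { unsigned char *pixels; uInt32 bufferNumber; } UserFrame;

static UserFrame ring[USER_BUFFERS];

void acquire_into_user_ring(IMAQdxSession session, uInt32 imageBytes, uInt32 totalFrames)
{
    for (int i = 0; i < USER_BUFFERS; i++)
        ring[i].pixels = malloc(imageBytes);

    /* Keep the driver's buffer list minimal; the real buffering lives in "ring". */
    IMAQdxConfigureAcquisition(session, 1 /* continuous */, DRIVER_BUFFERS);
    IMAQdxStartAcquisition(session);

    for (uInt32 n = 0, head = 0; n < totalFrames; n++, head = (head + 1) % USER_BUFFERS) {
        /* Copy the next acquired buffer into our own ring; from here on the
         * application decides when (or whether) a slot gets recycled. */
        IMAQdxGetImageData(session, ring[head].pixels, imageBytes,
                           IMAQdxBufferNumberModeNext, 0, &ring[head].bufferNumber);
        /* ... hand ring[head] to the processing/consumer side ... */
    }

    IMAQdxStopAcquisition(session);
}
```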


Eric

 

 

Message 26 of 32

Also related to my recent question (Multi-camera IMAQdx systems: shortcuts for stitched composite image), I've now been struggling with the no-copy scheme using the Ring Acquisition and Extract Image. My main concern is tracking the buffers lost due to the locks. Say the application runs for a long time and frames are occasionally missed due to buffer overflow; that is completely fine, we just need to know exactly which ones. The frames lost at the ring buffer level were brought up earlier in this thread, but I didn't manage to find any way to actually keep track of them. The buffer numbers seem to increment only for grabbed frames, and I don't understand what the Overwrite Mode is supposed to do; it doesn't seem to have any effect related to this.

 

For now I've used Get Image into a "user buffer" and implemented the FIFO buffer with locks myself. This way it's been easy to track the overflowed frames, because the actual grabbing never stalls and the buffer number always keeps incrementing in the background; I can just skip copying into the user buffer.

 

So now I wish to get rid of the basically redundant copy. Access to the images in the IMAQdx buffer without copying certainly is there. Currently the only real obstacle seems to be this lost-frame tracking, and I just can't wrap my head around the whole ring acquisition buffering to overcome it. Am I missing some fundamental thing here, or am I just out of luck with the way IMAQdx is currently designed?

Message 27 of 32

Hi,

 

As before, the answer is a bit complex and depends a lot on the camera type and even implementation.

 

Firstly, there are two places images can be "lost". One is in the transport layer, where we never acquire the image at all; the second is where the image is acquired but is no longer available to your processing loop (because it is too old). The second case is what the Overwrite mode feature is for. Here you keep track of it within your application code, either by tracking the buffer numbers returned or by setting the Overwrite mode to fail when you don't get the buffer you ask for.
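
For that second case, here is a hedged sketch of the "track the numbers returned" approach (NI-IMAQdx C-API names recalled from memory; session, pixels, and bufSize come from your own setup, and it assumes the Overwrite mode is left at a "get" behaviour rather than "fail"):

```c
#include <stdio.h>
#include <NIIMAQdx.h>

/* Request buffers in strict sequence and compare what actually comes back.
 * If the requested buffer has already been recycled, the returned number
 * jumps ahead, and the gap tells you how many frames the loop missed. */
void grab_and_count_misses(IMAQdxSession session, void *pixels, uInt32 bufSize, uInt32 totalFrames)
{
    uInt32 requested = 0, actual = 0;

    while (requested < totalFrames) {
        IMAQdxGetImageData(session, pixels, bufSize,
                           IMAQdxBufferNumberModeBufferNumber, requested, &actual);
        if (actual != requested)
            printf("missed %u frame(s): wanted %u, got %u\n",
                   (unsigned)(actual - requested), (unsigned)requested, (unsigned)actual);
        requested = actual + 1;   /* continue from the buffer we actually received */
    }
}
```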

 

In the first situation, where the driver doesn't actually acquire an image, things get a bit more complicated. In the case of using Get Image, the buffers are locked for such a short amount of time that, unless you are using a small number of buffers with a high frame rate, it is very unusual to have an image dropped by the driver. As you suspected, once you use the Ring, since you are blocking the driver from recycling the buffer, it becomes much more likely if you get behind.

 

In the case of some buses, like DCAM FireWire, we don't really know how many buffers we lost, just that we may have lost "some" when there are no buffers attached to the hardware queue. It could be 0, 1, or more, and we increment the Lost Buffer count every time the queue is left empty (the underlying reasons stem from how isochronous transfers are exposed by the hardware and Windows).

 

On buses like GigE Vision, there typically isn't end-to-end flow control, so the camera just streams regardless of whether the driver has free buffers to receive the data. In this case, if the driver doesn't have a buffer available when an image starts, it drops the whole frame and increments the "Lost Buffers" count by however many frames were skipped (an exact count).

 

On USB3 Vision it gets even more interesting, because the bulk stream mechanism used by the protocol has flow control built in. The camera can't send data we don't have a buffer submitted for. This means that if you run behind, the camera has to start slowing down as well. The camera then has some flexibility in how it handles this. On cameras with large frame buffers there may be many images' worth of buffering, and maybe you never drop any data if you eventually catch up. Other cameras have very small buffers and have to drop data in this case.

When they do have to drop data, some cameras drop whole buffers, while others send "empty"/"partial" buffers to catch up. Those buffers show up with metadata ("Custom Data" in VDM terminology) that indicates they are missing data (similar to how packet loss is recorded on a per-image basis with GigE Vision). Even for cameras that skip whole buffers, how they record it can differ. Some increment the buffer number used in the transport layer for the skipped buffers, which our driver uses to update the "Lost Buffers" count; others just skip acquiring it completely (similar to a missed trigger). A lot depends on how the camera's architecture is designed in terms of how it buffers and transmits data.

 

Unfortunately, when we do drop whole frames, there isn't a good way in the driver today to know exactly which ones. The API mostly tries to tell you that some data was lost, but identifying exactly which images were lost is not well exposed today. Here are some ways you can do it:

- GigE Vision and USB3 Vision cameras do have a block_id that is returned in the image stream. On many cameras this is returned as "Chunk data" in the image, which you can extract as Custom Data (this assumes that the camera increments the ID even for skipped buffers).

- GigE Vision and USB3 Vision cameras include a timestamp with the image that can be extracted. If you are taking images at a constant rate, the gap in timestamps should tell you where you skipped images.

- You could add logic that queries Lost Buffers every iteration and combines it with reading the Last Acquired Buffer number (so you know roughly where each lost buffer belongs), as sketched below.
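
For that third option, a sketch under the same assumptions; the attribute name strings and the IMAQdxGetAttribute usage are recalled from memory, so verify them against the attribute constants in niimaqdx.h:

```c
#include <stdio.h>
#include <NIIMAQdx.h>

/* Poll the cumulative lost-buffer count alongside the last acquired buffer
 * number so each increment can be placed near the buffer where it happened. */
void log_lost_buffers(IMAQdxSession session, uInt32 *prevLost)
{
    uInt32 lost = 0, lastBuffer = 0;

    IMAQdxGetAttribute(session, "AcquisitionAttributes::LostBufferCount",
                       IMAQdxValueTypeU32, &lost);
    IMAQdxGetAttribute(session, "AcquisitionAttributes::LastBufferNumber",
                       IMAQdxValueTypeU32, &lastBuffer);

    if (lost != *prevLost)
        printf("lost %u buffer(s) somewhere around buffer %u\n",
               (unsigned)(lost - *prevLost), (unsigned)lastBuffer);
    *prevLost = lost;
}
```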

 

Eric

Message 28 of 32

That is quite eye-opening; now I have a much better idea of what to expect from the driver. Your explanations are much appreciated, once again.

 

The particular USB3 Vision camera I've now been testing with, a Basler one, never seems to increment the Lost Buffer Count, whether the buffers were full or being overwritten. And naturally the IMAQdx Buffer Number only increments for acquired images. The camera does provide a counter through the Chunk Data that at first glance seems to account for all the exposed images. That might be a satisfactory workaround.

 

In practice we'll only be using GigE Vision and Camera Link cameras (btw, any trivial solutions within the IMAQ interface to the same problem?), so I'll have to come back to this later on. The less device-specific the solution, the better, obviously. But at least now I'm learning where a lot of the driver behaviour comes from.

Message 29 of 32

@vekkuli wrote:

(btw, any trivial solutions within the IMAQ interface to the same problem?)


I'm not certain. I think that driver exposes a similar cumulative lost-buffers counter, so that might be the only option.

 

Message 30 of 32