Imagine a system using, for example, multiple GigE cameras through the IMAQdx interface, where we wish to form a composite stitched image from the multiple camera views. The stitching principle is naive: straightforward concatenation, one image next to another.
The problem is that while it is trivial to build such a composite image, it's difficult to do it very efficiently. The image sizes are large, tens of megapixels, so every copy matters. Alternative hardware configurations would open up a lot of options, but say we're stuck with GigE cameras and (at least initially) the IMAQdx interface. What tricks, or even hacks, can you guys imagine for facing this challenge?
I've seen some talk about the IMAQdx grab buffers, and it appears to me that one cannot manually assign those buffers or access them directly. The absolute optimal scenario would of course be to hack your way around this and stream the image data directly next to each other in memory, sort of as shown below in scenario1.png:
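To make the idea concrete, here is a toy sketch of what scenario 1 amounts to in memory, written in plain Python rather than LabVIEW; nothing here is IMAQdx API, it just illustrates one contiguous composite allocation with each camera "acquiring" straight into its own slice, so no copy is ever made:

```python
# Conceptual sketch of scenario 1 (hypothetical, not the IMAQdx API):
# a single contiguous composite buffer, with each camera delivering its
# frame directly into a zero-copy view of its own region.

WIDTH, HEIGHT = 4, 3          # toy frame size, 8 bits/pixel
FRAME = WIDTH * HEIGHT        # bytes per camera frame
NUM_CAMERAS = 2

# The composite image is one contiguous allocation.
composite = bytearray(FRAME * NUM_CAMERAS)

# Each camera gets a zero-copy view into its region of the composite.
views = [memoryview(composite)[i * FRAME:(i + 1) * FRAME]
         for i in range(NUM_CAMERAS)]

# Simulate each camera DMA-ing a frame into its view.
for cam, view in enumerate(views):
    view[:] = bytes([cam + 1]) * FRAME

# The composite now holds both frames back to back, with no copies made.
assert composite == bytearray([1] * FRAME + [2] * FRAME)
```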
The above, however, doesn't seem to be easily achieved. The second scenario, then, would be to acquire into individual buffers and perform one copy into the composite image. See the illustration below:
Interfaces usually allow this with relative ease. I haven't tested it yet, but based on the documentation this should be possible using a ring-buffer acquisition and "IMAQdx Extract Image.vi". Can anyone confirm this? The copying could be performed by external code as well. The last scenario, without a ring buffer, using "IMAQdx Get Image2.vi", might look like this:
The second copy is a waste, so this scenario should be out of the question.
I hope this made some sense. What do you wizards say about it?
Unfortunately there is no concept of a "sub-image" where you could have the acquisition transparently acquire into an image that represents a sub-region of a larger image. However, you are correct that the ring acquisition with Extract is the right way to go to remove a copy. In this mode, the user allocates the internal buffers IMAQdx is using by way of standard images, and then they can directly access those without copying. You could simply extract the image from each camera's ring buffer (zero-copy), then use the ImageToImage VI to copy it into a region in your larger image (one-copy).
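The suggested flow can be sketched in plain Python (names and mechanics here are illustrative only, not the IMAQdx/Vision API): "extract" hands back a zero-copy view of a ring buffer, and a single explicit copy places that frame into a sub-region of the larger composite, analogous to what ImageToImage does:

```python
# Hedged sketch of the one-copy flow: zero-copy extract from a ring
# buffer, then one copy into a region of the composite image.
# All function names are hypothetical stand-ins for the LabVIEW VIs.

COMP_W, COMP_H = 8, 2         # composite size, 8 bits/pixel
CAM_W, CAM_H = 4, 2           # each camera frame

composite = bytearray(COMP_W * COMP_H)

def extract(ring, index):
    """Zero-copy 'Extract': a view of one buffer in the ring."""
    size = CAM_W * CAM_H
    return memoryview(ring)[index * size:(index + 1) * size]

def image_to_image(dst, dst_w, src, src_w, src_h, x0):
    """One copy into a sub-region of the composite (ImageToImage-like)."""
    for row in range(src_h):
        off = row * dst_w + x0
        dst[off:off + src_w] = src[row * src_w:(row + 1) * src_w]

# Two fake camera rings, each holding one frame of constant pixels.
ring_a = bytearray([1] * CAM_W * CAM_H)
ring_b = bytearray([2] * CAM_W * CAM_H)

image_to_image(composite, COMP_W, extract(ring_a, 0), CAM_W, CAM_H, 0)
image_to_image(composite, COMP_W, extract(ring_b, 0), CAM_W, CAM_H, CAM_W)

# Each composite row is camera A's pixels followed by camera B's.
assert bytes(composite) == bytes([1, 1, 1, 1, 2, 2, 2, 2] * 2)
```

The key point is that `extract` costs nothing; the only copy is the row-by-row placement into the destination region.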
Aww, ran into problems right away... I plugged in a USB3 Vision camera for some testing and ran the "Low-Level Ring.vi" example:
Error -1074360235 occurred at IMAQdx Configure Ring Acquisition.vi Possible reason(s): NI-IMAQdx: (Hex 0xBFF69055) For a ring acquisition with this camera, the image width must be a multiple of the image's required line alignment.
Error -1074360236 occurred at IMAQdx Configure Ring Acquisition.vi Possible reason(s): NI-IMAQdx: (Hex 0xBFF69054) For a ring acquisition with this camera, the images must have a border size of 0 pixels.
Well, I guess those sort of make sense in context: the IMAQ image container's memory alignment probably lays down the rules here. The border isn't an issue with the composite image, but it would be if I wanted to use the no-copy technique with single-camera systems. Is there more information about these constraints anywhere? Googling the error codes gives me nothing. I'd like to know the actual conditions I'm facing here.
EDIT: The acceptable widths were multiples of 64 (with 8 bits/pixel)
Sorry, the constraints are not really well documented, as they depend on the platform, camera type, camera capabilities, and how the driver handles things. All of these are subject to change, so we decided instead to make the errors very self-descriptive about how to fix any requirement.
You are correct that these fundamentally come down to making sure the specified image buffer can be transferred into directly by the driver. The biggest requirement is that the image data type is the same and doesn't need any decoding/conversion step. The other requirements are more flexible and change depending on many factors:
- No borders, since a border adds a discontinuity between each line. This error doesn't apply to GigE Vision (since the CPU moves the data into the buffer) or to USB3 Vision cameras that have a special "LinePitch" feature allowing them to pad the image lines. The USB drivers of more modern OSes (like Win8+) have more advanced DMA capabilities, so it is possible/likely that this constraint can also be lifted in the future.
- The line width must be a multiple of 64 bytes (the native image line alignment on Windows); the same exceptions apply as for the border requirement.
So, if you end up using GigE Vision cameras, this should just work. If you want to use USB3 Vision, you have a few more constraints to work with.
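For anyone sizing buffers around this, a small helper like the following (a sketch assuming the 64-byte line alignment described above, with a hypothetical function name) rounds a desired width up so each image line lands on an acceptable boundary:

```python
# Assumption: lines must be a multiple of 64 bytes, per the reply above.
ALIGN = 64  # native image line alignment on Windows

def aligned_width(width, bytes_per_pixel=1):
    """Round width up so each line is a multiple of ALIGN bytes.

    Assumes bytes_per_pixel evenly divides ALIGN (true for 1, 2, 4...).
    """
    line_bytes = width * bytes_per_pixel
    padded = (line_bytes + ALIGN - 1) // ALIGN * ALIGN
    return padded // bytes_per_pixel

# With 8 bits/pixel, acceptable widths come out as multiples of 64,
# matching the EDIT in the earlier post.
assert aligned_width(1000) == 1024
assert aligned_width(1024) == 1024
assert aligned_width(1000, bytes_per_pixel=2) == 1024  # 2048 bytes/line
```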
I see. It doesn't sound too bad in the end, then. And that was a fantastic answer right there, thank you! Let me just go create a bunch of fake accounts to kudo it properly.