Machine Vision


Ring buffer acquisition error keeps popping up

Hello,

 

Has anyone seen the error "Error -1074360237" on the Configure Ring Acquisition VI: "For ring acquisitions, manipulating the image data prior to extraction is not allowed"? I do not manipulate the image data, and I am using the LL Ring Parallel Workers example. The Configure VI fails from the very beginning. I am using a camera on a GigE bus.

 

Thanks

 

 

Message 1 of 14

What pixel format are you using? Because there are no internal buffers in this type of acquisition, the pixel format of the image is required to be of a type that does not require decoding. For example, a monochrome format such as "Mono8" would work, but a packed monochrome format such as "Mono12p" would not. Color formats that do not require decoding or pixel manipulation to match our BGRa pixel format can be used. For example, "BGRa8" and any Bayer format with the Bayer pattern attribute set to "None" would work, but YUV, RGB, or Bayer formats with a pattern cannot be used. If you need to use a pixel type that requires decoding, you can switch to a Grab instead.
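The rules above can be summarized in a small sketch. This is illustrative Python, not the IMAQdx API; the function and format names follow the GenICam-style names used in this thread, and the classification simply encodes the rules stated in the post (unpacked mono and BGRa pass; packed, YUV, plain RGB, and Bayer-with-a-pattern formats require decoding and therefore fail).

```python
# Illustrative sketch (NOT the IMAQdx API): classify pixel format names by
# whether the driver would need to decode them, which is what rules a
# format out for ring acquisitions.

RING_COMPATIBLE = {"Mono8", "Mono10", "Mono12", "Mono14", "Mono16", "BGRa8"}

def ring_compatible(pixel_format: str, bayer_pattern: str = "None") -> bool:
    """Rough check following the rules in this thread: unpacked mono and
    BGRa formats pass; packed ("...p"), YUV, plain RGB, and Bayer formats
    with a pattern set all require decoding."""
    if pixel_format in RING_COMPATIBLE:
        return True
    if pixel_format.startswith("Bayer") and bayer_pattern == "None":
        # Raw Bayer data delivered as-is, no demosaic requested;
        # packed Bayer formats still need unpacking, so they fail.
        return not pixel_format.endswith("p")
    return False

assert ring_compatible("Mono8")
assert not ring_compatible("Mono12p")          # packed: needs unpacking
assert not ring_compatible("BayerRG8", "RG")   # demosaic required
assert ring_compatible("BayerRG8", "None")     # raw Bayer passes through
```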

Message 2 of 14

Hi,

 

To add to what kensign wrote, the error message is a bit confusingly worded. The full text is below, and the important part is the second sentence:

 

NI-IMAQdx: (Hex 0xBFF69053) For ring acquisitions, manipulating the image data prior to extraction is not allowed. The combination of the Pixel Format, Output Image Type, Pixel Signedness, Swap Pixel Bytes, and Shift Pixel Bits attributes requires the image data to be altered.

 

So it isn't that you are manipulating the image data, just that the end result would have required IMAQdx to do so on your behalf. For GigE Vision cameras, the main requirement to use the Ring VIs is that the image datatype be identical to one of the internal Vision image datatypes. This would limit you to formats such as Mono8/10/12/16 and BGRA32. What formats does your camera support and which one are you trying to use?

Message 3 of 14

Hello,

 

Thanks for your answers. I am using a Point Grey camera with the following pixel formats available:

Mono8/12P/16

Bayer RG8/12P/16

YUV 411P/422P/444P

RGB 8P

 

So that means I have to eliminate YUV and RGB for the ring buffer to work.

 

Then in Output image type, I have: Grayscale U8/I16/U16 and RGB U32/U64. When I try to set it to grayscale (which I don't want anyway), MAX returns "No decoder available for selected pixel format". Might that be because my camera is a color camera by default?

 

Anyway, I need to choose either RGB U32 or U64, so according to your explanations this is not compatible with ring buffer acquisition.

 

I am trying to record video with an overlay on it, but the Write to AVI VI seems too slow to be usable (I get a maximum of 10-15 fps), which is why I was trying to use a ring buffer.

 

Many thanks for your help.

Adrien

 

 

 

Message 4 of 14

Hi Adrien,

 

Unfortunately, I don't think Point Grey cameras currently support the 4-byte RGB formats needed by NI Vision; they only implement 3-byte RGB formats. This requires some decoding work by IMAQdx and thus requires you to use the Grab API instead of the Ring API. However, the decoding is quite efficient, and I don't think it is causing your performance issue.

 

You probably want to explore your choice of video codec and see if any other options can go faster.

Message 5 of 14

Hello,

 

Thanks for the hint; I will look for related posts on the forum. So far I have been using the MJPEG codec, as the test VI found in the LabVIEW examples showed it was the fastest among the pre-installed codecs.

 

Many thanks.

 

Adrien

Message 6 of 14

I have three Basler cameras (acA2040-120uc) connected over USB3. We want to use them for a behavioral tracking system.

The National Instruments IMAQ data buffers do not support any of this camera's pixel formats, so ring acquisition is not possible with it.

But with a simple Grab the performance is not sufficient for our application. We lose frames.

This is a big issue; our project is stuck. Any recommendations? Is there another camera with the same performance that supports the IMAQ pixel formats?

In my opinion, the IMAQ data buffers should support the pixel formats used by most cameras. There should be no transcoding of pixel formats on the host, since the FPGA on the camera does this nicely.

It is unbelievable that these high-performance USB3 cards from National Instruments, the recommended Basler cameras, and the obviously well-engineered driver with ring buffer acquisition do not work together. Please correct me if my anger is misplaced.

Message 7 of 14

@clicks wrote:

I have three Basler cameras (acA2040-120uc) connected over USB3. We want to use them for a behavioral tracking system.

The National Instruments IMAQ data buffers do not support any of this camera's pixel formats, so ring acquisition is not possible with it.

But with a simple Grab the performance is not sufficient for our application. We lose frames.

This is a big issue; our project is stuck. Any recommendations? Is there another camera with the same performance that supports the IMAQ pixel formats?

In my opinion, the IMAQ data buffers should support the pixel formats used by most cameras. There should be no transcoding of pixel formats on the host, since the FPGA on the camera does this nicely.

It is unbelievable that these high-performance USB3 cards from National Instruments, the recommended Basler cameras, and the obviously well-engineered driver with ring buffer acquisition do not work together. Please correct me if my anger is misplaced.


It would be helpful to start new threads for topics such as this....

 

Sorry for your frustration. The Ring API requires that the camera be able to deliver data in the exact format that the driver requires, as you are accessing the buffers with zero decoding. For monochrome formats, this means an unpacked 8/10/12/14/16-bit format, and for color, it means a BGRA format with 8 or 16 bits per channel (32/64 bits per pixel). Unfortunately, the Basler camera in question (a color model) only supports 24-bit RGB/BGR formats and doesn't offer a mode with the alpha channel present.

 

While the IMAQdx driver could theoretically give you Ring API access to the 24-bit color images, none of the display or algorithm libraries in the Vision Development Module support that layout, so there would not be much you could do with the data. They are all based on 32-bits-per-pixel color processing.
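To make the 24-bit vs. 32-bit point concrete, here is a sketch of the decode step the driver has to perform for this camera: expanding 3-byte RGB pixels into the 4-byte BGRA layout that NI Vision images use. This is plain Python for clarity (the driver does this vectorized); the function name is mine, not an IMAQdx call.

```python
# Expand packed 24-bit RGB into the 32-bit BGRA layout used by NI Vision
# color images: swap R and B, and append an opaque alpha byte per pixel.

def rgb24_to_bgra32(rgb: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(rgb), 3):
        r, g, b = rgb[i], rgb[i + 1], rgb[i + 2]
        out += bytes((b, g, r, 255))  # BGRA order, opaque alpha
    return bytes(out)

# One pure-red pixel followed by one pure-green pixel:
assert rgb24_to_bgra32(b"\xff\x00\x00\x00\xff\x00") == \
       b"\x00\x00\xff\xff\x00\xff\x00\xff"
```

Because every pixel grows from 3 to 4 bytes, the output cannot alias the raw camera buffer, which is exactly why this format is incompatible with a zero-copy ring.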

 

As to suggestions for what you could do:

  • You could implement a producer/consumer ring-style mechanism in your code to use the Grab API independently of your processing. Unless your processing code is heavily multi-threaded, chances are you have free CPU time in the background for this.
  • If your algorithm doesn't need color data, you could switch to Mono8. On the Basler cameras, this does a Bayer decode first and then converts to grayscale.
  • FLIR/Point Grey has some cameras with the same sensor (in their Grasshopper GS3-U3-32S4C-C, and they at least have the mono version in their Blackfly S line). I can't find the detailed user manual right now, but as I recall, their cameras did provide a BGRa format that was compatible with IMAQdx's Ring API.
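The first suggestion (a Grab-based ring emulation) can be sketched like this. It is written in Python purely for illustration; in LabVIEW the buffers would be IMAQ image references moved between two queues, and the queue names, buffer count, and frame size here are all hypothetical. The key property is that buffers cycle between a "free" pool and a "full" queue, so in steady state only references move and no image memory is allocated or copied.

```python
# Hypothetical sketch of ring-style acquisition built on top of Grab:
# pre-allocate a pool of buffers, acquire into a free one, hand it to the
# consumer by reference, and recycle it back to the pool when done.
import queue
import threading

NUM_BUFFERS = 10
free_q = queue.Queue()   # pool of empty, pre-allocated buffers
full_q = queue.Queue()   # filled buffers waiting for processing
for _ in range(NUM_BUFFERS):
    free_q.put(bytearray(64))  # stand-in for a pre-allocated image

def producer(n_frames):
    for i in range(n_frames):
        buf = free_q.get()     # blocks if the consumer falls behind
        buf[0] = i % 256       # stand-in for Grab writing into buf
        full_q.put(buf)
    full_q.put(None)           # sentinel: acquisition finished

def consumer(results):
    while (buf := full_q.get()) is not None:
        results.append(buf[0])  # stand-in for processing / AVI write
        free_q.put(buf)         # recycle the buffer back to the pool

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()
producer(5)
t.join()
assert results == [0, 1, 2, 3, 4]
```

If the consumer is slower than the camera, the producer blocks on the empty pool, which is the same back-pressure behavior a driver-side ring gives you.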

 

Message 8 of 14

@BlueCheese Thank you for the clear answer. 

* The producer/consumer architecture was not suitable, since it results in too much copying of data (in addition to the pixel transcoding). We also have a USRP 2945 connected to the same computer, streaming a lot of synchronized data. We just need to merge the data and stream it to disk with as few copies as possible. But isn't the circular buffer acquisition intended for exactly these cases?

* We need the color. But I tried Mono8 on the camera and Grayscale U8 in IMAQ and got the same error message.

* I see that the IMAQ operations (we just need the image-to-image copy to merge the 3 frames, and Write to AVI) only work on the given pixel formats. But why did NI choose pixel formats different from the most common camera formats? Is this going to converge in the future?

* Thanks for the hint about the Point Grey cams; I will check them out. Is Basler going to take the cameras back for not being compatible with LabVIEW? But it was NI that recommended Basler... I feel so cheated.

It looks like there is no option to get this system running under NI. That means going back to C and focusing more on engineering than reverse engineering. But GNU Radio, FFmpeg, OpenCV, etc. are nice tools, and I am looking forward to unleashing their power.

Message 9 of 14

@JORY wrote:

@BlueCheese Thank you for the clear answer. 

* The producer/consumer architecture was not suitable, since it results in too much copying of data (in addition to the pixel transcoding). We also have a USRP 2945 connected to the same computer, streaming a lot of synchronized data. We just need to merge the data and stream it to disk with as few copies as possible. But isn't the circular buffer acquisition intended for exactly these cases?

* We need the color. But I tried Mono8 on the camera and Grayscale U8 in IMAQ and got the same error message.

* I see that the IMAQ operations (we just need the image-to-image copy to merge the 3 frames, and Write to AVI) only work on the given pixel formats. But why did NI choose pixel formats different from the most common camera formats? Is this going to converge in the future?

* Thanks for the hint about the Point Grey cams; I will check them out. Is Basler going to take the cameras back for not being compatible with LabVIEW? But it was NI that recommended Basler... I feel so cheated.

It looks like there is no option to get this system running under NI. That means going back to C and focusing more on engineering than reverse engineering. But GNU Radio, FFmpeg, OpenCV, etc. are nice tools, and I am looking forward to unleashing their power.


There is no additional copying of data if you use a producer/consumer architecture with the appropriate datatypes. For instance, Vision images are a reference type, so you are just moving handles around. You should be able to do the same thing with 2D arrays, but you have to use data value references manually, and that can be tricky to get right.

 

To emulate the Ring using Grab, you'd simply have a FIFO of empty images, Grab into one, pass it to your consumer via another FIFO for processing, and then recycle it back to the empty pool when done. The only additional work is the pixel decoding, which for any RGB format is very cheap on modern processors, since it is vectorized and just swaps which bytes contain the data. There isn't much benefit to the Ring over something like this, aside from the implementation living in the driver and a slight increase in memory bandwidth as the pixel data is decoded/swapped. Even that is minimal: in the Ring/zero-copy case, the first access to the data is slower anyway, since it has to be loaded into the CPU's cache at that point.

 

I'd need to see your code to understand why you are getting an error using Mono8 with a Ring. It could be due to alignment requirements: Vision's processing requires a certain alignment per line, and depending on how you create your images (padding, for instance) as well as the Width set on the camera, the layout could be invalid for the Ring. Have you checked the help for the error you are getting?
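The alignment concern can be illustrated with a short sketch. The idea: if each image line must start on an N-byte boundary, a camera Width whose line length isn't a multiple of N forces padding between lines, so the buffer layout no longer matches the raw camera stream and a zero-copy ring can't be used. The 64-byte figure below is an assumption for illustration only, not a documented NI Vision constant.

```python
# Hypothetical alignment check: does width_px * bytes_per_pixel land on an
# align-byte boundary, or would line padding be required? The align=64
# default is an illustrative assumption, not an NI-documented value.

def line_is_aligned(width_px: int, bytes_per_pixel: int, align: int = 64) -> bool:
    return (width_px * bytes_per_pixel) % align == 0

assert line_is_aligned(2048, 1)      # 2048 % 64 == 0: no padding needed
assert not line_is_aligned(2040, 1)  # 2040 % 64 == 56: padding required
```

In practice this means some camera Width settings that work fine with Grab could still be rejected by the Ring API.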

 

I'm not really sure why you claim the camera is "not supported by LabVIEW". The combination works perfectly fine, just one advanced feature (the Ring API) isn't compatible with the 24-bit color format this camera supports.

Message 10 of 14