IMAQdx GigE not robust at moderate CPU usage

A fairly open question, more a general request for information.

 

I am developing an application that acquires images from three Teledyne cameras; the acquisition is only a small part of the software.

 

I have been having real pain making the camera acquisition robust. Symptoms include:

- Sections missing from frames

- Entirely blank frames

- The actual cameras crashing and requiring a power cycle

 

My current understanding of how IMAQdx and GigE Vision work (specific to a continuous acquisition using the Acquire VI, rather than a ring or triggered acquisition) is:

- All UDP based

- Configure the camera to continuously stream image data over unicast

- Frames are split into packets according to the configured packet size; obviously this shouldn't be larger than the network MTU. These packets are sent via UDP unicast

- The IMAQdx driver constantly monitors the UDP port the image data arrives on, reassembles the packets into complete images, and stores them in buffers (a rough sketch of this idea follows this list)

- Using the Acquire VI I can copy the image data from the specified buffer into an IMAQ image reference
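
To check my mental model, here is a rough Python sketch of what I think the receive side is doing conceptually. The port number and header layout are made up for illustration; the real GVSP protocol and whatever IMAQdx does internally will differ:

import socket

# Hypothetical sketch only: real GVSP headers are different, and this port
# number is an assumption, not the one IMAQdx actually negotiates.
PORT = 5000
PACKET_SIZE = 1500  # must not exceed the path MTU, or packets fragment/drop

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))
sock.settimeout(5.0)

frames = {}  # frame (block) id -> {packet id: payload}

try:
    while True:
        data, _addr = sock.recvfrom(PACKET_SIZE + 64)
        # Pretend the first 8 bytes carry (frame id, packet id).
        frame_id = int.from_bytes(data[0:4], "big")
        packet_id = int.from_bytes(data[4:8], "big")
        frames.setdefault(frame_id, {})[packet_id] = data[8:]
        # If this loop stalls (CPU starvation), the NIC's receive buffer
        # overflows, datagrams are silently dropped, and the frame gets
        # assembled with holes - which would look like my "sections
        # missing from frames" symptom.
except socket.timeout:
    print(f"assembled (possibly partial) frames: {len(frames)}")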

 

Things I have done to improve the situation:

- Jumbo frames (I can't do this because our system will not allow for it; I am aware of why, but it is an inordinate amount of work to correct)

- Set the desired peak bandwidth to 333 Mbps for each camera (I understand this determines the inter-packet delay, but I have been unable to find specifics of exactly what it does; see the rough numbers after this list)

- Set the network card's interrupt moderation rate to minimal

- Increased the network card's receive buffer to the maximum value

- Ensured the frame rate is set low enough so as not to utilise more than 250 Mbps per camera (settled on ~10-15 fps per camera)

- Tried to reduce the frame size, but the cameras I have don't support binning or interleaving

- Tried implementing a ring acquisition, as my understanding is that the Acquire VI copies the image from the IMAQdx buffer to the IMAQ buffer, whereas a ring acquisition would let me access the image without copying the data. I couldn't do this because none of the pixel formats are supported

- Optimised the code to minimise CPU usage
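
For reference, the back-of-envelope arithmetic behind the two bandwidth-related points above, assuming a 1 Gbps link, 1500-byte packets, and (purely as an example, not my actual sensor) a 1600x1200 Mono8 image:

# Inter-packet delay implied by a 333 Mbps peak-bandwidth cap.
LINK_RATE = 1_000_000_000   # 1 Gbps wire rate
TARGET_BW = 333_000_000     # desired peak bandwidth per camera
PACKET_BYTES = 1500         # non-jumbo packet size

wire_time = PACKET_BYTES * 8 / LINK_RATE   # time one packet occupies the wire
paced_time = PACKET_BYTES * 8 / TARGET_BW  # average spacing to hold 333 Mbps
print(f"inter-packet delay ~ {(paced_time - wire_time) * 1e6:.0f} us")

# Frame-rate budget: what fps keeps one camera under 250 Mbps?
frame_bits = 1600 * 1200 * 8               # assumed image size, Mono8
print(f"max ~ {250_000_000 / frame_bits:.1f} fps per camera")

With those assumed numbers the camera would insert roughly 24 us of idle time between packets, and ~16 fps would fill the 250 Mbps budget, which is why I settled on 10-15 fps.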

 

It is this last point which is causing a noticeable problem at the moment. When my software, or something else, causes more CPU utilisation, the frames start getting corrupted. I am assuming this is because IMAQdx can't allocate enough resources to process the UDP datagrams quickly enough before the network card's buffer overflows. I am only ever at ~25% CPU usage on a 12-core machine, though.
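
For what it's worth, 25% averaged across 12 cores could still hide one saturated core doing all the receive work. Something like this Python/psutil snippet, run alongside the application purely for monitoring, would show the per-core picture:

import psutil

# Sample per-core load for ten seconds; one core near 100% while the
# average sits at ~25% would point at a starved receive thread.
for _ in range(10):
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    print(f"busiest core: {max(per_core):5.1f}%   all: {per_core}")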

 

Specific questions I have:

- Is anything in my understanding of how IMAQdx works wrong? Can anyone point me in the direction of some half-decent documentation on how it works?

- Are there any other things I can look at altering which I haven't listed?

- Is there a way of ensuring that the IMAQdx driver runs on its own thread, so that even if something else gets busy it still has the resources to process the UDP datagrams?

Message 1 of 3

There's probably something wrong with your code.

Bill
CLD
Message 2 of 3

Can't help thinking that not using jumbo frames is going to be a problem, but let's have some more info please.
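
Rough numbers on why jumbo frames matter, assuming 250 Mbps per camera (Python, just for the arithmetic, and ignoring header overhead):

# Per-packet overhead (interrupts, reassembly) scales with packet rate,
# and packet rate scales inversely with packet size at a fixed data rate.
for mtu in (1500, 9000):
    pps = 250_000_000 / (mtu * 8)  # packets per second for one camera
    print(f"MTU {mtu}: ~{pps:,.0f} packets/s per camera, ~{3 * pps:,.0f} for all three")

At a standard 1500-byte MTU that's ~62,500 packets per second across the three cameras for the host to keep up with; jumbo frames cut that by a factor of six.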

 

Are you acquiring from the 3 cameras simultaneously? How big are the images?

 

Can you share some of your code - the acquisition part at least?

 

Message 3 of 3