I have the following setup:
LabVIEW 2017 SP1, VDM 2018, VAS 2018, a Basler acA4600-7gc GigE camera, and a D-Link DGS-1008P Gigabit switch. I have tried two different PCs - a Dell Optiplex and a Dell Precision, both Win10 with i7 CPUs and plenty of RAM.
I am trying to perform a machine vision operation where, once the camera is in position, I acquire a single greyscale image and perform an inspection. The inspection part runs fine and completes in 25-40 ms, but the acquisition portion takes approximately 2.2-2.5 seconds. When I blow apart the Vision Acquisition Express VI and put timers in between the steps, I see that 'Snap.vi' is taking anywhere from 1.008 to 1.31 seconds on average (the first time through can take even longer).
I have tried packet sizes of 1500, 8000, and 9000 with no change in speed (set in MAX and on my NIC). The max desired peak bandwidth is set to 1000 (the maximum allowed). MAX takes just as long as the LabVIEW code to acquire the image. I have attached a stripped-down project that will show the time for each operation should anyone happen to have a camera. I have been working with NI tech support as well, and they confirmed they are seeing the same acquisition times, but I am in need of a quick solution, as the machine cannot run properly with these vision delays.
If I need to provide more information or screenshots just let me know.
When you acquire continuously in MAX, what's the frame rate? I want to differentiate between the case where there's driver overhead taking a long time vs. the camera itself is taking a long time to acquire an image.
If the frame rate is slower than expected, can you provide the following extra information?
If the frame rate seems okay, I'd recommend switching to the low-level API and timing the different steps in there. If you drill down into the high-level Snap (IMAQdx Snap.vi on your block diagram), you can see several different steps: Configure Acquisition, Start Acquisition, Get Image, Stop Acquisition, and Unconfigure Acquisition. In an application where you are continually acquiring images, we don't recommend that you go through all of those phases each time you get an image, as that adds considerable (usually unnecessary) overhead. You should be able to configure things just once up front as long as you're not changing attributes that are critical to the acquisition (image size, pixel format, etc.), so then you can avoid the overhead of configuring/unconfiguring for each image.
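To make the cost of those phases concrete, here is a minimal Python mock-up of the two patterns: the high-level Snap, which runs all five phases for every image, versus configuring once up front and paying only the Get Image cost per frame. The MockCamera class and its per-phase timings are placeholders for illustration only, not the real IMAQdx API:

```python
# Illustration of why configure-once beats snap-per-image.
# MockCamera and its per-phase costs are hypothetical stand-ins
# mirroring the phases inside IMAQdx Snap.vi.

class MockCamera:
    # Approximate per-phase costs in milliseconds (illustrative only)
    COSTS = {"configure": 300, "start": 27, "get": 555,
             "stop": 137, "unconfigure": 100}

    def __init__(self):
        self.elapsed_ms = 0

    def run(self, phase):
        self.elapsed_ms += self.COSTS[phase]

def snap_per_image(cam, n_images):
    """High-level Snap: all five phases for every image."""
    for _ in range(n_images):
        for phase in ("configure", "start", "get", "stop", "unconfigure"):
            cam.run(phase)
    return cam.elapsed_ms

def configure_once(cam, n_images):
    """Low-level pattern: configure/start once, then only Get Image."""
    cam.run("configure")
    cam.run("start")
    for _ in range(n_images):
        cam.run("get")
    cam.run("stop")
    cam.run("unconfigure")
    return cam.elapsed_ms

print(snap_per_image(MockCamera(), 10))   # 11190 ms
print(configure_once(MockCamera(), 10))   # 6114 ms
```

With these made-up numbers, the fixed configure/start/stop/unconfigure overhead is paid once instead of ten times; the same amortization is what the low-level VIs buy you in LabVIEW.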
Hope this helps,
Staff Software Engineer, Vision R&D
Thanks for the suggestions. My frame rate in MAX is 7 fps, which is the camera default. That part seems to work fine.
When I tried to add timers into the individual steps in the Snap.vi, for some reason the timers don't update or even show as being read when I look at the block diagram (with or without debugging turned on). It won't allow me to save the Snap.vi under a new name ('Save As' is greyed out), and I am hesitant to save it under the same name, as it looks like that will modify the VI in NI_VAS.lvlib:IMAQdx SNAP.vi. Maybe it's still running the copy of Snap.vi already loaded into memory and not seeing my modifications? Is there a simple way to do this without modifying the original VI? I must be missing something obvious here.
If I find a culprit in Snap.vi that is eating up time, there is a chance I can break it out and only re-execute the parts I need for each inspection, but this might get tricky. The machine has three cameras which perform 7-8 inspections depending on product. This particular camera will perform two different inspections, with only the exposure time needing to change in the acquisition portion of the code. I think everything else as far as camera settings will be the same.
Ok. I made a copy of the Snap.vi and here's what I am seeing so far:
IMAQdx Configure Acquisition.vi takes 275-313 ms to run; it varies from iteration to iteration.
IMAQdx Start Acquisition.vi takes about 27 ms to run; it only varies by 1 ms.
IMAQdx Get Image.vi takes about 555 ms to run; it only varies by 1 ms.
IMAQdx Stop Acquisition.vi takes about 137 ms; it only varies by 1 ms.
Thanks for the info. I have attached a little VI I made that configures a triggered acquisition up front and starts it, then sends a software trigger to acquire images. For what you describe, I think the most efficient thing you could do would be something like this, where you configure it all up front and start things, but just send a trigger when you need an image. Exposure Time is something that should be configurable while the acquisition is running, so you wouldn't need to stop or unconfigure to change that. For other attributes, you may need to conditionally add in those steps if necessary. The names of these attributes may not be an exact match for your camera, so you may need to tweak them slightly based on what things are called in MAX. I ran this with a Basler camera I had on my desk, so hopefully the names match.
Let me know if you have any questions about this.
Hope this helps,
Thank you very much. Yes, it looks like I will have to pre-configure the setup and strip the loop down to just Get Image.vi in order to get any speed from the vision software, just as you did. Hopefully it will be enough. I still have no idea why it takes over 500 ms to capture one image from a camera that can provide 7 fps. There must still be some overhead in Get Image.vi. Thank you for your help and quick response.
Glad I could help. I agree with you that GetImage should not take that long at all. I think it's best practice to minimize the overhead of configure/unconfigure and start/stop if you don't need it for your use case, and I hope that unblocks you from making progress on your application. However, I still would like to answer the question of why GetImage is taking so long. Just now I heard about your escalation through support and I am talking to our applications engineer and product support engineer about taking some benchmarks of GetImage here in our office and trying to understand what you're seeing.
Another thing that you can try is swapping out the type of acquisition you're doing from a grab to a ring (if possible, see below). This avoids an extra copy of the data, which adds additional overhead. Take a look at the low-level ring example to see the slight differences in API.
About the ring vs. grab API:
A ring acquisition requires the user to allocate images and pass them to the driver while configuring the acquisition. The driver will then transfer image data from the camera directly to these user-allocated buffers, avoiding an extra copy. It is called a ring because the driver continually cycles through the allocated images, transitioning from the last buffer to starting over with the first one after each image has been filled with data. IMAQdx Extract Image prevents the driver from overwriting the image returned to the user while the image is used for processing. The image remains extracted until it is released by calling either IMAQdx Release Image, or IMAQdx Extract Image for a different buffer with the release previous flag set to true. This type of acquisition avoids an extra copy of the image in the acquisition loop, but it has some restrictions to support this optimization:
1. Image must not require decoding or pixel manipulation
- Compatible pixel formats: Mono, BGRa
- Incompatible pixel formats: Packed formats, Bayer formats using decoding, YUV, RGB, etc.
2. Image must have border size of 0 (not applicable for GigE Vision, Camera Link, and ISC-178x cameras)
3. Image width must be a multiple of 64 bytes (not applicable for GigE Vision, Camera Link, and ISC-178x cameras, as well as USB3 Vision cameras with the configurable Line Pitch attribute)
4. The size, image type, and border size of an extracted image cannot be modified
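A minimal sketch of the buffer cycling described above, in Python, assuming a deliberately simplified model of the Extract/Release semantics (the driver writes into pre-allocated buffers and skips any buffer the user has extracted). The RingAcquisition class is an illustration of the concept, not the real driver:

```python
# Simplified model of a ring acquisition: the driver cycles through
# user-allocated buffers and must not overwrite an extracted one.

class RingAcquisition:
    def __init__(self, num_buffers):
        self.buffers = [None] * num_buffers
        self.next_index = 0
        self.extracted = None  # index currently held by the user

    def driver_fill(self, frame):
        """Driver writes the next frame, skipping an extracted buffer."""
        if self.next_index == self.extracted:
            self.next_index = (self.next_index + 1) % len(self.buffers)
        self.buffers[self.next_index] = frame
        self.next_index = (self.next_index + 1) % len(self.buffers)

    def extract(self, index):
        """User takes ownership of a buffer; no copy is made."""
        self.extracted = index
        return self.buffers[index]

    def release(self):
        """Return the buffer to the driver for reuse."""
        self.extracted = None

ring = RingAcquisition(3)
for frame in ("f0", "f1", "f2"):
    ring.driver_fill(frame)
img = ring.extract(0)      # user processes buffer 0 in place
ring.driver_fill("f3")     # driver skips buffer 0, overwrites buffer 1
print(ring.buffers)        # ['f0', 'f3', 'f2']
ring.release()
```

The point of the sketch: because the user works directly in the driver's buffer between extract and release, no per-frame copy is needed, which is exactly why the pixel format restrictions above exist (the data must be usable as-is, with no decoding step).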
Let me know if you have any questions.
I still have no idea why it takes over 500 ms to capture one image from a camera that can provide 7 fps.
Note that the camera's 7 fps spec assumes various events on the camera are overlapped/pipelined. The camera can be transferring an image while triggering/exposing the next image. If you are timing the latency from triggering the camera to when the image is available on the host, you'll come up with a number that is longer than the frame period implied by the max frame rate.
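A quick back-of-envelope illustration of that distinction, using assumed stage timings (the exposure, readout, and transfer numbers below are illustrative placeholders, not measured values for the acA4600-7gc):

```python
# Throughput vs. latency for a pipelined camera.
# All stage durations below are assumptions for illustration.

exposure_ms = 20
readout_ms = 140      # sensor readout, assumed
transfer_ms = 120     # GigE transfer of one frame, assumed

# Continuous (pipelined) acquisition: stages overlap, so the frame
# period is set by the slowest single stage.
frame_period_ms = max(exposure_ms, readout_ms, transfer_ms)
fps = 1000 / frame_period_ms
print(f"continuous: ~{fps:.1f} fps")          # ~7.1 fps

# Single triggered snap: nothing overlaps, so the latency is the
# sum of every stage end to end.
snap_latency_ms = exposure_ms + readout_ms + transfer_ms
print(f"single snap: {snap_latency_ms} ms")   # 280 ms
```

So a camera rated at 7 fps can still take far longer than 1/7 s to deliver a single triggered frame, because the rating measures pipelined throughput, not end-to-end latency.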
I am looking to perform image acquisition and processing in parallel loops. The Low-Level Ring parallel workers example would have worked, but my camera only supports YUY and JPEG formats, which results in
error: NI-IMAQdx: (Hex 0xBFF69053) For ring acquisitions, manipulating the image data prior to extraction is not allowed. The combination of the Pixel Format, Output Image Type, Pixel Signedness, Swap Pixel Bytes, and Shift Pixel Bits attributes requires the image data to be altered.
Is there any way to do this parallel acquisition and processing by getting around this error, or without using ring acquisition?