Machine Vision


Error -50150 and -1074360320 in CVS

Hi,

 

My application is an inspection system using a CVS-1456 and two DFK 41BF02 cameras from The Imaging Source. The cameras' specs are 1280 x 960 pixels, 15 fps max, 1/2" CCD. I'm using LabVIEW 8.6.1 for the program.

 

Initially, in MAX, I set the video mode for both cameras to 1280x960 (Mono 8) 3.75 fps and the transfer speed to 200 Mbps for each camera. The program, which used high-level IMAQdx functions, worked fine. Later, I found out that I need both cameras to grab simultaneously at 7.5 fps, so I changed the settings in MAX accordingly (increased the video mode's frame rate to 7.5 fps). When I reran the program, I could only capture from one of the cameras, while the second camera gave me Error -50150 (see attached).

 

I've checked whether there's enough bandwidth to run both cameras at 7.5 fps simultaneously, using the program I found here:

 

http://digital.ni.com/public.nsf/allkb/F7F4DA6482C401278625732D0066EF4E

 

It says that I am only using 52% of the bandwidth (26% for each camera), so bandwidth should not be the problem. I then switched to the low-level IMAQdx functions and set the buffer count to 30, but I still get the same error. I tried smaller buffer counts, and when I reduced it to 2 for both cameras, I received a different error instead: -1074360320.
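As a sanity check, the raw payload numbers can be worked out by hand. This is a rough sketch that ignores FireWire isochronous packet overhead (which NI's calculator does account for, so its percentages come out higher than these):

```python
# Back-of-envelope FireWire payload for two 1280x960 Mono 8 cameras.
# Ignores isochronous packet/header overhead, so real bus usage is higher.

WIDTH, HEIGHT = 1280, 960   # sensor resolution in pixels
BYTES_PER_PIXEL = 1         # Mono 8
FPS = 7.5                   # target frame rate per camera
BUS_MBPS = 400              # IEEE 1394a bus speed

per_camera_mbps = WIDTH * HEIGHT * BYTES_PER_PIXEL * 8 * FPS / 1e6
print(f"per camera: {per_camera_mbps:.1f} Mbps "
      f"({100 * per_camera_mbps / BUS_MBPS:.0f}% of the bus)")
print(f"both cameras: {2 * per_camera_mbps:.1f} Mbps "
      f"({100 * 2 * per_camera_mbps / BUS_MBPS:.0f}% of the bus)")
```

Even allowing for overhead, there is plenty of headroom, which is why I don't think bandwidth is the issue.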

 

Can anyone explain why this happens and how I can solve it?

Message 1 of 7

Message 2 of 7

Hi Muks,

 

Thank you for the link. I'm not very well versed in DMA or RAM vs. processor bitness (32/64), but wasn't that problem specific to Windows applications? I'm running my application on the RT OS inside the CVS. Also, based on my understanding of the last post (which may be wrong), the grab API does not automatically transfer the data to the user's buffer; it does this only when the user requests the image. I assume those images are later overwritten with new ones. If so, how does this contribute to limited memory?

 

By the way, what does error -50150 indicate?

Message 3 of 7

Hi Shazlan,

 

It sounds like you are running into memory constraints on your CVS. If you have two separate 1-megapixel cameras and you are allocating 30 buffers each in the low-level configuration, you're using at least 60 MB in buffers alone (assuming 8-bit data), before counting any buffers your application itself uses. Since that looks like a color camera, memory usage can grow considerably once you decode those images into color. Unless you genuinely need several seconds' worth of image buffering, I'd cut your buffer count down to no more than 5 images per camera if you want to run both.
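Back-of-the-envelope, the acquisition rings alone work out to (a rough sketch assuming 8-bit data and ignoring per-image driver overhead):

```python
# Rough memory arithmetic for the low-level acquisition ring buffers alone.
WIDTH, HEIGHT = 1280, 960   # pixels per 8-bit frame
BUFFERS = 30                # ring size per camera
CAMERAS = 2

ring_bytes = WIDTH * HEIGHT * BUFFERS * CAMERAS
print(f"{ring_bytes / 2**20:.0f} MiB of the CVS-1456's 128 MB RAM")
```

That is more than half your RAM before the OS, LabVIEW RT, and your own processing images take their share.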

 

For multiple megapixel-plus color cameras, the CVS-1456 might not be the best choice with its 128 MB of RAM (not to mention the 733 MHz processor). You might want to consider something like the EVS for this task instead.

 

Eric 

Message 4 of 7

Hi Eric,

 

Thank you for your reply. When I run the code with a buffer count greater than 3, I get the -50150 error. When I reduce the buffer count to 2 or 3, I get a different error code, -1074360320. Though the camera is a color camera, my application runs it in monochrome. Please find the VI attached. I understand that -1074360320 is an insufficient-memory error, but what is the -50150 error? And why do I get -50150 when I configure large buffers, but the insufficient-memory error after I have already reduced the buffer count?

 

"Unless you have a huge need for several seconds worth of image buffering..."

Honestly, how do I know whether I need large or small image buffering? Can you please advise? In my application, I need the camera to run at 7.5 fps, 8-bit monochrome, to detect an object on a conveyor. Once the camera detects that the object is in the middle of its ROI, the image is sent for processing; otherwise, it continues to monitor. The same goes for the second camera (which inspects the other side of the product). Due to space constraints, I cannot add sensors for triggering (which is why I need 7.5 fps).

 

Unfortunately, I have no choice other than to use the CVS. At the time we purchased it, I wasn't aware of the EVS (I'm not sure when it was released).

Message 5 of 7

Hi Shazlan,

 

I haven't seen the -50150 error before, but it comes from some of the internal OS API abstractions that many drivers are written against. My guess is as good as yours; based on the description, it seems that some internal state is inconsistent. Given what you have described, it seems likely that this is due to hitting memory constraints.

 

Keep in mind that a LabVIEW RT system is a pretty complex beast with many simultaneous threads running and so out-of-memory conditions can create some very nasty side effects where different components start returning failures at different times. Even things as simple as translating an error into a string ultimately require memory along the way to construct that error description in some cases! I suspect that for whatever reason some underlying component is unable to properly identify and report the out-of-memory condition, or was in some code path where recovering from a downstream memory allocation failure would have required allocating memory in some fashion.

 

You might want to try monitoring your application to see how close you are to running out of memory in various states. You can monitor the memory by looking at the system in MAX, attaching the System Manager within LabVIEW, or directly instrumenting your code with some of the VIs on the RT palette. You may be able to reduce your memory footprint by uninstalling various drivers or optional components from your target that are unneeded.

 

As to the amount of buffers you should allocate, it of course depends on your application. There are several ways additional buffering helps you:

- If you have a larger amount, the driver has a higher likelihood of ensuring that new buffers are attached to the DMA descriptors of your FireWire hardware and none are skipped due to no buffers being available. Realistically though, CPUs are fast enough that with at least 3 buffers it is pretty easy to ensure that you can copy out your data and reattach the buffers to hardware fast enough to never lose buffers due to having no place to DMA them.

- If you are trying to process *every* image, then having a larger amount of buffers ensures that even if the peak time needed to process a single image is longer than your frame period, you will still be able to process every image, as long as your average processing rate is greater than your frame rate. This only helps if your application is designed to accommodate having a "pipeline" of images to be processed and your output/result does not need to be done before acquiring the next image.

- If you for whatever reason want a "history" of images (maybe you process every 10th image and, if you see something interesting, go back and process the 9 before it as well, or just save them to disk).

 

Looking at the code you provided, you are always getting the Last/Newest image, so you don't appear to be trying to process every image. Given that you are likely trying to do something once you detect the object is in the center of your ROI, you probably don't want that action to happen based on an image that was acquired several seconds earlier and now the object is no longer even on the belt. Thus, I'd keep your buffer count as minimal as possible and continue just getting the latest.

 

I'd also check what kind of image your camera is returning. If it is returning a color image but you are simply converting it back to greyscale for processing, you might as well configure the camera to return a greyscale image directly and save a lot of memory. You can likely change the video mode and/or pixel format in MAX and save the configuration.

 

Eric

 

 

Message 6 of 7

Hi Eric,

 

Thank you for your explanation. It helps me understand things more clearly. I was hoping it had something to do with the settings or programming (something I could fix), but it seems the issue is with the hardware. I think I can get away with 3.75 fps, but I need to read and monitor every frame the camera captures. I haven't done this personally, but I believe it can be done by changing the IMAQdx buffer mode to 'Buffer Number' (as opposed to 'Last') and wiring the number of the buffer frame I want to acquire. This buffer number would increase by 1 each time, and once it reaches the maximum, I would reset it back to zero (0). Am I right? Also, am I right to say that once the camera has filled up the buffers, it will overwrite the first image in the ring?
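To check my own understanding of the ring behaviour I'm describing, here is a toy sketch of the concept (just the idea, not the IMAQdx API; the names are mine): frame numbers count up forever, the ring slot is the number modulo the ring size, and once the ring wraps, only the most recent RING_SIZE frames are still available.

```python
# Toy ring-buffer arithmetic: cumulative frame numbers map onto a
# fixed-size ring, and old frames get overwritten once it wraps.

RING_SIZE = 30

def slot_for(buffer_number):
    """Ring slot that cumulative frame `buffer_number` lands in."""
    return buffer_number % RING_SIZE

def oldest_available(latest_buffer_number):
    """Oldest frame still in the ring once acquisition has wrapped."""
    return max(0, latest_buffer_number - RING_SIZE + 1)
```

So frame 30 overwrites the slot frame 0 used, and by the time frame 45 arrives, frames 0 through 15 are gone.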

 

What worries me is that I hit the memory ceiling even before any processing has taken place. In my application, there will be three to four rejection criteria that I need to detect. The way I plan to do this is by creating an image (using IMAQ Create) for each process and destroying it once the processing is done and I have the results. To save processing time, I was planning to run all 3 or 4 processes in parallel. Now, with such limited memory, should I change my plan to processing one criterion at a time (IMAQ Create for criterion 1 - Process 1 - Destroy - IMAQ Create for criterion 2 - Process 2 - Destroy - ...)? Can someone advise on the best programming structure for this?
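Here is my rough arithmetic for the memory side of that trade-off (assuming one full-frame 8-bit working copy per criterion, which is my guess at the dominant cost; any extra intermediate images would scale this up):

```python
# Peak working-image memory: sequential keeps one full-frame copy alive,
# parallel keeps one per criterion alive at the same time.

FRAME_BYTES = 1280 * 960   # one Mono 8 frame
CRITERIA = 4

sequential_peak = FRAME_BYTES             # create, process, destroy, repeat
parallel_peak = FRAME_BYTES * CRITERIA    # all working images alive at once
print(f"sequential: {sequential_peak / 2**20:.1f} MiB, "
      f"parallel: {parallel_peak / 2**20:.1f} MiB")
```

A few MiB either way may not sound like much, but on a 128 MB target already near its ceiling it could matter.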

 

Lastly, is there a way for me to estimate the amount of RAM I need to run my program? The CVS-1456 has 128 MB of RAM. How much 'extra' do I need to leave unused for the machine to operate efficiently without errors?

 

Thank you. 

Message 7 of 7