
When Do I Have to Use Frame Grabbers?

Hello to all. First of all, thanks a lot for reading this post.

I would like to do Pattern Matching + Golden Template to detect defects in labels.

I am using a 4k-resolution line-scan camera.

The label is 45 x 25 cm, so the image resolution is 4096 px x 2400 px, which is a large image.

The other difficulty is the inspection speed: I need to inspect each label in 108 milliseconds.

Pattern Matching + Golden Template takes 900 milliseconds at that image resolution (4096 px x 2400 px)...

 

So how can I improve the inspection time? Using frame grabbers?

 

Thanks a lot.

 

Message 1 of 6

You need to be more specific. "Frame grabber" is a term used for devices that convert a camera signal (previously often an analog video signal, nowadays usually a digital camera bus such as Camera Link, DVI, SDI, LVDS, GigE, MIPI) into an image frame for processing in the computer. While different buses have different speeds, of course, you won't be able to speed up your processing with a frame grabber. It is the essential part needed to even get your images from a camera into your computer, and it is completely independent of the processing that you perform on the images afterwards.

If your processing is too slow for what you need, that may be because the processing algorithms used are overly complex or inefficient. If you are sure you are using the correct and optimal processing, you will have to look for ways to speed that processing up. There are several possibilities, and they all tend to involve some form of parallelization.

One would be to use several computers in parallel and let each of them process one image; with about 10 computers you would end up with an interval of about 90 ms for the processing of one image. Or you can beef up your computer to get more horsepower. You can also look into having the Vision algorithms use all the available CPU cores instead of running everything on one core.

Finally, there is the possibility of pushing some of the processing to a GPU. This often sounds perfect in theory, but most of the speedup from executing those algorithms on a highly parallel GPU architecture is nullified by the extra overhead of transferring your huge image buffers to GPU memory and then back into main memory.
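The multi-core option amounts to tile-level parallelism: split the frame into strips, inspect each strip concurrently, and merge the results. A minimal plain-Python sketch of the idea (all helper names here are hypothetical, and a real implementation would use the Vision library rather than per-pixel loops; in LabVIEW this corresponds to parallel For Loops or reentrant VIs working on image ROIs):

```python
from concurrent.futures import ThreadPoolExecutor

def count_defects_in_strip(strip, golden_strip, threshold=30):
    """Compare one horizontal strip of the image against the golden
    template and count pixels whose difference exceeds the threshold."""
    return sum(
        1
        for row, golden_row in zip(strip, golden_strip)
        for px, gpx in zip(row, golden_row)
        if abs(px - gpx) > threshold
    )

def parallel_defect_count(image, golden, workers=4):
    """Split the image into `workers` horizontal strips, inspect them
    concurrently, and sum the per-strip defect counts."""
    h = len(image)
    bounds = [(i * h // workers, (i + 1) * h // workers) for i in range(workers)]
    # A thread pool shows the structure; for CPU-bound pure-Python work
    # you would need processes or a library that releases the GIL to
    # actually gain speed on multiple cores.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [
            pool.submit(count_defects_in_strip, image[a:b], golden[a:b])
            for a, b in bounds
        ]
        return sum(f.result() for f in futures)
```

The per-strip results are independent, which is what makes the split safe; only the final sum joins them back together.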

Rolf Kalbermatter
My Blog
Message 2 of 6

@rolfk wrote:

You need to be more specific. [...] While different buses have different speeds, of course, you won't be able to speed up your processing with a frame grabber. It is the essential part needed to even get your images from a camera into your computer, and it is completely independent of the processing that you perform on the images afterwards.



Thanks a lot for replying.

The problem is the large image resolution (4096 x 2400 px). The same vision algorithm takes around 70 ms on an 800 x 600 px image...

I have checked many articles on the internet, and most of them say that if you have to take a picture in a really short period of time, the best way is to use Camera Link. Is that right?

With Camera Link I have to use a frame grabber... but is this frame grabber only an interface from the camera to the PC? Doesn't the card make the PC work less? If it doesn't, what are the advantages of using Camera Link and a frame grabber?

Thanks a lot again.

Message 3 of 6

@Alvaro.S wrote:

So how can I improve the inspection time?


Obviously, by lowering the resolution. ;) Sounds sarcastic, but do you really need 0.1 mm resolution? 2K will make the entire process 4X faster.
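The 4X figure comes from pixel count: halving each linear dimension (4096 x 2400 to 2048 x 1200) leaves a quarter of the pixels to process. A toy 2x2-averaging downsample in plain Python (illustrative only; in practice you would reduce the acquisition resolution at the camera or use a Vision resample function rather than pixel loops):

```python
def downsample2x(image):
    """Halve each dimension by averaging 2x2 pixel blocks; a quarter
    of the pixels means roughly 4x less work for the matcher."""
    return [
        [(image[r][c] + image[r][c + 1]
          + image[r + 1][c] + image[r + 1][c + 1]) // 4
         for c in range(0, len(image[0]) - 1, 2)]
        for r in range(0, len(image) - 1, 2)
    ]
```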

 

Template matching is typically slow, as the entire pattern is convolved with the original. You might speed things up by pattern matching subsets of the template against subsets (ROIs) of the image.
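The cost of a naive match scales with the number of candidate positions tried, so restricting the search region directly cuts the work. A deliberately simple sum-of-absolute-differences search in plain Python (hypothetical helpers, not the NI Vision API; real pattern matching uses pyramids and normalized correlation, but the ROI effect is the same):

```python
def sad(image, template, top, left):
    """Sum of absolute differences between the template and the image
    patch whose top-left corner is (top, left); smaller = better match."""
    return sum(
        abs(image[top + r][left + c] - template[r][c])
        for r in range(len(template))
        for c in range(len(template[0]))
    )

def match_in_roi(image, template, roi):
    """Search for the template only inside roi = (top, left, height,
    width); shrinking this rectangle is what buys the speed-up."""
    top, left, h, w = roi
    th, tw = len(template), len(template[0])
    return min(
        ((r, c) for r in range(top, top + h - th + 1)
                for c in range(left, left + w - tw + 1)),
        key=lambda pos: sad(image, template, pos[0], pos[1]),
    )
```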

 

Or go for another algorithm. For instance, simply subtracting two aligned images will also 'highlight' their differences, and will be a lot faster.
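The aligned-subtraction idea in a few lines of plain Python, assuming grayscale images as nested lists and alignment already done (in LabVIEW the absolute-difference and threshold steps would each be a single Vision operation):

```python
def golden_template_diff(image, golden, threshold=30):
    """Subtract a pre-aligned golden template from the inspected image
    and return a binary defect mask (1 = difference above threshold)."""
    return [
        [1 if abs(px - gpx) > threshold else 0
         for px, gpx in zip(row, grow)]
        for row, grow in zip(image, golden)
    ]
```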

 

A frame grabber only makes sense if the acquisition part is the bottleneck, but even then it might not be faster at all...

Message 4 of 6

@Alvaro.S wrote:

[...] With Camera Link I have to use a frame grabber... but is this frame grabber only an interface from the camera to the PC? Doesn't the card make the PC work less? If it doesn't, what are the advantages of using Camera Link and a frame grabber?

You're mixing things up here. You haven't yet mentioned how you get the images that you then process. The overall processing time for a single image frame is indeed tACQ (acquisition time) + tProc (processing time). tACQ mainly depends on how you read in the image and could be determined by the frame grabber. You don't describe how your program works, but if, for instance, you simply read test images from disk, your "frame grabber" here would be the disk. Unless you are using a very old rotating disk, the time to read even such an image from disk into memory is almost certainly very small compared to tProc. Even if tACQ amounts to a significant portion of the allowable 108 ms interval, you could decouple the processing from the acquisition by having one loop do the acquisition and pass each image through a queue to a second loop that performs the processing/analysis.

 

But if the main part of the 900 ms you mentioned is spent processing (analyzing) the image, you can use the fastest frame-grabbing device available and still not get it done in 108 ms overall.
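The acquisition/processing split described above is the classic producer/consumer pattern (in LabVIEW: two loops connected by a queue). A plain-Python sketch with hypothetical acquire/process callables standing in for the camera read and the Vision analysis:

```python
import queue
import threading

def run_pipeline(acquire, process, n_frames):
    """Producer/consumer pipeline: one thread acquires frames while a
    second thread processes them, so tACQ and tProc overlap instead of
    adding up for every frame."""
    frames = queue.Queue(maxsize=4)   # bounded buffer between the loops
    results = []

    def producer():
        for i in range(n_frames):
            frames.put(acquire(i))
        frames.put(None)              # sentinel: no more frames

    def consumer():
        while True:
            frame = frames.get()
            if frame is None:
                break
            results.append(process(frame))

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```

The bounded queue keeps the producer from racing arbitrarily far ahead of the consumer, which is exactly the back-pressure a LabVIEW queue with a fixed size gives you.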

Rolf Kalbermatter
My Blog
Message 5 of 6

Which version of Vision Development Module are you using?

I am asking because, starting with the 2017 release, we added a new feature called Defect Map, which essentially combines pattern matching and defect inspection. Using this feature, you will get some performance improvement.

Also, which pattern matching algorithm are you using (low discrepancy, value pyramid, gradient pyramid, geometric matching)? Do you need rotation-invariant matching, or does your part not rotate at all?

If you attach the Vision Assistant script, an image and template file you're using (if you can share them), we can help you optimize the parameters to use to improve performance on this large image/template.

If you're using a pyramid algorithm:

- Try setting the max pyramid level to store to 0 when you learn the template.
- Try increasing the max pyramid level.
- Try disabling subpixel accuracy.

 

 

Message 6 of 6