
Machine Vision


find defects in a 3D image

Hello,

I'm working on a machine that uses a laser scanner to measure gears.
I acquire an XYZ point cloud for each tooth of the gear and compare it with a "reference" point cloud, which is my template. The result of this comparison is an image that represents the Z difference between the source and template point clouds. The difference image spans a dynamic range of 0.3 mm in Z over 256 gray levels: a gray value of 128 means the difference is 0, values < 128 are defects (holes), and values > 128 are defects (peaks).
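For concreteness, the encoding described above (0.3 mm of Z dynamic range mapped onto 256 gray levels, centred on 128) could be sketched like this in Python with NumPy. The exact scaling is an assumption, since the post does not give the formula:

```python
import numpy as np

SPAN_MM = 0.3                      # full Z dynamic range encoded in the image
LEVELS = 256
MM_PER_LEVEL = SPAN_MM / LEVELS    # ~1.2 um of Z difference per gray level

def encode_dz(dz_mm):
    """Map a signed Z difference (mm) to 8-bit gray: 128 = no difference."""
    gray = np.round(128.0 + dz_mm / MM_PER_LEVEL)
    return np.clip(gray, 0, 255).astype(np.uint8)

def decode_gray(gray):
    """Recover the approximate Z difference (mm) from the 8-bit image."""
    return (gray.astype(np.float64) - 128.0) * MM_PER_LEVEL

dz = np.array([[0.0, -0.05, 0.05]])   # mm: flat, a hole, a peak
g = encode_dz(dz)                     # 128 / below 128 / above 128
```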

Now I have to develop the final part of the algorithm, which discriminates real defects from noise (coming from the measurement error on the gear itself).
I tried threshold methods, but they don't work reliably.


Attached are a couple of difference images that contain defects.

Thanks for any suggestions.

Alessandro Ricco
Message 1 of 17

Hello,

 

How accurately do you position each tooth with respect to the reference tooth? Or do you align the measured tooth with the reference using, for example, iterative closest point (ICP), and then calculate the difference in the depth direction?

 

A 0.3 mm difference range seems quite small. What is the accuracy of the triangulation sensor? What is the size of the measuring region? How did you calibrate the sensor?

 

Can you explain a bit more about the threshold approach? Regarding the measuring error: what if you measure a flat surface of the same material (to account for reflectivity) and calculate the standard deviation? Do you filter the measured values? That can also reduce the measuring noise.
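The flat-plate idea can be sketched as follows. The plate data, the 4 µm noise level, and the 3-sigma defect threshold are illustrative assumptions, not values from the thread:

```python
import numpy as np

# Hypothetical scan of a flat plate of the same material: ideally constant Z,
# so any spread is sensor + surface noise. The 4 um sigma is illustrative.
rng = np.random.default_rng(0)
flat_scan_mm = 10.0 + rng.normal(0.0, 0.004, size=(200, 200))

# Remove residual tilt with a least-squares plane before taking the deviation.
rows, cols = np.mgrid[0:flat_scan_mm.shape[0], 0:flat_scan_mm.shape[1]]
A = np.column_stack([rows.ravel(), cols.ravel(), np.ones(flat_scan_mm.size)])
coef, *_ = np.linalg.lstsq(A, flat_scan_mm.ravel(), rcond=None)
residual = flat_scan_mm.ravel() - A @ coef

sigma = residual.std()        # empirical measuring noise, in mm
threshold_mm = 3.0 * sigma    # one common choice for a defect threshold
```

Anything beyond `threshold_mm` in the difference image would then be a defect candidate rather than noise.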

 

Best regards,

K

 

 


https://decibel.ni.com/content/blogs/kl3m3n



"Kudos: Users may give one another Kudos on the forums for posts that they found particularly helpful or insightful."
0 Kudos
Message 2 of 17

Hi Klemen.

 

I'm using the ICP algorithm to align the source and template data, and then I calculate the depth image in the Z direction.

 

The theoretical resolution of the sensor is 4 µm; I measure about 25 mm in the Y direction, with a depth of field of 4 mm in Z.

 

 

The current approach is to threshold with a fixed pair of values, then remove small particles, then find objects and filter out those with an area below a limit.

 

The threshold method is not reliable because it doesn't take into account the distance between zones with the same gray value; a defect is typically a cluster of very close pixels outside the "good" range.
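A minimal sketch of that pipeline (threshold the out-of-range pixels, group them into connected clusters, keep only clusters above a minimum area) might look like this. The band limits and minimum area are made-up illustrative values, and the BFS labeling stands in for IMAQ particle analysis:

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """4-connected component labeling via BFS (a stand-in for IMAQ particle analysis)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for r0, c0 in zip(*np.nonzero(mask)):
        if labels[r0, c0]:
            continue
        current += 1
        labels[r0, c0] = current
        queue = deque([(r0, c0)])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = current
                    queue.append((rr, cc))
    return labels, current

# Illustrative limits: the "good" gray band and the minimum defect area.
LOW, HIGH, MIN_AREA = 118, 138, 5

img = np.full((20, 20), 128, dtype=np.uint8)
img[5:9, 5:9] = 90      # a 16-pixel cluster: a plausible real "hole" defect
img[15, 2] = 200        # an isolated noisy pixel

mask = (img < LOW) | (img > HIGH)
labels, n = label_regions(mask)
areas = np.bincount(labels.ravel())[1:]                 # area of each particle
defects = [i + 1 for i, a in enumerate(areas) if a >= MIN_AREA]
```

The area filter is what encodes "a defect is a cluster of close pixels": the isolated noisy pixel survives the threshold but is rejected by `MIN_AREA`.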

 

The error of the measuring system (machine + laser) is about ±0.01 mm. I don't filter the data at acquisition time, only in the last stage of image processing, just before thresholding.

 

Regards,

 

Alessandro Ricco
Message 3 of 17

Hello,

 

If I understand correctly (please correct me if I am wrong), you would like to isolate only the region where the defect occurs?

 

If so, try filtering the resulting depth image with a box or Gaussian filter, and then segmenting it using, for example, the GrabCut algorithm. You would need to seed some positive (foreground) and negative (background) pixels.
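A sketch of such a seed mask, using the numeric label values that OpenCV's `grabCut` expects; the seed coordinates here are purely illustrative:

```python
import numpy as np

# OpenCV's GrabCut mask labels (the numeric values of cv2.GC_BGD, cv2.GC_FGD,
# cv2.GC_PR_BGD and cv2.GC_PR_FGD).
GC_BGD, GC_FGD, GC_PR_BGD, GC_PR_FGD = 0, 1, 2, 3

h, w = 100, 100
mask = np.full((h, w), GC_PR_BGD, dtype=np.uint8)   # default: probable background

# Seed obvious background at the border and foreground where a defect candidate
# was found; these coordinates are purely illustrative.
mask[:5, :] = GC_BGD
mask[-5:, :] = GC_BGD
mask[:, :5] = GC_BGD
mask[:, -5:] = GC_BGD
mask[40:60, 40:60] = GC_PR_FGD     # probable defect region
mask[48:52, 48:52] = GC_FGD        # pixels we are certain belong to the defect

# With OpenCV available, segmentation would then run roughly as:
#   cv2.grabCut(bgr_image, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)
```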

 

One more thing: you are calculating the difference between two depth images (Z-direction values). Do they overlap? The 3D data after ICP does overlap, but that does not mean that the pixel indices of the two depth images correspond after ICP.

 

Best regards,

K

 

 


Message 4 of 17

Hello.

 

Correct, I want to isolate the region containing the defect.

 

I'm experimenting with GrabCut. Can you explain what you mean by "You would need to seed some positive - foreground and negative - background pixels"?

 

You are right: ICP registers the clouds, but the indices don't overlap. I tried calculating the Euclidean distance before subtracting Z, pairing each point with its nearest neighbour, but it is too slow. I also tried interpolating my template cloud and using the interpolated version for the subtraction, but again it is too slow.
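For what it's worth, the nearest-neighbour pairing does not have to be a per-point loop; a chunked, vectorized search (or a KD-tree) is usually much faster. A sketch, under the assumption that "nearest" is measured in XY:

```python
import numpy as np

def nn_z_difference(source, template, chunk=2048):
    """For each source point, the Z difference to its nearest template point
    (nearest in XY). Chunked and vectorized to avoid an O(N*M) Python loop."""
    diffs = np.empty(len(source))
    txy = template[:, :2]
    for start in range(0, len(source), chunk):
        s = source[start:start + chunk]
        d2 = ((s[:, None, :2] - txy[None, :, :]) ** 2).sum(axis=2)  # squared XY distances
        nearest = d2.argmin(axis=1)
        diffs[start:start + chunk] = s[:, 2] - template[nearest, 2]
    return diffs

rng = np.random.default_rng(1)
template = rng.uniform(0.0, 25.0, size=(500, 3))
source = template.copy()
source[:, 2] += 0.01          # simulate a uniform 10 um offset of the measured tooth
dz = nn_z_difference(source, template)
```

For very large clouds a KD-tree (e.g. `scipy.spatial.cKDTree`, or PCL's search structures) would scale better still.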

 

Regards

Alessandro Ricco
Message 5 of 17

Hello Alessandro,

 

by seeding the values I meant that the segmentation procedure (GrabCut, for example) needs an input labeled mask.
The options are:
The options are:

 

1. Obvious background

2. Obvious foreground

3. Possible background

4. Possible foreground

 

For more information, you can check the OpenCV's GrabCut arguments explanation here:

 

http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html#grabcut

 

You basically need to provide some information about the underlying pixels before running the algorithm. The same goes for GrowCut segmentation, for example. I know there is a GrowCut segmentation algorithm written in Matlab, but I am not sure about OpenCV.

 

Here is a working .dll implementation of OpenCV's GrabCut algorithm in LabVIEW:

 

https://decibel.ni.com/content/blogs/kl3m3n/2013/07/30/color-histogram-matching-and-grabcut-segmenta...

 

It uses an ROI rectangle, where all pixels outside the rectangle are treated as definite background pixels.

 

Regarding the difference in the depth direction, you would need to display the 3D point clouds after the ICP alignment and extract the Z-buffer values from both point clouds using the same viewpoint. You would basically just snap the depth image from the active viewpoint. PCL (the Point Cloud Library) has this option, I think. This should be much faster than your approach.

 

See: http://pointclouds.org/documentation/tutorials/range_image_creation.php

 

This should work after aligning both point clouds and using a proper sensor position. First get the depth image of one point cloud, then of the other, using the same sensor-orientation parameters. Then you just calculate the difference.

I have not tried this, but I think it should work. If you are interested, I can try (when I get the chance) to build a .dll that can be called from LabVIEW. That is, if you don't beat me to it! 🙂
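One way to sketch the "same viewpoint" idea without PCL is a simple orthographic Z-buffer: snap both aligned clouds onto the same fixed pixel grid and subtract. This is an illustration of the approach, not the .dll implementation:

```python
import numpy as np

def range_image(points, x_range, y_range, shape):
    """Orthographic 'Z-buffer' snap of an (N, 3) point cloud onto a fixed pixel grid.
    Rendering both aligned clouds with the same grid gives comparable depth images."""
    h, w = shape
    img = np.full((h, w), np.nan)
    cols = np.round((points[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * (w - 1)).astype(int)
    rows = np.round((points[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * (h - 1)).astype(int)
    ok = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    order = np.argsort(points[:, 2])        # ascending: the largest Z in a pixel wins
    r, c, z = rows[order], cols[order], points[order, 2]
    keep = ok[order]
    img[r[keep], c[keep]] = z[keep]         # later (larger-Z) writes overwrite earlier ones
    return img

# Two synthetic clouds sharing the same XY sampling, offset by 0.1 in Z.
rng = np.random.default_rng(2)
xy = rng.uniform(0.0, 10.0, size=(5000, 2))
cloud_a = np.column_stack([xy, np.zeros(len(xy))])
cloud_b = np.column_stack([xy, np.full(len(xy), 0.1)])

img_a = range_image(cloud_a, (0.0, 10.0), (0.0, 10.0), (32, 32))
img_b = range_image(cloud_b, (0.0, 10.0), (0.0, 10.0), (32, 32))
diff = img_b - img_a                        # NaN where a pixel received no points
```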

 

Hope this helps a bit.

 

Best regards,

K


Message 6 of 17

Hello Klemen,

I read the OpenCV documentation and understood the meaning of seeding.

 

I'm using your GrabCut example. It seems to work, but again it is not stable: the noise is very similar to the defects, so it is not easy to extract the defect zone.

 

I guess one thing to do is to improve the Z-difference image. Your method of aligning the viewpoints should work, but honestly I have no idea where to start, so I warmly accept your proposal to write a .dll 🙂

 

In the meantime I'll continue to experiment with the basic threshold method.

 

Just for reference, I'll attach a couple of the Z-difference images that I used with your GrabCut algorithm.

 

Thanks again.

 

 

Alessandro Ricco
Message 7 of 17

Hello,

 

I have prepared an example of creating a range image from 3D data. You can find it here:

 

https://decibel.ni.com/content/blogs/kl3m3n/2014/05/07/create-a-range-image-in-labview-from-3d-point...

 

I hope this helps. If you encounter any problems or errors, please leave a comment at the link above.

 

Regarding your images, there really is some noise, but the defects seem clear (I hope I am looking at the right thing; if not, they are not so clear 🙂). Have you tried any edge detection algorithms? Sobel, for example, or the Laplacian (with some additional steps), or any built-in edge detection functions? I see a larger intensity variation at the defects. Have you tried a box or Gaussian filter? Perhaps a median filter; I have used one to remove speckle noise in the past, if I remember correctly.
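The median-then-Sobel combination can be sketched in plain NumPy like this; the test image is synthetic, and in practice the IMAQ or OpenCV equivalents would be used:

```python
import numpy as np

def median3(img):
    """3x3 median filter (valid region only), e.g. to knock down speckle noise."""
    h, w = img.shape
    windows = [img[r:h - 2 + r, c:w - 2 + c] for r in range(3) for c in range(3)]
    return np.median(np.stack(windows), axis=0)

def sobel_mag(img):
    """Sobel gradient magnitude; defect boundaries show up as large values."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for r in range(3):
        for c in range(3):
            win = img[r:h - 2 + r, c:w - 2 + c]
            gx += kx[r, c] * win
            gy += ky[r, c] * win
    return np.hypot(gx, gy)

# Synthetic difference image: flat gray 128 with a 90-valued "hole" defect.
img = np.full((12, 12), 128.0)
img[4:8, 4:8] = 90.0
smooth = median3(img)
mag = sobel_mag(smooth)       # large only around the defect boundary
```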

You could also try an FFT (and inverse FFT) and see if you can attenuate the noise frequencies (basically a convolution in the spatial domain).
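The FFT route might be sketched as a circular low-pass in the frequency domain; the cutoff fraction and the synthetic test image are illustrative choices:

```python
import numpy as np

def fft_lowpass(img, cutoff_frac=0.15):
    """FFT -> circular low-pass mask -> inverse FFT. Zeroing high spatial
    frequencies is equivalent to a convolution in the spatial domain."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2.0, xx - w / 2.0)
    F[radius > cutoff_frac * min(h, w)] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

rng = np.random.default_rng(3)
clean = np.outer(np.hanning(64), np.hanning(64))    # smooth low-frequency "signal"
noisy = clean + rng.normal(0.0, 0.05, clean.shape)  # broadband measuring noise
filtered = fft_lowpass(noisy)                       # closer to clean than noisy is
```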

Best regards,

K


Message 8 of 17

Hello,

 

Thanks for the example; it works well with your data, and I'm trying to adapt it to mine.

 

Yes, the defects are clear to the human eye, so as you suggested I tried a conventional approach: box filter -> convolution -> Sobel -> threshold -> blob detection. It seems to work reliably now.

 

But I have some concerns about ICP; I'll try to explain.

 

I build my data starting from a 2D matrix (the acquired distance image) so as to obtain three 1D arrays X, Y, Z containing only valid measurement points. Basically, X and Y are calculated (they depend on the resolution of the scanning system and on the size of the acquired image) and Z is the actual measurement.

 

When I execute ICP I again obtain three arrays X, Y, Z, now translated and rotated, and I want to reconstruct my 2D matrix (the distance image). My idea was to create a matrix of zeroes the size of the original distance image and place each Z at the position described by the X and Y arrays. But here I ran into trouble: X and Y are no longer integer values, as they were before the transformation, but fractional numbers, so I can't use them directly. What I'm doing now is casting the fractional numbers to int and using them as matrix indices... I know that's very dirty...
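Instead of truncating the fractional X and Y to integers, each Z can be distributed over the four neighbouring pixels with bilinear weights and normalised afterwards. A sketch, assuming the grid spacing is one unit per pixel:

```python
import numpy as np

def splat_bilinear(x, y, z, shape):
    """Scatter irregular (x, y, z) samples onto a regular grid. Each Z is shared
    among the 4 neighbouring pixels with bilinear weights, then normalised --
    smoother than truncating the fractional indices to int."""
    h, w = shape
    acc = np.zeros((h, w))
    wgt = np.zeros((h, w))
    x0 = np.floor(x).astype(int)
    y0 = np.floor(y).astype(int)
    fx = x - x0
    fy = y - y0
    corners = ((0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
               (0, 1, (1 - fx) * fy), (1, 1, fx * fy))
    for dx, dy, wt in corners:
        xi, yi = x0 + dx, y0 + dy
        ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
        np.add.at(acc, (yi[ok], xi[ok]), wt[ok] * z[ok])
        np.add.at(wgt, (yi[ok], xi[ok]), wt[ok])
    safe = np.where(wgt > 0, wgt, 1.0)
    return np.where(wgt > 0, acc / safe, np.nan)   # NaN where nothing landed

# Points at fractional positions, all with Z = 5.0.
x = np.array([1.3, 2.7, 3.5])
y = np.array([1.5, 2.2, 1.1])
z = np.full(3, 5.0)
img = splat_bilinear(x, y, z, (5, 5))
```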

 

I suppose this issue could be solved using your "3D buffer extractor" as described, but it wants three 2D matrices as input (I suppose the original scanned image, including invalid points), while again my data is in three 1D arrays.

 

Again, the final algorithm seems able to detect defects, but I'm not happy, because I know that the final transformation (from the ICP result back to a 2D image) introduces errors, and I would like to avoid that.

 

Thanks again

Alessandro Ricco
Message 9 of 17

Hello,

 

I am sorry, but I do not quite understand your reasoning regarding the re-acquisition/reconstruction of the depth image after the ICP algorithm. Does the ICP algorithm work OK now?

 

I've used the test data that you posted some time ago, and I think it is from before the ICP alignment. I did not perform the alignment; I only obtained the range image from both example datasets. So one of the 3D point clouds looks like this, right:

 

part.png

 

And the range images and the difference between the two:

 

RangeImageTest - Copy_FP.png

 

If I am not mistaken, the defect is visible (inside the red circle)?

 

To adapt the example (range-image reconstruction) to your data, just change the parameters of the called .dll function in LabVIEW from 2D to 1D: set rows = 1 and cols to the size of your 1D arrays.

 

Best regards,

K


Message 10 of 17