Machine Vision


Locating Billiard Balls

Unless you have different types of lights in different locations or colored lights, moving the balls around should not affect the hue or saturation.  It will affect the intensity, but not the others (with the exception of very dark areas).  A green ball should have the same hue and saturation levels when it moves from a bright to medium lighting situation.  Only the intensities should change significantly.  The only problems are overlighting (washing out colors to a bright white spot) and underlighting (so dark that you can't really make out the color).

 

This is how green screens work in video.  The computer looks for the specific hue and saturation of the green background, and shadows don't affect it much as long as there is enough light to measure the color.  The background is easily removed from the remaining image.
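As a rough illustration of the same idea outside of LabVIEW, here is a minimal Python/OpenCV sketch (the file name and the hue/saturation bounds are placeholders to tune for your table); it thresholds on hue and saturation only and leaves the brightness range wide open:

import cv2
import numpy as np

img = cv2.imread("table.png")                # placeholder file name for a captured frame
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)   # hue, saturation, value (brightness)

# Accept any brightness so shadows and bright spots on the felt still match;
# only the hue and saturation bounds select the green.
lower = np.array([40,  80,   0])             # assumed bounds for green felt
upper = np.array([80, 255, 255])
felt_mask  = cv2.inRange(hsv, lower, upper)
balls_mask = cv2.bitwise_not(felt_mask)      # everything that is not green felt

cv2.imwrite("balls_mask.png", balls_mask)

The equivalent inside Vision Assistant is a color threshold in HSL/HSV mode with the luminance/value range left wide.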

 

Bruce

Bruce Ammons
Ammons Engineering
Message 11 of 27

Looking closely at your image, I see a lot of compression artifacts.  When a camera compresses an image, it loses a lot of the color detail.  The difference is not significant to the human eye, but it means a lot to the computer, especially when you are looking at hue and saturation of small areas.

 

Is this a cheap webcam?  Is there any way you can reduce the compression or eliminate it?  Can you get a higher quality camera that doesn't have compression?

 

Bruce

Bruce Ammons
Ammons Engineering
Message 12 of 27

BruceAmmons wrote:

Looking closely at your image, I see a lot of compression artifacts.  When a camera compresses an image, it loses a lot of the color detail.  The difference is not significant to the human eye, but it means a lot to the computer, especially when you are looking at hue and saturation of small areas.

 

Is this a cheap webcam?  Is there any way you can reduce the compression or eliminate it?  Can you get a higher quality camera that doesn't have compression?

 

Bruce


It's a Microsoft LifeCam Cinema 720p that can give us up to 1280x720, but we set it to 800x600 to make the processing a little faster, since we are using a myRIO and it runs very slowly. If you think that increasing the resolution would improve our results, I wouldn't mind the extra processing time.

 

Another question I have about the image is whether the format (JPEG/YUY2) or even the fps rate makes any difference to the "compression" you mentioned. The photos were taken using the following settings in NI Vision Acquisition Express:

 

Video Mode: 800 x 600 JPEG 30.00 fps

Brightness: Manual 200

Contrast: Manual 4

Saturation: Manual 155

Sharpness: Manual 25

Backlight Compensation: Manual 0

Power Line Frequency: 60 Hz

Exposure: AutoAperaturePriority

White Balance: Manual 4087

Focus: Manual 7

Zoom: Manual 0

Pan: Manual 0

Tilt: Manual 0

 

I'd be even more grateful if you could tell me how to avoid that "compression".

 

lifecam.jpg

Message 13 of 27

The problem is the JPEG compression.  The camera uses it to make the image smaller for sending to the computer over USB2.  The computer decompresses the image for display, etc.  If the camera has an option for a lower frame rate with no JPEG compression, that would be ideal.  If there is a setting for the amount of compression, the closer you can get to zero compression the better.

 

I assume this is a project for a class, and you can't spend more money on a better camera.  You would get a much higher quality image with a machine vision camera and lens, with no compression.  It would take a while for the image to transfer over USB2, though.

 

I think the current resolution should be fine.  I don't think I would go to a higher resolution, because it takes more time to send more pixels (or they get compressed even more).
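If you ever grab frames outside of Vision Acquisition Express, requesting an uncompressed mode at a low frame rate looks roughly like this (a Python/OpenCV sketch; the device index, FOURCC, and frame rate are assumptions, and whether the LifeCam driver honors them depends on the driver):

import cv2

cap = cv2.VideoCapture(0)                    # device index 0 is an assumption

# Ask for uncompressed YUY2 instead of MJPG/JPEG, at a low frame rate.
# Raw frames need far more USB bandwidth than JPEG, which is why cheap
# cameras usually only offer their uncompressed modes at reduced fps.
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"YUY2"))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)
cap.set(cv2.CAP_PROP_FPS, 7.5)

ok, frame = cap.read()
if ok:
    # Save losslessly; re-saving as JPEG would add the compression right back.
    cv2.imwrite("frame.png", frame)
cap.release()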

 

Bruce

Bruce Ammons
Ammons Engineering
Message 14 of 27

Here is an example of the compression.  If you look at the image c.png from your first post, and zoom in enough to see the individual pixels, you will notice that the pixels are in 3x3 groups.  The edges between different colors are jagged and follow the 3x3 grid.  If you look at the numbers on the balls, they are a blurry mess.  Without compression, you would be able to read the numbers very clearly.  If you look at the purple stripe ball in the upper left corner, the stripe is only two or three 3x3 blocks wide.  The purple color is uneven, which makes it almost impossible to identify a single hue.  The colors are all off at the edges of all the balls, which makes it very difficult to identify when the colored area is a thin strip.
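One way to see this with numbers (a small Python/OpenCV sketch; the ROI coordinates are made up, so move them onto a single stripe) is to measure the hue spread inside a patch that should be one solid color:

import cv2
import numpy as np

img = cv2.imread("c.png")                    # the image from the first post
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

roi = hsv[200:215, 300:315]                  # small patch inside one colored stripe (adjust)
hue = roi[:, :, 0].astype(np.float32)        # note: OpenCV hue runs 0-179 and wraps,
sat = roi[:, :, 1].astype(np.float32)        # so red patches need circular statistics

# A heavily compressed patch shows a wide hue spread even though the real
# object is a single solid color.
print("hue mean/std:", hue.mean(), hue.std())
print("sat mean/std:", sat.mean(), sat.std())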

 

Bruce

Bruce Ammons
Ammons Engineering
Message 15 of 27

You are right. We can't get another camera because using this one is mandatory.

 

There's also the YUY2 image format. I'm going to test it and reduce the fps from 30 to 7.5 to see if the image quality improves.

About the compression, I hope these changes can help us reduce it.

As soon as possible I'll post the results below.

 

 

 

Message 16 of 27

There are a couple of solutions for this project.
Machine vision method: you can mark your balls with a phosphorescent material (excited by UV light) and add a UV ring light around your camera. In that case all of the balls will shine.
Image processing method: you have to test some edge-detection algorithms to find the best method of finding the edges of the balls, then make a binary image from that edge information and add it to the main image. In that case the area around each ball turns white, separating the balls from each other so they can be detected.
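Not an exact script, just a minimal Python/OpenCV sketch of the edge idea (the Canny thresholds and kernel size are guesses to tune on your images):

import cv2
import numpy as np

img  = cv2.imread("table.png")               # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)     # smooth compression artifacts first

edges = cv2.Canny(gray, 50, 150)             # binary image of the edges
edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))

# Paint the edge pixels white in the original image so touching balls are
# separated by a white border and can be detected one by one.
separated = img.copy()
separated[edges > 0] = (255, 255, 255)
cv2.imwrite("separated.png", separated)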


Message 17 of 27

Also, it is better to remove the background.
For example, you can capture an image when there are no balls on the table, and then find the difference between the image of the empty table and the image of the table with balls; the difference is the balls. You can also take this difference area, divide it by the area of one ball, and find the number of balls in each location.
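A minimal Python/OpenCV sketch of that background-subtraction idea (the threshold and the single-ball area are placeholders to measure on your own setup):

import cv2
import numpy as np

empty = cv2.imread("empty_table.png", cv2.IMREAD_GRAYSCALE)   # table with no balls
scene = cv2.imread("table_with_balls.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(scene, empty)             # only the balls (and their shadows) change
_, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

# Rough ball count: foreground area divided by the area of one ball.
one_ball_area = 900                          # pixels per ball, measure for your camera
foreground = int(mask.sum() // 255)
print("approx. number of balls:", foreground // one_ball_area)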

Message 18 of 27

Hatef.fouladi wrote:

There are a couple of solutions for this project.
Machine vision method: you can mark your balls with a phosphorescent material (excited by UV light) and add a UV ring light around your camera. In that case all of the balls will shine.
Image processing method: you have to test some edge-detection algorithms to find the best method of finding the edges of the balls, then make a binary image from that edge information and add it to the main image. In that case the area around each ball turns white, separating the balls from each other so they can be detected.



Hatef, thank you for your participation.

 

- Unfortunately we are not allowed to change any property of the balls.

 

- I've tested some edge-detection tools in Vision Assistant > Machine Vision and couldn't find the edges of the balls properly. I understood your idea, and it's a very clever solution; have you ever done it? I would be very thankful if you could help me with the tools/script I should use...

 

 

Message 19 of 27

BruceAmmons wrote:

The problem is the JPEG compression.  The camera uses it to make the image smaller for sending to the computer over USB2.  The computer decompresses the image for display, etc.  If the camera has an option for a lower frame rate with no JPEG compression, that would be ideal.  If there is a setting for the amount of compression, the closer you can get to zero compression the better.

 

I assume this is a project for a class, and you can't spend more money on a better camera.  You would get a much higher quality image with a machine vision camera and lens, with no compression.  It would take a while for the image to transfer over USB2, though.

 

I think the current resolution should be fine.  I don't think I would go to a higher resolution, because it takes more time to send more pixels (or they get compressed even more).

 

Bruce



Hey Bruce, I tested the other image format a few hours ago, and the results were even worse...

 

Here you can see that the ball numbers are harder to identify (take a look at the number 5).

 

YUY2:

a.png

 

 

JPEG:

No_LED_F.png

Message 20 of 27