Machine Vision


Multiple Object Tracking Failure at Paths Intersection

- I am facing the issue only when the two rovers' paths intersect (or come close to each other)

- Tracking is based on a shape-adapted mean shift algorithm and background subtraction

- The tracked regions get confused at the intersection point, and the algorithm can't recognize which region belongs to which rover

- The camera is not perpendicular to the plane, and the perspective distortion is corrected by perspective calibration

- The rovers have different apparent sizes: they look bigger when close to the camera and smaller when far from it

- The goal is to get the position of each rover precisely (real-time localization); this works when the rovers are well separated, but fails when they intersect

 

A GIF illustrating the condition:

 

Any clue how to overcome this problem?

 

 

Message 1 of 10

Object tracking doesn't do a good job of handling objects that overlap and keeping them differentiated correctly when they separate again. Since the image looks pretty simple to process, you may have more success with a threshold, particle filtering to remove noise, and then particle analysis to find the objects you're interested in and get their position. Because you know one is bigger than the other, you can use the area to distinguish them. If the camera distortion is too great (i.e. the large object has a smaller area than the small object when it is far away and the small object is close to the camera), you can use a calibrated image and get the calibrated particle analysis area results to determine which one is the bigger one.
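As an illustration of that pipeline (threshold, noise filtering by minimum particle size, then particle analysis for area and centroid), here is a plain-Python sketch — the actual implementation would of course use the NI Vision VIs, and the image and threshold values below are made up:

```python
from collections import deque

def threshold(img, t):
    """Binarize a grayscale image (nested lists): 1 where pixel > t."""
    return [[1 if p > t else 0 for p in row] for row in img]

def find_particles(binary, min_area=1):
    """4-connected component labeling; returns area and centroid per
    particle. Components smaller than min_area are dropped (a crude
    stand-in for particle filtering to remove noise)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    particles = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                q, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(pixels) >= min_area:
                    area = len(pixels)
                    cy = sum(p[0] for p in pixels) / area
                    cx = sum(p[1] for p in pixels) / area
                    particles.append({"area": area, "centroid": (cx, cy)})
    # Largest particle first, so particles[0] is the bigger rover.
    return sorted(particles, key=lambda p: p["area"], reverse=True)
```

On a simple scene, sorting by area then lets you take `particles[0]` as the larger rover, as long as perspective doesn't invert the apparent sizes.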

 

Hope that helps,

Brad

Message 2 of 10

I will give it a try, switching from object tracking to the particle-filtering approach, and report the results here.

 

 

Because you know one is bigger than the other, you can use the area to distinguish them.

That only holds when the rovers are far apart from each other. The rovers are always moving all across the map, so in some parts of the field of view they have approximately the same apparent area (physically the rovers are the same size), and area alone cannot distinguish them.

 

you can use a calibrated image and get the calibrated particle analysis area results to determine which one is the bigger one.

I've tried applying the calibrated image to the live footage, but the result is a very sluggish operation (low FPS). Here is my old topic discussing this point:

http://forums.ni.com/t5/Machine-Vision/Low-FPS-after-applying-perspective-calibration-to-acquire-a/t...

Normally you don't apply correction to a live image.  It is usually applied to a single image before you do your analysis.

 

Message 3 of 10

I wasn't saying to correct the entire image (that is very slow); you can apply a calibration to an image without correcting it. When you apply a calibration, the image size doesn't change, but the image will carry calibration information that subsequent processing functions can use to produce real-world results. This is what I was suggesting, and it should be much faster: just apply the calibration without correcting the image, and the particle analysis will report real-world areas.
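The idea of attaching calibration info and converting only the results (rather than warping every frame) can be shown with a small Python sketch; the 3×3 homography `H` here stands in for the perspective calibration, and the numbers are made up:

```python
def apply_homography(H, x, y):
    """Map a pixel coordinate (x, y) to real-world coordinates using a
    3x3 perspective-calibration homography H (row-major nested lists).
    This converts a single measurement instead of correcting the image."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    X = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    Y = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return X, Y
```

Mapping just a few centroids per frame through `H` is orders of magnitude cheaper than warping every pixel of every frame.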

 

Hope that helps,

Brad

Message 4 of 10

I misunderstood it. Yeah, I've done the calibration already and the results are very good.

Message 5 of 10

@Joeynn wrote:

I misunderstood it. Yeah, I've done the calibration already and the results are very good.



Hi sir, I'm doing something similar for my university project: I have to track the movement of several marbles using a USB camera. The camera can be either pointing directly at the floor or at various angles towards it. Could I have your VIs, so I can learn from your code and adapt it to my project? I'm completely lost, as I have no prior experience with image processing. Thank you for your help, sir.

 

PS. I have managed to achieve single-object tracking using subtraction and the centroid block to get its position. But the centroid is limited to a single point on the image (the centre of energy of the whole image), so I'm unable to track more than one object.

Message 6 of 10


The camera can be either pointing directly to the floor or different angles towards the floor


The best option is to point the camera perpendicular to the floor, because that keeps the calibration uncomplicated: a simple calibration is enough. If the camera is not perpendicular to the floor, you will need to apply a perspective calibration instead.

 

Can I have your VIs so I can learn from your coding and adapt it my project?

There are two approaches to tracking, as far as I know: the first uses the object-tracking VI techniques, and the second is the particle-filtering approach. In my first post I used the object-tracking technique; later, I will post my trial of the second approach.

 

[Attached image: tr.png]

 

If you need to do a multiple-tracking task, you will need the following:

- Add more than one tracking session

- Create multiple ROIs, then ungroup them using the IMAQ Ungroup ROIs VI

- Save the ungrouped ROIs to an array

- Index the array to relate each ROI to its assigned object
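The bookkeeping in the steps above can be sketched in plain Python (the names are hypothetical stand-ins — the real code would use the tracking-session and ROI VIs):

```python
def ungroup(grouped_rois):
    """Analogue of IMAQ Ungroup ROIs: split a grouped ROI set into an
    array so each rover's ROI has a fixed index."""
    return list(grouped_rois)

def track_all(rois, track_one, frame):
    """Run one tracking session per ROI. Results come back in the same
    order as the input array, so index i always refers to the same rover."""
    return [track_one(frame, roi) for roi in rois]
```

The key point is that the array index, not the detection order, ties each ROI to its rover from frame to frame.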

Message 7 of 10

Here are my trials with binary image and particles after thresholding:

(Intersection Condition)

 

1- Locating the rovers against specified criteria using the IMAQ Count Objects 2 VI

 

- Unlike the shape-adapted mean shift, the bounding boxes merge at the intersection and are treated as one big bounding box. This is the main problem: the rovers can't be identified separately, so each rover can't be localized properly.

- After the intersection, the bounding boxes separate again, but the numbers are not assigned to the same rovers they labelled before the intersection.

 

Any possible solutions here?

 

2- Applying the object tracking technique on a binary image

 

 

 

The results are the same as in the first post; the intersection problem is not solved.
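For what it's worth, one generic way to keep the labels consistent through a merge/split — not an NI Vision feature, just a sketch under a constant-velocity assumption — is to predict each track's next position from its last velocity and reassign the detected centroids by nearest predicted position:

```python
def reassign_ids(tracks, detections):
    """Greedy nearest-neighbor assignment: match each track's predicted
    position (last position + last velocity) to the closest detection,
    so labels survive a merge/split at the path intersection.
    tracks: {id: {"pos": (x, y), "vel": (vx, vy)}}
    detections: list of (x, y) centroids from the current frame."""
    free = list(range(len(detections)))
    assignment = {}
    for tid, t in tracks.items():
        if not free:
            break  # fewer detections than tracks (blobs merged)
        px = t["pos"][0] + t["vel"][0]
        py = t["pos"][1] + t["vel"][1]
        j = min(free, key=lambda k: (detections[k][0] - px) ** 2
                                  + (detections[k][1] - py) ** 2)
        free.remove(j)
        assignment[tid] = detections[j]
        # update state for the next frame
        t["vel"] = (detections[j][0] - t["pos"][0],
                    detections[j][1] - t["pos"][1])
        t["pos"] = detections[j]
    return assignment
```

When the blobs merge there is only one detection, so one track is left unmatched and simply keeps its last state; once the blobs separate, each label snaps back to the rover whose predicted position is closest.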

Message 8 of 10

Can I have a look at your VIs? It will be clearer for me if I can see the code.

Message 9 of 10

Are you tracking specific objects or motion in general? I am unsure whether the user has to set the ROI parameters (select a particular object) or whether the software tracks any moving object.

Message 10 of 10