LabVIEW


How can I get an object's x and y pixel location in real time?

Greetings,

 

I am working on a large project and am having trouble with one small aspect of it.  I am trying to track a moving object using a VI generated by the NI Vision Assistant tool, which works great when I am manually processing images.  The problem is that I want the system to run in real time, so I made the VI created by Vision Assistant a sub-VI whose outputs are the "number of particles" and the "particle measurements (Pixels)."

 

I then dropped this sub-VI into my larger frame-grabbing program, which needs the x and y pixel positions of tracked objects to perform some real-time analysis on the moving objects.  Now, here is the meat of the issue: the particle analysis block in my sub-VI that is supposed to output my particle measurements needs a front panel control to select each measurement (e.g. particle 1 X pixel, particle 1 Y pixel, particle 2 X pixel...).

 

Since I need to work with these pixel measurements in real time, I cannot use the front panel to manually select each measurement for every frame.  I need to get all of these measurements at once for every frame.
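
LabVIEW's particle analysis returns the measurements as a 2-D array, so the per-element front panel selection can be replaced by slicing whole columns at once. Here is a minimal numpy sketch of that idea; the array contents and column order (X, Y, area) are hypothetical stand-ins, not the actual Vision Assistant output layout:

```python
import numpy as np

# Hypothetical particle measurements: one row per particle.
# Assumed column order: [center-of-mass X, center-of-mass Y, area].
measurements = np.array([
    [120.5,  88.2, 340.0],
    [301.7,  45.9, 212.0],
])

# Pull every particle's X and Y in one step -- no per-element selection.
xs = measurements[:, 0]
ys = measurements[:, 1]
print(list(zip(xs, ys)))
```

In LabVIEW terms this corresponds to wiring the measurements array straight into Index Array (with one index left unwired to grab a whole column) instead of selecting each element through a control.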

 

Also, I am really only tracking one object at a time right now, so there is no need for a complex multiple-particle solution.  Once I take care of some interlaced scanning issues, LabVIEW should stop telling me I have more than one object in the frame.
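
Interlacing tends to split one moving object into comb-like stripes that particle analysis counts as several particles. One quick workaround, sketched here in numpy as a stand-in for the image processing step (the frame contents are synthetic), is to keep only one field and line-double it:

```python
import numpy as np

# Synthetic stand-in for an interlaced frame.
frame = np.arange(12, dtype=np.uint8).reshape(6, 2)

even_field = frame[0::2, :]                    # keep even scan lines only
# Repeat each kept line to restore the original height (crude de-interlace).
deinterlaced = np.repeat(even_field, 2, axis=0)
print(deinterlaced.shape)
```

This discards half the vertical resolution but removes the inter-field motion tearing that splits one object into many.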

 

Thanks in advance

~Chad 

Message 1 of 7

Hi Chad,

 

A couple of things are not clear to me. How exactly are you tracking a moving object using particle measurements, and what exactly do you currently do on the front panel to get the particle measurements? A screenshot of the block diagram and front panel would be useful and would give me a better idea of what you are trying to do.

Vivek Nath
National Instruments
Applications Engineer

Machine Vision
Message 2 of 7

What exactly are you tracking? Is it something simple that "match pattern 2" will pick up?

Message 3 of 7

To track an object you can generally use:

1. Pattern matching

2. Coordinate systems

3. Particle analysis and filtering

Give us more details so we can help you further.

Message 4 of 7

I am tracking the movement of various objects by subtracting the current image from the previous image and displaying the result.  Anything that moves is white, and anything that doesn't is black.  I then have Vision Assistant give me a list of particles and their pixel positions based on certain criteria.  Next, I use the data from my two cameras, pointed 45 degrees inward, to triangulate the object's 3-D position in space (this takes a few steps).  Finally, I use the frame rate of the camera and the scalar distance between an object in multiple frames to determine its scalar and vector velocities.  All of this is done in real time.
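
The frame-differencing step described above can be sketched compactly in numpy; this is only an illustration with synthetic frames, not the actual VI (which runs on camera images), and the threshold value is an arbitrary assumption:

```python
import numpy as np

def diff_and_locate(prev, curr, thresh=30):
    """Frame-difference motion detection: pixels that changed between
    frames become foreground (white); return the centroid of the moving
    region in pixel coordinates, or None if nothing moved."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    mask = diff > thresh                       # moving pixels -> True
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Two synthetic 8-bit frames: one bright blob shifts 5 pixels to the right.
prev = np.zeros((100, 100), dtype=np.uint8)
curr = np.zeros((100, 100), dtype=np.uint8)
prev[40:50, 20:30] = 255
curr[40:50, 25:35] = 255
print(diff_and_locate(prev, curr))
```

Note that with pure differencing the centroid sits between the blob's old and new edges, which is one reason particle analysis on the difference image can report positions that need further interpretation.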

 

I was originally having issues with the pixel positions, but I have since written a VI that interprets a matrix of the pixel positions and feeds them into my position/velocity VI.

 

The VI I attached is built to work with my BitFlow Alta frame grabber, so feel free to ask questions if anything is not understood.

 

My new task has been to develop a way to only track one object at a time, and have that object be the same object in both cameras. 

 

Questions or comments welcomed 

Message 5 of 7

If you want to track the motion of each particle, I would try isolating each particle from the original image first. This can be done based on the particle's pixel-value range or its shape; that is, the particle can be isolated by thresholding or by shape detection. You then have an image consisting only of particle number 1. Subtract the second image from the first, check whether particle 1 has moved, and take the necessary measurements. This can be done for each particle in the image.
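
The isolation-by-threshold step can be illustrated with a small numpy sketch; the frame values and the gray-level range are hypothetical, chosen only to show the masking idea:

```python
import numpy as np

def isolate_particle(image, lo, hi):
    """Keep only pixels whose value falls in [lo, hi]; zero out the rest.
    This mimics isolating one particle by its gray-level range."""
    mask = (image >= lo) & (image <= hi)
    return np.where(mask, image, 0)

# Tiny synthetic frame with two "particles": one at value 100, one at 200.
frame = np.array([[0, 100, 100],
                  [0, 100, 200],
                  [0,   0, 200]], dtype=np.uint8)

only_100 = isolate_particle(frame, 90, 110)   # keeps just the 100-valued blob
print(only_100)
```

With each particle isolated this way, the frame-to-frame subtraction can then be applied per particle.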

 

I hope this idea is helpful to you. 

Vivek Nath
National Instruments
Applications Engineer

Machine Vision
Message 6 of 7

You are right to assume that thresholding and object detection will be used in the final design.  That much is certain; in fact, if you look at the VI, this is pretty much what is being done.  There is an important realization to make for anyone trying something similar: you need a method of making sure that a POI detected in one camera is the same POI in the other.  If there are three birds flying around a feeder and you want to analyse their flight movement, all you have to do is look at the two video streams frame by frame and make sure that LabVIEW knows that birds 1, 2, and 3 in camera A will not necessarily be in the same left-to-right order in camera B.

 

As I mentioned earlier, this is a real-time application, so I don't have the luxury of manually matching objects seen in each camera.  Thus I am currently developing a VI which uses the distance between the cameras and the "angles of view" of both wide-angle lenses to match tracked points based on the geometric consistencies of a triangle, e.g. the angles adding up to 180 degrees.
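
The triangle geometry behind this can be sketched with the law of sines: the two cameras and the target form a triangle whose third angle is 180 degrees minus the two measured bearing angles. This is a minimal stand-in for the actual VI, assuming each camera reports a bearing angle measured from the baseline joining the cameras:

```python
import math

def triangulate(baseline, angle_a, angle_b):
    """Given the distance between two cameras (baseline) and the bearing
    angle each camera measures to the target (radians, from the baseline),
    return the target's (x, y) position with camera A at the origin.
    The third triangle angle is pi - angle_a - angle_b (angles sum to pi)."""
    angle_c = math.pi - angle_a - angle_b
    # Law of sines: side from camera A to target is opposite angle_b.
    r_a = baseline * math.sin(angle_b) / math.sin(angle_c)
    x = r_a * math.cos(angle_a)
    y = r_a * math.sin(angle_a)
    return x, y

# Symmetric case: both cameras see the target at 45 degrees, 1 m apart,
# so the target sits midway between them.
print(triangulate(1.0, math.radians(45), math.radians(45)))
```

Matching points across the two cameras can then be checked for geometric consistency: a candidate pairing whose angles cannot close a triangle (or whose triangulated point is implausible) is rejected.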

 

When you are sitting down with a file of frames taken from a video run, it is easy to adjust the threshold or detect certain objects by their shape.  The real challenge of this project has been tracking birds of different shapes and colors, with trees swaying and people walking around in the camera's field of view, while distinguishing motion paths and velocities, all in real time.  Once this project is finished I will post all my VI files.

 

 

Message 7 of 7