Motion Control and Motor Drives


3 Point Alignment

Hi all,
 
I would like to build a 3 point alignment algorithm for an application that uses a stage, a video camera, and chip navigation software. The stage and the navigation map use different distance units and may be rotated relative to one another, but as I understand it, the 3 point alignment algorithm should be able to account for all of these differences. (Three points give enough information to solve for all the unknowns in the linear transformation equations.)
 
The goal is to be able to move the stage according to the chip map's coordinates and units, after synchronizing on 3 common points found both on the chip map and on the stage (the chip is observed through the camera).
 
I have not been able to find any useful material on the net. Can anybody help me out?
 
I truly appreciate your help.
Message 1 of 5

Mic_Scale,

I believe the document below will help you get off to a good start...

http://zone.ni.com/devzone/conceptd.nsf/webmain/544693068FB4893186256E75006C84AA

If you could be a bit more specific regarding both your system and what you want to know, I might be able to help you out more.  What type of data is coming from the chip (how is it formatted), and how is that data sent to your LabVIEW system?  Describe the 3 points in more detail.  Overall, if you can provide a very detailed explanation of what you are doing, I can do a better job of answering any questions.

Thanks,
Lorne Hengst
Application Engineer
National Instruments

Message 2 of 5
Hi Lorne H.
 
Thank you very much for getting back to me and pointing me to this document; it is a good start.
 
I would like to be a little more specific about what I'm trying to do.
 
I'm controlling a stage that has a camera attached to it, so the camera takes pictures of the sample currently being observed. The stage has its own coordinate system (X, Y) and its own step size along each axis. In addition, I have a detailed "map" of the current sample, which also has its own coordinate system.
 
In the end I would like to be able to go to the "map", find a location that interests me, and then input those coordinates into my application, which would move the stage to that location on the real sample.
 
I've heard that a good way to co-align these 2 coordinate systems is "3 point alignment": 3 points that can be found both on the map and on the stage are compared, and from them a transformation between the coordinate systems can be built.
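To make the idea concrete, here is a minimal sketch (in Python with NumPy, standing in for whatever environment the application actually uses) of one common formulation: three point pairs determine a full 2D affine transform (6 unknowns) exactly by linear algebra. The function names are illustrative, not from any particular library.

```python
# Sketch: solve the exact 2D affine transform (6 unknowns) that maps
# three map points onto three stage points.
import numpy as np

def three_point_alignment(map_pts, stage_pts):
    """Return a 2x3 matrix M so that stage = M @ [x, y, 1]."""
    map_pts = np.asarray(map_pts, dtype=float)
    stage_pts = np.asarray(stage_pts, dtype=float)
    # Each point pair gives two linear equations; 3 pairs pin down
    # the 6 affine coefficients exactly.
    A = np.hstack([map_pts, np.ones((3, 1))])      # rows: [x, y, 1]
    coeff_x = np.linalg.solve(A, stage_pts[:, 0])  # a, b, tx
    coeff_y = np.linalg.solve(A, stage_pts[:, 1])  # c, d, ty
    return np.vstack([coeff_x, coeff_y])

def to_stage(M, pt):
    """Map a single map-coordinate point into stage coordinates."""
    x, y = pt
    return M @ np.array([x, y, 1.0])

# Example: stage is the map scaled by 2 and shifted by (10, -5).
M = three_point_alignment([(0, 0), (1, 0), (0, 1)],
                          [(10, -5), (12, -5), (10, -3)])
print(to_stage(M, (3, 4)))   # -> [16.  3.]
```

Because the affine model absorbs rotation, independent axis scaling, and translation in one matrix, this handles the unit mismatch and relative rotation described above, provided the three calibration points are not collinear.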
 
So my question is: where can I read a document about this kind of work?
 
Thanks.
Message 3 of 5

Mic_Scale,

Do you already have the hardware you are going to use for the system, or are you looking to spec out a new system?

If you are using a FireWire camera, a Camera Link camera, or an analog camera, you can use National Instruments products to bring your image into LabVIEW and then use NI Vision 8.0 to calibrate the image to match your map.  You would need to detect 4 objects in your image and know the x,y position of these points in real-world units (inches, yards, feet, etc.).  Then the IMAQ Vision calibration VIs can convert pixel locations in the image to real-world values.  Once this is done, you can pick any point on the image and find its corresponding real-world value.

This should accomplish what you want, but instead of 3 points you will need 4 points.
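The reason 4 points come up in image calibration is that a camera that is not perfectly perpendicular to the sample introduces a perspective (projective) distortion with 8 unknowns, so 4 point pairs are needed instead of 3. This is an assumption about why NI Vision asks for 4 points, not a statement about its internals; here is a hedged sketch of the standard direct linear transform in Python with NumPy:

```python
# Sketch: fit a 3x3 homography H from 4 point correspondences (DLT).
import numpy as np

def homography_from_4_points(src, dst):
    """Return H (up to scale) with dst ~ H @ [x, y, 1] for each pair."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two homogeneous equations.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # The null-space vector of A (last row of V^T) holds H's 9 entries.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)

def warp(H, pt):
    """Apply H to a point and normalize the homogeneous coordinate."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

With only a scale-and-shift between the two systems the homography degenerates to an affine transform, so the 4-point fit still reproduces it exactly.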

Tell me what you think.

Lorne Hengst
Application Engineer
National Instruments

Message 4 of 5

What you are talking about, way back in 2006, is a basis transform, or coordinate transform. Computer graphics systems do this kind of thing all the time, so there is a lot of literature on how to do it.

 

In particular you are doing a rotation transform, a translation transform and a scaling transform.

 

Each transform can be represented by a matrix equation, X' = A*X, where X' is the transformed coordinate, A is the square matrix representing the transform, and X is the original coordinate. (So X might be the map coordinate while X' might be the stage coordinate.)

 

The trick for you is to calculate the several factors you need to create the A matrix. In your case these are Xo, Yo, Theta, and S. Xo and Yo represent the translation from one coordinate system to the other: if you picked up the X and Y axes and moved them without rotating or changing scale, Xo and Yo would be the distance from one origin to the other. Theta is the angle by which the coordinate system is rotated, and S is the factor by which it is scaled.

 

One of the gotchas here is coordinate systems whose axes are inverted relative to each other. If the map has positive X going right and the stage has positive X going left, you need a negative entry in the matrix to account for this.

 

The values you need can be calculated using techniques for non-linear systems of equations. I use the Levenberg-Marquardt library in NI LabVIEW.

 

X'=Sx*(X*cos(Theta)-Y*sin(Theta)+Xo)

Y'=Sy*(X*sin(Theta)+Y*cos(Theta)+Yo)

 

3 points give you six equations in 5 unknowns. Because of the sin(Theta) and cos(Theta) terms, the equations are non-linear.

 

Once you have the 5 unknowns (Sx, Sy, Theta, Xo, and Yo) you can use the equations above to transform any point. 
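The fit can be sketched in Python using SciPy's Levenberg-Marquardt solver as a stand-in for the LabVIEW library mentioned above; the model is exactly the pair of equations given in the post, and the sample point values are made up for illustration.

```python
# Sketch: recover (Sx, Sy, Theta, Xo, Yo) from 3 point pairs by
# non-linear least squares (Levenberg-Marquardt).
import numpy as np
from scipy.optimize import least_squares

def model(params, pts):
    """The post's transform: map coordinates -> stage coordinates."""
    sx, sy, theta, xo, yo = params
    x, y = pts[:, 0], pts[:, 1]
    xp = sx * (x * np.cos(theta) - y * np.sin(theta) + xo)
    yp = sy * (x * np.sin(theta) + y * np.cos(theta) + yo)
    return np.column_stack([xp, yp])

def residuals(params, map_pts, stage_pts):
    # 3 points x 2 coordinates = 6 residuals for the 5 unknowns.
    return (model(params, map_pts) - stage_pts).ravel()

map_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
# Stage points generated with Sx = Sy = 2, Theta = 0, Xo = 5, Yo = -1.
stage_pts = np.array([[10.0, -2.0], [12.0, -2.0], [10.0, 0.0]])

fit = least_squares(residuals, x0=[1, 1, 0, 0, 0],
                    args=(map_pts, stage_pts), method="lm")
sx, sy, theta, xo, yo = fit.x
```

Once `fit.x` is in hand, `model(fit.x, pts)` transforms any map point to stage coordinates. Note that the starting guess matters: an axis-inverted setup (the negative-sign gotcha above) needs an initial guess with the corresponding scale negative, or the solver may settle on an equivalent solution with Theta shifted by pi.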

 

http://en.wikipedia.org/wiki/Coordinate_rotation

http://en.wikipedia.org/wiki/Change_of_basis

http://en.wikipedia.org/wiki/Transformation_matrix 

 

Take special note of affine transformations. This is how the graphics guys do it. 

Message 5 of 5