LabVIEW


LabVIEW 7.0 eigenvectors

In my LV 7.0 application I need to compute eigenvectors without normalizing them. LV's standard eigenvector routine normalizes each eigenvector. Does anybody know of a LabVIEW 7.0 VI that will calculate eigenvectors without normalizing them?
Message 1 of 7
My linear algebra book is at home. Have you considered writing the algorithm yourself?
Message 2 of 7
Why is normalization a problem here? The modulus of an eigenvector is arbitrary, so you can multiply it by any convenient nonzero value.


LabVIEW, C'est LabVIEW

Message 3 of 7
That is not the point. I did not ask someone to question why I need them. I do not have time to explain this to you. I simply wanted to know if there was a LabVIEW VI out there that did not normalize the eigenvectors. A simple 'No' would be sufficient. Thanks anyway.
Message 4 of 7
The modulus of an eigenvector is totally irrelevant. Different algorithms may give sets of eigenvectors that differ only in their moduli; normalizing the eigenvectors is simply a way to give unique solutions, and a normalized eigenvector set is as good a solution as any other. Since you asked for un-normalized eigenvectors (which seems to me like asking for an un-normalized sine function), I wondered whether you had a complete understanding of what you were asking for, and posted for more information. Sorry to have bothered you with that. However, since you prefer single-word answers, the reply to your original question is "Yes".
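The point about scaling being arbitrary is easy to check numerically. A minimal sketch in Python/NumPy (used here only as an illustration, since LabVIEW code can't be shown as text): if A v = λ v, then A (c v) = λ (c v) for any nonzero scalar c, so every rescaling of an eigenvector is still an eigenvector.

```python
import numpy as np

# The 2x2 matrix discussed later in this thread.
A = np.array([[2.0, -4.0],
              [-1.0, -1.0]])

eigvals, eigvecs = np.linalg.eig(A)
lam = eigvals[0]
v = eigvecs[:, 0]          # NumPy happens to return unit-norm eigenvectors

# Any nonzero multiple of v still satisfies the eigenvalue equation.
for c in (1.0, -3.0, 0.5):
    w = c * v
    assert np.allclose(A @ w, lam * w)
```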


LabVIEW, C'est LabVIEW

Message 5 of 7
I guess my problem is this, and maybe somebody here can explain why it is OK. Say you have a matrix A = [2 -4; -1 -1] (where the semicolons separate rows). Its eigenvalues are 3 and -2, and the corresponding eigenvectors are [-4;1] and [1;1]. LabVIEW 'normalizes' each eigenvector by dividing it by its component of largest magnitude, so LabVIEW's eigenvectors for this matrix are [1;-0.25] and [1;1]. Notice they don't just divide by the magnitude; they include the sign, so this 'normalization' actually flips the direction of the first eigenvector by 180 degrees. That is not irrelevant, since the direction has been changed. I agree with Jean-Pierre that normalization in the usual sense (changing the magnitude only) would not be a problem, but this type of normalization seems odd.
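The worked example above can be reproduced numerically. A short Python/NumPy sketch (the helper name `labview_style_normalize` is mine, not a LabVIEW function; it just implements the divide-by-largest-magnitude-component scaling described in this post):

```python
import numpy as np

A = np.array([[2.0, -4.0],
              [-1.0, -1.0]])          # eigenvalues are 3 and -2

def labview_style_normalize(v):
    """Divide by the component of largest magnitude, sign included,
    as this thread describes LabVIEW's scaling."""
    return v / v[np.argmax(np.abs(v))]

v = np.array([-4.0, 1.0])             # eigenvector for eigenvalue 3
w = labview_style_normalize(v)

assert np.allclose(w, [1.0, -0.25])   # the scaled vector from the post
assert np.allclose(A @ w, 3.0 * w)    # still an eigenvector for eigenvalue 3
assert np.dot(v, w) < 0               # but it points the opposite way
```

So the rescaled vector remains a valid eigenvector; only its direction along the eigenline is reversed, which is the behavior being questioned here.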

Does anybody out there know why LabVIEW 'normalizes' the eigenvectors this way? I've never seen this type of normalization in any text. How can you be 'normalizing' a vector when you alter its direction?
Message 6 of 7
I agree that the normalization used by LabVIEW is arbitrary, but it has the advantage of returning a unique solution to the eigenvalue problem. [-4;1] is an eigenvector of the given matrix, but [4;-1] is an equally good eigenvector, only with its sign (direction) reversed. Whether you get [-4;1] or [4;-1] (or any multiple) depends on the details of the algorithm used to solve the eigenvalue equation. Had [4;-1] been found as the solution, the normalization done by LabVIEW would not have changed the direction. So you can't really say that LabVIEW's normalization changes the sign, because there is no preferred direction for the eigenvector in the first place.
For example, take a 3D matrix that rotates any object by some angle about a rotation axis. Any vector along the rotation axis is an eigenvector of the rotation matrix, because it is not affected by the rotation (eigenvalue = 1). But there is no preferred direction along the rotation axis, so you can choose as eigenvector a vector pointing in either direction.
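The rotation example can be verified directly. A small Python/NumPy sketch (a 30-degree rotation about the z-axis is my arbitrary choice): vectors pointing along the axis in either direction are both fixed by the rotation, i.e. both are eigenvectors with eigenvalue 1.

```python
import numpy as np

theta = np.deg2rad(30.0)
# Rotation by theta about the z-axis.
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# Both directions along the rotation axis are left unchanged (eigenvalue 1),
# so neither is "the" eigenvector -- the choice of sign is arbitrary.
for axis in (np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])):
    assert np.allclose(R @ axis, 1.0 * axis)
```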


LabVIEW, C'est LabVIEW

Message 7 of 7