Increase precision of formula nodes

Hi!
I've got a little problem using more than two decimal digits in formula
nodes.
The formula inputs are "x" and "a0", "a1", "a2" and so on. The output is y.
"x" is a measured value, and the a_i are polynomial coefficients. It is
important that the a_i have at least 7 digits of precision, because the
function is of the form a0 * x^6 + a1 * x^5 + ... and the difference between
y_is (a_i with 2 digits) and y_should (a_i with 7 digits) is massive - I saw
that by putting the data into an Excel sheet.
The incoming x has 3 digits of precision, and the output y should have 5.
Giving y this precision is not the problem (adding %.5f to "Write File"). The
problem is getting the formula node to accept the a_i with more than 2 digits
of precision.
I tried to increase the number of digits on the input constants a_i to 7
via the "Format & Precision" menu and also by switching to extended
representation - but this didn't work.

Asking the help menu only told me that the precision of my formula node
depends on the general precision of my system. So I went to the Windows menu
and increased it globally to 8 digits of precision - but even that didn't
work.

Any ideas how I can solve this problem?
Thanks,
Sascha
Message 1 of 4
A common mistake that beginners make is confusing the precision of a numeric display with the precision of the actual number itself. Declaring a number as a DBL means that it is a 64-bit IEEE format with about 15 decimal digits; this information is readily available in the help file. Whatever number of digits you see in the Excel spreadsheet depends on how you've formatted the cells and how you've exported the data from LabVIEW. Create an indicator in LabVIEW at the place you want to check and set its DISPLAY precision to 15 digits.
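
The same point is easy to demonstrate outside LabVIEW. Here is a rough Python sketch (Python only stands in for a block diagram, which can't be pasted as text; the value is arbitrary):

x = 0.2097275853157042   # stored as a 64-bit IEEE 754 double (~15-16 significant digits)

# Formatting changes only what you SEE, never what is stored:
print(f"{x:.2f}")        # '0.21' - looks like 2 digits of precision
print(f"{x:.15f}")       # the full stored precision becomes visible
print(x == 0.21)         # False - the underlying value was never rounded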
Message 2 of 4
Hi Dennis,

> A common mistake that beginners make is confusing the precision of a
> numeric display with the precision of the actual number itself.

I've made that mistake in the past.
But that was not the problem this time. I saw 15 digits of precision on my
front panel and typed them from there into my Excel sheet.
And that's how I discovered the mistake - the difference between Excel and LV
in spite of the same input (value and precision).

But now the funny part: yesterday this mistake was driving me nuts.
After reading your message I thought I should let you run an experiment -
put a = -0.209727585315704 and x = 6.89193 into a formula node to
calculate y = a * x^6 - so that you can see it for yourself by comparing
the LabVIEW result with Excel's.
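
For anyone who wants to repeat the check without Excel, the computation is easy to reproduce elsewhere - a small Python sketch (LabVIEW's DBL and Python's float are the same 64-bit IEEE 754 format, so the results should agree):

a = -0.209727585315704
x = 6.89193
y = a * x**6          # same 64-bit double arithmetic as a DBL in LabVIEW
print(f"{y:.15g}")    # show the full double precision for comparison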
But today it worked. The same .vi that gave wrong values yesterday is
giving correct values today.
What was wrong yesterday?
Kind of strange...
I hope it works tomorrow, too.

> Declaring a number as a DBL means that it is a 64-bit IEEE format with
> about 15 decimal digits.

Oops - that is news to me. I thought DBL meant "two" digits... I think I
should RTWFM [W=whole]
🙂

Nevertheless: Thank you.
Sascha
Message 3 of 4
And one more word on numerical caution.
In order to add up the polynomial terms, _every_ computer that does floating-point arithmetic has to shift the values to one common exponent. So if the highest term (with x^6) evaluates to, say, 10000000 (1*10^7) and the lowest is 0.00000001 (1*10^-8), you'll simply lose the small one.
In LabVIEW you have three options for floating-point representation. I am not sure how Excel does its internal calculations, and I do not know of a way to control it.
So it might very well be that it uses SGL (~6 digits), DBL (~15 digits), or even EXT (~18 digits). Or it might even use decimal arithmetic with a fixed decimal point.
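
A quick way to see this absorption effect, sketched in Python (a Python float is the same 64-bit double as LabVIEW's DBL; the magnitudes are chosen to span more than the ~15-16 significant digits a double can hold):

big = 1.0e16                # near the limit of a double's 53-bit mantissa
small = 1.0                 # 16 orders of magnitude below 'big'
print(big + small == big)   # True: 'small' is absorbed completely

# Summing the terms from smallest to largest magnitude, or using
# Horner's rule, can reduce this kind of loss:
coeffs = [3.0, -2.5, 0.0, 1.0]   # example: 3x^3 - 2.5x^2 + 1
x = 6.89193
y = 0.0
for c in coeffs:                 # Horner: ((3x - 2.5)x + 0)x + 1
    y = y * x + c
print(y)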

Gabi
7.1 -- 2013
CLA
Message 4 of 4