LabVIEW


single precision conversion

Can someone explain why doing a floating point conversion compromises digits of precision? For example, if I wire the double-precision number "16.0001000" to the "To Single Precision Float" input, the output is "16.0000991".
 
I have an application in which this level of accuracy is critical. I can think of several workarounds, but I would still like a good explanation for why this is the case.
 
pmac
 
 
0 Kudos
Message 1 of 10
(5,779 Views)
It has to do with the binary representation of numbers in the computer. Many numbers with a finite representation in base ten are infinitely repeating in binary. There are numerous posts regarding this issue in the archives.

Perhaps the easiest workaround for higher precision is to use double or extended precision representation. They have the same problem, but the error occurs at much smaller magnitudes.

If your data can be represented adequately as 32-bit integers (64-bit in the latest versions of LV), then you can avoid the problem completely, as integer arithmetic is exact.

If you need higher precision than any of these approaches can give, you will need to code your own extended precision system, which is generally a lot of work to get right.
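
To see the difference in practice, here is a quick sketch (plain Python with NumPy rather than LabVIEW, purely as an illustration):

import numpy as np

# 0.1 has no finite binary representation, so both SGL and DBL store an
# approximation -- DBL's error is simply much smaller in magnitude.
print(f"{float(np.float32(0.1)):.20g}")   # about 0.10000000149011611938
print(f"{float(np.float64(0.1)):.20g}")   # about 0.10000000000000000555

# Integer arithmetic, by contrast, is exact as long as you stay in range.
print(np.int32(100000) * np.int32(3))     # exactly 300000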

Lynn
0 Kudos
Message 2 of 10
(5,771 Views)
It's simply not possible to represent the number 16.0001 with a single precision number. If it's critical to keep that kind of precision, you need to use a double. There's really no way around it.
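
You can check this outside LabVIEW as well; a rough sketch in Python/NumPy (illustration only):

import numpy as np

x = np.float32(16.0001)                     # nearest SGL to 16.0001
print(f"{float(x):.9g}")                    # about 16.0000992, i.e. the value reported above

# The two neighbouring SGL values bracket 16.0001 -- it falls in the gap.
lo = np.nextafter(x, np.float32(-np.inf))
hi = np.nextafter(x, np.float32(np.inf))
print(f"{float(lo):.9g}  {float(hi):.9g}")  # about 16.0000973  16.0001011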
0 Kudos
Message 3 of 10
(5,769 Views)
SGL only gives you about 6 decimal digits of precision. If you need more precision, don't convert to SGL. 🙂 Everything beyond 6 decimal digits is essentially random.
 
Many nice decimal fractions cannot be represented exactly in binary. If you showed all the digits, you would see that your 16.0001000 is actually 16.0000999999999998 in DBL representation.
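
A quick way to see those hidden digits (sketched in Python here, but any language using IEEE 754 doubles behaves the same):

# The decimal literal 16.0001 is itself only approximated by a DBL.
print(f"{16.0001:.18g}")    # prints 16.0000999999999998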
 
What computations do you do where this level of accuracy is critical?
0 Kudos
Message 4 of 10
(5,766 Views)

I have data that was acquired at 10k samples/s for about 30 seconds. When I process the data, I generate a double-precision array of time increments based on the sample rate and the number of samples, and append it as the first column of data. Then I write it all to a text file using the "Write to Spreadsheet File" vi, in which the default data type for the data input is single precision (LV automatically converts).

When I viewed the text file, I used MS Excel with 4 digits of precision, so the displayed numbers were rounded to their proper values. When someone else viewed the data using a different program, the rounding error showed up and appeared as a sampling/timing error.

I suppose all I need to do is change the representation of the "Write to Spreadsheet File" data input to double precision to prevent this from happening again, but while I was able to figure out what had happened, I didn't have a good explanation as to why. Thanks
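
Roughly what happens to the time column when it passes through SGL, sketched in Python/NumPy (the names here are made up for illustration, not the VI's terminals):

import numpy as np

fs = 10_000.0                         # 10k samples/s
n = 300_000                           # about 30 seconds of data
t_dbl = np.arange(n) / fs             # time column as DBL
t_sgl = t_dbl.astype(np.float32)      # what the SGL input effectively stores

# Near t = 30 s a SGL can only resolve about 2 us, so adjacent timestamps
# no longer differ by exactly 100 us.
print(np.diff(t_sgl.astype(np.float64))[-5:])   # values scatter around 1e-4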

pmac  

0 Kudos
Message 5 of 10
(5,752 Views)
Write to spreadsheet file.vi on LV 8.2 takes a double. Are you on an older version?
0 Kudos
Message 6 of 10
(5,749 Views)
Yeah. LV 7.0
0 Kudos
Message 7 of 10
(5,746 Views)
Just use Array To Spreadsheet String on the DBL and write it as a text file (and an analogous operation for reading, converting back to DBL). Alternatively, you could make a DBL version of "read/write spreadsheet file" by editing them slightly and saving them under a new name elsewhere. Make sure to change the icon too, e.g. give it a different color.
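
Roughly the same idea outside LabVIEW, just to show that the format string needs enough digits (a Python sketch; %.17g carries enough digits to round-trip a DBL):

# Format a DBL row with enough digits that nothing is lost in the text file.
row = [16.0001, 0.0001, 29.9999]
line = "\t".join(f"{v:.17g}" for v in row)
print(line)                                   # 16.0001  0.0001  29.9999 (tab-delimited)

# Reading the text back recovers the identical DBL values.
assert [float(s) for s in line.split("\t")] == row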
 
If precision is that important, you might want to use binary files. They don't have any formatting loss at all. 🙂
 
 
Of course, if this is just measurement data, don't be fooled by fake precision. Even SGL has a 22-bit mantissa, so if your DAQ hardware acquires in 16 bits, SGL is plenty. 🙂
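
That last point is easy to sanity-check (again just a Python/NumPy sketch, not LabVIEW):

import numpy as np

# Every 16-bit ADC code survives the round trip through SGL unchanged.
codes = np.arange(-32768, 32768, dtype=np.int32)
assert np.all(codes.astype(np.float32).astype(np.int32) == codes)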
0 Kudos
Message 8 of 10
(5,741 Views)
@altenbach wrote:
 
Of course, if this is just measurement data, don't be fooled by fake precision. Even SGL has a 22-bit mantissa, so if your DAQ hardware acquires in 16 bits, SGL is plenty. 🙂


Unless LabVIEW uses a different floating point representation than IEEE, a 32-bit floating point number (Single) has a 23-bit mantissa.
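
The layout is easy to inspect directly if anyone is curious (a small Python sketch using the standard struct module):

import struct

# IEEE 754 single: 1 sign bit, 8 exponent bits, 23 stored mantissa bits
# (plus one implicit leading bit, hence roughly 7 decimal digits).
bits = struct.unpack(">I", struct.pack(">f", 16.0001))[0]
print(f"{bits:032b}")
print(f"sign={bits >> 31}  exponent={(bits >> 23) & 0xFF}  mantissa={bits & 0x7FFFFF:023b}")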

0 Kudos
Message 9 of 10
(5,726 Views)
Sorry, yes, bits 0..22, or 23 bits. 🙂
0 Kudos
Message 10 of 10
(5,714 Views)