Overview
This VI presents three methods for setting the number of digits of precision of a double-precision (DBL) number.
Description
Additional background on these methods can be found in the corresponding KnowledgeBase article.
Method 1:
Method 2:
Method 3:
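The G code for the three method tabs is in the attached VI and is not reproduced here. As a rough text-language analogue, two common ways to limit a double to N significant digits are a scale-round-rescale computation and a format/scan round trip. The Python sketch below illustrates these general techniques only; it is not the VI's implementation, and the function names are illustrative:

```python
import math

def round_sig_numeric(x: float, digits: int) -> float:
    """Scale, round, and rescale (a purely numeric approach)."""
    if x == 0.0 or not math.isfinite(x):
        return x
    exp = math.floor(math.log10(abs(x)))
    factor = 10.0 ** (digits - 1 - exp)
    return round(x * factor) / factor

def round_sig_string(x: float, digits: int) -> float:
    """Format to a string with N significant digits, then scan back."""
    return float(f"{x:.{digits}g}")

print(round_sig_numeric(3.14159265, 4))  # 3.142
print(round_sig_string(3.14159265, 4))   # 3.142
```

Both approaches return the nearest double to the truncated decimal value; the result still carries the usual binary rounding of a DBL.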
Requirements
LabVIEW 2012 (or compatible)
Steps to Implement or Execute Code
1. Download the attached VI, Set Significant Bits of Double Precision Data_LV2012_NI Verified.
2. Choose a method by selecting the corresponding tab.
3. Run the VI.
Additional Information or References
Note that Methods 1 and 2 permanently discard information: once the conversion has been performed, the truncated digits of precision cannot be restored.
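The loss is irreversible because distinct inputs can collapse to the same truncated value. A small Python sketch of the effect (using a string round trip as a stand-in for the VI's truncation):

```python
# After truncating to 5 significant digits, two distinct doubles
# collapse to the same value, so the lost digits cannot be restored.
a = 1.2345678
b = 1.2345699
ta = float(f"{a:.5g}")
tb = float(f"{b:.5g}")
print(ta, tb)    # 1.2346 1.2346
print(ta == tb)  # True, although a != b
```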
Block Diagram
Front Panel
**This document has been updated to meet the current required format for the NI Code Exchange.**
Example code from the Example Code Exchange in the NI Community is licensed with the MIT license.
Is there a bug inside LabVIEW?
Just change the Digits numeric under Format and Precision to e.g. 26, and you will see that if you convert an INT value of 942 to a double you get
941.999999999999999...
If I use e.g. a .NET language, a double division of 94200/100 gives 942.0.
With LabVIEW I get 941.999999999999999.
DerIng
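For reference, 942.0 is exactly representable in IEEE 754 double precision (every integer below 2^53 is), which can be checked in any text language. The Python sketch below inspects the stored value; it says nothing about LabVIEW's number-to-string routine, only that any trailing 9s at 26 display digits would have to come from the display conversion rather than from the double itself:

```python
from decimal import Decimal

x = 94200 / 100     # floating-point division of doubles
print(x == 942.0)   # True
print(Decimal(x))   # 942 -- the exact binary value, no trailing 9s
```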
Which of the three methods is the most efficient for a large amount of data, i.e. requires the least computation?
My feeling is that Method #2 is.
Any ideas?
Does this also work for formatting data written to an output file (TDMS)?