01-25-2007 09:53 AM
I have data that was acquired at 10k samples/s for about 30 seconds. When I process the data I generate a double-precision array of time increments, based on the sample rate and the number of samples, and append it as the first column of data. Then I write it all to a text file using the "Write to Spreadsheet File" VI, whose data input defaults to single precision (LabVIEW coerces the array automatically). When I viewed the text file in Excel with 4 digits of precision, the displayed numbers were rounded to their expected values. When someone else viewed the data in a different program, the rounding error showed up and looked like a sampling/timing error. I suppose all I need to do is change the representation of the "Write to Spreadsheet File" data input to double precision to prevent this from happening again, but while I was able to figure out what had happened, I didn't have a good explanation as to why. -Thanks
pmac
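To make the effect concrete, here is a minimal sketch (in Python/NumPy, since LabVIEW itself is graphical) of what happens when a DBL time column built at 10 kS/s is coerced to SGL on the way to the file; the array sizes and variable names are just for illustration:

```python
import numpy as np

fs = 10_000.0                      # sample rate: 10 kS/s
n = 300_000                        # ~30 s of data

# Time column generated in double precision, as in the original VI
t_dbl = np.arange(n, dtype=np.float64) / fs

# What the default SGL input of "Write to Spreadsheet File" effectively
# does: coerce the DBL values to single precision before formatting
t_sgl = t_dbl.astype(np.float32)

# Worst-case timestamp error over the 30 s record
err = np.max(np.abs(t_sgl.astype(np.float64) - t_dbl))
print(f"max timestamp error: {err:.3e} s")   # on the order of 1e-6 s near the end of the record

# Rounded to 4 digits (as in Excel) the two columns look identical;
# at full precision the SGL column shows an apparent timing jitter
print(t_dbl[-2:])
print(t_sgl[-2:])
```

With only four displayed digits the two columns agree, which is why the problem only surfaced in a viewer that showed the values at full precision.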
01-25-2007 11:25 AM
@altenbach wrote:
Of course, if this is just measurement data, don't be fooled by fake precision. Even SGL has a 22 bit mantissa, so if your DAQ hardware acquires in 16 bits, SGL is plenty. 🙂
Unless LabView uses a different floating-point representation than IEEE, a 32-bit floating-point number (Single) has a 23-bit mantissa, not 22 (24 bits of effective precision counting the implicit leading bit).
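For reference, a quick check of the IEEE 754 single-precision layout, again sketched in NumPy (the attribute names are NumPy's `finfo` fields, nothing LabVIEW-specific):

```python
import numpy as np

info = np.finfo(np.float32)
print(info.nmant)    # 23 stored mantissa bits (24 effective with the implicit leading 1)
print(info.iexp)     # 8 exponent bits; plus 1 sign bit = 32 bits total
print(info.eps)      # ~1.19e-07 relative precision, roughly 7 decimal digits

# A 16-bit ADC code is represented exactly in SGL, so raw readings are safe...
print(np.float32(65535) == 65535)   # True: 16 bits fit in a 24-bit mantissa

# ...but a timestamp near 30 s is not: adjacent SGL values there are about
# 1.9e-6 s apart, i.e. ~1% of a 100 microsecond sample interval
print(np.spacing(np.float32(30.0)))
```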