
Will I lose data converting from double-precision to single-precision float?

Solved!

Before you say yes... 

 

I am using a cRIO device with the scan-mode interface. The values the scan mode returns are in double-precision floating point. Apparently I am supposed to be able to choose between double-precision floating-point ("calibrated") and fixed-point ("uncalibrated") data, but that option appears to be exclusive to the FPGA interface and is not available with the scan engine. Both data types are 64 bits per value, so in terms of size on disk either way is basically the same anyway.

 

The system is continuously recording 13 channels of double-precision floating-point data at 200 Hz. Using the binary file write method, I have measured this at about 92 MB/hr to disk (over 120 MB/hr with TDMS, and a lot more with Write to Spreadsheet File). Simply put, this 92 MB/hr rate is just too much data to disk on this system.
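For reference, a back-of-envelope check of the raw sample payload (a sketch only; it ignores file-format overhead and timestamps, which is presumably where the gap to the measured 92 MB/hr comes from):

```python
channels, rate_hz = 13, 200          # figures from the post above
for name, bytes_per_sample in [("DBL", 8), ("SGL", 4)]:
    mib_per_hr = channels * rate_hz * bytes_per_sample * 3600 / 2**20
    print(f"{name}: {mib_per_hr:.1f} MiB/hr")
# DBL: 71.4 MiB/hr of pure samples
# SGL: 35.7 MiB/hr, i.e. roughly half the disk rate
```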

 

The modules I am recording from (the NI 9236, 9237, and 9215 C Series modules) have ADCs of 24 bits or fewer. Does this mean I don't need 64 bits to represent each value while maintaining the same accuracy?

 

Can I coerce/cast the double-precision floating-point values I am receiving from the scan engine I/O variables to a smaller data type like single-precision floating point and still maintain the same accuracy?

Message Edited by rex1030 on 08-27-2009 04:19 PM
---------------------------------
[will work for kudos]
Message 1 of 11

If you configure them properly, any data you lose will almost certainly be down in the bit noise of your ADC. SGL has a 24-bit significand (mantissa), so it can, in principle, hold the exact reading of your ADC. The conversion to DBL (performed by your interface) and then to SGL (done by you) is certainly lossy, but that loss is probably in the last bit or two. I do not know the specifics of the modules you listed, but if those ADCs have the dynamic range to relegate bit noise to the 24th bit, I'd be surprised, and interested in getting a few myself. Most likely the last few bits are just noise, and the conversion should not cause you any noticeable problems. I don't see any reason not to try; the savings in disk space seem worth it.
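A quick way to convince yourself of the significand claim (a sketch in Python/NumPy, with float32/float64 standing in for LabVIEW's SGL/DBL):

```python
import numpy as np

# Every 24-bit ADC code is exactly representable in a SGL (float32).
codes = np.array([0, 1, 12345678, 2**24 - 1], dtype=np.float64)
assert np.all(codes.astype(np.float32).astype(np.float64) == codes)

# 2**24 + 1 is the first integer a float32 cannot hold exactly.
print(np.float32(2**24 + 1) == 2**24 + 1)   # False

# Worst-case relative error of a DBL -> SGL round: half a ULP, ~6e-8.
print(np.finfo(np.float32).eps / 2)
```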

Message 2 of 11
Between noise and hardware accuracy, I doubt it makes much of a difference.
Message 3 of 11

Hi there

 

 

The ADC delivers a U32 raw value, so you will lose data when converting U32 to SGL: SGL covers a much wider value range than U32, but its 24-bit significand cannot represent every 32-bit integer exactly. Can you read out the raw U32 data from the ADC and save that? I don't know about C Series modules, but DAQmx allows you to read the ADC raw value.
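To illustrate the point (a sketch; the raw code below is made up, and float32 stands in for SGL):

```python
import numpy as np

raw = np.uint32(0xABCDEF12)           # a made-up 32-bit raw code
back = np.uint32(np.float32(raw))     # round-trip through SGL
print(hex(raw), hex(back))            # 0xabcdef12 0xabcdef00: low bits rounded away
# Codes up to 2**24 would survive the round trip exactly.
```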

Best regards
chris

CL(A)Dly bending G-Force with LabVIEW

famous last words: "oh my god, it is full of stars!"
Message 4 of 11
Solution
Accepted by topic author rex1030

Nickerbocker wrote:
Between noise and hardware accuracy, I doubt it makes much of a difference.

You can test it by looking at the difference between a DBL and the same DBL converted to SGL. But I support the hint from Nickerbocker: I do not think it will make a difference.
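That test is easy to script (a sketch with made-up strain-sized values; float32/float64 as SGL/DBL):

```python
import numpy as np

rng = np.random.default_rng(0)
dbl = rng.uniform(-5e-3, 5e-3, 100_000)            # hypothetical strain readings
sgl = dbl.astype(np.float32).astype(np.float64)    # DBL -> SGL -> DBL
print(np.abs(dbl - sgl).max())                     # ~3e-10, i.e. sub-nanostrain
```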



Besides which, my opinion is that Express VIs ~~Carthage~~ must be ~~destroyed~~ deleted
(Sorry no Labview "brag list" so far)
Message 5 of 11

For some reason the scan interface isn't letting me output the raw U32 values. I think it might have something to do with the fact that I am using both the scan engine and FPGA interfaces in the same project, but I am not sure. Any ideas why this is happening?

 

Here is a screenshot of me right-clicking one of the modules to try to get it to put out raw U32 values, as this KnowledgeBase article says I can.

 

fixed point on scan engine.PNG

 

I put in a service request (Reference #7256901) because, to be honest, I have to know for sure what's going on. I am not one of the engineers/scientists who designed the experiment, so I cannot say whether it is OK to chop off the end of the number. I would rather simply record all the data in a smaller amount of disk space than go tell them we have to compromise somewhere, whether on sample rate or precision.

 

 

---------------------------------
[will work for kudos]
Message 6 of 11

I guess I wasn't reading the article carefully enough.

 

"As of LabVIEW 8.6, there is an option to use the cRIO Scan Engine to interface modules directly to LabVIEW Real-Time by using I/O variables.  In this case, the option to select raw or calibrated is not available in the properties dialog.  When using Scan Mode, all data is automatically returned scaled and calibrated."

 

Is there any way around this?

---------------------------------
[will work for kudos]
Message 7 of 11
Upon testing, it appears the difference between the single-precision (coerced) value and the original double-precision value is in the pico- to nanostrain range. It's negligible, so no worries. Thanks for the help, guys.
---------------------------------
[will work for kudos]
Message 8 of 11

Hi there

 

I was curious, so I did some testing myself. I see that the voltage resolution and the difference between the SGL and DBL representations are both around 10^-6 V, giving an error of about 1 bit, so you will lose roughly 1 bit of information. But it has already been stated that the effective number of bits is < 24 (though I haven't read the data sheet), so using SGL should be OK.
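The float32 spacing near full scale backs this up (a sketch, assuming a reading near 10 V):

```python
import numpy as np

# Gap between adjacent SGL (float32) values around a 10 V reading
print(np.spacing(np.float32(10.0)))   # ~9.5e-07 V, the same order as the
                                      # ~1e-6 V resolution measured above
```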

Best regards
chris

CL(A)Dly bending G-Force with LabVIEW

famous last words: "oh my god, it is full of stars!"
Message 9 of 11

Technically you could lose a bit or so, as a single's mantissa is 23 bits (24 counting the implicit leading bit), though it's probably of zero relevance.

 

Just for the sake of it, you could multiply your double by 2^32 and store it in a U32, using it as fixed-point (this assumes the value is normalized to [0, 1); see the sketch below). It only needs to be done when reading and writing, as the calculations can be done as doubles.

 

That is, if you're after that last bit and still want less data written to disk. 🙂

 

(Not sure if it's helpful, I just like the theory.)
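A sketch of that scheme in Python (the full-scale value is a made-up assumption, and a signed variant is used so negative readings work too):

```python
import numpy as np

FULL_SCALE = 0.025            # hypothetical +/- full-scale range of the channel
SCALE = 2**31 / FULL_SCALE    # map [-FS, +FS) onto the signed 32-bit range

def encode(x):
    # Quantize a DBL reading to a 32-bit fixed-point code for disk.
    code = np.round(x * SCALE)
    return np.int32(np.clip(code, -2**31, 2**31 - 1))

def decode(code):
    # Restore a DBL from the stored code after reading it back.
    return float(code) / SCALE

x = 0.0123456789
print(abs(decode(encode(x)) - x))   # at most FS / 2**32, ~6e-12 here
```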

 

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 10 of 11