LabVIEW FXP-Number Rounding

Dear Community,

I am currently developing a robust real-time LabVIEW program where accuracy is paramount, and correct rounding is crucial for the calculated values.

I have a specific question for the LabVIEW Team or experienced users:

X is a 33-bit number, and Y is also a 33-bit number. When I multiply these two numbers (X * Y), the result is a 66-bit number. However, LabVIEW only allows for 64-bit numbers.

My question is: Does rounding occur on the product of X * Y, or are X and Y rounded before the multiplication to ensure that the result fits into a 64-bit number?
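To make the two possibilities concrete, here is how I picture them in text form (Python stands in for the block diagram; the 1-integer-bit / 32-fractional-bit layout and the round-half-up mode are placeholder assumptions, not necessarily what LabVIEW does):

```python
# Sketch of the two rounding orders in question, modeling FXP values as
# raw integers with a known number of fractional bits. The layout below
# (1 integer bit, 32 fractional bits) and round-half-up are assumptions.

def round_frac_bits(raw, frac_in, frac_out):
    """Re-quantize a raw fixed-point integer from frac_in to frac_out
    fractional bits, rounding half up."""
    shift = frac_in - frac_out
    if shift <= 0:
        return raw << -shift
    return (raw + (1 << (shift - 1))) >> shift

FRAC = 32
x = (1 << FRAC) | 0xDEADBEEF          # a 33-bit raw operand
y = (1 << FRAC) | 0x12345678          # another 33-bit raw operand

# (a) Exact 66-bit product first (64 fractional bits), then one rounding
#     step down to 62 fractional bits so it fits a 64-bit word:
a = round_frac_bits(x * y, 2 * FRAC, 62)

# (b) Round each operand to 31 fractional bits first, then multiply
#     (the 62-fractional-bit product already fits in 64 bits):
b = round_frac_bits(x, FRAC, 31) * round_frac_bits(y, FRAC, 31)

print(a, b, "identical" if a == b else "different")
```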


Your help on this matter would be greatly appreciated.

Best regards, Darko

Message 1 of 4

Hi Darko,


is there a reason to use FXP on a real-time target? Which exact FXP representations do you use?

Can't you use DBL with its 53-bit mantissa?

Can't you use integer data and do the scaling (like FXP does) on your own?
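As a text sketch of what I mean with the last two questions (Python stands in for the LabVIEW diagram; the 2^-32 scale is only an example):

```python
# Quick check of the DBL limit, plus "do the scaling on your own" with
# plain integers. The 2**-32 scale factor is only an example.

# DBL represents integers exactly only up to 2**53:
print(float(2**53 + 1) == float(2**53))   # True: 2**53 + 1 rounds away

# Manual FXP-style bookkeeping: carry raw integers plus a known scale.
SCALE_BITS = 32                    # value = raw * 2**-SCALE_BITS
raw_x = 0x1_DEAD_BEEF              # a 33-bit raw reading
raw_y = 0x1_1234_5678              # another 33-bit raw reading

raw_p = raw_x * raw_y              # exact integer product (66 bits here)
print(raw_p * 2.0**(-2 * SCALE_BITS))  # convert to DBL only for display
```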


Out of interest: what is the physics behind this 66-bit value?

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 4

Hello Gerd,

Thanks for the reply.


The reason for using FXP is performance, since calculations with FXP numbers are faster than with DBL.


What interests me is how LabVIEW rounds a number that could, for example, be 66 bits long. Even if I choose an FXP representation myself, the question stays the same; only the bit length changes (it becomes shorter). Specifically, I'm interested in the accuracy of the representation: the measurements (plasma systems) should be as precise as possible, and I want to avoid introducing errors through the measurement representation or the calculations.


Kind Regards, Darko

Message 3 of 4

To be clear, LabVIEW does allow FXP numbers with 66 integer bits; it's just that the increment is 4.


When multiplying two 33-bit numbers, the result will probably still be very exact, but that would be easy to test.
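A quick host-side version of such a test (Python; truncating the two lowest product bits is an assumption standing in for whatever output configuration your FXP multiply actually uses):

```python
# Measure what a 66 -> 64 bit reduction can cost at worst, assuming the
# two lowest product bits are simply truncated (the real FXP output
# configuration on the target may round instead).
import random

worst = 0
for _ in range(100_000):
    x = random.getrandbits(33)
    y = random.getrandbits(33)
    exact = x * y                  # exact product, up to 66 bits
    kept = (exact >> 2) << 2       # keep only the top 64 of 66 bits
    worst = max(worst, exact - kept)

print("worst absolute error:", worst)            # at most 3 raw LSBs
print("worst relative error:", worst / 2.0**66)  # a few parts in 2**66
```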


Can you explain why you need so many digits? If this is measurement data, do you really have a 30+ bit digitizer with noise below 1 part in 2^33? Do you really need a precision that corresponds to the length of your toe compared with the distance to the moon, and do you then need to calculate a value that is 2^33 times more precise still? How are you going to present that data to the users?
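For scale, a quick sanity check of that comparison (the 4 cm toe is an assumed number):

```python
# Toe length vs. Earth-moon distance, compared with 1 part in 2**33:
print(0.04 / 3.84e8)   # ~1.0e-10  (4 cm over ~384,000 km)
print(1 / 2**33)       # ~1.2e-10  (one 33-bit LSB at full scale)
```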


If you are doing theoretical calculations that require that kind of precision, maybe you should implement bignum math. Then the bits are limited only by the computer's memory.
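For example, in a language with built-in bignum integers (Python here; LabVIEW has no native bignum type, so you would have to build or wrap one) the 66-bit product stays exact:

```python
# With arbitrary-precision integers, the 33 x 33 bit product stays exact:
x = (1 << 33) - 1        # largest 33-bit value
y = (1 << 33) - 1
p = x * y                # exact, never rounded
print(p.bit_length())    # 66 -- limited only by available memory
```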


Message 4 of 4