


json with type double number

Solved!
Go to solution

How do I avoid the value conversion that occurs when flattening a type double number to JSON? A value of 9999.12 will convert to 9999.12000000008 in the JSON string.

0 Kudos
Message 1 of 8
(11,546 Views)

Hi lhandph,

 

What's the problem?

The conversion looks OK to me… 😄

 

(Create a numeric control. Give it the value "9999.12". Change display settings to show 20 significant digits: It will show "9999.1200…0790"!)
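For anyone reading without LabVIEW open, here is a minimal cross-check of the same effect in a text language (Python, purely for illustration; only the 9999.12 value comes from this thread): asking for 20 significant digits exposes the stored binary value, while the default display shows the shortest round-trip form.

    value = 9999.12  # stored as the nearest 64-bit binary double, not exactly 9999.12

    print(f"{value:.20g}")  # 20 significant digits: roughly 9999.1200000000008...
    print(repr(value))      # shortest string that round-trips: 9999.12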

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
0 Kudos
Message 2 of 8
(11,541 Views)

LabVIEW will maintain the full precision of the DBL when converting to a JSON string. To get around this, I usually do the conversion to a string myself with the required level of precision (e.g. using %.2f).
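As a rough text-language sketch of that approach (Python here, purely for illustration; the "reading" field name is made up), you format the number yourself at the precision you need and splice it into the JSON text, so the serializer never expands it back out to 17 digits:

    value = 9999.12

    # Equivalent of LabVIEW's %.2f: format to exactly two decimal places,
    # then build the JSON text around the already-formatted string.
    payload = '{"reading": %.2f}' % value

    print(payload)  # {"reading": 9999.12}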


LabVIEW Champion, CLA, CLED, CTD
(blog)
0 Kudos
Message 3 of 8
(11,539 Views)

Thanks, Sam. I thought of that too, but my understanding is that the web service I am sending this to requires type double. Can I send all data as string type in the JSON POST, or does that depend on the web service?

0 Kudos
Message 4 of 8
(11,527 Views)
Solution
Accepted by lhandph

My understanding is that JSON is pretty loosely typed - I don't think most JSON interpreters will detect the difference between MyDBL = 121.000000001 and MyDBL = "121.00" (unless the consumer specifically checks the type; I've definitely replaced a DBL with a String in some of my WebSockets work without needing to change any of the JavaScript). You'd need to test it, though.

 

Edit: Actually, having just tried it with this simple VI, LabVIEW's Unflatten From JSON throws an error:

[Attachment: JSONTypes.png]

 

If you're communicating with an actual web service that's expecting a DBL, then what's the problem with sending 12.0000999999?
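To make the loose-typing point above concrete, here is a small Python sketch (the MyDBL name comes from the example above; everything else is illustrative). A dynamically typed consumer can coerce the quoted form back to a number, but a strict, typed parse - like Unflatten From JSON with a DBL wired to its type input - rejects it, which matches the error shown above:

    import json

    # The same field sent as a JSON number and as a quoted string.
    as_number = json.loads('{"MyDBL": 121.000000001}')
    as_string = json.loads('{"MyDBL": "121.00"}')

    print(type(as_number["MyDBL"]))            # <class 'float'>
    print(type(as_string["MyDBL"]))            # <class 'str'>

    # A forgiving consumer can still coerce the string form to a number...
    print(float(as_string["MyDBL"]) == 121.0)  # True
    # ...but a parser that enforces a numeric type raises an error instead.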


LabVIEW Champion, CLA, CLED, CTD
(blog)
Message 5 of 8
(11,518 Views)
Solution
Accepted by lhandph

The conversion is unavoidable if you are starting with a double value. Remember that the 9999.12 you see in the editor is also a decimal conversion from the actual stored 64-bit value: it has been converted from what you typed in to the closest (inexact) DBL value, then re-generated and coerced/formatted (if necessary) for display.

 

Both 9999.12 and 9999.1200000000008 convert to the exact same double value, and there are actually an infinite number of valid decimal strings that map to that same value. For display purposes the convention is to choose the shortest decimal string that guarantees an exact round trip. That way, when you type .1 you see 0.1 in the display and not 0.100000000000000006. Although the second string is slightly closer to the stored value, both strings end up at the same DBL value when rounded correctly and are therefore exactly equivalent.

 

For the JSON strings, I'm assuming performance is more of a concern than readability, so it looks like we just use 17 digits as the minimum number guaranteed to round-trip correctly in all cases. That avoids all the extra logic required to figure out the shortest possible decimal representation of each data value.
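Both halves of that explanation are easy to check in any text language. Here is a Python sketch using the digit strings from this thread (only those values come from the posts above; the rest is illustrative):

    # Both decimal strings map to the exact same 64-bit double...
    a = float("9999.12")
    b = float("9999.1200000000008")
    print(a == b)         # True

    # ...and the short form is simply the shortest string that round-trips.
    print(repr(a))        # 9999.12

    # A flat rule of 17 significant digits also round-trips in every case,
    # without the extra work of finding the shortest representation.
    s = f"{a:.17g}"
    print(s)              # 9999.1200000000008
    print(float(s) == a)  # True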

Message 6 of 8
(11,495 Views)

The only concern we had was maintaining the two decimal places of accuracy rather than the inexact approximate value. We were worried about the web service rounding or truncating the long inexact value instead of receiving and using exactly what we sent from our application program.

Thanks for the help. I will discuss this with the web service developers and see whether we can either send a string representation of the value (which they convert to a number on their end) or keep it as a type double and have them round or truncate it on their end.

Thanks again.

0 Kudos
Message 7 of 8
(11,487 Views)

That's where your misunderstanding is: a floating-point number isn't an exact representation of a decimal number; it's an approximation. If you take your DBL value and display it in an indicator, LabVIEW rounds it to something sensible for display. If you turn up the precision to show enough digits, you'll see the 0.0000000000008 difference between the stored number and how it's rounded for display in the indicator.

 

If I wanted the exact decimal representation of the number, I would probably send it as an integer, multiplied by 10 to the power of the number of decimal places (e.g. 19.12 x 100 = 1912), and then divide back on the other end.
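A minimal sketch of that fixed-point idea (Python for illustration; the field and scale names are made up, not from a real service):

    import json

    value = 19.12   # the reading you care about, to two decimal places
    scale = 100     # 10 to the power of the number of decimals

    # Sender: transmit an exact integer instead of an inexact double.
    payload = json.dumps({"reading_x100": round(value * scale)})
    print(payload)  # {"reading_x100": 1912}

    # Receiver: divide back down to recover the original reading.
    received = json.loads(payload)
    print(received["reading_x100"] / scale)  # 19.12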


LabVIEW Champion, CLA, CLED, CTD
(blog)
Message 8 of 8
(11,480 Views)