
Flatten To String gives different results on an RT system compared to a Windows system?

Solved!

When I flatten a moderately complex structure to a string in LabVIEW on Windows, I get a slightly different result than when I do the same thing in (interactive) LabVIEW RT. Looking at the hex display of the strings produced, toward the tail of the strings I see:

Windows: 999A 0000 ... FB99 9999 9999 9999 9A00
RT:      9999 A000 ... FB99 9999 9999 99A0 0000

Shouldn't I get the same value from both systems?

0 Kudos
Message 1 of 8
(3,211 Views)

They should be identical. Can you share the data structure, loaded with default values? Try simplifying by removing elements until they match.

0 Kudos
Message 2 of 8
(3,201 Views)

The VI with which I tested is attached. Testing was done with LabVIEW 11.0.1f2 on WinXP and a cRIO-9024 & RIO v12.0.

I too thought that they should be identical. At this point I'm less interested in paring down the data structure until it suits LabVIEW than I am in coming up with something in LabVIEW that works with the data structure. This is not the only data structure that LabVIEW stumbles over: I've tried six totally different structures, one more complex and most less complex, and five of them produce a mismatch.

 

The attached VI was built to be run on a host system and then deployed and run interactively on an RT system so that the different results can be seen.

0 Kudos
Message 3 of 8
(3,189 Views)
Solution
Accepted by topic author WNM

Your problem is the use of the Extended data type. I don't have a cRIO to do more in-depth testing, but the only elements that don't match are the two Smoothing elements. The two values differ by approximately -5.54976E-18 (i.e., not much; less, in fact, than the machine epsilon value). If you use the context help and hover over an EXT wire, you'll see a note saying that precision varies by platform, so this is to be expected. It is also documented in the help, under the numeric data types table topic. Is there any reason why double precision is not sufficient?
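To put that number in context, here is a minimal C sketch (C standing in for LabVIEW's graphical G) that prints the quoted difference next to the relevant machine epsilons; on x86, C's long double is typically the same 80-bit x87 format as LabVIEW's EXT type:

/* Prints the difference quoted above next to the machine epsilons.
   On x86, long double is usually the 80-bit x87 format (epsilon 2^-63);
   double epsilon is 2^-52. */
#include <float.h>
#include <stdio.h>

int main(void)
{
    double observed = 5.54976e-18;   /* magnitude of the quoted difference */
    printf("observed difference : %.3e\n", observed);
    printf("double epsilon      : %.3e\n", DBL_EPSILON);
    printf("long double epsilon : %.3Le\n", LDBL_EPSILON);
    printf("below double eps?   : %s\n",
           observed < DBL_EPSILON ? "yes" : "no");
    return 0;
}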

Message 4 of 8
(3,182 Views)

@nathand wrote:

Your problem is the use of the Extended data type. ... Is there any reason why double precision is not sufficient?


Thank you! That was exactly the problem, and double precision will be fine for that value (and for any others that were causing problems).

 

0 Kudos
Message 5 of 8
(3,174 Views)

In case you're not aware of it - you can use the "Equals" comparison on two clusters that contain the same type of data, and you'll get a cluster of booleans with the same structure and element names.  So I just unflattened the two sample strings you provided, put them through an Equals, and checked which elements didn't match.
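For readers without LabVIEW handy, the same idea expressed in C terms; the struct and its field names are hypothetical stand-ins for the cluster in the attached VI, and nextafter() fakes the tiny platform drift:

/* Sketch of the element-wise comparison idea: decode two byte strings
   into the same structure type, then compare field by field, yielding
   one boolean per element (the C analogue of Equals on two clusters).
   Build with: cc compare.c -lm */
#include <math.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {              /* hypothetical "cluster" */
    double  gain;
    double  smoothing;        /* the element that differed in this thread */
    int32_t channel;
} Params;

typedef struct {              /* one boolean per element */
    bool gain, smoothing, channel;
} ParamsEqual;

static ParamsEqual compare_params(const Params *a, const Params *b)
{
    ParamsEqual eq = {
        .gain      = a->gain == b->gain,
        .smoothing = a->smoothing == b->smoothing,
        .channel   = a->channel == b->channel,
    };
    return eq;
}

int main(void)
{
    Params win = { 1.0, 0.1, 3 };
    Params rt  = { 1.0, nextafter(0.1, 1.0), 3 };  /* one ulp of drift */
    ParamsEqual eq = compare_params(&win, &rt);
    printf("gain: %d  smoothing: %d  channel: %d\n",
           eq.gain, eq.smoothing, eq.channel);     /* prints 1  0  1 */
    return 0;
}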

Message 6 of 8
(3,170 Views)

I know about the Compare Aggregates/Compare Elements modes and how to see which Boolean elements differ in clusters; it just did not occur to me to unflatten the two strings and then make the comparison. I would have thought that the shared cluster type wired to Unflatten From String on a given platform would have forced both clusters back into a matching state, but apparently the difference is preserved in this case. I'm grateful that it made identifying the problem relatively easy (for you, at least). I'll keep this trick in mind for the next time something like this happens. Thanks for sharing!

0 Kudos
Message 7 of 8
(3,166 Views)

@WNM wrote:

... I would have thought that the shared cluster type wired to Unflatten From String on a given platform would have forced both clusters back into a matching state, but apparently the difference is preserved in this case. ...


Extended precision format is indeed platform dependent. On Windows (and probably all x86 CPUs with an integrated floating-point unit) LabVIEW uses the 80-bit extended format of the x87 FPU for these numbers, so only slightly more precise than the 64 bits of double precision. On ancient SPARC CPUs LabVIEW used a software emulation library with 128 bits of data; on 68k systems it used the Motorola 96-bit floating-point format, which effectively only uses 80 bits too, adding 2 filler bytes between mantissa and exponent. On PowerPC CPUs, which most NI RT systems are based upon, LabVIEW uses a normal 64-bit double-precision floating-point number, since PPC CPUs do not provide a native extended-precision format. (PowerPC-based Macintoshes, however, used the Apple double-double format, which is in fact a combination of two doubles to get both a broader exponent and a broader mantissa.) And PA-RISC-based systems also simply used a normal double for the extended format.
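C's long double follows the same platform story, so a quick probe like this (compiled on each target) makes the differences visible; the commented values are what I would expect, not guarantees:

/* Probe the native "extended" format via C's long double: 80-bit x87
   on x86 (64 mantissa bits), plain 64-bit double (53 mantissa bits)
   on CPUs without a native extended type. */
#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("sizeof(long double): %zu bytes\n", sizeof(long double));
    printf("mantissa bits      : %d\n", LDBL_MANT_DIG);  /* 64 on x87, 53 if plain double */
    printf("max binary exponent: %d\n", LDBL_MAX_EXP);
    printf("epsilon            : %Le\n", LDBL_EPSILON);
    return 0;
}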

 

If LabVIEW flattened these native representations directly, you would not only end up with completely different bit patterns in the data but also with different byte lengths, which would be terrible for cross-platform operation. Therefore the LabVIEW developers chose to always represent an extended floating-point value in the flattened format as a 128-bit extended-precision value with a 15-bit exponent and a 112-bit mantissa. Each platform does the right thing to convert between the flattened format and the native format, but that of course involves floating-point operations with a natural imprecision, on the order of the format-specific epsilon for every basic operation; depending on the operations used and their order, the actual difference could end up above 1 epsilon. The fact that the imprecision seems to be below 1 epsilon in this case could be chance, or it could mean that some very smart people on the LabVIEW development team understood the floating-point bit-manipulation problems very well and chose one of the most precise conversion methods.
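I don't know LabVIEW's actual conversion code, but the widening direction is easy to sketch in C under the layout just described (1 sign bit, 15-bit exponent with bias 16383, 112-bit mantissa, flattened big-endian); everything here is illustrative:

/* Widen an IEEE-754 double (1 sign, 11 exponent, 52 fraction bits) into
   a 128-bit layout with a 15-bit exponent and 112-bit fraction, written
   big-endian. Normal numbers only; zero, subnormals, infinities and NaN
   are left out for brevity. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static void double_to_flat128(double d, uint8_t out[16])
{
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);                /* raw IEEE-754 bits */

    uint64_t sign = bits >> 63;
    uint64_t exp  = (bits >> 52) & 0x7FF;          /* bias 1023 */
    uint64_t frac = bits & ((1ULL << 52) - 1);

    uint64_t exp128 = exp - 1023 + 16383;          /* rebias to 15 bits */

    /* The 52 fraction bits occupy the top of the 112-bit fraction field:
       the high word gets sign, exponent and the top 48 fraction bits,
       the low word the remaining 4 bits; the rest is zero padding. */
    uint64_t hi = (sign << 63) | (exp128 << 48) | (frac >> 4);
    uint64_t lo = (frac & 0xF) << 60;

    for (int i = 0; i < 8; i++) {
        out[i]     = (uint8_t)(hi >> (56 - 8 * i));
        out[8 + i] = (uint8_t)(lo >> (56 - 8 * i));
    }
}

int main(void)
{
    uint8_t flat[16];
    double_to_flat128(0.1, flat);                  /* 0.1 is inexact in binary */
    for (int i = 0; i < 16; i++)
        printf("%02X%s", flat[i], (i % 2) ? " " : "");
    printf("\n");                      /* 3FFB 9999 9999 9999 A000 0000 0000 0000 */
    return 0;
}

This direction is exact, since 52 fraction bits fit easily into 112. It is the other direction, unflattening 128 bits back down to a native 64-bit or 80-bit format, where rounding has to happen, and that is where the small differences discussed above come from.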

 

Extra note: even if you use double-precision values, which in theory are represented exactly the same on every single CPU, you should never expect the results of floating-point computations to be exactly the same across platforms. There are various reasons for the results to differ by less than the format-specific epsilon, including different microcode implementations of the arithmetic operations; and especially when you cascade several floating-point operations, you can end up with differences much higher than the epsilon.
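The order-dependence part is easy to demonstrate even on a single machine; in this small C snippet the same three doubles give different sums depending only on grouping:

/* Floating-point addition is not associative: regrouping the same three
   doubles changes the result, so any platform difference in evaluation
   order or intermediate precision can shift results by an ulp or more. */
#include <stdio.h>

int main(void)
{
    double a = 1e16, b = 1.0, c = 1.0;
    printf("((a + b) + c) - a = %.1f\n", ((a + b) + c) - a);
    /* 0.0: each 1.0 is rounded away against the huge a */
    printf("(a + (b + c)) - a = %.1f\n", (a + (b + c)) - a);
    /* 2.0: this grouping preserves them */
    return 0;
}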

Rolf Kalbermatter
My Blog
Message 8 of 8
(3,139 Views)