03-06-2026 03:12 PM
I've got a module that I believed I had tested and debugged years ago, but seldom used. I went to use it and found a bug. It is currently in LV 2024, but I checked back to LV 2016 and the same bug appears, so it's not something that NI inadvertently created in newer LV versions since I first wrote it. Must have been there for quite a while.
The basic functionality is that my sub-VI runs a linear fit on a small set of data points, using LV timestamps (converted to doubles, not the "new" super-long timestamp) as the x axis. But the Linear Fit VI fails to give the right results. This is the NI_AALPro.lvlib:Linear Fit.vi that you can find in the Mathematics/Fitting palette, or in vi.lib/Analysis/6fits.llb. For test data, use:
x y
91.0 130
92.0 130
93.0 131
For this data, the slope is 0.5, as expected.
But now use some timestamps for x:
x y
3855668091.0 130
3855668092.0 130
3855668093.0 131
For this data, the calculated slope is -0.0018, though the slope should still be 0.5, since the data has only been translated along the x axis (a long way).
(You may want to change the formatting of the X input array when doing this test to be able to see the data accurately.)
The same thing happens with Linear Fit Coefficients.vi, and for both VIs with any of the three available algorithms.
I'm probably just missing something simple here, some limitation related to resolution or numeric type or something. Can anyone explain why this fit fails when using timestamps as the x data? Or how to make it work?
My current work-around is to subtract a large number (385566000) from all the timestamps, to shift the x data down into a range that works. I guess that's OK, but it's still a mystery to me why this is happening.
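For anyone who wants to reproduce the symptom outside LabVIEW: here is a minimal Python sketch (an assumption about what may be going on internally — NI doesn't document the algorithm here) of a linear fit done two ways in IEEE-754 double precision. The textbook normal equations lose the slope for timestamp-sized x values, while centering x to its mean first recovers it:

```python
# Naive normal-equation linear fit vs. a mean-centered fit, both in float64.
# Illustration only; this is not necessarily LabVIEW's actual algorithm.

def slope_naive(x, y):
    """Slope from the textbook normal equations:
    (n*Sxy - Sx*Sy) / (n*Sxx - Sx^2)."""
    n = float(len(x))
    sx = sum(float(v) for v in x)
    sy = sum(float(v) for v in y)
    sxy = sum(float(a) * float(b) for a, b in zip(x, y))
    sxx = sum(float(v) * float(v) for v in x)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

def slope_centered(x, y):
    """Same fit, but with x shifted to its mean first."""
    xm = sum(float(v) for v in x) / len(x)
    xc = [float(v) - xm for v in x]
    return sum(a * b for a, b in zip(xc, y)) / sum(a * a for a in xc)

x_small = [91.0, 92.0, 93.0]
x_big = [3855668091.0, 3855668092.0, 3855668093.0]  # timestamp-sized
y = [130.0, 130.0, 131.0]

print(slope_naive(x_small, y))   # 0.5
print(slope_naive(x_big, y))     # wrong: a tiny slope nowhere near 0.5,
                                 # because the true denominator (6) drowns
                                 # in rounding error of terms near 1e20
print(slope_centered(x_big, y))  # 0.5
```

The centered version is exactly the "subtract a large number" work-around, with the mean of x as the number being subtracted.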
TIA,
DaveT
03-06-2026 05:42 PM - edited 03-06-2026 05:42 PM
I did a quick check to find the threshold where this starts to happen. In the test case I made (also with three points), it started to occur when x was around 31,635,422. Past that point the results alternated: one correct result, then an incorrect one, then back to a correct one.
If I had to guess, this likely means that whatever sort of internal floating point values it uses hit a certain point where the number is so high that it can't be represented exactly. The maximum value a 64-bit (double-precision) float can represent while still incrementing by steps of exactly 1 is 2^53. By a really fun coincidence, if you take the square root of 2^53, then divide it by 3, you get 31,635,421 (plus some decimals).
So, likely the "bug" is that it uses 64-bit floats internally, and once the sums of squares it accumulates grow too large to be represented exactly, strange things begin to happen.
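That 2^53 limit, and the sqrt(2^53)/3 coincidence, are easy to check — a quick Python sketch (the same IEEE-754 doubles LabVIEW uses):

```python
import math

# Above 2^53, consecutive integers stop being distinguishable in float64:
print(float(2**53) == float(2**53 + 1))  # True
print(float(2**53 - 1) == float(2**53))  # False: below the limit, still exact

# The observed threshold of ~31,635,422 matches sqrt(2^53) / 3:
print(math.sqrt(2.0**53) / 3.0)          # ~31635421.87
```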
03-06-2026 11:39 PM
Thanks for the time you spent figuring this out. I think your explanation makes sense. The only remaining mystery is: how did I manage to validate this code some years ago when I wrote it? I must have called it differently somehow...
03-07-2026 07:58 AM
As has been said, you are running into a linear algebra conditioning problem if x is gigantic. Have a look at slide #15 of my 2017 presentation for a more severe example. (see also)
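To put a number on how bad the conditioning is, one can form the 2x2 normal-equations matrix for the timestamp data in exact integer arithmetic and estimate its condition number (a sketch, not taken from the linked slides; it assumes a plain normal-equations formulation):

```python
# Condition number of the normal-equations matrix [[n, Sx], [Sx, Sxx]]
# for the timestamp-sized x data, using exact Python integers.
x = [3855668091, 3855668092, 3855668093]
n = len(x)
sx = sum(x)
sxx = sum(v * v for v in x)

trace = n + sxx              # ~4.46e19
det = n * sxx - sx * sx      # exactly 6 for this data

# When det << trace, the eigenvalues are ~trace and ~det/trace, so the
# condition number is roughly trace^2 / det:
cond = float(trace) ** 2 / det

print(det)           # 6
print(cond > 2**52)  # True: far beyond double precision's ~4.5e15 resolution
```

A condition number around 3e38 means double precision has no hope of solving the system directly, which is exactly why shifting (centering) x works so well.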