03-16-2017 05:59 PM
In this thread, it appears that directly subtracting timestamps (with big coercion dots!) is more accurate than converting one input explicitly to DBL first.
While the To DBL conversion eliminates the coercion dots, it introduces errors of about a millisecond. Interesting!
03-17-2017 05:26 AM - edited 03-17-2017 05:50 AM
I wouldn't call that very surprising, although 1 ms of error for a simple subtraction doesn't sound completely right.
A double has a mantissa with 52 bits. The current number of seconds since January 1, 1904 is getting close to overflowing 32 bits (around 2040 it will overflow 32 bits, and then using a UInt32 for a LabVIEW timestamp, as LabVIEW 3.x did, will start to wreak havoc).
That leaves around 20 bits for the fractional part of a second, which puts the resolution in the range of microseconds rather than milliseconds.
A timestamp, on the other hand, has an Int64 for the seconds and a UInt64 for the fractional seconds. Although I think it currently only really uses the most significant 32 bits of the fractional seconds, that still amounts to a resolution of better than 1 ns. And LabVIEW does the math using the full 32 bits, and maybe even all 64 bits (normally leaving the least significant 32 bits as 0), so there is a much larger range before rounding errors become significant enough to be noticed.
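A quick way to sanity-check that bit budget outside of LabVIEW is to look at the spacing of doubles at today's magnitude of "seconds since 1904". A minimal Python sketch (the 1904 epoch and bit widths are as described above; the rest is plain double-precision arithmetic, so this is an illustration, not LabVIEW behavior):

```python
# Sketch: resolution of a DBL vs the fixed-point timestamp fraction at the
# current magnitude of "seconds since 1904-01-01" (LabVIEW's epoch).
import math
from datetime import datetime, timezone

epoch_1904 = datetime(1904, 1, 1, tzinfo=timezone.utc)
seconds_since_1904 = (datetime.now(timezone.utc) - epoch_1904).total_seconds()

# Spacing between adjacent doubles at this magnitude -- the best a DBL can do
# for an absolute time this large.
print(f"seconds since 1904: {seconds_since_1904:.0f}")
print(f"DBL step size     : {math.ulp(seconds_since_1904):.2e} s")  # ~5e-7 s, i.e. sub-microsecond

# The timestamp's fractional part, by contrast:
print(f"u32 fraction step : {2**-32:.2e} s")  # ~0.23 ns if only the upper 32 bits are used
print(f"u64 fraction step : {2**-64:.2e} s")  # with the full 64 fractional bits
```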
But the real problem might be more in the To Date/Time and From Date/Time nodes. The fractional seconds in there may only have millisecond resolution. Together with rounding errors when converting between that ms resolution and the full timestamp resolution, there could be an extra rounding error that makes the rounding operation in the To Double conversion trip over a limit.
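If that guess about millisecond-only fractional seconds is right (it is an assumption, not confirmed behavior), a single quantization step is already enough to explain the observed magnitude; a minimal sketch:

```python
# Hedged sketch: quantizing the fractional seconds to 1 ms and back, as the
# Date/Time conversion is suspected (not confirmed) to do internally.
frac = 0.9996                          # fractional part of a timestamp, in seconds
frac_ms = round(frac * 1000) / 1000    # kept only to millisecond resolution
print(f"error: {(frac_ms - frac) * 1e3:+.1f} ms")   # up to +/-0.5 ms per conversion
# Two such conversions feeding a subtraction can therefore disagree by roughly 1 ms.
```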
03-17-2017 11:43 AM
Yes, a millisecond seems way too much. Probably worth investigating.
03-20-2017 02:06 PM
Hi guys!
This topic challenged what I thought about LabVIEW performance regarding coercion dots.
I started to benchmark a simple VI and it grew like a little monster, with some stats along the way.
If you run it one iteration at a time, sometimes the coercion dot gives a lower time in microseconds or ticks; I guess that relates to the memory location where the value gets written.
But if you "run it continuously" for a few thousand iterations, in the long run the explicit conversion does help the execution time.
Please check it and tell me if I'm missing something or if there is a better way to benchmark/analyse it.
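For what it's worth, the same benchmarking idea expressed in a text language (a Python analogy, not the VI in question; the point is to compare medians of many repeated runs rather than single executions):

```python
# Rough text-language analogy of the benchmark: time many iterations of the
# implicitly promoted vs explicitly converted subtraction and compare medians,
# since single runs are too noisy to tell the two apart.
import statistics
import timeit

setup = "a = 123456789; b = 0.5"
coerced   = "c = a - b"            # int promoted implicitly (the 'coercion dot' analogue)
converted = "c = float(a) - b"     # explicit conversion first

def bench(stmt, repeats=20, number=100_000):
    return statistics.median(timeit.repeat(stmt, setup=setup, repeat=repeats, number=number))

print("implicit :", bench(coerced))
print("explicit :", bench(converted))
```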
03-20-2017 02:44 PM
@niarena wrote:
Hi guys!
This topic challenged what I thought about LabVIEW performance regarding coercion dots.
I started to benchmark a simple VI and it grew like a little monster, with some stats along the way.
If you run it one iteration at a time, sometimes the coercion dot gives a lower time in microseconds or ticks; I guess that relates to the memory location where the value gets written.
But if you "run it continuously" for a few thousand iterations, in the long run the explicit conversion does help the execution time.
Please check it and tell me if I'm missing something or if there is a better way to benchmark/analyse it.
You left debugging on and forgot to save default data. Other than that, the FOR loops are fairly useless, aren't they?
03-20-2017 03:23 PM
Are you running the top half of the code and the bottom half at the same time? If so, how do you know that one half of the code isn't stealing clock cycles from the other half?
03-20-2017 04:16 PM - edited 03-20-2017 04:20 PM
@JÞB wrote:
@niarena wrote:
Hi guys!
This topic challenged what I thought about LabVIEW performance regarding coercion dots.
I started to benchmark a simple VI and it grew like a little monster, with some stats along the way.
If you run it one iteration at a time, sometimes the coercion dot gives a lower time in microseconds or ticks; I guess that relates to the memory location where the value gets written.
But if you "run it continuously" for a few thousand iterations, in the long run the explicit conversion does help the execution time.
Please check it and tell me if I'm missing something or if there is a better way to benchmark/analyse it.
You left debugging on and forgot to save default data. Other than that, the FOR loops are fairly useless, aren't they?
And the attached VI shows a bit of a performance improvement from explicitly converting the I32 to DBL (roughly 18% faster) on an Intel i3 under Windows 10 x64. NOTE: Snippets do not preserve execution options, so use the attached VI.
03-20-2017 04:53 PM
Just to prove that benchmarks are difficult to construct: ignore my last example, it is flawed. The explicit conversion is about 35% faster than coercion.
10-12-2024 11:24 AM
(15 years later... :D)
@altenbach wrote:
(code in this example simply zeroes all negative values of a 2D array)
Of course, the correct way to do this would be as follows.
No, I won't benchmark it. Also, the compiler has probably improved, and we don't really know if the old discussion is still valid. Still, the earlier example served as a demonstration that coercion dots should be taken with a grain of salt.
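For readers outside LabVIEW, the operation under discussion is easy to express in a text language; a NumPy sketch of the same idea (my analogy, not the diagram in the attachment):

```python
# NumPy analogue (illustration only) of "zero all negative values of a 2D array".
import numpy as np

arr = np.array([[ 1.0, -2.5,  3.0],
                [-0.5,  4.0, -7.2]])
arr[arr < 0] = 0.0        # boolean-mask assignment, no explicit per-element loop
print(arr)
```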