Dealing with Coercion Dots

In this thread, directly subtracting timestamps (with big coercion dots!) seems more accurate than doing the same after explicitly converting one input to DBL first.

While the "to DBL" conversion eliminates the coercion dots, it introduces errors of about a millisecond. Interesting!

Message 11 of 19
(1,919 Views)

I wouldn't call that very surprising, though 1 ms of error for a simple subtraction doesn't sound quite right.

 

A double has a 52-bit mantissa. The current number of seconds since January 1, 1904 is getting close to overflowing 32 bits (around 2040 it will overflow, and a uInt32 representation of a LabVIEW timestamp, as LabVIEW 3.x used, will go haywire).

That leaves around 20 bits for the fractional seconds, which puts the resolution in the range of microseconds rather than milliseconds.

A timestamp, on the other hand, has an int64 for the whole seconds and a uInt64 for the fractional seconds. Although I think it currently only really uses the most significant 32 bits of the fractional part, that still amounts to a resolution finer than 1 ns. LabVIEW does the math using the full 32 bits, and maybe even all 64 (the least significant 32 bits are normally 0), so there is a much larger range before rounding errors become significant enough to be noticed.
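The bit-budget argument above can be checked numerically. This is a hedged sketch in Python (LabVIEW diagrams can't be shown in text); the 3.8e9 figure is an assumed round number for seconds from the 1904 epoch to the mid-2020s:

```python
import math

# Roughly how many seconds from the LabVIEW epoch (1904-01-01) to the
# mid-2020s; an assumed round figure for illustration.
seconds_since_1904 = 3.8e9

# Spacing between adjacent doubles at that magnitude: 2**-21 s,
# i.e. about 0.5 us, so DBL errors should sit in the microsecond range.
step = math.ulp(seconds_since_1904)
print(step)  # 4.76837158203125e-07

# The timestamp's upper 32 fractional bits resolve 2**-32 s instead:
print(2.0 ** -32)  # ~2.3e-10 s, well under 1 ns
```

So a DBL subtraction should be off by microseconds at worst, which is why the reported millisecond-scale error points at something other than the DBL representation itself.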

 

But the real problem might lie more in the To Date/Time and From Date/Time nodes. The fractional seconds in there may only have ms resolution. Together with rounding errors when converting between that ms resolution and the full timestamp resolution, there could be an extra rounding error that makes the rounding operation in To Double trip over a limit.
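A toy illustration of that mechanism (the values are made up): if fractional seconds pass through a millisecond-resolution stage, each value can be off by up to 0.5 ms, so the difference of two such values can be off by close to a full millisecond:

```python
# Made-up timestamps, in seconds, whose fractional parts straddle
# millisecond rounding boundaries.
t0 = 100.0004
t1 = 100.0016

exact = t1 - t0                       # ~1.2 ms
via_ms = round(t1, 3) - round(t0, 3)  # ~2.0 ms after ms-resolution rounding

print(exact, via_ms)  # the two differ by ~0.8 ms
```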

Rolf Kalbermatter
My Blog
0 Kudos
Message 12 of 19
(1,891 Views)

Yes, a millisecond seems way too much. Probably worth investigating.

0 Kudos
Message 13 of 19
(1,860 Views)

Hi guys!

 

This topic challenged what I thought about LabVIEW performance regarding coercion dots.

I started to benchmark a simple VI, and it grew like a little monster, with some stats along the way.

 

If you run it one pass at a time, the coercion dot sometimes gives a lower time in microseconds or ticks; I guess that relates to where in memory the value is written.

But if you "run continuously" for a few thousand iterations, in the long run the explicit conversion does help the execution time.

 

Please check it and tell me if I'm missing something, or if there is a better way to benchmark/analyze it.
BenchmarkCoercionDotConvertion.png
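Since the VI itself can't be shown in text, here is a hedged Python analogy of the benchmark being discussed (function names, data sizes, and repeat counts are arbitrary choices for this sketch): implicit int-to-float promotion at the point of use versus one explicit conversion up front, timed with basic benchmark hygiene.

```python
import timeit

def implicit_coercion(xs):
    total = 0.0
    for x in xs:              # each int is promoted to float at the add
        total += x
    return total

def explicit_conversion(xs):
    total = 0.0
    for x in map(float, xs):  # convert once up front, then add floats
        total += x
    return total

xs = list(range(10_000))

# Benchmark hygiene: run many iterations per sample and keep the minimum
# of several repeats, so one-off OS jitter doesn't skew the result.
t_implicit = min(timeit.repeat(lambda: implicit_coercion(xs), number=100, repeat=5))
t_explicit = min(timeit.repeat(lambda: explicit_conversion(xs), number=100, repeat=5))
print(t_implicit, t_explicit)
```

The min-of-repeats approach matches the observation above: single runs are noisy, and only repeated runs reveal a stable difference.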

0 Kudos
Message 14 of 19
(1,824 Views)

@niarena wrote:

Hi guys!

 

This topic challenged what I thought about LabVIEW performance regarding coercion dots.

I started to benchmark a simple VI, and it grew like a little monster, with some stats along the way.

 

If you run it one pass at a time, the coercion dot sometimes gives a lower time in microseconds or ticks; I guess that relates to where in memory the value is written.

But if you "run continuously" for a few thousand iterations, in the long run the explicit conversion does help the execution time.

 

Please check it and tell me if I'm missing something, or if there is a better way to benchmark/analyze it.
BenchmarkCoercionDotConvertion.png


You left debugging on and forgot to save default data. Other than that, the FOR loops are fairly useless, aren't they?


"Should be" isn't "Is" -Jay
0 Kudos
Message 15 of 19
(1,811 Views)

Are you running the top half and the bottom half of the code at the same time? If so, how do you know that one half isn't stealing clock cycles from the other?

0 Kudos
Message 16 of 19
(1,804 Views)

@JÞB wrote:

@niarena wrote:

Hi guys!

 

This topic challenged what I thought about LabVIEW performance regarding coercion dots.

I started to benchmark a simple VI, and it grew like a little monster, with some stats along the way.

 

If you run it one pass at a time, the coercion dot sometimes gives a lower time in microseconds or ticks; I guess that relates to where in memory the value is written.

But if you "run continuously" for a few thousand iterations, in the long run the explicit conversion does help the execution time.

 

Please check it and tell me if I'm missing something, or if there is a better way to benchmark/analyze it.


You left debugging on and forgot to save default data. Other than that, the FOR loops are fairly useless, aren't they?


And the attached VI shows a bit of a performance improvement from explicitly converting the I32 to DBL (roughly 18% faster) on an Intel i3 under Windows 10 x64. NOTE: snippets do not preserve execution options, so use the attached VI.

Capture.PNG


"Should be" isn't "Is" -Jay
0 Kudos
Message 17 of 19
(1,792 Views)

Just to prove that benchmarks are difficult to construct:

Ignore my last example; it is flawed. The explicit conversion is about 35% faster than coercion.

Capture.PNG

 


"Should be" isn't "Is" -Jay
Message 18 of 19
(1,787 Views)

(15 years later ...:D )

 


@altenbach wrote:


CoercionIsFaster.png
(code in this example simply zeroes all negative values of a 2D array)


 

 

Of course, the correct way to do this would be as follows.

altenbach_0-1728749982620.png

 

No, I won't benchmark it. The compiler has probably improved, and we don't really know if the old discussion is still valid. Still, the earlier example serves as a demonstration to take coercion dots with a grain of salt.
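For readers without LabVIEW handy, the operation in altenbach's example (zeroing all negative values of a 2D array) might look like this in a text language, with nested lists standing in for the 2D array:

```python
# Zero all negative values of a 2D array; nested lists stand in for
# the LabVIEW 2D array in this sketch.
def zero_negatives(array_2d):
    return [[0 if x < 0 else x for x in row] for row in array_2d]

a = [[1, -2, 3],
     [-4, 5, -6]]
print(zero_negatives(a))  # [[1, 0, 3], [0, 5, 0]]
```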

0 Kudos
Message 19 of 19
(148 Views)