09-09-2005 02:13 AM
09-09-2005 04:03 AM
....but if it isn't inside the loop, [snip...] which would increase the time...
It looks like (in my warped corner of space-time), when running in UI mode:
With the reference outside the loop, the tested code runs about 5% faster (i.e., it is slower when the reference is inside the loop).
Perhaps the initial results reflected a misinterpretation of "noise" as compiler sensitivity to the "Execution System"?
cheers
09-09-2005 06:10 AM
But, for the sake of argument, let's assume you're right.
Let's say the optimization works in such a way that if the compiler detects the reference inside the loop, it moves the entire loop into the UI thread; but if the reference isn't inside the loop, it doesn't, and then it has to do a thread switch whenever it accesses the property node, which would increase the time.
--- Well, assuming your item 1 is correct, that would make sense EXCEPT for the fact that the SEQUENCE changes the results. I can't see a reason the (empty) SEQUENCE would force it into another thread.
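(For what it's worth, here is a rough cost-model sketch of the hypothesis above, written in Python rather than LabVIEW just to make the arithmetic explicit: if the whole loop lives in the UI thread the switch cost is paid once, whereas a switch on every property-node access multiplies it by the iteration count. The constants are made-up placeholders, not measured values.)

ITERATIONS = 1000
PROPERTY_READ_US = 1.0    # hypothetical cost of the property read itself
THREAD_SWITCH_US = 50.0   # hypothetical cost of one switch into (or out of) the UI thread

# Hypothesis, case A: reference inside the loop, compiler hoists the whole
# loop into the UI thread, so the switch cost is paid only once (in and out).
hoisted_total = 2 * THREAD_SWITCH_US + ITERATIONS * PROPERTY_READ_US

# Hypothesis, case B: loop stays in its own execution system, so every
# property-node access pays a round trip to the UI thread and back.
per_access_total = ITERATIONS * (2 * THREAD_SWITCH_US + PROPERTY_READ_US)

print(f"whole loop in UI thread : {hoisted_total:10.1f} us")
print(f"switch on every access  : {per_access_total:10.1f} us")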
Blog for (mostly LabVIEW) programmers: Tips And Tricks
09-09-2005 06:18 AM
I'm not sure whose results you are referring to (this message board really needs work to clarify who's responding to whom), but notice this: when I run on OS X, the numbers go from 480+ uSec down to 14 or so.
That means two things:
1... Whatever they're doing to implement the ExecSys is WAY more of a burden on OS X than on Windows.
2... There is no way at all that that is a "misinterpretation of noise". I understand what you mean, but the timing test runs lots of cases precisely to reduce noise, and it's hard to mistake 480 for 14.
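(As a side note on methodology, here is a minimal sketch of that "runs lots of cases" idea, again in Python rather than LabVIEW and with a hypothetical stand-in workload: time many repetitions individually and report the median, so a few noisy samples can't plausibly inflate 14 into 480.)

import statistics
import time

def workload():
    # Hypothetical stand-in for the property-node access under test.
    sum(range(100))

def time_many(fn, runs=1000):
    # Measure each call individually and summarize with the median,
    # so a handful of noisy samples can't dominate the result.
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1e6)  # microseconds
    return statistics.median(samples)

print(f"median per-call time: {time_many(workload):.1f} us")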
Blog for (mostly LabVIEW) programmers: Tips And Tricks