03-24-2009 05:48 PM
03-24-2009 07:26 PM - edited 03-24-2009 07:26 PM
I ran your benchmarks on LV 8.6.1.f1, AMD phenom X4 on Vista Home Premium.
I got 0.051 s for the USR and 0.062 s for the Feedback Node.
My only comment on your benchmarking setup: move the array initialization out of the initial flat sequence frame where you get the start time, so that the time to initialize the array (although it should be constant-folded) doesn't affect the timer. My numbers above are based on moving that array out of the For Loop entirely.
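The advice above, keeping allocation and setup outside the timed region, applies in any language, not just LabVIEW. Here is a minimal text-language sketch of the same pattern using Python's `time.perf_counter`; the function and variable names are illustrative, not taken from the original VI:

```python
import time

def benchmark(n=100_000):
    # Do all setup/allocation BEFORE taking the start time,
    # so it cannot contaminate the measurement.
    data = [False] * n  # array initialization, outside the timed region

    start = time.perf_counter()   # analogous to the first flat-sequence frame
    acc = False
    for value in data:            # the code under test
        acc = acc != value
    elapsed = time.perf_counter() - start  # analogous to the final frame

    return elapsed, acc

elapsed, _ = benchmark()
```

If the allocation sat between `start` and the final read of `perf_counter`, its cost (which may itself be optimized away, as noted above) would be folded into the reported time.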
06-04-2009 03:52 PM
Hey guys,
Sorry I didn't see this post back in March...I guess it was one of y'all who asked about it on my blog. Anyway, I made some changes to the code to make it more closely resemble my original benchmarks. Among the changes I made:
1. Changed the data type to a scalar Boolean to minimize the impact of data size on overall times.
2. Replaced the Boolean value being written to the USR/Feedback Node, which was a constant-folded entity (the Array Size output was constant-folded since both of its inputs were constants), with a control on the front panel, to eliminate any unpredictable optimizations related to constant folding.
3. Wired the output of the Read VIs to the border of the For Loop. LabVIEW can sometimes perform optimizations if you don't wire the outputs of subVIs, and I wanted to eliminate any unpredictability here as well. Now that I think about it, in my original benchmarks the USR/Feedback Nodes were not wrapped in subVIs; they were called directly on the diagram of the benchmarking VI. That could also explain some of the differences in our results.
After making these changes, I found that the Feedback Node was anywhere between 3% and 10% faster than the USR, depending on how many iterations I ran the loop for. I have attached my modified copy of your benchmarking test below for your reference. This was tested on LabVIEW 8.6.0, Intel Quad Core on Vista Business 64-bit.
Another thing I noticed is that, if you change the benchmarking VI to only benchmark writing, the writing can be slower with the Feedback Node. It seems to be consistently faster when benchmarking only reading, though. Looking back, I think my original benchmarks focused on reading.
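The observation above suggests timing the read and write phases separately, so the cost of one operation doesn't mask the other. A hedged sketch of that idea in Python, using a closure as a rough stand-in for a functional global; the names (`make_store`, `time_phase`) are invented for illustration and have no LabVIEW counterpart:

```python
import time

def time_phase(fn, iterations=100_000):
    """Time ONE operation repeated in a loop, so read and write
    costs are measured separately rather than mixed together."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return time.perf_counter() - start

def make_store(initial=False):
    # A tiny stand-in for a USR/Feedback Node: one stored Boolean
    # with separate write and read operations.
    state = {"value": initial}
    def write(v):
        state["value"] = v
    def read():
        return state["value"]
    return write, read

write, read = make_store()
t_write = time_phase(lambda: write(True))  # benchmark only writing
t_read = time_phase(lambda: read())        # benchmark only reading
```

In the LabVIEW tests, the equivalent would be one benchmark VI that exercises only the Write case of the USR/Feedback Node and a second that exercises only the Read case.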
06-05-2009 07:40 AM
06-17-2009 01:09 PM - edited 06-17-2009 01:11 PM
>I found that the Feedback Node was anywhere between 3% and 10% faster than the USR, depending on how many iterations I ran the loop
Darren,
I seem to consistently get a different result using basically your exact VI with the following small changes: (1) add a chart for the result, and (2) place a While Loop with a 0 ms wait around it to force an update.
(Your "% faster" is a negative number! See image.)
LabVIEW 8.6.1, Old Athlon XP desktop.
06-18-2009 11:38 AM
altenbach wrote:
I seem to consistently get a different result using basically your exact VI with the following small changes: (1) add a chart for the result, and (2) place a While Loop with a 0 ms wait around it to force an update.
- "As is", there is virtually no difference between the two.
- If I change both action engines to subroutine priority, the Feedback Node is 5-10% slower than the USR. (!)
I don't know if we can trust any results once the chart is added, due to its effects on memory allocations. However, I definitely noticed some weirdness once I played with the priority settings. I also noticed that the relative speed of the Feedback Node compared to the shift register declined in LabVIEW 2009. So I have filed two CARs based on your findings:
CAR# 175377 - Feedback Node speed relative to USR decreases when inside a subroutine priority subVI
CAR# 175379 - Feedback Node speed relative to USR decreases in LabVIEW 2009
Thanks for bringing these issues to my attention!
06-18-2009 11:51 AM
Thanks!
Sorry, I get the same result without the chart (8.6.1). The chart just helps elevate the difference above the noise level because we can see historical data. It is also interesting that the Feedback Node shows significantly wider deviations between iterations. That is difficult to explain. 😉