
Strange benchmark result needs some explanation

I was testing the efficiency of the transpose function in LabVIEW, so I made a simple test setup, starting by using a constant as the source. At some point I changed the constant to a control, and the processing time dropped from about 4200 msec to somewhere between 1 and 2 msec. I could reproduce the result on every run. Why is it like this?

I have included my test VI, saved in 8.0 format.

Edit: I was using LabVIEW 8.6 for this test, and could not see any major differences in the buffer allocations.

sample.PNG

 



Besides which, my opinion is that Express VIs Carthage must be destroyed deleted
(Sorry no Labview "brag list" so far)
Message 1 of 14

It's strange. According to dataflow, whether the source is a control or a constant, the data is passed to the For Loop and gets processed. I don't know why there is this strange difference between a control and a constant. Is it only for this function, or for all functions?

Thanks, Coq Rouge, for bringing up the issue.

Balaji PK (CLA)
Ever tried. Ever failed. No matter. Try again. Fail again. Fail better

Message 2 of 14

Hi CoqRouge,

 

you found one of LabVIEW's memory handling optimization mysteries 😉

 

I changed the attachment to disable parallelism as much as possible, and changed it to use shift registers instead of tunnels. Now both cases are (more or less) the same speed.

 

My guess: LV is optimized for handling data from controls. You have wired a constant, and LabVIEW has to copy its data over and over again, hence the speed penalty. Just a wild guess...

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 3 of 14

GerdW wrote:

 

My guess: LV is optimized for handling data from controls. You have wired a constant, and LabVIEW has to copy its data over and over again, hence the speed penalty. Just a wild guess...


But maybe a correct guess. 🙂

- Partha ( CLD until Oct 2024 🙂 )
Message 4 of 14

GerdW wrote:

Hi CoqRouge,

you found one of LabVIEW's memory handling optimization mysteries 😉

My guess: LV is optimized for handling data from controls. You have wired a constant, and LabVIEW has to copy its data over and over again, hence the speed penalty. Just a wild guess...


In my first test I did not have any parallelism, but changed the source from control to constant by right-clicking on the source, and I get the same results regarding timing. Anyway, perhaps this demonstrates that in some cases a control with default values is better than a constant. Or that a shift register is better if "Disable Indexing" is enabled on a loop output.



Message 5 of 14

Then I tested as shown in the figure below; the times were about equal, at around 4200 msec.

sample1.PNG

 

Then I tested like this, in the same diagram; the run times were again about equal, but jumped up to approximately 7800 msec.

sample.PNG

 



Message 6 of 14

I guess this explains my first question. When using a constant, the data is read in every iteration, even if it is placed outside the loop. Perhaps this is a flaw and a correction is needed?



Message 7 of 14

There is something else going on here. The Transpose Array function does not actually always physically copy data around. Instead it creates something called a subarray. This subarray is a special internal datatype in LabVIEW that contains not the real data but just a pointer to the original data. In addition to that, it contains extra information such as the order of dimensions, the size of each dimension, and even the direction of indexing. How it does this exactly is beyond my knowledge, but basically, when you transpose an array, LabVIEW creates a subarray that records that the dimensions are swapped relative to what would normally be expected.

 

Many array functions, including for instance the one that displays the data on a front panel, do know how to treat subarrays properly, and how to handle one, several, or all of the special cases of such subarrays. If a particular function operating on a subarray does not know how to do its work on that particular type, a conversion routine will create the real array out of that subarray anyway, so it can be processed by that function.
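
To make that idea a bit more concrete for text-only readers, here is a minimal sketch in C of what such a view type could look like. The names (SubArray2D, transpose_view, materialize) and the layout are purely my own invention for illustration; the real LabVIEW subarray type is internal and undocumented, so this only shows the general stride-swapping concept, not NI's code:

/* Minimal sketch of a "subarray"-style view: a pointer into someone
   else's data plus dimension/stride bookkeeping. Transposing only
   swaps the bookkeeping; materialize() makes the full copy that a
   function needing contiguous data would force. All names here are
   invented for illustration and are NOT LabVIEW internals. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    double *data;                    /* points at the original buffer, no copy */
    size_t  rows, cols;
    size_t  row_stride, col_stride;  /* element steps per dimension            */
} SubArray2D;

/* Wrap an existing row-major buffer without copying it. */
SubArray2D view_of(double *buf, size_t rows, size_t cols)
{
    SubArray2D v = { buf, rows, cols, cols, 1 };
    return v;
}

/* "Transpose" in O(1): swap the dimension sizes and strides only. */
SubArray2D transpose_view(SubArray2D v)
{
    SubArray2D t = { v.data, v.cols, v.rows, v.col_stride, v.row_stride };
    return t;
}

double get(SubArray2D v, size_t r, size_t c)
{
    return v.data[r * v.row_stride + c * v.col_stride];
}

/* Fallback for code that cannot deal with views: build a real,
   contiguous row-major copy (this is the expensive path). */
double *materialize(SubArray2D v)
{
    double *out = malloc(v.rows * v.cols * sizeof *out);
    if (out == NULL)
        return NULL;
    for (size_t r = 0; r < v.rows; r++)
        for (size_t c = 0; c < v.cols; c++)
            out[r * v.cols + c] = get(v, r, c);
    return out;
}

int main(void)
{
    double buf[2][3] = { { 1, 2, 3 }, { 4, 5, 6 } };
    SubArray2D a = view_of(&buf[0][0], 2, 3);
    SubArray2D t = transpose_view(a);            /* no data moved at all */

    printf("a[0][2] = %g, t[2][0] = %g\n", get(a, 0, 2), get(t, 2, 0));

    double *flat = materialize(t);               /* real copy happens here */
    if (flat != NULL) {
        printf("flat: %g %g %g %g %g %g\n",
               flat[0], flat[1], flat[2], flat[3], flat[4], flat[5]);
        free(flat);
    }
    return 0;
}

The point is that transpose_view does no work proportional to the array size, while materialize has to touch every element; that is the difference between a couple of milliseconds and several seconds once it happens inside a benchmark loop.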

 

Your original difference was most probably due to the fact that LabVIEW stores constants in a diagram differently from runtime data. A constant is stored with the VI on disk and loaded together with the VI into memory, but it is marked read-only, since it is a constant. Apparently the Transpose Array optimization decided, for some reason, that the subarray approach would not work for constant data, so a new and really transposed memory area gets allocated. This causes the time delay you see. In the case where the data comes from a control, the buffer is not constant, Transpose Array simply produces a subarray, and the only real runtime overhead is in displaying the data on the front panel.
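
This is not LabVIEW source code, just a toy C program illustrating my reading of the difference: the "constant" path has to allocate and copy the whole array on every iteration, while the "control" path only has to fill in a small view descriptor. The array size matches the 2 x 20000 DBL mentioned in this thread, but the iteration count is an assumed placeholder, not taken from the attached VI:

/* NOT LabVIEW source code - a toy comparison of the two paths described
   above: the "constant" case forces a full allocate-and-copy transpose
   each iteration, the "control" case only builds a small descriptor.
   ROWS/COLS match the 2 x 20000 DBL from this thread; ITERATIONS is an
   assumed placeholder, not a value taken from the attached VI. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ROWS 2
#define COLS 20000
#define ITERATIONS 1000

typedef struct {                 /* invented stand-in for a subarray view */
    const double *data;
    size_t rows, cols;
} View;

int main(void)
{
    double *src = calloc((size_t)ROWS * COLS, sizeof *src);
    double *dst = malloc((size_t)ROWS * COLS * sizeof *dst);
    if (src == NULL || dst == NULL)
        return 1;

    clock_t t0 = clock();
    for (int i = 0; i < ITERATIONS; i++)          /* "constant" path:   */
        for (size_t r = 0; r < ROWS; r++)         /* copy every element */
            for (size_t c = 0; c < COLS; c++)
                dst[c * ROWS + r] = src[r * COLS + c];
    clock_t t1 = clock();

    View v = { src, COLS, ROWS };
    for (int i = 0; i < ITERATIONS; i++)          /* "control" path: only   */
        v = (View){ src, COLS, ROWS };            /* refresh the descriptor */
    clock_t t2 = clock();

    printf("copy path: %.1f ms   view path: %.1f ms   (view rows: %zu)\n",
           1000.0 * (double)(t1 - t0) / CLOCKS_PER_SEC,
           1000.0 * (double)(t2 - t1) / CLOCKS_PER_SEC,
           v.rows);

    free(src);
    free(dst);
    return 0;
}

(An optimizing compiler may throw away parts of both loops, since the results are barely used, so compile without optimization if you want to see actual numbers. The point is only that one path moves 320 kB of data per iteration while the other only writes a three-field struct.)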

 

Rolf Kalbermatter

Message 8 of 14

Coq Rouge wrote:

I guess this explains my first question. When using a constant, the data is read in every iteration, even if it is placed outside the loop. Perhaps this is a flaw and a correction is needed?


No, it is not a flaw. It is a decision between always producing the right result and being optimized but sometimes producing a wrong result. Memory optimization is probably one of the hairiest areas in LabVIEW, and there have been issues in the past where a developer went in to make a great optimization, only to find out after the release that it actually produced wrong results in some complicated situations.

I'm pretty sure some of the LabVIEW devs could come up with a very detailed explanation of why it is the way it is in this case, but with NI Week in front of us, I doubt they even have time to read these boards 😄

 

In general you can trust that such apparent shortcomings have a very good reason, and, what is more important, I would rather have a possible optimization dropped from LabVIEW than have even the remote chance of running into calculation errors caused by overzealous memory optimization. Been there, had that, and still not feeling happy about it! 🙂

 

Rolf Kalbermatter

Message 9 of 14
Thank you for your answer, Rolf. To be honest, this behavior does not cause any problem for me. I wanted to see if Transpose Array could cause any problem regarding timing, and it is more than fast enough for my amount of data (2 x 20000 DBL). But in the future, if I need a big array constant, I will test whether a control set to default values and hidden performs better.


Message 10 of 14