
Big Performance Degradation in LabVIEW 2012


@Intaris wrote:

Roger,

 

can you share a benchmark or some benchmark data including LV 2014 so that we can discuss the numbers?


On page 1 of this thread there is a link to an LV 2011 project that you can use for benchmarking across LV versions.

Recall that running this on a Windows machine only gives you a rough estimate of the performance.

For a more reliable estimate, run the code on an RT (e.g. cRIO/sbRIO) target.

 

/Roger

 

Message 61 of 111

I don't have LV 2014 installed (not even 2013).

 

If someone can post some concrete numbers, that would be great.

Message 62 of 111

@Intaris wrote:

I don't have LV 2014 installed (not even 2013).

 

If someone can post some concrete numbers, that would be great.


Here's a comparison between LV2011 SP1 and LV2014, enjoy!

Whether the benchmark code is relevant and representative of the true performance, I leave up to you to judge. 😄

 

First up, LV2011:

[Screenshot: LV2011 SP1 Win7 64bit.JPG]

 

After that we have LV2014:

[Screenshot: LV2014 Win7 64bit.JPG]

 

I hope this answers your question.

 

Edit: I ran them both on a Win 7 64-bit virtual machine.

 

Br,

 

/Roger

 

Message 63 of 111

Oh good lord.

 

😠

Message 64 of 111

@Intaris wrote:

Oh good lord.

 

😠


At least the performance drop isn't an order of magnitude worse. 😉

 

You might want to look into my benchmarking code and determine whether it is representative and gives a fair comparison between LV versions.

There is, though, a CAR regarding this performance issue dating back to 2012, when I started this thread.

Given this, my best guess is that this performance regression isn't easily remedied within the current LV code base. 🙁

 

/Roger

 

Message 65 of 111

Maybe my inability to learn, master, and then incorporate LVOOP into our production RT code has so far been a great boon. I frequently have to work hard to squeeze new functionality into our RT code without chipping away too much at the ever-dwindling spare resources.

Kudos given to RogerIsaksson for his work and continued pokes at this thread. I'm wondering if 2015 or 2015 SP1 might be the 'year'?! From what I can tell, NI has been cooking up some drastic changes to LabVIEW; I just don't know if those changes are mostly marketing-centered, such as my personal fear: "weee, now LabVIEW uses vectorized drawings so you can zoooooom! weeeee!" which, while neat, is not high on my wish list for LV improvements. 😛

QFang
-------------
CLD LabVIEW 7.1 to 2016
Message 66 of 111

@QFang wrote:

Maybe my inability to learn, master, and then incorporate LVOOP into our production RT code has so far been a great boon. I frequently have to work hard to squeeze new functionality into our RT code without chipping away too much at the ever-dwindling spare resources.

Kudos given to RogerIsaksson for his work and continued pokes at this thread. I'm wondering if 2015 or 2015 SP1 might be the 'year'?! From what I can tell, NI has been cooking up some drastic changes to LabVIEW; I just don't know if those changes are mostly marketing-centered, such as my personal fear: "weee, now LabVIEW uses vectorized drawings so you can zoooooom! weeeee!" which, while neat, is not high on my wish list for LV improvements. 😛


I agree with your conclusion: runtime and editor performance and stability are king.

 

Though, frankly speaking, LV for sure needs a UI/programming-style overhaul; it's getting quite long in the tooth and clunky to work with.

Personally, I don't just want LabVIEW "different" for the sake of "freshness". I want a paradigm shift into 3D-LabVIEW.

Though I might be considered nuts for even thinking about this.

 

Meanwhile, I have been bluntly kicked out of another NI forum because my critique and comments were not, quote, "a net positive". Hilarious stuff (makes me wonder where NI finds these yahoos?).

 

Anyway, gotta give NI the benefit of my unconditional ❤️ and lulz! 😄

 

/Roger

 

Message 67 of 111

Good morning, ladies and gentlemen. Some of you know me as the original architect of LVOOP. I have since moved on to other projects, but I stay aware of what the team is working on, and I have continued to apply what pressure I can to get these performance problems addressed. And the compiler team has responded.

 

TL;DR: LV 2015 should, assuming all goes as planned through beta testing, return dynamic dispatching to slightly better than LV 2011 performance levels. If you want to verify this for yourself, please visit http://ni.com/beta and sign up for the beta program, which should start in a couple of weeks.

 

Details:

The performance degradation in dynamic dispatching started in LV 2012 and has gotten worse in 2013 and 2014. I do not like to make absolute promises about future functionality -- many things can change during alpha and beta testing. BUT, assuming all goes as planned, LV 2015 should have the same performance for dynamic dispatching as LV 2011. In fact, benchmarks this morning show it to be slightly better.

 

This will address the absolute performance problem. There is still a relative performance problem that has been present since LabVIEW 8.2, when LVOOP first released. Here's the theory: Suppose you had N different types of clusters. You could create one megacluster that had all the fields for all of those N clusters AND an enum that has N values. The value of the enum determines which of the fields you use; the enum essentially acts like a dynamic type selector. This is one way to have "inheritance" -- and you can override methods by having a Case Structure that checks that enum value and then invokes a different subVI in each frame of the Case Structure, one for each of the different enum values. In theory, dynamic dispatching should be very close to the performance of this "megacluster with enum". In practice, dynamic dispatching has always been slower, by different amounts in different LV versions.
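Since LabVIEW is graphical, here is a rough C++ analogy of the two approaches being compared; the shape types and field names are hypothetical, purely for illustration:

// C++ analogy (illustrative only): "megacluster with enum" vs. dynamic dispatch.
#include <cstdio>

// Approach 1: one struct carries the fields of every type, and an enum tag
// selects which branch of a switch (the Case Structure) executes.
enum class Kind { Circle, Square };

struct MegaCluster {
    Kind   kind;    // the enum acting as a dynamic type selector
    double radius;  // used when kind == Circle
    double side;    // used when kind == Square
};

double area_megacluster(const MegaCluster& m) {
    switch (m.kind) {  // the Case Structure checking the enum value
        case Kind::Circle: return 3.14159265358979 * m.radius * m.radius;
        case Kind::Square: return m.side * m.side;
    }
    return 0.0;
}

// Approach 2: each child class overrides a method, and the call site selects
// the implementation at run time through the vtable (dynamic dispatch).
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;  // the dynamic dispatch method
};

struct Circle : Shape {
    double radius;
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.14159265358979 * radius * radius; }
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

int main() {
    MegaCluster m{Kind::Circle, 2.0, 0.0};
    Circle c{2.0};
    std::printf("megacluster: %f, dispatch: %f\n", area_megacluster(m), c.area());
    return 0;
}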

 

On Windows desktop, with debugging DISABLED...

... LV 8.2, dynamic dispatching was 2.4x slower than megacluster case structure.

... LV 2014, dynamic dispatching was 4.2x slower than megacluster case structure.

... LV 2015, we are down to 2.6x slower than megacluster case structure.

Although 2015 fixes the things that were injected to slow down dynamic dispatching, the compiler has become ever more intelligent about optimizing case structure code, such that getting back to the 2011 numbers still leaves room for theoretical performance improvement. Still, we are very close. But we would like to close that gap between the two implementations.

 

Keep in mind, please, that we are talking about the difference between 300 nanoseconds for dynamic dispatching and 100 nanoseconds for megacluster case structure. Most users will never have this show up as a performance hot spot in their code. Intaris and a few other power users do have to worry about such time differences. If you are using "LVOOP is slower than a megacluster" as your reason for not using LabVIEW classes in your applications, please consider the rest of your application and ask whether that is really likely to make a difference. Even with LV 2014 performance levels, dynamic dispatch overhead was not a key contributor to the slowness of most applications that I was asked to look at. A couple of global variables and an unnecessary thread-synchronizing Data Value Reference are worth hundreds of dynamic dispatch calls.
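To put those numbers in perspective (back-of-envelope arithmetic, not a benchmark): the gap is roughly 300 - 100 = 200 ns per call, so ten thousand dynamic dispatch calls cost about 2 ms extra, and it takes on the order of five million dispatch calls per second before the gap alone consumes an entire CPU core.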

 

But, as I say, there are users for whom this performance gap matters.

 

A large portion of the slowness is because LV always allows dynamic loading of classes. Also, calls to the parent class VIs are compiled without any knowledge saved about the finite set of children -- it is assumed that even the children already in memory might not be there when the VI is next loaded. LabVIEW does not have a concept of "locking" a hierarchy to prevent further classes from loading, nor of declaring "this child will always be present, so take it into account." These two facts mean that we compile in code to do more dynamic checking than we have for the megacluster's case structure. Fixing that will require new features for the language.
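For comparison, here is a hypothetical C++ sketch of what such a "hierarchy lock" buys a compiler; `final` is C++'s way of declaring that no further children can exist, a feature LabVIEW currently has no equivalent of:

// C++ analogy (illustrative only): a "locked" hierarchy allows devirtualization.
struct Base {
    virtual ~Base() = default;
    virtual int value() const { return 1; }
};

// `final` locks the hierarchy at this node: no class may derive from Derived.
struct Derived final : Base {
    int value() const override { return 2; }
};

int use(const Derived& d) {
    // Because Derived is final, the compiler knows exactly which override
    // runs and may replace the indirect vtable call with a direct call,
    // or inline it entirely -- the dynamic check is compiled away.
    return d.value();
}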

 

I cannot give you any timetable or even roadmap for further closing of that gap. My hope is that just getting us back to 2011 levels releases a lot of the back pressure against using LVOOP for many projects. Most users should be back to a level that removes that aspect of OO as a concern for their projects.

Message 68 of 111

Thank you very much for the update, AristosQueue. To be fair, my level of LVOOP has not yet evolved to where I use dynamic dispatching (so these performance bumps don't hit me), which is to say I'm barely scratching the surface of the power of (LV)OO. It wasn't fair of me to imply that if I had implemented more LVOOP in our current software, we would be in a tight spot.

My biggest (personal) challenge for 2015 is growing into more advanced applications of LVOOP. I think I finally have a shot at lining up a (new) project that fits well with the Actor Framework, along with 'enough time' to allow me to go that route. At the completion of that, I should have enhanced my theoretical (LV)OOP knowledge with some much-needed practical experience. 🙂

QFang
-------------
CLD LabVIEW 7.1 to 2016
Message 69 of 111

@AristosQueue (NI) wrote:

 

 

Keep in mind, please, that we are talking about the difference between 300 nanoseconds and 100 nanoseconds per dynamic dispatch call. Most users will never have this show up as a performance hot spot in their code. 


You are oversimplifying the impact. 

A large application has more than one class.

A 3x performance drop in dynamic dispatch, multiplied across a large number of classes, is noticeable in any reasonably complex LVOOP application.

 

/Roger

 

Message 70 of 111