
Big Performance Degradation in LabVIEW 2012


@User002 wrote:

I know what, why not have your buddy the "banmeister" over here and lock this thread and kickban me? That _would_ make my day. Smiley LOL


I try to balance giving my customers what they ask for with what I think would actually help. In this case, I am NOT going to ask to have you banned even though you ask for it. Your level of vitriol is annoying but mild compared to others I have dealt with over the years, and I'd rather just leave you in the conversation for now. Banning you would not help your issues with LabVIEW, nor will it help heal the wounds you have with NI. I engage with customers as a volunteer effort -- it is not part of my job description. I do it because I think giving you all an insight into how LabVIEW works helps you make better decisions for your own jobs. In most cases, that involves deciding when and how to upgrade versions and how to work around issues. But there are cases where the correct decision is to stop using LabVIEW. Our service may be insufficient for your needs. I do not want to lose a customer, but if our level of service is that bad compared to your expectations, it may be time to part ways since our service level is unlikely to significantly change in the near future. If you choose to leave LabVIEW and the forums, that is your call. I will continue to try to help where I can should you choose to stay.

 

Yes, it took a few years for us to make fixing this CAR a priority. That's because it was a severe issue for some users but not for a majority, so it was deprioritized relative to other fixes/features. We do what we can to prevent CARs, but our development process is not medical/military level draconian, and so when issues arise, even significant regression issues, they get prioritized for fixing along with all the rest of the development.

Message 81 of 111
(5,327 Views)

@AristosQueue (NI) wrote:

@User002 wrote:

I know what, why not have your buddy the "banmeister" over here and lock this thread and kickban me? That _would_ make my day. Smiley LOL


I try to balance giving my customers what they ask for with what I think would actually help. In this case, I am NOT going to ask to have you banned even though you ask for it. Your level of vitriol is annoying but mild compared to others I have dealt with over the years, and I'd rather just leave you in the conversation for now. Banning you would not help your issues with LabVIEW, nor will it help heal the wounds you have with NI. I engage with customers as a volunteer effort -- it is not part of my job description. I do it because I think giving you all an insight into how LabVIEW works helps you make better decisions for your own jobs. In most cases, that involves deciding when and how to upgrade versions and how to work around issues. But there are cases where the correct decision is to stop using LabVIEW. Our service may be insufficient for your needs. I do not want to lose a customer, but if our level of service is that bad compared to your expectations, it may be time to part ways since our service level is unlikely to significantly change in the near future. If you choose to leave LabVIEW and the forums, that is your call. I will continue to try to help where I can should you choose to stay.

 

Yes, it took a few years for us to make fixing this CAR a priority. That's because it was a severe issue for some users but not for a majority, so it was deprioritized relative to other fixes/features. We do what we can to prevent CARs, but our development process is not medical/military level draconian, and so when issues arise, even significant regression issues, they get prioritized for fixing along with all the rest of the development.


Well, thanks, that was nice of you, being concerned about me and other customers. Perhaps it is something you should have taken into consideration earlier? At least there seems to be a window in the ivory tower, small but nonetheless.

 

As for comparing our "niceness": I'm creating open-source LV code, bug reports, and Community docs _purely_ on a volunteer basis. I'm not on NI's payroll like you are -- nothing wrong with that, though. Just acknowledge the difference when accusing me of spouting vitriol.

 

It seems like a convenient way of dismissing people with a stake in these matters as "overreacting" when LV development, for all intents and purposes, moves at a glacial pace.

 

If you had bothered to read my comments, you'd know I'm not having these performance problems, since I'm still on LV2011 for most of my projects. Also, I'm not holding a grudge against NI, only against arrogant and elitist attitudes.

 

/Roger

 

 

0 Kudos
Message 82 of 111
(5,306 Views)

http://www.reactiongifs.com/wp-content/uploads/2013/11/stewart.gif

 

Lots of name calling going on here.  Let's take a step back and try not to get personal.  These are very important discussions to have, about the usage of OO and people's reasoning for not adopting it.  NI has invested lots of resources into making its native OO and supporting it, and I'm sure they want to know the things we don't like about it, just like all of their products.  People have bad days and get their feathers ruffled, so let's just try to discuss this in a way that is more productive.

 

Personally, I've been trying to have little OO in my projects.  And I've had others question my abilities because of this decision.  As if I'm set in my old ways, unable to learn something new.  But my avoidance is sometimes due to these types of conversations.  It often seems I hear about large OO projects having problems, and that is a risk I'd like to avoid if possible.  I have a few classes in my larger applications, but I don't design or use a framework based on them.  Is there a real problem?  I only have others' words on the matter.  On the one hand I have people like AQ claiming things are not great but still very good.  And then I hear of others having major problems.  Maybe they just didn't do it right; maybe AQ is unaware of some use case that doesn't work well.  I'd like to try implementing a huge program with Actor and get some personal experience, but that can be a large undertaking, only to realize it will fall apart after months of effort, which is not something my employer would appreciate.  So for now I stick with a stable, scalable, reusable actor-type framework that I know works well.

0 Kudos
Message 83 of 111
(5,077 Views)

@Hooovahh wrote:

http://www.reactiongifs.com/wp-content/uploads/2013/11/stewart.gif

 

Lots of name calling going on here.  Let's take a step back and try not to get personal.  These are very important discussions to have, about the usage of OO and people's reasoning for not adopting it.  NI has invested lots of resources into making its native OO and supporting it, and I'm sure they want to know the things we don't like about it, just like all of their products.  People have bad days and get their feathers ruffled, so let's just try to discuss this in a way that is more productive.

 

Personally, I've been trying to have little OO in my projects.  And I've had others question my abilities because of this decision.  As if I'm set in my old ways, unable to learn something new.  But my avoidance is sometimes due to these types of conversations.  It often seems I hear about large OO projects having problems, and that is a risk I'd like to avoid if possible.  I have a few classes in my larger applications, but I don't design or use a framework based on them.  Is there a real problem?  I only have others' words on the matter.  On the one hand I have people like AQ claiming things are not great but still very good.  And then I hear of others having major problems.  Maybe they just didn't do it right; maybe AQ is unaware of some use case that doesn't work well.  I'd like to try implementing a huge program with Actor and get some personal experience, but that can be a large undertaking, only to realize it will fall apart after months of effort, which is not something my employer would appreciate.  So for now I stick with a stable, scalable, reusable actor-type framework that I know works well.


It's my pleasure to offer the audience a bit of Internet Drama. Smiley LOL

 

I can only speak for myself: I have had a lot of success with LVOOP, and lots of kudos to NI/AQ for adding this brilliant feature to LV "back in the day". I have several LVOOP projects in my Community pages, and I leave them to speak for themselves. Whether they are good or bad examples -- I let others decide. Smiley Indifferent

 

The problem I see with the (silver bullet) Actor Framework is the nondeterministic number of "popcorn machines" running asynchronously in the computation space. Myself, I try to structure my applications in a more rigid/deterministic RT "process"-oriented sense. That is, when I start the application, the processor and memory load will stay virtually flat through all of the execution (time and space). This way I'm always experiencing both best and worst case in terms of performance.

 

Thus there are no surprises for me in the edge case(s), and it is also easy to debug.
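
 

As a rough textual illustration of this idea -- a hypothetical C++ sketch, not code from this thread, since a G diagram cannot be pasted inline -- the fixed-process structure might look like the following: every worker and buffer is allocated at startup, each loop does the same amount of work at a fixed period, and the load profile stays flat for the life of the application.

    // Hypothetical sketch of a "rigid/deterministic process" structure in C++.
    // All workers and buffers are created up front, so CPU and memory load
    // stay essentially flat -- nothing is spawned on demand.
    #include <atomic>
    #include <chrono>
    #include <cstddef>
    #include <thread>
    #include <vector>

    int main() {
        constexpr int kNumProcesses = 4;            // fixed at startup, never grows
        constexpr std::size_t kBufferSize = 1 << 16;
        std::atomic<bool> running{true};

        std::vector<std::thread> processes;
        std::vector<std::vector<double>> buffers(kNumProcesses,
                                                 std::vector<double>(kBufferSize));

        for (int i = 0; i < kNumProcesses; ++i) {
            processes.emplace_back([&, i] {
                auto next = std::chrono::steady_clock::now();
                while (running.load(std::memory_order_relaxed)) {
                    // Do a fixed amount of work on a pre-allocated buffer,
                    // so every iteration is both the best and the worst case.
                    for (double& x : buffers[i]) x += 1.0;
                    next += std::chrono::milliseconds(10);   // fixed loop period
                    std::this_thread::sleep_until(next);
                }
            });
        }

        std::this_thread::sleep_for(std::chrono::seconds(1));
        running = false;
        for (auto& p : processes) p.join();
    }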

Perhaps this is why LVOOP works so well for me?

Others might disagree. Smiley Wink

 

/Roger

 

0 Kudos
Message 84 of 111
(5,050 Views)

Uhm, 2015 runs slowly compared with 2011.

So nope, no performance regressions have been fixed, according to my measurements.

 

[Attached benchmark screenshot: PerformanceDegradation_2015.jpg]

 

Br,

 

/Roger

 

Message 85 of 111
(4,585 Views)

I've stopped benchmarking DD calls lately.  I've given up on my original ideas, as even the 2011 performance numbers were actually not reasonable for me.

 

I'm holding out for the option to actually remove the "dynamic loading" portion of the class behaviour in order to allow all kinds of other compiler optimisations to be done with LVOOP (inlining...).  I found out how FPGA handles classes and dynamic dispatch and thought "I want that on RT and Windows".
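
 

As a loose, hypothetical C++ analogy (not LabVIEW code) of what giving up dynamic loading buys the compiler: once a type is declared final -- i.e., the hierarchy cannot be extended at run time -- a dynamic-dispatch call can be bound statically and inlined.

    // Loose C++ analogy, my own illustration: with no further derivation
    // possible, the "dynamic" call below can be devirtualized and inlined.
    #include <cstdio>

    struct Instrument {
        virtual double read() const { return 0.0; }
        virtual ~Instrument() = default;
    };

    // 'final' is the C++ way of saying "no further dynamic loading/derivation":
    // calls through a Dmm reference or pointer need no vtable lookup.
    struct Dmm final : Instrument {
        double read() const override { return 4.2; }
    };

    double sample(const Dmm& d) {
        return d.read();   // statically bound, a candidate for inlining
    }

    int main() { std::printf("%f\n", sample(Dmm{})); }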

0 Kudos
Message 86 of 111
(4,536 Views)

@User002 wrote:

Uhm, 2015 runs slow compared with 2011.

So nope, no performance regression have been fixed according to my measurements.


Well crap. That wasn't supposed to happen. Can you post your benchmark VIs?

0 Kudos
Message 87 of 111
(4,514 Views)

Intaris wrote: I found out how FPGA handles classes and dynamic dispatch and thought "I want that on RT and Windows".

On desktop and RT, that would require whole-hierarchy compilation... a massive change from how LV currently compiles (each VI fully compiles by itself). It's something we've investigated, but I don't know if we'll ever really make the whole-hierarchy option available. If you want it, keep pushing for it each time you talk to people from NI. In my view, that's not something that NI is likely to see as a priority without active prodding from customers.
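
 

A rough analogy for text-based languages, with the caveat that LabVIEW does not actually compile this way: per-VI compilation is loosely like per-translation-unit compilation in C++, where the optimizer never sees the whole hierarchy unless you opt into a whole-program (link-time) build. The file names and flags below are just a hypothetical illustration spanning three source files.

    // shape.h
    struct Shape { virtual double area() const = 0; virtual ~Shape() = default; };

    // circle.cpp -- the only subclass, but main.cpp doesn't know that
    #include "shape.h"
    struct Circle : Shape { double area() const override { return 3.14159; } };
    Shape* make_shape() { return new Circle; }

    // main.cpp -- in a per-unit build, area() must go through the vtable.
    #include "shape.h"
    Shape* make_shape();
    int main() { return static_cast<int>(make_shape()->area()); }

    // Per-unit build (loosely like per-VI compilation): no devirtualization.
    //   g++ -O2 -c circle.cpp main.cpp && g++ circle.o main.o
    // Whole-program build (loosely like whole-hierarchy compilation): the
    // link-time optimizer sees every subclass and may devirtualize the call.
    //   g++ -O2 -flto circle.cpp main.cpp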

0 Kudos
Message 88 of 111
(4,508 Views)

Hi,

 

The code is on page 1 in this thread:

 

Benchmarking Code

 

/Roger

 

0 Kudos
Message 89 of 111
(4,505 Views)

@AristosQueue (NI) wrote:
On desktop and RT, that would require whole-hierarchy compilation... a massive change from how LV currently compiles (each VI fully compiles by itself). It's something we've investigated, but I don't know if we'll ever really make the whole-hierarchy option available. If you want it, keep pushing for it each time you talk to people from NI. In my view, that's not something that NI is likely to see as a priority without active prodding from customers.
Smiley Sad
Why would it be such a massive change?  Just because we don't allow dynamic loading of classes?  Have I misunderstood how FPGA handles this?  It boils down to knowing "This is the exact set of classes which will be active in this hierarchy, no more, no less" when compiling, so that type-checking could be done at compile time as opposed to at run time.
Isn't this similar to "finalizing" a hierarchy? Or "sealing", or whatever any given language calls it?
BTW, this explains why the DD overhead is observed at EACH level of "Call Parent method".
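
 

A hypothetical C++ sketch of the "exact set of classes, no more, no less" idea (my own illustration, not how LabVIEW or FPGA actually implements it): with a closed set of types, dispatch becomes a branch over known alternatives that the optimizer can inline, rather than an open-ended run-time lookup.

    // Sealed-hierarchy sketch: exactly these three types, known at compile time.
    #include <cstdio>
    #include <variant>

    struct Sine     { double next() { return 0.5; } };
    struct Square   { double next() { return 1.0; } };
    struct Triangle { double next() { return 0.25; } };

    // The set of "classes" is closed, so the compiler sees every alternative.
    using Generator = std::variant<Sine, Square, Triangle>;

    double step(Generator& g) {
        // Dispatch resolved against a fixed set of alternatives; each branch
        // can be inlined, with no vtable indirection per level.
        return std::visit([](auto& gen) { return gen.next(); }, g);
    }

    int main() {
        Generator g = Square{};
        std::printf("%f\n", step(g));
    }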
0 Kudos
Message 90 of 111
(4,476 Views)