
LabVIEW 2010 slow vi save performance

 

I feel urged to step into this thread and pick out some points which I think deserve some attention:

@Herbert wrote:
[...]

On the other hand the strategy of NI is clear: NI sells more LabVIEW licences, if dummies are capable to produce the same quality of code as experienced and knowledgable software architects.[...]

 

Herbert


 

I have to contradict here. Good-quality code, by my personal definition, is readable and maintainable. It has primarily little to do with execution speed or memory usage. Hence, no beginner will write good code inherently... he has to learn to organize and "think G".

 

This brings me to the most important part:

My experience shows that significant slowdowns at execution time are caused by "non-LabVIEW'ish code". It also results in longer optimization, because DFIR and LLVM have to spin, turn, and twist the code again and again.

Now, what is "non LabVIEW'ish code" you might ask....

- overuse of variables (95% of all variables in all VIs around the world are unnecessary)

- sequence structures (with really rare exceptions, which would be single-frame ones... talking about Windows and RT only)

- building large, monolithic VIs

- race conditions due to poor synchronization of parallel running 'tasks'

- Rube Goldberg code

- ...

(sure you can come up with some more points here!)
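Since G is graphical, a textual analogy may help illustrate the "race conditions due to poor synchronization" point: two parallel tasks doing an unprotected read-modify-write on shared state can lose updates, which is exactly what overused variables invite in G. A minimal Python sketch (the lock-based fix loosely corresponds to funneling all updates through a single point, such as a queue or functional global; names are purely illustrative):

```python
import threading

def unsafe_increments(n_threads=2, n_iters=100_000):
    # Shared state updated without synchronization -- analogous to two
    # parallel loops in G both writing the same local/global variable.
    counter = {"value": 0}

    def worker():
        for _ in range(n_iters):
            v = counter["value"]       # read
            counter["value"] = v + 1   # write: another task may have
                                       # updated the value in between

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()
    return counter["value"]            # may be < n_threads * n_iters

def safe_increments(n_threads=2, n_iters=100_000):
    # Same work, but the read-modify-write is protected, so no updates
    # can be lost.
    counter = {"value": 0}
    lock = threading.Lock()

    def worker():
        for _ in range(n_iters):
            with lock:
                counter["value"] += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()
    return counter["value"]
```

The unsynchronized version can silently drop increments; the synchronized one always yields the full total.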

 

I do experience longer compile times in LV 2010, that's true, and it is good to give feedback on this. But keep in mind that compile times also depend on your code's quality. Keeping it high will increase your effectiveness and help you prevent and find bugs in your own application.

 

Norbert
----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 51 of 72
(1,901 Views)

The compiler behavior changed in LabVIEW 2010 compared to LabVIEW 8.6.1 and earlier, with the consequence that 2010 really cannot handle the same (complex) code that earlier versions could handle. In my opinion, NI has got themselves a serious problem.

 

We write our source code in a binary format with the promise that the code can always be loaded by newer LabVIEW versions. If it cannot run right away, we get a list of errors that we can fix, and then we can continue in the new version.

 

With LabVIEW 2010, this standard procedure is no longer possible (for my code). Although the code can be loaded in 2010 without errors, it cannot be edited in 2010. This means that I must code it in 8.6.1, which works.

 

But what happens if I run into some kind of programming problem in 8.6.1? NI will not release updates for an old version of LabVIEW. They will (of course) suggest using the newest version of LabVIEW. And that is the problem. To be sure that the binary source will be maintainable and will run for the foreseeable future, it must always be editable using the newest LabVIEW version.

 

Who will trust a programming environment which suddenly declares your source code obsolete? An environment for which there is no second source?

 

The opposite situation actually appeared in LabVIEW 6, where several programmers suddenly found that their code could no longer run or be saved because it had become too complex. That problem was resolved in LabVIEW 7, all were happy, and NI got some well-deserved credit for fixing it. At least from me.

 

NI needs to address this upgrade issue properly. Publishing a document that suggests rewriting complex code, because the new compiler apparently can no longer handle it, makes me wonder where NI is heading. It doesn't sound promising.

 

Regards

 

Message 52 of 72
(1,859 Views)

I cannot accept the statement that, because of my "bad" code, which was fine for LabVIEW for the last 10 years, I suddenly need to completely change all my programming because the compiler can't handle it any more - especially since nobody mentioned this tiny detail when introducing the new LV.

 

Keep in mind that LV originally was not designed for people writing "good" code or having good programming skills, but to get quick results for measurement setups for everyone, especially those not well versed in programming. The typical application was hence probably coded quick and dirty, and LV managed to deliver this for all those years.

 

It might be the case that the majority of programs are smaller applications anyway, because that is where LV originated. But LV has developed functionality which makes it also quite good at handling large applications - and a growing number of programmers obviously uses it. So while the current optimizations might work well on the majority of the code, there is a substantial part of the user base for which they do not, and this needs to be addressed.

 

Face it, NI decided to sacrifice compilation performance in order to increase execution speed for some very special cases, and overdid it - as simple as that. Now they should see how to optimize on both sides, because the current situation is unbearable for many experienced users.

 

If NI really wanted to educate programmers to write "better" code because it is easier to optimize, they should have told people in advance and given examples of how to handle large applications better. Instead, LV 2010 was advertised as the annual upgrade with heaps of improvements as usual - and in many details it really is. I would like to use 2010 because it makes things easier, but for me it is simply not usable.

 

But again, this whole code quality discussion might be helpful, so please let us discuss in practical detail how to handle a large application with a complex user interface while at the same time everything is put into tiny VIs. And while code quality may contribute to the current problems, it is not their cause. It must be clear to everyone: if large programs cannot be edited any more, or saving a VI after a tiny change suddenly takes half a minute or longer, then this is not acceptable (at least for a large part of the LV community).

 

If we need to change our programming style to be able to program in newer versions of LV then

1) officially tell us so

2) give us examples of how to do this for large applications (especially those with large user interfaces) - and don't hide them in huge whitepapers, but put them into example projects to give us a quick start, and above all let everybody know that they need to change, and why.

3) give extended support for at least LV 2009, better yet 8.x, because "good" code or not, some people will simply not accept that they need to rewrite the code of a large application just because they need a new compiler, for example to support the next OS version...

 

Rainer

Message 53 of 72
(1,847 Views)

Hello Everyone,

 

As has been discussed before, the optimizations in the LabVIEW 2010 compiler are causing the slowdown in VI compile times. The compile time for most VIs is negligible. However, some users with large block diagrams have experienced slowdowns that render their VIs unusable. National Instruments aggressively targeted this problem with the help of numerous customers who shared source code with us. Because of this, we believe that we have implemented a fix for this problem in LabVIEW 2010 SP1. You can download LabVIEW 2010 SP1 from this site. Please read the reminder at the top of the page about needing an active SSP membership.

 

National Instruments did NOT believe that the solution to this problem was to rewrite code that was previously working fine. We go to extreme efforts to make sure that our customers' code will upgrade without consequence. This is of course not 100% guaranteed. In this case we didn't have diagrams large enough to trigger the behavior that some of our customers have experienced. When we received reports of this, we took immediate action. Our official response to this issue is documented in this KnowledgeBase, and I have now updated it for LabVIEW 2010 SP1.

 

While some of these programs could probably benefit from a different architecture or a refactoring, we never want to recommend this as a solution to an upgrade problem.

 

Regards,

Jon S.
National Instruments
LabVIEW NXG Product Owner
Message 54 of 72
(1,803 Views)

This is great news Jon! I'm sure you all, Adam, and the NI team have been working hard on this balance.

 

I can report a return to pre-LabVIEW 2010 save behavior, back to a 2 second save for large code, in that everything is "back to normal" for my code which uses message-based queued state machines.
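For readers unfamiliar with the pattern: a message-based queued state machine has a single consumer loop that dequeues messages and dispatches on each one, much like a G case structure driven by a queue. A rough textual sketch in Python (the message names here are made up for illustration, not taken from any real application):

```python
import queue

def run_handler(messages):
    # Enqueue the incoming messages, then append a sentinel so the
    # handler loop knows when to stop -- analogous to an "exit" case.
    q = queue.Queue()
    for msg in messages:
        q.put(msg)
    q.put(("exit", None))

    log = []
    while True:
        msg, data = q.get()           # dequeue one (message, data) pair
        if msg == "exit":
            break
        elif msg == "acquire":        # each branch is one "state"
            log.append(f"acquired {data}")
        elif msg == "display":
            log.append(f"displayed {data}")
        else:
            log.append(f"unknown message: {msg}")
    return log
```

Because all work funnels through one queue, states execute in a well-defined order and shared data never races.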

 

I hope rrawer (OP), and others, will see the same return to expected behavior.

 

 

And, to clarify, I assume this change only affects the optimizations before saving, right?

That is, I assume the answers to the following questions are "yes", regardless

of the LLVMLargeVIThreshold setting:

 

1. Does it still optimize before running the VI in the LV environment? (so there would be the (desired) delay before running and/or a forced save?)

 

2. Does it still optimize during the build process?

 

Many thanks!

Message 55 of 72
(1,776 Views)

 


@StudioGuy wrote:

This is great news Jon! I'm sure you all, Adam, and the NI team have been working hard on this balance.

 

I can report a return to pre-LabVIEW 2010 save behavior, back to a 2 second save for large code, in that everything is "back to normal" for my code which uses message-based queued state machines.

 

I hope rrawer (OP), and others, will see the same return to expected behavior.

 

 

And, to clarify, I assume this change only affects the optimizations before saving, right?

That is, I assume the answers to the following questions are "yes", regardless

of the LLVMLargeVIThreshold setting:

 

1. Does it still optimize before running the VI in the LV environment? (so there would be the (desired) delay before running and/or a forced save?)

 

2. Does it still optimize during the build process?

 

Many thanks!


 

Hello StudioGuy,

 

I added a little bit more info to the Compiler token KB.  To directly answer your questions:

 

1. Does it still optimize before running the VI in the LV environment? (so there would be the (desired) delay before running and/or a forced save?)

This setting applies on a VI-by-VI basis. If a VI is over the threshold, it will always compile without some of the optimizations.
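For anyone who has not found the KB yet: the token goes into LabVIEW.ini. A sketch only; the numeric value below is an illustrative placeholder, not a documented default, so check the KB for the real value and its units:

```ini
; LabVIEW.ini (illustrative sketch). VIs whose size/complexity exceeds
; the threshold are compiled without the expensive LLVM optimization
; passes. The value 3000 is a placeholder, NOT the documented default.
LLVMLargeVIThreshold=3000
```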

 

2. Does it still optimize during the build process?

LabVIEW 2010 will compile during the build process. This compilation uses the same settings as the LabVIEW development environment when it comes to compiling.

 

The reason there is a hard switch is that some VIs were big enough that LabVIEW would run out of memory and eventually crash when trying to compile them.

 

 

Thanks for the feedback!

Regards,

Jon S.
National Instruments
LabVIEW NXG Product Owner
Message 56 of 72
(1,765 Views)

I can't take credit for this fix. A few other developers did a lot of work to address this. I actually have to eat my words a bit. I suggested earlier that using the old compiler backend was not an option. I assumed that we wouldn't be willing to make such a large change in an SP1 release. However, we decided that this problem was important enough to be worth more risk and effort than we would normally be willing to take on in a service pack. The challenge was that once we switched to the new backend we had added some features and done some other work that assumed the new backend was in place. Those changes did not work with the old backend until we added similar capabilities to it, and then we had to do quite a bit of testing to make sure that we didn't introduce any new bugs with those changes.

 

Again, I can't take credit for any of this, but several of our other developers did quite a bit of work just to prove me wrong. Enjoy. 🙂

 

In the long run our goal is still to get rid of our old backend by making the new one capable of handling these larger VIs efficiently, but in the meantime we realize that the compiler performance was unacceptable in some cases. We'll continue to look for ways to address those issues so that we can have both fast compiles and more optimized compiled code. Thanks again to everyone who provided us with example code that we could test with. That helped immensely.

Message 57 of 72
(1,762 Views)

OK, I understand (but haven't seen) the memory issues; otherwise, one would think to never optimize just for saving an edit, but to optimize for a run/build (as in my Idea Exchange note).

 

So, given that, if this seems like a generally good solution once you get all the community feedback, perhaps the setting should be moved into the environment rather than being an ini edit. Especially during building, this could be a default-enabled checkbox for optimizations. That way we could ensure that our built apps are fully optimized, as NI would recommend.

 

I understand the long view would have users not even worrying about this, but we are not there yet.

 

Regards and thanks!

 

[Edited for clarity]

Message 58 of 72
(1,754 Views)

LabVIEW 2010 SP1 looks like a masterpiece, addressing all the complaints and concerns I mentioned previously.

 

I am now able to load and edit my large and complex code without any delays worth mentioning. And this without changing any defaults at all.

 

I am also able to build the runtime application without excessive memory consumption. Instead of going to 3.5 GB during the build, it stays at 500 MB. This memory is not released afterwards, so a LabVIEW restart still seems needed to get down to the 250 MB the system usually consumes when no data is loaded. But this is merely an observation.

 

I have not delved into the tuning options. I find it fantastic that LabVIEW 2010 SP1 can now be considered a trustworthy programming environment. That is all I need to know in order to have faith in the future of the LabVIEW system.

 

It is a strange coincidence that SP1 was released just a few days after I voiced my concerns. But a nice one.

 

Regards to the NI R&D team for a job very well done. 

 

  

Message 59 of 72
(1,697 Views)

Keep in mind that if you check memory usage in something like the Task Manager, it only reports how much address space we have reserved, not how much is actively being used. We use an allocator that sits between the OS and LabVIEW. That allocator allocates pages of memory and gives pieces of them out to LabVIEW, but even if LabVIEW releases all of the pieces that make up a page, the allocator holds on to that page in case it needs to give out more later. That memory will show up as used from outside the allocator (as in the Task Manager), but it is still unused inside LabVIEW. It's not hurting anything: if it's not being used, the OS will page it out to disk and leave it there. It's just a misleading way to check memory usage these days.

 

If memory usage keeps climbing without bound then we have a problem, but if it goes up and remains steady (even without coming back down) then it's not necessarily a leak.
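The allocator behavior described above can be sketched in miniature (a toy Python model for illustration, not NI's actual allocator): chunks are handed out from reserved pages, and freeing a chunk returns it to the allocator's cache rather than to the OS, so an outside observer still counts the whole page as "used".

```python
class PageAllocator:
    """Toy model of an allocator sitting between the OS and the app:
    it reserves whole pages, hands out fixed-size chunks, and keeps
    freed chunks cached instead of returning pages to the OS."""

    PAGE_SIZE = 4096
    CHUNK_SIZE = 256
    CHUNKS_PER_PAGE = PAGE_SIZE // CHUNK_SIZE   # 16 chunks per page

    def __init__(self):
        self.pages_reserved = 0   # what a Task Manager-style view reports
        self.free_chunks = []     # cached, reusable chunks

    def alloc(self):
        if not self.free_chunks:
            # "Ask the OS" for a fresh page and split it into chunks.
            self.pages_reserved += 1
            self.free_chunks = list(range(self.CHUNKS_PER_PAGE))
        return self.free_chunks.pop()

    def free(self, chunk):
        # The chunk goes back into the cache; the page is NOT returned
        # to the OS, so external tools still see it as "used" memory.
        self.free_chunks.append(chunk)
```

Even after the application frees every chunk, `pages_reserved` stays put: reserved address space, but memory the allocator can hand out again without touching the OS.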

0 Kudos
Message 60 of 72
(1,692 Views)