LabVIEW 2010 slow VI save performance

We are aware that some large VIs can take significantly longer to compile, but our benchmarks have shown a corresponding significant improvement in runtime performance which we think is worth it. If the compile of a single VI is taking too long then I suggest that you make that VI smaller by creating more subVIs. You can probably even find some duplicate code in there to replace with a single subVI. Having a single large VI has multiple downsides already, so it's always a good idea to refactor. You'll find that it's easier to manage in addition to being faster to compile.
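The "replace duplicate code with a single subVI" advice translates directly to text languages as well. As a rough analogy (Python here, since LabVIEW block diagrams cannot be shown in text; all names are invented for illustration), repeated logic in two event cases is extracted into one shared function, leaving one place to edit and compile:

```python
# Analogy for "replace duplicate code with a single subVI":
# the same scaling logic appears in two event-case handlers...

def handle_case_1(samples):
    return [s * 2.0 + 1.0 for s in samples]   # duplicated logic

def handle_case_2(samples):
    return [s * 2.0 + 1.0 for s in samples]   # duplicated logic

# ...extracting it into one shared helper (the "subVI"):

def scale(samples, gain=2.0, offset=1.0):
    """Shared helper -- the equivalent of a reusable subVI."""
    return [s * gain + offset for s in samples]

def handle_case_1_refactored(samples):
    return scale(samples)

def handle_case_2_refactored(samples):
    return scale(samples)

print(handle_case_1_refactored([1.0, 2.0]))  # [3.0, 5.0]
```

The refactored handlers behave identically to the originals, but the logic now lives in one unit, which is the point of the subVI suggestion: smaller top-level diagrams and less code to recompile.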

Message 11 of 72

Dear Adam,

 

Thanks for the clarification.

I understand that the performance increase is a big plus.

 

However, I would still prefer to have a chance to influence that saving behaviour. In a large application with lots of user interaction, performance is often not a big concern (apart from a limited number of VIs, maybe), but increasing saving time during development by a factor of 20 is a pain in my opinion. In fact, it leaves me with the question of whether I stick with 8.5 or finally upgrade to 2010. And I would really like to switch to 2010, because the separation of code and object data could finally make group work with source control feasible...

 

Anyway - an optional switch at the Project level, or even better at the VI level, would really make things easier on the development side...

 

Nevertheless, I will try to shrink the VI, but I wonder whether it's simply the fact that there are 100 cases in the event structure, which won't change even if I try to pack everything into subVIs...

 

thanks,
Rainer

Message 12 of 72

The nature of some of the architecture changes we made makes opting out impractical. We replaced the entire backend of our compiler (the part that generates machine code) with one that does optimizations at a lower level than was practical before. Even without actually doing the optimizations, the new backend simply does more work than the previous one. Opting out would mean keeping our old backend, which would mean supporting and maintaining three separate backends (LLVM [the new cross-platform one] plus our old x86 and PPC backends).

 

Keep in mind that saving is only slower when we have to compile. If you refactor your code so that you don't have to modify the top-level VI often, then even if saving it is still slow, you won't have to worry about it much, because you won't have to save it as often. We only recompile a VI when you modify it directly or change something in its subVIs that forces callers to recompile.
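The recompile-on-change rule described here can be sketched as a dirty-flag dependency graph. This is a hypothetical model for illustration, not LabVIEW's actual implementation: an edit always dirties the edited VI, but it only propagates to callers when the change touches something they depend on (e.g. the connector pane):

```python
# Hypothetical sketch of dirty-flag recompilation: a VI recompiles
# only when edited directly, or when a subVI change alters something
# its callers depend on (modelled here as "interface_changed").

class VI:
    def __init__(self, name):
        self.name = name
        self.callers = []   # VIs that call this one
        self.dirty = True   # needs a compile on first save

    def edit(self, interface_changed=False):
        """An edit always dirties this VI; an interface change
        also dirties every caller (they must recompile)."""
        self.dirty = True
        if interface_changed:
            for caller in self.callers:
                caller.edit(interface_changed=False)

    def save(self):
        """Saving compiles only if the VI is dirty."""
        if self.dirty:
            self.dirty = False
            return "compiled"
        return "skipped"

main = VI("Main.vi")
sub = VI("Worker.vi")
sub.callers.append(main)

main.save(); sub.save()            # first saves compile both
sub.edit(interface_changed=False)  # internal change only
print(main.save())                 # skipped: Main.vi unaffected
sub.edit(interface_changed=True)   # connector pane changed
print(main.save())                 # compiled: caller must recompile
```

Under this model, saving the top-level VI stays cheap as long as edits are confined to subVI internals, which is the workflow being recommended.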

Message 13 of 72

Adam,

 

Thank you for the explanation.

 

Lynn

Message 14 of 72

Dear Adam,

 

To check your suggestions I did a simple experiment: attached you will find a simple VI consisting of an event structure, 50 buttons, and 50 value-change cases. 30 of the cases each do simple calculations in a loop; 20 cases fire a Value (Signaling) event on another button.

 

If you now delete one of the two for loops and waveform graphs in case #1 (Button 1) and save, it already takes about 5 seconds (and the VI is only 80 kB). Saving the same VI in LV 8.5, by contrast, takes no noticeable time.

 

In addition, I checked the larger VIs, and actually most of the event cases already just trigger a subVI and pass in some data, so there is not much left to streamline. Comparing them to more complex VIs does not seem to make much of a difference in saving time.

 

I think this topic is a serious problem for anybody programming larger applications with user interaction. I don't see how you could avoid handling the user interface in one VI, and if it consists of more than a few buttons, generating 100 events is not uncommon. In that case you easily end up with code between 500 kB and 1 MB.

 

A big convenience of LabVIEW so far was that you could build up the code function by function and immediately test every added function with original data. For this reason (and certainly to prevent code loss) I used to save the code frequently, probably every couple of minutes or even more often. In fact, Ctrl+S happens more or less automatically for me, which I noticed when I started using LV 2010 and wondered what had just happened when nothing happened for 30 seconds...

 

Anyway - if I now have to wait 20 to 30 seconds for every save of my main VI, even if I only save when I want to run the application, this easily adds up to 30 minutes to an hour of pure waiting time per day, just for saving. And this is exactly what's happening: I tried it with three larger projects of mine, and basically every single one ends up with this saving-time problem.
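The estimate holds up arithmetically: at 20-30 seconds per save, 30 minutes to an hour of waiting corresponds to roughly 90-120 saves a day, consistent with saving every few minutes over a working day. A quick back-of-the-envelope check (the saves-per-day figure is an assumption, not from the post):

```python
# Back-of-the-envelope check of the daily waiting-time claim.
save_times_s = (20, 30)   # seconds per save, as reported in the post
saves_per_day = 120       # assumed: one save every ~4 min over 8 hours

for t in save_times_s:
    wait_min = t * saves_per_day / 60
    print(f"{t} s/save x {saves_per_day} saves = {wait_min:.0f} min/day")
```

With these numbers the waiting time comes out at 40 to 60 minutes per day, squarely in the range the post complains about.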

 

I don't see how a more complex application could be handled efficiently this way, and I really believe that if you don't find a solution for this compiler problem, many people who were happy about the changes in LabVIEW over the past years that enabled them to handle larger applications will have a big problem.

 

Don't get me wrong: I've been using LabVIEW for 15 years now, and I still think it's the ideal system for me, one I have been constantly advocating ever since, even though people in the company keep telling me that other systems nowadays provide more or less the same functionality as LabVIEW, and that it's much easier to find good programmers for them...

So I'd really like to keep using LV, but this saving problem simply drives me crazy, spending half an hour to an hour per day just saving files. This is a drastic cut in programming efficiency, and I cannot recommend that any of my colleagues use LV 2010 for somewhat larger applications (which in our company seems to include every single one).

 

I really hope you find a solution for this problem, because if there's no other workaround, LV 2010 is simply not usable for larger applications, and I'm sure this problem will become big as soon as more companies want to upgrade to LV 2010.

 

sincerely,

Rainer

Message 15 of 72

It is not just large "VIs", but large applications in general. My main VI is 500 kB, but there are many subVIs that can be called from the main GUI, hence they are in memory, and thus I think they get compiled too on any top-level save.

 

While I appreciate the compiler improvements, those of us who save our work often see this as a big impact on workflow.

 

Can we find a middle ground...

 

While I understand the compiler is always running, showing us our broken run arrows and such, it is only when we save that it does the more intensive, longer compile. How about if that complete compile only happened when the user tries to RUN a VI? (And, of course, during the build process, etc.)

 

For example, when you make an edit to a large project and try to run without saving, that is exactly what happens: the compiler does its thing and the program runs after the usual delay. That's just fine.

 

Is this a reasonable compromise? It sounds easy to implement (the editor already compiles before a run; just don't compile after a save). If so, I'm happy to submit this recommendation to the Idea Exchange.
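The compile-on-run proposal amounts to lazy compilation with a cache: saving only persists the source and marks the compiled form stale, and the expensive compile is deferred until a run (or build) actually needs it. A hypothetical sketch of that behaviour (all names invented; this is not LabVIEW's API):

```python
# Hypothetical sketch of the proposed compile-on-run workflow:
# save() just writes to disk; the expensive compile happens lazily,
# the first time run() needs the compiled form after an edit.

class LazyVI:
    def __init__(self, source):
        self.source = source
        self.compiled = None       # cached compiled form

    def edit(self, new_source):
        self.source = new_source
        self.compiled = None       # invalidate cache; do NOT compile

    def save(self):
        return "saved (no compile)"   # fast, regardless of VI size

    def _compile(self):
        if self.compiled is None:     # stand-in for the slow backend
            self.compiled = f"machine code for: {self.source}"
        return self.compiled

    def run(self):
        return f"running {self._compile()}"   # compile only if stale

vi = LazyVI("x + 1")
vi.save()          # instant, even for a large VI
print(vi.run())    # first run pays the compile cost
vi.edit("x + 2")
vi.save()          # instant again
print(vi.run())    # recompiles, because the cache was invalidated
```

The trade-off this makes explicit: the compile cost does not disappear, it just moves from every save to the first run after an edit, which is where the poster argues it belongs.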

 

Regards to all.
