I have a big LabVIEW project that takes quite some time to open, takes long to build, sometimes becomes unresponsive while moving VIs, editing VIs is not as smooth as normal (e.g. if I move an element it takes a second or two to complete the operation), etc.
Now I'm starting a new project and I want to optimize performance. What are some general directions to keep the project's memory consumption low and have "normal" response times?
- maybe using data value references to pass big clusters (controls) into VIs?
- design the project to call VIs dynamically (e.g. dynamically call one submodule of the application)? But this means worse call times.
- using PPLs? I used them, but didn't observe any major performance boost.
- does a LabVIEW DLL use less memory than a standard (subVI) LabVIEW call?
Any other ideas or thoughts?
1) For edit time latencies:
Take a look at the file size of your VI. If it's over half a megabyte (an old LV5 metric; not sure what the current "threshold" is), then:
- assess whether you've inadvertently saved a large array as a default value (for arrays, graphs, images, whatever, make sure the default is empty unless you have a good reason otherwise)
- if it's code bloat (a huge diagram), consider functional breakup.
2) For runtime latencies, consider making subVIs that are loaded when needed, instead of monolithically loading the entire hierarchy. Many of my applications incorporate driver sets that are not needed in any particular instance, but I ignore this because they don't add significant load latency; your situation may be different.
Without more evidence, it's hard to comment further.
To continue Blair's response regarding edit time latency:
LabVIEW compiles code on the fly after each edit. If your application has a lot of interlinked static dependencies, it seems to take much longer to compile, as an edit in one VI triggers recompiles in dependent VIs (which in turn recompile VIs that depend on those, etc.). I believe this is the primary cause of the lag many users experience in the editor.
The solution I have found that works is dependency management. We're all familiar with "spaghetti code." In all the projects I've worked on with edit-time lag, none of them managed the dependency tree well and they ended up with "spaghetti dependencies." It is unreasonable for us to expect LabVIEW to instantly recompile when we give it a project with spaghetti dependencies. It takes time to follow all those links and recompile the VIs.
Learning dependency management not only gives you a snappy editing environment, it's a necessary skill for creating well-designed applications and (imo, though not shared by many) absolutely required for CLD-level programmers. There are lots of ways to go about managing dependencies. OOP uses different techniques than procedural LabVIEW programming, and the dependency tree might look different. Regardless, the idea is to limit the amount of static interlinking between VIs.
One way to check how interlinked your VIs are is to open a new project and add a VI exposing a component's functionality. Then expand the Dependencies branch and see how much of your project is included. If the dependent VIs are limited to those that are part of that component, you're probably good. If the dependent VIs include other components or, in worse cases, all of your project, you've got dependency management issues.
As for the time it takes to build an executable, that's just a time-consuming process. AFAIK there's not a lot you can do about reducing the build time for the entire project. If you don't want to rebuild the entire project for every change, you can build your functional components as DLLs, PPLs, or even helper EXEs. Obviously you'll have to write more source code to interact with the different deployment components than you would if it were all bundled into a single EXE. You also have to manage the risk of incompatible components on a deployed system, which you wouldn't have to worry about with a single EXE. Whether or not the benefit of reduced compile time is worth the cost is up to you.
Clearly, there are different levels of application sophistication apparent here; your post points out underlying design issues that are not evident to most novice or intermediate-level programmers. Not sure what level the original poster is working at, but I've seen even small projects suffer from careless data-storage habits.
Is there any reference material on dependency management available?
Blair Smith wrote:
your post points out underlying design issues that are not evident to most novice or intermediate-level programmers.
Unfortunately, many CLA level programmers are not aware of the issue either. I think the lack of visibility stems from there being very few (relatively speaking) LV developers with computer science backgrounds or extensive experience building large applications in other languages.
Blair Smith wrote:
Is there any reference material on dependency management available?
No... at least there isn't any material focusing on dependency management that I'm aware of. It's stuff I've learned via hard knocks and from various literature on other programming topics--mostly object-oriented stuff. "Dependency management" itself is a bit ambiguous, as there are many different kinds of dependencies. In the context of my previous post I'm specifically referring to managing static dependencies. Managing other kinds of dependencies becomes important in different contexts.
For me, dependency management isn't an afterthought; it's one of the central principles of my entire design and development process. If I don't understand the project's dependencies, I can't predict exactly how an arbitrary change will affect the system, and if I don't know that, I can't have any confidence there will be no unintended side effects after implementing the change. It seems to me the entire realm of software architecture can be traced back to managing some kind of dependency. Maybe it's just me and the way my brain is wired, but I don't see how anyone can claim to understand a system if they don't know its dependencies.
If you're comfortable with comp sci lingo, reading whatever you can find on "dependency injection" is a reasonably good starting point. (And if you're not, read it anyway and get comfortable with the lingo.) Dependency injection is one technique for managing static dependencies. I use it a lot. It is also an example of the "Dependency Inversion Principle," which can be implemented using several different patterns depending on your specific needs. Both of these are subsets of the broader topic of dependency management, but it will at least start you down the path.
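The concept isn't LabVIEW-specific, so here's a rough text-language sketch of dependency injection in Python. All the class names here are invented for illustration; the point is only the shape of the dependency, not any real driver API:

```python
from abc import ABC, abstractmethod

# The high-level code depends only on this small abstract interface,
# never on a concrete implementation.
class Instrument(ABC):
    @abstractmethod
    def read(self) -> float: ...

class SimulatedDmm(Instrument):
    def read(self) -> float:
        return 1.23  # stand-in for real hardware I/O

class Logger:
    # The dependency is "injected" through the constructor, so Logger
    # never statically links to a concrete driver. Editing or swapping
    # a driver doesn't touch this class at all.
    def __init__(self, instrument: Instrument) -> None:
        self._instrument = instrument

    def log_once(self) -> float:
        return self._instrument.read()

print(Logger(SimulatedDmm()).log_once())  # 1.23
```

In LabVIEW terms, the equivalent is having your caller depend on an abstract parent class (or a small interface library) and loading the concrete implementation at the composition root, so the static dependency tree of the caller stays small.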
Blair Smith wrote:
Not sure what level the original poster is working at, but I've seen even small projects suffer from careless data-storage habits.
I don't disagree, and I think your advice was good. Any time large amounts of data are being manipulated the developer needs to be aware of excess data copying. I was highlighting a separate reason many developers run into editor lag on "large" projects.
When you say "large LabVIEW project" what do you mean? Approximately how many subvi's?
Do you use source control? (This can significantly slow down load times.)
I have a project of about 1200 VIs that takes about 4 minutes to open from Source Control.
It seems to take ages to load each of the LVLIBs each and every time.
The top level VI takes about 10 seconds to load.
Are there any conflicts indicated in the project explorer? Old versions of VIs still referenced somewhere in the project explorer linked to missing SUBVIs? Have you moved dependent SUBVIs and updated only the current working top level VI but not previous iterations which are still referenced in the project explorer?
The only time I have experienced really slow editing of VIs is when one of the VIs in the hierarchy is corrupt or the project is corrupt.
It seems to me like you have a corrupt VI and/or a corrupt project.
To fix a corrupt project you have to create a new one and add all the VIs into it the same as the old one. I've had to do this a couple of times.
To fix a corrupt VI you have to work out which one is (or ones are) corrupt and rewrite them. I don't know how to tell which element has become corrupt or what goes wrong; I just know that it can happen. If you copy and paste all the contents of a VI into a new one, sometimes the corrupt element gets copied too and the problem persists. That is my experience; I've had to recreate a couple of SUBVIs too.
SubVI count is only one measure of complexity, and not a very good one. My projects tend to be above 500 subVIs and sometimes well over 1000, but I wouldn't consider them particularly complex (of course that is also a very relative measure).
I once had the "pleasure" of inheriting a project that in fact had not that many subVIs, and the subVIs that existed were trivial wrappers around global variables (!) and IO accesses. In terms of subVI count, the application was trivial.
But the main VI occupied over 10 MB on disk and consisted of several loops and case structures containing huge stacked sequences, with most sequence frames having case structures inside that enabled their content based on a global string state. Moving a single node or wire in this main VI took several seconds; absolutely horrible for even the most trivial edits.

I worked over a week to put as much code as possible into clearly defined subVIs, also identifying identical or nearly identical copy-pasted code sequences and replacing them with parameterized subVIs, and finally got to the point where the main VI was editable (albeit still terribly slow). That let me place the different stacked sequences into subVIs and convert them into proper state machines, which is what they had been all along, just implemented in a terribly inefficient way: for each state execution, LabVIEW had to step through the entire stacked sequence to eventually reach the frame whose case structure executed on that state. After the whole redesign, the application had quite a few more subVIs, occupied somewhat less disk space, had added functionality, but ran much more smoothly and at lower CPU overhead, and most importantly, editing a wire in any of the VIs was snappy again.
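For what it's worth, the structural difference can be sketched in a textual language. Roughly, in Python terms (the state names are invented), the inherited code did the equivalent of the first function below on every state execution, while a real state machine does the second:

```python
# The inherited structure: every state "tick" steps through ALL
# sequence frames, and each frame's inner case structure tests the
# global state string, so only one frame actually does anything.
def tick_stacked_sequence(state, frames):
    for frame_state, action in frames:  # walks every frame, every tick
        if frame_state == state:        # only one case ever fires
            action()

# A proper state machine: dispatch straight to the current state's
# handler, skipping all unrelated code.
def tick_state_machine(state, handlers):
    handlers[state]()

log = []
handlers = {"init": lambda: log.append("init"),
            "acquire": lambda: log.append("acquire")}

tick_stacked_sequence("acquire", list(handlers.items()))
tick_state_machine("acquire", handlers)
print(log)  # ['acquire', 'acquire']
```

Both produce the same result; the difference is that the stacked-sequence version burns time (and diagram space) on every frame that does nothing, which is exactly the overhead the redesign removed.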
One thing I also found out during this was that, with such huge diagrams, wire bends do add to LabVIEW's edit-time responsiveness cost. In the original diagram the wires ran back and forth in all directions in a way that made a bowl of spaghetti look organized in comparison. Cleaning up those wire bends wherever possible also had a measurable effect on the responsiveness when editing that VI.
You know, I was wondering: what effect (if any) does the LV 2010+ feature to "separate compiled code from the source file" have? My gut feeling has always been that if you have a gazillion VIs depending on some other VI, moving a few wires around will not force LabVIEW to "recompile everything." Does it also have any effect on the editing speed of a single huge VI? Since again, if you do not change the logic, LabVIEW doesn't need to recompile the entire file for each edit.
SubVI count is only one measure of complexity, and not a very good one.
A fair point. I have never had the "pleasure" of working on code such as you have described, so I did not consider it. I try to keep my block diagrams to one or two screens, which makes well-structured code and use of subVIs essential, thus avoiding the single-diagram monstrosities.
- maybe using data value references to pass big clusters (controls) into VIs? This will not help; it will create more data copies, and reading data from front panel objects via references and property nodes also slows down execution because it forces LabVIEW to use the user interface thread.
- design the project to call VIs dynamically (e.g. dynamically call one submodule of the application)? But this means worse call times. This is not the solution to your problem; it will add unnecessary complexity.
- using PPLs? I used them, but didn't observe any major performance boost. You are right, it would not help.
- does a LabVIEW DLL use less memory than a standard (subVI) LabVIEW call? No, this will only make debugging harder.
First identify the source of the problem, then you can work on a solution.
The project being "large" is not the problem. Have you tried any of the suggestions offered so far?
If you open just a few of the SubVIs is it still slow to edit them?
Daklu, thanks for your thoughts; I will look more into dependency management.
When you say "large LabVIEW project" what do you mean? Approximately how many subvi's? 2000+ VIs
Do you use source control? (This can significantly slow down load times.) Yes, but not as a plugin in LV.
I have a project of about 1200 VIs that takes about 4 minutes to open from Source Control. Mine takes about 5 min to load.
It seems to take ages to load each of the LVLIBs each and every time. I agree; that is why I was thinking of making DLLs from my libraries.
The top level VI takes about 10 seconds to load. Mine takes about 20 s.
Are there any conflicts indicated in the project explorer? No.
Old versions of VIs still referenced somewhere in the project explorer linked to missing SUBVIs? Not sure, I will take a look. Is there an easy way to find VIs that are not used by any other VI in the project?
Have you moved dependent SUBVIs and updated only the current working top level VI but not previous iterations which are still referenced in the project explorer? No.
The project being "large" is not the problem. Have you tried any of the suggestions offered so far? Not yet. I don't know if I will make any major changes to the project, because it is finished and the application works well. My main intention was to gather information, experience, and ideas from other developers so I can design my new applications as well as possible.
If you open just a few of the SubVIs is it still slow to edit them? No.
I was thinking of using data value references (DVRs) where I have large clusters of data passing through the application. For example, I have a strict type def control (a big cluster) which is passed into 100 VIs. If I change something in that cluster, all 100 VIs have to be updated, and that takes quite some time. Now, if I pack this control into a DVR (and create a new strict type def control for it) and pass the DVR into all 100 VIs, will this improve performance or not?
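To put what I'm hoping for in text-language terms: it's similar to hiding a big structure behind a small accessor layer, so most callers depend on the narrow interface rather than on the structure's layout. A rough Python sketch (all names invented for illustration):

```python
from dataclasses import dataclass

# The "big cluster": any layout change ripples to every caller that
# unbundles it directly.
@dataclass
class Settings:
    sample_rate: float
    channel: int
    # ...dozens more fields in the real application

# A narrow accessor layer: callers depend only on these few methods,
# so internal changes to Settings stay localized here. It also holds
# a single shared reference instead of handing out copies, which is
# the role a DVR would play in LabVIEW.
class SettingsRef:
    def __init__(self, settings: Settings) -> None:
        self._s = settings  # shared reference, not a copy

    def sample_rate(self) -> float:
        return self._s.sample_rate

def configure_daq(ref: SettingsRef) -> float:
    # depends on the accessor, not on the cluster layout
    return ref.sample_rate() / 2

print(configure_daq(SettingsRef(Settings(1000.0, 0))))  # 500.0
```

Whether wrapping the typedef in a DVR actually decouples the 100 VIs from changes to the cluster this way is exactly my question.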