I created a remote real-time data display that reads data off the Ethernet and then performs some processing to convert the raw data to engineering units. I then created an application so that I could run this program on several (Apple) workstations. I would run two of these applications at one time, since I was reading data from two different sources. This worked really well. Recently, the workstations were changed from Apples to PCs running Windows NT. I recreated the applications and ran them on the PCs. I have noticed that the applications now run much slower. I can bring one of the windows into the foreground and that window will run a lot quicker, but the other window is much too slow. If I put both windows in the background, they seem to run at the same speed, but it is still much slower than on the Apple. What can I do to optimize the speed of the applications?
"Dawn Davis" wrote in message news:email@example.com... > window is much too slow. If I put both windows in the background, they seem > to run at the same speed, but it is still much slower than on the Apple. > What can I do to optimize the speed of the applications?
I've been fighting application speed problems myself for the past little while. I'd love to see discussion of this, because the NI DevZone has very few suggestions. Let me throw out a few general thoughts on ways to improve speed:
* Check the VI Options (edit a VI, right-click the icon in the top right corner); there's a tab for execution parameters. Read up on the options: they adjust the execution priorities, and this could solve some of your problems. In my case it didn't help much, because I wanted to adjust priorities among loops in my event handler, not of subVIs.
* Parallelize your application by putting separate events into separate while-loops. In most cases serializing work really slows things down. LabVIEW can detect when no data or control hazards exist, and whenever possible it will execute things in parallel. Be forewarned: in many situations globals and locals will create all kinds of unforeseen data dependencies between parallel while-loops. Be conscious of the underlying multitasking; Mac, Unix, and Windows handle things differently, as described on the NI website.
* Spinning while-loops make REAL SLOW code. The timer icon is one solution, but not a very good one. I saw an order-of-magnitude speed increase changing my code to use the Notifier VIs set to never time out. If this fits your situation, by all means switch! A polling scheme serializes things too much, while event messaging lets the system execute important things while other sections wait.
* Unforeseen factors will change things, especially priority for the foreground application. One solution might be to change to the GOOP model (www.ni.com/goop). If written correctly, it should be easy to access each instance of the program as a separate object. That way you'd never have a second app in the background; everything would run out of a single application. This will also help eliminate tricky globals, which in turn helps create parallelism. The GOOP VIs really don't add much overhead, in my experience.
* Modify algorithms to be more efficient. Consider hiring some college undergrad fresh out of an algorithms course to calculate the big-O complexity of your algorithms, then fix those problems with better data structures! LabVIEW will resize arrays and structures on the fly, but this is very slow; prevent it by initializing the array up front and replacing elements rather than resizing. Get the Professional version, which has the VI profiler, to check what's slow. Remember Amdahl's law: make the common case fast!
* Look at the machine you're running it on. LabVIEW is memory- and processor-hungry compared to something like C++. My company's app is pretty large and runs OK on a 400 MHz Celeron with 128 MB of RAM. There will probably be problems if the computer is memory-starved, which may well be the case here since you said two copies are running at once.
* Move to 6i for any large app. The front-panel control reference is absolutely invaluable for complex programs; I cannot believe people ever worked without control references.
* Some events are just dog-slow, for example polling the mouse properties in a picture indicator, or polling a front-panel control to see whether it has changed since the last loop iteration. Running these in a lower-priority loop might help, but I've never found a good solution.
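The polling-versus-notification point above is hard to show outside of a block diagram, but here is a rough Python sketch of the same idea, with `threading.Event` standing in for a LabVIEW Notifier set to never time out (the names `producer`/`consumer` are just illustrative):

```python
# Hypothetical sketch (Python, not LabVIEW): a consumer that blocks on a
# notification instead of spinning in a polling while-loop.
import threading
import time

data_ready = threading.Event()
payload = []

def producer():
    time.sleep(0.05)      # pretend data arrives after 50 ms
    payload.append(42)
    data_ready.set()      # notify the waiter; no polling required

def consumer():
    # Blocks without burning CPU, like "Wait on Notification" with no
    # timeout, rather than a spinning loop that checks a flag.
    data_ready.wait()
    return payload[0]

t = threading.Thread(target=producer)
t.start()
value = consumer()
t.join()
print(value)
```

While the consumer is blocked in `wait()`, the scheduler is free to run other work, which is exactly why the event-messaging style beats sit-and-spin polling.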
Those are just my quick thoughts on the issue. Email me or post follow-ups; I'm always interested in this topic.
Joseph Oravec wrote in message news:firstname.lastname@example.org... > "Dawn Davis" wrote in message
> complexity on algorithms. Fix those problems with better data structures! > Labview will re-size arrays and structures but this is very slow, always > prevent this from happening by initializing the array and not resizing. Get
Careful with this. I'd always done this, reasoning that even if it wasn't quicker in every case than having LabVIEW automatically extend arrays on auto-indexing, it shouldn't be slower. However, in a test with a for loop building up an array of numbers, the auto-indexing approach was twice as fast as the pre-init/replace-element scheme. This was true on both small and large arrays.
I recall a comment from Greg a while back saying that, where possible, LabVIEW allocates the whole array before starting the loop; with a for loop it can work out in advance how much memory is needed, and LabVIEW does this on the fly. Even in a while loop, where it can't preallocate the whole array, growing the array doesn't necessarily entail allocating a new block of memory and copying the old array across to it, as I once thought.
I've been meaning to do some more tests on this, and certainly before I write anything else from scratch where it may be relevant, I'm going to try both approaches with a skeleton program and see which is better in each case.
"Craig Graham" wrote in message news:email@example.com... > > complexity on algorithms. Fix those problems with better data structures! > > Labview will re-size arrays and structures but this is very slow, always > > prevent this from happening by initializing the array and not resizing. > > Careful with this. I'd always been doing this, reasoning that even if it > wasn't quicker in every case than having Labview automatically extend arrays > on auto-indexing, it should not be slower. However in a test with a for > loop, bringing an array of numbers out, the autoindexing approach was twice > as fast as the preinit/replace element scheme. This was on both small and > large arrays.
This is amazing; I believe you if the profiler said so, I just don't know how that can be true. Everything you read in books or on webpages says to use pre-init/replace. Maybe somebody who works on the underlying LabVIEW engine could pop in and explain the results.
I intended my original comment more as a warning about nasty, complex algorithms. Spend the time on those algorithms and cut that n^2 time down to n log n where possible. Maybe I'm the only LabVIEW programmer in the world who does this, but usually I'm lazy the first time through and just want a working prototype. Going back afterwards to reduce computational complexity really helps.
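As a language-neutral illustration of that n^2-versus-better-structure point (a Python sketch, since a block diagram won't paste into a newsgroup), here is the same task done both ways; the function names are made up for the example:

```python
# Illustrative sketch: duplicate detection at O(n^2) versus average O(n)
# with a better data structure (a hash set); sorting first would give
# O(n log n). The task is the same, only the complexity changes.
def has_duplicates_quadratic(items):
    # Nested loops compare every pair -> O(n^2) comparisons.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_fast(items):
    # One pass with a set membership check -> average O(n).
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False

data = list(range(5000)) + [42]   # contains exactly one duplicate
assert has_duplicates_quadratic(data) == has_duplicates_fast(data)
```

On a few thousand elements the difference is already very visible with a profiler, which is the kind of win no amount of low-level tweaking will match.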
> I recall comment from Greg a while back saying that where it was possible, > Labview did reallocation of the whole array before starting the loop; with a > for loop, it's possible to work out in advance how much memory is needed and > Labview does this on the fly. Even if you're in a while loop where it can't > allocate the whole array, increasing the size doesn't entail allocating a > new block of memory and copying the old array across to it, as I once > thought.
That makes sense. There are lots of loop-unrolling-style techniques they could implement, so it's totally believable. It really isn't hard to look at some loops, figure out how many times they will run, and work out a lot of factors in advance.
On the other hand, judging by my real-life results, LabVIEW is pretty naive about redundant calculations. For example, it didn't automatically move a sin/cos calculation outside a loop for me. This added seconds to one of my calculations because the trig ended up executing N^2 times. I dragged it outside the loop and voila, only a single execution per call.
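The same hoisting move looks like this in a Python stand-in (a hedged sketch of the idea, not the actual VI; the rotation example is invented for illustration):

```python
# Loop-invariant code motion by hand: sin/cos of a fixed angle does not
# depend on the loop index, so compute it once outside the loop.
import math

def rotate_points_slow(points, angle):
    out = []
    for x, y in points:
        # sin/cos recomputed on every iteration, even though angle
        # never changes inside the loop
        out.append((x * math.cos(angle) - y * math.sin(angle),
                    x * math.sin(angle) + y * math.cos(angle)))
    return out

def rotate_points_fast(points, angle):
    c, s = math.cos(angle), math.sin(angle)   # hoisted: computed once
    return [(x * c - y * s, x * s + y * c) for x, y in points]

pts = [(1.0, 0.0), (0.0, 1.0)]
assert rotate_points_slow(pts, 0.5) == rotate_points_fast(pts, 0.5)
```

On a diagram the equivalent is simply wiring the trig functions outside the loop boundary so they execute once per call instead of once per iteration.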
One surprise I might throw in: the formula nodes did NOT seem any different, in my experience, from the equivalent icon primitives. However, on LabVIEW 5.1 the calculations didn't seem optimized at all. The first case was one long equation, which didn't take too long. The second case was a similar equation broken into several smaller, more readable chunks, which took a lot longer. The goal was a more readable formula node, but LabVIEW apparently didn't optimize out the unused values.
I got an email from somebody immediately after my posting about the "Wait" function being very necessary in while loops on NT. It prevents while-loops from doing the popular sit-and-spin that eats up system processing resources. I thought I had mentioned it originally, and it is a pretty basic idea, but perhaps the clarification is necessary. It is not a cure-all, but it will prevent small loops from stealing time from useful functions.
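For anyone who hasn't seen the pattern, here's a minimal Python analog of that "Wait" idea (the `wait_for` helper and its parameters are invented for the sketch; in LabVIEW it's just the Wait (ms) icon inside the loop):

```python
# A polling loop with a short sleep yields the CPU on every pass instead
# of spinning at 100% and starving the rest of the application.
import time

def wait_for(condition, interval_s=0.01, timeout_s=1.0):
    """Poll `condition` until true or until the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval_s)   # the "Wait (ms)" step: give up the CPU
    return False

start = time.monotonic()
done = wait_for(lambda: time.monotonic() - start > 0.05)
print(done)
```

Even a 1 ms wait is enough to stop a tight loop from monopolizing the scheduler, though as noted above, notification-based waiting is still the better cure.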
Joseph Oravec wrote in message news:firstname.lastname@example.org... > "Craig Graham" wrote in message > news:email@example.com...
> This is amazing, I believe you if the profiler said so; I just don't know > how that can be true. Everything you read in books or webpages says to use > pre-init/replace. Maybe somebody who works on the underlying labview engine > could pop-in and explain the results.
I didn't use the profiler; it was a very simple test. Just have a for loop that passes out an array of integers generated by the iteration count of the for loop. Take the timer value before and after the loop and subtract them. Make the array large (I used 100,000 elements) so the slight timing error caused by the OS (NT in this case) ticking over is insignificant, and compare the times required to create the array with auto-indexing and with pre-init. Easy enough for anyone to try.
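For readers without LabVIEW handy, the shape of that test can be sketched in Python (the LabVIEW results need not carry over to other languages; `list.append` plays the role of auto-indexing, and a preallocated list with index assignment plays the role of pre-init/replace-element):

```python
# Timing harness analogous to the test described above: build a 100,000
# element array two ways and compare elapsed times.
import time

N = 100_000

def autoindex_style():
    out = []
    for i in range(N):
        out.append(i)        # grow the array as the loop runs
    return out

def preinit_style():
    out = [0] * N            # initialize once, like Initialize Array
    for i in range(N):
        out[i] = i           # like Replace Array Subset
    return out

for fn in (autoindex_style, preinit_style):
    t0 = time.perf_counter()
    result = fn()
    t1 = time.perf_counter()
    print(fn.__name__, f"{(t1 - t0) * 1000:.2f} ms", len(result))
```

Whichever way the numbers come out on a given machine, the harness itself is the point: measure both approaches in your own environment rather than trusting folklore.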