
LabVIEW Developers Feature Brainstorming

Member
brents
Posts: 3

LabVIEW Performance

The LabVIEW team would like to do some targeted LabVIEW performance optimizations based on user feedback. What areas of optimization would greatly benefit you in your day-to-day work?

Please indicate:
  • The area that you feel needs improvement
  • What performance you are currently experiencing
  • What performance you would expect
  • How important is this optimization to you, on a scale of Minor Inconvenience, Major Inconvenience, or Showstopper
  • Your configuration (optional, but helpful) including
    •   LabVIEW version
    •   Computer model
    •   Processor type
    •   Processor speed
    •   Total Memory

Some tips on how to get reproducible performance measurements:
  • Quit expensive background processes and applications (e.g. virus scanners, Seti@Home)
  • Defragment your hard drive
  • If you have just rebooted, let the system settle before measuring

Brent Schwan
National Instruments

Active Participant
shoneill
Posts: 1,545

Re: LabVIEW Performance

Here are a couple of my ideas; some may be useful, others maybe not.

Whenever I have a time-critical process (a Gauss fit on a live >30 Hz signal) I find myself adapting my code so that all of the relevant VIs are set to subroutine priority.  This can often have a large effect on performance, and that's without building an EXE.  In order to be able to do this, I nearly always have to save a version of certain VIs (for example mean.vi) as a subroutine, thus having two versions: one as a subroutine, one normal.  Is this necessary, does this difference disappear when building an EXE, or am I fooling myself into thinking this speeds things up? - Status: Minor inconvenience

Secondly, string processing can be quite slow.  I'm quite aware that string arrays are horribly inefficient due to the inability to predict storage space in RAM, but that leads me to my question: can't we have fixed-length strings in LabVIEW?  The type descriptor (at least in LV 6.1) has room for a length, which is currently always set to FFFFFFFF.  Why not allow fixed-length strings?  I'm aware this would most likely be a NEW datatype, but I see it having many advantages in making certain string operations more efficient without having to switch to handling U8 arrays (which are kinda hard to read). Status: Minor inconvenience
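
Just to illustrate what I'm picturing (this is not how LabVIEW stores strings today, only a sketch in C of the kind of fixed-capacity type I mean, with a made-up 64-byte capacity):

    /* Hypothetical fixed-length string: the capacity is part of the type,
       so the compiler always knows exactly how much storage one element needs. */
    typedef struct {
        unsigned int len;       /* bytes currently in use (<= 64) */
        char         data[64];  /* fixed storage, never reallocated */
    } FixedString64;

    /* An array of these is one flat, predictable block of memory,
       unlike an array of variable-length strings (one handle per element). */
    FixedString64 column[1000];

That kind of layout is what would let certain string operations avoid the per-element allocations.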

I have some routines which parse arbitrary data types (via references) and I have experienced some hassle with arrays.  Although it's not a performance optimization in itself, it would be nice to have a property node to set the current index of an array, so that it's not necessary to reduce the visible array size to 1, set the array index and read the element, set the index back, and then set the visible size back.  It's not TOO bad when deactivating front panel updates beforehand, but it's unwieldy. Status: Minor inconvenience

I'm using a P4 2.8GHz with Hyper-threading, 1GB RAM, using LV 6.1 (I know it's old but hey....)

Just some thoughts, please feel free to ignore.

Shane.


Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Knight of NI
altenbach
Posts: 26,892

Re: LabVIEW Performance

I like the idea of fixed-length strings IF it really would improve performance. In one of my programs, I handle a huge number of strings from a network logger that can never be longer than 152 bytes, and I have a fixed-size FIFO buffer (LV2 functional global style) for up to a few thousand of those. While I have never encountered a bottleneck, it would make sense to be able to allocate this buffer flat in memory as an array of 2000 elements of 152-byte strings.
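
To make the layout concrete, here is a rough C sketch of what I mean by a flat allocation (the 152-byte / 2000-element numbers are from my use case; the ring-buffer bookkeeping is only a sketch):

    #include <stddef.h>
    #include <string.h>

    #define MSG_LEN   152
    #define FIFO_SIZE 2000

    /* One contiguous block of 2000 x 152 bytes instead of 2000 separate string handles. */
    typedef struct {
        char   data[FIFO_SIZE][MSG_LEN];
        size_t len[FIFO_SIZE];            /* actual length of each message */
        size_t head, tail, count;         /* ring-buffer state */
    } MsgFifo;

    static void fifo_push(MsgFifo *f, const char *msg, size_t n)
    {
        if (n > MSG_LEN) n = MSG_LEN;              /* messages never exceed 152 bytes */
        memcpy(f->data[f->tail], msg, n);
        f->len[f->tail] = n;
        f->tail = (f->tail + 1) % FIFO_SIZE;
        if (f->count < FIFO_SIZE) f->count++;
        else f->head = (f->head + 1) % FIFO_SIZE;  /* overwrite the oldest entry when full */
    }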
 
Some people have complained about startup times in LabVIEW 8.0, so this should be improved.
 
Looking at typical code posted here, it is popular to use value properties instead of local variables. Casual testing in the past showed that value properties carry a huge performance penalty, yet in many cases they do basically the same thing. It would be nice if the compiler could detect when a simple value property node could, under the hood, be handled like a local variable and thus eliminate the performance hit.
 
 



LabVIEW Champion . Do more with less code and in less time .

Active Participant
Tomi_Maila
Posts: 419

Re: LabVIEW Performance

  • I have to deal with large amounts of data, and VI performance is largely determined by the amount of data that fits into LabVIEW memory at a time. To increase performance, LabVIEW should not be limited to 1 GB of memory; rather, at least 4 GB should be accessible.
  • Now that there are more and more multi-core processors, the number of parallel threads should be user-configurable or should adapt somehow to the number of processors present.
  • I don't know how the subroutine execution class is implemented, but there should be an execution class similar to the C inline directive to keep the subVI call overhead as small as possible (see the sketch after this list).
  • For some reason there seems to be some overhead when calling external code from a DLL. This overhead should be reduced to an absolute minimum. This would also improve the performance of many of NI's own VIs.
  • Some of the NI signal analysis VIs are poorly coded and extremely inefficient. The efficiency of the signal analysis VIs should be improved.
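
Regarding the inline point above: in C the inline directive lets the compiler paste the function body directly into the caller, so the call/return overhead disappears entirely, whereas a subroutine-priority subVI presumably still pays some call cost. A trivial illustration:

    #include <stdio.h>

    /* "static inline" allows the compiler to expand the body at the call site,
       eliminating the call/return overhead completely. */
    static inline double scale_and_offset(double x, double gain, double offset)
    {
        return x * gain + offset;
    }

    int main(void)
    {
        double sum = 0.0;
        for (int i = 0; i < 1000000; i++)
            sum += scale_and_offset((double)i, 0.5, 1.0);  /* expanded in place */
        printf("%f\n", sum);
        return 0;
    }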

Tomi
--
Tomi Maila
Member
zebulon
Posts: 1

Re: LabVIEW Performance

Not really a performance improvement, though it could improve performance too.

It would be delightful if it were possible to iconify controls on the front panel.

Say you have 5 clusters, each the size of the screen, on one front panel. Keeping such a front panel clean and easy to get an overview of is impossible. An option to turn a cluster into, let's say, the icon of its typedef, or some sort of small composite connector image, would make things much easier.

And it could save performance in some cases, as the cluster would not have to be redrawn on screen while it is iconified, only when it's expanded back.
Member
tkreider
Posts: 26

Re: LabVIEW Performance

For you folks that have issues with LabVIEW and strings, think about doing string operations in a language like Perl or Tcl/Tk.

I have an old package I wrote to let me call Tcl/Tk from LabVIEW.  I dump the string into Tcl, then hit Tcl with whatever string operations are needed.  Tcl is lightning fast with lists, strings, and regular expressions (it uses Perl's regexp 5 code).  When the result is found, I suck it back into LabVIEW and dump the Tcl session.  This keeps memory management clean and allows the vastly superior string memory manager in Tcl to pull the heavy load.  This process also cuts down on spaghetti LabVIEW code to do the string manipulation.
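
For anyone curious, the C side of such a bridge is small. A stripped-down sketch using the standard Tcl C library (my actual package wraps something like this in a DLL that LabVIEW calls through a Call Library Function Node; the function name here is just illustrative):

    #include <string.h>
    #include <tcl.h>   /* standard Tcl C API */

    /* Run one Tcl script in a throwaway interpreter and copy the result back.
       Creating and deleting the interpreter per call is what keeps the memory
       management clean, at the cost of some per-call startup overhead. */
    int run_tcl(const char *script, char *result, int result_len)
    {
        Tcl_Interp *interp = Tcl_CreateInterp();
        int rc = Tcl_Eval(interp, script);               /* returns TCL_OK on success */
        strncpy(result, Tcl_GetStringResult(interp), result_len - 1);
        result[result_len - 1] = '\0';
        Tcl_DeleteInterp(interp);                        /* dump the Tcl session */
        return rc;
    }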

Another suggestion is to open a memory-mapped file and pass big blocks around that way.  Windows treats it like shared memory between processes.  Dump in your data, call some other program via ActiveX (and yes, IPC is painfully slow when limited to ActiveX), let the other program pull the data from the shared memory and compute.  Pull the result back in from shared memory in LV and move on.  This works well with complex data types and big(!) data sets that ActiveX would choke on.  Think DSP functions.
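
The Windows side of that trick boils down to a named file mapping backed by the page file; the other program opens the same name.  A bare-bones C sketch (the mapping name and size here are made up):

    #include <windows.h>
    #include <string.h>

    #define SHARED_SIZE (16 * 1024 * 1024)   /* 16 MB block, size to suit your data */

    int main(void)
    {
        /* Backed by the page file (INVALID_HANDLE_VALUE) and visible to any other
           process that opens the same name with OpenFileMapping(). */
        HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                         0, SHARED_SIZE, "Local\\MySharedBlock");
        if (hMap == NULL) return 1;

        void *view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, SHARED_SIZE);
        if (view == NULL) { CloseHandle(hMap); return 1; }

        memcpy(view, "big data goes here", 19);   /* dump in your data */

        /* ... kick off the other program via ActiveX, wait, then read its result ... */

        UnmapViewOfFile(view);
        CloseHandle(hMap);
        return 0;
    }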

BTW, LabVIEW rocks, keep up the good work NI!  I regularly win contests with my peers who tell me that LabVIEW is slow.  While they're still coding their version, I walk up with the answer and blow them away.  It's not about run-time speed, it's about time-to-solution.  Don't lose the formula.

Active Participant
Underflow
Posts: 230

Re: LabVIEW Performance

Hi brents,

Not sure if you're monitoring this one anymore, but...

Areas that need performance improvement:
 -total memory space available (showstopper)
 -garbage collection: what memory gets freed up, predicting on my end when it's going to get collected, and the speed with which memory is freed up (major inconvenience)
 -internal memory fragmentation (minor inconvenience)
 -graceful exit on "Memory is full".
 -per-instance profiling... okay, that's more of a feature request :smileywink:, but it's a major inconvenience anyway

Performance currently experiencing:
I am in a situation where the (immutable) business requirements are such that I must use strings.  A typical program load will require reading from file, indexing, manipulating, and storing 2D string arrays up to and beyond the memory limits of LV (7.1).  This is done in chunks as much as possible, but the lack of true ragged-edge arrays and some intractable array operations (transpose) mean that sometimes everything needs to be in memory.  And once it has been in memory even once, LV never quite seems to recover, even with the Request Deallocation primitive.  And once the memory capacity has been stretched to near breaking, LV becomes VERY reluctant to release it.  Simply exiting (not cleaning up, just... stopping) a program can take minutes!
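
For clarity, by "true ragged-edge arrays" I mean a layout like the following C sketch, where each row owns only the storage it actually needs instead of being padded out to the longest row (LV 2D arrays are always rectangular):

    #include <stddef.h>

    /* One row of a ragged 2D string table: rows can have different lengths. */
    typedef struct {
        char  **cells;       /* independently sized strings */
        size_t  num_cells;
    } Row;

    typedef struct {
        Row    *rows;        /* each row only as wide as its own data */
        size_t  num_rows;
    } RaggedTable;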

Performance I would expect (please note, I don't mean to imply that these are *reasonable* requests :smileyhappy:) :
 -larger memory space to play with
 -better garbage collection in the sense that more memory is freed after it's not used anymore
 -ability to predict garbage collection more accurately when writing code
 -faster memory deallocation under heavy load
 -ability to programmatically "reset" the memory, similar to what would occur if LV were restarted
 -profiling... tree structure summary, per-instance stats, plus direct BD overlay of debug and probe info rather than separate windows... okay, I'm just dreaming now!

Typical system:
LV7.1
WinXP/Intel single core
2-3GHz processor
1GB memory
7.2k SATA HD

Note:
  -Which default installed services from NI can be safely turned off to free up memory/processor?  Obviously, there is a dependency system, but what SW components are dependent on what services?  Is there a management component for these services?

Joe Z.
Active Participant
Kevin_Price
Posts: 1,916

Re: LabVIEW Performance

Just want to highlight the last part of Underflow's post:

Which default installed services from NI can be safely turned off to free up memory/processor?  Obviously, there is a dependency system, but what SW components are dependent on what services?  Is there a management component for these services?

This kind of thing has been a major pet peeve for me the last several years.  Not just NI software, but pretty much ALL big-company software.  Every time I do an install, I have to chase it around with a virtual pooper scooper.  Delete the shortcut off the desktop & the top level of the start menu, stop the items in the system tray and find the option that keeps them from going back there again, set the option that makes it stop nagging me to let it be the default application associated with a bunch of my files, check the task manager for services and processes that are running and use msconfig or the admin console to experiment at stopping and/or disabling them, and so on, and so on, and so on.

It's like every piece of software thinks it should be the one running your computer instead of you.  And I actually understand a bit how we've gotten here -- as time has gone on, computers are more and more like a home appliance that everyone owns.  And that has only happened *because* software tries to run things for you.  People who aren't computer-savvy wouldn't buy a computer, or a printer, or a digital camera, or an iPod, etc., if they needed a lot of tech knowledge before using it.  So I see why the software vendors set up all these annoying defaults -- I just wish they'd show more respect to the power users among us, and make it easier to selectively install only the stuff we want.

So that brings me back to NI, whose user base is significantly more tech-savvy than the general population.  But even with NI, I keep disabling the USB device detector in the system tray because I don't use any of the USB DAQ devices.  And I often disable some Lookout and Citadel stuff with no obvious ill effects, on the theory that I'm not using them either.  There are still at least another 5-10 services and processes I'm not sure about, and it sure seems that in the Task Manager just about every process listed thinks it needs to hoard 4 MB or more.  I can generally Google the Microsoft services to find out what to disable, but haven't found similar info about the NI services.  I'd like to stop and disable whatever is unnecessary, but trial-and-error hasn't been a very effective way to figure that out.

Help?

-Kevin P.

Member
drs T Schrama
Posts: 21

Re: LabVIEW Performance

Just a general suggestion,

I think if graphing could be faster, that would help.  Especially STFT graphs... the FFT algorithms are pretty fast, no complaints from me about those, but the graphing is somehow doubling my execution time.  Also, the amplitude spectrum VI is about twice as slow as the power spectrum VI... why?  When I use the sqrt function to achieve the same result, it doesn't slow things down nearly that much!
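
What I mean by using the sqrt function is essentially this, applied per bin to the power spectrum output (assuming both VIs use the same scaling):

    #include <math.h>

    /* Derive the amplitude spectrum from an already computed power spectrum.
       One sqrt per bin should be cheap compared to recomputing the spectrum. */
    void power_to_amplitude(const double *power, double *amplitude, int n)
    {
        for (int i = 0; i < n; i++)
            amplitude[i] = sqrt(power[i]);
    }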

LV 8.2, AMD 2500+ CPU, 1 GB RAM

Member
Nathan A. W.
Posts: 6

Re: LabVIEW Performance

LabVIEW needs to support features of modern processors.
-- 64bit memory space
-- number of threads in LabVIEW needs to scale with number of cores (4 threads is not enough)
-- SSE support (see the sketch below)
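
By SSE support I mean letting compiled diagrams use packed instructions for array math, i.e. what a C compiler produces from intrinsics like these (four single-precision operations per instruction):

    #include <xmmintrin.h>   /* SSE intrinsics */

    /* Add two float arrays four elements at a time using 128-bit SSE registers.
       n is assumed to be a multiple of 4 to keep the sketch short. */
    void add_arrays_sse(const float *a, const float *b, float *out, int n)
    {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
        }
    }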
 
Since MATLAB can do it, it's probably a necessary feature to remain competitive.
 
I would like to make 12,000,000,000 element arrays and do math on them.
 
The system I would like to do it on is dual or quad socket, quad-core, 64GB RAM, >1TB RAID10, in either XP64 or Vista.