
LabVIEW Development Best Practices Blog


Re: Let NI Benchmark Your Code and Help Us Make LabVIEW Faster

Elijah_K
Active Participant

In the world of large application development, performance is often critical.  Over the years, our investment in the LabVIEW compiler has aimed to ensure that most users don't have to spend time and effort worrying about how to improve execution speed or memory usage (although you can, of course).  The LabVIEW compiler is good at identifying ways to optimize code and reduce the amount of memory required, but we're always trying to make it better...

The LabVIEW development team is actively working on compiler improvements that could yield significant increases in execution speed, but we need the help of our users, especially those of you developing large applications.  We need real-world applications from customers to benchmark so that we can understand how the changes we're making affect execution performance.  Of course, one of the advantages for participants is that we can focus on any corner cases or areas of your application that you would like to see improved.

Please, if you have an application or an example you would like us to benchmark, contact me directly at Elijah.Kerry@ni.com.  Of course, your code would be used strictly for benchmarking purposes and would not be shared or reproduced externally under any circumstances. 

We can't wait to hear from you!

Elijah Kerry
NI Director, Software Community
Comments
rolfk
Knight of NI

In 98% (actually about 97.8%) of the cases when someone complains about the speed of LabVIEW, it is because they implemented a bad algorithm to solve a particular problem. It is not that you can't write good algorithms in LabVIEW, but that it is so easy to write an algorithm at all in LabVIEW. So many people who would not even think about attempting the same in, let's say, C, or who would catastrophically fail if they did try, will happily code up a monster algorithm in LabVIEW, only to find out that it is orders of magnitude too slow for what they want. And then it is LabVIEW's failure, since it allowed them to code that damn monster up at all.

Rolf Kalbermatter
My Blog
battler.
Active Participant

The fact that NI is running this trial at all is their own admission that LabVIEW is slower than C.  Don't you think, Rolf?

Oh, and I have seen (in fact, I have several here on my desk) some terribly coded algorithms in C... even more so than in LV.

I think NI is doing the right thing with this trial.  They should look at scaling down (making it modular) their run-time engine at the same time.

rolfk
Knight of NI

The fact that NI is running this trial at all is their own admission that LabVIEW is slower than C.  Don't you think, Rolf?

By your logic, C compiler writers would no longer want to improve their compilers, since C is the standard you measure everything else by. But in fact even they keep trying to improve the code their compilers produce.

This is not so much about whether LabVIEW is sometimes slower than C (it always will be slower at some things), but about whether the LabVIEW development team can identify areas where their code optimizer can do an even better job than it has so far. There is certainly potential, but you should see that quite independently of a C compiler. The paradigms used by C compilers and by LabVIEW are quite different, and there will be differences. While LabVIEW's dataflow promotes parallelism, it at the same time also puts limitations on what the compiler can do, since it needs to guarantee protected operation. In C, parallelism is achieved by the programmer doing the right thing, and that means not only writing extra code for multithreading, but also observing all the rules about variable protection, serialized access, race conditions, priority inversion and such. LabVIEW wants to spare the programmer most of these worries, but that has its price: it will never be as fast at certain things as what a C programmer can do. But at the same time, the C programmer will have crashed his program a zillion times before it works at all, while the LabVIEW programmer can concentrate almost from the start on implementing the algorithms themselves instead of first getting the framework right so he can even begin to test them.

And optimizing code generation and modularizing the run-time engine are two completely independent issues. They could actually even work against each other in some ways, for both manpower and technical reasons.

Rolf Kalbermatter
My Blog
vix
Active Participant

I agree 100% with battler, and I had to spend a lot of time LabVIEW-optimizing my code.

If you can buy a faster PC with more RAM every time a new release is available, speed is not a problem. But if you try to run a LabVIEW example on old hardware...

I think that LabVIEW SignalExpress is written in LabVIEW, and I think that NI implemented good algorithms inside it... but everybody can easily see that speed is not one of SignalExpress's strong points.

LabVIEW has some advantages over C (NI CVI for example):

  • easy to do wonderful User Interfaces
  • easy to code even for non-programmers
  • deployment for different platforms

but speed and hardware requirements are a big problem.

I asked NI many times for a modular run-time, and I hope they'll do it sooner or later...

Vix
-------------------------------------------
In claris non fit interpretatio

-------------------------------------------
Using LV from 7
Using LW/CVI from 6.0
rolfk
Knight of NI

You wouldn't have had to spend time optimizing the same code had it been written in C?

And before you could even spend time on that, wouldn't you have had to write a lot of framework and support code just to be able to benchmark your code at all?

And crashed many times during that exercise?

Rolf Kalbermatter
My Blog
Elijah_K
Active Participant

The nature of dataflow programming gives us a unique opportunity to optimize run-time performance in ways that are very difficult for other languages like C.  A primary example is programming for multicore processors.  LabVIEW's ability to represent data-independent sections of code enables the compiler to separate them into chunks that can be executed on different cores at the same time.  It's important to weigh the time it would take to optimize code by hand against the fact that LabVIEW does it for you.

Ultimately, it's very difficult to claim that any programming language is faster than another, because so much depends upon how the code is written and the competence of the programmer.  You can optimize LabVIEW code to increase performance, and you can optimize C code to increase performance.  However, to help make this comparison, you can download a benchmark framework from here, which includes source code in LabVIEW and unmanaged C++ for several very low-level algorithms.  As you'll see, performance is very similar (keep in mind that no manual optimizations were made in either language).

The reason for this thread is that R&D is very excited about some new technologies we're developing for the LabVIEW compiler to further optimize code, and we want to have as many real-world applications as we can to measure just how much these changes improve performance.  Let us know if you have something for us!

Elijah Kerry
NI Director, Software Community
Pierre-Alain_Probst
Member

There is often a conflict between writing readable code by using subVIs with well-defined functionality and trying to optimize execution speed, which can require removing subVIs to avoid passing parameters to them (see, for example, the PID for RT, where all the subVIs for the integral and derivative parts have been removed). What I would like is the ability to tell the compiler to unfold subVIs, i.e. do the inverse of the 'convert to subVI' functionality. This is common in Fortran and gives you both readable code and fast execution speed.

tst
Knight of NI

There's a private method for the SubVI class called Inline which does exactly that: it takes the code in the subVI and moves it into the caller. I know there are some people who played around with it (you can try searching for "inline" in the LAVA forums), but I don't know how much performance you will gain. You should consider that this move has its own implications (e.g. increasing the executable size and breaking code which relies on things like LV2 globals or dynamic dispatch VIs) and may even hurt performance. For example, if you have a non-reentrant VI which allocates a buffer and you call it more than once, the buffer only needs to be allocated once; but if you inline it, you will get several buffers.


___________________
Try to take over the world!
battler.
Active Participant

More control = potential for better performance.

NI should ensure, and it needs to, that in taking away control they can still achieve optimal performance; a difficult if not impossible task.  It would seem that by asking users for their code, they aim to channel their efforts into certain areas only.  The very nature of the dataflow architecture means that they are limited to taking this approach.

After all, why was LV invented?  Answer: so that engineers and scientists didn't have to program in C.

On the other hand, NI, can you claim that LV is faster than C?

rolfk
Knight of NI

On the other hand, NI, can you claim that LV is faster than C?

They won't claim that, because a proficient enough C programmer can always match the speed that a LabVIEW program can achieve. After all, LabVIEW is itself programmed in C/C++, so it can't work miracles on the CPU.

What the previous post said is that, because of LabVIEW's dataflow paradigm, it is actually possible for the compiler to take advantage of modern multicore architectures automatically, something a normal C compiler has a very hard time doing and is sometimes simply unable to do. That does not mean a proficient C programmer cannot write an application that makes efficient use of multiple cores too, but it means that if your C program is not explicitly written to do so, it is almost impossible for the C compiler to do it for you, since it cannot really analyze all the dependencies across multiple modules and compilation units.

So yes: if the algorithms used lend themselves to parallelization and the programmer has not taken any specific measures, LabVIEW will, with a probability that approaches certainty, be able to produce faster code than a C compiler can.

But no: there is no way LabVIEW can produce faster code than what a proficient C programmer can achieve, one who knows how to create a thread pool, separate algorithms into parallel chunks, distribute them across multiple threads, combine the partial results into a final result set, synchronize the threads properly to avoid race conditions and protection faults, and a few other advanced programming techniques you usually don't learn in most programming classes.

It is, however, likely that by the time the C programmer is finished with this task and everything works, the LabVIEW programmer has written several other applications doing many other things, or maybe he is enjoying some vacation under the Caribbean sun.

If speed is your ultimate target, there is still assembly. Given enough knowledge, time and persistence, there is nothing that can beat a program written in assembly language.

Rolf Kalbermatter
My Blog
Elijah_K
Active Participant

Anything and everything that's done in LabVIEW could, as you pointed out, be programmed with any number of languages - including C.  By the same token, anything and everything that's programmed in C could be programmed in assembly.  It goes without saying that writing an entire program in assembly would take much longer than it would in a higher level language like C, which is why most modern applications are written using a higher level language.  That being said, it may still make sense to have some in-line assembly here and there.  The same is true of LabVIEW, which further decreases development time, but still allows you to call lower-level code and include other models of computation when you need it.

One could never accurately state that C's execution speed is faster than assembly.  However, the use of C makes it easier to develop high-performance applications by using pre-existing algorithms.  In the same way, the functions and analysis libraries included in LabVIEW have been optimized over many years for performance, and the compiler is smart enough to abstract memory management and thread scheduling. 

It is all these considerations that make comparing languages extremely difficult.  In the benchmarks I linked to in my prior post, we tried to be fair by using very simple, isolated operations to compare execution speed.  Consider file I/O as an example: C and LabVIEW have unique ways of caching data and storing information.  The TDMS Streaming VI, which is available in LabVIEW when using DAQmx 9.0, automatically bypasses the Windows buffer and is capable of streaming up to 1.2 GB/s (provided your HDDs can keep up!).  This is all transparent to the user, who just drops down a single VI.  Compared with native file I/O operations in C, the TDMS Streaming VI is much faster.  Of course, one could go write a different function to stream data to disk in C (or assembly, if you want), but luckily, we've already invested the time of multiple developers over several years to do this for you!

We're always improving and refining the LabVIEW compiler to be smarter and more effective.  The work we're doing now aims to further improve performance for all LabVIEW applications (insert disclaimer here), but as with any software development, the more we can test, the better we can make it!

Contact me if you want to share your code - thanks everyone!

Elijah Kerry
NI Director, Software Community