Meaningful LabVIEW version speed benchmarks - anybody?

In article <50650000000500000089F70000-1042324653000@exchange.ni.com>,
MichaelS wrote:

1. Due to conspiracy theories, one should do one's own benchmarking.
This is a relatively trivial program and is even provided by NI. Of
course it does not benchmark property nodes, events, etc., which are all
post-LV 3, but life is tough all over. Would you believe NI's benchmarks
even if they were posted?

2. That is why you use a standard benchmark. But then again that only
tells you how a benchmark performs.

3. If they are worth benchmarking, then do it. I did it previously and
published a bunch. Now it is your turn.
<>
See the comparisons between LV 4 and 5 and LV 5 and 6!

4. Actually, I don't believe it is so. The "converter" that NI provides
for converting old versions of VIs is just old versions of LV. If you
are having stability problems, then you are on the wrong platform. They
recommend it, but for benchmarking, go right ahead.
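Scott's point 1 (do your own benchmarking) applies regardless of version.
As a rough sketch of the idea in plain Python rather than LabVIEW, and
emphatically not NI's benchmark VIs, a minimal harness can report the
median of several timed runs so that one noisy run doesn't skew the result:

```python
import statistics
import time

def benchmark(fn, *args, runs=7):
    """Time fn(*args) over several runs and return the median in seconds.

    The median of N runs is used because any single run is easily
    skewed by the OS scheduler, cold caches, or background processes.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Example: time summing a million integers.
elapsed = benchmark(sum, range(1_000_000))
print(f"median run time: {elapsed:.6f} s")
```

The same median-of-N discipline applies whether the thing being timed is
a Python function, a VI run under the LV profiler, or a DLL call.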

This rant is the best reason one should not publish benchmarks, since
benchmarking is so poorly understood.

And for the other poster, repeat after me: "LabVIEW has not been an
interpreted language during my lifetime!" 🙂

-Scott


> 1. The user DOES NOT KNOW if there is any slowdown in relevant
> subunits between the versions (see the original post). It is NI's
> responsibility to provide users with this information.
>
> 2. As I mentioned before, the example you referred to is inconclusive
> and, at the least, outdated. It compares something not worth comparing:
> applications that depend, as you rightly noticed, on user interface,
> programming style, etc.
>
> What I asked for is a comparison of LV "subunits" across versions
> 6, 6.1, and 7: array-building speed, DLL calls, etc. These are
> programming-independent, and they are the only elements worth
> benchmarking.
>
> *And* NI does not disclose this benchmarking information.
>
> For your information, installing multiple versions of LV on the same
> machine at the same time is usually prohibited by the end-user license.
> Also, LV then frequently becomes very unstable. Sometimes NI
> recommends completely uninstalling the previous version before
> installing a newer one.
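The "subunit" benchmarks asked for above (array building, DLL calls) are
classic micro-benchmarks anyone can run. As a hedged analogy in Python
rather than LabVIEW G code, here is the array-building comparison:
growing an array element by element versus preallocating and writing in
place, which mirrors the Build Array-in-a-loop versus Initialize Array
plus Replace Array Subset pattern in LV:

```python
import time

N = 5_000  # kept small; the "grow" variant is quadratic on purpose

def grow():
    # Analogue of Build Array inside a loop: a new, larger array
    # is allocated on every iteration, so total work is O(N^2).
    out = []
    for i in range(N):
        out = out + [i]
    return out

def preallocate():
    # Analogue of Initialize Array followed by Replace Array Subset:
    # one allocation up front, then in-place writes, O(N) total.
    out = [0] * N
    for i in range(N):
        out[i] = i
    return out

for fn in (grow, preallocate):
    t0 = time.perf_counter()
    fn()
    print(f"{fn.__name__}: {time.perf_counter() - t0:.4f} s")
```

The absolute numbers are meaningless across machines, which is exactly
the summarization problem discussed later in this thread; only the ratio
between the two variants on the same machine carries information.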
Message 11 of 14
> Thank you for understanding my question and suggesting to do something
> NI is expected to provide.
>
> Wouldn't you say that benchmarking performance between LV versions is a
> valid and important technical spec for a release? Why then do you expect
> a user to come up with it, given all the inconveniences of installing
> and uninstalling different LV versions on the same machine?
>
> It makes me suspect NI does not want to disclose that adding new bells
> and whistles to a new version usually comes at a price: it slows LV
> down. Isn't that the truth? And where are the benchmarks?
>

Keep in mind that I work for NI as one of the members of the development
team.

The way I look at it, there are three ways that people judge performance
depending on their personality.

1. They want to see something complicated, perhaps it needs to resemble
the app they want to write, but often it can be anything as long as it
shows that things are impressively fast.

2. They want to hear NI or another trusted source give them some
confirmation that performance is comparable. Often a tidbit such as
"FFTs are 40% faster" will be used as representative, even though it is
almost impossible to summarize the details in such a simple number.

3. People want to look at the raw data or measure it themselves. They
want to stare at the plots, at the columns, and develop their own sense
of how things compare -- floating point is faster, but only when using
doubles, and it looks like some numeric formatting is slower, but by no
more than 10%.

#1 is best addressed by the games and examples on devzone and other
parts of the LV site. #2 is what the article comparing LV6 to LV6.1 was
doing, and since it didn't satisfy you, you probably want #3. That is
why I pointed out the VIs.


NI used to produce the log files for new releases to be used for
comparison, but to be honest, there is so much data that you cannot come
up with useful meta-numbers. Here are some reasons why.

People still use LV on 486, Pentium, P2, P3, and P4 processors. These
execute the same exact code in quite different ways depending on cache,
instruction ordering, and dual processor computers add still more
combinations. Next factor in OSes. LV still runs on Win95, 98, Me, NT,
2000, and XP. It would be nice if the OS didn't affect performance, but
the file I/O and memory management speeds will differ.

Scott posted a link to some benchmarks he did using the exact VIs I
pointed out on the ftp site. He summarizes that things in fact got
faster on PPC. His site also shows just how difficult it is to
interpret and summarize. Notice that he is measuring things on the
machines and OSes that he cares about, and his data probably doesn't
help you at all even though he shows lots of data to back up his summary.

So, your reasoning seems to be that because NI doesn't attach a
speedometer to each new release, we must be embarrassed about our
performance. Here is my not-so-useful summary. Some things will be
faster, some slower, and overall, the performance is comparable given
the same OS and chip architecture. I know for a fact that the memory
manager is improved over 6.1, some code generation improvements were
added, single-point DAQ should be much faster, and while much new code
was added, most of it will not be executed when running your VI.

If you have some idea of what numbers are valid to compare, we will
consider doing the measurement. Or you can wait for someone to stumble
across the metric you want to see. Or you can get an eval and do the
measurements yourself using the VIs I mentioned or other VIs.

Greg McKaskle
Message 12 of 14
Thanks, Greg.

I am happy to hear an intelligent response from an NI engineer. I looked up Scott's site: very useful and educational indeed. He did not (yet?) benchmark LV 7, though.

I am also happy that my poking comments brought what I needed: some info, and some help.

You see, I am trying to decide which parts of the code of my large, computationally hungry LV application are better implemented as .dll calls. This is a fine balancing act between the performance gained and the time cost of getting it. For that (given the LV code is stylish and optimized, which it is not yet), benchmarking is useful.

Thanks to all who replied.

P.S. Could anybody benchmark a .dll call for LV 6, 6.1 and 7?
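Not an answer for LV 6, 6.1, or 7 specifically, but the per-call
overhead of a call into a shared library can be measured from any
caller. Here is a sketch in Python using ctypes against the C math
library (the library name is platform-dependent, so treat that detail
as an assumption):

```python
import ctypes
import ctypes.util
import time

# Locate the C math library; the name varies by platform, and
# "libm.so.6" is only a common Linux fallback (an assumption here).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

CALLS = 100_000
start = time.perf_counter()
for _ in range(CALLS):
    libm.sqrt(2.0)  # each iteration crosses the foreign-call boundary
elapsed = time.perf_counter() - start
print(f"~{elapsed / CALLS * 1e6:.2f} microseconds per foreign call")
```

Averaging over many calls, as done here, is the usual way to make a
per-call cost visible above timer resolution; the same trick works for
timing a Call Library Function node in a LV loop.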
Message 13 of 14
> P.S. Could anybody benchmark a .dll call for LV 6, 6.1 and 7?

You should find that they are the same speed. Somewhere around LV 5.1 or
LV 6, we wrapped the DLL call in a try/catch in case the DLL throws an
exception or does something illegal. This adds a small amount of
overhead, but prevents many crashes of the development system while
people are experimenting with DLL calls.
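The trade-off Greg describes (a protective wrapper adds a small fixed
cost per call but turns faults in the callee into catchable errors) can
be illustrated with a sketch. This is a Python analogy, not NI's actual
guard around the Call Library Function node:

```python
import time

def raw_call(x):
    # Stands in for a direct jump into the DLL with no protection.
    return x * x

def guarded_call(x):
    # Stands in for the wrapped call: a per-call guard that turns a
    # fault in the callee into an error the caller can handle.
    try:
        return raw_call(x)
    except Exception as exc:
        return f"call failed: {exc!r}"

CALLS = 200_000

def time_loop(fn):
    start = time.perf_counter()
    for i in range(CALLS):
        fn(i)
    return time.perf_counter() - start

raw, guarded = time_loop(raw_call), time_loop(guarded_call)
print(f"raw: {raw:.3f} s, guarded: {guarded:.3f} s, "
      f"overhead: {(guarded - raw) / CALLS * 1e9:.0f} ns/call")
```

The guard's cost is a few nanoseconds to microseconds per call, which
only matters when the work inside the call is itself tiny; for any
substantial DLL function it disappears into the noise.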

If you are having performance problems, be sure to use the profiler as
well, and you might want to look at some of the presentations on devzone.
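The profiling advice transfers to any language. In Python the analogous
workflow to LabVIEW's VI profiler looks like this (an illustration of
the workflow only, unrelated to the LV tool itself):

```python
import cProfile
import io
import pstats

def hot(n):
    # Deliberately expensive inner function the profiler should flag.
    return sum(i * i for i in range(n))

def work():
    return [hot(10_000) for _ in range(50)]

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Report the five most expensive entries by cumulative time.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

The point in either environment is the same: measure before optimizing,
because the hot spot is rarely where intuition says it is.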

And if you really want help with this, I'm going to make another post
later asking for VIs to use for an NIWeek presentation. We will take the
VIs and any issues you have with them and see if we can find a solution.
We will keep the author anonymous and go over the before and after. So,
look for it if you are interested.

Greg McKaskle
Message 14 of 14