LabVIEW


algorithm comparison FLOPS

Hi Everybody,
 
Is there a method to compare two algorithms in LabVIEW (v7) in terms of their floating-point calculations, i.e. their number of FLOPs?
 
I am looking to compare two algorithms (roughly), something like the FLOPS command in MATLAB, if such a thing exists. I know one way to compare performance is to compare their looped execution times, but is there any FLOPS-like command?
 
Thanks
Message 1 of 10
FLOPS is not a property of any algorithm, but a property of computer hardware.
 
If you want to compare two algorithms, just make a simple benchmarking VI and compare how long it takes to execute each algorithm N times, for example.
Message 2 of 10
I can't agree with you, altenbach. FLOPS is just the number of floating-point operations. I assume both MATLAB and LabVIEW perform the multiplication of two n*n arrays in n^2 FLOPs, whatever the hardware is. I agree that, in the end, the vendor's software implementation affects the number of FLOPs, the same way that the software and hardware implementation affects execution time. I also agree that when it comes to comparison, execution time can be a reasonable metric. Yet there is a difference between FLOPs and execution time. Thanks for the comment.
Message 3 of 10
 

@Hed wrote:
 
FLOPS is just the number of floating point operations.

The more common definition is "FLoating-point Operations Per Second", used to compare different hardware running the same algorithm.

 
(You seem to be using the plural sense of FLOP, which is less common: the number of operations needed to perform a certain task, irrespective of hardware.)
 
Neither definition is really useful for comparing two algorithms in LabVIEW. The only thing you should do is compare the timed performance of the two algorithms. Be aware that even the same algorithm can be implemented more or less efficiently, depending on the skill and knowledge of the programmer. Make yourself a small benchmarking harness as a three-frame flat sequence, and make sure that nothing can run in parallel with the middle frame. Pure coding considerations such as "in-placeness" and avoidance of extra data copies are crucial for very efficient code.
 
Take a tick count in each edge frame and place your code in the middle. If the code is fast, put it in a loop for a few million iterations. Take the difference in tick counts, divide it by the number of iterations, convert to seconds, and display it in SI units, e.g. 45u (= 45 microseconds) per loop.
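
(The three-frame flat sequence above is LabVIEW-specific, but the same pattern can be sketched in text form. Here is a hypothetical Python equivalent; `work` is just a stand-in for the code under test:)

```python
import time

def work(x):
    # hypothetical stand-in for the algorithm under test
    return x * x + 1.0

def seconds_per_call(func, arg, iterations=100_000):
    """Tick count, loop the code under test, tick count, divide."""
    start = time.perf_counter()            # first edge "frame"
    for _ in range(iterations):            # middle "frame": code under test
        func(arg)
    elapsed = time.perf_counter() - start  # second edge "frame"
    return elapsed / iterations            # seconds per single call

t = seconds_per_call(work, 3.0)
print(f"{t * 1e6:.3f} us per call")
```

The same caveat applies as in LabVIEW: make sure nothing else substantial runs between the two tick counts.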
 
Watch out for constant folding: if your loop is folded into a constant, you may get falsely ultrafast readings. If you have LabVIEW 8.5, try the new In Place Element structure.
 
If you are dealing with variable-size arrays, measure the speed as a function of array size: is the execution time linear in N, in N log N, in N*N, etc.? How is the memory use? Plot log(time) vs. log(size). What is the slope of the curve? Are there any breaks (e.g. when you exceed the cache size or start swapping)?
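
(As a hypothetical illustration of that log-log scaling check, in Python, with a deliberately O(N^2) pairwise-sum routine standing in for the algorithm:)

```python
import math
import time

def pairwise_sum(arr):
    # stand-in algorithm, deliberately O(N^2): sums every pair
    return sum(a + b for a in arr for b in arr)

def run_time(n):
    arr = [float(i) for i in range(n)]
    start = time.perf_counter()
    pairwise_sum(arr)
    return time.perf_counter() - start

sizes = [200, 400, 800]
times = [run_time(n) for n in sizes]
# The slope of log(time) vs. log(size) estimates the exponent:
# ~1 for linear, ~2 for quadratic; breaks in a finer-grained plot
# can hint at cache-size or swapping effects.
slope = (math.log(times[-1]) - math.log(times[0])) / (
    math.log(sizes[-1]) - math.log(sizes[0]))
print(f"estimated exponent: {slope:.2f}")
```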
 
If you are running on a general-purpose OS (e.g. Windows, Mac, Linux), many other things are running at any given time, so the measured speed will vary. Some people are tempted to take the average, but the fastest run is probably a better measure of true speed.
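
(A hypothetical sketch of that advice: take the minimum of several runs rather than the mean, since OS activity only ever adds time:)

```python
import time

def timed(func, *args):
    # time a single call
    start = time.perf_counter()
    func(*args)
    return time.perf_counter() - start

def best_of(func, *args, repeats=7):
    # Scheduler noise only inflates timings, so the fastest
    # observed run is the closest estimate of the true cost.
    runs = [timed(func, *args) for _ in range(repeats)]
    return min(runs)

fastest = best_of(sum, range(100_000))
```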
 
You can narrow the variation by raising the priority of the subVI (careful!). If the computation is inside a subVI, make sure that the front panel of the subVI is closed. Often, you gain speed by disabling debugging.
 
If you have multiple CPUs/cores, watch the task manager: are both being used? Code optimized for multicore might incur a slight penalty on a single-core system.
 
LabVIEW RT has much tighter control over execution, and you can debug down to the clock tick using the Execution Trace Toolkit. I am not familiar with RT, though.
 
Anyway, I am curious what kind of algorithms you are trying to test. Maybe it is of general interest. You could even start an informal "coding challenge" to tap into the collective wisdom of the forum members.
 
For some ideas, here is a link to the coding challenge archive: https://forums.ni.com/t5/LabVIEW-Digest-Programming/bd-p/5341
 
 
Message 4 of 10

Hi Hed,

I am coming down on Christian's side of this discussion.

FLOPS is an old measurement that is seldom used because it speaks to the hardware's capacity. It was largely replaced by CPU clock rate, which was considered a stronger indicator of a machine's performance than FLOPS.

I look at it this way.

FLOPS are to horsepower as computer hardware is to cars.

and

Drivers are to automobiles as software is to computer hardware.

How long it takes to get to your destination depends more on who is driving and which path they take than on the number of horses under the hood.

So to settle the question, run a race (benchmark).

Ben

Retired Senior Automation Systems Architect with Data Science Automation, LabVIEW Champion, Knight of NI
Message 5 of 10

Besides, if your algorithm is measuring flops rather than megaflops or gigaflops, it's not worth benchmarking.

Just kidding.

Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
LabVIEW 4.0–2013, RT, Vision, FPGA
Message 6 of 10

Thanks, guys, for the constructive comments.

I assumed FLOPS meant the cumulative number of floating-point operations, not floating-point operations per second, out of habit from MATLAB's FLOPS command, which is no longer available since MATLAB switched to LAPACK!

In that sense, FLOPs represent the computational complexity of the algorithm, something resembling the total number of multiplications/additions/subtractions/divisions in the whole algorithm. It was just an indicator. I know it depends on the processor and compiler, but assuming the compiler is well optimized, running a routine on my laptop or my desktop usually ends up with almost the same FLOP count (in the MATLAB sense). From a signal-processing algorithm developer's point of view, that indicator is closer to "the total number of basic arithmetic operations" than to execution time.
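
(That hand-counting notion can be sketched in a few lines. A hypothetical Python example, counting the multiply and add in a plain n×n matrix product, which totals 2·n³ operations:)

```python
def matmul_with_flop_count(A, B):
    """Multiply two n x n matrices, counting floating-point operations."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    flops = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]  # one multiply + one add
                flops += 2
    return C, flops

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C, flops = matmul_with_flop_count(A, B)
# For 2 x 2 matrices: n^3 = 8 multiply-add pairs, so 16 operations.
```

The count is hardware-independent, exactly as described above; only the wall-clock time of those operations depends on the machine.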

In my application, since I compare two algorithms, execution time also helps. Altenbach, my algorithms are two different localization methods that I implemented in LabVIEW. One is much faster than the other (N versus N*N). I just wanted to add some benchmark comparison to my report in addition to the theoretical comparison. I guess your comments cleared the way.

On another note, can any of you let me know how to write a report presenting work done in LabVIEW? I've done it a lot with MATLAB, but this is my first time with LabVIEW.

Do you show a picture of the block diagram, the front panel, or is there some other approach?

Message 7 of 10
Now we see why operational definitions are so important in communication... ;)

I believe MATLAB's "FLOPS" is (or, more accurately, was) very similar to, if not the same as, Big-O notation.

Due to the confusion factor, I'm surprised they used the same acronym.

Message Edited by Bill@NGC on 09-22-2007 04:58 PM

Message 8 of 10


@Bill@NGC wrote:
I believe MatLab's "FLOPS" is (or more accurately, was) very similar to, if not the same as, "Big Oh notation".

http://www.mathworks.com/access/helpdesk/help/techdoc/ref/flops.html

According to the documentation, it just counts FP operations. Do you think it expresses the count as a function of input size, or do you need to run the flops command repeatedly for different input sizes and analyze the results yourself? Just curious.

 

Message 9 of 10
I looked at a few more examples, and I'll correct my previous post: it was a tool used to help calculate the Big-O.
Message 10 of 10