LabVIEW Idea Exchange

DFGray

Highlight Common Performance Issues

Status: New

It is very common to build an array or string in a loop using the Build Array or Concatenate Strings primitives. Unfortunately, while easy, this approach incurs a large performance penalty (see this post for some examples), and nothing communicates that to the developer in any meaningful way other than slow code. I propose changing the primitives' appearance in some obvious way to show that a performance degradation is happening. This would happen only when the primitive is in a loop, since these functions are a normal part of LabVIEW programming. Some possible ways of showing the issue:

 

  1. Change the border of the icon from a black line to an alternating red and yellow line (for an over-the-top effect, make it "crawling ants")
     [Image: BuildArray-RedYellowBorder.png]
  2. Change the background of the icon from light yellow to dark red, while changing the glyphs to white so they remain readable
     [Image: BuildArray-DarkRedBackground.png]
  3. Double the width of the border and make it red instead of black
     [Image: BuildArray-WideRedBorder.png]

Accompanying any of these would be a mouse-over or pop-up explaining why the colors changed. The change would also be explained in the context help and primary help. I would definitely want a switch, probably in Tools->Options and the labview.ini file, to turn the behavior off for advanced users. New users would see the change and should be able to find out why it is occurring fairly quickly. The help for these nodes should give alternatives to using them in a loop, such as the techniques in the post above, or replacing a Build Array with a conditional indexing tunnel.
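Since a block diagram can't be quoted in text, here is a minimal Python sketch of the two patterns involved (Python stands in for G, and the element count is arbitrary). The first function mirrors Build Array inside a loop, reallocating and copying the accumulated data every iteration; the second mirrors the preallocate-and-Replace-Array-Subset alternative the help could point to:

```python
import time

N = 20_000  # illustrative size only

def build_by_append():
    """Grow the result inside the loop (analogous to Build Array in a
    For Loop). Tuple concatenation copies everything accumulated so
    far on every iteration -- O(n^2) total copying."""
    out = ()
    for i in range(N):
        out = out + (i,)
    return out

def build_preallocated():
    """Allocate once, then write in place (analogous to Initialize
    Array followed by Replace Array Subset) -- O(n) total work."""
    out = [0] * N
    for i in range(N):
        out[i] = i
    return out

for f in (build_by_append, build_preallocated):
    t0 = time.perf_counter()
    f()
    print(f"{f.__name__}: {time.perf_counter() - t0:.3f} s")
```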

 

The majority of the problem comes from Build Array, Delete From Array, Insert Into Array, and Concatenate Strings, so an initial pass at this problem could target only those four. The Build Array and Concatenate Strings issues could be largely removed by compiler optimizations similar to the ones currently used for conditional indexing, although algorithm changes by the developer can still yield higher performance than the generic case (see link above). The Delete From Array and Insert Into Array issues are usually solved by algorithm changes on the part of the developer, so an indication of an issue would be useful.
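For concreteness, here is a rough Python sketch of the kind of strategy conditional indexing enables (Python again stands in for what the compiler generates; the even-number filter is an arbitrary example, not LabVIEW's actual implementation): because a For Loop's iteration count is known before the loop runs, the output can be allocated once at the full count, filled conditionally, and trimmed a single time at the end.

```python
# "Allocate the loop count up front, fill conditionally, trim once."
# Purely illustrative; NOT LabVIEW's actual generated code.
def keep_evens(values):
    n = len(values)          # a For Loop's iteration count is known up front
    out = [0] * n            # worst-case allocation, performed once
    k = 0                    # number of elements actually kept
    for v in values:
        if v % 2 == 0:       # the conditional part of the tunnel
            out[k] = v
            k += 1
    return out[:k]           # single trim instead of per-iteration growth

print(keep_evens(list(range(10))))   # [0, 2, 4, 6, 8]
```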

 

14 Comments
AristosQueue (NI)
NI Employee (retired)

The VI Analyzer has a few tests that flag potential performance issues like these. We have discussed adding immediate feedback like what you propose. Generally the conversation ends with everyone feeling that performance slips like this are better tracked with a post-analysis tool like VI Analyzer rather than flagged live in the diagram. Why? Because of the adage "make it work first, then make it work faster." Calling attention to optimization issues during development may be calling attention to totally irrelevant details if the code isn't a hotspot.

http://c2.com/cgi/wiki?OptimizeLater

Darin.K
Trusted Enthusiast

I would strongly prefer that all of the effort that would go into this be directed toward improving the memory manager so that this would not be necessary.  Simply recognizing append operations and acting accordingly would be a vast improvement in many common use cases. 
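To make "recognizing append operations and acting accordingly" concrete, here is a toy Python sketch of geometric (doubling) growth, the standard technique for this; it is illustrative only and says nothing about how LabVIEW's memory manager actually works. Once the manager knows a buffer is only being appended to, it can over-allocate by a factor instead of reallocating to the exact new size, turning O(n^2) total copying into amortized O(n).

```python
class AppendBuffer:
    """Toy growable array using geometric (doubling) growth.

    Illustrates the amortized-O(1) append a memory manager could
    provide once it recognizes an append pattern; not a model of
    LabVIEW's actual memory manager.
    """
    def __init__(self):
        self._buf = [None] * 4   # small initial capacity
        self._len = 0

    def append(self, x):
        if self._len == len(self._buf):
            # Double capacity: each element is copied O(1) times on
            # average across all appends, instead of once per append.
            new_buf = [None] * (2 * len(self._buf))
            new_buf[:self._len] = self._buf[:self._len]
            self._buf = new_buf
        self._buf[self._len] = x
        self._len += 1

    def to_list(self):
        return self._buf[:self._len]

b = AppendBuffer()
for i in range(10):
    b.append(i)
print(b.to_list())   # [0, 1, ..., 9]
```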

 

The real issue here, I think, is the lack of data structures, which often forces us to shoehorn everything into an array and write our own pseudo-memory-managers. Not only do we lack the structures, we also lack the tools to implement them in a reasonable fashion (recursive data structures and templates, for instance).

AristosQueue (NI)
NI Employee (retired)

Darin: False choice. The resources we have available to improve the optimizers in the compiler are far fewer than those available to improve VI Analyzer. There is no meaningful "redirection of effort" because the engineers involved are not interchangeable. Also, the work needed to change the optimizer to recognize and correctly improve some of these cases is substantial -- highlighting to a human "this may be better if done differently" is much simpler and lets human intelligence make the final call. Optimizing the code on behalf of the user requires the algorithm to know beyond doubt that the change is a good one, a much higher bar requiring more effort. Saying we should do one and not the other presents a dichotomy that doesn't exist. Note that Damien's post mentions several cases that algorithmic optimizers cannot currently tackle.

 

PS: The compiler does recognize append operations in some cases today.

Darin.K
Trusted Enthusiast

Here is the choice I see: do work to fix the compiler and improve one of the largest bottlenecks in LV code handling "Big Analog Data," or do nothing. Every little bit of work that goes into the compiler has a lasting impact. DFGray sees some performance issues in LV code; he could walk down the hall and talk either to someone on the compiler team or to someone on the VI Analyzer team. The result of the first choice is a chance that we get real improvements in our present and future code for free. The result of the second choice is another VI Analyzer test to throw on the pile. Why not just change the icons of those functions to have a red border?

 

I can write a VI Analyzer test. I cannot change the LV memory manager or syntax to help the compiler recognize my intentions. I want your resources steered toward the things I cannot do for myself.

 

The compiler is the foundation of LV; VI Analyzer tests are coats of paint on the walls. No amount of paint is going to hide the cracks in the foundation. You are telling me that you can afford to paint the house ten times and have the painters available, but not fix the foundation.

 

The compiler does recognize "some" append cases today, and the dice returns 0 "some" times. ;)

AristosQueue (NI)
NI Employee (retired)

Darin, your comments suggest that the compiler team hasn't been working assiduously on these issues for the last 30+ years. I do not know what gives you that impression, but they spend a lot of time identifying optimizations, and every release of LabVIEW includes more of them. If you think work on these problems only happens because of Damien's ask, you're very wrong. Damien highlights another possibility, one that could be implemented faster than these theoretical compiler optimizations can or will get done, without stealing a jot of energy from the compiler team.

 

> I can write a VI analyzer test.  I can not change the LV memory manager

> or syntax to help the compiler recognize my intentions.  I want your

> resources steered toward those things which I can not do for myself. 

 

But have you already written it? And do you expect that every user who needs that analysis has both the skill and the time to write it? If you have it to share, post it so that Darren can ship it with the next VI Analyzer release. Otherwise, if it needs to happen, someone here will need to write it and ship it with LabVIEW. We split our time between dropping rappelling ropes for power users and carving staircases and ramps for other users. Some ideas are good because they increase power. Others are good because they increase usability.

 

> No amount of paint is going to hide the cracks in the foundation.

 

Lucky for us, your analogy doesn't hold. The VI Analyzer brings human intelligence into the equation in areas where the compiler's intelligence is lacking, and human intervention can compensate for the compiler to produce high-performance code.

Darin.K
Trusted Enthusiast

As often happens, you are twisting my comments around. The gains the compiler team has made are self-evident; I have code from LV2 and LV3 that required a CIN to create an integer array of 200 elements because LV was too slow. (Ramp still uses a CLFN to this day.) But the fact remains that NI insists we need not concern ourselves with memory management. If that is the case, then we NEED the speed that can only come from breaking these and other bottlenecks. My comments reflect the fact that if it is a one-man team, it should be two or four; if it is a ten-man team, it should be twenty. I do not question the work of those who are there; I question the structure management has in place, which seems to undervalue the need.

 

And in many cases it is not simply a matter of writing better LV code. I have G code to decode JPEG images. A small image (320x240) takes almost two seconds. Simply pulling out the G-based IDCT and using the C-based one that ships with the math functions drops the time by an order of magnitude (and pulls it to within two orders of magnitude of a C-based version). And it is not simply bad G code (in this case): the number of additions and multiplications for a fast lookup-table IDCT is known from first principles. I can write a simple loop that adds and multiplies the same number of times, and the hit is the same as in my original code.
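That measurement technique is worth spelling out, since it works in any language: time the real routine, then time a skeleton loop that performs the same number of multiplies and adds but implements no algorithm. If the two times match, the cost is per-operation language overhead rather than the algorithm itself. A hedged Python illustration (the operation count is made up; a real IDCT's counts depend on the factorization used):

```python
import time

OP_COUNT = 2_000_000   # stand-in operation count, not a real IDCT's

def skeleton(n):
    """Same number of multiplies and adds as the 'real' routine,
    but no actual algorithm -- isolates per-operation overhead."""
    acc = 0.0
    for _ in range(n):
        acc = acc * 0.9999 + 1.0   # one multiply, one add per pass
    return acc

t0 = time.perf_counter()
skeleton(OP_COUNT)
print(f"skeleton loop: {time.perf_counter() - t0:.3f} s")
# If this matches the full routine's time, the bottleneck is
# per-operation overhead, not the algorithm.
```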

 

Over the last 25 years I have seen many comments along the lines of "the compiler team optimized for this particular pattern." Look no further than the so-called Magic Pattern of unbundle/bundle. What this means is that every so often, someone who stares at G walks down the hall and talks to people who stare at syntax trees.

DFGray
NI Employee (retired)

While I have no problem with the make-it-work-then-optimize philosophy, in this case it often results in quite a bit of refactoring. I would prefer that developers see the problem up front and deal with it then, using known design patterns.

 

This problem is most common among new or casual LabVIEW programmers, many of whom do not have VI Analyzer (a $1k add-on). I have lost track of the number of people I have helped on these forums who have this issue. While we can fix the compiler to some extent, I don't think we can fix everything. This proposal is a good compromise that should be relatively easy to implement.

Norbert_B
Proven Zealot

I think the basic point of this discussion is "usability and transparency".

 

Step back a little to the general question:

Is it desirable that a tool which provides specific functionality, following a more or less strict process, also give feedback on how well the user follows that process? Is that even the tool's job? Or, to turn things upside down: is it the tool's job to "correct" the process internally when the user is not following standard usage?

 

Please think about that last question: it is a cry for intelligence within the tool to identify and compensate for the user's mistakes. While this is certainly desirable in many situations, keep in mind that it modifies the interaction with the tool, so the tool's result or reaction becomes opaque: it can differ from what the user expects.

 

Speaking of programming languages, we all know situations where the programmer implements something that simply does not do what he wants it to do. Introducing opacity reduces the options for getting help, since nobody can correct the code unless they are familiar with the optimizations the compiler performs; the compiler becomes more and more opaque to the average user.

While most users profit from such optimizations, and overall they shouldn't be a disadvantage, in certain cases they will be.

 

Even nowadays, some developers push for more "under the hood" information, to keep the transparency they are used to from C/C++. Most developers are fine with the opacity that C# brings, as it is easier for them to work with.

 

So the main question driving this discussion is obviously:

Should the tool compensate for the user's mistakes by itself, most likely introducing opacity (Darin's argument)? Or is it better to tell the user about the mistake directly (Damien), or through an additional tool that can give more detailed information about the complete situation, as a kind of "quality initiative" (Stephen)?

 

I honestly don't know which one is best; maybe a little bit of all three. Fact is: VI Analyzer is the option that can provide a solution fastest, since it is a separate tool to which the functionality can be added quite easily.

 

Norbert
----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Darin.K
Trusted Enthusiast

I would love full transparency, but I have learned to accept that there is an implicit deal with LV: it will insist on handling certain things for us. If I may rephrase my argument, it would be: "If you are going to do it for us, please do it right."

 

Let's look at one of the functions mentioned: Insert Into Array. It is a dog, so it would always be highlighted. But why? It is not necessarily a dog; it often does a necessary job. Does it do that job well? It has one nice feature: the execution time is constant regardless of where the element is inserted. The only problem is that the constant is basically the worst-case value. If you leave the index unwired, you are explicitly and unambiguously telling the compiler that you are appending. Why, then, is it still copying everything to a new buffer? My latest tests show it is 100x slower than Build Array, which does recognize append operations. Prepend times are basically the same for both; it seems Build Array may be a bit friendlier to the memory manager. I would much rather the implementation be fixed than have the node flagged, forcing me to go in and replace Insert Into Array with Build Array. (Not that Insert gets anywhere near my code; it is just an example.)
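A toy model makes the 100x plausible. Counting element copies under the two growth policies described here (exact-fit reallocation on every append, versus doubling growth once appends are recognized) shows the asymptotic gap; this is a back-of-the-envelope Python sketch, not a measurement of LabVIEW itself, and real wall-clock ratios depend on sizes and constant factors:

```python
def copies_exact_fit(n):
    """Element copies when the buffer is reallocated to the exact new
    size on every append: 0 + 1 + ... + (n-1), about n^2 / 2."""
    return n * (n - 1) // 2

def copies_doubling(n):
    """Element copies under doubling growth: contents are copied only
    when capacity is exhausted, so the total stays under 2n."""
    copies, cap = 0, 1
    for length in range(1, n + 1):
        if length > cap:
            copies += cap      # move existing contents to the bigger buffer
            cap *= 2
    return copies

n = 100_000
exact, doubled = copies_exact_fit(n), copies_doubling(n)
print(exact, doubled, f"ratio ~{exact / doubled:.0f}x")
```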

Manzolli
Active Participant

It would be nice to have a button to turn the highlighting of compiler-detected bottlenecks on and off. Once on, a tip strip would give a hint about what is going on and some tips for better coding.

 

@DFGray Highlight options: I liked the 3rd option the most because it recalls the old implicit conversion indicator.

André Manzolli

Mechanical Engineer
Certified LabVIEW Developer - CLD
LabVIEW Champion
Curitiba - PR - Brazil