LabVIEW Idea Exchange

PCorcs_DMC

In Place Auto Indexing

Status: New

Hello LabVIEW Users,

 

While working on a complex configuration application, I found myself heavily utilizing in place operations. Throughout the process of constructing the code, I frequently had to create an In Place Indexing structure (pictured below). I thought it would be really nice if you could mark an auto-indexed array for in place operation. As always, I am open to suggestions, but thought this might be a nice creature feature:

 

Capture.png

 

Cheers, and happy coding. 

~Pcorcs
14 Comments
RavensFan
Knight of NI

Are you sure that your code in the 2nd image is not being operated on in place?  When I create your VI and show buffer allocations, I don't see any buffers being allocated, which makes me think it is being done in place.
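For readers who think in textual languages, the distinction being checked with "show buffer allocations" can be sketched roughly as follows. This is a Python analogy, not LabVIEW semantics, and the function names are made up for illustration:

```python
# Rough textual analogy: does the output reuse the input's buffer
# (in place, no allocation) or get a freshly allocated buffer?

def increment_copy(arr):
    """Out-of-place: a new list is allocated for the result."""
    return [x + 1 for x in arr]

def increment_inplace(arr):
    """In place: the input buffer itself is updated; no new allocation."""
    for i in range(len(arr)):
        arr[i] += 1
    return arr

a = [1.0, 2.0, 3.0]
b = increment_inplace(a)
print(b is a)   # True: same buffer, operated on in place
c = increment_copy(a)
print(c is a)   # False: a fresh buffer was allocated
```

When the LabVIEW compiler reports no buffer allocations on such a loop, it has effectively chosen the first form on its own.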

PCorcs_DMC
Member

Hi, 

 

The short answer is 'No.' I don't know whether this simplified example compiles as an in place operation; I'd like to think it does. I say this with the disclaimer that I am not an expert on the inner workings of the LabVIEW compiler or its optimization algorithms. I would also say that very few outside of NI's R&D group are experts in this subject matter. 

 

That said, before dismissing this idea entirely: I have received multiple responses to this post from people looking at just the wires displayed (an array of doubles and the increment function). Of course I would not need to implement "in placeness" just to increment an array. 

 

(The next part is best read with a Sci-Fi movie announcer voice.)

 

In a world where LabVIEW applications often demand complex data structures, and memory allocation is not perfectly detected by the Profiling utility, one structure stands alone in offering programmers directives to specify that data be treated in place. That structure is...the In Place Element structure. It enforces memory management where no other structure can (or at least is documented as doing so). 

 

(End announcer voice)

 

My most common use for this construct is when implementing arrays of objects (LabVIEW OO). I have had to debug numerous memory leaks throughout my programming experience, and have found that by enforcing "in place"-ness I have significantly greater control. In speaking with other developers, I find they have also implemented structures like the one above. 

 

Thank you for viewing, and I am certainly happy to discuss further. 

~Pcorcs
altenbach
Knight of NI

> My most common use for this construct is when implementing arrays of objects (LabVIEW OO). I have had to debug numerous memory leaks throughout my programming experience, and have found that by enforcing "in place"-ness I have significantly greater control. In speaking with other developers, I find they have also implemented structures like the one above. 

 

Most of the time, LabVIEW finds excellent ways to reuse buffers and enforce in-placeness. The IPE structure is rarely needed. Maybe you can elaborate a little more on your exact use case that you say is different.

 

In your original example above you could even eliminate the FOR loop entirely and nothing would change (it might even be faster because of better use of SSE instructions). Chances are that the compiler will actually eliminate the FOR loop anyway.

 

Who are these "developers" and what are their qualifications? How was the profiling and benchmarking done? How did you identify the exact location of the memory leaks?

PCorcs_DMC
Member

Hello, 

 

Most of the time, LabVIEW finds excellent ways to reuse buffers and enforce in-placeness. The IPE structure is rarely needed. 

 

I understand this is the intention; however, all documentation surrounding IPE structures suggests that developers use them, not that "most of the time this is unnecessary, so why bother." Nor does it outline specific use cases for the perfect application of the IPE. My experience (which I understand is completely subjective) seems to indicate that IPE structures ought to be used when possible, particularly on array data that utilizes objects. 

 

Maybe you can elaborate a little more on your exact use case that you say is different.

 

Yes, as I was discussing, my typical use case involves the use of arrays of Objects. Typically, these arrays of objects dynamically dispatch operations that can modify Object data. Taken directly from the LabVIEW 2013 Help for IPE Structures: 

 

You can right-click a node and select Mark As Modifier to indicate that LabVIEW modifies the data you wire to the node, even if the block diagram does not indicate a modification of data occurs. The Mark As Modifier option is useful when you work with dynamic dispatch terminals. While the parent implementation of a dynamic dispatch subVI might not modify the data wired to the node, a child implementation might modify the data. Using the Mark As Modifier option then optimizes performance by minimizing the number of copies of the data LabVIEW creates. 

 

In this case, I have found that IPE structures are necessary, and that the buffer allocations are not always captured by the toolset in LabVIEW. Experimentally, I have been able to track memory usage of these types of constructs by wrapping them in a SubVI and continuously polling the VI profiler. By ensuring I use the type of IPE structure shown above, I have stemmed memory growth. 
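The polling technique described above can be sketched in textual terms roughly like this. It is a hedged Python analogy only: `tracemalloc` stands in for the LabVIEW VI profiler, and `leak_op` / `clean_op` are hypothetical stand-ins for the subVI under test:

```python
# Sketch of the polling idea: repeatedly run the operation under test
# and sample allocated memory after each run; sustained growth across
# samples suggests data is being retained rather than reused in place.
import tracemalloc

def measure_growth(op, iterations=5):
    """Return the growth in traced bytes from the first to last poll."""
    tracemalloc.start()
    samples = []
    for _ in range(iterations):
        op()
        current, _peak = tracemalloc.get_traced_memory()
        samples.append(current)
    tracemalloc.stop()
    return samples[-1] - samples[0]

retained = []
def leak_op():
    """Hypothetical leaky operation: retains a new buffer every call."""
    retained.append([0.0] * 10_000)

def clean_op():
    """Hypothetical clean operation: works on a local buffer in place."""
    buf = [0.0] * 10_000
    for i in range(len(buf)):
        buf[i] += 1.0

print(measure_growth(leak_op) > 0)   # True: memory grows with each poll
```

The same idea applies regardless of the profiler used: it is the trend across polls, not any single reading, that reveals the leak.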

 

In your original example above you could even eliminate the FOR loop entirely and nothing would change (it might even be faster because of better use of SSE instructions). Chances are that the compiler will actually eliminate the FOR loop anyway.

 

I agree with you, and understand that incrementing a numeric array can be handled in a more appropriate way. I was trying to format my example to match the examples of the IPE listed in the LabVIEW Help. Here's the stock image I was modeling after to simplify the discussion:

IPE_Examples.png

 

In the above example, almost all of those operations "can" be handled differently. It's just a placeholder for "an operation." The specific function is not as important as the general concept.

 

Who are these "developers" and what are their qualifications? How was the profiling and benchmarking done? How did you indentify the exact location of the memory leaks?

 

I don't think any professional developer appreciates air quotes. I also think this has gotten a bit personal. I don't see any other ideas where development credentials are listed before the idea is heard. I am sorry for the movie announcer voice; it probably took us down an unnecessary path. To round out your set of questions, I believe I discussed my profiling technique above. 

 

I appreciate your consideration of this idea. I am sorry that I did not communicate this more clearly at the outset. I really want to stress that the above example is meant to demonstrate the code structure, not built for the sole purpose of incrementing an array.

 

Thanks, 

~Pcorcs
StephenB
Active Participant
In RT code I always make these in place for loops even though I'm not sure I need to. However, 'not sure I need to' is kind of the theme of the In Place Element structure.... But when my #1 concern is performance, I just end up defaulting to always using them. A compiler R&D comment here would be nice.
Stephen B
Mads
Active Participant

The problem with the original idea, or rather the presentation of the idea, is that the case it uses to illustrate why the idea is needed is close to what is frequently used as a textbook example of when auto-indexing is an optimal solution (due to, among other things, its in-place behaviour). That is probably why you got the "developers" comment.

 

In general (as StephenB also indicates), it is difficult to know exactly when an IPE structure will bring a benefit. I've rewritten code that I was 100% sure would become more efficient with the use of IPE structures, only to see no or even negative effects. And I've tried using it in what a lot of people considered borderline cases, finding that it produced significant improvements. Personally, I've ended up only trying it when I'm unable to achieve satisfactory performance without it (trading possible performance boosts for coding speed and simplicity...).

 

The core of the idea is good. Making it easier to use/test the application of IPE behaviour would be great (call it an in-place (!) IPE, or a shortcut to apply/try out enforced IP behaviour...).

Intaris
Proven Zealot

I've had code in the past which ran like a dog (with three legs) when autoindexing through object arrays on RT (All the same object type, only calling static methods!).  Putting an IPE around it sped things up drastically.  I'm talking about 10x performance increase since the majority of the cases were simple accessor VIs.

For me the IPE has become a "If it's running slower than expected, try using the IPE" kind of tool.  A lot of the time it speeds things up on RT at least.

Norbert_B
Proven Zealot

OK, this is honestly more a functionality discussion than a feature request discussion. Therefore, I think you should move/continue the topic in the general LV forum.

I don't grant a kudo here, sorry. It is my experience that meddling further with the IPE will increase confusion instead of reducing it: when should/do you have to use the IPE?

 

Two little side notes here in the feature request:

1. The IPE had its time back in LV 8.6 and earlier (it was introduced with LV 8.5), but with the compiler optimizations in LV 2009 and 2010, most use cases "disappeared". The IPE still has its use cases, but these are not very common. (See example here. Up to LV 2009, the displayed use case was improved with the IPE, but as of 2010, there is no significant difference left.)

2. "Performance" is most often a very ambiguous term. Are you looking at memory usage? Or execution speed? Most often, these two go hand in hand, but in certain cases they contradict each other. So without stating exactly what you are looking for, the discussion might just head down the wrong path...

 

Norbert

Intaris
Proven Zealot

Can't speak for others but for me on RT, performance = execution speed.

 

The point is that there ARE still cases where the IPE is required.  They may be getting fewer and fewer in number, but they're still there.

AristosQueue (NI)
NI Employee (retired)

TL;DR: On all of our compilers (all desktops and RT), the second image will be inplace.

 

Longer answer:

Although that particular image will be inplace, there are similar diagrams in which inplaceness cannot be maintained, for example, if you fork the wire going into the For Loop to another node that uses the array. Unless that other node is another For Loop doing autoindexing on the same array, in which case the compiler will perform a loop fusion so that both loops can operate inplace on the data. Unless...

 

... and so on. My point is that the LV compiler is very smart about situations like this, generally smarter than any single human attempting to assert what should happen on any given diagram. If the compiler decides that the upstream inplaceness cannot be maintained, you adding that inplace element structure won't help -- you'll only be working inplace on the already-copied array.
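The forked-wire situation has a loose analogy in reference semantics. This is a Python sketch, not LabVIEW's actual memory model: once a second consumer holds the same buffer, an in-place update clobbers it, so a copy has to be made somewhere upstream, and an IPE placed after that point only operates on the already-copied data.

```python
# Analogy for a forked wire: two consumers share one buffer, and an
# in-place update destroys the data the second consumer still needs.

def increment_inplace(arr):
    for i in range(len(arr)):
        arr[i] += 1
    return arr

data = [1.0, 2.0, 3.0]
reader = data                  # "forked wire": second reference to the
                               # same buffer
increment_inplace(data)        # in-place update clobbers the fork
print(reader)                  # [2.0, 3.0, 4.0] -- reader's data changed

data = [1.0, 2.0, 3.0]
reader = data
increment_inplace(list(data))  # compiler-style defensive copy instead
print(reader)                  # [1.0, 2.0, 3.0] -- original preserved
```

In LabVIEW terms, the second form is the copy the compiler inserts when it decides upstream in-placeness cannot be maintained, regardless of any IPE placed downstream.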

 

Worrying about compiler optimizations ahead of time is a waste of effort. Write your code as clean as you can make it. Then run it. If performance isn't good enough, find the hot spot and fix it. If that hot spot is one that you think the compiler should have fixed on its own, let us know. In general, if you try to be smarter than the compiler these days, you'll just end up complicating your code.

 

I am NOT saying you shouldn't ever use the Inplace Element Structure. When you are accessing an element, modifying it, then putting it back, that's valuable information to the compiler. But when you have a simpler syntax that tells the compiler the same thing, use that.