LabVIEW Idea Exchange

SteveChandler

Hard Inline VI

Status: New

 

When you inline a subVI it is not the same thing as placing the contents of the VI on the caller's block diagram. Almost, but not quite.

 

The difference is that all outputs of a VI (inlined or not) become available at the same time when the VI completes execution.

 

 

[Image: inlined VI Settings.PNG]

 

Set the VI above to inlined and use it in another VI:

 

[Image: using inlined VI.png]

 

The idea is an additional inline setting as follows.

 

[Image: inlined VI Settings.PNG]

 

Now the VI will truly be inlined, as if it were not a subVI at all.

 

[Image: hard inlined VI.png]

 

The name "hard inline" is just a suggestion and could be called something else.

 

 

=====================
LabVIEW 2012


35 Comments
AristosQueue (NI)
NI Employee (retired)

We discussed this idea when we were creating inlining. Let me give you our reason for not doing it and let you guys debate it. It's the sort of thing we could easily revisit.

 

Since the node looks like a subVI, anyone reading the diagram is going to expect it to behave like a subVI. It shouldn't matter to a user how the node is implemented; inlined or not is an implementation detail. Therefore, we want to do the inlining with the same synchronization barrier for results that a regular subVI would have. Although it is true that many configuration options on a subVI can affect its call behavior, for the most part, things like which thread it runs in or whether it is reentrant are of no concern to the caller, so long as the subVI does the job it is documented to do. This would have introduced a change of behavior that seemed, to us, non-VI-like.

 

Moreover, you could imagine a subVI written such that it acquires a Semaphore in one independent branch, with some other code running in parallel. Someone reading the diagram would expect both branches to be finished before downstream code proceeded, and without the barrier that would no longer hold. Common use case? No, but it would be very odd.

 

Combine that logic with the fact that subVIs with truly independent outputs are rare, and it seemed like synchronizing the outputs was the right choice.

 

What are your counterarguments?

altenbach
Knight of NI

I agree that hard inlining would open a can of worms that would make code very unpredictable and hard to read and debug.

 

(In this particular example, you would simply make two inlined subVIs instead, one with the fast and one with the slow code.)

SteveChandler
Trusted Enthusiast

You are right that independent outputs are rare. One use case that I should have mentioned is that with this feature it would be possible to update an indicator on the caller using a wire instead of a reference. I just have it in my head that this feature could be useful to me someday.

 

Yes, it is non-VI-like behaviour. But it seems reasonable that an inlined VI would behave differently. I would expect an inlined VI to behave just as if I had copied its code to my block diagram.

 

Yet I totally understand that it could be unexpected and weird to someone who didn't write the VI.

 

=====================
LabVIEW 2012


SteenSchmidt
Trusted Enthusiast

I'd vote for inlining itself behaving like true inline expansion, so an extra level of "Hard inlining" wouldn't be necessary. I.e., inlining would quite concretely replace the subVI with its contained code for compilation.

 

The only minor drawback I see is the one pointed out: inline expansion would make for some interesting twists with execution highlighting enabled, for instance - here's suddenly a "subVI" that runs as soon as one of its inputs is ready, and which may return its outputs asynchronously. But I think it's a matter of getting used to this behaviour, just as with reentrancy etc. Will this really be more puzzling for a LV beginner than seeing the "waiting" glyph on a non-reentrant VI that is executing somewhere else?

 

Reasons for my point of view:

 

1) True inline expansion is what I'd expect when the feature is called inlining. This is a very minor argument though, since I can easily accept implementation-specific caveats like the ones you mention, Stephen - there are differences in the implementation of similar features in every programming language, after all.

2) Adding a synchronization barrier probably nullifies most of the potential performance benefits of true inlining, namely completely removing the call and return instructions. In other programming languages output synchronization isn't even something you have to consider, since an inline expansion candidate usually has to be a single function call. That not being a requirement for an inlined LabVIEW subVI just makes LV inlining the most powerful inlining implementation out there :) - basically macros with inline-expansion performance. (See the C sketch after this list.)

3) The synchronization barrier, depending on how it's implemented, might make it impossible to keep the code close together memory-wise. Most of my C and C++ experience stems from embedded targets, where instruction cache misses really have an impact. One of the biggest benefits of inline expansion on a low-power embedded target is that instruction cache performance improves quite a lot. Desktop CPUs with multiple cores, huge L1 caches, very effective branch prediction etc. might not be hit this way at all. I couldn't say, but you might have more factual knowledge on these targets in this regard. True inline expansion ought to allow for higher compile-time optimization, at least?

4) Disregarding the maybe negligible performance hits from spatial non-locality of reference and the synchronization barrier itself, there could be some much larger performance benefits to gain by letting truly independent code branches remain independent. It seems a very hard brake to put on inlined code, when you actually have the possibility to inline multi-function code, just to make execution highlighting seem less weird. Because when the code is deployed, no one is going to wonder about the asynchronism of the inlined subVIs. Or am I mistaken here?
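To make point 2 concrete in a textual language (a minimal, assumed C example; nothing here is LabVIEW-specific): a classic inline candidate is a single-expression function, and an optimizing compiler simply substitutes the body at the call site, so the call/return pair disappears and there is only one output to worry about.

#include <stdio.h>

/* Single-expression function: the classic inline-expansion candidate. */
static inline int square(int x) { return x * x; }

int main(void)
{
    /* Built with optimization (e.g. gcc -O2), the call is typically
     * replaced by a single multiply inside main: no call/return
     * instructions, and no output-synchronization question, because
     * there is only one result. */
    printf("%d\n", square(7));
    return 0;
}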

 

With that said, the possibility of inlining in itself is great. Beats subroutine.

 

Cheers,

Steen

CLA, CTA, CLED & LabVIEW Champion
SteveChandler
Trusted Enthusiast

@altenbach,

 

It would require someone to explicitly set this option. Hopefully they know what they are getting themselves into but it can always be turned off if they are in over their heads.

 

But what it does is add additional flexibility that can make new things possible. More precisely, it gives new ways of doing things that are already possible with a larger block diagram.

=====================
LabVIEW 2012


AristosQueue (NI)
NI Employee (retired)

> True inline expansion ought to allow for higher compile optimization at least?

 

No. What is inlined is the code before the optimization pass. The optimizations are still applied.

 

> True inline expansion is what I'd expect when the feature is called inline.

 

I'm not sure that I communicated what's happening correctly... the code is inlined, as in, its code is indeed copied to the caller diagram. But a sequence structure is placed around that code. When I talked about "synchronization", this was not synchronization between calls to the same subVI, but just synchronization of the outputs. Some of your comments make it sound like you thought I was talking about synch between calls to the same subVI.

 

Here's a VI that would have serious problems if it were inlined and we *didn't* put the synchronization in play. The Semaphore Acquire has no data dependencies on the two VI inputs, so if you inlined it, it would immediately acquire the semaphore long before the subVI was ready to execute.

 

[Image: SynchProblem.png]
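Translated into a textual sketch (a pthread mutex standing in for the LabVIEW semaphore; the helper names are invented for illustration): with the synchronization barrier the lock is only taken once the subVI's inputs exist, whereas a hard-inlined expansion would let the dependency-free branch float upstream and hold the lock across all of the caller's preceding work.

/* Build with: cc synch_problem.c -lpthread */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t sem = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the caller's code that produces the subVI's inputs. */
static long upstream_work(void) { sleep(2); return 42; }

int main(void)
{
    /* With the barrier: the semaphore branch cannot start until the
     * subVI call is reached, i.e. after upstream_work() has finished. */
    long x = upstream_work();
    pthread_mutex_lock(&sem);
    printf("protected work on %ld\n", x);
    pthread_mutex_unlock(&sem);

    /* Hard-inlined: the branch with no data dependency floats free, so
     * the lock is acquired immediately and held through the whole two
     * seconds of upstream work - exactly the surprise described above. */
    pthread_mutex_lock(&sem);
    long y = upstream_work();
    printf("protected work on %ld\n", y);
    pthread_mutex_unlock(&sem);
    return 0;
}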

SteveChandler
Trusted Enthusiast

Yes there will be caveats. Just like you don't go making every VI reentrant you wouldn't go and hard inline everything.

 

What do you think about the possibility of updating indicators on the caller with a wire rather than a reference? I am not sure if there would be any performance benefits but it would be a lot easier.

=====================
LabVIEW 2012


JÞB
Knight of NI

Steve, reducing the overhead of an inline call by removing even the phantom sequence structure's tunnels sounds great, but it certainly breaks the dataflow paradigm of "all outputs from a structure are available at the structure's completion." There is functionality in LabVIEW to support dynamic data availability from correlated-but-non-synchronous (if that term is not oxymoronic) computations, similar to your argument that it would take a larger BD. The navigation window! While not easy to take, if you must have one output available before another, they necessarily lack the cohesion implied by a subVI. NICE discussion though - I could be persuaded - but you need a specific cohesive example that cannot be worked around to get me off the fence. I encourage you to try to get me off the fence!!!!


"Should be" isn't "Is" -Jay
Norbert_B
Proven Zealot

I have to confess that, even though I find this suggestion interesting, I am against it. AristosQueue already explained it in a very good way: if "true inlining" were used, a very basic rule of LV would be broken:

A node executes once all of its inputs carry valid values, and when it finishes execution it returns valid values on all of its outputs.

 

So, as already pointed out, "true inlining" would reduce the maintainability of the calling VI, since there is obviously a node (the subVI) that does not behave like a node.

There is one "node" 'misbehaving' like that: Flat sequence structure. And there have been quite some confusions on this structure resulting for instance in race conditions.

 

To quote DFGray from here:

"If it is not rock simple, a new user will not see it".

Transferring that to this topic: if it does not behave like a normal subVI, new users will either never use it or (and this would be the problem) simply misuse it, creating race conditions and the like.

Norbert
----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
tst
Knight of NI

This reminds me of a recent discussion - http://lavag.org/topic/14221-is-labview-a-pure-dataflow-language (and it shows the flat sequence exception Norbert was referring to).


___________________
Try to take over the world!