LabVIEW


Diagram Disable Structure and Asynchronous SubVI Bug?

Solved!

Diagram Disable bug.JPG

 

In this diagram, the boolean value won't arrive at the Boolean 2 indicator until ASYNC has completed running.

 

Without the Diagram Disable Structure, it does not wait for ASYNC to complete.

 

I always believed the Diagram Disable structure would compile the Enabled case as if the structure weren't there, but that is clearly not the case.

 

Is this a bug?

Message 1 of 16

I know this has been talked about before. I just can't remember exactly where, and I'm too busy to do a thorough search right now. But yes, the conclusion was that this is the desired behavior. It sort of acts like a sequence structure.


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 2 of 16

I did search, but couldn't find anything.  I understand that it is working like a sequence structure, but I'd love to see the explanation of why that is the DESIRED behavior.

Message 3 of 16

My search came up with this.

 

So there is a CAR open to look into it. Don't know what the status of that CAR is though.

Message 4 of 16

Thanks!

 

I saw that post, but since it was about optimization, and I'm talking about dataflow behavior, I didn't consider it the same.  It does mention that disable structures are treated like flat sequences, but that doesn't explain why it would be desired behavior.

Message 5 of 16

It has to work that way; otherwise the entire dataflow paradigm on which LV is based would be broken. The enabled case of the Diagram Disable structure is a node. None of its outputs are available until all the code within the node completes execution. So it is like a sequence structure, but also like a case structure or a for loop.
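If it helps, here is a rough analogy in text code (plain C, purely illustrative; the names are invented and this is not how LabVIEW is actually implemented): the enabled case acts like a single function call, and its outputs simply do not exist for the caller until the whole call, including the slow work inside it, has returned.

```c
#include <stdio.h>
#include <unistd.h>   /* sleep() stands in for the long-running ASYNC subVI */

/* Invented stand-in for the enabled diagram: the boolean is just passed
 * through, but the caller cannot see it until the whole call returns. */
static int enabled_case(int boolean_in)
{
    sleep(5);           /* "ASYNC" running inside the structure */
    return boolean_in;  /* output tunnel: only released on completion */
}

int main(void)
{
    /* Boolean 2 is not updated until enabled_case() has finished. */
    int boolean2 = enabled_case(1);
    printf("Boolean 2 = %d\n", boolean2);
    return 0;
}
```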

 

Lynn

Message 6 of 16
Solution
Accepted by topic author Matt_Dennie

There is a technical term for this: it is called laziness (no pun intended; OK, maybe a little bit). A lazy design pattern does not load or initialize an object until it is actually needed. The LV R&D team applies this anywhere and everywhere.
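As a minimal sketch of what I mean by the lazy pattern (plain C, invented names, nothing to do with LabVIEW's actual internals): the object is not built until the first time somebody asks for it.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    double *samples;
    size_t  count;
} Buffer;

/* Lazy initialization: the buffer is not allocated at program start,
 * only on the first call that actually needs it. */
static Buffer *get_buffer(void)
{
    static Buffer buf = { NULL, 0 };
    if (buf.samples == NULL) {               /* first use: build it now */
        buf.count   = 1024;
        buf.samples = calloc(buf.count, sizeof *buf.samples);
    }
    return &buf;
}

int main(void)
{
    printf("buffer of %zu samples\n", get_buffer()->count);
    return 0;
}
```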

 

The bottom line is that the DSS is a node (the Flat Sequence Structure is not, BTW), and as a node it begins execution when all of its inputs are available and releases its outputs only once all of the code inside has completed. As a node, the DSS *must* follow this rule.

 

Now the question is: why was the DSS implemented as a node? I suggest it was because that was the easiest way, requiring the fewest changes. Nothing profound there. They simply chose the lazy implementation: create nothing new if you can help it, but rather pick the existing object which gives the closest approximation to the desired behavior.

 

Of course the ship has sailed, so now we have the 'breaking existing code' uphill battle to fight as well.

 

There is a similar complaint about inlining subVIs: an invisible "node" wraps the subVI, so the code is in an intermediate state between being truly inlined and being a normal subVI.

 

I understand the current choice, but would not argue that it is "desired" behavior by any stretch. Sometimes LV R&D tries to have their cake and eat it too: they will compromise on the implementation for reasons of coding convenience and then try to convince you that the result really is the desired one, only you do not realize it. That is when I get a little cranky; so far I have not heard any arguments that this was done for any reason other than expedience.

Message 7 of 16

Thanks, Darin... I'm inclined to agree with your analysis.

 

I would have imagined that the Diagram Disable structure, which we are encouraged to think of as "commenting out code," would be removed in the earliest code transformations during compilation, so that it literally behaves as if it were not there. Instead it seems to behave like a case structure with unreachable cases that are excused from compile-time errors.

 

I don't see removing the structure as breaking dataflow... on the contrary, I find the current implementation breaks dataflow, but that depends on whether you expect the Diagram Disable structure to create a new node. Even the faint, thin border of the structure seems to encourage the programmer to imagine the code without the structure.

 

From everyone's comments, I'll conclude that this is expected, and to some people desired, behaviour, and therefore not a bug.

Message 8 of 16

Hi, Darin.K

 

I second that.

I stumbled over this issue while optimizing FPGA code for throughput, assuming that the DDS works like a C preprocessor conditional. I asked NI support about that behaviour and, after some research, they told me that this is "expected behaviour". No explanation was given, just the uneasy statement that it is as it is and that it is "expected".
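To be clear about what I had assumed: something like a C preprocessor conditional, where the disabled alternative is stripped before compilation and nothing of the "structure" survives for the compiler to schedule around (USE_FAST_PATH and the two functions below are made-up placeholders):

```c
#include <stdio.h>

#define USE_FAST_PATH 1            /* pick the "enabled" alternative */

static int fast_path(int x)      { return x * 2; }
static int reference_path(int x) { return x + x; }

int main(void)
{
    int x = 21, result;
#if USE_FAST_PATH
    result = fast_path(x);         /* this alternative survives preprocessing */
#else
    result = reference_path(x);    /* this one is gone before compilation */
#endif
    /* After preprocessing there is no surrounding structure left at all,
     * so nothing imposes extra ordering on the remaining code. */
    printf("%d\n", result);
    return 0;
}
```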

 

I use the DDS frequently for documentation and for modular choices of code. Modular choice means that the DDS pages contain code alternatives which do the same thing in different ways (or for different hardware architectures). This does not mean at all that I intend to impose timing constraints on the code optimizer.

It is often not true that functionally related code needs to be in the same timing domain. But in clear code it is desirable to place it close together on the block diagram to make the relationship obvious. That done, a DDS would be the perfect means to choose between alternative implementations of these code segments, but we can't do that as long as the DDS acts like a node.


So there is really something missing in LabVIEW: a DDS which loses its node nature before the compiler starts its work.

Can this really be so difficult? I have the feeling that there is a kind of preprocessor already.

Thinking about it, there are some discussions about unnecessary FPGA recompiles in relation to the DDS and CDS. Maybe this whole family of issues could be solved by implementing disable structures the right way?

 

Then again, maybe I'm missing something essential, but I have never heard a clear reason why disable structures are implemented the way they are.

 

Regards,

 

   RalfO

 

Message 9 of 16

I don't think I agree with you here. The Flat Sequence Structure would behave exactly the same whether or not it is internally implemented as a node. This is how LabVIEW dataflow has worked since the inception of LabVIEW, and I see no reason why it should change.

 

The border of a structure has well-defined behaviour. The structure will not start before all of its inputs are satisfied, and the output tunnel values will not be passed on until everything inside the structure has finished execution. If you want Boolean 2 to update while the Async VI is still executing, you have to place its terminal inside the DSS (or the (Flat) Sequence Structure). This has been so since I first started working in LabVIEW in version 2.2.1, and it had better stay so as long as LabVIEW wants to remain dataflow driven.
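As a rough text-code analogy (plain C, invented names, only meant to illustrate the dataflow rule, not LabVIEW internals): if the indicator update lives inside the block together with the slow work, it is no longer held back until the whole block completes.

```c
#include <stdio.h>
#include <unistd.h>   /* sleep() stands in for the Async VI */

/* The indicator update happens inside the same block as the slow work,
 * so it is not delayed until the block as a whole finishes. */
static void enabled_case(int boolean_in)
{
    printf("Boolean 2 = %d\n", boolean_in); /* terminal inside the structure */
    sleep(5);                               /* Async VI still running afterwards */
}

int main(void)
{
    enabled_case(1);
    return 0;
}
```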

 

No text-based programming language I know of would behave differently, although there you don't have dataflow anyhow, so the point is really moot: the sequential flow of the code is all that matters there.

 

Inlined code is a special case. If they didn't do that, you could get all kinds of very difficult to debug issues, because your code would execute differently depending on the inline status and could produce different results because of side effects.

Rolf Kalbermatter
My Blog
Message 10 of 16