
Actor Framework Discussions


Warnings getting erased by scripted Send Message VIs

So I was wondering why no warnings were making their way through to be handled (relayed) in one of my actor core override functions.

It seems that 'Priority Enqueue.vi' in the Message Priority Queue library simply drops the incoming error cluster if it doesn't contain an error.
This is unfortunate, because I can't rely on any warning information surviving in the error cluster when calling a scripted Send Message VI.

So my question is - is this by design, or is the 'Priority Enqueue.vi' missing a merge error in the No Error case?
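
To make the behavior concrete, here is a rough Python stand-in for what I am seeing (the dict-shaped error cluster, the function body, and the warning code are purely illustrative, not the actual contents of the VI):

```python
# Illustrative only -- a Python stand-in for the symptom, not the actual G code
# inside 'Priority Enqueue.vi'. A dict models LabVIEW's error cluster
# (status: bool, code: int, source: str); a warning is status=False, code != 0.

NO_ERROR = {"status": False, "code": 0, "source": ""}

def priority_enqueue(queue, message, error_in):
    """Hypothetical enqueue mirroring the behavior described above: the
    incoming cluster only survives when it holds a real error."""
    if error_in["status"]:
        return error_in          # real error: pass through, don't enqueue
    queue.append(message)
    return dict(NO_ERROR)        # no error: an incoming warning is silently lost

warning_in = {"status": False, "code": 5678, "source": "placeholder warning"}
queue = []
print(priority_enqueue(queue, "Do Something msg", warning_in))
# -> {'status': False, 'code': 0, 'source': ''}  (the warning info is gone)
```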

Message 1 of 4

By design -- my design, specifically. Merge Errors is expensive performance-wise, and this was a function we were trying to keep as lean as possible. I decided to optimize against a rare case that is arguably a bad design. I stress "arguably"... opinions vary wildly. But I stand by "rare" -- you're the first user to question this in the 10 years the Actor Framework has been public (8 since it shipped as part of LabVIEW).

 

I wrote the AF during a brief window when I thought I had buy-in to stop propagating warnings in our G designs... warnings were even going to be dropped from NXG entirely. The ground shifted back on that decision. Warnings are still a major problem for G code (inefficient, often mishandled, and generally wrong to ever propagate*), and I would happily eliminate the bool from the error cluster.

 

Given that the conventions of LabVIEW have not changed, arguably this should be amended, even at the relatively high performance cost. Our compiler has gotten better at optimizing, CPUs are faster... the overhead will never disappear, but it might not be as bad today as it was in 2010 when I made this decision.

 

The "right" fix would be to make the border node on the In Place Element structure take an error in -- but it would have to either execute even when there is an incoming error or else skip execution of the entire IPE structure. Neither is standard behavior, and either one would definitely confuse some users (I say this based on the design notes of the other devs who created the data value reference border nodes).
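
Roughly, translated into Python rather than G, the two options would behave like this (the function names and the dict-shaped error cluster are only a sketch of the idea, not anything that exists in LabVIEW):

```python
# A rough sketch (plain Python standing in for G) of the two non-standard options
# described above for an error-aware IPE border node. Function names and the
# dict-shaped error cluster are assumptions for illustration only.

def ipe_execute_anyway(error_in, body):
    """Option 1: the structure runs even when an error arrives; the incoming
    error still wins on the way out."""
    body_error = body()
    return error_in if error_in["status"] else body_error

def ipe_skip_on_error(error_in, body):
    """Option 2: an incoming error skips the entire structure and passes
    straight through untouched."""
    if error_in["status"]:
        return error_in
    return body()
```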

 

Given that this is the first time it has come up (in my memory, at least), I'm going to let it stand.

-------------

* Warnings mean that the node did its job -- no error! -- but it had to do the work in some unusual way. From the point of view of everything downstream, though, that means it succeeded.

 

If you have a function that can produce warnings, then in the vast majority of code I've reviewed (and I dug into this topic quite specifically for about two years, a decade ago), as soon as that node finishes executing you should either upgrade the warning to an error or clear it -- never propagate it. It is just noise downstream, and no later node generally knows how to react to it other than, sometimes, to log it. For this reason, I argue that warnings should always be a separate output on such nodes, one the caller can evaluate or not as they see fit -- which also serves as a signal to callers that there is a warning condition that may need evaluation.
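
In rough Python rather than G, the convention I'm arguing for looks something like this (the names and the separate-warning return shape are invented for illustration; this is not an existing LabVIEW or AF API):

```python
# A minimal sketch of the convention argued for above, in Python rather than G.
# The function names and the (value, warning) return shape are made up for
# illustration only.

from typing import Optional, Tuple

def read_sensor(channel: str) -> Tuple[float, Optional[str]]:
    """Does its job, but reports the 'had to do it in an unusual way' condition
    on a separate warning output instead of on the error chain."""
    value = 3.14                          # pretend measurement
    warning = "used cached calibration"   # succeeded, but in an unusual way
    return value, warning

def caller() -> float:
    value, warning = read_sensor("ai0")
    if warning is not None:
        # Decide right here: upgrade to an error, log it, or clear it --
        # but do not forward it downstream, where no later node knows
        # what to do with it.
        print(f"warning handled at the call site: {warning}")
    return value

caller()
```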

Message 2 of 4

AQ - How performance-intensive is Merge Errors? I use it quite frequently (including in low-level framework code that should be optimized) and am wondering if there is a better strategy.

Message 3 of 4

@paul.r wrote:

AQ - How performance-intensive is Merge Errors? I use it quite frequently (including in low-level framework code that should be optimized) and am wondering if there is a better strategy.


It isn't expensive per call, but it is expensive in its cumulative effect when used in tight loops or ubiquitously across an app. About 10 years ago, from sampling customer apps that people had sent to me, I discovered that the Merge Errors VI was the most commonly used node in LabVIEW that was not a built-in node. It gets used a lot. That made me look into the node for the first time, and I realized there was a small improvement I could make to its performance, so I did. And, whoa... unexpectedly, our performance benchmark suite suddenly saw an across-the-board improvement in execution time, even though no single subsystem showed a particular improvement. It was the kind of improvement you usually only get by upgrading the CPU.

 

So I created the Merge Errors primitive to replace the Merge Errors VI. The nice new feature was growability, but the most common case is the 2-input merge, and in that case I saved a couple of instructions by writing custom assembly code compared to what the LV compiler at the time generated from the equivalent G code. Few individual VIs showed any benefit, but nearly every full LV app that we benchmarked showed a 2 to 3 percent improvement in performance, with a couple spiking as high as 8 percent.
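
For the 2-input case, the decision being made is roughly this (Python standing in for the hand-tuned code, and assuming the usual "first error, else first warning" rule):

```python
# A rough Python rendering of a 2-input merge, assuming the usual
# "report the first error found, otherwise the first warning, otherwise no
# error" rule. The real primitive is hand-tuned compiled code; this only
# illustrates the decision, not how it is implemented.

NO_ERROR = {"status": False, "code": 0, "source": ""}

def merge_two(err_a, err_b):
    for e in (err_a, err_b):
        if e["status"]:          # first real error wins
            return e
    for e in (err_a, err_b):
        if e["code"] != 0:       # otherwise the first warning wins
            return e
    return dict(NO_ERROR)
```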

 

So my experience was that Merge Errors could add just enough friction to be noticeable, and eliminating it in a tight loop could make a serious difference.

 

Given that "send a message" is the number one action common to all AF projects, that building large apps is AF's primary goal, and that it is very hard to benchmark that sort of thing ahead of time, I was leery at design time of anything that added even a small bump.

 

On the other hand, the general rule is "optimize later", and it is entirely possible that I made the wrong call on this one. But removing the Merge Errors is an optimization I could not have applied later without breaking warning functionality.

 

That's the history... that's my experience with it. Your mileage may vary.

Message 4 of 4